Lectures on Lévy Processes and Stochastic Calculus (Koc University)

Lecture 3: The Lévy-Itô Decomposition

David Applebaum

School of Mathematics and Statistics, University of Sheffield, UK

7th December 2011

Filtrations, Markov Processes and Martingales

We recall the probability space (Ω, F, P) which underlies our investigations. F contains all possible events in Ω. When we introduce the arrow of time, it is convenient to be able to consider only those events which can occur up to and including time t. We denote by F_t this sub-σ-algebra of F. To be able to consider all time instants on an equal footing, we define a filtration to be an increasing family (F_t, t ≥ 0) of sub-σ-algebras of F, i.e.

0 ≤ s ≤ t < ∞  ⇒  F_s ⊆ F_t.

A stochastic process X = (X(t), t ≥ 0) is adapted to the given filtration if each X(t) is F_t-measurable.

e.g. any process is adapted to its natural filtration,

F_t^X = σ{X(s); 0 ≤ s ≤ t}.

An adapted process X = (X(t), t ≥ 0) is a Markov process if for all f ∈ B_b(R^d), 0 ≤ s ≤ t < ∞,

E(f(X(t)) | F_s) = E(f(X(t)) | X(s))  (a.s.).  (0.1)

(i.e. "past" and "future" are independent, given the present).

The transition probabilities of a Markov process are

p_{s,t}(x, A) = P(X(t) ∈ A | X(s) = x),

i.e. the probability that the process is in the Borel set A at time t given that it is at the point x at the earlier time s.

Theorem

If X is a Lévy process (adapted to its own natural filtration) wherein each X(t) has law q_t, then it is a Markov process with transition probabilities p_{s,t}(x, A) = q_{t−s}(A − x).

Proof. This essentially follows from

E(f(X(t)) | F_s) = E(f(X(s) + X(t) − X(s)) | F_s) = ∫_{R^d} f(X(s) + y) q_{t−s}(dy).  □
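The spatial homogeneity p_{s,t}(x, A) = q_{t−s}(A − x) can be checked numerically for any Lévy process whose increment law is known in closed form. The sketch below (Python with NumPy; Brownian motion with drift and the test function cos are illustrative choices, not part of the lecture) compares a Monte Carlo estimate of E(f(X(t)) | X(s) = x) with the closed form of ∫ f(x + y) q_{t−s}(dy).

```python
import numpy as np

# Sketch, assuming X(t) = b*t + sigma*B(t) (Brownian motion with drift), so that
# the increment X(t) - X(s) has law q_{t-s} = N(b*(t-s), sigma^2*(t-s)) and
#   E(f(X(t)) | X(s) = x) = int f(x + y) q_{t-s}(dy),
# independently of how the path reached x.

rng = np.random.default_rng(0)
b, sigma = 0.3, 1.2          # drift and diffusion coefficient (arbitrary choices)
s, t, x = 1.0, 2.5, 0.7      # conditioning time, later time, current position
f = np.cos                   # a bounded test function

# simulate the increment X(t) - X(s) ~ N(b*(t-s), sigma^2*(t-s))
incr = rng.normal(b * (t - s), sigma * np.sqrt(t - s), size=200_000)
mc_estimate = f(x + incr).mean()

# closed form of int cos(x + y) q_{t-s}(dy) for a Gaussian increment
exact = np.cos(x + b * (t - s)) * np.exp(-0.5 * sigma**2 * (t - s))

print(mc_estimate, exact)    # the two numbers agree to a few decimal places
```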

Now let X be an adapted process defined on a filtered probability space which also satisfies the integrability requirement E(|X(t)|) < ∞ for all t ≥ 0. We say that it is a martingale if for all 0 ≤ s < t < ∞,

E(X(t) | F_s) = X(s)  a.s.

Note that if X is a martingale, then the map t → E(X(t)) is constant.

An adapted Lévy process with zero mean is a martingale (with respect to its natural filtration), since in this case, for 0 ≤ s ≤ t < ∞ and using the convenient notation E_s(·) := E(·|F_s):

E_s(X(t)) = E_s(X(s) + X(t) − X(s)) = X(s) + E(X(t) − X(s)) = X(s).

Although there is no good reason why a generic Lévy process should be a martingale (or even have finite mean), there are some important examples:

e.g. the processes whose values at time t are

σB(t), where B(t) is a standard Brownian motion and σ is an r × d matrix;
Ñ(t), where Ñ is a compensated Poisson process with intensity λ.

Some important martingales associated to Lévy processes include:

exp{i(u, X(t)) − tη(u)}, where u ∈ R^d is fixed;
|σB(t)|² − tr(A)t, where A = σ^T σ;
Ñ(t)² − λt.
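A necessary consequence of the martingale property is that each of these processes has constant expectation in t. The short simulation below (an illustrative sketch; the intensity and time grid are arbitrary) checks this for the two compensated Poisson examples, Ñ(t) and Ñ(t)² − λt.

```python
import numpy as np

# Sketch: for a Poisson process N with intensity lam, the compensated process
# Ntilde(t) = N(t) - lam*t and Ntilde(t)^2 - lam*t should both have (constant)
# zero expectation at every time t.

rng = np.random.default_rng(1)
lam = 2.0
n_paths = 100_000

for t in [0.5, 1.0, 2.0, 4.0]:
    N_t = rng.poisson(lam * t, size=n_paths)   # N(t) ~ Poisson(lam * t)
    Ntilde_t = N_t - lam * t                   # compensated value at time t
    print(t, Ntilde_t.mean(), (Ntilde_t**2 - lam * t).mean())   # both approx 0
```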

Càdlàg Paths

A function f : R^+ → R^d is càdlàg if it is continue à droite et limité à gauche, i.e. right continuous with left limits. Such a function has only jump discontinuities. Define f(t−) = lim_{s↑t} f(s) and ∆f(t) = f(t) − f(t−). If f is càdlàg, then the set {0 ≤ t ≤ T; ∆f(t) ≠ 0} is at most countable.

If the filtration satisfies the "usual hypotheses" of right continuity and completion, then every Lévy process has a càdlàg modification which is itself a Lévy process.

From now on, we will always make the following assumptions:

(Ω, F, P) will be a fixed probability space equipped with a filtration (F_t, t ≥ 0) which satisfies the "usual hypotheses".
Every Lévy process X = (X(t), t ≥ 0) will be assumed to be F_t-adapted and to have càdlàg sample paths.
X(t) − X(s) is independent of F_s for all 0 ≤ s < t < ∞.

The Jumps of A Lévy Process - Poisson Random Measures

The jump process ∆X = (∆X(t), t ≥ 0) associated to a Lévy process is defined by

∆X(t) = X(t) − X(t−),

for each t ≥ 0.

Theorem

If N is a Lévy process which is increasing (a.s.) and is such that (∆N(t), t ≥ 0) takes values in {0, 1}, then N is a Poisson process.

Proof. Define a sequence of stopping times recursively by T_0 = 0 and T_n = inf{t > T_{n−1}; N(t + T_{n−1}) − N(T_{n−1}) ≠ 0} for each n ∈ N. It follows from (L2) that the sequence (T_1, T_2 − T_1, . . . , T_n − T_{n−1}, . . .) is i.i.d.

By (L2) again, we have for each s, t ≥ 0,

P(T_1 > s + t) = P(N(s) = 0, N(t + s) − N(s) = 0) = P(T_1 > s) P(T_1 > t).

From the fact that N is increasing (a.s.), it follows easily that the map t → P(T_1 > t) is decreasing, and by a straightforward application of stochastic continuity (L3) we find that the map t → P(T_1 > t) is continuous at t = 0. Hence there exists λ > 0 such that P(T_1 > t) = e^{−λt} for each t ≥ 0.

So T_1 has an exponential distribution with parameter λ and

P(N(t) = 0) = P(T_1 > t) = e^{−λt},

for each t ≥ 0. Now assume as an inductive hypothesis that P(N(t) = n) = e^{−λt} (λt)^n / n!; then

P(N(t) = n + 1) = P(T_{n+2} > t, T_{n+1} ≤ t) = P(T_{n+2} > t) − P(T_{n+1} > t).

But T_{n+1} = T_1 + (T_2 − T_1) + · · · + (T_{n+1} − T_n) is the sum of (n + 1) i.i.d. exponential random variables, and so has a gamma distribution with density

f_{T_{n+1}}(s) = e^{−λs} λ^{n+1} s^n / n!  for s > 0.

The required result follows on integration. □
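The mechanism of this proof (i.i.d. exponential waiting times building up a Poisson count) is easy to see in simulation. The sketch below (illustrative parameter choices) constructs N(t) from exponential inter-arrival times and compares the empirical distribution of N(t) with the Poisson probabilities e^{−λt}(λt)^n / n!.

```python
import math
import numpy as np

# Sketch: build N(t) = #{n : T_n <= t} from i.i.d. Exp(lam) inter-arrival times
# and check that the counts follow the Poisson(lam * t) distribution.

rng = np.random.default_rng(2)
lam, t, n_paths = 1.5, 3.0, 200_000

waits = rng.exponential(1.0 / lam, size=(n_paths, 50))   # 50 waiting times per path
arrival_times = waits.cumsum(axis=1)                     # T_1 < T_2 < ... per path
N_t = (arrival_times <= t).sum(axis=1)                   # N(t) for each path

for n in range(6):
    empirical = (N_t == n).mean()
    exact = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
    print(n, empirical, exact)                           # empirical approx exact
```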

The following result shows that ∆X is not a straightforward process to analyse.

Lemma

If X is a Lévy process, then for fixed t > 0, ∆X(t) = 0 (a.s.).

Proof. Let (t(n), n ∈ N) be a sequence in R^+ with t(n) ↑ t as n → ∞; then since X has càdlàg paths, lim_{n→∞} X(t(n)) = X(t−). However, by (L3) the sequence (X(t(n)), n ∈ N) converges in probability to X(t), and so has a subsequence which converges almost surely to X(t). The result follows by uniqueness of limits. □

Much of the analytic difficulty in manipulating Lévy processes arises from the fact that it is possible for them to have

Σ_{0≤s≤t} |∆X(s)| = ∞  a.s.,

and the way in which these difficulties are overcome exploits the fact that we always have

Σ_{0≤s≤t} |∆X(s)|² < ∞  a.s.

We will gain more insight into these ideas as the discussion progresses.
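One standard way to see how this can happen (an illustrative aside, with an assumed jump intensity that is not taken from the slides) is a measure that piles up mass on small jumps, such as ν(dx) = x^{−(1+α)} dx on (0, 1] with α ∈ (1, 2): the first moment ∫_0^1 x ν(dx) diverges while the second moment ∫_0^1 x² ν(dx) stays finite, mirroring Σ|∆X(s)| = ∞ but Σ|∆X(s)|² < ∞.

```python
# Numeric aside, assuming nu(dx) = x^{-(1+alpha)} dx on (0, 1] with alpha = 1.5.
# The truncated moments have closed forms:
#   int_eps^1 x   * x^{-(1+alpha)} dx = (eps^(1-alpha) - 1) / (alpha - 1)   -> infinity
#   int_eps^1 x^2 * x^{-(1+alpha)} dx = (1 - eps^(2-alpha)) / (2 - alpha)   -> 1/(2-alpha)

alpha = 1.5
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    first_moment = (eps ** (1 - alpha) - 1.0) / (alpha - 1.0)    # blows up as eps -> 0
    second_moment = (1.0 - eps ** (2 - alpha)) / (2.0 - alpha)   # converges to 2.0
    print(eps, first_moment, second_moment)
```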

Rather than exploring ∆X itself further, we will find it more profitable to count jumps of specified size. More precisely, let 0 ≤ t < ∞ and A ∈ B(R^d − {0}). Define

N(t, A) = #{0 ≤ s ≤ t; ∆X(s) ∈ A} = Σ_{0≤s≤t} 1_A(∆X(s)).

Note that for each ω ∈ Ω, t ≥ 0, the set function A → N(t, A)(ω) is a counting measure on B(R^d − {0}) and hence

E(N(t, A)) = ∫ N(t, A)(ω) dP(ω)

is a Borel measure on B(R^d − {0}). We write µ(·) = E(N(1, ·)).
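For a process whose jumps can be listed explicitly, N(t, A) is literally a count. The sketch below (an assumed compound Poisson example with standard normal jump sizes and A = [1, 2); none of these choices come from the slides) counts the jumps of one simulated path.

```python
import numpy as np

# Sketch: simulate the jumps of a compound Poisson path on [0, T] and count
# N(t, A) = #{0 <= s <= t; Delta X(s) in A} for A = [A_low, A_high).

rng = np.random.default_rng(3)
lam, T = 5.0, 10.0                                   # jump rate and time horizon

n_jumps = rng.poisson(lam * T)                       # number of jumps on [0, T]
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))   # given the count, times are uniform
jump_sizes = rng.normal(0.0, 1.0, n_jumps)           # i.i.d. jump sizes Delta X(s)

def N(t, A_low, A_high):
    """Count the jumps up to time t whose sizes fall in [A_low, A_high)."""
    in_time = jump_times <= t
    in_A = (jump_sizes >= A_low) & (jump_sizes < A_high)
    return int(np.sum(in_time & in_A))

print(N(4.0, 1.0, 2.0), N(10.0, 1.0, 2.0))           # non-decreasing in t
```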

We say that A ∈ B(R^d − {0}) is bounded below if 0 ∉ Ā (the closure of A).

Lemma

If A is bounded below, then N(t, A) < ∞ (a.s.) for all t ≥ 0.

Proof. Define a sequence of stopping times (T_n^A, n ∈ N) by T_1^A = inf{t > 0; ∆X(t) ∈ A}, and for n > 1, T_n^A = inf{t > T_{n−1}^A; ∆X(t) ∈ A}.

Since X has càdlàg paths, we have T_1^A > 0 (a.s.) and lim_{n→∞} T_n^A = ∞ (a.s.).

Indeed, suppose that T_1^A = 0 with non-zero probability and let N = {ω ∈ Ω : T_1^A(ω) ≠ 0}. Assume that ω ∈ Ω − N. Then given any u > 0, we can find 0 < δ, δ′ < u and ε > 0 such that |X(δ)(ω) − X(δ′)(ω)| > ε, and this contradicts the (almost sure) right continuity of X(·)(ω) at the origin.

Similarly, we assume that lim_{n→∞} T_n^A = T^A < ∞ with non-zero probability and define M = {ω ∈ Ω : lim_{n→∞} T_n^A = ∞}. If ω ∈ Ω − M, then we obtain a contradiction with the fact that X has a left limit (almost surely) at T^A(ω). Hence, for each t ≥ 0,

N(t, A) = Σ_{n∈N} 1_{T_n^A ≤ t} < ∞  a.s.  □

Be aware that if A fails to be bounded below, then this lemma may no longer hold, because of the accumulation of large numbers of small jumps. The following result should at least be plausible, given Theorem 2 and Lemma 4.

Theorem

1. If A is bounded below, then (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A).
2. If A_1, . . . , A_m ∈ B(R^d − {0}) are disjoint, then the random variables N(t, A_1), . . . , N(t, A_m) are independent.

It follows immediately that µ(A) < ∞ whenever A is bounded below, hence the measure µ is σ-finite.
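Both parts of the theorem can be sanity-checked by simulation in the compound Poisson setting used above (again an illustrative assumption): there µ(A) = λ P(jump size ∈ A), N(t, A) should have a Poisson(tµ(A)) law (so mean ≈ variance), and counts over disjoint sets should be uncorrelated.

```python
import numpy as np

# Sketch: compound Poisson with rate lam and N(0,1) jump sizes; check that
# N(t, A) has Poisson statistics and that counts on disjoint sets decouple.

rng = np.random.default_rng(4)
lam, t, n_paths = 5.0, 2.0, 50_000

counts_A1 = np.zeros(n_paths)
counts_A2 = np.zeros(n_paths)
for i in range(n_paths):
    sizes = rng.normal(0.0, 1.0, rng.poisson(lam * t))      # jump sizes of one path
    counts_A1[i] = np.sum((sizes >= 1.0) & (sizes < 2.0))   # A1 = [1, 2)
    counts_A2[i] = np.sum(sizes <= -1.0)                    # A2 = (-inf, -1]

print(counts_A1.mean(), counts_A1.var())                    # Poisson: mean approx variance
print(np.corrcoef(counts_A1, counts_A2)[0, 1])              # disjoint sets: approx 0
```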

The main properties of N, which we will use extensively in the sequel, are summarised below:

1. For each t > 0, ω ∈ Ω, N(t, ·)(ω) is a counting measure on B(R^d − {0}).
2. For each A bounded below, (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A) = E(N(1, A)).
3. The compensator (Ñ(t, A), t ≥ 0), where Ñ(t, A) = N(t, A) − tµ(A) for A bounded below, is a martingale-valued measure; i.e. for fixed A bounded below, (Ñ(t, A), t ≥ 0) is a martingale.

Poisson Integration

Let f be a Borel measurable function from R^d to R^d and let A be bounded below; then for each t > 0, ω ∈ Ω, we may define the Poisson integral of f as a random finite sum by

∫_A f(x) N(t, dx)(ω) := Σ_{x∈A} f(x) N(t, {x})(ω).

Note that each ∫_A f(x) N(t, dx) is an R^d-valued random variable and gives rise to a càdlàg stochastic process as we vary t. Now since N(t, {x}) ≠ 0 ⇔ ∆X(u) = x for at least one 0 ≤ u ≤ t, we have

∫_A f(x) N(t, dx) = Σ_{0≤u≤t} f(∆X(u)) 1_A(∆X(u)).  (0.2)
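Formula (0.2) says the Poisson integral is just a sum over the jumps that land in A. A minimal sketch, under the same assumed compound Poisson setup (f(x) = x² and A = {|x| ≥ 0.5} are arbitrary illustrative choices):

```python
import numpy as np

# Sketch of (0.2): sum f over those simulated jump sizes that fall in A.

rng = np.random.default_rng(5)
lam, t = 3.0, 5.0
jump_sizes = rng.normal(0.0, 1.0, rng.poisson(lam * t))   # the Delta X(u), 0 <= u <= t

f = lambda x: x**2
in_A = np.abs(jump_sizes) >= 0.5                          # indicator 1_A(Delta X(u))

poisson_integral = np.sum(f(jump_sizes[in_A]))            # int_A f(x) N(t, dx) via (0.2)
print(poisson_integral)
```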

In the sequel, we will sometimes use µ_A to denote the restriction to A of the measure µ. In the following theorem, Var stands for variance.

Theorem

Let A be bounded below. Then

1. (∫_A f(x) N(t, dx), t ≥ 0) is a compound Poisson process, with characteristic function

E( exp{ i(u, ∫_A f(x) N(t, dx)) } ) = exp[ t ∫_{R^d} (e^{i(u,x)} − 1) µ_{f,A}(dx) ]

for each u ∈ R^d, where µ_{f,A}(B) := µ(A ∩ f^{−1}(B)), for each B ∈ B(R^d).

2. If f ∈ L^1(A, µ_A), then

E( ∫_A f(x) N(t, dx) ) = t ∫_A f(x) µ(dx).
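Part (2) is easy to test by Monte Carlo in the assumed compound Poisson setting: there µ is λ times the jump-size law, so t ∫_A f dµ = t λ E[f(Z) 1_A(Z)] for a jump-size variable Z. The sketch below compares the sample mean of the Poisson integral with this quantity (all parameter choices are illustrative).

```python
import numpy as np

# Sketch: check E( int_A f(x) N(t, dx) ) = t * int_A f(x) mu(dx) for a compound
# Poisson process with rate lam and N(0,1) jump sizes, f(x) = x^2, A = [0.5, inf).

rng = np.random.default_rng(6)
lam, t, n_paths = 3.0, 2.0, 50_000
f = lambda x: x**2
A_low = 0.5

samples = np.empty(n_paths)
for i in range(n_paths):
    sizes = rng.normal(0.0, 1.0, rng.poisson(lam * t))
    samples[i] = np.sum(f(sizes[sizes >= A_low]))      # int_A f(x) N(t, dx), one path

z = rng.normal(0.0, 1.0, 1_000_000)
rhs = t * lam * np.mean(f(z) * (z >= A_low))           # t * int_A f(x) mu(dx), estimated

print(samples.mean(), rhs)                             # the two should roughly agree
```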

Theorem

3. If f ∈ L^2(A, µ_A), then

Var( |∫_A f(x) N(t, dx)| ) = t ∫_A |f(x)|² µ(dx).

Proof (part of it!).

(1) For simplicity, we will prove this result in the case where f ∈ L^1(A, µ_A). First let f be a simple function and write f = Σ_{j=1}^n c_j 1_{A_j}, where each c_j ∈ R^d. We can assume, without loss of generality, that the A_j's are disjoint Borel subsets of A.

By Theorem 5, we find that

E( exp{ i(u, ∫_A f(x) N(t, dx)) } ) = E( exp{ i(u, Σ_{j=1}^n c_j N(t, A_j)) } )
                                    = Π_{j=1}^n E( exp{ i(u, c_j N(t, A_j)) } )
                                    = Π_{j=1}^n exp[ t (e^{i(u,c_j)} − 1) µ(A_j) ]
                                    = exp[ t ∫_A (e^{i(u,f(x))} − 1) µ(dx) ].

Now for an arbitrary f ∈ L^1(A, µ_A), we can find a sequence of simple functions converging to f in L^1 and hence a subsequence which converges to f almost surely. Passing to the limit along this subsequence in the above yields the required result, via dominated convergence.

(2) and (3) follow from (1) by differentiation. □

It follows from Theorem 6 (2) that a Poisson integral will fail to have a finite mean if f ∉ L^1(A, µ).

For each f ∈ L^1(A, µ_A), t ≥ 0, we define the compensated Poisson integral by

∫_A f(x) Ñ(t, dx) = ∫_A f(x) N(t, dx) − t ∫_A f(x) µ(dx).

A straightforward argument shows that (∫_A f(x) Ñ(t, dx), t ≥ 0) is a martingale, and we will use this fact extensively in the sequel.
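A necessary consequence of this martingale property is that the compensated Poisson integral has zero expectation at every time. A quick check under the same assumed compound Poisson setup (illustrative choices of f, A and the rate):

```python
import numpy as np

# Sketch: int_A f dNtilde(t) = int_A f dN(t) - t * int_A f dmu should have
# sample mean approx 0 for every t (compound Poisson, N(0,1) jumps, f(x) = x^2).

rng = np.random.default_rng(7)
lam, n_paths = 3.0, 20_000
f = lambda x: x**2
A_low = 0.5

z = rng.normal(0.0, 1.0, 1_000_000)
mu_f_A = lam * np.mean(f(z) * (z >= A_low))            # int_A f(x) mu(dx), estimated once

for t in [0.5, 1.0, 2.0]:
    vals = np.empty(n_paths)
    for i in range(n_paths):
        sizes = rng.normal(0.0, 1.0, rng.poisson(lam * t))
        vals[i] = np.sum(f(sizes[sizes >= A_low])) - t * mu_f_A
    print(t, vals.mean())                              # approx 0 for every t
```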

Note that by Theorem 6 (2) and (3), we can easily deduce the following two important facts:

E( exp{ i(u, ∫_A f(x) Ñ(t, dx)) } ) = exp[ t ∫_{R^d} (e^{i(u,x)} − 1 − i(u, x)) µ_{f,A}(dx) ],  (0.3)

for each u ∈ R^d, and, for f ∈ L^2(A, µ_A),

E( |∫_A f(x) Ñ(t, dx)|² ) = t ∫_A |f(x)|² µ(dx).  (0.4)
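The isometry (0.4) can be checked in the same assumed compound Poisson setting, where ∫_A |f|² dµ = λ E[f(Z)² 1_A(Z)] for a jump-size variable Z (illustrative choices throughout):

```python
import numpy as np

# Sketch of (0.4): E| int_A f dNtilde(t) |^2 should equal t * int_A |f(x)|^2 mu(dx).

rng = np.random.default_rng(8)
lam, t, n_paths = 3.0, 2.0, 50_000
f = lambda x: x**2
A_low = 0.5

z = rng.normal(0.0, 1.0, 1_000_000)
mu_f_A  = lam * np.mean(f(z) * (z >= A_low))           # int_A f dmu
mu_f2_A = lam * np.mean(f(z)**2 * (z >= A_low))        # int_A |f|^2 dmu

vals = np.empty(n_paths)
for i in range(n_paths):
    sizes = rng.normal(0.0, 1.0, rng.poisson(lam * t))
    vals[i] = np.sum(f(sizes[sizes >= A_low])) - t * mu_f_A   # compensated integral

print(np.mean(vals**2), t * mu_f2_A)                   # the two should roughly agree
```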

Processes of Finite Variation

We begin by introducing a useful class of functions. Let P = {a = t_1 < t_2 < · · · < t_n < t_{n+1} = b} be a partition of the interval [a, b] in R, and define its mesh to be δ = max_{1≤i≤n} |t_{i+1} − t_i|. We define the variation Var_P(g) of a càdlàg mapping g : [a, b] → R^d over the partition P by the prescription

Var_P(g) = Σ_{i=1}^n |g(t_{i+1}) − g(t_i)|.
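Var_P(g) is straightforward to compute for a concrete g and partition; the sketch below does this for two scalar examples (the functions and the partition of [0, 1] are arbitrary illustrative choices; for R^d-valued g one would replace the absolute value by a Euclidean norm).

```python
import numpy as np

# Sketch: Var_P(g) = sum_i |g(t_{i+1}) - g(t_i)| over consecutive partition points.

def variation_over_partition(g, partition):
    values = np.array([g(t) for t in partition], dtype=float)
    return float(np.sum(np.abs(np.diff(values))))

partition = np.linspace(0.0, 1.0, 1001)                     # a = 0 < ... < b = 1
print(variation_over_partition(np.sin, partition))          # approx sin(1) = 0.84...
print(variation_over_partition(lambda u: u * (u - 1.0), partition))   # approx 0.5
```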

Page 113: Koc3(dba)

Processes of Finite Variation

We begin by introducing a useful class of functions. LetP = a = t1 < t2 < · · · < tn < tn+1 = b be a partition of the interval[a,b] in R, and define its mesh to be δ = max1≤i≤n |ti+1 − ti |. We definethe variation VarP(g) of a càdlàg mapping g : [a,b]→ Rd over thepartition P by the prescription

VarP(g) =n∑

i=1

|g(ti+1)− g(ti)|.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 28 / 44

Page 114: Koc3(dba)

Processes of Finite Variation

We begin by introducing a useful class of functions. LetP = a = t1 < t2 < · · · < tn < tn+1 = b be a partition of the interval[a,b] in R, and define its mesh to be δ = max1≤i≤n |ti+1 − ti |. We definethe variation VarP(g) of a càdlàg mapping g : [a,b]→ Rd over thepartition P by the prescription

VarP(g) =n∑

i=1

|g(ti+1)− g(ti)|.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 28 / 44

Page 115: Koc3(dba)

Processes of Finite Variation

We begin by introducing a useful class of functions. LetP = a = t1 < t2 < · · · < tn < tn+1 = b be a partition of the interval[a,b] in R, and define its mesh to be δ = max1≤i≤n |ti+1 − ti |. We definethe variation VarP(g) of a càdlàg mapping g : [a,b]→ Rd over thepartition P by the prescription

VarP(g) =n∑

i=1

|g(ti+1)− g(ti)|.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 28 / 44

Page 116: Koc3(dba)

Processes of Finite Variation

We begin by introducing a useful class of functions. LetP = a = t1 < t2 < · · · < tn < tn+1 = b be a partition of the interval[a,b] in R, and define its mesh to be δ = max1≤i≤n |ti+1 − ti |. We definethe variation VarP(g) of a càdlàg mapping g : [a,b]→ Rd over thepartition P by the prescription

VarP(g) =n∑

i=1

|g(ti+1)− g(ti)|.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 28 / 44

Page 117: Koc3(dba)

If V(g) = sup_P Var_P(g) < ∞, we say that g has finite variation on [a, b]. If g is defined on the whole of R (or R+), it is said to have finite variation if it has finite variation on each compact interval.
It is a trivial observation that every non-decreasing g is of finite variation. Conversely, if g is of finite variation, then it can always be written as the difference of two non-decreasing functions: to see this, just write

$$g = \frac{V(g)+g}{2} - \frac{V(g)-g}{2},$$

where V(g)(t) is the variation of g on [a, t]. (Both halves are non-decreasing, since for s ≤ t we have V(g)(t) − V(g)(s) ≥ |g(t) − g(s)|.)


Functions of finite variation are important in integration: suppose we are given a function g which we propose to use as an integrator; then, as a minimum, we will want to be able to define the Stieltjes integral ∫_I f dg for all continuous functions f (where I is some finite interval). In fact, a necessary and sufficient condition for obtaining such an integral as a limit of Riemann sums is that g has finite variation.
A stochastic process (X(t), t ≥ 0) is of finite variation if the paths (X(t)(ω), t ≥ 0) are of finite variation for almost all ω ∈ Ω.
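For concreteness, here is a small Riemann–Stieltjes sum for a finite-variation integrator (a sketch only, not from the lecture; sampling f at the left endpoint is one common convention among several).

```python
import numpy as np

def riemann_stieltjes(f, g, partition):
    """Approximate int_I f dg by the sum of f(t_i) * (g(t_{i+1}) - g(t_i))."""
    ts = np.asarray(partition)
    return sum(f(ts[i]) * (g(ts[i + 1]) - g(ts[i])) for i in range(len(ts) - 1))

# Smooth (hence finite-variation) integrator g(t) = t^2 on [0, 1]:
# int_0^1 t d(t^2) = int_0^1 2 t^2 dt = 2/3.
P = np.linspace(0.0, 1.0, 2001)
print(riemann_stieltjes(lambda t: t, lambda t: t ** 2, P))   # approx 0.6667
```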


The following is an important example for us.

Example (Poisson integrals). Let N be a Poisson random measure with intensity measure µ and let f : Rd → Rd be Borel measurable. For A bounded below, let Y = (Y(t), t ≥ 0) be given by Y(t) = ∫_A f(x) N(t,dx); then Y is of finite variation on [0, t] for each t ≥ 0. To see this, we observe that for all partitions P of [0, t], we have

$$\mathrm{Var}_P(Y) \leq \sum_{0\leq s\leq t} |f(\Delta X(s))|\,\mathbf{1}_A(\Delta X(s)) < \infty \quad \text{a.s.}, \qquad (0.5)$$

where X(t) = ∫_A x N(t,dx), for each t ≥ 0.


In fact, a necessary and sufficient condition for a Lévy process to be of finite variation is that there is no Brownian part (i.e. A = 0 in the Lévy-Khintchine formula) and ∫_{|x|<1} |x| ν(dx) < ∞.


The Lévy-Itô Decomposition

This is the key result of this lecture.
First, note that for A bounded below, for each t ≥ 0,

$$\int_A x\,N(t,dx) = \sum_{0\leq u\leq t} \Delta X(u)\,\mathbf{1}_A(\Delta X(u))$$

is the sum of all the jumps taking values in the set A up to the time t. Since the paths of X are càdlàg, this is clearly a finite random sum. In particular, ∫_{|x|≥1} x N(t,dx) is the sum of all jumps of size bigger than one. It is a compound Poisson process, has finite variation, but may have no finite moments.
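As an illustration (not from the lecture), the large-jump part can be simulated directly as a compound Poisson process: jump times arrive at rate ν({|x| ≥ 1}) and jump sizes are drawn from the normalised restriction of ν to {|x| ≥ 1}. The Pareto-type choice of ν below is an assumption made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed one-dimensional Lévy measure on |x| >= 1, chosen only for illustration:
# nu(dx) = alpha * x^{-alpha-1} dx on [1, infinity), so nu([1, inf)) = 1 and the
# normalised jump size law is Pareto(alpha).
alpha, t = 1.5, 10.0

def large_jump_part(t):
    """One sample of int_{|x|>=1} x N(t,dx): a compound Poisson sum."""
    n_jumps = rng.poisson(1.0 * t)             # number of large jumps up to time t
    jumps = rng.pareto(alpha, n_jumps) + 1.0   # jump sizes >= 1
    return jumps.sum()

print(large_jump_part(t))
```

With α = 1.5 the jump sizes have a finite mean but infinite variance, which illustrates the remark that this part of the process need not have finite moments.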


On the other hand, it can be shown that X(t) − ∫_{|x|≥1} x N(t,dx) is a Lévy process having finite moments to all orders.
Now let's turn our attention to the small jumps. We study compensated integrals, which we know are martingales. Introduce the notation

$$M(t,A) := \int_A x\,\tilde{N}(t,dx)$$

for t ≥ 0 and A bounded below. For each m ∈ N, let

$$B_m = \left\{x \in \mathbb{R}^d : \frac{1}{m+1} < |x| \leq \frac{1}{m}\right\}$$

and for each n ∈ N, let A_n = ⋃_{m=1}^{n} B_m.


Define

$$\int_{|x|<1} x\,\tilde{N}(t,dx) := L^2\text{-}\lim_{n\to\infty} M(t,A_n),$$

which is a martingale. Moreover, on taking limits in (0.3), we get

$$\mathbb{E}\left(\exp i\left(u, \int_{|x|<1} x\,\tilde{N}(t,dx)\right)\right) = \exp\left(t\int_{|x|<1}\left(e^{i(u,x)} - 1 - i(u,x)\right)\mu(dx)\right).$$
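The truncation by the sets A_n can also be mimicked numerically. The sketch below (not from the lecture) uses an assumed one-dimensional Lévy measure ν(dx) = x^{-3/2} dx on (0, 1); for each cutoff ε it simulates the compensated jumps of size in (ε, 1) and reports the sample variance, which should stabilise as ε ↓ 0 because ∫_{|x|<1} x² ν(dx) < ∞.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed Lévy measure, purely for illustration: nu(dx) = x^{-3/2} dx on (0, 1).
t, n_paths = 1.0, 20000

def compensated_small_jumps(t, eps):
    """One sample of M(t, (eps, 1)): compensated jumps of size in (eps, 1)."""
    mass = 2.0 * (eps ** -0.5 - 1.0)                       # nu((eps, 1))
    n = rng.poisson(mass * t)
    u = rng.random(n)
    xs = (eps ** -0.5 - u * (eps ** -0.5 - 1.0)) ** -2.0   # inverse-CDF samples from nu restricted to (eps, 1)
    compensator = t * 2.0 * (1.0 - eps ** 0.5)             # t * int_eps^1 x nu(dx)
    return xs.sum() - compensator

for eps in (0.1, 0.01, 0.001):
    samples = np.array([compensated_small_jumps(t, eps) for _ in range(n_paths)])
    # the variance should settle near t * int_0^1 x^2 nu(dx) = 2/3
    print(eps, samples.var())
```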


Consider

$$B_A(t) = X(t) - bt - \int_{|x|<1} x\,\tilde{N}(t,dx) - \int_{|x|\geq 1} x\,N(t,dx),$$

where b = E( X(1) − ∫_{|x|≥1} x N(1,dx) ). The process B_A is a centred martingale with continuous sample paths. With a little more work, we can show that Cov(B_A^i(t), B_A^j(t)) = A_{ij} t. Using Lévy's characterisation of Brownian motion (see later), we have that B_A is a Brownian motion with covariance A. Hence we have:


Theorem (The Lévy-Itô Decomposition)

If X is a Lévy process, then there exist b ∈ Rd, a Brownian motion B_A with covariance matrix A in Rd and an independent Poisson random measure N on R+ × (Rd − {0}) such that, for each t ≥ 0,

$$X(t) = bt + B_A(t) + \int_{|x|<1} x\,\tilde{N}(t,dx) + \int_{|x|\geq 1} x\,N(t,dx). \qquad (0.6)$$

Note that the three processes in this decomposition are all independent.
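The decomposition also gives a recipe for simulation. The sketch below (illustrative only, in one dimension) assembles a path from drift, a Brownian part, an ε-truncated compensated small-jump part and a compound Poisson large-jump part, reusing the assumed Lévy measure ν(dx) = x^{-3/2} dx on (0, 1) together with the unit-rate Pareto component on [1, ∞) from the earlier sketches; the parameters b, A, α and ε are choices for the example, and the jumps below size ε are simply dropped.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters, not from the lecture: drift b, Brownian covariance A,
# Pareto index alpha for the large jumps, truncation level eps for small jumps.
b, A, alpha, eps = 0.5, 0.04, 1.5, 0.01
T, n_steps = 1.0, 1000
dt = T / n_steps

def small_jump_increment(dt):
    """Compensated jumps of size in (eps, 1) over one step, nu(dx) = x^{-3/2} dx."""
    mass = 2.0 * (eps ** -0.5 - 1.0)
    n = rng.poisson(mass * dt)
    u = rng.random(n)
    xs = (eps ** -0.5 - u * (eps ** -0.5 - 1.0)) ** -2.0
    return xs.sum() - dt * 2.0 * (1.0 - eps ** 0.5)

def large_jump_increment(dt):
    """Compound Poisson jumps of size >= 1 over one step, rate nu([1, inf)) = 1."""
    n = rng.poisson(1.0 * dt)
    return (rng.pareto(alpha, n) + 1.0).sum()

X = np.zeros(n_steps + 1)
for k in range(n_steps):
    X[k + 1] = (X[k] + b * dt                        # drift
                + np.sqrt(A * dt) * rng.normal()     # Brownian part B_A
                + small_jump_increment(dt)           # compensated small jumps
                + large_jump_increment(dt))          # large jumps
# X now holds an approximate sample path of the Lévy process on [0, T].
```

Dropping the jumps below ε (rather than keeping their compensated limit exactly) introduces an error whose variance is t ∫_{|x|<ε} x² ν(dx), which vanishes as ε ↓ 0.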


An interesting by-product of the Lévy-Itô decomposition is the Lévy-Khintchine formula, which follows easily by independence in the Lévy-Itô decomposition:

Corollary

If X is a Lévy process, then for each u ∈ Rd, t ≥ 0,

$$\mathbb{E}(e^{i(u,X(t))}) = \exp\left(t\left[i(b,u) - \frac{1}{2}(u,Au) + \int_{\mathbb{R}^d - \{0\}}\left(e^{i(u,y)} - 1 - i(u,y)\mathbf{1}_{B}(y)\right)\mu(dy)\right]\right) \qquad (0.7)$$

(here B = {y : |y| < 1}), so the intensity measure µ is the Lévy measure for X, and from now on we write µ as ν.
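To see where (0.7) comes from, the independence of the three pieces in (0.6) can be spelled out (a sketch of the computation): the characteristic function factorises, and each factor is known,

$$\mathbb{E}\,e^{i(u,X(t))} = e^{it(u,b)}\;\mathbb{E}\,e^{i(u,B_A(t))}\;\mathbb{E}\,e^{i\left(u,\,\int_{|x|<1} x\,\tilde{N}(t,dx)\right)}\;\mathbb{E}\,e^{i\left(u,\,\int_{|x|\geq 1} x\,N(t,dx)\right)}$$

$$= \exp\left(t\left[i(b,u) - \tfrac{1}{2}(u,Au) + \int_{|y|<1}\left(e^{i(u,y)} - 1 - i(u,y)\right)\nu(dy) + \int_{|y|\geq 1}\left(e^{i(u,y)} - 1\right)\nu(dy)\right]\right),$$

using the Gaussian characteristic function for B_A, the limit of (0.3) for the compensated small jumps, and the compound Poisson formula for the large jumps; combining the two jump integrals by means of the indicator 1_B(y) yields (0.7).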


The process ∫_{|x|<1} x Ñ(t,dx) is the compensated sum of small jumps. The compensation takes care of the analytic complications in the Lévy-Khintchine formula in a probabilistically pleasing way, since it is an L²-martingale.
The process ∫_{|x|≥1} x N(t,dx) describes the "large jumps": it is a compound Poisson process, but may have no finite moments.


A Lévy process has finite variation iff its Lévy-Itô decomposition takes the form

$$X(t) = \gamma t + \int_{x\neq 0} x\,N(t,dx) = \gamma t + \sum_{0\leq s\leq t}\Delta X(s),$$

where γ = b − ∫_{|x|<1} x ν(dx).


H. Geman, D. Madan and M. Yor have proposed a nice financial interpretation for the jump terms in the Lévy-Itô decomposition: where the intensity measure is infinite, the stock price manifests "infinite activity", and this is the mathematical signature of the jitter arising from the interaction of pure supply shocks and pure demand shocks. On the other hand, where the intensity measure is finite, we have "finite activity", and this corresponds to sudden shocks that can cause unexpected movements in the market, such as a terrorist atrocity or a major earthquake.

If a pure jump Lévy process (no Brownian part) has finite activity then it has finite variation. The converse is false.


The first three terms on the rhs of (0.6) have finite moments to all orders, so if a Lévy process fails to have a moment, this is due entirely to the "large jumps"/"finite activity" part. In fact:

$$\mathbb{E}(|X(t)|^n) < \infty \text{ for all } t > 0 \iff \int_{|x|\geq 1} |x|^n\,\nu(dx) < \infty.$$


A Lévy process is a martingale iff it is integrable and

$$b + \int_{|x|\geq 1} x\,\nu(dx) = 0.$$

A square-integrable Lévy process is a martingale iff it is centred, and then

$$X(t) = B_A(t) + \int_{\mathbb{R}^d - \{0\}} x\,\tilde{N}(t,dx).$$


Semimartingales

A stochastic process X is a semimartingale if it is an adapted process such that, for each t ≥ 0,

$$X(t) = X(0) + M(t) + C(t),$$

where M = (M(t), t ≥ 0) is a local martingale and C = (C(t), t ≥ 0) is an adapted process of finite variation. In particular:

Every Lévy process is a semimartingale.

To see this, use the Lévy-Itô decomposition to write

$$M(t) = B_A(t) + \int_{|x|<1} x\,\tilde{N}(t,dx) \quad \text{(a martingale)},$$

$$C(t) = bt + \int_{|x|\geq 1} x\,N(t,dx).$$
