Source: armerin/fininsmathrwanda/lecture19.pdf

Lecture 19: Markov chains in insurance

Lecture 19 1 / 14

Introduction

We will describe how certain types of Markov processes can be used to model behavior that is useful in insurance applications.

The focus in this lecture is on applications.

Lecture 19 2 / 14


Continuous time Markov chains (1)

A continuous time Markov chain defined on a finite or countably infinite state space S is a stochastic process Xt, t ≥ 0, such that for any 0 ≤ s ≤ t

P(Xt = x | Is) = P(Xt = x | Xs),

where Is = all information generated by Xu for u ∈ [0, s].

Hence, when calculating the probability P(Xt = x | Is), the only thing that matters is the value of Xs. This is the Markov property.

Here and onwards, all states we consider are assumed to be elements of S.

Lecture 19 3 / 14


Continuous time Markov chains (2)

We only consider time-homogeneous Markov chains, which means that all Markov chains Xt we consider have the property

P(Xs+t = y | Xs = x) = P(Xt = y | X0 = x).

We call the function

pt(x, y) = P(Xt = y | X0 = x)

the transition function.

Note that

P(Xt = x | Is) = {Markov property} = P(Xt = x | Xs)
              = {definition of the transition function} = pt(Xs, x).

Lecture 19 4 / 14


Continuous time Markov chains (3)

The transition intensity from state x to state y (x ≠ y) is defined by

λ(x, y) = lim_{t↓0} pt(x, y) / t.

It is not unusual to define a Markov process in terms of its transition intensities.

Note that λ(x, y) is a constant for fixed states x and y: it depends neither on time nor on any randomness.

Lecture 19 5 / 14
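As a quick numerical illustration (not from the slides), the limit defining λ(x, y) can be checked for the two-state alive/dead chain that appears later in the lecture, where pt(1, 2) = 1 − e^(−λt); the value λ = 0.04 is an arbitrary illustrative choice.

```python
import math

lam = 0.04  # hypothetical transition intensity (force of mortality)

def p_t_12(t):
    # Transition probability alive -> dead over horizon t for the
    # two-state chain: p_t(1, 2) = 1 - e^(-lam * t).
    return 1.0 - math.exp(-lam * t)

# As t decreases, p_t(1, 2) / t approaches lam.
for t in [1.0, 0.1, 0.01, 1e-6]:
    print(t, p_t_12(t) / t)
```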


Continuous time Markov chains (4)

Using the transition intensities, we can define the intensity matrix:

Λ(x, y) = { λ(x, y)              if x ≠ y
          { −∑_{y≠x} λ(x, y)     if x = y

Let
P(t) = [pt(x, y)]
be the matrix of transition probabilities.

In general it holds that

P′(t) = ΛP(t) = P(t)Λ.

These are the backward and forward equations, respectively.

Lecture 19 6 / 14
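A minimal sketch (assuming the two-state alive/dead chain from later slides and an illustrative λ = 0.03, neither fixed by the slides) showing that the solution of the backward/forward equations is the matrix exponential P(t) = e^(Λt), computed here with a truncated power series:

```python
import math

lam = 0.03  # hypothetical constant force of mortality

# Intensity matrix for the two-state chain (state 0 = alive, state 1 = dead).
L = [[-lam, lam],
     [0.0, 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    # Truncated series exp(A) = I + A + A^2/2! + ...
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = mat_mul(power, A)
        fact *= n
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / fact
    return result

t = 10.0
Lt = [[L[i][j] * t for j in range(2)] for i in range(2)]
P = expm(Lt)
print(P[0][0], math.exp(-lam * t))  # both equal the survival probability e^(-lam*t)
```

Rows of P(t) sum to one, and the dead state stays absorbing, as the forward equation predicts.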


The Poisson process

A Poisson process is a Markov process on the state space {0, 1, 2, . . .} with intensity matrix

Λ = [ −λ    λ    0    0    0   · · · ]
    [  0   −λ    λ    0    0   · · · ]
    [  0    0   −λ    λ    0   · · · ]
    [  ⋮    ⋮    ⋮    ⋮         ⋱   ]

It is a counting process: the only transitions possible are from n to n + 1.

We can solve the equations for the transition probabilities to get

P(X(t) = n) = e^(−λt) (λt)^n / n!,   n = 0, 1, 2, . . . .

Lecture 19 7 / 14
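To sanity-check the formula, the following sketch evaluates the Poisson probabilities for illustrative values λ = 2 and t = 3 (not from the slides) and confirms that they sum to one with mean λt:

```python
import math

lam, t = 2.0, 3.0  # hypothetical rate and time horizon

def poisson_pmf(n, lam, t):
    # P(X(t) = n) = e^(-lam*t) (lam*t)^n / n!
    return math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)

# Truncating at n = 99 is safe here: the tail beyond that is negligible.
total = sum(poisson_pmf(n, lam, t) for n in range(100))
mean = sum(n * poisson_pmf(n, lam, t) for n in range(100))
print(total, mean)  # total close to 1, mean close to lam * t = 6
```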


A whole-life insurance model (1)

Let us consider a simple model of a whole-life insurance.

To fit into the Markovian model, we assume a constant force of mortality λ.

This means that we have the picture

          λ
Alive  −−−→  Dead

Lecture 19 8 / 14


A whole-life insurance model (2)

Let us define

State 1 = Alive
State 2 = Dead

This implies that the intensity matrix is given by

[ −λ   λ ]
[  0   0 ]

Note that a row of zeros for a state means that that state is absorbing.

Lecture 19 9 / 14
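The survival probability implied by this matrix can also be recovered numerically: a sketch (with an illustrative λ = 0.05, not fixed by the slides) that integrates the forward equation P′(t) = P(t)Λ with Euler steps and compares pt(1, 1) with the closed form e^(−λt):

```python
import math

lam = 0.05  # hypothetical force of mortality
L = [[-lam, lam], [0.0, 0.0]]  # intensity matrix, state 0 = alive, 1 = dead

# Solve the forward equation P'(t) = P(t) L by Euler steps from P(0) = I.
P = [[1.0, 0.0], [0.0, 1.0]]
h, steps = 0.001, 10_000  # integrate up to t = 10
for _ in range(steps):
    PL = [[sum(P[i][k] * L[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    P = [[P[i][j] + h * PL[i][j] for j in range(2)] for i in range(2)]

t = h * steps
print(P[0][0], math.exp(-lam * t))  # survival probability, both near e^(-0.5)
```

Because each row of Λ sums to zero, the Euler iteration preserves the row sums of P(t) exactly.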


A more complex life insurance model (1)

In the previous models we only considered whether the individual was alive or dead.

In some cases we want to know whether the individual is alive and healthy, or alive and an invalid.

This leads to the following model:

              µ1
  Invalid  ←−−−−  Healthy
           −−−−→
              µ2

     λ2 ↘          ↙ λ1
           Dead

Lecture 19 10 / 14


A more complex life insurance model (2)

With

State 1 = Healthy
State 2 = Invalid
State 3 = Dead

the intensity matrix is given by

[ −λ1 − µ1       µ1         λ1 ]
[      µ2    −λ2 − µ2       λ2 ]
[       0         0          0 ]

Lecture 19 11 / 14
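A small sketch (with illustrative intensity values, not from the lecture) that builds this intensity matrix from its off-diagonal intensities, following the rule Λ(x, x) = −∑_{y≠x} λ(x, y); each row then sums to zero, and the absorbing Dead state gets a row of zeros:

```python
# Hypothetical transition intensities (illustrative values only).
lam1, lam2, mu1, mu2 = 0.01, 0.03, 0.05, 0.10

rates = {  # (from_state, to_state): intensity; 1=Healthy, 2=Invalid, 3=Dead
    (1, 2): mu1, (1, 3): lam1,
    (2, 1): mu2, (2, 3): lam2,
}

def intensity_matrix(rates, states=(1, 2, 3)):
    n = len(states)
    L = [[0.0] * n for _ in states]
    for i, x in enumerate(states):
        for j, y in enumerate(states):
            if x != y:
                L[i][j] = rates.get((x, y), 0.0)
        # Diagonal entry: minus the sum of the off-diagonal row entries.
        L[i][i] = -sum(L[i][j] for j in range(n) if j != i)
    return L

L = intensity_matrix(rates)
for row in L:
    print(row)  # rows sum to zero; the Dead row is all zeros
```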


A model of more than one life (1)

So far we have only considered one individual.

Now assume that an insurance company has insured n individuals.

For each of the individuals, the force of mortality is the constant λ, and the individuals die independently of each other.

Let N be the number of individuals alive after 1 year.

What is the distribution of N?

Lecture 19 12 / 14


A model of more than one life (2)

Let X be the Markov process connected to one individual's state:

X(t) = { 1 if the individual is alive at time t
       { 2 if the individual is dead at time t

For one individual, the probability of being alive after 1 year is

P(X(1) = 1) = e^(−λ).

Since the individuals die independently of each other, it follows that

N ∼ Bin(n, e^(−λ)).

Lecture 19 13 / 14
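With illustrative values n = 1000 and λ = 0.02 (not from the lecture), the distribution of N can be checked numerically:

```python
import math

n, lam = 1000, 0.02       # hypothetical portfolio size and force of mortality
p = math.exp(-lam)        # one-year survival probability e^(-lam)

def binom_pmf(k, n, p):
    # P(N = k) for N ~ Bin(n, p)
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

total = sum(binom_pmf(k, n, p) for k in range(n + 1))
mean = sum(k * binom_pmf(k, n, p) for k in range(n + 1))
print(total, mean)  # total close to 1, mean close to n * e^(-lam)
```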


Reference

I have partly used Basic Life Insurance Mathematics by Ragnar Norberg in this lecture.

Lecture 19 14 / 14
