Introduction to Discrete-Time Markov Chain

Upload: kimberly

Post on 10-Jan-2016


TRANSCRIPT

Page 1: Introduction to  Discrete-Time Markov Chain

Introduction to Discrete-Time Markov Chain

Page 2: Introduction to  Discrete-Time Markov Chain

Motivation

many dependent systems, e.g., inventory across periods

state of a machine

customers unserved in a distribution system

[Figure: state of a machine (excellent / good / fair / bad) plotted against time]

Page 3: Introduction to  Discrete-Time Markov Chain

Motivation

any nice limiting results for dependent Xn’s?

no such result for general dependent Xn’s

nice results when Xn’s form a discrete-time Markov chain

quantities of interest: do the time averages

\[
\frac{1}{N}\sum_{n=1}^{N} X_n
\qquad \text{and} \qquad
\frac{1}{N}\sum_{n=1}^{N} 1\{X_n = s\}
\]

converge as N → ∞?

Page 4: Introduction to  Discrete-Time Markov Chain

Discrete-Time, Discrete-State Stochastic Process

a stochastic process: a sequence of indexed random variables, e.g., {Xn}, {X(t)}

a discrete-time stochastic process: {Xn}

a discrete-state stochastic process, e.g., state {excellent, good, fair, bad}

the set of states may be labeled {e, g, f, b}, {1, 2, 3, 4}, or {0, 1, 2, 3}

state to describe weather {windy, rainy, cloudy, sunny}

Page 5: Introduction to  Discrete-Time Markov Chain

Markov Property

a discrete-time, discrete-state stochastic process possesses the Markov property if P{Xn+1 = j|Xn = i, Xn−1 = in−1, . . . , X1 = i1, X0 = i0} = pij,

for all i0, i1, …, in−1, i, j, and all n ≥ 0

time frame: present n, future n+1, past times 0, …, n−1 (states i0, i1, …, in−1)

meaning of the statement: given the present, the past and the future are conditionally independent

(without conditioning on the present, the past and the future are in general dependent)

Page 6: Introduction to  Discrete-Time Markov Chain

One-Step Transition Probability Matrix

\[
p_{ij} \ge 0, \quad i, j \ge 0, \qquad \sum_{j=0}^{\infty} p_{ij} = 1, \quad i = 0, 1, 2, \ldots
\]

\[
P = \begin{pmatrix}
p_{00} & p_{01} & p_{02} & \cdots \\
p_{10} & p_{11} & p_{12} & \cdots \\
\vdots & \vdots & \vdots & \\
p_{i0} & p_{i1} & p_{i2} & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}
\]
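As a quick sketch (the matrix values and the function name `simulate_dtmc` are my own illustration, not from the slides), a one-step transition matrix can be stored as a NumPy array, validated, and used to simulate the chain:

```python
import numpy as np

# Hypothetical 2-state transition matrix (illustration only):
# rows = current state, columns = next state.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Validity check: p_ij >= 0 and each row sums to 1.
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)

def simulate_dtmc(P, start, n_steps, seed=None):
    """Simulate n_steps one-step transitions of a DTMC with matrix P."""
    rng = np.random.default_rng(seed)
    path = [start]
    for _ in range(n_steps):
        # The next state is drawn from row path[-1] of P.
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

path = simulate_dtmc(P, start=0, n_steps=10, seed=0)
```

Each row of P is a conditional distribution, which is why a single `rng.choice` on the current row is all a one-step transition needs.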

Page 7: Introduction to  Discrete-Time Markov Chain

Example 4-1: Forecasting the Weather

state {rain, not rain}

dynamics of the system: rains today ⇒ rains tomorrow w.p. α; does not rain today ⇒ rains tomorrow w.p. β

weather of the system across the days, {Xn} (state 0 = rain, state 1 = no rain):

\[
P = \begin{pmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{pmatrix}
\]

Page 8: Introduction to  Discrete-Time Markov Chain

Example 4-3: The Mood of a Person

mood ∈ {cheerful (C), so-so (S), glum (G)}

cheerful today ⇒ C, S, or G tomorrow w.p. 0.5, 0.4, 0.1

so-so today ⇒ C, S, or G tomorrow w.p. 0.3, 0.4, 0.3

glum today ⇒ C, S, or G tomorrow w.p. 0.2, 0.3, 0.5

Xn: mood on the nth day, such that mood {C, S, G}

{Xn}: a 3-state Markov chain (state 0 = C, state 1 = S, state 2 = G)

\[
P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{pmatrix}
\]
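The one-step matrix also determines multi-step probabilities: although these slides do not derive it, the standard Chapman–Kolmogorov relation gives the two-step matrix as P². A minimal check for the mood chain (NumPy assumed):

```python
import numpy as np

# One-step matrix of the mood chain (state 0 = C, 1 = S, 2 = G).
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Chapman-Kolmogorov: entry (i, j) of P @ P is the probability of
# mood j two days from now, given mood i today.
P2 = P @ P
```

For instance, P2[0, 0] = 0.5·0.5 + 0.4·0.3 + 0.1·0.2 = 0.39: the chance of being cheerful two days after a cheerful day.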

Page 9: Introduction to  Discrete-Time Markov Chain

Example 4.5: A Random Walk Model

a discrete-time Markov chain with state space {…, −2, −1, 0, 1, 2, …}

random walk: for 0 < p < 1,

p_{i,i+1} = p = 1 − p_{i,i−1}, i = 0, ±1, ±2, …
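A sketch of simulating such a walk (the function name and parameter defaults are my own, not from the slides):

```python
import random

def random_walk(p, n_steps, seed=0):
    """Simple random walk started at 0: each step is +1 w.p. p, else -1."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

path = random_walk(p=0.5, n_steps=100)
```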

Page 10: Introduction to  Discrete-Time Markov Chain

Example 4.6: A Gambling Model

each play of the game: the gambler gains $1 w.p. p, and loses $1 otherwise

end of the game: the gambler either goes broke or accumulates a fortune of $N

transition probabilities:

p_{i,i+1} = p = 1 − p_{i,i−1}, i = 1, 2, …, N − 1; p00 = pNN = 1

example for N = 4: state Xn, the gambler’s fortune after the nth play, takes values in {0, 1, 2, 3, 4}

\[
P = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1-p & 0 & p & 0 & 0 \\
0 & 1-p & 0 & p & 0 \\
0 & 0 & 1-p & 0 & p \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\]
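The absorption behavior can be checked by simulation; a minimal sketch (the function name and trial count are my own choices), estimating the chance of reaching $N before ruin in a fair game:

```python
import random

def play_until_absorbed(p, start, N, seed):
    """Run the gambling chain from fortune `start` until it hits 0 or N."""
    rng = random.Random(seed)
    x = start
    while 0 < x < N:
        x += 1 if rng.random() < p else -1
    return x

# Monte Carlo estimate of reaching $4 before ruin, starting from $2, p = 0.5.
trials = 2000
wins = sum(play_until_absorbed(p=0.5, start=2, N=4, seed=s) == 4
           for s in range(trials))
win_rate = wins / trials
```

For the fair game the classical gambler's-ruin result gives the exact probability start/N = 1/2, and the estimate should land close to it.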

Page 11: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of Irreducible Chains

Page 12: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of a Positive Irreducible Chain

cost of a visit: state 1 = $5, state 2 = $8

[Figure: two-state chain; p11 = 0.9, p12 = 0.1, p21 = 0.2, p22 = 0.8]

what is the long-run cost of the above DTMC?

\[
\lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} \bigl( c_1 1\{X_n = 1\} + c_2 1\{X_n = 2\} \bigr) = \ ?
\]

Page 13: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of a Positive Irreducible Chain

πj = fraction of time at state j

N: a very large positive integer

# of periods at state j ≈ πj N

balance of flow at state j: πj N = Σi (πi N) pij, i.e., πj = Σi πi pij, where P = [pij]

Page 14: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of a Positive Irreducible Chain

πj = fraction of time at state j

balance equations πj = Σi πi pij:

π1 = 0.9π1 + 0.2π2

π2 = 0.1π1 + 0.8π2

these two equations are linearly dependent

normalization equation: π1 + π2 = 1

solving: π1 = 2/3, π2 = 1/3

[Figure: two-state chain; p11 = 0.9, p12 = 0.1, p21 = 0.2, p22 = 0.8]

\[
P^n \to \begin{pmatrix} 0.666667 & 0.333333 \\ 0.666667 & 0.333333 \end{pmatrix}
\quad \text{as } n \to \infty
\]
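The balance-plus-normalization system is just a linear solve; a sketch for the 2-state example (NumPy assumed, variable names my own):

```python
import numpy as np

# The 2-state example: p11 = 0.9, p12 = 0.1, p21 = 0.2, p22 = 0.8.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# The balance equations pi = pi P are linearly dependent, so drop one of
# them and use the normalization equation sum(pi) = 1 in its place.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)   # stationary distribution
```

Replacing one redundant balance equation with the normalization equation is exactly the substitution the slide performs by hand.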

Page 15: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of a Positive Irreducible Chain

balance and normalization equations:

π1 = 0.75π2 + 0.01π3

π3 = 0.25π2

π1 + π2 + π3 = 1

solving: π1 = 301/801, π2 = 400/801, π3 = 100/801

[Figure: three-state chain; p12 = 1, p21 = 0.75, p23 = 0.25, p31 = 0.01, p32 = 0.99]

\[
P = \begin{pmatrix} 0 & 1 & 0 \\ 0.75 & 0 & 0.25 \\ 0.01 & 0.99 & 0 \end{pmatrix},
\qquad
P^n \to \begin{pmatrix} 0.3757803 & 0.4993758 & 0.1248439 \\ 0.3757803 & 0.4993758 & 0.1248439 \\ 0.3757803 & 0.4993758 & 0.1248439 \end{pmatrix}
\]
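The displayed limit of Pⁿ can be reproduced numerically; a sketch (the exponent is my own choice; this chain has an eigenvalue close to −1, so it mixes slowly and needs a large power):

```python
import numpy as np

# The 3-state example chain.
P = np.array([[0.0, 1.0, 0.0],
              [0.75, 0.0, 0.25],
              [0.01, 0.99, 0.0]])

# Every row of P^n converges to the stationary distribution
# (301/801, 400/801, 100/801); a high power is needed because the
# chain is nearly periodic (second-largest |eigenvalue| ~ 0.9975).
Pn = np.linalg.matrix_power(P, 10_000)
```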

Page 16: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of a Positive Irreducible Chain

an irreducible DTMC {Xn} is positive if and only if there exists a unique nonnegative solution to

\[
\pi_j = \sum_i \pi_i p_{ij} \ \text{for all } j \ \text{(balance eqts)},
\qquad
\sum_j \pi_j = 1 \ \text{(normalization eqt)}
\]

πj: stationary (steady-state) distribution of {Xn}

Page 17: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of a Positive Irreducible Chain

πj = fraction of time at state j

πj = fraction of expected time at state j

average cost cj for each visit to state j

random i.i.d. cost Cj for each visit to state j

long-run average cost per period:

\[
\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} c_{X_k} = \sum_j \pi_j c_j,
\qquad
\lim_{n\to\infty} \frac{1}{n} E\!\left( \sum_{k=1}^{n} C_{X_k} \right) = \sum_j \pi_j E(C_j)
\]

for an aperiodic chain, in addition:

\[
\lim_{n\to\infty} P(X_n = j \mid X_0) = \pi_j
\]

Page 18: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of a Positive Irreducible Chain

π1 = 301/801, π2 = 400/801, π3 = 100/801

profit per state: c1 = 4, c2 = 8, c3 = -2

average profit:

[Figure: three-state chain; p12 = 1, p21 = 0.75, p23 = 0.25, p31 = 0.01, p32 = 0.99]

\[
P = \begin{pmatrix} 0 & 1 & 0 \\ 0.75 & 0 & 0.25 \\ 0.01 & 0.99 & 0 \end{pmatrix}
\]

\[
\sum_j \pi_j c_j = \frac{301}{801}(4) + \frac{400}{801}(8) + \frac{100}{801}(-2) = \frac{4204}{801} \approx 5.25
\]
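The average-profit computation can be checked in exact arithmetic; a sketch using Python's fractions module (variable names my own):

```python
from fractions import Fraction

# Stationary distribution and per-visit profits from the slide.
pi = [Fraction(301, 801), Fraction(400, 801), Fraction(100, 801)]
c = [4, 8, -2]

# Long-run average profit per period: sum over states of pi_j * c_j.
avg_profit = sum(p_j * c_j for p_j, c_j in zip(pi, c))   # exact value
```

Keeping everything as `Fraction`s avoids rounding and returns the answer as a single exact rational.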

Page 19: Introduction to  Discrete-Time Markov Chain

Limiting Behavior of a Positive Irreducible Chain

π1 = 301/801, π2 = 400/801, π3 = 100/801

random visit profits: C1 ~ Unif[0, 8], C2 ~ Geo(1/8), C3 = −4 w.p. 0.5 and 0 w.p. 0.5; hence E(C1) = 4, E(C2) = 8, E(C3) = −2

average profit:

[Figure: three-state chain; p12 = 1, p21 = 0.75, p23 = 0.25, p31 = 0.01, p32 = 0.99]

\[
P = \begin{pmatrix} 0 & 1 & 0 \\ 0.75 & 0 & 0.25 \\ 0.01 & 0.99 & 0 \end{pmatrix}
\]

\[
\sum_j \pi_j E(C_j) = \frac{301}{801}(4) + \frac{400}{801}(8) + \frac{100}{801}(-2) = \frac{4204}{801} \approx 5.25
\]