Traffic modelling: statistics


Stochastic processes

Lecturer: Dmitri A. Moltchanov

E-mail: [email protected]

http://www.cs.tut.fi/˜moltchan/modsim/

http://www.cs.tut.fi/kurssit/TLT-2706/


Simulations and modeling D.Moltchanov, TUT, 2006

OUTLINE:

• Definition and basic notions;

• Characteristics of stochastic processes;

• Classifications of stochastic processes;

• Markov processes;

• Homogenous Markov chains:

– Memoryless property of exponential/geometric RVs;

– Discrete-time homogenous Markov chains;

– Continuous-time homogenous Markov chains;

• Classification of states;

• Ergodic Markov chains;

• Birth-death processes:

– Continuous-time birth-death processes;

– Existence of steady-state distribution.


1. Definitions and notions

Assume the following:

• (Ω, F, P) is the probability space;

• there is an infinite number of random variables defined on this space.

A set of random variables S(t) ∈ E:

S = {S(t), t ∈ T} (1)

• is called a stochastic process, where E is the state space of the process;

• in other words, for each t ∈ T , S(t) is a mapping from Ω to some set E.

For example, set E can be given by:

E = ℵ = {0, 1, . . . }, E = ℝ+ = [0, ∞). (2)


DEFINING SET T :

• set T is the index set of stochastic process;

• set T is often (not always!) referred to as time.

If T is countable:

T = ℵ = {0, 1, . . . }, T = Z = {. . . , −2, −1, 0, 1, 2, . . . }. (3)

• we are given a discrete-time stochastic process.

If T is not countable:

T = ℝ+ = (0, ∞),

T = ℝ = (−∞, ∞). (4)

• we are given a continuous-time stochastic process.


DEFINING SET E:

• set E is called the state space of the stochastic process {S(t), t ∈ T}.

If E is countable:

E = ℵ = {0, 1, . . . }, E = Z = {. . . , −2, −1, 0, 1, 2, . . . }. (5)

• we are given a discrete-space stochastic process.

If E is not countable:

E = ℝ+ = (0, ∞),

E = ℝ = (−∞, ∞). (6)

• we are given a continuous-space stochastic process.


Figure 1: Observations of discrete-time discrete-space stochastic process (state space E plotted against index set T ).


Do the following:

• consider an arbitrary stochastic process;

• observe this stochastic process a number of times.

Figure 2: Realizations and sections of stochastic process (S(t) plotted against t).

• fix a certain curve: we get a realization of the stochastic process;

• fix a certain t from T : we get a section of the stochastic process.


2. Characteristics of stochastic processes

Observe the following:

• it is still unclear how to define a stochastic process;

• recall: any RV can be fully characterized by its PDF.

Similarly, we can characterize an arbitrary section:

F (t, s) = Pr{S(t) ≤ s}. (7)

• it depends on both t and s;

• it is called the one-dimensional distribution function of {S(t), t ∈ T}.

F (t, s) does not completely characterize the stochastic process:

• example: what if there is dependence between subsequent sections?


More information: two-dimensional distribution function:

F (t1, s1; t2, s2) = Pr{S(t1) ≤ s1, S(t2) ≤ s2}. (8)

By induction:

F (t1, s1; t2, s2; . . . ) = Pr{S(t1) ≤ s1, S(t2) ≤ s2, . . . }. (9)

We observe the following:

• full description is given by joint distribution of all its sections;

• it is not easy to deal with such description.

What we usually do in practice:

• we usually do not use general processes:

– example: Markov processes: two-dimensional distribution is sufficient.

• we may also limit description of processes to moments:

– example: correlation theory: mean and autocorrelation function.


2.1. Mean

Mean of the stochastic process {S(t), t ∈ T}:

• non-probabilistic function ms(t);

• for all t ∈ T it equals the mean of the corresponding section:

ms(t) = ∫_{−∞}^{∞} x s(t, x)dx, (10)

where s(t, x) denotes the pdf of the section S(t).

Figure 3: Mean of stochastic process {S(t), t ∈ T} (denoted by black line).
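The section-wise mean (10) can be illustrated numerically: average many independent realizations at each fixed t. A minimal sketch, assuming a toy process S(t) = t + Gaussian noise (so the true mean is ms(t) = t); the process and all names are illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy process: S(t) = t + Gaussian noise, so the true mean is ms(t) = t.
t = np.linspace(0.0, 10.0, 41)                      # fixed time points (sections)
realizations = t + rng.normal(0.0, 1.0, size=(5000, t.size))

# Averaging the sections over realizations estimates ms(t) for every fixed t.
m_hat = realizations.mean(axis=0)

print(np.max(np.abs(m_hat - t)))                    # small estimation error
```

Each column of `realizations` is one section S(t); averaging down a column is exactly the ensemble mean of that section.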


2.2. Variance

Variance of the stochastic process {S(t), t ∈ T}:

• non-probabilistic function Ds(t);

• for all t ∈ T it equals the variance of the corresponding section:

Ds(t) = ∫_{−∞}^{∞} (x − ms(t))² s(t, x)dx. (11)

One can similarly define:

• excess;

• kurtosis;

• higher moments and central moments.

Important:

• these moments characterize sections in isolation!

• we still need descriptor of dependence.


2.3. Autocorrelation function

Autocorrelation function (ACF):

• non-probabilistic function Ks(t1, t2);

• for all pairs t1, t2 ∈ T it is just the covariance of the corresponding sections:

Ks(t1, t2) = E[S(t1)S(t2)] − ms(t1)ms(t2). (12)

– recall: this is a measure of linear dependence between sections.

Normalized autocorrelation function (NACF):

• non-probabilistic function Rs(t1, t2);

• for all pairs t1, t2 ∈ T it is just the correlation coefficient of the corresponding sections:

Rs(t1, t2) = (E[S(t1)S(t2)] − ms(t1)ms(t2)) / √(Ds(t1)Ds(t2)). (13)

Important notes: NACF and ACF can be used interchangeably:

• since −1 ≤ Rs(t1, t2) ≤ 1 NACF is often preferable;

• ACF includes the variance, since Ks(t1, t1) = Ds(t1)!


Figure 4: NACFs of positively correlated processes (KY(i) plotted against lag i).

Understanding of (N)ACF:

• how subsequent sections of the process linearly(!) depend on the current one;

• if Rs(t1, t2) = 0 for all t1 and t2, it does not mean that there is no dependence!
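A quick numerical illustration of the last point, assuming two sections S(t1) = X and S(t2) = X² with X standard normal (a hypothetical example): the sections are functionally dependent, yet their correlation coefficient is zero because E[X³] = 0.

```python
import numpy as np

rng = np.random.default_rng(1)

# S(t1) = X ~ N(0, 1) and S(t2) = X^2: fully dependent, yet uncorrelated.
x = rng.normal(size=200_000)
y = x ** 2

# Sample NACF value Rs(t1, t2): close to zero despite the dependence.
r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))
```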


3. Classifications

3.1. Based on the nature of index set T and space set E

The stochastic processes are classified to:

• discrete-time, discrete-space stochastic process:

– both E and T are countable;

– classic example: number of jobs seen by processor.

• discrete-time, continuous-space stochastic process:

– E is not countable, while T is countable.

• continuous-time, discrete-space stochastic process:

– E is countable, while T is not;

– classic example: number of packets in the buffer.

• continuous-time, continuous-space stochastic process:

– both E and T are not countable.


3.2. Based on stationarity

General classification:

• stationary processes:

– at least mean and ACF do not depend on time.

• non-stationary processes:

– mean or ACF or both depend on time;

– practically, they may change in time.

Note: ergodicity is an advantageous property tied to stationarity:

• if the process is ergodic (in the sense used in this lecture), it is also stationary;

• the converse is not always true: a stationary process need not be ergodic.

Notes on non-stationary processes:

• very hard to deal with;

• it seems that most processes observed in networks are somewhat non-stationary!


3.3. Based on the type of stationarity

Stochastic process can be:

• strictly stationary:

– the n-dimensional distribution function does not change with the time shift τ for all n;

– for (t1, t2, . . . , tn) and (t1 + τ, t2 + τ, . . . , tn + τ): n-dimensional distribution is the same.

• covariance stationary processes:

– mean is constant in time;

– ACF depends on the time shift only:

Ks(t1, t2) = Ks(t2 − t1) = Ks(τ). (14)

Note that strictly stationary processes are also called:

• stationary in the strict (narrow) sense.

Note, that covariance stationary processes are also called:

• weakly stationary processes;

• second-order stationary processes.


3.4. Based on memory

Stochastic process can be:

• Markovian processes;

– named after A.A. Markov;

– can be completely characterized by two-dimensional distribution;

– most important processes in teletraffic theory.

• other processes.

3.5. Based on ergodic property

Stochastic process can be:

• ergodic processes;

• not ergodic ones.


3.6. Ergodic stationary processes

The main property:

• single realization gives all information regarding characteristics of the process;

• for example: mean, variance, ACF, etc.

Notes on ergodic property:

• realization must be observed for long time...

• how long?...

Mean of stationary ergodic process is given by:

ms = E[S(t)] = lim_{T→∞} (1/2T ) ∫_{−T}^{T} S(t)dt. (15)

Variance of stationary ergodic process is given by:

Ds = D[S(t)] = lim_{T→∞} (1/2T ) ∫_{−T}^{T} (S(t) − ms)² dt. (16)


ACF of stationary ergodic process is given by:

Ks(τ) = lim_{T→∞} (1/2T ) ∫_{−T}^{T} (S(t) − ms)(S(t − τ) − ms)dt. (17)

Sufficient condition of ergodicity is given by:

lim_{τ→∞} Ks(τ) = 0. (18)
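The time averages (15) and (16) can be checked on a single long realization. The AR(1) recursion below is an assumed example of a stationary ergodic process: its ACF decays geometrically, so the sufficient condition (18) holds and time averages converge to the ensemble values ms = 0 and Ds = 1/(1 − a²):

```python
import numpy as np

rng = np.random.default_rng(2)

# Single long realization of the AR(1) process S(n) = a*S(n-1) + e(n).
a, n = 0.8, 500_000
e = rng.normal(size=n)
s = np.empty(n)
s[0] = 0.0
for k in range(1, n):
    s[k] = a * s[k - 1] + e[k]

m_hat = s.mean()        # time-average estimate of ms = 0
d_hat = s.var()         # time-average estimate of Ds = 1/(1 - a^2) ~ 2.78
print(round(m_hat, 2), round(d_hat, 2))
```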

Summarizing ergodic property:

• it allows one to easily compute characteristics using a single realization of the process;

• one has to prove that the process is ergodic first.

Practically, a process is not stationary ergodic:

• when some type of heterogeneity is found;

• e.g. a deterministic influence at some instants of time.


4. Markov processes

Historical facts:

• A.A. Markov was a Russian mathematician (1856 − 1922):

– first results in 1906 (discrete-space Markov processes);

– extension of the law of large numbers for dependent events;

– generalization to countably infinite state space was given by Kolmogorov in 1936.

• note: queuing theory is mostly based on Markovian assumptions!

Assume the following:

• we are given a discrete-time, discrete-space process {S(t), t ∈ T};

• the stochastic process {S(t), t ∈ T} is called a Markov process if it satisfies:

Pr{S(tn+1) = sn+1|S(tn) = sn, S(tn−1) = sn−1, . . . , S(t1) = s1} = Pr{S(tn+1) = sn+1|S(tn) = sn}. (19)

– called Markov property;

– also called memoryless property, one-step memory, etc.


The Markov property limits the memory of the process to one step:

• the future evolution of the process depends only on the current section;

• all information regarding the future evolution of the process is concentrated in the current section.

Figure 5: Realization of the discrete-time discrete-state Markov process (Y(k) plotted against time k).

Memoryless property: section of the system at time tn+1:

• is decided by the system state at the current time tn;

• does not depend on previous time instants tn−1, tn−2, . . . , t1.


4.1. Classification

Based on the nature of sets E and T :

• discrete-time, discrete-space Markov processes:

– both E and T are countable: discrete-time Markov chain.

• discrete-time, continuous-space Markov process:

– E is not countable, while T is countable: discrete-time Markov process.

• continuous-time, discrete-space Markov process:

– E is countable, while T is not: continuous-time Markov chain.

• continuous-time, continuous-space Markov process:

– both E and T are not countable: continuous-time Markov process.

Note: whenever E is countable we are dealing with Markov chain.


4.2. Memoryless property of exponential/geometric RVs

We are interested in the conditional PDF F (t + x|x) and pdf f(t + x|x) of the residual lifetime after some age x.

Figure 6: Residual time after some age x.

PrT > t + x|T > x =Pr(T > t + x)

⋂(T > x)

PrT > x =

=PrT > t + x

PrT > x =1 − F (t + x)

1 − F (x). (20)

Therefore we have for the PDF F (t + x|x):

F (t + x|x) = Pr{T ≤ t + x|T > x} = (F (t + x) − F (x)) / (1 − F (x)). (21)


For the pdf of the residual lifetime we have the following expression:

f(t + x|x) = f(t + x) / (1 − F (x)). (22)

Substituting the exponential PDF into the expression for F (t + x|x):

F (t + x|x) = ((1 − e^{−λ(t+x)}) − (1 − e^{−λx})) / (1 − (1 − e^{−λx})) = (e^{−λx} − e^{−λ(t+x)}) / e^{−λx} = 1 − e^{−λt} = F (t). (23)

Substituting the exponential pdf into the expression for f(t + x|x):

f(t + x|x) = λe^{−λ(t+x)} / (1 − (1 − e^{−λx})) = λe^{−λt} = f(t). (24)

Note the following:

• the same properties hold for the geometric distribution;

• for any other distribution, F (t + x|x) and f(t + x|x) differ from F (t) and f(t).
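The memoryless identity (20)–(23) is easy to verify by simulation: among exponential samples that survived past age x, the fraction surviving another t units matches the unconditional Pr{T > t}. The values of λ, x and t below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

lam, x, t = 1.5, 0.7, 0.4
samples = rng.exponential(1.0 / lam, size=1_000_000)

# Conditional survival of the residual life: Pr{T > t + x | T > x}.
survived_x = samples[samples > x]
residual = (survived_x > x + t).mean()

# Unconditional survival: Pr{T > t} = e^{-lam * t}.
fresh = np.exp(-lam * t)
print(round(residual, 3), round(fresh, 3))   # the two numbers agree
```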


4.3. Example

M/M/1 queuing system:

• arrivals are according to Poisson process with exponential interarrival times with mean 1/λ;

• service time is exponential with mean 1/µ.

State of the system: the number of customers in it, {S(t), t ≥ 0}:

• state space E is countable (E = {0, 1, . . . });

• index of the process (time) T is not countable;

• we have continuous-time discrete-space process.

Is {S(t), t ≥ 0} Markovian? Yes: the next state depends on the current one only:

• interarrival and service times are exponential and hence memoryless;

• it does not matter how much time has elapsed since the last arrival;

• it does not matter how long the current customer has been in service.
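Because both clocks are exponential, the M/M/1 state process can be simulated event by event without remembering any elapsed times: in each state only the total rate and the jump direction matter. A sketch under assumed rates λ = 0.5, µ = 1.0 (so the fraction of time the server is busy should approach ρ = λ/µ = 0.5):

```python
import numpy as np

rng = np.random.default_rng(4)

lam, mu = 0.5, 1.0              # assumed arrival and service rates, rho = 0.5
state, t, busy = 0, 0.0, 0.0
horizon = 200_000.0

# In state 0 only an arrival can happen; in state i > 0 the next event is the
# minimum of two exponential clocks, so the dwell time is Exp(lam + mu) and the
# event is an arrival with probability lam / (lam + mu).
while t < horizon:
    rate = lam if state == 0 else lam + mu
    dwell = rng.exponential(1.0 / rate)
    if state > 0:
        busy += dwell
    t += dwell
    if state == 0 or rng.random() < lam / rate:
        state += 1              # arrival
    else:
        state -= 1              # service completion

print(round(busy / t, 2))       # fraction of time the server is busy, near rho
```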


Figure 7: Description of the M/M/1 queuing system.


5. Homogenous Markov chain

How to define a Markov chain:

• recall, we have one-step dependence;

• two-dimensional distribution is sufficient;

• how to better represent in a compact form?

Consider the discrete-time discrete-state Markov process:

• state space is S(n) ∈ {1, 2, . . . , 6};

• index set is n ∈ {0, 1, . . . };

• we can denote it by {S(n), n = 0, 1, . . . }.

Do the following:

• fix the time instant, say n;

• fix the state in that time instant, say state 2;

• consider possible transitions of the process out of the fixed state...


Figure 8: Probability to go in one step at time n (transition probabilities p2j(n) and p3j(n) out of states 2 and 3).

A Markov process at time n is fully defined by:

pij(n) = Pr{S(n + 1) = j|S(n) = i}, i, j ∈ E. (25)

A Markov process at time (n + 1) is fully defined by:

pij(n + 1) = Pr{S(n + 2) = j|S(n + 1) = i}, i, j ∈ E. (26)


Similarly, a Markov process at time (n + m) is fully defined by:

pij(n + m) = Pr{S(n + m + 1) = j|S(n + m) = i}, i, j ∈ E. (27)

We may use a matrix for one-step transitions from i to j between times n and (n + 1):

P (n) =

| p11(n) p12(n) p13(n) · · · p1M(n) |
| p21(n) p22(n) p23(n) · · · p2M(n) |
| p31(n) p32(n) p33(n) · · · p3M(n) |
| ...    ...    ...    . . . ...    |
| pM1(n) pM2(n) pM3(n) · · · pMM(n) | . (28)

Definitions:

• Markov chain whose transition probabilities depend on time is non-homogenous;

• Markov chain whose transition probabilities do not depend on time is homogenous.

Note: non-homogenous Markov chains are not always non-stationary.


We can drop the dependence on time for these chains and write:

pij = Pr{S(n + 1) = j|S(n) = i}, (29)

• as the transition probabilities from state i to state j;

• indices n and (n + 1) are used to denote one-step transition probabilities.

Figure 9: Transition probabilities of homogenous Markov chains do not depend on time.


5.1. Discrete-time homogenous Markov chains

Recall, a sequence of RVs forms a discrete-time Markov chain if:

Pr{S(n + 1) = j|S(n) = i, . . . , S(0) = m} = Pr{S(n + 1) = j|S(n) = i}, (30)

• state space is E = {0, 1 . . . , M};

• index set is time T = {0, 1, . . . }.

We consider only homogenous discrete-time Markov chains here, i.e.

Pr{S(n + 1) = j|S(n) = i} = pij. (31)

Questions we are interested in:

• what is the holding time in the state?

• m-step transition probabilities (where is the chain after m time intervals?).


Figure 10: What is the holding time in the state?

Figure 11: Consider an arbitrary state F of a discrete-time Markov chain.

• p – probability to jump from state F to state F ;

• (1 − p) – probability to jump from state F to any other state.


Analysis:

• assume that the Markov chain is in state F at a certain time;

• the Markov chain stays in state F in the next slot given that currently it is in state F :

Pr{S(n + 1) = F |S(n) = F} = p. (32)

• the Markov chain jumps to another state in the next slot given that currently it is in state F :

Pr{S(n + 1) ≠ F |S(n) = F} = 1 − p. (33)

• the Markov chain stays in state F for m time units given that currently it is in state F :

Pr{S(n + 1) = F, . . . , S(n + m) = F |S(n) = F} = p × p × · · · × p = p^m. (34)

• the Markov chain stays in state F for m time units and then exits from F :

Pr{S(n + 1) = F, . . . , S(n + m) = F |S(n) = F} Pr{S(n + m + 1) ≠ F |S(n + m) = F} = p^m(1 − p). (35)

Important note:

• the latter gives geometric distribution, which is memoryless in nature.
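A short simulation confirms (34)–(35): staying in F is a sequence of independent "stay" coin flips, so the number m of extra slots spent in F is geometric with Pr{m} = p^m(1 − p) and mean p/(1 − p). The value p = 0.6 is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

p, trials = 0.6, 200_000
stays = np.empty(trials, dtype=int)

# Start in state F; each slot the chain stays with probability p.
# m = number of extra slots spent in F before leaving: Pr{m} = p^m * (1 - p).
for i in range(trials):
    m = 0
    while rng.random() < p:
        m += 1
    stays[i] = m

print(round(stays.mean(), 2))           # theory: p / (1 - p) = 1.5
print(round((stays == 0).mean(), 2))    # theory: 1 - p = 0.4
```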


One-step transition probabilities can be written in compact matrix form as:

P =

| p11 p12 p13 · · · p1M |
| p21 p22 p23 · · · p2M |
| p31 p32 p33 · · · p3M |
| ... ... ... . . . ... |
| pM1 pM2 pM3 · · · pMM | (36)

• for homogenous Markov chain these probabilities are the same for any time n!

Note that for each row we must have:

∑_{i=1}^{M} pji = 1, j ∈ {1, 2, . . . , M}. (37)

We were also interested in m-step transition probabilities. We may write:

pij(m) = Pr{S(n + m) = j|S(n) = i} = ∑_{∀k} pik(m − 1)pkj, m = 2, 3, . . . , (38)

• pij(m) is the probability that the state changes from i to j in m steps.
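The recursion (38) is exactly repeated matrix multiplication: the m-step matrix is the m-th power of the one-step matrix P. A sketch with an assumed 3-state transition matrix:

```python
import numpy as np

# Assumed one-step transition matrix of a toy 3-state homogenous chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# pij(m) = sum_k pik(m - 1) * pkj, i.e. P(m) = P(m - 1) @ P = P^m.
m = 4
Pm_rec = P.copy()
for _ in range(m - 1):
    Pm_rec = Pm_rec @ P

Pm_pow = np.linalg.matrix_power(P, m)
print(np.allclose(Pm_rec, Pm_pow))      # the recursion equals the matrix power
print(Pm_rec.sum(axis=1))               # every row still sums to 1
```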


m-step transition probabilities may also be written as:

pij(m) = Pr{S(n + m) = j|S(n) = i} = ∑_{∀k} pikpkj(m − 1), m = 2, 3, . . . , (39)

Figure 12: Illustration of m-step transition probabilities.

• given (m − 1) steps, there are a number of ways to reach state k from state i;

• then there is only one way to get to j from k.


5.2. Continuous-time homogenous Markov chains

We consider:

• a homogenous continuous-time Markov chain;

• note: the decision whether to jump to another state can be taken at any time instant!

Figure 13: Transitions in continuous-time Markov chain.

We are looking for sojourn time in the state.


Let us now:

• tag a time t0;

• system at time t0 is in the state i;

• τ is the RV of sojourn time in state i.

Figure 14: Time the process stays in state i.

Consider the following:

• the Markov chain is still in state i after some time t = t0 + s;

• the distribution of the time it stays in state i after time t = t0 + s is the same as after time t = t0:

Pr{τ > s + t|τ > s} = Pr{τ > t}. (40)

– future evolutions should not depend on the past!


What we have right now:

• distribution of the sojourn time must be memoryless;

• the only memoryless continuous distribution is the exponential one:

Pr{τ ≤ t} = 1 − e^{−λit}, ∀i, t > 0. (41)

Parameter λi:

• λi is called the transition rate out of state i;

• λi is non-negative;

• λi = 0: the process always stays in state i;

• 0 < λi < ∞: the probability that the process changes its state during a short interval ∆t is approximately:

λi∆t. (42)


We take the following assumptions:

• the chain is in state i at time t with probability pi(t);

• pij(∆t) is the probability of going to state j from state i after time ∆t:

pij(∆t) = Pr{S(t + ∆t) = j|S(t) = i}. (43)

Note that when ∆t = 0:

pij(0) = Pr{S(t + 0) = j|S(t) = i} = 0, i ≠ j. (44)

Assume now that ∆t > 0; then for the state of the system at time t + ∆t:

pj(t + ∆t) = ∑_{∀i} pi(t)pij(∆t). (45)


Consider an M -state continuous-time Markov chain.

Figure 15: M -state continuous-time Markov chain (with transition rates λij between the states).

The transition rate from state i to any other state j is given by:

λij = lim_{∆t→0} pij(∆t)/∆t, i ≠ j. (46)


Note the following:

• Markov chain has M states;

• there should be (M − 1) transition rates that lead out of state i.

Let now i = j and consider the diagonal element (note that pii(0) = 1):

λii = lim_{∆t→0} (pii(∆t) − 1)/∆t. (47)

Since for each i the pij(∆t) must sum to one (∑_{j=1}^{M} pij(∆t) = 1, i = 1, 2, . . . , M):

pii(∆t) = 1 − ∑_{j≠i} pij(∆t). (48)

From the previous expressions it is easy to see that:

λii = −∑_{j≠i} λij, ∀i. (49)

• often −λii is denoted simply by λi;

• note that 1/∑_{j≠i} λij is the mean sojourn time (exponential holding time in the state).


Transition rates can be structured into the transition rate matrix:

Λ =

| −∑_{j≠1} λ1j   λ12             λ13   · · ·   λ1M             |
| λ21             −∑_{j≠2} λ2j   λ23   · · ·   λ2M             |
| ...             ...            ...   . . .   ...             |
| λM1             λM2            λM3   · · ·   −∑_{j≠M} λMj    | (50)

• this matrix is also called the infinitesimal generator of the continuous-time Markov chain.
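The structure of (50) is straightforward to build numerically: put the off-diagonal rates λij in place and set each diagonal entry so that the row sums to zero. The 3-state rates below are assumed values for illustration:

```python
import numpy as np

# Assumed off-diagonal transition rates lambda_ij of a 3-state chain.
rates = np.array([[0.0, 2.0, 1.0],
                  [3.0, 0.0, 1.0],
                  [0.5, 0.5, 0.0]])

# Infinitesimal generator: diagonal entries make every row sum to zero.
Lam = rates - np.diag(rates.sum(axis=1))

print(Lam.sum(axis=1))                  # rows sum to zero
print(1.0 / rates.sum(axis=1))          # mean sojourn times 1 / sum_j lambda_ij
```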


6. Classification of states

States of a Markov chain are classified as:

• recurrent:

– assume that the process leaves a certain state;

– it returns to this state after some time with probability 1.

• transient:

– assume that the process leaves a certain state;

– it may never return to this state (returns with probability less than 1).

• absorbing:

– assume that the process enters a certain state;

– it cannot visit any other state afterwards.

Note: any state is one of the above.


[Chain: 1 ↔ 2 ↔ 3 ↔ 4 with p12, p21, p23, p32, p34, p43 > 0]

Figure 16: All states are recurrent.

[Chain: 1 ↔ 2 ↔ 3 → 4 with p12, p21, p23, p32, p34 > 0 and p43 = 0]

Figure 17: States 1, 2, 3 are transient, while state 4 is absorbing.


Do the following:

• let fj(n) be the probability that the process returns to state j for the first time n steps after leaving it;

• define the following quantities for fj(n):

fj = ∑_{n=1}^{∞} fj(n),   E[fj] = ∑_{n=1}^{∞} n fj(n), (51)

• fj: probability that the process returns to the state j some time after leaving it;

• E[fj]: mean number of steps needed to return to state j after leaving it:

– it is also called mean recurrence time for state j.

Therefore, the state is:

• recurrent if fj = 1;

• transient if fj < 1.
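The quantities fj(n) can be computed numerically from the n-step return probabilities via the first-passage decomposition p_jj(n) = ∑_{k=1}^{n} fj(k) p_jj(n − k). A sketch for a hypothetical 2-state chain; truncating the sums in (51) at nmax approximates fj and the mean recurrence time E[fj].

```python
# First-return probabilities f_j(n) of a discrete-time Markov chain,
# recovered from the n-step return probabilities p_jj^(n) through the
# renewal relation p_jj^(n) = sum_{k=1..n} f_j(k) p_jj^(n-k).

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def first_return(P, j, nmax):
    n = len(P)
    Pn = [[float(i == k) for k in range(n)] for i in range(n)]  # P^0 = I
    ret = []                            # ret[m-1] = p_jj^(m)
    for _ in range(nmax):
        Pn = mat_mult(Pn, P)
        ret.append(Pn[j][j])
    f = []                              # f[m-1] = f_j(m)
    for m in range(1, nmax + 1):
        s = sum(f[k - 1] * ret[m - k - 1] for k in range(1, m))
        f.append(ret[m - 1] - s)
    return f

# Hypothetical 2-state chain: from 0 always go to 1, from 1 return w.p. 1/2.
P = [[0.0, 1.0],
     [0.5, 0.5]]
f0 = first_return(P, 0, 50)
fj = sum(f0)                                       # ~1: state 0 is recurrent
mean_rec = sum((m + 1) * f0[m] for m in range(len(f0)))   # ~E[f_0] = 3
```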


Based on the mean recurrence time the recurrent states are:

• positive recurrent:

– if the mean recurrence time is finite: E[fj] < ∞.

• null recurrent:

– if the mean recurrence time is infinite: E[fj] → ∞.

Based on the possible return times we distinguish between:

• periodic states:

– if the return times have a period α, α > 1;

– it means that the only possible steps at which a return may occur are α, 2α, 3α, . . . .

• aperiodic ones:

– these states do not have a period.


Definition: a recurrent state is ergodic if it is positive recurrent and aperiodic:

E[fj] < ∞. (52)

Definition: A Markov chain is called ergodic if all its states are ergodic.

Definition: A Markov chain is called irreducible if every state can be reached from every other state. In an irreducible chain all states are of the same type, i.e. either:

• all states are transient;

• all states are null recurrent;

• all states are ergodic (positive recurrent and aperiodic);

• if periodic, all states have the same period.

Note: an aperiodic irreducible Markov chain with a finite number of states is always ergodic.

Important notes:

• a Markov chain is simply a class of stochastic processes;

• for a Markov chain, ergodicity means that stationary state probabilities exist!


7. Ergodic Markov chains

Assume the following:

• ergodic homogenous Markov chain;

• for such a chain the limiting state probabilities exist and are given by:

pj = lim_{n→∞} Pr{S(n) = j}, (53)

Important notes:

• limiting state probabilities are independent of the initial state probabilities;

• one can find them using the mean recurrence time as follows:

pj = 1/E[fj], ∀j. (54)

The limiting state distribution pj, ∀j, is also called:

• steady-state distribution, equilibrium distribution, stationary distribution, etc.
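The independence from the initial state can be checked numerically: every row of P^n converges to the same limiting vector as n grows. A sketch for a hypothetical 2-state ergodic chain whose mean recurrence time to state 0 is 3, so by (54) the limiting probability of state 0 is 1/3.

```python
# Rows of P^n converge to the limiting distribution (53) regardless of
# the starting state. The 2-state chain is an illustrative assumption.

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.0, 1.0],
     [0.5, 0.5]]
Pn = P
for _ in range(100):        # P^101: second eigenvalue (-0.5) has died out
    Pn = mat_mult(Pn, P)

row0, row1 = Pn             # both rows ~ (1/3, 2/3) = (1/E[f_0], 1/E[f_1])
```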


Note the following:

• it is a really complicated task to determine E[fj], ∀j;

• question: is there any other way to determine pj = lim_{n→∞} Pr{S(n) = j}?

The stationary distribution of the discrete-time Markov chain is the solution of:

∑_{∀i} pi pij = pj,   ∑_{∀i} pi = 1, (55)

In matrix form:

pP = p, pe = 1. (56)

• e is the unit column vector with all components equal to 1.

Note the following: if the system has a finite number of states, N, we solve using:

• N − 1 equations from available N balance equations;

• normalization condition.
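A minimal sketch of this recipe: build the balance equations of (55), replace one of them by the normalization condition, and solve the resulting linear system. The 3-state transition matrix is an illustrative assumption.

```python
# Stationary distribution of a DTMC from (55)/(56): take N-1 balance
# equations p P = p and replace the last one by sum(p) = 1.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense system)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

P = [[0.5, 0.25, 0.0],     # hypothetical 3-state transition matrix
     [0.5, 0.5, 0.5],
     [0.0, 0.25, 0.5]]
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
N = len(P)
# Balance equations sum_i p_i P[i][j] = p_j  ->  sum_i p_i (P[i][j] - I) = 0.
A = [[P[i][j] - (i == j) for i in range(N)] for j in range(N)]
A[N - 1] = [1.0] * N            # replace last balance eq. by normalization
b = [0.0] * (N - 1) + [1.0]
p = solve(A, b)                 # -> (0.25, 0.5, 0.25) for this matrix
```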


The stationary distribution of the continuous-time Markov chain is the solution of:

∑_{∀i} pi λij = 0,   ∑_{∀i} pi = 1, (57)

• the first equations are the balance equations;

• the second equation is the normalizing condition.

In matrix form:

pΛ = 0, pe = 1. (58)

• e is unit column vector;

• Λ is infinitesimal generator (transition rate matrix).

Note the following: if the system has a finite number of states, N, we solve using:

• N − 1 equations from available N balance equations;

• normalization condition.
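For a 2-state chain, (57)/(58) solve in closed form, which makes a compact check of the balance equations; the rates are illustrative assumptions.

```python
# Stationary distribution of a 2-state CTMC from p Lam = 0, p e = 1 (58).
# For two states the single balance equation p0*l01 = p1*l10 gives the
# answer directly. Rates l01, l10 are hypothetical.

l01, l10 = 2.0, 3.0
Lam = [[-l01, l01],
       [l10, -l10]]           # infinitesimal generator, rows sum to 0

p = [l10 / (l01 + l10), l01 / (l01 + l10)]   # -> (0.6, 0.4) here

# Verify p Lam = 0 component-wise:
residual = [sum(p[i] * Lam[i][j] for i in range(2)) for j in range(2)]
```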


8. Birth-death processes

If the current state at time instant n is S(n) = i, the state at the next time instant is one of:

• S(n + 1) ∈ {i + 1, i, i − 1}.

We consider: aperiodic irreducible continuous time birth-death processes.

One of the first applications:

• modeling the number of prey in a closed population of animals:

– predator: is able to kill prey;

– prey: is able to give birth to another prey.

There are two special cases of birth-death process:

• pure birth process;

• pure death process.


8.1. Continuous-time birth-death process

Let us introduce the following parameters:

• λk: the birth rate in state k;

• µk: the death rate in state k.

We are looking for:

• transient state probabilities;

• steady-state probabilities.

Consider a time interval ∆t such that ∆t → 0:

Pr{from state k to state k + 1 in ∆t} = λk∆t,

Pr{from state k to state k − 1 in ∆t} = µk∆t,

Pr{from state k to state k in ∆t} = 1 − (λk + µk)∆t,

Pr{any other transition in ∆t} = 0. (59)

Note: we do not allow more than one event in such a small time interval.
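These ∆t-dynamics are equivalent to waiting an exponentially distributed time with rate λk + µk in state k and then moving up or down with probabilities proportional to the rates. A simulation sketch under the assumption of constant rates λ < µ; the long-run fraction of time in state 0 should approach the M/M/1 value 1 − λ/µ.

```python
# Simulate a birth-death process with exponential sojourn times.
# Constant rates lam < mu are an illustrative assumption.
import random

random.seed(42)
lam, mu = 1.0, 2.0
T, t, state = 5000.0, 0.0, 0
time_in = {}                                  # total time spent in each state
while t < T:
    rate = lam + (mu if state > 0 else 0.0)   # total rate out of the state
    dwell = random.expovariate(rate)          # exponential sojourn time
    time_in[state] = time_in.get(state, 0.0) + dwell
    t += dwell
    if state > 0 and random.random() < mu / rate:
        state -= 1                            # death
    else:
        state += 1                            # birth

p0_hat = time_in[0] / t                       # should be near 1 - lam/mu = 0.5
```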


Define the system state:

• the number of items in the system:

S(t) = (number of births in [0, t)) − (number of deaths in [0, t)), pk(t) = Pr{S(t) = k}. (60)

• the system state at time 0 is S(0) = 0.

Note: when we are interested in equilibrium distribution the initial condition does not matter.

[States 0, 1, 2, . . . , k − 1, k, k + 1, . . . with birth rates λk and death rates µk]

Figure 18: State transition diagram of the continuous-time birth-death process.


Analysis is as follows:

• consider time interval [t, t + ∆t);

• state transitions in this time interval can be described as follows:

p0(t + ∆t) = p0(t)(1 − λ0∆t) + p1(t)µ1∆t,

pk(t + ∆t) = pk(t)(1 − λk∆t − µk∆t) + pk−1(t)λk−1∆t + pk+1(t)µk+1∆t, k ≥ 1, (61)

• note that for these state probabilities the normalizing condition should hold:

∑_{k=0}^{∞} pk(t) = 1. (62)

• letting ∆t → 0 we get the following set of differential equations:

dp0(t)/dt = −λ0p0(t) + µ1p1(t),

dpk(t)/dt = −(λk + µk)pk(t) + λk−1pk−1(t) + µk+1pk+1(t). (63)


• to obtain the equilibrium solution note that when t → ∞:

dpi(t)/dt = 0, ∀i. (64)

• finally, we have the following set of equations to be solved for the equilibrium:

0 = −λ0p0 + µ1p1,

0 = −(λk + µk)pk + λk−1pk−1 + µk+1pk+1. (65)

• the normalizing condition is:

∑_{i=0}^{∞} pi = 1. (66)

Note: we can also solve the differential equations to get the transient solution.
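The transient solution can be approximated by integrating (63) numerically on a truncated state space. A forward-Euler sketch with constant rates λ < µ (the rates, truncation level K and step dt are illustrative assumptions); for large t the solution approaches the equilibrium of (65), which for these rates is pk = (1 − ρ)ρ^k with ρ = λ/µ.

```python
# Integrate the forward equations (63) for the transient probabilities
# p_k(t), truncating the state space at K states (tail mass negligible
# for these rates). Forward Euler with a small fixed step.

lam, mu = 1.0, 2.0
K = 40                         # truncation level (assumption)
dt, T = 0.002, 50.0
p = [1.0] + [0.0] * (K - 1)    # start in state 0: S(0) = 0

t = 0.0
while t < T:
    dp = [0.0] * K
    dp[0] = -lam * p[0] + mu * p[1]
    for k in range(1, K - 1):
        dp[k] = -(lam + mu) * p[k] + lam * p[k - 1] + mu * p[k + 1]
    dp[K - 1] = -mu * p[K - 1] + lam * p[K - 2]   # reflecting upper boundary
    p = [pk + dpk * dt for pk, dpk in zip(p, dp)]
    t += dt

rho = lam / mu                 # equilibrium: p_k -> (1 - rho) * rho**k
```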


The solution is given by:

pk = p0 ∏_{i=0}^{k−1} (λi/µi+1),   p0 = 1 / (1 + ∑_{k=1}^{∞} ∏_{i=0}^{k−1} (λi/µi+1)). (67)

• the form of this solution is known as a product form.

Example of application: the M/M/1 queuing system:

• interarrival times are exponential with rate λ;

• service times are exponential with rate µ;

• the number of jobs in the system is a continuous-time birth-death process.

Other examples:

• queues of M/M/-/-/- type;

• some problems in reliability theory.
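The product form (67) can be evaluated by truncating the sums at a large K. A sketch that, for constant rates (an illustrative assumption), reproduces the known M/M/1 result pk = (1 − ρ)ρ^k with ρ = λ/µ.

```python
# Evaluate the product-form solution (67) with the sums truncated at K.

def birth_death_stationary(lam, mu, K):
    """lam[k], mu[k]: rates in state k; returns p_0..p_{K-1} (truncated)."""
    prods = [1.0]                      # prod_{i=0}^{k-1} lam_i / mu_{i+1}
    for k in range(1, K):
        prods.append(prods[-1] * lam[k - 1] / mu[k])
    p0 = 1.0 / sum(prods)              # eq. (67) with the sum truncated
    return [p0 * w for w in prods]

K = 200
lam = [1.0] * K                        # constant birth rate
mu = [0.0] + [2.0] * (K - 1)           # constant death rate (mu_0 unused)
p = birth_death_stationary(lam, mu, K)

rho = 0.5                              # lam/mu: compare with (1 - rho) rho**k
```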


8.2. Global balance equations

How we analyzed so far:

• considered ∆t → 0;

• wrote the equations;

• considered the steady-state behavior (t → ∞).

Is it possible to make it easier? Consider flow balance for each state:

• draw the state transition diagram;

• draw closed boundaries and equate flows across each boundary:

flow entering state k = λk−1pk−1 + µk+1pk+1,

flow leaving state k = (λk + µk)pk. (68)

• equate the flows entering and leaving the state to get the global balance equation:

λk−1pk−1 + µk+1pk+1 = (λk + µk)pk. (69)

• solve the obtained equations along with the normalization condition.


8.3. Detailed balance equations

Is it possible to get the equations even more easily?

• yes: use detailed balance equations;

• how: close the boundary at infinity.

Consider state k:

flow entering state k = λk−1pk−1,

flow leaving state k = µkpk,

equate the two to get: λk−1pk−1 = µkpk. (70)

[Boundary between states k − 1 and k, closed at infinity]

Figure 19: Boundaries for detailed balance equations.


8.4. The form of global and detailed balance equations

Flow balance equations in a birth-death process are as follows:

• global balance equations:

– closed boundary around a single state j:

∑_{i≠j} pi pij = pj ∑_{i≠j} pji. (71)

– equates flows around state(s).

• detailed balance equations:

– boundary between states i and j closed at +∞ and −∞:

pi pij = pj pji. (72)

– equates flows between states i and j in a pair-wise manner.

Note: the solution is the same!
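A quick numerical check that both equation sets agree: build pk from the detailed balance recursion pk = pk−1 λk−1/µk and verify that the result also satisfies the global balance equation (69). The state-dependent rates and the truncation level are illustrative assumptions.

```python
# Detailed balance (70) gives the distribution by a one-line recursion;
# the result also satisfies the global balance equation (69).

K = 30
lam = [2.0 / (k + 1) for k in range(K)]        # birth rate falls with k
mu = [0.0] + [1.0] * (K - 1)                   # constant death rate (mu_0 unused)

p = [1.0]
for k in range(1, K):
    p.append(p[-1] * lam[k - 1] / mu[k])       # detailed balance step
total = sum(p)
p = [x / total for x in p]                     # normalize

# Check global balance (69) at an interior state k:
k = 5
inflow = lam[k - 1] * p[k - 1] + mu[k + 1] * p[k + 1]
outflow = (lam[k] + mu[k]) * p[k]
```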


8.5. Existence of equilibrium distribution

Consider two parameters:

α = ∑_{k=0}^{∞} ∏_{i=0}^{k−1} (λi/µi+1),   β = ∑_{k=0}^{∞} 1/(λk ∏_{i=0}^{k−1} (λi/µi+1)). (73)

Note the following:

• the steady-state distribution exists only if all states of the process are ergodic;

• all states of a birth-death process are ergodic if:

α < ∞,   β → ∞. (74)

Note: for specific birth-death processes the condition can be much simpler:

• example: M/M/1 queue: λ/µ < 1!
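The conditions (74) can be probed numerically by comparing partial sums of α and β at two truncation levels: for λ/µ < 1 the α-sums stabilize while the β-sums keep growing. The constant rates are illustrative assumptions.

```python
# Partial sums of alpha and beta from (73) for an M/M/1-type process.

def alpha_beta_partial(lam, mu, K):
    """Partial sums of alpha and beta, truncated at K terms."""
    a = b = 0.0
    prod = 1.0                          # prod_{i=0}^{k-1} lam_i / mu_{i+1}
    for k in range(K):
        a += prod
        b += 1.0 / (lam * prod)
        prod *= lam / mu
    return a, b

lam, mu = 1.0, 2.0                      # rho = 0.5 < 1: ergodic case
a1, b1 = alpha_beta_partial(lam, mu, 100)
a2, b2 = alpha_beta_partial(lam, mu, 200)
# a1 and a2 agree (alpha converges); b2 dwarfs b1 (beta diverges).
```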
