Markov Chains and Stationary Distributions

Matt Williamson

Lane Department of Computer Science and Electrical Engineering
West Virginia University

March 19, 2012




Outline

1 Stationary Distributions
      Fundamental Theorem of Markov Chains
      Computing Stationary Distributions
      Example: A Simple Queue

2 Random Walks on Undirected Graphs
      Application: An s-t Connectivity Algorithm

3 Parrondo's Paradox



Probability Matrix

Recall

Let P be a one-step probability matrix of a Markov chain such that

$$P = \begin{pmatrix}
P_{0,0} & P_{0,1} & \cdots & P_{0,j} & \cdots \\
P_{1,0} & P_{1,1} & \cdots & P_{1,j} & \cdots \\
\vdots  & \vdots  & \ddots & \vdots  & \ddots \\
P_{i,0} & P_{i,1} & \cdots & P_{i,j} & \cdots \\
\vdots  & \vdots  & \ddots & \vdots  & \ddots
\end{pmatrix}.$$

Let $p(t) = (p_0(t), p_1(t), p_2(t), \ldots)$ be the vector giving the probability distribution of the state of the chain at time t. Then

$$p(t) = p(t-1) \cdot P.$$

We are now interested in state probability distributions that do not change after a transition.
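For concreteness, here is a minimal sketch (in Python with NumPy, not part of the original slides) of how the relation p(t) = p(t − 1) · P evolves a distribution vector; the 3-state matrix used is the example that appears later in these notes.

```python
import numpy as np

# One-step transition matrix (the 3-state example used later in these notes).
P = np.array([[1/3, 1/3, 1/3],
              [1/4, 1/2, 1/4],
              [1/6, 1/3, 1/2]])

# p(0): start deterministically in state 0.
p = np.array([1.0, 0.0, 0.0])

# Apply p(t) = p(t-1) . P for a few steps and watch the distribution evolve.
for t in range(1, 6):
    p = p @ P
    print(f"t={t}: {np.round(p, 4)}")
```

After only a few steps the printed vectors stop changing noticeably, which is exactly the "distribution that does not change after a transition" we are about to define.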


Stationary Distributions

Definition

A stationary distribution (also called an equilibrium distribution) of a Markov chain is a probability distribution π such that

π = π · P.

Notes

If a chain reaches a stationary distribution, then it maintains that distribution for all future time.

A stationary distribution represents a steady state (or an equilibrium) in the chain's behavior.

Stationary distributions play a key role in analyzing Markov chains.


Stationary Distributions

Example

Suppose we have a Markov chain having state space S = {0, 1, 2} and transition matrix

$$P = \begin{pmatrix}
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
\frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\
\frac{1}{6} & \frac{1}{3} & \frac{1}{2}
\end{pmatrix}.$$

The stationary distribution π of this Markov chain is

$$\pi_0 = \frac{6}{25}, \qquad \pi_1 = \frac{10}{25}, \qquad \pi_2 = \frac{9}{25}.$$

What does this mean?

Consider the total time spent once the chain reaches the stationary distribution.

6/25 = 24% of the time is spent in state 0.
10/25 = 40% of the time is spent in state 1.
9/25 = 36% of the time is spent in state 2.
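As a quick sanity check (an illustration, not part of the original slides), the following sketch verifies numerically that this π satisfies π = π · P and sums to 1.

```python
import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/4, 1/2, 1/4],
              [1/6, 1/3, 1/2]])

pi = np.array([6/25, 10/25, 9/25])

# A stationary distribution satisfies pi = pi . P and is a proper distribution.
print(np.allclose(pi @ P, pi))    # True
print(np.isclose(pi.sum(), 1.0))  # True
```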


Stationary Distributions

Fundamental Theorem of Markov Chains

We discuss first the case of finite chains and then extend the results to any discrete space chain.

Without loss of generality, assume that the finite set of states of the Markov chain is {0, 1, . . . , n}.

Theorem

Any finite, irreducible, and ergodic Markov chain has the following properties:

1 The chain has a unique stationary distribution π = (π0, π1, . . . , πn).

2 For all j and i, the limit $\lim_{t \to \infty} P^t_{j,i}$ exists, and it is independent of j.

3 $\pi_i = \lim_{t \to \infty} P^t_{j,i} = \frac{1}{h_{i,i}}$, where $h_{i,i}$ is the expected time to return to state i when starting at state i.
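To make property 2 concrete, here is a small sketch (Python/NumPy, added for illustration and not from the original notes) showing that every row of $P^t$ converges to the same vector π for the 3-state example above, regardless of the starting state.

```python
import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/4, 1/2, 1/4],
              [1/6, 1/3, 1/2]])

# Row j of P^t is the distribution after t steps when starting in state j.
Pt = np.linalg.matrix_power(P, 50)
print(np.round(Pt, 6))
# Every row is (approximately) (0.24, 0.40, 0.36) = (6/25, 10/25, 9/25):
# the limit exists and is independent of the starting state j.
```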


Fundamental Theorem of Markov Chains

Intuition

The stationary distribution π has two interpretations:

1 $\pi_i$ is the limiting probability that the Markov chain will be in state i infinitely far out in the future.

      This probability is independent of the initial state.
      If we run the chain long enough, the initial state of the chain is almost forgotten, and the probability of being in state i converges to $\pi_i$.

2 $\pi_i$ is the inverse of $h_{i,i} = \sum_{t=1}^{\infty} t \cdot r^t_{i,i}$.

      If the average time to return to state i from i is $h_{i,i}$, then we expect to be in state i for $\frac{1}{h_{i,i}}$ of the time and thus, in the limit, we must have $\pi_i = \frac{1}{h_{i,i}}$.
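The following simulation sketch (plain Python, added here as an illustration rather than taken from the original notes) checks the second interpretation on the 3-state example: it estimates the expected return time $h_{0,0}$ empirically and compares $1/h_{0,0}$ with $\pi_0 = 6/25$.

```python
import random

# Transition probabilities of the 3-state example, one row per state.
P = [[1/3, 1/3, 1/3],
     [1/4, 1/2, 1/4],
     [1/6, 1/3, 1/2]]

def step(state):
    """Take one step of the chain from `state` by sampling its row of P."""
    r, cum = random.random(), 0.0
    for nxt, p in enumerate(P[state]):
        cum += p
        if r < cum:
            return nxt
    return len(P) - 1

# Estimate h_{0,0}: average number of steps to return to state 0.
trials, total_steps = 100_000, 0
for _ in range(trials):
    state, steps = step(0), 1
    while state != 0:
        state = step(state)
        steps += 1
    total_steps += steps

h00 = total_steps / trials
print(f"estimated h_00 = {h00:.3f}, 1/h_00 = {1/h00:.3f}, pi_0 = {6/25}")
```

The estimate of $1/h_{0,0}$ should come out near 0.24, matching $\pi_0$.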


Proof of Fundamental Theorem

Lemma

For any irreducible, ergodic Markov chain and for any state i, the limit $\lim_{t \to \infty} P^t_{i,i}$ exists and

$$\lim_{t \to \infty} P^t_{i,i} = \frac{1}{h_{i,i}}.$$

Intuition

Since the expected time between visits to i is $h_{i,i}$, state i is visited $\frac{1}{h_{i,i}}$ of the time.

Thus $\lim_{t \to \infty} P^t_{i,i}$, which represents the probability that a state chosen far in the future is state i when the chain starts at state i, must be $\frac{1}{h_{i,i}}$.


Proof of Fundamental Theorem

Showing Limits Exist and are Independent of Starting State j

Using the fact that $\lim_{t \to \infty} P^t_{i,i}$ exists, for any j and i,

$$\lim_{t \to \infty} P^t_{j,i} = \lim_{t \to \infty} P^t_{i,i} = \frac{1}{h_{i,i}}.$$

Proof

Read pages 168-169 in the book!


Proof of Fundamental Theorem

Proving π Forms a Stationary Distribution

Let $\pi_i = \lim_{t \to \infty} P^t_{j,i} = \frac{1}{h_{i,i}}$.

For every $t \geq 0$, we have $P^t_{j,i} \geq 0$ and thus $\pi_i \geq 0$.

For any $t \geq 0$, $\sum_{i=0}^{n} P^t_{j,i} = 1$, and thus (exchanging the limit with the finite sum)

$$\lim_{t \to \infty} \sum_{i=0}^{n} P^t_{j,i} = \sum_{i=0}^{n} \lim_{t \to \infty} P^t_{j,i} = \sum_{i=0}^{n} \pi_i = 1,$$

so π is a proper distribution.


Now,

$$P^{t+1}_{j,i} = \sum_{k=0}^{n} P^t_{j,k} \cdot P_{k,i}.$$

Letting $t \to \infty$, we have $P^{t+1}_{j,i} \to \pi_i$ and $P^t_{j,k} \to \pi_k$, so

$$\pi_i = \sum_{k=0}^{n} \pi_k \cdot P_{k,i}.$$

Therefore, π is a stationary distribution.


Proof of Fundamental Theorem

Proving the Stationary Distribution is Unique

Suppose there were another stationary distribution φ.

By the same argument, we would have

$$\phi_i = \sum_{k=0}^{n} \phi_k \cdot P^t_{k,i}.$$

Taking the limit as $t \to \infty$ yields

$$\phi_i = \sum_{k=0}^{n} \phi_k \pi_i = \pi_i \sum_{k=0}^{n} \phi_k.$$

Since $\sum_{k=0}^{n} \phi_k = 1$, it follows that $\phi_i = \pi_i$ for all i, or φ = π.


Computing Stationary Distributions

System of Linear Equations

One way to compute the stationary distribution of a finite Markov chain is to solve the system of linear equations

$$\pi \cdot P = \pi.$$

If we are given a transition matrix

$$P = \begin{pmatrix}
P_{0,0} & P_{0,1} & \cdots & P_{0,j} & \cdots \\
P_{1,0} & P_{1,1} & \cdots & P_{1,j} & \cdots \\
\vdots  & \vdots  & \ddots & \vdots  & \ddots \\
P_{i,0} & P_{i,1} & \cdots & P_{i,j} & \cdots \\
\vdots  & \vdots  & \ddots & \vdots  & \ddots
\end{pmatrix},$$

we have the following system:

$$\begin{aligned}
\pi_0 &= \pi_0 \cdot P_{0,0} + \pi_1 \cdot P_{1,0} + \cdots + \pi_i \cdot P_{i,0} + \cdots \\
\pi_1 &= \pi_0 \cdot P_{0,1} + \pi_1 \cdot P_{1,1} + \cdots + \pi_i \cdot P_{i,1} + \cdots \\
\pi_j &= \pi_0 \cdot P_{0,j} + \pi_1 \cdot P_{1,j} + \cdots + \pi_i \cdot P_{i,j} + \cdots
\end{aligned}$$

Done? Not quite: these balance equations are linearly dependent, so we also add the normalization constraint

$$1 = \pi_0 + \pi_1 + \cdots + \pi_i + \cdots.$$
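As an illustrative sketch (Python/NumPy, not part of the original slides), one common way to solve this system for a finite chain is to replace one of the linearly dependent balance equations with the normalization constraint; here it is applied to the 3-state example from earlier.

```python
import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/4, 1/2, 1/4],
              [1/6, 1/3, 1/2]])
n = P.shape[0]

# pi . P = pi  <=>  (P^T - I) pi^T = 0.  Replace the last equation
# with the normalization constraint sum(pi) = 1 to pin down a unique solution.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)  # approximately [0.24, 0.40, 0.36] = (6/25, 10/25, 9/25)
```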


Computing Stationary Distributions

Exercise

Three out of every four trucks on the road are followed by a car, while only one out of every five cars is followed by a truck. What fraction of vehicles on the road are trucks?

Solution

Imagine sitting on the side of the road watching vehicles go by.

If a truck goes by, the next vehicle will be a car with probability 3/4 and will be a truck with probability 1/4.

If a car goes by, the next vehicle will be a car with probability 4/5 and will be a truck with probability 1/5.

Let 0 be the state that the vehicle is a truck, and let 1 be the state that the vehicle is a car.


Solution (Contd.)

Our transition probability matrix is

P =
[ 1/4  3/4 ]
[ 1/5  4/5 ].

Our system of equations is

π0 = 1/4 · π0 + 1/5 · π1
π1 = 3/4 · π0 + 4/5 · π1
1 = π0 + π1.

Solving the first equation gives us

3/4 · π0 = 1/5 · π1
π0 = 4/15 · π1.


Solution (Contd.)

Plugging this into the constraint π0 + π1 = 1, we get

4/15 · π1 + π1 = 1
19/15 · π1 = 1
π1 = 15/19.

Therefore, π0 = 4/19. Thus, as we sit by the road, 4/19 of all vehicles passing by will be trucks.
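As a quick numerical sanity check (an added sketch, not part of the original notes), iterating the truck/car chain from an arbitrary starting distribution converges to the same answer:

import numpy as np

# Truck/car chain: state 0 = truck, state 1 = car.
P = np.array([[1/4, 3/4],
              [1/5, 4/5]])
pi = np.array([0.5, 0.5])   # arbitrary starting distribution
for _ in range(1000):
    pi = pi @ P             # repeated one-step transitions
print(pi)                   # ~[0.2105, 0.7895]
print(4/19, 15/19)          # matches pi0 = 4/19 and pi1 = 15/19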


Cut Sets

Another technique is to study the cut-sets of the Markov chain.

For any state i of the chain,

∑_{j=0}^{n} πj · Pj,i = πi = πi · ∑_{j=0}^{n} Pi,j,

or

∑_{j ≠ i} πj · Pj,i = ∑_{j ≠ i} πi · Pi,j.

In the stationary distribution, the probability that a chain leaves a state equals the probability that it enters the state.

This observation can be generalized to sets of states.

Theorem

Let S be a set of states of a finite, irreducible, aperiodic Markov chain. In the stationary distribution, the probability that the chain leaves the set S equals the probability that it enters S.
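For the single-state version of this balance, here is a small check (added for illustration, not from the notes) using the truck/car chain and its stationary distribution: for each state, the probability of leaving equals the probability of entering.

import numpy as np

P = np.array([[1/4, 3/4],
              [1/5, 4/5]])       # truck/car chain
pi = np.array([4/19, 15/19])     # its stationary distribution
for i in range(2):
    leaving = sum(pi[i] * P[i, j] for j in range(2) if j != i)
    entering = sum(pi[j] * P[j, i] for j in range(2) if j != i)
    print(i, leaving, entering)  # both equal 3/19 for each state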


Note

If C is a cut-set in the graph representation of the chain, then in the stationary distribution, the probability of crossing the cut-set in one direction is equal to the probability of crossing the cut-set in the other direction.

Example

[State diagram: two states 0 and 1, with transition 0 → 1 of probability p, transition 1 → 0 of probability q, and self-loops of probability 1 − p and 1 − q.]

From state 0, we move to state 1 with probability p and stay at state 0 with probability 1 − p.

From state 1, we move to state 0 with probability q and stay at state 1 with probability 1 − q.

If p and q are small, state changes are rare.


Using System of Equations

Our transition matrix is

P =
[ 1 − p    p   ]
[   q    1 − q ].

Solving π · P = π corresponds to solving the system

π0 · (1 − p) + π1 · q = π0
π0 · p + π1 · (1 − q) = π1
π0 + π1 = 1.

Our solution is

π0 = q / (p + q)  and  π1 = p / (p + q).

Using Cut-Set Formulation

The probability of leaving state 0 must equal the probability of entering state 0, or

π0 · p = π1 · q.

Using π0 + π1 = 1 yields π0 = q / (p + q) and π1 = p / (p + q).
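The closed form is easy to check numerically. The sketch below (added for illustration, with hypothetical values p = 0.05 and q = 0.2) verifies both the stationarity of (q/(p+q), p/(p+q)) and the cut-set balance π0 · p = π1 · q:

import numpy as np

p, q = 0.05, 0.2                           # hypothetical transition probabilities
P = np.array([[1 - p, p],
              [q, 1 - q]])
pi = np.array([q / (p + q), p / (p + q)])  # claimed stationary distribution
print(pi)                                  # [0.8, 0.2]
print(np.allclose(pi @ P, pi))             # True: pi is stationary
print(np.isclose(pi[0] * p, pi[1] * q))    # True: cut-set balance holds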


Theorem

Consider a finite, irreducible, and ergodic Markov chain with transition matrix P. If there are nonnegative numbers π = (π0, . . . , πn) such that ∑_{i=0}^{n} πi = 1 and if, for any pair of states i, j,

πi · Pi,j = πj · Pj,i,

then π is the stationary distribution corresponding to P.

Proof

Consider the j th entry of π · P. Using the assumption of the theorem, we find that it equals

∑_{i=0}^{n} πi · Pi,j = ∑_{i=0}^{n} πj · Pj,i = πj · ∑_{i=0}^{n} Pj,i = πj,

where the last step uses the fact that the rows of P sum to 1.

Thus π satisfies π = π · P. Since ∑_{i=0}^{n} πi = 1, it follows from the Fundamental Theorem that π must be the unique stationary distribution of the Markov chain.

Definition

Chains that satisfy the condition πi · Pi,j = πj · Pj,i are called time reversible.
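Detailed balance is easy to test numerically. The sketch below (added for illustration; the helper name is ours) checks πi · Pi,j = πj · Pj,i for all pairs by comparing the probability-flow matrix with its transpose, using the two-state chain from the previous example as input:

import numpy as np

def is_time_reversible(P, pi, tol=1e-12):
    # Detailed balance: pi[i] * P[i, j] == pi[j] * P[j, i] for all i, j,
    # i.e. the flow matrix pi_i * P[i, j] is symmetric.
    P, pi = np.asarray(P, float), np.asarray(pi, float)
    flow = pi[:, None] * P
    return np.allclose(flow, flow.T, atol=tol)

p, q = 0.05, 0.2
P = np.array([[1 - p, p], [q, 1 - q]])
pi = np.array([q / (p + q), p / (p + q)])
print(is_time_reversible(P, pi))   # True: every two-state chain is reversible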


Convergence of Markov Chains with Countably Infinite State Spaces

Theorem

Any irreducible aperiodic Markov chain belongs to one of the following two categories:

1. The chain is ergodic: for any pair of states i and j, the limit lim_{t→∞} P^t_{j,i} exists and is independent of j, and the chain has a unique stationary distribution πi = lim_{t→∞} P^t_{j,i} > 0.

2. No state is positive recurrent: for all i and j, lim_{t→∞} P^t_{j,i} = 0, and the chain has no stationary distribution.

Proof

Same as the proof of the Fundamental Theorem.



Queue Example

Queues

A queue is a line where customers wait for service. We examine a model for a bounded queue where time is divided into steps of equal length. At each time step, exactly one of the following occurs.

If the queue has fewer than n customers, then with probability λ, a new customer joins the queue.

If the queue is not empty, then with probability µ, the head of the line is served and leaves the queue.

With the remaining probability, which is 1 − λ − µ, the queue is unchanged.

Transition Matrix

If Xt is the number of customers in the queue at time t, then all the Xt yield a finite-state Markov chain. Its transition matrix has the following nonzero entries:

Pi,i+1 = λ            if i < n
Pi,i−1 = µ            if i > 0
Pi,i   = 1 − λ        if i = 0
Pi,i   = 1 − λ − µ    if 1 ≤ i ≤ n − 1
Pi,i   = 1 − µ        if i = n
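To make the structure concrete, the sketch below (not from the notes; the parameters n = 5, λ = 0.3, µ = 0.4 are hypothetical) builds this transition matrix and recovers its stationary distribution by repeatedly applying P:

import numpy as np

def queue_transition_matrix(n, lam, mu):
    # Bounded queue on states 0..n: arrival with prob lam (if there is room),
    # departure with prob mu (if nonempty), otherwise the queue stays unchanged.
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            P[i, i + 1] = lam
        if i > 0:
            P[i, i - 1] = mu
        P[i, i] = 1.0 - P[i].sum()   # remaining probability
    return P

n, lam, mu = 5, 0.3, 0.4             # hypothetical parameters
P = queue_transition_matrix(n, lam, mu)
pi = np.full(n + 1, 1.0 / (n + 1))
for _ in range(10000):
    pi = pi @ P
print(pi)                            # stationary distribution over queue lengths 0..n
print(np.allclose(pi @ P, pi))       # True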


Markov Chain

The Markov chain is irreducible, finite, and aperiodic, so it has a unique stationary distribution π.

We use π = π · P to write

π0 = (1 − λ) · π0 + µ · π1
πi = λ · πi−1 + (1 − λ − µ) · πi + µ · πi+1,  1 ≤ i ≤ n − 1
πn = λ · πn−1 + (1 − µ) · πn

Solution

A solution to the system of equations is

πi = π0 · (λ/µ)^i.


Adding the requirement Σ_{i=0}^{n} πi = 1, we have

Σ_{i=0}^{n} πi = Σ_{i=0}^{n} π0 · (λ/µ)^i = 1,

or

π0 = 1 / Σ_{j=0}^{n} (λ/µ)^j.

For all 0 ≤ i ≤ n,

πi = (λ/µ)^i / Σ_{j=0}^{n} (λ/µ)^j.
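A minimal numeric check of this closed form (Python with NumPy; the same illustrative values as before, and the transition matrix is rebuilt inline so the snippet stands alone):

import numpy as np

n, lam, mu = 5, 0.3, 0.5                   # illustrative values with lam + mu <= 1

# Closed-form stationary distribution: pi_i proportional to (lam/mu)**i.
r = lam / mu
pi = r ** np.arange(n + 1)
pi /= pi.sum()

# Rebuild the bounded-queue transition matrix and check pi = pi P.
P = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    if i < n:
        P[i, i + 1] = lam
    if i > 0:
        P[i, i - 1] = mu
    P[i, i] = 1.0 - P[i].sum()

assert np.allclose(pi, pi @ P)
print(pi.round(4))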


Cut-Sets

For any i, the transitions i → i + 1 and i + 1 → i constitute a cut-set of the graph representing the Markov chain. Thus, in the stationary distribution, the probability of moving from state i to state i + 1 must be equal to the probability of moving from state i + 1 to i, or

λ · πi = µ · πi+1.

A simple induction now yields

πi = π0 · (λ/µ)^i.

No Limit

When there is no upper limit n on the number of customers in a queue, the Markov chain is no longer finite. The Markov chain has a countably infinite state space. Applying our theorem for countably infinite state spaces, the Markov chain has a stationary distribution if and only if the following set of linear equations has a solution with all πi > 0:

π0 = (1 − λ) · π0 + µ · π1
πi = λ · πi−1 + (1 − λ − µ) · πi + µ · πi+1,  i ≥ 1


Solution

A solution of the system is

πi = (λ/µ)^i / Σ_{j=0}^{∞} (λ/µ)^j = (λ/µ)^i · (1 − λ/µ).

This generalizes the solution to the case where there is an upper bound n on the number of customers in the system,

πi = (λ/µ)^i / Σ_{j=0}^{n} (λ/µ)^j.

All of the πi are greater than 0 if and only if λ < µ, i.e., the rate at which customers arrive is lower than the rate at which customers are served.

If λ > µ, the rate at which customers arrive is higher than the rate they depart.

This means there is no stationary distribution, and the queue length will become arbitrarily long.

In this case, each state in the Markov chain is transient.

If λ = µ, there is still no stationary distribution, but the states are null recurrent.
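The geometric form of the stationary distribution can also be checked by simulation when λ < µ. Below is a small sketch (plain Python; λ = 0.3, µ = 0.5 and the step count are illustrative) that runs the unbounded queue and compares empirical state frequencies with (λ/µ)^i · (1 − λ/µ).

import random
from collections import Counter

def simulate_unbounded_queue(lam, mu, steps, seed=0):
    """Run the unbounded queue and return empirical state frequencies."""
    rng = random.Random(seed)
    x, visits = 0, Counter()
    for _ in range(steps):
        u = rng.random()
        if u < lam:
            x += 1                          # a customer arrives
        elif u < lam + mu and x > 0:
            x -= 1                          # the head of the line is served
        visits[x] += 1                      # otherwise the queue is unchanged
    return {i: c / steps for i, c in visits.items()}

lam, mu = 0.3, 0.5                          # lam < mu, so a stationary distribution exists
freq = simulate_unbounded_queue(lam, mu, steps=200_000)
for i in range(5):
    print(i, round(freq.get(i, 0.0), 3), round((lam / mu) ** i * (1 - lam / mu), 3))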


Random Walks

Introduction

A random walk on an undirected graph is a special type of Markov chain that is often used in analyzing algorithms.

Let G = (V ,E) be a finite, undirected, and connected graph.

Definition

A random walk on G is a Markov chain defined by the sequence of moves of a particle between vertices of G. In this process, the place of the particle at a given time step is the state of the system. If the particle is at vertex i and if i has d(i) incident edges, then the probability that the particle follows the edge (i, j) and moves to a neighbor j is 1/d(i).
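A minimal simulation sketch of such a walk (Python; the adjacency list, the vertex names, and the helper name random_walk are illustrative):

import random

def random_walk(adj, start, steps, seed=0):
    """Simulate a random walk on an undirected graph given as an adjacency list.

    At each step the particle moves to a uniformly chosen neighbor of the
    current vertex, i.e. along each incident edge with probability 1/d(i).
    """
    rng = random.Random(seed)
    v, path = start, [start]
    for _ in range(steps):
        v = rng.choice(adj[v])
        path.append(v)
    return path

adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}   # the triangle on a, b, c
print(random_walk(adj, start="a", steps=10))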


Example

[Figure: an undirected graph G on the vertices a, b, c, each adjacent to the other two, together with the corresponding Markov chain, in which every transition probability is 1/2.]


Criterion for Aperiodicity

Lemma

A random walk on an undirected graph G is aperiodic if and only if G is not bipartite.

Proof

A graph is bipartite if and only if it does not have cycles with an odd number of edges.

In an undirected graph, there is always a path of length 2 from a vertex to itself.

If the graph is bipartite, then the random walk is periodic with period d = 2.

If the graph is not bipartite, what can you say about the cycles? The graph has an odd cycle.

By traversing that cycle we have an odd-length path from any vertex to itself.

It follows that the Markov chain is aperiodic.

Note

A random walk on a finite, undirected, connected, and non-bipartite graph G satisfies the conditions of our Fundamental Theorem, which means the random walk converges to a stationary distribution.
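Since, by the lemma, aperiodicity of the walk amounts to G not being bipartite, the condition can be tested with a standard BFS 2-coloring. A short sketch (Python; the two graphs at the bottom are illustrative):

from collections import deque

def is_bipartite(adj):
    """2-color the graph by BFS; return False as soon as an odd cycle is found."""
    color = {}
    for source in adj:
        if source in color:
            continue
        color[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return False            # two adjacent vertices share a color
    return True

triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(is_bipartite(triangle), is_bipartite(square))          # False True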


Stationary Distribution

Theorem

A random walk on G converges to a stationary distribution π, where

πv = d(v) / (2 · |E|).

Proof

Since Σv∈V d(v) = 2 · |E|, it follows that

Σv∈V πv = Σv∈V d(v) / (2 · |E|) = 1,

and π is a proper distribution over v ∈ V.

Let P be the transition probability matrix of the Markov chain.

Let N(v) represent the neighbors of v .



The relation π = π · P is equivalent to

πv = Σu∈N(v) (d(u) / (2 · |E|)) · (1 / d(u)) = d(v) / (2 · |E|).

Note

Recall that hv,u denotes the expected number of steps to reach u from v .

Corollary

For any vertex u in G,

hu,u = 2 · |E| / d(u).

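To make the theorem concrete, here is a minimal sketch, assuming a small example graph given as an adjacency-list dictionary, that builds the walk's transition matrix and checks that πv = d(v) / (2 · |E|) satisfies π = π · P.

import numpy as np

# Illustrative example graph (non-bipartite: it contains a triangle).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
n = len(adj)
num_edges = sum(len(nbrs) for nbrs in adj.values()) // 2   # |E|

# Transition matrix of the random walk: P[u, w] = 1 / d(u) for each neighbor w of u.
P = np.zeros((n, n))
for u, neighbors in adj.items():
    for w in neighbors:
        P[u, w] = 1.0 / len(neighbors)

# Candidate stationary distribution pi_v = d(v) / (2|E|).
pi = np.array([len(adj[v]) for v in range(n)]) / (2 * num_edges)

print(np.allclose(pi @ P, pi))   # True: pi is stationary
print(pi.sum())                  # 1.0: pi is a proper distribution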


Lemma

If (u, v) ∈ E, then hv,u < 2 · |E|.

Proof

Let N(u) be the set of neighbors of vertex u in G. We compute hu,u in two different ways:

2 · |E| / d(u) = hu,u = (1 / d(u)) · Σw∈N(u) (1 + hw,u).

Therefore, 2 · |E| = Σw∈N(u) (1 + hw,u). Since v ∈ N(u) and every term of the sum is positive, 1 + hv,u ≤ 2 · |E|, and we conclude that hv,u < 2 · |E|.

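Both the corollary and the lemma can be checked empirically. The sketch below estimates hitting times by simulation; the example graph and the trial count are illustrative choices, not part of the notes.

import random

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # illustrative example graph
num_edges = sum(len(nbrs) for nbrs in adj.values()) // 2

def hitting_time(start, target, trials=20000):
    """Average number of steps for a random walk from start until it first reaches target."""
    total = 0
    for _ in range(trials):
        v, steps = start, 0
        while True:
            v = random.choice(adj[v])      # one step of the walk
            steps += 1
            if v == target:
                break
        total += steps
    return total / trials

u = 2
print(hitting_time(u, u), 2 * num_edges / len(adj[u]))   # both close to 2|E|/d(u) = 8/3
print(hitting_time(0, u) < 2 * num_edges)                # True: h_{0,2} < 2|E| since (0, 2) is an edge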
Definition

The cover time of a graph G = (V, E) is the maximum, over all starting vertices v ∈ V, of the expected time for a random walk starting from v to visit all of the vertices in the graph.


Cover Time

Lemma

The cover time of G = (V, E) is bounded above by 4 · |V| · |E|.

Proof

Choose a spanning tree of G, i.e., any subset of the edges that gives an acyclic graph connecting all of the vertices in G.

There exists a cyclic tour on this spanning tree in which every edge is traversed once in each direction (for example, the sequence of vertices visited during a depth-first search).

Let v0, v1, . . . , v2·|V|−2 = v0 be the sequence of vertices in the tour, starting from v0.

Clearly, the expected time to go through the vertices in the tour is an upper bound on the cover time.

Hence, applying the previous lemma to each edge of the tour, the cover time is bounded above by

Σ_{i=0}^{2·|V|−3} hvi,vi+1 < (2 · |V| − 2) · (2 · |E|) < 4 · |V| · |E|.

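As a sanity check, the cover time of a small graph can be estimated by simulation and compared against the 4 · |V| · |E| bound; the example graph and the trial count below are illustrative choices.

import random

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # illustrative example graph
n = len(adj)
num_edges = sum(len(nbrs) for nbrs in adj.values()) // 2

def cover_steps(start):
    """Number of steps one random walk from start needs to visit every vertex."""
    seen, v, steps = {start}, start, 0
    while len(seen) < n:
        v = random.choice(adj[v])
        seen.add(v)
        steps += 1
    return steps

trials = 10000
cover_time_estimate = max(
    sum(cover_steps(v) for _ in range(trials)) / trials   # expected cover time from v
    for v in adj                                          # worst case over starting vertices
)
print(cover_time_estimate, 4 * n * num_edges)   # the estimate stays well below 4|V||E| = 64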


Connectivity Algorithm

Problem Description

Suppose we are given an undirected graph G = (V, E) and two vertices s and t in G.

Let n = |V| and m = |E|. We want to determine if there is a path connecting s and t.

This can be done in linear time using breadth-first search or depth-first search.

These approaches require Ω(n) space.

Randomized Algorithm

Our randomized algorithm works with only O(log n) bits of memory.

Perform a random walk on G for enough steps so that a path from s to t is likely to be found.

We use the cover time result to bound the number of steps that the random walk has to run.

Assume G is non-bipartite.


Function s-t CONNECTIVITY(s, t)
1: Start a random walk from s.
2: if (walk reaches t within 4 · n³ steps) then
3:     return ("There is a path")
4: else
5:     return ("There is no path")
6: end if

Algorithm 3.1: s-t Connectivity Algorithm
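The pseudocode translates almost line for line into Python. Below is a minimal sketch, assuming the graph is given as an adjacency-list dictionary; it stores only the current vertex and a step counter, which is what keeps the memory footprint small (see the notes after the theorem).

import random

def st_connectivity(adj, s, t):
    """Random-walk s-t connectivity test in the spirit of Algorithm 3.1.

    adj maps each vertex to the list of its neighbors (undirected graph).
    The walk runs for at most 4 * n**3 steps; a "path" answer is always
    correct, while a "no path" answer may be wrong when a path exists.
    """
    n = len(adj)
    v = s
    for _ in range(4 * n ** 3):
        if v == t:
            return True                    # "There is a path"
        if not adj[v]:
            return False                   # stuck at an isolated vertex: cannot reach t
        v = random.choice(adj[v])          # step to a uniformly random neighbor
    return v == t                          # otherwise: "There is no path"

# Illustrative example: a triangle 0-1-2 with a pendant vertex 3 and an isolated vertex 4.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2], 4: []}
print(st_connectivity(graph, 0, 3))   # True (with probability at least 1/2; in practice almost always)
print(st_connectivity(graph, 0, 4))   # False, and this answer is always correct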


Theorem

The s-t connectivity algorithm returns the correct answer with probability at least 1/2, and it only errs by returning that there is no path from s to t when there is such a path.

Proof

If there is no path, does the algorithm return an incorrect answer? No! The walk can never reach t in that case, so the answer "There is no path" is always correct.

If there is a path, the algorithm errs if it does not find the path within 4 · n³ steps of the walk.

The expected time to reach t from s is bounded from above by the cover time of their shared component, which is at most

4 · n · m ≤ 4 · n · (n · (n − 1) / 2) < 2 · n³.

Let X be the number of steps the walk takes to reach t from s. By Markov's inequality, the probability that the walk takes more than 4 · n³ steps is at most

P(X > 4 · n³) ≤ (2 · n³) / (4 · n³) = 1/2.


Notes

The algorithm must keep track of its current position, which takes O(log n) bits, as well as the number of steps taken in the random walk, which also takes only O(log n) bits.

As long as there is some mechanism for choosing a random neighbor from each vertex, that is all the memory required.


Parrondo’s Paradox

Main Idea

Given two games, each with a higher probability of losing than winning, it is possible to construct a winning strategy by playing the games alternately.

How does this apply to Markov Chains?

Let A and B be the two games.

Use Markov Chains to show that both A and B are losing games by analyzing absorbing states or using stationary distributions.

Combine games A and B into a new game C, in which each round you play either A or B chosen with a given probability. Game C ends up being a winning game, as the simulation sketch below illustrates.

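These slides do not fix the two games, so the sketch below simulates the classic capital-dependent pair commonly used to demonstrate the paradox; the win probabilities and ε = 0.005 are assumptions of this illustration, not values taken from the notes.

import random

EPS = 0.005   # assumed bias; the classic demonstration uses a small positive epsilon

def play_A(capital):
    """Game A: a single slightly unfair coin (losing on its own). Assumed parameters."""
    return capital + 1 if random.random() < 0.5 - EPS else capital - 1

def play_B(capital):
    """Game B: the coin depends on whether the capital is a multiple of 3 (also losing)."""
    p = 0.10 - EPS if capital % 3 == 0 else 0.75 - EPS
    return capital + 1 if random.random() < p else capital - 1

def play_C(capital):
    """Game C: before each round, pick A or B with probability 1/2 each."""
    return play_A(capital) if random.random() < 0.5 else play_B(capital)

def average_gain(game, rounds=100_000, runs=20):
    """Mean final capital per run, starting from 0 and playing `game` repeatedly."""
    total = 0
    for _ in range(runs):
        capital = 0
        for _ in range(rounds):
            capital = game(capital)
        total += capital
    return total / runs

print(average_gain(play_A))   # typically negative: A is a losing game
print(average_gain(play_B))   # typically negative: B is a losing game
print(average_gain(play_C))   # typically positive: the random alternation wins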

Williamson Markov Chains and Stationary Distributions