9. Markov Chains and the Theory of Games

- Markov Chains
- Regular Markov Chains
- Absorbing Markov Chains
- Game Theory and Strictly Determined Games
- Games with Mixed Strategies


Page 1: Markov Chains and the Theory of Games

Chapter 9 covers Markov chains, regular Markov chains, absorbing Markov chains, game theory and strictly determined games, and games with mixed strategies.

Page 2: 9.1 Markov Chains

$$T = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn} \end{bmatrix}$$

The columns of T are labeled by the current state (state 1, state 2, ..., state j, ..., state n) and the rows by the next state (state 1, state 2, ..., state i, ..., state n).

Page 3: Transitional Probabilities

In this chapter we will be concerned with a special class of stochastic processes in which the probabilities associated with the outcomes at any stage of the experiment depend only on the outcomes of the preceding stage.

Such a process is called a Markov process, or Markov chain.

The outcome at any stage of the experiment in a Markov process is called the state of the experiment.

In particular, the outcome at the current stage of the experiment is called the current state of the process.

Page 4: Applied Example: Common Stocks

An analyst at Weaver and Kline, a stock brokerage firm, observes that the closing price of the preferred stock of an airline company over a short span of time depends only on its previous closing price.

At the end of each trading day, he makes a note of the stock's performance for that day, recording the closing price as "higher," "unchanged," or "lower" according to whether the stock closes higher, unchanged, or lower than the previous day's closing price.

This sequence of observations may be viewed as a Markov chain.

Applied Example 1, page 484

Page 5: Applied Example: Common Stocks

If on a certain day the stock's closing price is higher than that of the previous day, then the probability that it closes higher, unchanged, or lower on the next trading day is .2, .3, and .5, respectively.

Next, if the stock's closing price is unchanged from the previous day, then the probability that it closes higher, unchanged, or lower on the next trading day is .5, .2, and .3, respectively.

Finally, if the stock's closing price is lower than that of the previous day, then the probability that it closes higher, unchanged, or lower on the next trading day is .4, .4, and .2, respectively.

With the aid of tree diagrams, describe the transition between states and the probabilities associated with these transitions.

Applied Example 1, page 484

Page 6: Applied Example: Common Stocks

Solution

The Markov chain being described has three states, each of which may be displayed by constructing a tree diagram in which the associated probabilities are shown on the appropriate limbs:
✦ If the current state is higher, the tree diagram is:

Higher → Higher (.2), Unchanged (.3), Lower (.5)

Applied Example 1, page 484

Page 7: Applied Example: Common Stocks

Solution

The Markov chain being described has three states, each of which may be displayed by constructing a tree diagram in which the associated probabilities are shown on the appropriate limbs:
✦ If the current state is unchanged, the tree diagram is:

Unchanged → Higher (.5), Unchanged (.2), Lower (.3)

Applied Example 1, page 484

Page 8: Applied Example: Common Stocks

Solution

The Markov chain being described has three states, each of which may be displayed by constructing a tree diagram in which the associated probabilities are shown on the appropriate limbs:
✦ If the current state is lower, the tree diagram is:

Lower → Higher (.4), Unchanged (.4), Lower (.2)

Applied Example 1, page 484

Page 9: Transition Probabilities

The probabilities encountered in the last example are called transition probabilities because they are associated with the transition from one state to the next in the Markov process.

These transition probabilities may be conveniently represented in the form of a matrix.

Suppose for simplicity that we have a Markov chain with three possible outcomes at each stage of the experiment.

Let's refer to these outcomes as state 1, state 2, and state 3.

Then the transition probabilities associated with the transition from state 1 to each of the states 1, 2, and 3 in the next phase of the experiment are precisely the respective conditional probabilities that the outcome is state 1, state 2, and state 3 given that the outcome state 1 has occurred.

Page 10: Transition Probabilities

In short, the desired transition probabilities are, respectively, P(state 1 | state 1), P(state 2 | state 1), and P(state 3 | state 1).

Thus, we can write:

$$a_{11} = P(\text{state 1} \mid \text{state 1})$$
$$a_{21} = P(\text{state 2} \mid \text{state 1})$$
$$a_{31} = P(\text{state 3} \mid \text{state 1})$$

These can be represented with a tree diagram as well:

Current state: State 1 → Next state: State 1 ($a_{11}$), State 2 ($a_{21}$), State 3 ($a_{31}$)

Page 11: Transition Probabilities

Similarly, the transition probabilities associated with the transition from state 2 can be presented as conditional probabilities, as well as in a tree diagram:

$$a_{12} = P(\text{state 1} \mid \text{state 2})$$
$$a_{22} = P(\text{state 2} \mid \text{state 2})$$
$$a_{32} = P(\text{state 3} \mid \text{state 2})$$

Current state: State 2 → Next state: State 1 ($a_{12}$), State 2 ($a_{22}$), State 3 ($a_{32}$)

Page 12: Transition Probabilities

Finally, the transition probabilities associated with the transition from state 3 can be presented as conditional probabilities, as well as in a tree diagram:

$$a_{13} = P(\text{state 1} \mid \text{state 3})$$
$$a_{23} = P(\text{state 2} \mid \text{state 3})$$
$$a_{33} = P(\text{state 3} \mid \text{state 3})$$

Current state: State 3 → Next state: State 1 ($a_{13}$), State 2 ($a_{23}$), State 3 ($a_{33}$)

Page 13: Transition Probabilities

These observations lead to the following matrix representation of the transition probabilities (columns: current state 1, 2, 3; rows: next state 1, 2, 3):

$$T = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

Page 14: Applied Example: Common Stocks

Use a matrix to represent the transition probabilities obtained earlier.

Solution

There are three states at each stage of the Markov chain under consideration.

Letting state 1, state 2, and state 3 denote the states "higher," "unchanged," and "lower," respectively, we find that

$$a_{11} = .2 \quad a_{12} = .5 \quad a_{13} = .4$$
$$a_{21} = .3 \quad a_{22} = .2 \quad a_{23} = .4$$
$$a_{31} = .5 \quad a_{32} = .3 \quad a_{33} = .2$$

Applied Example 2, page 484

Page 15: Applied Example: Common Stocks

Use a matrix to represent the transition probabilities obtained earlier.

Solution

Thus, the required matrix representation is given by

$$T = \begin{bmatrix} .2 & .5 & .4 \\ .3 & .2 & .4 \\ .5 & .3 & .2 \end{bmatrix}$$

Applied Example 2, page 484
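The column-oriented layout translates directly into code. Here is a minimal sketch (not from the slides), assuming Python with NumPy; the `cond` dictionary and variable names are our own illustration.

```python
import numpy as np

# Column j of T holds P(next state | current state j), per the slides' convention.
cond = {
    "higher":    {"higher": .2, "unchanged": .3, "lower": .5},
    "unchanged": {"higher": .5, "unchanged": .2, "lower": .3},
    "lower":     {"higher": .4, "unchanged": .4, "lower": .2},
}
states = ["higher", "unchanged", "lower"]

# Entry (next, current) = P(next | current).
T = np.array([[cond[cur][nxt] for cur in states] for nxt in states])
print(T)
# [[0.2 0.5 0.4]
#  [0.3 0.2 0.4]
#  [0.5 0.3 0.2]]
```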

Page 16: Transition Matrix

A transition matrix associated with a Markov chain with n states is an n × n matrix T with entries $a_{ij}$ ($1 \le i \le n$; $1 \le j \le n$):

$$T = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn} \end{bmatrix}$$

(columns: current state; rows: next state)

The transition matrix has the following properties (a mechanical check is sketched below):
1. $a_{ij} \ge 0$ for all i and j.
2. The sum of the entries in each column of T is 1.
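Not from the slides: the two defining properties can be verified mechanically. A minimal NumPy sketch, with a helper name of our own:

```python
import numpy as np

def is_transition_matrix(T, tol=1e-9):
    """Entries nonnegative, and every column sums to 1
    (columns index the current state, as in the slides)."""
    return bool(np.all(T >= 0) and np.allclose(T.sum(axis=0), 1.0, atol=tol))

T = np.array([[.2, .5, .4],
              [.3, .2, .4],
              [.5, .3, .2]])
print(is_transition_matrix(T))  # True
```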

Page 17: Applied Example: Urban-Suburban Population Flow

Because of the continued successful implementation of an urban renewal program, it is expected that each year 3% of the population currently residing in the city will move to the suburbs and 6% of the population currently residing in the suburbs will move into the city.

At present, 65% of the total population of the metropolitan area lives in the city itself, while the remaining 35% lives in the suburbs.

Assuming that the total population of the metropolitan area remains constant, what will be the distribution of the population one year from now?

Applied Example 4, page 487

Page 18: Applied Example: Urban-Suburban Population Flow

Solution

We can use a tree diagram to see the Markov process under consideration.

Current population → Population one year later:
- City (.65) → City (.97), Suburb (.03)
- Suburb (.35) → City (.06), Suburb (.94)

Thus, the probability that a person selected at random will be a city dweller one year from now is given by

$$(.65)(.97) + (.35)(.06) = .6515$$

Applied Example 4, page 487

Page 19: Applied Example: Urban-Suburban Population Flow

Solution

Using the same tree diagram:

Current population → Population one year later:
- City (.65) → City (.97), Suburb (.03)
- Suburb (.35) → City (.06), Suburb (.94)

The probability that a person selected at random will be a suburb dweller one year from now is given by

$$(.65)(.03) + (.35)(.94) = .3485$$

Applied Example 4, page 487
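These tree-diagram totals are weighted sums over the current distribution. A small Python sketch of the same computation (ours, with assumed variable names):

```python
# Current distribution and one-step transition probabilities from the tree.
p_city, p_suburb = .65, .35
from_city   = {"city": .97, "suburb": .03}
from_suburb = {"city": .06, "suburb": .94}

one_year = {s: p_city * from_city[s] + p_suburb * from_suburb[s]
            for s in ("city", "suburb")}
print(one_year)  # {'city': 0.6515, 'suburb': 0.3485} (up to float rounding)
```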

Page 20: Applied Example: Urban-Suburban Population Flow

Solution

The process under consideration may be viewed as a Markov chain with two possible states at each stage of the experiment:

State 1: "living in the city"
State 2: "living in the suburbs"

The transition matrix associated with this Markov chain is

$$T = \begin{bmatrix} .97 & .06 \\ .03 & .94 \end{bmatrix}$$

(columns: current state 1, 2; rows: next state 1, 2)

Applied Example 4, page 487

Page 21: Applied Example: Urban-Suburban Population Flow

Solution

Next, observe that the initial (current) probability distribution of the population may be summarized in the form of the column vector

$$X_0 = \begin{bmatrix} .65 \\ .35 \end{bmatrix} \quad \text{(initial-state matrix)}$$

Using the probabilities obtained with the tree diagram, we may write the population distribution one year later as

$$X_1 = \begin{bmatrix} .6515 \\ .3485 \end{bmatrix} \quad \text{(distribution after one year)}$$

Applied Example 4, page 487

Page 22: Applied Example: Urban-Suburban Population Flow

Solution

We can now verify that

$$TX_0 = \begin{bmatrix} .97 & .06 \\ .03 & .94 \end{bmatrix}\begin{bmatrix} .65 \\ .35 \end{bmatrix} = \begin{bmatrix} .6515 \\ .3485 \end{bmatrix} = X_1$$

so this problem may be solved using matrix multiplication.

Applied Example 4, page 487

Page 23: Applied Example: Urban-Suburban Population Flow

Now, find the population distribution of the city after two years and three years.

Solution

Let $X_1$, $X_2$, and $X_3$ be the column vectors representing the population distribution of the metropolitan area after one year, two years, and three years, respectively.

To find $X_2$, we take $X_1$ to represent the "initial" probability distribution in this part of the calculation; thus

$$X_2 = TX_1 = \begin{bmatrix} .97 & .06 \\ .03 & .94 \end{bmatrix}\begin{bmatrix} .6515 \\ .3485 \end{bmatrix} = \begin{bmatrix} .6529 \\ .3471 \end{bmatrix}$$

Similarly, for $X_3$ we have

$$X_3 = TX_2 = \begin{bmatrix} .97 & .06 \\ .03 & .94 \end{bmatrix}\begin{bmatrix} .6529 \\ .3471 \end{bmatrix} = \begin{bmatrix} .6541 \\ .3459 \end{bmatrix}$$

Applied Example 4, page 487

Page 24: Applied Example: Urban-Suburban Population Flow

Now, find the population distribution of the city after two years and three years.

Solution

Observe that we have

$$X_1 = TX_0$$
$$X_2 = TX_1 = T^2 X_0$$
$$X_3 = TX_2 = T^3 X_0$$

These results are easily generalized.

Applied Example 4, page 487

Page 25: Distribution Vectors

Let there be a Markov process in which there are n possible states at each stage of the experiment.

Let the probability of the system being in state 1, state 2, ..., state n, initially, be given by $p_1, p_2, \ldots, p_n$, respectively.

Such a distribution may be represented as an n-dimensional distribution vector

$$X_0 = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix}$$

and the probability distribution of the system after m observations is given by

$$X_m = T^m X_0$$
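The formula $X_m = T^m X_0$ is a one-liner in NumPy. A sketch using the urban-suburban numbers (our own code, not the book's):

```python
import numpy as np

T = np.array([[.97, .06],
              [.03, .94]])
X0 = np.array([.65, .35])

for m in range(1, 4):
    Xm = np.linalg.matrix_power(T, m) @ X0   # X_m = T^m X_0
    print(m, Xm.round(4))
# 1 [0.6515 0.3485]
# 2 [0.6529 0.3471]
# 3 [0.6541 0.3459]
```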

Page 26: 9.2 Regular Markov Chains

$$X_1 = TX_0 = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}\begin{bmatrix} .2 \\ .8 \end{bmatrix} = \begin{bmatrix} .3 \\ .7 \end{bmatrix} \quad \text{(after one generation)}$$

$$X_2 = TX_1 = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}\begin{bmatrix} .3 \\ .7 \end{bmatrix} = \begin{bmatrix} .35 \\ .65 \end{bmatrix} \quad \text{(after two generations)}$$

$$X_3 = TX_2 = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}\begin{bmatrix} .35 \\ .65 \end{bmatrix} = \begin{bmatrix} .375 \\ .625 \end{bmatrix} \quad \text{(after three generations)}$$

Page 27: Steady-State Distribution Vectors

In the last section, we derived a formula for computing the likelihood that a physical system will be in any one of the possible states associated with each stage of a Markov process describing the system.

In this section we use this formula to help us investigate the long-term trends of certain Markov processes.

Page 28: Applied Example: Educational Status of Women

A survey conducted by the National Commission on the Educational Status of Women reveals that 70% of the daughters of women who have completed 2 or more years of college have also completed 2 or more years of college, whereas 20% of the daughters of women who have had less than 2 years of college have completed 2 or more years of college.

If this trend continues, determine, in the long run, the percentage of women in the population who will have completed at least 2 years of college, given that currently only 20% of the women have completed at least 2 years of college.

Applied Example 1, page 494

Page 29: Applied Example: Educational Status of Women

Solution

This problem may be viewed as a Markov process with two possible states:

State 1: "completed 2 or more years of college"
State 2: "completed less than 2 years of college"

The transition matrix associated with this Markov chain is given by

$$T = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}$$

and the initial distribution vector is given by

$$X_0 = \begin{bmatrix} .2 \\ .8 \end{bmatrix}$$

Applied Example 1, page 494

Page 30: Applied Example: Educational Status of Women

Solution

To study the long-term trend, let's compute $X_1, X_2, \ldots$

These vectors give the proportion of women with 2 or more years of college and that of women with less than 2 years of college after each generation.

$$X_1 = TX_0 = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}\begin{bmatrix} .2 \\ .8 \end{bmatrix} = \begin{bmatrix} .3 \\ .7 \end{bmatrix} \quad \text{(after one generation)}$$

$$X_2 = TX_1 = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}\begin{bmatrix} .3 \\ .7 \end{bmatrix} = \begin{bmatrix} .35 \\ .65 \end{bmatrix} \quad \text{(after two generations)}$$

$$X_3 = TX_2 = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}\begin{bmatrix} .35 \\ .65 \end{bmatrix} = \begin{bmatrix} .375 \\ .625 \end{bmatrix} \quad \text{(after three generations)}$$

Applied Example 1, page 494

Page 31: Applied Example: Educational Status of Women

Solution

Proceeding further, we obtain the following sequence of vectors:

$$X_4 = \begin{bmatrix} .3875 \\ .6125 \end{bmatrix} \quad X_5 = \begin{bmatrix} .3938 \\ .6062 \end{bmatrix} \quad X_6 = \begin{bmatrix} .3969 \\ .6031 \end{bmatrix} \quad X_7 = \begin{bmatrix} .3984 \\ .6016 \end{bmatrix}$$

$$X_8 = \begin{bmatrix} .3992 \\ .6008 \end{bmatrix} \quad X_9 = \begin{bmatrix} .3996 \\ .6004 \end{bmatrix} \quad X_{10} = \begin{bmatrix} .3998 \\ .6002 \end{bmatrix}$$

From the result of these computations, we see that as m increases, the probability distribution vector $X_m$ approaches the probability distribution vector

$$\begin{bmatrix} 2/5 \\ 3/5 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} .4 \\ .6 \end{bmatrix}$$

Such a vector is called the limiting, or steady-state, distribution vector for the system.

Applied Example 1, page 494
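The convergence can be watched numerically by iterating $X_{m+1} = TX_m$ until successive vectors stop changing. A sketch of our own, with an assumed tolerance:

```python
import numpy as np

T = np.array([[.7, .2],
              [.3, .8]])
X = np.array([.2, .8])

m = 0
while True:
    X_next = T @ X
    m += 1
    if np.allclose(X_next, X, atol=1e-6):  # stop when the vector stabilizes
        break
    X = X_next
print(m, X_next.round(4))  # converges to [0.4 0.6]
```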

Page 32: Applied Example: Educational Status of Women

Solution

We interpret these results as follows:

✦ Initially, 20% of all women have completed 2 or more years of college, whereas 80% have completed less than 2 years of college.

✦ We see that generation after generation, the proportion of the first group increases while the proportion of the second group decreases.

✦ In the long run, the proportions stabilize, so that 40% of all women will have completed 2 or more years of college, whereas 60% will have completed less than 2 years of college.

Applied Example 1, page 494

Page 33: Regular Markov Chain

Continuing with the last example, if we calculate $T, T^2, T^3, \ldots$ we can see that the powers $T^m$ of the transition matrix T tend toward a fixed matrix as m gets larger and larger:

$$T = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix} \quad T^2 = \begin{bmatrix} .55 & .3 \\ .45 & .7 \end{bmatrix} \quad T^3 = \begin{bmatrix} .475 & .35 \\ .525 & .65 \end{bmatrix} \quad T^4 = \begin{bmatrix} .4375 & .375 \\ .5625 & .625 \end{bmatrix}$$

$$T^5 = \begin{bmatrix} .4188 & .3875 \\ .5813 & .6125 \end{bmatrix} \quad T^6 = \begin{bmatrix} .4094 & .3938 \\ .5906 & .6062 \end{bmatrix} \quad T^7 = \begin{bmatrix} .4047 & .3969 \\ .5953 & .6031 \end{bmatrix}$$

$$T^8 = \begin{bmatrix} .4023 & .3984 \\ .5977 & .6016 \end{bmatrix} \quad T^9 = \begin{bmatrix} .4012 & .3992 \\ .5988 & .6008 \end{bmatrix} \quad T^{10} = \begin{bmatrix} .4006 & .3996 \\ .5994 & .6004 \end{bmatrix}$$

We can see that the larger the value of m, the closer the resulting matrix approaches the matrix

$$L = \begin{bmatrix} 2/5 & 2/5 \\ 3/5 & 3/5 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} .40 & .40 \\ .60 & .60 \end{bmatrix}$$

Such a matrix is called the steady-state matrix for the system.

Page 34: Regular Markov Chain

Note that the steady-state matrix from our last example has columns that are all equal and all the entries are positive:

$$L = \begin{bmatrix} 2/5 & 2/5 \\ 3/5 & 3/5 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} .40 & .40 \\ .60 & .60 \end{bmatrix}$$

A Markov chain whose transition matrix T has this property is called a regular Markov chain.

Page 35: Regular Markov Chain

A stochastic matrix T is the transition matrix of a regular Markov chain if the sequence $T, T^2, T^3, \ldots$ approaches a steady-state matrix in which the columns of the limiting matrix are all equal and all the entries are positive.

A stochastic matrix T is regular if and only if some power of T has entries that are all positive.

Page 36: Example

Determine whether the matrix is regular:

$$\begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}$$

Solution

Since all the entries of the matrix are positive, the given matrix is regular.

Example 2, page 497

Page 37: Example

Determine whether the matrix is regular:

$$\begin{bmatrix} .4 & 1 \\ .6 & 0 \end{bmatrix}$$

Solution

One of the entries is equal to zero, so let's compute the second power of the matrix:

$$\begin{bmatrix} .4 & 1 \\ .6 & 0 \end{bmatrix}\begin{bmatrix} .4 & 1 \\ .6 & 0 \end{bmatrix} = \begin{bmatrix} .76 & .4 \\ .24 & .6 \end{bmatrix}$$

Since the second power of the matrix has entries that are all positive, we conclude that the given matrix is in fact regular.

Example 2, page 497
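The "some power of T is entrywise positive" test can be automated. A sketch of our own; the helper name and power cutoff are our choices, not the book's:

```python
import numpy as np

def is_regular(T, max_power=50):
    """Return True if some power of T (up to max_power) has all positive entries."""
    P = T.copy()
    for _ in range(max_power):
        if np.all(P > 0):
            return True
        P = P @ T
    return False

print(is_regular(np.array([[.4, 1.], [.6, 0.]])))  # True  (T^2 is positive)
print(is_regular(np.array([[0., 1.], [1., 0.]])))  # False (powers alternate)
```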

Page 38: Example

Determine whether the matrix is regular:

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

Solution

Denote the given matrix by A. Then

$$A^2 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad A^3 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = A$$

Since $A^3 = A$, it follows that $A^4 = A^2$, and so on.

Therefore, these are the only two matrices that arise for any power of A.

Example 2, page 497

Page 39: Example

Determine whether the matrix is regular (continued):

$$A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \qquad A^2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad A^3 = A$$

Some of the entries of A and $A^2$ are not positive, so any power of A will have entries that are not positive.

Thus, we conclude the matrix is not regular.

Example 2, page 497

Page 40: Finding the Steady-State Distribution Vector

Let T be a regular stochastic matrix. Then the steady-state distribution vector X may be found by solving the vector equation

$$TX = X$$

together with the condition that the sum of the elements of the vector X be equal to 1.

Page 41: Example

Find the steady-state distribution vector for the regular Markov chain whose transition matrix is

$$T = \begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}$$

Example 3, page 498

Page 42: Example

Solution

Let

$$X = \begin{bmatrix} x \\ y \end{bmatrix}$$

be the steady-state distribution vector, where the numbers x and y are to be determined.

The condition $TX = X$ translates into the matrix equation

$$\begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$

or, equivalently, the system of linear equations

$$0.7x + 0.2y = x$$
$$0.3x + 0.8y = y$$

Example 3, page 498

Page 43: Example

Solution

But each equation that makes up the system

$$0.7x + 0.2y - x = 0 \qquad 0.3x + 0.8y - y = 0$$

is equivalent to the single equation

$$0.3x - 0.2y = 0$$

Next, the condition that the sum of the elements of X add up to 1 gives

$$x + y = 1$$

To find the values of x and y that meet both conditions, we solve the system

$$0.3x - 0.2y = 0$$
$$x + y = 1$$

Example 3, page 498

Page 44: Example

Solution

The solution to the system

$$0.3x - 0.2y = 0$$
$$x + y = 1$$

is

$$x = \tfrac{2}{5} \quad \text{and} \quad y = \tfrac{3}{5}$$

So, the required steady-state distribution vector is given by

$$X = \begin{bmatrix} 2/5 \\ 3/5 \end{bmatrix}$$

which agrees with the result obtained earlier.

Example 3, page 498
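Solving $TX = X$ together with the normalization $x + y = 1$ is a small linear-algebra problem. A NumPy sketch (ours, not the book's), stacking $(T - I)X = 0$ with a row of ones:

```python
import numpy as np

T = np.array([[.7, .2],
              [.3, .8]])
n = T.shape[0]

A = np.vstack([T - np.eye(n), np.ones(n)])   # (T - I)X = 0 and sum(X) = 1
b = np.concatenate([np.zeros(n), [1.0]])
X, *_ = np.linalg.lstsq(A, b, rcond=None)    # least squares handles the extra row
print(X.round(4))  # [0.4 0.6]
```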

Page 45: 9.3 Absorbing Markov Chains

$$\begin{bmatrix} 1 & 0 & .79 & .47 \\ 0 & 1 & .21 & .53 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \qquad \begin{bmatrix} I & S(I-R)^{-1} \\ O & O \end{bmatrix}$$

(rows and columns of the numeric matrix are labeled $0, $3, $1, $2)

Page 46: Absorbing Markov Chains

In this section we investigate the long-term trends of a certain class of Markov chains that involve transition matrices that are not regular.

In particular, we study Markov chains in which the transition matrices, known as absorbing matrices, have special properties we will describe.

Page 47: Absorbing Markov Chains

Consider the stochastic matrix associated with a Markov process:

$$\begin{bmatrix} 1 & 0 & .2 & 0 \\ 0 & 1 & .3 & 1 \\ 0 & 0 & .5 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

We see that after one observation, the probability is 1 that an object previously in state 1 will remain in state 1.

Page 48: Absorbing Markov Chains

Similarly, we see that an object previously in state 2 must remain in state 2.

Page 49: Absorbing Markov Chains

Next, we find that an object previously in state 3 has a probability of:

✦ .2 of going to state 1.
✦ .3 of going to state 2.
✦ .5 of remaining in state 3.
✦ 0 (no chance) of going to state 4.

Page 50: Absorbing Markov Chains

Finally, we see that an object previously in state 4 must go to state 2.

Page 51: Absorbing Markov Chains

This stochastic matrix exhibits certain special characteristics:

As we saw, an object in state 1 or state 2 must remain in state 1 or state 2, respectively.

Such states are called absorbing states.

In general, an absorbing state is one from which it is impossible for an object to leave.

Page 52: Absorbing Markov Chains

To identify the absorbing states of a stochastic matrix, we examine each column of the matrix.

If column i has a 1 in the $a_{ii}$ position (on the main diagonal of the matrix) and zeros elsewhere in that column, then and only then is state i an absorbing state.

Page 53: Absorbing Markov Chains

Also note that states 3 and 4, although not absorbing states, have the property that an object in each of these states has a possibility of going to an absorbing state.

Page 54: Absorbing Markov Chains

For example: an object in state 3 has probabilities of .2 and .3 of ending up in states 1 and 2, respectively, which are absorbing states.

Page 55: Absorbing Markov Chains

For example: an object in state 4 must end up in state 2, an absorbing state.

Page 56: Absorbing Stochastic Matrix

An absorbing stochastic matrix has the following properties (a column-by-column test for property 1 is sketched below):
1. There is at least one absorbing state.
2. It is possible to go from each nonabsorbing state to an absorbing state in one or more steps.
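The absorbing states themselves can be identified exactly as Page 52 describes: check each column for a 1 on the diagonal and zeros elsewhere. A sketch (function name ours; 0-based indices):

```python
import numpy as np

def absorbing_states(T):
    """Indices i with a 1 at (i, i) and zeros elsewhere in column i
    (columns = current state, as in the slides)."""
    n = T.shape[0]
    return [i for i in range(n)
            if T[i, i] == 1 and np.isclose(T[:, i].sum(), 1.0)]

A = np.array([[1, 0, .2, 0],
              [0, 1, .3, 1],
              [0, 0, .5, 0],
              [0, 0, 0,  0]])
print(absorbing_states(A))  # [0, 1] -> states 1 and 2
```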

Page 57: Applied Example: Gambler's Ruin

John has decided to risk $2 in the following game of chance.

He places a $1 bet on each repeated play of the game in which the probability of his winning $1 is .4, and he continues to play until he has accumulated a total of $3 or he has lost all of his money.

Write the transition matrix for the related absorbing Markov chain.

Applied Example 2, page 506

Page 58: Applied Example: Gambler's Ruin

Solution

There are four states in this Markov chain, which correspond to John accumulating a total of $0, $1, $2, and $3.

The state of accumulating $0 (losing everything) and the state of accumulating $3 are both absorbing states.

We will list the absorbing states first (it will later become evident why this is convenient):

$$\begin{bmatrix} 1 & 0 & .6 & 0 \\ 0 & 1 & 0 & .4 \\ 0 & 0 & 0 & .6 \\ 0 & 0 & .4 & 0 \end{bmatrix}$$

Rows and columns are ordered $0, $3, $1, $2; the first two columns correspond to the absorbing states, the last two to the nonabsorbing states.

Applied Example 2, page 506

Page 59: Applied Example: Gambler's Ruin

Solution

For the case of the nonabsorbing state "$1":
✦ The probability of going from an accumulated amount of $1 to $0 is $a_{13} = .6$.

Applied Example 2, page 506

Page 60: Applied Example: Gambler's Ruin

Solution

For the case of the nonabsorbing state "$1":
✦ It is not possible to go from an accumulated amount of $1 to either $3 or $1, so $a_{23} = a_{33} = 0$.

Applied Example 2, page 506

Page 61: Applied Example: Gambler's Ruin

Solution

For the case of the nonabsorbing state "$1":
✦ The probability of going from an accumulated amount of $1 to $2 is $a_{43} = .4$.

Applied Example 2, page 506

Page 62: Applied Example: Gambler's Ruin

Solution

For the case of the nonabsorbing state "$2":
✦ It is not possible to go from an accumulated amount of $2 to either $0 or $2, so $a_{14} = a_{44} = 0$.

Applied Example 2, page 506

Page 63: Applied Example: Gambler's Ruin

Solution

For the case of the nonabsorbing state "$2":
✦ The probability of going from an accumulated amount of $2 to $3 is $a_{24} = .4$.

Applied Example 2, page 506

Page 64: Applied Example: Gambler's Ruin

Solution

For the case of the nonabsorbing state "$2":
✦ The probability of going from an accumulated amount of $2 to $1 is $a_{34} = .6$.

Applied Example 2, page 506

Page 65: Finding the Steady-State Matrix for an Absorbing Stochastic Matrix

Suppose an absorbing stochastic matrix A has been partitioned into submatrices

$$A = \begin{bmatrix} I & S \\ O & R \end{bmatrix}$$

Then the steady-state matrix of A is given by

$$\begin{bmatrix} I & S(I-R)^{-1} \\ O & O \end{bmatrix}$$

where the order of the identity matrix appearing in the expression $(I-R)^{-1}$ is chosen to be the same order as R.
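The partitioned formula assembles mechanically. A NumPy sketch of our own, with S of shape k × m (k absorbing states, m nonabsorbing states):

```python
import numpy as np

def absorbing_steady_state(S, R):
    """Build [[I, S(I-R)^-1], [O, O]] from the partition A = [[I, S], [O, R]]."""
    k, m = S.shape
    top = np.hstack([np.eye(k), S @ np.linalg.inv(np.eye(m) - R)])
    return np.vstack([top, np.zeros((m, k + m))])

# Gambler's-ruin blocks from the next slides.
S = np.array([[.6, 0.], [0., .4]])
R = np.array([[0., .6], [.4, 0.]])
print(absorbing_steady_state(S, R).round(2))
# [[1. 0. 0.79 0.47]
#  [0. 1. 0.21 0.53]
#  [0. 0. 0.   0.  ]
#  [0. 0. 0.   0.  ]]
```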

Page 66: Applied Example: Gambler's Ruin (Continued)

If John continues to play the game until he has accumulated a sum of $3 or he has lost all of his money, what is the probability that he will accumulate $3?

Applied Example 3, page 507

Page 67: Applied Example: Gambler's Ruin (Continued)

Solution

The transition matrix associated with the Markov process is

$$A = \begin{bmatrix} 1 & 0 & .6 & 0 \\ 0 & 1 & 0 & .4 \\ 0 & 0 & 0 & .6 \\ 0 & 0 & .4 & 0 \end{bmatrix}$$

We need to find the steady-state matrix of A. In this case,

$$R = \begin{bmatrix} 0 & .6 \\ .4 & 0 \end{bmatrix} \quad \text{and} \quad S = \begin{bmatrix} .6 & 0 \\ 0 & .4 \end{bmatrix}$$

Thus,

$$I - R = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} 0 & .6 \\ .4 & 0 \end{bmatrix} = \begin{bmatrix} 1 & -.6 \\ -.4 & 1 \end{bmatrix}$$

Applied Example 3, page 507

Page 68: Applied Example: Gambler's Ruin (Continued)

Solution

Next, we find the inverse of the matrix I − R:

$$(I - R)^{-1} = \begin{bmatrix} 1.32 & .79 \\ .53 & 1.32 \end{bmatrix}$$

and so

$$S(I - R)^{-1} = \begin{bmatrix} .6 & 0 \\ 0 & .4 \end{bmatrix}\begin{bmatrix} 1.32 & .79 \\ .53 & 1.32 \end{bmatrix} = \begin{bmatrix} .79 & .47 \\ .21 & .53 \end{bmatrix}$$

Applied Example 3, page 507

Page 69: Applied Example: Gambler's Ruin (Continued)

Solution

Therefore, the required steady-state matrix of A is given by

$$\begin{bmatrix} I & S(I-R)^{-1} \\ O & O \end{bmatrix} = \begin{bmatrix} 1 & 0 & .79 & .47 \\ 0 & 1 & .21 & .53 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

(rows and columns ordered $0, $3, $1, $2)

Thus, we see that starting with $2, the probability is .53 that John will leave the game with an accumulated amount of $3 (that is, he wins $1).

Applied Example 3, page 507
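As an independent check (ours, not in the book), a short Monte Carlo simulation of the game reproduces the .53 figure:

```python
import random

def p_reach_target(start=2, target=3, p_win=0.4, trials=100_000):
    """Estimate the probability of reaching `target` dollars before $0."""
    wins = 0
    for _ in range(trials):
        bankroll = start
        while 0 < bankroll < target:
            bankroll += 1 if random.random() < p_win else -1
        wins += (bankroll == target)
    return wins / trials

print(p_reach_target())  # about 0.526, matching the steady-state entry .53
```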

Page 70: 9.4 Game Theory and Strictly Determined Games

$$\begin{bmatrix} -3 & -2 & 4 \\ -2 & 0 & 3 \\ 6 & -1 & 1 \end{bmatrix}$$

Row minima: −3, −2, −1 (the largest of the row minima is −1).
Column maxima: 6, 0, 4 (the smallest of the column maxima is 0).
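Row minima and column maxima are one-liners in NumPy. A sketch of our own, using the matrix above (as reconstructed from the slide):

```python
import numpy as np

A = np.array([[-3, -2, 4],
              [-2,  0, 3],
              [ 6, -1, 1]])

row_minima = A.min(axis=1)   # [-3 -2 -1]; largest (maximin) value: -1
col_maxima = A.max(axis=0)   # [ 6  0  4]; smallest (minimax) value: 0
print(row_minima.max(), col_maxima.min())
# The game is strictly determined only when these two numbers coincide.
```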

Page 71: Game Theory

The theory of games combines matrix methods with the theory of probability to determine the optimal strategies to be employed by two or more opponents involved in a competitive situation, with each opponent seeking to maximize his or her "gains," or, equivalently, to minimize his or her "losses."

For simplicity, we limit our discussion to games with two players.

Page 72: Applied Example: Coin-Matching Game

Richie and Chuck play a coin-matching game in which each player selects a side of a penny without prior knowledge of the other's choice.

Then, upon a predetermined signal, both players disclose their choices simultaneously:
✦ Chuck agrees to pay Richie $3 if both choose heads.
✦ If Richie chooses heads and Chuck chooses tails, then Richie pays Chuck $6.
✦ If Richie chooses tails and Chuck chooses heads, then Chuck pays Richie $2.
✦ Finally, if both choose tails, then Chuck pays Richie $1.

In this game, the objective of each player is to discover a strategy that will ensure that his winnings are maximized (or losses minimized).

Applied Example 1, page 512

Page 73: Applied Example: Coin-Matching Game

This coin-matching game is an example of a zero-sum game:
✦ a game in which the payoff to one party results in an equal loss to the other.

For such games, the sum of the payments made by both players at the end of each play adds up to zero.

Applied Example 1, page 512

Page 74: Applied Example: Coin-Matching Game

We can represent the game's data in the form of a matrix:

$$\begin{bmatrix} 3 & -6 \\ 2 & 1 \end{bmatrix}$$

(rows: Richie's moves, heads and tails; columns: Chuck's moves, heads and tails)

Each row corresponds to one of the two possible moves by Richie (referred to as the row player, R).

Each column corresponds to one of the two possible moves by Chuck (the column player, C).

Applied Example 1, page 512

Page 75: Applied Example: Coin-Matching Game

Each entry in the matrix represents the payoff from C to R:
✦ $a_{11} = 3$ represents a $3 payoff from Chuck to Richie (C to R) when both choose to play heads.

Applied Example 1, page 512

Page 76: Applied Example: Coin-Matching Game

Each entry in the matrix represents the payoff from C to R:
✦ $a_{12} = -6$ represents a $6 payoff from Richie to Chuck (the payoff from C to R is negative) when Richie chooses heads and Chuck chooses tails.

Applied Example 1, page 512

Page 77: Applied Example: Coin-Matching Game

Each entry in the matrix represents the payoff from C to R:
✦ $a_{21} = 2$ represents a $2 payoff from Chuck to Richie when Richie chooses tails and Chuck chooses heads.

Applied Example 1, page 512

Page 78

Applied Example: Coin-Matching Game

We can represent the game's data in the form of a matrix:
Each entry in the matrix represents the payoff from C to R:
✦ a22 = 1 represents a $1 payoff from Chuck to Richie when both choose to play tails.

                       Chuck (C's moves)
                       Heads   Tails
    Richie      Heads [   3     -6  ]
    (R's moves) Tails [   2      1  ]

Page 79

More Generally: The Payoff Matrix

Consider a two-person game with players R and C. Suppose R has m possible moves, R1, R2, …, Rm, and C has n possible moves, C1, C2, …, Cn.

Page 80

More Generally: The Payoff Matrix

We can represent the game in terms of an m × n matrix in which each row represents one of the m possible moves of R and each column represents one of the n possible moves of C:

                           C's moves
                   C1    C2   · · ·   Cj   · · ·   Cn
            R1  [ a11   a12   · · ·  a1j   · · ·  a1n ]
            R2  [ a21   a22   · · ·  a2j   · · ·  a2n ]
    R's      ⋮  [  ⋮     ⋮           ⋮            ⋮  ]
    moves   Ri  [ ai1   ai2   · · ·  aij   · · ·  ain ]
             ⋮  [  ⋮     ⋮           ⋮            ⋮  ]
            Rm  [ am1   am2   · · ·  amj   · · ·  amn ]

Page 81

More Generally: The Payoff Matrix

                           C's moves
                   C1    C2   · · ·   Cj   · · ·   Cn
            R1  [ a11   a12   · · ·  a1j   · · ·  a1n ]
            R2  [ a21   a22   · · ·  a2j   · · ·  a2n ]
    R's      ⋮  [  ⋮     ⋮           ⋮            ⋮  ]
    moves   Ri  [ ai1   ai2   · · ·  aij   · · ·  ain ]
             ⋮  [  ⋮     ⋮           ⋮            ⋮  ]
            Rm  [ am1   am2   · · ·  amj   · · ·  amn ]

The entry aij in the ith row and jth column of the payoff matrix represents the payoff from C to R when R chooses move Ri and C chooses move Cj.
Note that a negative value for entry aij means that the payoff will be from R to C.
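To make the sign convention concrete, here is a minimal sketch (ours, not the text's) storing the coin-matching payoff matrix as a NumPy array:

    import numpy as np

    # Coin-matching payoff matrix: rows are R's moves (heads, tails),
    # columns are C's moves (heads, tails). Positive entries are payoffs
    # from C to R; negative entries are payoffs from R to C.
    A = np.array([[3, -6],
                  [2,  1]])

    # The entry a12 of the text is A[0, 1] with zero-based indexing.
    print(A[0, 1])   # -6: Richie pays Chuck $6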

Page 82

Optimal Strategies

Let's return to the payoff matrix of the coin-matching game:

                       Chuck (C's moves)
                       Heads   Tails
    Richie      Heads [   3     -6  ]
    (R's moves) Tails [   2      1  ]

Let's consider first R's point of view:
✦ The entries in the matrix represent payoffs to him, so he might at first consider choosing the row containing the largest entry (R1) as a possible move. By choosing R1, R would certainly gain the largest possible payoff of $3 if C chose C1. However, if C chose C2 instead, then R would lose $6!

Page 83

Optimal Strategies

Let's return to the payoff matrix of the coin-matching game:

                       Chuck (C's moves)
                       Heads   Tails
    Richie      Heads [   3     -6  ]
    (R's moves) Tails [   2      1  ]

Let's consider first R's point of view:
✦ A more prudent approach would be to assume that no matter what row he chose, C will counter with the move (column) that results in the smallest payoff to him.
✦ To maximize his payoff under these circumstances, R would then select from among the moves (rows) the one in which the smallest payoff is as large as possible.
✦ This strategy is called the maximin strategy.

Page 84

Maximin Strategy

1. For each row of the payoff matrix, find the smallest entry in that row.
2. Choose the row for which the entry found in step 1 is as large as possible. This row constitutes R's "best" move.
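These two steps translate directly into code. A minimal sketch (our own helper, assuming the payoff matrix is a NumPy array as in the earlier snippet):

    def maximin_row(A):
        """Index (zero-based) of R's maximin row."""
        row_minima = A.min(axis=1)        # step 1: smallest entry in each row
        return int(row_minima.argmax())   # step 2: row whose minimum is largest

For the coin-matching matrix, maximin_row(A) returns 1, i.e., row 2 (tails), matching the analysis on the next slide.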

Page 85

Optimal Strategies

Let's return to the payoff matrix of the coin-matching game:

Let's apply the maximin strategy from R's point of view:
✦ We can write to the right of the matrix the smallest payoff R can get with each move.
✦ Of these, we can now see that the largest possible guaranteed payoff for R is with move R2.
✦ Thus, according to the maximin strategy, R's "best" move is R2, which means he should choose tails.

                       Heads   Tails      Row minima
                Heads [   3     -6  ]        -6
                Tails [   2      1  ]         1   ← largest of the row minima

Page 86

Optimal Strategies

Let's return to the payoff matrix of the coin-matching game:

                       Heads   Tails
                Heads [   3     -6  ]
                Tails [   2      1  ]

Next, let's consider the game from C's point of view:
✦ His objective is to minimize the payoff to R.
✦ This is accomplished by choosing the column whose largest payoff is as small as possible.
✦ This strategy for C is called the minimax strategy.

Page 87

Minimax Strategy

1. For each column of the payoff matrix, find the largest entry in that column.
2. Choose the column for which the entry found in step 1 is as small as possible. This column constitutes C's "best" move.
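The column player's rule is the mirror image of the row player's; a matching sketch (again our own helper, under the same NumPy assumption):

    def minimax_column(A):
        """Index (zero-based) of C's minimax column."""
        col_maxima = A.max(axis=0)        # step 1: largest entry in each column
        return int(col_maxima.argmin())   # step 2: column whose maximum is smallest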

Page 88

Optimal Strategies

Let's return to the payoff matrix of the coin-matching game:

Let's apply the minimax strategy from C's point of view:
✦ We can write at the bottom of the matrix the largest payoff C may have to pay with each move.
✦ We can now see that the smallest possible guaranteed payoff from C is with move C2.
✦ Thus, according to the minimax strategy, C's "best" move is C2, which means he should choose tails.

                       Heads   Tails
                Heads [   3     -6  ]
                Tails [   2      1  ]
       Column maxima:     3      1
                                 ↑ smaller of the column maxima

Page 89

Example

For the game with the following payoff matrix, determine the maximin and minimax strategies for each player.

                                       Row minima
             [  3   -2    4 ]             -2
             [ -2    0   -3 ]             -3
             [  6   -1    1 ]             -1   ← largest of the row minima
     Column maxima:  6    0    4
                          ↑ smallest of the column maxima

Solution
The maximin strategy for the row player is to play row 3.
The minimax strategy for the column player is to play column 2.
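Reusing the helper sketches from the earlier slides (with indices shifted to the text's 1-based numbering) reproduces this solution:

    A = np.array([[ 3, -2,  4],
                  [-2,  0, -3],
                  [ 6, -1,  1]])
    print(maximin_row(A) + 1)     # 3: the row player's maximin move
    print(minimax_column(A) + 1)  # 2: the column player's minimax move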

Page 90

Playing the Game Repeatedly

If both players are always rational and assume the other is also always rational, it would seem that they would both end up following the same strategy again and again, yielding exactly the same outcome every time.
But what if one player realizes that his opponent is employing the maximin (or minimax) strategy?
Perhaps this player can use this knowledge to his advantage and choose a different strategy.

Page 91

Example

Let's consider our last example to see how this may work, starting with the row player.
Suppose the row player realizes that the column player consistently follows the minimax strategy, always choosing column 2.

                                       Row minima
             [  3   -2    4 ]             -2
             [ -2    0   -3 ]             -3
             [  6   -1    1 ]             -1   ← largest of the row minima
     Column maxima:  6    0    4
                          ↑ smallest of the column maxima

Example 3, page 515

Page 92

Example

The row player can now change his strategy, playing the row that yields the highest payoff in column 2 (given that the column player always plays column 2).
Thus, the row player will choose to play row 2 instead of row 3, which will reduce his losses from 1 to 0.

             [  3   -2    4 ]
             [ -2    0   -3 ]   ← new strategy (row 2)
             [  6   -1    1 ]
     Column maxima:  6    0    4
                          ↑ smallest of the column maxima

Page 93

Example

Note that this change in strategy works only if at least one of the other payoffs in column 2 is preferable to the payoff of row 3 played with the maximin strategy.

             [  3   -2    4 ]
             [ -2    0   -3 ]   ← new strategy (row 2)
             [  6   -1    1 ]
     Column maxima:  6    0    4
                          ↑ smallest of the column maxima

Page 94

Example

For example, what if a22 were equal to -3 instead of 0?
Then the row player's optimal strategy would be to play row 3, as with the maximin strategy, even though he knows that the column player is always going to play column 2.

             [  3   -2    4 ]
             [ -2   -3   -3 ]
             [  6   -1    1 ]   ← optimal strategy (row 3)

Page 95

Optimal Strategy

The optimal strategy in a game is the strategy that is most profitable to a particular player.

Page 96

Strictly Determined Game

A strictly determined game is characterized by the following properties:
1. There is an entry in the payoff matrix that is simultaneously the smallest entry in its row and the largest entry in its column. This entry is called the saddle point for the game.
2. The optimal strategy for the row player is precisely the maximin strategy and is the row containing the saddle point. The optimal strategy for the column player is the minimax strategy and is the column containing the saddle point.
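Property 1 is easy to test mechanically. A small sketch (our own, under the same NumPy assumption as the earlier snippets) that searches for a saddle point:

    def saddle_point(A):
        """Return (i, j) for an entry that is smallest in its row and
        largest in its column, or None if the game has no saddle point."""
        m, n = A.shape
        for i in range(m):
            for j in range(n):
                if A[i, j] == A[i, :].min() and A[i, j] == A[:, j].max():
                    return (i, j)
        return None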

Page 97

Example

For the game with the following payoff matrix, determine the optimal strategy for each player.

                                       Row minima
             [  3    4   -4 ]             -4
             [  2    1   -3 ]             -3   ← largest of the row minima
     Column maxima:  3    4   -3
                              ↑ smallest of the column maxima

Solution
The maximin strategy for the row player is to play row 2.
The minimax strategy for the column player is to play column 3.

Example 4, page 515

Page 98

Example

For the game with the following payoff matrix, determine the optimal strategy for each player.

             [  3    4   -4 ]
             [  2    1   -3 ]

Solution
In repeated plays, R discovers that C consistently chooses to play column 3.

Page 99

Example

For the game with the following payoff matrix, determine the optimal strategy for each player.

             [  3    4   -4 ]
             [  2    1   -3 ]   ← optimal strategy (row 2)

Solution
Knowing that C always chooses column 3, R's best choice is to play the row with the highest payoff in column 3.
Thus, R's optimal strategy is to play row 2.
Note that row 2 was also the outcome of the maximin strategy.

Page 100

Example

For the game with the following payoff matrix, determine the optimal strategy for each player.

             [  3    4   -4 ]
             [  2    1   -3 ]

Solution
Let's consider now player C.
In repeated plays, C discovers that R consistently chooses to play row 2.

Page 101

Example

For the game with the following payoff matrix, determine the optimal strategy for each player.

             [  3    4   -4 ]
             [  2    1   -3 ]
                          ↑ optimal strategy (column 3)

Solution
Knowing that R always chooses row 2, C's best choice is to play the column with the lowest payoff in row 2.
Thus, C's optimal strategy is to play column 3.
Note that column 3 was also the outcome of the minimax strategy.

Page 102

Example

For the game with the following payoff matrix, determine the optimal strategy for each player.

             [  3    4   -4 ]
             [  2    1   -3 ]   ← optimal strategy (row 2)
                          ↑ optimal strategy (column 3)

Solution
Thus, we conclude that whether the game is played only once or repeatedly, the outcome is always the same: row 2 and column 3.
This is because this is a strictly determined game, with entry a23 as the saddle point for the game.
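Applying the saddle_point sketch from the earlier slide to this matrix confirms the conclusion:

    A = np.array([[3, 4, -4],
                  [2, 1, -3]])
    print(saddle_point(A))   # (1, 2): zero-based indices for row 2, column 3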

Page 103

Saddle Point

We just saw that when a game has a saddle point, the optimal strategies for the players are to choose, respectively, the row and column that contain the saddle point.
Furthermore, in repeated plays of the game, each player's optimal strategy consists of making the same move over and over again, since the discovery of the opponent's optimal strategy cannot be used to advantage.
Such strategies are called pure strategies.

Page 104

Saddle Point

The saddle point of a strictly determined game is also referred to as the value of the game.
If the value of a strictly determined game is positive, then the game favors the row player.
If the value is negative, it favors the column player.
If the value of the game is zero, the game is called a fair game.

Page 105

9.5 Games with Mixed Strategies

Page 106

Mixed Strategies

Let's consider a slightly modified version of the coin-matching game played by Richie and Chuck:

                       Chuck (C's moves)
                       Heads   Tails
    Richie      Heads [   3     -2  ]
    (R's moves) Tails [  -2      1  ]

Note that it contains no saddle point and is therefore not strictly determined.
In this section, we shall look at games like this one that are not strictly determined, and at the strategies associated with such games.

Page 107

Mixed Strategies

Let's consider a slightly modified version of the coin-matching game played by Richie and Chuck:

                       Chuck (C's moves)
                       Heads   Tails
    Richie      Heads [   3     -2  ]
    (R's moves) Tails [  -2      1  ]

What strategy should Richie follow?
It would seem that Richie should select row 1, since he stands to win $3 instead of the $1 he would get by playing row 2, at a risk, in either case, of losing $2.

Page 108

Mixed Strategies

Let's consider a slightly modified version of the coin-matching game played by Richie and Chuck:

                       Chuck (C's moves)
                       Heads   Tails
    Richie      Heads [   3     -2  ]
    (R's moves) Tails [  -2      1  ]

However, when Chuck realizes that Richie is consistently playing row 1, he will counter by playing column 2.
This would cause Richie to lose $2 on each play.
Thus, Richie considers a strategy of choosing row 1 sometimes and row 2 at other times.

Page 109

Mixed Strategies

Let's consider a slightly modified version of the coin-matching game played by Richie and Chuck:

                       Chuck (C's moves)
                       Heads   Tails
    Richie      Heads [   3     -2  ]
    (R's moves) Tails [  -2      1  ]

A similar analysis will lead Chuck to follow a strategy of choosing column 1 sometimes and column 2 at other times.
Such strategies are called mixed strategies.

Page 110

Mixed Strategies

There are many ways in which a player may choose moves in a game with mixed strategies.
Richie, for example, might choose to play heads half the time and tails half the time, making the choice randomly by, say, flipping a coin.
He could also determine beforehand the proportion of the time row 1 should be chosen by means, say, of a spinning wheel:

[Spinning-wheel figure: Heads / Tails]

Page 111

Mixed Strategies

Mathematically, we could describe a mixed strategy as a row vector whose dimension coincides with the number of possible moves the player has.
For example, if Richie had decided on a strategy in which he chose to play row 1 half the time and row 2 half the time, then this strategy is represented by the row vector

    [.5  .5]

Similarly, the mixed strategy for the column player can be represented by a column vector.
Let's say that Chuck has decided that 20% of the time he will choose column 1 and 80% of the time he will choose column 2. This strategy can be represented by the column vector

    [ .2 ]
    [ .8 ]
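In code, the same strategy vectors can be written as a 1 × 2 row array and a 2 × 1 column array (a sketch under the same NumPy assumption as before):

    P = np.array([[0.5, 0.5]])     # row player's mixed strategy
    Q = np.array([[0.2], [0.8]])   # column player's mixed strategy

Keeping the row and column shapes explicit lets the matrix product P @ A @ Q on the following slides be written exactly as in the text.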

Page 112

Expected Value of a Game

In order to compare the merits of a player's different mixed strategies in a game, we can calculate the expected value of a game.
The expected value of a game measures the average payoff to the row player when both players adopt a particular set of mixed strategies.
We can explain this notion using a 2 × 2 matrix game whose payoff matrix has the general form

        A = [ a11  a12 ]
            [ a21  a22 ]

Page 113

Expected Value of a Game

Suppose that in repeated plays of the game, the row player R adopts the mixed strategy of selecting row 1 with probability p1 and row 2 with probability p2.
The column player, in turn, adopts the mixed strategy of selecting column 1 with probability q1 and column 2 with probability q2.

        P = [p1  p2]        Q = [ q1 ]
                                [ q2 ]

Page 114

Expected Value of a Game

Now, in each play of the game, there are four possible outcomes, which may be represented by the ordered pairs
    (row 1, column 1)
    (row 1, column 2)
    (row 2, column 1)
    (row 2, column 2)
where the first entry of each ordered pair represents R's selection and the second represents C's selection.

Page 115

Expected Value of a Game

The choice of moves made by one player is made without knowing the other's choice, making these independent events.
Therefore, the probability of R choosing row 1 and C choosing column 1 is given by

    P(row 1, column 1) = P(row 1) · P(column 1) = p1q1

Page 116

Expected Value of a Game

Similarly, the probabilities of all four possible outcomes, together with the payoffs associated with each outcome, may be summarized as follows:

    Outcome               Probability    Payoff
    (row 1, column 1)        p1q1          a11
    (row 1, column 2)        p1q2          a12
    (row 2, column 1)        p2q1          a21
    (row 2, column 2)        p2q2          a22

Thus, the expected payoff of the game is

    E = p1q1a11 + p1q2a12 + p2q1a21 + p2q2a22

which can be expressed in terms of the matrices P, A, and Q:

    E = PAQ
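A quick numerical check (our own, with arbitrary strategy numbers) that the outcome-by-outcome sum agrees with the matrix product PAQ:

    A = np.array([[3, -2],
                  [-2, 1]])
    p1, p2, q1, q2 = 0.5, 0.5, 0.2, 0.8
    E_sum = p1*q1*A[0, 0] + p1*q2*A[0, 1] + p2*q1*A[1, 0] + p2*q2*A[1, 1]
    E_mat = (np.array([[p1, p2]]) @ A @ np.array([[q1], [q2]])).item()
    print(E_sum, E_mat)   # both about -0.3, equal up to floating-point rounding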

Page 117

Expected Value of a Game

Let

    P = [p1  p2  · · ·  pm]    and    Q = [ q1 ]
                                          [ q2 ]
                                          [ ⋮  ]
                                          [ qn ]

be the vectors representing the mixed strategies for the row player R and the column player C, respectively, in a game with an m × n payoff matrix

        A = [ a11  a12  · · ·  a1n ]
            [ a21  a22  · · ·  a2n ]
            [  ⋮    ⋮           ⋮  ]
            [ am1  am2  · · ·  amn ]

Page 118

Expected Value of a Game

Then the expected value of the game is given by

                                   [ a11  a12  · · ·  a1n ] [ q1 ]
    E = PAQ = [p1  p2  · · ·  pm]  [ a21  a22  · · ·  a2n ] [ q2 ]
                                   [  ⋮    ⋮           ⋮  ] [ ⋮  ]
                                   [ am1  am2  · · ·  amn ] [ qn ]
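Wrapped as a function (a one-line sketch under the same assumptions as the earlier snippets), this works for any m × n game:

    def expected_value(P, A, Q):
        """Expected payoff E = PAQ for a 1 x m row strategy P,
        an m x n payoff matrix A, and an n x 1 column strategy Q."""
        return float(P @ A @ Q)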

Page 119

Applied Example: Coin-Matching Game

Consider Richie and Chuck playing a coin-matching game with a payoff matrix given by

        A = [  3  -2 ]
            [ -2   1 ]

Compute the expected payoff of the game if Richie adopts the mixed strategy P and Chuck adopts the mixed strategy Q, where

    a. P = [.5  .5]   and   Q = [ .5 ]
                                [ .5 ]

    b. P = [.8  .2]   and   Q = [ .1 ]
                                [ .9 ]

Applied Example 1, page 524

Page 120

Applied Example: Coin-Matching Game

Solution
a. We compute

    E = PAQ = [.5  .5] [  3  -2 ] [ .5 ]
                       [ -2   1 ] [ .5 ]

            = [.5  -.5] [ .5 ]
                        [ .5 ]

            = 0

Thus, in repeated plays of the game, it may be expected in the long term that the payoff to each player is 0.

Page 121

Applied Example: Coin-Matching Game

Solution
b. We compute

    E = PAQ = [.8  .2] [  3  -2 ] [ .1 ]
                       [ -2   1 ] [ .9 ]

            = [2  -1.4] [ .1 ]
                        [ .9 ]

            = -1.06

Thus, in the long run Richie may be expected to lose $1.06 on average in each play.
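The same two computations via the expected_value sketch defined earlier:

    A = np.array([[3, -2],
                  [-2, 1]])
    print(expected_value(np.array([[.5, .5]]), A, np.array([[.5], [.5]])))  # about 0
    print(expected_value(np.array([[.8, .2]]), A, np.array([[.1], [.9]])))  # about -1.06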

Page 122

End of Chapter