
Common Knowledge: The Math

We need a way to talk about “private information”

We will use an information structure

< Ω, π1, π2, μ >

Ω is the (finite) set of “states” of the world

ω ∈ Ω is a possible state of the world

E ⊆ Ω is an event

Examples:

Ω = {(hot, rainy), (hot, sunny), (cold, rainy), (cold, sunny)}

ω = (hot, rainy)

E = {(hot, rainy), (hot, sunny)}

πi “partitions” the set of states for player i: states in the same cell of πi are ones player i cannot tell apart, while states in different cells are ones he can distinguish. We write πi(ω) for the cell of πi containing ω.

E.g., Suppose player 1 is in a basement with a thermostat but no window

π1 = {{(hot, rainy), (hot, sunny)}, {(cold, rainy), (cold, sunny)}}

We write: π1((hot, sunny)) = π1((hot, rainy)) and π1((cold, sunny)) = π1((cold, rainy))

Suppose player 2 is in a high-rise with a window but no thermostat

π2 = {{(hot, rainy), (cold, rainy)}, {(hot, sunny), (cold, sunny)}}

π2((hot, sunny)) = π2((cold, sunny)) and π2((hot, rainy)) = π2((cold, rainy))
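A minimal Python sketch of this information structure (the representation and names are mine, not the slides'): states are tuples, a partition is a list of cells, and πi(ω) is looked up as the unique cell containing ω.

```python
# States of the world: (temperature, weather) pairs.
OMEGA = [("hot", "rainy"), ("hot", "sunny"), ("cold", "rainy"), ("cold", "sunny")]

# Player 1 (basement, thermostat): can distinguish temperature only.
PI1 = [frozenset({("hot", "rainy"), ("hot", "sunny")}),
       frozenset({("cold", "rainy"), ("cold", "sunny")})]

# Player 2 (high-rise, window): can distinguish weather only.
PI2 = [frozenset({("hot", "rainy"), ("cold", "rainy")}),
       frozenset({("hot", "sunny"), ("cold", "sunny")})]

def cell(partition, w):
    """pi_i(w): the unique cell of the partition containing state w."""
    return next(c for c in partition if w in c)

# The identities from the slide:
assert cell(PI1, ("hot", "sunny")) == cell(PI1, ("hot", "rainy"))
assert cell(PI2, ("hot", "sunny")) == cell(PI2, ("cold", "sunny"))
```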

We let μ represent the “common prior” probability distribution over Ω

I.e., μ: Ω → [0, 1] s.t. Σω μ(ω) = 1

We interpret μ(ω) as the probability state ω occurs

E.g., μ((hot, sunny)) = .45, μ((hot, rainy)) = .05, μ((cold, sunny)) = .05, μ((cold, rainy)) = .45

We can likewise write μ(E) or μ(E|F), using Bayes' rule.

E.g., μ((hot, sunny) | hot) = .45/(.45 + .05) = .9
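A short sketch of the same computation (function names are mine): μ is a dict over states, μ(E) sums over an event, and μ(E|F) follows by Bayes' rule.

```python
# The common prior from the slide.
MU = {("hot", "sunny"): 0.45, ("hot", "rainy"): 0.05,
      ("cold", "sunny"): 0.05, ("cold", "rainy"): 0.45}

def prob(E):
    """mu(E): probability of an event E (a set of states)."""
    return sum(MU[w] for w in E)

def cond_prob(E, F):
    """mu(E|F) = mu(E & F) / mu(F); assumes mu(F) > 0."""
    return prob(E & F) / prob(F)

hot = {("hot", "sunny"), ("hot", "rainy")}
print(cond_prob({("hot", "sunny")}, hot))  # .45 / (.45 + .05) = 0.9
```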

Now, we want to investigate how this private information can influence play in a game.

We assume that in every state of the world the players play the same coordination game.

(But they may play different actions in different states!)

     A     B
A   a, a  b, c
B   c, b  d, d

(rows: player 1's action; columns: player 2's action)

a > c, d > b (Interpret: each player wants to match the other's action, i.e., a coordination game.)

What are the strategies in this new game? The payoffs?

si: πi → {A, B}

e.g., s1({(hot, rainy), (hot, sunny)}) = A, s1({(cold, rainy), (cold, sunny)}) = B

s2({(hot, sunny), (cold, sunny)}) = A, s2({(hot, rainy), (cold, rainy)}) = B

But not: s1({(cold, rainy)}) = B, s1({(hot, rainy), (hot, sunny), (cold, sunny)}) = A. These sets are not cells of π1, so player 1 cannot condition his action on them.

Ui: S1 × S2 → ℝ s.t. Ui(s1, s2) = Σω μ(ω) Ui(s1(π1(ω)), s2(π2(ω))), where the inner Ui is the stage-game payoff to the action pair played at ω

How did we get this? Expected utility = the weighted average of the payoff in each state (given the common prior, and the action prescribed in each state).

E.g.

s1({(hot, rainy), (hot, sunny)}) = A, s1({(cold, rainy), (cold, sunny)}) = B

s2({(hot, sunny), (cold, sunny)}) = A, s2({(hot, rainy), (cold, rainy)}) = B

     A     B
A   1, 1  0, 0
B   0, 0  5, 5

U1(s1, s2) = μ((hot, sunny)) U1(s1(π1((hot, sunny))), s2(π2((hot, sunny)))) + …

= μ((hot, sunny)) U1(s1({(hot, rainy), (hot, sunny)}), s2({(hot, sunny), (cold, sunny)})) + …

= μ((hot, sunny)) U1(A, A) + …

= .45×1 + .05×0 + .05×0 + .45×5 = 2.7
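A sketch checking this sum directly (the encoding is mine): each strategy is written as a function of the state that depends only on what its player observes, which is exactly measurability with respect to πi.

```python
MU = {("hot", "sunny"): 0.45, ("hot", "rainy"): 0.05,
      ("cold", "sunny"): 0.05, ("cold", "rainy"): 0.45}

# Stage payoffs for player 1 in the example game: a=1, b=c=0, d=5.
U1_STAGE = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 5}

# s1 depends only on temperature (player 1's information),
# s2 depends only on weather (player 2's information).
s1 = lambda w: "A" if w[0] == "hot" else "B"
s2 = lambda w: "A" if w[1] == "sunny" else "B"

U1 = sum(MU[w] * U1_STAGE[(s1(w), s2(w))] for w in MU)
print(U1)  # .45*1 + .05*0 + .05*0 + .45*5 = 2.7
```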

What is the condition for NE?

Same as before…

(s1, s2) is a NE iff

U1(s1, s2) ≥ U1(s1', s2) for all s1'

U2(s1, s2) ≥ U2(s1, s2') for all s2'

E.g.

s1({(hot, rainy), (hot, sunny)}) = A, s1({(cold, rainy), (cold, sunny)}) = B

s2({(hot, sunny), (cold, sunny)}) = A, s2({(hot, rainy), (cold, rainy)}) = B

     A     B
A   1, 1  0, 0
B   0, 0  5, 5

Is (s1,s2) NE?

U1(s1,s2)=2.7

Let’s consider all possible deviations for player 1

Let s'1({(hot, rainy), (hot, sunny)}) = s'1({(cold, rainy), (cold, sunny)}) = A

U1(s'1, s2) = .45×1 + .05×0 + .05×1 + .45×0 = .5

U1(s'1, s2) < U1(s1, s2)

Let s'1({(hot, rainy), (hot, sunny)}) = s'1({(cold, rainy), (cold, sunny)}) = B

U1(s'1, s2) = 2.5

U1(s'1, s2) < U1(s1, s2)

Let s'1({(hot, rainy), (hot, sunny)}) = B, s'1({(cold, rainy), (cold, sunny)}) = A

U1(s'1, s2) = .3

U1(s'1, s2) < U1(s1, s2)

(Similarly for player 2.) So (s1, s2) is a NE.
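Each player has only two partition cells, so there are just 2² = 4 measurable strategies per player, and the deviation check above can be run by brute force. A sketch (framing and names are mine, not the slides'):

```python
from itertools import product

MU = {("hot", "sunny"): 0.45, ("hot", "rainy"): 0.05,
      ("cold", "sunny"): 0.05, ("cold", "rainy"): 0.45}

# Symmetric stage payoffs from the example: a=1, b=c=0, d=5.
STAGE = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 5}

def u1(s1, s2, mu):
    """Expected utility for player 1; s1 is keyed by temperature, s2 by weather."""
    return sum(mu[w] * STAGE[(s1[w[0]], s2[w[1]])] for w in mu)

s1 = {"hot": "A", "cold": "B"}    # the candidate equilibrium strategies
s2 = {"sunny": "A", "rainy": "B"}

# Enumerate all four measurable strategies for player 1.
for a_hot, a_cold in product("AB", repeat=2):
    dev = {"hot": a_hot, "cold": a_cold}
    print(dev, u1(dev, s2, MU))
# AA -> 0.5, AB -> 2.7 (s1 itself), BA -> 0.3, BB -> 2.5.
# No deviation beats 2.7; the check for player 2 is analogous.
```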

Now assume μ((hot, sunny)) = .35, μ((hot, rainy)) = .15, μ((cold, sunny)) = .15, μ((cold, rainy)) = .35

Is (s1,s2) still NE?

U1(s1, s2) = .35×1 + .15×0 + .15×0 + .35×5 = 2.1

Consider: s'1({(hot, rainy), (hot, sunny)}) = s'1({(cold, rainy), (cold, sunny)}) = B

U1(s'1, s2) = .35×0 + .15×5 + .15×0 + .35×5 = 2.5

U1(s'1, s2) > U1(s1, s2), so (s1, s2) isn't a NE

(In fact, one can argue similarly that no other (s1, s2) that conditions actions on information is a NE!)
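Reusing `u1`, `s1`, and `s2` from the sketch above, the new prior reverses the comparison:

```python
MU2 = {("hot", "sunny"): 0.35, ("hot", "rainy"): 0.15,
       ("cold", "sunny"): 0.15, ("cold", "rainy"): 0.35}

print(u1(s1, s2, MU2))                          # 2.1
print(u1({"hot": "B", "cold": "B"}, s2, MU2))   # 2.5 > 2.1: profitable deviation
```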

So sometimes it is possible to condition one’s action on one’s information, and sometimes it isn’t

Can we characterize, for any coordination game and information structure, when this is possible?

It turns out the answer has to do with “higher-order beliefs.” To see this, we will need to define two concepts: p-belief and common p-belief.

We say i p-believes E at ω if

μ(E | πi(ω)) ≥ p

E.g., consider our original information structure and let E={(hot,sunny),(cold,sunny)}

Player 1 .7-believes E at (hot, sunny):

μ({(hot, sunny), (cold, sunny)} | π1((hot, sunny))) = μ({(hot, sunny), (cold, sunny)} | {(hot, sunny), (hot, rainy)}) = (.45 + 0)/.5 = 9/10 > .7

We say E is common p-belief at ω if:

Both p-believe E

Both p-believe that both p-believe E

Both p-believe that both p-believe that both p-believe E

…
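Both notions can be computed for the running example. A sketch (the fixed-point iteration is my framing; the definitions follow the slides): B_i^p(E) collects the states at which player i p-believes E, and common p-belief intersects the levels "both p-believe E", "both p-believe that both p-believe E", and so on.

```python
OMEGA = [("hot", "rainy"), ("hot", "sunny"), ("cold", "rainy"), ("cold", "sunny")]
MU = {("hot", "sunny"): 0.45, ("hot", "rainy"): 0.05,
      ("cold", "sunny"): 0.05, ("cold", "rainy"): 0.45}
PI1 = [frozenset({("hot", "rainy"), ("hot", "sunny")}),
       frozenset({("cold", "rainy"), ("cold", "sunny")})]
PI2 = [frozenset({("hot", "rainy"), ("cold", "rainy")}),
       frozenset({("hot", "sunny"), ("cold", "sunny")})]

def prob(E):
    return sum(MU[w] for w in E)

def p_belief(partition, E, p):
    """B_i^p(E): states at which a player with this partition p-believes E."""
    out = set()
    for c in partition:
        if prob(c & E) >= p * prob(c):   # mu(E | cell) >= p
            out |= c
    return out

def everybody(E, p):
    """States at which both players p-believe E."""
    return p_belief(PI1, E, p) & p_belief(PI2, E, p)

E = {("hot", "sunny"), ("cold", "sunny")}            # the event "sunny"
print(("hot", "sunny") in p_belief(PI1, E, 0.7))     # True: .45/.5 = .9 >= .7

# Intersect the iterated levels until they repeat (Omega is finite).
level, common, seen = everybody(E, 0.7), set(OMEGA), set()
while frozenset(level) not in seen:
    seen.add(frozenset(level))
    common &= level
    level = everybody(level, 0.7)
print(common)   # {('hot', 'sunny')}: "sunny" is common .7-belief there
```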

Suppose (s1, s2) is a Nash equilibrium such that for i=1,2

si(ω) = A for all ω ∈ E, and si(ω) = B for all ω ∈ F

Then Ω\F is common p-belief at E, and Ω\E is common (1−p)-belief at F

Intuition…

If 1 is playing A when she observes the event E, then she had better be quite sure it isn't F (because 2 plays B on F)

How sure? At least p!

Is this enough? What if 1 p-believes that it isn’t F, but doesn’t think 2 p-believes it isn’t F?

Well then 1 thinks 2 will play B! How confident does 1 have to be, therefore, that 2 p-believes it isn’t F? At least p!
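This intuition pins down p from the stage payoffs. A short derivation (my addition, standard best-response algebra, not from the slides): if q is the probability player 1 assigns to her opponent playing A, then A is a best response exactly when

```latex
\[
qa + (1-q)b \;\ge\; qc + (1-q)d
\;\iff\;
q(a-c) \;\ge\; (1-q)(d-b)
\;\iff\;
q \;\ge\; \frac{d-b}{(a-c)+(d-b)} \;=:\; p .
\]
```

In the running example (a = 1, b = c = 0, d = 5), this gives p = 5/6.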

If Ω\F is common p-belief at E, and Ω\E is common (1−p)-belief at F

Then there exists a Nash Equilibrium (s1, s2) s.t.

si(ω) = A for all ω ∈ E, and si(ω) = B for all ω ∈ F

Note:

-Higher-order beliefs matter iff my optimal choice depends on your choice! (coordination games, hawk-dove games, but not signaling games!)

-Even if the game is state dependent!
