Topics in Dynamic Epistemic Logic
Alexandru Baltag
(ILLC, University of Amsterdam)
Go to
website:
http://alexandru.tiddlyspot.com
then click on Teaching, then click on “Topics in DEL”
1
Course Administration
Instructor: ALEXANDRU BALTAG
Email: [email protected]
Teaching Assistants: Giovanni Cina and Zoe Christoff
Time and Place
Tuesdays 15:00-17:00 (room D1.114);
Thursdays 15:00-17:00 (room D1.114) and 17:00-19:00 (room D1.115).
2
Course Materials
There is no main textbook. The principal material consists of the
lecture slides. In addition, I will refer to a number of textbooks and
other materials, among which the most important are:
• A. Baltag, H. P. van Ditmarsch and L.S. Moss, Epistemic logic
and information update, in Handbook of Philosophy of
Information, part of Handbook of Philosophy of Science, vol. 8, pp.
361-455, Elsevier, 2008.
• J. van Benthem, Logical Dynamics of Information and
Interaction, Cambridge University Press, 2011.
• H. P. van Ditmarsch, W. van der Hoek and B. Kooi, Dynamic
Epistemic Logic, Springer, 2007.
• R. Fagin, J.Y. Halpern, Y. Moses and M.Y. Vardi, Reasoning
about Knowledge, MIT Press, Cambridge MA 1995.
3
Assessment and Course Plan
Assessment: Homework exercises (worth in total 20% of the final
grade), plus a final paper or project (worth 80% of the final grade) on
one of the topics investigated in the course.
The exercises are individual (i.e. no collaboration), while the final
papers can be jointly written by teams.
The homework exercises will be mostly mathematical, while the final
paper/project should be mostly conceptual.
COURSE PLAN: See website. It’s tentative, and it will be
periodically updated.
4
1.1. MAIN THEMES: Examples and Jokes
Examples of Multi-agent Systems:
1. Computation: a network of communicating computers; the
Internet
2. Games: players in a game, e.g. chess or poker
3. AI: a team of robots exploring their environment and interacting
with each other
4. Cryptographic Communication: some communicating agents
(“principals”) following a cryptographic protocol to communicate
in a private and secret way
5
5. Economics: economic agents engaged in transactions in a market
6. Society: people engaged in social activities
7. Politics: “political games”, diplomacy, war.
8. Science: a community of scientists, engaged in creating theories
about Nature, making observations and performing experiments to
test their theories.
6
“Dynamic” and “informational” systems
Such multi-agent systems are dynamic: agents “do” some actions,
changing the system by interacting with each other. Examples of actions:
moves in a game, communicating (sending, receiving or intercepting)
messages, buying/selling, etc.
On the other hand, these systems are also informational systems:
agents acquire, store, process and exchange information about each
other and the environment. This information may be truthful, and then
it’s called knowledge. Or the information may be only plausible (or
probable), well-justified, but still possibly false; then it’s called
(justified) belief.
7
Nested Knowledge in Chinese Philosophy
Chuangtse and Hueitse had strolled onto a bridge over the Hao,
when the former observed,
“See how the small fish are darting about! That is the
happiness of the fish”.
“You are not fish yourself”, said Hueitse, “so how can you
know the happiness of the fish?”
“You are not me”, retorted Chuangtse, “so how can you know
that I do not know?”
Chuangtse, c. 300 B.C.
8
Self-Nesting: (Lack of) Introspection
As we know,
There are known knowns.
There are things we know we know.
We also know
There are known unknowns.
That is to say, we know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don’t know
We don’t know.
Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing
9
... And Belief?
“Man is made by his belief. As he believes, so he is.”
(Bhagavad Gita, part of the epic poem Mahabharata)
“Myths which are believed in tend to become true.”
(George Orwell)
“To succeed, we must first believe that we can.”
(Michael Korda)
“By believing passionately in some thing that does not yet exist,
we create it.”
(Nikos Kazantzakis)
“The thing always happens that you really believe in; and the
belief in a thing makes it happen.” (Frank Lloyd Wright)
10
Oh, really?! But this is a Lie!
So what?
Everyone lies online. In fact, readers expect you to lie. If you
don’t, they’ll think you make less than you actually do. So the
only way to tell the truth is to lie.
(Brad Pitt's thoughts on lying about how much money you make on
your online dating profile; August 2009 interview with Wired magazine)
11
Well, but all this believing, lying and cheating can only end
up in extreme skepticism:
“I don’t even believe the truth anymore.”
(J. Edgar Hoover, the first Director of the FBI)
Though even this was already anticipated centuries ago by the most
famous pirate of the Caribbean:
12
Mullroy: What’s your purpose in Port Royal, Mr. Smith?
Murtogg: Yeah, and no lies.
Jack Sparrow: Well, then, I confess, it is my intention to
commandeer one of these ships, pick up a crew in Tortuga, raid,
pillage, plunder and otherwise pilfer my weasely black guts out.
Murtogg: I said no lies.
Mullroy: I think he's telling the truth.
Murtogg: Don’t be stupid: if he were telling the truth, he
wouldn’t have told it to us.
Jack Sparrow: Unless, of course, he knew you wouldn’t believe
the truth even if he told it to you.
13
Back to the real world!
The only escape from these infinite loops seems to be the solid ground
of the real world.
“Reality is that which, when you stop believing in it, doesn’t go
away.”
(Philip K. Dick)
But how to get back to reality, from the midst of our mistaken
beliefs??
14
Answer: Belief Revision!
Dare to confront your mistakes! Learn to give up!
“It is not bigotry to be certain we are right; but it is bigotry to
be unable to imagine how we might be wrong.”
(G. K. Chesterton)
15
Belief revision is action!
True belief revision is “dynamic”: a sustained, self-correcting,
truth-tracking action. True knowledge can only be recovered
by effort.
So, finally, we get to what we could call the “Motto” of
Dynamic-Epistemic Logic:
“The wise sees action and knowledge as one. They see truly.”
(Bhagavad Gita, once again)
16
Knowledge and Uncertainty
Uncertainty is a corollary of imperfect knowledge (or “imperfect
information”):
A game of imperfect information is one in which some moves are
hidden, so that the players don’t know all that was going on: they
only have a partial view of the situation.
Example: poker (in contrast to chess).
A player may be uncertain about the real situation of the game at a
given time: e.g. they simply cannot distinguish between a situation in
which another player has a winning hand and a situation in which this
is not the case. For all our player knows, these situations are both
“possible”.
17
Evolving Knowledge
The knowledge a player has may change over time, due to his own or other
players' actions.
For instance, he can make some move that allows him to learn some of the
other player's cards. As a general rule, players try to minimize
their uncertainty and increase their knowledge.
18
Wrong Beliefs: Cheating
In their drive for more knowledge and less uncertainty, players
may be induced to acquire a false “certainty”: they will “know”
things that are not true.
Example: bluffing (in poker) may induce your opponent to believe you
have a winning hand, when in fact you don’t.
Notice that such a wrong belief, once it becomes “certainty”, might
look just like knowledge (to the believer):
your opponent may really think he “knows” you have a winning hand.
19
Distributed Knowledge
Suppose Alice would like to know with whom Bob went out for
dinner. But she only knows he went out with one of his two friends,
Charles or Eve (but not both: they can’t stand each other). On the
other hand, suppose in fact Bob went out with Eve; so Charles must
obviously know that Bob didn’t go out with him.
If only Alice and Charles could put their knowledge together, they
would find out that Bob went out with Eve. So Alice gives a phone call
to Charles, they chat and find out.
Before the chat, neither of them knew that Bob had gone out with Eve,
but this fact was distributed knowledge between the two of them:
putting their knowledge together was enough to ensure the knowledge
of this new fact.
20
“Everybody knows...”
Suppose that, in fact, everybody knows the road rules in France. For
instance, everybody knows that a red light means “stop” and a green
light means “go”. And suppose everybody respects the rules that (s)he
knows.
Question: Is this enough for you to feel safe, as a driver?
Answer: NO.
Why? Think about it!
21
Common Knowledge
Suppose the road rules (and the fact they are respected) are common
knowledge: everybody knows (and respects) the rules, and everybody
knows that everybody knows (and respects) the rules, and... etc.
Now, you can drive safely!
22
1.2. Puzzles and Paradoxes
I will now illustrate the above-mentioned themes via a number of
epistemic puzzles:
these are stories, involving multi-agent informational dynamics, that
appear to lead to surprising, “puzzling”, or even paradoxical
conclusions.
23
Epistemic Puzzle no. 0: The Coordinated Attack
Two divisions of the same army, commanded by general A and general
B, are camped on two hilltops overlooking a valley. In the valley awaits
the enemy (C). It is clear that if both divisions attack simultaneously
they will win, while if only one division attacks it will be defeated. So
neither general will attack unless he is absolutely sure that the other
will attack with him. General A sends a messenger to general B to
coordinate a simultaneous attack, by conveying the message “attack at
dawn”. But it is possible that the messenger will be captured by the
enemy. Fortunately, on this particular night, everything goes smoothly.
How long will it take them to coordinate an attack?
24
Well, B cannot attack at dawn, after receiving the message, since he’s
still not sure that A knows he received the message; indeed, A might
think it possible the messenger was captured, in which case A will not
attack at dawn, since he’ll fear B won’t attack. So B has to send
another messenger to A to confirm the receipt of the first message (an
’acknowledgment’). After receiving it, A knows that B got the first
message. But he still cannot attack, since he’s not sure B will: for all
that B knows, his messenger might have been captured (in which case
A wouldn’t know the first message was received). So A has to send
back to B another messenger, confirming the receipt of the previous
acknowledgment.
This goes forever, without achieving any coordination: even if no
messenger is captured, one can show that no finite number of successful
deliveries of “acknowledgments to acknowledgments” can allow the
generals to attack!
25
Epistemic Puzzle no. 1: To learn is to falsify
Our starting example concerns a “love triangle”: suppose that Alice
and Bob are a couple, but Alice has just started an affair with Charles.
At some point, Alice sends to Charles an email, saying:
“Don’t worry, Bob doesn’t know about us”.
But suppose now that Bob accidentally reads the message (by, say,
secretly breaking into Alice's email account).
Then, paradoxically enough, after seeing (and believing) the message
which says he doesn't know..., he will know!
26
So, in this case, learning the message is a way to falsify it.
As we'll see, this example shows that standard belief-revision
postulates may fail to hold in such complex learning actions,
in which the message to be learned refers to the knowledge
of the hearer.
27
Epistemic Puzzle no. 2: Self-fulfilling falsehoods
Suppose Alice becomes somehow convinced that Bob knows everything
(about the affair).
This is false (Bob doesn’t have a clue), but nevertheless she’s so
convinced that she makes an attempt to warn Charles by sending him a
message:
“Bob knows everything about the affair!”
As before, Bob secretly reads (and believes) the message. While false
at the moment of its sending, the message becomes true: now he knows.
28
So, communicating a false belief (i.e. Alice's action) might be a
self-fulfilling prophecy: Alice's false belief, once communicated,
becomes true.
At the same time, the action of (reading and) believing a falsehood
(i.e. Bob's action) can be self-fulfilling: the false message, once
believed, becomes true.
29
Epistemic Puzzle no. 3: Self-enabling falsehoods
Suppose that in fact Alice was faithful, despite all the attempts
made by Charles to seduce her.
Out of despair, Charles comes up with a “cool” plan of how to break up
the marriage:
he sends an email which is identical to the one in the second puzzle
(bearing Alice’s signature and warning Charles that Bob knows about
their affair.) Moreover, he makes sure somehow that Bob will have the
opportunity to read the message.
Knowing Bob’s quick temper, Charles expects him to sue for a divorce;
knowing Alice’s fragile, volatile sensitivity, he also expects that, while
on the rebound, she’d be open for a possible relationship with himself
(Charles).
30
The plan works: as a result, Bob is misled into “knowing” that he
has been cheated.
He promptly sends Alice a message saying: “I'll see you in court”.
After the divorce, Charles makes his seductive move, playing the
friend-in-need. Again, the original message becomes true: now,
Alice does have an affair with Charles, and Bob knows it.
Sending a false message has enabled its validation.
31
Epistemic Puzzle no. 4: Muddy Children
Suppose there are 4 children, all of them good logicians, and exactly 3
of them with dirty faces. Each can see the faces of the others, but
doesn't see his/her own face.
The father publicly announces:
“At least one of you is dirty”.
Then the father does another paradoxical thing: he starts repeating over
and over the same question “Do you know if you are dirty or not,
and if so, which of the two?”
32
After each question, the children have to answer publicly, sincerely and
simultaneously, based only on their knowledge, without taking any
guesses. No other communication is allowed and nobody can lie.
One can show that, after 2 rounds of questions and answers, all the
dirty children will come to know they are dirty! So they give
this answer in the 3rd round, after which the clean child also comes
to know she's clean, giving the correct answer in the 4th round.
33
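
The announce-and-update protocol just described can be checked mechanically. Below is a minimal sketch in Python (the encoding is my own, not from the course materials): worlds are tuples of booleans, the father's announcement prunes the all-clean world, and each round of public simultaneous answers prunes the worlds incompatible with those answers.

from itertools import product

def knows(world, i, worlds):
    # Child i sees all faces except her own: she knows her status in
    # `world` iff her own component is the same in every remaining world
    # that agrees with `world` on everybody else.
    n = len(world)
    possible = [w for w in worlds
                if all(w[j] == world[j] for j in range(n) if j != i)]
    return len({w[i] for w in possible}) == 1

def muddy_children(true_state):
    n = len(true_state)
    # Father's announcement: at least one child is dirty.
    worlds = [w for w in product([False, True], repeat=n) if any(w)]
    rnd = 0
    while True:
        rnd += 1
        answers = [knows(true_state, i, worlds) for i in range(n)]
        print(f"Round {rnd}: children who know their status: "
              f"{[i for i in range(n) if answers[i]]}")
        if all(answers):
            return rnd
        # The public simultaneous answers are themselves an announcement:
        # keep only the worlds in which each child would have answered
        # exactly as she actually did.
        worlds = [w for w in worlds
                  if all(knows(w, i, worlds) == answers[i] for i in range(n))]

muddy_children((True, True, True, False))  # 4 children, children 0-2 dirty

Running it prints that nobody knows in rounds 1 and 2, the three dirty children answer in round 3, and the clean child in round 4, exactly as claimed above.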
Muddy Children Puzzle continued
First Question: What’s the point of the father’s first announcement
(”At least one of you is dirty”)?
Apparently, this message is not informative to any of the children: the
statement was already known to everybody! But the puzzle wouldn’t
work without it: in fact this announcement adds information to the
system! The children implicitly learn some new fact, namely the fact
that what each of them used to know in private is now public knowledge.
34
Second Question: What’s the point of the father’s repeated questions?
If the father knows that his children are good logicians, then at each
step the father knows already the answer to his question,
before even asking it! However, the puzzle wouldn’t work without these
questions. In a way, it seems the father’s questions are “abnormal”, in
that they don't actually aim at filling a gap in the father's knowledge,
but are instead part of a Socratic strategy of
teaching-through-questions.
Third Question: How can the children’s statements of ignorance lead
them to knowledge?
35
The Amazon Island
This is another story encoding the same puzzle:
On the island of Amazonia, women are dominant and the law says that,
if at any point a woman knows her husband is cheating on her, she
must shoot him the same day at noon in the main square.
Now the queen (truthfully) tells the women: “At least one of your
husbands is a cheater. Whenever somebody’s husband is cheating, all
the other women know it”.
For 16 days, nothing happens. Then, on the 17th day, shots are
heard.
Question: How many husbands died?
36
Puzzle no 5: Sneaky Children
Let us modify the last example a bit.
Suppose the children are somehow rewarded for answering as quickly as
possible, but they are punished for incorrect answers; thus they are
interested in getting to the correct conclusion as fast as possible.
Suppose also that, after the first round of questions and answers,
two of the dirty children “cheat” on the others by secretly
telling each other that they're dirty, while none of the
others suspects this can happen.
37
Honest Children Always Suffer
As a result, they both will answer truthfully “I know I am dirty” in the
second round.
One can easily see that the third dirty child will be totally
deceived, coming to the “logical” conclusion that... she is
clean!
So, after giving this wrong answer in the third round, she ends up
being punished for her credulity, despite her impeccable logic.
38
Clean Children Always Go Crazy
What happens to the clean child?
Well, assuming she doesn’t suspect any cheating, she is facing
a contradiction: two of the dirty children answered too quickly,
coming to know they’re dirty before they were supposed to know!
If the third child simply updates her knowledge monotonically with this
new information (and uses classical logic), then she ends up believing
everything : she goes crazy!
39
The Dangers of Mercy
In the Amazonia version of the story, assume that again there are
exactly 17 cheating husbands (out of 1 million husbands on the island),
while the remaining 999,983 husbands are faithful.
Consider what happens now if all the wives of the 17 cheating husbands
secretly decide to break the Queen's rules, by quietly sparing the lives
of their husbands, even when they get to know that they are cheating.
We also assume that all the other wives do not suspect this: not only
do they strictly obey the Queen's rules, but they believe that it is
common knowledge that everybody else obeys those same rules.
It's easy to see that, in this case, 17 days will pass without any
shooting. But it's also easy to show that on the 18th day, shots will be
heard. How many husbands will die in this scenario? How
many of these are innocent?
40
Puzzle no 6: Surprised Children
The students in a high-school class know for sure that the date of
the exam has been fixed on one of the five (working) days of
next week: it'll be the last week of the term, and there has to be an
exam, and only one exam.
But they don't know on which day.
Now the Teacher announces to her students that the exam's date will
be a surprise: even in the evening before the exam, the students will
still not be sure that the exam is tomorrow.
41
Paradoxical Argumentation
Intuitively, one can prove (by backward induction, starting with
Friday) that, IF this announcement is true, then the exam
cannot take place on any day of the week.
So, using this argument, the students come to “know” that
the announcement is false: the exam CANNOT be a surprise.
GIVEN THIS, they feel entitled to dismiss the announcement, and...
THEN, surprise: whenever the exam comes (say, on Tuesday), it
WILL indeed be a complete surprise!
42
Puzzle no 7: The Lottery Paradox
A lottery with 1 million tickets (numbered from 1 to 1,000,000) was
announced. It is known that one (and only one) of the tickets is the
winner.
Alice receives a ticket (with the number 1,570) as a birthday gift. Being
a good mathematician, Alice calculates that the probability that this
particular ticket is winning is 0.000001, i.e. practically infinitesimal. So
she believes that her ticket (no. 1,570) is not the winning one.
Of course, the same reasoning applies to any other ticket. So, for each
number between 1 and 1,000,000, Alice believes that the ticket with that
number is not winning.
On the other hand, she knows that one of these tickets IS the
winning one.
So Alice’s beliefs are inconsistent!
43
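
In symbols, writing w_i for “the ticket with number i is the winning one”, Alice's beliefs are:

B¬w_1, B¬w_2, ..., B¬w_1000000, together with B(w_1 ∨ w_2 ∨ ... ∨ w_1000000).

Assuming only that belief is closed under finite conjunction, these jointly yield

B( ¬w_1 ∧ ... ∧ ¬w_1000000 ∧ (w_1 ∨ ... ∨ w_1000000) ),

i.e. belief in an outright contradiction.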
The Infinite Lottery
Maybe you think that Alice was wrong to believe that her ticket was
not winning, given that there was still some non-zero (though extremely
small) probability that it might actually be the winning ticket.
Well, then consider instead an infinite lottery, with tickets labeled by
arbitrary natural numbers.
Now, assuming the lottery is fair, the probability that a given number
is the winning one is actually 0. So Alice is now absolutely right to
believe that a ticket with a(ny) given number is not winning. But
one of them has to be winning!
So Alice’s beliefs (taken as a whole) are still inconsistent! Though, in
this case, any finite subset of her beliefs is consistent...
44
Puzzle number 8: A Centipede Game
Consider the following game G, where Alice (a) is the first and third
player, and Bob (b) the second:
v0:a --> v1:b --> v2:a --> o4: (4,5)
 |        |        |
 v        v        v
o1:(3,0) o2:(2,3) o3:(5,2)
In the leaves (“outcomes”, denoted by o’s), the first number is
Alice’s payoff, while the second is Bob’s payoff.
45
Backward Induction Method
We iteratively eliminate the obviously “bad” moves (that lead to
“bad” payoffs for the player making the move) in stages, proceeding
backwards from the leaves. The first elimination stage gives us:
v0:a --> v1:b --> v2:a
 |        |        |
 v        v        v
o1:(3,0) o2:(2,3) o3:(5,2)
46
Backward Induction, Continued
Next Stage:
v0:a --> v1:b
 |        |
 v        v
o1:(3,0) o2:(2,3)
47
Backward Induction: The Outcome
Final Stage:
v0:a
 |
 v
o1:(3,0)
So, according to this method, the outcome of the game should
be o1: Alice gets 3 dollars, while Bob gets nothing!
So the game stops at the first step, and the players have to be
“rationally” satisfied with (3, 0), when they could have got (4, 5) or
at least (5, 2) if they continued to play!
This conclusion strikes most people as pretty “irrational”.
48
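
The elimination procedure illustrated on the last few slides is easy to mechanize. Here is a minimal sketch in Python (the node encoding is my own): each internal node records whose turn it is, the “down” option and the “across” option, and backward induction returns the payoffs and path that “rational” play selects.

def backward_induction(node):
    # Leaves carry their payoffs; internal nodes are solved bottom-up.
    if node[0] == "leaf":
        _, name, payoffs = node
        return payoffs, [name]
    player, name, down, across = node
    down_pay, down_path = backward_induction(down)
    across_pay, across_path = backward_induction(across)
    idx = 0 if player == "a" else 1  # Alice maximizes the 1st payoff, Bob the 2nd
    if down_pay[idx] >= across_pay[idx]:
        return down_pay, [name] + down_path
    return across_pay, [name] + across_path

# The centipede game from the slides:
o1 = ("leaf", "o1", (3, 0)); o2 = ("leaf", "o2", (2, 3))
o3 = ("leaf", "o3", (5, 2)); o4 = ("leaf", "o4", (4, 5))
v2 = ("a", "v2", o3, o4)       # Alice: stop at o3 or continue to o4
v1 = ("b", "v1", o2, v2)       # Bob:   stop at o2 or continue to v2
v0 = ("a", "v0", o1, v1)       # Alice: stop at o1 or continue to v1
print(backward_induction(v0))  # ((3, 0), ['v0', 'o1'])

The procedure duly returns the outcome o1 with payoffs (3, 0); the slides that follow question the epistemic assumptions behind this answer.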
Aumann’s Argument
But... it seems to be an inescapable conclusion of (commonly known)
“rationality”!
Indeed, suppose that it is common “knowledge” that
everybody is “rational”: always plays to maximize his/her profit.
Then, in particular, Alice is rational, so when choosing between
outcomes o3 and o4 (at node v2), she will choose o3 (giving her 5
dollars rather than 4). This justifies the first elimination stage.
Now, since “rationality” is common “knowledge”, Bob knows
that Alice is “rational”, so he can “simulate” the above
elimination argument in his mind: so now Bob “knows” that, if the
node v2 is reached during the game, then Alice will choose outcome
o3.
49
Given this information, he knows that, if arriving at node v1, the
only possible outcomes would be o2 and o3. From these two, o2 gives
Bob a higher payoff (3 instead of 2). Since Bob is rational, it
follows that, if node v1 is reached during the game, Bob would choose o2.
This justifies the second elimination stage.
Again, all this is known to Alice: she knows that Bob is
rational and that he knows that she is rational, so she can
“simulate” all the above argument, concluding that at the initial node
v0, the possible outcomes are only o1 and o2. Being rational
herself, she has to choose o1 (giving her a higher payoff 3 > 2).
50
Counterargument
In view of the above argument, let's re-examine Bob's reasoning
when he plans his move for node v1 in the Centipede game:
v0:a --> v1:b --> v2:a --> o4: (4,5)
 |        |        |
 v        v        v
o1:(3,0) o2:(2,3) o3:(5,2)
Based on the above argument, Bob knows that, IF “rationality”
is “common knowledge”, then Alice should choose outcome
o1, thus finishing the game.
51
So Bob reasons like this: IF node v1 were reached AT ALL, then
this could only happen if the above assumption (“common
knowledge of rationality”) was wrong! So, in this eventuality,
he will have to give up his “knowledge of Alice’s rationality”: she
would have already made what appeared as an “irrational” choice (of
v1 over o1). Even if he started the game believing that Alice was
rational, he may now reassess this assumption.
This undermines the justification for the first elimination step, at
least in Bob’s mind: once he’s not sure of Alice’s rationality, he
cannot be sure anymore that she will choose o3 over o4, if given this
opportunity.
52
Rational Pessimism about Other’s Rationality
So, the question is: what will Bob believe about Alice if he were to see
her “irrationally” choosing v1 over o1?
One possible option is the “pessimistic” one: in this case, Bob
would conclude that Alice is “irrational”, and that moreover
she will continue to be “irrational” from then on.
This seems pretty reasonable: after all, if Alice has once behaved
weird, what’s to stop her from doing it again?!
Let’s assume that Bob does indeed adopt this attitude of “rational
pessimism” towards Alice’s rationality.
53
The Consequences of Pessimism
So Bob thinks that, IF node v1 were reached, then (Alice is
“irrational”, so that), if she were given the opportunity to choose
between o3 and o4, Alice would “stupidly” choose o4. So, as far
as Bob’s beliefs go, the “first” elimination stage goes now as
follows:
v0:a --> v1:b --> v2:a --> o4: (4,5)
 |        |
 v        v
o1:(3,0) o2:(2,3)
54
Next Stage
Given that, and Bob’s rationality, the next stage according to
Bob is:
v0:a --> v1:b --> v2:a --> o4: (4,5)
 |
 v
o1:(3,0)
55
Final Stage
We assumed common knowledge of rationality, so in fact Alice IS
rational: Bob is wrong to revise his belief, since she would never
choose o4 over o3. Even her choice of v1 over o1 is perfectly
justified, if we assume that she knows Bob’s belief revision
policy. Then the best move for Alice is to first choose v1 and later
choose o3:
v0:a --> v1:b --> v2:a
                   |
                   v
                 o3:(5,2)
56
Inconsistency?
So, assuming common knowledge of “rationality”, we “proved” both
that the backward induction outcome is reached and that it is not
reached!
Moreover, this reasoning can be generalized (as pointed out by
Binmore, Bonanno, Bicchieri, Reny, Brandenburger and others):
the argument underlying the backward induction method seems to
give rise to a fundamental paradox (the so-called “BI paradox”).
57
PUZZLE no 9: Wisdom of the Crowds
The “implicit knowledge” of a group is typically much higher than
even the knowledge of the most expert member of the group.
Estimating the weight of an ox. (Francis Galton)
Counting jelly beans in a jar. (Jack Treynor)
Navigating a maze. (Norman Johnson)
Finding out which company was responsible for the Challenger
disaster!
Predicting election results!
And yet...
58
PUZZLE no 10: Informational Cascades
It is commonly known that there are two urns. Urn A contains twice as
many black marbles as white marbles. Urn B contains twice as many
white marbles as black marbles.
It is known that one (and only one) of the urns is placed in a room,
where people are allowed to enter one by one. Each person randomly
draws one marble from the urn, looks at it and has to make a
guess: whether the urn in the room is Urn A or Urn B. The guesses are
recorded on a board in the room, so that the next person can see the
previous guesses.
What will happen?
59
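
What happens is that, after a short initial phase, rational guessers start ignoring their own draws. The following Python sketch (with assumed parameters: equal priors, and a 2/3 chance of drawing the urn's majority colour) counts the public evidence in “signal units”; once the public score reaches ±2, every later guess is independent of the guesser's own draw, so no new information reaches the board: an informational cascade, possibly on the wrong urn.

import random

def cascade(true_urn="A", n_people=12, seed=3):
    rng = random.Random(seed)
    p_black = 2/3 if true_urn == "A" else 1/3   # Urn A is majority-black
    k = 0              # public evidence for Urn A, in signal units
    guesses = []
    for _ in range(n_people):
        s = +1 if rng.random() < p_black else -1   # private draw: black = +1
        total = k + s
        # Rational guess: combine public evidence with the private draw,
        # breaking exact ties in favour of one's own draw.
        guess = "A" if (total > 0 or (total == 0 and s > 0)) else "B"
        guesses.append(guess)
        # Onlookers can recover the private draw from the guess only while
        # |k| <= 1; from |k| = 2 on, guesses no longer reveal anything.
        if abs(k) <= 1:
            k += s
    return guesses

print(cascade())   # e.g. ['A', 'A', 'A', ...] -- or a cascade on the wrong urn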
The Circular Mill
An army ant, when lost, obeys a simple rule: follow the ant in front of
you!
Most of the time, this works well.
But the American naturalist William Beebe came upon a strange sight
in Guyana:
a group of army ants was moving in a huge circle, 1200 feet in
circumference. It took each ant two and a half hours to complete the
tour.
The ants went round and round for two days, till they all died.
If you think people are smarter than ants, think of the arms race in the
Cold War.
60
The Human Mill: Men Who Stare at Goats
About the U.S. Army’s exploration of psi research and
military applications of the paranormal.
General Brown: When did the Soviets begin this type of
research?
Brigadier General Dean Hopgood: Well, Sir, it looks like they
found out about our attempt to telepathically communicate
with one of our nuclear subs, the Nautilus, while it was under
the polar cap.
General Brown: What attempt?
Dean: There was no attempt. It seems the story was a French
hoax.
61
Dean: But the Russians think the story about the story being a
French hoax is just a story, Sir.
General Brown: So they started doing psi research because they
thought we were doing psi research, when in fact we weren’t
doing psi research?
Dean: Yes sir. But now that they *are* doing psi research,
we’re gonna have to do psi research, sir.
Dean: We can't afford to have the Russians leading the field in
the paranormal.
LESSON: Recall self-fulfilling falsehoods!!!
62
PUZZLE no 11: Pluralistic Ignorance
Emperor’s New Clothes.
The First Class example:
LECTURER: If there is anything that you feel is too difficult to
understand in my lecture, please ask questions! :-)
Anybody?!
63
1.3. Epistemic-Doxastic Logics
Epistemic Logic was first formalized by Hintikka (1962), who also
sketched the first steps in formalizing doxastic logic.
Both logics were further developed and studied by philosophers and
logicians (Parikh, Stalnaker, van Benthem etc.), computer scientists
(Halpern, Vardi, Fagin etc.) and economists (Aumann, Brandenburger,
Samet etc.).
64
Syntax of Single-Agent Epistemic-Doxastic Logic
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Kϕ | Bϕ
65
Models for Single-Agent Information
We are given a set of “possible worlds”, meant to represent all the
relevant epistemic/doxastic possibilities in a certain situation.
EXAMPLE 1: a coin is on the table, but the (implicit) agent doesn’t
know (nor believe he knows) which face is up.
( H )   ( T )
66
Knowledge or Belief
The universal quantifier over the domain of possibilities is interpreted
as knowledge, or belief, by the implicit agent.
So we say the agent knows, or believes, a sentence ϕ if ϕ is true in
all the possible worlds of the model.
The specific interpretation (knowledge or belief) depends on the
context.
In the previous example, the agent doesn't know (nor believe) that the
coin lies Heads up, nor that it lies Tails up.
67
Learning: Update
EXAMPLE 2:
Suppose now the agent looks at the upper face of the coin and he sees
it’s Heads up.
The model of the new situation is now:
( H )
Only one epistemic possibility has survived: the agent now
knows/believes that the coin lies Heads up.
68
Update as World Elimination
In general, updating corresponds to world elimination:
an update with a sentence ϕ is simply the operation of deleting all
the non-ϕ possibilities.
After the update, the worlds not satisfying ϕ are no longer possible: the
actual world is known not to be among them.
69
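
As a quick sketch in Python (representing a model simply as a set of worlds, and a sentence as a predicate on worlds):

def update(worlds, phi):
    # Update with phi: delete all the non-phi possibilities.
    return {w for w in worlds if phi(w)}

worlds = {"H", "T"}                        # the coin model of Example 1
print(update(worlds, lambda w: w == "H"))  # {'H'}: the model of Example 2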
Truth and Reality
But is ϕ “really” true (in the “real” world), apart from the agent’s
knowledge or beliefs?
For this, we need to specify which of the possible worlds is the
actual world.
70
Real World
Suppose that, in the original situation (before learning), the coin lay
Heads up (though the agent didn't know, or believe, this).
We represent this situation by marking the actual (“real” state of
the) world with a red star:
( *H )   ( T )
71
Mistaken Updates
But what if the real world is not among the “possible” ones? What if
the agent's sight was so bad that she only thought she saw the coin
lying Heads up, when in fact it lay Tails up?
After the “update”, her epistemically-possible worlds are just
( H )
but we cannot mark the actual world here, since it doesn’t belong
to the agent’s model!
72
False Beliefs
Clearly, in this case, the model only represents the agent’s beliefs, but
NOT her “knowledge” (in any meaningful sense): the agent believes
that the coin lies Heads up, but this is wrong!
Knowledge is usually assumed to be truthful, but in this case the
agent’s belief is false.
But still, how can we talk about “truth” in a model in which the
actual world is not represented?!
73
Third-person Models
The solution is to go beyond the agent's own model, by taking
an “objective” (third-person) perspective: the real possibility
is always in the model, even if the agent believes it to
be impossible.
To point out which worlds are believed to be possible by the agent
we encircle them: these worlds form the “sphere of beliefs”.
“Belief ” now quantifies ONLY over the worlds in this sphere,
while “knowledge” still quantifies over ALL possible worlds.
EXAMPLE 3:   (( H ))   *T
(the sphere of beliefs contains only H; the actual world, marked by the
star, is T, outside the sphere)
74
Example 4
In the Surprise Exam story, a possible initial situation (BEFORE the
Teacher’s announcement) might be given by:
(( 1   2   3   4   5 ))
where i means that the exam takes place on the i-th (working) day of
the week.
This encodes an initial situation in which the student knows that
there will be an exam in (exactly) one of the days, but he doesn’t
know the day, and moreover he doesn’t have any special belief
about this: he considers all days as being possible.
We are not told when the exam will take place: no red star.
75
Beliefs
EXAMPLE 5:
If however, the Student believes (for some reason or another) that the
exam will take place either Monday or Tuesday, then the correct
representation is:
(( 1   2 ))   3   4   5
Again, we are not told when the exam is, so no red star.
76
However, if we are told that the exam is in fact on Thursday (though
the student still doesn’t know this), then the model is:
(( 1   2 ))   3   *4   5
In this model, some of the student's beliefs are false, since the real
world does NOT belong to his “sphere of beliefs”.
77
Simple Models for Knowledge and Belief
For a set Φ of facts, a (single-agent, pointed) epistemic-doxastic
model is a tuple
S = (S, S0, ‖.‖, s∗), consisting of:
1. A set S of “possible worlds” (or possible “states of the world”, also
known as “ontic states”). S defines the agent's epistemic state:
these are the states that are “epistemically possible”.
2. A non-empty subset S0 ⊆ S, S0 ≠ ∅, called the “sphere of beliefs”,
or the agent's doxastic state: these are the states that are
“doxastically possible”.
3. A map ‖.‖ : Φ → P(S), called the valuation, assigning to each
p ∈ Φ a set ‖p‖S of states.
4. A designated world s∗ ∈ S, called the “actual world”.
78
Interpretation
• The epistemic state S gives us an (implicit) agent’s state of
knowledge: he knows the real world belongs to S, but cannot
distinguish between the states in S, so cannot know which of them
is the real one.
• The doxastic state S0 gives us the agent’s state of belief : he
believes that the real world belongs to S0, but his beliefs are
consistent with any world in S0.
• The valuation tells us which ontic facts hold in which world:
we say that p is true at s if s ∈ ‖p‖.
• The actual world s∗ gives us the “real state” of the world: what
really is the case.
79
Truth
For any world w in a model S and any sentence ϕ, we write
w |=S ϕ
if ϕ is true in the world w.
When the model S is fixed, we skip the subscript and simply
write w |= ϕ.
For atomic sentences, this is given by the valuation map:
w |= p iff w ∈ ‖p‖,
while for the other propositional formulas it is given by the usual truth
clauses:
80
w |= ¬ϕ iff w ⊭ ϕ,
w |= ϕ ∧ ψ iff w |= ϕ and w |= ψ .
Disjunction, Conditional, Biconditional: We take ϕ ∨ ψ to be just
an abbreviation for ¬(¬ϕ ∧ ¬ψ), ϕ⇒ ψ to be just an abbreviation for
¬ϕ ∨ ψ, and ϕ⇔ ψ to be an abbreviation for (ϕ⇒ ψ) ∧ (ψ ⇒ ϕ).
As a consequence, we have e.g:
w |=S ϕ ∨ ψ iff either w |= ϕ or w |= ψ
etc.
81
Interpretation Map
We can extend the valuation ‖p‖S to an interpretation map ‖ϕ‖S for
all propositional formulas ϕ:
‖ϕ‖S := {w ∈ S : w |=S ϕ}.
Obviously, this has the property that
‖¬ϕ‖S = S \ ‖ϕ‖S,
‖ϕ ∧ ψ‖S = ‖ϕ‖S ∩ ‖ψ‖S,
‖ϕ ∨ ψ‖S = ‖ϕ‖S ∪ ‖ψ‖S.
We now want to extend the interpretation to all the sentences in
doxastic-epistemic logic.
82
Knowledge and Belief
Knowledge is defined as “truth in all epistemically possible
worlds”, while belief is “truth in all doxastically possible
worlds”:
w |= Kϕ iff t |= ϕ for all t ∈ S,
w |= Bϕ iff t |= ϕ for all t ∈ S0.
83
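
These clauses translate directly into a small model checker. Below is a sketch in Python (the encoding of formulas as nested tuples is my own): S is the set of worlds, S0 the sphere of beliefs, and val the valuation.

def holds(w, phi, S, S0, val):
    if isinstance(phi, str):          # atomic sentence p
        return w in val[phi]
    op = phi[0]
    if op == "not":
        return not holds(w, phi[1], S, S0, val)
    if op == "and":
        return holds(w, phi[1], S, S0, val) and holds(w, phi[2], S, S0, val)
    if op == "K":                     # truth in all epistemically possible worlds
        return all(holds(t, phi[1], S, S0, val) for t in S)
    if op == "B":                     # truth in all doxastically possible worlds
        return all(holds(t, phi[1], S, S0, val) for t in S0)
    raise ValueError(f"unknown connective: {op}")

# Example 3: sphere = {H}, actual world = T, atom "h" = "the coin lies Heads up".
S, S0, val = {"H", "T"}, {"H"}, {"h": {"H"}}
print(holds("T", ("B", "h"), S, S0, val))   # True: a false belief
print(holds("T", ("K", "h"), S, S0, val))   # False: not knowledge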
Validity
A sentence is valid over epistemic-doxastic models if it is true at every
state in every epistemic-doxastic model.
A sentence is satisfiable (over epistemic-doxastic models) if it is true
at some state in some epistemic-doxastic model.
84
Consequences
For all sentences ϕ, ψ etc., the following are valid over
epistemic-doxastic models:
1. Veracity of Knowledge:
Kϕ⇒ ϕ
2. Positive Introspection of Knowledge:
Kϕ⇒ KKϕ
3. Negative Introspection of Knowledge:
¬Kϕ⇒ K¬Kϕ
4. Consistency of Belief:
¬B(ϕ ∧ ¬ϕ)
85
5. Positive Introspection of Belief:
Bϕ⇒ BBϕ
6. Negative Introspection of Belief:
¬Bϕ⇒ B¬Bϕ
7. Strong Positive Introspection of Belief:
Bϕ⇒ KBϕ
8. Strong Negative Introspection of Belief:
¬Bϕ⇒ K¬Bϕ
9. Knowledge implies Belief:
Kϕ⇒ Bϕ
86
Epistemic-Doxastic Logic: Sound and Complete Proof System
In fact, a sound and complete proof system for single-agent
epistemic-doxastic logic can be obtained by taking as axioms:
validities (1)-(4) and (7)-(9) above, together with all
propositional tautologies, as well as “Kripke’s axioms” for
knowledge and belief
K(ϕ⇒ ψ)⇒ (Kϕ⇒ Kψ) ,
B(ϕ⇒ ψ)⇒ (Bϕ⇒ Bψ) ,
together with the following inference rules:
Modus Ponens: From ϕ and ϕ⇒ ψ infer ψ.
Necessitation: From ϕ infer Kϕ.
87
Generalization
Many philosophers deny that knowledge is introspective, and some
philosophers deny that belief is introspective. In particular, both
common usage and Platonic dialogues suggest that people may
believe they know things that they don’t actually know.
Some of the other validities above may also be debatable: e.g. some
“crazy” agents may have inconsistent beliefs.
So it is convenient to have a more general semantics, in which the
above principles do not necessarily hold, so that one can pick whichever
principles one considers true.
88
Kripke Semantics
For a set Φ of facts, a Φ-Kripke model is a tuple
S = (S, {Ri}i∈I , ‖.‖, s∗)
consisting of:
1. a set S of “possible worlds”;
2. a family of binary accessibility relations Ri ⊆ S × S, indexed
by labels i ∈ I;
3. a valuation ‖.‖ : Φ → P(S), assigning to each p ∈ Φ a set ‖p‖S
of states;
4. a designated world s∗: the “actual” one.
89
Kripke Semantics: Modalities
For atomic sentences and for Boolean connectives, we use the same
semantics (and notations) as on epistemic-doxastic models.
For every sentence ϕ, we can define a new sentence [Ri]ϕ by
(universally) quantifying over Ri-accessible worlds:
s |= [Ri]ϕ iff t |= ϕ for all t such that sRit.
The operator [Ri] is called a “(universal) Kripke modality”. When the
relation R = R1 is unique (i.e. card(I) = 1), we can leave it implicit
and abbreviate [R]ϕ as □ϕ.
The dual existential modality is given by
⟨Ri⟩ϕ := ¬[Ri]¬ϕ.
Again, when R is unique, we can abbreviate ⟨R⟩ϕ as ◇ϕ.
90
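
A sketch of the same clause in Python, computing the set of worlds where [R]ϕ holds, given ϕ's extension; the relation is encoded as a set of pairs, and the example relation is the doxastic one from the mistaken-update scenario, revisited in Example 3 below.

def box(phi_worlds, R, S):
    # [R]phi holds at s iff every R-successor of s satisfies phi.
    return {s for s in S
            if all(t in phi_worlds for (u, t) in R if u == s)}

S = {"H", "T"}
R = {("H", "H"), ("T", "H")}   # from either world, only H is accessible
heads = {"H"}                  # extension of "the coin lies Heads up"
print(box(heads, R, S))        # {'H', 'T'}: B(Heads) holds at every world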
Kripke Models for Knowledge and Belief
In a context where we interpret the modality □ϕ as knowledge, we use
the notation Kϕ instead, and we denote by ∼ the underlying binary
relation R.
When we interpret the modality □ϕ as belief, we use the notation Bϕ
instead, and we denote by → the underlying binary relation R.
So a Kripke model for (single-agent) knowledge and belief is of
the form (S,∼,→, ‖.‖, s∗), with K interpreted as the modality [∼] for
the epistemic relation ∼, and B as the modality [→] for the doxastic
relation →.
91
Example 3, again: knowledge
The agent’s knowledge in the concealed coin scenario can now be
represented as:
( H ) <--> ( T ) ,   with a reflexive ∼-loop at each world
The arrows represent the epistemic relation ∼, which captures the
agent's uncertainty about the state of the world. An arrow from state s
to state t means that, if s were the real state, then the agent wouldn’t
distinguish it from state t: for all he knows, the real state might be t.
92
Knowledge properties
The fact that K in this model satisfies our validities (1)-(3) is now
reflected in the fact that ∼ is an equivalence relation in this model:
• The Veracity (known as axiom T in modal logic) Kϕ⇒ ϕ
corresponds to the reflexivity of the relation ∼.
• Positive Introspection (known as axiom 4 in modal logic)
Kϕ⇒ KKϕ corresponds to the transitivity of the relation ∼.
• Negative Introspection (known as axiom 5 in modal logic)
¬Kϕ⇒ K¬Kϕ corresponds to Euclideaness of the relation ∼:
if s ∼ t and s ∼ w then t ∼ w.
In the context of the other two, Euclideaness is equivalent to
symmetry:
if s ∼ t then t ∼ s.
93
Epistemic Models
An epistemic model (or S5-model) is a Kripke model in which all
the accessibility relations are equivalence relations, i.e. reflexive,
transitive and symmetric (or equivalently: reflexive, transitive
and Euclidean).
94
Example 3, again: beliefs
The agent's beliefs after the mistaken update are now representable as:
( H ) <-- ( T ) ,   with a →-loop at H
(from either world, the only doxastically accessible world is H).
In both worlds (i.e. irrespective of what world is the real one), the
agent believes that the coin lies Heads up.
95
Belief properties
The fact that belief in this model satisfies our validities (4)-(6) is now
reflected in the fact that the doxastic accessibility in the above model
has the following properties:
• Consistency of beliefs (known as axiom D in modal logic)
¬B(ϕ ∧ ¬ϕ) corresponds to the seriality of the relation →:
∀s ∃t such that s → t.
• Positive Introspection for Beliefs (axiom 4) Bϕ⇒ BBϕ
corresponds to the transitivity of the relation →.
• Negative Introspection for Beliefs (axiom 5) ¬Bϕ ⇒ B¬Bϕ
corresponds to Euclideaness of the relation →.
96
Doxastic Models
A doxastic model (or KD45-model) is a Φ-Kripke model satisfying
the following properties:
• (D) Seriality: for every s there exists some t such that s→ t ;
• (4) Transitivity: If s→ t and t→ w then s→ w
• (5) Euclideaness : If s→ t and s→ w then t→ w
97
Putting together in the same structure the belief arrows → from the
previous example with the knowledge arrows from before, now denoted
by ∼, we obtain a Kripke model for both knowledge AND belief.
98
Properties connecting Knowledge and Belief
The fact that knowledge and belief in this model satisfy our validities
(7)-(9) is now reflected in the fact that the accessibility relations ∼ and
→ in the above model have the following properties:
• Strong Positive Introspection of beliefs Bϕ⇒ KBϕ
corresponds to
if s ∼ t and t→ w then s→ w.
• Strong Negative Introspection of beliefs ¬Bϕ ⇒ K¬Bϕ
corresponds to
if s ∼ t and s→ w then t→ w.
• Knowledge Implies Beliefs Kϕ⇒ Bϕ corresponds to
if s → t then s ∼ t.
99
Epistemic-Doxastic Kripke Models
A Kripke model satisfying all the above conditions on the relations ∼
and → is called an epistemic-doxastic Kripke model.
There are two important observations to be made about these models:
first, they are completely equivalent to our simple, sphere-based
epistemic-doxastic models;
second, the epistemic relation is completely determined by the doxastic
relation.
100
Equivalence of Models
EXERCISE: For every epistemic-doxastic model S = (S, S0, ‖.‖, s∗)
there exists a doxastic-epistemic Kripke model S′ = (S, ∼, →, ‖.‖, s∗)
(having the same set of worlds S, the same valuation ‖.‖ and the same
real world s∗), such that the same sentences of doxastic-epistemic
logic are true at the real world s∗ in model S as in model S′:
s∗ |=S ϕ iff s∗ |=S′ ϕ ,
for every sentence ϕ.
Conversely, for every doxastic-epistemic Kripke model
S′ = (S, ∼, →, ‖.‖, s∗) there exists an epistemic-doxastic model
S = (S, S0, ‖.‖, s∗) such that, for every sentence ϕ, we have:
s∗ |=S ϕ iff s∗ |=S′ ϕ .
101
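
One direction of the exercise is essentially a one-line construction; here is a sketch in Python, under the intended reading of the sphere models: the epistemic relation relates any two worlds of S, and every world “points” doxastically exactly to the worlds in the sphere S0.

def to_kripke(S, S0):
    sim = {(s, t) for s in S for t in S}      # epistemic relation ~ (total)
    arrow = {(s, t) for s in S for t in S0}   # doxastic relation ->
    return sim, arrow

sim, arrow = to_kripke({"H", "T"}, {"H"})
print(sorted(arrow))   # [('H', 'H'), ('T', 'H')]: the model of Example 3

It remains to check (the exercise proper) that K and B, evaluated as the modalities [∼] and [→], agree with the sphere clauses.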
Doxastic Relations Uniquely Determine Epistemic Ones
EXERCISE:
Given a doxastic Kripke model (S, →, ‖.‖, s∗) (i.e. one in which
→ is serial, transitive and Euclidean), there is a unique relation
∼ ⊆ S × S such that (S, ∼, →, ‖.‖, s∗) is a doxastic-epistemic
Kripke model.
This means that, to encode an epistemic-doxastic model as a Kripke
model, we only need to draw the arrows for the doxastic
relation.
102
S4 Models for weak types of knowledge
But, we can see that, in the setting of Kripke models, the properties
specific to “epistemic-doxastic models” are NOT
automatically satisfied.
So Kripke semantics is more general than the “sphere semantics”.
In fact, one can use Kripke semantics to interpret various weaker
notions of “knowledge”, e.g. a type of knowledge that is truthful
(factive) and positively introspective, but NOT necessarily negatively
introspective.
An S4-model for knowledge is a Kripke model satisfying only
reflexivity and transitivity (but not necessarily symmetry or
Euclideaness).
103
Kripke Models for Non-Standard Notions of Belief
Similarly, by dropping the corresponding semantic conditions, one can
use Kripke models to represent non-introspective beliefs, or even
inconsistent beliefs.
104
The Problem of Logical Omniscience
However, it is easy to see that any Kripke modality □ = [R] still
validates Kripke's axiom
(K) □(ϕ ⇒ ψ) ⇒ (□ϕ ⇒ □ψ),
and still satisfies the Necessitation Rule:
if ϕ is valid, then □ϕ is valid.
So, if we interpret the modality as “knowledge” or “belief”, then every
logical validity is known/believed, and similarly every logical
entailment between two propositions is known/believed.
This means that Kripke semantics can only model “ideal” reasoners,
who may have limited access to external truths, but have unlimited
inference powers.
105
Later, towards the end of the course, we will see how a modified Kripke
semantics can avoid this problem.
106