
Goal-Based Denial and Wishful Thinking

Michael Petychakis
National Technical University of Athens

The Problem

❏ People make plans
❏ People deny a reality they do not like
❏ People like to think the best about their future
❏ People share their beliefs with fellows
❏ People get motivated and influenced by their friends and family


Are agents able to behave in a similar way?

Multiple Benefits

Agent to Agent Communication


The Coordinated Attack Problem (aka Two Generals' or Warring Generals' Problem)

❏ Two generals stand on opposite hilltops, trying to coordinate an attack on a third general in the valley between them.
❏ Communication is via messengers who must travel across enemy lines (and may be caught).
❏ If a general attacks on his own, he loses.
❏ If both attack simultaneously, they win.
❏ What protocol can ensure a simultaneous attack?


The Coordinated Attack Problem (A Naive Protocol)

❏ Let us call the generals:
  ❏ S (sender)
  ❏ R (receiver)
❏ Protocol for general S:
  ❏ Send an "attack" message to R
  ❏ Keep sending until an acknowledgement is received
❏ Protocol for general R:
  ❏ Do nothing until a message "attack" is received from S
  ❏ On receiving the message, send an acknowledgement to S
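To make the failure mode concrete, here is a minimal Python sketch of the naive protocol over a lossy channel (the loss_rate parameter and all names are illustrative assumptions, not from the slides). Even in a successful run, R never learns whether his acknowledgement arrived.

```python
import random

def run_naive_protocol(loss_rate=0.3, max_rounds=20, seed=0):
    """Simulate the naive protocol: S keeps sending "attack" until an
    acknowledgement arrives; every messenger crossing enemy lines is
    lost independently with probability loss_rate."""
    rng = random.Random(seed)
    msgR = 0  # R has received "attack"
    ackS = 0  # S has received an acknowledgement

    for round_no in range(1, max_rounds + 1):
        if rng.random() > loss_rate:            # S's messenger gets through
            msgR = 1
        if msgR and rng.random() > loss_rate:   # R's ack gets through
            ackS = 1
        if ackS:
            return round_no, msgR, ackS
    return max_rounds, msgR, ackS

# Even when this terminates with ackS = 1, R does not know that his
# acknowledgement arrived -- exactly the regress on the next slides.
print(run_naive_protocol())
```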


The Coordinated Attack Problem (States)

❏ State of general S:
  ❏ A pair (msgS, ackS) where msgS ∈ {0,1}, ackS ∈ {0,1}
  ❏ msgS = 1 means a message "attack" was sent
  ❏ ackS = 1 means an acknowledgement was received
❏ State of general R:
  ❏ A pair (msgR, ackR) where msgR ∈ {0,1}, ackR ∈ {0,1}
  ❏ msgR = 1 means a message "attack" was received
  ❏ ackR = 1 means an acknowledgement was sent
❏ Global state: <(msgS, ackS), (msgR, ackR)>
❏ 4 possible local states per general, and hence 16 global states
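As a quick sanity check of the state count, a few lines of Python (illustrative, not from the slides) enumerate the local and global states:

```python
from itertools import product

# A local state is a (msg, ack) pair with msg, ack in {0, 1}.
local_states = list(product((0, 1), repeat=2))          # 4 per general
global_states = list(product(local_states, repeat=2))   # 4 x 4 = 16

assert len(local_states) == 4 and len(global_states) == 16
for (msgS, ackS), (msgR, ackR) in global_states:
    print(f"<({msgS},{ackS}),({msgR},{ackR})>")
```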


The Coordinated Attack Problem (Possible Worlds)

❏ Initial global state: <(0,0),(0,0)>
❏ State changes as a result of:
  ❏ Protocol events
  ❏ Nondeterministic effects of nature
❏ Changes in state are captured in a history
❏ Example:
  ❏ S sends a message to R; R receives it and sends an acknowledgement, which is then received by S
  ❏ <(0,0),(0,0)>, <(1,0),(1,0)>, <(1,1),(1,1)>
❏ In our model: possible world = possible history


The Coordinated Attack Problem (Indistinguishable Worlds)

❏ Defining the accessibility relation Ri:
  ❏ Two histories are indistinguishable to agent i if their final global states have identical local states for agent i
❏ Example: the world
  ❏ <(0,0),(0,0)>, <(1,0),(1,0)>, <(1,0),(1,1)> is indistinguishable to general S from this world: <(0,0),(0,0)>, <(1,0),(0,0)>, <(1,0),(0,0)>
❏ In words: S sends a message to R but does not get an acknowledgement. This could be because R never received the message, or because he did but his acknowledgement did not reach S
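The relation is easy to state in code. A small sketch using the two example histories above (variable names and the agent indices are my own):

```python
# A history is a sequence of global states; a global state pairs the
# local states of S (index 0) and R (index 1).
h1 = [((0, 0), (0, 0)), ((1, 0), (1, 0)), ((1, 0), (1, 1))]  # ack lost
h2 = [((0, 0), (0, 0)), ((1, 0), (0, 0)), ((1, 0), (0, 0))]  # msg lost

def indistinguishable(hist_a, hist_b, agent):
    """Two histories are indistinguishable to an agent iff their final
    global states have identical local states for that agent."""
    return hist_a[-1][agent] == hist_b[-1][agent]

S, R = 0, 1
print(indistinguishable(h1, h2, S))  # True: S ends in (1,0) in both
print(indistinguishable(h1, h2, R))  # False: R ends in (1,1) vs (0,0)
```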


The Coordinated Attack Problem (What do generals know?)

❏ Suppose the actual world is:
  ❏ <(0,0),(0,0)>, <(1,0),(1,0)>, <(1,1),(1,1)>
❏ In this world, the following hold:
  ❏ K_S attack
  ❏ K_R attack
  ❏ K_S K_R attack
❏ Unfortunately, this also holds:
  ❏ ¬K_R K_S K_R attack
❏ R does not know that S knows that R knows that S intends to attack. Why? Because, from R's perspective, the message could have been lost
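These knowledge facts can be checked mechanically with the standard possible-worlds semantics: K_i φ holds at a world iff φ holds at every world agent i cannot distinguish from it. A sketch under the slides' definitions (the world list, extended with a world where nothing is sent, and all helper names are my own assumptions):

```python
def knows(agent, prop, actual, worlds):
    """K_i(prop) holds at `actual` iff prop is true in every world whose
    final global state gives agent i the same local state as `actual`."""
    same_view = [w for w in worlds if w[-1][agent] == actual[-1][agent]]
    return all(prop(w) for w in same_view)

S, R = 0, 1
w_init  = [((0, 0), (0, 0))]                                      # nothing sent
w_lost  = [((0, 0), (0, 0)), ((1, 0), (0, 0)), ((1, 0), (0, 0))]  # message lost
w_noack = [((0, 0), (0, 0)), ((1, 0), (1, 0)), ((1, 0), (1, 1))]  # ack lost
w_ok    = [((0, 0), (0, 0)), ((1, 0), (1, 0)), ((1, 1), (1, 1))]  # both arrive
worlds  = [w_init, w_lost, w_noack, w_ok]

attack = lambda w: w[-1][S][0] == 1                  # S has sent "attack"
K = lambda i, p: (lambda w: knows(i, p, w, worlds))  # curried K_i operator

print(knows(S, attack, w_ok, worlds))              # K_S attack     -> True
print(knows(R, attack, w_ok, worlds))              # K_R attack     -> True
print(knows(S, K(R, attack), w_ok, worlds))        # K_S K_R attack -> True
print(knows(R, K(S, K(R, attack)), w_ok, worlds))  # K_R K_S K_R attack -> False
```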


The Coordinated Attack Problem (What do generals know?)

❏ Possible solution: S acknowledges R's acknowledgement
❏ Then we have:
  ❏ K_R K_S K_R attack
❏ Unfortunately, we also have:
  ❏ ¬K_S K_R K_S K_R attack
❏ Is there a way out of this?

Motivation

Belief revision is the process of changing beliefs to take a new piece of information into account. The logical formalization of belief revision is studied in philosophy, in databases, and in artificial intelligence for the design of rational agents.

❏ Why should an agent always prefer new information over its previous beliefs?
❏ How can an agent autonomously generate its own order(s) among beliefs?
❏ Can human-like preferences in belief revision be adequately expressed using an order (or orders) among beliefs?

Wishful Thinking Revision (WTR)

❏ Non-prioritized
❏ Autonomous
❏ Context-oriented
❏ Simulates wishful thinking

Changes in the World: The Idea

❏ Interpretation of a belief set B:
  ❏ the set of possible worlds where B is true
❏ Notification of some change in the actual world:
  ❏ The agent's description of the possible states of affairs must be modified accordingly
  ❏ Our description of the actual world is typically incomplete, which means that several states of affairs (possible worlds) are consistent with what we believe. Hence, an update must ensure that the changes are made true in the "candidate worlds" that survive the update.
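A toy rendering of that idea in Python (the atom set, the naive "force the atom" update, and all names are illustrative assumptions; real update operators choose minimal changes per surviving world):

```python
from itertools import product

atoms = ("p", "q")

# A possible world assigns a truth value to every atom; the
# interpretation of a belief set B is the set of worlds where B is true.
worlds = [dict(zip(atoms, vals))
          for vals in product((True, False), repeat=len(atoms))]

def interpretation(belief_set, worlds):
    return [w for w in worlds if all(belief(w) for belief in belief_set)]

def update(candidates, atom, value):
    """Make the reported change true in every candidate world that
    survives the update (naively, by forcing the atom's new value)."""
    updated = [{**w, atom: value} for w in candidates]
    # Collapse duplicates created when candidates differed only on `atom`.
    return [dict(t) for t in {tuple(sorted(w.items())) for w in updated}]

B = [lambda w: w["p"]]                    # the agent believes p
candidates = interpretation(B, worlds)    # worlds consistent with B
print(update(candidates, "q", True))      # q now true in each survivor
```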


(Diagram: Wishful Thinking and Denial, each with a Passive and an Active form.)

WTR Agent Definition

If ag is the agent using WTR, our model assumes that its internal state contains, among other items, the following information:

❏ The agent's knowledge base, represented by KB(ag)
❏ The agent's goals, represented by Goals(ag)
❏ For each other agent agi, the subjective credibility that our agent associates with agi, represented by Cred(ag, agi)
❏ The agent's wishful thinking coefficient, represented by wt(ag)

Reasoning

❏ Monotonic reasoning
  ❏ If KB ⊨ φ, then for all γ, KB ∧ γ ⊨ φ
  ❏ The inference engine only performs ask and tell on the KB, never retract
❏ Non-monotonic reasoning
  ❏ Allows KB ⊨ φ and then KB ∧ γ ⊭ φ
  ❏ Previously derived facts can be retracted upon the arrival of new, conflicting evidence (for example, from sensors)

KB(ag) = { <A, Obs, {A}>, <A → B, Peter, {A → B}>, <B → C, Susan, {B → C}> }

KB(ag) = { <A, Obs, {A}>, <A → B, Peter, {A → B}>, <B → C, Susan, {B → C}>, <B, Der, {A, A → B}>, <C, Der, {A, A → B, B → C}> }
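The step between the two knowledge bases above is ordinary forward chaining that unions the supports of its premises, which is what makes non-monotonic retraction cheap. A sketch (the rule encoding and helper names are my own):

```python
# Beliefs are <formula, origin, support> triples, as in the KB above.
base = {
    "A":      ("Obs",   frozenset({"A"})),
    "A -> B": ("Peter", frozenset({"A -> B"})),
    "B -> C": ("Susan", frozenset({"B -> C"})),
}
# Modus ponens instances: (premise, implication, conclusion).
rules = [("A", "A -> B", "B"), ("B", "B -> C", "C")]

def derive(kb):
    """Forward-chain, unioning premise supports so every derived fact
    remembers which base beliefs it rests on."""
    kb = dict(kb)
    changed = True
    while changed:
        changed = False
        for premise, imp, conclusion in rules:
            if premise in kb and imp in kb and conclusion not in kb:
                kb[conclusion] = ("Der", kb[premise][1] | kb[imp][1])
                changed = True
    return kb

def retract(kb, fact):
    """Non-monotonic step: drop `fact` and everything it supports."""
    return {f: (o, s) for f, (o, s) in kb.items() if fact not in s}

kb = derive(base)
print(kb["C"])                   # ('Der', frozenset({'A', 'A -> B', 'B -> C'}))
print(sorted(retract(kb, "A")))  # ['A -> B', 'B -> C']
```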

The agent trusts Peter to a certain degree of belief, and Susan to another.

(Architecture diagram. Labels: Collected Data (β0), Wishful Thoughts (γ0), Base Beliefs (β), Wishful Beliefs (γ), Context (βγ), Derived Beliefs, World, Goals, Observatory/Communication Supports, WT Supports, Valid Supports.)

Definitions

Scenarios (1/2)

Scenario 1

❏ Φbt = GDesc(gbt) = have(Boat)
❏ Φal = GDesc(gal) = alive(Mother)
❏ GImp(gbt) = 0.35
❏ GImp(gal) = 0.95

Scenario 2

❏ Cred(ag, David) = 0.5
❏ Φ17 = inFlight(Mother, 17)
❏ not Φbt
❏ CtxPrf({Φbt, Φal, Φ17}, ag) = 0.775
❏ CtxPrf({not Φbt, Φal, Φ17}, ag) = 1.749

Scenario 3

❏ CtxPrf({Φbt, Φal, Φ17}, ag) = 0.775
❏ CtxPrf({Φbt, Φal, Φcr}, ag) = 0.811
❏ CtxPrf({Φbt, Φ17, Φcr}, ag) = 0.808
❏ CtxPrf({not Φbt, Φal, Φ17}, ag) = 1.749
❏ CtxPrf({not Φbt, Φal, Φcr}, ag) = 1.793
❏ CtxPrf({not Φbt, Φ17, Φcr}, ag) = 1.790

Scenarios (2/2)

Scenario 4

❏ CtxPrf({Φbt, Φal, Φ17}, ag) = 0.820
❏ CtxPrf({Φbt, Φal, Φcr}, ag) = 0.811
❏ CtxPrf({Φbt, Φ17, Φcr}, ag) = 0.833
❏ CtxPrf({not Φbt, Φal, Φ17}, ag) = 1.804
❏ CtxPrf({not Φbt, Φal, Φcr}, ag) = 1.793
❏ CtxPrf({not Φbt, Φ17, Φcr}, ag) = 1.819

Scenario 5

❏ CtxPrf({Φbt, Φal, Φ17}, ag) = 0.820
❏ CtxPrf({Φbt, Φal, not Φ17, Φcr}, ag) = 0.856
❏ CtxPrf({Φbt, Φ17, Φcr}, ag) = 0.833
❏ CtxPrf({not Φbt, Φal, Φ17}, ag) = 1.804
❏ CtxPrf({not Φbt, Φal, not Φ17, Φcr}, ag) = 1.846
❏ CtxPrf({not Φbt, Φ17, Φcr}, ag) = 1.819

Related Approaches

❏ Epistemic Modal Logic with Belief Operator

❏ Belief Revision (family of) Logics
❏ Multi-Agent Systems
❏ Cognitive Agents
❏ Description Logics Reasoning

Future

Conscious Agents
Synesthesia
Query Optimisations
Automated Web Agents

Tim Berners-Lee originally expressed the vision of the Semantic Web as follows:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A "Semantic Web", which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The "intelligent agents" people have touted for ages will finally materialize.

Thank You! Questions?
