
Game-Theoretic Foundations for Argumentation

IYAD RAHWAN

British University in Dubai & University of Edinburgh

and

KATE LARSON

University of Waterloo

and

FERNANDO TOHMÉ

Universidad Nacional del Sur

Game theory is becoming central to the design and analysis of computational mechanisms in which multiple entities interact strategically. In particular, the tools of mechanism design are used extensively to engineer incentives for truth-revelation into resource-allocation and preference-aggregation protocols, as evident in the work on combinatorial auctions and voting.

On the other hand, traditionally, the design of logical reasoning procedures assumes that all logical formulae are available a priori for reasoning by a single agent. However, for reasoning using knowledge obtained from different entities in an open system (e.g. the Semantic Web), incentives are clearly equally important. Just as an auction is a rule that maps the bids of different agents into a social outcome by allocating resources, an inference procedure can map logical formulae revealed by different agents into logical conclusions. It is somewhat surprising that, to date, very little systematic investigation of incentives has been carried out in the context of logical inference when formulae are distributed among self-interested agents.

Against this background, we ask: how do we engineer inference procedures with desirable incentive properties, when knowledge is distributed among strategic agents? A general answer to this question requires a new paradigm, and is beyond a single paper. Our aim is to motivate such a paradigm by exploring the question in the context of Dung's abstract argumentation theory, which generalises a variety of nonmonotonic logics. We introduce Argumentation Mechanism Design (ArgMD), in which argument evaluation procedures are viewed as games with different agents deciding which arguments to reveal. We then undertake a detailed case study of the strategic properties of grounded semantics under different conditions. We close by making general remarks about how this framework may be further generalised.

Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence—Multiagent systems; Coherence and coordination; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods—representation languages

General Terms: Theory, Economics

Additional Key Words and Phrases: Game Theory, Mechanism Design, Argumentation

Authors' addresses: Iyad Rahwan, Faculty of Informatics, British University in Dubai, P.O. Box 502216, Dubai, UAE, [email protected], http://homepages.inf.ed.ac.uk/irahwan/; Kate Larson, Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada, [email protected], http://www.cs.uwaterloo.ca/~klarson/; Fernando Tohmé, Artificial Intelligence Research and Development Lab (LIDIA), Universidad Nacional del Sur, Bahía Blanca, CONICET, Argentina, [email protected].

Permission to make digital/hard copy of all or part of this material without fee for personal or classroom use provided that the copies are not made or distributed for profit or commercial advantage, the ACM copyright/server notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.

© 20YY ACM 0004-5411/20YY/0100-0001 $5.00


1. INTRODUCTION

Game theory is becoming increasingly important in the design and analysis of computational mechanisms in which multiple entities interact strategically. In particular, the tools of mechanism design are now used extensively to engineer incentives for truth-revelation into communication protocols. For example, incentive properties are important to ensure socially optimal resource allocation in combinatorial auctions [Cramton et al. 2006]. Incentives are also crucial in the design of voting protocols and other preference aggregation mechanisms [Conitzer et al. 2007].

On the other hand, traditionally, the design of logical inference procedures has predominantly focused on formal properties, such as soundness and completeness with respect to different semantics, while assuming all logical formulae are available a priori. This is analogous to assuming that all bids are truthfully known before determining the winners in an auction, or assuming that true agent preferences are available before applying a voting rule. It is somewhat surprising that, to date, little systematic investigation of incentives has been carried out in the context of logical inference, which can be seen as a mechanism for aggregating logical formulae. In a system that reasons using knowledge obtained from different entities in an open system (e.g. self-interested agents sharing declarative information on the Semantic Web [Berners-Lee et al. 2001]), incentives are clearly important.

In this paper, we argue that the design of logical inference procedures can greatly benefit from a game-theoretic perspective. Just as an auction is a rule that maps the revealed preferences/bids of different agents into a social outcome by allocating resources, an inference procedure can map logical formulae revealed by different agents into logical conclusions. The question then becomes: how do we engineer inference procedures with desirable incentive properties, when knowledge is distributed among strategic agents?

A general answer to the above question requires a new paradigm that marries logic and game theory in a new way. The details of this paradigm are beyond the scope of a single paper. Hence, our aim here is more modest, namely to motivate such a paradigm by exploring the question in the context of abstract argumentation theory, and to show how this paradigm can be useful through a specific case study.

We explore the above question by building on a logical model of abstract argumentation due to Dung [1995]. We do this for two reasons. Firstly, the notion of argumentation lends itself naturally to a multi-agent setting: argumentation can be seen as the multi-agent counterpart to logic, a duality that dates back to Greek antiquity. As such, argumentation includes an explicit account of nonmonotonic inference over potentially conflicting information coming from different sources. Secondly, Dung's model has been shown to generalise a variety of nonmonotonic logics, including extension-based approaches such as Reiter's default logic [Reiter 1980], argument-based approaches such as Pollock's inductive defeasible logic [Pollock 1987], many varieties of defeasible logic programming semantics, and various new semantics [Baroni and Giacomin 2007]. As such, Dung's framework provides us with a fairly general setting in which to conduct our analysis, as well as enabling future analysis beyond the present paper.

In this context, we introduce Argumentation Mechanism Design (ArgMD), a framework in which an argument-evaluation procedure (or semantic criterion) is viewed in terms of the game it defines. We show how argument evaluation procedures can be viewed as games with different agents deciding which arguments to reveal. The design of these procedures, then, becomes a mechanism design problem [Mas-Colell et al. 1995, Ch 14] akin to designing an auction. We also distinguish a particular class of argumentation mechanisms, namely direct-revelation argumentation mechanisms, which significantly simplifies analysis.

To demonstrate the usefulness of this framework, we then undertake a detailed case study of the grounded semantics,1 viewed as if it were designed as a mechanism. In particular, we explore the strategic properties of grounded semantics under two conditions: when agents can only hide arguments, and when agents can hide or lie about arguments. We show that, in general, an agent may have an incentive to hide or lie in an attempt to influence the outcome in its favour. We fully characterise conditions under which the grounded semantics is strategy-proof2 for a certain class of agent preferences. Then, we show sufficient, meaningful topological restrictions on the argument graph3 that guarantee strategy-proofness, and discuss the weaker condition of incentive compatibility. Our analysis demonstrates that strategic incentives depend on the intricate relationship between three key factors: (1) the form of agent preferences, (2) the structure of the argumentative scenario (i.e. the argument graph), and (3) the argument evaluation criterion used to decide the outcome. Finally, we discuss some thoughts on how our approach may be extended beyond Dung's theory.

The paper4 advances the state of the art in the strategic aspects of interaction among knowledge-based agents in three ways. Firstly, the paper introduces the first definition of the problem of formulating argument acceptance criteria in Dung-style frameworks as a (game-theoretic) mechanism design problem. This new perspective opens up many possibilities for designing argumentation protocols/criteria that have desirable properties in truly adversarial settings. Such a perspective on designing argumentation protocols and acceptance criteria has not been explored to date.

The second main contribution of this paper is in demonstrating the power of the ArgMD approach. We present the first comprehensive game-theoretic analysis of an important argument acceptability criterion, namely the well-known (sceptical) grounded semantics. We characterise general necessary and sufficient conditions, as well as sufficient graph-theoretic conditions, under which this mechanism is strategy-proof (i.e. truth-revealing) for a natural class of agent preferences. We also discuss the weaker notion of incentive compatibility.

Thirdly, our analysis demonstrates that the properties of argumentation mechanisms depend highly on the form of preferences agents have, and possibly on the structure of the argument graph. A variety of other preferences and topological structures are possible, and different ones may be sensible in different application settings. Thus, our work opens new avenues of research on analysing argumentation mechanisms under different preference conditions and relationships among arguments.

1 Grounded semantics is a particular (conservative) criterion for selecting acceptable arguments from a set of conflicting arguments. It accepts arguments that are either undefeated, or defended by other accepted arguments.
2 A mechanism is 'strategy-proof' if no agent has an incentive to lie, regardless of what other agents do.
3 An argument graph consists of a set of arguments and a defeat relation among them.
4 This paper is a significantly expanded version of an IJCAI conference paper [Rahwan et al. 2009].

The rest of the paper is organised as follows. In the next two sections, we present the necessary technical background in abstract argumentation frameworks, game theory and mechanism design. In Section 4, we define multi-agent abstract argumentation as a mechanism design problem. Next, in Section 5, we define a sceptical direct-revelation abstract argumentation mechanism and analyse its strategy-proofness extensively under various conditions. Then, we discuss the epistemic conditions for achieving strategy-proofness in Section 6. We explore the weaker condition of incentive compatibility in Section 7. Finally, in Section 8, we briefly discuss how the work in this paper fits into the larger research programme we are advocating. We discuss related work in Section 9 and conclude the paper in Section 10. An appendix includes most proofs.

2. BACKGROUND I: ABSTRACT ARGUMENTATION FRAMEWORKS

The theory of argumentation is a rich, interdisciplinary area of research spanning philosophy, communication studies, linguistics, and psychology. Its techniques and results have found a wide range of applications in both theoretical and practical branches of artificial intelligence and computer science [Bench-Capon and Dunne 2007; Rahwan and McBurney 2007]. Argumentation can be seen as a reasoning process consisting of the following four steps:

(1) Constructing arguments (in favour of / against a "statement") from a knowledge base.
(2) Determining the different conflicts among the arguments.
(3) Evaluating the acceptability of the different arguments.
(4) Concluding, or defining the justified conclusions.

Many argumentation formalisms are built around an underlying logical language L and an associated notion of logical consequence, which together define the concept of an argument as a tentative proof. However, in this paper, following the influential work of Dung [1995] and its subsequent extensions [Baroni and Giacomin 2007], we look at the argumentation process abstractly, that is, without reference to the internal structure of arguments. We begin with Dung's abstract characterisation of an argumentation system [Dung 1995]:

Definition 2.1 Argumentation framework. An argumentation framework is a pair AF = 〈A, ⇀〉 where A is a set of arguments and ⇀ ⊆ A × A is a defeat relation. We say that an argument α defeats an argument β if and only if (α, β) ∈ ⇀ (sometimes written α ⇀ β).

Dung showed how this abstract framework generalises a variety of nonmonotonic logics, including extension-based approaches such as Reiter's well-known default logic [Reiter 1980], argument-based approaches such as Pollock's inductive defeasible logic [Pollock 1987], and many varieties of logic programming semantics.

As an illustration, consider Reiter's default logic. A default theory is a pair T = (D, W) where D is a set of default rules and W is a background theory consisting of a set of closed first-order sentences. Default rules are of the form (ϕ : ψ1, . . . , ψn)/γ, where ϕ, ψi and γ are classical formulae. Intuitively, the default rule means that if ϕ (called the prerequisite) is derivable and, for every i, ¬ψi is not derivable, we may provisionally derive γ (called the consequent). Formulae ψ1, . . . , ψn are the justifications of the default rule. Theory T can be interpreted as an argumentation framework AF = 〈A_T, ⇀_T〉 in which an argument in A_T is a derivation of a formula using default rules and the background theory, and ⇀_T holds when the conclusion of one derivation negates a justification of another (see [Dung 1995] for more details).

In this paper, we restrict ourselves to finite argumentation frameworks, that is, frameworks with finite sets of arguments. This assumption is widely adopted in the literature and reflects the reasonable intuition that agents cannot produce new relevant information forever. An argumentation framework can be represented as a directed graph in which vertices are arguments and directed arcs characterise defeat among arguments.

Example 2.2. Argumentation framework AF = 〈{α1, α2, α3, α4, α5}, {(α2, α1), (α3, α2), (α4, α1), (α5, α4)}〉 corresponds to the graph in Figure 1.

Fig. 1. A simple argument graph (α3 ⇀ α2 ⇀ α1 and α5 ⇀ α4 ⇀ α1)

Departing from the set of all (possibly conflicting) arguments, it is important to know which of them can be relied on for inferring conclusions and for making decisions. Different semantics for the notion of acceptability have been proposed by Dung [1995]. These are stated in the following definitions.

Definition 2.3 Conflict-free, Defense. Let 〈A, ⇀〉 be an argumentation framework and S ⊆ A.

—S is conflict-free if and only if there exist no α ∈ S and β ∈ S such that α ⇀ β.5
—S defends an argument α if and only if for each argument β ∈ A, if β ⇀ α, then there exists an argument γ ∈ S such that γ ⇀ β. We also say that argument α is acceptable with respect to S.

Intuitively, a set of arguments is conflict-free if no argument in that set defeats another. A set of arguments defends a given argument if it defeats all its defeaters.

Example 2.4. In Figure 1, the set {α3, α5} defends argument α1.

We now look at different semantics that characterise the collective acceptability of a set of arguments.

5 This notion of "conflict-freeness" is independent of the logic, and is one of the advantages of Dung's abstract theory of argument. Different logics may instantiate the notion of conflict differently. For example, in propositional logic, conflict-freeness might imply that the theory is satisfiable, whereas in a logic of actions it might mean that the effects of actions do not cancel each other out.


Definition 2.5 Characteristic function. Let AF = 〈A, ⇀〉 be an argumentation framework. The characteristic function of AF is F_AF : 2^A → 2^A such that, given S ⊆ A, we have F_AF(S) = {α ∈ A | S defends α}.

When there is no ambiguity about the argumentation framework in question, we will use F instead of F_AF.
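
For concreteness, the following Python sketch (our illustration, not part of Dung's formalism; the arguments α1–α5 of Figure 1 are rendered as strings "a1"–"a5") computes F_AF directly from Definition 2.5:

    args = {"a1", "a2", "a3", "a4", "a5"}
    defeats = {("a2", "a1"), ("a3", "a2"), ("a4", "a1"), ("a5", "a4")}

    def F(S):
        """F_AF(S): the arguments all of whose defeaters are defeated by S."""
        return {a for a in args
                if all(any((c, b) in defeats for c in S)
                       for b in args if (b, a) in defeats)}

    print(sorted(F(set())))          # ['a3', 'a5'] -- the undefeated arguments
    print(sorted(F({"a3", "a5"})))   # ['a1', 'a3', 'a5']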

Definition 2.6 Acceptability semantics. Let S be a conflict-free set of arguments in framework AF = 〈A, ⇀〉.

—S is admissible if and only if it defends every element in S (i.e. if S ⊆ F(S)).
—S is a complete extension if and only if S = F(S).
—S is a grounded extension if and only if it is the minimal (with respect to set inclusion) complete extension (or, alternatively, if S is the least fixed point of F(·)).6
—S is a preferred extension if and only if it is a maximal (with respect to set inclusion) complete extension (or, alternatively, if S is a maximal admissible set).

Let E = {E1, . . . , En} be the set of all possible extensions under a given semantics.

Intuitively, a set of arguments is admissible if it is a conflict-free set that defends itself against any defeater; in other words, if it is a conflict-free set in which each argument is acceptable with respect to the set itself.

Example 2.7. In Figure 1, the sets ∅, {α3}, {α5}, and {α3, α5} are all admissible simply because they do not have any defeaters. The set {α1, α3, α5} is also admissible since it defends itself against both defeaters α2 and α4.

An admissible set S is a complete extension if and only if all arguments defended by S are also in S (that is, if S is a fixed point of the operator F). This captures the attitude of an agent that accepts everything it can defend. There may be more than one complete extension, each corresponding to a particular consistent and self-defending viewpoint.

Example 2.8. In Figure 1, the admissible set {α3, α5} is not a complete extension, since it defends α1 but does not include α1. Similarly, the sets {α3} and {α5} are not complete extensions, since F({α3}) = {α3, α5} and F({α5}) = {α3, α5}. The admissible set {α1, α3, α5} is the only complete extension, since F({α1, α3, α5}) = {α1, α3, α5}.

As another example, consider the following.

Example 2.9. Consider the graph in Figure 2. Here, we have three complete extensions: {α3}, {α1, α3} and {α2, α3}.

Fig. 2. An example with three complete extensions (α1 and α2 defeat each other; α3 stands alone)

A grounded extension is the minimal complete extension. Intuitively, this means that it contains all the arguments which are not defeated, as well as the arguments which are defended, directly or indirectly, by non-defeated arguments. This can be seen as a non-committal view (characterised by the least fixed point of F). As such, there always exists a unique grounded extension. Dung [1995] showed that in finite argumentation systems, the grounded extension can be obtained by an iterative application of the characteristic function to the empty set.

6 For finite argument frameworks, this is also equivalent to S = ⋃_{i=0}^∞ F^i_AF(∅) [Dung 1995; Prakken and Vreeswijk 2002].

Example 2.10. In Figure 1, the grounded extension is {α1, α3, α5}, which is the only complete extension. This can also be calculated using the iterative application of F as follows:

– F^1(∅) = {α3, α5};
– F^2(∅) = F(F^1(∅)) = {α1, α3, α5};
– F^3(∅) = F(F^2(∅)) = {α1, α3, α5} = F^2(∅).

Similarly, in Figure 2, the grounded extension is {α3}, which is the minimal complete extension with respect to set inclusion.

More intuitively, computing arguments in the grounded extension can be seen as a process of labelling nodes of the defeat graph. First, nodes that have no defeaters are labelled 'undefeated' and the nodes attacked by them are labelled 'defeated.' Then, all labelled arguments are suppressed and the process is repeated on the resulting sub-graph, and so on. If in some iteration no initial node is found, all unlabelled nodes are labelled 'defeated' and the process terminates.
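
This labelling procedure is straightforward to implement. The following Python sketch (ours, using the Figure 1 framework) returns the arguments labelled 'undefeated', i.e. the grounded extension:

    args = {"a1", "a2", "a3", "a4", "a5"}
    defeats = {("a2", "a1"), ("a3", "a2"), ("a4", "a1"), ("a5", "a4")}

    def grounded_extension(args, defeats):
        remaining, undefeated = set(args), set()
        while True:
            # initial nodes of the current sub-graph: no remaining defeaters
            initial = {a for a in remaining
                       if not any((b, a) in defeats for b in remaining)}
            if not initial:
                break  # all unlabelled nodes are labelled 'defeated'
            undefeated |= initial
            beaten = {a for a in remaining
                      if any((b, a) in defeats for b in initial)}
            remaining -= initial | beaten  # suppress labelled arguments
        return undefeated

    print(sorted(grounded_extension(args, defeats)))  # ['a1', 'a3', 'a5']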

A preferred extension is a bolder and more committed view. It is a position that cannot be extended (by including more arguments) without causing conflict (i.e. defeat among two or more arguments). Thus a preferred extension can be thought of as a maximal conflict-free set of hypotheses. There may be multiple preferred extensions, and the grounded extension is included in all of them.

Example 2.11. In Figure 1, {α1, α3, α5} is the only preferred extension. But in Figure 2, there are two preferred extensions: {α1, α3} and {α2, α3}, which are the maximal complete extensions with respect to set inclusion.

In the remainder of this paper we use the expression "Dung's standard semantics" to refer to complete, grounded and preferred semantics. We use the unqualified term "extension" to refer to a complete, grounded or preferred extension. Now that the acceptability semantics of sets of arguments is defined, we can define the status of any individual argument.

Definition 2.12 Argument status. Let 〈A, ⇀〉 be an argumentation system, and E1, . . . , En its extensions under a given semantics. Let α ∈ A.

(1) α is sceptically accepted if and only if α ∈ Ei, ∀Ei with i = 1, . . . , n.
(2) α is credulously accepted if and only if ∃Ei such that α ∈ Ei.
(3) α is rejected if and only if ∄Ei such that α ∈ Ei.


An argument is sceptically accepted if it belongs to all extensions. Intuitively, an argument is sceptically accepted if it can be accepted without making any hypotheses beyond what is acceptable in all 'possible worlds', so to speak. On the other hand, an argument is credulously accepted on the basis that it belongs to at least one extension. Intuitively, an argument is credulously accepted if there is a possible conflict-free set of hypotheses in which it is accepted. If an argument is neither sceptically nor credulously accepted, there is no basis for accepting it, and it is therefore rejected.

In the literature, a particular semantics often denotes the way the extensions are characterised as well as whether sceptical or credulous acceptance is being used. For example, one might refer to sceptical preferred semantics, in which an argument is accepted if it belongs to the intersection of all preferred extensions, or credulous preferred semantics, in which an argument is accepted if it belongs to at least one preferred extension. Note that grounded semantics is necessarily sceptical, since there is a unique grounded extension.

Definition 2.13 Acceptable Argument Set. Let 〈A, ⇀〉 be an argumentation system under semantics S, where S characterises both the nature of extensions and whether sceptical or credulous acceptance is applied. We denote by Acc(〈A, ⇀〉, S) ⊆ A the set of acceptable arguments according to semantics S.

Finally, we list a couple of definitions which will be needed in our subsequent analysis.

Definition 2.14 Indirect defeat and defence [Dung 1995]. Let α, β ∈ A. We say that α indirectly defeats β, written α → β, if and only if there is an odd-length path from α to β in the argument graph. We say that α indirectly defends β, written α # β, if and only if there is an even-length path (with non-zero length) from α to β in the argument graph.

Definition 2.15 Parents & initial arguments [Baroni and Giacomin 2007]. Givenan argumentation framework AF = 〈A,〉 and an argument α ∈ A, the parentsof argument α are denoted by parAF (α) = β ∈ A | β → α. Arguments inAF that have no parents are called initial arguments, and are denoted by the setIN (AF ) = α ∈ A | parAF (α) = ∅.

3. BACKGROUND II: GAME THEORY AND MECHANISM DESIGN

The contemporary theory of abstract argumentation frameworks is not concerned with strategic issues. In fact, all argument acceptability semantics mentioned in the previous section assume that a set of arguments and a defeat relation are given, and the argument evaluation criteria merely compute the set of acceptable arguments. However, in a multi-agent setting, different arguments are likely to be presented by different self-interested agents. Thus it is crucial to understand the possible strategic behaviour of these agents in terms of what arguments they should or would present. With an understanding of the strategic behaviour, it becomes possible to analyse various argument evaluation criteria in terms of their manipulability. More importantly, understanding strategic behaviour will allow us to devise argument evaluation criteria that ensure that certain desired properties are achieved. To this end, we propose to apply the tools of game theory and mechanism design to abstract argumentation frameworks.

Mechanism design studies the problem of how to ensure that good system-wide decisions or outcomes arise in situations that involve multiple self-interested agents. Often the goal is to choose an outcome or make a decision which reflects the agents' preferences. The challenge, however, is that the agents' preferences are private, and agents may try to manipulate the system so as to ensure an outcome or decision which is desirable for themselves, possibly at the expense of others. In the rest of this section we provide an overview of key game theory and mechanism design concepts used in this paper. A more thorough introduction to game theory and mechanism design can be found in many game-theory and economics texts [Mas-Colell et al. 1995].

3.1 Game Theory

The field of game theory studies strategic interactions of self-interested agents. We assume that there is a set of self-interested agents, denoted by I. We let θi ∈ Θi denote the type of agent i, which is drawn from some set of possible types Θi. The type represents the private information and preferences of the agent. An agent's preferences are over outcomes o ∈ O, where O is the set of all possible outcomes. We assume that an agent's preferences can be expressed by a utility function ui(o, θi) which depends on both the outcome, o, and the agent's type, θi. Agent i prefers outcome o1 to o2 when ui(o1, θi) > ui(o2, θi).

When agents interact, we say that they are playing strategies. A strategy for agent i, si(θi), is a plan that describes what actions the agent will take for every decision that the agent might be called upon to make, for each possible piece of information that the agent may have at each time it is called to act. That is, a strategy can be thought of as a complete contingency plan for an agent. We let Σi denote the set of all possible strategies for agent i, and thus si(θi) ∈ Σi. When it is clear from the context, we will drop the θi in order to simplify the notation. We let s = (s1(θ1), . . . , sI(θI)) denote the strategy profile in which each agent i plays strategy si(θi). As a notational convenience we define

s−i(θ−i) = (s1(θ1), . . . , si−1(θi−1), si+1(θi+1), . . . , sI(θI))

and thus s = (si, s−i). We then interpret ui((si, s−i), θi) to be the utility of agent i with type θi when all agents play the strategies specified by strategy profile (si(θi), s−i(θ−i)).

Since the agents are all self-interested, they will try to choose strategies which maximize their own utility. Since the strategies of other agents also play a role in determining the outcome, the agents must take this into account. The solution concepts in game theory determine the outcomes that will arise if all agents are rational and strategic. The most well-known solution concept is the Nash equilibrium. A Nash equilibrium is a strategy profile in which each agent is following a strategy which maximizes its own utility, given its type and the strategies of the other agents.

Definition 3.1 Nash Equilibrium. A strategy profile s∗ = (s∗1, . . . , s∗I) is a Nash equilibrium if no agent has an incentive to change its strategy, given that no other agent changes. Formally,

∀i, ∀s′i, ui(s∗i, s∗−i, θi) ≥ ui(s′i, s∗−i, θi).

Although the Nash equilibrium is a fundamental concept in game theory, it does have several weaknesses. First, there may be multiple Nash equilibria, and so agents may be uncertain as to which equilibrium they should play. Second, the Nash equilibrium implicitly assumes that agents have perfect information about all other agents, including the other agents' preferences.

A stronger solution concept in game theory is the dominant-strategy equilibrium. A strategy si is said to be dominant if, by playing it, the utility of agent i is maximized no matter what strategies the other agents play.

Definition 3.2 Dominant Strategy. A strategy s∗i is dominant if

∀s−i, ∀s′i, ui(s∗i, s−i, θi) ≥ ui(s′i, s−i, θi).

Sometimes, we will refer to a strategy satisfying the above definition as weakly dominant. If the inequality is strict (i.e. > instead of ≥), we say that the strategy is strictly dominant.

A dominant-strategy equilibrium is a strategy profile where each agent is playing a dominant strategy. This is a very robust solution concept since it makes no assumptions about what information the agents have available to them, nor does it assume that all agents know that all other agents are being rational (i.e. trying to maximize their own utility). However, there are many strategic settings where no agent has a dominant strategy.
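
As a toy illustration of Definition 3.2 (our example, not from the paper), weak dominance can be tested by brute force in a two-player game given as a payoff matrix for player 1 (rows are player 1's strategies, columns player 2's):

    u1 = [[3, 1],   # player 1's payoffs
          [2, 0]]

    def weakly_dominant(s):
        """Definition 3.2: s does at least as well as any alternative s'
        against every column strategy of the opponent."""
        return all(u1[s][t] >= u1[sp][t]
                   for t in range(len(u1[0]))
                   for sp in range(len(u1)))

    print([s for s in range(len(u1)) if weakly_dominant(s)])  # [0]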

3.2 Mechanism Design

The problem that mechanism design studies is how to ensure that a desirable system-wide outcome or decision is made when there is a group of self-interested agents who have preferences over the outcomes. In particular, we often want the outcome to depend on the preferences of the agents. This is captured by a social choice function.

Definition 3.3 Social Choice Function. A social choice function is a rule f : Θ1 × . . . × ΘI → O that selects some outcome f(θ) ∈ O, given agent types θ = (θ1, . . . , θI).

The challenge, however, is that the types of the agents (the θi's) are private and known only to the agents themselves. Thus, in order to select an outcome with the social choice function, one has to rely on the agents to reveal their types. However, for a given social choice function, an agent may find that it is better off if it does not reveal its type truthfully, since by lying it may be able to cause the social choice function to choose an outcome that it prefers. Instead of trusting the agents to be truthful, we use a mechanism to try to reach the correct outcome.

A mechanism M = (Σ, g(·)) defines the set of allowable strategies that agents can choose, with Σ = Σ1 × · · · × ΣI where Σi is the strategy set for agent i, and an outcome function g(s) which specifies an outcome o for each possible strategy profile s = (s1, . . . , sI) ∈ Σ. This defines a game in which agent i is free to select any strategy in Σi, and, in particular, will try to select a strategy which will lead to an outcome that maximizes its own utility. We say that a mechanism implements social choice function f if the outcome induced by the mechanism is the same outcome that the social choice function would have returned if the true types of the agents were known.

Definition 3.4 Implementation. A mechanism M = (Σ, g(·)) implements social choice function f if there exists an equilibrium s∗ such that

∀θ ∈ Θ, g(s∗(θ)) = f(θ).

While the definition of a mechanism puts no restrictions on the strategy spaces of the agents, an important class of mechanisms is that of direct-revelation mechanisms (or simply direct mechanisms).

Definition 3.5 Direct-Revelation Mechanism. A direct-revelation mechanism is a mechanism in which Σi = Θi for all i, and g(θ) = f(θ) for all θ ∈ Θ.

In words, a direct mechanism is one where the strategies of the agents are to announce a type, θ′i, to the mechanism. While it is not necessary that θ′i = θi, the important Revelation Principle (see below for more details) states that if a social choice function f(·) can be implemented, then it can be implemented by a direct mechanism where every agent reveals its true type [Mas-Colell et al. 1995]. In such a situation, we say that the social choice function is incentive compatible.

Definition 3.6 Incentive Compatible. The social choice function f(·) is incentive compatible (or truthfully implementable) if the direct mechanism M = (Θ, g(·)) has an equilibrium (θ1, . . . , θI), i.e. if truthful revelation is an equilibrium.

If the equilibrium concept is the dominant-strategy equilibrium, then the social choice function is strategy-proof. In this paper we will on occasion call a mechanism incentive-compatible or strategy-proof. This means that the social choice function that the mechanism implements is incentive-compatible or strategy-proof.

3.3 The Revelation Principle

Determining whether a particular social choice function can be implemented, and in particular, finding a mechanism which implements a social choice function, appears to be a daunting task. In the definition of a mechanism, the strategy spaces of the agents are unrestricted, leading to an infinitely large space of possible mechanisms. However, the Revelation Principle states that we can limit our search to a special class of mechanisms [Mas-Colell et al. 1995, Ch 14].

Theorem 3.7 Revelation Principle. If there exists some mechanism that implements social choice function f in dominant strategies, then there exists a direct mechanism that implements f in dominant strategies and is truthful.

The intuitive idea behind the Revelation Principle is fairly straightforward. Suppose that you have a, possibly very complex, mechanism, M, which implements some social choice function, f. That is, given agent types θ = (θ1, . . . , θI), there exists an equilibrium s∗(θ) such that g(s∗(θ)) = f(θ). Then, the Revelation Principle states that it is possible to create a new mechanism, M′, which, when given θ, will execute s∗(θ) on behalf of the agents and then select outcome g(s∗(θ)). Thus, each agent is best off revealing θi, resulting in M′ being a truthful, direct mechanism for implementing social choice function f.

The Revelation Principle is a powerful tool when it comes to studying implementation. Instead of searching through the entire space of mechanisms to check whether one implements a particular social choice function, the Revelation Principle states that we can restrict our search to the class of truthful, direct mechanisms. If we cannot find a mechanism in this space which implements the social choice function of interest, then there does not exist any mechanism which will do so.

It should be noted that while the Revelation Principle is a powerful analysis tool, it does not imply we should only design direct mechanisms. Some reasons why one rarely sees direct mechanisms in the "real world" include (among others): they can place a high computational burden on the mechanism since it is required to execute agents' strategies; agents' strategies may be computationally difficult to determine; and agents may not be willing to reveal their true types because of privacy concerns. Having said that, analysing direct mechanisms suffices for our purposes in this paper.

4. MECHANISM DESIGN FOR ABSTRACT ARGUMENTATION

In this section we define the mechanism design problem for abstract argumentation. In particular, we specify the agents' type spaces and utility functions, what sort of strategic behavior agents might indulge in, as well as the kinds of social choice functions we are interested in implementing.

We define a mechanism with respect to an argumentation framework 〈A, ⇀〉 with semantics S, and we assume that there is a set of I self-interested agents. We define an agent's type to be its set of arguments.

Definition 4.1 Agent Type. Given an argumentation framework 〈A, ⇀〉, the type of agent i, Ai ⊆ A, is the set of arguments that the agent is capable of putting forward. A type may also include additional private information that determines the agent's preferences over the outcomes of argumentation.

There are two things to note about this definition. Firstly, an agent's type can be seen as a reflection of its expertise or domain knowledge. For example, medical experts may only be able to comment on certain aspects of forensics in a legal case, while a defendant's family and friends may be able to comment on his/her character. Also, such expertise may overlap, so agent argument sets are not necessarily disjoint. For example, two medical doctors might have some identical arguments, and so on.

The second thing to note about the definition is that agent types do not include the defeat relation. In other words, we implicitly assume that the notion of defeat is common to all agents. That is, given two arguments, no agent would dispute whether or not one attacks the other. This is a reasonable assumption in systems where agents use the same logic to express arguments, or at least multiple logics for which the notion of defeat is accepted by everyone (e.g. conflict between a proposition and its negation). Disagreement over the defeat relation itself requires a form of hierarchical (meta) argumentation [Modgil 2009], which is a powerful concept, but is beyond the scope of the present paper.

Given the agents' types (argument sets), a social choice function f maps a type profile into a subset of arguments:

f : 2^A × . . . × 2^A → 2^A

While our definition of an argumentation mechanism will allow for generic social choice functions which map type profiles into subsets of arguments, we will be particularly interested in argument acceptability social choice functions.

Definition 4.2 Argument Acceptability Social Choice Functions. Given an argumentation framework 〈A, ⇀〉 with semantics S, and given a type profile (A1, . . . , AI), the argument acceptability social choice function f is defined as the set of acceptable arguments given the semantics S. That is,

f(A1, . . . , AI) = Acc(〈A1 ∪ . . . ∪ AI, ⇀〉, S).

As is standard in the mechanism design literature, we assume that agents have preferences over the outcomes o ∈ 2^A, and we represent these preferences using utility functions, where ui(o, Ai) denotes agent i's utility for outcome o when its type is argument set Ai.

Agents may not have an incentive to reveal their true type because they may be able to influence the final argument status assignment by lying, and thus obtain higher utility. There are two ways that an agent can lie in our model. On one hand, an agent might lie by presenting arguments that it does not have in its argument set. A more insidious form of manipulation occurs when an agent decides to hide some of its arguments. By refusing to reveal certain arguments, an agent might be able to break defeat chains in the argument framework, thus changing the final set of acceptable arguments. For example, a witness may hide evidence that implicates the defendant if the evidence also undermines the witness's own character.

Later in the paper, through our case study (Section 5), we will first assume that there is an external verifier that is capable of checking whether it is possible for a particular agent to actually make a particular argument (Section 5.3). Informally, this means that presented arguments, while still possibly defeasible, must at least be based on some sort of demonstrable 'plausible evidence.' If an agent is caught making up arguments then it will be removed from the mechanism. For example, in a court of law, any act of perjury by a witness is punished, at the very least, by completely discrediting all evidence produced by the witness. Moreover, in a court of law, arguments presented without any plausible evidence are normally discarded (e.g. "I did not kill him, since I was abducted by aliens at the time of the crime!"). For all intents and purposes this assumption (also made by Glazer and Rubinstein [Glazer and Rubinstein 2001]) removes the incentive for an agent to make up facts. Later in the paper (Section 5.4), we will drop this assumption and explore the case where agents can also lie.

As mentioned in the previous subsection, a strategy of an agent specifies a complete plan that describes what action the agent takes for every decision that a player might be called upon to take, for every piece of information that the player might have at each time that it is called upon to act. In our model, the actions available to an agent involve announcing sets of arguments. Thus a strategy si ∈ Σi for agent i would specify, for each possible subset of arguments that could define its type, what set of arguments to reveal. For example, a strategy might specify that an agent should reveal only half of its arguments without waiting to see what other agents are going to do, while another strategy might specify that an agent should wait and see what arguments are revealed by others before deciding how to respond. In particular, we place no restrictions on the allowable strategy spaces when we initially define an argumentation mechanism. Later, when we talk about direct argumentation mechanisms, we will further restrict the strategy space.

MD Concept                                              | ArgMD Instantiation
Agent type θi ∈ Θi                                      | Agent's arguments θi = Ai ⊆ A, and possibly other information relevant to determining its preferences
Outcome o ∈ O                                           | Accepted arguments Acc(.) ⊆ A
Utility ui(o, θi)                                       | Preferences over 2^A (what arguments end up being accepted)
Social choice function f : Θ1 × . . . × ΘI → O          | f(A1, . . . , AI) = Acc(〈A1 ∪ . . . ∪ AI, ⇀〉, S), given by some argument acceptability criterion
Mechanism M = (Σ, g(·)), Σ = Σ1 × · · · × ΣI, g : Σ → O | Σi is an argumentation strategy space, g : Σ → 2^A
Direct mechanism: Σi = Θi                               | Σi = 2^Ai (every agent reveals a set of arguments)
Truth revelation                                        | Revealing Ai

Table I. Abstract argumentation as a mechanism

We are now ready to define our argumentation mechanism. We first define a generic mechanism, and then specify a direct argumentation mechanism, which, due to the Revelation Principle (see Section 3.3), is the type of mechanism we will study in the rest of the paper.

Definition 4.3 Argumentation Mechanism. Given an argumentation framework AF = 〈A, ⇀〉 and semantics S, an argumentation mechanism is defined as

M^S_AF = (Σ1, . . . , ΣI, g(·))

where Σi is an argumentation strategy space of agent i and g : Σ1 × . . . × ΣI → 2^A.

Note that in the above definition, the notion of strategy is broadly construed and would depend on the protocol used. In a direct mechanism, however, the strategy spaces of the agents are restricted so that they can only reveal a subset of arguments.

Definition 4.4 Direct Argumentation Mechanism. Given an argumentation framework AF = 〈A, ⇀〉 and semantics S, a direct argumentation mechanism is defined as

M^S_AF = (Σ1, . . . , ΣI, g(·))

where Σi = 2^Ai and g : Σ1 × . . . × ΣI → 2^A.

In Table I, we summarise the mapping of multi-agent abstract argumentation as an instance of a mechanism design problem.

5. CASE STUDY: A MECHANISM BASED ON GROUNDED SEMANTICS

In this section, our aim is to demonstrate the power of the ArgMD approach by showing how it can be used to systematically analyse the strategic incentives imposed by a well-established argument evaluation criterion. In particular, we specify a direct-revelation argumentation mechanism, in which agents' strategies are to reveal sets of arguments, and where the mechanism calculates the outcome using the sceptical (grounded) extension.7 That is, we look at the grounded semantics as if it were designed as a mechanism and analyse it from that perspective. We show that, in general, this mechanism gives rise to strategic manipulation. We prove, however, that under various conditions, this mechanism turns out to be strategy-proof.

We have chosen the grounded semantics in this case study because it is well-understood and contains arguments accepted by virtually every other semantics. It also captures a strong form of scepticism due to its minimality. A consequence of our subsequent analysis is that even when the mechanism adopts the most sceptical approach, it may still fail to ensure truth revelation if agents are strategic, unless the argument graph satisfies some strong conditions.

5.1 The Mechanism

In a direct argumentation mechanism, each agent i's available actions are Σi = 2^Ai. We will refer to a specific action (i.e. set of declared arguments) as Ai ∈ Σi.

We now present a direct mechanism for argumentation based on a sceptical argument evaluation criterion. The mechanism calculates the grounded extension given the union of all arguments revealed by the agents.

Definition 5.1 Grounded Direct Argumentation Mechanism. A grounded direct argumentation mechanism for argumentation framework 〈A, ⇀〉 is M^grnd_AF = (Σ1, . . . , ΣI, g(.)) where:

– Σi = 2^Ai is the set of strategies available to each agent i;
– g : Σ1 × · · · × ΣI → 2^A is an outcome rule defined as g(A1, . . . , AI) = Acc(〈A1 ∪ · · · ∪ AI, ⇀〉, S_grnd), where S_grnd denotes sceptical grounded acceptability semantics.
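
A minimal Python sketch of this outcome rule (ours; the grounded extension is computed by iterating the characteristic function from the empty set, and the split of Figure 1's arguments among three agents is hypothetical):

    defeats = {("a2", "a1"), ("a3", "a2"), ("a4", "a1"), ("a5", "a4")}

    def grounded(revealed):
        """Grounded extension of the framework induced by the revealed arguments."""
        def F(S):
            return {a for a in revealed
                    if all(any((c, b) in defeats for c in S)
                           for b in revealed if (b, a) in defeats)}
        S = set()
        while F(S) != S:
            S = F(S)
        return S

    def g(*revealed_sets):                  # the outcome rule g(A1, ..., AI)
        return grounded(set().union(*revealed_sets))

    # e.g. three agents jointly revealing all of Figure 1's arguments:
    print(sorted(g({"a1", "a3"}, {"a2", "a5"}, {"a4"})))  # ['a1', 'a3', 'a5']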

Before commencing the analysis below, let us reiterate our key assumptions: there is a common language (i.e. shared ontology) for describing/understanding arguments; the defeat relation is known by all agents (see discussion above); agents do not know who has what arguments (i.e. types are private information); and not all arguments may end up being presented by their respective agents (i.e. they may not reveal their true types).

5.2 Preferences: Agent-wise Focal Arguments

In many realistic dialogues, each agent i is interested in the acceptance of a particular argument αi ∈ Ai, which we can refer to as the focal argument of agent i. In such a case, the other arguments in Ai \ {αi} can merely be instrumental towards the acceptance of the focal argument. We are interested in characterising conditions under which the mechanism M^grnd_AF is strategy-proof for scenarios in which each agent has a focal argument.8 We begin with the following definition.

7 In the remainder of the paper, we will use the terms sceptical, sceptical grounded, and grounded interchangeably, since the paper focuses on the grounded semantics, and since there is always a single grounded extension.
8 Other preference criteria are also reasonable, such as wanting to win as many arguments as possible to impress an audience (see our earlier preliminary work [Rahwan and Larson 2008]), or wanting to win any argument from a set that supports the same conclusion.


Definition 5.2 Focal Argument for an Agent. An agent i has a focal argument αi ∈ Ai if and only if, for all o1, o2 ∈ O: if αi ∈ o1 and αi ∉ o2 then ui(o1, Ai) > ui(o2, Ai); otherwise ui(o1, Ai) = ui(o2, Ai).

Let o ∈ O be an arbitrary outcome. If αi ∈ o, we say that agent i wins in outcome o. Otherwise, i loses in outcome o.

5.3 Strategy-Proofness When Agents can Hide Arguments

In this section, we will study the mechanism M^grnd_AF under the condition that agents can manipulate the outcome by hiding arguments.

5.3.1 Characterising Strategy-Proofness. To investigate whether mechanism M^grnd_AF is strategy-proof for any class of argumentation frameworks, consider the following example.

Fig. 3. Hiding an argument is beneficial (case of focal arguments): (a) the argument graph in the case of full revelation, α1 ⇀ α2 ⇀ α3 ⇀ α4; (b) the argument graph with α1 withheld, α2 ⇀ α3 ⇀ α4

Example 5.3. Consider a grounded direct argumentation mechanism with agents x, y and z with types Ax = {α1, α4}, Ay = {α2} and Az = {α3} respectively, and with focal arguments defined as follows: αx = α4; αy = α2; αz = α3. Let the defeat relation be defined as follows: ⇀ = {(α1, α2), (α2, α3), (α3, α4)}. If agents reveal all their arguments, we have the graph shown in Figure 3(a), with the accepted arguments marked by boxes. Here, agent z is the only winner.

It turns out that the mechanism is susceptible to strategic manipulation, even if we suppose that agents do not lie by making up arguments (i.e. they may only withhold some arguments). In this case, for both agents y and z, revealing their true types weakly dominates revealing nothing at all (since hiding their single focal arguments can only guarantee their respective loss). However, it turns out that agent x is better off only revealing α4. By withholding α1, the resulting argument network becomes as depicted in Figure 3(b). Under this outcome, x wins, which is better for x than the truth-revealing strategy.
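
This manipulation can be replayed with the sketch given earlier (argument names again rendered as strings a1–a4; illustration ours):

    defeats = {("a1", "a2"), ("a2", "a3"), ("a3", "a4")}

    def grounded(revealed):
        def F(S):
            return {a for a in revealed
                    if all(any((c, b) in defeats for c in S)
                           for b in revealed if (b, a) in defeats)}
        S = set()
        while F(S) != S:
            S = F(S)
        return S

    truthful = grounded({"a1", "a4"} | {"a2"} | {"a3"})   # x, y, z all truthful
    deviation = grounded({"a4"} | {"a2"} | {"a3"})        # x withholds a1
    print(sorted(truthful))   # ['a1', 'a3']: x's focal argument a4 loses
    print(sorted(deviation))  # ['a2', 'a4']: hiding a1 makes a4 win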

Remark 5.4. Given an arbitrary argumentation framework AF and agents with focal arguments, mechanism M^grnd_AF is not strategy-proof.

Having established this property, the natural question to ask is whether mechanism M^grnd_AF is incentive compatible or strategy-proof under some additional conditions. The following theorem provides a full characterisation of strategy-proof mechanisms for sceptical argumentation frameworks for agents with focal arguments.

Theorem 5.5. Let AF be an arbitrary argumentation framework, and let GE(AF) denote its grounded extension. Mechanism M^grnd_AF, in which agents can hide arguments and have focal arguments, is strategy-proof if and only if AF satisfies the following condition: ∀i ∈ I, ∀S ⊆ Ai and ∀A−i, we have that αi ∉ GE(〈Ai ∪ A−i, ⇀〉) implies αi ∉ GE(〈(Ai \ S) ∪ A−i, ⇀〉).

Proof. See Appendix.
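
For small frameworks, the condition of Theorem 5.5 can be checked directly by brute force. The following sketch (ours; exponential in |A|, so illustrative only) enumerates every subset an agent could hide against every possible revelation by the others, and confirms that the condition fails for Example 5.3:

    from itertools import chain, combinations

    def subsets(xs):
        xs = list(xs)
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    def grounded(revealed, defeats):
        def F(S):
            return {a for a in revealed
                    if all(any((c, b) in defeats for c in S)
                           for b in revealed if (b, a) in defeats)}
        S = set()
        while F(S) != S:
            S = F(S)
        return S

    def condition_holds(A, defeats, A_i, focal):
        for A_rest in map(set, subsets(A - A_i)):          # opponents' arguments
            if focal in grounded(A_i | A_rest, defeats):
                continue                                    # truth already wins
            for S in map(set, subsets(A_i)):               # candidate hidden sets
                if focal in grounded((A_i - S) | A_rest, defeats):
                    return False                            # hiding S rescues focal
        return True

    # Example 5.3: agent x, type {a1, a4}, focal argument a4
    defeats = {("a1", "a2"), ("a2", "a3"), ("a3", "a4")}
    print(condition_holds({"a1", "a2", "a3", "a4"}, defeats, {"a1", "a4"}, "a4"))
    # False, as the manipulation in Example 5.3 showed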

This result is consistent with the literature on mechanism design. While general strategy-proofness results obtain only at the cost of dropping other desirable properties (like non-dictatorship [Gibbard 1973; Satterthwaite 1975]), positive results obtain by restricting the domain of types on which the mechanism is applied [Moulin 1980].

Although the above theorem gives us a full characterisation, it is difficult to apply in practice. In particular, the theorem does not give us an indication of how agents (or the mechanism designer) can identify whether the mechanism is strategy-proof for a class of argumentation frameworks by appealing to their structural (graph-theoretic) properties. Below, we provide such an analysis.

5.3.2 Sufficient Graph-Theoretic Conditions for Strategy-Proofness with Focal Arguments. In this section, we investigate whether it is possible to provide sufficient graph-theoretic conditions that ensure mechanism M^grnd_AF is strategy-proof. But before we can do this, we present a couple of lemmas.

The following lemma, which is necessary for our subsequent proofs, shows that any acceptable argument is indirectly defended, against each of its defeaters (i.e. parents), by some initial argument. This highlights that initial arguments play an important role in the defence of every other acceptable argument. While the lemma is similar to a result by Amgoud and Cayrol in their analysis of preference-based argumentation frameworks [Amgoud and Cayrol 2002, Proposition 4.2], we list our formulation and proof here so the reader can appreciate the result in the context of Dung's theory.

Lemma 5.6. Let AF = 〈A, ⇀〉 be an argumentation framework. If argument α ∈ Acc(AF, S_grnd) then ∀P ∈ par_AF(α), ∃β ∈ IN(AF) such that β → P.

Proof. See Appendix.

As a side note, it is notable that the converse of the above lemma does not hold. Consider the argument graph depicted in Figure 4. Here, every defeater of α (namely P) has an indirect defeater that is an initial argument (namely β), yet α is not in the grounded extension.

Fig. 4. Counter-example to the converse of Lemma 5.6


With the above result in place, we can now explore what happens when we add a new argument (and its associated defeats) to a given argumentation framework, thus resulting in a new argumentation framework. In particular, we are interested in conditions under which arguments acceptable in the first framework are also accepted in the second. We show that this is true under the condition that the new argument does not indirectly defeat arguments acceptable in the first framework. This is stated in the following lemma, the proof of which makes use of Lemma 5.6 above.

Lemma 5.7. Let AF1 = 〈A, ⇀1〉 and AF2 = 〈A ∪ {α′}, ⇀2〉 such that ⇀1 ⊆ ⇀2 and (⇀2 \ ⇀1) ⊆ ({α′} × A) ∪ (A × {α′}). If α is in the grounded extension of AF1 and α′ does not indirectly defeat α, then α is also in the grounded extension of AF2.

Proof. See Appendix.

With the above lemma in place, we are now ready to provide an intuitive, graph-theoretic condition that is sufficient to ensure that Mgrnd

AF is strategy-proof whenagents have focal arguments.

Theorem 5.8. Suppose every agent i ∈ I has a focal argument αi ∈ Ai. If eachagent’s type contains no (in)direct defeat against αi (formally ∀i ∈ I, @α ∈ Ai suchthat α → αi), then Mgrnd

AF is strategy-proof.

Proof. See Appendix.

Note that in the theorem, → is over all arguments in A. Intuitively, to guaranteethe strategy-proof property for agents with focal arguments, it suffices that no(in)direct defeats exist from an agent’s own arguments to its focal argument. Saiddifferently, each agent i’s arguments must not undermine its own focal argument,neither explicitly and implicitly. By ‘explicitly,’ we mean that none of i’s ownarguments can defeat its focal argument. By ‘implicitly,’ we mean that other agentscannot possibly present arguments that reveal an indirect defeat line between i’sarguments and i’s own focal argument. More concretely, in Example 5.3 and Figure3(a), while agent x’s argument set Ax = α1, α4 is conflict-free, when agents yand z presented their own arguments α2 and α3, they revealed an implicit conflictbetween x’s arguments and its focal argument. In other words, they showed thatx contradicts himself (i.e. committed a fallacy of some kind).

Example 5.9 demonstrate a case in which the condition of Theorem 5.8 holds.

α1 α3 α6 α7 α8

α2

α4

α5

Fig. 5. Strategy-proof with focal arguments

Journal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 19

Example 5.9. Consider grounded direct argumentation mechanism with threeagents x, y and z with types Ax = α1, α6, α7, Ay = α2, α3, α7 and Az =α4, α5, α8 respectively, and with focal arguments defined as follows: αx = α1;αy = α7; αz = α8. Let the defeat relation be defined as follows: = (α1, α2),(α1, α3), (α1, α4), (α2, α5), (α2, α6), (α6, α7), (α7, α8). If agents reveal all theirarguments, we have the graph shown in Figure 5, with the accepted argumentsmarked by boxes. Agents x and z both win, while agent y loses.

In Example 5.9, the reader can verify that regardless of what other agents revealor hide, no individual agent can benefit (i.e. move its focal argument from beingrejected to being accepted) by hiding one of its own arguments.

One may reasonably ask if the sufficient condition in Theorem 5.8 is also neces-sary for agents to reveal all their arguments truthfully. As Example 5.10 shows, thisis not the case. In particular, for certain argumentation frameworks, an agent mayhave truthtelling as a dominant strategy despite the presence of indirect defeatsamong its own arguments.

α1 α2 α3 α4

α5

Fig. 6. Strategy-proofness despite indirect self-defeat

Example 5.10. Consider the variant of Example 5.3 with the additional argu-ment α5 and defeat (α5, α3). Let the agent types be Ax = α1, α4, α5, Ay = α2andAz = α3 respectively. Recall that the focal arguments were defined as follows:αx = α4; αy = α2; αz = α3. The full argument graph is depicted in Figure 6. Withfull revelation, the mechanism outcome rule produces the outcome o = α1, α4, α5.

Note that in Example 5.10, truth revelation is now a dominant strategy for xdespite the fact that α1 → α4 (note that here, x gains nothing by hiding α1). Thishinges on the presence of an argument (namely α5) that cancels out the negativeeffect of the (in)direct self-defeat among x’s own arguments.

5.3.3 On Relevance and Focal Arguments. Going back to Example 5.9 and Fig-ure 5, it is interesting to see that α6 and α7 are irrelevant as far as x’s focalargument (namely α1) is concerned. Yet x does not gain by hiding them –that isrevealing α6 and α7 weakly dominates hiding them. That is not to say that α6

and α7 are not relevant to the context of the dialogue, however. Indeed, they arerelevant since they influence the final acceptability status of the focal arguments ofother agents (namely αy = α7 and αz = α8). To make these notions more precise,we offer the following definitions.

Definition 5.11 Relevant Argument. Let 〈A,〉 be an argumentation framework.Let α1, α2 ∈ A. We say that α1 is relevant to α2 if and only if α1 = α2, or thereis a path from α1 to α2 in the corresponding argument graph. Otherwise, we saythat α1 is irrelevant to α2.

Journal of the ACM, Vol. V, No. N, Month 20YY.

20 · Rahwan et al.

It is clear to see that in Example 5.9 above, none of the arguments are relevantto α1. In particular, agent x’s own arguments α6 and α7 are irrelevant to x’sfocal argument α1. However, they are relevant to the context, which includes otheragents in the scenario.

Definition 5.12 Context-Relevant Argument. Let I be a set of agents with focalarguments A = α1, . . . , αI. Let agent types be A1, . . . ,AI with an arbitrarybinary defeat relation on A = A1 ∪ · · · ∪ AI . We say that an argument α ∈ A iscontext-relevant if and only if it is relevant to some focal argument in A. Otherwise,we say that α is context-irrelevant.

Again in Example 5.9 above, we can see that α6 and α7 are context-relevant, sincethere is a path from those arguments to agent z’s focal argument α8. Argumentα5, on the other hand, is context-irrelevant.

With the above definitions in place, we present the following theorem, whichstates that hiding context-irrelevant arguments does not change the outcome of themechanism as far as the agents’ preferences are concerned. In other words, the setof accepted focal arguments remains exactly the same.

Theorem 5.13. Suppose every agent i ∈ I has a focal argument αi ∈ Ai. Ifthe agents hide any arbitrary set of context-irrelevant arguments, the set of winningagents is the same as in the case of full-revelation.

Proof. See Appendix.

As can be seen in Example 5.9, hiding α4 and α5 does not change the fact thatx and z won (i.e. got their focal arguments accepted), since these arguments arecontext-irrelevant.

Together, Theorems 5.13 and 5.8 tell us that given an argumentation framework,as long as agents know that it is not possible for some of their arguments to becomerelevant to their focal argument or for a path to be created which causes an indirectdefeat of their focal argument, then they are best off revealing all their arguments.Hence, these results also give advice to an agent about what it should do givenknowledge of generic properties of the argumentation framework defined by thecontext.

5.4 Strategy-Proofness When Agents can also Lie

In the previous section, we restricted agent strategies to showing or hiding argu-ments in their own type. We did not allow agents to reveal arguments that areoutside of their types. In other words, agents were not allowed to lie by statingsomething they did not believe, but only by hiding something they do believe.

In this section, we relax this assumption. As we will show below, the characteri-zation of strategy-proofness is nearly identical to the case where agents could onlyhide, but not lie about, arguments (Theorem 5.5).

Theorem 5.14. Let AF be an arbitrary argumentation framework, and let GE (AF )denote its grounded extension. Mechanism Mgrnd

AF , in which agents can hide orlie about arguments, is strategy-proof for agents with focal arguments if and onlyif AF satisfies the following condition: ∀i ∈ I, ∀S ⊆ A and ∀A−i, we haveαi /∈ GE (〈Ai ∪ A−i,〉) implies αi /∈ GE (〈S ∪ A−i,〉).Journal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 21

Proof. The proof is essentially the same as the proof of Theorem 5.5. The onlydifference is that S ranges over A instead of Ai.

As we did in the case of hiding arguments, this result can be somehow weakened toyield a more intuitive sufficient condition for strategy-proofness. In particular, weprovide a sufficient graph-theoretic condition for strategy-proofness in cases whereagents can hide arguments or lie by revealing arguments not in their argument sets.

Theorem 5.15. Suppose every agent i ∈ I has a focal argument αi ∈ Ai, andthat agents can both hide or lie about arguments. If the following conditions hold:

(A) each agent’s type contains no (in)direct defeat against αi (formally ∀i ∈I, @β ∈ Ai such that β → αi);

(B) for any agent i, no argument outside i’s type (in)directly defends αi (formally∀i ∈ I, @β ∈ A\Ai such that β # αi);

then MgrndAF is strategy-proof.

Proof. See Appendix.

Let us try to interpret the above theorem intuitively. Basically, the theoremrests on two key conditions: (a) that an agent cannot benefit from hiding any ofits own arguments, because its arguments cannot “harm” its focal argument; and(b) that an agent cannot benefit from revealing any argument it does not have,because these arguments cannot “benefit” its focal argument. For the theorem tohold, these conditions must be satisfied for every agent, no matter what the otheragents reveal.

6. THE EPISTEMIC CONDITIONS FOR STRATEGY-PROOFNESS

Strategy-proofness means that the truthful results (given the true types of theagents) should be supported by a dominant strategy equilibrium. We showed thatstrategy-proofness requires some fairly strong conditions on the structure of theargument graph and agents’ arguments. One might also suspect that strategy-proofness requires strong epistemic requirements, such as common knowledge ofthe argument graph. In this section, we explore this question.

Let us first introduce the basics of the event-based approach to knowledge [Au-mann 1976]. Given a game we start with a class of states of the world. Each stateof the world represent any possible combination of actions, beliefs, information, etc.that is relevant for the outcome of the game. A fact or property of the game isidentified with an event, i.e. a set of states of the world in which it is true. We saythat an agent knows a fact F if in every possible state of the world that she cannotdistinguish from the actual state of the world, F is true.

Formally, we assume that agents partition the class of states of the world, Ω inclasses of the form Pi(ω) = ω′ ∈ Ω : ω

′is indistinguishable from ω. An event is

represented by F ⊆ Ω, i.e. the sets of the world in which it holds. Then, if ω is thetrue state of the world, we say that “i knows F” if and only if Pi(ω) ⊆ F . Givenany property T that can be logically derived from F , then “i knows F” implies “iknows T”.

We say that an event F is mutually known (or everyone knows it) if all playersknow it. Formally, F is mutually known in the state of the world ω if and only if for

Journal of the ACM, Vol. V, No. N, Month 20YY.

22 · Rahwan et al.

every i, Pi(ω) ⊆ F . We say that F is common knowledge at the state of the world ωif each agent knows F , knows that everybody else knows F , knows that everybodyelse knows that she knows F , etc. A useful way of representing common knowledgeis by means of self-evident events. Namely, an event E is said to be self-evident iffor all ω ∈ E, and each agent i, Pi(ω) ⊆ E. That is, if E is true, everybody knowsE. An event F is common knowledge in the state ω if there exists a self-evidentevent E such that ω ∈ E ⊆ F . Finally, an event F is distributed knowledge if itholds in all states that combine the information available to the agents. Formally,F is distributed knowledge at state ω, if ∩ni=1Pi(ω) ⊆ F [Fagin et al. 1995].

Suppose that the agents have a common prior probability distribution that yieldsa positive probability to states in which there is mutual knowledge of the payoffs andof the rationality of the agents while there is common knowledge of the conjecturesthe agents held about the actions of the others. Then, all the conjectures inducea profile of distributions that constitute a mixed Nash equilibrium of the game[Aumann and Brandenburger 1995].9 In the case of games in which only pureactions are played, a conjecture is the profile of actions that it is assumed that theothers will play.

Although this condition is sufficient for Nash equilibrium (agents might coordi-nate in an equilibrium just by accident), it is a good indication as to how strict theepistemic demands are to ensure that the outcome of a game is an equilibrium.

In our setting, these conditions can be substantially weakened. To do that, letus first note that each event F can be associated to a proposition p, and we willdenote this by F = [p].

In what follows, we will introduce, for each i two conditions, (A∗i ) and (B∗i ), theindividual “counterparts” of the conditions (A) and (B) of Theorem 5.15. Thatis:

(A∗i ) @β ∈ Ai such that β → αi);(B∗i ) @β ∈ A\Ai such that β # αi);

Then we have:

Theorem 6.1. If each agent i has true type Ai and knows MgrndAF as well as

(A∗i ) and (B∗i ) then 〈A1, . . . ,An〉 is a dominant-strategy equilibrium.

Proof. See Appendix.

Notice that this holds for any state of the world in which the (A∗i ) and (B∗i ) aretrue for every agent i and the mechanism isMgrnd

AF . As it can be seen in the proof ofTheorem 5.15, only the individual conditions (A∗i ) and (B∗i ) play a role in provingthat each individual agent is not better off by declaring something other than herreal type. So, each agent does not even need to know the actual graph 〈A,〉 toknow that if (A∗i ) and (B∗i ) hold, she is better off by declaring her own type.

One might argue that an agent i cannot know (A∗i ) and (B∗i ) without knowingA and . However, this need not be the case. For example, I may argue that “I’mthirsty,” and also argue that “I am hungry.” From my understanding of the domain,

9A mixed Nash equilibrium is an equilibrium in which each agent’s strategy can be mixed. A

mixed strategy is an assignment of a probability to each pure strategy.

Journal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 23

I may know that it is impossible for anyone to show that these two argumentsindirectly defeat one another (satisfying (A∗i )). Yet, I may know nothing about theexistence of a colleague’s argument in favour of, say, attending a particular movie.

Theorem 6.1 has this following consequence:

Corollary 6.2. Suppose each agent i has a true type Ai and can hide ar-guments or lie. If everyone knows [Mgrnd

AF is the mechanism] (i.e. it is mutuallyknown), and [(A∗i)]ni=1 and [(B∗i)]ni=1 are distributed knowledge among theagents (such that every i knows (A∗i ) and (B∗i )), then 〈A1, . . . ,An〉 is a dominant-strategy equilibrium.

Therefore, no property has to be common knowledge to ensure the truthful dec-laration result. All we need is mutual knowledge of the mechanism and distributedknowledge of the relevant conditions.

A similar claim can be made for hiding information, according to Theorem 5.8,since its proof only amounts to know [Mgrnd

AF is the mechanism] and [(A∗i )]:

Corollary 6.3. Suppose each agent i has a true type Ai and can only hide argu-ments. If everyone knowsMgrnd

AF and there is distributed knowledge that [(A*)]ni=1

(such that every i knows (A∗i )), then 〈A1, . . . ,An〉 is a dominant-strategy equilib-rium.

7. INCENTIVE COMPATIBILITY OF GROUNDED SEMANTICS

In Section 5, we explored in some detail the various conditions for the groundedmechanism Mgrnd

AF to be strategy-proof. That is, we explored conditions underwhich it is in the best interest of every agent to reveal its arguments truthfully,regardless of what other agents reveal. This corresponds to the strategy profilein which all agents reveal their arguments truthfully being a dominant strategyequilibrium.

However, dominant strategy equilibrium is a rather strong solution concept, andis unlikely to hold in many realistic scenarios. Indeed, the strategy-proofness of amechanism is a strong version of incentive compatibility. A mechanism is incentivecompatible if it enforces truth telling in some equilibrium, as opposed to a dom-inant strategy equilibrium. Formally, the social choice function f(·) is incentivecompatible (or truthfully implementable) if the direct mechanism M = (Θ, g(·))has an equilibrium (θ1, . . . , θn) (see Definition 3.6). Against this background, wenow explore conditions under which the grounded mechanism Mgrnd

AF is incentivecompatible –i.e. whether truth-telling is a Nash equilibrium.

The theorem below gives a full characterisation of cases in which mechanismMgrnd

AF is incentive compatible. Note that the condition in this theorem is moregeneral than that in Theorem 5.14 since strategy-proofness implies incentive com-patibility, but not vice versa.

Theorem 7.1. Let AF be an arbitrary argumentation framework, and let GE (AF )denote its grounded extension. Mechanism Mgrnd

AF , in which agents can hide orlie about arguments, is incentive compatible for agents with focal arguments ifand only if AF satisfies the following condition: ∀i ∈ I, ∀S ⊆ A, we have αi /∈GE (〈Ai ∪ A−i,〉) implies αi /∈ GE (〈S ∪ A−i,〉), where A−i = A1 ∪ · · · ∪Ai−1 ∪ Ai+1 ∪ · · · ∪ AI are the actual types of agents other than i.

Journal of the ACM, Vol. V, No. N, Month 20YY.

24 · Rahwan et al.

Proof. The proof is very similar to the proof of Theorem 5.5. The differenceis that, to deal with the condition of incentive compatibility, the condition onlyrequires Ai to be the best-response to the truthful declaration A−i by other agents(as opposed to being the best-response to any variation of A−i in which other agentshide/lie).

As an illustration, consider the following example. It illustrates that in somecases, truth-telling in a Nash equilibrium, but not a dominant strategy-equilibrium.In this case, the convenience of declaring the whole truth depends critically on whatothers reveal.

Example 7.2. Consider the argument graph in Figure 7(a). Suppose we havetwo agents x and y with a common focal argument αx = αy = α4. Suppose alsothat agent types are Ax = α1, α2, α4 and Ay = α3, α4. Figures 7(a)–(e) showthe result of hiding different combinations of arguments. The accepted argumentsin each graph are marked in squares. Let the corresponding outcomes be oa . . . oe.

(a) Full revelation (b) Hiding α1 (c) Hiding α1 & α3 (d) Hiding α1 & α2

α1

α2

α3

α4

α2

α3

α4

α2

α4

α3

α4

(e) Hiding α2 & α3

α4α1

Fig. 7. Two Nash equilibria, one of which is truthful

Let us analyse the incentives in Example 7.2. First, it is clear that agents mustalways reveal their shared focal argument α4. It is easy to see that the outcomeoa of full revelation (Figure 7(a)) is a Nash equilibrium, since neither agent canbenefit by unilaterally hiding arguments, given what the other agent is revealing.In particular, hiding α1 can be harmful, while hiding either α2 or α3 (given thatα1 is revealed) is inconsequential to the fate of the focal argument α4.

However, oa is not a dominant strategy equilibrium. This is because, from thepoint of view of agent y, revealing α3 can be harmful if agent x decides (for someperverse reason) to hide α1. Thus, for agent y, the strategy of revealing α3, α4is dominated by revealing α4. Indeed, the dominant strategy equilibrium in thiscase is for x to reveal α1, α4 and for y to reveal α4, resulting in outcome oe.

In Example 7.2, the full-revelation Nash equilibrium produces the same result (asfar as their focal argument α4 is concerned) as the dominant strategy equilibriumstrategy profile (α1, α4, α4). Thus, agents may still choose to coordinate onfull revelation, thus revealing all relevant information. And even if they choosethe dominant strategy equilibrium, if the judge is predominantly interested in thestatus of α4, then no harm is done. But, as we shall see below, things are notalways this pleasant.

We now present another example, which highlights that even when truth rev-elation is a Nash equilibrium, other equilibria may be more attractive to agents.Journal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 25

Consequently, agents may choose to coordinate on a different, non-truthful equilib-rium. In other words, a coordinated lie may be better for everyone.

Example 7.3. Consider the argument graph in Figure 8(a). Suppose we have twoagents x and y with types Ax = α1, α3 and Ay = α2, α4 respectively. Supposethe agents’ focal arguments are αx = α1 and αy = α4. With full revelation, themechanism outcome rule produces the outcome oa = (Figure 8(a)). Figures8(b), (c) and (d) show the argument graph and accepted arguments (denoted byboxes) resulting from hiding α2, α3 and α2, α3 respectively. Their respectiveoutcomes are ob = α3, oc = α2, and od = α1, α4.

(a) Full revelation

α1

α4

α2 α3

(b) Hiding α2

α1

α4

α3

(c) Hiding α3

α1

α4

α2

(d) Hiding α2 & α3

α1

α4

Fig. 8. Two equilibria, with the truthful equilibrium being Pareto dominated

Let us analyse the incentives in Example 7.3. First, it is clear that agents mustalways reveal their respective focal arguments α1 and α4. It is easy to see thatthe outcome oa of full revelation (Figure 8(a)) is Nash equilibrium. Neither agentcan benefit by unilaterally hiding its other argument: by hiding α2 (Figure 8(b)),agent y cannot alter the fate of its focal argument α4, since its defeater α3 remains.Symmetrically, by hiding α3 (Figure 8(c)), agent x cannot alter the fate of itsfocal argument α1. Having said that, if both agents coordinate by simultaneouslyhiding α2 and α3 (Figure 8(d)), they achieve outcome od = α1, α4 and get boththeir focal arguments accepted. This outcome is also a Nash equilibrium, since anydeviation can only harm the focal argument of some agent. Note that equilibrium odis more desirable to both agents than equilibrium oa – outcome od Pareto dominatesoa.10

Let us reflect a bit. Examples 7.2 and 7.3 highlight an important aspect ofthe grounded mechanism Mgrnd

AF . In the absence of the strong conditions thatguarantee truthful revelation via strategy-proofness (Theorems 5.5 and 5.14), wemay be comforted to know that there is a greater class of situations in whichagents coordinate on the truth-telling equilibrium (Theorem 7.1). However, whilethe truth-telling equilibrium may be attractive to agents (Example 7.2), it can attimes be a less desirable equilibrium (Example 7.3), making agents unlikely to selectit.

We summarise the above discussion in the following remark.

10An outcome o1 Pareto dominates o2 if o1 makes at least one agent better off without making

any agent worse off.

Journal of the ACM, Vol. V, No. N, Month 20YY.

26 · Rahwan et al.

Remark 7.4. With mechanism MgrndAF , under incentive compatibility, truth rev-

elation is not guaranteed to be the Pareto dominant equilibrium.

8. DISCUSSION: THE BIG PICTURE

We have introduced an approach to analysing argumentation-based inference pro-cedures/semantics as if they were game-theoretic mechanisms. While our analyticalresults have focused on the case of grounded semantics, the framework is very gen-eral, since Dung’s model captures a growing variety of semantics [Dung 1995; Baroniand Giacomin 2007]. Moreover, Dung’s model and his classical semantics have beenshown to generalise a variety of nonmonotonic logics, including extension-based ap-proaches such as Reiter’s well-known default logic [Reiter 1980], argument-basedapproaches such as Pollock’s inductive defeasible logic [Pollock 1987], and manyvarieties of logic programming semantics. As such, Dung’s framework provides uswith a very general setting in which to conduct our analysis, as well as futureanalysis beyond the present paper.

Dung’s classical argumentation semantics [Dung 1995], as well as more recentrefinements [Baroni and Giacomin 2007], can be viewed as a semantic entailmentrelation |= that maps an argumentation framework (i.e. conflicting knowledgebase) to a set of accepted arguments. Thus, we can define the grounded semanticsas an entailment relation |=grnd such that, given an arbitrary argumentation graph,〈A,〉, we write 〈A,〉 |=grnd α if and only if α ∈ A is in the grounded extensionof 〈A,〉. As another example, credulous preferred semantics can be defined as anrelation |=pref−cred such that 〈A,〉 |=pref−cred α if and only if α ∈ A is in somepreferred extension of 〈A,〉.

In the analysis conducted in this paper, the semantic entailment relation |=grnd

specified the ideal standard that we aspired to achieve using the inference procedureover arguments. So when we design an argument evaluation procedure `grnd , wetraditionally want to achieve two things. First, we want to prove that `grnd issound with respect to |=grnd , meaning that 〈A,〉 `grnd α implies 〈A,〉 |=grnd α.Secondly, we want to prove that `grnd is complete, meaning that 〈A,〉 |=grnd αimplies 〈A,〉 `grnd α. To simplify the notation, we can overload the operator|=grnd (respectively `grnd) such that we can express 〈A,〉 |=grnd S (respectively〈A,〉 `grnd S) if and only if ∀α ∈ S we have 〈A,〉 |=grnd α (respectively〈A,〉 `grnd α). This way, operators |=grnd and `grnd can be seen as functionsthat return the set of all entailed (respectively inferred) formulae.

Now, in this paper, we asked what happens when A is distributed (possiblyoverlapping) among a set of strategic agents I with different preference criteria,with each agent’s part denoted Ai such that A = A1 ∪ · · · ∪ AI . In this case,the ideal standard specified in the relation |=grnd can be viewed as a social choicefunction (recall Definition 3.3) which gives us the desired outcome given the unionof the agents’ actual arguments. So, while an inference procedure `grnd may beprovably sound and complete with respect to |=grnd , procedure `grnd would not bevery useful if agents have incentive to misreport their arguments, thus jeopardizingthe soundness and completeness of `grnd (since, in this case, `grnd would be appliedto a set of arguments other than A). What we did in this paper is show conditionsunder which |=grnd is implementable, i.e. where truth-revelation is an equilibriumJournal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 27

strategy profile induced by some mechanism `grnd . This question can be considered,more generally, for any other semantic entailment relation.

9. RELATED WORK

In this section, we briefly survey some key related literature and point out how itdiffers from and complements our work.

9.1 Game Semantics

We first contrast our work with work on so-called game semantics for logic, whichwas pioneered by logicians such as Lorenzen [1961] and Hintikka and Sandu [1997].Although many specific instantiations of this notion have been presented in theliterature, the general idea is as follows. Given some specific logic, the truth valueof a formula is determined through a special-purpose, multi-stage dialogue gamebetween two players, the verifier and falsifier. The formula is considered trueprecisely when the verifier has a winning strategy, while it will be false wheneverthe falsifier has the winning strategy. Similar ideas have been used to implementdialectical proof-theories for defeasible reasoning (e.g. by Prakken and Sartor [1997]and by Amgoud and Cayrol [2002]).

There is a fundamental difference between the aims of game semantics and ourArgMD approach. In game semantics, the goal is to interpret (i.e. characterise thetruth value of) a specific formula by appealing to a notion of a winning strategy.As such, each player is carefully endowed with a specific set of formulae to enablethe game to characterise semantics correctly (e.g. the verifier may own all thedisjunctions in the formula, while the falsifier is given all the conjunctions).

In contrast, ArgMD is about designing rules for argumentation among self-interested players who may have incentives to manipulate the outcome given avariety of possible individual preferences (specified in arbitrary instantiations of autility function). Our interest is in conditions that guarantee truth-revelation givendifferent classes of preferences. Game semantics have no similar notion of strategicmanipulation by hiding arguments or lying. Moreover, our framework allows anarbitrary number of players (as opposed to two agents).

9.2 Glazer and Rubinstein’s Argumentation Rules

In the economic literature, Glazer and Rubinstein [Glazer and Rubinstein 2001]explored the mechanism design problem of constructing rules of debate that max-imise the probability that a listener reaches the right conclusion given argumentspresented by two debaters. They study a very restricted setting, in which the worldstate is described by a vector ω = (w1, . . . , w5), where each ‘aspect’ wi has two pos-sible values: 1 and 2. If wi = j for j ∈ 1, 2, we say that aspect wi supportsoutcome Oj . Presenting an argument amounts to revealing the value of some wi.The setting is modelled as an extensive form game and analysed. In particular, theauthors investigate various combinations of procedural rules (stating in which orderand what sorts of arguments each debater is allowed to state) and persuasion rules(stating how the outcome is chosen by the listener). In terms of procedural rules,the authors explore: (1) one-speaker debate in which one debater chooses two argu-ments to reveal; (2) simultaneous debate in which the two debaters simultaneouslyreveal one argument each; and (3) sequential debate in which one debater reveals

Journal of the ACM, Vol. V, No. N, Month 20YY.

28 · Rahwan et al.

one argument followed by one argument by the other. Our mechanism is closer tothe simultaneous debate, but is much more general as it enables the simultaneousrevelation of an arbitrary number of arguments. Glazer and Rubinstein investigatea variety of persuasion rules. For example, in one-speaker debate, one rule analysedby the authors states that ‘a speaker wins if and only if he presents two argumentsfrom a1, a2, a3 or a4, a5.’ In a sequential debate, one persuasion rule statesthat ‘if debater D1 argues for aspect a3, then debater D2 wins if and only if hecounter-argues with aspect a4.’

The kinds of rules studied by Glazer and Rubinstein are arbitrary, very specific,and do not follow an intuitive principle of argument evaluation (e.g. like scepticism).This is not surprising, since their interest is not in evaluating rules studied typicallyin logic and computer science. The sceptical mechanism presented in this paperprovides a more natural criterion for argument evaluation, supplemented by a strongsolution concept that ensures all agents have incentive to reveal their arguments,and thus for the listener to reach the correct outcome. Moreover, our framework forargumentation mechanism design is more general in that it can be used to model avariety of more complex argumentation settings.

9.3 Strategy-Proofness of Belief Merging

The work presented in this paper is related to recent research on the strategic prop-erties of belief merging operators, to deal with situations where agents have prefer-ences on the possible outcomes of the merging process (i.e. the merged knowledgebases). Examples include work by Everaere et al. [2007] and by Chopra et al. [2006].Here, we will focus on the former as a representative.

Everaere et al. [2007] analysed various strategy-proofness properties for manyoperators from the literature on merging knowledge bases expressed in propositionallogic. These operators include different distance-based (e.g. based on the Hammingdistance between interpretations of the propositions) and syntax-based operators(e.g. selecting the maximum number of formulae while maintaining consistency).The preferences of an agent are represented using a ‘satisfaction index,’ whichcaptures the utility of a particular merging outcome. Three preference criteriaare explored: the weak drastic index gives 1 if the result of the merging process isconsistent with the agents own base, and 0 otherwise; the strong drastic index gives1 if the agents base is a logical consequence of the result of the merging process,and 0 otherwise; and the probabilistic index is based on the compatibility degreebetween the merged base and the agents own base measured using a ratio involvingthe number of models (assuming uniformly distributed outcomes).

The authors show that in general, none of the merging operators they exploreis strategy-proof. Then they explore various restrictions that achieve strategy-proofness: (1) restricting the number of agents; (2) restricting the number of modelssatisfying each agent’s base (e.g. complete bases that single models); (3) addingadditional integrity constraints that must be satisfied by the output; (4) restrictingthe strategy space (e.g. not allowing lying).

The key difference between our work and that of Everaere et al. [2007] and Chopraet al. [2006] is that their work is applied to merging operators, while our work isapplied in the context of argumentation semantics. While argumentation semanticscan also be seen as merging operators, they have different roots in nonmonotonicJournal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 29

reasoning. In particular, argument-based semantics take the structure of conflictbetween formulae/arguments explicitly into account in the merging process. There-fore, the complex structure of defeat and defence among arguments is the centralmeasure of rationality of the outcome. This contrasts with work on belief merging,in which other criteria, such as voting or distance between models, are used. Thesame can be said about work on strategy-proofness in the judgement aggregationliterature [List and Dietrich 2007], in which the typical question is whether theoutcome of a given aggregation function (e.g. plurality voting) is a set of formulaethat is satisfiable, etc.

Indeed, understanding of the relationship between belief merging and argumen-tation is still in its early stages. It would be interesting to explore the connectionsbetween incentives in the two paradigms in future work. Having said that, we be-lieve the general framework discussed in Section 8 has the potential to unify bothapproaches.

9.4 Merging of Dung Frameworks

Finally, we refer to recent work on merging multiple Dung-style argumentationgraphs presented by multiple agents [Coste-Marquis et al. 2007]. The authors use acombination of graph expansion, distance calculation and voting in order to arrive ata single argumentation framework. The key difference between this work and oursis that agents in the former are cooperative: they do not have conflicting preferencesover what the final framework should look like. As such, the possibility of hidingarguments (or any strategy behaviour) is not discussed. Instead, the authors focuson evaluating properties such as whether the merged argument graph preserves theacceptability of arguments on which all agents initially agreed (i.e. were acceptedin their individual argument graphs). Another difference between our study andthe work on merging is that in the latter, agents may disagree on the defeat relationitself, meaning that the notion of conflict is not defined objectively defined in theunderlying logic. It would be interesting to analyse strategic interaction of self-interested agents under such a condition.

10. CONCLUSION

Traditionally, the design of logical inference procedures has predominantly focusedon properties such as the soundness and completeness of reasoning with respectto different semantics, assuming all logical formulae are available a priori. In thispaper, we argued that the design of logical inference procedures can greatly benefitfrom a game-theoretic perspective. Just as an auction is a rule that maps therevealed preferences/bids of different agents into a social outcome by allocatingresources, an inference procedure can map logical formulae revealed by differentagents into logical conclusions. The question then becomes: how do we engineerinference procedures with desirable incentive properties? A general answer to thisabove question requires a new paradigm that marries logic and game theory in anew way. The details of this paradigm are beyond the scope of a single paper. Ouraim here was more modest, namely to motivate such paradigm by exploring thequestion in the context of abstract argumentation theory, and to show how thisparadigm can be useful through a case study.

Journal of the ACM, Vol. V, No. N, Month 20YY.

30 · Rahwan et al.

We introduced Argumentation Mechanism Design (ArgMD), in which an infer-ence procedure is viewed in terms of the game it defines. We took as our startingpoint Dung’s abstract model of argumentation that has been shown to generalisea variety of nonmonotonic reasoning logics. We showed how argument evaluationsemantics can be viewed as games with different agents deciding which argumentsto reveal. We then undertook a detailed case study of grounded semantics, ex-ploring its strategic properties under different conditions. We looked about bothstrategy-proofness and incentive compatibility, and explored knowledge conditionsfor achieving the former. We showed that even the most conservative argumentevaluation criterion can be prone to strategic manipulation if agents are strategic.We concluded by arguing that this type of analysis can be seen as an instance of amore general mechanism-design view of logic: a semantic entailment relation is a so-cial choice function that we would like to implement despite being uncertain aboutthe agent’s private knowledge bases, and the inference procedure is a mechanismthat (hopefully) implements this social choice function in equilibrium.

The work presented in this paper is just the beginning of what we envisage to bea growing area at the intersection of mechanism design and formal argumentationtheory specifically, and between mechanism design and formal logic more generally.For the first time in the literature on argumentation frameworks, we can now takethe study of strategies seriously when designing argument evaluation procedures (orsemantics). Without this game-theoretic perspective, using argumentation in real(open) agent systems has been a far-fetched prospect. It is for this very reason,we believe, that argumentation research has lagged behind in terms of practicalapplicability in open systems when compared to, for example, auction and votingprotocols. We envisage that our work, with its new approach to designing argu-mentation rules, will help bridge the gap between theory and application, and berelevant to applications such as Semantic Web reasoning.

We note that although the specific formal results presented in this paper focuson a very particular kind of acceptability semantics and a particular class of agentpreferences, their intended role is more general. Through these results, our aimis to make a case for the ArgMD approach in general, and to excite researchersabout the potential benefits of a game-theoretic perspective to designing logicalinference procedures. Through the case study on grounded semantics, we showedhow ArgMD enables a kind of analysis that was not possible before. Our machinerynow makes it possible to apply ArgMD to a variety of other (existing or new)argument acceptability criteria and under different classes of agent preferences.Furthermore, our work showed that ArgMD analysis can be non-trivial, requiringcareful attention to the interplay between properties of the argument defeat graphon one hand, and the form of agent preferences on the other. Hence, a major futuredirection is to examine the interplay between graph-theoretic, utility-theoretic, andgame-theoretic properties of argumentation systems under a variety of existing (andnew) acceptability criteria.

Another important direction of future work is exploring the dynamic aspects ofstrategic argumentation in extended interaction where arguments appear at differ-ent time steps. Focusing on direct mechanisms, as we did in this paper, is widelypractised in game theory because it simplifies analysis without losing generalityJournal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 31

about existence results. However, indirect mechanisms can be desirable in prac-tice for a variety of reasons, such as computational and communication efficiency(refer to Section 3.3 for more on this). Studying the dynamics of multi-stage ar-gumentation mechanisms would require modelling them as extensive-form games[Mas-Colell et al. 1995], but also analysing the dynamics of knowledge and beliefas a result of exchanging arguments [Fagin et al. 1995].

Finally, an important future direction, relevant to the theoretical computer sci-ence research community, is exploring the complexity issues related to argumen-tation mechanism design. Issues include the complexity of determining whether agiven argumentation game is strategy-proof or incentive compatible, and the com-plexity of strategic reasoning in argumentation mechanisms. Indeed, in the domainof voting rules, it has been shown that complexity can act as a barrier to strategicmanipulation even if manipulation cannot be ruled out theoretically [Conitzer et al.2007]. It will be interesting to identify whether argumentation mechanisms havesimilar properties.

APPENDIX: Detailed Proofs

A.1 Proof of Theorem 5.5

Theorem. Let AF be an arbitrary argumentation framework, and let GE (AF )denote its grounded extension. Mechanism Mgrnd

AF , in which agents can hide argu-ments and have focal arguments, is strategy-proof for agents with focal argumentsif and only if AF satisfies the following condition: ∀i ∈ I, ∀S ⊆ Ai and ∀A−i, wehave αi /∈ GE (〈Ai ∪ A−i,〉) implies αi /∈ GE (〈(Ai\S) ∪ A−i,〉).

Proof. ⇒) Let i ∈ I be an arbitrary agent with type Ai and focal argumentαi ∈ Ai. Suppose Mgrnd

AF is strategy-proof. This implies that ∀S ⊆ Ai and ∀A−i:

ui(GE (〈Ai ∪ A−i,〉),Ai) ≥ ui(GE (〈S ∪ A−i,〉),Ai)

Therefore, by definition of focal argument preferences: if αi ∈ GE (〈S ∪ A−i,〉)then αi ∈ GE (〈Ai ∪ A−i,〉). Then, by contraposition we have that:

αi /∈ GE (〈Ai ∪ A−i,〉) implies αi /∈ GE (〈S ∪ A−i,〉).

⇐) Suppose that given any A−i, we have that ∀i ∈ I, ∀S ⊆ Ai, αi /∈ GE (〈Ai ∪A−i,〉) implies αi /∈ GE (〈S ∪ A−i,〉).

We want to prove that:

ui(GE (〈Ai ∪ A−i,〉),Ai) ≥ ui(GE (〈S ∪ A−i,〉),Ai).

Suppose not. Then ∃i and ∃S′ ⊆ Ai such that

ui(GE (〈Ai ∪ A−i,〉),Ai) < ui(GE (〈S′∪ A−i,〉),Ai).

But this means that αi ∈ GE (〈S′ ∪ A−i,〉) while αi /∈ GE (〈Ai ∪ A−i,〉).Contradiction. Therefore, i has no incentive to declare any arguments other thanthose of her type, and thus the mechanism is strategy-proof.

A.2 Proof of Lemma 5.6

Lemma. Let AF = 〈A,〉 be an argumentation framework. If argument α ∈Acc(AF,Sgrnd) then ∀P ∈ parAF (α), ∃β ∈ IN (AF ) such that β → P .

Journal of the ACM, Vol. V, No. N, Month 20YY.

32 · Rahwan et al.

Proof. Recall that the grounded extension is calculated by iterative applicationof the monotonic function F to the empty set until we reach the least fixed point.Let the least fixed-point of AF be FT (∅) for some integer T ≥ 1. We proceed byinduction on the iterative application of F .

—Base Step: If F1(∅) is the least fixed-point of AF , then since F1(∅) = IN (AF ),the lemma is trivially satisfied. Otherwise, F2(∅) = F(F1(∅)) includes all argu-ments in F1(∅) and all arguments defended by F1(∅). This, by definition, impliesthat ∀α ∈ F2(∅), either:(1) α ∈ IN (AF ); or(2) ∀Pα ∈ parAF (α), ∃β ∈ F1(∅) = IN (AF ) such that there is an odd-length

path (of length 1) from β to Pα, namely the path β Pα. Hence, bydefinition, β → Pα

—Induction Step: Suppose that for some integer 1 < k ≤ T , ∀αk−1 ∈ Fk−1(∅),either(1) αk−1 ∈ IN (AF ); or(2) ∀Pαk−1 ∈ parAF (αk−1), ∃β ∈ IN (AF ) such that β → Pαk−1 with odd-length

path of length 2k − 1.We need to prove that ∀αk ∈ Fk(∅), ∀Pαk

∈ parAF (αk), ∃β ∈ IN (AF ) such thatβ → Pαk

.By definition, Fk(∅) = F(Fk−1(∅)) includes all arguments in Fk−1(∅) and allarguments defended by Fk−1(∅). This, by definition, implies that ∀αk ∈ Fk(∅),either:(1) αk ∈ Fk−1(∅) and we are done; or(2) ∀Pαk

∈ parAF (αk), ∃αk−1 ∈ Fk−1(∅) such that there is an odd-length path(of length 1) from αk−1 to Pαk

, namely the path αk−1 Pαk. But this

means that there is an odd-length path β → Pαk−1 αk−1 Pαkof length

2k + 1 from β to Pαk. Hence, by definition, β → Pαk

. And we are done.

When k = T , then the process stops with all arguments in the grounded extension.Therefore, by induction, ∀αT ∈ Acc(AF,Sgrnd) then ∀PαT

∈ parAF (αT ), ∃β ∈IN (AF ) such that β → PαT

.

A.3 Proof of Lemma 5.7

Lemma. Let AF 1 = 〈A,1〉 and AF2 = 〈A ∪ α′,2〉 such that 1⊆2 and(2 \1) ⊆ (α′×A)∪(A×α′). If α is in the grounded extension of AF 1 andα′ does not indirectly defeat α, then α is also in the grounded extension of AF 2.

Proof. Let the least fixed-point of AF 1 be FNAF1(∅) for some integer N ≥ 1

(recall Definition 2.5). Similarly, let the least fixed-point of AF 2 be FMAF2(∅) for

some integer M ≥ 1.We need to show that if α ∈ FNAF1

(∅) and if it is not the case that α′ →2 α, thenα ∈ FMAF2

(∅).Let α ∈ FNAF1

(∅). Let us first take the case where parAF1(α) = ∅; that is

@Pα ∈ A such that Pα 1 α. The assumption (not α′ →2 α) implies (not α′ 2 α).Therefore, @Pα ∈ A∪α′ such that Pα 2 α, and thus by definition, α ∈ FMAF2

(∅).And the implication is satisfied.Journal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 33

Otherwise, α has at least one defeater in AF 1. Let Pα ∈ parAF1(α) be an

arbitrary defeater of α in AF 1. Since α ∈ FNAF1(∅), then by definition, α is defended

by FNAF1(∅). Therefore, ∃β ∈ FNAF1

(∅) such that β 1 Pα. And since 1⊆2,then β 2 Pα. Now, we have two possible cases:

(1) β ∈ IN (AF 1). We ask whether β ∈ IN (AF 2). Suppose the contrary. Thiswould only be true if α′ 2 β (since there can be no other new defeaters of βinduced by 2). However, this leads to an odd-length path of length 3 from α′

to α, namely α′ 2 β 2 Pα 2 α. Contradiction! Thus β ∈ IN (AF 2) and,by definition, β ∈ FMAF2

(∅). Therefore, ∃β ∈ FMAF2(∅) such that β 2 Pα, and

the implication is satisfied.(2) β /∈ IN (AF 1) but β ∈ FNAF1

(∅). We just need to prove that β ∈ FMAF2(∅). We

do so recursively on the defenders of β until we reach an initial argument.Let Pβ ∈ parAF1

(β) be an arbitrary defeater of β in AF 1. Since β ∈ FNAF1(∅),

then ∃γ ∈ FNAF1(∅) such that γ 1 Pβ . And since 1⊆2, then γ 2 Pβ .

Now, again, we have two possible cases:(a) γ ∈ IN (AF 1). We ask whether γ ∈ IN (AF 2). Suppose the contrary. This

would only be true if α′ 2 γ (since there can be no other new defeaters of γinduced by 2). However, this leads to an odd-length path of length 5 fromα′ to α, namely α′ 2 γ 2 Pβ 2 β 2 Pα 2 α. Contradiction! Thusγ ∈ IN (AF 2) and, by definition, γ ∈ FMAF2

(∅). Therefore, ∃γ ∈ FMAF2(∅)

such that γ 2 Pα. Therefore, ∃β ∈ FMAF2(∅) such that β 2 Pα, and the

implication is satisfied.(b) γ /∈ IN (AF 1) but γ ∈ FNAF1

(∅). As above, the proof proceeds recursivelyfrom step (2) to show that γ ∈ FMAF2

(∅).The above recursion will always terminate with step (2)(a) since, accordingto Lemma 5.6, if α is acceptable in AF 1, then each direct defeater of α isindirectly defeated by some initial argument (since we are only dealing withfinite argumentation frameworks).

Therefore, every defeater of α is also defeated by an acceptable argument in AF 2.Therefore α ∈ FMAF2

(∅), and implication is satisfied.

A.4 Proof of Theorem 5.8

Theorem. Suppose every agent i ∈ I has a focal argument αi ∈ Ai. If eachagent’s type contains no (in)direct defeat against αi (formally ∀i ∈ I, @α ∈ Ai suchthat α → αi), then Mgrnd

AF is strategy-proof.

Proof. Let A′−i = (A′1, . . . ,A′i−1,A′i+1, . . . ,A′I) be arbitrary revelations fromall agents not including i. We will show that agent i is always best off revealingAi. That is, no matter what sets of arguments the other agents reveal, agent iis best off revealing its full set of arguments. Formally, we will show that ∀i ∈ Iui(Acc(〈A′1∪· · ·∪Ai∪· · ·∪A′I ,〉,Sgrnd),Ai) ≥ ui(Acc(〈A′1∪· · ·∪Ai ∪· · ·∪A′I ,〉,Sgrnd),Ai) for any Ai ⊂ Ai.

We use induction over the sets of arguments agent i may reveal, starting from thefocal argument αi. We show that, considering any strategy A′′i ⊆ Ai, revealing onemore argument can only increase i’s chance of getting αi accepted, i.e. it (weakly)improves i’ utility.

Journal of the ACM, Vol. V, No. N, Month 20YY.

34 · Rahwan et al.

– Base Step: If Ai = αi, then trivially, revealing Ai weakly dominates revealing∅.

– Induction Step: Suppose that revealing argument set A′′i ⊆ Ai weakly dominatesrevealing any subset of A′′i . We need to prove that revealing any additionalargument can increase, but never decrease the agent’s utility. In other words, weneed to prove that revealing any set A′i, where A′′i ⊂ A′i ⊆ Ai and |A′i| = |A′′i |+1,weakly dominates revealing A′′i .Let α′ where α′ = A′i −A′′i be the new argument.Suppose the focal argument αi is in the grounded extension when revealing A′′i(formally αi ∈ Acc(〈A′1 ∪ · · · ∪A′′i ∪ · · · ∪A′I ,〉,Sgrnd)). We need to show thatafter adding α′, argument αi remains in the grounded extension. Formally, weneed to show that αi ∈ A′i ∩ Acc(〈A′1 ∪ · · · ∪ A′i ∪ · · · ∪ A′I ,〉,Sgrnd). This istrue from Lemma 5.7, and from the fact that Ai does not include indirect defeatsagainst αi.

Thus, by induction, revealing the full set Ai weakly dominates revealing any sub-setthereof.

A.5 Proof of Theorem 5.13

Theorem. Let AF be an arbitrary argumentation framework, and let GE (AF )denote its grounded extension. Mechanism Mgrnd

AF , in which agents can hide argu-ments, is strategy-proof for agents with focal arguments if and only if AF satisfiesthe following condition: ∀i ∈ I, ∀S ⊆ Ai and ∀A−i, we have αi /∈ GE (〈Ai∪A−i,〉) implies αi /∈ GE (〈(Ai\S) ∪ A−i,〉).

Proof. We only need to show that the set of accepted focal arguments is thesame in both cases. Let AF = 〈A,〉 be the argumentation framework corre-sponding to full revelation by all agents –i.e. A = A1 ∪ · · · ∪ AI . Let αi be a focalargument for some arbitrary agent i. And let A− ⊆ A be the set of argumentsafter agents hide an arbitrary set of context-irrelevant arguments. We will showthat αi will have exactly the same status in the new argumentation frameworkAF− = 〈A−,〉. Formally, we will prove that αi ∈ Acc(AF ,Sgrnd) if and only ifαi ∈ Acc(AF−,Sgrnd).

⇒ Suppose αi ∈ Acc(AF ,Sgrnd). We now show that αi ∈ Acc(AF−,Sgrnd).Recall that the least fix-points of AF and AF− are ∪∞i=0F iAF (∅) and ∪∞i=0F iAF−

(∅),respectively (see Definition 2.5). We need to show that if αi ∈ ∪∞i=0F iAF (∅),then αi ∈ ∪∞i=0F iAF−

(∅).We will prove the more general proposition: ∪∞i=0F iAF (∅) ⊆ ∪∞i=0F iAF−

(∅) byinduction.—Base Step: We prove that F1

AF (∅) ⊆ F1AF−

(∅).Let α ∈ F1

AF (∅). This implies that α ∈ IN (AF ). Now since A− ⊆ A, thenα cannot have any defeaters in AF−. Therefore, α ∈ IN (AF−) and thusnecessarily α ∈ F1

AF−(∅).

—Induction Step: Let F tAF (∅) ⊆ F tAF−

(∅) for some positive integer t. We nowshow that F t+1

AF (∅) ⊆ F t+1AF−

(∅).Let α ∈ F t+1

AF (∅). This implies that ∀Pα ∈ parAF (α), ∃β ∈ F tAF (∅) such thatβ Pα. Now since A− ⊆ A, then α cannot have any defeater in AF− not

Journal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 35

already appearing in AF , i.e. parAF−(α) ⊆ parAF (α). However, all thesedefeaters are themselves defeated by accepted arguments since F tAF (∅) ⊆F t

AF−(∅). Hence, α ∈ F t+1

AF−(∅).

By induction, we conclude that ∪∞i=0F iAF (∅) ⊆ ∪∞i=0F iAF−(∅). In particular, if

αi ∈ ∪∞i=0F iAF (∅) then αi ∈ ∪∞i=0F iAF−(∅).

⇐ Suppose αi ∈ Acc(AF−,Sgrnd). We now show that αi ∈ Acc(AF ,Sgrnd).Recall that A\A− contains only arguments which are context-irrelevant to thefocal arguments A. Therefore, in AF , there is no argument α ∈ A\A− whichhas a path towards αi. In particular, there is no argument α ∈ A\A− whichhas an odd length path towards αi (i.e. which indirectly defeats αi). Accordingto Lemma 5.7, this implies that αi ∈ Acc(AF ,Sgrnd).

A.6 Proof of Theorem 5.15

Theorem. Suppose every agent i ∈ I has a focal argument αi ∈ Ai, and thatagents can both hide or lie about arguments. If the following conditions hold:

(A) each agent’s type contains no (in)direct defeat against αi (formally ∀i ∈I, @β ∈ Ai such that β → αi);

(B) for any agent i, no argument outside i’s type (in)directly defends αi (formally∀i ∈ I,@β ∈ A\Ai such that β # αi);

then MgrndAF is strategy-proof.

Proof. What we want to prove is that ∀i, ∀A−i and ∀S 6= Ai, if (A) and (B)hold, then

ui(GE (〈Ai ∪ A−i,〉),Ai) ≥ ui(GE (〈S ∪ A−i,〉),Ai).

Following the definition of focal arguments, our goal above can be rephrased asproving, for any arbitrary S 6= Ai, that:

αi /∈ GE (〈Ai ∪ A−i,〉) implies αi /∈ GE (〈S ∪ A−i,〉).

Suppose αi /∈ GE (〈Ai ∪ A−i,〉), and let us show that αi /∈ GE (〈S ∪ A−i,〉).We do this by showing, recursively, that a given αi, there must exist a β α

such that for all z β:z /∈ GE (〈Ai ∪ A−i,〉) then z /∈ GE (〈S ∪ A−i,〉)implies α /∈ GE (〈Ai ∪ A−i,〉) then α /∈ GE (〈S ∪ A−i,〉)

The Base Step will show that the agent cannot change the fate of αi by hidingits immediate defeater or defending it with a new argument without violating (A)or (B). We need the recursion because the agent might reinstate some defender ofαi by similarly altering the fate of another relevant argument down the line. Wecan only be sure this is not possible once we reach the initial arguments, as shownin the recursion termination condition.

—Base Step: From αi /∈ GE (〈Ai∪A−i,〉), it follows that ∃β ∈ Ai∪A−i, β αi

for which @z ∈ GE(〈Ai ∪ A−i,〉) such that z β. Let β1 = β be such adefeater.

Journal of the ACM, Vol. V, No. N, Month 20YY.

36 · Rahwan et al.

By assumption (A), β1 /∈ Ai (since otherwise, we would have β1 αi andtherefore β1 → αi). But since β1 ∈ Ai ∪ A−i, we conclude that β1 ∈ A−i. Thisin turn implies that β1 ∈ S ∪ A−i. In other words, the defeaters of αi givenaction profile Ai ∪ A−i are preserved when the agent changes the action profileto S ∪ A−i.We will now show that @z ∈ GE(〈S ∪ A−i,〉) such that z β1.Let z1 ∈ S∪A−i, such that z1 β1, be an arbitrary defeater of β1 when agent ilies. By assumption (B), we conclude that z1 /∈ A\Ai (since otherwise, we wouldhave z1 β1 αi and therefore z # αi for some z ∈ A\Ai). This in turnimplies that z1 ∈ Ai. In other words, no new defenders of αi can be introducedwhen the agent moves from action profile Ai ∪ A−i to action profile S ∪ A−i.Therefore, since we already know that z1 /∈ GE(〈Ai ∪ A−i,〉), z1 cannot beeither in GE(〈S ∪ A−i,〉). Then:

z1 /∈ GE (〈Ai ∪ A−i,〉) implies z1 /∈ GE (〈S ∪ A−i,〉). (∗)

Suppose now that αi ∈ GE (〈S ∪ A−i,〉). This would mean that for every β ∈Ai∪A−i, β αi, ∃z ∈ GE(〈S∪A−i,〉) such that z β, but then, for β1 thereshould exist a z1 satisfying this condition. But by (∗), no z1 ∈ GE (〈S∪A−i,〉).Therefore, we have that from (∗) it follows that:

αi /∈ GE (〈Ai ∪ A−i,〉) implies αi /∈ GE (〈S ∪ A−i,〉).

—Recursive Step: Let zk ∈ Ai ∪A−i be an arbitrary argument such that zk βk

for some βk with βk → αi. Assume that zk /∈ GE (〈Ai ∪A−i,〉). We will showthat zk /∈ GE (〈S ∪ A−i,〉).From zk /∈ GE (〈Ai ∪ A−i,〉), it follows that ∃β ∈ Ai ∪ A−i such that β zk

while @zk+1 ∈ GE(〈Ai ∪ A−i,〉). Let βk+1 = β be such a defeater.By assumption (A), βk+1 /∈ Ai (since otherwise, we would have βk+1 → αi forsome βk+1 ∈ Ai). But since βk+1 ∈ Ai ∪ A−i, we conclude that βk+1 ∈ A−i.This in turn implies that βk+1 ∈ S ∪ A−i. In other words, the defeaters of zk+1

given action profile Ai ∪ A−i are preserved when the agent changes the actionprofile to S ∪ A−i.We will now show that @z ∈ GE(〈S ∪ A−i,〉) such that z βk+1.Let zk+1 ∈ S∪A−i such that zk+1 βk+1 be an arbitrary defeater of βk+1 whenagent i lies. By assumption (B), we conclude that zk+1 /∈ A\Ai (since otherwise,we would have zk+1 βk+1 → αi and therefore z # αi for some z ∈ A\Ai).This in turn implies that zk+1 ∈ Ai. In other words, no new defenders of αi canbe introduced when the agent moves from action profile Ai∪A−i to action profileS∪A−i. Therefore, since we already know that zk+1 /∈ GE(〈Ai∪A−i,〉), zk+1

cannot be either in GE(〈S ∪ A−i,〉). Then:

zk+1 /∈ GE (〈Ai ∪ A−i,〉) implies zk+1 /∈ GE (〈S ∪ A−i,〉). (∗∗)

Suppose now that zk ∈ GE (〈S ∪ A−i,〉). This would mean that for everyβ ∈ Ai ∪ A−i, β αi, ∃z ∈ GE(〈S ∪ A−i,〉) such that z β, but then,for βk there should exist a zk satisfying this condition. But by (∗∗), no zk+1 ∈GE (〈S ∪ A−i,〉). Therefore, we have that from (∗∗) it follows that:

Journal of the ACM, Vol. V, No. N, Month 20YY.

Game-Theoretic Foundations for Argumentation · 37

zk /∈ GE (〈Ai ∪ A−i,〉) implies zk /∈ GE (〈S ∪ A−i,〉).

The recursion must eventually reach some βK ∈ IN(〈Ai ∪ A−i, ⇀〉) with βK → αi. Let zK ∈ S ∪ A−i with zK ⇀ βK be an arbitrary defeater of βK when agent i lies. By assumption (B), we conclude that zK ∉ A\Ai (since otherwise we would have zK ⇀ βK → αi and therefore z # αi for some z ∈ A\Ai). But this in turn implies that zK ∈ Ai ⊆ Ai ∪ A−i, which contradicts the fact that βK ∈ IN(〈Ai ∪ A−i, ⇀〉). Hence no such zK exists: βK is undefeated under the lie as well, so βK ∈ IN(〈S ∪ A−i, ⇀〉) and thus βK ∈ GE(〈S ∪ A−i, ⇀〉). Since βK ⇀ zK−1, the argument zK−1 remains defeated by a member of the grounded extension under both profiles. In summary:

zK−1 ∉ GE(〈Ai ∪ A−i, ⇀〉) implies zK−1 ∉ GE(〈S ∪ A−i, ⇀〉).

By the above recursion we conclude that, since αi ∉ GE(〈Ai ∪ A−i, ⇀〉), it follows that αi ∉ GE(〈S ∪ A−i, ⇀〉). Therefore, M^grnd_AF is strategy-proof.
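The theorem can also be checked mechanically on small instances. The following is a minimal Python sketch (our own illustration; all function and variable names are ours, not the paper's) that computes the grounded extension by iterating the characteristic function from the empty set and then verifies, against a fixed profile of the other agents, that no subset S ⊆ Ai gets αi accepted when truth-telling does not:

from itertools import chain, combinations

def grounded_extension(args, defeats):
    """Least fixed point of Dung's characteristic function,
    with the defeat relation restricted to `args`."""
    ge = set()
    while True:
        nxt = set()
        for a in args:
            defeaters = {b for (b, t) in defeats if t == a and b in args}
            # a is acceptable iff every defeater is counter-defeated by ge
            if all(any((z, b) in defeats for z in ge) for b in defeaters):
                nxt.add(a)
        if nxt == ge:
            return ge
        ge = nxt

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def truth_is_best(A_i, A_others, defeats, alpha_i):
    """True iff, against this fixed profile of the other agents,
    revealing all of A_i is at least as good as any subset S."""
    if alpha_i in grounded_extension(A_i | A_others, defeats):
        return True  # truth already yields the best possible outcome
    return not any(alpha_i in grounded_extension(set(S) | A_others, defeats)
                   for S in subsets(A_i))

# The framework from the base-step illustration above:
defeats = {('b1', 'a_i'), ('z1', 'b1'), ('b2', 'z1')}
print(truth_is_best({'a_i', 'z1'}, {'b1', 'b2'}, defeats, 'a_i'))  # True

Dominance proper quantifies over all opponent profiles; the sketch checks the relevant condition for one profile at a time, which is exactly what the proof's case analysis establishes pointwise.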

A.7 Proof of Theorem 6.1

Recall that:

(A∗i) ∄β ∈ Ai such that β → αi;
(B∗i) ∄β ∈ A\Ai such that β # αi.

Theorem. If each agent i has true type Ai and knows M^grnd_AF as well as (A∗i) and (B∗i), then 〈A1, . . . , An〉 is a dominant-strategy equilibrium.

Proof. Suppose that the real state of the world is any ω at which [M^grnd_AF is the mechanism], [(A∗)] and [(B∗)] are events known by each agent i. Then Pi(ω) ⊆ [M^grnd_AF is the mechanism], Pi(ω) ⊆ [(A∗)] and Pi(ω) ⊆ [(B∗)], and therefore also

Pi(ω) ⊆ [M^grnd_AF is the mechanism] ∩ [(A∗)] ∩ [(B∗)].

Since, by the proof of Theorem 5.15,

[M^grnd_AF is the mechanism] ∩ [(A∗)] ∩ [(B∗)] ⊆ [M^grnd_AF is strategy-proof]

and, in turn, [M^grnd_AF is strategy-proof] ⊆ [Ai is the best declaration], it follows that each i knows [Ai is the best declaration]. Since each i is rational, she will declare her true type Ai. Hence 〈A1, . . . , An〉 is a dominant-strategy equilibrium.
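In the standard partition model of knowledge (Aumann 1976; Fagin et al. 1995), the proof amounts to two textbook properties of the knowledge operator induced by agent i's possibility correspondence Pi, restated here in LaTeX for convenience:

\[
  K_i E = \{\omega : P_i(\omega) \subseteq E\},
  \qquad
  K_i E \cap K_i F = K_i (E \cap F),
  \qquad
  E \subseteq F \;\Rightarrow\; K_i E \subseteq K_i F .
\]

Knowing the mechanism, (A∗) and (B∗) separately is thus the same as knowing their intersection, and since that intersection is contained in [Ai is the best declaration] via Theorem 5.15, monotonicity of K_i delivers knowledge of the latter event.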

ACKNOWLEDGMENTS

The authors are grateful to Sherief Abdallah, Pietro Baroni, Andrew Clausen, and Vincent Conitzer for useful discussions and remarks.

REFERENCES

Amgoud, L. and Cayrol, C. 2002. A reasoning model based on the production of acceptable arguments. Annals of Mathematics and Artificial Intelligence 34, 1–3, 197–215.

Aumann, R. 1976. Agreeing to disagree. Annals of Statistics 4, 1236–1239.

Aumann, R. and Brandenburger, A. 1995. Epistemic conditions for Nash equilibrium. Econometrica 63, 5, 1161–1180.

Baroni, P. and Giacomin, M. 2007. On principle-based evaluation of extension-based argumentation semantics. Artificial Intelligence 171, 10–15, 675–700.


Bench-Capon, T. J. M. and Dunne, P. E. 2007. Argumentation in artificial intelligence. Artificial Intelligence 171, 10–15, 619–641.

Berners-Lee, T., Hendler, J., and Lassila, O. 2001. The Semantic Web. Scientific American, 29–37.

Chopra, S., Ghose, A., and Meyer, T. 2006. Social choice theory, belief merging, and strategy-proofness. Information Fusion 7, 61–79.

Conitzer, V., Sandholm, T., and Lang, J. 2007. When are elections with few candidates hard to manipulate? Journal of the ACM 54, 3, 14.

Coste-Marquis, S., Devred, C., Konieczny, S., Lagasquie-Schiex, M.-C., and Marquis, P. 2007. On the merging of Dung's argumentation systems. Artificial Intelligence 171, 10–15, 730–753.

Cramton, P., Shoham, Y., and Steinberg, R., Eds. 2006. Combinatorial Auctions. MIT Press, Cambridge MA, USA.

Dung, P. M. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77, 2, 321–358.

Everaere, P., Konieczny, S., and Marquis, P. 2007. The strategy-proofness landscape of merging. Journal of Artificial Intelligence Research 28, 49–105.

Fagin, R., Halpern, J. Y., Moses, Y., and Vardi, M. Y. 1995. Reasoning About Knowledge. MIT Press, Cambridge MA, USA.

Gibbard, A. 1973. Manipulation of voting schemes. Econometrica 41, 587–601.

Glazer, J. and Rubinstein, A. 2001. Debates and decisions: On a rationale of argumentation rules. Games and Economic Behavior 36, 158–173.

Hintikka, J. and Sandu, G. 1997. Game-theoretical semantics. In Handbook of Logic and Language, J. van Benthem and A. ter Meulen, Eds. Elsevier, Amsterdam, The Netherlands, 361–410.

List, C. and Dietrich, F. 2007. Strategy-proof judgment aggregation. Economics and Philosophy 23, 3, 269–300.

Lorenzen, P. 1961. Ein dialogisches Konstruktivitätskriterium. In Infinitistic Methods. Pergamon Press, Oxford, UK, 193–200.

Mas-Colell, A., Whinston, M. D., and Green, J. R. 1995. Microeconomic Theory. Oxford University Press, New York NY, USA.

Modgil, S. 2009. Reasoning about preferences in argumentation frameworks. Artificial Intelligence, doi: 10.1016/j.artint.2009.02.001.

Moulin, H. 1980. On strategy-proofness and single peakedness. Public Choice 35, 4, 437–455.

Pollock, J. L. 1987. Defeasible reasoning. Cognitive Science 11, 481–518.

Prakken, H. and Sartor, G. 1997. Argument-based extended logic programming with defeasible priorities. Journal of Applied Non-classical Logics 7, 25–75.

Prakken, H. and Vreeswijk, G. 2002. Logics for defeasible argumentation. In Handbook of Philosophical Logic, second ed., D. Gabbay and F. Guenthner, Eds. Vol. 4. Kluwer Academic Publishers, Dordrecht, The Netherlands, 219–318.

Rahwan, I. and Larson, K. 2008. Mechanism design for abstract argumentation. In 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), Estoril, Portugal, L. Padgham, D. Parkes, J. Mueller, and S. Parsons, Eds. 1031–1038.

Rahwan, I., Larson, K., and Tohmé, F. 2009. A characterisation of strategy-proofness for grounded argumentation semantics. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI). To appear.

Rahwan, I. and McBurney, P. 2007. Guest editors' introduction: Argumentation technology. IEEE Intelligent Systems 22, 6, 21–23.

Reiter, R. 1980. A logic for default reasoning. Artificial Intelligence 13, 81–132.

Satterthwaite, M. A. 1975. Strategy-proofness and Arrow's conditions: Existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory 10, 187–217.
