Lectures in Cooperative Game Theory



Lecture 1
Introduction and Definition of TU games

1.1 Introduction

The study of the theory of games gained popularity after the publication of the classical book “Theory of Games and Economic Behaviour” by von Neumann and Morgenstern in 1944 [6]. A game is a mathematical model of an interaction between rational agents. The model first contains the strategy space of each participant (i.e., the actions that are available). From these possible actions, one can build the set of all possible states of the interaction. Then, the model contains the rules of interaction: these rules describe what happens when participants take some valid actions, i.e., they determine which state is reached when the participants act. Each participant has her own preferences over the different states, and each participant acts so as to obtain the best possible outcome (this is what we call rational, and we will come back to this later). The field of strategic games studies conflicting interactions and was made popular by the contributions of Nash. Cooperative game theory is another field, which analyses cooperation between agents.

The basic framework of cooperative games was introduced by von Neumann and Morgenstern in [6] and is called the characteristic function. A coalition is simply a set of agents that interact. The so-called characteristic function assigns a payoff to each coalition. It is important to note that the payoff is given to a coalition, not to individual agents. The natural question that arises is the following: if all the agents want to cooperate, how should they share the payoff? Actually, if the agents also need to decide whether they want to cooperate and with which other agents, we end up with the following two key questions:

• the selection problem: which coalitions are going to form?

• the sharing problem: once the members have formed a coalition (i.e., they have self-organized, interacted, and the coalition has received a payoff), the problem is how to distribute this payoff among the different members of the coalition.

The goal of the course is to answer these two questions.


In some parts of the course, we will focus only on the sharing problem and we will assume that all the participants intend to cooperate, forming one coalition containing all the participants (we will call this coalition the grand coalition).

The solutions to the sharing problem are called solution concepts and they are based on different interpretations of fairness. Unfortunately, there is no unique and universally accepted solution concept. For example, one possible criterion is stability: participants should not have an incentive to “change” (we will make this notion more formal later, but for now, let us consider that no participant wants to change coalition or to ask for a different share of the payoff). As we will see, not all games can satisfy the most natural criterion modeling stability. Consequently, many other solution concepts have been proposed. A large part of the course will be about introducing the different solution concepts and studying their properties.

We will also study some interesting special classes of games (i.e., restrictions of the model). A class of games may be interesting because of its applications. For example, the term coalition is often used in political science: parties may form alliances to obtain more power. Consequently, we will study a class of games that models voting situations. A class of games can also be interesting because of some special properties (e.g., a solution with some properties is guaranteed to exist). The properties can also be computational. However, with cooperative games, one needs to be careful when dealing with computational issues, as the input of the game is by nature exponential. Indeed, one needs to reason about all possible coalitions, i.e., all possible subsets of the set of agents. Consequently, there are some interesting issues in representing the games and computing a solution. There are also some interesting issues in using some solution concepts in practice.

Finally, we will study different variations on the modeling of cooperation. So far, we have talked about sharing the value of a coalition. But in some cases, there is not really a value to share: by forming a coalition, the members are in a specific state of the world and experience a corresponding satisfaction (e.g., think about which group of people to talk with at a party).

The course will mainly focus on the game-theoretic aspects of cooperative games, and we will also study AI-related issues towards the end of the course. Here is a rough outline of the course.

Solution concepts: the core; games with coalition structure and the bargaining set; the nucleolus; the kernel; the Shapley value.

Specific classes of games: voting games; representation and complexity.

Different models: NTU games and hedonic games.

Miscellaneous: coalition formation and related issues.


There is no official textbook for this course. A lecture note such as this one will be provided for each class. Here are some sources I used to prepare the class:

• The last three chapters of the book “A Course in Game Theory” by Osborne and Rubinstein [3] are devoted to cooperative games. I will use some of this material for the lectures on the core, the bargaining set, the kernel, the nucleolus and the Shapley value.

• The book “An Introduction to the Theory of Cooperative Games” by Peleg and Sudhölter [4] contains a rigorous and precise treatment of cooperative games. I used this book for some precision, but it is a more advanced textbook.

• For computational aspects and some advanced topics, you can read “Computational Aspects of Cooperative Game Theory” [1] by Chalkiadakis, Elkind and Wooldridge.

• Whenever appropriate, I will also refer to articles from the literature.

1.2 TU games

The game theory community has extensively studied the coalition formation problem [2, 3]. The literature is divided into two main models, depending on whether utility can be transferred between individuals. In a transferable utility game (or TU game), it is assumed that agents can compare their utility and that a common scale of utility exists. In this case, it is possible to define a value for a coalition as the worth the coalition can achieve through cooperation. The agents have to share the value of the coalition, hence utility needs to be transferable. In a so-called non-transferable utility game (or NTU game), inter-personal comparison of utility is not possible, and each agent has a preference over the different coalitions of which it is a member. In this section, we introduce TU games.

1.2.1 Definitions

In the following, we use a utility-based approach and we assume that “everything has a price”: each agent has a utility function that is expressed in currency units. The use of a common currency enables the agents to directly compare alternative outcomes, and it also enables side payments. The definition of a TU game is simple: it involves a set of players and a characteristic function (a map from sets of agents to real numbers) which represents the value that a coalition can achieve. The characteristic function is common knowledge and the value of a coalition depends only on the players present in that coalition.


Notations

We consider a set N of n agents. A coalition is a non-empty subset of N. The set N is also known as the grand coalition. The set of all coalitions is 2^N and its cardinality is 2^n. A coalition structure (CS) S = {C_1, ..., C_m} is a partition of N: each set C_i is a coalition with ∪_{i=1}^{m} C_i = N and i ≠ j ⇒ C_i ∩ C_j = ∅. We will denote by S_C the set of all partitions of a set of agents C ⊆ N. The set of all CSs is then denoted by S_N; its size is of the order O(n^n) and ω(n^{n/2}) [5].
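To make these notions concrete, here is a small Python sketch (my own illustration, not part of the original notes) that enumerates all coalition structures of a small agent set; for n = 3 it produces the 5 coalition structures of three agents.

```python
from itertools import combinations

def coalition_structures(agents):
    """Recursively enumerate all partitions (coalition structures) of a set of agents."""
    agents = list(agents)
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    # The coalition containing `first` is {first} plus any subset of the remaining agents.
    for k in range(len(rest) + 1):
        for subset in combinations(rest, k):
            coalition = frozenset({first, *subset})
            remaining = [a for a in rest if a not in coalition]
            for structure in coalition_structures(remaining):
                yield [coalition] + structure

if __name__ == "__main__":
    structures = list(coalition_structures({1, 2, 3}))
    print(len(structures))              # 5 coalition structures for n = 3
    for s in structures:
        print([sorted(c) for c in s])
```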

TU games

1.2.1. DEFINITION. A transferable utility game (TU game) is defined as a pair (N, v) where N is the set of agents, and v : 2^N → R is a characteristic function.

The characteristic function (or valuation function) v : 2^N → R provides the worth or utility of a coalition. Note that this definition assumes that the valuation of a coalition C does not depend on the other coalitions present in the population.

Standard assumptions: It is usually assumed that the value of the empty coalition ∅ is 0 (i.e., v(∅) = 0); we will make that assumption throughout the class. Moreover, it is often the case that the value of each coalition is non-negative (when agents make profits) or else that the value of each coalition is non-positive (when the members share some costs). During this class, we will assume that for each coalition C, v(C) ≥ 0. However, most of the definitions and results can be easily adapted.

A first example of a TU game is the majority game. Assume that the number of agents n is odd and that the agents decide between two alternatives using a majority vote. Also assume that no agent is indifferent, i.e., an agent always strictly prefers one alternative over the other. We model this by assigning to a “winning coalition” the value 1 and to the other ones the value 0, i.e.,

v(C) = 1 if |C| > n/2, and v(C) = 0 otherwise.
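As an illustration (mine, not from the notes), a characteristic function can be represented in Python simply as a function from coalitions (frozensets of agents) to reals; the sketch below encodes the majority game, here instantiated for n = 5.

```python
def majority_game(n):
    """Return the characteristic function of the majority game with n agents (n odd)."""
    assert n % 2 == 1, "the majority game is defined here for an odd number of agents"
    def v(coalition):
        # A coalition is winning exactly when it holds a strict majority of the votes.
        return 1 if len(coalition) > n / 2 else 0
    return v

if __name__ == "__main__":
    v = majority_game(5)
    print(v(frozenset({1, 2, 3})))   # 1: three out of five agents is a strict majority
    print(v(frozenset({1, 2})))      # 0: a minority coalition loses
    print(v(frozenset()))            # 0: the empty coalition
```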

Some types of TU games

We now describe some types of valuation functions. First, we introduce a notion that will be useful on many occasions: the notion of marginal contribution. It represents the contribution of an agent when it joins a coalition.

1.2.2. DEFINITION. The marginal contribution of agent i ∈ N for a coalition C ⊆ N \ {i} is mc_i(C) = v(C ∪ {i}) − v(C).

The maximal marginal contribution mc_i^max = max_{C ⊆ N \ {i}} mc_i(C) can be seen as a threat that an agent can use against a coalition: the agent can threaten to leave its current coalition to join the coalition that produces mc_i^max, arguing that it is able to generate mc_i^max utils. The minimal marginal contribution mc_i^min = min_{C ⊆ N \ {i}} mc_i(C) is a minimum acceptable payoff: if agent i joins any coalition, the coalition will benefit by at least mc_i^min, hence agent i should get at least this amount.
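The following sketch (my own, using a small hypothetical two-agent game as input) computes marginal contributions and the quantities mc_i^max and mc_i^min by enumerating all coalitions not containing agent i.

```python
from itertools import combinations

def subsets(iterable):
    """All subsets of `iterable`, as frozensets (including the empty set)."""
    items = list(iterable)
    for k in range(len(items) + 1):
        for combo in combinations(items, k):
            yield frozenset(combo)

def marginal_contribution(v, coalition, i):
    """mc_i(C) = v(C ∪ {i}) − v(C) for a coalition C not containing i."""
    return v(coalition | {i}) - v(coalition)

def mc_max(v, agents, i):
    """Maximal marginal contribution of agent i over all coalitions C ⊆ N \\ {i}."""
    return max(marginal_contribution(v, C, i) for C in subsets(agents - {i}))

def mc_min(v, agents, i):
    """Minimal marginal contribution of agent i over all coalitions C ⊆ N \\ {i}."""
    return min(marginal_contribution(v, C, i) for C in subsets(agents - {i}))

if __name__ == "__main__":
    # Hypothetical two-agent example: cooperation creates a surplus of 2.
    table = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({1, 2}): 4}
    v = lambda C: table[frozenset(C)]
    N = frozenset({1, 2})
    print(mc_max(v, N, 1), mc_min(v, N, 1))   # 3 (joining {2}) and 1 (standing alone)
```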

Additive (or inessential): ∀C_1, C_2 ⊆ N with C_1 ∩ C_2 = ∅, v(C_1 ∪ C_2) = v(C_1) + v(C_2). When a TU game is additive, v(C) = ∑_{i∈C} v({i}), i.e., the worth of each coalition is the same whether its members cooperate or not: there is no gain in cooperation or any synergy between coalitions, which explains the alternative name (inessential) used for such games.

Monotone: ∀C_1 ⊆ C_2 ⊆ N, v(C_1) ≤ v(C_2). For example, the valuation function of the majority game is monotone: when more agents join a coalition, they cannot turn the coalition from a winning to a losing one. Many games are monotone; however, we can imagine non-monotone games. For instance, the overhead caused by costs of communication or the effort to cooperate may be such that adding another agent may decrease the value of a coalition. Another example features two agents that dislike each other: the productivity of a coalition may decrease when both of them are members of the coalition.

Superadditive: ∀C_1, C_2 ⊆ N with C_1 ∩ C_2 = ∅, v(C_1 ∪ C_2) ≥ v(C_1) + v(C_2); in other words, any pair of disjoint coalitions is best off by merging into one. Many games are superadditive. As we have assumed that the value of a coalition is non-negative, superadditivity implies monotonicity (but the converse is not necessarily true). In such games, social welfare is maximised by forming the grand coalition. Consequently, the agents have an incentive to form the grand coalition.

Subadditive: ∀C_1, C_2 ⊆ N with C_1 ∩ C_2 = ∅, v(C_1 ∪ C_2) ≤ v(C_1) + v(C_2): the agents are best off when they are on their own, i.e., cooperation is not desirable.

Convex games: A valuation function is convex if for all C ⊆ T and i ∉ T, v(C ∪ {i}) − v(C) ≤ v(T ∪ {i}) − v(T). In other words, a valuation function is convex when the marginal contribution of each player increases with the size of the coalition it joins. Convex valuation functions are superadditive.

Unconstrained: The valuation function can be superadditive for some coalitions and subadditive for others: some coalitions should merge while others should remain separate. This is the most difficult and interesting environment.

Solutions

The valuation function provides a value to a set of agents, not to individual agents. A payoff distribution x = (x_1, ..., x_n) describes how the worth of a coalition is shared between the agents, where x_i is the payoff of agent i.

It will be useful to talk about the payoff obtained by the members of a coalition, and we will use the notation x(C) = ∑_{i∈C} x_i.


Figure 1.1: What is solving TU games? (A TU game (N, v), with v : 2^N → R, is mapped to a payoff configuration (S, x), where S = {C_1, ..., C_k} ∈ S_N is a coalition structure with i ≠ j ⇒ C_i ∩ C_j = ∅, and x ∈ R^n is a payoff distribution.)

We can finally formalize the solution of a TU game (N, v) by introducing the concept of payoff configuration (PC). A payoff configuration (PC) is a pair (S, x) where S ∈ S_N is a CS and x is a payoff distribution. The CS answers the selection problem, while the payoff distribution answers the sharing problem. This is illustrated in Figure 1.1.

Let us now illustrate all these concepts with the three-player TU game described in Table 1.1. In this example, there are three agents named 1, 2 and 3. There are 7 possible coalitions and the value of each coalition is given in the table. There are 5 CSs, which are the following: {{1}, {2}, {3}}, {{1, 2}, {3}}, {{1}, {2, 3}}, {{2}, {1, 3}}, {{1, 2, 3}}. This game is monotone and superadditive, but it is not convex.

N = {1, 2, 3}
v({1}) = 0, v({2}) = 0, v({3}) = 0
v({1, 2}) = 90, v({1, 3}) = 80, v({2, 3}) = 70
v({1, 2, 3}) = 105

Table 1.1: An example of a TU game

What PC should be chosen? Should the agents form the grand coalition and share its value equally? The choice of the grand coalition can be justified by arguing that it is the coalition that generates the most utility for the society (the game is superadditive). However, is an equal share justified? Agent 3 could propose to agent 1 to form {1, 3} and to share the value of this coalition equally (hence, 40 for each agent). Actually, agent 2 can make a better offer to agent 1 by proposing an equal share of 45 if they form {1, 2}. Agent 3 could then propose to agent 1 to form {1, 3} and to let it get 46 (agent 3 would then have 34). Is there a PC that would be preferred by all agents at the same time? In this course, you will learn different ways to approach this problem. Unfortunately, as for many other games, we will see that there is no unique best solution.
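The claims made about this game can be verified mechanically. The sketch below (mine, not part of the notes) brute-forces the definitions of the previous subsection on the game of Table 1.1 and confirms that it is monotone and superadditive but not convex.

```python
from itertools import combinations

def subsets(items):
    items = list(items)
    for k in range(len(items) + 1):
        for combo in combinations(items, k):
            yield frozenset(combo)

# Characteristic function of the game in Table 1.1.
N = frozenset({1, 2, 3})
table = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
         frozenset({1, 2}): 90, frozenset({1, 3}): 80, frozenset({2, 3}): 70,
         frozenset({1, 2, 3}): 105}
v = lambda C: table[frozenset(C)]

def is_monotone(v, N):
    return all(v(C1) <= v(C2) for C1 in subsets(N) for C2 in subsets(N) if C1 <= C2)

def is_superadditive(v, N):
    return all(v(C1 | C2) >= v(C1) + v(C2)
               for C1 in subsets(N) for C2 in subsets(N) if not (C1 & C2))

def is_convex(v, N):
    # Marginal contributions must not decrease with the coalition being joined.
    return all(v(C | {i}) - v(C) <= v(T | {i}) - v(T)
               for i in N for T in subsets(N - {i}) for C in subsets(T))

print(is_monotone(v, N), is_superadditive(v, N), is_convex(v, N))  # True True False
```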


1.2.2 Rationality concepts

In this section, we discuss some desirable properties that link the coalition values to the agents’ individual payoffs. In other words, these properties are constraints that one would like to satisfy.

Feasible solution: First, one should not distribute more utility than is available. A payoff distribution x is feasible when ∑_{i∈N} x_i ≤ v(N).

Anonymity: A solution is independent of the names of the agents. This is a pretty mild requirement that will always be satisfied.

Efficiency: x(N) = v(N), i.e., the payoff distribution is an allocation of the whole worth of the grand coalition to all the players. In other words, no utility is lost at the level of the population. This is particularly relevant for superadditive games.

Individual rationality: An agent i will be a member of a coalition only when x_i ≥ v({i}), i.e., to be part of a coalition, a player must be better off than when it is on its own.

Group rationality: ∀C ⊆ N, x(C) ≥ v(C), i.e., the sum of the payoffs of a coalition should be at least the value of the coalition (there should not be any loss at the level of a coalition).

Pareto optimal payoff distribution: It may be desirable to have a payoff distribution where no agent can improve its payoff without lowering the payoff of another agent. More formally, a payoff distribution x is Pareto optimal iff there is no y ∈ R^n such that y_i > x_i for some i ∈ N and y_j ≥ x_j for all j ≠ i.

Reasonable from above: an agent should get at most its maximal threat, i.e., x_i ≤ mc_i^max.

Reasonable from below: an agent should get at least its minimum acceptable reward, i.e., x_i ≥ mc_i^min.

Some more notions will be helpful to discuss some solution concepts. The first is the notion of imputation, which is a payoff distribution that satisfies the minimal acceptable constraints.

1.2.3. DEFINITION. An imputation is a payoff distribution that is efficient and individually rational for all agents.

An imputation is a solution candidate for a payoff distribution, and it can also be used to object to a payoff distribution.

The second notion is the excess, which can be seen as an amount of complaint or as a potential strength, depending on the point of view.


1.2.4. DEFINITION. The excess of a coalition C given a payoff distribution x is e(C, x) = v(C) − x(C).

When e(C, x) > 0, the excess can be seen as an amount of complaint for the current members of C, as some part of the value of the coalition is lost. When C is not actually formed, some agent i ∈ C can also see the excess as a potential increase of its payoff if C were to be formed. Some stability concepts (the kernel and the nucleolus, see below) are based on the excess of coalitions. Another stability concept can also be defined in terms of the excess.
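As a small illustration (mine), the sketch below takes the game of Table 1.1 and the equal split of v(N) as a hypothetical payoff distribution, checks that it is an imputation, and computes the excess of every coalition.

```python
from itertools import combinations

N = [1, 2, 3]
# Characteristic function of the game of Table 1.1.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 90, frozenset({1, 3}): 80, frozenset({2, 3}): 70,
     frozenset({1, 2, 3}): 105}

def coalitions(agents):
    for k in range(1, len(agents) + 1):
        for combo in combinations(agents, k):
            yield frozenset(combo)

def is_imputation(x, v, N):
    """Efficient (x(N) = v(N)) and individually rational (x_i >= v({i}))."""
    efficient = abs(sum(x[i] for i in N) - v[frozenset(N)]) < 1e-9
    individually_rational = all(x[i] >= v[frozenset({i})] for i in N)
    return efficient and individually_rational

def excesses(x, v, N):
    """e(C, x) = v(C) - x(C) for every non-empty coalition C."""
    return {C: v[C] - sum(x[i] for i in C) for C in coalitions(N)}

x = {1: 35, 2: 35, 3: 35}                 # the equal split of v(N) = 105
print(is_imputation(x, v, N))             # True
print(max(excesses(x, v, N).values()))    # 20: coalition {1, 2} has a complaint of 90 - 70
```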


Bibliography

[1] Georgios Chalkiadakis, Edith Elkind, and Michael Wooldridge. Computational Aspects of Cooperative Game Theory. Morgan & Claypool, 2011.

[2] James P. Kahan and Amnon Rapoport. Theories of Coalition Formation. Lawrence Erlbaum Associates, Publishers, 1984.

[3] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. The MIT Press, 1994.

[4] Bezalel Peleg and Peter Sudhölter. Introduction to the Theory of Cooperative Games. Springer, 2nd edition, 2007.

[5] Tuomas W. Sandholm, Kate S. Larson, Martin Andersson, Onn Shehory, and Fernando Tohmé. Coalition structure generation with worst case guarantees. Artificial Intelligence, 111(1–2):209–238, 1999.

[6] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 2nd edition, 1947.


Lecture 2
The Core

Let us assume that we have a TU game (N, v) and that we want to form the grand coalition. We model cooperation between all the agents in N and we focus on the sharing problem: how to distribute the payoff v(N) to all agents. The idea for defining one solution is to consider a payoff distribution in which no agent has an incentive to change coalition to gain additional payoff. This is what is called stability.

The Core, which was first introduced by Gillies [2], is the most attractive and natural way to define stability. A payoff distribution is in the Core when no group of agents has any incentive to form a different coalition. This is a strong condition for stability, so strong that some games may have an empty core. In this lecture, we will first introduce the definition of the core and consider some graphical representations for games with up to three players. Then, we will present some games that are guaranteed to have a non-empty core. Finally, we will present a theorem that characterizes games with a non-empty core: the Bondareva-Shapley theorem. We will give some intuition about the proof, relying on results from linear programming, and we will use this theorem to show that market games have a non-empty core.

2.1 Definition and graphical representation for games with up to three players

We consider a TU game (N, v). We assume that all the agents cooperate by forming the grand coalition and that they receive a payoff distribution x. We want the grand coalition to be stable, i.e., no agent should have an incentive to leave the grand coalition. We will say that x is in the core of the game (N, v) when no group of agents has an incentive to leave the grand coalition and form a separate coalition.

2.1.1. DEFINITION. [Core] A payoff distribution x ∈ R^n is in the Core of a TU game (N, v) iff x is an imputation that is group rational, i.e.,

Core(N, v) = {x ∈ R^n | ∑_{i∈N} x_i = v(N) ∧ ∀C ⊆ N, x(C) ≥ v(C)}.


A payoff distribution is in the Core when no group of agents has any interest in rejecting it, i.e., no group of agents can gain by forming a different coalition. Note that this condition has to be true for all subsets of N (group rationality). As a special case, this ensures individual rationality. Another way to define the Core is in terms of excess:

2.1.2. DEFINITION. [Core] The Core of a TU game (N, v) is the set of payoff distributions x ∈ R^n such that ∀C ⊆ N, e(C, x) ≤ 0.

In other words, a payoff distribution is in the Core when there is no coalition with positive excess. This definition is attractive as it shows that no coalition has any complaint: each coalition’s demand can be granted.

To be in the core, a payoff distribution must satisfy a set of 2^n weak linear inequalities: for each coalition C ⊆ N, we have v(C) ≤ x(C). The Core is therefore closed and convex, and we can try to represent it geometrically.

Let us consider the following two-player game ({1, 2}, v) where v({1}) = 5, v({2}) = 5, and v({1, 2}) = 20. The core of the game is a segment defined as follows: Core(N, v) = {(x_1, x_2) ∈ R^2 | x_1 ≥ 5, x_2 ≥ 5, x_1 + x_2 = 20}, and it is represented in Figure 2.1. This example shows that, although the game is symmetric, most of the payoffs in the core are not fair. Core allocations focus on stability only and they may not be fair.

Figure 2.1: Example of a core allocation (the segment x_1 + x_2 = 20 with x_1 ≥ 5 and x_2 ≥ 5 in the (x_1, x_2) plane).
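A core membership test is just a finite set of inequalities; the sketch below (my own illustration, not from the notes) checks a few candidate payoff vectors for the two-player game above.

```python
from itertools import combinations

def in_core(x, v, N, tol=1e-9):
    """x is in the core iff x(N) = v(N) and x(C) >= v(C) for every coalition C."""
    agents = list(N)
    if abs(sum(x[i] for i in agents) - v[frozenset(agents)]) > tol:
        return False
    for k in range(1, len(agents) + 1):
        for combo in combinations(agents, k):
            C = frozenset(combo)
            if sum(x[i] for i in C) < v[C] - tol:
                return False
    return True

# Two-player game of Figure 2.1: v({1}) = v({2}) = 5, v({1, 2}) = 20.
v = {frozenset({1}): 5, frozenset({2}): 5, frozenset({1, 2}): 20}
print(in_core({1: 10, 2: 10}, v, {1, 2}))   # True: the symmetric split
print(in_core({1: 16, 2: 4}, v, {1, 2}))    # False: agent 2 gets less than v({2})
print(in_core({1: 15, 2: 5}, v, {1, 2}))    # True: an extreme but still stable point
```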

It is possible to represent the core for games with three agents. For a game ({1, 2, 3}, v), the efficiency condition is v({1, 2, 3}) = x_1 + x_2 + x_3, which defines a plane in a 3-dimensional space. On this plane, we can draw the conditions for individual rationality


and for group rationality. Each of these conditions partitions the space into two regions separated by a line: one region is compatible with a core allocation, the other region is not. The core is the intersection of all the compatible regions. Figure 2.2 represents the core of a three-player game.

The game used in the figure is defined by v(∅) = 0, v({1}) = 1, v({2}) = 0, v({3}) = 1, v({1, 2}) = 4, v({1, 3}) = 3, v({2, 3}) = 5, v({1, 2, 3}) = 8.

Figure 2.2: Example of a three-player game: the core is the area of the efficiency plane delimited by the constraint lines x_1 = 1, x_2 = 0, x_3 = 1, x_1 + x_2 = 4, x_1 + x_3 = 3 and x_2 + x_3 = 5.

There are, however, multiple concerns associated with using the notion of the Core. First and foremost, the Core can be empty: the conflicts captured by the characteristic function cannot satisfy all the players simultaneously. When the Core is empty, at least one player is dissatisfied by the utility allocation and therefore blocks the coalition. Let us consider the following example from [3]: v({A,B}) = 90, v({A,C}) = 80, v({B,C}) = 70, and v(N) = 120. In this case, the Core contains a single PC: the grand coalition forms and the associated payoff distribution is (50, 40, 30). If v(N) is increased, the size of the Core also increases. But if v(N) decreases, the Core becomes empty.
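This example can be checked numerically with the linear program that will be introduced in Section 2.3.1. The sketch below is mine and assumes scipy is available; it minimizes x(N) subject to the group-rationality constraints (with singletons valued 0, an assumption of the illustration) and recovers the value 120 = v(N) together with the allocation (50, 40, 30).

```python
import numpy as np
from scipy.optimize import linprog

# Example from Kahan and Rapoport [3]: v({A,B}) = 90, v({A,C}) = 80, v({B,C}) = 70, v(N) = 120.
# Minimise x(N) subject to x(C) >= v(C) for every proper coalition (singletons valued 0 here).
coalitions = [((0, 1), 90), ((0, 2), 80), ((1, 2), 70), ((0,), 0), ((1,), 0), ((2,), 0)]
A_ub, b_ub = [], []
for members, value in coalitions:
    row = np.zeros(3)
    row[list(members)] = -1.0        # -x(C) <= -v(C)  <=>  x(C) >= v(C)
    A_ub.append(row)
    b_ub.append(-value)

res = linprog(c=np.ones(3), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 3, method="highs")
print(res.fun)   # 120.0: equals v(N), so the core is non-empty
print(res.x)     # approximately [50. 40. 30.], the unique core allocation
```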

Exercise: How can you modify the game in Figure 2.2 so that the core becomes empty?

2.2 Games with non-empty core

In the previous section, we saw that some games have an empty core. In this section, we provide examples of some classes of games that are guaranteed to have a non-empty core. In the following, we will show that convex games and minimum cost spanning tree games have a non-empty core.

We start by introducing an example that models bankruptcy: individuals have claims on a resource, but the value of the resource is not sufficient to meet all of the claims (e.g., a man leaves behind an estate worth less than the value of his debts). The problem is then to share the value of the estate among all the claimants. The value of a coalition


C is defined as the amount of the estate which is not claimed by the complement of C; in other words, v(C) is the amount of the estate that the coalition C is guaranteed to obtain.

2.2.1. DEFINITION. [Bankruptcy game] A bankruptcy game is a triple (N, E, c) where N is the set of claimants, E ∈ R_+ is the estate and c ∈ R^n_+ is the claim vector (i.e., c_i is the claim of the i-th claimant). The valuation function v : 2^N → R is defined as follows: for a coalition of claimants C, v(C) = max{0, E − ∑_{i∈N\C} c_i}.
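A sketch (mine, on a hypothetical instance with an estate of 100 and claims 60, 50 and 40) of the bankruptcy valuation function:

```python
def bankruptcy_value(E, claims, coalition):
    """v(C) = max(0, E - sum of the claims of the agents outside C)."""
    outside = set(claims) - set(coalition)
    return max(0.0, E - sum(claims[j] for j in outside))

if __name__ == "__main__":
    # Hypothetical instance: an estate of 100 facing claims of 60, 50 and 40.
    E = 100
    claims = {1: 60, 2: 50, 3: 40}
    print(bankruptcy_value(E, claims, {1}))        # 10 = max(0, 100 - 90)
    print(bankruptcy_value(E, claims, {1, 2}))     # 60 = max(0, 100 - 40)
    print(bankruptcy_value(E, claims, {1, 2, 3}))  # 100: the whole estate
```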

First, we show that a bankruptcy game is convex.

2.2.2. THEOREM. Every bankruptcy game is convex.

Proof. Let (N, E, c) be a bankruptcy game. Let S ⊆ T ⊆ N and i ∉ T. We want to show that

v(S ∪ {i}) − v(S) ≤ v(T ∪ {i}) − v(T),

or equivalently that

v(S ∪ {i}) + v(T) ≤ v(T ∪ {i}) + v(S).

For all C ⊆ N, we write c(C) = ∑_{j∈C} c_j; then we can write:

E − ∑_{j∈N\C} c_j = E − ∑_{j∈N} c_j + ∑_{j∈C} c_j = E − c(N) + c(C).

Let ∆ = E − ∑_{j∈N} c_j = E − c(N). We have E − ∑_{j∈N\C} c_j = ∆ + c(C).

First, observe that ∀(x, y) ∈ R^2, max{0, x} + max{0, y} = max{0, x, y, x + y}.

v(S ∪ {i}) + v(T) = max{0, E − ∑_{j∈N\(S∪{i})} c_j} + max{0, E − ∑_{j∈N\T} c_j}
= max{0, ∆ + c(S) + c_i} + max{0, ∆ + c(T)}
= max{0, ∆ + c(S) + c_i, ∆ + c(T), 2∆ + c(S) + c_i + c(T)}

v(T ∪ {i}) + v(S) = max{0, E − ∑_{j∈N\(T∪{i})} c_j} + max{0, E − ∑_{j∈N\S} c_j}
= max{0, ∆ + c(T) + c_i} + max{0, ∆ + c(S)}
= max{0, ∆ + c(T) + c_i, ∆ + c(S), 2∆ + c(T) + c_i + c(S)}

Then, note that since S ⊆ T, we have c(S) ≤ c(T), and therefore

max{0, ∆ + c(T) + c_i, ∆ + c(S), 2∆ + c(T) + c_i + c(S)} = max{0, ∆ + c(T) + c_i, 2∆ + c(T) + c_i + c(S)}.

We also have:
∆ + c(S) + c_i ≤ ∆ + c(T) + c_i,
∆ + c(T) ≤ ∆ + c(T) + c_i.

It follows that

max{0, ∆ + c(S) + c_i, ∆ + c(T), 2∆ + c(S) + c_i + c(T)} ≤ max{0, ∆ + c(T) + c_i, 2∆ + c(T) + c_i + c(S)},

which proves that v(S ∪ {i}) + v(T) ≤ v(T ∪ {i}) + v(S). □

Now, we show an important property of convex games: they are guaranteed to have a non-empty core. We define a payoff distribution where each agent gets its marginal contribution, given that the agents enter the grand coalition one at a time in a given order, and we show that this payoff distribution is an imputation that is group rational.

2.2.3. THEOREM. A convex game has a non-empty core.

Proof. Let us assume a convex game (N, v). Let us define a payoff vector x in the following way: x_1 = v({1}) and for all i ∈ {2, . . . , n}, x_i = v({1, 2, . . . , i}) − v({1, 2, . . . , i − 1}). In other words, the payoff of the i-th agent is its marginal contribution to the coalition consisting of all previous agents in the order {1, 2, . . . , i − 1}.

Let us prove that the payoff vector is efficient by writing up and summing the payoffs of all agents:

x_1 = v({1})
x_2 = v({1, 2}) − v({1})
. . .
x_i = v({1, 2, . . . , i}) − v({1, 2, . . . , i − 1})
. . .
x_n = v({1, 2, . . . , n}) − v({1, 2, . . . , n − 1})

By summing these n equalities, we obtain the efficiency condition: ∑_{i∈N} x_i = v({1, 2, . . . , n}) = v(N).

Let us prove that the payoff vector is individually rational. By convexity, we have v({i}) − v(∅) ≤ v({1, 2, . . . , i}) − v({1, 2, . . . , i − 1}), hence v({i}) ≤ x_i.

Finally, let us prove that the payoff vector is group rational. Let C ⊆ N, C = {a_1, a_2, . . . , a_k}, and let us assume that a_1 < a_2 < . . . < a_k. Note that {a_1, a_2, . . . , a_{l−1}} ⊆ {1, 2, . . . , a_l − 1} for every l. Using the convexity assumption, we obtain the following:


v({a_1}) − v(∅) ≤ v({1, 2, . . . , a_1}) − v({1, 2, . . . , a_1 − 1}) = x_{a_1}
v({a_1, a_2}) − v({a_1}) ≤ v({1, 2, . . . , a_2}) − v({1, 2, . . . , a_2 − 1}) = x_{a_2}
. . .
v({a_1, a_2, . . . , a_l}) − v({a_1, a_2, . . . , a_{l−1}}) ≤ v({1, 2, . . . , a_l}) − v({1, 2, . . . , a_l − 1}) = x_{a_l}
. . .
v({a_1, a_2, . . . , a_k}) − v({a_1, a_2, . . . , a_{k−1}}) ≤ v({1, 2, . . . , a_k}) − v({1, 2, . . . , a_k − 1}) = x_{a_k}

By summing these k inequalities, we obtain v(C) = v({a_1, a_2, . . . , a_k}) ≤ ∑_{l=1}^{k} x_{a_l} = x(C), which is the group rationality condition. □

Consequently, if a game is convex, we know that we can guarantee a stable payoff distribution. Moreover, we now know one easy way to compute one of these stable payoffs.
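The construction used in the proof can be sketched in a few lines of Python (my own illustration): take the agents in the order 1, …, n, pay each one its marginal contribution to its predecessors, and check the resulting vector against the core constraints. The bankruptcy instance below is hypothetical; by Theorem 2.2.2 it is convex, so the marginal vector should indeed be in the core.

```python
from itertools import combinations

def marginal_vector(v, order):
    """Pay each agent its marginal contribution to the agents preceding it in `order`."""
    x, previous = {}, frozenset()
    for i in order:
        x[i] = v(previous | {i}) - v(previous)
        previous = previous | {i}
    return x

def in_core(x, v, agents, tol=1e-9):
    agents = list(agents)
    if abs(sum(x.values()) - v(frozenset(agents))) > tol:
        return False
    return all(sum(x[i] for i in combo) >= v(frozenset(combo)) - tol
               for k in range(1, len(agents) + 1)
               for combo in combinations(agents, k))

if __name__ == "__main__":
    # A bankruptcy game (hence convex): estate 100, claims 60, 50, 40.
    E, claims = 100, {1: 60, 2: 50, 3: 40}
    v = lambda C: max(0.0, E - sum(claims[j] for j in claims if j not in C))
    x = marginal_vector(v, order=[1, 2, 3])
    print(x)                       # {1: 10.0, 2: 50.0, 3: 40.0}
    print(in_core(x, v, claims))   # True
```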

Another example of games that have a non-empty core is the class of minimum cost spanning tree games. Such a game features a set of houses that have to be connected to a power plant. The houses can be directly linked to the power plant, or to another house. Let N be the set of houses, and let 0 denote the power plant. Let us define N* = N ∪ {0}. For (i, j) ∈ N*^2, i ≠ j, the cost of connecting i and j by the edge e_{ij} is c_{ij}. For a coalition of houses C ⊆ N, let Γ(C) be a minimum cost spanning tree over the node set C ∪ {0}. In other words, when the houses form a coalition C, they try to minimize the cost of connecting them to the power plant. Let (N, c) be the corresponding cost game in which the cost of coalition C ⊆ N is defined as c(C) = ∑_{(i,j)∈Γ(C)} c_{ij}.


2.2.4. THEOREM. Every minimum cost spanning tree game has a non-empty core.

Proof. Let us define a cost distribution x and then show that x is in the core. Let T = (N*, E_N) be a minimum cost spanning tree of the complete graph on N* with edge costs (c_{ij}). Let i be a customer. Since T is a tree, there is a unique path (0, a_1, . . . , a_k, i) from 0 to i. The cost paid by agent i is defined by x_i = c_{a_k, i}.

This cost allocation is efficient by construction of x. We need to show that the cost allocation is group rational, i.e., for every coalition S, we have x(S) ≤ c(S) (it is a cost, which explains the direction of the inequality).

Let S ⊂ N and let T_S = (S ∪ {0}, E_S) be a minimum cost spanning tree of the graph restricted to S ∪ {0}. Let us extend the tree T_S to a graph T_S^+ = (N*, E_N^+) by adding the remaining customers N \ S: for each customer i ∈ N \ S, we add the edge of E_N ending in i, i.e., we add the edge (a_k, i). The graph T_S^+ has |S| + |N \ S| edges and is connected. Hence, T_S^+ is a spanning tree. Now, we note that c(S) + x(N \ S) = ∑_{e_{ij}∈E_N^+} c_{ij} ≥ ∑_{e_{ij}∈E_N} c_{ij} = c(N) = x(N). The inequality is due to the fact that T_S^+ is a spanning tree, while T is a minimum spanning tree. It follows that x(S) ≤ c(S). □
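The cost allocation used in the proof (charge each house the cost of the edge linking it to its parent in a minimum cost spanning tree rooted at the plant 0) can be computed as follows; this is a sketch of mine on a hypothetical three-house instance, using a plain implementation of Prim's algorithm.

```python
import heapq

def mst_parent_edges(nodes, cost, root=0):
    """Prim's algorithm: return, for each non-root node, (parent, edge cost) in a minimum
    cost spanning tree of the complete graph on `nodes` with symmetric edge costs `cost`."""
    in_tree, parent = {root}, {}
    frontier = [(cost[(root, j)], root, j) for j in nodes if j != root]
    heapq.heapify(frontier)
    while len(in_tree) < len(nodes):
        c, i, j = heapq.heappop(frontier)
        if j in in_tree:
            continue
        in_tree.add(j)
        parent[j] = (i, c)
        for k in nodes:
            if k not in in_tree:
                heapq.heappush(frontier, (cost[(j, k)], j, k))
    return parent

if __name__ == "__main__":
    # Hypothetical instance: houses 1, 2, 3 and the power plant 0.
    nodes = [0, 1, 2, 3]
    raw = {(0, 1): 4, (0, 2): 6, (0, 3): 7, (1, 2): 3, (1, 3): 5, (2, 3): 2}
    cost = {**raw, **{(j, i): c for (i, j), c in raw.items()}}   # make costs symmetric
    allocation = {i: c for i, (p, c) in mst_parent_edges(nodes, cost).items()}
    print(allocation)                # each house pays the edge linking it to its parent
    print(sum(allocation.values()))  # total equals the cost of the minimum spanning tree
```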

2.3 Characterization of games with a non-empty core

We saw that the core may be empty, but that some classes of games have a non-empty core. The next issue is whether we can characterize the games with a non-empty core. It turns out that the answer is yes, and the characterization was found independently by Bondareva (1963) and Shapley (1967), resulting in what is now known as the Bondareva-Shapley theorem. This result connects results from linear programming with the concept of the core. In the following, we will first write the definition of elements of the core as an optimization problem. Then, we will briefly introduce linear programming and we will use a result to characterize the games with a non-empty core, which is the Bondareva-Shapley theorem. Finally, we will apply the Bondareva-Shapley theorem to market games.

2.3.1 Expressing the core as an optimization problem

The main idea is to express the core as the solution set of a constrained linear optimization problem where the conditions imposed by group rationality are the constraints of the optimization problem and the objective function is the sum of the payoffs of the agents. Let us consider a TU game (N, v), let x denote a payoff distribution, and let us consider the following optimization problem:

(LP)   min x(N)   subject to x(C) ≥ v(C) for all C ⊆ N, C ≠ ∅

The linear constraints are the constraints of group rationality: for each coalition C ⊆ N, we want x(C) ≥ v(C). Satisfying these constraints only is easy: one simply needs to choose large enough values for each x_i. If an element y ∈ R^n satisfies all these constraints (this is called a feasible solution), then y is group rational. The group rationality constraint for the grand coalition guarantees that we have y(N) ≥ v(N). For y to be in the core, it also needs to be efficient. This forces us to choose values that are not too large for the y_i. The idea is then to search for the elements that minimize y(N) = ∑_{i∈N} y_i.

When solving this optimization problem, two things may happen: either the minimum value found is v(N), or it is a value strictly greater.


• In the first case, the solutions x of the optimization problem are elements of the core: they satisfy the constraints – hence, they are group rational – and since the minimum is v(N), x is efficient as well.

• In the second case, it is not possible to satisfy both group rationality and efficiency, and the core of the game is empty.

• There is no other case, as a solution of the optimization problem satisfies all the constraints, in particular the one for the grand coalition.

The optimization problem we wrote is called a linear program. It minimizes a linear function of a vector x subject to a set of constraints, where each constraint is an inequality: a linear combination of x is larger than a constant. This is a well-established problem in optimization, and in the following we give a brief introduction to such problems.

2.3.2 A very brief introduction to linear programming

The goal of this section is to briefly introduce linear programming, which is a special kind of optimization problem: maximizing a linear function subject to linear constraints. More formally, a linear program has the following form:

max c^T x   subject to Ax ≤ b, x ≥ 0

where

• x ∈ R^n is the vector of n variables,

• c ∈ R^n defines the objective function,

• A is an m × n matrix,

• b ∈ R^m is a vector of size m.

A and b represent the linear constraints. Let us look at a simple example:

maximize 8x_1 + 10x_2 + 5x_3
subject to 3x_1 + 4x_2 + 2x_3 ≤ 7   (1)
           x_1 + x_2 + x_3 ≤ 2      (2)

In this example, we can recognize the different components A, b and c to be:

A = (3 4 2; 1 1 1),   b = (7, 2)^T,   c = (8, 10, 5)^T.

We say that a solution is feasible when it satisfies the constraints. For our example, we have:


• 〈0, 1, 1〉 is feasible, with objective function value 15.

• 〈1, 1, 0〉 is feasible, with objective function value 18, hence it is a better solution.

Next, we introduce the notion of the dual of an LP: it is another linear program whose goal is to find an upper bound on the objective function of the original LP. Let us first look at our example and consider the following two linear combinations of the constraints:

(1) × 1 + (2) × 6 ⇒ 9x_1 + 10x_2 + 8x_3 ≤ 19
(1) × 2 + (2) × 2 ⇒ 8x_1 + 10x_2 + 6x_3 ≤ 18

By taking linear combinations of the constraints, we are able to form a new constraint that provides an upper bound for the objective function. The reason is that, in the new constraint we formed, the coefficients of x_1, x_2 and x_3 are larger than or equal to those of the objective function, hence the right-hand side must be an upper bound for the objective function. Using the second new constraint, we observe that the solution cannot be better than 18. But we already found one feasible solution with a value of 18, so we have solved the problem!
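The same example can be solved numerically; the sketch below is mine and assumes scipy is installed. Since scipy.optimize.linprog minimizes, the objective is negated, and the reported optimum matches the bound of 18 derived above.

```python
import numpy as np
from scipy.optimize import linprog

# maximize 8x1 + 10x2 + 5x3  subject to  3x1 + 4x2 + 2x3 <= 7,  x1 + x2 + x3 <= 2,  x >= 0.
c = np.array([8.0, 10.0, 5.0])
A_ub = np.array([[3.0, 4.0, 2.0],
                 [1.0, 1.0, 1.0]])
b_ub = np.array([7.0, 2.0])

res = linprog(c=-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(-res.fun)   # 18.0: the optimum, attained for instance at x = (1, 1, 0)
```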

Hence, the idea of the dual is to find a new constraint that is a linear combination of all the constraints of the primal: y^T A x ≤ y^T b (where y ∈ R^m, y ≥ 0). This new constraint must generate the lowest possible value – as y^T b will be the upper bound on a solution – and the coefficients of y^T A must be larger than the coefficients of the objective function, i.e., y^T A ≥ c^T. Hence, the dual can be written in the following way:

Primal:   max c^T x   subject to Ax ≤ b, x ≥ 0
Dual:     min y^T b   subject to y^T A ≥ c^T, y ≥ 0

The following theorems link the solution of the primal and the dual problems.

2.3.1. THEOREM (DUALITY THEOREM). When the primal and the dual are feasible, they have optimal solutions with equal values of their objective functions.

2.3.2. THEOREM (WEAK LP DUALITY). For a pair x, y of feasible solutions of a primal LP and its dual LP, the objective functions are mutual bounds. For the minimization form that we will use for the core (min c^T x subject to Ax ≥ b, x ≥ 0, whose dual is max y^T b subject to y^T A ≤ c^T, y ≥ 0), this reads

y^T b ≤ c^T x.

If c^T x = y^T b (equality holds), then these two solutions are optimal for both LPs.

Proof. We have y^T A x ≥ y^T b since Ax ≥ b and y ≥ 0, and y^T A x ≤ c^T x since y^T A ≤ c^T and x ≥ 0. It is immediate that equality of the objective functions implies optimality. □


2.3.3 Linear programming and the core

Now, let us go back to the core. The linear programming problem that corresponds to the core is:

(LP)   min x(N)   subject to x(C) ≥ v(C) for all C ⊆ N, C ≠ ∅

First, this formulation is not exactly the one we have just introduced, since it is a minimization and the constraints are of the form “a linear combination of x is greater than a constant”. It should not be difficult to get convinced that these two kinds of optimization problems are symmetric and have similar properties. In terms of the conventional way to write the primal, we identify the following components:

• the vector c ∈ Rn is the vector 〈1, 1, . . . , 1〉.

• the vector b ∈ R^{2^n} contains the value of each coalition, i.e., we can index the elements of b by coalitions, and the elements of b are b_C = v(C).

• The matrix A has 2^n rows (one for each coalition) and n columns (one for each agent). The entries of A are either 0 or 1. Let us consider one coalition C; the corresponding constraint for the core is ∑_{k∈C} x_k ≥ v(C). Let us say that the value of coalition C appears in row i of vector b, i.e., the constraint about C is expressed in the i-th row of Ax ≥ b. Consequently, the i-th row of A encodes which agents are present in coalition C: the entry A(i, j) is 1 if j ∈ C and 0 otherwise.

Now, we write the dual, which maximises y^T b over all vectors y ∈ R^{2^n}_+:

max y^T b   subject to y^T A ≤ c^T, y ≥ 0

Now, let us introduce some notations to help us write the matrix A.

2.3.3. DEFINITION. [Characteristic vector] Let C ⊆ N. The characteristic vector χ_C ∈ R^N of C is defined by χ^i_C = 1 if i ∈ C and χ^i_C = 0 if i ∈ N \ C.

The characteristic vector of a coalition simply encodes which agents are present in the coalition. For example, for n = 4, χ_{2,4} = 〈0, 1, 0, 1〉. This will be helpful to express the rows of A.

2.3.4. DEFINITION. [Map] A map is a function 2^N \ ∅ → R_+ that gives a non-negative weight to each coalition.

Page 20: Lectures in Cooperative Game Theory

2.3. Characterization of games with a non-empty core 21

A map can be seen as a non-negative weight that is given to each coalition. Hence, the solution y of the dual can be called a map.

2.3.5. DEFINITION. [Balanced map] A function λ : 2^N \ ∅ → R_+ is a balanced map iff ∑_{C⊆N} λ(C) χ_C = χ_N. For convenience, we will write λ(C) = λ_C.

We provide an example in Table 2.1 for a three-player game. λ_C is a scalar and χ_C is a vector of R^n, so the condition states the equality between a sum of vectors of R^n (one per non-empty coalition) and χ_N ∈ R^n, which is nothing but the vector of R^n containing the value 1 in each entry. This will be useful to write the constraints of the dual (we will give further explanation in the following).

λ_C = 1/2 if |C| = 2, and λ_C = 0 otherwise.

                      i = 1    i = 2    i = 3
λ_{1,2} χ^i_{1,2}      1/2      1/2      0
λ_{1,3} χ^i_{1,3}      1/2      0        1/2
λ_{2,3} χ^i_{2,3}      0        1/2      1/2

Each column sums up to 1: (1/2) χ_{1,2} + (1/2) χ_{1,3} + (1/2) χ_{2,3} = χ_{1,2,3}.

Table 2.1: Example of a balanced map for n = 3

One can interpret a balanced map as the percentage of time spent by each agent in each possible coalition: for each agent i, the weights of all coalitions containing agent i must sum up to one.

Given these notational tools, let us re-write the dual.

• For the objective function: y^T b is the dot product of the variable y with the value of each coalition. If we use coalitions to index the entries of the vector y – or if we say we are using a map y – the objective function can be written as ∑_{C⊆N} y_C v(C).

• For the constraints, we have y^T A ≤ c^T. First, c^T is a vector of ones. It is also the vector χ_N, as all the agents are present in N.

Then we have the product of y^T and A: the result of this product is a vector of size n. Let us consider the i-th entry of the product: it is the dot product of y^T and the i-th column of A (both vectors are of size 2^n and we can index them by coalitions). We can write this as ∑_{C⊆N} y_C A(C, i), and we note that A(C, i) = 1 if i ∈ C and 0 otherwise. This is where our notation comes in handy: we can write ∑_{C⊆N} y_C A(C, i) = ∑_{C⊆N} y_C χ^i_C. Writing this for the entire vector, we finally have y^T A = ∑_{C⊆N} y_C χ_C.

Finally, we have shown that the constraints become ∑_{C⊆N} y_C χ_C ≤ χ_N.


With our notation, we can now write the dual of LP as:

(DLP)   max ∑_{C⊆N} y_C v(C)   subject to ∑_{C⊆N} y_C χ_C ≤ χ_N and y_C ≥ 0 for all C ⊆ N, C ≠ ∅.

Let us consider a game (N, v) with a non-empty core. This means that the primal is feasible (there are payoff distributions that satisfy the constraints) and that its optimal value is v(N), i.e., the optimal payoff distributions are efficient.

Note that the dual is also feasible: since one can always define a balanced map, we are guaranteed that there exists some y ∈ R^{2^n}_+ such that ∑_{C⊆N} y_C χ_C ≤ χ_N.

Since v(N) is the minimum of the primal, by Theorem 2.3.2 it is an upper bound on the dual, and it follows that max ∑_{C⊆N} y_C v(C) ≤ v(N). With this, we conclude that if a game has a non-empty core, then max ∑_{C⊆N} y_C v(C) ≤ v(N). To characterize games with a non-empty core, we need to prove the converse. First, let us give a name to our condition.

2.3.6. DEFINITION. [Balanced game] A game is balanced iff for each balanced map λ we have ∑_{C⊆N, C≠∅} λ(C) v(C) ≤ v(N).

Let us consider a game (N, v) that is balanced, i.e., for each balanced map λ, we have ∑_{C⊆N} λ_C v(C) ≤ v(N). We know that the dual is feasible (using any balanced map). A balanced map satisfies the constraints with equality (each constraint is an inequality, but a balanced map attains equality). Since the coefficients v(C) are non-negative, any feasible solution of the dual can be increased to a balanced map without decreasing the objective, so the optimum of the dual is attained by balanced maps and, by balancedness, cannot exceed v(N). Moreover, the balanced map that puts weight 1 on the grand coalition achieves the value v(N). Hence, v(N) is the optimal value of the dual.

Now, let us go back to the primal. The primal is feasible (it suffices to choose large enough values for each x_i). Using Theorem 2.3.2, we know that v(N), the optimal value of the dual, is a lower bound for the primal, i.e., v(N) ≤ x(N) for any feasible x (this also follows from the group rationality constraint for the grand coalition). By the duality theorem (Theorem 2.3.1), the primal attains this value, so v(N) is also the optimal value of the primal. Hence, the core is non-empty.

We have thus proved a characterization of games with a non-empty core. This result was established independently by Bondareva (1963) and Shapley (1967).

2.3.7. THEOREM (BONDAREVA-SHAPLEY THEOREM). A TU game has a non-empty core iff it is balanced.

This theorem completely characterizes the set of games with a non-empty core. However, it is not always easy or computationally feasible to check whether a game is balanced.
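In practice, one can bypass balanced maps and test core non-emptiness directly by solving the primal LP of Section 2.3.1. The sketch below is mine (it assumes scipy is available) and applies the test to the three-player majority game of Lecture 1, whose core is empty: the LP optimum is 1.5 > v(N) = 1.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def core_is_nonempty(n, v, tol=1e-9):
    """Solve min x(N) s.t. x(C) >= v(C) for all non-empty C; the core is non-empty
    iff the optimal value equals v(N) (the primal LP of Section 2.3.1)."""
    agents = list(range(n))
    A_ub, b_ub = [], []
    for k in range(1, n + 1):
        for combo in combinations(agents, k):
            row = np.zeros(n)
            row[list(combo)] = -1.0          # -x(C) <= -v(C)
            A_ub.append(row)
            b_ub.append(-v(frozenset(combo)))
    res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n, method="highs")
    return res.fun <= v(frozenset(agents)) + tol, res.fun

if __name__ == "__main__":
    # Three-player majority game: a coalition wins iff it has at least two members.
    v = lambda C: 1.0 if len(C) >= 2 else 0.0
    print(core_is_nonempty(3, v))   # (False, 1.5): the LP optimum exceeds v(N) = 1
```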


2.3.4 Application to market games

One example of coalitional games coming from the field of economics is the market game. This game models an environment where there is a given, fixed quantity of a set of continuous goods. Initially, these goods are distributed among the players in an arbitrary way. The quantity of each good held by a player is called its endowment of the good. Each agent i has a valuation function that takes as input a vector describing its endowment of each good and outputs a utility for possessing these goods (the agents do not perform any transformation, i.e., the goods are conserved as they are). To increase their utility, the agents are free to trade goods. When the agents form a coalition, they try to allocate the goods such that the social welfare of the coalition (i.e., the sum of the utilities of the members of the coalition) is maximized. We now provide the formal definition.

A market is a quadruple (N,M,A, F ) where

• N is a set of traders

• M is a set of m continuous goods

• A = (ai)i∈N is the initial endowment vector

• F = (f_i)_{i∈N} is the vector of valuation functions, where each f_i is assumed to be continuous and concave,

• the value of a coalition S is v(S) = max { ∑_{i∈S} f_i(x_i) | x_i ∈ R^m_+, ∑_{i∈S} x_i = ∑_{i∈S} a_i }.

Let us assume that the players form the grand coalition: all the players are in the market and try to maximize the total utility of the market. How should this utility be shared among the players? One way to answer this question is by using an allocation that is in the core. One interesting property is that the core of such a game is guaranteed to be non-empty, and one way to prove it is to use the Bondareva-Shapley theorem.

2.3.8. THEOREM. Every Market Game is balanced.

Proof. A function f : R^m → R is concave iff ∀α ∈ [0, 1] and for all x, y ∈ R^m, f(αx + (1 − α)y) ≥ αf(x) + (1 − α)f(y). It follows from this definition that for a concave f, for all x_1, . . . , x_k ∈ R^m and all λ ∈ R^k_+ such that ∑_{l=1}^{k} λ_l = 1, we have f(∑_{l=1}^{k} λ_l x_l) ≥ ∑_{l=1}^{k} λ_l f(x_l).

Since the f_i are continuous, ∑_{i∈S} f_i(x_i) is a continuous mapping from

T = { (x_i)_{i∈S} | ∀i ∈ S, x_i ∈ R^m_+, ∑_{i∈S} x_i = ∑_{i∈S} a_i }

to R. Moreover, T is compact (it is closed and bounded). Thanks to the extreme value theorem from calculus, we conclude that ∑_{i∈S} f_i(x_i) attains a maximum on T.


For a coalition S ⊆ N, let x^S = (x^S_i)_{i∈S} be the endowment profile that achieves the maximum value for the coalition S, i.e., v(S) = ∑_{i∈S} f_i(x^S_i). In other words, the members of S have made some trades that have increased the value of the coalition S up to its maximal value.

Let λ be a balanced map. Let y = (y_i)_{i∈N} be defined by y_i = ∑_{S∈C_i} λ_S x^S_i ∈ R^m_+, where C_i is the set of coalitions that contain agent i. First, note that y is a feasible allocation of the goods among the members of the grand coalition:

∑_{i∈N} y_i = ∑_{i∈N} ∑_{S∈C_i} λ_S x^S_i = ∑_{S⊆N} ∑_{i∈S} λ_S x^S_i = ∑_{S⊆N} λ_S ∑_{i∈S} x^S_i
= ∑_{S⊆N} λ_S ∑_{i∈S} a_i   (since x^S was obtained by trades among the members of S)
= ∑_{i∈N} a_i ∑_{S∈C_i} λ_S
= ∑_{i∈N} a_i   (as λ is balanced, i.e., for each agent the weights of the coalitions containing it sum up to 1).

Then, by definition of v, we have v(N) ≥ ∑_{i∈N} f_i(y_i).

Each f_i is concave and ∑_{S∈C_i} λ_S = 1, so we have

f_i(∑_{S∈C_i} λ_S x^S_i) ≥ ∑_{S∈C_i} λ_S f_i(x^S_i).

It follows that:

v(N) ≥ ∑_{i∈N} f_i(y_i) = ∑_{i∈N} f_i(∑_{S∈C_i} λ_S x^S_i) ≥ ∑_{i∈N} ∑_{S∈C_i} λ_S f_i(x^S_i) = ∑_{S⊆N} λ_S ∑_{i∈S} f_i(x^S_i) = ∑_{S⊆N} λ_S v(S).

This inequality proves that the game is balanced. □

2.4 Extensions of the core

There are a few extensions of the concept of the Core. As discussed above, one main issue with the Core is that it can be empty. In particular, a member of a coalition may block its formation so as to gain a very small payoff. When the cost of building a coalition is considered, it can be argued that it is not worth blocking a coalition for a small utility gain. The strong and weak ε-Core concepts model this possibility. The constraints defining the strong (respectively the weak) ε-Core become ∀T ⊆ N, x(T) ≥ v(T) − ε (respectively ∀T ⊆ N, x(T) ≥ v(T) − |T| · ε). In the weak Core, the minimum amount


of utility required to block a coalition is counted per player, whereas for the strong Core, it is a fixed amount. If one picks ε large enough, the strong or weak ε-core is non-empty. When decreasing the value of ε, there is a threshold ε* such that for ε < ε* the ε-core becomes empty. The ε-core for this particular value ε* is called the least core.
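The least core can be computed with one additional LP variable: minimize ε subject to x(N) = v(N) and x(C) ≥ v(C) − ε for every proper non-empty coalition (the strong ε-core constraints). The sketch below is mine and assumes scipy is available; on the three-player majority game it returns ε* = 1/3 with the symmetric allocation.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def least_core(n, v):
    """Minimise eps subject to x(N) = v(N) and x(C) >= v(C) - eps for every
    proper non-empty coalition C (the strong eps-core constraints)."""
    agents = list(range(n))
    A_ub, b_ub = [], []
    for k in range(1, n):                       # proper coalitions only
        for combo in combinations(agents, k):
            row = np.zeros(n + 1)
            row[list(combo)] = -1.0             # -x(C) - eps <= -v(C)
            row[n] = -1.0
            A_ub.append(row)
            b_ub.append(-v(frozenset(combo)))
    A_eq = np.array([[1.0] * n + [0.0]])        # efficiency: x(N) = v(N)
    b_eq = np.array([v(frozenset(agents))])
    c = np.zeros(n + 1)
    c[n] = 1.0                                  # objective: minimise eps
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[n]

if __name__ == "__main__":
    v = lambda C: 1.0 if len(C) >= 2 else 0.0   # three-player majority game
    x, eps = least_core(3, v)
    print(x, eps)                               # roughly [1/3, 1/3, 1/3] and eps = 1/3
```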

2.5 Games with Coalition Structure

Thus far, we have assumed that the grand coalition forms. With this definition, checking whether the core is empty amounts to checking whether the grand coalition is stable. In many studies in economics, the superadditivity of the valuation function is not explicitly stated, but it is implicitly assumed, and hence it makes sense to consider only the grand coalition. But when the valuation function is not superadditive, agents may have an incentive to form a different partition.

We recall that a coalition structure (CS) is a partition of the grand coalition. If S is a CS, then S = {C_1, . . . , C_m} where each C_i is a coalition such that ∪_{i=1}^{m} C_i = N and i ≠ j ⇒ C_i ∩ C_j = ∅.

Aumann and Drèze discuss why the coalition formation process may generate a CS that is not the grand coalition [1]. One reason they mention is that the valuation may not be superadditive (and they provide some discussion about why this may be the case). Another reason is that a CS may “reflect considerations that are excluded from the formal description of the game by necessity (impossibility to measure or communicate) or by choice” [1]. For example, the affinities can be based on location, trust relations, etc.

2.5.1. DEFINITION. [Game with coalition structure] A game with coalition structure is a triplet (N, v, S), where (N, v) is a TU game, and S is a particular CS. In addition, transfer of utility is only permitted within (not between) the coalitions of S, i.e., ∀C ∈ S, x(C) ≤ v(C).

Another way to understand this definition is to consider that the problems of deciding which coalitions form and how to share each coalition’s payoff are decoupled: the choice of the coalitions is made first and results in the CS; only the payoff distribution choice is left open. The agents are allowed to refer to the value of coalitions with agents outside of their own coalition (i.e., opportunities they would get outside of their coalition) to negotiate a better payoff. Aumann and Drèze use an example of researchers in game theory who want to work in their own country, i.e., they want to belong to the coalition of game theorists of their country. They can refer to offers from foreign countries in order to negotiate their salaries. Note that the agents’ goal is not to change the CS, but only to negotiate a better payoff for themselves.

First, we need to define the set of possible payoffs: the payoff distributions such that the sum of the payoffs of the members of a coalition in the CS does not exceed the value of that coalition. More formally:


2.5.2. DEFINITION. [Feasible payoff] Let (N, v, S) be a TU game with CS. The set of feasible payoff distributions is X_{(N,v,S)} = {x ∈ R^n | ∀C ∈ S, x(C) ≤ v(C)}.

A payoff distribution x is efficient with respect to a CS S when ∀C ∈ S, ∑_{i∈C} x_i = v(C). A payoff distribution is an imputation when it is efficient (with respect to the current CS) and individually rational (i.e., ∀i ∈ N, x_i ≥ v({i})). The set of all imputations for a CS S is denoted by Imp(S). We can now state the definition of the core:

2.5.3. DEFINITION. [Core] The core of a game (N, v, S) is the set of all PCs (S, x) such that x ∈ Imp(S) and ∀C ⊆ N, ∑_{i∈C} x_i ≥ v(C), i.e.,

core(N, v, S) = {x ∈ R^n | (∀C ∈ S, x(C) ≤ v(C)) ∧ (∀C ⊆ N, x(C) ≥ v(C))}.

We now provide a theorem by Aumann and Drèze which shows that the core satisfies a desirable property: if two agents are substitutes, then a core allocation must give them identical payoffs.

2.5.4. DEFINITION. [Substitutes] Let (N, v) be a game and (i, j) ∈ N^2. Agents i and j are substitutes iff ∀C ⊆ N \ {i, j}, v(C ∪ {i}) = v(C ∪ {j}).

Since the agents have the same impact on all coalitions that do not include them, it would be fair if they obtained the same payoff. For the core of a game with CS, this is indeed the case.

2.5.5. THEOREM. Let (N, v, S) be a game with coalition structure, let i and j be substitutes, and let x ∈ core(N, v, S). If i and j belong to different members of S, then x_i = x_j.

Proof. Let (i, j) ∈ N^2 be substitutes and let C ∈ S be such that i ∈ C and j ∉ C. Let x ∈ Core(N, v, S). Since i and j are substitutes, we have

v((C \ {i}) ∪ {j}) = v((C \ {i}) ∪ {i}) = v(C).

Since x ∈ Core(N, v, S), we have ∀C' ⊆ N, x(C') ≥ v(C'); we apply this to the coalition (C \ {i}) ∪ {j}:

0 ≥ v((C \ {i}) ∪ {j}) − x((C \ {i}) ∪ {j}) = v(C) − x(C) + x_i − x_j.

Since C ∈ S and x ∈ Core(N, v, S), we have x(C) = v(C). We can then simplify the previous expression and we obtain x_j ≥ x_i.

Since i and j play symmetric roles, we also have x_i ≥ x_j, and finally we obtain x_i = x_j. □

Aumann and Drèze made a link from a game with CS to a special superadditive game (N, v̄) called the superadditive cover [1].


2.5.6. DEFINITION. [Superadditive cover] The superadditive cover of (N, v) is the game (N, v̄) defined by

v̄(C) = max_{P∈S_C} ∑_{T∈P} v(T) for all C ⊆ N, C ≠ ∅, and v̄(∅) = 0.

In other words, v̄(C) is the maximal value that can be generated by any partition of C.¹ The superadditive cover is a superadditive game. The following theorem, from [1], shows that a necessary condition for (N, v, S) to have a non-empty core is that S is an optimal CS.
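Computing v̄(C) amounts to searching over all partitions of C. The recursive sketch below (mine, with a small hypothetical game as input) avoids enumerating each partition twice by always placing a fixed member of C in the block S that is split off.

```python
from functools import lru_cache
from itertools import combinations

def superadditive_cover(v):
    """Return a function computing the superadditive cover: the best value obtainable
    by splitting a coalition into a partition and summing the values of its blocks."""
    @lru_cache(maxsize=None)
    def cover(coalition):
        if not coalition:
            return 0.0
        members = sorted(coalition)
        first, rest = members[0], members[1:]
        best = v(coalition)
        # Consider every block S containing `first`; recurse on the complement.
        for k in range(len(rest) + 1):
            for combo in combinations(rest, k):
                S = frozenset({first, *combo})
                if S != coalition:
                    best = max(best, v(S) + cover(coalition - S))
        return best
    return cover

if __name__ == "__main__":
    # Hypothetical non-superadditive game: two productive pairs, a weak grand coalition.
    table = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
             frozenset({4}): 0, frozenset({1, 2}): 10, frozenset({3, 4}): 10}
    v = lambda C: table.get(frozenset(C), 5.0)   # every other coalition is worth 5
    cover = superadditive_cover(v)
    print(cover(frozenset({1, 2, 3, 4})))        # 20.0: the partition {{1,2},{3,4}} is optimal
```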

2.5.7. THEOREM. Let (N, v, S) be a game with coalition structure. Then

a) Core(N, v, S) ≠ ∅ iff Core(N, v̄) ≠ ∅ ∧ v̄(N) = ∑_{C∈S} v(C);

b) if Core(N, v, S) ≠ ∅, then Core(N, v, S) = Core(N, v̄).

Proof. Proof of part a):

⇒ Let x ∈ Core(N, v, S). We show that x ∈ Core(N, v̄) as well. Let C ⊆ N, C ≠ ∅, and let P_C ∈ S_C be a partition of C. By definition of the core, for every S ⊆ N we have x(S) ≥ v(S). The payoff of coalition C is

x(C) = ∑_{i∈C} x_i = ∑_{S∈P_C} x(S) ≥ ∑_{S∈P_C} v(S),

which is valid for all partitions of C. Hence, x(C) ≥ max_{P_C∈S_C} ∑_{S∈P_C} v(S) = v̄(C).

We have just proved that ∀C ⊆ N, C ≠ ∅, x(C) ≥ v̄(C), and so x is group rational for (N, v̄).

We now need to prove that v̄(N) = ∑_{C∈S} v(C).

We have x(N) = ∑_{C∈S} v(C) since x is in the core of (N, v, S) (it is efficient for S). Applying the inequality above to N, we have x(N) = ∑_{C∈S} v(C) ≥ v̄(N).

Applying the definition of the valuation function v̄ (S is one particular partition of N), we have v̄(N) ≥ ∑_{C∈S} v(C). Consequently, v̄(N) = ∑_{C∈S} v(C), and it follows that x is efficient for the game (N, v̄).

Hence x ∈ Core(N, v̄).

¹ Note that for the grand coalition, we have v̄(N) = max_{P∈S_N} ∑_{T∈P} v(T), i.e., v̄(N) is the maximum value that can be produced by N. We call it the value of the optimal coalition structure. For some applications, one issue (that will be studied later) is to find this value.

Page 27: Lectures in Cooperative Game Theory

28 Lecture 2. The Core

⇐ Let’s assume x ∈ Core(N, v) and v(N) =∑

C∈Sv(C). We need to prove that

x ∈ Core(N, v,S).

For every C ⊆ N , x(C) ≥ v(C) since x is in the core of Core(N, v). Thenx(C) ≥ max

PC∈SC

S∈PCv(S) ≥ v(C) using {C} as a partition of C, which proves x is

group rational. 4

x(N) = v(N) =∑C∈S v(C) since x is efficient. It follows that ∀C ∈ S, we

must have x(C) = v(C), which proves x is feasible for the CS S, and that x isefficient.4

Hence, x ∈ Core(N, v,S). 4

proof of part b):

We have just proved that x ∈ Core(N, v) implies that x ∈ Core(N, v,S) and x ∈Core(N, v,S) implies that x ∈ Core(N, v). This proves that if Core(N, v,S) 6= ∅,Core(N, v) = Core(N, v,S).

Page 28: Lectures in Cooperative Game Theory

Bibliography

[1] Robert J. Aumann and Jacques H Drèze. Cooperative games with coalition struc-tures. International Journal of Game Theory, 3(4):217–237, 1974.

[2] Donald B. Gillies. Some theorems on n-person games. PhD thesis, Department ofMathematics, Princeton University, Princeton, N.J., 1953.

[3] James P. Kahan and Amnon Rapoport. Theories of Coalition Formation. LawrenceErlbaum Associates, Publishers, 1984.

29

Page 29: Lectures in Cooperative Game Theory

Lecture 3The bargaining set

The notion of core stability is maybe the most natural way to describe stability. How-ever, some games have an empty core. If agents adopts the notion of the core, theywill be unable to reach an agreement about the payoff distribution. If they still want tobenefit from the cooperation with other agents, they need to relax the stability require-ments. In this lecture, we will see that the notion of bargaining set is one way to reachan argument and to maintain some notion of stability (though of course, it is a weakerversion of stability).

The definition of the bargaining set is due to Davis and Maschler [1]. This notion isabout the stability of a given coalition structure (CS). The agents do not try to changethe nature of the CS, but simply to find a way to distribute the value of the differentcoalitions between the members of each coalition. Let us assume that a payoff distri-bution is proposed. Some agents may form an objection against this payoff distributionby pointing out a problem of that distribution and by offering a different payoff distri-bution that eliminates this issue (or improves the situation). If all other agents agreewith this objection, the payoff distribution should change as proposed. However, someother agents may form a counter-objection showing some shortcomings of the objec-tion. The idea of stability in this context is to ensure that, for each possible objection,there exists a counter-objection. When this is the case, there is no ground for chang-ing the payoff distribution, which provides some stability. In the following, we willdescribe the precise notion of objections and counter-objections.

3.1 Objections, counter-objections and theprebargaining set

Let (N, v,S) be a game with coalition structure and x an imputation. For the bar-gaining set, an objection from an agent i against a payoff distribution x is targeting aparticular agent j, in the hope of obtaining a payment from j. The goal of agent i is toshow that agent j gets too much payoff as there are some ways in which some agents

31

Page 30: Lectures in Cooperative Game Theory

32 Lecture 3. The bargaining set

but j can benefit. The objection can take the following (informal) form:

I get too little in the imputation x, and agent j gets too much! I can form acoalition that excludes j in which some members benefit and all membersare at least as well off as in x.

We recall that in a game with coalition structure (N, v,S), the agents do not try tochange the CS, but only obtain a better payoff. The set X(N,v,S) of feasible payoffvectors for (N, v,S) is defined as X(N,v,S) = {x ∈ Rn | ∀C ∈ S,∑i∈C xi ≤ v(C)}. Weare now ready to formally define an objection.

3.1.1. DEFINITION. [Objection] Let (N, v,S) be a game with coalition structure, x ∈X(N,v,S) , C ∈ S be a coalition, and i and j two distinct members of C ((i, j) ∈ C2,i 6= j). An objection of i against j is a pair (P, y) where

• P ⊆ N is a coalition such that i ∈ P and j /∈ P .

• y ∈ Rp where p is the size of P

• y(P ) ≤ v(P ) (y is a feasible payoff distribution for the agents in P )

• ∀k ∈ P , yk ≥ xk and yi > xi (agent i strictly benefits from y, and the othermembers of P do not do worse in y than in x.)

An objection is a pair (P, y) that is announced by an agent i against a particular agent jand a payoff distribution x. It can be understood as a potential threat to form coalitionP , which contains i but not j. If the agents in P really deviate, agent i will benefit(strictly), and the other agents in P are guaranteed not to be worse off (and may evenbenefit from the deviation). The goal is not to change the CS, but simply to update thepayoff distribution. In this case, agent i is calling for a transfer of utility from agent jto agent i.

The agent that is targeted by the threat may try to show that she deserves the payoffxj . To do so, her goal is to show that, if the threat was implemented, there is anotherdeviation that would ensure that j can still obtain xj and that no agent (except maybeagent i) would be worse off. In that case, we say that the objection is ineffective. Agentj can summarize her argument by saying:

I can form a coalition that excludes agent i in which all agents are at leastas well off as in x, and as well off as in the payoff proposed by i for thosewho were offered to join i in the argument.

The formal definition of a counter-objection is the following.

3.1.2. DEFINITION. [Counter-objection] A counter-objection of agent j to the objec-tion (P, y) of agent i is a pair (Q, z) where

• Q ⊆ N is a coalition such that j ∈ Q and i /∈ Q.

Page 31: Lectures in Cooperative Game Theory

3.1. Objections, counter-objections and the prebargaining set 33

• z ∈ Rq where q is the size of Q

• z(Q) ≤ v(Q) (z is a feasible payoff distribution for the agents in Q)

• ∀k ∈ Q, zk ≥ xk (the members of Q get at least the value in x)

• ∀k ∈ Q ∩ P zk ≥ yk (the members of Q which are also members of P get atleast the value promised in the objection)

In a counter-objection, agent j must show that she can protect her payoff xj in spiteof the existing objection of i. Agents in the deviating coalition Q should improve theirpayoff compared to x. For those who were also members of the deviating coalitionP with agent i, they should make sure that they obtain a better payoff than in y. Inthis way, all agents in P and Q benefit. Note that agent i is in P and not in Q, andconsequently, i may be worse off in this counter-objection.

When an objection has a counter-objection, no agent will be willing to follow agenti and implement the threat. Hence, the agents do not have any incentives to change thepayoff distribution and the payoff is stable.

3.1.3. DEFINITION. [Stability] Let (N, v,S) a game with coalition structure. A vectorx ∈ X(N,v,S) is stable iff for each objection at x there is a counter-objection.

The definition of the pre-bargaining set is then simply the set of payoff distributionsthat are stable. Not that they are stable for the specific definition of objections andcounter-objections we presented (we can, and we will, think about other ways to defineobjections and counter-objections).

3.1.4. DEFINITION. [Pre-bargaining set] The pre-bargaining set (preBS) is the set ofall stable members of X(N,v,S).

We will explain later the presence of the prefix pre in the definition. For now, weneed to wonder about the relationship with the core. The idea was to relax the stabilityrequirements of the core. The following lemma states that indeed, we have only relaxedthem:

3.1.5. LEMMA. Let (N, v,S) a game with coalition structure, we have

Core(N, v,S) ⊆ preBS(N, v,S).

Proof. Let us assume that the core of (N, v,S) is non-empty and that x ∈ Core(N, v,S).Given the payoff x, no agent i has any objection against any other agent j. Hence, thereare no objections, and the payoff is stable according to the pre-bargaining set. �

Page 32: Lectures in Cooperative Game Theory

34 Lecture 3. The bargaining set

3.2 An exampleLet us now consider an example using a 7-player simple majority game, i.e., we con-sider the game ({1, 2, 3, 4, 5, 6, 7}, v) defined as follows:

v(C) =

{1 if |C| ≥ 40 otherwise

Let us consider x = 〈−1

5,1

5, . . . ,

1

5〉. It is clear that x(N) = 1. Let us now prove

that x is in the pre-bargaining set of the game (N, v, {N}).First note that objections within members of {2, 3, 4, 5, 6, 7} will have a counter-

objection by symmetry (i.e., for i, j ∈ {2, 3, 4, 5, 6, 7}, if i has an objection (P, y)against j, j can use the counter-objection (Q, z) with Q = P \ {i} ∪ {j} and zk = ykfor k ∈ P \ {i} and zj = yi).

Hence, we only have to consider two type of objections (P, y): the ones of 1 againsta member of {2, 3, 4, 5, 6, 7}, and the ones from a members of {2, 3, 4, 5, 6, 7} againsti. We are going to treat only the first case, we leave the second as an exercise.

Let us consider an objection (P, y) of agent i against a member of {2, 3, 4, 5, 6, 7}.Since the members {2, . . . , 7} play symmetric roles, we consider an objection of 1against 7 using successively P = {1, 2, 3, 4, 5, 6}, P = {1, 2, 3, 4, 5}, P = {1, 2, 3, 4},P = {1, 2, 3}, P = {1, 2} and P = {1}. For each case, we will look for a counter-objection (Q, z) of player 7.

• We consider that P = {1, 2, 3, 4, 5, 6}. We need to find the payoff vector y ∈ R6

so that (P, y) is an objection.

y = 〈α, 15

+ α2,15

+ α3, . . . ,15

+ α6〉,The conditions for (P, y) to be an objection are the following:

– each agent is as well off as in x: α > −15, αi ≥ 0

– y is feasible for coalition P :∑6

i=2

(αi + 1

5

)+ α ≤ 1.

w.l.o.g 0 ≤ α2 ≤ α3 ≤ α4 ≤ α5 ≤ α6.

Then6∑

i=2

(1

5+ αi

)+ α =

5

5+

6∑

i=2

αi + α = 1 +6∑

i=2

αi + α ≤ 1.

Then6∑

i=2

αi ≤ −α <1

5.

ë We need to find a counter-objection for (P, y).

claim: we can choose Q = {2, 3, 4, 7} and z = 〈15

+ α2,15

+ α3,15

+ α4,15

+ α5〉z(Q) = 1

5+ α2 + 1

5+ α3 + 1

5+ α4 + 1

5+ α5 = 4

5+∑5

i=2 αi ≤ 1 since∑5i=2 αi ≤

∑6i=2 αi <

15

so z is feasible.

Page 33: Lectures in Cooperative Game Theory

3.2. An example 35

It is clear that ∀i ∈ Q, zi ≥ xi 4and that ∀i ∈ Q ∩ P , zi ≥ yi 4

Hence, (Q, z) is a counter-objection. 4

• Now, let us consider that P = {1, 2, 3, 4, 5}. The vector y = 〈α, 15

+ α2,15

+α3,

15

+ α4,15

+ α5〉 is an objection when

α > −15, αi ≥ 0,

5∑

i=2

(1

5+ αi) + α ≤ 1

This time, we have5∑

i=2

(1

5+ αi) + α =

4

5+

5∑

i=2

αi + α ≤ 1

then5∑

i=2

αi ≤ 1− 4

5− α =

1

5− α and finally

5∑

i=2

αi ≤1

5− α < 2

5.

ë We need to find a counter-objection to (P, y)

claim: we can choose Q = {2, 3, 6, 7}, z = 〈15

+ α2,15

+ α3,15, 1

5〉

It is clear that ∀i ∈ Q, zi ≥ xi 4and ∀i ∈ P ∩Q zi ≥ yi (for agent 2 and 3).

z(Q) = 15

+α2 + 15

+α3 + 15

+ 15

= 45

+α2 +α3. We have α2 +α3 <15, otherwise,

we would have α2 + α3 ≥ 15

and since the αi are ordered, we would then have5∑

i=2

αi ≥2

5, which is not possible. Hence z(Q) ≤ 1 which proves z is feasible

4

Using similar arguments, we find a counter-objection for each other objections (youmight want to fill in the details at home).

• P = {1, 2, 3, 4}, y = 〈α, 15+α1,

15+α2,

15+α3〉, α > −1

5, αi ≥ 0,

∑4i=2 αi+α ≤

25⇒∑4

i=2 αi ≤ 25− α < 3

5.

ë Q = {2, 5, 6, 7}, z = 〈15

+ α2,15, 1

5, 1

5〉 since α2 ≤ 1

5

• |P | ≤ 3 P = {1, 2, 3}, v(P ) = 0, y = 〈α, α1, α2〉, α > −15, αi ≥ 0, α1 + α2 ≤

−α < 15

ë Q = {4, 5, 6, 7}, z = 〈15, 1

5, 1

5, 1

5〉 will be a counter-objection (1 cannot provide

more than 15

to any other agent).

• For each possible objection of 1, we found a counter-objection. Using similararguments, we can find a counter-objection to any objection of player 7 againstplayer 1.

ë x ∈ preBS(N, v,S).4

Page 34: Lectures in Cooperative Game Theory

36 Lecture 3. The bargaining set

3.3 The bargaining set

In the example, agent 1 gets −15

when v(C) ≥ 0 for all coalition C ⊆ N ! This showsthat the pre-bargaining set may not be individually rational.

Let I(N, v,S) ={x ∈ X(N,v,S) | xi ≥ v({i})∀i ∈ N

}be the set of individually

rational payoff vector in X(N,v,S). Given most of the games, the set of individuallyrational payoffs is non-empty. For some classes of games, we can show it is the case,as shown by the following lemma.

3.3.1. LEMMA. If a game is weakly superadditive, I(N, v,S) 6= ∅.

Proof. A game (N, v) is weakly superadditive when ∀C ⊆ N and i /∈ C, we have thatv(C) + v({i}) ≤ v(C ∪ {i}). Let us consider that, for C ∈ S , each agent in C gets itsmarginal contribution given an ordering of agents in C. Since for all C ⊂ N , i /∈ Cv(C ∪ {i}) − v(C) ≥ v({i}), we know that xi ≥ v({i}). Hence, there exists a payoffdistribution in I(N, v,S). �

Since we want a solution concept to be at least an imputation (i.e., efficient andindividually rational), we define the bargaining set to be the set of payoff distributionsthat are individually rational and in the pre-bargaining set.

3.3.2. DEFINITION. Bargaining set Let (N, v,S) a game in coalition structure. Thebargaining set (BS) is defined by

BS(N, v,S) = I(N, v,S) ∩ preBS(N, v,S).

Of course, this restriction does not have any negative impact on the relationshipbetween the core and the bargaining set.

3.3.3. LEMMA. We have Core(N, v,S) ⊆ BS(N, v,S).

This lemma shows that we have relaxed the requirements of core stability. Onequestion is whether we have relaxed them enough to guarantee that this new stabilityconcept is guaranteed to be non-empty. The answer to this question is yes!

3.3.4. THEOREM. Let (N, v,S) a game with coalition structure. Assume that I(N, v,S) 6=∅. Then the bargaining set BS(N, v,S) 6= ∅.

It is possible to give a direct proof of this theorem (for example, se the proof inSection 4.2 in Introduction to the Theory of Cooperative Games [2]. We will notpresent this proof now, but we will prove this theorem differently in a coming lecture.

Page 35: Lectures in Cooperative Game Theory

3.4. One issue with the bargaining set 37

3.4 One issue with the bargaining setWe relaxed the requirements of core stability to come up with a solution concept thatis guaranteed to be non-empty. In doing so, the stable payoff distributions may havesome issues, and we are going to consider one using the example of a weighted votinggame. We first recall the definition.

3.4.1. DEFINITION. weighted voting games A game (N,wi∈N , q, v) is a weighted vot-ing game when v satisfies unanimity, monotonicity and the valuation function is de-fined as

v(S) =

1 when∑

i∈Swi ≥ q

0 otherwiseWe note such a game by (q : w1, . . . , wn)

We consider the 6-player weighted majority game (3:1,1,1,1,1,0). Agent 6 is a nullplayer since its weight is 0, in other words, its presence does not affect by any meansthe decision taken by the other agents. Nevertheless the following payoff distributionis in the bargaining set x = 〈1

7, . . . , 1

7, 2

7〉 ∈ BS(N, v)! This may be quite surprising

as the null player receives the most payoff, but none of the other agents are able toprovide a objection that is not countered!

One of the desirable properties of a payoff was to be reasonable from above. Werecall that x is reasonable from above if ∀i ∈ N xi ≤ mcmax

i where mcmaxi =

maxC⊆N\{i}

v(C ∪ {i})− v(C). mcmaxi is the strongest threat that an agent can use against

a coalition. It is desirable that no agent gets more than mcmaxi , as it never con-

tributes more than mcmaxi in any coalition i can join. The previous example shows

that the bargaining set is not reasonable from above: the dummy agent gets more thanmaxC⊆N\{6}

(v(C ∪ {6})− v(C)) = 0.8

Proof.This proof will be part of homework 2. �

Page 36: Lectures in Cooperative Game Theory
Page 37: Lectures in Cooperative Game Theory

Bibliography

[1] Robert J. Aumann and M. Maschler. The bargaining set for cooperative games.Advances in Game Theory (Annals of mathematics study), (52):217–237, 1964.

[2] Bezalel Peleg and Peter Sudhölter. Introduction to the theory of cooperative coop-erative games. Springer, 2nd edition, 2007.

39

Page 38: Lectures in Cooperative Game Theory

Lecture 4The nucleolus

The nucleolus is based on the notion of excess and has been introduced by Schmei-dler [3]. The excess measures the amount of “complaints” of a coalition for a payoffdistribution. We already mentionned the excess and gave a definition of the core usingthe excess. We now recall the definition.

4.0.2. DEFINITION. [Excess] Let (N, v) be a TU game, C ⊆ N be a coalition, and xbe a payoff distribution over N . The excess e(C, x) of coalition C at x is the quantitye(C, x) = v(C)− x(C).

When a coalition has a positive excess, some utility is not provided to the coalition’smembers, and the members complain about this. for a payoff distribution in the core,there cannot be any complaint. The goal of the nucleolus is to reduce the amount ofcomplaint, and we are now going to see in what sense it is reduced.

4.1 Motivations and DefinitionsLet us consider the game in Table 4.1 and we want to compare two payoff distributionsx and y. A priori, it is not clear which payoff should be preferred. To compare twovectors of complaints, we can use the lexicographical order1.

4.1.1. DEFINITION. [Lexicographical ordering] Let (x, y) ∈ Rm, x ≥lex y. We saythat x is greater or equal to y in the lexicographical ordering, and we note x ≥lex y,

when{x = y or∃t, 1 ≤ t ≤ m such that ∀i 1 ≤ i < t xi = yi and xt > yt

For example, we have 〈1, 1, 0,−1,−2,−3,−3〉 ≥lex 〈1, 0, 0, 0,−2,−3,−3〉. Let l be asequence ofm reals. We denote by lI the reordering of l in decreasing order. In the ex-ample, e(x) = 〈−3,−3,−2,−1, 1, 1, 0〉 and then e(x)I = 〈1, 1, 0,−1,−2,−3,−3〉.

1the order used for the names in a phonebook or words in a dictionary

41

Page 39: Lectures in Cooperative Game Theory

42 Lecture 4. The nucleolus

Using the lexicographical ordering, we are now ready to compare the payoff distribu-tions x and y and we note that y is better than x since e(x)I ≤lex e(y)I: there is asmaller amount of complaints in y than in x given the lexicographical ordering.

N = {1, 2, 3},v({i}) = 0 for i ∈ {1, 2, 3}

v({1, 2}) = 5, v({1, 3}) = 6, v({2, 3}) = 6v(N) = 8

Let us consider two payoff vectors x = 〈3, 3, 2〉 and y = 〈2, 3, 3〉.x = 〈3, 3, 2〉 y = 〈2, 3, 3〉

coalition C e(C, x)

{1} -3{2} -3{3} -2{1, 2} -1{1, 3} 1{2, 3} 1{1, 2, 3} 0

coalition C e(C, y)

{1} -2{2} -3{3} -3{1, 2} 0{1, 3} 1{2, 3} 0{1, 2, 3} 0

Table 4.1: A motivating example for the nucleolus

The first entry of e(x)I is the maximum excess: the agents involved in the corre-sponding coalition have the largest incentive to leave their current coalition and forma new one. Put another way, the agents involved in that coalition have the most validcomplaint. If one selects the payoff distribution minimizing the most valid complaint,there can be a large number of candidates. To refine the selection, among those pay-off distribution with the smallest largest complaint, one can look at minimizing thesecond largest complaint. A payoff distribution is in the nucleolus when it yields the“least problematic” sequence of complaints according to the lexicographical ordering.The nucleolus tries to minimise the possible complaints (or minimise the incentives tocreate a new coalition) over all possible payoff distributions.

4.1.2. DEFINITION. Let Imp be the set of all imputations. The nucleolus Nu(N, v) isthe set

Nu(N, v) = {x ∈ Imp | ∀y ∈ Imp e(y)I ≥lex e(x)I}.

Intuitively, this definition makes sense. It is another solution concept that focuseson stability, and it relaxes the stability requirement of the core: the core requires nocomplaint at all. The nucleolus may allow for some complaints, but tries to minimizethem.

Page 40: Lectures in Cooperative Game Theory

4.2. Some properties of the nucleolus 43

4.2 Some properties of the nucleolus

We now provide some properties of the nucleolus. First, we consider the relationshipwith the core of a game. The following theorem guarantees that the nucleolus of agame is always included in the core. A payoff distribution in the core does not haveany complaint. If there are more than one payoff in the core, it is possible to use theexcess and the lexicographical ordering to rank the payoff according to the satisfactionof the agents.

4.2.1. THEOREM. Let (N, v) be a TU game with a non-empty core. Then Nu(N, v) ⊆core(N, v)

Proof. This will be an assignment of Homework 2. �

if the core of a game in non-empty, one can use the nucleolus to discriminate betweendifferent core members. Now, we turn to the important issue of the existence of payoffdistributions in the nucleolus. The following theorem guarantees that the nucleolus isnon-empty in most games:

4.2.2. THEOREM. Let (N, v) be a TU game and Imp is the set of imputations. IfImp 6= ∅, then the nucleolus Nu(N, v) is non-empty.

This property ensures that the agents will always find an agreement if they use thismethod, which is a great property. The assumption that the set of imputation is a verymild assumption: if the game does not have any efficient and individually rationalpayoff distributions, it is not such an interesting game. The following theorem showsthat in addition to always exist, the nucleolus is in fact unique.

4.2.3. THEOREM. The nucleolus has at most one element.

The proofs of both theorems are a bit involved, and are included in the next section.The nucleolus is guaranteed to be non-empty and it is unique. These are two im-

portant property in favour of the nucleolus. Moreover, when the core is non-empty, thenucleolus is in the core.

One drawback, however, is that the nucleolus is difficult to compute. It can becomputed using a sequence of linear programs of decreasing dimensions. The size ofeach of these groups is, however, exponential. In some special cases, the nucleoluscan be computed in polynomial time [2, 1], but in the general case, computing thenucleolus is not guaranteed to be polynomial. Only a few papers in the multiagentsystems community have used the nucleolus, e.g., [4].

Page 41: Lectures in Cooperative Game Theory

44 Lecture 4. The nucleolus

4.3 Proofs of the main theoremThe results that the nucleolus is a unique payoff distribution is quite an important result.We will also use this result to show that some other solution concepts are non-empty.For this reason, it is worth stating one proof of this theorem, although it is quite atechnical result. To prove the theorem, one needs to use results from analysis. In thefollowing, we informally recall some definitions and theorems that will be used in theproofs.

4.3.1 Elements of AnalysisLet E = Rm and X ⊆ E. ||.|| denote a distance in E, e.g., the euclidean distance.

We consider functions of the form u : N → Rm. Another viewpoint on u is aninfinite sequence of elements indexed by natural numbers (u0, u1, . . . , uk, . . .) whereui ∈ X . We recall some definitions:

• convergent sequence: A sequence (ut) converges to l ∈ Rm iff for all ε > 0,∃T ∈ N s.t. ∀t ≥ T , ||ut − l|| ≤ ε.

• extracted sequence: Let (ut) be an infinite sequence and f : N → N be amonotonically increasing function. The sequence v is extracted from u iffv = u ◦ f , i.e., vt = uf(t).

• closed set: a set X is closed if and only if it contains all of its limit points. Inother words, for all converging sequences (x0, x1 . . .) of elements in X , the limitof the sequence has to be in X as well.

For example, if X = (0, 1], (1, 12, 1

3, 1

4, . . . , 1

n, . . .) is a converging sequence.

However, 0 is not in X , and hence, X is not closed.

One way to think about a closed set is by saying “A closed set contains its bor-ders”.

• bounded set: A subset X ⊆ Rm is bounded if it is contained in a ball of finiteradius, i.e. ∃c ∈ Rm and ∃r ∈ R+ s.t. ∀x ∈ X ||x− c|| ≤ r.

• compact set: A subset X ⊆ Rm is a compact set iff from all sequences in X , wecan extract a convergent sequence in X .

ë A set is compact set of Rm iff it is closed and bounded.

• convex set: A set X is convex iff ∀(x, y) ∈ X2, ∀α ∈ [0, 1], αx+ (1− α)y ∈ X(i.e. all points in a line from x to y is contained in X).

• continuous function: Let X ⊆ Rn, f : Rn → Rm.

f is continuous at x0 ∈ X iff

Page 42: Lectures in Cooperative Game Theory

4.3. Proofs of the main theorem 45

∀ε > 0 ∃δ > 0 ∀x ∈ X ||x − x0|| < δ ⇒ ||f(x) − f(x0)|| < ε. In otherwords, the function f does not contain any jump.

We now state some theorems. Let X ⊆ Rn.

Thm A1 If f : Rn → Rm is continuous and X ⊆ E is a non-empty compact subset ofRn, then f(X) is a non-empty compact subset of Rm.

Thm A2 Extreme value theorem: Let X be a non-empty compact subset of Rn, f : X →R a continuous function. Then f is bounded and it reaches its supremum.

Thm A3 Let X be a non-empty compact subset of Rn. f : X → R is continuous iff forevery closed subset B ⊆ R, the set f−1(B) is compact.

4.3.2 ProofsLet us assume that the following two theorems are valid. We will prove them later.

4.3.1. THEOREM. Assume we have a TU game (N, v), and consider its set Imp. IfImp 6= ∅, then set B = {e(x)I | x ∈ Imp} is a non-empty compact subset of R2|N|

4.3.2. THEOREM. Let A be a non-empty compact subset of Rm.{x ∈ A | ∀y ∈ A x ≤lex y} is non-empty.

We can use these theorems to prove that the nucleolus is non-empty.Proof.

Let us take a TU game (N, v) and let us assume Imp 6= ∅. From theorem 4.3.1,we know that B = {e(x)I | x ∈ Imp} is a non-empty compact subset of R2|N| .

Now let us apply the result of theorem 4.3.2 to B. We then have that{e(x)I | (x ∈ Imp) ∧ (∀y ∈ Imp e(x)I ≤lex e(y)I)} is non-empty. From this, itfollows that: Nu(N, v) = {x ∈ Imp | ∀y ∈ Imp e(y)I ≥lex e(x)I} 6= ∅. 4 �

In the following, we need to prove both theorems 4.3.1 and 4.3.2. We start by thefirst one.Proof. Let (N, v) be a TU game and consider its set Imp. Let us assume that Imp 6=∅. We want to prove that B = {e(x)I | x ∈ Imp} is a non-empty compact subset ofR2|N| .

First, let us prove that Imp is a non-empty compact subset of R|N |.

• Imp non-empty by assumption.

• To see that Imp is bounded, we need to show that for all i, xi is bounded bysome constant (independent of x). We have v({i}) ≤ xi by individual rationalityand x(N) = v(N) by efficiency. Then xi +

∑nj=1,j 6=i v({j}) ≤ v(N), hence

xi ≤ v(N)−∑nj=1,j 6=i v({j}).

Page 43: Lectures in Cooperative Game Theory

46 Lecture 4. The nucleolus

• Imp is closed (this is trivial as the boundaries of Imp are members of Imp).

ë Imp non-empty, closed and bounded. By definition, it is a non-empty compactsubset of R|N |. 4

e()I is a continuous function and Imp is a non-empty and compact subset of R2|N| .Using thm A1, we can conclude that e(Imp)I = {e(x)I|x ∈ Imp} is a non-emptycompact subset of R2|N| , which concludes the proof of theorem 4.3.1. 4 �

We now turn to the proof of theorem 4.3.2.

Proof. For a non-empty compact subset A of Rm, we need to prove that the set{x ∈ A | ∀y ∈ A x ≤lex y} is non-empty.

First, let πi : Rm → R be the projection function such that πi(x1, . . . , xm) = xi.Then, let us define the following sets:{

A0 = AAi+1 = argmin

x∈Ai

πi+1(x), i ∈ {0, 1, . . . ,m− 1} Let us assume that we want to find

the minimum of A according to the lexicographic order (say you want to find the lastwords in a text according to the order in a dictionary). You first take the entire set,then you select only the vectors that have the smallest first entries (set A1). Then,from this set, you select the vectors that have the smallest second entry (forming theset A2) and you repeat the process until you reach m steps. At the end, you haveAm = {x ∈ A | ∀y ∈ A x ≤lex y}.

We want to prove by induction that each Ai is non-empty compact subset of Rm

for i ∈ {1, . . . ,m}. First, we need to show non-emptiness:

• A0 = A is non-empty compact of Rm by hypothesis 4.

• Let us assume that Ai is a non-empty compact subset of Rm and let us prove thatAi+1 is a non-empty compact subset of Rm.

πi+1 is a continuous function and Ai is a non-empty compact subset of Rm.Using the extreme value theorem A2, minx∈Ai

πi+1(x) exists and it is reached inAi, hence argminx∈Ai

πi+1(x) is non-empty. 4

Now, we need to show each Ai is compact. We note by π−1i : R→ Rm the inverse

of πi. Let α ∈ R, π−1i (α) is the set of all vectors 〈x1, . . . , xi−1, α, xi+1, . . . , xm〉 such

that xj ∈ R, j ∈ {1, . . . ,m}, j 6= i. We can rewrite Ai+1 as:

Ai+1 = π−1i+1

(minx∈Ai

πi+1(x)

)⋂Ai

Page 44: Lectures in Cooperative Game Theory

4.3. Proofs of the main theorem 47

Ai+1 = π−1i+1

minx∈Ai

πi+1(x)︸ ︷︷ ︸

closed

︸ ︷︷ ︸According to Thm A3, it is a compact subset of Rm

⋂Ai

︸ ︷︷ ︸is a compact subset of Rm since

the intersection of two closed sets is closed and in Rm,and a closed subset of a compact subset of Rm

is a compact subset of Rm 4

Hence Ai+1 is a non-empty compact subset of Rm and the proof is complete. �

For a TU game (N, v) the nucleolusNu(N, v) is non-empty when Imp 6= ∅, whichis a great property as agents will always find an agreement. But there is more! No weneed to prove that there is one agreement which is stable according to the nucleolus.To prove the unicity of the nucleolus, we again need to prove two results.

4.3.3. THEOREM. Let A be a non-empty convex subset of Rm. Then the set{x ∈ A | ∀y ∈ A xI ≤lex y

I} has at most one element.

Proof.Let A be a non-empty convex subset of Rm, and

M in = {x ∈ A | ∀y ∈ A xI ≤lex yI}. We now prove that |M in| ≤ 1.

Towards a contradiction, let us assume M in has at least two elements x and y,x 6= y. By definition of M in, we must have xI = yI.

Let α ∈ (0, 1) and σ be a permutation of {1, . . . ,m} such that (αx+ (1−α)y)I =σ(αx+(1−α)y) = ασ(x)+(1−α)σ(y). Let us show by contradiction that σ(x) = xI

and σ(y) = yI.Let us assume that either σ(x) <lex x

I or σ(y) <lex yI, it follows that

ασ(x) + (1− α)σ(y) <lex αxI + (1− α)yI = xI.

SinceA is convex, αx+(1−α)y ∈ A. But this is a contradiction because by definitionof M in, αx + (1 − α)y ∈ A cannot be strictly smaller than xI, yI in A. This provesσ(x) = xI and σ(y) = yI.

Since xI = yI, we have σ(x) = σ(y), hence x = y. This contradicts the fact thatx 6= y. Hence, M in cannot have at least two elements, and |M in| ≤ 1. �

4.3.4. THEOREM. Let (N, v) be a TU game such that Imp 6= ∅.

(i) Imp is a non-empty and convex subset of R|N |

Page 45: Lectures in Cooperative Game Theory

48 Lecture 4. The nucleolus

(ii) {e(x) | x ∈ Imp} is a non-empty convex subset of R2|N|

Proof. Let (N, v) be a TU game such that Imp 6= ∅ (in case Imp = ∅, Imp is triviallyconvex). Let (x, y) ∈ Imp2, α ∈ [0, 1]. Let us prove Imp is convex by showing thatu = αx+ (1− α)y ∈ Imp, i.e., that u is individually rational and efficient.

Individual rationality: Since x and y are individually rational, for all agents i,ui = αxi + (1 − α)yi ≥ αv({i}) + (1 − α)v({i}) = v({i}). Hence u is individuallyrational.

Efficiency: Since x and y are efficient, we have∑

i∈Nui =

i∈Nαxi + (1− α)yi = α

i∈Nxi + (1− α)

i∈Nyi = αv(N) + (1− α)v(N).

Hence we have∑

i∈Nui = v(N) and u is efficient.

Thus, u ∈ Imp. 4

Let (N, v) be a TU game and Imp its set of imputations. We need to show that theset {e(z) | z ∈ Imp} is a non-empty convex subset of Rm. Remember that e(z) is thesequence of excesses of all coalitions for the payoff distribution z in a given order of thecoalitions (i.e., it is a vector of size 2|N |). Since Imp is non-empty, {e(z) | z ∈ Imp}is trivially non non-empty. Then we just need to prove it is convex. Let (x, y) ∈ Imp2,α ∈ [0, 1], and C ⊆ N . We consider the vector αe(x) + (1−α)e(y) and we look at theentry corresponding to coalition C.

(αe(x) + (1− α)e(y))C = αe(C, x) + (1− α)e(C, y)

= α(v(C)− x(C)) + (1− α)(v(C)− y(C))= v(C)− (αx(C) + (1− α)y(C))= v(C)− ([αx+ (1− α)y](C))= e(αx+ (1− α)y, C)

Since the previous equality is valid for all C ⊆ N , both sequences are equal and wecan write

αe(x) + (1− α)e(y) = e(αx+ (1− α)y).

Since Imp is convex, αx+ (1− α)y ∈ Imp, it follows thate(αx+ (1− α)y) ∈ {e(z) | z ∈ Imp}. Hence, {e(z) | z ∈ Imp} is convex. �

Now we finally are ready to prove that the nucleolus has at most one element.

Proof. Let (N, v) be a TU game, and Imp its set of imputations.According to Theorem 4.3.4(ii), we have that {e(x) | x ∈ Imp} is a non-empty

convex subset of R2|N| . Applying theorem4.3.3 withA = {e(x) | x ∈ Imp} we obtainthe following statement:

Page 46: Lectures in Cooperative Game Theory

4.3. Proofs of the main theorem 49

B = {e(x) | x ∈ Imp ∧ ∀y ∈ Imp e(x)I ≤lex e(y)I} has at most one element.B is the image of the nucleolus under the function e. We need to make sure that an

e(x) corresponds to at most one element in Imp. This is true since for (x, y) ∈ Imp2,we have x 6= y ⇒ e(x) 6= e(y).

Hence Nu(N, v) = {x | x ∈ Imp ∧ ∀y ∈ Imp e(x)I ≤lex e(y)I} has at mostone element! �

Page 47: Lectures in Cooperative Game Theory
Page 48: Lectures in Cooperative Game Theory

Bibliography

[1] Xiaotie Deng, Qizhi Fang, and Xiaoxun Sun. Finding nucleolus of flow game.In SODA ’06: Proceedings of the seventeenth annual ACM-SIAM symposium onDiscrete algorithm, pages 124–131, New York, NY, USA, 2006. ACM Press.

[2] Jeroen Kuipers, Ulrich Faigle, and Walter Kern. On the computation of the nucle-olus of a cooperative game. International Journal of Game Theory, 30(1):79–98,2001.

[3] D. Schmeidler. The nucleolus of a characteristic function game. SIAM Journal ofapplied mathematics, 17, 1969.

[4] Makoto Yokoo, Vincent Conitzer, Tuomas Sandholm, Naoki Ohta, and AtsushiIwasaki. A compact representation scheme for coalitional games in open anony-mous environments. In Proceedings of the Twenty First National Conference onArtificial Intelligence, pages –. AAAI Press AAAI Press / The MIT Press, 2006.

51

Page 49: Lectures in Cooperative Game Theory

Lecture 5The Kernel

The Kernel is another stability concept that weakens the stability requirements of thecore. It was first introduced by Davis and Maschler [3]. The definition of the kernel isbased on the excess of a coalition. For the nucleolus, a positive excess was interpretedas an amount of complaint as by forming a coalition with positive excess, some payoffwas lost. In the kernel, a positive excess is interpreted as a measure of threat: in thecurrent payoff distribution, if some agents deviate by forming coalition with positiveexcess, they are able to increase their payoff by redistributing the excess between them.When any two agents in a coalition have similar threatening powers, the kernel consid-ers that the payoff is stable. In the following, we will see two definitions of the kerneland we will see that it is guaranteed to be non-empty.

5.1 Definition of the KernelWe recall that the excess related to coalition C for a payoff distribution x is defined ase(C, x) = v(C)− x(C). We saw that a positive excess can be interpreted as an amountof complaint for a coalition. We can also interpret the excess as a potential to generatemore utility. Let us consider that the agents are forming a CS S = {C1, . . . , Ck}, andlet consider that the excess of a coalition C /∈ S is positive. Agent i ∈ C can view thepositive excess as a measure of his strength: if she leaves its current coalition in S andforms coalition C ⊆ N , she has the power to generate some surplus e(C, x). Whentwo agents want to compare their strength, they can compare the maximum excess ofa coalition that contains them and excludes the other agent, and the kernel is based onthis idea.

5.1.1. DEFINITION. [Maximum surplus] For a TU game (N, v), the maximum surplussk,l(x) of agent k over agent l with respect to a payoff distribution x is

sk,l(x) = maxC⊆N | k∈C, l /∈C

e(C, x).

53

Page 50: Lectures in Cooperative Game Theory

54 Lecture 5. The Kernel

For two agents k and l, the maximum surplus sk,l of agent k over agent l with re-spect to x is the maximum excess from a coalition that includes k but does exclude l.This maximum surplus can be used by agent k to show its strength over agent l: as-suming it is positive and that agent k can claim all of it, she can argue that she willbe better off without agent l; hence she should be compensated with more utility forstaying in the current coalition. When any two agents in a coalition have the samemaximum surplus (except for a special case), the agents are said to be in equilibrium.A payoff distribution is in the Kernel when all agents are in equilibrium. The formaldefinitions follow:

5.1.2. DEFINITION. [kernel] Let (N, v,S) be a TU game with coalition structure. Thekernel is the set of imputations x ∈ Imp(S) such that for every coalition C ∈ S, if(k, l) ∈ C2, k 6= l, then we have either skl(x) ≥ slk(x) or xk = v({k}).

First, note that the definition of the kernel is for a particular coalition structure. Theagents do not try to change the structure, but they argue about the payoff distribution.Another observation is that the kernel is a subset of the set of imputations(we could de-fine a pre-kernel and show this solution does not satisfy individual rationality). Finally,the stability condition must be satisfied by any two agents that are in the same coali-tion. This condition may appear to surprising at first as one would expect an equalitybetween the maximum surpluses of the two agents. The condition skl(x) < slk(x) callsfor a transfer of utility from k to l unless it is prevented by individual rationality, i.e.,by the fact that xk = v({k}). Hence, it is possible that agent l has a strictly greatermaximum excess than k when the payoff of k is her minimum payoff v({k}).

5.2 Another definitionAs for the bargaining set and the nucleolus, the kernel can be defined using objec-tions and counter objections. For the kernel, objections and counter-objections areexchanged between two members of the same coalition in S. Objections and counter-objections take the form of coalitions only (unlike for the the bargaining set and the nu-cleolus for which a payoff distribution was part of an objection and counter-objection).

Let us consider a game with CS 〈N, v,S〉 and let us consider a coalition C ∈ Ssuch that both agents k and l are in C.

Objection: A coalition P ⊆ N is an objection of k against l to x iff k ∈ P , l /∈ P andxl > v({l}).

“P is a coalition that contains k, excludes l and which sacrifices toomuch (or gains too little).”

An objection of agent k against agent l is simply a coalition P that has someissues according to agent k. The only constraint is that agent l is not a member of

Page 51: Lectures in Cooperative Game Theory

5.3. Properties of the kernel 55

that coalition. A counter-objection of agent l is then a coalition Q that excludesagent k which has a greater (or equal) excess than e(P, x).

Counter-objection: A coalition Q ⊆ N is a counter-objection to the objection P of kagainst l at x iff l ∈ Q, k /∈ Q and e(Q, x) ≥ e(P, x).

“k’s demand is not justified: Q is a coalition that contains l and ex-cludes k and that sacrifices even more (or gains even less).”

Note that if the inequality is strict, we can view Q as a new objection against agentl. Remember that the set of feasible payoff vectors for (N, v,S) is X(N,v,S) = {x ∈Rn | for every C ∈ S : x(C) ≤ v(C)}. We are now ready to define the kernel.

5.2.1. DEFINITION. [Kernel] Let (N, v,S) be a TU game in coalition structure. Thekernel is the set of imputations x ∈ X(N,v,S) s.t. for any coalition C ∈ S , for eachobjection P of an agent k ∈ C over any other member l ∈ C to x, there is a counter-objection of l to P .

This definition using objections and counter-objections is very intuitive compared tothe first definition that was presented. It should be clear that both definitions coincide.

5.3 Properties of the kernelThe Kernel and the nucleolus are linked: the following result shows that the nucleolusis included in the kernel. As a consequence, this guarantees that the Kernel is non-empty. The second part of the result is that the kernel is included in the bargaining set.As a consequence, the nucleolus is also included in the bargaining set and the bargain-ing set is non-empty, a result we did not prove during the lecture on the bargaining set.A final observation is that the kernel can be seen as a refinement of the bargaining set.

5.3.1. THEOREM. Let (N, v,S) a game with coalition structure, and let Imp 6= ∅.Then we have:

• (i) Nu(N, v,S) ⊆ K(N, v,S)

• (ii) K(N, v,S) ⊆ BS(N, v,S)

Proof.Let us start by proving (i).Let x /∈ K(N, v,S), we want to show that x /∈ Nu(N, v,S).Since x /∈ K(N, v,S), there exists a coalition C ∈ CS and two members k and l of

coalition C such that slk(x) > skl(x) and xk > v({k}). Let y be a payoff distributioncorresponding to a transfer of utility ε > 0 from k to l:

yi =

xi if i 6= k and i 6= lxk − ε if i = kxl + ε if i = l

Page 52: Lectures in Cooperative Game Theory

56 Lecture 5. The Kernel

Since xk > v({k}) and slk(x) > skl(x), we can choose ε > 0 small enough such that

• xk − ε > v({k})

• slk(y) > skl(y)

We want to show that e(y)I ≤lex e(x)I. Note that for any coalition S ⊆ N suchthat e(S, x) 6= e(S, y) we have either:

• k ∈ S and l /∈ S (e(S, x) > e(S, y) since e(S, y) = e(S, x) + ε > e(S, x))

• k /∈ S and l ∈ S (e(S, x) < e(S, y) since e(S, y) = e(S, x)− ε < e(S, x))

Let {B1(x), . . . , BM(x)} a partition of the set of all coalitions such that

• (S, T ) ∈ Bi(x) iff e(S, x) = e(T, x). We denote by ei(x) the common value ofthe excess in Bi(x), i.e. ei(x) = e(S, x) for all S ∈ Bi(x).

• e1(x) > e2(x) > . . . > eM(x)

In other words, e(x)I = 〈e1(x), . . . , e1(x)︸ ︷︷ ︸|B1(x)|times

, . . . , eM(x), . . . , eM(x)︸ ︷︷ ︸|BM (x)|times

〉.

Let i∗ be the minimal value of i ∈ {1, . . . ,M} such that there is a coalition C ∈Bi∗(x) with e(C, x) 6= e(C, y). For all i < i∗, we have Bi(x) = Bi(y) and ei(x) =ei(y). Since slk(x) > skl(x), Bi∗ contains

• at least one coalition S that contains l but not k, for such coalition, we must havee(S, x) > e(S, y)

• no coalition that contains k but not l.

If Bi∗(x) contains either

• coalitions that contain both k and l

• or coalitions that do not contain both k and l

Then, for any such coalitions S, we have e(S, x) = e(S, y), and it follows thatBi∗(y) ⊂Bi∗(x).Otherwise, we have ei∗(y) < ei∗(x).

In both cases, we have e(y) is lexicographically less than e(x), and hence y is notin the nucleolus of the game (N, v,S). 4

We now turn to proving (ii).Let (N, v,S) a TU game with coalition structure. Let x ∈ K(N, v,S). We want to

prove that x ∈ BS(N, v,S). To do so, we need to show that for any objection (P, y)from any player i against any player j at x, there is a counter objection (Q, z) to (P, y).For the bargaining set, An objection of i against j is a pair (P, y) where

Page 53: Lectures in Cooperative Game Theory

5.3. Properties of the kernel 57

• P ⊆ N is a coalition such that i ∈ P and j /∈ P .

• y ∈ Rp where p is the size of P

• y(P ) ≤ v(P ) (y is a feasible payoff for members of P )

• ∀k ∈ P , yk ≥ xk and yi > xi

An counter-objection to (P, y) is a pair (Q, z) where

• Q ⊆ N is a coalition such that j ∈ Q and i /∈ Q.

• z ∈ Rq where q is the size of Q

• z(Q) ≤ v(Q) (z is a feasible payoff for members of Q)

• ∀k ∈ Q, zk ≥ xk

• ∀k ∈ Q ∩ P zk ≥ yk

Let (P, y) be an objection of player i against player j to x. i ∈ P , j /∈ P , y(P ) ≤v(P ) and y(P ) > x(P ). We choose y(P ) = v(P ).

• xj = v({j}): Then ({j}, v({j})) is a counter objection to (P, y). 4

• xj > v({j}): Since x ∈ K(N, v,S) we have sji(x) ≥ sij(x) ≥ v(P )− x(P ) ≥y(P )− x(P ) since i ∈ P , j /∈ P .

Let Q ⊆ N such that j ∈ Q, i /∈ Q and sji(x) = v(Q)− x(Q).

We have v(Q)− x(Q) ≥ y(P )− x(P ). Then, we have

v(Q) ≥ y(P ) + x(Q)− x(P )

≥ y(P ∩Q) + y(P \Q) + x(Q \ P )− x(P \Q)

> y(P ∩Q) + x(Q \ P ) since i ∈ P \Q, y(P \Q) > x(P \Q)

Let us define z as follows{xk if k ∈ Q \ Pyk if k ∈ Q ∩ P

(Q, z) is a counter-objection to (P, y). 4

Finally x ∈ BS(N, v,S).�

These properties allow us to conclude that when the set of imputation is non-empty,both the kernel and the bargaining set are non-empty.

5.3.2. THEOREM. Let 〈N, v,S〉 a game with coalition structure such that the set ofimputations Imp 6= ∅. The kernel K(N, v,S) and the bargaining set BS(N, v,S) arenon-empty.

Page 54: Lectures in Cooperative Game Theory

58 Lecture 5. The Kernel

5.4 Computational IssuesOne method for computing the Kernel is the Stearns method [7]. The idea is to builda sequence of side-payments between agents to decrease the difference of surplusesbetween the agents. At each step of the sequence, the agents with the largest maxi-mum surplus difference exchange utility so as to decrease their surplus: the agent withsmaller surplus makes a payment to an agent with higher surplus so as to decrease theirsurplus difference. After each side-payment, the maximum surplus over all agents de-creases. In the limit, the process converges to an element in the Kernel. Computing anelement in the Kernel may require an infinite number of steps as the side payments canbecome arbitrarily small, and the use of the ε-Kernel can alleviate this issue. A crite-ria to terminate Stearns method is proposed in [6], and we present the correspondingalgorithm in Algorithm 1.

Algorithm 1: Transfer scheme to converge to a ε-Kernel-stable payoff distribu-tion for the CS S

compute-ε-Kernel(ε, S)repeat

for each coalition C ∈ S dofor each member i ∈ C do

for each member j ∈ C, j 6= i, do // compute the surplus

for two members of a coalition in Ssij ← maxR∈C |(i∈R, j /∈R) v(R)− x(R)

δ ← max(i,j)∈N2 |sij − sji|;(i?, j?)← argmax(i,j)∈N2 sij − sji;if(xj? − v({j}) < δ

2

)then // payment should be individually

rational

d← xj? − v({j?});else

d← δ2;

xi? ← xi? + d;xj? ← xj? − d;

until δv(S)≤ ε;

Computing a Kernel distribution is of exponential complexity. In Algorithm 1,computing the surpluses is expensive, as we need to search through all coalitionsthat contains a particular agent and does not contain another agent. Note that whena side-payment is performed, it is necessary to recompute the maximum surpluses.The derivation of the complexity of the Stearns method to compute a payoff in theε-Kernel can be found in [4, 6], and the complexity for one side-payment is O(n · 2n).Of course, the number of side-payments depends on the precision ε and on the initial

Page 55: Lectures in Cooperative Game Theory

5.5. Fuzzy Kernel 59

payoff distribution. They derive an upper bound for the number of iterations: con-verging to an element of the ε-Kernel requires n log2( δ0

ε·v(S)), where δ0 is the maximum

surplus difference in the initial payoff distribution. To derive a polynomial algorithm,the number of coalitions must be bounded. The solution used in [4, 6] is to only con-sider coalitions whose size is bounded in the interval K1, K2. The complexity of thetruncated algorithm is O(n2 · ncoalitions) where ncoalitions is the number of coalitionswith a size between K1 and K2, which is a polynomial of order K2.

5.5 Fuzzy KernelIn order to take into account the uncertainty in the knowledge of the utility function, afuzzy version of stability concept can be used. Blankenburg et al. consider a coalitionto be Kernel-stable with a degree of certainty [2]. This work also presents a side-payment scheme and shows that the complexity is similar to the crisp Kernel, and theidea from [4] can be used for ensuring a polynomial coalition formation algorithm.This approach assumes a linear relationship of the membership and coalition values.

Fuzzy coalitions can also allow agents to be members of multiple coalitions atthe same time, with possibly different degrees of involvement [1]. It can be mutuallybeneficial for an agent to be in two different coalitions. It may be beneficial for theagent to be in both coalition. In addition, the two coalitions may need the competenceof the same agent, though the coalitions do not have any incentive to merge as they maynot have anything to do with each other. This solution may allow to form coalitionsthat involve only agents that need to work together. In the previous example, withoutthe possibility of being member of multiple coalitions, the two coalitions should mergeto benefit from the agent participation, and agents that would not need to be in the samecoalition are forced to be in the same coalition. In [1], the degree of involvement of anagent in different coalition is a function of the risk involved in being in that coalition.The risk is quantified using financial risk measures. This work presents a definitionof the Kernel based on partial membership in coalitions and introduces a coalitionformation protocol that runs in polynomial time.

Page 56: Lectures in Cooperative Game Theory
Page 57: Lectures in Cooperative Game Theory

Bibliography

[1] Bastian Blankenburg, Minghua He, Matthias Klusch, and Nicholas R. Jennings.Risk-bounded formation of fuzzy coalitions among service agents. In Proceedingsof 10th International Workshop on Cooperative Information Agents, 2006.

[2] Bastian Blankenburg, Matthias Klusch, and Onn Shehory. Fuzzy kernel-stablecoalitions between rational agents. In Proceedings of the second international jointconference on Autonomous agents and multiagent systems (AAMAS-03). ACMPress, 2003.

[3] M. Davis and M. Maschler. The kernel of a cooperative game. Naval ResearchLogistics Quarterly, 12, 1965.

[4] Matthias Klusch and Onn Shehory. A polynomial kernel-oriented coalition algo-rithm for rational information agents. In Proceedings of the Second InternationalConference on Multi-Agent Systems, pages 157 – 164. AAAI Press, December1996.

[5] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. The MITPress, 1994.

[6] Onn Shehory and Sarit Kraus. Feasible formation of coalitions among autonomousagents in nonsuperadditve environments. Computational Intelligence, 15:218–251, 1999.

[7] Richard Edwin Stearns. Convergent transfer schemes for n-person games. Trans-actions of the American Mathematical Society, 134(3):449–459, December 1968.

61

Page 58: Lectures in Cooperative Game Theory

Lecture 6The Shapley value

Thus far, we have focused on solution concepts that provide some incentives for stayingin a coalition structure. The solution concept we are about to study focuses on how welleach agent’s payoff reflects her contribution, i.e., a notion of fairness. This solutionconcept was introduced by L.S Shapley [5] and we are going to consider two differentdefinitions. One is an axiomatic approach, and the other describes the idea that a playershould receive a payoff that is proportional to her contribution.

6.1 Taking the average of the contribution

A simple idea to design a payoff distribution is that an agent should receive a payoffthat is proportional to her contribution in a coalition. We recall that the marginal contri-bution of an agent i to a coalition C ⊆ N ismci(C) = v(C∪{i})−v(C). Let us considerthat a coalition C is built incrementally with one agent at a time entering the coalition.Also consider that the payoff of each agent i is its marginal contribution1. For example,〈mc1(∅),mc2({1}),mc3({1, 2})〉 is a payoff distribution for a game {1, 2, 3}, v). Thispayoff is efficient and it is also reasonable (from above and from below). However,the payoff of each agent depends on the order in which the agents enter the coalition.Let us assume that the game is convex. An agent that joins later would then have anadvantage over an agent that join the coalition earlier. As a more concrete example,consider agents that form a coalition to take advantages of price reduction when buyinglarge quantities of a product. Agents that start the coalition may have to spend largesetup cost, and agents that come later benefits from the already large number of agentsin the coalition. This may not be fair!

To alleviate this issue, the Shapley value averages each agents’ payoff over allpossible orderings: the value of agent i in coalition C is the average marginal valueover all possible orders in which the agents may join the coalition.

1We used this payoff distribution in the proof showing that a convex game has a non-empty core(Theorem 2.2.3).

63

Page 59: Lectures in Cooperative Game Theory

64 Lecture 6. The Shapley value

Let π represent a joining order of the grand coalition N : π can also be viewed asa permutation of 〈1, . . . , n〉. We write mc(π) the payoff vector where agent i obtainsmci({π(j) | j < i}). The payoff vector mc(π) is called the marginal vector. Let usdenote the set of all permutations of the sequence 〈1, . . . , n〉 as Π(N). The Shapleyvalues can then be defined as

Sh(N, v) =

∑π∈Π(N) mc(π)

n!.

N = {1, 2, 3}v({1}) = 0 v({2}) = 0 v({3}) = 0

v({1, 2}) = 90 v({1, 3}) = 80 v({2, 3}) = 70v({1, 2, 3}) = 120

1 2 31← 2← 3 0 90 301← 3← 2 0 40 802← 1← 3 90 0 302← 3← 1 50 0 703← 1← 2 80 40 03← 2← 1 50 70 0total 270 240 210Shapley value φ 45 40 35

Let y = 〈50, 40, 30〉C e(C, φ) e(C, y){1} -45 0{2} -40 0{3} -35 0{1, 2} 5 0{1, 3} 0 0{2, 3} -5 0{1, 2, 3} 0 0

This example shows that the Shapley value may not be in the core, and may not be thenucleolus.

Table 6.1: Example of a computation of Shapley value

We provide an example in Table 6.1 in which we list all the orders in which theagents can enter the grand coalition. The sum is over all joining orders, which maycontains a very large number of terms. However, when computing the Shapley valuefor one agent, one can avoid some redundancy by summing over all coalitions andnoticing that:

• There are |C|! permutations in which all members of C precede i.

• There are |N \(C∪{i})|! permutations in which the remaining members succedei, i.e. (n− |C| − 1)!.

These observations allow us to rewrite the Shapley value as:

Shi(N, v) =∑

C⊆N\{i}

|C|!(n− |C| − 1)!

n!(v(C ∪ {i})− v(C)) .

Note that the example from Table 6.1 also demonstrates that in general the Shapleyvalue is not in the core or in the nucleolus.

Page 60: Lectures in Cooperative Game Theory

6.2. An Axiomatic Characterisation 65

6.2 An Axiomatic CharacterisationIn the first lecture, we provided some concepts some desirable concepts that can besatisfied by a payoff distribution. The core is defined by using two of these concepts:efficiency and group rationality. In the following, we are going to use some proper-ties, called axioms in this context, to define a payoff distribution that guarantees someelements of fairness.

Instead of considering properties of a payoff distribution, we will state the proper-ties of a function that maps a game to a payoff distribution.

6.2.1. DEFINITION. [value function] Let GN the set of all valuation functions 2N → Rthat maps a coalition of N to a real number. A value function φ : N × GN → Rn

assigns an efficient allocation x to a TU game (N, v) (i.e., an allocation that satisfies∑i∈N φ(N, v)i = v(N)).

We have already seen one value function for games that have at least one imputa-tion: the nucleolus provides a single payoff distribution for such game.

We now consider some axioms that may be desirable for such a value function. Weconsider a TU game (N, v) and we simply note φ(N, v) as φ. The first axiom uses thedefinition of a dummy agent. Intuitively, there is no synergy between a dummy agentand any other agent, in other words the marginal contribution of a dummy agent i toany coalition is v({i}).

DUM (“Dummy actions”) : if agent i is a dummy then φi = v({i}). In other words,if the presence of agent i improves the value of any coalition by exactly v({i}),this agent should obtain precisely v({i}).

SYM (“Symmetry”) : When two agents generate the same marginal contributions,they should be rewarded equally: for i 6= j and ∀C ⊆ N such that i /∈ C andj /∈ S, if v(C ∪ {i}) = v(C ∪ {j}), then φi = φj .

ADD (“Additivity”) : For any two TU games (N, v) and (N,w) and their correspond-ing value functions φ(N, v) and φ(N,w), the value function for the TU game(N, v + w) is φ(N, v + w) = φ(N, v) + φ(N,w).

These three axioms appear as good properties that should be satisfied by a payoffdistribution. Dummy states that an agent that does not have any synergy with anyother agent should get the value of its singleton coalition. SYM provides an elementof fairness: two agents that are substitutes should get the same payoff. The last axiomdemands a value function to be additive. Shapley [5] showed that actually, there is aunique value that satisfies this three axioms.

6.2.2. THEOREM. The Shapley value is the unique value that satisfies axioms DUM,SYM and ADD.

Page 61: Lectures in Cooperative Game Theory

66 Lecture 6. The Shapley value

These axioms are independent. One can prove that if one of the three axioms isdropped, it is possible to find multiple value functions satisfying the other two axioms.To prove this results, one needs to show the existence of a value function that satisfiesthe three axioms, and then prove the unicity of the value function. We will follow theproof from [4]. We will need the following type of games called in the proof.

6.2.3. DEFINITION. [Unanimity Game] Let N be a set of agents and ∅ ≠ T ⊆ N. The unanimity game (N, v_T) is the game such that ∀C ⊆ N,

v_T(C) = 1 if T ⊆ C, and 0 otherwise.

One can note that if i ∈ N \ T, then i is a dummy player (its marginal contribution to any coalition is v_T({i}) = 0). In addition, if (i, j) ∈ T², then i and j are substitutes. The following lemma will be useful for the proof.

6.2.4. LEMMA. The set {v_T ∈ G_N | ∅ ≠ T ⊆ N} is a linear basis of G_N.

The lemma means that a TU game (N, v) can be represented by a unique set of values (α_T)_{∅≠T⊆N} such that ∀C ⊆ N, v(C) = (∑_{∅≠T⊆N} α_T v_T)(C).

Proof. There are 2^n − 1 unanimity games and the dimension of G_N is also 2^n − 1. We only need to prove that the unanimity games are linearly independent. Towards a contradiction, let us assume that ∑_{∅≠T⊆N} α_T v_T = 0 where (α_T)_{∅≠T⊆N} ≠ 0_{R^{2^n−1}}. Let T_0 be a minimal set in {T ⊆ N | α_T ≠ 0}. Then, (∑_{∅≠T⊆N} α_T v_T)(T_0) = α_{T_0} ≠ 0, which is a contradiction. □

We can now turn to proving that the Shapley value is the unique value that satisfies SYM, DUM and ADD.

Proof. We start by proving the uniqueness. Let φ be a feasible solution on G_N that is non-empty and satisfies the axioms SYM, DUM and ADD. Let us prove that φ is a value function. Let (N, v) ∈ G_N.

• If v = 0_{G_N}, all players are dummies. Since the solution is non-empty, 0_{R^{|N|}} is the unique member of φ(N, v).

• Otherwise, (N, −v) ∈ G_N. Let x = φ(N, v) and y = φ(N, −v). By ADD, x + y = φ(N, v − v), and then, since v − v = 0_{G_N}, we have that x = −y is unique. Moreover, x(N) ≤ v(N), as φ is a feasible solution. Also, y(N) ≤ −v(N). Since x = −y, we have v(N) ≤ x(N) ≤ v(N), i.e., x is efficient.

Hence, φ is a value function. We now show that the value function of a unanimity game is unique. Let ∅ ≠ T ⊆ N and α ∈ R. Let us prove that φ(N, α · v_T) is uniquely defined.

• Let i ∉ T. We trivially have T ⊆ C iff T ⊆ C ∪ {i}. Then ∀C ⊆ N \ {i}, αv_T(C) = αv_T(C ∪ {i}). Hence, all agents i ∉ T are dummies. By DUM, ∀i ∉ T, φ_i(N, α · v_T) = 0.


• Let (i, j) ∈ T². Then for all C ⊆ N \ {i, j}, (αv_T)(C ∪ {i}) = (αv_T)(C ∪ {j}). By SYM, φ_i(N, α · v_T) = φ_j(N, α · v_T).

• Since φ is a value function, it is efficient. Then, ∑_{i∈N} φ_i(N, α · v_T) = αv_T(N) = α. Hence, ∀i ∈ T, φ_i(N, α · v_T) = α / |T|.

This proves that φ(N, α · v_T) is uniquely defined. Since any TU game (N, v) can be written as ∑_{∅≠T⊆N} α_T v_T and because of ADD, there is a unique value function that satisfies the three axioms.

We now turn to the existence of the Shapley value, i.e., we need to show that there exists a value function that satisfies the three axioms. Let (N, v) be a TU game and consider the Shapley value as defined in the previous section. We check that this function satisfies the three axioms.

SYM Let i and j be substitutes, i.e., ∀C ⊆ N \ {i, j}, we have v(C ∪ {i}) = v(C ∪ {j}). Then ∀C ⊆ N \ {i, j}, we have

– mc_i(C) = mc_j(C),
– v(C ∪ {i, j}) − v(C ∪ {j}) = v(C ∪ {i, j}) − v(C ∪ {i}), hence mc_i(C ∪ {j}) = mc_j(C ∪ {i}).

⇒ Sh_i(N, v) = Sh_j(N, v), so Sh satisfies SYM.

DUM Let i be a dummy agent, i.e., for all coalitions C ⊆ N \ {i}, we have v(C) + v({i}) = v(C ∪ {i}). Then each marginal contribution of player i is v({i}), and it follows that Sh_i(N, v) = v({i}). Sh satisfies DUM.

ADD Sh is clearly additive, since Sh_i(N, v) is a linear function of the values (v(C))_{C⊆N}.

We proved that a unique function satisfies all three axioms at the same time, and we found one function that satisfies them all; hence the Shapley value is well defined. □

The axioms SYM and DUM are clearly desirable. The last axiom, ADD, is harder to motivate in some cases. If the valuation function of a TU game is interpreted as an expected payoff, then ADD is desirable (as you want to be able to add the values of different states of the world). Also, if we consider cost-sharing games in which a TU game corresponds to sharing the cost of one service, then ADD is desirable, as the cost of a joint service should be the sum of the costs of the separate services. However, if we do not make any assumptions about the games (N, v) and (N, w), the axiom implies that there is no interaction between the two games. In addition, the game (N, v + w) may induce a behaviour that is unrelated to the behaviour induced by either (N, v) or (N, w), and in this case ADD can be questioned.

Other axiomatisations that do not use the ADD axiom have been proposed by Young in [7] and by Myerson in [3].


6.2.5. DEFINITION. [Marginal contribution axiom] Let (N, u) and (N, v) be two TU games. A value function φ satisfies the marginal contribution axiom iff for all i ∈ N, if for all C ⊆ N \ {i} we have u(C ∪ {i}) − u(C) = v(C ∪ {i}) − v(C), then φ_i(u) = φ_i(v).

In other words, the value of a player depends only on its marginal contributions. The following axiomatisation is due to Young [7].

6.2.6. THEOREM. The Shapley value is the unique value function that satisfies the symmetry and marginal contribution axioms.

We denote by v \ i the TU game (N \ {i}, v\i) where v\i is the restriction of v to N \ {i}.

6.2.7. DEFINITION. [Balanced contribution axiom] A value function φ satisfies the balanced contribution axiom iff for all (i, j) ∈ N², φ_i(v) − φ_i(v \ j) = φ_j(v) − φ_j(v \ i).

For any two agents, the amount that each agent would win or lose if the other “leaves the game” should be the same. The following axiomatisation is due to Myerson [3].

6.2.8. THEOREM. The Shapley value is the unique value function that satisfies the balanced contribution axiom.

6.3 Other properties

As noted before, the Shapley value always exists and is unique.

6.3.1. THEOREM. For superadditive games, the Shapley value is an imputation.

Proof. Let (N, v) be a superadditive TU game. By superadditivity, ∀i ∈ N, ∀C ⊆ N \ {i}, v(C ∪ {i}) − v(C) ≥ v({i}). Hence, in each marginal vector, agent i gets at least v({i}). The same is true for the Shapley value, as it is the average over all marginal vectors. □

6.3.2. THEOREM. For convex games, the Shapley value is in the core.

Proof. Let (N, v) be a convex game. We know that all marginal vectors are in the core (to show that convex games have a non-empty core, we used one marginal vector and showed it was in the core; the same argument works for any marginal vector). The core is a convex set, and the average of a finite set of points in a convex set is also in the set. Hence, the Shapley value is in the core. □

When the valuation function is superadditive, the Shapley value is individually rational, i.e., it is an imputation. When the core is non-empty, the Shapley value may not be in the core. However, when the valuation function is convex, the Shapley value is also group rational, hence it is in the core.


6.4 Computational Issues

The nature of the Shapley value is combinatorial, as all possible orderings to form a coalition need to be considered. This computational complexity can sometimes be an advantage, as agents cannot benefit from manipulation. For example, it is NP-complete to determine whether an agent can benefit from false names [6]. Nevertheless, some representations allow one to compute the Shapley value efficiently. We will survey a few representations in a coming lecture. For now, we just concentrate on a simple proposal.

In order to reduce the combinatorial complexity of the computation of the Shapley value, Ketchpel introduces the Bilateral Shapley Value (BSV) [1]. The idea is to consider the formation of a coalition as a succession of mergers between two coalitions. Two disjoint coalitions C1 and C2, with C1 ∩ C2 = ∅, may merge when v(C1 ∪ C2) ≥ v(C1) + v(C2). When they merge, the two coalitions, called founders of the new coalition C1 ∪ C2, share the marginal utility as follows:

BSV(C1) = ½ v(C1) + ½ (v(C1 ∪ C2) − v(C2)) and BSV(C2) = ½ v(C2) + ½ (v(C1 ∪ C2) − v(C1)).

This is the expression of the Shapley value in the case of an environment with two agents. In C1 ∪ C2, each of the founders gets half of its ‘local’ contribution, and half of the marginal utility of the other founder. Given this distribution of the marginal utility, it is rational for C1 and C2 to merge if ∀i ∈ {1, 2}, v(Ci) ≤ BSV(Ci). Note that symmetric founders get equal payoffs, i.e., for C1, C2, C such that C1 ∩ C2 = C1 ∩ C = C2 ∩ C = ∅, v(C ∪ C1) = v(C ∪ C2) ⇒ BSV(C ∪ C1) = BSV(C ∪ C2). Given a sequence of successive merges from the state where each agent is in a singleton coalition, we can use backward induction to compute a stable payoff distribution [2]. Though the computation of the Shapley value requires looking at all of the permutations, the value obtained by using backtracking and the BSV only focuses on a particular set of permutations, but the computation is significantly cheaper.


Bibliography

[1] Steven P. Ketchpel. The formation of coalitions among self-interested agents. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 414–419, August 1994.

[2] Matthias Klusch and Onn Shehory. Coalition formation among rational information agents. In Rudy van Hoe, editor, Seventh European Workshop on Modelling Autonomous Agents in a Multi-Agent World, Eindhoven, The Netherlands, 1996.

[3] Roger B. Myerson. Graphs and cooperation in games. Mathematics of Operations Research, 2:225–229, 1977.

[4] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. The MIT Press, 1994.

[5] L. S. Shapley. A value for n-person games. In H. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games, volume 2. Princeton University Press, Princeton, NJ, 1953.

[6] Makoto Yokoo, Vincent Conitzer, Tuomas Sandholm, Naoki Ohta, and Atsushi Iwasaki. Coalitional games in open anonymous environments. In Proceedings of the Twentieth National Conference on Artificial Intelligence, pages 509–515. AAAI Press / The MIT Press, 2005.

[7] H. P. Young. Monotonic solutions of cooperative games. International Journal of Game Theory, 14:65–72, 1985.


Lecture 7
A Special Class of TU games: Voting Games

The formation of coalitions is usual in parliaments or assemblies. It is therefore interesting to consider a particular class of coalitional games that models voting in an assembly. For example, we can represent an election between two candidates as a voting game where the winning coalitions are the coalitions containing at least half of the voters.

7.1 Definitions

We start by providing the definition of a voting game, which can be viewed as a special class of TU games. Then, we will formalize some known concepts used in voting; for example, we will see how we can define what a dictator is.

7.1.1. DEFINITION. [voting game] A game (N, v) is a voting game when

• the valuation function takes only two values: 1 for the winning coalitions, 0 otherwise.

• v satisfies unanimity: v(N) = 1

• v satisfies monotonicity: S ⊆ T ⊆ N ⇒ v(S) ≤ v(T ).

Unanimity and monotonicity are natural assumptions in most cases. Unanimity reflects the fact that when all agents agree, the coalition should be winning. Monotonicity tells us that the addition of agents to a coalition cannot turn a winning coalition into a losing one, which is reasonable for voting: more supporters should not harm the coalition. A first way to represent a voting game is by listing all winning coalitions. Using the monotonicity property, a more succinct representation is to list only the minimal winning coalitions.

7.1.2. DEFINITION. [Minimal winning coalition] A coalition C ⊆ N is a minimal winning coalition iff v(C) = 1 and ∀i ∈ C, v(C \ {i}) = 0.


For example, we consider the game ({1, 2, 3, 4}, v) such that v(C) = 1 when |C| ≥ 3 or (|C| = 2 and 1 ∈ C), and v(C) = 0 otherwise. The set of winning coalitions is {{1, 2}, {1, 3}, {1, 4}, {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}, {1, 2, 3, 4}}. We can represent the game more succinctly by just writing the set of minimal winning coalitions, which is {{1, 2}, {1, 3}, {1, 4}, {2, 3, 4}}.
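A small Python sketch (added here as an illustration, not from the text) that enumerates the minimal winning coalitions of this example game:

from itertools import combinations

def minimal_winning_coalitions(agents, v):
    """List the winning coalitions in which every member is necessary."""
    minimal = []
    for size in range(1, len(agents) + 1):
        for C in combinations(agents, size):
            C = set(C)
            if v(C) == 1 and all(v(C - {i}) == 0 for i in C):
                minimal.append(C)
    return minimal

# The example above: a coalition wins iff it has size >= 3,
# or size 2 and contains agent 1.
def v(C):
    return 1 if len(C) >= 3 or (len(C) == 2 and 1 in C) else 0

print(minimal_winning_coalitions([1, 2, 3, 4], v))
# [{1, 2}, {1, 3}, {1, 4}, {2, 3, 4}]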

We can now see how we formalize some common terms in voting. We first express what a dictator is.

7.1.3. DEFINITION. [Dictator] Let (N, v) be a simple game. A player i ∈ N is a dictator iff {i} is a winning coalition.

Note that with the requirements of simple games, it is possible to have more than one dictator! The next notion is that of a veto player: a player who can block a decision on her own by opposing it (e.g., in the United Nations Security Council, China, France, Russia, the United Kingdom, and the United States are veto players).

7.1.4. DEFINITION. [Veto Player] Let (N, v) be a simple game. A player i ∈ N is a veto player if N \ {i} is a losing coalition. Alternatively, i is a veto player iff for all winning coalitions C, i ∈ C.

It also follows that a veto player is a member of every minimal winning coalition. Another concept is that of a blocking coalition: a coalition that, on its own, cannot win, but whose members' support is required to win. Put another way, the members of a blocking coalition do not have the power to win, but they have the power to prevent a win.

7.1.5. DEFINITION. [blocking coalition] A coalition C ⊆ N is a blocking coalition iff C is a losing coalition and every coalition S ⊆ N \ C is a losing coalition.

7.2 Stability

We can start by studying what it means to have a stable payoff distribution in these games. The following theorem characterizes the core of simple games.

7.2.1. THEOREM. Let (N, v) be a simple game. Then

Core(N, v) = { x ∈ R^n | x is an imputation and x_i = 0 for each non-veto player i }.

Proof.

⊆ Let x ∈ Core(N, v). By definition, x(N) = 1. Let i be a non-veto player. Then x(N \ {i}) ≥ v(N \ {i}) = 1. Hence x(N \ {i}) = 1 and x_i = 0.


⊇ Let x be an imputation with x_i = 0 for every non-veto player i. Since x(N) = 1, the set V of veto players is non-empty and x(V) = 1.

Let C ⊆ N. If C is a winning coalition then V ⊆ C, hence x(C) ≥ x(V) = 1 = v(C). Otherwise, C is a losing coalition (which may contain veto players), and x(C) ≥ 0 = v(C). Hence, x is group rational. □

We can also study the class of simple convex games. The following theorem shows that they are exactly the games with a single minimal winning coalition.

7.2.2. THEOREM. A simple game (N, v) is convex iff it is a unanimity game (N, v_V) where V is the set of veto players.

Proof. A game is convex iff ∀S, T ⊆ N v(S) + v(T ) ≤ v(S ∩ T ) + v(S ∪ T ).

⇒ Let us assume (N, v) is convex.

If S and T are winning coalitions, S ∪ T is a winning coalition by monotonicity. Then, we have 2 ≤ 1 + v(S ∩ T) and it follows that v(S ∩ T) = 1: the intersection of two winning coalitions is a winning coalition. Moreover, from the definition of veto players, the intersection of all winning coalitions is the set V of veto players. Hence, v(V) = 1. By monotonicity, if V ⊆ C, then v(C) = 1. Otherwise, V ⊄ C: there must be a veto player i ∉ C, and it must be the case that v(C) = 0. Hence, for all coalitions C ⊆ N, v(C) = 1 iff V ⊆ C.

⇐ Let (N, v_V) be a unanimity game. Let us prove it is a convex game. Let S ⊆ N and T ⊆ N; we want to prove that v(S) + v(T) ≤ v(S ∪ T) + v(S ∩ T).

– case V ⊆ S ∩ T: then V ⊆ S and V ⊆ T, and we have 2 ≤ 2.

– case V ⊄ S ∩ T and V ⊆ S ∪ T:

∗ if V ⊆ S then V ⊄ T and 1 ≤ 1;

∗ if V ⊆ T then V ⊄ S and 1 ≤ 1;

∗ otherwise V ⊄ S and V ⊄ T, and then 0 ≤ 1.

– case V ⊄ S ∪ T: then 0 ≤ 0.

In all cases, v(S) + v(T) ≤ v(S ∪ T) + v(S ∩ T), hence a unanimity game is convex. In addition, all members of V are veto players. □


7.3 Weighted voting games

We now define a class of voting games that has a more succinct representation: each agent has a weight, and a coalition needs to reach a threshold (i.e., a quota) to be winning. This is a much more compact representation, as we only need to define a vector of weights and a threshold. The formal definition follows.

7.3.1. DEFINITION. [weighted voting game] A game (N, v, q, w) is a weighted voting game when

• w = (w1, w2, . . . , wn) ∈ R^n_+ is a vector of weights, one for each voter;

• a coalition C is winning (i.e., v(C) = 1) iff ∑_{i∈C} w_i ≥ q, and losing otherwise (i.e., v(C) = 0);

• v satisfies unanimity: ∑_{i∈N} w_i ≥ q.

The fact that each agent has a positive (or zero) weight ensures that the game is monotone. We will denote a weighted voting game (N, (w_i)_{i∈N}, q) as [q; w1, . . . , wn]. In its early days, the European Union was using a weighted voting game. Now, a combination of weighted voting games is used (a decision is accepted when it is supported by 55% of Member States, including at least fifteen of them, representing at the same time at least 65% of the Union's population).

Weighted voting games are a succinct representation of simple games. However, not all simple games can be represented by a weighted voting game: we say that the representation is not complete. For example, consider the voting game ({1, 2, 3, 4}, v) such that the set of minimal winning coalitions is {{1, 2}, {3, 4}}. Let us assume we can represent (N, v) with a weighted voting game [q; w1, w2, w3, w4]. We can form the following inequalities:

v({1, 2}) = 1, hence w1 + w2 ≥ q
v({3, 4}) = 1, hence w3 + w4 ≥ q
v({1, 3}) = 0, hence w1 + w3 < q
v({2, 4}) = 0, hence w2 + w4 < q

But then, w1 + w2 + w3 + w4 < 2q and w1 + w2 + w3 + w4 ≥ 2q, which is impossible. Hence, (N, v) cannot be represented by a weighted voting game.

Not all simple games can be represented by a weighted voting game. However, many weighted voting games represent the same simple game: two weighted voting games may have different quotas and weights, but they may have exactly the same winning coalitions. Two weighted voting games G = [q; w1, . . . , wn] and G′ = [q′; w′1, . . . , w′n] are said to be equivalent when ∀C ⊆ N, w(C) ≥ q iff w′(C) ≥ q′.

The definition of weighted voting games allows the weights and the quota to be chosen as real numbers. From a computational point of view, storing and manipulating real numbers is challenging. However, one does not need to use real numbers. The following


result shows that any weighted voting game is equivalent to a weighted voting game with small integer weights and quota.

7.3.2. THEOREM. For any weighted voting game G, there exists an equivalent weighted voting game [q; w1, . . . , wn] with q ∈ N, ∀i ∈ N, w_i ∈ N, and w_i = O(2^{n log n}).

Without loss of generality, we can now study weighted voting games with only integer weights and an integer quota, which allows us to represent a weighted voting game with a polynomial number of bits.

We now turn to the question of the meaning of the weights. One intuition may be that the weight represents the importance or the strength of a player. Let us consider some examples to check this intuition.

• [10; 7, 4, 3, 3, 1]: The set of minimal winning coalitions is {{1, 2}, {1, 3}, {1, 4}, {2, 3, 4}}. Player 5, although it has some weight, is a dummy. Player 2 has a higher weight than players 3 and 4, but it is clear that players 2, 3 and 4 have the same influence.

• [51; 49, 49, 2]: The set of minimal winning coalitions is {{1, 2}, {1, 3}, {2, 3}}. The players have symmetric roles, but this is not reflected in their weights.

These examples show that the weights can be deceptive and may not represent the voting power of a player. Hence, we need different tools to measure the voting power of the voters, which is the goal of the following section.

7.4 Power Indices

The examples above raise the subject of measuring the voting power of the agents in a voting game. Multiple indices have been proposed to answer this question. In the following, we introduce a few of them and discuss some weaknesses (some paradoxical situations may occur). Finally, we briefly describe some applications.

7.4.1 Definitions

One central notion used to define the power of a voter is that of a swing or pivotal voter. Informally, when a coalition C is losing, a pivotal voter for that coalition is a voter i that makes the coalition C ∪ {i} win. The presence of the members of C is not sufficient to win the election, but with the presence of i, C ∪ {i} wins, and i can be seen as an important voter.

7.4.1. DEFINITION. [Swing or Pivotal Voter] A voter i is pivotal (or swing) for a coalition C when i turns the coalition from a losing into a winning one, i.e., v(C) = 0 and v(C ∪ {i}) = 1.


In the following, w is the number of winning coalitions and, for a voter i, η_i is the number of coalitions for which i is pivotal, i.e., η_i = ∑_{S⊆N\{i}} (v(S ∪ {i}) − v(S)). We are now ready to define some power indices.

Shapley-Shubik index: it is the Shapley value of the voting game; its interpretation in this context is the fraction of the permutations of all players in which i is pivotal:

I_SS(N, v, i) = ∑_{C ⊆ N\{i}} ( |C|! (n − |C| − 1)! / n! ) (v(C ∪ {i}) − v(C)).

“For each permutation, the pivotal player gets one more point.” One issue is that the voters do not trade the value of the coalition, though the decision that the voters vote about is likely to affect the entire population.

Banzhaf index: for each coalition, we determine which agents are swing agents (more than one agent may be pivotal). The raw Banzhaf index of a player i is

β_i = ( ∑_{C ⊆ N\{i}} (v(C ∪ {i}) − v(C)) ) / 2^{n−1}.

The interpretation is that the Banzhaf index is the fraction of coalitions for which a player is pivotal. The raw Banzhaf indices do not necessarily sum up to one. However, for a simple game (N, v), v(N) = 1 and v(∅) = 0, so at least one player i has a power index β_i ≠ 0. Hence, B = ∑_{j∈N} β_j > 0. The normalized Banzhaf index of player i for a simple game (N, v) is defined as

I_B(N, v, i) = β_i / B.

Coleman indices: Coleman defines three indices [1]: the power of the collectivity to act, A = w / 2^n (A is the probability of a winning vote occurring); the power to prevent action, P_i = η_i / w (the ability of a voter to change the outcome from winning to losing by changing its vote); and the power to initiate action, I_i = η_i / (2^n − w) (the ability of a voter to change the outcome from losing to winning by changing its vote; the numerator is the same as in P_i, but the denominator is the number of losing coalitions, i.e., the complement of the one of P_i).

We provide in Table 7.1 an example of the computation of the Shapley-Shubik and Banzhaf indices. This example shows that the two indices may differ. There is a slight difference in the probability model between the Banzhaf index β_i and Coleman's index P_i: in Banzhaf's, all the voters but i vote randomly, whereas in Coleman's, the assumption of random voting also applies to voter i. Hence, the Banzhaf index can be written as β_i = 2P_i · A = 2I_i · (1 − A).


Permutations (the pivotal agent is marked with a star):

{1, 2*, 3, 4}   {3, 1, 2*, 4}
{1, 2*, 4, 3}   {3, 1, 4*, 2}
{1, 3, 2*, 4}   {3, 2, 1*, 4}
{1, 3, 4*, 2}   {3, 2, 4, 1*}
{1, 4, 2*, 3}   {3, 4, 1*, 2}
{1, 4, 3*, 2}   {3, 4, 2, 1*}
{2, 1*, 3, 4}   {4, 1, 2*, 3}
{2, 1*, 4, 3}   {4, 1, 3*, 2}
{2, 3, 1*, 4}   {4, 2, 1*, 3}
{2, 3, 4, 1*}   {4, 2, 3, 1*}
{2, 4, 1*, 3}   {4, 3, 1*, 2}
{2, 4, 3, 1*}   {4, 3, 2, 1*}

         1      2      3      4
Sh      7/12   1/4    1/12   1/12

Winning coalitions (the swing agents are marked with a star):
{1*, 2*}  {1*, 2*, 3}  {1*, 2*, 4}  {1*, 3*, 4*}  {1*, 2, 3, 4}

              1      2      3      4
β            5/8    3/8    1/8    1/8
I_B(N, v, i) 1/2    3/10   1/10   1/10

Table 7.1: The Shapley-Shubik and Banzhaf indices for the weighted voting game [7; 4, 3, 2, 1].
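The numbers in Table 7.1 can be reproduced with a short brute-force Python sketch (an illustration of the definitions above, not part of the original text); the game [7; 4, 3, 2, 1] is the one from the table.

from itertools import combinations
from math import factorial

def indices(weights, quota):
    """Shapley-Shubik and (raw and normalised) Banzhaf indices of the
    weighted voting game [quota; weights]."""
    n = len(weights)
    agents = list(range(n))
    win = lambda C: sum(weights[i] for i in C) >= quota

    shapley, raw_banzhaf = [0.0] * n, [0.0] * n
    for i in agents:
        others = [j for j in agents if j != i]
        for size in range(n):
            for C in combinations(others, size):
                if not win(C) and win(C + (i,)):  # i is a swing for C
                    shapley[i] += factorial(size) * factorial(n - size - 1) / factorial(n)
                    raw_banzhaf[i] += 1 / 2 ** (n - 1)
    total = sum(raw_banzhaf)
    normalised = [b / total for b in raw_banzhaf]
    return shapley, raw_banzhaf, normalised

print(indices([4, 3, 2, 1], 7))
# Shapley-Shubik: [7/12, 1/4, 1/12, 1/12]
# raw Banzhaf:    [5/8, 3/8, 1/8, 1/8];  normalised: [1/2, 3/10, 1/10, 1/10]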

7.4.2 Paradoxes

The power indices may behave in unexpected ways if we modify the game. For example, we might expect that adding voters to a game would reduce the power of the voters present in the original game, but this may not be the case.

Consider the game [4; 2, 2, 1]. Player 3 is a dummy in this game, so her Shapley-Shubik and Banzhaf indices are zero. Now assume that a voter joins the game with a weight of 1. In the resulting game G′, player 3 becomes pivotal for a coalition consisting of one of the two weight-2 voters of the original game and the new player. Hence, her index must now be positive. This situation is known as the paradox of the new player.

Another unexpected behaviour may occur when a voter i splits her identity and weight between two voters. The sum of the new identities' Shapley values may be quite different from the Shapley value of voter i. This situation is known as the paradox of size.

• Increase of power by splitting identities. Consider a game with |N| = n voters, [n+1; 2, 1, . . . , 1]. In this game, the only winning coalition is the grand coalition, so I_SS(N, v, i) = 1/n for every voter. Now suppose that voter 1 splits into two voters of weight one. We obtain a new game with n+1 voters, [n+1; 1, . . . , 1]. Using a similar argument, the Shapley-Shubik index of each voter is 1/(n+1). Hence, the joint power of the new identities is 2/(n+1), almost twice the power of agent 1 by herself!

• Decrease of power by splitting identities. Consider an n-voter voting game in which all voters have a weight of 2 and the quota is 2n − 1, i.e., the game [2n−1; 2, . . . , 2]. All the players being symmetric, the Shapley value of each voter is 1/n. If player 1 splits into two voters of weight 1, each of her identities has a Shapley value of 1/(n(n+1)) in the new game. Hence, the sum of the Shapley values of the two identities is smaller than the value in the original game, by a factor of (n+1)/2.

7.4.3 Applications

When designing a weighted voting game, for example to decide on the weights for a vote in the European Union or at the United Nations, one needs to choose which weight is attributed to each nation. The problem of choosing the weights so that they correspond to a given power index has been tackled in [2]. If the number of countries changes, one does not want to re-design and negotiate a new game each time. Each citizen votes for a representative, and the representatives of each country vote. It may be desirable that each citizen, irrespective of her/his nationality, has the same voting power. If β_x is the normalized Banzhaf index of a person in a country i of the EU with population n_i, and β_i is the normalized Banzhaf index of the representative of country i, then Felsenthal and Machover have shown that β_x ∝ β_i / √(2πn_i). Thus, the Banzhaf index of each representative β_i should be proportional to √n_i for each person in the EU to have equal power.

7.4.4 Complexity

The computational complexity of voting and weighted voting games has been studied in [3, 4]. For example, the problem of determining whether the core is empty is polynomial. The argument for this result is the following theorem: the core of a weighted voting game is non-empty iff there exists a veto player. When the core is non-empty, the problem of computing the nucleolus is also polynomial; otherwise, it is an NP-hard problem.


Bibliography

[1] James S. Coleman. The benefits of coalition. Public Choice, 8:45–61, 1970.

[2] Bart de Keijzer, Tomas Klos, and Yingqian Zhang. Enumeration and exact design of weighted voting games. In Proc. of the 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS-2010), pages 391–398, 2010.

[3] Xiaotie Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Mathematics of Operations Research, 19(2):257–266, 1994.

[4] Edith Elkind, Leslie Ann Goldberg, Paul Goldberg, and Michael Wooldridge. Computational complexity of weighted threshold games. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI-07), pages 718–723, 2007.


Lecture 8
Representation and complexity

We have studied different solution concepts, mainly looking at their different properties. One natural question from the point of view of a (theoretical) computer scientist is how hard it is to compute a solution, or to check whether the set of solutions is empty or not. This is the question we are going to consider in this lecture.

8.1 Naive representation

Let us assume we want to write a computer program for computing a solution concept.

The first question that comes to mind is how to represent the input of a TU game. A straightforward representation is by enumeration: we can use an array in which each entry represents a coalition and contains the value of that coalition (and one can use the binary representation of a number to encode which agents are members of a coalition, e.g., 21 = 10101 in binary corresponds to coalition {1, 3, 5}). This requires storing 2^n numbers, which may be problematic for large values of n. Typically, computer scientists are happier with an input of polynomial length.
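A minimal Python sketch of this enumeration representation (added as an illustration; the valuation v(C) = |C|² is a hypothetical example):

def coalition_to_index(coalition):
    """Encode a coalition (agents numbered from 1) as a bit mask:
    agent i corresponds to bit i-1, e.g. {1, 3, 5} -> 0b10101 = 21."""
    index = 0
    for i in coalition:
        index |= 1 << (i - 1)
    return index

n = 5
# Naive representation: an array with one entry per coalition (2^n values).
# As an illustration, store v(C) = |C|^2.
v = [bin(index).count("1") ** 2 for index in range(2 ** n)]

print(coalition_to_index({1, 3, 5}))     # 21
print(v[coalition_to_index({1, 3, 5})])  # 9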

The complexity of an algorithm is measured in terms of the size of its input. If we use enumeration, many algorithms may appear efficient simply because they manipulate an input of exponential size. To properly speak about complexity issues, one needs to find a polynomial-size representation of the game, which we will call a compact or succinct representation.

In general, there is a tradeoff between how succinct the representation is and how easy or hard the computation is. The idea is that, to represent the game compactly, one needs to encode a lot of information in a smart way, which may make it difficult to manipulate the representation to compute something interesting. Hence, we look for a balance between succinctness and tractability.


8.2 Representations that are good for computing the Shapley Value

The nature of the Shapley value is combinatorial, as all possible orderings to form a coalition need to be considered. This computational complexity can sometimes be an advantage, as agents cannot benefit from manipulation. For example, it is NP-complete to determine whether an agent can benefit from false names [15]. Nevertheless, some representations allow the Shapley value to be computed efficiently, and we survey a few of them.

8.2.1 Bilateral Shapley Value

In order to reduce the combinatorial complexity of the computation of the Shapley value, Ketchpel introduces the Bilateral Shapley Value (BSV) [13]. The idea is to consider the formation of a coalition as a succession of mergers between two coalitions. Two disjoint coalitions C1 and C2, with C1 ∩ C2 = ∅, may merge when v(C1 ∪ C2) ≥ v(C1) + v(C2). When they merge, the two coalitions, called founders of the new coalition C1 ∪ C2, share the marginal utility as follows:

BSV(C1) = ½ v(C1) + ½ (v(C1 ∪ C2) − v(C2)) and BSV(C2) = ½ v(C2) + ½ (v(C1 ∪ C2) − v(C1)).

This is the expression of the Shapley value in the case of an environment with two agents. In C1 ∪ C2, each of the founders gets half of its ‘local’ contribution, and half of the marginal utility of the other founder. Given this distribution of the marginal utility, it is rational for C1 and C2 to merge if ∀i ∈ {1, 2}, v(Ci) ≤ BSV(Ci). Note that symmetric founders get equal payoffs, i.e., for C1, C2, C such that C1 ∩ C2 = C1 ∩ C = C2 ∩ C = ∅, v(C ∪ C1) = v(C ∪ C2) ⇒ BSV(C ∪ C1) = BSV(C ∪ C2). Given a sequence of successive merges from the state where each agent is in a singleton coalition, we can use backward induction to compute a stable payoff distribution [14]. Though the computation of the Shapley value requires looking at all of the permutations, the value obtained by using backtracking and the BSV only focuses on a particular set of permutations, but the computation is significantly cheaper.
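A minimal Python sketch of the sharing rule between two founders (the 2-agent valuation used at the end is a hypothetical example, not from the text):

def bilateral_shapley(v, C1, C2):
    """Payoffs of the two founders C1 and C2 when they merge into C1 | C2,
    following the sharing rule above. `v` maps frozensets to values."""
    union = C1 | C2
    bsv1 = 0.5 * v(C1) + 0.5 * (v(union) - v(C2))
    bsv2 = 0.5 * v(C2) + 0.5 * (v(union) - v(C1))
    return bsv1, bsv2

# Hypothetical valuation: singletons are worth 1, the pair {1, 2} is worth 4.
values = {frozenset({1}): 1, frozenset({2}): 1, frozenset({1, 2}): 4}
v = lambda C: values[C]
print(bilateral_shapley(v, frozenset({1}), frozenset({2})))  # (2.0, 2.0)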

8.2.2 Weighted graph games

Deng and Papadimitriou [8] introduce a class of games called weighted graph games: they define a TU game using an undirected weighted graph G = (V, W) where V is the set of vertices and W assigns a weight to each edge. For (i, j) ∈ V², w_ij is the weight of the edge between the vertices i and j. The coalitional game (N, v) is defined as follows:

• N = V , i.e., each agent corresponds to one vertex of the graph.

• the value of a coalition C ⊆ N is the sum of the weights of the edges between pairs of members of C, i.e., v(C) = ∑_{(i,j)∈C²} w_ij.


[Figure 8.1: Example of a weighted graph with 5 agents, with vertices 1, . . . , 5 and edge weights w12, w13, w14, w15, w23, w24, w34, w45. For instance, v({1, 2, 4}) = w12 + w14 + w24.]

This representation is succinct, as we only need to provide n² values to represent the entire game. However, it is not a complete representation, as some TU games cannot be represented this way (e.g., it is not possible to represent a majority voting game). If we add some restrictions on the weights, we can guarantee further properties. For example, when all the weights are nonnegative, the game is convex, and hence it is guaranteed to have a non-empty core.

8.2.1. PROPOSITION. Let (V, W) be a weighted graph game. If all the weights are nonnegative, then the game is convex.

Proof. Let S, T ⊆ N; we want to prove that v(S) + v(T) ≤ v(S ∪ T) + v(S ∩ T).

v(S) + v(T) = ∑_{(i,j)∈S²} w_ij + ∑_{(i,j)∈T²} w_ij = ∑_{(i,j)∈S²∪T²} w_ij + ∑_{(i,j)∈(S∩T)²} w_ij
            ≤ ∑_{(i,j)∈(S∪T)²} w_ij + ∑_{(i,j)∈(S∩T)²} w_ij = v(S ∪ T) + v(S ∩ T),

where the inequality holds because the weights are nonnegative and S² ∪ T² ⊆ (S ∪ T)². □

Another nice property of this representation is that the Shapley value can be computed in quadratic time, as shown in the following theorem.

8.2.2. THEOREM. Let (V, W) be a weighted graph game. The Shapley value of an agent i is given by Φ_i(N, v) = ½ ∑_{j∈N, j≠i} w_ij.

One simple proof of this theorem uses the axioms that define the Shapley value.

Proof. Let (V, W) be a weighted graph game. We can view this game as the sum of |W| games: each game G_ij is associated with an edge (i, j) of the graph, where G_ij is the weighted graph game whose only non-zero edge weight is w_ij.

For a game Gij corresponding to edge (i, j):

• the agents i and j are substitutes (this is clear).


• all other agents k ≠ i, j are dummy agents (this is also clear).

Using the symmetry axiom, we know that Sh_i(G_ij) = Sh_j(G_ij). Using the dummy axiom, we also know that Sh_k(G_ij) = 0 for k ∉ {i, j}. Together with efficiency, this tells us that Sh_i(G_ij) = ½ w_ij.

Since (V, W) is the sum of all the games G_ij, by the additivity axiom we obtain Sh_k = ∑_{i,j} Sh_k(G_ij) = ½ ∑_{j≠k} w_kj. □

8.2.3 Multi-issue representation

Conitzer and Sandholm [7] analyse the case where the agents are concerned with multiple independent issues that a coalition can address. For example, performing a task may require multiple abilities, and a coalition may gather agents that work on the same task but with limited or no interactions between them. A characteristic function v can be decomposed over T issues when it is of the form v(C) = ∑_{t=1}^{T} v_t(C), in which, for each t, (N, v_t) is a TU game.

8.2.3. DEFINITION. [Decomposition] The vector of characteristic functions 〈v1, v2, . . . , vT〉, with each v_t : 2^N → R, is a decomposition over T issues of the characteristic function v : 2^N → R if for any S ⊆ N, v(S) = ∑_{t=1}^{T} v_t(S).

Using this idea, we can represent any TU game (we can express a TU game using a single issue).

The Shapley value of agent i for the characteristic function v is the sum of its Shapley values over the T different issues: Φ_i(N, v) = ∑_{t=1}^{T} Φ_i(N, v_t). When a small number of agents is concerned by an issue, computing the Shapley value for that particular issue can be cheap. For an issue t, the characteristic function v_t concerns only the agents in I_t when for all coalitions C1, C2 ⊆ N such that I_t ∩ C1 = I_t ∩ C2, we have v_t(C1) = v_t(C2). When the characteristic function v is decomposed over T issues and when |I_t| agents are concerned by each issue t ∈ [1 . . . T], computing the Shapley value takes O(∑_{t=1}^{T} 2^{|I_t|}).

8.2.4 Marginal Contribution Networks (MC-nets)

Ieong and Shoham propose a representation in which the characteristic function is represented by a set of “rules” [11]. A rule is composed of a pattern and a value: the pattern tells which agents must be present in or absent from a coalition so that the value of the coalition is increased by the value of the rule. This representation allows any TU game to be represented.

More formally, each player is represented by a boolean variable and the characteristic vector of a coalition is treated as a truth assignment. Each “rule” associates a pattern φ with a weight w ∈ R. The pattern φ is a formula of propositional logic containing variables in N. A positive literal represents the presence of an agent in a


coalition, whereas a negative literal represents the absence of an agent from the coalition. The value of a coalition is the sum of the values of all the rules that apply to the coalition.

8.2.4. DEFINITION. [Rule] Let N be a collection of atomic variables. A rule has a syntactic form (φ, w) where φ, called the pattern, is a boolean formula containing variables in N, and w, called the weight, is a real number.

Let us consider that there are two variables a and b and here are two rules:

• (a ∧ b, 5): each coalition containing both agents a and b increases its value by 5 units.

• (b, 2): each coalition containing b increases its value by 2.

8.2.5. DEFINITION. [Marginal contribution nets (MC-net)] An MC-net consists of a set of rules {(p1, w1), . . . , (pk, wk)}, where the valuation function is given by

v(C) = ∑_{i=1}^{k} p_i(e_C) w_i,

where p_i(e_C) evaluates to 1 if the boolean formula p_i evaluates to true for the truth assignment e_C, and 0 otherwise.

The valuation function of the MC-net {(a ∧ b, 5), (b, 2)} is the following one:

v(∅) = 0        v({b}) = 2
v({a}) = 0      v({a, b}) = 5 + 2 = 7

We can use negations in rules, and negative weights. Let us consider the following MC-net: {(a ∧ b, 5), (b, 2), (c, 4), (b ∧ ¬c, −2)}. In this case, the valuation function is

v(∅) = 0        v({b}) = 2 − 2 = 0             v({a, c}) = 4
v({a}) = 0      v({a, b}) = 5 + 2 − 2 = 5      v({b, c}) = 4 + 2 = 6

When negative literals are allowed or when the weights can be negative, MC-nets can represent any TU game, hence this representation is complete. When the patterns are limited to conjunctive formulas over positive literals and the weights are nonnegative, MC-nets can represent all and only convex games (in which case, the games are guaranteed to have a non-empty core).

Using this representation and assuming that the patterns are limited to conjunctions of variables, the Shapley value can be computed in time linear in the size of the input (i.e., the number of rules of the MC-net).

8.2.6. THEOREM. Given a TU game represented by an MC-net limited to conjunctive patterns, the Shapley value can be computed in time linear in the size of the input.

Proof. (sketch) We can treat each rule as a game, compute the Shapley value for the rule, and use ADD to sum all the values to obtain the value for the overall game. Within a rule, we cannot distinguish the contributions of the agents, so by SYM they must have the same value. It is a bit more complicated when negation occurs (see Ieong and Shoham, 2005). □
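For the special case where every pattern is a conjunction of positive literals, the proof sketch amounts to splitting the weight of each rule equally among the agents it mentions. The following Python sketch illustrates that special case only (it is not the general algorithm of Ieong and Shoham, which also handles negation); it uses the first MC-net example above.

def mcnet_shapley(rules, agents):
    """Shapley value of an MC-net whose patterns are conjunctions of
    positive literals only. Each rule (pattern, w) is treated as a separate
    game: its weight is split equally among the agents in the pattern (SYM),
    and the per-rule values are summed (ADD)."""
    shapley = {i: 0.0 for i in agents}
    for pattern, w in rules:
        for i in pattern:
            shapley[i] += w / len(pattern)
    return shapley

# The first example above: {(a ∧ b, 5), (b, 2)}.
rules = [({"a", "b"}, 5.0), ({"b"}, 2.0)]
print(mcnet_shapley(rules, ["a", "b"]))  # {'a': 2.5, 'b': 4.5}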


8.3 Some references for simple games

The computational complexity of voting and weighted voting games has been studied in [8, 9]. For example, the problem of determining whether the core is empty is polynomial. The argument for this result is the following theorem: the core of a weighted voting game is non-empty iff there exists a veto player. When the core is non-empty, the problem of computing the nucleolus is also polynomial; otherwise, it is an NP-hard problem.

8.4 Some interesting classes of games from the computational point of view

We briefly introduce some classes of games that have been studied in the AI literature. Some of these classes of games can be represented more compactly than by using 2^n values, one for each coalition, by exploiting an underlying graph structure. In some restricted cases, some solution concepts can be computed efficiently.

Minimum cost spanning tree games. A game is a tuple 〈V, s, w〉 where 〈V, w〉 is as in a graph game and s ∈ V is the source node. For a coalition C, we denote by Γ(C) a minimum cost spanning tree over the set of vertices C ∪ {s}. The value (cost) of a coalition C ⊆ V \ {s} is given by ∑_{(i,j)∈Γ(C)} w_{i,j}.

This class of games can model the problem of connecting some agents to a central node, played by the source node s. Computing the nucleolus or checking whether the core is non-empty can be done in polynomial time.

Network flow games. A flow network 〈V, E, c, s, t〉 is composed of a directed graph (V, E) with a capacity on each edge, c : E → R+, a source vertex s and a sink vertex t. A network flow is a function f : E → R+ that respects the capacity of each edge (∀(i, j) ∈ E, f(i, j) ≤ c(i, j)) and that is conserved at every vertex except the source and the sink, i.e., the total flow arriving at a vertex is equal to the total flow leaving that vertex (∀j ∈ V \ {s, t}, ∑_{(i,j)∈E} f(i, j) = ∑_{(j,k)∈E} f(j, k)). The value of the flow is the amount flowing into the sink node.

In a network flow game [12] on 〈V, E, c, s, t〉, the value of a coalition C ⊆ N is the maximum value of the flow going through the restricted flow network 〈C, E, c, s, t〉.

This class of games can model situations where some cities share a supply of water or an electricity network. [12] proved that a network flow game is balanced, hence it has a non-empty core. [3] study a threshold version of the game and the complexity of computing power indices.

Affinity games. The class of affinity games is a class of hedonic games introduced in [5, 6]. An affinity game is defined using a directed weighted graph 〈V, E, w〉 where V is the set of agents, E is the set of directed edges and w : E → R gives the weight of each edge. w(i, j) is the value of agent i when it is associated with agent j. The value of agent i for a coalition C is v_i(C) = ∑_{j∈C} w(i, j).


Some special classes of affinity games have a non-empty core (e.g., when the weights are all positive or all negative). In these games, there may be a trade-off between stability and efficiency (in the sense of maximizing social welfare), as the ratio between an optimal CS and a stable CS may be infinite.

Skill games. This class of games, introduced by [4], is represented by a tuple 〈N, S, T, u〉 where N is the set of agents, S is the set of skills, T is the set of tasks, and u : 2^T → R provides a value for each set of tasks that is completed. Each agent i has a set of skills S(i) ⊆ S, and each task t requires a set of skills S(t) ⊆ S. A coalition C can perform a task t when each skill needed for the task is possessed by at least one member of C (i.e., ∀s ∈ S(t), ∃i ∈ C such that s ∈ S(i)). The value of a coalition C is u(T_C), where T_C is the set of tasks that can be performed by C.

This representation is exponential in the number of tasks, but variants of the representation lead to polynomial representations, for example when the value of a coalition is the number of tasks it can accomplish, or when each task has a weight and the value of a coalition is the sum of the weights of the accomplished tasks. In general, computing the solution concepts with these polynomial representations is hard. However, in some special cases, checking whether the core is empty or computing an element of the core can be performed in polynomial time. The problem of finding an optimal CS is studied in [2].

Some more papers study the computational complexity of subclasses of games, e.g., [1, 10] to name a few. We do not provide a full account of complexity problems in cooperative games, as this could be the topic of half a course.


Bibliography

[1] Haris Aziz, Felix Brandt, and Paul Harrenstein. Monotone cooperative games and their threshold versions. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '10), pages 1107–1114, Richland, SC, 2010. International Foundation for Autonomous Agents and Multiagent Systems.

[2] Yoram Bachrach, Reshef Meir, Kyomin Jung, and Pushmeet Kohli. Coalitional structure generation in skill games. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), pages 703–708, July 2010.

[3] Yoram Bachrach and Jeffrey Rosenschein. Power in threshold network flow games. Autonomous Agents and Multi-Agent Systems, 18:106–132, 2009. 10.1007/s10458-008-9057-6.

[4] Yoram Bachrach and Jeffrey S. Rosenschein. Coalitional skill games. In Proc. of the 7th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS-08), pages 1023–1030, May 2008.

[5] Simina Brânzei and Kate Larson. Coalitional affinity games. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-09), pages 1319–1320, May 2009.

[6] Simina Brânzei and Kate Larson. Coalitional affinity games. In Proc. of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2009), pages 79–84, July 2009.

[7] Vincent Conitzer and Tuomas Sandholm. Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proceedings of the 19th National Conference on Artificial Intelligence (AAAI-04), pages 219–225, 2004.

[8] Xiaotie Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Mathematics of Operations Research, 19(2):257–266, 1994.


[9] Edith Elkind, Leslie Ann Goldberg, Paul Goldberg, and Michael Wooldridge. Computational complexity of weighted threshold games. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI-07), pages 718–723, 2007.

[10] Gianluigi Greco, Enrico Malizia, Luigi Palopoli, and Francesco Scarcello. On the complexity of compact coalitional games. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 147–152, San Francisco, CA, USA, 2009. Morgan Kaufmann Publishers Inc.

[11] Samuel Ieong and Yoav Shoham. Marginal contribution nets: a compact representation scheme for coalitional games. In EC '05: Proceedings of the 6th ACM Conference on Electronic Commerce, pages 193–202, New York, NY, USA, 2005. ACM Press.

[12] E. Kalai and E. Zemel. Totally balanced games and games of flow. Mathematics of Operations Research, 7:476–478, 1982.

[13] Steven P. Ketchpel. The formation of coalitions among self-interested agents. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 414–419, August 1994.

[14] Matthias Klusch and Onn Shehory. Coalition formation among rational information agents. In Rudy van Hoe, editor, Seventh European Workshop on Modelling Autonomous Agents in a Multi-Agent World, Eindhoven, The Netherlands, 1996.

[15] Makoto Yokoo, Vincent Conitzer, Tuomas Sandholm, Naoki Ohta, and Atsushi Iwasaki. Coalitional games in open anonymous environments. In Proceedings of the Twentieth National Conference on Artificial Intelligence, pages 509–515. AAAI Press / The MIT Press, 2005.


Lecture 9
Non-Transferable Utility Games (NTU games)

An underlying assumption behind a TU game is that agents have a common scale to measure the worth of a coalition. Such a scale may not exist in every situation, which leads to the study of games where the utility is non-transferable (NTU games). We start by introducing a particular type of NTU games called hedonic games; we choose this type of games for the simplicity of the formalism. Next, we provide the classical definition of an NTU game, which can represent many more situations.

9.1 Hedonic Games

In a hedonic game, agents have preferences over coalitions: each agent knows whether it prefers to be in the company of some agents rather than others. An agent may enjoy the company of the members of C1 more than that of the members of C2, but it cannot tell by how much it prefers C1 over C2. Consequently, it does not make sense to talk about any kind of compensation when an agent is not part of its favorite coalition. The question that each agent must answer is “which coalition to form?”.

More formally, let N be a set of agents and N_i be the set of coalitions that contain agent i, i.e., N_i = {C ∪ {i} | C ⊆ N \ {i}}. For a CS S, we will denote by S(i) the coalition in S containing agent i.

9.1.1. DEFINITION. [Hedonic games] A hedonic game is a tuple (N, (≽_i)_{i∈N}) where

• N is the set of agents

• ≽_i ⊆ N_i × N_i is a complete, reflexive and transitive preference relation for agent i, with the interpretation that if S ≽_i T, agent i likes coalition T at most as much as coalition S.

In a hedonic game, each agent has a preference over each coalition it can join. The solution of a hedonic game is a coalition structure (CS), i.e., a partition of the set of agents into coalitions. A first desirable property of a solution is to be Pareto optimal: it should not be possible to find a different solution that is weakly preferred by all agents.

The notion of core can easily be extended to this type of games. Given a current CS, no group of agents should have an incentive to leave the current CS. As is the case for TU games, the core of an NTU game may be empty, and it is possible to define weaker versions of stability. We now give the definitions of stability concepts adapted from [2]. For the core, a deviation is a group of agents that leave their corresponding coalitions to form a new one they all prefer. In the weaker stability concepts, the possible deviations feature a single agent i that leaves its current coalition C1 ∪ {i} to join a different existing coalition C2 of the CS or to form the singleton coalition {i}. There are a few scenarios one can consider, depending on the behavior of the members of C1 and C2. For Nash stability, the behavior of C1 and C2 is not considered at all: if i prefers to join C2, it is a valid deviation. This assumes that the agents in C2 will accept agent i, which is quite optimistic (it may very well be the case that some or all agents in C2 do not like agent i). For individual stability, the deviation is valid if no agent in C2 is against accepting agent i; in other words, the agents in C2 are happy or indifferent about i joining them. Finally, for contractual individual stability, the preferences of the members of C1 – the coalition that i leaves – are also taken into account: the agents in C1 should prefer to be without i than with i. The three stability concepts have the following inclusion: Nash stability is included in individual stability, which is included in contractual individual stability¹. We now provide the corresponding formal definitions.

Core stability: a CS S is core-stable iff there is no coalition C ⊆ N such that ∀i ∈ C, C ≻_i S(i).

Nash stability: a CS S is Nash-stable iff (∀i ∈ N) (∀C ∈ S ∪ {∅}), S(i) ≽_i C ∪ {i}. No player would like to join any other coalition of S, assuming the other coalitions do not change.

Individual stability: a CS S is individually stable iff there is no i ∈ N and no C ∈ S ∪ {∅} such that (C ∪ {i} ≻_i S(i)) and (∀j ∈ C, C ∪ {i} ≽_j C). No player can move to another coalition that it prefers without making some members of that coalition unhappy.

Contractual individual stability: a CS S is contractually individually stable iff there is no i ∈ N and no C ∈ S ∪ {∅} such that (C ∪ {i} ≻_i S(i)) and (∀j ∈ C, C ∪ {i} ≽_j C) and (∀j ∈ S(i) \ {i}, S(i) \ {i} ≽_j S(i)). No player can move to a coalition it prefers in such a way that the members of the coalition it leaves and of the one it joins are (weakly) better off.
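As an illustration of these definitions, here is a minimal Python sketch (added here, not from the text) that checks Nash stability of a coalition structure, assuming preferences are given as explicit rankings of the coalitions in N_i; it is applied to the CS {{1, 2}, {3}} of Example 1 presented next.

def is_nash_stable(cs, prefs):
    """`cs` is a list of disjoint sets of agents (a coalition structure);
    `prefs[i]` is agent i's ranking of the coalitions containing i,
    from most to least preferred (as a list of frozensets).
    Nash stability: no agent prefers joining another coalition of the CS
    (or the empty coalition, i.e. being alone) to staying where it is."""
    def rank(i, coalition):
        return prefs[i].index(frozenset(coalition))

    for i in prefs:
        current = next(frozenset(C) for C in cs if i in C)
        options = [frozenset(C) | {i} for C in cs if i not in C] + [frozenset({i})]
        if any(rank(i, C) < rank(i, current) for C in options):
            return False
    return True

# Example 1 below: {{1, 2}, {3}} is not Nash stable, because agent 3
# prefers {1, 2, 3} to {3}.
prefs = {
    1: [frozenset({1, 2}), frozenset({1}), frozenset({1, 2, 3}), frozenset({1, 3})],
    2: [frozenset({1, 2}), frozenset({2}), frozenset({1, 2, 3}), frozenset({2, 3})],
    3: [frozenset({1, 2, 3}), frozenset({2, 3}), frozenset({1, 3}), frozenset({3})],
}
print(is_nash_stable([{1, 2}, {3}], prefs))  # False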

Let us see some examples.

¹ This ordering may appear counter-intuitive at first, but note that the conditions for being a valid deviation become harder to meet as we go from Nash stability to contractual individual stability, and the stability concepts are defined as the CSs for which no such deviation exists.


Example 1

{1, 2} ≻_1 {1} ≻_1 {1, 2, 3} ≻_1 {1, 3}
{1, 2} ≻_2 {2} ≻_2 {1, 2, 3} ≻_2 {2, 3}
{1, 2, 3} ≻_3 {2, 3} ≻_3 {1, 3} ≻_3 {3}

Let us consider each CS one by one.

{{1}, {2}, {3}}: {1, 2} is preferred by both agents 1 and 2, hence the CS is not Nash stable.

{{1, 2}, {3}}: {1, 2, 3} is preferred by agent 3, so the CS is not Nash stable; as agents 1 and 2 are worse off, this is not a possible move for individual stability, and no other move is possible for individual stability.

{{1, 3}, {2}}: agent 1 prefers to be on its own (hence the CS is not Nash stable, and therefore not individually stable either).

{{2, 3}, {1}}: agent 2 prefers to join agent 1, and agent 1 is better off, hence the CS is neither Nash stable nor individually stable.

{{1, 2, 3}}: agents 1 and 2 have an incentive to form a singleton, hence the CS is neither Nash stable nor individually stable.

As a conclusion:

• {{1, 2}, {3}} is in the core and is individually stable.

• There is no Nash stable partition.

Example 2

{1, 2} ≻_1 {1, 3} ≻_1 {1, 2, 3} ≻_1 {1}
{2, 3} ≻_2 {1, 2} ≻_2 {1, 2, 3} ≻_2 {2}
{1, 3} ≻_3 {2, 3} ≻_3 {1, 2, 3} ≻_3 {3}

Again, let us consider all the CSs.

{{1}, {2}, {3}}: {1, 2}, {1, 3}, {2, 3} and {1, 2, 3} are blocking. As a result, the CS is not Nash stable.

{{1, 2}, {3}}: {2, 3} is blocking, and since 2 can leave 1 to form {2, 3}, it follows that the CS is not Nash stable.

{{1, 3}, {2}}: {1, 2} is blocking, and since 1 can leave 3 to form {1, 2}, it follows that the CS is not Nash stable.

{{2, 3}, {1}}: {1, 3} is blocking, and since 3 can leave 2 to form {1, 3}, it follows that the CS is not Nash stable.

{{1, 2, 3}}: {1, 2}, {1, 3}, {2, 3} are blocking, but the CS is contractually individually stable.

As a conclusion, we obtain that

• The core is empty.

Page 89: Lectures in Cooperative Game Theory

96 Lecture 9. Non-Transferable Utility Games (NTU games)

• {{1, 2, 3}} is the unique Nash stable partition and the unique individually stable partition (no agent has any incentive to leave the grand coalition).

Example 3

{1, 2} ≻_1 {1, 3} ≻_1 {1} ≻_1 {1, 2, 3}
{2, 3} ≻_2 {1, 2} ≻_2 {2} ≻_2 {1, 2, 3}
{1, 3} ≻_3 {2, 3} ≻_3 {3} ≻_3 {1, 2, 3}

For this game we can show that

• The core is empty (similar argument as for example 2).

• There is no Nash stable or individually stable partition. But there are three contractually individually stable CSs: {{1, 2}, {3}}, {{1, 3}, {2}}, {{2, 3}, {1}}.

For example, we can consider all the possible changes for the CS {{1, 2}, {3}}:

• agent 2 joins agent 3:

In this case, both agents 2 and 3 benefit from forming {{1}, {2, 3}}, hence {{1, 2}, {3}} is neither Nash stable nor individually stable. As agent 1 is worse off, it is not a valid deviation for contractual individual stability.

• agent 1 joins agent 3, but agent 1 has no incentive to join agent 3, hence this isnot a deviation.

• Agent 1 or 2 forms a singleton coalition, but neither agent has any incentive toform a singleton coalition. Hence, there is no deviation.

In Examples 2 and 3, we see that the core may be empty. The literature in game theory focuses on finding conditions for the existence of the core. In the AI literature, Elkind and Wooldridge have proposed a succinct representation of hedonic games [4], and Brânzei and Larson considered a subclass of hedonic games called affinity games [3].

9.2 NTU games

We now turn to the most general definition of an NTU game. This definition is more general than the definition of hedonic games. The idea is that each coalition has the ability to achieve a set of outcomes, and the preferences of the agents are over the outcomes that can be brought about by the different coalitions. The formal definition is the following:

9.2.1. DEFINITION. [NTU Game] A non-transferable utility game (NTU game) is defined by a tuple (N, X, V, (≽_i)_{i∈N}) such that


• N is a set of agents;

• X is a set of outcomes;

• V : 2^N → 2^X is a function that describes the outcomes V(C) ⊆ X that can be brought about by coalition C;

• ≽_i is the preference relation of agent i over the set of outcomes. The relation is assumed to be transitive and complete.

Intuitively, V(C) is the set of outcomes that the members of C can bring about by means of their joint actions. The agents have a preference relation over the outcomes, which makes a lot of sense. This type of game is more general than the class of hedonic games or even TU games, as we can represent those games using an NTU game.

9.2.2. PROPOSITION. A hedonic game can be represented by an NTU game.

Proof. Let (N, (≽^H_i)_{i∈N}) be a hedonic game.

• For each coalition C ⊆ N , create a unique outcome xC .

• For any two outcomes x_S and x_T corresponding to coalitions S and T that contain agent i, we define ≽_i as follows: x_S ≽_i x_T iff S ≽^H_i T.

• For each coalition C ⊆ N , we define V (C) as V (C) = {xC}.

9.2.3. PROPOSITION. A TU game can be represented as an NTU game.

Proof. Let (N, v) be a TU game.

• We define X to be the set of all allocations, i.e., X = Rn.

• For any two allocations (x, y) ∈ X2, we define ≽i as follows: x ≽i y iff xi ≥ yi.

• For each coalition C ⊆ N , we define V (C) = {x ∈ Rn | ∑i∈C xi ≤ v(C)}: V (C) lists all the feasible allocations for the coalition C (a small membership test is sketched below).
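As a tiny illustration (ours, not from the notes), the membership test of V (C) in this representation can be written directly from the definition; the three-agent values below are made up, and all names are hypothetical.

    def tu_to_ntu_V(v):
        # Returns the membership test of V(C) for the NTU representation of a TU game:
        # V(C) = { x in R^n : the sum of x_i over i in C is at most v(C) }.
        def V(C, x):
            return sum(x[i] for i in C) <= v[frozenset(C)]
        return V

    v = {frozenset(c): val for c, val in {
        (1,): 0, (2,): 0, (3,): 0, (1, 2): 90, (1, 3): 80, (2, 3): 70, (1, 2, 3): 120}.items()}
    V = tu_to_ntu_V(v)
    print(V({1, 2}, {1: 50, 2: 40, 3: 0}))   # True:  50 + 40 <= 90
    print(V({1, 2}, {1: 60, 2: 40, 3: 0}))   # False: 60 + 40 >  90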

First, we can note that the definition of the core can easily be modified in the case of NTU games.

9.2.4. DEFINITION. core(V ) = {x ∈ V (N) | ∄C ⊂ N, ∄y ∈ V (C), ∀i ∈ C, y ≻i x}


An outcome x ∈ X is blocked by a coalition C when there is another outcome y ∈ V (C), i.e., an outcome that C can bring about, that all the members of C strictly prefer to x. An outcome is then in the core when it can be achieved by the grand coalition and is not blocked by any coalition. As is the case for TU games, the core of an NTU game may be empty.
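When the outcome set X is finite, core membership can be checked by brute force. The sketch below (ours, not from the notes) represents V as a dictionary from coalitions to sets of outcomes and each preference as a ranking; it treats any non-empty coalition, including N itself, as a potential blocking coalition, and all names and the tiny example are hypothetical.

    from itertools import chain, combinations

    def all_coalitions(agents):
        # All non-empty subsets of the agent set.
        return chain.from_iterable(combinations(agents, r) for r in range(1, len(agents) + 1))

    def in_core(x, agents, V, rank):
        # rank[i][o] is the position of outcome o in agent i's ranking (lower = better).
        if x not in V[frozenset(agents)]:
            return False  # x must be achievable by the grand coalition
        for C in map(frozenset, all_coalitions(agents)):
            for y in V.get(C, ()):
                if all(rank[i][y] < rank[i][x] for i in C):
                    return False  # coalition C blocks x with y
        return True

    # A tiny made-up 2-agent example with three outcomes a, b, c.
    agents = [1, 2]
    V = {frozenset({1}): {"a"}, frozenset({2}): {"a"}, frozenset({1, 2}): {"a", "b", "c"}}
    rank = {1: {"b": 0, "c": 1, "a": 2}, 2: {"c": 0, "b": 1, "a": 2}}
    print([o for o in V[frozenset(agents)] if in_core(o, agents, V, rank)])  # b and c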

As for TU games, we can define a balanced game and show that the core of a balanced game is non-empty.

9.2.5. DEFINITION. [Balanced game] A game is balanced iff for every balanced collection B, we have

⋂C∈B V (C) ⊂ V (N).

9.2.6. THEOREM (THE SCARF THEOREM). The core of a balanced game is non-empty.

9.2.1 An application: Exchange Economy

For TU games, we studied market games and proved that such games have a non-empty core. We now consider a similar game without transfer of utility. There is a set of continuous goods that can be exchanged between the agents. Each agent has a preference relation over bundles of goods and tries to obtain the best bundle possible.

Definition of the game

The main difference between the exchange economy and a market game is that the preference is ordinal in the exchange economy whereas it is cardinal in a market game.

9.2.7. DEFINITION. An exchange economy is a tuple (N, M, A, (≽i)i∈N) where

• N is the set of n agents

• M is the set of k continuous goods

• A = (ai)i∈N is the initial endowment vector

• (≽i)i∈N is the preference profile, in which ≽i is a preference relation over bundles of goods.

Given an exchange economy (N, M, A, (≽i)i∈N), we define the associated exchange economy game as the following NTU game (N, X, V, (≽i)i∈N) where:

• The set of outcomes is X = {(x1, . . . , xn) | xi ∈ Rk+ for i ∈ N}. Note that xi = ⟨xi1, . . . , xik⟩ represents the quantity of each good that agent i possesses in an outcome x.

• The preference relation for an agent i is defined as: for (x, y) ∈ X2, x ≽i y ⇔ xi ≽i yi. Each agent is concerned with its own bundle only.


• The value sets are defined, for all C ⊆ N , as

V (C) = {x ∈ X | ∑i∈C xi = ∑i∈C ai ∧ xj = aj for j ∈ N \ C}.

The players outside C do not participate in any trading and hold on to their initial endowments. When all agents participate in the trading, we have V (N) = {x ∈ X | ∑i∈N xi = ∑i∈N ai}.

Solving an exchange economy

Let us assume that we can define a price pr for a unit of good r. The idea would be to exchange the goods at a constant price during the negotiation.

Let us define a price vector p ∈ Rk+. The amount of each good that agent i possesses is xi ∈ Rk+. The total cost of agent i’s bundle is p · xi = ∑r=1..k pr xi,r. Since the initial endowment of agent i is ai, the agent has at his disposal an amount p · ai, and i can afford to obtain a bundle yi such that p · yi ≤ p · ai.

We can wonder what an ideal situation would be. Given the existence of the price vector, we can define a competitive equilibrium: the idea is that each agent ends up with a bundle it likes best among all the bundles it can afford.

9.2.8. DEFINITION. [Competitive equilibrium] A competitive equilibrium of an exchange economy (N, M, A, (≽i)i∈N) is a pair (p, x) where p ∈ Rk+ is a price vector and x ∈ {(x1, . . . , xn) | xi ∈ Rk+ for i ∈ N} such that

• ∑i∈N xi = ∑i∈N ai (the allocation results from trading);

• ∀i ∈ N , p · xi ≤ p · ai (each agent can afford its allocation);

• ∀i ∈ N , ∀yi ∈ Rk+, (p · yi ≤ p · ai) ⇒ xi ≽i yi.

Among all the bundles that an agent can afford, it obtains one of its most preferred ones.
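As a small numerical illustration (ours, not from the notes), take two agents and two goods with Cobb-Douglas utilities u_i(y) = (y1 y2)^(1/2), one cardinal representation of continuous, strictly convex preferences, and endowments a1 = (1, 0), a2 = (0, 1). The pair p = (1, 1), x1 = x2 = (1/2, 1/2) is a competitive equilibrium; the Python sketch below checks the three conditions of the definition, the third one on a grid of affordable bundles.

    # Two agents, two goods; the (hypothetical) Cobb-Douglas utility represents the preferences.
    def u(bundle):
        return (bundle[0] * bundle[1]) ** 0.5

    def dot(p, y):
        return sum(pr * yr for pr, yr in zip(p, y))

    a = {1: (1.0, 0.0), 2: (0.0, 1.0)}          # initial endowments
    p = (1.0, 1.0)                               # candidate price vector
    x = {1: (0.5, 0.5), 2: (0.5, 0.5)}           # candidate allocation
    eps = 1e-9

    # (1) the allocation results from trading: total demand equals total endowment
    assert all(abs(x[1][r] + x[2][r] - a[1][r] - a[2][r]) < eps for r in range(2))
    # (2) each agent can afford its bundle
    assert all(dot(p, x[i]) <= dot(p, a[i]) + eps for i in (1, 2))
    # (3) no agent can afford a strictly better bundle (checked on a grid of bundles)
    grid = [g / 100 for g in range(101)]
    for i in (1, 2):
        budget = dot(p, a[i])
        for y1 in grid:
            for y2 in grid:
                if dot(p, (y1, y2)) <= budget + eps:
                    assert u((y1, y2)) <= u(x[i]) + eps
    print("(p, x) satisfies the three equilibrium conditions on the grid")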

This competitive equilibrium seems like an ideal situation and, surprisingly, Arrow and Debreu [1] proved that such an equilibrium is guaranteed to exist. This is a deep theorem, and we will not study the proof here.

9.2.9. THEOREM (ARROW & DEBREU, 1954). Let (N, M, A, (≽i)i∈N) be an exchange economy. If each preference relation ≽i is continuous and strictly convex, then a competitive equilibrium exists.

The proof of the theorem is not constructive, i.e., it guarantees the existence of the equilibrium, but it does not say how to obtain the price vector or the allocation. The following theorem links equilibrium allocations with the core:


9.2.10. THEOREM. If (p, x) is a competitive equilibrium of an exchange economy, then x belongs to the core of the corresponding exchange economy game.

Proof. Let us assume x is not in the core of the associated exchange economy game. Then, there is at least one coalition C and an allocation y ∈ V (C) such that ∀i ∈ C, y ≻i x, i.e., yi ≻i xi. By definition of the competitive equilibrium, xi is one of i’s most preferred affordable bundles, so yi cannot be affordable: we must have p · yi > p · ai for every i ∈ C. Summing over all the agents in C, we obtain p · ∑i∈C yi > p · ∑i∈C ai. But since y ∈ V (C), we have ∑i∈C yi = ∑i∈C ai, hence p · ∑i∈C yi = p · ∑i∈C ai, which is a contradiction. □

It then follows that if each preference relation is continuous and strictly convex, then the core of an exchange economy game is non-empty. In an economy, competitive equilibrium allocations are thus outcomes that are immune to manipulations by groups of agents.


Bibliography

[1] Kenneth J. Arrow and Gerard Debreu. Existence of an equilibrium for a competitive economy. Econometrica, 22:265–290, July 1954.

[2] Anna Bogomolnaia and Matthew O. Jackson. The stability of hedonic coalition structures. Games and Economic Behavior, 38(2):201–230, 2002.

[3] Simina Brânzei and Kate Larson. Coalitional affinity games. In Proc. of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2009), pages 79–84, July 2009.

[4] Edith Elkind and Michael Wooldridge. Hedonic coalition nets. In Proc. of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS-09), pages 417–424, May 2009.


Lecture 10
Outside of the traditional games

10.1 Games with a priori unions – a different interpretation of a coalition structure

So far, a coalition has represented a set of agents that works on its own: in a CS, the different coalitions are intended to work independently of each other. We can also interpret a coalition as a group of agents that is more likely to work together within a larger group of agents (because of personal or political affinities). The members of a coalition do not mind working with other agents, but they want to stay together and negotiate their payoff together, which may improve their bargaining power. This is the idea used in games with a priori unions. Formally, a game with a priori unions is similar to a game with a CS: it consists of a triplet (N, v, S) where (N, v) is a TU game and S is a CS. However, we assume that the grand coalition forms. The problem is again to define a payoff distribution.

10.1.1. DEFINITION. [Game with a priori unions] A game with a priori unions is a triplet (N, v, S), where (N, v) is a TU game, and S is a particular CS. It is assumed that the grand coalition forms.

Owen [8] proposes a value that is based on the idea of the Shapley value. The agents form the grand coalition by joining it one by one. In the Shapley value, all possible joining orders are allowed. In the Owen value, an agent i may join only when the last agent that joined is a member of i’s coalition, or when the last agents (j1, . . . , jk) that joined before form a coalition of S. This is formally captured using the notion of consistency with a CS:

10.1.2. DEFINITION. [Consistency with a coalition structure] A permutation π is consistent with a CS S when, for all (i, j) ∈ C2 with C ∈ S and all l ∈ N , π(i) < π(l) < π(j) implies that l ∈ C.


We denote by ΠS(N) the set of permutations of N that are consistent with the CS S. The number of such permutations is m! ∏C∈S |C|!, where m is the number of coalitions in S. The Owen value is then defined as follows:

10.1.3. DEFINITION. [Owen value] Given a game with a priori unions (N, v, S), the Owen value Oi(N, v, S) of agent i is given by

Oi(N, v, S) = (1/|ΠS(N)|) ∑π∈ΠS(N) mci(π),

where mci(π) denotes the marginal contribution of agent i for the joining order π.

In Table 10.1, we present the example used for the Shapley value and compute the Owen value (a small script reproducing the computation is sketched after the table). The members of the two-agent coalition improve their payoff by forming a union.

N = {1, 2, 3}
v({1}) = 0, v({2}) = 0, v({3}) = 0
v({1, 2}) = 90, v({1, 3}) = 80, v({2, 3}) = 70
v({1, 2, 3}) = 120

S1 = {{1, 2}, {3}}
joining order               1      2      3
1 ← 2 ← 3                   0     90     30
1 ← 3 ← 2                   –      –      –
2 ← 1 ← 3                  90      0     30
2 ← 3 ← 1                   –      –      –
3 ← 1 ← 2                  80     40      0
3 ← 2 ← 1                  50     70      0
total                     220    200     60
Owen value Oi(N, v, S1)    55     50     15

S2 = {{1, 3}, {2}}
joining order               1      2      3
1 ← 2 ← 3                   –      –      –
1 ← 3 ← 2                   0     40     80
2 ← 1 ← 3                  90      0     30
2 ← 3 ← 1                  50      0     70
3 ← 1 ← 2                  80     40      0
3 ← 2 ← 1                   –      –      –
total                     220     80    180
Owen value Oi(N, v, S2)    55     20     45

(–: the permutation is not consistent with the CS.)

Table 10.1: Example of the computation of an Owen value
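The numbers of Table 10.1 can be reproduced by enumerating the consistent permutations directly, as in the following Python sketch (ours, not part of the notes); the function names are hypothetical.

    from itertools import permutations

    # The 3-agent TU game of Table 10.1.
    N = [1, 2, 3]
    v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
         frozenset({1, 2}): 90, frozenset({1, 3}): 80, frozenset({2, 3}): 70,
         frozenset({1, 2, 3}): 120}

    def consistent(order, cs):
        # A permutation is consistent with the CS when the members of each
        # coalition appear as a contiguous block in the joining order.
        for coalition in cs:
            positions = sorted(order.index(i) for i in coalition)
            if positions[-1] - positions[0] != len(coalition) - 1:
                return False
        return True

    def owen_value(agents, v, cs):
        # Average marginal contributions over the permutations consistent with cs.
        total = {i: 0.0 for i in agents}
        count = 0
        for order in permutations(agents):
            if not consistent(order, cs):
                continue
            count += 1
            joined = set()
            for i in order:
                before = v[frozenset(joined)]
                joined.add(i)
                total[i] += v[frozenset(joined)] - before
        return {i: total[i] / count for i in agents}

    print(owen_value(N, v, [{1, 2}, {3}]))  # {1: 55.0, 2: 50.0, 3: 15.0}
    print(owen_value(N, v, [{1, 3}, {2}]))  # {1: 55.0, 2: 20.0, 3: 45.0}

Note that calling the same function with the CS of singletons, or with the grand coalition as the single union, recovers the Shapley value, since all permutations are then consistent.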

10.2 Games with externalities

A traditional assumption in the literature on coalition formation is that the value of a coalition depends solely on the members of that coalition. In particular, it is independent of non-members’ actions. In general, this may not be true: some externalities (positive or negative) can create a dependency between the value of a coalition and the actions of non-members. [10] attribute these externalities to the presence of shared resources (if a coalition uses some resources, they will not be available to other coalitions), or to conflicting goals: non-members can move the world farther from a coalition’s goal state. [9] state that a “recipe for generating characteristic functions is a minimax argument”: the value of a coalition C is the value C gets when the non-members respond optimally so as to minimise the payoff of C. This formulation acknowledges that the presence of other coalitions in the population may affect the payoff of coalition C. As in [4, 9], we can study the interactions between different coalitions in the population: decisions about joining forces or splitting a coalition can depend on the way the competitors are organised. For example, when different companies are competing for the same market niche, a small company might survive against the competition of multiple similar individual small companies. However, if some of these small companies form a viable coalition, the competition significantly changes: the other small companies may now decide to form another coalition to be able to successfully compete against the existing coalition. Another such example is a bargaining situation where agents need to negotiate over the same issues: when agents form a coalition, they can have a better bargaining position, as they have more leverage, and because the other party needs to convince all the members of the coalition. If the other parties also form coalitions, the bargaining power of the first coalition may decrease.

Two main types of games with externalities are described in the literature. Both are represented by a pair (N, v), but the valuation function has a different signature:

Games in partition function form [11]: v : 2N × Sn → R. This is an extension of the valuation function of a TU game by providing the value of a coalition given the current coalition structure (note that v(C, S) is meaningful when C ∈ S).

Games with valuations: v : N × Sn → R. In this type of game, the valuation function directly assigns a value to an agent given a coalition structure. One possible interpretation is that the problem of sharing the value of a coalition among its members has already been solved.

The definitions of superadditivity, subadditivity and monotonicity can be adapted to games in partition function form [3]. As an example, we provide the definition of superadditivity.

10.2.1. DEFINITION. [superadditive games in partition function form] A partition function v is superadditive when, for any CS S and any coalitions C1 and C2 in S, we have v(C1 ∪ C2, S \ {C1, C2} ∪ {C1 ∪ C2}) ≥ v(C1, S) + v(C2, S).

The partition function may also exhibit some regularities when two coalitions merge: the merge either always has a positive effect on the other coalitions, or always a negative one. More precisely, a partition function exhibits positive spillovers when, for any CS S and any coalitions C1 and C2 in S, we have v(C, S \ {C1, C2} ∪ {C1 ∪ C2}) ≥ v(C, S) for all coalitions C ≠ C1, C2 in S.
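These properties can be tested mechanically on small games. The Python sketch below (ours, not from the notes) enumerates all CSs and all pairwise merges to test the positive-spillover condition; the partition function given at the end is made up purely for illustration.

    from itertools import combinations

    def partitions(agents):
        # Recursively enumerate all partitions (coalition structures) of a list of agents.
        if not agents:
            yield []
            return
        first, rest = agents[0], agents[1:]
        for smaller in partitions(rest):
            for k in range(len(smaller)):
                yield smaller[:k] + [smaller[k] | {first}] + smaller[k + 1:]
            yield smaller + [{first}]

    def has_positive_spillovers(agents, v):
        # v(C, S) is given as a Python function over (frozenset, frozenset of frozensets).
        for cs in partitions(list(agents)):
            cs = [frozenset(c) for c in cs]
            for c1, c2 in combinations(cs, 2):
                merged = [c for c in cs if c not in (c1, c2)] + [c1 | c2]
                for c in cs:
                    if c in (c1, c2):
                        continue
                    if v(c, frozenset(merged)) < v(c, frozenset(cs)):
                        return False  # the merge of c1 and c2 hurts coalition c
        return True

    # A made-up partition function: a coalition's value is its size times one plus the
    # number of merges that have occurred, i.e. |C| * (n - m + 1) with m coalitions.
    def v(coalition, cs):
        return len(coalition) * (len(frozenset().union(*cs)) - len(cs) + 1)

    print(has_positive_spillovers([1, 2, 3, 4], v))  # True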

We now turn to solution concepts for such games. The issue of extending the Shapley value has a rich literature in game theory. We want the Shapley value to represent an average marginal contribution, but there is a debate over which set of coalition structures the average should be taken. Michalak et al. [5] provide references on different solutions and present three of them in more detail.


Airiau and Sen [2] consider the issue of the stability of the optimal CS and discuss a possible way to extend the kernel to partition function games. In [1], they consider coalition formation in the context of games with valuations and propose a solution for myopic agents (an agent will join a coalition only when it is beneficial, without considering long-term effects).

Michalak et al. [7] tackle the problem of representing such games and propose three different representations that depend on the interpretation of the externalities. The first representation considers the value of a coalition in a CS: the value of a coalition can be decomposed into one term that is free of externality and another term that models the sum of the uncertainty due to the formation of the other coalitions. The two other representations consider the contribution of a coalition in a CS: either by providing the mutual influence of any two coalitions in a CS (outward operational externalities) or by providing the influence of all the other coalitions on a given coalition (inward operational externalities). Michalak et al. (in [5] and [6]) extend the concept of MC-nets to games with a partition function.


Bibliography

[1] Stéphane Airiau and Sandip Sen. A fair payoff distribution for myopic rational agents. In Proceedings of the Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS-09), May 2009.

[2] Stéphane Airiau and Sandip Sen. On the stability of an optimal coalition structure. In Proceedings of the 19th European Conference on Artificial Intelligence (ECAI-2010), pages 203–208, August 2010.

[3] Francis Bloch. Non-cooperative models of coalition formation in games with spillover. In Carlo Carraro, editor, The endogenous formation of economic coalitions, chapter 2, pages 35–79. Edward Elgar, 2003.

[4] Sergiu Hart and Mordecai Kurz. Endogenous formation of coalitions. Econometrica, 51(4), July 1983.

[5] Tomasz Michalak, Talal Rahwan, Dorota Marciniak, Marcin Szamotulski, and Nicholas R. Jennings. Computational aspects of extending the Shapley value to coalitional games with externalities. In Proceedings of the 19th European Conference on Artificial Intelligence (ECAI-2010), pages 197–202, Amsterdam, The Netherlands, 2010. IOS Press.

[6] Tomasz Michalak, Talal Rahwan, Dorota Marciniak, Marcin Szamotulski, Peter McBurney, and Nicholas R. Jennings. A logic-based representation for coalitional games with externalities. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’10), pages 125–132, Richland, SC, 2010. International Foundation for Autonomous Agents and Multiagent Systems.

[7] Tomasz Michalak, Talal Rahwan, Jacek Sroka, Andrew Dowell, Michael Wooldridge, Peter McBurney, and Nicholas R. Jennings. On representing coalitional games with externalities. In Proceedings of the 10th ACM Conference on Electronic Commerce (EC’09), 2009.


[8] Guillermo Owen. Values of games with a priori unions. In R. Henn and O. Moeschlin, editors, Mathematical Economics and Game Theory: Essays in Honor of Oskar Morgenstern. Springer, New York, 1977.

[9] Debraj Ray and Rajiv Vohra. A theory of endogenous coalition structures. Games and Economic Behavior, 26:286–336, 1999.

[10] Tuomas Sandholm and Victor R. Lesser. Coalitions among computationally bounded agents. AI Journal, 94(1–2):99–137, 1997.

[11] R. M. Thrall and W. F. Lucas. N-person games in partition function form. Naval Research Logistics Quarterly, 10(1):281–298, 1963.


Lecture 11
Coalition Structure Generation problem and related issues

In the previous sections, the focus was on individual agents that are concerned with their individual payoff. In this section, we consider TU games (N, v) in which agents are concerned only about the society’s payoff: the agents’ goal is to maximise utilitarian social welfare. The actual payoff of an agent or the value of her coalition is not important in this setting; only the total value generated by the population matters. This is particularly interesting for multiagent systems designed to maximise some objective function. In the following, an optimal CS denotes a CS with maximum social welfare.

More formally, we consider a TU game (N, v), and we recall that a coalition structure (CS) s = {S1, . . . , Sm} is a partition of N , where Si is the ith coalition of agents, i ≠ j ⇒ Si ∩ Sj = ∅ and ∪i∈[1..m] Si = N . S denotes the set of all CSs. The goal of the multiagent system is to locate a CS that maximises utilitarian social welfare, in other words the problem is to find an element of argmaxs∈S ∑S∈s v(S) (a brute-force sketch is given below).
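For very small n, this argmax can be computed by brute force, as in the following Python sketch (ours, not part of the notes); the three-agent values are the ones used for the Owen value in Lecture 10, and the function names are hypothetical.

    def partitions(agents):
        # Recursively enumerate all partitions (coalition structures) of a list of agents.
        if not agents:
            yield []
            return
        first, rest = agents[0], agents[1:]
        for smaller in partitions(rest):
            # put `first` into one of the existing coalitions ...
            for k in range(len(smaller)):
                yield smaller[:k] + [smaller[k] | {first}] + smaller[k + 1:]
            # ... or into a new singleton coalition
            yield smaller + [{first}]

    def optimal_cs(agents, v):
        # Exhaustive search: only practical for small n (the number of CSs is the Bell number).
        return max(partitions(list(agents)), key=lambda cs: sum(v[frozenset(c)] for c in cs))

    v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
         frozenset({1, 2}): 90, frozenset({1, 3}): 80, frozenset({2, 3}): 70,
         frozenset({1, 2, 3}): 120}
    print(optimal_cs({1, 2, 3}, v))  # [{1, 2, 3}] with social welfare 120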

The space S of all CSs can be represented by a lattice; an example for a population of four agents is provided in Figure 11.1. The first level of the lattice consists only of the CS corresponding to the grand coalition N = {1, 2, 3, 4}, and the last level of the lattice contains the CS composed of singletons only, i.e., coalitions containing a single member. Level i contains all the CSs with exactly i coalitions. The number of CSs at level i is S(|N|, i), where S is the Stirling number of the second kind1. The Bell number B(n) represents the total number of CSs with n agents: B(n) = ∑i=0..n S(n, i). This number grows exponentially, as shown in Figure 11.2, and is O(n^n) and ω(n^(n/2)) [15]. When the number of agents is relatively large, e.g., n ≥ 20, exhaustive enumeration may not be feasible.

The actual issue is the search for an optimal CS. Sandholm et al. [15] show that, given a TU game (N, v), finding an optimal CS is an NP-complete problem. In the following, we will consider centralised search, where a single agent performs the search, as well as the more interesting case of decentralised search, where all agents search at the same time on different parts of the search space.

1S(n,m) is the number of ways of partitioning a set of n elements into m non-empty sets.


[Figure 11.1 shows the lattice of CSs for 4 agents:
Level 1: {1, 2, 3, 4}
Level 2: {1, 2, 3}{4}, {1, 2, 4}{3}, {1, 3, 4}{2}, {2, 3, 4}{1}, {1, 2}{3, 4}, {1, 3}{2, 4}, {1, 4}{2, 3}
Level 3: {1, 2}{3}{4}, {1, 3}{2}{4}, {1, 4}{2}{3}, {2, 3}{1}{4}, {2, 4}{1}{3}, {3, 4}{1}{2}
Level 4: {1}{2}{3}{4}]

Figure 11.1: Set of CSs for 4 agents.

Before doing so, we review some work for the case where the valuation function v is not known in advance. In a real application, these values need to be computed, and this may be an issue on its own if the computations are hard, as illustrated by an example in [14] where computing a value requires solving a travelling salesman problem.

11.1 Sharing the computation of the coalition values

Thus far, when we used a TU game, the valuation function was common knowledge. For a practical problem though, one needs to compute these values. We said that the value of a coalition is the worth that can be achieved through the cooperation of the coalition’s members. In many cases, computing the value of a coalition will be an optimization problem: find the optimal way to cooperate to produce the best possible worth. In some cases, such a problem may be computationally hard. The following example is given by Sandholm and Lesser [14]: in a logistics application, computing the value of a coalition requires solving a travelling salesman problem, a problem known to be NP-complete. Before being able to compute an optimal CS, one needs to compute the value of all coalitions. Since agents are cooperative (i.e., they want to work together to ensure the best outcome for the society), we are interested in a decentralised algorithm that computes all the coalition values in a minimal amount of time and that requires minimum communication between the agents.

Shehory and Kraus were the first to propose an algorithm to share the computation of the coalition values [19].


[Figure 11.2 plots, on a log scale, the number of coalitions and the number of coalition structures as a function of the number of agents (from 0 to 50).]

Figure 11.2: Number of CSs in a population of n agents.

In their algorithm, the agents negotiate which computation is performed by which agent, which is quite demanding. Rahwan and Jennings proposed an algorithm where the agents first agree on an identification for each agent participating in the computation (an index between 1 and n, the number of agents). Then, each agent uses the same algorithm to determine which coalition values it needs to compute, removing the need for any further communication, except announcing the results of the computation. The index is used to compute a set of coalitions and ensures that the values of all the coalitions are computed exactly once. This algorithm, called DCVC [7], outperforms the one by Shehory and Kraus. To minimise the overall time of computation, it is best to balance the work of all the agents. The key observation is that, in general, it should take longer to compute the value of a large coalition than that of a small coalition (i.e., the computational complexity is likely to increase with the size of the coalition since more agents have to coordinate their activities). Their method improves the balance of the loads by distributing coalitions of the same size over all agents. Knowing the number of agents n participating in the computation and its index number (an integer between 1 and n), each agent determines for each coalition size which coalition values to compute. The algorithm can also be adapted when the agents have different, known computational speeds so as to complete the computation in a minimum amount of time.
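The sketch below (ours) is not the actual DCVC rule, but it illustrates the same idea under simple assumptions: every agent enumerates the coalitions of each size in the same order and keeps those whose position matches its own index modulo n, so each value is computed exactly once and coalitions of the same size are spread over the agents.

    from itertools import combinations

    def coalitions_for_agent(agents, my_index, num_agents):
        # Hypothetical deterministic split, not the actual DCVC assignment: keep the
        # coalitions whose position (per size, in lexicographic order) matches my_index.
        agents = sorted(agents)
        assigned = []
        for size in range(1, len(agents) + 1):
            for pos, c in enumerate(combinations(agents, size)):
                if pos % num_agents == my_index:
                    assigned.append(frozenset(c))
        return assigned

    agents = [1, 2, 3, 4]
    for idx in range(len(agents)):
        print(idx, coalitions_for_agent(agents, idx, len(agents)))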


11.2 Searching for the optimal coalition structure

Once the value of each coalition is known, the agents need to search for an optimal CS. The difficulty of this search lies in the large search space, as recognised by existing algorithms, and this is even more true when there are externalities (i.e., when the valuation of a coalition depends on the CS). For TU games with no externalities, some algorithms guarantee finding CSs within a bound from the optimum when an incomplete search is performed. Unfortunately, such guarantees are not possible for games with externalities. We shortly discuss these two cases in the following.

11.2.1 Games with no externalities

Anytime algorithms

Sandholm et al. [15] proposed a first algorithm that searches through a lattice as presented in Figure 11.1. Their algorithm guarantees that the CS found, s, is within a bound from the optimal CS s⋆ when a sufficient portion of the lattice has been visited. To ensure any bound, it is necessary to visit at least 2^(n−1) CSs (Theorems 1 and 3 in [15]), which corresponds to the first two levels of the lattice, i.e., the algorithm needs to visit the grand coalition and all the CSs composed of 2 coalitions. Let S′ be the best CS found in the first two levels; then we have v(s⋆) ≤ n · v(S′). To see this, let Cmax be a coalition with the highest value (i.e., Cmax ∈ argmaxC⊆N v(C)). It is clear that v(s⋆) ≤ n × v(Cmax) as each coalition forming the CS s⋆ has at most the value v(Cmax) and there are at most n coalitions in s⋆. Since all coalitions are part of these first two levels, we also have v(Cmax) ≤ v(S′). Finally, we obtain v(s⋆) ≤ n × v(S′), which was what we wanted (a small sketch of this first search phase is given below).
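A minimal Python sketch (ours) of this first phase: visit the grand coalition and every split of N into two coalitions, i.e., 2^(n−1) CSs, and return the best CS seen; all names are hypothetical.

    from itertools import combinations

    def first_two_levels(agents, v):
        # Visit the grand coalition and every CS made of two coalitions; by the
        # argument above, the best CS found is within a factor n of the optimum.
        agents = sorted(agents)
        grand = frozenset(agents)
        best_cs, best_val = [grand], v[grand]
        first, rest = agents[0], agents[1:]
        for r in range(len(rest) + 1):
            for part in combinations(rest, r):
                c1 = frozenset((first,) + part)
                c2 = grand - c1
                if not c2:
                    continue  # skip the empty complement
                val = v[c1] + v[c2]
                if val > best_val:
                    best_cs, best_val = [c1, c2], val
        return best_cs, best_val

    v = {frozenset(c): val for c, val in {
        (1,): 0, (2,): 0, (3,): 0, (1, 2): 90, (1, 3): 80, (2, 3): 70, (1, 2, 3): 120}.items()}
    print(first_two_levels([1, 2, 3], v))  # ([frozenset({1, 2, 3})], 120)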

The bound improves each time a new level is visited. An empirical study of different strategies for visiting the other levels is presented in [4]. Three different algorithms are empirically tested over characteristic functions with different properties: 1) subadditive, 2) superadditive, 3) picked from a uniform distribution in [0, 1] or in [0, |S|] (where |S| is the size of the coalition). The performance of the heuristics differs over the different types of valuation functions, demonstrating the importance of the properties of the characteristic function for the performance of the search algorithm.

The algorithm by Dang and Jennings [3] improves on the one of [15] for low bounds from the optimal. For large bounds, both algorithms visit the first two levels of the lattice. Then, where the algorithm by Sandholm et al. continues by searching each level of the lattice, the algorithm of Dang and Jennings only searches specific subsets of each level to decrease the bound faster. This algorithm is anytime, but its complexity is not polynomial.

These algorithms are based on a lattice like the one presented in Figure 11.1, where a CS in level i contains exactly i coalitions. The best algorithm to date has been developed by Rahwan et al. and uses a different representation of the search space, called integer partition (IP). It is an anytime algorithm that has been improved over a series of papers [11, 12, 8, 9, 13]. In this representation, the CSs are grouped according to the sizes of the coalitions they contain, which is called a configuration. For example, for a population of four agents, the configuration {1, 3} represents the CSs that contain a singleton coalition and a coalition with three agents. A smart scan of the input allows the algorithm to search the CSs with two coalitions, the grand coalition, and the CS containing only singletons. In addition, during the scan, the algorithm computes the average and maximum value for each coalition size. The maximum values can be used to prune the search space: when constructing a configuration, using the maximum coalition value for each size gives an upper bound on the value of any CS that follows that configuration, and if this bound is not greater than the value of the current best CS, it is not necessary to search the CSs with that configuration, which prunes the search tree (a small sketch of this bound computation is given below). Then, the algorithm searches the remaining configurations, starting with the most promising ones. During the search of a configuration, a branch and bound technique is used. In addition, the algorithm is designed so that no CS is evaluated twice. Empirical evaluation shows that the algorithm outperforms the other current approaches over different distributions used to generate the values of the coalitions.
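The pruning idea can be sketched as follows (ours, not the actual IP algorithm): compute the maximum coalition value per size and derive an upper bound for every configuration, i.e., every integer partition of n; function names and the example values are hypothetical.

    from itertools import combinations

    def integer_partitions(n, max_part=None):
        # All multisets of positive integers summing to n (the "configurations").
        max_part = n if max_part is None else max_part
        if n == 0:
            yield []
            return
        for k in range(min(n, max_part), 0, -1):
            for tail in integer_partitions(n - k, k):
                yield [k] + tail

    def configuration_bounds(agents, v):
        # Upper bound per configuration: sum of the maximum coalition value of each size.
        n = len(agents)
        max_by_size = {s: max(v[frozenset(c)] for c in combinations(sorted(agents), s))
                       for s in range(1, n + 1)}
        return {tuple(cfg): sum(max_by_size[s] for s in cfg)
                for cfg in integer_partitions(n)}

    v = {frozenset(c): val for c, val in {
        (1,): 0, (2,): 0, (3,): 0, (1, 2): 90, (1, 3): 80, (2, 3): 70, (1, 2, 3): 120}.items()}
    print(configuration_bounds({1, 2, 3}, v))
    # {(3,): 120, (2, 1): 90, (1, 1, 1): 0}: configurations whose bound is below the value
    # of the best CS found so far can be skipped entirely.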

Dynamic programming

Another approach is to use a dynamic programming technique. The key idea is provided in the following lemma: in order to compute the optimal value of a CS, it suffices to consider partitions of N into two disjoint coalitions and to apply the argument recursively. To help us, let us recall the definition of the superadditive cover (N, v̄) of a TU game (N, v): the valuation function v̄ is defined by v̄(C) = maxP∈SC ∑T∈P v(T ) for all C ⊆ N , C ≠ ∅, and v̄(∅) = 0, where SC denotes the set of partitions of C. The set of optimal CSs can now be written argmaxP∈SN ∑T∈P v(T ), and its value is v̄(N). Let us now state the key lemma:

11.2.1. LEMMA. For any C ⊆ N , we have

v̄(C) = max { max { v̄(C′) + v̄(C′′) | C′ ∪ C′′ = C ∧ C′ ∩ C′′ = ∅ ∧ C′, C′′ ≠ ∅ } , v(C) }.

Proof. Clearly, v̄(C) ≥ v(C). Take two disjoint non-empty coalitions C′ and C′′ such that C′ ∪ C′′ = C. Let S′ and S′′ be two partitions of C′ and C′′ such that v̄(C′) = v(S′) and v̄(C′′) = v(S′′), where v(S) stands for ∑T∈S v(T ). Then S′ ∪ S′′ is a CS over C with v(S′ ∪ S′′) = v(S′) + v(S′′), so we must have v̄(C) ≥ v̄(C′) + v̄(C′′).

Now, let S be a partition of C such that v̄(C) = v(S). If S = {C}, then we are done. Otherwise, let C′ be a coalition in S, C′′ = C \ C′ and S′ be S \ {C′}. Since S′ is a CS over C′′, we have v̄(C′′) ≥ v(S′) = v(S) − v(C′). On the other hand, we have v̄(C′) ≥ v(C′). Hence v̄(C′) + v̄(C′′) ≥ v(S) = v̄(C). □
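The lemma translates directly into a dynamic program over the 2^n − 1 coalitions, sketched below in Python (ours, not from the notes); best_split remembers how each value was obtained so that an optimal CS can be reconstructed. All names are hypothetical.

    from itertools import combinations

    def proper_splits(coalition):
        # Enumerate splits of `coalition` into two disjoint non-empty parts (each pair once).
        members = sorted(coalition)
        first, rest = members[0], members[1:]
        for r in range(len(rest) + 1):
            for part in combinations(rest, r):
                c1 = frozenset((first,) + part)
                c2 = frozenset(coalition) - c1
                if c2:
                    yield c1, c2

    def superadditive_cover(agents, v):
        # vbar(C) = max( v(C), max over splits C = C1 u C2 of vbar(C1) + vbar(C2) ).
        agents = sorted(agents)
        vbar, best_split = {}, {}
        subsets = [frozenset(c) for r in range(1, len(agents) + 1)
                   for c in combinations(agents, r)]          # smaller coalitions first
        for C in subsets:
            vbar[C], best_split[C] = v[C], None
            for c1, c2 in proper_splits(C):
                if vbar[c1] + vbar[c2] > vbar[C]:
                    vbar[C], best_split[C] = vbar[c1] + vbar[c2], (c1, c2)
        return vbar, best_split

    def optimal_cs(agents, v):
        vbar, best_split = superadditive_cover(agents, v)

        def expand(C):
            if best_split[C] is None:
                return [C]
            c1, c2 = best_split[C]
            return expand(c1) + expand(c2)

        grand = frozenset(agents)
        return expand(grand), vbar[grand]

    v = {frozenset(c): val for c, val in {
        (1,): 0, (2,): 0, (3,): 0, (1, 2): 90, (1, 3): 80, (2, 3): 70, (1, 2, 3): 120}.items()}
    print(optimal_cs([1, 2, 3], v))  # ([frozenset({1, 2, 3})], 120)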

More recently, [17, 18] designed algorithms that use dynamic programming and that guarantee a constant factor approximation ratio in a given time. In particular, the latest algorithm [17] guarantees a factor of 1/8 in O(2^n).


Other approaches

Some algorithms try to combine an anytime approach with dynamic programming. Other researchers use different techniques. For example, Ueda et al. [20] propose to use a different representation, assuming that the value of a coalition is the optimal solution of a distributed constraint optimization problem (DCOP). The algorithm uses a DCOP solver and guarantees a bound from the optimum.

The algorithms above assume that the TU game is represented in a naive way. There exist some algorithms that take advantage of compact representations. For example, [6] proposes algorithms for the case where the game is represented using MC-nets and for the case where the synergy coalition group representation is used. Another example is [1] for skill games.

11.2.2 Games with externalities

The previous algorithms explicitly use the fact that the valuation function only depends on the members of the coalition, i.e., has no externalities. When this is not the case, i.e., when the valuation function depends on the CS, it is still possible to use some algorithms, e.g., the one proposed in [4], but the guarantee of being within a bound from the optimal is no longer valid. Sen and Dutta use genetic algorithm techniques [16] to perform the search. The use of such techniques only assumes that there exist some underlying patterns in the characteristic function. When such patterns exist, the genetic search improves much faster in locating higher-valued CSs compared to the level-by-level search approach. One downside of the genetic algorithm approach is that there is no optimality guarantee. Empirical evaluation, however, shows that the genetic algorithm does not take much longer to find a solution when the value of a coalition does depend on other coalitions.

More recently, Rahwan et al. and Michalak et al. consider the problem for some classes of externalities and modify the IP algorithm for games with externalities [5, 10]; however, they assume games with negative or positive spillovers. [2] introduce a representation of games in partition function form using types: each agent has a single type. They make two assumptions on the nature of the externalities (based on the notions of competition and complementation) and show that games with negative or positive spillovers are special cases. They provide a branch and bound algorithm for the general setting, as well as a worst-case initial bound.


Bibliography

[1] Yoram Bachrach, Reshef Meir, Kyomin Jung, and Pushmeet Kohli. Coalitional structure generation in skill games. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), pages 703–708, July 2010.

[2] Bikramjit Banerjee and Landon Kraemer. Coalition structure generation in multi-agent systems with mixed externalities. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’10), pages 175–182, Richland, SC, 2010. International Foundation for Autonomous Agents and Multiagent Systems.

[3] Viet Dung Dang and Nicholas R. Jennings. Generating coalition structures with finite bound from the optimal guarantees. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS’04), 2004.

[4] Kate S. Larson and Tuomas W. Sandholm. Anytime coalition structure generation: an average case study. Journal of Experimental & Theoretical Artificial Intelligence, 12(1):23–42, 2000.

[5] Tomasz Michalak, Andrew Dowell, Peter McBurney, and Michael Wooldridge. Optimal coalition structure generation in partition function games. In Proceedings of the 2008 European Conference on Artificial Intelligence (ECAI 2008), pages 388–392, Amsterdam, The Netherlands, 2008. IOS Press.

[6] Naoki Ohta, Vincent Conitzer, R. Ichimura, Y. Sakurai, and Makoto Yokoo. Coalition structure generation utilizing compact characteristic function representations. In Proceedings of the 15th International Conference on Principles and Practice of Constraint Programming (CP’09), pages 623–638, 2009.

[7] Talal Rahwan and Nicholas R. Jennings. An algorithm for distributing coalitional value calculations among cooperating agents. Artificial Intelligence, 171(8-9):535–567, 2007.


[8] Talal Rahwan and Nicholas R. Jennings. Coalition structure generation: Dynamic programming meets anytime optimization. In Proceedings of the 23rd Conference on Artificial Intelligence (AAAI-08), pages 156–161, 2008.

[9] Talal Rahwan and Nicholas R. Jennings. An improved dynamic programming algorithm for coalition structure generation. In Proceedings of the 7th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-08), 2008.

[10] Talal Rahwan, Tomasz Michalak, Nicholas R. Jennings, Michael Wooldridge, and Peter McBurney. Coalition structure generation in multi-agent systems with positive and negative externalities. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI-09), 2009.

[11] Talal Rahwan, Sarvapali D. Ramchurn, Viet Dung Dang, Andrea Giovannucci, and Nicholas R. Jennings. Anytime optimal coalition structure generation. In Proceedings of the Twenty-Second Conference on Artificial Intelligence (AAAI-07), pages 1184–1190, 2007.

[12] Talal Rahwan, Sarvapali D. Ramchurn, Viet Dung Dang, and Nicholas R. Jennings. Near-optimal anytime coalition structure generation. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI’07), pages 2365–2371, January 2007.

[13] Talal Rahwan, Sarvapali D. Ramchurn, Nicholas R. Jennings, and Andrea Giovannucci. An anytime algorithm for optimal coalition structure generation. Journal of Artificial Intelligence Research, 34:521–567, 2009.

[14] Tuomas Sandholm and Victor R. Lesser. Coalitions among computationally bounded agents. AI Journal, 94(1–2):99–137, 1997.

[15] Tuomas W. Sandholm, Kate S. Larson, Martin Andersson, Onn Shehory, and Fernando Tohmé. Coalition structure generation with worst case guarantees. Artificial Intelligence, 111(1–2):209–238, 1999.

[16] Sandip Sen and Partha Sarathi Dutta. Searching for optimal coalition structures. In ICMAS ’00: Proceedings of the Fourth International Conference on Multi-Agent Systems (ICMAS-2000), page 287, Washington, DC, USA, 2000. IEEE Computer Society.

[17] Travis Service and Julie Adams. Approximate coalition structure generation. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), pages 854–859, July 2010.

[18] Travis Service and Julie Adams. Constant factor approximation algorithms for coalition structure generation. Autonomous Agents and Multi-Agent Systems, pages 1–17, 2010. Published online February 2010.


[19] Onn Shehory and Sarit Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence, 101(1-2):165–200, May 1998.

[20] Suguru Ueda, Atsushi Iwasaki, Makoto Yokoo, Marius Calin Silaghi, Katsutoshi Hirayama, and Toshihiro Matsui. Coalition structure generation based on distributed constraint optimization. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), pages 197–203, July 2010.


Lecture 12
Issues for applying cooperative games

We now highlight issues that have emerged from the different types of applications (e.g., resource or task allocation problems or forming a buying group). Some of these issues have solutions while others remain unsolved, for example, dealing with agents that can enter and leave the environment at any time in an open, dynamic environment. None of the current protocols can handle these issues without re-starting the computation, and only a few approaches consider how to re-use the already computed solution [6, 13].

12.1 Stability and Dynamic Environments

Real-world scenarios often present dynamic environments. Agents can enter and leave the environment at any time, the characteristics of the agents may change with time, the knowledge of the agents about the other agents may change, etc.

The game-theoretic stability criteria are defined for a fixed population of agents, and the introduction of a new agent in the environment requires significant computation to update a stable payoff distribution. For example, for the kernel, all the agents need to check whether any coalition that includes the new agent changes the value of the maximum surplus, which requires re-evaluating O(2^n) coalitions. Given the complexity of the stability concepts, one challenge faced by the multiagent community is to develop stability concepts that can be easily updated when an agent enters or leaves the environment.

In addition, if an agent drops out during the negotiation, this may cause problems for the remaining agents. For example, a protocol that guarantees a kernel-stable payoff distribution is shown not to be ‘safe’ when the population of agents is changing: if an agent i leaves the formation process without notifying the other agents, the other agents may complete the protocol and find a solution to a situation that does not match reality. Each time a new agent enters or leaves the population, a new process needs to be restarted [9].

In an open environment, manipulations will be impossible to detect: agents may use multiple identifiers (or false names) to pretend to be multiple agents or, the other way around, multiple agents may collude and pretend to be a single agent, or agents can hide some of their skills. Hence, it is important to propose solution concepts that are robust against such manipulations. We will come back later to some of the solutions that have been proposed: the anonymity-proof core [44] and the anonymity-proof Shapley value [35].

12.2 Uncertainty about Knowledge and Task

In real-world scenarios, agents will be required to handle some uncertainty. Different sources of uncertainty have been considered in the literature:

• the valuation function is an approximation [38] and agents may not use the same algorithm. Hence, the agents may not know the true value.

• agents may not know some tasks [9] or the value of some coalitions. In such cases, the agents play a different coalitional game that may reduce the payoff of some agents compared to the solution of the true game.

• some information is private, i.e., an agent knows some property about itself, but does not know it for other agents. In [28], it is the cost incurred by other agents to perform a task that is private. In [16, 17], agents have a private type, and the valuation function depends on the types of the coalition’s members.

• uncertainty about the outcome of an action [16]: when a coalition takes an action, some external factors may influence the outcome of the action. This can be captured by a probability of an outcome given the action taken and the types of the members of the coalition.

• there are multiple possible worlds [24], which model the different possible outcomes of the formation of a coalition. Agents know a probability distribution over the different worlds. In addition, an agent may not be able to distinguish some worlds as it lacks information: it knows a partition of the worlds (called information sets), and each set of the partition represents worlds that appear indistinguishable.

Some authors also consider that there is uncertainty in the valuation function without modelling a particular source; for example, in [25], each agent has an expectation of the valuation function. In [10, 11], fuzzy sets are used to represent the valuation function: in the first paper, the agents enter bilateral negotiations to negotiate the Shapley value; in the second paper, they define a fuzzy version of the kernel.

In the uncertainty model of [24], the definition of the core depends on the time one reasons about it. They proposed three different definitions of the core that depend on the timing of the evaluation: before the world is drawn, or ex-ante, not much information can be used; after the world is drawn but before it is known, also called ex-interim, an agent knows to which set of its information partition the real world belongs, but does not know which world it is; finally, when the world is announced to the agent, or ex-post, everything is known.

The model of [16] combines uncertainty about the agents’ types and uncertainty about the outcome of the action taken by a coalition. Each agent has a probabilistic belief about the types of the other agents in the population. Chalkiadakis and Boutilier propose a definition of the core, the Bayesian core (introduced in [14]), in which no agent believes that there exists a better coalition to form. As it may be difficult to obtain all the probabilities and reason about them, [17] propose to use a “point” belief: an agent guesses the types of the other agents and reasons with these guesses. The paper analyses the core and simple games (proving that the core of a simple game is non-empty iff the game has a veto player) and gives some complexity results for these games with beliefs.

12.3 Safety and Robustness

It is also important that the coalition formation process is robust. For instance, communication links may fail during the negotiation phase. Hence, some agents may miss some components of the negotiation stages. This possibility is studied in [9] for the KCA protocol [27]: coalition negotiations are not safe when some agents become unavailable (intentionally or otherwise). In particular, the payoff distribution is not guaranteed to be kernel-stable. [6] empirically studies the robustness of a central algorithm introduced in [5]: the cost to compute a task allocation and a payoff distribution in the core is polynomial, but it can still be expensive. In the case of agent failure, the computation needs to be repeated. Belmonte et al. propose an alternative payoff division model that avoids such a re-computation, but the solution is no longer guaranteed to be in the core; it is only close to the core. There is a trade-off between computational efficiency and the utility obtained by the agents. They conclude that when the number of agents is small, the loss of utility compared to the optimal is small; hence, the improvement in computational efficiency can be justified. For a larger number of agents, however, the loss of utility cannot justify the improvement in computational cost.

12.3.1 Protocol Manipulation

When agents send requests to search for members of a coalition or when they accept to form a coalition, the protocol may require the disclosure of some private information [36]. When the agents reveal some of their information, the mechanism must ensure that there is no information asymmetry that can be exploited by some agents [7]. To protect a private value, some protocols [9] may allow the addition of a constant offset to the private value, as long as this addition does not impact the outcome of the negotiation.

Belmonte et al. study the effect of deception and manipulation on their model in [6]. They show that some agents can benefit from falsely reporting their cost. In some other approaches [9, 20], even if it is theoretically possible to manipulate the protocol, it is not possible in practice, as the computational complexity required to ensure a higher outcome for the malevolent agent is too high. For example, [20] show that manipulating a marginal-contribution-based value division scheme is NP-hard (except when the valuation function has other properties, such as being convex).

Other possible protocol manipulations include hiding skills, using false names, colluding, etc. The traditional solution concepts can be vulnerable to false names and to collusion [44]. To address these problems, it is beneficial to define the valuation function in terms of the required skills instead of defining it over the agents: only skills, not agents, should be rewarded by the characteristic function. In that case, the solution concept is robust to false names, collusion, and their combination. But the agents can still have an incentive to hide skills. A straight, naive decomposition of the skills would increase the size of the characteristic function, and [45] propose a compact representation for this case.

12.4 Communication

While one purpose of better negotiation techniques may be to improve the quality of the outcome for the agents, other goals may include decreasing the time and the number of messages required to reach an agreement. For example, learning is used to decrease negotiation time in [41]. The motivation of Lerman’s work in [30] is to develop a coalition formation mechanism that has low communication and computation costs. In another work, the communication costs are included in the characteristic function [42].

The communication complexity of some protocols has been derived. For instance, the exponential protocol in [40] and the algorithm for forming Bilateral Shapley Value Stable coalitions in [26] have a communication complexity of O(n^2), the negotiation-based protocol in [40] is O(n^2 2^n), and it is O(n^k) for the protocol in [39] (where k is the maximum size of a coalition). The goal of [37] is to analyse the communication complexity of computing the payoff of a player with different stability concepts: they find that it is Θ(n) when either the Shapley value, the nucleolus, or the core is used.

12.5 Scalability

When the population of heterogeneous agents is large, discovering the relevant agents to perform a task may be difficult. In addition, if all agents are involved in the coalition formation process, the cost in time and computation will be large. To alleviate this scalability issue, a hierarchy of agents can be used [1]. When an agent discovers a task that can be addressed by agents below it in the hierarchy, the agent picks the best of them to perform the task. If the agents below cannot perform the task, the agent passes the task to the agent above it in the hierarchy and the process repeats. The notions of clans [22] and congregations [12], where agents gather together for a long period, have been proposed to restrict the search space by considering only a subset of the agents (see Section 12.6).

12.6 Long Term vs. Short Term

In general, a coalition is a short-lived entity that is “formed with a purpose in mind and dissolve when that need no longer exists, the coalition ceases to suit its designed purpose, or critical mass is lost as agents depart” [23]. It can be beneficial to consider the formation of long-term coalitions, or the process of repeated coalition formation involving the same agents. [43] explicitly study long-term coalitions, and in particular the importance of trust in this context. [12] refer to a long-term coalition as a congregation. The purpose of a congregation is to reduce the number of candidates for a successful interaction: instead of searching the entire population, agents will only search in the congregation. The goal of a congregation is to gather agents with similar or complementary expertise to perform well in an environment in the long run, which is not very different from a coalition. The only difference is that group rationality is not expected in a congregation. The notion of congregation is similar to the notion of clans [22]: agents gather not for a specific purpose, but for a long-term commitment. The notion of trust is paramount in clans, and sharing information is seen as another way to improve performance.

12.7 Fairness

Stability does not necessarily imply fairness. For example, let us consider two CSs S and T with associated kernel-stable payoff distributions xS and xT . Agents may have different preferences between the CSs. It may even be the case that there is no CS that is preferred by all agents. If the optimal CS is formed, some agents, especially if they are in a singleton coalition, may suffer from the choice of this CS. [3] propose a modification of the kernel to allow side-payments between coalitions to compensate such agents.

[2] consider partition function games with externalities. They consider a process where, in turn, agents change coalition to improve their immediate payoff. They propose that the agents share the maximal social welfare, and that the size of an agent’s share is proportional to its expected utility of the process. The payoff obtained is guaranteed to be at least as high as the expected utility. They claim that using the expected utility as the basis of the payoff distribution provides some fairness, as the expected utility can be seen as a global metric of an agent’s performance over the entire set of possible CSs.


12.8 Overlapping Coalitions

It is typically assumed that an agent belongs to a single coalition; however, there are some applications where agents can be members of multiple coalitions. For instance, the expertise of an agent may be required by different coalitions at the same time, and the agent may have enough resources to be part of two or more coalitions. In a traditional setting, the use of the same agent i by two coalitions C1 and C2 would require a merge of the two coalitions. This larger coalition is potentially harder to manage, and a priori, there would not be much interaction between the agents in C1 and C2, except for agent i. Another application that requires the use of overlapping coalitions is tracking targets using a sensor network [21]. In this work, a coalition is defined for each target, and as agents can track multiple targets at the same time, they can be members of different coalitions.

The traditional stability concepts do not consider this issue. One possibility is for the agent to be considered as two different agents, but this representation is not satisfactory as it does not capture the real power of the agent. Shehory and Kraus propose a setting with overlapping coalitions [39]: each agent has a capacity, and performing a task may use only a fraction of the agent’s capacity. Each time an agent commits to a task, the set of possible coalitions that can perform a given task can change. A mapping to a set covering problem allows the coalitions to be found. However, stability is not studied. Another approach is the use of fuzzy coalitions [8]: agents can be members of a coalition with a certain degree that represents the risk associated with being in that coalition. Other works consider that the agents have different degrees of membership, and their payoff depends on this degree [4, 31, 34]. The protocols in [29] also allow overlapping coalitions.

More recently, [19]1 have studied the notion of the core in overlapping coalition formation. In their model, each agent has one resource and contributes a fraction of that resource to each coalition it participates in. The valuation function v is then a function [0, 1]^n → R. A CS is no longer a partition of the agents: a CS S is a finite list of vectors, one for each ‘partial’ coalition, i.e., S = (r1, . . . , rk). The size of S is the number of coalitions, i.e., k. The support of rC ∈ S (i.e., the set of indices i ∈ N such that rCi ≠ 0) is the set of agents forming coalition C. For all i ∈ N and all coalitions C ∈ S, rCi ∈ [0, 1] represents the fraction of its resource that agent i contributes to coalition C; hence, ∑C∈S rCi ≤ 1 (i.e., agent i cannot contribute more than 100% of its resource). A payoff distribution for a CS S of size k is defined by a k-tuple x = (x1, . . . , xk) where xC is the payoff distribution that the agents obtain for coalition C. If an agent does not contribute to a coalition, it must not receive any payoff from this coalition, hence (rCi = 0) ⇒ (xCi = 0). The total payoff of agent i is the sum of its payoffs over all coalitions, pi(CS, x) = ∑C=1..k xCi. The efficiency criterion becomes ∀rC ∈ S, ∑i∈N xCi = v(rC). An imputation is an efficient payoff distribution that is also individually rational. We denote by I(S) the set of all imputations for the CS S (a small sketch of these checks is given below).

1An earlier version is [18]
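A small Python sketch (ours, not from the notes) of the bookkeeping in this model: an overlapping CS is stored as a list of contribution vectors, one payoff vector is attached to each of them, and the feasibility and efficiency conditions above are checked directly; the linear valuation at the end is made up, and all names are hypothetical.

    def feasible(cs, agents):
        # No agent contributes more than 100% of its resource overall.
        return all(sum(rc.get(i, 0.0) for rc in cs) <= 1.0 + 1e-9 for i in agents)

    def efficient(cs, payoffs, v):
        # Each partial coalition distributes exactly the value it generates, and
        # agents that do not contribute to a coalition receive nothing from it.
        for rc, xc in zip(cs, payoffs):
            if abs(sum(xc.values()) - v(rc)) > 1e-9:
                return False
            if any(xc.get(i, 0.0) != 0.0 for i in xc if rc.get(i, 0.0) == 0.0):
                return False
        return True

    # Hypothetical 3-agent example: the value of a partial coalition is linear in
    # the contributed resources.
    v = lambda rc: 10 * sum(rc.values())
    cs = [{1: 0.5, 2: 1.0}, {1: 0.5, 3: 1.0}]
    payoffs = [{1: 5.0, 2: 10.0}, {1: 5.0, 3: 10.0}]
    print(feasible(cs, [1, 2, 3]), efficient(cs, payoffs, v))  # True True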


We are now ready to define the overlapping core. One issue is the kind of permissible deviations: when an agent deviates, she can completely leave some coalitions, reduce her contribution to other coalitions, or contribute to new coalitions. If she still contributes to a coalition containing non-deviating agents, how should they behave? They may refuse to give any payoff to the deviating agent, as she is seen as not trustworthy. Agents that are not affected by the deviation may, however, consider that the deviating agents did not fail them, and consequently, they may continue to share payoffs with the deviators. A last case occurs when the deviators decrease their involvement in a coalition. This coalition may no longer perform the same tasks, but it can still perform some. If there is enough value to maintain the payoff of the non-deviators, the deviators may be allowed to share the surplus generated. Each of these behaviours gives rise to a different type of deviation, and consequently, a different definition of the core: the conservative core, the refined core and the optimistic core. The paper also provides a characterization of the conservative core and properties of the different cores, including a result showing that convex overlapping coalitional games have a non-empty core.

12.9 Trust

The notion of trust can be an important metric to determine whom to interact with. This is particularly important when the coalition is expected to live for a long time. In [7], an agent computes a probability of success of a coalition, based on a notion of trust, which can be used to eliminate some agents from future consideration. This probability is used to estimate the value of different coalitions and helps the agent decide which coalition to join or form. In [43], the decision to leave or join a coalition is a function of the trust put in other agents. In this paper, the concept of trust is defined as a belief that agents will have successful interactions in the future; hence, trust is used to consider a subset of the entire population of agents for the formation of future coalitions. Trust is used to compute coalitions, but agents do not compute a payoff distribution. Another work that emphasises trust is [22], which introduces the concept of clans. A clan is formed by agents that trust each other and make long-term commitments. Given the trust and an estimate of local gain, agents can accept to join a clan. The idea behind this work is that agents that trust each other will be collaborative. Moreover, when an agent needs to form a coalition of agents, it will only search for partners in the clan, which reduces the search space. Trust can therefore be very effective for scaling up to large societies of agents.

12.10 Learning

When agents have to repeatedly form coalitions in the presence of the same set of agents, learning can be used to improve the performance of the coalition formation process, both in terms of speed and in terms of the value obtained.

A basic model of iteratively playing many coalitional games is presented in [32]: at each time step, a task is offered to agents that are already organised into coalitions. The task is awarded to the best coalition. The model is made richer in [33], where the agents can estimate the value of a coalition and have a richer set of actions: they can fire members from a coalition, join a different coalition, or leave a coalition to replace some agents in a different coalition. However, in both works, the agents do not learn; they use a set of static strategies. Empirical experiments compare the results over populations using either the same strategy or a mix of strategies.

at each time step, a task is offered to agents that are already organised into coalitions.The task is awarded to the best coalition. The model is made richer in [33] where theagents can estimate the value of a coalition and have a richer set of actions: as theagents can fire members from a coalition, join a different coalition, or leave a coalitionto replace some agents in a different coalition. However, in both works, the agents arenot learning, they have a set of static strategies. Empirical experiments compare theresults over populations using either the same strategy or a mix of strategies.

Chalkiadakis and Boutilier also consider a repeated coalition formation problem [14, 15, 16]. The setting is a task allocation problem in which agents know their own type (i.e., their skill at performing certain types of tasks), but do not know the types of the other agents in the population. Each time a coalition is formed, the agents receive a value for that coalition. From the observation of this value, the agents can update their beliefs about the types of the other agents. When an agent reasons about which coalition to form, it uses its beliefs to estimate the value of the coalition. This problem can be formulated as a POMDP (Partially Observable Markov Decision Process) in which the agents maximise the long-term value of their decisions over the repetitions of the coalition formation process. Solving a POMDP is a difficult task, and the POMDP for the coalition formation problem grows exponentially with the number of agents. In [14], a myopic approach is proposed. More recently, Chalkiadakis and Boutilier have proposed additional algorithms to solve that POMDP and empirically compared the solutions [15].
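The belief update at the heart of this line of work can be illustrated with a small sketch (ours, not the actual model of [14, 15, 16]; the likelihood model value_likelihood is an assumption): an agent keeps a discrete distribution over the possible types of a partner and applies Bayes' rule after observing the value produced by a coalition.

from typing import Callable, Dict

def update_belief(belief: Dict[str, float],
                  observed_value: float,
                  value_likelihood: Callable[[float, str], float]) -> Dict[str, float]:
    # Bayes' rule: posterior(type) is proportional to
    # likelihood(observed value | type) * prior(type).
    posterior = {t: value_likelihood(observed_value, t) * p for t, p in belief.items()}
    norm = sum(posterior.values())
    if norm == 0:
        return dict(belief)  # observation impossible under every type: keep the prior
    return {t: p / norm for t, p in posterior.items()}

# Toy usage: two possible types for a partner and a crude likelihood model.
prior = {"skilled": 0.5, "unskilled": 0.5}
likelihood = lambda value, t: (0.8 if value > 5 else 0.2) if t == "skilled" else (0.3 if value > 5 else 0.7)
posterior = update_belief(prior, observed_value=7.0, value_likelihood=likelihood)
# After seeing a high value, the belief shifts towards "skilled" (about 0.73 here).

The expected value of a prospective coalition can then be estimated by averaging its valuation over these beliefs, which is, roughly, the quantity a myopic agent maximises at each step.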


Bibliography

[1] Sherief Abdallah and Victor Lesser. Organization-based cooperative coalition formation. In IAT 04, 2004.

[2] Stéphane Airiau and Sandip Sen. A fair payoff distribution for myopic rational agents. In Proceedings of the Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS-09), May 2009.

[3] Stéphane Airiau and Sandip Sen. On the stability of an optimal coalition structure. In Proceedings of the 19th European Conference on Artificial Intelligence (ECAI-2010), pages 203–208, August 2010.

[4] J.-P. Aubin. Mathematical Methods of Game and Economic Theory. North-Holland, 1979.

[5] María-Victoria Belmonte, Ricardo Conejo, José-Luis Pérez de-la Cruz, and Francisco Triguero Ruiz. A stable and feasible payoff division for coalition formation in a class of task oriented domains. In ATAL '01: Revised Papers from the 8th International Workshop on Intelligent Agents VIII, pages 324–334, London, UK, 2002. Springer-Verlag.

[6] María-Victoria Belmonte, Ricardo Conejo, José-Luis Pérez de-la Cruz, and Francisco Triguero Ruiz. A robust deception-free coalition formation model. In Proceedings of the 2004 ACM Symposium on Applied Computing (SAC '04), pages 469–473, New York, NY, USA, 2004. ACM Press.

[7] Bastian Blankenburg, Rajdeep K. Dash, Sarvapali D. Ramchurn, Matthias Klusch, and Nicholas R. Jennings. Trusted kernel-based coalition formation. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 989–996, New York, NY, USA, 2005. ACM Press.

[8] Bastian Blankenburg, Minghua He, Matthias Klusch, and Nicholas R. Jennings. Risk-bounded formation of fuzzy coalitions among service agents. In Proceedings of the 10th International Workshop on Cooperative Information Agents, 2006.


[9] Bastian Blankenburg and Matthias Klusch. On safe kernel stable coalition forming among agents. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, 2004.

[10] Bastian Blankenburg and Matthias Klusch. BSCA-F: Efficient fuzzy valued stable coalition forming among agents. In Proceedings of the IEEE International Conference on Intelligent Agent Technology (IAT). IEEE Computer Society Press, 2005.

[11] Bastian Blankenburg, Matthias Klusch, and Onn Shehory. Fuzzy kernel-stable coalitions between rational agents. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-03). ACM Press, 2003.

[12] Christopher H. Brooks and Edmund H. Durfee. Congregation formation in multiagent systems. Autonomous Agents and Multi-Agent Systems, 7(1-2):145–170, 2003.

[13] Philippe Caillou, Samir Aknine, and Suzanne Pinson. A multi-agent method for forming and dynamic restructuring of Pareto optimal coalitions. In AAMAS '02: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1074–1081, New York, NY, USA, 2002. ACM Press.

[14] Georgios Chalkiadakis and Craig Boutilier. Bayesian reinforcement learning for coalition formation under uncertainty. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-04), 2004.

[15] Georgios Chalkiadakis and Craig Boutilier. Sequential decision making in repeated coalition formation under uncertainty. In Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-08), May 2008.

[16] Georgios Chalkiadakis and Craig Boutilier. Sequentially optimal repeated coalition formation. Autonomous Agents and Multi-Agent Systems, to appear, 2010. Published online November 2010.

[17] Georgios Chalkiadakis, Edith Elkind, and Nicholas R. Jennings. Simple coalitional games with beliefs. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 85–90, San Francisco, CA, USA, 2009. Morgan Kaufmann Publishers Inc.

[18] Georgios Chalkiadakis, Edith Elkind, Evangelos Markakis, and Nicholas R. Jennings. Overlapping coalition formation. In Proceedings of the 4th International Workshop on Internet and Network Economics (WINE 2008), pages 307–321, 2008.


[19] Georgios Chalkiadakis, Edith Elkind, Evangelos Markakis, and Nicholas R. Jennings. Cooperative games with overlapping coalitions. Journal of Artificial Intelligence Research, 39:179–216, 2010.

[20] Vincent Conitzer and Tuomas Sandholm. Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proceedings of the 19th National Conference on Artificial Intelligence (AAAI-04), pages 219–225, 2004.

[21] Viet Dung Dang, Rajdeep K. Dash, Alex Rogers, and Nicholas R. Jennings. Overlapping coalition formation for efficient data fusion in multi-sensor networks. In Proceedings of the Twenty-First Conference on Artificial Intelligence (AAAI-06), pages 635–640, 2007.

[22] Nathan Griffiths and Michael Luck. Coalition formation through motivation and trust. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-03), New York, NY, USA, 2003. ACM Press.

[23] Bryan Horling and Victor Lesser. A survey of multi-agent organizational paradigms. The Knowledge Engineering Review, 19:281–316, 2004.

[24] Samuel Ieong and Yoav Shoham. Bayesian coalitional games. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI-08), pages 95–100, 2008.

[25] Steven P. Ketchpel. Forming coalitions in the face of uncertain rewards. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 414–419, August 1994.

[26] Matthias Klusch and Onn Shehory. Coalition formation among rational information agents. In Rudy van Hoe, editor, Seventh European Workshop on Modelling Autonomous Agents in a Multi-Agent World, Eindhoven, The Netherlands, 1996.

[27] Matthias Klusch and Onn Shehory. A polynomial kernel-oriented coalition algorithm for rational information agents. In Proceedings of the Second International Conference on Multi-Agent Systems, pages 157–164. AAAI Press, December 1996.

[28] Sarit Kraus, Onn Shehory, and Gilad Taase. Coalition formation with uncertain heterogeneous information. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1–8. ACM Press, 2003.

[29] Hoong Chuin Lau and Lei Zhang. Task allocation via multi-agent coalition formation: Taxonomy, algorithms and complexity. In Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2003), pages 346–350, November 2003.

[30] Katia Lerman and Onn Shehory. Coalition formation for large-scale electronic markets. In Proceedings of the Fourth International Conference on MultiAgent Systems (ICMAS-2000). IEEE Computer Society, July 2000.

[31] M. Mares. Fuzzy cooperative games: Cooperation with vague expectations. Studies in Fuzziness and Soft Computing, 72, 2001.

[32] Carlos Mérida-Campos and Steven Willmott. Modelling coalition formation over time for iterative coalition games. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-04), 2004.

[33] Carlos Mérida-Campos and Steven Willmott. The effect of heterogeneity on coalition formation in iterated request for proposal scenarios. In Proceedings of the Fourth European Workshop on Multi-Agent Systems (EUMAS 06), 2006.

[34] I. Nishizaki and M. Sakawa. Fuzzy and multiobjective games for conflict resolution. Studies in Fuzziness and Soft Computing, 64, 2001.

[35] Naoki Ohta, Vincent Conitzer, Yasufumi Satoh, Atsushi Iwasaki, and Makoto Yokoo. Anonymity-proof Shapley value: Extending Shapley value for coalitional games in open environments. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-09), May 2009.

[36] Michal Pechoucek, Vladimír Marík, and Jaroslav Bárta. A knowledge-based approach to coalition formation. IEEE Intelligent Systems, 17(3):17–25, 2002.

[37] Ariel D. Procaccia and Jeffrey S. Rosenschein. The communication complexity of coalition formation among autonomous agents. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-06), May 2006.

[38] Tuomas Sandholm and Victor R. Lesser. Coalitions among computationally bounded agents. Artificial Intelligence, 94(1-2):99–137, 1997.

[39] Onn Shehory and Sarit Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence, 101(1-2):165–200, May 1998.

[40] Onn Shehory and Sarit Kraus. Feasible formation of coalitions among autonomous agents in nonsuperadditive environments. Computational Intelligence, 15:218–251, 1999.

[41] Leen-Kiat Soh and Costas Tsatsoulis. Satisficing coalition formation among agents. In AAMAS '02: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1062–1063, New York, NY, USA, 2002. ACM Press.


[42] Fernando Tohmé and Tuomas Sandholm. Coalition formation processes with belief revision among bounded-rational self-interested agents. Journal of Logic and Computation, 9(6):793–815, 1999.

[43] Julita Vassileva, Silvia Breban, and Michael Horsch. Agent reasoning mechanism for making long-term coalitions based on decision making and trust. Computational Intelligence, 18(4):583–595, 2002.

[44] Makoto Yokoo, Vincent Conitzer, Tuomas Sandholm, Naoki Ohta, and Atsushi Iwasaki. Coalitional games in open anonymous environments. In Proceedings of the Twentieth National Conference on Artificial Intelligence, pages 509–515. AAAI Press / The MIT Press, 2005.

[45] Makoto Yokoo, Vincent Conitzer, Tuomas Sandholm, Naoki Ohta, and Atsushi Iwasaki. A compact representation scheme for coalitional games in open anonymous environments. In Proceedings of the Twenty-First National Conference on Artificial Intelligence. AAAI Press / The MIT Press, 2006.