GAME THEORY REPORT

CONTENTS

ABSTRACT I

LIST OF TABLES II

LIST OF SYMBOLS III

CHAPTER TITLE

1 INTRODUCTION TO GAME THEORY

1.1 DEFINITION OF GAME THEORY

1.2 HISTORY

2 BASICS OF GAME THEORY

2.1 GAME

2.2 MOVE

2.3 STRATEGY

2.4 PLAYERS

2.5 TIMING

2.6 CONFLICTING GOALS

2.7 REPETITION

2.8 PAYOFF

2.9 INFORMATION AVAILABILITY

2.10 EQUILIBRIUM

3 TYPES OF GAMES

3.1 CO-OPERATIVE AND NON-CO-OPERATIVE

3.2 SYMMETRIC AND ASYMMETRIC

3.3 ZERO SUM AND NON ZERO SUM

3.4 SIMULTANEOUS AND SEQUENTIAL

3.5 PERFECT INFORMATION AND IMPERFECT INFORMATION

3.6 COMBINATORIAL GAMES

3.7 INFINITELY LONG GAMES

3.8 DISCRETE AND CONTINUOUS GAMES

3.9 MANY-PLAYER AND POPULATION GAMES


3.10 METAGAMES

4 REPRESENTATION OF GAMES

4.1 EXTENSIVE FORM

4.2 NORMAL FORM

4.3 CHARACTERISTIC FUNCTION FORM

4.4 PARTITION FUNCTION FORM

5 NASH EQUILIBRIUM

5.1 INTRODUCTION

5.2 HISTORY

5.3 INFORMAL DEFINITION

5.4 FORMAL DEFINITION

5.5 APPLICATION

5.6 STABILITY

5.7 OCCURRENCES

5.8 COMPUTING NASH EQUILIBRIUM

5.9 PROOF OF EXISTENCE

5.10 PURE AND MIXED STRATEGIES

5.11 MIXED STRATEGY

6 POPULAR PROBLEMS IN GAME THEORY

6.1 PRISONER’S DILEMMA

6.2 CHICKEN GAME

7 GENERAL AND APPLIED USES OF GAME THEORY

7.1 ECONOMICS AND BUSINESS

7.2 POLITICAL SCIENCE

7.3 BIOLOGY

7.4 COMPUTER SCIENCE AND LOGIC

7.5 PHILOSOPHY

8 CONCLUSION


28/02/2012

GAME THEORY

Game theory is a method of studying strategic decision making. More formally, it is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers." It is the study of rational behaviour broken down into strategic decisions and expressed in the language of mathematics: if you do this, I will decide to do this, and then you will probably decide to do that. Game theorists try to find equations that describe a problem as completely as possible in order to predict the outcome that benefits each individual in a group. The most universally beneficial outcome is considered the logically best outcome, and the solution. But the central paradox of game theory is that it seeks to mathematically explain decisions that are frequently made in the grip of intense emotion. Game theory sets out to analyse and explain rational behaviour; plants, evolving mutely as they do, are rational, while people, who screw up, break hearts and move markets, are not always.

The difficulty with game theory is that its attempt to explain everything in one unified theory results in a patchwork of math that is possibly too ugly to be elegantly unified, and one that cannot possibly explain everything. A chess game, maybe. The worth tomorrow of the mutual fund in your retirement account, no. As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviours. It was initially developed in economics to understand a large collection of economic behaviours, including behaviours of firms, markets, and consumers. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviours as well.

In addition to being used to describe, predict, and explain behaviour, game theory has also been used to develop theories of ethical or normative behaviour and to prescribe such behaviour. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behaviour. Game-theoretic arguments of this type can be found as far back as Plato.

Soumyashree Bilwar


List of Tables:

1. Extensive form of Game representation

2. Normal form of Game representation

3. Pure co-ordination game

4. Prisoner’s Dilemma

5. Chicken Game

List of Symbols:

1. Δ - symmetric difference

2. ∑ - summation

3. ∏ - product

4. Ω - omega

5. δ - delta

6. α - alpha

7. + - addition

8. - - subtraction

9. * - multiplication

10. > - greater than

11. < - less than

12. = - equal


CHAPTER 1

INTRODUCTION TO GAME THEORY

1.1 Definition of Game Theory

The study of mathematical models of conflict and cooperation between

intelligent rational decision-makers.

Game theory is a method of studying strategic decision making. More

formally, it is "the study of mathematical models of conflict and cooperation

between intelligent rational decision-makers." An alternative term suggested "as

a more descriptive name for the discipline" is interactive decision theory. Game

theory is mainly used in economics, political science, and psychology, as well as

logic and biology. The subject first addressed zero-sum games, in which one person's gains exactly equal the net losses of the other participant(s). Today, however, game theory applies to a wide range of behavioural relations, and has developed into an umbrella term for the logical side of decision science, covering both humans and non-humans such as computers. Classic uses include the notion of equilibrium in numerous games, where each player has found or developed a tactic that cannot improve his results, given the other players' approaches.

Modern game theory began with the idea regarding the existence of mixed-

strategy equilibrium in two-person zero-sum games and its proof by John von

Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on

continuous mappings into compact convex sets, which became a standard method

in game theory and mathematical economics. His paper was followed by his 1944

book Theory of Games and Economic Behaviour, with Oskar Morgenstern,

which considered cooperative games of several players. The second edition of

this book provided an axiomatic theory of expected utility, which allowed


mathematical statisticians and economists to treat decision-making under

uncertainty.

This theory was developed extensively in the 1950s by many scholars. Game

theory was later explicitly applied to biology in the 1970s, although similar

developments go back at least as far as the 1930s. Game theory has been widely

recognized as an important tool in many fields. Eight game-theorists have won

the Nobel Memorial Prize in Economic Sciences, and John Maynard Smith was

awarded the Crafoord Prize for his application of game theory to biology.

1.2 History

The Danish mathematician Zeuthen proved that a mathematical model has a

winning strategy by using Brouwer's fixed point theorem. In his 1938

book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a

minimax theorem for two-person zero-sum matrix games only when the pay-off

matrix was symmetric. Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was later proved false.

Game theory did not really exist as a unique field until John von

Neumann published a paper in 1928. Von Neumann's work in game theory culminated in the 1944 book Theory of Games and Economic Behavior, written with Oskar Morgenstern, which considered cooperative games of several players. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games.

During this time period, work on game theory was primarily focused

on cooperative game theory, which analyses optimal strategies for groups of

individuals, presuming that they can enforce agreements between them about

proper strategies.


In 1950, the first discussion of the prisoner's dilemma appeared, and an

experiment was undertaken on this game at the RAND Corporation. Around this

same time, John Nash developed a criterion for mutual consistency of players'

strategies, known as Nash equilibrium, applicable to a wider variety of games

than the criterion proposed by von Neumann and Morgenstern. This equilibrium

is sufficiently general to allow for the analysis of non-cooperative games in

addition to cooperative ones.

In the 1970s, game theory was extensively applied in biology, largely as a

result of the work of John Maynard Smith and his evolutionarily stable strategy.

In addition, the concepts of correlated equilibrium, trembling hand perfection,

and common knowledge were introduced and analysed.

In 2005, game theorists Thomas Schelling and Robert Aumann followed

Nash, Selten and Harsanyi as Nobel Laureates. Schelling worked on dynamic

models, early examples of evolutionary game theory. Aumann contributed more

to the equilibrium school, introducing an equilibrium coarsening, correlated

equilibrium, and developing an extensive formal analysis of the assumption of

common knowledge and of its consequences.

In 2007, Leonid Hurwicz, together with Eric Maskin and Roger Myerson, was

awarded the Nobel Prize in Economics "for having laid the foundations

of mechanism design theory." Myerson's contributions include the notion

of proper equilibrium, and an important graduate text: Game Theory, Analysis of

Conflict (Myerson 1997). Hurwicz introduced and formalized the concept

of incentive compatibility.


CHAPTER 2

BASICS OF GAME THEORY

Game theory is the process of modelling the strategic interaction between two

or more players in a situation containing set rules and outcomes. While used in a

number of disciplines, game theory is most notably used as a tool within the study

of economics. The economic application of game theory can be a valuable tool to aid

in the fundamental analysis of industries, sectors and any strategic interaction

between two or more firms. Here, we'll take an introductory look at game theory and

the terms involved, and introduce you to a simple method of solving games, called

backwards induction.

2.1 Game

A conflict of interest among n individuals or groups (players). There

exists a set of rules that define the terms of exchange of information and

pieces, the conditions under which the game begins, and the possible legal

exchanges in particular conditions. The entirety of the game is defined by all

the moves to that point, leading to an outcome.

2.2 Move

The way in which the game progresses between states through exchange of

pieces and information. Moves are defined by the rules of the game and can be made either in alternating fashion or simultaneously for all players. Moves may be made by choice or by chance. For example, choosing a card from a deck or rolling a die is a


chance move with known probabilities. On the other hand, asking for cards in

blackjack is a choice move.

2.3 Strategy

A strategy is a set of best choices for a player for an entire game. It is an

overlying plan that cannot be upset by occurrences in the game itself.

2.4 Players

The number of participants may be two or more. A player can be a single

individual or a group with the same objective.

2.5 Timing

The conflicting parties decide simultaneously.

2.6 Conflicting goals

Each party is interested in maximizing his or her goal at the expense of the

other.

2.7 Repetition

Most instances involve repetitive solutions.


2.8 Payoff

The payoffs for each combination of decisions are known by all parties.

2.9 Information Availability

All parties are aware of all pertinent information. Each player knows all

possible courses of action open to the opponent as well as anticipated payoffs.

2.10 Equilibrium

The point in a game where both players have made their decisions and an

outcome is reached.


CHAPTER 3

TYPES OF GAMES

3.1 Cooperative or non-cooperative

A game is cooperative if the players are able to form binding

commitments. For instance the legal system requires them to adhere to their

promises. In non-cooperative games this is not possible. Often it is assumed

that communication among players is allowed in cooperative games, but not in

non-cooperative ones. However, this classification on two binary criteria has

been questioned, and sometimes rejected (Harsanyi 1974).

Of the two types of games, non-cooperative games are able to model

situations to the finest details, producing accurate results. Cooperative games

focus on the game at large. Considerable efforts have been made to link the

two approaches. The so-called Nash-programme has already established many

of the cooperative solutions as non-cooperative equilibria. Hybrid games

contain cooperative and non-cooperative elements. For instance, coalitions of

players are formed in a cooperative game, but these play in a non-cooperative

fashion.

3.2 Symmetric and asymmetric

A symmetric game is a game where the payoffs for playing a particular

strategy depend only on the other strategies employed, not on who is playing

them. If the identities of the players can be changed without changing the

payoff to the strategies, then a game is symmetric. Many of the commonly

studied 2×2 games are symmetric. The standard representations of chicken,

the prisoner's dilemma, and the stag hunt are all symmetric games. Some


scholars would consider certain asymmetric games as examples of these

games as well. However, the most common payoffs for each of these games

are symmetric.

Most commonly studied asymmetric games are games where there are

not identical strategy sets for both players. For instance, the ultimatum

game and similarly the dictator game have different strategies for each player.

It is possible, however, for a game to have identical strategies for both players,

yet be asymmetric: the payoffs to the two players can differ even when their strategy sets are identical.

3.3 Zero-sum and non-zero-sum

Zero-sum games are a special case of constant-sum games, in which

choices by players can neither increase nor decrease the available resources.

In zero-sum games the total benefit to all players in the game, for every

combination of strategies, always adds to zero (more informally, a player

benefits only at the equal expense of others). Poker exemplifies a zero-sum

game (ignoring the possibility of the house's cut), because one wins exactly

the amount one's opponents lose. Other zero-sum games include matching

pennies and most classical board games including Go and chess.

Many games studied by game theorists (including the

infamous prisoner's dilemma) are non-zero-sum games, because

some outcomes have net results greater or less than zero. Informally, in non-

zero-sum games, a gain by one player does not necessarily correspond with a

loss by another.

Constant-sum games correspond to activities like theft and gambling,

but not to the fundamental economic situation in which there are

potential gains from trade. It is possible to transform any game into a

(possibly asymmetric) zero-sum game by adding an additional dummy player

(often called "the board"), whose losses compensate the players' net winnings.


3.4 Simultaneous and sequential

Simultaneous games are games where both players move

simultaneously, or if they do not move simultaneously, the later players are

unaware of the earlier players' actions (making them effectively

simultaneous). Sequential games (or dynamic games) are games where later

players have some knowledge about earlier actions. This need not be perfect

information about every action of earlier players; it might be very little

knowledge. For instance, a player may know that an earlier player did not

perform one particular action, while he does not know which of the other

available actions the first player actually performed.

The difference between simultaneous and sequential games is captured in the different representations discussed in Chapter 4. Often, normal form is used to

represent simultaneous games, and extensive form is used to represent

sequential ones. The transformation of extensive form to normal form is one-way,

meaning that multiple extensive form games correspond to the same normal

form. Consequently, notions of equilibrium for simultaneous games are

insufficient for reasoning about sequential games; see subgame perfection.

3.5 Perfect information and imperfect information

An important subset of sequential games consists of games of perfect

information. A game is one of perfect information if all players know the

moves previously made by all other players. Thus, only sequential games can

be games of perfect information, since in simultaneous games not every player

knows the actions of the others. Recreational games of perfect information include chess, go, and mancala. Many card games are games of

imperfect information, for instance poker or contract bridge.


Perfect information is often confused with complete information,

which is a similar concept. Complete information requires that every player

know the strategies and payoffs available to the other players but not

necessarily the actions taken. Games of incomplete information can be

reduced, however, to games of imperfect information by introducing "moves

by nature" .

3.6 Combinatorial games

Games in which the difficulty of finding an optimal strategy stems

from the multiplicity of possible moves are called combinatorial games.

Examples include chess and go. Games that involve imperfect or incomplete

information may also have a strong combinatorial character, for

instance backgammon. There is no unified theory addressing combinatorial

elements in games. There are, however, mathematical tools that can solve

particular problems and answer some general questions.

Games of perfect information have been studied in combinatorial

game theory, which has developed novel representations, e.g. surreal numbers,

as well as combinatorial and algebraic (and sometimes non-constructive)

proof methods to solve games of certain types, including some "loopy" games

that may result in infinitely long sequences of moves. These methods address

games with higher combinatorial complexity than those usually considered in

traditional (or "economic") game theory. A typical game that has been solved

this way is hex. A related field of study, drawing from computational

complexity theory, is game complexity, which is concerned with estimating

the computational difficulty of finding optimal strategies.

Research in artificial intelligence has addressed both perfect and

imperfect (or incomplete) information games that have very complex

combinatorial structures (like chess, go, or backgammon) for which no

provable optimal strategies have been found. The practical solutions involve


computational heuristics, like alpha-beta pruning or use of artificial neural

networks trained by reinforcement learning, which make games more tractable

in computing practice.

3.7 Infinitely long games

Games, as studied by economists and real-world game players, are

generally finished in finitely many moves. Pure mathematicians are not so

constrained, and set theorists in particular study games that last for infinitely

many moves, with the winner (or other payoff) not known until after all those

moves are completed.

The focus of attention is usually not so much on what is the best way

to play such a game, but simply on whether one or the other player has

a winning strategy. (It can be proven, using the axiom of choice, that there are

games—even with perfect information, and where the only outcomes are

"win" or "lose"—for which neither player has a winning strategy.) The

existence of such strategies, for cleverly designed games, has important

consequences in descriptive set theory.

3.8 Discrete and continuous games

Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts

can be extended, however. Continuous games allow players to choose a

strategy from a continuous strategy set. For instance, Cournot competition is

typically modeled with players' strategies being any non-negative quantities,

including fractional quantities.

3.9 Many-player and population games


Games with an arbitrary, but finite, number of players are often called

n-person games (Luce & Raiffa 1957). Evolutionary game theory considers

games involving a population of decision makers, where the frequency with

which a particular decision is made can change over time in response to the

decisions made by all individuals in the population. In biology, this is

intended to model (biological) evolution, where genetically programmed

organisms pass along some of their strategy programming to their offspring.

In economics, the same theory is intended to capture population changes

because people play the game many times within their lifetime, and

consciously (and perhaps rationally) switch strategies (Webb 2007).

3.10 Metagames

These are games the play of which is the development of the rules for

another game, the target or subject game. Metagames seek to maximize the

utility value of the rule set developed. The theory of metagames is related

to mechanism design theory.

The term metagame analysis is also used to refer to a practical

approach developed by Nigel Howard (Howard 1971) whereby a situation is

framed as a strategic game in which stakeholders try to realise their objectives

by means of the options available to them. Subsequent developments have led

to the formulation of Confrontation Analysis.


CHAPTER 4

REPRESENTATION OF GAMES

4.1 Type 1: Extensive form

Fig 4.1: Extensive form representation of a game (game tree)

The extensive form can be used to formalize games with a time

sequencing of moves. Games here are played on trees (see Fig 4.1).

Here each vertex (or node) represents a point of choice for a player. The

player is specified by a number listed by the vertex. The lines out of the vertex

represent a possible action for that player. The payoffs are specified at the

bottom of the tree. The extensive form can be viewed as a multi-player

generalization of a decision tree.

In the game shown in Fig 4.1, there are two players. Player 1 moves

first and chooses either F or U. Player 2 sees Player 1's move and then

chooses A or R. Suppose that Player 1 chooses U and then Player

2 chooses A, then Player 1 gets 8 and Player 2 gets 2.

The extensive form can also capture simultaneous-move games and

games with imperfect information. To represent it, either a dotted line

connects different vertices to represent them as being part of the same

information set (i.e., the players do not know at which point they are), or a

closed line is drawn around them.
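Chapter 2 mentioned backward induction as a simple method of solving games; the sketch below (added here, not part of the original report) applies it to a two-stage tree like the one described above. Only the (U, A) payoff of (8, 2) is given in the text, so the remaining payoffs are hypothetical placeholders chosen purely for illustration.

```python
# A minimal backward-induction sketch for a two-stage game like Fig 4.1.
# Only the (U, A) payoff (8, 2) is given in the text; the other payoffs
# here are assumed placeholders for illustration.
payoffs = {
    ("U", "A"): (8, 2),   # from the text
    ("U", "R"): (0, 0),   # assumed
    ("F", "A"): (3, 1),   # assumed
    ("F", "R"): (1, 1),   # assumed
}

# Player 2 moves second and best-responds to each possible Player 1 move.
best_reply = {}
for move1 in ("U", "F"):
    best_reply[move1] = max(("A", "R"), key=lambda m2: payoffs[(move1, m2)][1])

# Player 1 moves first, anticipating Player 2's best replies.
choice1 = max(("U", "F"), key=lambda m1: payoffs[(m1, best_reply[m1])][0])
print("Backward-induction outcome:", choice1, best_reply[choice1],
      "with payoffs", payoffs[(choice1, best_reply[choice1])])
```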


4.2 Type 2: Normal form

Fig 4.2: Normal form or payoff matrix of a 2-player, 2-strategy game

                          Player 2 chooses Left    Player 2 chooses Right
Player 1 chooses Up              4, 3                     -1, -1
Player 1 chooses Down            0, 0                      3, 4

The normal (or strategic form) game is usually represented by

a matrix which shows the players, strategies, and pay-offs (see Fig 4.2 above). More generally it can be represented by any function that associates

a payoff for each player with every possible combination of actions. In the

accompanying example there are two players; one chooses the row and the

other chooses the column. Each player has two strategies, which are specified

by the number of rows and the number of columns. The payoffs are provided

in the interior. The first number is the payoff received by the row player

(Player 1 in our example); the second is the payoff for the column player

(Player 2 in our example). Suppose that Player 1 plays Up and that Player 2

plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.

Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blow-up in the size of the representation, making it computationally impractical.
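The payoff matrix of Fig 4.2 can also be written down directly in code. The sketch below (an illustration added here, not part of the original report) stores the matrix as a dictionary and finds the pure-strategy Nash equilibria by brute force; checking unilateral deviations anticipates the definition given in Chapter 5.

```python
# A minimal sketch of the normal-form game of Fig 4.2 and a brute-force
# search for pure-strategy Nash equilibria.
payoffs = {
    ("Up", "Left"): (4, 3),   ("Up", "Right"): (-1, -1),
    ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4),
}
rows, cols = ("Up", "Down"), ("Left", "Right")

def is_nash(r, c):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
    return row_ok and col_ok

equilibria = [(r, c) for r in rows for c in cols if is_nash(r, c)]
print(equilibria)  # [('Up', 'Left'), ('Down', 'Right')]
```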

4.3 Type 3: Characteristic function form


In games that possess transferable utility, separate rewards are not given;

rather, the characteristic function decides the payoff of each unity. The idea is

that the unity that is 'empty', so to speak, does not receive a reward at all.

The origin of this form is to be found in John von Neumann and Oskar

Morgenstern's book; when looking at these instances, they supposed that when a coalition C forms, it plays against the complementary coalition (N\C) as if two individuals were playing a normal game. The equilibrium payoff of C is a characteristic function.

Although there are differing methods that help determine coalitional values from normal games, not every game in characteristic function form can be derived from a normal-form game.

Such characteristic functions have expanded to describe games where

there is no transferable utility.

4.4 Type 4: Partition function form

The characteristic function form ignores the possible externalities of

coalition formation. In the partition function form the payoff of a coalition

depends not only on its members, but also on the way the rest of the players

are partitioned (Thrall & Lucas 1963).


CHAPTER 5

NASH EQUILIBRIUM

5.1 Introduction

In game theory, Nash equilibrium (named after John Forbes Nash,

who proposed it) is a solution concept of a game involving two or more

players, in which each player is assumed to know the equilibrium strategies of

the other players, and no player has anything to gain by changing only his

own strategy unilaterally. If each player has chosen a strategy and no player

can benefit by changing his or her strategy while the other players keep theirs

unchanged, then the current set of strategy choices and the corresponding

payoffs constitute a Nash equilibrium. The practical and general implication is

that when players also act in the interests of the group, then they are better off

than if they acted in their individual interests alone.

Stated simply, Amy and Phil are in Nash equilibrium if Amy is

making the best decision she can, taking into account Phil's decision, and Phil

is making the best decision he can, taking into account Amy's decision.

Likewise, a group of players are in Nash equilibrium if each one is making the

best decision that he or she can, taking into account the decisions of the

others. However, Nash equilibrium does not necessarily mean the best payoff

for all the players involved; in many cases, all the players might improve their

payoffs if they could somehow agree on strategies different from the Nash

equilibrium: e.g., competing businesses forming a cartel in order to increase

their profits.

5.2 History

A version of the Nash equilibrium concept was first used by Antoine

Augustin Cournot in his theory of oligopoly (1838). In Cournot's theory, firms

choose how much output to produce to maximize their own profit. However,

the best output for one firm depends on the outputs of others. A Cournot

equilibrium occurs when each firm's output maximizes its profits given the

output of the other firms, which is a pure strategy Nash Equilibrium.

The modern game-theoretic concept of Nash Equilibrium is instead

defined in terms of mixed strategies, where players choose a probability

distribution over possible actions.

Since the development of the Nash equilibrium concept, game

theorists have discovered that it makes misleading predictions (or fails to

make a unique prediction) in certain circumstances. Therefore they have

proposed many related solution concepts (also called 'refinements' of Nash

equilibrium) designed to overcome perceived flaws in the Nash concept. One

particularly important issue is that some Nash equilibria may be based on

threats that are not 'credible'.

5.3 Informal definition

Informally, a set of strategies is a Nash equilibrium if no player can do

better by unilaterally changing his or her strategy. To see what this means,

imagine that each player is told the strategies of the others. Suppose then that

each player asks himself or herself: "Knowing the strategies of the other

players, and treating the strategies of the other players as set in stone, can I

benefit by changing my strategy?"

If any player would answer "Yes", then that set of strategies is not a

Nash equilibrium. But if every player prefers not to switch (or is indifferent

between switching and not) then the set of strategies is a Nash equilibrium.

Thus, each strategy in a Nash equilibrium is a best response to all other

strategies in that equilibrium.

The Nash equilibrium may also have non-rational consequences in

sequential games because players may "threaten" each other with non-rational


moves. For such games the subgame perfect Nash equilibrium may be more

meaningful as a tool of analysis.

5.4 Formal definition

Let (S, f) be a game with n players, where S_i is the strategy set for player i, S = S_1 × S_2 × ... × S_n is the set of strategy profiles and f = (f_1(x), ..., f_n(x)) is the payoff function for x ∈ S. Let x_i be a strategy of player i and x_{-i} be a strategy profile of all players except player i. When each player i ∈ {1, ..., n} chooses strategy x_i, resulting in the strategy profile x = (x_1, ..., x_n), player i obtains payoff f_i(x). Note that the payoff depends on the strategy profile chosen, i.e., on the strategy chosen by player i as well as the strategies chosen by all the other players. A strategy profile x* ∈ S is a Nash equilibrium (NE) if no unilateral deviation in strategy by any single player is profitable for that player, that is,

    f_i(x*_i, x*_{-i}) ≥ f_i(x_i, x*_{-i})   for every player i and every x_i ∈ S_i.

When the inequality above holds strictly (with > instead of ≥) for all players and all feasible alternative strategies, the equilibrium is classified as a strict Nash equilibrium. If instead, for some player, there is exact equality between x*_i and some other strategy in the set S_i, then the equilibrium is classified as a weak Nash equilibrium.
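The definition above translates directly into a check over unilateral deviations. The sketch below is an illustrative addition (the function name and the two-player coordination example are assumptions, not from the report): it returns True exactly when no single player can raise his own payoff by deviating alone.

```python
# A small sketch that checks the formal definition: x* is a Nash equilibrium
# if f_i(x_i, x*_{-i}) <= f_i(x*) for every player i and every x_i in S_i.
def is_nash_equilibrium(strategy_sets, payoff, profile):
    for i, S_i in enumerate(strategy_sets):
        current = payoff(profile)[i]
        for x_i in S_i:
            deviation = profile[:i] + (x_i,) + profile[i + 1:]
            if payoff(deviation)[i] > current:
                return False  # a profitable unilateral deviation exists
    return True

# Example: a two-player coordination game paying 1 to each player when choices match.
def payoff(profile):
    a, b = profile
    return (1, 1) if a == b else (0, 0)

print(is_nash_equilibrium([("A", "B"), ("A", "B")], payoff, ("A", "A")))  # True
print(is_nash_equilibrium([("A", "B"), ("A", "B")], payoff, ("A", "B")))  # False
```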

5.5 Applications

Game theorists use the Nash equilibrium concept to analyze the

outcome of the strategic interaction of several decision makers. In other

words, it provides a way of predicting what will happen if several people or

several institutions are making decisions at the same time, and if the outcome

depends on the decisions of the others. The simple insight underlying John

Nash's idea is that we cannot predict the result of the choices of multiple

decision makers if we analyze those decisions in isolation. Instead, we must

ask what each player would do, taking into account the decision-making of

the others.

Nash equilibrium has been used to analyze hostile situations

like war and arms races (see Prisoner's dilemma), and also how conflict may

be mitigated by repeated interaction (see Tit-for-tat). It has also been used to

study to what extent people with different preferences can cooperate

(see Battle of the sexes), and whether they will take risks to achieve a

cooperative outcome (see Stag hunt). It has been used to study the adoption

of technical standards, and also the occurrence of bank runs and currency

crises (see Coordination game). Other applications include traffic flow

(see Wardrop's principle), how to organize auctions (see auction theory), the

outcome of efforts exerted by multiple parties in the education process, [3] and

even penalty kicks in soccer (see Matching pennies).

5.6 Stability

The concept of stability, useful in the analysis of many kinds of

equilibria, can also be applied to Nash equilibria.

A Nash equilibrium for a mixed strategy game is stable if a small

change (specifically, an infinitesimal change) in probabilities for one

player leads to a situation where two conditions hold:

1. the player who did not change has no better strategy in the new circumstance

2. the player who did change is now playing with a strictly worse strategy.

If these cases are both met, then a player with the small change in his mixed-

strategy will return immediately to the Nash equilibrium. The equilibrium is said

to be stable. If condition one does not hold then the equilibrium is unstable. If


only condition one holds then there are likely to be an infinite number of optimal

strategies for the player who changed. John Nash showed that the latter situation

could not arise in a range of well-defined games.

In the "driving game" example above there are both stable and unstable

equilibria. The equilibria involving mixed-strategies with 100% probabilities are

stable. If either player changes his probabilities slightly, they will be both at a

disadvantage, and his opponent will have no reason to change his strategy in turn.

The (50%,50%) equilibrium is unstable. If either player changes his probabilities,

then the other player immediately has a better strategy at either (0%, 100%) or

(100%, 0%).

Stability is crucial in practical applications of Nash equilibria, since the

mixed-strategy of each player is not perfectly known, but has to be inferred from

statistical distribution of his actions in the game. In this case unstable equilibria

are very unlikely to arise in practice, since any minute change in the proportions

of each strategy seen will lead to a change in strategy and the breakdown of the

equilibrium.

The Nash equilibrium defines stability only in terms of unilateral deviations.

In cooperative games such a concept is not convincing enough. Strong Nash

equilibrium allows for deviations by every conceivable coalition. Formally,

a Strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the

actions of its complements as given, can cooperatively deviate in a way that

benefits all of its members. However, the Strong Nash concept is sometimes

perceived as too "strong" in that the environment allows for unlimited private

communication. In fact, Strong Nash equilibrium has to be Pareto efficient. As a

result of these requirements, Strong Nash is too rare to be useful in many

branches of game theory. However, in games such as elections with many more

players than possible outcomes, it can be more common than a stable

equilibrium.

A refined Nash equilibrium known as coalition-proof Nash

equilibrium (CPNE)[6] occurs when players cannot do better even if they are

allowed to communicate and make "self-enforcing" agreements to deviate. Every


correlated strategy supported by iterated strict dominance and on the Pareto

frontier is a CPNE.[8] Further, it is possible for a game to have a Nash equilibrium

that is resilient against coalitions less than a specified size, k. CPNE is related to

the theory of the core.

Finally, in the eighties, building with great depth on such ideas, Mertens-stable equilibria were introduced as a solution concept. Mertens-stable equilibria satisfy both forward induction and backward induction. In a game-theory context, stable equilibria now usually refer to Mertens-stable equilibria.

5.7 Occurrences

If a game has a unique Nash equilibrium and is played among players under

certain conditions, then the NE strategy set will be adopted. Sufficient conditions

to guarantee that the Nash equilibrium is played are:

1. The players all will do their utmost to maximize their expected payoff as

described by the game.

2. The players are flawless in execution.

3. The players have sufficient intelligence to deduce the solution.

4. The players know the planned equilibrium strategy of all of the other players.

5. The players believe that a deviation in their own strategy will not cause

deviations by any other players.

6. There is common knowledge that all players meet these conditions, including

this one. So, not only must each player know the other players meet the

conditions, but also they must know that they all know that they meet them,

and know that they know that they know that they meet them, and so on.

Where the conditions are not met


Examples of game theory problems in which these conditions are not met:

1. The first condition is not met if the game does not correctly describe the

quantities a player wishes to maximize. In this case there is no particular

reason for that player to adopt an equilibrium strategy. For instance, the

prisoner’s dilemma is not a dilemma if either player is happy to be jailed

indefinitely.

2. Intentional or accidental imperfection in execution. For example, a computer

capable of flawless logical play facing a second flawless computer will result

in equilibrium. Introduction of imperfection will lead to its disruption either

through loss to the player who makes the mistake, or through negation of

the common knowledge criterion leading to possible victory for the player.

(An example would be a player suddenly putting the car into reverse in

the game of chicken, ensuring a no-loss no-win scenario).

3. In many cases, the third condition is not met because, even though the

equilibrium must exist, it is unknown due to the complexity of the game, for

instance in Chinese chess. Or, if known, it may not be known to all players,

as when playing tic-tac-toe with a small child who desperately wants to win

(meeting the other criteria).

4. The criterion of common knowledge may not be met even if all players do, in

fact, meet all the other criteria. Players wrongly distrusting each other's

rationality may adopt counter-strategies to expected irrational play on their

opponents’ behalf. This is a major consideration in “Chicken” or an arms

race, for example.

Where the conditions are met

Due to the limited conditions in which NE can actually be observed,

they are rarely treated as a guide to day-to-day behaviour, or observed in

practice in human negotiations. However, as a theoretical concept

in economics and evolutionary biology, the NE has explanatory power. The


payoff in economics is utility (or sometimes money), and in evolutionary biology it is gene transmission; both are the fundamental bottom line of survival. Researchers who apply game theory in these fields claim that strategies

failing to maximize these for whatever reason will be competed out of the

market or environment, which are ascribed the ability to test all strategies.

This conclusion is drawn from the "stability" theory above.

5.8 Computing Nash Equilibrium

If a player A has a dominant strategy s_A, then there exists a Nash equilibrium in which A plays s_A. In the case of two players A and B, there exists a Nash equilibrium in which A plays s_A and B plays a best response to s_A. If s_A is a strictly dominant strategy, A plays s_A in all Nash equilibria. If both A and B have strictly dominant strategies, there exists a unique Nash equilibrium in which each plays his strictly dominant strategy.

In games with mixed strategy Nash equilibria, the probability of a

player choosing any particular strategy can be computed by assigning a

variable to each strategy that represents a fixed probability for choosing that

strategy. In order for a player to be willing to randomize, his expected payoff

for each strategy should be the same. In addition, the sum of the probabilities

for each strategy of a particular player should be 1. This creates a system of

equations from which the probabilities of choosing each strategy can be

derived.
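The indifference method just described can be carried out by hand or in a few lines of code. The sketch below (an added illustration, using the payoffs of the Fig 4.2 game rather than anything from this section) sets each player's two expected payoffs equal and solves for the mixing probabilities.

```python
# A minimal sketch of the indifference method for the 2x2 game of Fig 4.2.
# Row plays Up with probability p, column plays Left with probability q.
# Column indifferent between Left and Right: 3p = -1*p + 4*(1 - p)  ->  8p = 4
# Row indifferent between Up and Down:       4q - 1*(1 - q) = 3*(1 - q)  ->  8q = 4
from fractions import Fraction

p = Fraction(4, 8)
q = Fraction(4, 8)
print("Row plays Up with probability", p)      # 1/2
print("Column plays Left with probability", q)  # 1/2

# Sanity check: each player's two pure strategies now earn the same expected payoff.
up, down = 4 * q - 1 * (1 - q), 0 * q + 3 * (1 - q)
left, right = 3 * p + 0 * (1 - p), -1 * p + 4 * (1 - p)
assert up == down and left == right
```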

5.9 Proof of existence

Proof using the Kakutani fixed point theorem

Nash's original proof (in his thesis) used Brouwer's fixed point

theorem (e.g., see below for a variant). We give a simpler proof via


the Kakutani fixed point theorem, following Nash's 1950 paper (he

credits David Gale with the observation that such a simplification is possible).

To prove the existence of a Nash Equilibrium, let r_i(σ_{-i}) be the best response of player i to the strategies of all other players:

    r_i(σ_{-i}) = argmax_{σ_i ∈ Σ_i} u_i(σ_i, σ_{-i}).

Here, σ ∈ Σ, where Σ = Σ_1 × ... × Σ_N, is a mixed strategy profile in the set of all mixed strategies and u_i is the payoff function for player i. Define a set-valued function r: Σ → 2^Σ such that r(σ) = r_1(σ_{-1}) × ... × r_N(σ_{-N}). The existence of a Nash Equilibrium is equivalent to r having a fixed point.

Kakutani's fixed point theorem guarantees the existence of a fixed point if the following four conditions are satisfied.

1. Σ is compact, convex, and nonempty.

2. r(σ) is nonempty.

3. r(σ) is convex.

4. r is upper hemicontinuous.

Condition 1 is satisfied from the fact that Σ is a simplex and thus compact. Convexity follows from players' ability to mix strategies. Σ is nonempty as long as players have strategies.

Condition 2 is satisfied because players maximize expected payoffs, which is a continuous function over a compact set. The Weierstrass extreme value theorem guarantees that there is always a maximum value.

Condition 3 is satisfied as a result of mixed strategies. Suppose σ_i, σ'_i ∈ r_i(σ_{-i}); then λσ_i + (1 - λ)σ'_i ∈ r_i(σ_{-i}), i.e. if two strategies maximize payoffs, then a mix between the two strategies will yield the same payoff.

Condition 4 is satisfied by way of Berge's maximum theorem. Because u_i is continuous and Σ is compact, r is upper hemicontinuous.

Therefore, there exists a fixed point in r and a Nash Equilibrium.

When Nash made this point to John von Neumann in 1949, von Neumann

famously dismissed it with the words, "That's trivial, you know. That's just

a fixed point theorem." (See Nasar, 1998, p. 94.)

Alternate proof using the Brouwer fixed-point theorem

We have a game G = (N, A, u), where N is the number of players and A = A_1 × ... × A_N is the action set for the players. All of the action sets A_i are finite. Let Δ = Δ_1 × ... × Δ_N denote the set of mixed strategies for the players. The finiteness of the A_i ensures the compactness of Δ.

We can now define the gain functions. For a mixed strategy σ ∈ Δ, we let the gain for player i on action a ∈ A_i be

    Gain_i(σ, a) = max(0, u_i(a, σ_{-i}) - u_i(σ)).

The gain function represents the benefit a player gets by unilaterally changing his strategy. We now define g = (g_1, ..., g_N), where

    g_i(σ)(a) = σ_i(a) + Gain_i(σ, a)

for σ ∈ Δ and a ∈ A_i. We see that

    ∑_{a ∈ A_i} g_i(σ)(a) = 1 + ∑_{a ∈ A_i} Gain_i(σ, a) ≥ 1.

We now use g to define f: Δ → Δ as follows. Let

    f_i(σ)(a) = g_i(σ)(a) / ∑_{b ∈ A_i} g_i(σ)(b)

for a ∈ A_i. It is easy to see that each f_i(σ) is a valid mixed strategy in Δ_i. It is also easy to check that each f_i(σ)(a) is a continuous function of σ, and hence f is a continuous function. Now Δ is the cross product of a finite number of compact convex sets, and so we get that Δ is also compact and convex. Therefore we may apply the Brouwer fixed point theorem to f. So f has a fixed point in Δ; call it σ*.

We claim that σ* is a Nash Equilibrium in G. For this purpose, it suffices to show that

    Gain_i(σ*, a) = 0   for every player i and every action a ∈ A_i.

This simply states that no player gains any benefit by unilaterally changing his strategy, which is exactly the necessary condition for being a Nash Equilibrium.

Now assume that the gains are not all zero. Then there exist a player i and an action a ∈ A_i such that Gain_i(σ*, a) > 0. Note then that

    C := ∑_{b ∈ A_i} g_i(σ*)(b) = 1 + ∑_{b ∈ A_i} Gain_i(σ*, b) > 1.

We denote by Gain_i(σ*, ·) the gain vector indexed by the actions in A_i. Since σ* is a fixed point of f, we have σ*_i = f_i(σ*), and therefore

    C · σ*_i = σ*_i + Gain_i(σ*, ·),   i.e.   (C - 1) · σ*_i = Gain_i(σ*, ·),

so σ*_i is some positive scaling of the vector Gain_i(σ*, ·).

Now we claim that σ*_i(b) · (u_i(b, σ*_{-i}) - u_i(σ*)) = σ*_i(b) · Gain_i(σ*, b) for every action b ∈ A_i. To see this, we first note that if Gain_i(σ*, b) > 0 then this is true by definition of the gain function. Now assume that Gain_i(σ*, b) = 0. By our previous statements, σ*_i(b) = (C - 1)^{-1} · Gain_i(σ*, b) = 0, and so the left term is zero, giving us that the entire expression is zero, as needed.

So we finally have that

    0 = u_i(σ*) - u_i(σ*)
      = ∑_{b ∈ A_i} σ*_i(b) · u_i(b, σ*_{-i}) - u_i(σ*)
      = ∑_{b ∈ A_i} σ*_i(b) · (u_i(b, σ*_{-i}) - u_i(σ*))
      = ∑_{b ∈ A_i} σ*_i(b) · Gain_i(σ*, b)
      > 0,

where the last inequality follows since Gain_i(σ*, ·) is a non-zero vector and σ*_i is a positive scaling of it. But this is a clear contradiction, so all the gains must indeed be zero. Therefore σ* is a Nash Equilibrium for G, as needed.

5.10 Pure and mixed strategies

A pure strategy provides a complete definition of how a player will

play a game. In particular, it determines the move a player will make for any

situation he or she could face. A player's strategy set is the set of pure

strategies available to that player.


A mixed strategy is an assignment of a probability to each pure

strategy. This allows for a player to randomly select a pure strategy. Since

probabilities are continuous, there are infinitely many mixed strategies

available to a player, even if their strategy set is finite.

Of course, one can regard a pure strategy as a degenerate case of a

mixed strategy, in which that particular pure strategy is selected with

probability 1 and every other strategy with probability 0.

A totally mixed strategy is a mixed strategy in which the player

assigns a strictly positive probability to every pure strategy. (Totally mixed

strategies are important for equilibrium refinements such as trembling hand

perfect equilibrium.)

5.11 Mixed strategy

Illustration

Consider the payoff matrix shown below (known as a pure coordination game). Here one player chooses the row and the other chooses a column. The row player receives the first payoff, the column player the second.

         A       B
    A   1, 1    0, 0
    B   0, 0    1, 1

    Fig 5.1: Pure coordination game

If row opts to play A with probability 1 (i.e. play A for sure), then he is said to be playing a pure strategy. If column opts to flip a coin and play A if the coin lands heads and B if the coin lands tails, then she is said to be playing a mixed strategy, and not a pure strategy.
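The coin-flipping column player of Fig 5.1 can be checked numerically. The following sketch (added for illustration; the 50% probability and the simulation are assumptions, not from the report) shows that against a fair coin the row player earns the same expected payoff from either pure strategy, which is exactly the indifference that makes mixing possible.

```python
# A small sketch for the coordination game of Fig 5.1: if column flips a fair
# coin between A and B, the row player's expected payoff is the same for
# either pure choice, so the row player is willing to mix as well.
import random

payoff_row = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 1}

def expected_row_payoff(row_choice, prob_col_A=0.5):
    return (prob_col_A * payoff_row[(row_choice, "A")]
            + (1 - prob_col_A) * payoff_row[(row_choice, "B")])

print(expected_row_payoff("A"), expected_row_payoff("B"))  # 0.5 0.5

# Simulating the coin-flipping (mixed) strategy gives roughly the same value.
random.seed(0)
samples = [payoff_row[("A", random.choice("AB"))] for _ in range(10_000)]
print(sum(samples) / len(samples))  # approximately 0.5
```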

Significance

In his famous paper, John Forbes Nash proved that there is

an equilibrium for every finite game. One can divide Nash equilibria into two

types. Pure strategy Nash equilibria are Nash equilibria where all players are

playing pure strategies. Mixed strategy Nash equilibria are equilibria where at

least one player is playing a mixed strategy. While Nash proved that every

finite game has a Nash equilibrium, not all have pure strategy Nash equilibria.

For an example of a game that does not have a Nash equilibrium in pure

strategies, see Matching pennies. However, many games do have pure strategy

Nash equilibria (e.g. the Coordination game, the Prisoner's dilemma, the Stag

hunt). Further, games can have both pure strategy and mixed strategy

equilibria.

A disputed meaning

During the 1980s, the concept of mixed strategies came under heavy

fire for being "intuitively problematic".[2] Randomization, central in mixed

strategies, lacks behavioral support. Seldom do people make their choices

following a lottery. This behavioral problem is compounded by the cognitive

difficulty that people are unable to generate random outcomes without the aid

of a random or pseudo-random generator.[2]

In 1991, game theorist Ariel Rubinstein described alternative ways of

understanding the concept. The first, due to Harsanyi (1973), [4] is

called purification, and supposes that the mixed strategies interpretation

merely reflects our lack of knowledge of the players' information and

decision-making process. Apparently random choices are then seen as


consequences of non-specified, payoff-irrelevant exogenous factors.

However, it is unsatisfying to have results that hang on unspecified factors.[3]

A second interpretation imagines the game players standing for a large

population of agents. Each of the agents chooses a pure strategy, and the

payoff depends on the fraction of agents choosing each strategy. The mixed

strategy hence represents the distribution of pure strategies chosen by each

population. However, this does not provide any justification for the case when

players are individual agents.

Later, Aumann and Brandenburger (1995), [5] re-interpreted Nash

equilibrium as an equilibrium in beliefs, rather than actions. For instance,

in Rock-paper-scissors an equilibrium in beliefs would have each

player believing the other was equally likely to play each strategy. This

interpretation weakens the predictive power of Nash equilibrium, however,

since it is possible in such an equilibrium for each player to actually play a

pure strategy of Rock.

Ever since, game theorists' attitude towards mixed-strategy-based results has been ambivalent. Mixed strategies are still widely used for their

capacity to provide Nash equilibria in games where no equilibrium in pure

strategies exists, but the model does not specify why and how players

randomize their decisions.


CHAPTER 6

POPULAR PROBLEMS IN GAME THEORY

6.1 Prisoner’s Dilemma

Co-operation is usually analyzed in game theory by means of a non-zero-sum game called the Prisoner's Dilemma.

                                  Prisoner B stays silent     Prisoner B confesses
                                  (co-operates)               (defects)
Prisoner A stays silent           Each serves one month       Prisoner A: 1 year;
(co-operates)                                                 Prisoner B: goes free
Prisoner A confesses              Prisoner A: goes free;      Each serves 3 months
(defects)                         Prisoner B: 1 year

Analysis of the Prisoner's Dilemma

- Each player gains when both co-operate (each serves only 1 month).
- If one player co-operates and the other defects, the defector gains more (the defector goes free while the co-operator serves 1 year).
- If both defect, both lose (each serves 3 months), but not as much as the "cheated" co-operator whose co-operation is not returned.
- The Prisoner's Dilemma has a single Nash equilibrium: both players defect.
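A short sketch (added here; the jail terms from the table are encoded as negative months, which is an assumption of the example rather than the report's notation) confirms the analysis: confessing is a dominant strategy, and mutual defection is the single Nash equilibrium.

```python
# A minimal sketch: jail terms from the table expressed as negative months.
months = {  # (A's payoff, B's payoff)
    ("silent", "silent"): (-1, -1),
    ("silent", "confess"): (-12, 0),
    ("confess", "silent"): (0, -12),
    ("confess", "confess"): (-3, -3),
}
actions = ("silent", "confess")

def is_nash(a, b):
    """True when neither prisoner can improve his payoff by deviating alone."""
    a_ok = all(months[(a, b)][0] >= months[(a2, b)][0] for a2 in actions)
    b_ok = all(months[(a, b)][1] >= months[(a, b2)][1] for b2 in actions)
    return a_ok and b_ok

print([(a, b) for a in actions for b in actions if is_nash(a, b)])
# [('confess', 'confess')]
```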


6.2 Chicken Game

Chicken is a famous game where two people drive on a collision course

straight towards each other. Whoever swerves is considered a ‘chicken’ and loses, but

if nobody swerves, they will both crash.

                          Driver B swerves     Driver B goes straight
Driver A swerves          Tie, Tie             Lose, Win
Driver A goes straight    Win, Lose            Crash

Analysis of the Chicken Game

- Both players lose a little when both swerve: the result is a tie.
- One player wins when one swerves and the other goes straight.
- If both go straight, both lose far more than they would have lost by swerving, because they crash.
- The Chicken Game has two pure-strategy Nash equilibria: one driver swerves while the other goes straight.
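The same brute-force check can be applied to Chicken. The numeric payoffs below (tie = 0, lose = -1, win = 1, crash = -10) are assumed values chosen only to respect the ordering in the table; with any such numbers the game has exactly the two pure-strategy equilibria noted above.

```python
# A minimal sketch of Chicken with assumed numeric payoffs.
payoffs = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),
}
actions = ("swerve", "straight")

def is_nash(a, b):
    """True when neither driver can improve his payoff by deviating alone."""
    a_ok = all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in actions)
    b_ok = all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in actions)
    return a_ok and b_ok

print([(a, b) for a in actions for b in actions if is_nash(a, b)])
# [('swerve', 'straight'), ('straight', 'swerve')]
```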

CHAPTER 7

GENERAL AND APPLIED USES OF GAME THEORY

7.1 Economics and Business

Game theory is a major method used in mathematical economics and

business for modelling competing behaviours of interacting agents.

Applications include a wide array of economic phenomena and approaches,

such as auctions, bargaining, fair division, duopolies, oligopolies, social

network formation, agent-based computational economics, general

equilibrium, mechanism design, and voting systems, and across such broad

areas as experimental economics, behavioral economics, information

economics, industrial organization, and political economy.

This research usually focuses on particular sets of strategies known

as equilibria in games. These "solution concepts" are usually based on what

is required by norms of rationality. In non-cooperative games, the most

famous of these is the Nash equilibrium. A set of strategies is a Nash

equilibrium if each represents a best response to the other strategies. So, if all

the players are playing the strategies in a Nash equilibrium, they have no

unilateral incentive to deviate, since their strategy is the best they can do given

what others are doing.

The payoffs of the game are generally taken to represent the utility of

individual players. Often in modeling situations the payoffs represent money,

which presumably corresponds to an individual's utility. This assumption,

however, can be faulty.

A prototypical paper on game theory in economics begins by

presenting a game that is an abstraction of some particular economic situation.

One or more solution concepts are chosen, and the author demonstrates which

strategy sets in the presented game are equilibria of the appropriate type.

Naturally, one might wonder to what use this information should be put.

Economists and business professors suggest two primary uses (noted

above): descriptive and prescriptive.


7.2 Political science

The application of game theory to political science is focused in the

overlapping areas of fair division, political economy, public choice, war

bargaining, positive political theory, and social choice theory. In each of these

areas, researchers have developed game-theoretic models in which the players

are often voters, states, special interest groups, and politicians.

For early examples of game theory applied to political science, see the

work of Anthony Downs. In his book An Economic Theory of

Democracy (Downs 1957) he applies the Hotelling firm location model to the

political process. In the Downsian model, political candidates commit to

ideologies on a one-dimensional policy space. The theorist shows how the

political candidates will converge to the ideology preferred by the median

voter.

A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their

intentions to other states. In contrast, it is difficult to know the intentions of

nondemocratic leaders, what effect concessions will have, and if promises will

be kept. Thus there will be mistrust and unwillingness to make concessions if

at least one of the parties in a dispute is a non-democracy (Levy &

Razin 2003).

7.3 Biology

Unlike economics, the payoffs for games in biology are often

interpreted as corresponding to fitness. In addition, the focus has been less

on equilibria that correspond to a notion of rationality, but rather on ones that

would be maintained by evolutionary forces. The best known equilibrium in

biology is known as the evolutionarily stable strategy (or ESS), and was first

introduced in (Smith & Price 1973). Although its initial motivation did not


involve any of the mental requirements of the Nash equilibrium, every ESS is

a Nash equilibrium.

In biology, game theory has been used to understand many different

phenomena. It was first used to explain the evolution (and stability) of the

approximate 1:1 sex ratios. Fisher (1930) suggested that the 1:1 sex ratios are a

result of evolutionary forces acting on individuals who could be seen as trying

to maximize their number of grandchildren.

Additionally, biologists have used evolutionary game theory and the

ESS to explain the emergence of animal communication (Harper & Maynard

Smith 2003). The analysis of signaling games and other communication

games has provided some insight into the evolution of communication among

animals. For example, the mobbing behavior of many species, in which a

large number of prey animals attack a larger predator, seems to be an example

of spontaneous emergent organization. Ants have also been shown to exhibit

feed-forward behavior akin to fashion, see Butterfly Economics.

Biologists have used the game of chicken to analyze fighting behavior

and territoriality.

Maynard Smith, in the preface to Evolution and the Theory of Games, writes, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.

One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach even when it endangers that individual's chance of survival. All of these actions increase the overall fitness of a group, but occur at a cost to the individual.

Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary reasoning behind this selection with the inequality c < b*r: the cost (c) to the altruist must be less than the benefit (b) to the recipient multiplied by the coefficient of relatedness (r). The more closely related two organisms are, the more likely altruism becomes, because they share many of the same alleles. This means that an altruistic individual can forgo having offspring of its own, because by ensuring that the alleles of its close relative are passed on (through the survival of that relative's offspring) an equivalent share of its own alleles is propagated. Helping a full sibling, for example (in diploid animals), involves a coefficient of relatedness of 1/2, because on average an individual shares half of its alleles with its sibling. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring. Similarly, if information other than genetic information (e.g. epigenetic or cultural information such as religion and science) is considered to persist through time, the playing field becomes larger still and the discrepancies smaller.
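To make the inequality concrete, here is a minimal Python sketch (an illustration added here, not taken from the report) that checks whether Hamilton's rule c < b*r favors a given altruistic act; the numerical values are arbitrary example inputs.

```python
# Hamilton's rule: kin selection favors an altruistic act when c < b * r,
# i.e. the cost to the altruist is outweighed by the benefit to the
# recipient discounted by their coefficient of relatedness.
def altruism_favored(cost, benefit, relatedness):
    """Return True if Hamilton's rule c < b*r holds for the given values."""
    return cost < benefit * relatedness

# Example: helping a full sibling (r = 0.5) at a cost of one offspring is
# favored only if it gains the sibling more than two extra offspring.
print(altruism_favored(cost=1.0, benefit=3.0, relatedness=0.5))   # True
print(altruism_favored(cost=1.0, benefit=1.5, relatedness=0.5))   # False
```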

7.4 Computer science and logic

Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.

Separately, game theory has played a role in online algorithms. In particular, the k-server problem has in the past been referred to in terms of games with moving costs and request-answer games (Ben-David, Borodin & Karp et al. 1994). Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, and especially of online algorithms.
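In its standard minimax form (paraphrased here for clarity; the notation below is ours, not the report's), Yao's principle can be written as

\[
\max_{x}\; \mathbb{E}_{R}\big[\operatorname{cost}(A_{R}, x)\big] \;\ge\; \min_{A \in \mathcal{A}}\; \mathbb{E}_{x \sim \mu}\big[\operatorname{cost}(A, x)\big],
\]

where A_R is a randomized algorithm (a distribution R over the deterministic algorithms in the class 𝒜), x ranges over inputs, and μ is any fixed distribution over inputs. In other words, to lower-bound the worst-case expected cost of every randomized algorithm, it is enough to exhibit one input distribution that is hard for all deterministic algorithms.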

The emergence of the internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory and, within it, algorithmic mechanism design combine computational algorithm design and analysis of complex systems with economic theory.

7.5 Philosophy

Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This latter suggestion has been pursued by several philosophers since Lewis (Skyrms (1996), Grim, Kokalis, and Alai-Tafti et al. (2004)). Following Lewis's (1969) game-theoretic account of conventions, Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.

In ethics, some authors have attempted to pursue the project, begun by Thomas Hobbes, of deriving morality from self-interest. Since games like the Prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)).

Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games, including the Prisoner's dilemma, Stag hunt, and the Nash bargaining game, as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1999)).

Some assumptions used in parts of game theory have been challenged in philosophy; for example, psychological egoism holds that rationality reduces to self-interest, a claim debated among philosophers.

CHAPTER 8

CONCLUSION

"Managers have much to learn from game theory - provided they use it

to clarify their thinking, not as a substitute for business experience" .

For old-fashioned managers, business was a branch of warfare - a way of 'capturing markets' and 'making a killing'. Today, however, the language is all about working with suppliers, building alliances, and thriving on trust and loyalty. Management theorists like to point out that there is such a thing as 'win-win', and that business feuds can end up hurting both parties.

But this can be taken too far. Microsoft's success has helped Intel, but it has been hell for Apple Computer. Instead, business needs a new way of thinking that makes room for collaboration as well as competition, for mutual benefits as well as trade-offs. Enter game theory.

Stripped to its essentials, game theory is a tool for understanding how decisions affect each other. Until the theory came along, economists assumed that firms could ignore the effects of their behaviour on the actions of rivals, which was fine when competition was perfect or a monopolist held sway, but was otherwise misleading. Game theorists argue that firms can learn from game players: no card player plans his strategy without thinking about how other players are planning theirs.

Economists have long used game theory to illuminate practical problems, such as what to do about global warming or about fetuses with Down's syndrome. Now business people have started to wake up to the theory's possibilities. McKinsey, a consultancy, is setting up a practice in game theory. Firms as diverse as Xerox, an office-equipment maker, Bear Stearns, an investment bank, and PepsiCo, a soft-drinks giant, are all interested. They will no doubt seize on 'Co-opetition' (Doubleday, $24.95), because it is written by two of the leading names in the field, Adam Brandenburger, of Harvard Business School, and Barry Nalebuff, of the Yale School of Management. It also helps by using readable case studies rather than complex mathematics.

The main practical use of game theory, say the authors, is to help a firm decide when to compete and when to co-operate. Broadly speaking, the time to co-operate is when you are increasing the size of the pie, and the time to compete is when you are dividing it up. The authors also argue that, to get a full picture of their business, managers need to think about a new category of firms, 'complementers', which lead your customers to value your products more highly than if they had only your product. Hot-dog makers and Colman's mustard are complementers: buy one and you are more likely to buy the other. So are Intel and Microsoft.

The most important thing to know about a game is who the players are. A small change in the number of players can have unexpected consequences. NutraSweet managed to keep the predator out, but only after Coca-Cola and Pepsi used the threat of competition to force NutraSweet to lower its prices.

When competition between two players benefits third parties in this way, there is scope for the beneficiary to split its gains. Holland Sweetener in effect gave up its share of the gains that it had helped Coke and Pepsi to win. BellSouth, a telephone company, was wiser: it insisted on being paid to play. The firm said that it would bid against Craig McCaw for control of LIN Broadcasting Corporation only if LIN paid it $54m for entering the fray and a further $15m in expenses if it lost the bid.

One way for a player to do well in a game is to make itself indispensable. Nintendo built its video-games business in the late 1980s by restricting software developers to making five games each, keeping retailers on short rations, and doing much of the development in-house. Nobody else had any bargaining power. By contrast, IBM stored up trouble for itself in personal computers by allowing Microsoft and Intel to establish a lock on the two most valuable bits of the business.

A second technique is to tempt lots of competing players into the game - for instance by increasing the prize. That is what American Express did in 1994 when it organised a coalition with other big companies to purchase health care. The potential contract was so large that a host of health-care providers got into a bidding war.

A third technique is to make intelligent use of a resource which is worth more to your customer than to you. In 1993 TWA lifted itself off the bottom of the airline league by tearing out several rows of seats that were usually empty because the carrier was so unpopular, giving passengers more leg-room - and making the airline popular once more.

Summary

Game theory is exciting because although the principles are simple, the applications are far-reaching.

Game theory is the study of co-operative and non-co-operative approaches to games and social situations in which participants must choose between individual benefits and collective benefits.

Game theory can be used to design credible commitments, threats, or promises, or to assess propositions and statements offered by others.

Non-cooperative game theory … has brought a fairly flexible language to many issues, together with a collection of notions of "similarity" that has allowed economists to move insights from one context to another and to probe the reach of these insights. But too often it, and in particular equilibrium analysis, gets taken too seriously at levels where its current behavioural assumptions are inappropriate. We (economic theorists and economists more broadly) need to keep a better sense of proportion about when and how to use it. And we (economic and game theorists) would do well to see what can be done about developing that sense of proportion formally.

REFERENCES

Books:

James D. Miller (2003). Game Theory at Work. McGraw-Hill Companies.

Avinash Dixit (2004). Thinking Strategically: Competitive Edge in Business, Politics and Everyday Life. MIT Press.

Somdeb Lahiri (2003). Existence of Equilibrium in Discrete Market Games. Tata Mc.

J.D. Williams (2005). The Compleat Strategyst: Being a Primer on the Theory of Games of Strategy. MIT Press.

Adam M. Brandenburger (2004). Co-Opetition: A Revolution Mindset That Combines Competition and Cooperation: The Game Theory Strategy That's Changing the Game of Business. McGraw-Hill Companies.

Tom Siegfried (2002). A Beautiful Math: John Nash, Game Theory, and the Modern Quest for a Code of Nature. British Press.

URL:

http://en.wikipedia.org/wiki/Game_theory

http://faculty.lebow.drexel.edu/mccainr/top/eco/game/game-toc.html

http://www2.owen.vanderbilt.edu/mike.shor/courses/gametheory/quiz/problems2.html

http://en.wikipedia.org/wiki/Nash_equilibrium

https://class.coursera.org/gametheory/auth/

http://www.dklevine.com/general/whatis.htm
