Game Theory and Psychology
Andrew M. Colman, Eva M. Krockow

LAST MODIFIED: 27 JUNE 2017
DOI: 10.1093/OBO/9780199828340-0192

Introduction

Game theory is a branch of decision theory focusing on interactive decisions, applicable whenever the actions of two or more decision makers jointly determine an outcome that affects them all. Strategic reasoning amounts to deciding how to act to achieve a desired objective, taking into account how others will act and the fact that they will also reason strategically. The primitive concepts of the theory are players (decision makers), strategies (alternatives among which each player chooses), and payoffs (numerical representations of the players’ preferences among the possible outcomes of the game). The theory’s fundamental assumptions are (i) that all players have consistent preferences and are instrumentally rational in the sense of invariably choosing an alternative that maximizes their individual payoffs, relative to their knowledge and beliefs at the time and (ii) that the specification of the game and the players’ preferences and rationality are common knowledge among the players (explained under Common Knowledge). Game theory amounts to working out the implications of these assumptions in particular classes of games and thereby determining how rational players will act. Psychology is the study of the nature, functions, and phenomena of behavior and mental experience, and two branches of psychology provide bridges to and from game theory: cognitive psychology, concerned with all forms of cognition, including decision making, and social psychology, concerned with how individual behavior and mental experience are influenced by other people. Psychology uses empirical research methods, including controlled experiments, and its usefulness for studying games emerges from three considerations. First, many games turn out to lack determinate game-theoretic solutions, and psychological theories and empirical evidence are therefore required to discover and understand how people play them. Second, human decision makers have bounded rationality and are rarely blessed with full common knowledge; consequently, except in the simplest cases, they do not necessarily choose strategies that maximize their payoffs even when determinate game-theoretic solutions exist. Third, human decision makers have other-regarding preferences and sometimes do not even try to maximize their personal payoffs, without regard to the payoffs of others, and psychological theory and empirical research are therefore required to provide a realistic account of real-life strategic interaction. Psychology has investigated strategic interaction since the 1950s; behavioral game theory, a branch of the emergent subdiscipline of behavioral economics, has used similar techniques since the late 1980s.

General Overviews of Game Theory

There are many excellent textbooks devoted to game theory and behavioral game theory, varying in their levels of mathematical difficulty and relevance to psychology. Luce and Raiffa 1957 is the most influential and widely read early text, and it has remained useful for succeeding generations of students and researchers. It offers an excellent introduction to standard concepts of game theory, including Nash equilibrium, the most fundamental solution concept for games of all types. A Nash equilibrium is an outcome of any game in which the strategy chosen by each player is a best reply to the strategies chosen by the other player(s), in the sense that no other choice would have yielded a better payoff, given the strategy choices of the other player(s), and as a consequence no player has cause to regret the chosen strategy when the outcome is revealed. Binmore 1991 is useful for mathematically minded beginners and more advanced readers, and the simpler text Gibbons 1992 conveys the basic mathematics more briefly. Colman 1995 reviews the fundamental ideas of game theory and related experimental research from a psychological perspective. Camerer 2003 provides the first wide-ranging survey of behavioral game theory in book form. In an influential monograph, Schelling 1960 uses game theory brilliantly to illuminate psychological features of human strategic interaction.
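
The best-reply condition can be checked mechanically. The following minimal sketch (in Python; the 2 x 2 payoff numbers are hypothetical and illustrative, not taken from any of the texts cited here) enumerates the pure-strategy Nash equilibria of a two-player game by testing every outcome against unilateral deviations.

# Hypothetical 2 x 2 game: strategies 0 and 1 for each player;
# payoffs[(i, j)] = (payoff to Player 1, payoff to Player 2).
payoffs = {
    (0, 0): (2, 2), (0, 1): (0, 3),
    (1, 0): (3, 0), (1, 1): (1, 1),
}

def is_nash_equilibrium(i, j):
    # Best reply for Player 1: no alternative row does better against column j;
    # best reply for Player 2: no alternative column does better against row i.
    best_for_1 = all(payoffs[(i, j)][0] >= payoffs[(k, j)][0] for k in (0, 1))
    best_for_2 = all(payoffs[(i, j)][1] >= payoffs[(i, k)][1] for k in (0, 1))
    return best_for_1 and best_for_2

print([cell for cell in payoffs if is_nash_equilibrium(*cell)])  # [(1, 1)] for these payoffs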

Binmore, K. 1991. Fun and games: A text on game theory. Lexington, MA: Heath.

This is a basic text on mathematical game theory written by a leading game theorist. It presents mathematical aspects of the theory exceptionally clearly, and readers with a basic knowledge of school mathematics should be able to understand it. Parts of it are far from elementary, making it interesting and informative even for readers with an intermediate-level understanding of game theory.

Binmore, K. 2007. Game theory: A very short introduction. Oxford: Oxford Univ. Press.

This very short introduction to the formal aspects of the theory outlines the basic ideas in an easily digestible form.

Camerer, C. F. 2003. Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton Univ. Press.

This is a magisterial review of almost the whole of behavioral game theory up to the early 2000s. This book covers many key topics in remarkable depth, and much of it is essentially psychological in flavor.

Colman, A. M. 1995. Game theory and its applications in the social and biological sciences. 2d ed. London: Routledge.

This monograph presents the basic ideas of game theory from a psychological perspective, reviews experimental evidence up to the mid-1990s, and discusses applications of game theory to voting, evolution of cooperation, and moral philosophy. An appendix contains the most elementary available self-contained proof of the minimax theorem (see Strategic Reasoning Before Game Theory). The first edition was published in 1982.

Gibbons, R. 1992. A primer in game theory. Hemel Hempstead, UK: Harvester Wheatsheaf.

A more orthodox and slightly simpler and shorter basic text on mathematical game theory than Binmore 1991, widely prescribed in standard university courses and easily accessible to readers with a basic knowledge of school mathematics.

Luce, R. D., and H. Raiffa. 1957. Games and decisions: Introduction and critical survey. New York: Wiley.

This was the text that first brought game theory to the attention of behavioral and social scientists, being much more accessible than the book by von Neumann and Morgenstern 1944 (cited under Strategic Reasoning Before Game Theory) that had preceded it. It is a brilliant textbook with some simple mathematical content, and it has remained highly relevant and useful for subsequent generations of researchers and scholars.

Schelling, T. C. 1960. The strategy of conflict. Cambridge, MA: Harvard Univ. Press.

This fascinating monograph, by a psychologically minded economist, was largely responsible for its author’s Nobel Prize. It has hardly any mathematical content but instead uses the conceptual framework of game theory to focus on aspects of interactive decision making that lie outside the formal theory. This is a must-read for anyone interested in game theory in psychology. It was reprinted with a new preface in 1980.

Strategic Reasoning before Game Theory

The emergence of game theory is usually traced back to a proof in von Neumann 1928 of a key theorem (the minimax theorem, establishing the existence of, and characterizing, what were later called “Nash equilibria” for strictly competitive games) or to the first edition of Theory of Games and Economic Behavior (von Neumann and Morgenstern 1944). However, early records confirm that people had been aware of problems of strategic interaction long before then. For example, the Babylonian Talmud, one of the core texts of Rabbinic Judaism, written in the 3rd to the 5th centuries CE, contains a detailed discussion of the fairest way to divide up the estate of a person who dies owing several creditors different amounts, the total owed exceeding the value of the estate, and the solution that is suggested (not simple proportionality) seems counterintuitive and baffled scholars for centuries. Aumann and Maschler 1985 shows that the Talmudic suggestion coincides with a game-theoretic solution called the “nucleolus,” the full mathematical details of which were first worked out and published in Schmeidler 1969. Evidence also exists of strategic reasoning in the Bible (see Games of Strategy in the Bible) and in ancient Rome (see Strategic Voting in Ancient Rome).

Aumann, R. J., and M. Maschler. 1985. Game theoretic analysis of a bankruptcy problem from the Talmud. Journal of Economic Theory 36.2: 195–213.

This article, the first author of which is the Nobel laureate and game theorist Robert J. Aumann, describes the problem of dividing up a bankrupt estate among creditors. It shows, in easily understood language, how the solution recommended in the Babylonian Talmud, which deviates from simple proportionality, coincides with the nucleolus of Schmeidler 1969.

Schmeidler, D. 1969. The nucleolus of a characteristic function game. SIAM Journal on Applied Mathematics 17.6: 1163–1170.

This is the original presentation of the theory of the nucleolus, considered an important though advanced “solution concept” for a particular class of games.

von Neumann, J. 1928. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen 100.1: 295–320.

This article marks the emergence of game theory, according to many historians of game theory, and also according to von Neumann, who pointed out that earlier work by the French mathematician Borel had failed to prove the key minimax theorem that established the theory. The proof is long and labyrinthine, and far simpler proofs of the minimax theorem have since emerged, although none of them could be called easy.

von Neumann, J., and O. Morgenstern. 1944. Theory of games and economic behavior. Princeton, NJ: Princeton Univ. Press.

This was the first major text on game theory. Although it is worth looking at, it is densely mathematical, using obsolete notation that renders it largely inaccessible even to professional mathematicians in the 21st century. A second edition, including an appendix with an axiomatization of expected utility theory, appeared in 1947, and a third edition in 1953.

Games of Strategy in the Bible

Brams 1980 analyzes more than twenty stories in the Old Testament, dating from between the 6th and the 4th centuries BCE, according to their strategic properties. The players include God and various biblical characters, such as Adam and Eve, Cain and Abel, Jacob and Esau, and Joseph and his brothers. Brams seeks to show that the players generally acted rationally according to the prescriptions of game theory.

Brams, S. J. 1980. Biblical games: Strategic analysis of stories in the Old Testament. Cambridge, MA: MIT.

This book is simple and easy to understand, and the conclusions seem quite convincing. It is well written and instructive to anyone new to game theory, because it helps to clarify many of the key concepts of the theory, such as dominant strategies and Nash equilibrium. It provides a master class in elementary mathematical modeling.

Strategic Voting in Ancient Rome

A letter written by the Roman lawyer and senator Pliny the Younger, the Epistle to Titus Aristo, contains a sophisticated game-theoretic analysis of strategic or tactical voting that had occurred in the Senate. The consul Afranius Dexter had been found dead, and it was not clear whether he had died by his own hand or been killed by his freedmen. When the matter came before the Senate, Pliny and some other senators (let us label this group A) wanted to acquit the freedmen, some others (B) wanted to banish them to an island, and a third group (C) wanted to condemn them to death. The largest group was A, but the senators in C realized that, instead of voting sincerely, they could vote strategically with B, and their combination would then outvote A and get a result (B) that they obviously preferred to A. To see this clearly, imagine a small committee with three members supporting a motion A, two supporting B, and two supporting C. Suppose that A, B, and C lie along a single dimension: the motions might be to spend a large amount of money (A), a moderate amount (B), or a small amount (C) on some project. If the plurality voting system (first past the post) is used and everyone votes sincerely, then A will win, but if the two committee members who support C vote strategically for B, then the committee’s decision will be B and not A, and we may reasonably assume that they prefer B to A. Pliny managed to prevent this maneuver by suggesting a different, sequential voting procedure, and he outmaneuvered the senators who wanted to condemn the freedmen to death. Farquharson 1969 builds a whole theory of strategic voting out of this idea and establishes a branch of research devoted to this important application of game theory. Robin Farquharson was a brilliant South African game theorist who suffered from bipolar disorder and was not well enough to check the proofs of his book, which is consequently riddled with errors, and for this reason Niemi 1983 provides a useful companion when reading Farquharson’s book.
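
The committee example can be verified by direct tally. A minimal illustrative sketch in Python (the 3-2-2 split follows the example above; the full preference orderings assigned to the B and C supporters are assumptions consistent with the single spending dimension described, and the code is not drawn from Farquharson 1969):

from collections import Counter

# Seven committee members: 3 prefer A, 2 prefer B, 2 prefer C (most preferred first).
preferences = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2 + [("C", "B", "A")] * 2

def plurality_winner(votes):
    tally = Counter(votes)
    return max(tally, key=tally.get), dict(tally)

sincere_votes = [ranking[0] for ranking in preferences]      # everyone votes for their favorite
print(plurality_winner(sincere_votes))                       # ('A', {'A': 3, 'B': 2, 'C': 2})

# The two C supporters switch to their second preference, B.
strategic_votes = sincere_votes[:5] + ["B", "B"]
print(plurality_winner(strategic_votes))                     # ('B', {'A': 3, 'B': 4})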

Farquharson, R. 1969. Theory of voting. New Haven, CT: Yale Univ. Press.

The main text of this book comprises just fifty-six small though concentrated pages. It launched a new branch of game theory devoted to strategic voting. When it was in press, its author was unable to check the proofs. Consequently, it contains numerous errors that make it difficult to follow, but many of these are corrected in an article by Niemi 1983.

Niemi, R. G. 1983. An exegesis of Farquharson’s Theory of Voting. Public Choice 40.3: 323–328.

This short article corrects thirty-seven errors in Farquharson 1969. Most of the corrections relate to simple typographical errors, not only in the text but also in the diagrams.

Pliny the Younger. 1751. Epistle to Titus Aristo. In The letters of Pliny the Younger with observations on each letter. Vol. 2. Translated by John, Earl of Orrery, 199–206. London: Paul Vaillant.

In this letter, Pliny tells Titus Aristo about Afranius Dexter’s death and comments on the discussion in the Senate about the freedmen who were accused of killing him. It provides evidence of strategic reasoning not only by Pliny but also by other Roman senators. Originally written 105 CE.

Common Knowledge

The most difficult of the fundamental assumptions of game theory is common knowledge, mentioned in the Introduction, and it is a quintessentially psychological concept. A proposition is common knowledge in a specified group if every member of the group knows it, knows that every member knows it, knows that every member knows that every member knows it, and so on, ad infinitum. This may at first seem an impossibly deep concept for the human mind to comprehend, but it is not necessary to think through each individual step—that would indeed be impossible—and it was pointed out by Milgrom 1981 that common knowledge can sometimes be attained without effort, when a public announcement is made, for example. Suppose someone walks on to the stage just before a concert and announces: “Unfortunately the soloist who was supposed to play the cello concerto tonight has had an accident and is now in hospital.” The fact that the cellist is in hospital is immediately common knowledge among members of the audience who hear and understand the announcement: they can apprehend without any effort whatsoever that everyone knows it, that everyone knows that everyone knows it, and so on. The concept of common knowledge was introduced into game theory by Lewis 1969 and was later formalized by Aumann 1976. It needs to be distinguished carefully from general knowledge or mutual knowledge. It is possible for all members of a group to know something without it being common knowledge in the group, and from a strategic point of view this can make a big difference. This is not easy to grasp, and the best aid to explaining it is the Muddy Children Problem. Thomas, et al. 2014 discusses common knowledge in relation to the bystander effect, a social psychological phenomenon according to which people are less likely to intervene in an emergency—to help a person in distress, for example—when other people are present than when they are alone.

Aumann, R. J. 1976. Agreeing to disagree. Annals of Statistics 4.6: 1236–1239.

A short and influential but difficult article by a Nobel laureate, mathematician, and game theorist, formalizing the concept of common knowledge.

Lewis, D. K. 1969. Convention: A philosophical study. Cambridge, MA: Harvard Univ. Press.

On pages 52–69, the author (a philosopher) introduces the concept of common knowledge into game theory informally. Prior to 1969, the idea had been implicit in the theory.

Milgrom, P. 1981. An axiomatic characterization of common knowledge. Econometrica 49.1: 219–222.

A full axiomatic presentation of common knowledge by an economist and game theorist, also explaining how it can be attained effortlessly from a public announcement.

Thomas, K. A., P. De Scioli, O. S. Haque, and S. Pinker. 2014. The psychology of coordination and common knowledge. Journal of Personality and Social Psychology 107.4: 657–676.

A psychological discussion of common knowledge in relation to the bystander effect. On the basis of three experiments, the authors conclude that the effect may result not from a mere diffusion of responsibility when many people are present, as is widely believed, but from actors’ strategic computations based on common knowledge.

Muddy Children Problem

A classroom of perceptive, intelligent, honest children can all see one another’s foreheads but not their own. Two of them have muddy foreheads. Suppose a teacher announces: “At least one of you has a muddy forehead” and asks those who know that they have muddy foreheads to raise their hands. The two muddy children would not respond because, although each can see one other muddy child, this does not imply that they too are muddy. But if the teacher repeats the question, then both would raise their hands. Each would then realize that, if they were not muddy themselves, then the other muddy child whom each can see would have raised a hand when the question was first asked, because that child would not in that case see any other muddy child. When the question is repeated, each muddy child knows that the failure of the other muddy child to have owned up the first time must mean that they too are muddy, and they would both raise their hands. Now suppose that the teacher had not first announced: “At least one of you has a muddy forehead.” Each muddy child would still know that at least one child is muddy, because each would see one muddy child, but this fact would not be common knowledge. Even when the question was repeated, they would have no reason to raise their hands, because the other muddy child may not know that there are any muddy children in the classroom. The teacher’s announcement transforms something that every child already knows—because every child can see at least one muddy child—into something that is common knowledge, and it makes all the difference in this case. It can be proved that if n children are muddy, then they will all raise their hands when the question is asked for the nth time. The problem is sometimes expressed in terms of people with unfaithful spouses that everyone but they know about, and it is therefore sometimes called the unfaithful wives problem or the cheating husbands problem. Fagin, et al. 1995 is the locus classicus for in-depth analysis of this problem and related issues. Fagin, et al. 1999 examines problems arising from the fact that full common knowledge is generally unattainable in everyday interactions but is required for coordination.
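
The inductive argument can be simulated directly. A minimal illustrative sketch in Python (not taken from Fagin, et al. 1995): the children are questioned repeatedly after the public announcement, and each raises a hand only when the number of muddy foreheads they can see, together with the number of rounds that have passed in silence, leaves being muddy as the only consistent possibility.

def muddy_children(n_total, n_muddy):
    # Simulate repeated questioning after the announcement that at least one
    # forehead is muddy.  A child who sees k muddy foreheads can conclude
    # "I am muddy" only once k rounds have passed with no hands raised.
    muddy = [True] * n_muddy + [False] * (n_total - n_muddy)
    raised = [False] * n_total
    round_number = 0
    while not any(raised):
        round_number += 1
        for child in range(n_total):
            seen = sum(muddy[other] for other in range(n_total) if other != child)
            # The inference uses only what the child can see and the silence so far;
            # it never consults the child's own (unknown) forehead.
            if round_number > seen:
                raised[child] = True
    return round_number, sum(raised)

print(muddy_children(n_total=30, n_muddy=2))  # (2, 2): both muddy children respond in round 2
print(muddy_children(n_total=30, n_muddy=5))  # (5, 5): with n muddy children, hands go up in round n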

Fagin, R., J. Y. Halpern, Y. Moses, and M. Y. Vardi. 1999. Common knowledge revisited. Annals of Pure and Applied Logic 96.1–3: 89–105.

With the help of the muddy children problem, this article addresses the paradox that common knowledge is necessary for agreement and coordination but is unattainable in the real world because communication is imprecise.

Fagin, R., J. Y. Halpern, Y. Moses, and M. Y. Vardi. 1995. Reasoning about knowledge. Cambridge, MA: MIT.

This book discusses the muddy children problem and related problems of knowledge and common knowledge in depth. What is interesting from a psychological viewpoint is that common knowledge is different from general or mutual knowledge, and the muddy children problem makes this distinction clear.

Social Dilemmas

In a social dilemma, the pursuit of individual self-interest leaves every player worse off than if each acts cooperatively. In the two-player version, the Prisoner’s Dilemma game, each player chooses between a cooperative strategy C and a defecting strategy D. Each receives a higher payoff from D than C whatever the other player chooses, but each receives a higher payoff if both choose C than if both choose D. It was named in Tucker 2001 (originally issued in 1950), along with the following illustrative scenario: Two people, charged with involvement in a serious crime, are arrested and held in separate cells. To obtain a conviction, the police have to persuade at least one of them to confess, and thus each prisoner chooses between cooperating with the other prisoner by refusing to confess (C) and defecting from the cooperative path by confessing (D). The police offer each of them separately the following proposal: If both refuse to confess, then both will be acquitted (a good outcome for each); if both confess, then both will be convicted and fined (a worse outcome for each); but if only one confesses, then that prisoner will not only be acquitted but will receive a reward for helping the police (the best possible outcome), and the other will not only be convicted but will receive an especially heavy fine (the worst possible payoff). The game presents a genuine paradox, because D is a dominant strategy for both players—each receives a better payoff by defecting than cooperating whether or not the other prisoner cooperates; therefore, it must be rational for both players to choose D, but each player receives a better payoff if both cooperate and choose C than if both defect and choose D. In the finitely repeated Prisoner’s Dilemma, defecting on every round is the only Nash equilibrium, although Kreps, et al. 1982 includes a proof that a small relaxation of common knowledge allows rational players to cooperate, and Rabin 1993 shows how cooperation can arise from “fairness equilibria.” Experiments using this game throw light on phenomena such as cooperation and competition, trust and suspicion, altruism and spite, risk taking and caution, threats, promises, and commitments. Balliet and Van Lange 2013, Pruitt and Kimmel 1977, Roth 1995, and Sally 1995 review the experimental evidence on behavior in the original Prisoner’s Dilemma game. Multi-player social dilemmas are discussed under Multi-Player Social Dilemmas.
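
The paradox can be stated numerically. A minimal Python sketch with conventional hypothetical payoffs (any values ordered temptation > reward > punishment > sucker’s payoff produce the same structure; these particular numbers are not from any of the studies cited below):

# Payoffs (row player, column player) for the hypothetical values T=5, R=3, P=1, S=0.
payoff = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# D strictly dominates C for the row player, whatever the column player does ...
assert all(payoff[("D", col)][0] > payoff[("C", col)][0] for col in ("C", "D"))
# ... and, by symmetry, for the column player ...
assert all(payoff[(row, "D")][1] > payoff[(row, "C")][1] for row in ("C", "D"))
# ... yet joint cooperation gives each player more than joint defection.
assert payoff[("C", "C")][0] > payoff[("D", "D")][0]
print("D is dominant for both players, but (C, C) is better for both than (D, D)")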

Balliet, D., and P. A. M. Van Lange. 2013. Trust, conflict, and cooperation: A meta-analysis. Psychological Bulletin 139.5: 1090–1112.

A meta-analytic review of social dilemmas, particularly experimental Prisoner’s Dilemma games.

Kreps, D. M., P. Milgrom, J. Roberts, and R. Wilson. 1982. Rational cooperation in the finitely repeated Prisoners’ Dilemma. Journal of Economic Theory 27.2: 245–252.

In the Prisoner’s Dilemma game repeated a finite number of times, the only Nash equilibrium involves both players defecting on every trial, but in this influential article the authors prove that if both players are strictly rational utility maximizers, but one believes (or both believe) that there is a small probability that the other is irrational, then rational players will cooperate until near the end.

Pruitt, D. G., and M. J. Kimmel. 1977. Twenty years of experimental gaming: Critique, synthesis, and suggestions for the future. Annual Review of Psychology 28.1: 363–392.

This is a comprehensive review of experiments on Prisoner’s Dilemma and closely related games performed mostly by psychologists up to the late 1970s, shortly after the emergence of multi-player social dilemmas and the switch of research interest in that direction.

Rabin, M. 1993. Incorporating fairness into game theory and economics. American Economic Review 83.5: 1281–1302.

A highly cited article in which fairness considerations are used to propose a nonstandard game theory in which joint cooperation in the Prisoner’s Dilemma game could be a new kind of “fairness equilibrium.”

Roth, A. E. 1995. Introduction to experimental economics. In Handbook of experimental economics. Edited by J. Kagel and A. E. Roth, 3–109. Princeton, NJ: Princeton Univ. Press.

A thoughtful and informative review, by a Nobel laureate, of experiments on repeated Prisoner’s Dilemma and other games.

Sally, D. 1995. Conversation and cooperation in social dilemmas: A meta-analysis of experiments from 1958 to 1992. Rationality and Society 7.1: 58–92.

A detailed meta-analysis of experiments on the Prisoner’s Dilemma game up to the early 1990s, when behavioral economists began performing experiments with this game.

Tucker, A. W. 2001. A two-person dilemma. Unpublished notes, Stanford University. Reprinted in Readings in games and information. Edited by E. Rasmussen, 7–8. Malden, MA: Blackwell.

This two-page document contains the original symmetric version of the Prisoner’s Dilemma game presented by Albert W. Tucker at a seminar in the Psychology Department at Stanford University a few months after the game had been discovered by the mathematicians Merrill Flood and Melvin Dresher at the Rand Corporation in Santa Monica, California. Originally issued 1950.

Multi-Player Social Dilemmas

Schelling 1973 shows how the strategic structure of the Prisoner’s Dilemma can be generalized to multi-player groups. In the N-player Prisoner’s Dilemma game, each player receives a better payoff by choosing D than C, irrespective of how many of the others choose C or D, but each receives a better payoff if all choose C than if all choose D. A classic experiment using an experimental N-player Prisoner’s Dilemma is reported in Dawes, et al. 1977. In later experiments, multi-player social dilemmas have also been presented as public goods games, in which each player can contribute as little or as much as desired to a public good; each is better off contributing as little as possible, but each is better off if all contribute a large amount. Dawes and Thaler 1988 provides an excellent introduction to this game. A third presentation form is a resource dilemma or commons dilemma, in which players harvest resources from a common pool, each player receiving a larger payoff from harvesting as much as possible irrespective of how much others harvest, but if all players pursue this uncooperative strategy, then each ends up worse off than if all cooperate by harvesting less. Ahn, et al. 2010 is an excellent example. Multi-player social dilemmas have been used to model problems of inflation and voluntary wage restraint, conservation of natural resources, environmental pollution, arms races and multilateral disarmament, mob behavior, and many other social problems involving cooperation and trust. In both the two-player Prisoner’s Dilemma and multi-player social dilemmas, experiments reveal a great deal of cooperation, and much research has been directed at explaining why players cooperate and what factors encourage or discourage cooperation. One factor is the fear of punishment by other players for non-cooperation: see the influential article Fehr and Gächter 2002. A second is group size; less cooperation tends to occur in large groups than small groups: see Dawes 1980. A third is perceived efficacy: see Kerr and Kaufman-Gilliland 1997. A fourth is communication between players: see Balliet 2010. A fifth is group identity: see Dawes, et al. 1988. For a comprehensive review of early experimental findings on cooperation in social dilemmas, see Dawes 1980, and for later experiments, see Van Lange, et al. 2013.
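
The public goods presentation makes the incentive structure explicit. A minimal Python sketch with hypothetical parameters (an endowment of 10, a multiplier of 2, and a group of four; the experiments cited here use a variety of values):

def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    # Each player keeps whatever they do not contribute; the pooled
    # contributions are multiplied and shared equally by the whole group.
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]: universal cooperation
print(public_goods_payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0]: universal free riding
print(public_goods_payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]: the free rider does best

# Because each contributed unit returns only multiplier / n = 0.5 to the
# contributor, contributing nothing is individually best whatever the others
# do, yet everyone is better off under full contribution than under none.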

Ahn, T. K., E. Ostrom, and J. Walker. 2010. A common-pool resource experiment with postgraduate subjects from 41 countries. Ecological Economics 69.12: 2624–2633.

An article by the Nobel laureate Elinor Ostrom and two colleagues summarizing a series of experiments using postgraduate summer school attendees from forty-one countries playing resource dilemmas, confirming the powerful effects of non-binding communication in increasing cooperation.

Balliet, D. 2010. Communication and cooperation in social dilemmas: A meta-analytic review. Journal of Conflict Resolution 54.1: 39–57.

An extensive recent review of communication effects in forty-five studies published since the 1950s.

Dawes, R. M. 1980. Social dilemmas. Annual Review of Psychology 31.1: 169–193.

A masterly introduction to multi-player social dilemmas and review of related published experiments up to 1980.

Dawes, R. M., J. McTavish, and H. Shaklee. 1977. Behavior, communication, and assumptions about other people’s behavior in a commons dilemma situation. Journal of Personality and Social Psychology 35.1: 1–11.

A classic multi-person social dilemma experiment with evidence on attribution effects.

Dawes, R. M., and R. H. Thaler. 1988. Anomalies: Cooperation. Journal of Economic Perspectives 2.3: 187–197.

A short, well-written commentary on public goods games, and related experimental evidence, in relation to altruism and reciprocal altruism.

Dawes, R. M., J. C. van de Kragt, and J. M. Orbell. 1988. Not me or thee but we: The importance of group identity in eliciting cooperation in dilemma situations: Experimental manipulations. Acta Psychologica 68.1–3: 83–97.

A summary of research on group identity and other psychological factors affecting cooperation in social dilemmas.

Fehr, E., and S. Gächter. 2002. Altruistic punishment in humans. Nature 415.6868: 137–140.

A short but very influential article discussing evidence that punishment of defectors can maintain cooperation in social dilemmas.

Kerr, N. L., and C. M. Kaufman-Gilliland. 1997. “. . . and besides, I probably couldn’t have made a difference anyway”: Justification of social dilemma defection via perceived self-inefficacy. Journal of Experimental Social Psychology 33.3: 211–230.

An article that discusses perceived efficacy and group size effects and also presents some new experimental evidence.

Schelling, T. C. 1973. Hockey helmets, concealed weapons, and daylight saving: A study of binary choices with externalities. Journal of Conflict Resolution 17.3: 381–428.

A theoretical tour de force by the Nobel laureate Thomas C. Schelling, one of three game theorists who independently and simultaneously discovered this multi-player generalization of the Prisoner’s Dilemma game. The article is easy to read, uses striking everyday examples, and displays Schelling’s formidable theoretical powers to the full.

Van Lange, P. A. M., J. Joireman, C. Parks, and E. Van Dijk. 2013. The psychology of social dilemmas: A review. Organizational Behavior and Human Decision Processes 120.2: 125–141.

A detailed and readable review of social dilemma research, both two-player and multi-player, but concentrating mainly on multi-player dilemmas.

Sequential Games and Backward Induction

Sequential games, also referred to as dynamic games or extensive-form games, model decision situations where players take turns in making moves. The order in which they proceed is predetermined and typically represented by a game tree. These tree diagrams consist of decision nodes, representing points at which one of the players makes a choice, and terminal nodes that indicate the game’s end and specify payoffs to the players. The nodes of the tree are connected through branches showing the path of game progression. Most sequential games used in experiments are characterized by perfect information, a condition specifying that all players are informed about all preceding choices throughout the game, in contradistinction to complete information, the standard game-theoretic assumption that the specification of the game and the players’ instrumental rationality are common knowledge in the game. In sequential games with complete and perfect information, a subgame perfect equilibrium can be derived with the help of backward induction reasoning, as formally proved by Aumann 1995. This type of reasoning starts at the final decision node of the game, where it determines a rational (payoff-maximizing) choice for the player whose turn it is. On the basis of that move, backward induction then allows for the identification of a rational choice at the penultimate decision node, and that in turn allows for the identification of a rational choice at the previous decision node, and so on. In this way, backward induction reasoning can be continued all the way back to the game’s first decision node. The result of this process of reasoning determines the subgame perfect equilibrium of a sequential-choice game. The logic of backward induction has been questioned by many researchers, as exemplified in Colman, et al. 2017, which provides a critique from a psychological perspective. Examples of sequential games include well-known zero-sum games (two-player games in which the players’ interests are directly opposed) such as Tic-Tac-Toe or Noughts and Crosses, Chess, and Go. In addition, and of more psychological interest, several mixed-motive sequential games have attracted theoretical and empirical attention (see Ultimatum and Dictator Games, Trust Game, Centipede Game). Although it is widely believed that backward induction is applicable only in finite-horizon games with end-points known by the players in advance, Jiborn and Rabinowicz 2003 contains a proof that it is not necessary for the players to know the end-point for backward induction to apply.
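
The backward induction recursion itself is short. A minimal illustrative Python sketch over a hypothetical two-stage game tree (the structure and payoffs are invented for the example and are not a reconstruction of any game analyzed in the works cited below):

# A decision node is (player, {move: subtree}); a terminal node is a payoff
# pair (payoff to player 0, payoff to player 1).
tree = (0, {
    "left":  (1, {"l": (3, 1), "r": (0, 0)}),
    "right": (1, {"l": (1, 2), "r": (2, 3)}),
})

def backward_induction(node):
    # Return (payoffs, moves) reached when each player maximizes their own
    # payoff at every node, working back from the final decision nodes.
    if not isinstance(node[1], dict):
        return node, []                     # terminal node: payoffs, empty path
    player, moves = node
    best = None
    for move, subtree in moves.items():
        payoffs, path = backward_induction(subtree)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [move] + path)
    return best

print(backward_induction(tree))  # ((3, 1), ['left', 'l']): the subgame perfect outcome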

Aumann, R. J. 1995. Backward induction and common knowledge of rationality. Games and Economic Behavior 8.1: 6–19.

This article by the Nobel laureate mathematician and game theorist Robert J. Aumann provides a formal mathematical proof that the backward induction outcome is obtained in games of perfect information.

Colman, A. M., E. M. Krockow, C. A. Frosch, and B. D. Pulford. 2017. Rationality and backward induction in Centipede games. In The thinking mind: A Festschrift for Ken Manktelow. Edited by N. Galbraith, E. Lucas, and D. E. Over, 139–150. London: Routledge.

Drawing on examples of the unexpected hanging paradox and the Centipede game, this easily accessible book chapter points out theoretical contradictions that raise questions about the validity of the backward induction argument.

Jiborn, M., and W. Rabinowicz. 2003. Reconsidering the Foole’s rejoinder: Backward induction in indefinitely iterated Prisoner’s dilemmas. Synthese 136.2: 135–157.

This article challenges a common misconception that backward induction reasoning is applicable only to finite games with known end-points. The argument is based on the assumption that indefinitely extended games are practically impossible, players invariably forming expectations of end-points, and the authors show how a consequence of this is that backward induction reasoning unravels just as in finitely repeated games.

Ultimatum and Dictator Games

A much-researched and psychologically interesting class of sequential games includes the Ultimatum game and the related Dictator game. In the Ultimatum game, introduced by Güth, et al. 1982, two players are assigned the roles of proposer and responder. The proposer receives a cash endowment to divide between both players in whatever proportions the proposer wishes, and the responder must then decide whether to accept or reject the proposer’s take-it-or-leave-it offer. If the responder accepts the offer, then the players receive the shares suggested by the proposer; if the responder rejects the offer, then both players go empty-handed. The Dictator game is the same, except for the fact that the second player has an entirely passive role of merely receiving the proposer’s allocation and cannot reject what is offered. In the Ultimatum game, the backward induction outcome entails offering the smallest possible positive amount to the responder and the responder accepting the offer. In the Dictator game, the backward induction outcome entails the proposer not sharing any part of the endowment with the second player. Nevertheless, empirical research has demonstrated reliable deviations from these game theory solutions, with proposers making much more generous offers than predicted and responders in the Ultimatum game tending to reject offers of less than about 20 percent of the endowment. Camerer and Thaler 1995 affords an excellent introduction to Ultimatum and Dictator games. Dana, et al. 2006 shows that deviations from game-theoretic rationality are not (as widely believed) motivated by simple considerations of fairness toward recipients. Henrich, et al. 2005 provides evidence that high-stakes Ultimatum games do not follow game-theoretic rationality in any of fifteen developing communities all over the world in which they were studied. Zak, et al. 2007 illustrates an emerging branch of research into the neural substrates of strategic behavior, in this case in the Ultimatum game.
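
The gap between the backward induction benchmark and typical behavior is easy to state numerically. A minimal Python sketch, assuming a hypothetical 10-unit endowment and a responder who follows the empirical regularity noted above of rejecting offers below roughly 20 percent:

def ultimatum(offer, endowment=10.0, rejection_threshold=0.2):
    # Responder modeled on the empirical regularity described in the text:
    # offers below about 20 percent of the endowment tend to be rejected.
    if offer >= rejection_threshold * endowment:
        return endowment - offer, offer     # (proposer payoff, responder payoff)
    return 0.0, 0.0                         # rejection leaves both players with nothing

print(ultimatum(0.01))  # (0.0, 0.0): the near-zero 'rational' offer is rejected
print(ultimatum(4.0))   # (6.0, 4.0): a typical observed offer of around 40 percent is accepted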

Camerer, C., and R. H. Thaler. 1995. Anomalies: Ultimatums, dictators and manners. Journal of Economic Perspectives 9.2: 209–219.

This article reviews experimental findings from Ultimatum and Dictator games and provides a theoretical discussion of the frequently irrational choices reported. The authors suggest that etiquette, including social norms of fairness, may influence the players’ behavior.

Dana, J., D. M. Cain, and R. M. Dawes. 2006. What you don’t know won’t hurt me: Costly (but quiet) exit in dictator games. Organizational Behavior and Human Decision Processes 100.2: 193–201.

In a Dictator game, proposers were offered a last-minute exit option of taking $9 without making a decision about splitting a $10 endowment with another player and without the other player even knowing that this might happen. Many proposers accepted this exit option, showing that the generosity observed in Dictator games arises not from proposers’ desire to be fair but from their desire to avoid appearing unfair.

Güth, W., R. Schmittberger, and B. Schwarze. 1982. An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization 3.4: 367–388.

An influential article in which the Ultimatum game is introduced for the first time. The article also includes reports of the first two experiments on behavior in this deceptively simple bargaining game.

Henrich, J., R. Boyd, S. Bowles, et al. 2005. “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. The Behavioral and Brain Sciences 28.6: 795–855.

A report of a huge study in which Ultimatum, Public Goods (Social Dilemma), and Dictator games were all investigated in developing communities around the world, using endowments worth two or more months’ average household income in those communities. Behavior varied from one community to another, but in none of the fifteen communities investigated did it approach the game-theoretic equilibrium prescribed by backward induction.

Zak, P. J., A. A. Stanton, and S. Ahmadi. 2007. Oxytocin increases generosity in humans. PLOS ONE 2.11: e1128.

This article is representative of a trend aiming to identify neural bases for choices in experimental games. In an experiment using Ultimatum games, the authors found that higher levels of the neuromodulator oxytocin led to significant increases of generosity by proposers in the games. Subsequent research has suggested that the effect may be mediated by cognitive styles or social value orientations.

Trust Game

In the two-player Trust game, first introduced by Berg, et al. 1995, the players receive an initial cash endowment. One player, assigned the role of sender, decides how much of the endowment to pass to the other player, assigned the role of receiver. Whatever the amount passed, the experimenter trebles it or multiplies it by some other fixed value. The receiver then decides how much of the enlarged endowment to pass back to the sender. After the transfer, the game ends and the players keep the money that they have received. The proportion passed by the sender is interpreted as an index of trust, and the proportion returned an index of trustworthiness and reciprocity. Backward induction mandates the sender to pass none of the endowment to the receiver, thereby ruling out trust and rendering any trustworthy or reciprocal move by the receiver impossible, but experimental evidence reveals that most players demonstrate much more trust or generosity than the equilibrium level prescribed by game theory. Güth, et al. 1997 highlights the importance of player asymmetry in the Trust game, and the meta-analysis Johnson and Mislin 2011 summarizes all the experimental research on this game up to 2011.
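
The payoff arithmetic is simple to write down. A minimal Python sketch, assuming that only the sender’s endowment enters the calculation and that the transfer is trebled (both are simplifying assumptions; the experiments cited here vary in these details):

def trust_game(endowment, amount_sent, proportion_returned, multiplier=3.0):
    # The sender's transfer is multiplied by the experimenter; the receiver
    # then returns some proportion of the enlarged amount to the sender.
    enlarged = multiplier * amount_sent
    returned = proportion_returned * enlarged
    return endowment - amount_sent + returned, enlarged - returned  # (sender, receiver)

print(trust_game(10, amount_sent=0, proportion_returned=0.0))   # (10.0, 0.0): the backward induction outcome
print(trust_game(10, amount_sent=10, proportion_returned=0.5))  # (15.0, 15.0): full trust, half returned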

Berg, J., J. Dickhaut, and K. McCabe. 1995. Trust, reciprocity, and social history. Games and Economic Behavior 10.1: 122–142.

This article is usually cited as the first presentation of the Trust game. Experimental evidence is included, showing frequent deviations from the game-theoretic prescriptions for decision making.

Güth, W., P. Ockenfels, and M. Wendel. 1997. Cooperation based on trust: An experimental investigation. Journal of Economic Psychology 18.1: 15–43.

This article reports an interesting experiment on the Trust game, comparing two treatment conditions where participants were either randomly assigned to player roles or could auction their roles. The findings demonstrate the important influence of player asymmetry and the method of role allocation on decision making in the game.

Johnson, N. D., and A. A. Mislin. 2011. Trust games: A meta-analysis. Journal of Economic Psychology 32.5: 865–889.

A meta-analysis of 162 experiments using the Trust game introduced in Berg, et al. 1995. The article identifies several factors influencing the proportion of the endowment passed by the sender, indicative of trust, and the proportion returned by the receiver, indicative of trustworthiness and reciprocity.

Centipede Game

The Centipede game comprises a sequence of individual trust games with fixed rather than variable payoffs. Two individuals take turns in deciding between cooperation and defection. Cooperation decreases the personal payoff of the cooperator and increases both the other player’s payoff and the player pair’s joint payoff. Defection terminates the game with a higher payoff to the player who defects than to the other player. The backward induction outcome mandates unconditional defection at the first decision point and thus precludes any trusting behavior. However, the vast majority of empirical research suggests that backward induction may play only a minor role in practice, with participants making highly trusting and cooperative choices. The first experiment on this game was reported by McKelvey and Palfrey 1992. Rapoport, et al. 2003 reports the results of an experiment using a three-player version of the Centipede game and shows that extremely high financial incentives can increase equilibrium behavior (early defection). Krockow, et al. 2016 provides a systematic review of Centipede game experiments up to 2016.
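
A concrete payoff schedule makes the backward induction argument easy to trace. A minimal Python sketch using a hypothetical doubling scheme (the actual payoffs in the experiments cited below differ):

def centipede_schedule(n_nodes, big=4, small=1):
    # Payoffs (mover, other player) if the mover stops at each successive node.
    # Continuing doubles both piles but hands the next move, and with it the
    # larger pile, to the other player.
    schedule = []
    for _ in range(n_nodes):
        schedule.append((big, small))
        big, small = big * 2, small * 2
    return schedule

print(centipede_schedule(4))  # [(4, 1), (8, 2), (16, 4), (32, 8)]

# At the final node the mover gets 32 by stopping but only 16 by continuing
# (the piles double once more and the big pile goes to the other player), so
# the mover stops; anticipating this, the mover at every earlier node also
# stops, and backward induction prescribes defection at the very first node.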

Krockow, E. M., A. M. Colman, and B. D. Pulford. 2016. Cooperation in repeated interactions: A systematic review of Centipede game experiments, 1992–2016. European Review of Social Psychology 27.1: 231–282.

This is a comprehensive review article discussing all empirical research on Centipede games up to 2016. The appendix includes a tabular comparison of all studies conducted up to that time.

McKelvey, R. D., and T. R. Palfrey. 1992. An experimental study of the Centipede game. Econometrica 60.4: 803–836.

The first experimental study of the Centipede game, reported in considerable detail, together with theoretical speculations about what drives cooperative behavior in this game.

Rapoport, A., W. E. Stein, J. E. Parco, and T. E. Nicholas. 2003. Equilibrium play and adaptive learning in a three-person Centipede game. Games and Economic Behavior 43.2: 239–265.

A report of experiments with a three-player Centipede game showing that, in this version of the game, extremely high financial incentives ($2,560 if all players cooperated to the end) increase the frequency of early defection.

Coordination

Coordination games, in which players are motivated to coordinate their strategies and their expectations of one another’s strategies, have multiple Nash equilibria and lack determinate game-theoretic solutions. The simplest is the Hi-Lo pure coordination game introduced by Schelling 1960. Two players independently choose H or L; if both choose H, then each wins a large prize; if both choose L, then each wins a smaller prize; if one chooses H and the other L, then neither wins anything. Surprisingly, standard game theory assumptions provide no reason to choose H. Although (H, H) is better for both players than (L, L), H is not unconditionally best (not a dominant strategy): it yields a better payoff only if the other player also chooses H. Hence H is the payoff-maximizing choice if and only if there is a reason to expect the co-player to choose H. But there can be no such reason, because the co-player has a reason to choose H if and only if there is a reason to expect the first player to choose it—an inconclusive vicious circle. Schelling showed that human decision makers solve pure coordination games with surprising ease, and understanding this is a challenging psychological problem. What explains our powerful intuition that H is the rational strategy? Harsanyi and Selten 1988 propose a payoff dominance principle as an axiom of rational choice, according to which rational players choose the strategies associated with an equilibrium that yields both or all players better payoffs than any other equilibrium. According to theories of team reasoning proposed by Sugden 1993 and Bacharach 1999, players are sometimes motivated to maximize their collective payoff rather than their personal payoffs. According to the cognitive hierarchy theory of Camerer, et al. 2004, players choose best replies against co-players whom they always assume to reason at lower levels of strategic depth than themselves. The social projection theory championed by Krueger 2014 and others suggests that players expect others to act as they do. All of these theories can potentially explain coordination in games such as Hi-Lo. Colman, et al. 2014 reports two experiments designed to provide critical tests of these and other theories, including their own strong Stackelberg reasoning, according to which players act as though any strategy choice will be anticipated by a best-replying co-player. Later, Misyak and Chater 2014 suggests a further theory of virtual bargaining.
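
The vicious circle is easiest to see by checking the Hi-Lo equilibria directly. A minimal Python sketch with hypothetical prize values of 10 and 5:

# Hi-Lo payoffs (player 1, player 2) for each strategy pair.
hi_lo = {
    ("H", "H"): (10, 10), ("H", "L"): (0, 0),
    ("L", "H"): (0, 0),   ("L", "L"): (5, 5),
}

def is_equilibrium(s1, s2):
    # Neither player can gain by unilaterally deviating.
    best_1 = all(hi_lo[(s1, s2)][0] >= hi_lo[(alt, s2)][0] for alt in ("H", "L"))
    best_2 = all(hi_lo[(s1, s2)][1] >= hi_lo[(s1, alt)][1] for alt in ("H", "L"))
    return best_1 and best_2

print([pair for pair in hi_lo if is_equilibrium(*pair)])  # [('H', 'H'), ('L', 'L')]

# Both (H, H) and (L, L) are Nash equilibria, and H is a best reply only when
# the co-player is expected to choose H, so the best-reply criterion alone
# cannot single out (H, H); that is the gap the theories above try to fill.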

Bacharach, M. 1999. Interactive team reasoning: A contribution to the theory of co-operation. Research in Economics 53.2: 117–147.

A formal theory of team reasoning, designed specifically to solve games such as the Hi-Lo pure coordination game.

Camerer, C. F., T.-H. Ho, and J.-K. Chong. 2004. A cognitive hierarchy model of games. Quarterly Journal of Economics 119.3: 861–898.

An influential theory of strategy choices in games according to which players choose best replies against co-players whom they assume to reason at levels of strategic depth below their own. This theory provides a potential explanation for coordination in games such as Hi-Lo.

Colman, A. M., B. D. Pulford, and C. L. Lawrence. 2014. Explaining strategic coordination: Cognitive hierarchy theory, strong Stackelberg reasoning, and team reasoning. Decision 1.1: 35–58.

An experiment in which the principal theories purporting to explain coordination are tested against one another. Cognitive hierarchy theory and theories of team reasoning and strong Stackelberg reasoning turn out to predict strategic decisions most accurately.

Harsanyi, J. C., and R. Selten. 1988. A general theory of equilibrium selection in games. Cambridge, MA: MIT.

A densely mathematical book that was largely responsible for a Nobel prize shared by the authors. In it, the payoff dominance principle is proposed as an axiom of rationality to solve coordination games, together with a secondary risk dominance principle, according to which rational players avoid strategies that risk very low payoffs.

Krueger, J. I. 2014. Heuristic game theory. Decision 1.1: 59–61.

Replying to an article in which strong Stackelberg reasoning was proposed as an explanation of coordination, the author defends social projection theory.

Misyak, J. B., and N. Chater. 2014. Virtual bargaining: A theory of social decision-making. Philosophical Transactions of the Royal Society B: Biological Sciences 369.1655.

According to this theory, individuals reason about problems of coordination by considering what strategies they would agree on if they could bargain or negotiate explicitly. In the Hi-Lo game, it is obvious what bargain they would arrive at; hence, they choose the appropriate H strategy without any actual bargaining or communication being necessary.

Schelling, T. C. 1960. The strategy of conflict. Cambridge, MA: Harvard Univ. Press.

Much of this book is taken up with a discussion of pure coordination games, including the game now known as the Hi-Lo game, introduced as a pure coordination game on page 291 but not named. Before Schelling, game theory neglected such games entirely, considering them trivial or uninteresting. This book was reprinted with a new preface in 1980.

Sugden, R. 1993. Thinking as a team: Towards an explanation of nonselfish behaviour. Social Philosophy and Policy 10.1: 69–89.

An influential theory of team reasoning presented informally. This theory can potentially explain coordination in games such as Hi-Lo and also cooperation in social dilemmas.

Social Projection and Evidential Reasoning

Among the psychological theories that have been put forward to explain why people cooperate in social dilemmas (see Social Dilemmas) and how they coordinate in pure coordination games (see Coordination), social projection theory, reviewed in Krueger 2007, discussed as a source of cooperation in Krueger 2013, and suggested as a solution to social dilemmas in Krueger, et al. 2012, is undoubtedly one of the most influential. Its fundamental assumption is that “most people have a strong expectation that members of their own groups will act as they themselves do” (Krueger 2008, p. 399). In any social dilemma game, this means that a cooperative strategy choice is likely to be matched by cooperation from the other player(s) and defection is likely to be matched by defection; and because the first outcome is clearly preferable to the second, not only collectively but also individually, this provides an explanation for cooperation. The same applies in pure coordination games such as the Hi-Lo game, where social projection theory provides an apparent reason for choosing H and an explanation for coordination. According to the influential social psychologist Robyn Dawes in Dawes 1989, Dawes 1990, and Dawes 2000, social projection theory has an empirical foundation in the false consensus effect first reported by Ross, et al. 1977, which reports four experiments confirming the effect. In one of these experiments, Stanford University students were approached on campus and asked whether they would be willing to walk around for thirty minutes wearing a sandwich board advertising the message REPENT. Those who consented to do this estimated, on average, that 64 percent of other students would also agree, whereas those who refused estimated that 77 percent would also refuse. Ross and his collaborators considered the false consensus effect to be an egocentric bias, but Dawes, and also Krueger and his collaborators, suggested that it is rational, in the absence of other evidence, to treat one’s own behavior as evidence of how others are likely to behave. This method of deciding how to act is called evidential reasoning, a form of reasoning that has been most intensively investigated in relation to Newcomb’s Problem.
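
The kind of Bayesian justification that Dawes appeals to can be illustrated with a one-observation update. A minimal Python sketch, assuming a uniform Beta(1, 1) prior over the population rate of cooperation (the prior and the framing are illustrative, not a reconstruction of Dawes’s own calculations):

def posterior_mean_cooperation(own_choice_is_cooperate, prior_a=1.0, prior_b=1.0):
    # Beta-binomial updating: treat your own choice as a single observation
    # of the unknown population rate of cooperation.
    a = prior_a + (1 if own_choice_is_cooperate else 0)
    b = prior_b + (0 if own_choice_is_cooperate else 1)
    return a / (a + b)

print(posterior_mean_cooperation(True))   # 0.666...: a cooperator expects others mostly to cooperate
print(posterior_mean_cooperation(False))  # 0.333...: a defector expects others mostly to defect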

Dawes, R. M. 1989. Statistical criteria for establishing a truly false consensus effect. Journal of Experimental Social Psychology 25.1: 1–17.

This is the article in which Dawes first links social projection to the false consensus effect and attempts to establish a justification for it in Bayesian statistics.

Dawes, R. M. 1990. The potential non-falsity of the false consensus effect. In Insights in decision making: A tribute to Hillel J. Einhorn. Edited by R. M. Hogarth, 179–199. Chicago: Univ. of Chicago Press.

In this book chapter, Dawes goes further than in his first publication on the subject, claiming that the so-called false consensus effect might be fully rational.

Dawes, R. M. 2000. A theory of irrationality as a “reasonable” response to an incomplete specification. Synthese 122.1: 133–163.

In this article, Dawes tries to rationalize the false consensus effect using Bayes’ theorem.

Krueger, J. I. 2007. From social projection to social behavior. European Review of Social Psychology 18.1: 1–35.

A full exposition and defense of social projection theory.

Krueger, J. I. 2008. Methodological individualism in experimental games: Not so easily dismissed. Acta Psychologica 128.2: 398–401.

This is a reply to an article criticizing evidential decision theory and suggesting that it cannot provide a rational explanation of coordination. The author defends it and claims that it is rational.

Krueger, J. I. 2013. Social projection as a source of cooperation. Current Directions in Psychological Science 22.4: 289–294.

A detailed exposition of social projection theory. The author claims that the theory explains why cooperative behavior occurs, predicts how levels of cooperation will be affected by changes in payoff structure, and shows how a group of self-interested individuals can obtain socially desirable outcomes.

Krueger, J. I., T. E. DiDonato, and D. Freestone. 2012. Social projection can solve social dilemmas. Psychological Inquiry 23.1: 1–27.

An explicit argument that social projection can solve social dilemmas and that cooperation is rational in strategic interactions of that type.

Ross, L., D. Greene, and P. House. 1977. The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology 13.3: 279–301.

The original account of the false consensus effect, supported by the results of four experiments.

Newcomb’s Problem

This most difficult problem of decision making, first published in Nozick 1969 and clearly explained by Gardner 1973 and Gardner and Nozick 1974, is especially useful for showcasing evidential reasoning (see Social Projection and Evidential Reasoning) and demonstrating how persuasive it can be. A transparent box containing $1,000 and an opaque box containing either $1 million or nothing are placed on a table. A decision maker can choose either the opaque box only or both boxes, knowing that a predictor of choice behavior, such as a computer programmed with relevant psychological information, has already put $1 million in the opaque box if and only if it predicted that the decision maker would take the opaque box only. The decision maker knows that predictions are correct in 95 percent of cases (the exact figure is immaterial). Both strategies can be justified by seemingly irrefutable arguments. The expected utility of taking the opaque box only is 0.95 × $1 million + 0.05 × $0 = $950,000, whereas the expected utility of taking both boxes is (0.95 × $0) + (0.05 × $1 million) + $1,000 = $51,000; therefore, it is rational to choose the opaque box only. But the predictor has already either put or not put $1 million in the opaque box; therefore, the decision maker receives $1,000 more by taking both boxes than by taking the opaque box only in either case: taking both boxes is a dominant strategy yielding a better payoff irrespective of the predictor’s action. The debate is hopelessly deadlocked, but it is generally agreed that taking the opaque box only is justified by evidential decision theory, whereas taking both boxes is justified by causal expected utility theory; hence, the argument revolves around the validity of evidential reasoning. Evidential reasoning was powerfully attacked by Lewis 1981 but has influential defenders, including Eells 1984, Jeffrey 1983, and Nozick 1993. Quattrone and Tversky 1984 provides experimental evidence that people use it: their subjects stated that they would be more likely to vote in an election if they believed that the outcome would depend on the turnout of voters who shared their preferences, and they predicted that their preferred candidate would be significantly more likely to win if they voted, the strength of this perceived association correlating substantially (r = .32; p < 0.001) with their willingness to vote. Thus, many subjects behaved as though their own actions could provide evidence of how others would vote.
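
The two lines of argument can be restated in a minimal sketch (an added illustration, not taken from any of the cited works); it simply recomputes the expected utilities quoted above and checks the dominance claim, using the 95 percent accuracy figure from the text.

    # Minimal sketch of the two arguments about Newcomb's problem (illustrative only).
    accuracy_pct = 95                      # predictor accuracy used in the text

    # Evidential reasoning: weight outcomes by the probability that the
    # prediction matches one's own choice.
    eu_one_box = (accuracy_pct * 1_000_000 + (100 - accuracy_pct) * 0) / 100
    eu_two_boxes = (accuracy_pct * 0 + (100 - accuracy_pct) * 1_000_000) / 100 + 1_000
    print(eu_one_box, eu_two_boxes)        # 950000.0 51000.0

    # Causal (dominance) reasoning: the opaque box's content is already fixed,
    # and whatever it contains, taking both boxes pays exactly $1,000 more.
    for opaque_content in (0, 1_000_000):
        assert opaque_content + 1_000 > opaque_content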

Eells, E. 1984. Newcomb’s many solutions. Theory and Decision 16.1: 59–105.

An analysis of Newcomb’s problem and a defense of evidential reasoning by a distinguished decision theorist and logician.

Gardner, M. 1973. Free will revisited, with a mind-bending paradox by William Newcomb. Scientific American 229.1: 104–109.

A brilliantly clear and informative introduction to Newcomb’s problem.

Gardner, M., and R. Nozick. 1974. Reflections on Newcomb’s problem: A prediction and free will dilemma. Scientific American 230.3: 102–108.

A follow-up to the Scientific American article by Gardner 1973, discussing comments submitted by numerous readers of that earlier article.

Jeffrey, R. C. 1983. The logic of decision. 2d ed. Chicago: Univ. of Chicago Press.

A textbook of decision theory by a leading philosopher, logician, and decision theorist that endorses evidential reasoning.

Lewis, D. 1981. Causal decision theory. Australasian Journal of Philosophy 59.1: 5–30.

A detailed analysis of the logic of Newcomb’s problem and a devastating attack on evidential decision theory in favor of causal expected utility theory.

Nozick, R. 1969. Newcomb’s problem and two principles of choice. In Essays in honor of Carl G. Hempel: A tribute on the occasion of his sixty-fifth birthday. Edited by N. Rescher, 114–146. Dordrecht: Reidel.

The book chapter in which Newcomb’s problem first appears in print. The two principles of choice in the chapter’s title refer to expected utility and strategic dominance. Nozick points out in a footnote that the problem was constructed by the physicist William Newcomb and comments: “It is a beautiful problem. I wish it were mine.”

Nozick, R. 1993. The nature of rationality. Princeton, NJ: Princeton Univ. Press.

A short but magisterial book on many aspects of rationality by a distinguished philosopher. On pages 43–59 Nozick discusses causal and evidential decision theory and suggests that, in reaching decisions, both may be given some weight.

Quattrone, G. A., and A. Tversky. 1984. Causal versus diagnostic contingencies: On self-deception and the voter’s illusion. Journal of Personality and Social Psychology 46.2: 237–248.

A psychological examination of evidential reasoning, suggesting that it explains why people vote in elections, backed up by compelling experimental evidence.

Social Value Orientation

The psychological concept of social value orientation (SVO) was introduced by Messick and McClintock 1968 and McClintock 1972. It was designed to conceptualize both selfishness and other-regarding social motivations. An SVO indicates a person’s preferences regarding the allocation of resources such as money between self and other, and in games SVO can influence a player’s strategy choices. The SVO concept is the basic building block of interdependence theory, the only sustained attempt to investigate the full range of social motivations that operate in games, introduced by Kelley and Thibaut 1978 and reviewed in the light of numerous experiments by Rusbult and Van Lange 2003. Early research focused on individualistic, cooperative, and competitive orientations, and later research added altruistic and equality-seeking orientations. Van Lange 1999 proposes a hybrid prosocial SVO combining the cooperative and equality-seeking orientations. Although Messick and McClintock originally conceived of SVO as a state variable, manipulable by experimenters, most subsequent researchers have interpreted it as a trait or individual difference variable, measurable by questionnaires in which decomposed games are presented and respondents are requested to choose their preferred distributions of resources between themselves and another person. Murphy and Ackermann 2014 discusses in detail the key issues surrounding the measurement of SVO. SVO has been found to correlate significantly with personality descriptions given by friends and associates and to predict activities in everyday life, such as volunteering for charitable causes. A comprehensive review and meta-analysis, Balliet, et al. 2009, and a systematic narrative review, Bogaert, et al. 2008, both confirm that SVO explains much of the variance in the strategies chosen by players in social dilemmas.
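
The decomposed-game format mentioned above can be illustrated with a brief sketch (the allocation values and the classification rule below are illustrative assumptions, not items from any published SVO measure): each option allocates points to oneself and to another person, and the option chosen is taken to reveal the respondent’s dominant orientation.

    # Illustrative decomposed game; values and classification rule are hypothetical.
    options = {
        "A": (480, 80),     # largest own-minus-other difference
        "B": (540, 280),    # largest own payoff
        "C": (480, 480),    # largest joint payoff, equal split
    }

    def classify(choice):
        own, other = options[choice]
        own_payoffs = [o for o, _ in options.values()]
        differences = [o - p for o, p in options.values()]
        joint_payoffs = [o + p for o, p in options.values()]
        if own + other == max(joint_payoffs):
            return "prosocial (maximizes joint outcome and equality)"
        if own == max(own_payoffs):
            return "individualistic (maximizes own outcome)"
        if own - other == max(differences):
            return "competitive (maximizes relative advantage)"
        return "unclassified"

    print(classify("C"))    # prosocial (maximizes joint outcome and equality)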

Balliet, D., C. Parks, and J. Joireman. 2009. Social value orientation and cooperation in social dilemmas: A meta-analysis. Group Processes & Intergroup Relations 12.4: 533–547.

A comprehensive review and meta-analysis of the effects of social value orientation on cooperation in social dilemmas of various types.

Bogaert, S., C. Boone, and C. Declerck. 2008. Social value orientation and cooperation in social dilemmas: A review and conceptual model. British Journal of Social Psychology 47.3: 453–480.

A systematic narrative review of the effects of social value orientation on cooperation, plus a synthesis within an integrated conceptual model. According to the model, the relationship between SVO and cooperative behavior is mediated by individuals’ specific goals and expectations concerning their co-players’ behavior.

Kelley, H. H., and J. W. Thibaut. 1978. Interpersonal relations: A theory of interdependence. New York: Wiley.

This book introduces interdependence theory, based on SVO, for the first time, together with a distinctively social psychological version of game theory.

McClintock, C. G. 1972. Social motivation: A set of propositions. Behavioral Science 17.5: 438–454.

The first systematic statement of the theoretical basis of SVO in a system of propositions and corollaries that focus almost exclusively on SVO as a state rather than an individual difference trait variable. Only in the sixth and last proposition does the author mention the idea of individual differences in SVO as an additional factor, over and above social contexts, that might help to explain social motives.

Messick, D. M., and C. G. McClintock. 1968. Motivational bases of choice in experimental games. Journal of Experimental Social Psychology 4.1: 1–25.

The original publication in which SVO was put forward. In this pioneering experiment, SVO is manipulated by describing the co-player either as an “opponent” (to induce a competitive SVO) or a “partner” (to induce a cooperative SVO), and by displaying the players’ accumulated scores in different ways to draw attention to their own payoffs, joint payoffs, or relative payoffs.

Murphy, R. O., and K. A. Ackermann. 2014. Social value orientation: Theoretical and measurement issues in the study of social preferences. Personality and Social Psychology Review 18.1: 13–41.

A conceptual clarification of SVO and a detailed discussion of issues related to its measurement and the relationship between measurement and theory development of the SVO construct, concluding with a comparative evaluation of the various measures and suggestions regarding their use.

Rusbult, C. E., and P. A. M. Van Lange. 2003. Interdependence, interaction, and relationships. Annual Review of Psychology 54.1: 351–375.

A review of SVO in the context of interdependence theory, a social psychological theory of interpersonal relationships based on SVO.

Van Lange, P. A. M. 1999. The pursuit of joint outcomes and equality in outcomes: An integrative model of social value orientation. Journal of Personality and Social Psychology 77.2: 337–349.

This frequently cited article presents a conceptual framework, backed up by three experiments, for understanding differences among prosocial, individualistic, and competitive SVOs and proposes an integrative model in which the prosocial orientation is understood as enhancing both joint outcomes and equality in outcomes, rather than joint outcomes alone.

Negotiation and Coalition Formation

Social psychologists have investigated negotiation and bargaining since the 1960s, but the advent of behavioral game theory in the 1980s transformed the field of research. A branch of game theory called cooperative game theory focuses on negotiation or bargaining solutions in two-player cooperative games and coalition formation in multi-player games. Cooperative games are those in which players can negotiate binding and enforceable agreements, and game theorists have discovered a number of solutions to such games that meet various criteria of fairness and workability. In two-player or dyadic cooperative games, pure negotiation or bargaining is possible, and in larger groups an additional phenomenon emerges: the presence of a third player makes it possible for two players to outvote the third, and this enables coalition formation to occur. Psychologists are naturally interested in how human decision makers negotiate and form coalitions in practice, and they have formulated theories and provided experimental evidence throwing light on these processes. Bazerman, et al. 2000 includes a useful historical synopsis of psychological research into negotiation, beginning with hundreds of empirical papers by social psychologists published in the 1960s and 1970s, followed by a reframing of this field of research under the influence of behavioral game theory in the 1980s and 1990s, and concluding with a review of five emerging themes in more recent research on negotiation. In a similar vein, Murnighan and Roth 2006 (written by a social psychologist and an economist) consists of a dialogue between these researchers about their long collaboration on various topics related to game theory, including negotiation and coalition bargaining. The authoritative review Bissonnette, et al. 2015 provides a comprehensive survey of theoretical and empirical research on coalition formation throughout the animal kingdom. Experimental evidence reported by Jehn and Bezrukova 2010 provides useful insights into the tendency of some large human groups or organizations to fractionate into smaller coalitions, leading to conflict, reduced member satisfaction, and declining group performance. An interesting series of experiments by Van Beest and Van Dijk 2007 focuses on the interplay between self-interest and fairness in coalition bargaining.
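
The qualitative change introduced by a third player can be seen in a purely illustrative sketch (not drawn from any of the works cited): in a hypothetical three-player simple majority game, any two players can form a winning coalition and divide a fixed prize, so bargaining over which coalition forms, and how the prize is split, becomes central.

    # Illustrative three-player simple majority game: any coalition of two or more
    # players can divide a hypothetical prize between its members.
    from itertools import combinations

    players = ["A", "B", "C"]
    prize = 100   # hypothetical amount available to a winning coalition

    winning = [set(c) for size in (2, 3) for c in combinations(players, size)]
    print(winning)   # the three two-player coalitions plus the grand coalition
    # With only two players there is a single possible agreement; with three,
    # each two-player coalition can exclude the remaining player.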

Bazerman, M. H., J. R. Curhan, D. A. Moore, and K. L. Valley. 2000. Negotiation. Annual Review of Psychology 51.1: 279–314.

A brief history of psychological research into negotiation and a review of five emerging areas of research into how negotiators interpret the task of negotiation.

Bissonnette, A., S. Perry, L. Barrett, J. C. Mitani, M. Flinn, S. Gavrilets, and F. B. M. de Waal. 2015. Coalitions in theory and reality: A review of pertinent variables and processes. Behaviour 152.1: 1–56.

An authoritative review and integration of theories of coalition formation in relation to empirical evidence in humans and animals.

Jehn, K. A., and K. Bezrukova. 2010. The faultline activation process and the effects of activated fault-lines on coalition formation, conflict, and group outcomes. Organizational Behavior and Human Decision Processes 112.1: 24–42.

An influential experimental study showing that groups with activated fault-lines divide into coalitions more frequently and exhibit more conflict, less satisfaction, and worse group performance than groups without activated fault-lines. The study also shows that team identification moderates these effects: strong workgroup identity decreases the likelihood that activated fault-lines lead to coalition formation and conflict.

Murnighan, J. K., and A. E. Roth. 2006. Some of the ancient history of experimental economics and social psychology: Reminiscences and analysis of a fruitful collaboration. In Social psychology and economics. Edited by D. De Cremer, M. Zeelenberg, and J. K. Murnighan, 321–334. Mahwah, NJ: Erlbaum.

A dialogue between the social psychologist Keith Murnighan and the Nobel laureate economist and game theorist Alvin Roth on theircollaboration, dating back to 1974, on coalition bargaining and related research topics in game theory and experimental games.

Van Beest, I., and E. Van Dijk. 2007. Self-interest and fairness in coalition formation: A social utility approach to understanding partner selection and payoff allocations in groups. European Review of Social Psychology 18.1: 132–174.

The authors argue that bargainers are self-serving in their choice of allocation rules, their perceptions of fairness are sometimes influenced by self-interest, and coalitions tend to maximize the payoffs of their members. Their experiments show that bargainers are reluctant to benefit themselves when this causes harm to others but that this depends on personal factors and whether bargaining occurs in an inter-individual or in an intergroup setting.

Evolution of Cooperation

Darwin’s theory of evolution is based on a competitive mechanism of natural selection, in which only the fittest survive, but cooperation is widespread in nature. Darwin was puzzled by social insects, which display exceptionally cooperative and altruistic behavior. Other forms of cooperation and altruism are well established. For example, many species of animals settle most conflicts with conspecifics over food, mates, or territory using conventional fighting (harmless displays) rather than escalated fighting (lethal ferocity, as used for hunting prey). This form of cooperation puzzled evolutionary theorists for generations until it was brilliantly explained by Maynard Smith and Price 1973, when the authors first introduced game theory into evolutionary biology. The evolution of altruistic behavior can be explained by inclusive fitness, a concept introduced by Hamilton 1964a and Hamilton 1964b. Hamilton showed that a heritable form of behavior will be favored by selection if rb – c > 0; that is, if the fitness benefit b to the recipient, discounted by the degree of genetic relatedness r, is greater than the fitness cost c to the actor; this is commonly called Hamilton’s rule. As explained by West, et al. 2007, natural selection favors a heritable form of behavior displayed by an organism if it increases the number of offspring, not only of that organism but also of its close relatives. To explain altruism between unrelated individuals, commonly observed in human communities, Trivers 1971 introduces the idea of reciprocal altruism: it is in an individual’s self-interest to cooperate with another who may return the favor at a later date. Human beings frequently behave altruistically or cooperatively toward non-relatives even when there is no possibility of reciprocal altruism, and this was explained by Alexander 1987 in terms of indirect reciprocity, which occurs when an individual’s altruistic or cooperative behavior is observed by others who are subsequently more willing to cooperate with that individual. To explain cooperation or altruism between unrelated individuals when there is no possibility of direct or even indirect reciprocity, Fehr and Gächter 2002 introduces the idea of altruistic punishment, or what came to be called strong reciprocity. Throughout our evolutionary history, we have needed public goods that require cooperation for their provision and, according to this theory, cooperation has been maintained by punishment of non-cooperators. In the 1980s, Axelrod introduced a new methodology for studying the evolution of cooperation through agent-based modeling and computer simulation (see Axelrod’s Computer Tournaments).
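
Hamilton’s rule rb – c > 0 can be checked with a minimal numerical sketch (the specific benefit and cost values below are assumptions chosen only for the example; the relatedness of full siblings, r = 0.5, is the standard figure).

    # Hamilton's rule: an altruistic trait is favored when r * b - c > 0.
    def favored_by_selection(r, b, c):
        """r: genetic relatedness, b: fitness benefit to recipient, c: fitness cost to actor."""
        return r * b - c > 0

    # Full siblings share on average half their genes (r = 0.5), so an act costing the
    # actor 1 unit of fitness is favored only if it gives a sibling more than 2 units.
    print(favored_by_selection(r=0.5, b=3, c=1))    # True  (0.5 * 3 - 1 = 0.5 > 0)
    print(favored_by_selection(r=0.5, b=1.5, c=1))  # False (0.5 * 1.5 - 1 = -0.25)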

Alexander, R. D. 1987. The biology of moral systems. New York: Aldine De Gruyter.

A book in which the biologist Richard D. Alexander puts forward the notion of indirect reciprocity.

Fehr, E., and S. Gächter. 2002. Altruistic punishment in humans. Nature 415.6868: 137–140.

This article introduces the notion of altruistic punishment for the first time and shows experimentally how it can maintain cooperation in a public goods game.

Hamilton, W. D. 1964a. The genetical evolution of social behaviour. I. Journal of Theoretical Biology 7.1: 1–16.

The first part of Hamilton’s two-part article in which he introduced the concept of inclusive fitness into evolutionary theory. It is worth looking at for historical reasons, because it is arguably the most significant contribution to evolutionary theory since Darwin, but its style of presentation makes it exceptionally difficult to understand. West, et al. 2007 explains the ideas more accessibly.

Hamilton, W. D. 1964b. The genetical evolution of social behaviour. II. Journal of Theoretical Biology 7.1: 17–52.

The second part of Hamilton’s two-part article on inclusive fitness.

Maynard Smith, J., and G. R. Price. 1973. The logic of animal conflict. Nature 246.5427: 15–18.

In this landmark publication, the authors, a leading British evolutionary biologist and an unemployed US physical chemist, apply game theory for the first time to the theory of evolution. They show that conventional fighting does not contradict the theory of natural selection, as it appears to, but that it benefits individual animals in terms of Darwinian fitness, and they introduce the concept of an evolutionarily stable strategy.

Trivers, R. L. 1971. The evolution of reciprocal altruism. Quarterly Review of Biology 46.1: 35–57.

An influential article in which Trivers first explains his theory of reciprocal altruism.

West, S. A., A. S. Griffin, and A. Gardner. 2007. Evolutionary explanations for cooperation: Review. Current Biology 17.16: R661–R672.

A magisterial review of explanations for the evolution of cooperation, not difficult to understand but presented at an advanced level. Among other things, the article explains Hamilton’s rule rb – c > 0 in simple language.

Axelrod’s Computer Tournaments

In Axelrod’s computer tournaments, computer programs designed to play repeated or iterated Prisoner’s Dilemma games were pitted against one another. This initiated a novel methodology for studying the evolution of cooperation. Axelrod reported his tournaments in Axelrod 1980a and Axelrod 1980b and in a prize-winning co-authored article, Axelrod and Hamilton 1981. The winning program, Tit for Tat, submitted by Anatol Rapoport, was the simplest program submitted, consisting of just four lines of the BASIC computer language. Against any co-player, it chooses C (cooperate) on the first round and then, on every subsequent Round t, chooses C if the co-player chose C on Round t – 1 and chooses D if the co-player chose D on Round t – 1. In effect, this implements direct reciprocity: I’ll scratch your back if you’ll scratch mine. Axelrod then performed an agent-based “ecological” (evolutionary) simulation, in which each program played two hundred rounds against each of the others, and then a second “generation” was assembled, containing varying numbers of copies of each of the programs from the first “generation” in proportion to the aggregate number of points that they had accumulated in pairwise contests in that generation; then a third “generation” was assembled on the basis of aggregate points accumulated in the second “generation,” and so on. This genetic algorithm was designed to simulate evolution by natural selection, and Tit for Tat won the evolutionary tournament by becoming most numerous after many generations. A Scientific American article, Hofstadter 1983, provides an excellent introduction to Axelrod’s original tournaments and their implications. In a bestselling book, Axelrod 1984 recommends Tit for Tat as the best way to conduct personal, political, and economic interactions in everyday life. However, Nowak and Sigmund 1993 later shows that a program called Win-Stay, Lose-Shift outperforms Tit for Tat in evolutionary agent-based simulations. Win-Stay, Lose-Shift cooperates on the first round, and then, on every subsequent Round t, repeats its own move from Round t – 1 if it received one of the two highest possible payoffs in Round t – 1, and shifts to the alternative strategy if it received one of the two lowest payoffs on Round t – 1. Rapoport, et al. 2015 reanalyzes Axelrod’s data and provides a wide-ranging critique of Tit for Tat. In a retrospective review many years after introducing this branch of research, Axelrod 2012 discusses the origins of his big idea and his collaboration with Hamilton.
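
The two decision rules just described can be written as simple functions of the previous round. The sketch below is an added illustration (not Rapoport’s original program or Nowak and Sigmund’s code), and the Prisoner’s Dilemma payoff values are illustrative assumptions that respect the standard ordering T > R > P > S.

    # Minimal sketches of the two strategies discussed above (illustrative only).
    T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

    def tit_for_tat(my_last, other_last):
        """Cooperate on the first round, then copy the co-player's previous move."""
        return "C" if other_last is None else other_last

    def win_stay_lose_shift(my_last, other_last):
        """Cooperate first; repeat the last move after a high payoff (T or R), switch after P or S."""
        if my_last is None:
            return "C"
        payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(my_last, other_last)]
        return my_last if payoff in (T, R) else ("C" if my_last == "D" else "D")

    # Example: after mutual defection (payoff P), Win-Stay, Lose-Shift switches back to C,
    # whereas Tit for Tat keeps defecting.
    print(tit_for_tat("D", "D"), win_stay_lose_shift("D", "D"))   # D C

In the ecological simulation described above, functions such as these would be played against one another for a fixed number of rounds, with each program’s share of the next “generation” set in proportion to its accumulated points.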

Axelrod, R. 1980a. Effective choice in the Prisoner’s Dilemma. Journal of Conflict Resolution 24.1: 3–25.

The original report of results from Axelrod’s first computer tournament.

Axelrod, R. 1980b. More effective choice in the Prisoner’s Dilemma. Journal of Conflict Resolution 24.3: 379–403.

The original report of the results of Axelrod’s second computer tournament and his “ecological” (evolutionary) tournament.

Axelrod, R. 1984. The evolution of cooperation. New York: Basic Books.

A bestselling book that describes Axelrod’s computer tournaments and draws conclusions from the results about the best way to conduct personal, political, and business strategic interactions.

Axelrod, R. 2012. Launching “The Evolution of Cooperation.” Journal of Theoretical Biology 299.SI: 21–24.

This article explains how the idea for a computer tournament for the iterated Prisoner’s Dilemma arose and recounts the collaboration between the author, a political scientist, and William D. Hamilton, an evolutionary biologist.

Axelrod, R., and W. D. Hamilton. 1981. The evolution of cooperation. Science 211.4489: 1390–1396.

A well-written summary of Axelrod’s computer tournaments, co-authored by the US political scientist Robert Axelrod and the British evolutionary biologist William D. Hamilton. This article was awarded the Newcomb Cleveland Prize of the American Association for the Advancement of Science.

Hofstadter, D. R. 1983. Metamagical themas: Computer tournaments of the Prisoner’s Dilemma suggest how cooperation evolves. Scientific American 248.5: 14–20.

An excellent introduction to and discussion of Axelrod’s computer tournaments by a commentator entirely unconnected with them.

Nowak, M., and K. Sigmund. 1993. A strategy of Win-Stay, Lose-Shift that outperforms Tit-for-Tat in the Prisoner’s Dilemma game. Nature 364.6432: 56–58.

An influential article in which a strategy quite different from Tit for Tat was shown to exceed the performance of Tit for Tat in evolutionary agent-based simulations.

Rapoport, A., D. A. Seale, and A. M. Colman. 2015. Is tit-for-tat the answer? On the conclusions drawn from Axelrod’s tournaments. PLOS ONE 10.7: e0134128.

A re-analysis of Axelrod’s data and some more recent findings, suggesting that the success of Tit for Tat depends on the type of tournament design used, the criterion of success, and the particular Prisoner’s Dilemma payoffs chosen, casting doubt on the generality of Axelrod’s results and the policy implications drawn from them.
