If the Red Sox are able to overcome Tim Wakefield’s shaky command of the plate and their frequently inept defense to score more runs than the Cardinals, they win and the Cardinals lose. If Black manages to force checkmate, White loses the game. If a salesman sells a car objectively worth $4,000 for $5,000, the buyer has lost $1,000. These are known as zero-sum games; there is only so much winnable goodness to be spread around (entries in the win column; dollars in the economic system), so for one participant to gain, another must lose. On the other hand, many games are non-zero-sum; proponents of free trade, for instance, argue that net gains result from a shift in production to places with a comparative advantage, and they’ve got board games to back them up.

A simple example of a non-zero-sum game is the prisoner’s dilemma. Two criminals have been arrested and are being interrogated in separate rooms. If both refuse to talk, each will be out of prison in a year; if one plea bargains and confesses, he’ll be out in six months while the other spends five years in jail; if both confess, each will spend three years in jail. The best solution for the criminals is for neither to talk, but can they really trust one another not to fink? The prisoner’s dilemma was the basis for some of John Nash’s Nobel-winning work in game theory and is the canonical illustration of a Nash equilibrium: an outcome in which no player can gain any benefit by switching his play alone. The dilemma’s lone equilibrium is mutual confession, even though both prisoners would be better off keeping quiet.
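The jail terms above pin the whole thing down numerically, and a few lines of Python are enough to check that mutual confession is the only stable outcome (the move labels and function names here are mine, chosen for the sketch):

```python
from itertools import product

# Years in jail for (my_move, their_move); "C" = stay silent, "D" = confess.
# Numbers from the story: both silent -> 1 year each; a lone confessor serves
# six months while his partner serves 5 years; both confess -> 3 years each.
YEARS = {
    ("C", "C"): 1.0,
    ("C", "D"): 5.0,
    ("D", "C"): 0.5,
    ("D", "D"): 3.0,
}

def is_nash(a, b):
    """True if neither prisoner can cut his own sentence by switching alone."""
    other = {"C": "D", "D": "C"}
    return (YEARS[(a, b)] <= YEARS[(other[a], b)] and
            YEARS[(b, a)] <= YEARS[(other[b], a)])

equilibria = [(a, b) for a, b in product("CD", repeat=2) if is_nash(a, b)]
print(equilibria)  # only ("D", "D") survives: mutual confession
```

Staying silent is never stable, because either prisoner can shave his sentence by confessing unilaterally; once both confess, neither can improve alone.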
Other economists created a vast array of variations (introducing asynchrony into the players’ choices, noise into the communication of the other player’s choice, partially randomized payoffs); physicists looked at the problem, as well, leading to such oddities as Slate’s Steven Landsburg (an economist) and Crooked Timber’s Daniel Davies hectoring one another about physics (I agree with actual physicist Chad Orzel; neither man comes off particularly well, and Landsburg in particular seems like the sort of person worth fleeing at cocktail parties). Landsburg is right that the "quantum prisoner’s dilemma" is fascinating and non-intuitive, but he seems to have ignored Davies’ real point: the interesting thing about the prisoner’s dilemma is not its potential real-world application. It’s that it models numerous problems in communication so neatly.
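The noise variation is easy to sketch. In the toy simulation below (function and parameter names are my own invention, not taken from any of the linked work), two tit-for-tat players whose moves are occasionally garbled in transmission end up punishing phantom defections:

```python
import random

def play_noisy_tft(rounds=200, noise=0.05, seed=1):
    """Two tit-for-tat players; each observed move is flipped with probability `noise`."""
    rng = random.Random(seed)

    def garble(move):
        # With probability `noise`, the observer sees the opposite of what was played.
        if rng.random() >= noise:
            return move
        return "D" if move == "C" else "C"

    a_sees, b_sees = "C", "C"      # each starts out assuming cooperation
    mutual_cooperation = 0
    for _ in range(rounds):
        a_move = a_sees            # tit-for-tat: copy the move you last saw
        b_move = b_sees
        if a_move == "C" and b_move == "C":
            mutual_cooperation += 1
        a_sees = garble(b_move)    # a observes b's move through the noisy channel
        b_sees = garble(a_move)
    return mutual_cooperation / rounds

print(play_noisy_tft(noise=0.0))   # 1.0: without noise, cooperation never breaks
print(play_noisy_tft(noise=0.05))  # below 1.0: each misread move echoes back and forth
```

A single misread move sends two perfectly reliable tit-for-tat players into a cycle of retaliation, which is exactly the sense in which the dilemma models communication problems.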

Thomas Schelling’s work on coordination theory sought to explain how players could independently choose the better outcome; Schelling’s work suggests one reason why so many borders correspond to visible landmarks such as mountains and rivers.

And in 1984, political scientist Robert Axelrod held a computer tournament to study the iterated prisoner’s dilemma. The goal was to see if some strategies were more successful than others over repeated play. The winner was tit-for-tat, a computer program that only ratted out its opponent if its opponent ratted it out first. It was "nice" (it never finked first), "forgiving" (it would begin cooperating again as soon as its opponent cooperated), "retaliatory" (it punished defection immediately), and "clear" (the strategy was easily predictable). Recently, a team of researchers from Southampton University demolished tit-for-tat in an iterated prisoner’s dilemma tournament; their team of entries used a series of moves to identify one another and then deliberately worked to inflate their teammates’ scores and punish non-teammates. As Cosma Shalizi notes, this is an impressive feat of programming, but not really surprising. If the strategy were successful enough, eventually people would create cuckoos designed to mimic the recognition pattern of the Southampton team. Then there would be counter-cuckoos and counter-counter-cuckoos, and eventually the iterated prisoner’s dilemma would resemble Core War. Anyone who has ever seen a beehive, much less read Richard Dawkins, knows that individual sacrifice in favor of one’s kin can be an evolutionary success; a team strategy designed to ruin individual scores for the greater good should win, but it doesn’t tell researchers much about the evolutionary stability of cooperation if victory depends on guessing your opponent’s strategy. Or perhaps it does; if so, perhaps economists and political scientists can find a game to study with much better visuals. There’s even a handy strategy guide.
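A miniature version of Axelrod’s tournament is easy to reconstruct; this sketch uses the conventional 3/0/5/1 point payoffs rather than the jail terms above, and a deliberately tiny field of strategies (the names are standard, the code is mine):

```python
from itertools import combinations

# Conventional Axelrod scoring: points earned for (my_move, their_move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"   # "nice": cooperate first

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def match(s1, s2, rounds=100):
    """Play two strategies against each other; return both total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        score1 += PAYOFF[(m1, m2)]
        score2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return score1, score2

def tournament(strategies, rounds=100):
    """Round-robin: every strategy meets every other once; sum the scores."""
    totals = {s.__name__: 0 for s in strategies}
    for s1, s2 in combinations(strategies, 2):
        sc1, sc2 = match(s1, s2, rounds)
        totals[s1.__name__] += sc1
        totals[s2.__name__] += sc2
    return totals

print(tournament([tit_for_tat, always_defect, always_cooperate]))
```

In this three-strategy field, always_defect actually outscores tit-for-tat; tit-for-tat’s 1984 victory depended on a population full of other "nice" entries. That the result turns on who else is in the pool is the same population-dependence the Southampton team exploited.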