
I find game theory interesting even though I am, at best, a mediocre player of board games and simulations. I’m much stronger as a participant in role-playing games, where the goal is to produce shared entertainment and have fun as much as it is to achieve a specific goal.

The Prisoner’s Dilemma is a staple of game theory models. Wikipedia has a good summary of the concept here. In essence, two sides must choose whether to cooperate or not. If both cooperate, they each benefit. If neither cooperates, both suffer. But if one side cooperates and the other betrays them, the betrayer gains the greatest benefit and the cooperator suffers the greatest loss. That being the case, what’s the best strategy?

In a single-move Prisoner’s Dilemma scenario, the best option is to betray your opponent, because betrayal pays more no matter what the other side does: if they cooperate, you gain the maximum benefit, and if they betray you, you still suffer less than you would have by cooperating.
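
To make the payoffs concrete, here is a minimal sketch in Python. The specific numbers (T=5 for temptation, R=3 for reward, P=1 for punishment, S=0 for the sucker’s payoff) are the conventional textbook values, an assumption on my part; any payoffs with T > R > P > S behave the same way. The loop at the end confirms the point above: betrayal pays more no matter what the opponent does.

    # One-shot Prisoner's Dilemma with the conventional payoff values (assumed).
    # PAYOFF[(my_move, their_move)] -> my score; "C" = cooperate, "D" = defect.
    PAYOFF = {
        ("C", "C"): 3,  # R: reward for mutual cooperation
        ("C", "D"): 0,  # S: sucker's payoff for being betrayed
        ("D", "C"): 5,  # T: temptation to betray a cooperator
        ("D", "D"): 1,  # P: punishment for mutual defection
    }

    # Defection dominates: whatever the opponent does, "D" pays more than "C".
    for their_move in ("C", "D"):
        assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]
        print(f"Opponent plays {their_move}: "
              f"cooperate -> {PAYOFF[('C', their_move)]}, "
              f"defect -> {PAYOFF[('D', their_move)]}")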

Of particular interest is the Iterated Prisoner’s Dilemma, in which the same decision must be made over and over, usually for an unspecified number of turns. In this situation, the strategies change: it becomes advantageous to cooperate at least some of the time. For many years, the go-to strategy for the Iterated Prisoner’s Dilemma has been the Tit-For-Tat approach: begin by cooperating, then simply copy whatever your opponent did on the previous turn. If they keep cooperating, so do you. If they betray you, you punish them immediately by betraying them the following turn. By adding a slight chance of forgiveness, say a few percent, you can do even better.
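
Tit-For-Tat with forgiveness takes only a few lines to express. This is a rough sketch; the function name and the 5% forgiveness rate are my own illustrative choices, not anything canonical.

    import random

    def tit_for_tat_forgiving(opponent_history, forgiveness=0.05):
        """Cooperate first, then mirror the opponent's previous move,
        forgiving a betrayal with a small probability."""
        if not opponent_history:
            return "C"                      # always open by cooperating
        if opponent_history[-1] == "D" and random.random() < forgiveness:
            return "C"                      # occasionally forgive a betrayal
        return opponent_history[-1]         # otherwise copy their last move

    # Example: ten rounds against an opponent who plays at random.
    theirs, mine = [], []
    for _ in range(10):
        mine.append(tit_for_tat_forgiving(theirs))
        theirs.append(random.choice("CD"))
    print("me:  ", "".join(mine))
    print("them:", "".join(theirs))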

This strategy rarely wins any single head-to-head matchup outright, but over repeated play it accumulates some of the best total scores of any simple strategy. It also rewards an essentially “nice,” altruistic approach.

Recently, William Press and Freeman Dyson published a paper offering a new set of strategies for the Iterated Prisoner’s Dilemma that outperform Tit-For-Tat and seemingly fool evolution. The original paper is far too technical for my limited abilities. However, there’s an excellent summary on the EDGE site by William Poundstone, complete with a brief Q&A with paper co-author William Press. There’s also a great two-page summary of the original paper’s import, written by Alexander Stewart and Joshua Plotkin and available as a PDF download here.

In essence, what the authors are arguing is that there is a way to extort your opponent by being more clever than they are. In particular, if you know that your opponent is following a basic, evolutionary, maximize-short-term-gains approach, you can not only beat them, you can take steps to dictate what their score will be (there’s a simulation sketch of this after the list below). Moreover, if two players both adopt this same strategic approach, called zero-determinant, then they must make one of three choices:

  • One player has to choose to accept some benefit but lose the game to the player who gained the initial advantage
  • One player has to choose to sabotage his own score in order to exact punishment on the opponent; the only way to get back at the other guy is to harm yourself as well
  • Both players need to agree to terms that will allow them to dictate each other’s success but not their own
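
To see the extortion in action, here is a rough simulation sketch. For an extortion factor of 3 and the conventional payoffs (T=5, R=3, P=1, S=0), Press and Dyson’s construction yields a memory-one strategy that cooperates with probabilities 11/13, 1/2, 7/26, and 0, depending on what both players did the previous round. I worked those probabilities out from their published formula, so treat them as my own derivation; the always-cooperate opponent below is just a simple stand-in for a mechanistic player, since the extortion ratio is enforced no matter how the opponent behaves.

    import random

    # PAYOFFS[(my_move, their_move)] = (my score, their score);
    # conventional values T=5, R=3, P=1, S=0 assumed.
    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    # Extortionate zero-determinant strategy: probability of cooperating,
    # keyed by last round's (my move, their move). Derived from the
    # Press-Dyson formula with extortion factor 3.
    ZD = {("C", "C"): 11/13, ("C", "D"): 1/2,
          ("D", "C"): 7/26, ("D", "D"): 0.0}

    def play(rounds=200_000, seed=1):
        rng = random.Random(seed)
        me, them = "C", "C"                 # arbitrary opening round
        my_total = their_total = 0
        for _ in range(rounds):
            me = "C" if rng.random() < ZD[(me, them)] else "D"
            them = "C"                      # stand-in opponent: always cooperate
            a, b = PAYOFFS[(me, them)]
            my_total += a
            their_total += b
        return my_total / rounds, their_total / rounds

    mine, theirs = play()
    P = 1                                   # mutual-defection payoff
    print(f"ZD player: {mine:.3f}, opponent: {theirs:.3f}")
    print(f"extortion ratio: {(mine - P) / (theirs - P):.2f}")  # comes out near 3

Against an always-cooperating opponent the stationary averages work out analytically to 41/11 ≈ 3.73 for the extortioner and 21/11 ≈ 1.91 for the victim, so the printed ratio hovers right at 3.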

It’s hard for me to wrap my head around this last option, but it’s one of the more powerful conclusions stemming from the initial research. If both sides are aware that the other side is a thinking, strategic player, it should be possible for them to agree on the most mutually beneficial outcome and then trust each other to enforce it. Neither side can improve its own score by cheating at that point.
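
That last point can be checked numerically as well. Press and Dyson show that a zero-determinant player can unilaterally pin the opponent’s long-run average score at a chosen value. The cooperation probabilities below come from my own application of their construction, targeting an opponent score of 2 under the same conventional payoffs, so treat the exact numbers as illustrative; the point is that the opponent lands on 2 no matter what it does.

    import random

    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    # Score-setting zero-determinant strategy (my derivation, illustrative):
    # pins the opponent's long-run average payoff at exactly 2.
    SETTER = {("C", "C"): 3/4, ("C", "D"): 1/4,
              ("D", "C"): 1/2, ("D", "D"): 1/4}

    def opponent_average(opponent, rounds=200_000, seed=2):
        rng = random.Random(seed)
        me, them = "C", "C"
        total = 0
        for _ in range(rounds):
            my_next = "C" if rng.random() < SETTER[(me, them)] else "D"
            their_next = opponent(me, rng)  # opponent may react to my last move
            me, them = my_next, their_next
            total += PAYOFFS[(me, them)][1]
        return total / rounds

    # Whatever the opponent does, its average score converges to 2.
    for name, strategy in [("always cooperate", lambda last, rng: "C"),
                           ("always defect",    lambda last, rng: "D"),
                           ("random 50/50",     lambda last, rng: rng.choice("CD"))]:
        print(f"{name}: {opponent_average(strategy):.3f}")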

So using these new strategies, you can in theory dominate an unaware opponent who behaves in a mechanistic fashion; two clever opponents can destroy each other; or two clever opponents can agree to maximize their own gains. The first option produces more head-to-head wins than Tit-For-Tat, while the last produces more mutual benefit.

I suppose this offers some hope for diplomatic efforts in the future. For me, having read Peter Watts’s novel Blindsight, it also offers some hope that, unlike the bleak vision proposed in that science fiction story, being a self-aware species competing against aliens driven by purely evolutionary, instinctive strategies might not be the automatic losing proposition he makes it out to be.
