We have already seen, in Example 8.2, one approach to explaining cooperation, in terms of memory costs. Here is another, in terms of regret.
Suppose that prisoner's dilemma is played for N rounds. There are 2^(2^N − 1) pure strategies for each player in N-round FRPD; computing the regret of
each one can be rather complicated. Thus, when using regret, it is reasonable
for the players to focus on a much more limited set of strategies. Suppose that
each player believes that the other player is using a strategy where he plays
tit for tat for some number k of rounds, and then defects from then on. Call this strategy s_k. (So, in particular, s_0 is the strategy of always defecting and s_N is tit for tat.) A straightforward argument shows that if a player believes that the other player is playing a strategy of the form s_k for some k with 0 ≤ k ≤ N, then the strategy that minimises player 1's regret is either s_{N−1}, s_1, or s_0; see [Halpern and Pass, 2009] for details. Moreover, for sufficiently large N (it turns out that N > ((u_3 + u_2 − u_1)/(u_2 − u_1)) + 1 suffices), the strategy that minimises regret is s_{N−1}. Thus, if each player believes that the other player is playing a strategy of the form s_k (a reasonable set of strategies to consider), then we get a strategy that looks much more like what people do in practice.
8.6 Conclusions
As I mentioned earlier, economists have considered quite a few alternatives to
Nash equilibrium, including ε-Nash equilibrium, subgame perfect equilibrium,
sequential equilibrium, rationalizability, and many other notions [Osborne
and Rubinstein, 1994]. But none of these directly addresses the concerns that
I have addressed here: fault tolerance, computation, lack of awareness, and
lack of knowledge of other players' strategies. The results and approaches
of this chapter are clearly only first steps. Here are some directions for further research (some of which I am currently engaged in with my collaborators):
While (k, t)-robust equilibrium does seem to be a reasonable way of capturing some aspects of robustness, for some applications it does not go
far enough. I said earlier that in economics, all players were assumed to
be strategic, or 'rational'; in distributed computing, all players were either
'good' (and followed the recommended protocol) or 'bad' (in which case
they could be arbitrarily malicious). Immunity takes into account the
bad players. The definition of immunity requires that the rational players
are not hurt no matter what the 'bad' players do. But this may be too
strong. As Ayer et al. [2005] point out, it is reasonable to expect a certain