Wednesday, January 16, 2008

Notes for the second meeting - Part 2: Dominated strategies and Nash equilibrium

In this second part we will look at what are rational choices given, first, that the players are rational and that this is common belief and, second, that the players are rational and know each other's strategies. This will lead to two classical solution concepts in game theory: iterated elimination of strictly dominated strategies and Nash equilibrium.

  1. Rationality and common belief of rationality.
    Look at the game in Table 1.

    Strategy B for Bob is irrational in a very strong sense. Whatever his or Ann's information is, it is never a rational thing to choose. Take any type t_b for Bob. Choosing B is rational if and only if the following holds:

    EV(B, t_b) ≥ EV(A, t_b)

    That is, whenever:

    π_bob(aB)λ(t_b)(a) + π_bob(bB)λ(t_b)(b) ≥ π_bob(aA)λ(t_b)(a) + π_bob(bA)λ(t_b)(b)

    Putting in the values of the payoff function we obtain:

    1λ(t_b)(a) + 0λ(t_b)(b) ≥ 2λ(t_b)(a) + 1λ(t_b)(b)

    That is:

    λ(t_b)(a) ≥ 2λ(t_b)(a) + λ(t_b)(b)

    But then we would have that:

    0 ≥ λ(t_b)(a) + λ(t_b)(b)

    Which is impossible given that λ(t_b) induces a probability distribution, that is:
    λ(t_b)(a) + λ(t_b)(b) = 1

    Since we took an arbitrary type for Bob, and since the last observation holds irrespective of Ann's type set T_ann, we know that it is never rational for Bob to choose B. Indeed, this is what comes out intuitively if we look at the game matrix: whatever Ann chooses, it is always better for Bob to choose A. This conclusion could also have been reached using relational models (I leave it as an exercise - note that knowledge of one's own strategy is also crucial here!).
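    The dominance argument above can be checked numerically. This is a minimal sketch, assuming only Bob's four payoff values read off from the derivation above (π_bob(aA) = 2, π_bob(bA) = 1, π_bob(aB) = 1, π_bob(bB) = 0); it confirms that A yields a strictly higher expected value than B under any belief λ(t_b) Bob's type may hold about Ann's action:

```python
# Bob's payoffs in the game of Table 1, as used in the derivation above.
pi_bob = {("a", "A"): 2, ("b", "A"): 1, ("a", "B"): 1, ("b", "B"): 0}

def expected_value(bob_strategy, belief):
    """EV of one of Bob's strategies under a belief over Ann's actions."""
    return sum(p * pi_bob[(ann, bob_strategy)] for ann, p in belief.items())

# Whatever probability lambda(t_b)(a) Bob's type assigns to Ann playing a,
# A does strictly better than B, so choosing B is never rational.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    belief = {"a": p, "b": 1.0 - p}
    assert expected_value("A", belief) > expected_value("B", belief)
```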

    I mentioned in the first lecture that classical game-theoretic analyses always assumed that the players are rational and that this fact is common knowledge. Now, observe that there are many possible states for this game where this assumption does not hold, for instance any state where Bob plays B. But let us look at states where Ann and Bob are rational and where this is commonly known. At such states Ann must believe that Bob is rational or, more precisely, all of Ann's types must assign probability 0 to Bob playing B, that is, to all pairs (B, t_b), for any type t_b of Bob. But then all of Ann's types must assign probability 1 to Bob playing A. It should then be clear that Ann can only be rational, given her belief in Bob's rationality, at states where she plays a. In other words, the only strategy profile that is rational for both players, given that Ann believes that Bob is rational, is aA.

    This strategy profile is the only one that survives iterated elimination of strictly dominated strategies, one of the best-known solution concepts in game theory. In classical game theory, a solution concept can be seen as a proposal of what rational players will/should choose. Iterated elimination of strictly dominated strategies runs as follows. First, remove from the original game all strictly dominated strategies. (A strategy s_i of player i is strictly dominated by another strategy s'_i when for all combinations of strategies s_(-i) of the other players, π_i(s_i, s_(-i)) < π_i(s'_i, s_(-i)).) Then look at the resulting sub-game, remove the strategies that have become strictly dominated there, and repeat this process until the elimination does not remove any strategies. The profiles that survive this process are said to be iteratively non-dominated.
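    The procedure just described can be sketched in a few lines for two-player games. Bob's payoffs below are the ones from the derivation above; Ann's are hypothetical, chosen only so that the example behaves as described in the text (a is her unique best response to A, b her best response to B):

```python
def iterated_elimination(strats_ann, strats_bob, pi_ann, pi_bob):
    """Iteratively remove strictly dominated pure strategies.
    pi_ann and pi_bob map profiles (s_ann, s_bob) to payoffs."""
    s1, s2 = set(strats_ann), set(strats_bob)
    changed = True
    while changed:
        changed = False
        # s is strictly dominated by t when t does strictly better
        # against every remaining strategy of the opponent.
        dom1 = {s for s in s1 if any(
            all(pi_ann[(t, o)] > pi_ann[(s, o)] for o in s2)
            for t in s1 if t != s)}
        dom2 = {s for s in s2 if any(
            all(pi_bob[(o, t)] > pi_bob[(o, s)] for o in s1)
            for t in s2 if t != s)}
        if dom1 or dom2:
            s1, s2, changed = s1 - dom1, s2 - dom2, True
    return s1, s2

pi_bob = {("a", "A"): 2, ("b", "A"): 1, ("a", "B"): 1, ("b", "B"): 0}
pi_ann = {("a", "A"): 2, ("b", "A"): 1, ("a", "B"): 0, ("b", "B"): 1}  # hypothetical

# B is eliminated in the first round; b then becomes dominated: only aA survives.
print(iterated_elimination({"a", "b"}, {"A", "B"}, pi_ann, pi_bob))  # ({'a'}, {'A'})
```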

    In our example above, elimination of strictly dominated strategies closely follows the reasoning steps we took by looking at what Ann and Bob would choose if they are rational and believe in each other's rationality. This is of course not a coincidence. In this simple game we only needed rationality for Bob and Ann and one level of interactive belief in rationality (Ann believing that Bob is rational). But in the general case, that is for arbitrarily large (finite) strategic games, if all players are rational and this is common belief then they will choose a strategy that is iteratively non-dominated.

    Theorem (Brandenburger and Dekel 1987; Tan and Werlang 1988): For any state (s, t) of a type structure for an arbitrary finite game G, if all players are rational and it is common belief that all players are rational at (s, t), then s is an iteratively non-dominated strategy profile.

    This epistemic characterization result says that iteratively non-dominated strategies are rational choices for the players, under the assumption that they are rational and that this is common belief. This last assumption concerns specifically the players' mutual expectations. Under common belief of rationality, everyone expects everyone else to choose rationally, and this is enough to exclude the choice of iteratively dominated strategies. In comparison with the specific information structure that we studied in the coordination game example, this assumption is definitely less ad hoc. It indeed covers a huge set of possible expectations, the "only" requirement being common belief in rationality.

    I mentioned that the introduction of information structures to represent the players' expectations turns the question "what is the rational thing to do?" into "where does a given set of expectations come from?" The epistemic characterization of iteratively non-dominated strategies answers that question by invoking the classical game-theoretic "intuitive" motivation for the iterated dominance solution (see e.g. section 1.1 in Myerson's book in the bibliography): if the players are rational and that is common knowledge, as classical game theorists generally assumed, then they will not play iteratively dominated strategies. In other words, if one grants that it is intuitively plausible to assume rationality and common belief in rationality, then one is committed to accept that the players will choose an iteratively non-dominated strategy.

    For more on this, see A. Brandenburger's "The power of paradox" and G. Bonanno's presentation. K. Apt, in "The many faces of rationalizability", offers an encompassing and penetrating analysis of solution concepts like iterated elimination of dominated strategies. All these references are in the bibliography.

  2. A dynamic view on iterated elimination of dominated strategies.
    Strategy profiles which are not iteratively dominated can also be characterized in a more "dynamic" way, one which recovers the algorithmic character of the elimination process. An information structure (whether a relational model or a type structure) is indeed an essentially "static" representation of the game. The players' expectations are fixed by certain assumptions - in the present case rationality and common belief of rationality - and we or the players "compute" the rational choices from there.

    In "Rational Dynamic..." (see the bibliography), J. van Benthem has proposed to look at iteratively non-dominated strategies as those that survive iterated announcements of rationality by the players. The idea is that by announcing publicly - that is, to everyone - that they are rational, the players dynamically change each other's expectations.

    Formally, these announcements take the form of operations which remove states in relational models. Here we will be mainly interested in announcements of rationality, as defined in the first part of these notes, but this style of analysis can be applied to various sorts of announcements. Following van Benthem, let us call the full relational model for a given game in strategic form the relational model whose states are in 1-to-1 correspondence with the strategy profiles, and where w ∼_i w' iff w_i = w'_i. Intuitively, a full relational model is just a copy of the original game where the players know their own strategies, but nothing more. They consider possible every combination of actions of the other players.

    Look at the full relational model for the game in Table 1. The reflexive loops are omitted.

    As before, Ann's accessibility relation runs along the solid arrows, while Bob's runs along the dashed ones. If both players announce that they are rational, then they both "eliminate" or "rule out" the possibility that they are not rational. Following the definition of rationality in relational models that we saw before, this removes aB and bB, the two states where Bob is not rational (Ann is rational everywhere in this full model). This leaves us with the following sub-model, where Ann is not rational at state bA (Bob is trivially rational in this reduced game, because he only has one action available). A second announcement of rationality thus eliminates this state, leaving only aA.
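    This announcement dynamics can be sketched directly: states are strategy profiles, a player's information cell consists of the states sharing her own strategy, and an announcement of rationality deletes every state where some player is not rational. Bob's payoffs below are those from the derivation above; Ann's are hypothetical, chosen so that the model behaves as in the text:

```python
STRATS = {"ann": ("a", "b"), "bob": ("A", "B")}
PI = {"ann": {("a", "A"): 2, ("b", "A"): 1, ("a", "B"): 0, ("b", "B"): 1},
      "bob": {("a", "A"): 2, ("b", "A"): 1, ("a", "B"): 1, ("b", "B"): 0}}

def rational(player, w, states):
    """Player i is rational at w iff no alternative strategy gives a strictly
    higher payoff at every state i considers possible (same own strategy)."""
    idx = 0 if player == "ann" else 1
    own = w[idx]
    opp = [v[1 - idx] for v in states if v[idx] == own]  # i's information cell
    def payoff(s, o):
        return PI[player][(s, o) if idx == 0 else (o, s)]
    return not any(all(payoff(s, o) > payoff(own, o) for o in opp)
                   for s in STRATS[player] if s != own)

def announce_rationality(states):
    """Public announcement: keep only the states where every player is rational."""
    return [w for w in states
            if all(rational(p, w, states) for p in ("ann", "bob"))]

model = [(x, y) for x in "ab" for y in "AB"]  # the full relational model
model = announce_rationality(model)  # removes aB and bB
model = announce_rationality(model)  # removes bA
assert model == [("a", "A")]
```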

    Again, the fact that by iterating announcements of rationality we end up in a sub-model where all states get assigned an iteratively non-dominated strategy profile is not a coincidence.

    Theorem (van Benthem 2006): The following are equivalent for states w in full game models:
    (a) f(w) is an iteratively non-dominated strategy profile,
    (b) w is in the submodel which remains after any number of successive announcements of rationality.

    In other words, by repeatedly announcing their own rationality the players shape each other's expectations in a way that leads them to play strategies which are not iteratively dominated.

    There is of course an intuitive connection between this dynamic and the "static" characterization. Removing states in a model is an operation through which the players get more and more informed about each other, and adapt their expectations accordingly. In full game models they are as uninformed as they can be, considering possible all actions of others. Upon learning that they are rational, they narrow the range of actions of others they consider possible. By learning once more that the others are rational, this time with respect to this smaller set of possibilities, what they expect the others to do becomes even more precise, and so on until they reach a point where announcing rationality does not change the players' expectations anymore. In all the states that remain, all players are rational, and this is common knowledge. In other words, iterated announcement of rationality is a process through which the expectations of the players converge towards sufficient epistemic conditions to play iteratively non-dominated strategies.

    Possible research questions: van Benthem's result is based on full relational models. What happens if we allow for bigger models, like the one we used to model Bob's two possible types in the coordination example? Along the same lines: in full epistemic models the players are never wrong about the others' strategy choices. Their information is veridical. What happens if we drop this assumption? For new work in that direction, see Mihalache and Baltag & Smets (references in Part 1). Finally, how should public announcements of rationality be formulated in type structures? Probability update? Conditioning?

    For (much) more on logics for public announcements:
    • A. Baltag, L.S. Moss, and S. Solecki. The logic of public announcements, common knowledge and private suspicions. In TARK 98, 1998.
    • H. van Ditmarsch, W. van de Hoek, and B. Kooi. Dynamic Epistemic Logic, volume 337 of Synthese Library Series. Springer, 2007.

  3. Mutual knowledge of strategies and Nash equilibrium:
    Iterated elimination of strictly dominated strategies is a very intuitive concept, but for many games it does not tell us anything about what the players will or should choose. The coordination game that we studied before is a typical example. Rationality and common belief of rationality do not exclude any strategy choice for Ann or for Bob. Whether choosing A is rational for Bob depends on what he expects Ann to choose, and believing that she is rational does not help him narrow his expectations to a or b.

    But what if Bob knows (that is, rightly believes) Ann's strategy choice? Intuitively, it is quite clear what he should do, that is, what his best response is: coordinate with her. If he knows that she plays a, for instance, then playing A is clearly the only rational thing for him to choose, and vice versa if he knows that she plays b. The situation is entirely symmetrical for Ann. If she is sure that Bob plays A, and she is right about it, then her only rational choice is to choose a, and vice versa if she knows that Bob plays B. In terms of type structures: in all states where Ann is rational and her type knows Bob's strategy choice, and where Bob is also rational and his type knows Ann's strategy choice, they play either aA or bB. The same holds for relational models: they play either aA or bB in all states where Ann and Bob are rational and where, in all situations they consider possible, they play the same strategy.

    The strategy profiles aA and bB are the Nash equilibria of that game. This solution concept is usually described as the set of profiles where no player has an incentive to unilaterally change his strategy choice. In other words, a Nash equilibrium is a combination of strategies, one for each player, such that each plays his best response given the strategy choices of the others. This is precisely what happens at the profiles aA and bB in our coordination example. At aA, for instance, a is Ann's best response, given that Bob plays A, and similarly for Bob.
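    The "no profitable unilateral deviation" description can be checked directly on the coordination game. The payoffs below are hypothetical, since the text gives no numbers: both players get 1 when they coordinate and 0 otherwise.

```python
ANN, BOB = ("a", "b"), ("A", "B")
# Hypothetical coordination payoffs: (Ann's, Bob's) for each profile.
pay = {("a", "A"): (1, 1), ("a", "B"): (0, 0),
       ("b", "A"): (0, 0), ("b", "B"): (1, 1)}

def is_nash(sa, sb):
    """A profile is a Nash equilibrium iff no player gains by deviating alone."""
    ua, ub = pay[(sa, sb)]
    no_dev_ann = all(pay[(t, sb)][0] <= ua for t in ANN)
    no_dev_bob = all(pay[(sa, t)][1] <= ub for t in BOB)
    return no_dev_ann and no_dev_bob

equilibria = [(sa, sb) for sa in ANN for sb in BOB if is_nash(sa, sb)]
assert equilibria == [("a", "A"), ("b", "B")]
```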

    This key intuition behind Nash equilibria, the idea of "best response given the action of others", is exactly what is ensured by the mutual knowledge of strategies in the example above. If Ann knows at a state that Bob chooses A, she is not only convinced of that, she is also right: Bob is playing A at that state. By playing rationally given her information, Ann is thus playing her best response given Bob's strategy choice. This is not only the case in our small example, but also in general:

    Theorem (R. Aumann & A. Brandenburger, 1995): For any two-player strategic game and epistemic model for that game, if at state s both players are rational and know each other's strategy choice, then the profile they play at s is a Nash equilibrium.

    This epistemic characterization differs from that of iterated elimination of dominated strategies in three respects. First, it requires mutual knowledge and not only belief. It is crucial that each player is right about the strategy choice of the other; otherwise his response may simply miss the mark. Second, it only requires one level of interactive knowledge. It is not necessary that both players know that they know each other's choices, nor is any higher-order level of information necessary. Finally, I have stated the result only for games with two players. One can find the general result in:
    • R.J. Aumann and A. Brandenburger. Epistemic conditions for Nash equilibrium. Econometrica, pages 1161–1180, 1995.

    Nash equilibrium can also be characterized dynamically, much in the flavor of van Benthem's approach to iterated elimination of dominated strategies. Just like one only needs one level of interactive knowledge to ensure equilibrium choices, only one announcement is needed to narrow down a relational model to the equilibrium state: the announcement of each player's strategy.

    Theorem. For any two-player strategic game and relational model for that game, if at state s player j is rational after announcing player i's strategy choice in the original model, and player i is rational after announcing player j's strategy choice in the original model, then the profile they play at s is a Nash equilibrium.

    Again, there is an intuitive connection between the static and the dynamic characterizations of Nash equilibrium. Each announcement provides the players with all the information they need to make the best decision given the other's strategy choice. Observe, however, that the two announcements of strategy are neither simultaneous nor successive. They are both made individually in the original model. A joint announcement of all players' strategy choices would trivialize any posterior announcement of rationality: it leaves only one strategy profile for all players to consider possible, which makes them trivially rational. What is required is rather a "test" of each player's rationality posterior to the announcement of his opponent's choice. This is indeed what captures the idea that each strategy in a Nash equilibrium is a best response given the choices of the others.
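    This "test" can be sketched on the same hypothetical coordination payoffs (1 for coordinating, 0 otherwise): announce one player's choice in the original model, check whether the other is rational, i.e. best-responds, in the resulting sub-model, and symmetrically. Exactly the Nash equilibria pass both tests.

```python
ANN, BOB = ("a", "b"), ("A", "B")
# Hypothetical coordination payoffs: (Ann's, Bob's) for each profile.
pay = {("a", "A"): (1, 1), ("a", "B"): (0, 0),
       ("b", "A"): (0, 0), ("b", "B"): (1, 1)}

def passes_dynamic_test(sa, sb):
    # Announcing Ann's choice sa shrinks Bob's uncertainty to one column;
    # he is rational there iff sb maximizes his payoff against sa.
    bob_ok = pay[(sa, sb)][1] == max(pay[(sa, t)][1] for t in BOB)
    # Symmetrically, announce Bob's choice in the ORIGINAL model and test Ann.
    ann_ok = pay[(sa, sb)][0] == max(pay[(t, sb)][0] for t in ANN)
    return bob_ok and ann_ok

survivors = [(x, y) for x in ANN for y in BOB if passes_dynamic_test(x, y)]
assert survivors == [("a", "A"), ("b", "B")]
```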

    To conclude, I will make two general remarks about epistemic characterizations of solution concepts. First, the reader will probably have noticed that all the epistemic characterizations mentioned above are "if... then" statements. They provide sufficient conditions on the players' expectations to play according to such and such solution concept. What about necessary conditions? Surprisingly, this is a question which makes little sense, given the way we constructed our representations of the players' information. Given a fixed strategy profile s, for example one that is a Nash equilibrium, we can construct any state we want around it. The definition of a relational model or of a type structure does not impose any constraint on what information (type or accessibility relation) should be assigned to a player at a given state. In other words, we can always construct a state where the players do in fact play a Nash equilibrium at s, but out of sheer luck! They might be totally wrong about each other's choices or beliefs, or even be irrational given what they expect, and still play a strategy which turns out to be the best response given what the others in fact chose. In view of this, most characterization results provide a weaker "converse direction" result: if a strategy profile falls under such and such solution concept, then we can construct a model where the players have expectations which match the epistemic characterization. This is obviously a much weaker statement (and usually much easier to prove), but one that is as good as one can get. van Benthem's dynamic characterization of iteratively non-dominated strategies is, however, an "if and only if". Question: does this have to do with the fact that he is working with "full models"?

    The second remark concerns more precisely the epistemic conditions for Nash equilibrium. One can reasonably wonder how the players might ever come to know the strategy choice of their opponent. This is especially true in the coordination game that I used to introduce this solution concept. The whole puzzle about that game is precisely how the players would come to have enough information about each other's actions to successfully coordinate. By using mutual knowledge of strategy choices directly, the epistemic conditions for Nash equilibrium shed little light on this particular problem. In the next two lectures we will look at two proposals which partially cope with this issue: correlated beliefs and team reasoning.


Stephan202 said...

Point 1: The inequalities λ(t_b)(a) ≥ 2λ(t_b)(a) + λ(t_b)(a) and 0 ≥ λ(t_b)(a) + λ(t_b)(a) should be λ(t_b)(a) ≥ 2λ(t_b)(a) + λ(t_b)(b) and 0 ≥ λ(t_b)(a) + λ(t_b)(b).

Point 3, paragraph 2: ``If she is sure that Bob plays A, and she is right about it, then her only rational choice is to choose a, [...]''. What is written in boldface is not part of this conditional. Acting on one's belief (being sure) is rational. Right?

Olivier Roy said...

Both remarks are correct. I've corrected the typo. The second remark raises important issues about the introspection of knowledge and beliefs. (If you believe that E occurs, do you believe that you believe that?) I'll let you guys think about it.

Stephan202 said...

I'll give it a shot:

If player x believes E in some state w, then B_xE holds in w. Now, the question ``If you believe that E occurs do you believe that you believe that?'' can be translated to asking whether the modal formula B_xE --> B_xB_xE holds for all B_x. Note that this formula is simply \Box p \to \Box\Box p, i.e. axiom 4, which defines transitive frames.

Now, intuitively: Suppose one believes, in some state w', that some state w'' is possible. Suppose further that in some state w one believes w' to be possible. Then naturally w'' is also believed to be possible in w. Or maybe not, but that's my intuition ;).

Need to visit the lecture now, so I will just submit this.

Stephan202 said...

Links to some of the papers mentioned:
R.J. Aumann and A. Brandenburger. Epistemic conditions for Nash equilibrium. Econometrica, pages 1161–1180, 1995.

A. Baltag, L.S. Moss, and S. Solecki. The logic of public announcements, common knowledge and private suspicions. In TARK 98, 1998.