Monday, January 14, 2008

Notes for the second meeting - Part 1: Rationality and Beliefs

At the end of the last lecture we saw how relational structures and type spaces allow us to represent precisely the players' mutual expectations in games, which are arguably the missing ingredient for understanding game-theoretic rationality. In this lecture I will start by spelling out the notion of a rational strategy itself, and then we will look at a classical result of the epistemic program in game theory: the characterization of iterated removal of strictly dominated strategies. I will present two perspectives on this solution concept: a "static" one, which uses rationality and common belief in rationality, and a "dynamic" one, which instead uses iterated announcements of rationality in relational models. To do that we will have to look more precisely at what beliefs and common beliefs are in type spaces and relational models. We will finally look at the epistemic characterization of Nash equilibrium, both from a static and from a dynamic point of view.

  1. Rationality and rational play in epistemic games.
    Just like in decision theory, Bayesian or instrumental rationality in games is a matter of maximizing one's payoffs given one's expectations. The key difference is that in games the expectations are about the choices of other rational agents, who also form expectations about each other's choices. As we saw in the last lecture, relational models and type spaces are two precise ways to represent these inter-dependent expectations. With them it becomes relatively simple to define instrumental rationality in games.

    In type spaces, the expected payoff of player i playing strategy s_i, given that he is of type t_i, is defined as follows:
    EV(s_i, t_i) = Σ_s'_(-i) π(s_i, s'_(-i))λ(t_i)(s'_(-i))
    Here s'_(-i) ranges over combinations of strategies, one for each player other than i. The expected value for i of playing s_i, given that he is of type t_i, is thus the sum, over all such possible combinations s'_(-i) of the other players' strategies, of the payoff π(s_i, s'_(-i)) that i gets by playing s_i against s'_(-i), weighted by the probability λ(t_i)(s'_(-i)) that i's type t_i assigns to the others playing s'_(-i). The term λ(t_i)(s'_(-i)) is a shorthand:
    λ(t_i)(s'_(-i)) = Σ_t'_(-i) λ(t_i)(s'_(-i) & t'_(-i))
    where λ(t_i)(s'_(-i) & t'_(-i)) is the probability that λ(t_i) assigns to the pair (s'_(-i), t'_(-i)).

    This definition of expected payoff in type spaces is indeed very close to the decision-theoretic idea of expected value/utility/payoff. The payoff that player i gets from choosing a strategy s_i depends on what the other players choose. Player i has some expectations or information about these choices, which are encoded by his type t_i. Hence, the payoff that i can expect from choosing s_i is just the sum of the payoffs he would get by choosing this strategy against the various possible combinations of choices of the others, π(s_i, s'_(-i)), each weighted by how probable i thinks these choices are, that is, by the probability that he, or rather his type t_i, assigns to them.
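
    A minimal sketch, in Python, of how this expected value could be computed. The data structures are hypothetical and only for illustration: lam[t_i] maps each pair (s_minus_i, t_minus_i) of opponent strategies and types to the probability that t_i assigns to it, and payoff[(s_i, s_minus_i)] stands for π(s_i, s'_(-i)).

        def expected_value(s_i, t_i, lam, payoff):
            # EV(s_i, t_i): i's payoff against each opponent combination, weighted
            # by the probability that t_i assigns to it. Summing over whole
            # (strategy, type) pairs already performs the marginalization over
            # the opponents' types.
            return sum(prob * payoff[(s_i, s_minus_i)]
                       for (s_minus_i, t_minus_i), prob in lam[t_i].items())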

    A player's strategy s_i is rational, given that i is of type t_i, simply when it maximizes expected value. In other words, it is rational for i to choose s_i, given what he expects about the choices of his co-players (his type t_i), when no other strategy gives him a strictly higher expected value.
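
    This definition translates directly into a small sketch, continuing the hypothetical setup above; expected_value is any function computing EV(s_i, t_i), for instance the one sketched before.

        def is_rational(s_i, t_i, strategies_i, expected_value):
            # s_i is rational for type t_i when no alternative strategy in
            # strategies_i has a strictly higher expected value.
            ev = expected_value(s_i, t_i)
            return all(expected_value(s_alt, t_i) <= ev for s_alt in strategies_i)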

    Consider the state (aA, t_a, u_b) in the example of the first lecture. Ann is indeed rational at that state: she is convinced that Bob will play A, against which a is the only best response. Her current strategy, a, gives her an expected payoff of 1:
    EV_ann(a, t_a) = 1(1/2) + 1(1/2) + 0(0) + 0(0)
    while the expected payoff of choosing b instead is 0:
    EV_ann(b, t_a) = 0(1/2) + 0(1/2) + 1(0) + 1(0)
    Bob, on the other hand, is not rational at that state. The problem is not that he is choosing a wrong strategy in itself; after all, he successfully coordinates with Ann at that state. Bob's irrationality stems from the fact that he mistakenly believes that Ann chooses b. Given this belief, his best response would be to choose B, not A:
    EV_bob(A, u_b) = 1(0) + 0(1)
    EV_bob(B, u_b) = 0(0) + 1(1)
    One can easily check that, in this example, Ann and Bob are rational at (aA, t_a, t_b). Their respective strategy choices maximize expected value given their types.
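
    The computations above can be reproduced with a small sketch. The coordination payoffs (1 for matching on aA or bB, 0 otherwise) and the probabilities are read off the sums in the text, with the zero-probability terms dropped; the variable names are mine.

        payoff_ann = {('a', 'A'): 1, ('a', 'B'): 0, ('b', 'A'): 0, ('b', 'B'): 1}
        payoff_bob = {('A', 'a'): 1, ('A', 'b'): 0, ('B', 'a'): 0, ('B', 'b'): 1}

        # Ann's type t_a: probability 1/2 on each of Bob's types, both playing A.
        lam_ann_t_a = {('A', 't_b'): 0.5, ('A', 'u_b'): 0.5}
        # Bob's type u_b: probability 1 on Ann playing b while being of type t_a.
        lam_bob_u_b = {('b', 't_a'): 1.0}

        def ev(s_i, lam_t_i, payoff_i):
            return sum(p * payoff_i[(s_i, s_other)]
                       for (s_other, _t_other), p in lam_t_i.items())

        print(ev('a', lam_ann_t_a, payoff_ann))  # 1.0: a maximizes Ann's expected value
        print(ev('b', lam_ann_t_a, payoff_ann))  # 0.0
        print(ev('A', lam_bob_u_b, payoff_bob))  # 0.0: A is not Bob's best response
        print(ev('B', lam_bob_u_b, payoff_bob))  # 1.0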

    Rational choices in relational models are also conceived in Bayesian terms, with the difference that the payoffs and expectations are now defined qualitatively instead of in terms of utility values and probability functions. Look again at state (aA, t_a, u_b), but this time in the Kripke structure (which is state aA). Here Bob is sure that Ann plays b: he considers only one state possible, namely bA. But in that context he would strictly prefer the outcome bB. Switching his strategy choice from A to B would give him an outcome he strictly prefers, in all eventualities that he considers possible. Choosing A is, in other words, a very bad decision for Bob, hence the following definition of an irrational strategy in a relational model.

    Let w_i be i's component of f(w), that is, i's strategy in the profile that f assigns to w. Given a state w, I write w[s_i/w_i] for the state, if any, that is just like w except that i's strategy in it is s_i. A player i is said to be irrational at state w when there is an s'_i ≠ w_i such that for all w' ∼_i w, w' <_i w'[s'_i/w'_i] (observe that it is important here that players know their own actions, so that w'_i = w_i). In words, a player is irrational at a state w when there is an action of his, different from the one he plays at w, which gives him an outcome he strictly prefers in every situation he considers possible.

    A player is rational when he is not irrational, that is, when for each of his alternative actions s'_i there is a state he considers possible where his current strategy choice is at least as good as s'_i. Again, knowledge of one's own choice is crucial here. In our model, Ann and Bob are both rational in that sense at aA.
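
    Here is a rough sketch of that test, under assumed data structures: acc[i] is i's accessibility relation as a set of (state, state) pairs, plays(i, w) is i's strategy at w, strictly_prefers(i, v, w) encodes i's preference between states, and substitute(w, i, s) returns the state just like w except that i plays s, or None if there is no such state.

        def irrational_at(i, w, strategies_i, plays, acc, strictly_prefers, substitute):
            # i is irrational at w iff some alternative strategy would be strictly
            # better, for i, at every state i considers possible at w.
            accessible = [v for (u, v) in acc[i] if u == w]
            for s_alt in strategies_i:
                if s_alt == plays(i, w):
                    continue
                if accessible and all(substitute(v, i, s_alt) is not None and
                                      strictly_prefers(i, substitute(v, i, s_alt), v)
                                      for v in accessible):
                    return True
            return False

        def rational_at(i, w, strategies_i, plays, acc, strictly_prefers, substitute):
            # Rational = not irrational: every alternative is matched or beaten
            # at some state i considers possible.
            return not irrational_at(i, w, strategies_i, plays, acc,
                                     strictly_prefers, substitute)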

    In our example, the two models of the agents' expectations and their associated notions of rationality thus give the same answer to the question "what is it rational for Ann and Bob to choose?" They allow us to assess precisely whether a player is choosing rationally, given what he believes. But they only partially answer the question of what, in general, a rational play is. Rationality as just defined depends entirely on the players' expectations, and one can legitimately wonder where these expectations come from. For instance, what possible grounds would Ann have for being convinced that Bob will play A?

    The epistemic program in game theory takes one step towards answering that question, by looking at what the rational choices in a game are given certain generic, less ad hoc beliefs and expectations. For example, instead of asking what Ann's rational choices are given that she is sure that Bob plays A, one might ask what it is rational for her to play in all situations where she believes that Bob is rational, or where she believes that he believes that they are both rational. This, of course, requires defining these kinds of general beliefs precisely in epistemic models.


    Possible research questions: Is there a general correspondence between type spaces and relational models for games? Do their associated definitions of rationality always coincide? van Benthem (see bibliography) proposes different definitions of a rational play in relational models; what would their quantitative (type space) counterparts be? What happens if we drop the assumption that players know their own strategy? Look at:

    • D. Mihalache. Safe belief, rationality and backwards induction in games. Master’s thesis, Oxford University Computing Laboratory, September 2007.
    • A. Baltag and S. Smets. Knowledge, safe belief and dynamic rationality: logics for higher-order belief revision. Oxford and Vrije Universiteit Brussel, Unpublished manuscript.

  2. Beliefs and common beliefs.

    We saw in the last lecture what it is for an agent i to believe that an event E occurs at a state w in a relational model: it is just for E to occur at all states that i considers possible at w. As E can be any set of states, one can easily define the "event" corresponding to player j being rational, R_j = {w : j is rational at w}, and with it identify the set of states where i believes that j is rational: B_i(R_j) = {w : ∀w' ∼_i w, w' ∈ R_j}. In our example, at states aA and bA Bob believes that Ann is rational.
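
    The belief operator is a simple set computation, sketched below under a hypothetical encoding: acc[i] is i's accessibility relation as a set of pairs over a finite set of states, and an event is just a set of states.

        def believes(i, event, states, acc):
            # B_i(E): the states w such that every state i considers possible at w
            # lies in E.
            return {w for w in states
                    if all(v in event for (u, v) in acc[i] if u == w)}

        # For instance, with R_j the set of states where j is rational,
        # believes(i, R_j, states, acc) is the set of states where i believes that
        # j is rational; nesting the operator gives higher-order beliefs.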

    Higher-order beliefs in rationality are defined in exactly the same way. One can check, for instance, that in all states Ann considers it possible that Bob is not rational, and that Bob knows that she believes this - knowledge, in this context, is understood as belief whose content is true.

    These notions can also be defined in type spaces. Recall that there, states are just tuples (s, t) where s is a strategy profile and t is a "profile" of types. In type structures, beliefs are defined relative to types. An event for a type t_i is a set of pairs (s_(-i), t_(-i)), i.e. a set of strategy choices and types for all the other players. Type t_i of player i is said to believe the event E ⊆ S_(-i) x T_(-i) whenever the probability of E is 1 according to λ_i(t_i). In our example, Ann thus believes (in her only type) that Bob plays A, because she assigns probability 1 to that event, which consists of the A column in the matrix specifying Ann's function λ_ann.
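
    A sketch in the same style (the dictionaries are hypothetical; the numbers are the ones from Ann's type in the example):

        def type_believes(t_i, event, lam):
            # t_i believes E iff lam(t_i) assigns total probability 1 to E.
            return sum(p for pair, p in lam[t_i].items() if pair in event) == 1.0

        # Ann's only type believes "Bob plays A": the event collects every pair
        # whose strategy component is A, and it receives probability 1/2 + 1/2 = 1.
        lam = {'t_a': {('A', 't_b'): 0.5, ('A', 'u_b'): 0.5}}
        bob_plays_A = {('A', 't_b'), ('A', 'u_b')}
        print(type_believes('t_a', bob_plays_A, lam))  # True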



    Higher-order information is built in type structures along the same lines as in relational models. Let B_i(E) = {t_i : t_i believes that E} be the set of i's types where he believes that E. Agent j believes that i believes that E when λ_j(t_j) assigns probability 1 to the event consisting of all pairs (s_(-j), t_(-j)) such that t_i ∈ B_i(E), that is, the pairs where player i's type believes that E occurs. In our example, Ann does not believe that Bob believes that she plays a. This would boil down, for her, to assigning probability 1 to Bob being of type t_b, which is not the case. Bob, however, knows - again taking knowledge as truthful belief - that Ann is uncertain about what he believes she will do.
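
    One step up the hierarchy can be sketched the same way. The helper below is hypothetical and assumes only two players, so that the pairs in λ_j(t_j) are just (strategy, type) pairs for player i; B_i_of_E is the set of i's types that believe E.

        def believes_that_i_believes(t_j, B_i_of_E, lam_j):
            # j's type t_j believes that i believes E when the pairs whose i-type
            # lies in B_i(E) receive total probability 1 under lam_j(t_j).
            return sum(p for (s_i, t_i), p in lam_j[t_j].items()
                       if t_i in B_i_of_E) == 1.0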


    From a given state in a relational model, or a given type in a type structure, one can thus compute any (finite) level of such higher-order information. This forms what is often called the agent's belief hierarchy: starting from his beliefs about the others' strategy choices, the "ground facts", and building up to higher and higher orders of beliefs about the others' beliefs.


    One can also define various notions relative to the information of groups of agents. Common belief is one of the most frequently used. An event is said to be common belief when all players believe it, all players believe that all players believe it, and so on for any finite iteration of "all players believe that". Alternatively, in a more fixed-point-like formulation, an event is common belief when all agents know that it is common belief.

    In relational models, this translates as follows: let ∼*_N be the reflexive and transitive closure of the union of all the accessibility relations of the players in N. E is common belief at a state w when it occurs at all states w' such that w' ∼*_N w.
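
    A sketch of this computation, under the same hypothetical encoding of relational models used above (acc maps each player to a set of (state, state) pairs, and an event is a set of states):

        def common_belief_at(w, event, acc):
            # E is common belief at w when E holds at every state reachable from w
            # by the reflexive-transitive closure of the union of all relations.
            union = set().union(*acc.values())
            reachable, frontier = {w}, {w}
            while frontier:
                frontier = {v for (u, v) in union if u in frontier} - reachable
                reachable |= frontier
            return reachable <= event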

    In type spaces, we use an inductive definition. Take a state (s, t) - recall that a state is a pair (strategy profile, type "profile"). The set of all states where the event E is commonly believed to degree 0, written B^0(E), is the set of all states where all the types at that state believe E. Now take the set of all states where E is commonly believed to degree m. The set of states where E is commonly believed to degree m+1 is then defined as the set of pairs (s, t) where all types believe that E is commonly believed to degree m. The set of states where E is common belief is just the set of states where E is commonly believed to degree m, for every finite m.

    It is important to realize that, in both Kripke and type structures, common belief is not something that pertains to a single agent. It says something about what every agent in a group believes at a certain state. It is also highly interactive: it says something not only about the fact that all agents believe that an event occurs, but also about the fact that all agents believe that all others believe it, for any number of iterations of "believe that". It is, finally, a very strong notion, which is notoriously difficult to attain.

    For more on knowledge, beliefs and the difficulties that arise from common knowledge in interaction, see Aumann (Interactive Epistemology) and chapter 5 of Osborne and Rubinstein in the bibliography. One can find an interesting new take on the topic, with more references, in:

    • P. Egre and D. Bonnay. A Non-Standard Semantics for Inexact Knowledge with Introspection. In Proceedings of the ESSLLI 2006 Workshop Rationality and Knowledge, R. Parikh and S. Artemov (eds.), 2006.
