Monday, January 7, 2008

Notes for the first meeting

  1. General introduction:
    Decision and game theory are two extensive mathematical toolboxes to study rational decision making. Both conceive of rationality as instrumental: to choose rationally is to choose what one thinks is the best means to reach one's ends.

    Decision- and game-theoretic models are very similar in many respects, but they differ in a simple yet crucial one, namely what determines the outcome of a decision. Decision theory is concerned with single-agent decision making under uncertainty. It models individual decision makers who have to make a decision "against Nature", i.e. in an uncertain environment. In decision-theoretic situations, the choice of the decision maker together with the actual state of the world, which may in turn depend on some random happenings, completely determines the outcome of a decision. Game theory, on the other hand, is concerned with the interaction of rational decision makers. Here the outcome of a decision depends on the choices of all the agents involved in the situation.

    From this small difference---one agent/many agents---comes a whole array of different problems and issues. These rest mainly on the fact that in decision-theoretic scenarios the decision maker's expectations are based on what he knows and believes about the state of his environment, while in game-theoretic scenarios expectations are based on what agents think the others will choose. In this project we will mainly look at how rational expectations determine choices in games. But as a start it is instructive to look first at how they determine choices in decision-theoretic scenarios.

  2. Rational expectations in decision under uncertainty.
    Decision theory deals with two kinds of uncertainty: endogenous and exogenous. Situations of exogenous uncertainty are situations in which the results of the agent’s actions depend on random or non-deterministic occurrences in the environment. Exogenous uncertainty is thus the result of objective non-determinacy, external to the decision maker. Buying a lottery ticket is a typical example. One cannot simply choose to buy “the” winning ticket. All one can do is pick a ticket and wait for the drawing to determine whether it is a winner. In models of decision making under exogenous uncertainty, the decision maker has to choose between different “lotteries”, that is, probability distributions over the set of outcomes. Buying a lottery ticket, for instance, would then be represented as an action which gives the agent a certain probability of winning.

    Situations of endogenous uncertainty, on the other hand, are situations in which the outcomes of the agent’s decisions depend on the actual state of the world, about which he has only partial information. The following is a classical example, from Savage (1954, p.17). Imagine an agent who is making an omelet. Suppose he has already broken five eggs into the bowl and he is about to break the sixth and last one. Whether this will result in a good omelet depends on the state of this last egg, which can be rotten or not. The agent, however, does not know whether the egg is rotten. In slightly more technical jargon, he lacks information about the actual state of the egg, and this makes the outcome of his decision uncertain. Note the contrast with buying a lottery ticket. There is no chance or randomness involved here. The egg is rotten or it is not. What matters is that the agent does not know the state of the egg. Decision theoretic models, for example the one proposed by Anscombe and Aumann (1963), usually represent such situations with a set of possible states. In the omelet example there would be two states, “the egg is rotten” and “the egg is not rotten”. The uncertainty over these states is represented either qualitatively, by an epistemic accessibility relation, or quantitatively, by a probability distribution.

    The most widespread expression of instrumental rationality in the face of uncertainty, whether exogenous or endogenous, is Bayesian rationality, also known as maximization of expected utility/payoffs/values. A decision problem usually consists of a set of states, representing the agent's uncertainty, a set of outcomes, which have a certain value for the agent, and a set of actions, which are functions from the set of states to the set of outcomes. This simply encodes the idea that the result (outcome) of an agent's action depends on the state of the world. The agent's uncertainty is usually represented by a probability distribution over the set of states. The expected value of an action is the sum of the values of all its possible outcomes, weighted by their respective (objective and/or subjective) probabilities. To be instrumentally rational, in the Bayesian sense, is to choose the action, or one of the actions, which maximizes this sum. In other words, the agent is instrumentally rational when he chooses an action which gives the best expectations.
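
    To make this concrete, here is a minimal Python sketch of Bayesian maximization in the omelet scenario. The probabilities, outcome labels and utility numbers are hypothetical, chosen only for illustration.

      # States and the agent's (subjective) probabilities over them.
      states = ["rotten", "good"]
      belief = {"rotten": 0.1, "good": 0.9}

      # Actions are functions from states to outcomes, encoded here as dicts.
      actions = {
          "break into bowl":   {"rotten": "six eggs ruined",
                                "good": "six-egg omelet"},
          "break into saucer": {"rotten": "five-egg omelet, saucer to wash",
                                "good": "six-egg omelet, saucer to wash"},
          "throw away":        {"rotten": "five-egg omelet",
                                "good": "five-egg omelet, one egg wasted"},
      }

      # Hypothetical utilities over the outcomes.
      utility = {
          "six eggs ruined": 0,
          "six-egg omelet": 10,
          "five-egg omelet, saucer to wash": 7,
          "six-egg omelet, saucer to wash": 9,
          "five-egg omelet": 8,
          "five-egg omelet, one egg wasted": 6,
      }

      def expected_utility(action):
          """Sum of the utilities of the action's possible outcomes,
          weighted by the agent's probabilities over the states."""
          return sum(belief[s] * utility[actions[action][s]] for s in states)

      # Bayesian rationality: choose an action that maximizes this sum.
      best = max(actions, key=expected_utility)
      print(best, expected_utility(best))  # -> break into bowl 9.0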

    It is important to see that in these scenarios, the expectations of the agent are fixed by the probability distribution induced by the objective random event, or by the (partial) information he has about the state of the world. These expectations are fixed in the sense that they do not depend on the agent’s preferences and possible choices. They only depend on "Nature", the decision-making environment, which does not in turn form expectations about what the decision maker will choose. This is where decision-theoretic scenarios differ from game-theoretic ones.

    Some more references on decision theory:
    • L.J. Savage. The Foundations of Statistics. Dover Publications, Inc., New York, 1954.
    • R. Jeffrey. The Logic of Decision. McGraw-Hill, New York, 1965.
    • R.B. Myerson. Game Theory: Analysis of Conflict. Harvard University Press, 1991 (1997 edition). Chapter 1.
    • J.M. Joyce. "Bayesianism". In A.R. Mele and P. Rawling, editors, The Oxford Handbook of Rationality. Oxford University Press, 2004.

  3. Games (I - basic definitions and concepts)
    The key ingredients of a game-theoretic scenario are similar to those in decision theory. A given set of agents has to choose between a number of actions. The combination of these actions, one for each agent, determines an outcome, which each agent values differently.

    Two main models are used in game theory: extensive and strategic representations. Extensive games are often represented as trees, as in the following figure.

    [Figure: a game tree for this example. Bob moves at the root; Ann moves at each of the two nodes that follow.]

    The order of the nodes in the tree represents the sequential structure of the game. In this example Bob plays first, choosing Hi or Lo, and Ann plays second, choosing between Hi and Lo if Bob played Hi, and between hi and lo if Bob played Lo. At each node the player to move thus has a number of actions available. A strategy for a player i in an extensive game is, so to speak, a plan for all eventualities. It is a function which specifies an action to take at each node where player i has to make a decision.

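    Since a strategy specifies a choice at every decision node, we can sketch it in Python as a function (here a dict) from nodes to actions. The node names are mine, not from the notes; only the tree's shape is taken from the figure above.

      # Bob moves at the root; Ann moves at the node reached by Hi and at
      # the node reached by Lo. A strategy fixes an action at each of a
      # player's decision nodes -- "a plan for all eventualities".
      bob_strategy = {"root": "Hi"}
      ann_strategy = {"after_Hi": "Lo", "after_Lo": "hi"}

      def play(bob_strategy, ann_strategy):
          """Play out the game: Bob's choice determines which of Ann's
          planned choices is actually carried out."""
          first = bob_strategy["root"]
          second = ann_strategy["after_" + first]
          return (first, second)

      print(play(bob_strategy, ann_strategy))  # -> ('Hi', 'Lo')

    Note that Ann's strategy fixes a choice even at the node that Bob's actual choice makes unreachable; that is exactly what "a plan for all eventualities" amounts to.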
    For this project we will be mainly interested in games in strategic representation. These abstract away from the sequential structure. The players choose once and for all a strategy for the whole game. At this point it is worthwhile to get slightly more formal.

    A game in strategic form contains a finite set of agents N and, for each agent i ∈ N, a finite set S_i of strategies. A strategy profile s is a combination of strategies, one for each agent. I will often use the notation s_(-i) to denote a combination of strategies for all players except i. Each agent is also endowed with a preference relation ≤_i over the set of profiles. This relation is usually assumed to be transitive, reflexive and total. In mainstream game theory the preference relations are usually represented by a collection of real-valued payoff functions π_i which assign to each strategy profile a payoff for player i. Games in strategic form are often represented as matrices, as in Table 1.

    Table 1: a coordination game. The rows are Ann's strategies, the columns are Bob's; each cell gives the payoff, which in this game is the same for both players.

              A     B
        a     1     0
        b     0     1

    This is a two-player game. The row player, whom I will call Ann, has two available actions or strategies, a and b. Bob, the column player, also has two strategies, A and B. In this game the payoffs are identical for both players. If they coordinate, either on aA or on bB, they get 1. Otherwise they get 0.
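
    For concreteness, here is one possible way to encode this game in Python (the encoding is mine, not standard notation): one payoff dictionary per player, indexed by strategy profiles (Ann's strategy, Bob's strategy).

      S_ann = ["a", "b"]
      S_bob = ["A", "B"]

      # In this particular game the two payoff functions coincide.
      payoff_ann = {("a", "A"): 1, ("a", "B"): 0,
                    ("b", "A"): 0, ("b", "B"): 1}
      payoff_bob = dict(payoff_ann)

      # The players get 1 exactly when they coordinate:
      for s in [(x, y) for x in S_ann for y in S_bob]:
          print(s, payoff_ann[s], payoff_bob[s])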

    The game depicted in Table 1, often called a "coordination" game, is of particular interest for us. What is the rational thing to choose, say, for Ann? Intuitively, it all depends on what she expects Bob to choose. If she thinks Bob will choose A, then it is rational for her to choose a, and symmetrically if she thinks Bob will choose B. But how can she come to have such beliefs?

    In games in strategic form, it is generally assumed that the players choose simultaneously, without observing (or knowing) each other's choices, and without telling the others in advance what they plan to choose. This means that in our coordination game, Ann and Bob are left alone with their expectations. On the other hand, it is often assumed that both players are rational, and that this fact, together with the structure of the game, is common knowledge---we will come back to this later---i.e. that they know that they are rational, they know that they know that they are rational, and so on. This means that in making their decisions, both Ann and Bob look at the game, and try to anticipate each other's choice on the basis of what they know about the payoff structure and about each other's beliefs and rationality.

    Observe here the crucial difference with the decision-theoretic scenario sketched above. Here Ann's and Bob's expectations are tightly interconnected. Ann expects Bob to choose rationally. But she knows that Bob will do so on the basis of what he expects her to choose. This, in turn, also depends on what she thinks Bob thinks she thinks Bob will choose, and so on ad infinitum. The same holds for Bob. In decision-theoretic scenarios, on the other hand, the decision maker's expectations only stem from his uncertainty about the state of his environment, which does not depend on what the environment "expects" the decision maker to choose. In short, mutual expectations (knowledge and beliefs) are at the very heart of interactive reasoning in game theory.

    For more on strategic games, see Myerson and Osborne & Rubinstein in the bibliography. The classic in the field is of course: J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press: Princeton, NJ, 1944.

  4. Types and states; models of the players' information.
    Two models of the agents' information (knowledge and beliefs) are mainly used in the literature on what is called the epistemic program in game theory: the qualitative Kripke structures or relational models that we know from (epistemic) modal logic---these are for historical reasons called Aumann structures or partition structures in the economics literature---and the quantitative type spaces. I will first introduce them formally, and then look at an example.

    A relational model of a game in strategic form is a set of states W, together with a collection of epistemic accessibility relations ∼_i, one for each player i, and a function f which gives, for each state in W, the strategy profile that is played at that state. This function is assumed to be total, but it need not be injective or surjective.

    Each state w of a relational model represents a possible play of the game. The strategy that each player chooses in that play is given by the function f. Observe that there need not be a one-to-one correspondence between the possible states and the set of profiles. There can be many states in which the same profile is chosen, and there can be profiles which are chosen in no state of the model.

    The information of the players at each state is given by the accessibility relations. If state w' is accessible from state w for player 1, this means that, at w, player 1 considers that the actual play of the game might be w'. The relation is usually assumed to be transitive and Euclidean (if w ∼_i w' and w ∼_i w'' then w' ∼_i w''). An event E is just a set of states, i.e. E ⊆ W. An event E occurs at a state just in case that state is in E. A player is said to believe that a certain event occurs at a state when this event occurs at all states that he considers might be the actual one. In other words, a player believes that a certain event occurs when he does not consider it possible that it does not.
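
    Here is a minimal Python sketch of these definitions, on toy data of my own (three states, one relation); it is not yet the model of the example below.

      W = ["w1", "w2", "w3"]

      # f: the strategy profile played at each state.
      f = {"w1": ("a", "A"), "w2": ("a", "B"), "w3": ("b", "A")}

      # An accessibility relation as a set of (from, to) pairs.
      R_ann = {("w1", "w1"), ("w2", "w1"), ("w3", "w3")}

      def accessible(R, w):
          return {v for (u, v) in R if u == w}

      def believes(R, w, event):
          """A player believes event E (a set of states) at w iff every
          state he considers possible at w is in E."""
          return accessible(R, w) <= set(event)

      # The event "Bob plays A" is the set of states where he does:
      bob_plays_A = {w for w in W if f[w][1] == "A"}
      print(believes(R_ann, "w2", bob_plays_A))  # True: from w2 Ann only sees w1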

    A type structure for a given strategic game consists, for each player i, of a set of types T_i and a function λ_i which gives, for each type in T_i, a probability distribution on the set of pairs (s_(-i), t_(-i)), where s_(-i) is as above and t_(-i) is a combination of types, one for each player except i. Types and their associated image under λ_i encode possible information states of the players in that game. Intuitively, the function λ_i gives, for each type of player i, his probabilistic beliefs about the others playing some strategies and being of a certain type, i.e. a probability distribution on the set of pairs (s_(-i), t_(-i)). A state in a type structure is a tuple (s, t) where s is a strategy profile and t is a combination of types, one for each player.
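
    In the two-player case, one might encode λ_i in Python as a dictionary mapping each of i's types to a distribution over the opponent's (strategy, type) pairs. The helper below is a sketch of my own; it just checks that such a dictionary really is a type function. The concrete numbers for our running example appear further down.

      def is_type_function(lam, types, opp_strategies, opp_types):
          """Check that lam assigns to every type in `types` a probability
          distribution over the opponent's (strategy, type) pairs."""
          pairs = {(s, t) for s in opp_strategies for t in opp_types}
          return all(
              set(lam[t]) <= pairs and abs(sum(lam[t].values()) - 1.0) < 1e-9
              for t in types
          )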

    States in relational models and states in type structure thus encode very similar information. They specify a particular play of the game, i.e. a strategy profile, and the information the players have at that play. In relational models this information is represented by the players' accessibility relations. In type structures it is represented by the players' types.

    Let us look at an example to make this clearer. Take the game pictured in Table 1. We are going to build a model of this game where both Bob and Ann are certain about each other's strategy choice, but where Bob is certain about what Ann believes while she is uncertain about Bob's information.

    That Bob is certain about Ann's information translates into Ann having only one type, i.e. T_ann = {t_a}. This is only for simplicity. We could do with more types for Ann and make λ_bob assign probability 1 to one of them. On the other hand, for Ann to be uncertain about Bob's information means that he has more than one type: T_bob = {t_b, u_b}. The function λ_ann gives Ann's partial beliefs about Bob's types and strategies. An insightful way to specify these functions is by using matrices, as in Table 2.

    Table 2: λ_ann(t_a), Ann's beliefs about Bob's strategies and types.

                A      B
        t_b    1/2     0
        u_b    1/2     0

    The rows are Bob's types, the columns are his possible strategies. The values in the cells represent the player's degrees of belief in the others playing some strategies and being of some type. In our case we want to represent the fact that Ann is sure that Bob will play A but that she is uncertain about what he believes. Ann's certainty about Bob's strategy choice boils down to putting all the probability weight on the column where Bob plays A. This column sums up to one. On the other hand, Ann's uncertainty about Bob's information is represented by the fact that she assigns 1/2 to him being of type u_b, and 1/2 to him being of type t_b. At type t_b (Table 4), Bob is certain that she will play a. At type u_b (Table 3), he is certain that she will play b. So Ann is sure that Bob is certain about what she will do. But she does not know whether Bob is convinced that she will do a or convinced that she will do b.

    Table 3: λ_bob(u_b), Bob's beliefs about Ann at type u_b.

                a      b
        t_a     0      1

    Table 4: λ_bob(t_b), Bob's beliefs about Ann at type t_b.

                a      b
        t_a     1      0

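    In the dictionary notation sketched earlier, and reusing the is_type_function helper from that sketch, the beliefs of Tables 2-4 come out as follows.

      lambda_ann = {  # Table 2: all weight on A, split 1/2-1/2 over Bob's types
          "t_a": {("A", "t_b"): 0.5, ("A", "u_b"): 0.5,
                  ("B", "t_b"): 0.0, ("B", "u_b"): 0.0},
      }

      lambda_bob = {
          "t_b": {("a", "t_a"): 1.0, ("b", "t_a"): 0.0},  # Table 4: certain of a
          "u_b": {("a", "t_a"): 0.0, ("b", "t_a"): 1.0},  # Table 3: certain of b
      }

      assert is_type_function(lambda_ann, ["t_a"], ["A", "B"], ["t_b", "u_b"])
      assert is_type_function(lambda_bob, ["t_b", "u_b"], ["a", "b"], ["t_a"])
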
    This type structure can easily be represented with a relational model, as in the figure below. Here we need eight states, i.e. two "copies" of each strategy profile, one for each possible "type" of Bob. In the figure the blue states are those where Bob is of type u_b, while the red states are those where he is of type t_b.

    [Figure: a relational model of the coordination game with eight states, two copies of each strategy profile: blue states, where Bob is of type u_b, and red states, where he is of type t_b. Plain arrows give Ann's accessibility relation, dashed arrows give Bob's.]

    Ann's and Bob's information is represented by the arrows: plain for Ann and dashed for Bob. These should be seen as transitive, and the "reflexive" loops are omitted. There should be one at every state except, for Bob, at the red bA and bB states and at the blue aA and aB states, and, for Ann, at the aB and bB states of both colors.

    The fact that, at type t_b, Bob is convinced of Ann's strategy choice is represented by the fact that at all states where he is of that type he considers possible only states where she plays a, and vice versa at type u_b. The same goes for Ann. She is sure that Bob will play A: at all states she only considers possible the states where he plays A. But she is uncertain about Bob's information. At the blue state bA, where Bob believes that she will play b, she considers it possible that Bob thinks she will play a.

    Observe that at the blue state bA both have "true" beliefs about each other's choices. But this need not be the case. At the blue state aA, for example, Bob believes that Ann plays b, while in fact she is playing a.

    Observe that in this relational model each player knows his own strategy choice. This is revealed by the fact that Ann's accessibility relation only runs along the horizontal axis and Bob's along the vertical axis, where they respectively play the same strategies. This is a general assumption in relational models for games, but the definition does not enforce it. Here it holds because we built our model on the basis of a type structure in which, by definition, the players are fully and truthfully informed about their own strategy choices. Just look at the definition of the functions λ_i: they assign probabilities to combinations of types and strategies of the others, not to i's own type and strategy. Another interesting consequence of the fact that we built our relational model on the basis of a type space is that we obtain relations that are transitive and Euclidean. This is not a coincidence; see: Halpern, J. Y. (1991). The relationship between knowledge, belief, and certainty. Annals of Mathematics and Artificial Intelligence 4, 301-322.
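
    These two properties are easy to test mechanically. Here is a small sketch, run on toy data of my own; with a relation extracted from the model above it should likewise print True twice.

      def is_transitive(R):
          # if w ~ u and u ~ v then w ~ v
          return all((w, v) in R
                     for (w, u1) in R for (u2, v) in R if u1 == u2)

      def is_euclidean(R):
          # if w ~ u1 and w ~ u2 then u1 ~ u2
          return all((u1, u2) in R
                     for (w1, u1) in R for (w2, u2) in R if w1 == w2)

      R = {("w", "v"), ("w", "u"), ("v", "v"), ("v", "u"),
           ("u", "u"), ("u", "v")}
      print(is_transitive(R), is_euclidean(R))  # -> True True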

    With these representations of the players' information in hand, we are ready to look at what it means to maximize one's payoffs given one's information in a game. This will be the topic of our next meeting. Until then I strongly recommend looking at Johan van Benthem's and Adam Brandenburger's papers in the bibliography.

7 comments:

Stephan Schroevers said...

Point 4, paragraph 4: ``An event E occurs at a state just in case that state is in W.''---Since every state is in W, I suppose that sentence should read: ``An event E occurs at a state just in case that state is in E.'' Yes or no?

Olivier Roy said...

That is correct. I've corrected the typo. Thanks for pointing it out!

Unknown said...

I have first a comment and, at the end, a question. The comment is related to what I also expressed during the lecture, namely that I am puzzled by the fact that a mental event can split the world. As far as I have understood, in a Kripke model we have possible worlds, in which physical or real events happen, and we encode this in the valuation; and there are mental or informational events, or beliefs agents have about other events, which are encoded in the accessibility relations between worlds. This is a distinction that intuitively makes sense.

In a type structure the formal definition of an event as a subset of the domain entirely wipes out this distinction, because in the set of states we can find events of both kinds. Strategy profiles in a game and types of players can both be represented by matrices in a type structure, but this should not necessarily grant them the same translation in a Kripke model. In the case of games, a strategy profile would qualify as a real, physical event, since it is a combination of actions or decisions made by the players, while the types, which are supposed to encode the players' information or their beliefs about the game, should rather be a sort of mental or informational event. I wouldn't have expected to find the types inside the set of states or worlds in a Kripke model; I would find it more intuitive to translate them as accessibility relations, and let only the real events, i.e. strategy profiles, populate the domain. The information provided by the types can still be encoded by labeling the relations between worlds with the relevant types. It seems to me that it is only a choice between having a directed graph with more vertices and fewer edges or one with fewer vertices and more edges.

Unless there are some rich formal consequences that are only achievable in the first kind of encoding and are lost in the second, I do not think that sacrificing the intuitive appeal of a Kripke model is something worth doing. To illustrate my point I attach for comparison the images of a graph with labeled edges induced by the example considered in the course notes, and of a graph in which the types are splitting the worlds. Instead of ranging over worlds to find out the relevant information about the types, we can range over the labels of the accessibility relations. My hunch is that if a many-vertices graph and a many-edges graph are induced by the same type structure, then any solution concept definable in the first should admit a corresponding definition in the second.

My question is, then: are there any advantageous formal consequences of using a model in which the types are contained in the states, or is the choice just contingent? Or, better put: 'Why did the mental split the world?'

http://staff.science.uva.nl/~sminica/ME-Graph.jpg
http://staff.science.uva.nl/~sminica/MV-Graph.jpg

Stephan Schroevers said...

Hi Stefan,

First off, I think that you raise a very interesting point. I cannot give a conclusive answer, but let me try to formulate my thoughts about this:

About mental events splitting the world: A player is of one specific type. The way I see it, this is a fixed property of a player that is a given (determined) even before the game starts. The other players (may) have a lack of knowledge about each other's type, but they know their own type (I know what I know, etc.), and it doesn't change. Furthermore, the strategy chosen by each player influences the state of the world, but not the type of any (other) player. The states in the Kripke model represent all possible combinations of user strategies and types. The former are decided upon during the game, the latter are fixed even before the game starts. The Kripke model shows all possible outcomes given the possible strategies and the possible types of the players. The accessibility relations visualize what the players believe possible in each situation, but beware: they will also think states possible that cannot be the case, simply because they correspond to the wrong type for some user. This, however, is not a bad thing: it merely reflects the lack of knowledge by the players.

You mention ``I wouldn't have expected to find the types inside the set of states or worlds in a Kripke model, [...]''. To me this does make sense, for the reasons explained above. Think of it this way: since the types of the players are fixed, in a way they reflect the player's state of mind.

As for your question related to the translation of a strategic game form and type structure into a Kripke model: First off, like I said above, one can very well consider the types of players as part of the world. Secondly, the translation presented during the lecture requires one accessibility relation per player, a number of proposition letters linear in the number of types and strategies, and a number of worlds that is exponential in the number of proposition letters (right?). Your solution (which, I agree, seems to express the same amount of information) requires a number of accessibility relations that is exponential in the number of types and players.

I suppose that this is where the issue lies. Whereas in the original model we can easily define the modal operators P_x (diamond) and B_x (box) for each accessibility relation (player) x, such is not the case in your Kripke model. There, we have an exponential number of relations, hence requiring an exponential number of modal operators.

One can argue, of course, that this is merely an inconvenience. It would be interesting to investigate whether your alternative representation has any impact on what properties of the game can and cannot be defined using modal logic. Anybody have any ideas on that?

Stephan Schroevers said...

Link to one of the papers mentioned:
Halpern, J. Y. (1991). The relationship between knowledge, belief, and certainty. Annals of Mathematics and Artificial Intelligence 4, 301-322.

Unknown said...

Hi Stephan,

Your reply was very interesting and useful for me, and I have to say from the start that I agree with most of what you say. Reading your comment was almost like hearing my own thoughts better voiced (the fact that we generously share the same first-name further enhanced this impression).

Your observation that the definitions of modal operators would be difficult is forcefully convincing; working in a model with many accessibility relations is, if not impossible, at least impractical. I surrender this point. However, even if we work in a model in which the states contain strategy profiles together with type profiles, and, as you said, an advantageous unique accessibility relation per agent, I am as puzzled as I was in the beginning.

I will start from a very deep insight you have about what happens in a play of a game: "The former (the agents' strategies) are decided upon during the game, the latter (the agents' types) are fixed even before the game starts". As I told you in person, I both agree and disagree with that. It all depends on which player you are referring to. If I think about my opponent, his type is fixed for me from the beginning, but, in this sense, his strategy is, for all I know, fixed from the beginning as well (at least in games with simultaneous moves, as in the example in the course notes). And it makes sense to consider possible worlds in which events describe both strategies and types. But it makes sense precisely because they are not mine to decide upon. If I think about my own strategy, it is also true that I have to decide upon it during the game, and, in this sense, it is not, for me, fixed from the beginning. Why would it make sense to put my own strategies in a state of the model? I guess it makes sense because we want to have a general model that is a complete description for all the players in the game. The model should be neutral to the epistemic perspectives of any particular agent, and this is a good thing.

What is not neutral to the perspective of the players is neither the definition of rationality nor that of belief. They are, I guess, going to become neutral only after they are sewed together as common-belief-in-rationality. But, until then, rationality is defined as a choice of a strategy that maximizes the expected payoff. This choice is always made by a particular player. Moreover, it is always made inside a particular type. The same goes for belief. The sentence defining belief in Brandenburger's paper begins with: "Say type t_i for player i believes an event ..." (p. 470). As if the types could believe by themselves. Defining belief relative to a particular type of an agent makes perfectly good sense to me; swallowing the same kind of immersed definition for rationality is harder.

Coming back to your insight: if my own strategy is something I have to decide upon during the game, as opposed to those of my opponent, which I have to consider fixed from the beginning, why wouldn't my type also be subject to my own decision, rational or not? The types of others are fixed for me (even though maybe I can change those too by sending a signal), but why should I be incapable of changing my own type? If there is an appropriate translation of a type space into a relational model, this would just mean using a different set of (unique) accessibility relations. Why shouldn't changing my type be a rational thing to do? If it is rational to change your strategy when by this you maximize your expected payoff, why wouldn't it be rational to change your type during a play of a game? This is, I guess, the main point of what I said in the first comment. If we can build relational models to mirror rational choices of a strategy in a type space, and since the type space structure already encodes type-related information, why couldn't we build a relational model that also mirrors a change of type during a play of a game? Is a change of type impossible in real life? Is a description of a type change impossible or impractical inside a type space or a relational frame?
Someone might argue that even if you can describe a change of type inside a relational frame, it would be impossible to say when it is a rational choice or not, because the only way to make a difference in your expected payoff is by changing a strategy, since a strategy profile determines the payoff for an outcome. So, if you are working with an instrumental definition of being rational, what difference does it make to change your type? This is true only if you change your type but keep your strategy unchanged. Why couldn't a change of type be simultaneous with a change of strategy? Changing only the type but announcing or signaling it to your opponent might also be a rational thing to do, even in the instrumental sense of rationality.

Olivier Roy said...

Very interesting discussion about the "modeling choices" that I made while presenting type structures and relational models!

I intended the comments section to be a space of discussion between students, so I'll stay out of the "core" debate. Let me just make a small point, give a pointer to the literature and ask a question:

- Small point: the "translation" from type structures to relational models that I presented is by no means supposed to represent "standard" practice in economics or computer science. To my knowledge, there is no consensus on how to translate one type of model of interactive information into another.

- The master's thesis of Floris Roelofsen contains remarks about how to model information that might shed some light on the issues that you are discussing.

- A question about the possibility of changing one's own type. Recall that the type of a player at a state is supposed to represent his information (what he believes, knows, assumes, etc.). What would deciding to change one's type mean in that context? Wouldn't it boil down to changing one's beliefs at will? If so, is that something agents can rationally do?