Abstract: In a decision problem or game we typically fix the person's utilities but not his beliefs. What, then, do these utilities represent? To explore this question we assume that the decision maker holds a conditional preference relation -- a mapping that assigns to every possible probabilistic belief a preference relation over his choices. We impose a list of axioms on such conditional preference relations that is both necessary and sufficient for admitting an expected utility representation. Most of these axioms express the idea that the decision maker's preference intensity between two choices changes linearly with the belief. Finally, we show that under certain conditions the relative utility differences are unique across the different expected utility representations.
with Martin Meier
Abstract: We propose a new rationalizability concept for dynamic games that combines elements from forward and backward induction reasoning. For that reason, we call it forward and backward rationalizability. It is shown that in terms of outcomes, the concept is equivalent to the pure forward induction concept of extensive-form rationalizability, but the two concepts may differ in terms of strategies. We argue that the new concept provides a more compelling theory for how players react to surprises since, in contrast to extensive-form rationalizability, a player always believes that the opponent will choose rationally in the future, and never attributes unreasonable beliefs to an opponent. In terms of strategies, the new concept provides a refinement of pure backward induction reasoning as embodied by backward dominance and backwards rationalizability. These two results together imply that in every dynamic game, all extensive-form rationalizable outcomes are also possible under backwards rationalizability. This may be viewed as a generalization of Battigalli's theorem, which states that in perfect information games without relevant ties, the unique extensive-form rationalizable outcome is the backward induction outcome. Finally, it is shown that the concept of forward and backward rationalizability satisfies the principle of supergame monotonicity: if a player learns that the game was actually preceded by some moves he was initially unaware of, then this new information will only refine, but never completely overthrow, his reasoning. Extensive-form rationalizability violates this principle.
with Martin Meier
Abstract: We propose a model of reasoning in dynamic games in which a player, at each information set, holds a conditional belief about his own future choices and the opponents' future choices. These conditional beliefs are assumed to be cautious, that is, the player never completely rules out any feasible future choice by himself or the opponents. We impose the following key conditions: (a) a player always believes that he will choose rationally in the future, (b) a player always believes that his opponents will choose rationally in the future, and (c) a player deems his own mistakes infinitely less likely than the opponents' mistakes. Common belief in these conditions leads to the new concept of strong sequential rationalizability. We show that strongly sequentially rationalizable strategies exist in every finite dynamic game. We prove, moreover, that strong sequential rationalizability constitutes a refinement of both perfect rationalizability (a rationalizability analogue to Selten's (1975) perfect equilibrium) and procedural quasi-perfect rationalizability (a rationalizability analogue to van Damme's (1984) quasi-perfect equilibrium). As a consequence, it avoids both weakly dominated strategies in the normal form and strategies containing weakly dominated actions in the agent normal form.
Previous version appeared as EPICENTER Working Paper No. 8
Abstract: In this paper we investigate the order independence of iterated reduction procedures in dynamic games. We distinguish between two types of order independence: with respect to strategies and with respect to outcomes. The first states that the specific order of elimination chosen should not affect the final set of strategy combinations, whereas the second states that it should not affect the final set of reachable outcomes in the game. We provide sufficient conditions for both types of order independence: monotonicity and monotonicity on reachable histories, respectively.
We use these sufficient conditions to explore the order independence properties of various reduction procedures in dynamic games: the extensive-form rationalizability procedure (Pearce (1984), Battigalli (1997)), the backward dominance procedure (Perea (2014)) and Battigalli and Siniscalchi's (1999) procedure for jointly rational belief systems (Reny (1993)). Finally, we exploit these results to prove that every outcome that is reachable under the extensive-form rationalizability procedure is also reachable under the backward dominance procedure.
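The role of monotonicity in order-independence results can be illustrated in a much simpler, static setting: iterated elimination of strictly dominated strategies, which is well known to be order independent for the same monotonicity reason. The toy game, payoffs, and function names below are purely illustrative and not taken from the papers above; the sketch merely shows two different elimination orders reaching the same surviving set.

```python
# Illustrative sketch (hypothetical 3x2 game): order independence of
# iterated elimination of strictly dominated pure strategies.
# u[(row, col)] = (payoff to player 1, payoff to player 2)
u = {
    ("T", "L"): (3, 2), ("T", "R"): (1, 1),
    ("M", "L"): (2, 3), ("M", "R"): (2, 1),
    ("B", "L"): (0, 0), ("B", "R"): (0, 4),
}

def dominated(player, s, rows, cols):
    """Is pure strategy s of `player` strictly dominated by another pure
    strategy, given the surviving strategy sets rows and cols?"""
    own, opp = (rows, cols) if player == 0 else (cols, rows)
    for t in own:
        if t == s:
            continue
        if all(
            (u[(t, o)] if player == 0 else u[(o, t)])[player]
            > (u[(s, o)] if player == 0 else u[(o, s)])[player]
            for o in opp
        ):
            return True
    return False

def iesds(rows, cols, prefer_player):
    """Eliminate one strictly dominated strategy at a time, checking
    `prefer_player` first; return the surviving strategy sets."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for p in ([0, 1] if prefer_player == 0 else [1, 0]):
            own = rows if p == 0 else cols
            for s in own:
                if len(own) > 1 and dominated(p, s, rows, cols):
                    own.remove(s)  # remove one strategy, then restart the scan
                    changed = True
                    break
            if changed:
                break
    return set(rows), set(cols)

# Both elimination orders reach the same fixed point: {T} x {L}.
print(iesds(["T", "M", "B"], ["L", "R"], 0))
print(iesds(["T", "M", "B"], ["L", "R"], 1))
```

The key property at work is monotonicity of the elimination operator: a strategy that is strictly dominated given some set of surviving opponent strategies remains dominated given any subset of it, so no elimination order can "save" a strategy that another order would remove.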
with Christian Bach
EPICENTER Working Paper No. 9
Abstract: In games with incomplete information, Bayesian equilibrium constitutes the prevailing solution concept. We show that Bayesian equilibrium generalizes correlated equilibrium from complete to incomplete information. In particular, we provide an epistemic characterization of Bayesian equilibrium as well as of correlated equilibrium in terms of common belief in rationality and a common prior. Bayesian equilibrium is thus not the incomplete information counterpart of Nash equilibrium. To fill the resulting gap, we introduce the solution concept of generalized Nash equilibrium as the incomplete information analogue to Nash equilibrium, and show that it is more restrictive than Bayesian equilibrium. In addition, we propose a simplified tool to compute Bayesian equilibria.
with Christian Nauerz
EPICENTER Working Paper No. 6
Abstract: Models of decision-making under ambiguity are widely used in economics. One stream of such models results from weakening the independence axiom of Anscombe and Aumann (1963). We identify the assumptions on independence that are necessary to represent the decision maker's preferences such that he acts as if he maximizes expected utility with respect to a possibly local prior. We call the resulting representation Local Prior Expected Utility, and show that the prior used to evaluate a certain act can be obtained by computing the gradient of some appropriately defined utility mapping. The numbers in the gradient, moreover, can naturally be interpreted as the subjective likelihoods the decision maker assigns to the various states. Building on this result, we provide a unified approach to the representation results of Maximin Expected Utility and Choquet Expected Utility and characterize the respective sets of priors.
When do Types Induce the Same Belief Hierarchy? The Case of Finitely Many Types (2014)
EPICENTER Working Paper No. 1
Abstract: Harsanyi (1967--1968) showed that belief hierarchies can be encoded by means of epistemic models with types. Indeed, for every type within an epistemic model we can derive the full belief hierarchy it induces. But a given belief hierarchy can in general be encoded within an epistemic model in many different ways. In this paper we give necessary and sufficient conditions such that two types, from two possibly different epistemic models, induce exactly the same belief hierarchy. The conditions are relatively easy to check, and seem relevant for both practical and theoretical purposes.
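For finitely many types, one standard way to decide whether two types induce the same belief hierarchy is iterated partition refinement: start from the trivial partition of each player's types, and repeatedly split types whose beliefs over (parameter, opponent class) pairs differ. The toy model and all names below are hypothetical and not the paper's own conditions; the sketch only illustrates the flavor of such a check.

```python
# Hypothetical two-player epistemic model with finitely many types.
# A type of player i holds a belief over pairs (theta, opponent type),
# where theta is a payoff-relevant parameter.
# belief[i][t] maps (theta, opponent type) -> probability.
belief = {
    0: {
        "t1": {("a", "s1"): 0.5, ("a", "s2"): 0.5},
        "t2": {("a", "s1"): 1.0},
        "t3": {("b", "s1"): 1.0},
    },
    1: {
        "s1": {("a", "t2"): 1.0},
        "s2": {("a", "t1"): 1.0},
    },
}

def signature(player, t, part_opp):
    """Belief of type t over (theta, opponent equivalence class)."""
    sig = {}
    for (theta, t_opp), p in belief[player][t].items():
        key = (theta, part_opp[t_opp])
        sig[key] = sig.get(key, 0.0) + p
    return frozenset(sig.items())

def hierarchy_classes():
    """Refine until stable: two types end up in the same class exactly
    when the refinement can never separate them (finite model, so the
    procedure terminates)."""
    part = {i: {t: 0 for t in belief[i]} for i in (0, 1)}
    while True:
        new = {}
        for i in (0, 1):
            # Split each old class by the signature w.r.t. the opponent's
            # current classes; keys include the old class, so the
            # partition can only get finer.
            keys = {t: (part[i][t], signature(i, t, part[1 - i]))
                    for t in belief[i]}
            labels = {}
            for t in belief[i]:
                labels.setdefault(keys[t], len(labels))
            new[i] = {t: labels[keys[t]] for t in belief[i]}
        if all(len(set(new[i].values())) == len(set(part[i].values()))
               for i in (0, 1)):
            return new
        part = new

part = hierarchy_classes()
print(part)
```

In this toy model, t1 (who splits belief equally between s1 and s2) and t2 (who concentrates on s1) end up in the same class, because s1 and s2 themselves induce the same hierarchy; t3 is separated already at the first level by its marginal on theta.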