
Speakers

Zafer Akin

Pennsylvania State University

  Wednesday, July 13, 2:00, Session B

Time Inconsistency And Learning In Bargaining Games    [pdf]

Abstract

The literature on time-inconsistent preferences has introduced a continuum of agent types: naïve, partially naïve and sophisticated, representing different levels of unawareness of agents' self-control problems. This paper incorporates bounded rationality, in the form of self-control problems that lead to time inconsistency, into a sequential bargaining model in which one or both players are of one of these types. We show an immediate-agreement result in which being naïve pays off: the more naïve the agent is, the higher the share she gets. While, generically, it is difficult to impose an evolutionary structure on the time-inconsistent behaviour of naïve agents (learning to be more rational), here we further incorporate a learning model that allows naïve agents to learn as they play the game. We show that there is a critical date at which the parties reach agreement; before that date, there is no agreement. When at least one of the parties is naïve, depending on the learning structure, we obtain delay in the sequential bargaining game. Hence, the existence of players who are time inconsistent and can learn as they play the game can be another explanation for delays in bargaining.

Attila Ambrus

Harvard University

  Wednesday, July 13, 11:30, Session A

Theories of coalitional rationality    [pdf]

Abstract

This paper generalizes the concept of best response to coalitions of players and offers epistemic definitions of coalitional rationalizability in normal form games. The best response of a coalition is defined to be a correspondence from sets of conjectures to sets of strategies. From every best response correspondence it is possible to obtain a definition of the event that a coalition is rational. It requires that if it is common certainty among players in the coalition that play is in some subset of the strategy space, then they confine their play to the best response set to those conjectures. A strategy is epistemic coalitionally rationalizable if it is consistent with rationality and common certainty that every coalition is rational. A characterization of this set of strategies is provided for best response correspondences that satisfy two consistency properties and a weak requirement along the lines of Pareto dominance for members of the coalition. Special attention is devoted to two correspondences from this class. One leads to a solution concept that is generically equivalent to the set of coalitionally rationalizable strategies as defined in Ambrus [04], while the other leads to a solution concept exactly equivalent to it.

Kurt Annen

University of Guelph

  Thursday, July 14, 2:00, Session E

Lies and Slander: The Implementation of Truth-telling in Repeated Matching Games with Private Monitoring    [pdf]

Abstract

This paper studies the implementation of truth-telling in a repeated random-matching prisoners' dilemma where outcomes cannot be observed or verified by the public. It is well established that truthful information sharing can be a powerful device to overcome moral hazard problems. An equilibrium strategy can ask players to condition their behavior on this shared information, which creates strong incentives for cooperation (public information strategy). The paper first shows that there is no direct mechanism which implements truth-telling in Nash equilibrium, subgame perfect equilibrium or any other equilibrium solution concept if an equilibrium outcome in the repeated game is supported by a public information strategy. Second, there is a mechanism which implements truth-telling in subgame perfect equilibrium if an equilibrium strategy in the repeated game asks players to condition their behavior on both public and private information. However, if an outcome can be supported by this strategy, it can also be supported by a strategy without information sharing.

Itai Arieli

Hebrew University of Jerusalem

  Tuesday, July 12, 11:30, Session A

Towards a Characterization of Rational Expectations    [pdf]

Abstract

R. J. Aumann and J. H. Drèze (2005) define a rational expectation of a game G as an expected payoff of some type of Player 1 in some belief system for G in which common knowledge of rationality and common priors obtain. Our goal is to characterize the set of rational expectations in terms of the game’s payoff matrix. We provide such a characterization for a specific class of strategic games, which we call semi-elementary.

Galit Ashkenazi-Golan

Tel-Aviv University

  Wednesday, July 13, 12:00, Session E

Confession and Pardon in Repeated Games with Private Monitoring and Communication    [pdf]

Abstract

We investigate multi-player discounted repeated games with private monitoring and communication. After each period, every player observes a random private signal whose distribution depends on the common action taken. We do not assume that all signal profiles are always observed with positive probability. However, we do assume that deviating from certain actions might reduce the content of the information received by the deviator. Under this assumption we obtain, via sequential equilibria, a folk theorem with Nash threats. In equilibrium, players are provided with incentives to report a deviation when they detect one. Moreover, in equilibrium, the deviating player has an incentive to confess his deviation. This is done by making a punishment that follows a confession lighter than a punishment that does not follow a confession. Thus, a confession induces a pardon.

To obtain further results, the proof method is combined with the results of Compte (1998) and Kandori and Matsushima (1998), who assumed that there are at least three players and full support of the signal profiles (i.e., that every signal profile has positive probability). The combined method provides a larger set of sequential equilibrium outcomes than each method separately.

Robert John Aumann

Hebrew University of Jerusalem

  Wednesday, July 13, 4:25

Consciousness    [pdf]

Abstract

Consciousness is the last great frontier of science. Here we discuss what it is, how it differs fundamentally from other scientific phenomena, what adaptive function it serves, and the difficulties in trying to explain how it works. The emphasis is on the adaptive function.

Coralio Ballester

Universitat Autònoma de Barcelona

  Thursday, July 14, 11:30, Session D

Optimal Targets in Peer Networks

(joint work with Antoni Calvó-Armengol and Yves Zenou)

Abstract

In a model of peer effects where a network of interactions emerges among agents, we study the properties and applicability of policies affecting the structure of the network. In particular, the key group is the optimal target for a planner who wishes to maximally reduce aggregate activity. We show that this problem is computationally hard and that a simple greedy algorithm used for maximizing submodular set functions can be used to find an approximation. We also endogenize participation in the game and describe some of the properties of the key group. The use of greedy heuristics can be extended to other related problems, like the removal or addition of new links in the network.
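
Since the abstract appeals to the standard greedy heuristic for submodular set functions, a minimal sketch may help; `activity_reduction` below is a hypothetical stand-in for the drop in aggregate equilibrium activity from removing a group, not the paper's actual objective.

```python
# Minimal sketch of the greedy heuristic for monotone submodular set
# functions. `activity_reduction` is a hypothetical callable returning
# the reduction in aggregate activity when `group` is removed.

def greedy_key_group(agents, k, activity_reduction):
    group = set()
    for _ in range(k):
        # Add the agent with the largest marginal reduction.
        best = max((a for a in agents if a not in group),
                   key=lambda a: activity_reduction(group | {a})
                                 - activity_reduction(group))
        group.add(best)
    return group

# Example with a toy submodular objective: coverage in a star network.
links = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}

def covered(group):
    reach = set(group)
    for a in group:
        reach |= links[a]
    return len(reach)

print(greedy_key_group(links.keys(), 2, covered))  # picks the hub first
```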

Sylvain Béal

CREUSET, University of Saint-Etienne

  Monday, July 11, 12:00, Session D

Bounded Rationality and Repeated Network Formation    [pdf]

(joint work with Sylvain Béal, Nicolas Quérou)

Abstract

We define a finite-horizon repeated network formation game with consent, and study the differences induced by different levels of individual rationality. We prove that perfectly rational players will remain unconnected in equilibrium, while this result does not hold when, following Neyman (1985), players are assumed to behave as finite automata. We define two types of equilibria, namely the Repeated Nash Network (RNN), in which the same network forms at each period, and the Repeated Nash Equilibrium (RNE), in which different networks may form. We state a sufficient condition under which a given network may be implemented as an RNN. Then, we provide structural properties of RNE. For instance, players may form totally different networks at each period, or the networks within a given RNE may exhibit a total order relationship. Finally, we investigate the question of efficiency under both the Bentham and Pareto criteria.

Eyal Beigman

Hebrew University

  Monday, July 11, 2:30, Session C

Characterizing Neutral Aggregation on Restricted Domains    [pdf]

Abstract

How should a society arrive at a consensus on preference between several alternatives? How should one individual rank several alternatives according to multiple criteria? It is well known that a simple majority vote can lead to intransitive preference relations, which are considered irrational, and Arrow's impossibility theorem shows that intransitivity is inevitable for any voting mechanism that is non-dictatorial. The modern theory of social choice has taken several directions to bypass the impossibility phenomenon: restricting freedom of choice, finding a good approximation, relaxing the rationality requirement, etc. In this paper we study the framework of restricted domains.

The purpose of this paper is to give an exhaustive characterization of pairs of individual preference domains and alternative-neutral Arrovian social welfare functions (SWFs) that aggregate profiles of relations from the domain to transitive relations. Sen characterizes such domains for majority rule by value restrictions on the domain; we show that these restrictions are necessary and sufficient for transitive aggregation under any neutral monotone SWF, and that they are neither necessary nor sufficient for neutral non-monotone SWFs. We give an exhaustive characterization through new value restrictions and a notion of embedding we introduce for simple games. We also show that Sen's value restrictions, as well as a stronger restriction we introduce, are preserved in the image, and that our domain restrictions are neither necessary nor sufficient for non-neutral SWFs.

Juan Miguel Benito

Universidad Pública de Navarra

  Thursday, July 14, 12:00, Session D

Neighborhood Segregation: Schelling-CA Model

(joint work with Juan Miguel Benito and Penélope Hernández)

Abstract

Schelling presented a model of segregation in which a population composed of two well-differentiated types of agents is distributed uniformly along a segment. The utility of each agent depends on the types of its neighbors. Depending on his neighborhood, each agent is defined as happy or unhappy. Unhappy agents are moved to the "nearest" place at which the happiness condition is satisfied. In this model, unhappy agents move to another site sequentially, starting from left to right along the segment. Local interactions produce global structures, and therefore the outcome might be a segment divided into segregated neighborhoods.

We present a variation of Schelling's linear segregation model using cellular automata. The cellular automata approach reproduces the same dynamics from micro-level preferences. Specifically, a cellular automaton consists of a lattice of cells, or sites. At each time step, each cell takes a value from a given alphabet. These values are updated iteratively according to a fixed rule which specifies exactly how the value of every site is computed from its own present value and the values of its immediate neighbors.

We identify the Schelling model as a cellular automaton. We adapt the information structure of the agents in order to model synchronous movement, capturing myopic behavior in a geometric environment. Then, we characterize the shortest rule that models such synchronous movement. Finally, we compute the number of limit cycles of any period for any society size.
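
To illustrate the CA framing, here is a minimal sketch under simplifying assumptions of mine (binary types on a ring, an agent is happy iff at least one immediate neighbor shares her type, and all unhappy agents flip simultaneously); it illustrates a synchronous update, not the authors' shortest rule.

```python
# Synchronous one-dimensional cellular automaton on a ring of binary types.

def step(cells):
    n = len(cells)
    nxt = []
    for i, c in enumerate(cells):
        left, right = cells[(i - 1) % n], cells[(i + 1) % n]
        happy = (c == left) or (c == right)
        # An unhappy agent is surrounded by the other type, so flipping
        # conforms to her whole neighborhood.
        nxt.append(c if happy else 1 - c)
    return nxt

state = [0, 1, 0, 1, 1, 0, 0, 1]
for _ in range(5):
    state = step(state)
print(state)  # like types end up in contiguous runs: local rules segregate
```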

Oliver Board

University of Pittsburgh

Presidential veto power and its consequences for information transmission in the legislative process    [pdf]

(joint work with Tiberiu Dragu)

Abstract

The presidential veto is a vital component of the system of checks and balances established by the American Constitution. To analyze the impact of this veto power on the legislative process, we examine a cheap talk game between three players: an expert committee, which drafts a bill and sends it to the floor; the legislature, which can modify the bill before voting on a final version; and the president, who can then veto the bill or not. We show that, depending on the strength and direction of the president's bias compared to that of the committee, more or less information may be transmitted than if the president had no veto power. A striking implication of this result is that the president may actually be harmed by his own veto power and suffer an overall loss of utility. Furthermore, the very existence of the veto threat can sometimes induce Congress to use what information it has less effectively.

Juergen Bracht

University of Aberdeen

  Wednesday, July 13, 2:00, Session D

Relative performance of two simple incentive mechanisms in a public good experiment    [pdf]

(joint work with Charles Figuieres, Marissa Ratto)

Abstract

The paper reports on experiments designed to compare the performance of two incentive mechanisms in public goods problems. One mechanism rewards and penalizes deviations from the average contribution of the other agents to the public good (tax-subsidy mechanism). Another mechanism allows agents to subsidize the other agents' contributions (compensation mechanism). It is found that both mechanisms lead to an increase in the level of contribution to the public good. The tax-subsidy mechanism allows for good point prediction of the average level of contribution. The compensation mechanism predicts the level of contributions less reliably.

Steven Brams

New York University

  Tuesday, July 12, 2:30, Session A

Cutting a Pie Is Not a Piece of Cake    [pdf]

(joint work with Julius B. Barbanel)

Abstract

Gale (1993) posed the question of whether there is necessarily an undominated, envy-free allocation of a pie when it is cut into wedge-shaped pieces or sectors. For two players, we give constructive procedures for obtaining such an allocation, whether the pie is cut into equal-size sectors by a single diameter cut or into two sectors of unequal size. Such an allocation, however, may not be equitable--that is, give the two players exactly the same value from their pieces.

For three players, we give a procedure for obtaining an envy-free allocation, but it may be dominated either by another envy-free allocation or by an envy-causing allocation. A counterexample shows that there is not always an undominated, envy-free allocation for four or more players if the players' preferences are not absolutely continuous with respect to each other. If we do make this assumption, then the existence question remains open for four or more players. For three players, the question of whether there exists an undominated, envy-free allocation is open whether or not the players' preferences are absolutely continuous with respect to each other.

Anthony C. Brooms

Birkbeck College

  Wednesday, July 13, 2:00, Session C

A stochastic game model for a FCFS queue with load-increasing service rate    [pdf]

Abstract

Customer joining behaviour into a single-server queue operating according to the FCFS discipline is explored. Customers decide whether or not to join, using decision rules based on the queue length upon arrival. Within a particular class of decision rules for an associated infinite-player game, the structure of the Nash equilibrium joining policies is characterized. It can be shown that within this class there exists a finite number of Nash equilibria, at least one of which is non-randomized. It turns out that these Nash equilibria show a close correspondence with the behaviour that would be observed under a realistic, empirically-based learning rule, thus providing a quick off-line method for gauging the performance of the system.

Yutian Chen

Stony Brook University

  Tuesday, July 12, 12:00, Session E

Outsourcing Spurred by Strategic Competition    [pdf]

(joint work with Pradeep Dubey)

Abstract

Offshore outsourcing is projected to double in the next three years. This paper highlights a strategic reason for offshore outsourcing beyond the obvious one that offshore areas have lower costs. Using a game-theoretic framework, the paper shows that when achieving economies of scale, firms prefer to outsource to offshore providers that are not direct competitors in their final products, instead of outsourcing to providers who also compete with them in their final products. This is true even when the offshore providers have a higher cost than the competing providers.

Vincent Conitzer

Carnegie Mellon University

  Thursday, July 14, 12:00, Session C

A Generalized Strategy Eliminability Criterion and Computational Methods for Applying It    [pdf]

(joint work with Tuomas Sandholm)

Abstract

We define a generalized strategy eliminability criterion for bimatrix games that considers whether a given strategy is eliminable relative to given dominator and eliminee subsets of the players' strategies. We show that this definition spans a spectrum of eliminability criteria from strict dominance (when the sets are as small as possible) to Nash equilibrium (when the sets are as large as possible). We show that checking whether a strategy is eliminable according to this criterion is coNP-complete (both when all the sets are as large as possible and when the dominator sets each have size 1). We then give an alternative definition of the eliminability criterion and show that it is equivalent using the Minimax Theorem. We show how this alternative definition can be translated into a mixed integer program of polynomial size with a number of (binary) integer variables equal to the sum of the sizes of the eliminee sets, implying that checking whether a strategy is eliminable according to the criterion can be done in polynomial time, given that the eliminee sets are small. Finally, we study using the criterion for iterated elimination of strategies.
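
At the smallest-sets end of the spectrum the criterion reduces to strict dominance, which is checkable by a linear program; a sketch of that endpoint follows (the standard LP test, not the paper's MIP).

```python
# Row i of payoff matrix A is strictly dominated iff some mixed strategy
# over the remaining rows beats it against every column.
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(A, i):
    A = np.asarray(A, dtype=float)
    rows = [r for r in range(A.shape[0]) if r != i]
    m = A.shape[1]
    # Variables: mixing weights sigma over `rows`, plus a margin eps.
    # Maximize eps subject to sum_k sigma_k A[k,j] >= A[i,j] + eps for all j.
    c = np.zeros(len(rows) + 1)
    c[-1] = -1.0                                     # linprog minimizes -eps
    A_ub = np.hstack([-A[rows].T, np.ones((m, 1))])  # -sigma.A_j + eps <= -A[i,j]
    b_ub = -A[i]
    A_eq = np.append(np.ones(len(rows)), 0.0).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * len(rows) + [(None, None)])
    return res.status == 0 and -res.fun > 1e-9       # dominated iff eps > 0

# Row 2 (payoffs (1,1)) is dominated by the 50/50 mix of rows 0 and 1.
print(strictly_dominated([[3, 0], [0, 3], [1, 1]], 2))  # True
```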

Christopher Cotton

Cornell University

  Thursday, July 14, 2:30, Session D

Seller preferences for risk seeking and limited information in an evolutionary price demand game    [pdf]

Abstract

The paper introduces evolutionary dynamics into a two-agent price demand game, in which sellers observe past-period transactions before announcing a price, and buyers either accept or reject the announced price. Under the assumption of homogeneous clients and a large enough number of past-period observations, the process almost surely converges to a stable or continually recurring convention. Increased risk seeking in the class of sellers results in a long-term convention price at least as close to the buyer valuation as when the class of sellers is more risk averse. However, increasing risk seeking among sellers is often not feasible. I show that introducing imperfect information into the game, by limiting the number of past-period observations, can also result in long-term convention prices closer to the buyer valuation, thereby increasing ex-ante expected utility among sellers. Thus sellers may have preferences for limited rather than perfect information about past transactions. However, buyers are never made better off by limiting seller memory.

Luciano I. De Castro

IMPA

  Monday, July 11, 11:30, Session B

Pure Strategy Equilibria of Single and Double Auctions with Interdependent Values    [pdf]

(joint work with Aloisio Araujo and Luciano I. de Castro)

Abstract

We prove the existence of monotonic pure strategy equilibrium for many types of asymmetric auctions among n bidders with unitary demands, interdependent values and independent types. The assumptions require monotonicity only in the bidder’s own type, and the payments can be a function of all bids. Thus, we provide a new equilibrium existence result for asymmetric double auctions.

Massimo A. De Francesco

University of Siena

  Wednesday, July 13, 2:30, Session E

Pricing and matching under duopoly with imperfect buyer mobility    [pdf]

Abstract

Recent contributions have explored how lack of (ex-post) buyer mobility affects pricing. For example, Burdett, Shi, and Wright (2001) envisage a two-stage game where, after prices are set, the buyers play a static subgame by choosing independently which firm to visit. Due to the multiplicity of pure strategy equilibria of the buyer subgame, attention has understandably been focussed on the mixed strategy equilibrium, where misallocations occur with positive probability. Under this solution, the lack of buyer mobility significantly affects equilibrium prices.
The present paper is concerned with price determination under duopoly with imperfect buyer mobility. Compared to the recent literature, our model intends to capture the fact that goods are often purchased repeatedly over the time period for which prices are set by the firms. The buyers are assumed to play a dynamic game of imperfect information once prices are set: at each stage each buyer selects one firm to visit without observing the choices made by the other buyers in the preceding stages.
To solve the buyer subgame we propose a variant of Kreps and Wilson’s (1982) “sequential equilibrium”. The firms are shown to have an incentive to give service priority to loyal customers. Then, over a wide range of prices it is an equilibrium for the buyers to obey a strategy of conditional loyalty, prescribing loyalty if served by the previously chosen seller. Along the equilibrium path some efficient allocation of buyers is certainly achieved by the second stage of the buyer subgame.
The successful matching between buyers and sellers that quickly obtains at the equilibrium of the buyer subgame has far-reaching implications for pricing. Equilibrium prices are higher than with a static buyer subgame; also, they converge to their value under perfect mobility as the number of stages of the buyer subgame goes to infinity.

Stefano Demichelis

University of Pavia

  Wednesday, July 13, 12:00, Session A

On use and misuse of topology in game theory

Abstract

Formally, the study of Nash equilibria is a special case of parametrized fixed point theory. Topological methods have often been used to investigate them; unfortunately, they have sometimes added more confusion than clarification to the matter.
I'll discuss some results I have obtained in the last two years on the topology of the Nash equilibrium correspondence, using simple tools from stable homotopy theory, and I will show their implications for strategic and dynamical stability.

Amrita Dhillon

University of Warwick

  Monday, July 11, 11:30, Session A

Games of Status and Discriminatory Contracts    [pdf]

(joint work with Alexander Herzog)

Abstract

Following recent empirical evidence indicating the importance of rank for workers' wellbeing, this paper introduces status-seeking preferences, in the form of rank-dependent utility functions, into a moral hazard framework with one firm and multiple workers but no correlation in production. Workers' concern for the rank of their wage in the firm's wage distribution may induce the firm to offer discriminatory wage contracts when its aim is to induce all workers to expend effort. The crucial factor determining the profile of optimal wage contracts is the individual worker's valuation of being ahead relative to being in the same wage position as another worker.

Tiberiu Dragu

Stanford University

  Tuesday, July 12, 2:30, Session C

Presidential Veto Power and its Consequences for Information    [pdf]

(joint work with Oliver Board)

Abstract

The presidential veto is a vital component of the system of checks and balances established by the American Constitution. To analyze the impact of this veto power on the legislative process, we examine a cheap talk game between three players: an expert committee, which drafts a bill and sends it to the floor; the legislature, which can modify the bill before voting on a final version; and the president, who can then veto the bill or not. We show that, depending on the strength and direction of the president's bias compared to that of the committee, more or less information may be transmitted than if the president had no veto power. A striking implication of this result is that the president may actually be harmed by his own veto power and suffer an overall loss of utility. Furthermore, the very existence of the veto threat can sometimes induce Congress to use what information it has less effectively.

Kfir Eliaz

New York University

  Tuesday, July 12, 3:00, Session B

A Decision Theoretic Basis for Choice Shifts in Groups    [pdf]

(joint work with Debraj Ray, Ronny Razin)

Abstract

The phenomenon of "choice shifts" in group decision-making has received much attention in the social psychology literature. Faced with a choice between a "safe" and a "risky" decision, group members appear to move to one extreme or the other, relative to the choices each member might have made on her own. Both risky and cautious shifts have been identified in different situations. This paper demonstrates that from an individual decision-making perspective, choice shifts may be viewed as a systematic violation of expected utility theory. We propose a model in which a well-known failure of expected utility - captured by the Allais paradox - is equivalent to a particular configuration of choice shifts. Thus, our results imply a connection between two well-known behavioral regularities, one in individual decision theory and another in the social psychology of groups.
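
For readers unfamiliar with the Allais paradox the abstract invokes, the textbook lotteries make the violation concrete (standard numbers, not taken from the paper):

```python
# The classic Allais lotteries, outcomes in millions of dollars. Any
# expected-utility maximizer must rank the two pairs the same way, yet
# the commonly observed pattern is A over B and D over C.
A = [(1.00, 1)]                          # $1M for sure
B = [(0.10, 5), (0.89, 1), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

u = {0: 0.0, 1: 1.0, 5: 1.6}             # one candidate utility function
eu = lambda lottery: sum(p * u[x] for p, x in lottery)

# EU(A)-EU(B) and EU(C)-EU(D) are algebraically identical for every u:
# both equal 0.11*u(1) - 0.10*u(5) - 0.01*u(0).
print(eu(A) - eu(B), eu(C) - eu(D))      # -0.05 -0.05 for this u
```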

Peter Engseld

Lund University

  Wednesday, July 13, 2:00, Session A

Choosing Opponents in Games of Cooperation and Coordination    [pdf]

(joint work with Andreas Bergh)

Abstract

We analyze a cooperation game and a coordination game in an evolutionary environment. Agents make noisy observations of opponents' propensity to play dove, called reputation, and form preferences over opponents based on their reputation. A game takes place when two agents agree to play. Socially optimal cooperation is evolutionarily stable when reputation perfectly reflects the propensity to cooperate. With some reputation noise, there will be at least some cooperation. Individual concern for reputation results in seemingly altruistic behavior. The degree of cooperation is decreasing in anonymity. If reputation is noisy enough, there is no cooperation in equilibrium. In the coordination game, the efficient equilibrium is chosen, and agents better skilled at observing reputation earn more.

Eduardo Faingold

Univ of Pennsylvania

  Wednesday, July 13, 11:30, Session E

Building a Reputation Under Frequent Decisions    [pdf]

Abstract

I study reputation phenomena in repeated games in which a long-run player faces a sequence of short-run players, each of whom plays the stage game once. There is imperfect monitoring: the long-run player's actions are unobservable to the short-run players, who observe noisy signals of those actions. Fudenberg and Levine (1992) show that reputation effects impose intuitive (upper and lower) bounds on the set of equilibrium payoffs of the long-run player. Provided signals are statistically informative, the upper and lower bounds converge, as the discount factor tends to 1, to the long-run player's Stackelberg payoff.

This paper departs from the received literature in that it studies reputation effects in the limit as players take actions more and more frequently. I consider a model in which, unlike in the canonical repeated-game model, as the length of the period over which actions are fixed tends to 0, the signaling structure adjusts accordingly, so the overall informativeness of the monitoring is fixed independent of the action frequency. I show that for a wide range of games that possess a well-behaved continuous-time limit, powerful reputation effects akin to Fudenberg and Levine's (1992) arise, holding uniformly over all games with sufficiently short periods of fixed action.

Barry Feldman

Prism Analytics and DePaul University

  Monday, July 11, 2:00, Session A

Lost in Translation? Basis Utility and Proportionality in Games    [pdf]

Abstract

A player's basis utility is the utility of no payoff. Basis utility is necessary for the coherent representation of the equal split bargaining solution. Standard axioms for the Nash (1950) bargaining solution do not imply independence from basis utility. Proportional bargaining is the unique solution satisfying efficiency, symmetry, affine transformation invariance and monotonicity in pure bargaining games with basis utility. All existing cooperative solutions become translation invariant once account is taken of basis utility. The noncooperative rationality of these results is demonstrated through an implementation of proportional bargaining based on Gul (1988). Quantal response equilibria with multiplicative error structures (Goeree, Holt and Palfrey (2004)) become translation invariant with specification of basis utility. Equal split and proportional bargaining join the Kalai-Smorodinsky (1975) solution in a family of endogenously proportional monotonic pure bargaining solutions.

Dragan Filipovich

El Colegio de Mexico

  Friday, July 15, 12:00, Session D

Cheap Talk on the Circle    [pdf]

Abstract

In this paper we modify the ‘cheap-talk’ model of Crawford and Sobel (1982) by taking the state space to be a circle (instead of a line). It is shown that in such a setup the relationship between the ‘bias’ (i.e., the parameter capturing the divergence of interests between sender and receiver) and the ‘informativeness’ of equilibria is reversed from what it is in the original Crawford and Sobel story: now, a higher bias can be associated with more informative equilibria, rather than the other way around. We also attempt to characterize more generally the equilibria of this modified model.

Francesca Flamini

University of Glasgow

  Tuesday, July 12, 11:30, Session B

Dynamic Accumulation in Bargaining Games    [pdf]

Abstract

In many bargaining situations the decisions that parties take at one point in time affect their future bargaining opportunities. We consider first an ultimatum bargaining game in which parties can decide not only how to share a current surplus but also how much to invest in order to generate future surpluses. We show that there is a unique Markov perfect equilibrium (MPE) in which a proposer consumes the whole surplus not invested. Moreover, when the proposer has a sufficiently high discount factor, his MPE investment level is higher than his opponent’s, for a given capital stock. We also show that bargaining can lead to underinvestment. Finally we extend the analysis to the case in which the bargaining structure is more complex (with an infinite horizon) and we show that some of the results obtained in the ultimatum framework can still hold.

Harry French

 

  Monday, July 11, 12:00, Session A

Property Defining and Property Defying Games (PD)    [pdf]

Abstract

Consider a two person game that provides a good example of property. Only one person (the seller) has a choice, and the seller’s best choice is not the combined best. The second person (the buyer) may attempt to buy that choice. To provide enforcement, embed this game in a stochastic series of games, with payoffs and choice of players taken from some parameterized probability space. For some values of the parameters, the game is property defining: good contract enforcement is expected. For other values of the parameters, the game is property defying: the seller and the seller’s tribe do not expect enough interaction with the buyer and the buyer’s tribe to enforce the contract.

Drew Fudenberg

Harvard University

  Friday, July 15, 9:15

Superstition and Rational Learning

(joint work with David Levine)

Abstract

We argue that some but not all superstitions can persist when learning is rational and players are patient, and illustrate our argument with an example inspired by the code of Hammurabi. The code specified an "appeal by surviving in the river" as a way of deciding whether an accusation was true. This system relies on the superstition that the guilty are more likely to drown than the innocent. If people can be easily persuaded to hold this superstitious belief, why not the superstitious belief that the guilty will be struck dead by lightning? We argue that the former can persist but the latter cannot by giving a partial characterization of the outcomes that arise as the limit of steady states with rational learning as players become more patient. These "subgame-confirmed Nash equilibria" have self-confirming beliefs at information sets reachable by a single deviation. According to this theory a mechanism that uses superstitions two or more steps off the equilibrium path, such as "appeal by surviving in the river," is more likely to persist than a superstition where the false beliefs are only one step off of the equilibrium path.

Andrey Garnaev

Saint Petersburg State University

  Friday, July 15, 12:00, Session B

The Kuhn-Tucker Theorem and Resource Allocation Games    [pdf]

(joint work with Aleksey Solovyev, Tomash Szigyarto)

Abstract

In this paper an application of the Kuhn-Tucker Theorem to two resource allocation games is considered. The first is a non-zero-sum two-sided investment allocation game based on a market share model. Two firms compete against each other in n independent markets. The total sales potential in each market is fixed and known, as are the firms' budgets and the investment costs. The aim of each firm is to plan its budget so as to maximize total firm profit minus investment cost. For this game the Nash equilibria are derived and numerical examples are given. It is shown that in some specific cases the Nash equilibrium is unique. The one-firm game and the Stackelberg equilibrium are also investigated.

The second game considered as an application of the Kuhn-Tucker Theorem is a generalization of the Sakaguchi resource allocation game on an integer interval [1,n], in which two players want to find an immobile object hidden at one of n points by allocating search efforts across these points. A player's payoff is 1 if he detects the object but his opponent does not; if both players detect the object, each gets 1/2; if a player does not find the object, his payoff is 0. In the Sakaguchi game, players' payoffs are 0 if they both find the object. In this paper we show that introducing a natural assumption (that the hidden object is shared by the players when it is found by both of them rather than only one) allows us to avoid the overdetermined system of equations that arises when looking for the Nash equilibrium using the Kuhn-Tucker Theorem. This sharing assumption reduces the problem of finding the Nash equilibrium to an unambiguously solvable system of non-linear equations. The case with search costs is also investigated.
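
As a flavor of the Kuhn-Tucker computations involved, here is a hedged one-player sketch with a hypothetical concave payoff (water-filling by bisection on the multiplier); the paper's payoff functions may differ.

```python
# One-player Kuhn-Tucker allocation. The concave payoff
# sum_i v_i * (1 - exp(-x_i)) is a hypothetical market response, not the
# paper's model; the point is the KKT computation itself.
import numpy as np

def kkt_allocate(v, budget):
    v = np.asarray(v, dtype=float)
    # Stationarity: v_i * exp(-x_i) = lam  =>  x_i = max(0, log(v_i / lam)).
    # Bisect on the multiplier lam until the budget constraint binds.
    lo, hi = 1e-12, float(v.max())
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        x = np.maximum(0.0, np.log(v / lam))
        if x.sum() > budget:
            lo = lam      # spending too much: raise the shadow price
        else:
            hi = lam
    return np.maximum(0.0, np.log(v / hi))

print(kkt_allocate([3.0, 2.0, 1.0], budget=2.0))  # more effort where v_i is larger
```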

Parikshit Ghosh

University of British Columbia

  Tuesday, July 12, 2:30, Session B

Efficient Equilibria and Information Aggregation in Common Interest Voting Games    [pdf]

(joint work with Archishman Chakraborty)

Abstract

We characterize efficient equilibria of common interest voting games with privately informed voters and study the implications of efficient equilibrium selection for Condorcet jury theorems. We show that larger juries can do no worse than smaller ones and derive a simple necessary and sufficient condition for asymptotic efficiency of different voting rules. This condition implies that the unanimity as well as near unanimity rules are asymptotically inefficient regardless of equilibrium selection. However, if the signal distribution fails a non-degeneracy condition, the unanimity rule dominates any other rule. Finally, if signals are conditionally independent, full information equivalence can be exactly achieved for any rule that allows the divisibility of individual votes, and for any finite number of voters.
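
As background for the jury-theorem claims, the textbook Condorcet calculation below (a standard benchmark, not the paper's efficiency condition) shows why larger juries aggregating conditionally independent signals do better under majority rule.

```python
# Probability that a simple majority of n conditionally independent
# signals, each correct with probability q, matches the true state.
from math import comb

def majority_correct(n, q):
    return sum(comb(n, k) * q**k * (1 - q)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):                 # odd jury sizes
    print(n, round(majority_correct(n, 0.6), 4))
# accuracy rises with n: larger juries can do no worse than smaller ones
```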

Andrew G. Gilpin

Carnegie Mellon University

  Thursday, July 14, 11:30, Session C

Mixed-Integer Programming Methods for Finding Nash Equilibria    [pdf]

(joint work with Tuomas Sandholm, Andrew Gilpin, and Vincent Conitzer)

Abstract

We present, to our knowledge, the first mixed integer program (MIP) formulations for finding Nash equilibria in games (specifically, two-player normal form games). We study different design dimensions of search algorithms that are based on those formulations. Our MIP Nash algorithm outperforms Lemke-Howson but not Porter-Nudelman-Shoham (PNS) on GAMUT data. We argue why experiments should also be conducted on games with equilibria with medium-sized supports only, and present a methodology for generating such games. On such games MIP Nash drastically outperforms PNS but not Lemke-Howson. Certain MIP Nash formulations also yield anytime algorithms for epsilon-equilibrium, with provable bounds. Another advantage of MIP Nash is that it can be used to find an optimal equilibrium (according to various objectives). The prior algorithms can be extended to that setting, but they are orders of magnitude slower.
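
A hedged reconstruction of a MIP of this kind for two-player games, using PuLP; the constraints below are a standard regret/support formulation in the spirit of the abstract, not necessarily the authors' exact one.

```python
# Feasible solutions of this MIP are exactly the Nash equilibria: a pure
# strategy either has zero regret or is played with probability zero.
import pulp

def mip_nash(A, B):
    m, n = len(A), len(A[0])
    # U bounds any possible regret for either player.
    U = max(max(map(max, A)) - min(map(min, A)),
            max(max(map(max, B)) - min(map(min, B)))
    )
    prob = pulp.LpProblem("nash", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", 0, 1) for i in range(m)]
    y = [pulp.LpVariable(f"y{j}", 0, 1) for j in range(n)]
    u1, u2 = pulp.LpVariable("u1"), pulp.LpVariable("u2")
    bx = [pulp.LpVariable(f"bx{i}", cat="Binary") for i in range(m)]
    by = [pulp.LpVariable(f"by{j}", cat="Binary") for j in range(n)]
    prob += 0                            # feasibility; an objective here
    prob += pulp.lpSum(x) == 1           # would select among equilibria
    prob += pulp.lpSum(y) == 1
    for i in range(m):
        ui = pulp.lpSum(A[i][j] * y[j] for j in range(n))
        prob += u1 >= ui                 # u1 = best-response payoff
        prob += u1 - ui <= U * bx[i]     # bx[i] = 0 forces zero regret
        prob += x[i] <= 1 - bx[i]        # bx[i] = 1 forces x[i] = 0
    for j in range(n):
        uj = pulp.lpSum(B[i][j] * x[i] for i in range(m))
        prob += u2 >= uj
        prob += u2 - uj <= U * by[j]
        prob += y[j] <= 1 - by[j]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v.value() for v in x], [v.value() for v in y]

# Matching pennies: the unique equilibrium is the 50/50 mix for both.
print(mip_nash([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]))
```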

Bertrand Gobillard

University Paris 10 Nanterre

  Tuesday, July 12, 2:30, Session E

Non-Walrasian Equilibria and the Law of One Price: The Wash-Sales Assumption    [pdf]

Abstract

The paper discusses the generality of the failure of the law of one price highlighted by Koutsougeras (2003). This author introduces a market game with multiple trading posts for each commodity and presents an example with price dispersion in equilibrium. We show that such a striking result does not hold when agents are not allowed to buy the goods they are selling on the same post. The failure of the law of one price thus relies on an assumption that is not very intuitive. We explain why and how the result does not come from the possibility for the agent to arbitrage price differences whenever he faces one.

Srihari Govindan

University of Iowa

  Thursday, July 14, 10:10

Some Recent Results on Computing Equilibria

(joint work with Robert Wilson)

Abstract

We report on two new techniques for computing Nash equilibria of finite normal-form games. The first result establishes a decomposition method. From a game with N players we construct a game with N+1 players whose equilibria yield approximate equilibria of the original game. In the new game the N players interact bilaterally with the new player (the coordinator) but not with each other. In the resulting linear complementarity problem, decentralized calculations for each player separately enable efficient calculation of an equilibrium. The algorithm has been implemented in the APL language. The second result shows in principle that all equilibria of a 2-player game are accessible via the paths of a homotopy algorithm. We convert a 2-player game into a 3-player game with the properties that the equilibria of the 2-player game are the projections of the equilibria of the 3-player game, and these are computable using the Global Newton Method.

Thomas Gresik

University of Notre Dame

  Friday, July 15, 12:00, Session A

The Economics of Yardstick Regulations without External Benchmarks    [wpd]

(joint work with Petter Osmundsen)

Abstract

Yardstick rules are used in a variety of economic settings to evaluate the extent to which an individual's performance assessment is exaggerated by comparing it to the assessments of others. When such rules are applied to a closed group so that each party's assessment is used to judge everyone else's in the group, the lack of an external benchmark creates strong incentives for the group to tacitly coordinate on highly exaggerated reports. This paper uses the example of multinational transfer price regulation in a vertically integrated industry to show that spillover effects within the group and private information effects can be used to limit, and possibly reverse, these exaggeration incentives. Private information is shown to yield an “incentive comparability” effect which gives regulators an additional tool for limiting exaggeration distortions.

Isa Emin Hafalir

Pennsylvania State University

  Thursday, July 14, 11:30, Session B

Stability of Marriage with Externalities    [pdf]

Abstract

In many matching problems, it is natural to consider that agents may have preferences not only over the set of potential partners but over the whole matching. Once such externalities are considered, the set of stable matchings will depend on what agents believe will happen if they deviate. Sasaki and Toda (1996, J. of Econ. Theory, 70, 93) have examined the existence of stable matchings when the beliefs are exogenously specified and shown that stable matchings do not always exist. In this paper, we argue that beliefs should be endogenously generated, that is, they should depend on the preferences. We introduce a particular notion of endogenous beliefs, called sophisticated expectations, and show that with these beliefs, stable matchings always exist.

Oliver Hart

Harvard University and LSE

  Tuesday, July 12, 10:10

Contracts that Rule Out but do not Rule In

(joint work with John Moore)

Abstract

We view a contract as a list of outcomes. Ex ante, the parties commit not to consider outcomes not on the list, i.e., these are "ruled out". Ex post, they freely bargain over outcomes on the list, i.e., the contract specifies no mechanism to structure their choice; in this sense outcomes on the list are not "ruled in". A "loose" contract (long list) maximizes flexibility but may interfere with ex ante investment incentives. When these incentives are important enough, the parties may write a "tight" contract (short list), even though this leads to ex post inefficiency.

Sergiu Hart

Hebrew University of Jerusalem

  Wednesday, July 13, 2:30, Session A

Uncoupled Dynamics and Nash Equilibrium    [pdf]

(joint work with Andreu Mas-Colell)

Abstract

We call a dynamical system "uncoupled" if the dynamic for each player does not depend on the payoff functions of the other players. This is a natural informational restriction. We study convergence of uncoupled dynamics to Nash equilibria, and present a number of possibility and impossibility results.

Penelope Hernandez

University of Alicante

  Tuesday, July 12, 2:30, Session D

Secret correlation with pure automata

(joint work with Olivier Gossner)

Abstract

Let G be a 3-player game with action sets X(1), X(2), X(3) and payoff function g for player 3. Let v be the minmax in correlated strategies for player 3. Let Ai(mi) be the set of automata for player i of size mi, such that each automaton in Ai(mi) inputs at each stage an element of X(j) x X(k) and outputs an element of X(i). An oblivious automaton is one whose transitions are independent of the other players' actions. A triple of automata (A(1), A(2), A(3)) induces an eventually periodic sequence of actions; let γ(A(1),A(2),A(3)) be the average payoff of player 3 over a period of this sequence.

A consequence of Ben-Porath (1993) is that whenever m3 is subexponential in m1 and in m2, there exist correlated strategies of players 1 and 2 against which player 3 cannot obtain significantly more than v. Furthermore, the correlated strategies in the proposition may have as support the set of oblivious automata of players 1 and 2.

When players 1 and 2 are limited to pure strategies and m3 is very small compared to m1 and m2 (min(m1,m2) >> m3 log m3), the minmax in pure strategies converges to v as well. This result obtains as a consequence of Neyman (1997). Furthermore, the automata of players 1 and 2 can be chosen to be oblivious.

Our result, which strengthens the previous one, states that the minmax in pure strategies converges to v already when min(m1,m2) >> m3. The automata of players 1 and 2 we design in the proof of this result are not oblivious, but they are oblivious to player 3. The construction of the strategies of players 1 and 2 relies on techniques introduced in Gossner and Hernandez (2003), and the proof that player 3 cannot obtain significantly more than v relies on large deviation techniques as in Neyman (1997).

Note finally that there is no hope of getting a result of this type if m3 > min(m1,m2).

Sjaak Hurkens

Universitat Pompeu Fabra

  Tuesday, July 12, 2:00, Session E

Bayesian Nash Equilibrium in "Linear" Cournot Models with Private Information About Costs    [pdf]

Abstract

Calculating explicit closed-form solutions of Cournot models where firms have private information about their costs is, in general, very cumbersome. Most authors therefore consider linear demands and constant marginal costs. However, even within this framework, some details have been overlooked and the correct calculation of all Bayesian Nash equilibria is slightly more complicated than expected. Moreover, multiple symmetric Bayesian equilibria may exist for an open set of parameters. The reason for this is that linear demand is not really linear. The general "linear" inverse demand function is P(Q) = max{a - bQ, 0} rather than P(Q) = a - bQ. In particular, price must be nonnegative.
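
A toy numerical check (illustrative numbers of mine, not the paper's) of why the max{.,0} matters: against an uncertain opponent quantity, the expected price under the truncated demand differs from the unconstrained linear formula.

```python
# With P(Q) = max(a - b*Q, 0), expected price against a random opponent
# quantity differs from a - b*E[Q], so equilibrium calculations that drop
# the max{., 0} can go wrong.
a, b, q_own = 10.0, 1.0, 4.0
opponent_qs = [2.0, 14.0]                 # two equally likely cost types
naive = a - b * (q_own + sum(opponent_qs) / 2)
truncated = sum(max(a - b * (q_own + q), 0.0) for q in opponent_qs) / 2
print(naive, truncated)  # -2.0 vs 2.0: the kink binds against the large-output type
```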

Nathanael Hyafil

University of Toronto

  Monday, July 11, 12:00, Session C

Regret Minimizing Equilibria of Games with Strict Type Uncertainty    [pdf]

(joint work with Nathanael Hyafil and Craig Boutilier)

Abstract

In the standard mechanism design setting, the type (e.g., utility function) of an agent is not known by other agents, nor is it known by the mechanism designer. When this uncertainty is quantified probabilistically, a mechanism induces a game of incomplete information among the agents. However, in many settings, uncertainty over utility functions cannot easily be quantified. We consider the problem of incomplete information games in which type uncertainty is strict or unquantified. We propose the use of minimax regret as a decision criterion in such games, a robust approach for dealing with type uncertainty. We define minimax-regret equilibria and prove that these exist in mixed strategies for finite games. We also briefly discuss mechanism design in this framework, with minimax regret as an optimization criterion for the designer itself, and the automated optimization of such mechanisms.
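
A minimal sketch of the minimax-regret criterion itself, for a single agent facing strict uncertainty over an opponent's type (illustrative payoffs of mine):

```python
# Rows = own actions, columns = possible (unquantified) opponent types.
import numpy as np

U = np.array([[4, 1],
              [3, 3],
              [0, 5]])
regret = U.max(axis=0) - U         # shortfall vs. the best action per type
worst = regret.max(axis=1)         # worst-case regret of each action
print(worst, int(worst.argmin()))  # [4 2 4] 1 : the middle action is chosen
```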

Elena Iñarra

Basque Country University

  Thursday, July 14, 12:00, Session B

Admissible Hierarchic Sets    [pdf]

(joint work with Concepcion Larrea)

Abstract

In this paper we present a solution concept for abstract systems called the admissible hierarchic set. The solution we propose is a refinement of the hierarchic solution, a generalization of the von Neumann and Morgenstern solution. For finite abstract systems we show that the admissible hierarchic sets and the von Neumann and Morgenstern stable sets are the only outcomes of a coalition formation procedure (Wilson, 1972 and Roth, 1984). For coalitional games we prove that the core is either a vN&M stable set or an admissible hierarchic set.

Elif Incekara

Pennsylvania State University

  Thursday, July 14, 2:00, Session D

Time Inconsistency of Consumers and Excessive Upgrades in the Software Market    [pdf]

Abstract

It is a common observation in the market for upgrades that firms tend to offer small, immature upgrades very frequently instead of significant upgrades less frequently. Evidence of this can be found in software, computers, and personal electronics. For example, people commonly complain about rushed, immature upgrades of consumer-oriented word-processing software. In this paper we address the question of why a monopolist offers immature, frequent upgrades instead of significant, less frequent ones. As an explanation, we suggest that if consumers are time inconsistent in the sense of Phelps and Pollak (1968) and Laibson (1997), the monopolist will offer smaller and more frequent upgrades than if consumers were time consistent.

Matthew Jackson

California Institute of Technology

  Monday, July 11, 10:10

Social Games: Matching and the Play of Finitely Repeated Games

(joint work with Alison Watts)

Abstract

We examine a new class of games, which we call social games, where players not only choose strategies but also choose with whom they play. A group of players who are dissatisfied with the play of their current partners can join together and play a new equilibrium. This imposes new refinements on equilibrium play, where play depends on the relative populations of players in different roles, among other things.
We also examine finite repetitions of games where players may choose to rematch in any period. Some equilibria of fixed-player repeated games cannot be sustained as equilibria in a repeated social game. Conversely, the set of repeated matching (or social) equilibria also includes some plays that are not part of any subgame perfect equilibrium of the corresponding fixed-player repeated games. We explore existence under different equilibrium definitions, as well as the relationship to renegotiation-proof equilibrium. It is possible for repeated matching equilibria to be completely distinct from renegotiation-proof equilibria, and even to be Pareto inefficient.

Philippe Jehiel

University College London and PSE

  Thursday, July 14, 9:15

Towards a Theory of Deception

(joint work with David Ettinger)

Abstract

This paper proposes an equilibrium approach to deception where deception is defined to be the process by which actions are made to induce erroneous inferences so as to take advantage of them. Specifically, we introduce a framework with boundedly rational players in which agents make inferences based on a coarse information about others' behaviors: Agents are assumed to know only the average reaction function of other agents over bunches of situations. Equilibrium requires that the coarse information available to agents is correct, and that inferences and optimizations are made based on the simplest theories compatible with the available information. We illustrate the phenomenon of deception and how reputation concerns may arise even in zero-sum games in which there is no value to commitment. We further illustrate how the possibility of deception affects standard economic insights through a number of stylized applications including a monitoring game and two simple bargaining games. The approach can be viewed as formalizing into a game theoretic setting a well documented bias in social psychology, the Fundamental Attribution Error.

Mark R. Johnson

Tulane University

  Thursday, July 14, 2:30, Session C

A complexity partial order for strategy implementing automata

Abstract

The use of complexity costs for strategy implementation is now common. Generally, these costs are based on the number of states in an automaton implementing the strategy. While tractable, this approach is incapable of distinguishing between automata with the same number of states but different capabilities. The specific powers differentiating such automata are counting and sequence-detecting abilities. The results in this paper use algebraic techniques and properties to identify a complexity partial ordering that reflects the different powers an automaton might have. The resulting partial ordering can be used, for example, to identify the “simplest” automaton implementing a Nash equilibrium in a repeated-play game.

Michael A. Jones

Montclair State University

  Tuesday, July 12, 2:00, Session A

Proportional Pie Cutting    [doc]

(joint work with Steven J. Brams and Christian Klamler)

Abstract

David Gale (1993) was perhaps the first to suggest that there is a difference between cake and pie cutting. A cake is often viewed as a line segment or rectangle where cuts are perpendicular to an axis. A pie is often viewed as a circle or disk where cuts are radial to the center of the pie creating wedge-shaped pieces. Individuals’ preferences for cake are represented by nonatomic probability measures over the unit interval, while for pie the measure is over the unit disk. Although a pie can be viewed as a cake with its endpoints connected, this change in geometry is enough to render ineffective many cake-cutting procedures satisfying different fairness criteria.

We provide a counterexample showing that a cake cannot necessarily be divided into a proportional allocation of ratio p:1-p between two players, where one player receives p of the cake according to her measure and the other receives 1-p of the cake according to his measure. For rational p, we apply Lucas’ method of markers on a pie and the pigeonhole principle to show that there exist two wedge-shaped pieces such that one player receives exactly p according to her measure while the other player receives 1-p according to his measure. However, different applications of Lucas’ method of markers yield different-sized pieces of surplus pie not awarded to either player. Although we show that for any real p there exists such a proportional allocation without any surplus (thereby using the minimal number of 2 radial cuts), the surplus motivates the existence of an efficient and proportional allocation such that the ratio of the value of the players’ pieces is p:1-p, yet each player receives more than his or her proportional share. We provide a procedure to find a proportional and efficient allocation for any p. For p = 0.5, our procedure constructs an efficient and equitable allocation of pie between two players. For 2 players, we relate the proportional allocation of pie to the Kalai-Smorodinsky bargaining solution.
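
An exploratory sketch of the existence claim (a brute-force scan over discretized radial cuts with hypothetical random measures, echoing the pigeonhole argument; this is not the markers construction):

```python
# Discretize the pie into N sectors and scan for a wedge worth at least p
# to player 1 whose complement is worth at least 1-p to player 2.
import numpy as np

N, p = 360, 0.6
rng = np.random.default_rng(1)
v1 = rng.random(N); v1 /= v1.sum()        # player 1's measure by sector
v2 = rng.random(N); v2 /= v2.sum()        # player 2's measure by sector

best = None
for s in range(N):                        # first radial cut before sector s
    c1 = c2 = 0.0
    for t in range(N):
        c1 += v1[(s + t) % N]; c2 += v2[(s + t) % N]
        if c1 >= p:                       # smallest wedge giving player 1 >= p
            surplus = (c1 - p) + (p - c2)
            if c2 <= p and (best is None or surplus < best[0]):
                best = (surplus, s, (s + t + 1) % N)
            break
print(best)  # (unallocated surplus value, cut 1, cut 2)
```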

Yuan Ju

Keele University

  Monday, July 11, 2:30, Session A

The Consensus Value for Games in Partition Function Form    [pdf]

Abstract

This paper studies a generalization of the consensus value (cf. Ju, Borm and Ruys (2004)), a newly introduced solution concept for cooperative games with transferable utility in characteristic function form, to the class of partition function form games. The concepts and axioms related to the consensus value are extended. This value is characterized as the unique function that satisfies efficiency, complete symmetry, the quasi-null player property and additivity. By means of the transfer property, a second characterization is provided. Moreover, it is shown that the consensus value satisfies individual rationality under a certain condition, and well balances the trade-off between coalition effects and externality effects. By modifying the stand-alone reduced game, a recursive formula for the value is established. A further generalization of the consensus value is discussed. Finally, applications of the consensus value to oligopoly games in partition function form and to participation incentive problems in free-rider situations are given.

Wei-Torng Juang

Academia Sinica

  Monday, July 11, 2:30, Session A

Learning through Aspiration to Play the Mixed Equilibrium in Zero-Sum Games

(joint work with Tzu-Hou Wang)

Abstract

We construct a model in which two players play a zero-sum game through aspiration. In the supergame, where the two players interact for infinitely many periods, we may define such a process as a Markov chain, in which players adjust their strategies and aspiration levels each period. We first show that: (i) the process must converge to some recurrent class within finitely many periods; (ii) any state contained in some recurrent class must satisfy the condition that the sum of the two players’ aspiration levels is less than or equal to zero; (iii) the state in which both players play the unique Nash equilibrium in each period and each player has zero aspiration level is also recurrent. We then introduce mutation into the process by assuming that players might adjust their aspiration levels even when they are satisfied. Our main result is that the state in which players play the unique Nash equilibrium in each period and each player has zero aspiration level is the only recurrent state that is stochastically stable. That is, players may learn to play the unique mixed Nash equilibrium in zero-sum games, and this is the only outcome that will be observed with probability one in the long run.
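
A toy simulation of satisficing play in matching pennies may help fix ideas; the adjustment rule below is a simple instantiation of mine, not the paper's exact process, so it need not reproduce the stochastic-stability result.

```python
# Keep your mixed action when the realized payoff meets your aspiration,
# otherwise experiment; aspirations track realized payoffs.
import random

def simulate(T=100000, step=0.01):
    p1, p2 = random.random(), random.random()   # Pr(play Heads)
    asp1 = asp2 = 1.0                           # initial aspirations
    for _ in range(T):
        a1, a2 = random.random() < p1, random.random() < p2
        u1 = 1.0 if a1 == a2 else -1.0          # player 1 wins on a match
        u2 = -u1
        if u1 < asp1: p1 = random.random()      # dissatisfied: experiment
        if u2 < asp2: p2 = random.random()
        asp1 += step * (u1 - asp1)              # aspirations track experience
        asp2 += step * (u2 - asp2)
    return p1, p2, asp1, asp2

# In the paper's stochastically stable state, aspirations are zero and play
# is the mixed equilibrium; this toy rule lets one probe such dynamics.
print(simulate())
```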

Eiichiro Kazumori

Stanford University

  Monday, July 11, 12:00, Session B

Auctions with Package Bidding: An Experimental Study    [pdf]

Abstract

This paper reports the results of auction experiments to evaluate auction designs when agents have superadditive values for heterogeneous objects. The first factor of the experimental design is auction choice. We considered generalized Vickrey auctions, simultaneous ascending auctions, and clock-proxy auctions. The second factor is the value structure of agents. In addition to a benchmark case of additive values, we considered superadditive value structures that feature the exposure problem and the coordination problem. The third factor is subject characteristics. We ran experiments with professional traders and university students. We found that clock-proxy auctions outperformed generalized Vickrey auctions. Clock-proxy auctions outperformed simultaneous ascending auctions under the exposure-problem value structure, and did statistically equally well under the additive and coordination-problem value structures. The result suggests a trade-off between efficiency improvements and complexity in package bidding. An ANOVA of outcomes demonstrated that auction designs were significant, and the interaction terms were often significant. We estimated the effect of auction design on efficiency and revenue and found that its magnitude depended on the valuation structure and subject characteristics. The result suggests that market design is not one-size-fits-all but that a successful design builds on an understanding of problem-specific issues.

Christian Klamler

University of Graz

  Tuesday, July 12, 3:00, Session A

Better ways to cut a cake    [pdf]

(joint work with Steven J. Brams; Michael A. Jones)

Abstract

Simple cake-cutting procedures used to divide a cake, which could be any heterogeneous good, are analyzed and compared. The well-known 2-person, 1-cut cake-cutting procedure, cut-and-choose, while envy-free and efficient, is not equitable, limiting the cutter to exactly 50% when the chooser, in general, can do better. A new surplus procedure (SP), which induces the players to be truthful in order to maximize their minimum allocations, leads to a more equitable division of the surplus—the part that remains after each person receives exactly 50%. However, SP is more information-demanding than cut-and-choose, requiring that the players report their value functions over the entire cake, not just indicate 50-50 points.
For 3 persons, there may be no envy-free division that is equitable. But there is a simple 3-person, 2-cut squeezing procedure that induces maximin players to make cuts that yield an envy-free division. By contrast, no 4-person, 3-cut envy-free procedure is known to exist. The applicability of the surplus and squeezing procedures to the fair division of a heterogeneous good, like land, is discussed.
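
The 50% guarantee for the cutter is easy to see in a toy computation. The sketch below (with made-up piecewise-constant value densities; cut-and-choose only, not the surplus procedure) finds the cutter's 50-50 point and lets the chooser take her preferred piece.

    # Cut-and-choose on the cake [0,1] with piecewise-constant value densities.
    # The densities are illustrative assumptions.
    import numpy as np

    grid = np.linspace(0, 1, 1001)                 # fine grid over the cake

    def cdf(density):
        """Cumulative share of total value on [0, x] for each grid point x."""
        return np.cumsum(density) / density.sum()

    cutter_density  = np.where(grid < 0.5, 3.0, 1.0)   # cutter favors the left
    chooser_density = np.where(grid < 0.5, 1.0, 3.0)   # chooser favors the right

    cut = grid[np.searchsorted(cdf(cutter_density), 0.5)]   # cutter's 50-50 point
    left_value = cdf(chooser_density)[np.searchsorted(grid, cut)]
    piece = "left" if left_value > 0.5 else "right"
    print(f"cut at {cut:.3f}; chooser takes the {piece} piece, "
          f"worth {max(left_value, 1 - left_value):.3f} to her")

Here the cut lands near 1/3, the cutter gets exactly 50% of her own valuation, and the chooser walks away with roughly 83% of hers, which is exactly the inequity the surplus procedure is designed to split more evenly.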

Frederic Koessler

Universite de Cergy-Pontoise

  Friday, July 15, 11:30, Session D

Long Persuasion in Sender-Receiver Games    [pdf]

(joint work with Francoise Forges)

Abstract

This paper characterizes the set of all Nash equilibrium payoffs achievable with unmediated communication in sender-receiver games (i.e., games with an informed expert and an uninformed decisionmaker) in which the expert's information is certifiable.

Guillaume Lacôte

Université Paris 6

  Tuesday, July 12, 2:00, Session D

Boundedly complex Turing strategies play the repeated Prisoner's Dilemma: some results    [pdf]

Abstract

Bounding the complexity of strategies in a repeated game tremendously affects the set of Nash equilibria: this has been studied extensively when the complexity of a strategy is defined in terms of finite state automata or of bounded recall. We consider the more general model of computable strategies (those which can be implemented by a Turing machine). To define complexity we depart from structural parameters like the number of states and symbols and consider the Kolmogorov complexity of a strategy: its shortest constructive description (given a fixed language).

We consider the finitely repeated Prisoner's Dilemma where each player has to choose a pure strategy of bounded Kolmogorov complexity. We discuss how the length of the game and the respective bounds on complexity determine whether cooperation is a Nash equilibrium. We show that cooperation is indeed sustainable if and only if the complexity of at least one player is significantly smaller than the length of the game.

Krishna Ladha

New York University

  Wednesday, July 13, 2:30, Session C

The Paradox Of Unbiased Public Information

(joint work with Gary J. Miller)

Abstract

Recent game-theoretic literature on juries proposes many reforms, including the abandonment of the unanimity rule. Considering the scope of the proposed change, this paper sets out to do one thing: it tests the critical game-theoretic assumption that jurors vote on the basis of being pivotal. The test is devised so that if groups do well in aggregating dispersed information, they support the game-theoretic view of juries; if not, they oppose it. Here is how. In theory, as shown in the paper, large enough juries remain relatively unaffected when the public signals the jurors observe happen to be misleading, because theoretical juries do not erroneously overweight the public signals at the expense of the private signals. In reality, however, each individual may overweight misleading public signals, leading real juries to a terrible outcome. It is this potential for direct contradiction between theoretical and experimental juries that makes our experimental test sharper than previous tests: given misleading public signals, rational voting would still produce information aggregation; naïve voting would not. In prior research with no public signals, both rational and naïve voting produced information aggregation. Certain public policy implications of our work pertaining to the media are offered.

Claudia M. Landeo

University of Alberta

  Tuesday, July 12, 2:00, Session C

Deterrence, Lawsuits, and Litigation Outcomes under Court Errors    [pdf]

(joint work with Maxim Nikitin)

Abstract

This paper presents a strategic model of liability and litigation under court errors. Our framework allows for endogenous choice of level of care and endogenous likelihood of filing and disputes. We then apply this framework to study the effects of court errors, damage caps and split-awards. Finally, we extend our benchmark model to study the effects of fee-shifting.

Our model consists of three stages. In the first stage, the potential injurer decides between taking due care or being negligent. This decision depends on the cost of preventing accidents and on the expected litigation loss in case of an accident. The level of care determines the probability that an accident occurs. If an accident occurs, the second stage, called the filing stage, starts. Nature draws the merit of the plaintiff's case from a continuum of types and reveals the type to the potential plaintiff, who then decides whether to file a lawsuit. If a lawsuit is filed, the third stage, called the pre-trial bargaining stage, starts. It consists of a signaling-ultimatum game, where two Bayesian risk-neutral parties, an uninformed plaintiff and an informed defendant, negotiate prior to a costly trial. We derive sufficient conditions for a unique universally-divine (Banks and Sobel, 1987) mixed-strategy perfect Bayesian equilibrium under low court errors. In this equilibrium, some defendants choose to be liable; some cases are filed; and some lawsuits are dropped, some are resolved out of court, and some go to trial. We find that court errors in identifying the liability of negligent defendants, as well as damage caps and split-awards, reduce the likelihood of dispute but increase filing and reduce the deterrence effect of punitive damages. We derive conditions under which the adoption of the English rule for allocating legal costs reduces filing.

Jean-Francois Laslier

Ecole Polytechnique

  Monday, July 11, 2:00, Session C

Strategic approval voting in a large electorate    [pdf]

Abstract

The paper considers approval voting for a large population of voters. It is proven that, based on statistical information about candidate scores, rational voters vote sincerely. It is also proven that if a Condorcet winner exists, this candidate is elected.

Ming Li

Concordia University

  Tuesday, July 12, 11:30, Session C

Combining Expert Opinions    [pdf]

Abstract

I analyze a model of advice with two perfectly informed experts and one decision maker. The bias of an expert is her private information. I show that consulting two experts is better than consulting just one. In the simple “peer review” mechanism, the decision maker receives just one report, and the second expert decides whether to block the first expert’s report. A more rigid peer review process improves information transmission. Simultaneous consultation transmits information better than sequential consultation and peer review. However, peer review achieves significant information transmission, with the decision maker receiving only one report. There is an asymmetric equilibrium that is more efficient than the symmetric equilibrium. When given the chance to discover biases of experts, the decision maker may prefer not to do so.

Zhen Liu

Stony Brook University

  Wednesday, July 13, 11:30, Session B

Agreement of opinions and trade with unawareness

Abstract

Unawareness is widely observed in real life. By introducing asymmetric awareness (subjective state spaces), we study the role of unawareness of purely informative (payoff-irrelevant) signals in models of agreement of opinions and trade.

If the prior beliefs of the state spaces are coherent on the common state space, which implies ex ante agreement of posterior beliefs, we show that ex post agreement is not guaranteed even if the posterior beliefs are common knowledge.

Similarly, in an exchange environment, coherency of the priors, together with the conditions needed to reach the positive result without unawareness, is not sufficient for an initially efficient allocation to be ex ante and interim efficient.

Xiao Luo

Institute of Economics, Academia Sinica

  Monday, July 11, 2:00, Session B

Iterated Strict Dominance in General Games    [pdf]

(joint work with Yi-Chun Chen and Ngo Van Long)

Abstract

Following Milgrom and Roberts [Econometrica 58 (1990), 1255-1278], we offer a definition of iterated elimination of strictly dominated strategies (IESDS*) for games with (in)finite players, (non)compact strategy sets, and (dis)continuous payoff functions. IESDS* is always a well-defined, order-independent procedure that can be used to solve for Nash equilibria in dominance-solvable games. We characterize IESDS* by means of a “stability” criterion. We show by an example that IESDS* might generate spurious Nash equilibria in the class of Reny's better-reply secure games. We provide sufficient conditions under which IESDS* preserves the set of Nash equilibria.
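
For finite games the basic procedure is easy to state in code. The sketch below (restricted to domination by pure strategies in a finite two-player game, far short of the generality of IESDS* above) eliminates strictly dominated rows and columns until no more can be removed.

    # Iterated elimination of pure-strategy strict dominance, finite 2-player game.
    import numpy as np

    def iesds(A, B):
        """A[i, j], B[i, j]: payoffs to players 1 and 2; returns surviving indices."""
        rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
        changed = True
        while changed:
            changed = False
            for i in rows[:]:     # drop rows strictly dominated by another row
                if any(all(A[k, j] > A[i, j] for j in cols) for k in rows if k != i):
                    rows.remove(i); changed = True
            for j in cols[:]:     # drop columns strictly dominated by another column
                if any(all(B[i, l] > B[i, j] for i in rows) for l in cols if l != j):
                    cols.remove(j); changed = True
        return rows, cols

    # Prisoner's dilemma: defection (index 1) survives for both players.
    A = np.array([[3, 0], [5, 1]])
    print(iesds(A, A.T))   # -> ([1], [1])

In a dominance-solvable game such as this, the single surviving profile is the Nash equilibrium, which is the sense in which the procedure "solves for" equilibrium.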

Michael Maschler

Hebrew University of Jerusalem

  Friday, July 15, 11:30, Session A

Recent results concerning the core and the nucleolus on a class of the Chinese Postman game

Abstract

This talk reports on research done by D. Granot, H. Hammers, J. Kuipers and me. The Chinese Postman game is defined by a connected graph with a distinguished vertex, called the Post Office. Each edge has a cost, and some of the edges are each occupied by a single player.

A postman starts a tour from the Post Office, delivers mail to the players and returns to the Post Office. Obviously, he takes the cheapest route. The question is: how to allocate the cost of the postman among the players?

We are interested in a class of games which has the property that at each core point, the payoffs of the players residing in a "road" depend only on the cost function restricted to the edges of the road -- no matter what the cost function is. Here a /road/ is defined to be a path, maximal under inclusion, whose interior vertices have degree equal to two.

Having such a property has an advantage as well as a disadvantage. The advantage is that if one wants to "sell" the solution to a layman then, in addition to the obvious safety guaranteed by the core, one can tell the players that they only have to worry about what happens on their own road. They need not care about the costs and residents of other roads. The disadvantage is that to serve the players in other roads, the postman must sometimes pass through the current road. Why should they be free riders on this road?

It turns out that the characterization is indeed possible: to have this property, the graph of the game must be Eulerian and have the 4-cut property, which states that in order to separate the graph into two disconnected pieces by a cut that intersects a road only once, one needs at least four cuts. A similar result holds for the nucleolus.

For the family of such games we show that the computation of the core requires at most 2n+1 inequalities and that the nucleolus of the game can be computed in complexity O(n^2).

Eric Maskin

Institute for Advanced Study and Princeton University

  Monday, July 11, 3:30

Majority Rule and Strategic Voting

(joint work with Partha Dasgupta)

Abstract

We show that there is a precise sense in which simple majority rule is strictly less prone to strategic voting than generalized scoring rules, which include rank-order voting, plurality rule, and approval voting.

Virginie Masson

University of Pittsburgh

  Monday, July 11, 11:30, Session D

You and Your Neighbors: Stubborn or Altruistic?    [pdf]

(joint work with Nicolas Rosenfeld)

Abstract

We develop an evolutionary model with a neighborhood structure in which two types of individuals coexist: A-Type individuals who prefer to coordinate on strategy A and B-Type individuals who prefer to coordinate on strategy B. Players meet to play a 2 × 2 coordination game in which the relevant payoff matrix depends on their types. The selection of a particular decision rule, either imitation or best reply, is conditional on: (i) whether the opponent is a neighbor or a stranger, and (ii) the characteristics of the information they sample. We show that the equilibrium asymptotically selected depends on the distribution of types in the population.

Timothy Mathews

California State University - Northridge

  Friday, July 15, 11:30, Session B

Participation Incentives in Rank Order Tournaments with Endogenous Entry    [pdf]

(joint work with Soiliou Namoro)

Abstract

Rank order tournaments, in which the payment made to an agent is based upon relative observed performance, are a commonly used compensation scheme. Such tournaments induce agents to exert effort when the exact level of effort is not easily observable.
In many situations, agents will potentially compete in a series of such tournaments and must decide which events to enter. The decision of which events to enter will depend upon the prizes offered by each event. The primary focus of the present study is the entry decision of agents in a series of rank order tournaments. Of particular interest is the possibility that a field of higher quality may be attracted to an event with smaller prizes. Conditions are determined under which this counterintuitive outcome can occur.

Jérôme Mathis

Laboratoire THEMA-Univ. de Cergy-Pontoise

  Tuesday, July 12, 12:00, Session C

Consulting an expert with potentially conflicting preferences

(joint work with Thomas Lanzi)

Abstract

We study a situation where a decision maker relies on the report of a self-interested and informed expert before deciding whether to undertake a certain project. Information contained in the report is verifiable in the sense that the expert can suppress favorable information supporting the project but cannot exaggerate it. An important feature of this interaction is that, depending on the collected information, the two agents may have conflicting preferences. Our results show that this setting favors the agent who is less eager to undertake the project, in that he succeeds in inducing his most preferred action.

Alexander Matros

University of Pittsburgh

  Thursday, July 14, 11:30, Session A

Contests with Thresholds    [pdf]

Abstract

In this paper, we consider an n-player contest (à la Tullock (1980)) in which the contest designer imposes an aggregate threshold level. Players have commonly known budget constraints and may value the prize differently. If the total contribution of all n players does not meet the threshold level, the contest designer keeps the prize with positive probability.

We analyze three cases: (1) the threshold level is not met; (2) the threshold level is met and budget constraints are not binding; (3) the threshold level is met and some budget constraints are binding.

We derive the equilibrium spending in all three cases. Well-known results on equilibrium spending when all prizes are the same and/or when two players have different values follow as corollaries of our results.

Our model has natural connections with lotteries and all-pay auctions. However, our mechanism works even if “the reserve price” (the threshold level) is not met.
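
For reference, the standard Tullock benchmark without a threshold or budget constraints (a textbook baseline stated here for orientation, not the paper's model) has contest success function and payoffs

    $p_i(x_1,\ldots,x_n) = \frac{x_i}{\sum_j x_j}, \qquad u_i = v\,p_i - x_i,$

and, with a common prize value $v$, symmetric equilibrium spending $x^* = \frac{(n-1)\,v}{n^2}$. The threshold and the budget constraints described above modify this baseline, which is what the three cases disentangle.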

Topi Miettinen

University College London

  Wednesday, July 13, 2:30, Session B

Agreeing on Play when Players are Prone to Guilt    [pdf]

Abstract

Experimental evidence suggests that communication increases cooperation in the prisoner's dilemma and contributions in public good games. This paper claims that this efficiency effect may be driven by the guilt that is felt about breaching informal agreements.
Informed by findings in social psychology, we construct a two-player model of preplay negotiation with players who are prone to feelings of guilt. We confirm the efficiency effect in the prisoner's dilemma and in the public good game and derive many other testable predictions for these games.
We call a strategy profile agreeable if an agreement to play according to it would not be broken and if both players have an incentive to reach such an agreement. In a large class of symmetric supermodular games, if the outcome where the actions of both are increased by one from the symmetric equilibrium is agreeable, then an outcome in the core of the game is agreeable. Symmetric submodular games, for instance, fail such a property.

Peter Bro Miltersen

University of Aarhus

  Thursday, July 14, 2:00, Session C

Finding Sequential Equilibria Using Lemke's Algorithm

(joint work with Troels Bjerre Sorensen)

Abstract

Koller, Megiddo and von Stengel (Games and Economic Behavior, 1996) and also von Stengel, van den Elzen and Talman (Econometrica, 2002) showed how to apply Lemke's algorithm to find equilibria of two-player extensive-form games. We extend their technique and show how to compute an equilibrium which is sequential as well as normal-form perfect for a given two-player extensive-form game with perfect recall. Our main idea is to apply lexicographic perturbations (which were already a key component of Lemke's algorithm) but to interpret the perturbations game-theoretically as trembles (which had not previously been done in the context of Lemke's algorithm).

Debasis Mishra

Universite Catholique de Louvain

  Friday, July 15, 12:00, Session E

Cost Sharing in a Job Scheduling Problem    [pdf]

(joint work with Bharath Rangarajan)

Abstract

A set of jobs needs to be served by a server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using two approaches. In one approach, we define a set of reasonable fairness axioms and show how the Shapley value rule can be characterized using these axioms. In the other approach, we use linear programming duality to derive a property called "pairwise no-envy allocation". This gives us a family of allocation rules. We show that the Shapley value rule chooses a pairwise no-envy allocation which minimizes the sum of absolute values of transfers over all pairwise no-envy allocations, whereas a "reverse rule" chooses a pairwise stable allocation which maximizes the sum of absolute transfers over all pairwise stable allocations. We discuss no-envy rules and characterize all no-envy rules for the special case when all jobs have the same processing time.
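
For small instances the Shapley value of such a cost game can be computed by brute force. The sketch below (with made-up processing times and cost rates, and with waiting cost charged on completion times as an illustrative convention) averages marginal costs over all orders of arrival.

    # Brute-force Shapley value for a queueing cost game: a coalition's cost is
    # the minimal total weighted completion time of its jobs served alone,
    # using Smith's rule (serve in decreasing order of w/p).
    from itertools import permutations
    from math import factorial

    p = {"a": 2, "b": 1, "c": 3}   # processing times (illustrative)
    w = {"a": 1, "b": 2, "c": 1}   # waiting-cost rates (illustrative)

    def cost(S):
        order = sorted(S, key=lambda j: w[j] / p[j], reverse=True)
        t = total = 0
        for j in order:
            t += p[j]              # completion time of job j
            total += w[j] * t      # waiting cost accrued by j
        return total

    jobs = list(p)
    phi = {j: 0.0 for j in jobs}
    for perm in permutations(jobs):           # average marginal costs
        S = []
        for j in perm:
            phi[j] += cost(S + [j]) - cost(S)
            S.append(j)
    print({j: v / factorial(len(jobs)) for j, v in phi.items()})

The characterizations in the paper pin down this allocation axiomatically; the enumeration above is only the definition made executable.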

Stephen Morris

Yale University

  Tuesday, July 12, 9:15

Robust Mechanism Design

Abstract

The mechanism design literature assumes too much common knowledge of the environment among the players and planner, by assuming common knowledge of a common prior on some fixed type space. The talk will survey three recent papers by Bergemann and Morris relaxing this assumption:
1. For partial implementation, we show when Bayesian implementation without common knowledge assumptions is equivalent to ex post incentive compatibility in a "direct" mechanism.
2. There is a characterization of "ex post full implementation," when it is possible to ensure that all ex post equilibria deliver a socially desired outcome.
3. There is a characterization of "robust implementation," when it is possible to ensure that all Bayesian equilibria deliver a socially desired outcome. All these results have implications for the robustness properties of classic economic problems, which will also be described.

John Nash

Princeton University

  Friday, July 15, 4:25

Results from Studies on the Reduction of Cooperative Games to Non-Cooperative Form (Using the Agencies Method) and Project Plans for Further Study

Abstract

A method for constructing a model of the process of getting together (into coalitions or into the grand coalition) by the players in a (cooperative) game is found in terms of "agencies" (by means of which players can electively assign their separately held strategic options to other players or agents). And this method works for the construction of models where the cooperation has the form of "evolved cooperation" (as in the examples of the studies of iterated "Prisoners' Dilemma" games that have been studied in Theoretical Biology).

We have results from the study of simple three-person games of the form describable by a "characteristic function"; in this study the agents were derived from the original three players if and when any of them became "elected" as agents by any of the other players.

And these results, since they indicate derived apportionings, among the three players, of the total payoff possible from the game, can be compared with the analogous indications derivable from the Shapley value or the nucleolus.

The initial appraisal of the study would be that (when the two-player sub-coalitions are relatively weak) the Shapley value assigns to them much too much influence (compared to the amounts suggested by the nucleolus).

But we feel that further study, with some variations in the modelling, MAY lead to payoff results that are comparatively closer to the Shapley value.

And, specifically, we now see an apparently nice method for introducing "attorney agents" which could represent coalitions (rather than having a coalition, for example of two players, being representable only by one of the two players or the other, dependent on which of them had electively assigned his strategic powers to the other).

It is conceivable that this variation in the modelling might enhance the effective influence of weak two-player coalitions. (And in general the basic idea is that, as cooperation is "evolved", as in Nature, refinements in the (evolved) "reactive behavior" of the cooperating parties may lead to more refined and therefore better results of the modelling, in relation to the payoff predictions.)

Abraham Neyman

Hebrew University of Jerusalem

  Tuesday, July 12, 4:00

Optimal Use of Communication Resources

(joint work with Olivier Gossner and Penelope Hernandez)

Abstract

We study a repeated game with asymmetric information about a dynamic state of nature. In the course of the game, the better-informed player can communicate some or all of his information to the other. Our model covers costly and/or bounded communication. We characterize the set of equilibrium payoffs, and contrast these with the communication equilibrium payoffs, which by definition entail no communication costs.

Yuichi Noguchi

Kanto Gakuin University

  Monday, July 11, 2:00, Session D

Merging with a Set of Probability Measures    [pdf]

Abstract

We give a characterization of a set of probability measures with which a prior weakly merges. For that purpose, we introduce the concept of conditioning rules, which represent regularities of probability measures, and we define a probability measure eventually generated by a family of conditioning rules. We then show that a set of probability measures is learnable, i.e., all probability measures in the set are weakly merged with by a prior, if and only if the set is included in a set of probability measures eventually generated by a countable family of conditioning rules. We also demonstrate that quite similar results obtain for almost weak merging. In addition, we argue that our characterization is related to the impossibility result in Nachbar (1997, 2004).

Daijiro Okada

Rutgers University

  Tuesday, July 12, 3:00, Session D

Growth of Strategy Sets, Entropy and Nonstationary Bounded Recall    [pdf]

(joint work with Abraham Neyman)

Abstract

In the existing literature on bounded rationality in repeated games, sets of feasible strategies are assumed to be independent of time (i.e. stage). In this paper we consider a time-dependent description of strategy sets, growing strategy sets. A growing strategy set is characterized by the way the set of strategies available to a player at each stage expands, possibly without bound, but not as fast as it would in the case of full rationality. Growing strategy sets are defined without regard to any specific complexity measure such as the number of states of automata or the length of recall. Rather, we focus on the number of distinct strategies available to a player up to stage t and how this number grows as a function of t.

We then study two-person zero-sum games where one player's feasible strategy set grows slowly while playing against a fully rational player. We characterize the value of such games as a function of the growth rate of the strategy set. This function is related to the one that gives the minimax value of the stage game under the constraint that the payoff-maximizing player chooses a mixed action with a given upper bound on entropy.

As an application of growing strategy set to strategic complexity, we also study repeated games with non-stationary bounded recall strategies where the length of recall is allowed to grow as the game progresses. We will show that a player with bounded recall can guarantee the value of the stage game even against a player with full recall so long as he can remember, at stage t, at least K log t stages back for some constant K>0. Thus, in order to guarantee the value, it suffices to remember only a vanishing fraction of the past.

Frank H. Page, Jr.

University of Alabama

  Wednesday, July 13, 11:30, Session C

Strategic Basins of Attraction, the Farsighted Core, and Network Formation Games    [pdf]

(joint work with Myrna Wooders)

Abstract

We make four main contributions to the theory of network formation. (1) The problem of network formation with farsighted agents can be formulated as an abstract network formation game. (2) In any farsighted network formation game the feasible set of networks contains a unique, finite, disjoint collection of nonempty subsets having the property that each subset forms a strategic basin of attraction. These basins of attraction contain all the networks that are likely to emerge and persist if individuals behave farsightedly in playing the network formation game. (3) A von Neumann-Morgenstern stable set of the farsighted network formation game is constructed by selecting one network from each basin of attraction. We refer to any such von Neumann-Morgenstern stable set as a farsighted basis. (4) The core of the farsighted network formation game is constructed by selecting one network from each basin of attraction containing a single network. We call this notion of the core the farsighted core. We conclude that the farsighted core is nonempty if and only if there exists at least one farsighted basin of attraction containing a single network.
To relate our three equilibrium and stability notions (basins of attraction, farsighted basis, and farsighted core) to recent work by Jackson and Wolinsky (1996), we define a notion of pairwise stability similar to the Jackson-Wolinsky notion and we show that the farsighted core is contained in the set of pairwise stable networks.
Finally, we introduce, via an example, competitive contracting networks and highlight how the analysis of these networks requires the new features of our network formation model.

Sergio Parreiras

The University of North Carolina at Chapel Hill

  Thursday, July 14, 2:30, Session E

Smooth Ex-Post Implementation with Multi-Dimensional Information    [pdf]

(joint work with Claudio Mezzetti)

Abstract

This paper provides sufficient conditions for ex-post implementation of social choice rules. The main feature of our approach is that the set of outcomes of the social choice function includes randomizations over alternatives, and that attention is restricted to smooth, regular social choice functions.

David G. Pearce

New York University

  Friday, July 15, 3:30

Reputational Wars of Attrition with Complex Bargaining Postures

Abstract

Consider a two-person intertemporal bargaining problem in which players choose actions and collect payoffs while bargaining proceeds. Theory is silent regarding how the surplus is likely to be split, because a folk theorem applies. Perturbing such a game with a rich set of behavioral types for each player yields a specific asymptotic prediction for how the surplus will be divided, as the perturbation probabilities approach zero. Behavioral types may follow nonstationary strategies and respond to the opponent’s play. How much should a player try to get, and how should she behave while waiting for the resolution of bargaining? In both respects she should build her strategy around the advice given by the “Nash bargaining with threats” theory developed for two-stage games. The results suggest that there are forces at work in some dynamic games that favor certain payoffs over all others. This is in stark contrast to the classic folk theorems, to the further folk theorems established for repeated games with two-sided reputational perturbations, and to the permissive results obtained in the literature on bargaining with payoffs-as-you-go.

Andrew Postlewaite

University of Pennsylvania

  Wednesday, July 13, 9:15

Aggregation of Expert Opinions

(joint work with Dino Gerardi and Richard McLean)

Abstract

Conflicts of interest arise between a decision maker and agents who have information pertinent to the problem because of differences in their preferences over outcomes. We show how the decision maker can extract the information by distorting the decisions that will be taken, and show that only slight distortions will be necessary when agents are "informationally small". We further show that as the number of informed agents becomes large the necessary distortion goes to zero. We argue that the particular mechanisms analyzed are substantially less demanding informationally than those typically employed in implementation and virtual implementation. In particular, the equilibria we analyze are "conditionally" dominant strategy in a precise sense. Further, the mechanisms are immune to manipulation by small groups of agents.

Philip J. Reny

University of Chicago

  Monday, July 11, 9:15

On the Existence of Monotone Pure Strategy Equilibria in Bayesian Games    [pdf]

Abstract

We extend and strengthen both Athey's (2001) and McAdams' (2003) results on the existence of monotone pure strategy equilibria in Bayesian games. We allow action spaces to be compact locally-complete metrizable semilattices and can handle both a weaker form of quasisupermodularity than is employed by McAdams and a weaker single-crossing property than is required by both Athey and McAdams. Our proof -- which is based upon contractibility rather than convexity of best reply sets -- demonstrates that the only role of single-crossing is to help ensure the existence of monotone best replies. Finally, we do not require the Milgrom-Weber (1985) absolute continuity condition on the joint distribution of types.

Jamele Rigolini

University of Warwick

  Thursday, July 14, 11:30, Session E

On the Role of Formal and Informal Institutions in Development    [pdf]

(joint work with Amrita Dhillon)

Abstract

We consider an economy with firms producing goods of high or low quality, where quality is unobservable to consumers, and low quality can stem from a bad productivity shock or low effort. We then link the degree of development of a country to the probability of a bad productivity shock, and compare two institutions that solve the moral hazard problem: an informal mechanism, reputation, achieved via consumers boycotting firms that produce bad quality, and a formal mechanism, contract enforcement, whose effectiveness can be reduced by firms by means of lobbying. In our model perfect contract enforcement is the first-best mechanism sustaining high quality. However, firms' incentives to lobby and to decrease the quality of the legal system increase with the probability of a bad productivity shock, so that to sustain high quality in developing countries consumers have to rely more on the informal reputation mechanism. Developing countries therefore suffer both from the direct effect of more frequent bad productivity shocks and from the indirect effect of greater difficulty in building good institutions.

Gabriel Darren Rosenberg

London School of Economics

  Friday, July 15, 11:30, Session C

Computation of Nash Equilibria for Bimatrix Games with Integer Pivoting

Abstract

This paper describes an integer pivoting implementation of the "EEE" algorithm of C. Audet et al. (2001), "Enumeration of all extreme equilibrium strategies of bimatrix games," SIAM J. Scientific Computing 23, 323-338. The algorithm employs the best response condition of Nash equilibria by exploring the feasibility of searches where variables are forced to be either a best response to the other player's strategy or played with zero probability. At a Nash equilibrium, all variables must fall into one (or both) of these two categories. Previous implementations of the algorithm solve the parameterized linear programs in each step with the standard simplex method. This employs division at each step in the algorithm, with no guarantee that the results will be integers. In large bimatrix games, small rounding errors due to the use of floating-point numbers may add up over the course of the large number of pivot steps needed to solve a large linear program. Implementation using integer pivoting, which uses division only where the quotient is guaranteed to be an integer, allows solution of large games without error. Experimental tests are reported that compare this implementation with previous results and determine the effect on running time of changes in game size, range of payoffs, and objective function of the feasibility search. The results imply potential improvements to the algorithm and are supplemented with a geometric approach to the algorithm tying it to other, polyhedral-search based, algorithms.
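
The key arithmetic fact behind integer pivoting is the Bareiss-style update, in which the only division is by the previous pivot and is always exact. A minimal sketch of one pivot step (my own illustration of the technique, not the paper's implementation):

    # One integer (fraction-free) pivot step on an integer tableau.
    def integer_pivot(T, r, c, prev_pivot=1):
        """Pivot T on entry (r, c); returns the new integer tableau and the pivot."""
        piv = T[r][c]
        assert piv != 0
        new = [row[:] for row in T]
        for i in range(len(T)):
            if i == r:
                continue                     # the pivot row is kept as-is
            for j in range(len(T[0])):
                # Bareiss rule: this integer division is guaranteed exact
                new[i][j] = (piv * T[i][j] - T[i][c] * T[r][j]) // prev_pivot
        return new, piv

    T = [[2, 1, 4],
         [3, 5, 7]]
    print(integer_pivot(T, 0, 0)[0])   # [[2, 1, 4], [0, 7, 2]] -- all integers

Chaining such steps, each division by the pivot of the previous step remains exact, so no floating-point rounding ever enters the tableau.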

Joachim Rosenmuller

IMW, University of Bielefeld

  Wednesday, July 13, 12:00, Session D

Convex Geometry and Superadditive Solutions

(joint work with Diethard Pallaschke)

Abstract

The Maschler--Perles (1981) bargaining solution is a mapping defined on 2--dimensional bargaining problems; its essential property is superadditivity.

The solution is based on the observation that a polyhedral bargaining problem in R^2 is an algebraic ('Minkowski') sum of 'elementary' bargaining problems. These are generated by line segments or, equivalently, they are algebraic ('Minkowski') sums of triangles.

Perles (1982) proved that a superadditive bargaining solution does not exist for more than 2 players.

Further analysis of this problem leads, first of all, to the study of convex polyhedra in R^n that are algebraic sums of 'prisms'. A prism is a bargaining problem with a plane Pareto surface. A sum of prisms is called a 'cephoid' -- this can be seen as a lottery over prisms.

We analyse the nature of cephoids, their maximal faces and the structure of the poset of all faces. This is a topic in convex geometry.

Based on this, we produce a class of cephoids which admits a superadditive bargaining solution. The solution generalizes the superadditive solution of Maschler and Perles as follows.

In two dimensions, the 'speed of concessions' is determined by the 'area of the prisms' generated by line segments. If instead, one regards this as a density of a surface measure, then the generalization to arbitrary dimensions is at hand. We can then map the Pareto surface onto a simplex such that the surface measure is translated into a measure absolutely continuous with respect to Lebesgue measure. The pre-image of the center of gravity with respect to this measure is the generalized Maschler--Perles solution and it behaves superadditively.

Marzena J. Rostek

Yale University

  Monday, July 11, 2:30, Session B

Fatalistic Choice Rules    [pdf]

Abstract

We formally model fatalistic reasoning and study its implications in strategic and non-strategic settings. A decision maker reasons fatalistically if, given her beliefs about opponents’ choices, she evaluates an action by a given quantile of the corresponding distribution. Thus, the framework unifies and generalizes a number of existing concepts (maxmin and maxmax) in a continuous way.
We axiomatize the class of preferences identified by quantile maximization. Further, we characterize properties of fatalistic best-response correspondences and use them to derive testable restrictions on choice behavior (without restricting beliefs). We apply our decision-theoretic framework to games and study outcomes of interactions where individuals reason fatalistically. Specifically, we look for identification conditions, investigate common knowledge of fatalistic reasoning, and jointly model fatalistic and expectation-based choice.
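
A small numerical sketch of the idea (with made-up payoffs and beliefs): each own action is scored by the tau-quantile of its payoff distribution under the belief, so tau near 0 recovers maxmin and tau near 1 recovers maxmax.

    # Quantile-based evaluation of actions given a belief over opponent play.
    import numpy as np

    U = np.array([[4, 0, 2],            # U[a, s]: payoff of own action a
                  [3, 3, 1],            # against opponent action s
                  [5, 1, 0]])
    belief = np.array([0.5, 0.3, 0.2])  # probability over opponent actions

    def quantile_value(payoffs, probs, tau):
        order = np.argsort(payoffs)
        cum = np.cumsum(probs[order])
        return payoffs[order][np.searchsorted(cum, tau)]

    for tau in (0.01, 0.5, 0.99):
        vals = [quantile_value(U[a], belief, tau) for a in range(3)]
        print(tau, vals, "best action:", int(np.argmax(vals)))

At tau = 0.01 the cautious action 1 (worst case 1) wins; at tau = 0.99 the optimistic action 2 (best case 5) wins, tracing out the continuum between maxmin and maxmax mentioned above.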

Anna Rubinchik-Pessach

University of Colorado at Boulder

  Thursday, July 14, 12:00, Session E

Does it Take a Tyrant to Implement a Good Reform?    [pdf]

(joint work with Ruqu Wang)

Abstract

In our model a reform is a switch from one norm of behavior (equilibrium) to another, and agents have to endure private costs of transition in case of a reform. A (local) authority, which coordinates the transition, can enforce transfers across the agents and is capable of imposing punishments upon them. A transfer/tax is limited, however, by an agent's equilibrium payoff, and a punishment cannot exceed an upper bound monitored by a "third party" (international community). Implementing a good (Pareto-improving) reform can be hindered by asymmetric information about the costs of transition, which are privately known to the agents and cannot be observed by the authority. In this case even a benevolent authority may need to credibly threaten agents with a punishment to induce both the desired behavior and truth-telling about the costs, as otherwise some good reforms will not be implementable, even with Bayesian mechanisms. Allowing for harsher punishments in that framework reduces to `softening' the individual rationality constraint, thus widening the range of implementable reforms. The flip side of increasing the admissible punishment is making `bad' reforms feasible. With the international community setting a uniform standard of (negative) human rights (or maximal level of punishment) across countries, some will be unable to implement good reforms, while others will be prone to undesirable transitions. We thus formulate a trade-off between successful implementation of good reforms from the utilitarian perspective and the well-being of selected individuals in the society.

Samer Salame

City University of New York

  Tuesday, July 12, 11:30, Session D

Some Results on Adjusted Winner    [pdf]

(joint work with Rohit Parikh and Eric Pacuit)

Abstract

We study the Adjusted Winner procedure of Brams and Taylor for dividing goods fairly between two individuals, and prove several results. In particular we show rigorously that as the differences between the two individuals become more acute, they both benefit. We study some rather odd knowledge-theoretic properties of strategizing. We introduce a geometric approach which allows us to give alternate proofs of some of the Brams-Taylor results and which gives some hope for understanding the many-agent case also. We also point out that while honesty may not always be the best policy, it is, as Parikh and Pacuit [PPsv] point out in the context of voting, the only safe one. Finally, we also show that provided that the assignments of valuation points are allowed to be real numbers, the final result is a continuous function of the valuations given by the two agents.

Dov Samet

Tel Aviv University

  Tuesday, July 12, 12:00, Session A

Deriving Knowledge from Belief

(joint work with Ella Segev)

Abstract

Is it possible to attribute knowledge to an agent who has only beliefs? It has been argued that correct belief, the natural candidate for describing knowledge in terms of belief, is not knowledge. Here we prove it, and show that the negative introspection property of knowledge is the only reason why knowledge is not correct belief. We further show that it is impossible to express knowledge in terms of belief in any other way. Yet we demonstrate that for a rich enough family of models each belief operator can be associated with a unique knowledge operator.

Rahul Savani

London School of Economics

  Friday, July 15, 12:00, Session C

Challenge Games for Computing a Nash Equilibrium

(joint work with Bernhard von Stengel)

Abstract

Given a pair of integer matrices that define a bimatrix game, how long does it take to compute at least one Nash equilibrium? In theoretical computer science, this has been called "one of the most important open problems on the boundary of polynomial-time computability today". This work presents classes of games for which TWO standard methods take exponential time to find one equilibrium. One of them is the classical Lemke-Howson method, which is similar to the simplex algorithm for linear programming. In a sense, these games are comparable to linear programs where the simplex method takes exponential time. The second is a trivial "support guessing" method, which tries out the possible supports for a Nash equilibrium. The constructed games are hard to solve for this method since they are not square, with an exponential number of supports that need to be tested on average. The groundwork explaining these algorithms, and the geometry behind them, is given in a plenary talk by Bernhard von Stengel. This contribution explains the construction, based on elegant geometric-combinatorial properties, of the "challenge games".
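
To make the second method concrete, here is a rough sketch of support enumeration for a bimatrix game (a generic implementation of the folklore method, not the authors' code): for each pair of candidate supports, solve the indifference equations and keep the solutions that are genuine mixed equilibria.

    # Support enumeration for a bimatrix game (A, B); equal-size supports
    # suffice for nondegenerate games.
    import numpy as np
    from itertools import combinations

    def support_enumeration(A, B):
        m, n = A.shape
        for k in range(1, min(m, n) + 1):
            for I in combinations(range(m), k):
                for J in combinations(range(n), k):
                    M = np.zeros((k + 1, k + 1))
                    M[:k, k] = -1                  # payoff variable
                    M[k, :k] = 1                   # probabilities sum to one
                    rhs = np.zeros(k + 1); rhs[k] = 1
                    try:
                        M[:k, :k] = A[np.ix_(I, J)]      # rows in I indifferent
                        sol = np.linalg.solve(M, rhs)
                        y = np.zeros(n); y[list(J)] = sol[:k]; u = sol[k]
                        M[:k, :k] = B[np.ix_(I, J)].T    # columns in J indifferent
                        sol = np.linalg.solve(M, rhs)
                        x = np.zeros(m); x[list(I)] = sol[:k]; w = sol[k]
                    except np.linalg.LinAlgError:
                        continue
                    if (x >= -1e-9).all() and (y >= -1e-9).all() \
                       and (A @ y <= u + 1e-9).all() and (x @ B <= w + 1e-9).all():
                        yield x, y

    A = np.array([[3, 1], [1, 3]]); B = np.array([[1, 3], [3, 1]])
    for x, y in support_enumeration(A, B):
        print(x, y)        # the unique mixed equilibrium (1/2, 1/2), (1/2, 1/2)

The exponential blow-up exploited by the challenge games is visible here: the loops range over all support pairs, so a game built so that no small support works forces this method to test exponentially many candidates.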

Marco Scarsini

Universita di Torino

  Friday, July 15, 11:30, Session E

Large Newsvendor Games    [pdf]

(joint work with Luigi Montrucchio)

Abstract

We consider a game, called newsvendor game, where several retailers, who face a random demand, can pool their resources and build a centralized inventory that stocks a single item on their behalf. The inventory costs have to be allocated in a way that is advantageous to all the retailers. A game in characteristic form is obtained by assigning to each coalition its optimal expected cost. Mueller, Scarsini, and Shaked (2002) proved that the anticore of this game is always nonempty for every possible joint distribution of the random demands.

In this paper we consider newsvendor games with possibly an infinite number of newsvendors. We generalize some results contained in Mueller, Scarsini, and Shaked (2002) and we show that in a game with a continuum of players, under a nonatomic condition on the demand, the core is a singleton. For a particular class of demands we show how the core shrinks to a singleton when the number of players increases.

Karl H. Schlag

European University Institute

  Wednesday, July 13, 2:00, Session E

Robust Monopoly Pricing - The Case of Regret    [pdf]

(joint work with Dirk Bergemann)

Abstract

We consider a robust version of the classic problem of optimal monopoly pricing with incomplete information. The robust version of the problem is distinct in two aspects: (i) the seller minimizes regret rather than maximizes revenue, and (ii) the seller only knows that the true distribution of the valuations is in a neighborhood of a given model distribution. The robust pricing policy is characterized as the solution to a minimax problem for the case of small and large uncertainty faced by the seller. In the case of small uncertainty, the robust pricing policy prices closer to the median at a rate determined by the curvature of the static profit function. Regret increases linearly in the uncertainty.
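
One common formalization of the objective (stated here as background; the paper's exact definitions may differ) is the ex-post regret of a posted price $p$ against a buyer with valuation $v$:

    $\mathrm{regret}(p, v) = v - p\,\mathbf{1}\{v \ge p\},$

the gap between the best attainable revenue $v$ and the realized revenue. The robust policy then minimizes the maximum, over all valuation distributions in the neighborhood of the model distribution, of expected regret.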

Jeff Shamma

University of California, Los Angeles

  Monday, July 11, 2:30, Session D

Calibrated forecasts: Efficiency versus Universality    [pdf]

(joint work with Gurdal Arslan, Shie Mannor)

Abstract

One approach to learning in repeated matrix games is to have each player compute some sort of forecast of opponent actions and play a best response to this forecast. Accordingly, the limiting behavior of player actions strongly depends on the specific method for forecasting. For example in fictitious play, forecasts are simply the empirical frequencies of opponent actions. In special classes of games, player strategies converge to a Nash equilibrium, but, as is well known, the limiting behavior need not exhibit convergence in general.

Placing more stringent requirements on the forecasts can result in stronger convergence properties for general games. In particular, if players use “calibrated” forecasts (as in Foster & Vohra, 1997), then player strategies asymptotically converge to the set of correlated equilibria. When players use calibrated forecasts of joint actions (Kakade & Foster, 2004), then player strategies converge to the convex hull of Nash equilibria.

A drawback of calibrated forecasts is the computational requirement. In particular, existing methods of computing calibrated forecasts require a state space that evolves over a discretized grid of points extracted from a probability simplex of appropriate dimension. Furthermore, the asymptotic convergence results cited earlier require a progressive refinement of this grid. As a result, calibration for a multi-move opponent can be a considerable computational burden.

This talk presents some work in progress that explores the possibility of calibration without discretization. We introduce a “tracking forecast” that sacrifices universality in favor of a significantly reduced computational burden. Specifically, the tracking forecast has the same computational requirement as computing empirical frequencies. We show that the tracking forecast is calibrated for special classes of sequences, and we discuss possible consequences of tracking forecasts in repeated matrix games.
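
For contrast with the calibrated case, the simplest forecast-and-best-respond scheme mentioned at the start takes only a few lines of code. A minimal sketch of fictitious play in matching pennies (illustrative only; calibrated and tracking forecasts are more elaborate):

    # Fictitious play: best-respond to the empirical frequency of opponent actions.
    import numpy as np

    A = np.array([[1, -1], [-1, 1]])   # matching pennies, row player's payoffs
    B = -A                             # zero-sum

    counts1 = np.ones(2)               # row player's counts of column actions
    counts2 = np.ones(2)               # column player's counts of row actions
    for t in range(5000):
        a = int(np.argmax(A @ (counts1 / counts1.sum())))   # best reply to forecast
        b = int(np.argmax((counts2 / counts2.sum()) @ B))   # best reply to forecast
        counts1[b] += 1
        counts2[a] += 1
    print(counts1 / counts1.sum(), counts2 / counts2.sum()) # both near (1/2, 1/2)

Note how cheap the forecast is: a pair of counters, exactly the computational footprint that the tracking forecast aims to preserve while improving on the guarantees.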

Lloyd Shapley

University of California, Los Angeles

  Thursday, July 14, 4:25

Stable Sets Revisited: Some Reduction Theorems

Abstract

The classical definitions are extended to arbitrary subsets of the standard imputation simplex "A". Thus we define a set X to be "B-stable" iff X = B \ Dom X. The classical solutions are the A-stable sets. We shall show that if B is contained in A and strictly contains C, and if the set-difference B \ C satisfies certain conditions of "inertia" or "inferiority", there exists an explicit 1-1 correspondence between the B-stable sets and the C-stable sets. The reduction from B to C may create new “inert” or “inferior” imputations in C, enabling us to set up another reduction from C to some D, a subset of C, and so on… perhaps to infinity; a startling example of the latter will be presented.

Martin Shubik

Yale University

  Tuesday, July 12, 4:55

On Endogenizing Bureaucracy in a Strategic Market Game

(joint work with Eric Smith)

Abstract

The use of a means of payment leads naturally to the introduction of credit. Problems with repayment require the specification of the rules of the game concerning repayment, and these rules require enforcement if they are to succeed. Enforcement requires enforcers; hence the emergence of a legal system, together with a sufficient enforcement mechanism, is required.

Marilda Sotomayor

Universidade de São Paulo

  Thursday, July 14, 2:00, Session A

An Elementary Non-Constructive Proof of the Non-Emptiness of the Core of the Housing Market of Shapley and Scarf    [pdf]

Abstract

Shapley and Scarf, by using the theory of balanced games, prove, in a well-known paper of 1974 (Journal of Mathematical Economics, 1, 23-28), the non-emptiness of the core of the Housing Market. This paper provides a non-constructive, simple and short proof that gives some intuition about how blocking can be done by players who have not traded.

Giorgos Stamatopoulos

University of Cyprus

  Wednesday, July 13, 12:00, Session B

Cognitive hierarchy and two-stage location games

Abstract

In this paper the idea of cognitive hierarchy is first extended to multistage games and then applied to a Hotelling duopoly. The rationality level of a firm indicates the number of stages of the game where it calculates how to play. It is shown that any firm that has the cognitive ability to calculate its location chooses to locate in the center of a set of locations; so minimum differentiation (across different rationality levels) is achieved.

Tristan Tomala

Université Paris Dauphine

  Wednesday, July 13, 10:10

Measuring the value of monitoring in repeated decision problems and in repeated games

(joint work with Olivier Gossner)

Abstract

We present a new probabilistic tool to analyse decision- or game-theoretic problems where different agents have signals of different quality about past play. Consider a stochastic process $(x_n)$ with values in a finite set $X$ and an agent who observes at stage $n$ a signal $y_n=f(x_n)$. The distribution of the next outcome given the past of the process is $p_n=P(x_{n+1}\vert x_1,\ldots, x_n)$ and belongs to $\Delta(X)$, the set of probabilities on $X$. The agent, knowing the distribution $P$ of the process, holds a belief on $p_n$, $b_n=P(p_n\vert y_1,\ldots, y_n)$, which belongs to $\Delta^2(X)$. Our object of study is the empirical distribution of beliefs, defined as the empirical frequency of the stochastic sequence $(b_n)$, thus an element of $\Delta^3(X)$.

Given a belief $b$, we define the entropy variation of $b$ as: $\Delta H(b)=\int H(p)db(p) - H(f(\int pdb(p)))$ where $H$ is Shannon's entropy. Our main result is that a distribution $\delta$ is the empirical distribution of beliefs associated to a process $P$ if and only if $\int \Delta H(b)d\delta(b)\geq 0$.

We shall present two applications: first we derive a bound on the value of monitoring in repeated decision problems and second we present a characterization of minmax values in repeated games with imperfect monitoring. Both write as values of optimization problems on sets of probabilities under entropy constraints.

Elias Tsakas

University of Göteborg

  Monday, July 11, 11:30, Session C

Mixed Quantal Response Equilibria for Normal Form Games    [pdf]

Abstract

We introduce the mixed quantal response equilibrium as an alternative statistical approach to normal form games with random utility function and prove its existence. Then we extend the quantal response equilibrium to payoff functions with disturbances outside the family of admissible distributions. Finally, we define the mixed logit quantal response equilibrium, we draw the correspondence between it and the multinomial mixed logit model and prove that any random utility game has a quantal response equilibrium, which additionally is the limit of a parametric mixed logit quantal response equilibrium.

Amparo Urbano

University of Valencia

  Wednesday, July 13, 12:00, Session C

Isolation and redundancy on information dissemination in dynamic networks    [doc]

(joint work with Jose Vila)

Abstract

Information flows among agents within any kind of organization or network to reach the points where it can be efficiently analyzed and integrated into decision-making. The strategic decision of setting or deleting links is usually modeled from a local viewpoint, but, in many cases, some global network properties such as isolation and redundancy are important to agents' decisions. To include these global properties in agents' decision-making we model a directed "information sharing" network where isolation is understood as the existence of more than one connected component and redundancy is related to the existence of cycles. In our model, agents make decisions on the basis of both local considerations (deleting or proposing to create a link with another agent around them) and global ones (the existence of isolated information or redundancy).

To easily summarize both isolation and redundancy we propose to use homology theory. There are different reasons for this suggestion. First, a graph can have, at the very most, two non-trivial homology spaces: those of order zero and one. Homology spaces of degree zero capture the number of connected components of the graph. Hence, a non-trivial space of degree zero is a quite good indicator of an isolation problem, and its dimension (the “Betti” number of order zero) can be understood as a measure of the size of this problem. Moreover, a homology space of degree one indicates the existence of cycles in the graph or, in other words, of redundancy problems in the organization. Again, its “Betti” number summarizes how big this problem actually is. Second, homology spaces, as opposed to many other topological measures able to summarize isolation and/or redundancy, can be easily computed by finite algorithms that only consider local descriptions of the network (specifically, of the simplicial complex induced by the network).
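
Both Betti numbers are indeed computable by a short local algorithm. A minimal sketch for a graph read as a one-dimensional complex (the directed network is treated as undirected here, an assumption made for the illustration): b0 is the number of connected components and b1 = |E| - |V| + b0 counts independent cycles.

    # Betti numbers of a graph: b0 (isolation) and b1 (redundancy).
    def betti(vertices, edges):
        parent = {v: v for v in vertices}
        def find(v):                            # union-find with path halving
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, w in edges:
            parent[find(u)] = find(w)
        b0 = len({find(v) for v in vertices})
        b1 = len(edges) - len(vertices) + b0
        return b0, b1

    V = ["a", "b", "c", "d", "e"]
    E = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "e")]  # triangle plus an edge
    print(betti(V, E))   # (2, 1): two components, one cycle

A result of (2, 1) flags both problems at once: one isolated component and one redundant cycle.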

Federico Valenciano

Universida del Pais Vasco

  Tuesday, July 12, 12:00, Session B

Noncooperative foundations of bargaining power in committees    [pdf]

(joint work with Annick Laruelle)

Abstract

In this paper we explore the noncooperative foundations of the bargaining power that a voting rule confers on its users in a 'bargaining committee', that is, a committee that bargains in search of consensus over a set of feasible agreements under a voting rule. Assuming complete information, we model a variety of bargaining protocols whose stationary subgame perfect equilibria are investigated. It is also shown how previous results obtained by us from a cooperative approach, which provided axiomatic foundations for an interpretation of the Shapley-Shubik index and other power indices as measures of 'bargaining power', appear in this light as limit cases.

Johannes Rene Van den Brink

Free University Amsterdam

  Thursday, July 14, 2:30, Session A

Harsanyi power solutions for graph-restricted games    [pdf]

(joint work with Gerard van der Laan, Vitaly Pruzhansky)

Abstract

We consider cooperative TU-games with limited communication structure in which the edges or links of an undirected graph on the set of players represent binary communication links between the players. Following Myerson (1977) we assume that players can cooperate if and only if they are connected in the communication graph.

A solution assigns a payoff distribution to every such graph-game. We discuss Harsanyi power solutions which are obtained by applying specific Harsanyi solutions to the Myerson restricted game. The worth of a coalition in the Myerson restricted game is the sum of the worths of its components. A Harsanyi solution for TU-games distributes the Harsanyi dividends of the game over the players in the corresponding coalitions according to a sharing system. A sharing system assigns to every coalition S a sharing vector specifying for every player in S its share in the dividend of S. In a Harsanyi power solution sharing systems are associated with some power measure for communication graphs. A power measure for communication graphs assigns a nonnegative real number to every node in any communication graph. These numbers represent the strength or power of the nodes in the graph. Given a power measure we define a sharing system in the corresponding Harsanyi power solution such that the share vectors of coalitions are proportional to the power measure of the corresponding subgraphs.

Besides characterizing the Harsanyi power solutions on the class of cycle-free graph games, we give special attention to the one that is based on the degree measure, which assigns to every player in a graph the number of players with whom it is directly connected. On the class of cycle-free graph games, the corresponding Harsanyi power solution is equal to the position value. Applying the equal power measure that assigns equal power to all players, we obtain the Myerson value. We discuss some applications, among others assignment games, ATM-games and auction games.
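
The dividends themselves are obtained from the worths by Mobius inversion, $d(S) = \sum_{T \subseteq S} (-1)^{|S|-|T|} v(T)$, and a sharing system then splits each $d(S)$ among the members of $S$. A small sketch (with made-up worths, and equal sharing, which corresponds to the equal power measure):

    # Harsanyi dividends by Mobius inversion, then equal sharing.
    from itertools import combinations

    players = (1, 2, 3)
    v = {frozenset(S): 0 for k in range(4) for S in combinations(players, k)}
    v[frozenset({1, 2})] = 2; v[frozenset({1, 2, 3})] = 3   # illustrative worths

    def subsets(S):
        for k in range(len(S) + 1):
            for T in combinations(sorted(S), k):
                yield frozenset(T)

    dividends = {S: sum((-1) ** (len(S) - len(T)) * v[T] for T in subsets(S))
                 for S in v}
    payoff = {i: sum(d / len(S) for S, d in dividends.items() if i in S)
              for i in players}
    print(payoff)   # {1: 1.333..., 2: 1.333..., 3: 0.333...}

A Harsanyi power solution simply replaces the equal shares by shares proportional to a power measure, such as the degree measure, on the relevant subgraph.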

Bernhard Von Stengel

London School of Economics

  Friday, July 15, 10:10

Geometry of Nash equilibria for two-player games

Abstract

This talk gives a survey of the geometric aspects of Nash equilibria for two-player games. The starting point is the division of the mixed strategy simplex into best-reply regions. With suitable labels for these regions, all equilibria can be easily visualized as "completely labeled" pairs of points. This visualization was proposed by Shapley (1974) for the algorithm of Lemke and Howson (1964), which gives an elementary proof that every two-player game has a Nash equilibrium, and shows that nondegenerate games have an ODD number of equilibria. Related views use polyhedra and polytopes, which are easier to handle in computational terms and can be used, for example, to bound the number of Nash equilibria. We also give a new construction that allows one to visualize the INDEX of an equilibrium, typically a very technical concept, in a low-dimensional picture, for example the plane for a 3 x n game, for any n. The index is related to stability of equilibria, and can be characterized in strategic terms: an equilibrium has index +1 if and only if it can be made the unique equilibrium of the game by adding suitable strategies. With the help of these geometric insights, one can construct games with desired properties (for example, a certain number of equilibria) starting from simple qualitative pictures, from which the payoffs are easily derived.

Fan Wang

Stony Brook University

  Tuesday, July 12, 11:30, Session E

Adaptive Learning and Evolutionary Dynamics in Financial Market    [pdf]

Abstract

Speculation in an asset market is modelled as a stochastic betting game played by a finite number of players and repeated infinitely many times. With stochastic asset returns and unknown quality of the public signal, a generic adaptive learning rule is proposed and the corresponding evolutionary dynamics is analyzed. The impact of historical events on players' beliefs decays over time, which is proved to make the rule robust to stochastic regime shifts in the market. The resulting market dynamics exhibits characteristics, namely endogenous boom-bust cycles, positive correlation between returns and volume, and negative first-order autocorrelation in the return series, that are commonly observed in financial markets but inexplicable by conventional rational expectations theory.
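
The abstract does not spell out the learning rule; the minimal sketch below shows one generic rule with the stated property that the impact of history decays over time. Discounted counts of how often the public signal was right yield a belief about its quality that re-adapts after a regime shift. The function name, counters, and discount factor are all our own assumptions.

    import random

    def update(correct, total, signal_was_right, delta=0.95):
        # Discount the past by delta, then add the newest observation,
        # so old evidence about the signal's quality fades geometrically.
        correct = delta * correct + (1.0 if signal_was_right else 0.0)
        total = delta * total + 1.0
        return correct, total

    random.seed(1)
    correct = total = 0.0
    for t in range(400):
        quality = 0.9 if t < 200 else 0.3      # regime shift at t = 200
        correct, total = update(correct, total, random.random() < quality)
    print(correct / total)   # roughly 0.3: the belief has tracked the shift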

Leonard Wantchekon

New York University

  Tuesday, July 12, 2:00, Session B

Does Ethnic Solidarity Facilitate Electoral Support for Nation-Building Policies?

(joint work with Yves Atchade)

Abstract

This paper investigates the effect of ethnic ties between voters and candidates on electoral support for "nation-building" policies.

The paper provides estimates of these effects using data collected in selected (non-competitive) districts that were randomly assigned "purified" national public goods platforms by candidates competing in the 2001 presidential elections in Benin.

We find that ethnic ties strengthen voters' support for national public goods platforms. We also find that the positive effect of ethnic ties would have persisted had the experiment taken place in more competitive and ethnically diverse districts. The results suggest that ethnic ties can help secure electoral support for nation-building policies so long as such policies are adopted by candidates.

David Wettstein

Ben-Gurion University

  Wednesday, July 13, 2:30, Session D

Efficient Bidding with Externalities    [pdf]

(joint work with Inés Macho-Stadler, David Pérez-Castrillo)

Abstract

We implement a family of efficient proposals to share benefits generated in environments with externalities. These proposals extend the Shapley value to games with externalities and are parametrized through the method by which the externalities are averaged. We construct two slightly different mechanisms: one for environments with negative externalities and the other for positive externalities. We show that the subgame perfect equilibrium outcomes of these mechanisms coincide with the sharing proposals.
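
The bidding mechanisms themselves are sequential games and are not reproduced here, but the parametrization can be illustrated in code. The sketch below, ours and deliberately simplified, collapses a partition-function game into a characteristic function by averaging a coalition's worth over the ways the remaining players can be partitioned, and then takes the Shapley value; the uniform average is just one arbitrary member of the family of averaging methods, and all names and numbers are made up.

    from itertools import permutations
    from math import factorial

    def partitions(elems):
        # All partitions of a list of players.
        if not elems:
            yield []
            return
        first, rest = elems[0], elems[1:]
        for p in partitions(rest):
            for k in range(len(p)):
                yield p[:k] + [p[k] + [first]] + p[k + 1:]
            yield p + [[first]]

    def average_game(w, players):
        # Collapse the partition function w(S, partition of N \ S) into a
        # characteristic function by a plain average over partitions.
        def v(S):
            rest = [i for i in players if i not in S]
            parts = list(partitions(rest))
            return sum(w(S, p) for p in parts) / len(parts)
        return v

    def shapley(v, players):
        # Shapley value as the average marginal contribution over orders.
        val = {i: 0.0 for i in players}
        for order in permutations(players):
            coalition = []
            for i in order:
                val[i] += v(coalition + [i]) - v(coalition)
                coalition.append(i)
        return {i: x / factorial(len(players)) for i, x in val.items()}

    # Toy example: coalition {1,2} is worth extra when player 3 stands alone.
    w = lambda S, parts: len(S) + (1.0 if set(S) == {1, 2} and [3] in parts else 0.0)
    print(shapley(average_game(w, [1, 2, 3]), [1, 2, 3]))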

Myrna Wooders

Vanderbilt University and University of Warwick

  Thursday, July 14, 3:30

Market games, inequality and the equal treatment property of the core of a game

Abstract

This paper addresses the question of when inequality can persist in large economies. The economies are modelled as cooperative games satisfying boundedness of per capita payoffs (PCB). PCB simply ensures that the supremum of average payoff over all games considered is finite and thus rules out asymptotically unbounded per capita payoffs. Other conditions are also investigated.
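
In symbols (notation ours), PCB for the class of games under consideration can be written as:

    \sup_{(N, v)} \frac{v(N)}{|N|} < \infty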

Chun-Hsien Yeh

Academia Sinica

  Tuesday, July 12, 12:00, Session D

Reduction-consistency and the Condorcet principle in collective choice problems    [pdf]

(joint work with Yan-An Hwang)

Abstract

We study the implications of reduction-consistency and the Condorcet principle in the context of choosing alternatives from a set of feasible alternatives over which each agent has a strict preference. We show that reduction-consistency is incompatible with a weaker version of the Condorcet principle. On the domain for which majority rule is always non-empty and agents' preferences are strict, we provide two characterizations of majority rule: (1) it is the only efficient rule satisfying reduction-consistency and (2) it is the only single-valued and efficient rule satisfying the converse of reduction-consistency.
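
As a quick illustration (ours, using one common convention for "majority rule"): with strict preference rankings, say x majority-beats y when strictly more agents rank x above y, and let the rule select the alternatives that no other alternative majority-beats.

    def majority_rule(profile, alternatives):
        # profile: one strict ranking per agent, best alternative first.
        def beats(x, y):
            above = sum(1 for r in profile if r.index(x) < r.index(y))
            return above > len(profile) - above
        return [x for x in alternatives
                if not any(beats(y, x) for y in alternatives if y != x)]

    # Three agents, three alternatives: 'b' beats both others pairwise.
    profile = [['b', 'a', 'c'], ['b', 'c', 'a'], ['a', 'b', 'c']]
    print(majority_rule(profile, ['a', 'b', 'c']))   # ['b']

On general profiles this set can be empty (the Condorcet paradox), which is why the characterizations above are stated on the domain where majority rule is always non-empty.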

Peyton Young

Johns Hopkins University & University of Oxford

  Wednesday, July 13, 3:30

Learning and Equilibrium in Games

Abstract

It is surprisingly difficult to devise decentralized learning rules that converge to Nash equilibrium in general games. In this lecture we shall survey what is currently known about this issue and suggest some open problems. In particular we shall contrast various forms of bounded rationality with Bayesian learning, and examine their implications for long run equilibrium (and disequilibrium) behavior.
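
As one concrete example of the kind of decentralized rule the lecture surveys (our illustration, not the lecture's material): in fictitious play each player best-responds to the empirical frequency of the opponent's past actions. It converges to equilibrium in some classes of games, such as two-player zero-sum games, but can cycle forever in others (Shapley 1964), which is precisely the difficulty described above.

    import numpy as np

    def fictitious_play(A, B, steps=5000):
        # Each player tracks the opponent's empirical action frequencies
        # and plays a pure best reply to them at every step.
        m, n = A.shape
        c1, c2 = np.ones(m), np.ones(n)               # fictitious prior counts
        for _ in range(steps):
            i = int(np.argmax(A @ (c2 / c2.sum())))   # player 1's best reply
            j = int(np.argmax((c1 / c1.sum()) @ B))   # player 2's best reply
            c1[i] += 1
            c2[j] += 1
        return c1 / c1.sum(), c2 / c2.sum()

    # Matching pennies (zero-sum): empirical frequencies approach the
    # mixed equilibrium (1/2, 1/2) for both players.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    print(fictitious_play(A, -A))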

Shmuel Zamir

Hebrew University of Jerusalem

  Monday, July 11, 4:25

A Model of Bargaining with Incomplete Information

(joint work with Edi Karni)

Abstract

We study the equilibrium outcomes of two-person bargaining problems in which each party has an "outside option" known only to himself. We examine two game forms, a sequential-move game and a simultaneous-move game. In this context we discuss the failure to reach agreement and the resulting loss of efficiency. Invoking the analogy between the sequential-move game and the familiar ultimatum game, we also provide a new interpretation of the experimental evidence regarding the players' behavior in the ultimatum game.

Andriy Zapechelnyuk

Stony Brook University

  Thursday, July 14, 12:00, Session A

Internet Auctions: Sellers, Bidders, and Auction Houses    [pdf]

(joint work with Alexander Matros)

Abstract

We consider a second-price auction with the possibility of resale through re-auction. There are three types of agents in our model: a seller, a set of potential buyers, and an auction house. The auction house runs auctions and collects listing and closing fees from sellers.
We find the optimal strategy for each agent. In particular, we show that for the auction house the optimal listing fee is zero. Our findings are consistent with the policies of eBay, Amazon, and Yahoo auctions.
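
A minimal numerical sketch of the fee structure (parameter names and numbers are ours, and the resale stage of the model is omitted): in a second-price auction the winner pays the second-highest bid, while the seller pays a listing fee up front and a proportional closing fee only if the item sells.

    def second_price_sale(bids, reserve=0.0, listing_fee=0.1, closing_rate=0.05):
        # Returns (sale price or None, seller's net revenue).  The seller
        # pays the listing fee whether or not the item sells, and the
        # closing fee only on a successful sale.
        bids = sorted(bids, reverse=True)
        if not bids or bids[0] < reserve:
            return None, -listing_fee            # unsold: listing fee is lost
        price = max(reserve, bids[1]) if len(bids) > 1 else reserve
        return price, price * (1.0 - closing_rate) - listing_fee

    print(second_price_sale([5.0, 8.0, 6.5]))         # sells at 6.5
    print(second_price_sale([5.0], reserve=7.0))      # unsold: seller pays 0.1

With a positive listing fee a seller can end up with negative revenue on an unsold item, which gives some intuition for, though is not the paper's proof of, the zero-optimal-listing-fee finding quoted above.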
