
Speakers

Ahmet Alkan

Sabanci University, Istanbul

On School Formation

(joint work with Mehmet Yorukoglu)

Abstract

In an environment where students differ in ability and endowments, equilibrium school matches are studied. A student's after-school human capital depends positively on the ability of his peer in school (a positive peer effect). Efficiency of the competitive equilibrium is discussed. If the high-ability, low-endowment student is not too severely liquidity constrained, it is shown that the competitive equilibrium is Pareto efficient and rich students pay some of the school cost of the poor students, i.e., peer financing of education occurs.

Rabah Amir

Université catholique de Louvain

  Friday, July 14, 11:50

Discounted Supermodular Stochastic Games: Theory and Applications    [pdf]

Abstract

This paper considers a general class of discounted Markov stochastic games characterized by multidimensional state and action spaces with an order structure, and one-period rewards and state transitions satisfying some complementarity and monotonicity conditions. Existence of pure-strategy Markov (Markov-stationary) equilibria for the finite (infinite) horizon game, with nondecreasing (and possibly discontinuous) strategies and value functions, is proved. The analysis is based on lattice programming, and not on concavity assumptions. Selected economic applications that fit the underlying framework are described: dynamic search with learning, long-run competition with learning-by-doing, and resource extraction.

Kurt Annen

University of Guelph

  Tuesday, July 11, 12:10, Session D

Efficiency out of Disorder -- Contested Ownership in Incomplete Contracts    [pdf]

Abstract

This paper studies the role of contested ownership in a situation where two players have to make a person- and asset-specific investment, and when no complete contracts can be written. It compares contested ownership to the various ex ante ownership structures typically discussed in the literature (following the influential work by Grossman, Hart, and Moore). The paper shows that contested ownership mitigates the investment inefficiency caused by the incompleteness of contracts, generating an exchange surplus that comes closer to the first-best surplus than under any other ex ante distribution of ownership. For example, if the contest is perfectly discriminatory, each player makes a transaction-specific investment as if he or she owns the asset.

Krzysztof Apt

CWI

  Tuesday, July 11, 10:55, Session E

Stable partitions in coalitional games    [pdf]

(joint work with Tadeusz Radzik)

Abstract

We propose a notion of stable partition in a coalitional game that is parametrized by the concept of a defection function. This function assigns to each partition of the grand coalition a set of different coalition arrangements for a group of defecting players. The alternatives are compared using their social welfare.
We characterize the stability of a partition for a number of natural defection functions and investigate whether and how stable partitions so defined can be reached from any initial partition by means of simple transformations.
The approach is illustrated by analyzing an example in which a set of stores seeks an optimal transportation arrangement.

Miguel Aramendia

Universidad del Pais Vasco, Spain

  Wednesday, July 12, 11:20, Session C

Forgiving-Proof Equilibrium    [pdf]

(joint work with Luis Ruiz)

Abstract

In repeated games, when a player deviates from the agreed strategy this may result in losses for other players, but it may also result in gains. If the profits of the betrayed players drop, they will react to prevent such actions from recurring. But if their profits increase, where is the sense in punishing? We explore a new equilibrium concept that rules out strategies in which punishments are carried out for deviations that have not harmed any of the other players. The goal is to prevent potential renegotiations that could end up in an agreement to forgive. We show that this definition significantly reduces the set of equilibrium outcomes in many games.

Aloísio Araújo

IMPA and FGV/RJ

  Wednesday, July 12, 2:45

Asymmetric Information without a Single Crossing

Abstract

The single crossing condition, besides being a natural assumption for many applications, provides a full characterization of the incentive compatible contract space: the convex set of monotone contracts. It also transforms complex problems into simple optimal control programs. Our aim is to give a tractable first step toward characterizing incentive compatibility when this condition fails. We then derive new necessary global incentive compatibility constraints, together with the corresponding optimality conditions of a more complex non-convex problem. The novelty is the possibility of non-monotone incentive compatible contracts. We develop several applications, such as managerial compensation schemes, insurance contracts, labor contracts, and dividend signaling. These applications show how restrictive the single crossing condition can be.

Javier Arin

The Basque Country University

  Friday, July 14, 4:00, Session A

Coalitional games with veto players: sequential proposals, nucleolus and Nash outcomes    [pdf]

(joint work with V. Feltkamp and M. Montero)

Abstract

The paper is based on a mechanism presented by Dagan, Serrano and Volij (1997) for bankruptcy problems. According to this mechanism, a player (the proposer) makes a proposal to which the rest of the players respond by saying yes or no. In this paper we present a model where the proposer can make sequential proposals; that is, the game is played in n stages and each stage starts with a proposal. We investigate the subgame perfect equilibria of this game and relate them to a cooperative solution concept: the nucleolus of TU veto balanced games.

Georgy Artemov

Brown University

  Thursday, July 13, 5:10, Session C

Imminent Nash Implementation    [pdf]

Abstract

This paper studies the complete information simultaneous-move implementation problem on a domain extended by time: the extended domain is the Cartesian product of the set of physical outcomes and the positive real numbers, interpreted as a delay in the delivery of that outcome. The designer implementing some social choice correspondence (SCC) is allowed to approximate the desired SCC by delaying the outcome. The delay is infinitesimal; hence the name: imminent implementation. This extension differs from the lottery-based extension of virtual implementation in that it does not allow mixing of the outcomes, but is similar in that it makes the set of outcomes dense and makes approximation possible. The result, though, is almost the opposite of the universal implementability obtained under virtual implementation: impossibility results for Nash implementation extend to imminent implementation with little modification. This suggests the crucial role of mixing in the success of virtual implementation. The paper also provides a characterization of imminently implementable SCCs and shows that there are some SCCs that are not Nash implementable but are imminently implementable.

Robert John Aumann

Hebrew University of Jerusalem

  Tuesday, July 11, 5:00

An Index of Riskiness

(joint work with Roberto Serrano)

Abstract

We develop an index of riskiness of an investment or gamble. The well-known and widely applied indices of Arrow and Pratt measure risk AVERSION -- absolute and relative -- but not riskiness as such. We think of riskiness as arising from a comparison of potential gains to possible losses; roughly, the larger -- and more likely -- the potential gains relative to the possible losses, the less risky the gamble. On the whole, one may expect a more risk averse individual to reject riskier gambles, and to accept less risky ones. The index that we propose is "objective;" it does not depend on any specific utility function, but only on the distribution of gains and losses.
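
For the record, the index described here admits a one-line definition (stated here for the reader's convenience, consistent with the subsequently published Aumann-Serrano formulation): for a gamble g with positive expectation and a positive probability of losses, the riskiness R(g) is the unique positive number satisfying

\[ \mathbb{E}\!\left[ e^{-g/R(g)} \right] = 1 . \]

An agent with constant absolute risk aversion \alpha accepts g precisely when \alpha < 1/R(g), which is one way to make precise the statement that more risk-averse individuals reject riskier gambles.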

Such an index is of considerable practical importance. Much is said and written about risky investments; for example, a front-page article in the New York Times in March of 2004 reported that managers of (and consultants for) state operated pension funds often invest in (or recommend) investments that are "too risky." What exactly does this mean?

To be sure, the index we propose applies, in the first instance, only to gambles in which the probabilities are numerically known. In applications, say to investments, a major practical problem is that these probabilities are difficult to assess. However, we first wish to develop an appropriate definition of "riskiness" in principle, when the probabilities ARE known. Once such an index is developed, one can think about how to apply it to practical problems.

Ana Babus

Erasmus University Rotterdam

  Monday, July 10, 12:10, Session A

A Model of Network Formation in the Banking System    [pdf]

Abstract

Modern banking systems are highly interconnected. Despite their various benefits, the linkages that exist between banks carry the risk of contagion. In this paper we investigate how banks decide on direct balance sheet linkages and the implications for contagion risk. In particular, we model a network formation process in the banking system. The trade-off between the gains and the risks of being connected shapes banks' incentives when forming links. We show that banks manage to form networks that are resilient to contagious effects. Thus, in an equilibrium network, the probability of contagion is virtually 0.

Aniruddha Bagchi

Vanderbilt University

  Friday, July 14, 4:50, Session C

A Laboratory Test of an Auction with Negative Externalities    [pdf]

(joint work with Mike Shor)

Abstract

We examine experimentally an auction model with externalities in which competing firms bid for licenses to a cost-reducing technology. Since winning bidders impose a negative externality on the losers, bids must account for both the value of winning the auction and the negative value of losing brought about by rivals reducing their costs. Experimental treatments differ in the severity of the negative externality (based on the substitutability of competitors' products) and the number of licenses being auctioned. We find that subjects underbid relative to theoretical benchmarks for auctions of one license, but overbid when two licenses are auctioned. Nevertheless, mean revenues in the experiment are consistent with the predicted revenues. However, there are some differences between the distributions of experimental and predicted revenues. We propose a possible explanation for these differences rooted in a simple bidding heuristic.

Coralio Ballester

Universidad de Alicante

  Monday, July 10, 4:05, Session E

Interaction Patterns with Hidden Complementarities    [pdf]

(joint work with Antoni Calvó-Armengol)

Abstract

We consider a finite population simultaneous move game with heterogeneous interaction modes across different pairs of players. We allow for general interaction patterns, but restrict our analysis to linear-quadratic payoffs so that we can formulate the Nash equilibrium problem as the solution to a linear complementarity problem. More generally, our results potentially hold in any setup where equilibrium conditions boil down to a set of piecewise linear conditions.
We introduce the new class of games with hidden complementarities. Games with hidden complementarities are such that a suitable linear transformation of the interaction matrix produces an induced game with complementarities. We provide general conditions on the interaction matrix such that the equilibrium is unique and/or interior, in which case we characterize equilibrium actions by means of a closed-form expression that involves a generalized version of the Katz-Bonacich network measure of node centrality.
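
The closed-form expression mentioned above parallels the standard Katz-Bonacich centrality b(G, a) = (I - aG)^{-1} 1, which is well defined when the decay parameter a lies below the reciprocal of the spectral radius of G. The sketch below only illustrates that measure; it is not the authors' code, and the interaction matrix is hypothetical.

import numpy as np

def katz_bonacich(G, a):
    # Katz-Bonacich centrality b = (I - a*G)^{-1} * 1 for decay parameter a.
    G = np.asarray(G, dtype=float)
    n = G.shape[0]
    if a * max(abs(np.linalg.eigvals(G))) >= 1:
        raise ValueError("a must be below 1/(spectral radius) for convergence")
    return np.linalg.solve(np.eye(n) - a * G, np.ones(n))

# Three players on a line: player 2 interacts with players 1 and 3.
G = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
print(katz_bonacich(G, 0.3))  # the middle player is the most central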

Marco Battaglini

Princeton University

  Thursday, July 13, 4:00, Session D

The Swing Voter's Curse in The Laboratory    [pdf]

(joint work with R. Morton and T. Palfrey)

Abstract

This paper reports the first laboratory study of the swing voter's curse and provides insights on the larger theoretical and empirical literature on "pivotal voter" models. Our experiment controls for different information levels of voters, as well as the size of the electorate, the distribution of preferences, and other theoretically relevant parameters. The design varies the share of partisan voters and the prior belief about a payoff relevant state of the world. Our results support the equilibrium predictions of the Feddersen-Pesendorfer model, and clearly reject the notion that voters in the laboratory use naive decision-theoretic strategies. The voters act as if they are aware of the swing voter's curse and adjust their behavior to compensate. While the compensation is not complete and there is some heterogeneity in individual behavior, we find that aggregate outcomes, such as efficiency, turnout, and margin of victory, closely track the theoretical predictions.

Jeremy Bertomeu

Carnegie Mellon University

  Thursday, July 13, 11:45, Session C

Coordination and the Non-Cooperative Bargaining Problem    [pdf]

(joint work with Edwige Cheynel)

Abstract

The non-cooperative decentralization of the axiomatic bargaining solution requires agreement on a particular solution, which may be difficult to reach in the absence of a selection criterion or bargaining institutions. Building on this premise, we propose an analysis of bargaining with imperfect coordination. First, the non-cooperative problem is characterized by the agreement set, defined as the set of proposals that are strategically (rather than axiomatically) acceptable. Second, for a given agreement set, we describe the distribution of proposals and thus the coordination problem that may appear when moving from an axiomatic to a non-cooperative framework. We prove two main results. First, we show that the game-theoretic concept of equilibrium places very few restrictions on possible agreement sets; thus, rationality alone does not resolve the problem of imperfect coordination. Second, we show that the bargaining interaction is fully determined by the initial agreement set. Imperfect coordination occurs because of weak ex-ante proposers. A class of simple equilibria is interpreted as a standard financial market, such that agents choose market orders with strictly positive probability and a continuum of limit orders. Finally, we consider several comparative statics, involving the final allocation rule, the agreement set, repeated bargaining, insurance and deviations from risk-neutrality. In particular, we show that costless insurance and repeated bargaining, although ex-post desirable, are always ex-ante undesirable.

Sushil Bikhchandani

UCLA

  Friday, July 14, 11:20, Session D

Ex Post Implementation in Environments with Private Goods    [pdf]

Abstract

We prove by construction that ex post incentive compatible mechanisms exist in auctions when buyers have multi-dimensional signals and interdependent values. The mechanism shares features with the generalized Vickrey auction of single dimensional signal models; thus, ex post equilibrium in these models is robust to departures from a single dimensional information assumption. The construction implies that for environments with private goods, informational externalities (i.e., interdependent values) are compatible with ex post equilibrium in the presence of multi-dimensional signals.

Ken Binmore

University College London

  Tuesday, July 11, 9:00

Rational Decision Theory in a Large World

Abstract

Leonard Savage said that it would be both ridiculous and preposterous to use Bayesian decision theory in a large world. So what do we do instead? And are there implications for strategic play in games?

Péter Biró

Budapest University of Technology and Economics

  Tuesday, July 11, 11:20, Session A

On the dynamics of stable matching markets    [pdf]

(joint work with Katarína Cechlárová, Tamás Fleiner)

Abstract

We study the dynamics of stable marriage and stable roommates markets. Our main tool is the incremental algorithm of Roth and Vande Vate and its generalization by Tan and Hsueh. Beyond proposing alternative proofs for known results, we also generalize some of them to the nonbipartite case. In particular, we show that the last agent to arrive gets his best stable partner in both of these incremental algorithms. Consequently, we confirm that it is better to arrive later than earlier to a stable roommates market. We also prove that when the equilibrium is restored after the arrival of a new agent, some agents will be better off under any stable solution for the new market than at any stable solution for the original market. We also propose a procedure to find these agents.

Steven Brams

New York University

  Monday, July 10, 5:15, Session A

Voting Systems That Combine Approval and Preference    [pdf]

(joint work with M. Remzi Sanver)

Abstract

Information on the rankings and information on the approval of candidates in an election, though related, are fundamentally different--one cannot be derived from the other. Both kinds of information are important in the determination of social choices. We propose a way of combining them in two hybrid voting systems, preference approval voting (PAV) and fallback voting (FV), that satisfy several desirable properties, including monotonicity. Both systems may give different winners from standard ranking and nonranking voting systems. PAV, especially, encourages candidates to take coherent majoritarian positions, but it is more information-demanding than FV. PAV and FV are manipulable through voters’ contracting or expanding their approval sets, but a 3-candidate dynamic poll model suggests that Condorcet winners, and candidates ranked first or second by the most voters if there is no Condorcet winner, will be favored, though not necessarily in equilibrium.

Felix Brandt

University of Munich

  Tuesday, July 11, 3:40, Session B

On Strictly Competitive Multi-Player Games    [pdf]

(joint work with Felix Fischer, Yoav Shoham)

Abstract

We embark on an initial study of a new class of strategic (normal-form) games, so-called ranking games, in which the payoff to each agent solely depends on his position in a ranking of the agents induced by their actions. This definition is motivated by the observation that in many strategic situations, such as parlour games, competitive economic scenarios, and some social choice settings, players are merely interested in performing optimally relative to their opponents rather than in absolute measures. A simple but important subclass of ranking games is single-winner games, in which in any outcome one agent wins and all other agents lose. We investigate the computational complexity of a variety of common game-theoretic solution concepts in ranking games and deliver hardness results for iterated weak dominance and mixed Nash equilibria when there are more than two players, and for pure Nash equilibria when the number of players is unbounded. This dashes the hope that multi-player ranking games can be solved efficiently, despite the structural restrictions of these games.
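
As an illustration of the class just defined (the particular game and tie-breaking rule below are invented for this sketch, not taken from the paper): in a single-winner ranking game every outcome has exactly one winner who receives payoff 1 while the others receive 0, so for small instances pure Nash equilibria can be checked by brute force.

from itertools import product

def winner(profile):
    # The lone deviant (if any) wins; if all actions coincide, player 0 wins
    # by an arbitrary tie-break.
    for i, a in enumerate(profile):
        if sum(1 for b in profile if b == a) == 1:
            return i
    return 0

def payoffs(profile):
    w = winner(profile)
    return tuple(1 if i == w else 0 for i in range(len(profile)))

def pure_nash(n_players=3, n_actions=2):
    # Enumerate action profiles and keep those with no profitable deviation.
    eqs = []
    for prof in product(range(n_actions), repeat=n_players):
        if all(payoffs(prof)[i] >= payoffs(prof[:i] + (d,) + prof[i + 1:])[i]
               for i in range(n_players) for d in range(n_actions)):
            eqs.append(prof)
    return eqs

print(pure_nash())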

Felix Brandt

University of Munich

  Friday, July 14, 4:25, Session E

Symmetries and Efficient Solvability in Multi-Player Games    [pdf]

(joint work with Felix Fischer, Markus Holzer)

Abstract

There are various ways in which strategic games may exhibit forms of symmetry. A common aspect of symmetry, which enables the compact representation of games even when the number of players is unbounded, is that players are incapable of distinguishing between the other players. We define four classes of symmetric games by additionally considering the following two characteristics: the availability of identical payoff functions and the ability to distinguish oneself from the other players. Based on these varying notions of symmetry, we investigate the computational complexity of finding pure Nash equilibria. It turns out that in all four classes of games equilibria can be found efficiently when the number of actions available to each player is held constant. For most succinct representations of multi-player games, the same computational problem has been shown to be intractable. Furthermore, we show that the availability of identical payoff functions greatly simplifies the search for equilibria.

Régis Breton

University of Orléans

  Wednesday, July 12, 4:00, Session B

Robustness of equilibrium price dispersion in finite market games    [pdf]

(joint work with Bertrand Gobillard)

Abstract

We show that equilibrium price dispersion in the strategic market game with multiple trading posts per commodity of Koutsougeras (1999, 2003) is not robust to the introduction of arbitrarily small transaction costs. More precisely, we perturb the market game by introducing transaction costs, and we obtain the following. (i) When transaction costs are positive, any equilibrium must satisfy the law of one price. (ii) No equilibrium with price dispersion of the game with costless transactions can be approached by equilibria with positive transaction costs as costs get arbitrarily small. (iii) Further, when this type of perturbation is considered, the set of Nash equilibria is not affected by the number of trading posts. More generally, the paper proposes an approach to restricting the set of equilibria in a market game.
JEL classification: C72, D43, D50.
Keywords: market games, law of one price, equilibrium selection.

Mauricio Bugarin

Universidade de Brasília

  Wednesday, July 12, 11:20, Session B

Political Budget Cycles in a Fiscal Federation: The Effect of Partisan Voluntary Transfers    [pdf]

(joint work with Ivan Ferreira)

Abstract

This article first presents an econometric study suggesting that intergovernmental transfers to Brazilian municipalities are strongly partisan motivated. In light of that stylized fact, it develops an extension of Rogoff's (1990) model to analyze the effect of partisan motivated transfers on sub-national electoral and fiscal equilibria. The main finding is that important partisan transfers may undo the positive selection aspect of political budget cycles. Indeed, partisan transfers may, on one hand, eliminate the political budget cycle, solving a moral hazard problem, but, on the other hand, they may retain an incompetent incumbent in office, bringing about an adverse selection problem.

David Cantala

El Colegio de Mexico

  Tuesday, July 11, 12:10, Session E

Welfare and stability in senior matching markets    [pdf]

(joint work with Francisco Sanchez)

Abstract

We consider matching markets at the senior level, where workers might be assigned to firms at an unstable matching (the status quo) which might not be Pareto efficient. It might also be the case that none of the matchings Pareto superior to the status quo is core-stable. We propose two weakenings of core-stability, status-quo stability and weakened stability, and the respective mechanisms that lead any status quo to matchings meeting the stability requirements mentioned above. The first mechanism is inspired by the top trading cycle procedure; the other belongs to the family of branch-and-bound algorithms. The latter procedure finds a core-stable matching in many-to-one markets whenever it exists, dispensing with the assumption of substitutability.

Eliane Catilina

American University

  Thursday, July 13, 12:10, Session B

What is the Game?

(joint work with Amos Golan)

Abstract

The objective of this paper is to relax some of the more traditional assumptions on rationality and ex-ante beliefs in games with incomplete information. Building on the traditional game structure, we start by relaxing the assumption that in a game of incomplete information the ex-ante probability distribution of players’ types (or preferences) is common knowledge among all the players. This is similar to assuming that players have no prior information/beliefs about the other players’ types. In this context, players have to “choose” their beliefs to maximize their expected utility. The players’ strategies are consistent with their beliefs but, unlike the classic assumptions in Bayesian games, beliefs and actions are chosen simultaneously. We then proceed to relax some of the assumptions behind players’ “rationality”. To do so, rather than solve the game from the players’ point of view, we analyze the game from an observer’s point of view. The observer is not involved in the process of the game; however, she/he observes the action taken by each player. With that observation and with the knowledge of the different possible games (or types), we reconstruct the “optimal” (pure or mixed) strategy taken by each player as seen through the observer’s eyes. This optimal strategy is a composite of the probability each player assigns simultaneously to each possible game (possible types) and action. Comparison of the observer’s game with the more traditional games allows us to evaluate the impact of the relaxed assumptions on the suggested equilibria.

Yutian Chen

Stony Brook University

  Monday, July 10, 3:15, Session B

Entry Deterrence through Strategic Sourcing    [pdf]

Abstract

We show that a downstream firm may source from an upstream firm (the potential entrant) with the pure purpose of entry deterrence. The reason is that, on one hand, a supplier is forced to be a Stackelberg follower upon its entry into the downstream market; on the other hand, the total surplus from keeping the downstream market concentrated, together with the saving of the entry cost, is shared through their transaction in the upstream market, making each firm better off. Under many circumstances, strategic entry-deterring sourcing improves social welfare. For some range of parameters, it even benefits consumers.

Hsien-Hung Chiu

Stony Brook University

  Tuesday, July 11, 10:55, Session C

An Optimal Budget-Constrained Mechanism with Multiple Liquidity-Constrained Agents    [pdf]

Abstract

We study a mechanism design problem when a mechanism designer (the seller) is facing agents (buyers) who are budget constrained. The budget constraint is a hard constraint and represents the maximum amount of payment that the buyer can afford to pay, i.e. there is no financial market for financing. Both valuation and budgets are private information and in order to make the analysis tractable, we restrict our attention to a 2x2 discrete type space. We characterize an optimal budget-constrained mechanism in this environment, i.e. a mechanism that generates the highest expected revenue.

We first study a single-agent environment as in Che and Gale (1999, 2000) and develop a more general approach to analyze this problem. Next, we extend this approach to a multiple-agent environment. We show that a mechanism with no exclusion of low-valuation types can be optimal even when the budget level is extremely low, and the condition for no exclusion to be optimal is independent of the budget level. If we focus on the case with full participation, then when the budget level is sufficiently low the seller is more willing to sell the object to the type with low valuation but high budget. We also show that it is never optimal to exclude all low-budget types, since excluding low-budget types does not help the seller extract more surplus from the high-valuation types.

Vincent Conitzer

Carnegie Mellon University

  Monday, July 10, 4:40, Session A

Nonexistence of Voting Rules That Are Usually Hard to Manipulate    [pdf]

(joint work with Tuomas Sandholm)

Abstract

Aggregating the preferences of self-interested agents is a key problem for multiagent systems, and one general method for doing so is to vote over the alternatives (candidates). Unfortunately, the Gibbard-Satterthwaite theorem shows that when there are three or more candidates, all reasonable voting rules are manipulable (in the sense that there exist situations in which a voter would benefit from reporting its preferences insincerely). To circumvent this impossibility result, recent research has investigated whether it is possible to make finding a beneficial manipulation computationally hard. This approach has had some limited success, exhibiting rules under which the problem of finding a beneficial manipulation is NP-hard, #P-hard, or even PSPACE-hard. Thus, under these rules, it is unlikely that a computationally efficient algorithm can be constructed that always finds a beneficial manipulation (when it exists). However, this still does not preclude the existence of an efficient algorithm that often finds a successful manipulation (when it exists). There have been attempts to design a rule under which finding a beneficial manipulation is usually hard, but they have failed. To explain this failure, in this paper we show that it is in fact impossible to design such a rule, if the rule is also required to satisfy another property: a large fraction of the manipulable instances are both weakly monotone and allow the manipulators to make either of exactly two candidates win. We argue why one should expect voting rules to have this property, and show experimentally that common voting rules clearly satisfy it. We also discuss approaches for potentially circumventing this impossibility result.

John Conley

Vanderbilt University

  Monday, July 10, 11:20, Session A

Leadership and Coalition Formation

Abstract

Our objective is to explore the role of leadership in situations in which agents choose projects and then volunteer contributions to help in their completion. Among other things, we have in mind the open source approach to software creation, such as the Linux and Apache projects. We approach this by considering a modification of a Hotelling model in which projects have characteristics drawn from the unit interval. Agents' preferences over project characteristics are uniformly distributed, but they also prefer to join well-supported projects. Project leaders also care about a project's characteristics and the level of support it attracts. We assume that an incumbent leader chooses a characteristic first and that an entrant follows and chooses a project characteristic taking the incumbent's position as given. We explore how the nature of leaders' and agents' preferences affects the equilibrium outcomes.

David Cooper

Case Western Reserve

  Monday, July 10, 5:15, Session D

Non-Linear and Asymmetric Contracts: An Experimental Study of Overcoming Coordination Failure

(joint work with Jordi Brandts)

Abstract

In previous work, we have studied how subject-managers use symmetric linear incentive contracts in trying to overcome coordination failure. A notable feature of this work is that performance is quite poor. A possible cause of this poor performance is the limited class of contracts available to subjects -- from a theoretical point of view the optimal contract is both non-linear and asymmetric. We therefore study experiments in which managers have an expanded class of contracts available. Performance is significantly better when non-linear and asymmetric contracts are allowed, although substantial instances of coordination failure remain. We also characterize the best-performing contracts from the experimental data.

Christopher Cotton

Cornell University

  Thursday, July 13, 12:10, Session A

Informational Lobbying and Access When Talk Isn't Cheap    [pdf]

Abstract

I develop a model in which interest groups (IGs) have private, verifiable information in support of their preferred policy positions that in aggregate determine the set of policies that maximizes citizen welfare. An uninformed policy maker (PM) is concerned with both implementing a set of policies that maximize citizen welfare, and collecting contributions from IGs. I model the interaction between the PM and the IGs as an all-pay auction where IGs provide contributions to the PM, and the PM grants access to the groups that gave the largest contributions. The IGs with access can present their information to the policy maker before he chooses a policy set. In equilibrium, because contributions are chosen endogenously, the PM learns about the information quality of all IGs, even when he grants access to only a subset of the groups. When there is no limit to the size of contributions, the welfare maximizing policy set is implemented in equilibrium. Limiting the size of contributions strictly reduces expected citizen welfare.

Luciano De Castro

Carlos III University

  Tuesday, July 11, 12:10, Session A

Affiliation, Positive Dependence and Linkage Principle    [pdf]

Abstract

We give necessary and sufficient conditions for existence of a pure strategy equilibrium for first price private value auctions. The signals of the players may have any kind of dependence. The conditions are given for a set of distributions which is dense in the set of all symmetric distributions. The approach allows numerical simulations, which show that affiliation is a very restrictive assumption, not satisfied in many cases that have pure strategy equilibrium. We also show that neither existence nor the revenue ranking implied by affiliation (superiority of the English auction) generalizes for positively dependent distributions. Nevertheless, the revenue ranking is valid in a weak sense (on average) for the dense set of distributions.
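
For reference, the affiliation condition whose restrictiveness is being measured is the standard one (as in Milgrom and Weber; it is not restated in the abstract): a joint density f of the signals is affiliated if

\[ f(x \vee y)\, f(x \wedge y) \;\ge\; f(x)\, f(y) \quad \text{for all signal profiles } x, y, \]

where \vee and \wedge denote the componentwise maximum and minimum.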

Geoffroy De Clippel

Rice University

  Monday, July 10, 5:15, Session C

Impartial Division of a Dollar    [pdf]

(joint work with Herve Moulin and Nicolaus Tideman)

Abstract

For impartial division, each participant reports only her opinion about the fair relative shares of the other participants, and this report has no effect on her own share. If a specific division is compatible with all reports, it is implemented. We propose a natural method meeting these requirements, for a division among four or more participants. No such method exists for a division among three participants.

Geoffroy De Clippel

Rice University

  Tuesday, July 11, 11:20, Session B

Axiomatic Solutions to a Simple Commons Problem    [pdf]

(joint work with Manipushpak Mitra)

Abstract

The paper studies a simple commons problem, as in Moulin (2001). A set of agents collectively own a technology that produces many identical units of an indivisible object. Each agent needs and has use for only one unit of the object. A solution specifies how many units to produce, who should receive the object, and defines monetary transfers to compensate the agents that cannot consume the object. Simple axioms, adapted from the recent literature on queueing (see Maniquet, 2003), characterize a unique solution. It is the benefit analogue of the serial cost sharing rule already discussed in the literature, and is therefore called the serial surplus sharing rule. Assuming decreasing returns to scale, we show that it coincides with the Shapley value of the coalitional form transferable utility game where the worth of a coalition is the efficient outcome in the absence of the complement coalition. We also develop a dual analysis, as Chun (2004) did for queueing problems.
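
Since the serial surplus sharing rule is shown to coincide with the Shapley value of an associated TU game, a generic Shapley-value routine is a natural companion; the sketch below is purely illustrative, and the worth function and numbers are hypothetical rather than the paper's.

import math
from itertools import permutations

def shapley_value(players, v):
    # Average marginal contribution of each player over all arrival orders.
    players = list(players)
    total = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            total[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: total[p] / n_fact for p in players}

# Hypothetical symmetric 3-player worth function.
worth = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
         frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
         frozenset({1, 2, 3}): 9}
print(shapley_value([1, 2, 3], lambda S: worth[S]))  # each player gets 3.0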

Massimo De Francesco

University of Siena

  Tuesday, July 11, 3:15, Session B

Endogenous entry under Bertrand-Edgeworth and Cournot competition with capacity indivisibility    [pdf]

Abstract

Strategic market interaction is modelled as a two-stage game where potential entrants choose capacities and active firms compete in prices or quantities. Due to capital indivisibility, the capacity choice is made from a finite grid. In either strategic setting, the equilibrium of the game depends on the size of total demand at a price equal to the minimum average cost. With a sufficiently large market, the long-run competitive price emerges at a subgame-perfect equilibrium of either game. Failing the large market condition, equilibrium outcomes are quite different in the two games, and neither game reproduces the competitive equilibrium.

Silvia De la Sierra

ITAM

  Monday, July 10, 11:45, Session C

Factors' contribution to the poverty index FGT2: An application of Cooperative Games    [pdf]

Abstract

In this paper we apply the methodology proposed by Shorrocks (1999) to estimate which factor contributes most to poverty. We examine deficiencies in food consumption and assess how each consumption component affects the poverty index for population subgroups, measured per adult equivalence unit. The question in this paper is: which component of the value of consumption (fruits and vegetables, cereals and grains, meat and chicken, industrialized food) contributes most to the poverty index across seven states in Mexico?
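
As background on the index itself (the Foster-Greer-Thorbecke family is standard; the consumption figures below are hypothetical): FGT2 is the squared poverty gap, the mean over the population of ((z - y)/z)^2 for individuals with consumption y below the poverty line z, and the Shorrocks approach then attributes the index to factors with Shapley-value weights. A minimal sketch of the index computation:

import numpy as np

def fgt(y, z, alpha=2):
    # Foster-Greer-Thorbecke index: mean of ((z - y)/z)^alpha over the poor.
    y = np.asarray(y, dtype=float)
    gaps = np.clip((z - y) / z, 0.0, None)  # zero for the non-poor
    return np.mean(gaps ** alpha)

# Hypothetical per-adult-equivalent consumption values and poverty line z.
print(fgt([50, 80, 120, 200], z=100, alpha=2))  # FGT2 = 0.0725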

André De Oliveira

Universidade de Brasília

  Monday, July 10, 3:15, Session C

Leading by Example: A Bi-population Approach

(joint work with Gil Riella)

Abstract

The vast majority of investigations of cooperative outcomes in non-cooperative international collective action settings are based upon the iterated Prisoner’s Dilemma. Yet actors lack the type of property rights and power that is commensurate with punishment strategies that implement cooperation. A new perspective on the problem claims that cooperative outcomes can be achieved through a special type of leadership, called leading by example, where the leader commits to a minimal level of contribution to the provision of the public good and to matching higher contributions beyond it. It has been shown that this can generate welfare improvements for different types of public goods. The existing results for leading-by-example models come from symmetric single-population games, and the equilibrium concept used is evolutionary stability. This concept fits well with problems requiring international collective action, for they are rarely single-shot and the benefits produced are often designed to be passed on to future generations. The use of symmetric games, however, although possible to justify in international contexts where agents have a symmetric set of property rights, is less appealing in such a framework. The main purpose of this paper is to improve upon the existing literature by investigating more general settings and conditions under which leading by example can support cooperative outcomes. More precisely, it uses a bi-population model where players have the same choices regarding how much to contribute to the provision of the public good but their costs of provision are different. In such a setting, the use of the concept of evolutionary stability is not appropriate.

Maria Dementieva

Univ. of Jyvaskyla, Finland

  Thursday, July 13, 11:20, Session B

Comparing Solutions in Multistage Cooperative Games    [pdf]

(joint work with Victor Zakharov, Anna Gan’kova, Pekka Neittaanmaki)

Abstract

The problem of comparing values of TU cooperative games is introduced and treated. The idea of the proposed method is based on multicriteria methodology and the ASPID technique. The Shapley value and the nucleolus are compared under different information about excess preferences.

Thomas Demuynck

University of Ghent

  Wednesday, July 12, 4:00, Session A

On the Potential of State Dependent Mutations as an Equilibrium Refinement Device    [pdf]

(joint work with Arne Schollaert)

Abstract

This paper focuses on modelling the mutation process in evolutionary models. First, we develop a link between the nature of the mutation process, the detailed balance property and the nature of the game: we show that a game has a perturbation satisfying detailed balance and utility monotonicity if and only if it is an ordinal potential game. Then, we show that for ordinal potential games the utility monotonicity property is insufficient to generate robust equilibrium predictions. Therefore, we argue that the mutation-induced solution concept has only limited potential as an equilibrium refinement device.

Dinko Dimitrov

Bielefeld University

  Monday, July 10, 12:10, Session C

Coalition Formation in Simple Games: The Semistrict Core    [pdf]

(joint work with Claus-Jochen Haake)

Abstract

We consider the class of proper monotonic simple games and study coalition formation when an exogenous share vector and a solution concept are combined to guide the distribution of coalitional worth. Using a multiplicative composite solution, we induce players' preferences over coalitions in a hedonic game, and present conditions under which the semistrict core of the game is nonempty.

Irinel Dragan

University of Texas

  Thursday, July 13, 5:10, Session A

An alternative coalitional rationality concept for Semivalues of TU games

(joint work with Juan Enrique Martinez-Legaz)

Abstract

In an earlier paper (Dragan/Martinez, IGTR, 2001), we introduced a concept of coalitional rationality for non-efficient values, which was applied to the Semivalues. We obtained necessary and sufficient conditions of coalitional rationality for Semivalues. In the present paper, we introduce an alternative concept, connected to the concept of quasi-core due to Shapley/Shubik (1966). As both the new and the old concepts reduce to the usual coalitional rationality concept for efficient values, they have equal right to exist. To derive the conditions we need the Average per capita formula for the Shapley value (Dragan, 1992), the Average per capita formula for Semivalues from our previous paper, the Efficient normalization of a Semivalue, and the relationship between the Semivalues and the Shapley value.

Pradeep Dubey

SUNY at Stony Brook

  Friday, July 14, 9:45

Competing for Customers in a Social Network

Abstract

There are many situations in which a customer's proclivity to buy the product of any firm depends not only on the classical attributes of the product such as its price and quality, but also on who else is buying the same product. We model these situations as games in which firms compete for customers located in a "social network". Nash equilibria (NE) in pure strategies exist in general. In the quasi-linear version of the model, NE turn out to be unique and can be precisely characterized. If there are no a priori biases between customers and firms, then there is a cut-off level above which high-cost firms are blockaded at an NE, while the rest compete uniformly throughout the network.

We also explore the relation between the connectivity of a customer and the money firms spend on him. This relation becomes particularly transparent when externalities are dominant: NE can be characterized in terms of the invariant measures on the recurrent classes of the Markov chain underlying the social network.
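
As a small illustration of the object invoked here (the chain below is a hypothetical stand-in, not the paper's model): an invariant measure of a finite Markov chain is a left eigenvector of the transition matrix for eigenvalue 1, normalized to sum to one.

import numpy as np

def invariant_measure(P):
    # Stationary distribution pi with pi P = pi for a transition matrix P.
    vals, vecs = np.linalg.eig(np.asarray(P, dtype=float).T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

# A toy 3-state chain standing in for a small social network.
P = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
print(invariant_measure(P))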

Finally we consider convex (instead of linear) cost functions for the firms. Here NE need not be unique as we show via an example. But uniqueness is restored if there is enough competition between firms or if their valuations of clients are anonymous.

Federico Echenique

California Institute of Technology

  Thursday, July 13, 4:00, Session B

What Matchings can be Stable? The Refutability of Matching Theory    [pdf]

Abstract

When can a collection of matchings be stable, if preferences are unknown? This question lies behind the refutability of matching theory. A preference profile rationalizes a collection of matchings if the matchings are stable under the profile. Matching theory is refutable if there are observations of matchings that cannot be rationalized. I show that the theory is refutable, and provide a characterization of the matchings that can be rationalized.

Ezra Einy

Ben Gurion University

  Tuesday, July 11, 4:05, Session D

Equilibrium in a Cournot Duopoly with Asymmetric Information    [doc]

(joint work with Ori Haimanko, Diego Moreno, Benyamin Shitovitz)

Abstract

Novshek (1985) provides a condition which guarantees the existence of equilibrium in a Cournot game with complete information. In this work we study the existence and uniqueness of equilibrium in a Cournot duopoly with asymmetric information under the Novshek condition. It is shown that if one firm is better informed than the other, there exists a unique Cournot equilibrium. Our model of asymmetric information is general, and imposes no restriction on the space of states of nature or on the information fields of firms. The proof of our main result is based on techniques from the theory of ordered lattices.

Kfir Eliaz

New York University

  Wednesday, July 12, 10:55, Session A

A Mechanism-Design Approach to Speculative Trade    [pdf]

(joint work with Rani Spiegler)

Abstract

When agents hold non-common priors over an unverifiable state of nature which affects the outcome of their future actions, they have an incentive to bet on the outcome. We pose the following question: what are the limits on the agents’ ability to realize gains from speculative bets when their prior belief is private information? We apply a “mechanism design” approach to this question, in the context of a pair of models: a principal-agent model in which the two parties bet on the agent’s future action, and a market model in which traders bet on the future price. We characterize interim-efficient bets in these environments, and their implementability as a function of fundamentals. In general, implementability of interim-efficient bets diminishes as the costs of manipulating the bet’s outcome become more uneven across states or agents.

Peter Engseld

Lund University

  Friday, July 14, 4:00, Session C

Conventions in a Spatial Environment    [pdf]

Abstract

We consider an n-by-n evolutionary symmetric coordination game played in a finite, discrete, homogeneous spatial space where the number of agents increases over time. Each location can only sustain a limited number of agents. Agents can migrate to an adjacent location. An agent is matched to play the game with opponents in her neighborhood. The fitness of an agent is increasing in the average payoff and the number of possible matches, and decreasing in the number of opponents at the same location as the agent. If the migration procedure is such that the number of agents is unimodally distributed in a population, the total population will be partitioned into isolated sub-populations, each of which will be in a Nash equilibrium. The agents will eventually be forced to interact with other sub-populations as the total population grows. If there exists an equilibrium that is both risk-dominant and efficient, this equilibrium will always prevail. The slower the absolute growth, the more likely it is that an efficient equilibrium will prevail.
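
For concreteness, the selection criteria invoked above can be stated in the symmetric 2x2 case (a standard restatement, not specific to this paper): with payoffs a = u(A,A), b = u(A,B), c = u(B,A), d = u(B,B) and both (A,A) and (B,B) strict equilibria, (A,A) is efficient (payoff dominant) when a > d and risk dominant when a - c > d - b, so the condition in the abstract asks for an equilibrium satisfying both inequalities.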

Ita Falk

Harvard University

  Tuesday, July 11, 12:10, Session B

War and Evolution    [doc]

Abstract

The study proposes a class of differential Markov games (wars) where an arbitrary number of players utilize a pool of two inter-related stocks. Let a state be given by the stock vector. We identify the bias in marginal profits facing each player, compared with the efficient marginal profits, for each state, and use this bias function to relate the social value of a war, at a Markov-Nash equilibrium, to the number of players.
The study further constructs a stochastic evolutionary game to simulate the dynamics of the number of players in a war. It identifies conditions under which a neutrally stable, and a globally stable, number of players is one for which the social value is maximal.

Marco Faravelli

University of Edinburgh

  Friday, July 14, 10:55, Session A

The Important Thing Is not (Always) Winning but Taking Part: Funding Public Goods with Contests    [pdf]

Abstract

This paper considers a public good game with incomplete information affected by extreme free-riding. We overcome this problem through the implementation of a contest in which several prizes can be awarded. For any possible distribution of wealth we identify the necessary and sufficient conditions for the equilibrium allocations to be interior for all players. At interior solutions the social planner sets the last prize equal to zero and the total expected welfare is independent of the distribution of the total prize sum among the other prizes. We prove that private provision via a contest Pareto-dominates both public provision and private provision via a lottery.

Dragan Filipovich

El Colegio de Mexico

  Tuesday, July 11, 3:15, Session A

Constitutions as Self-Enforcing Redistributive Schemes    [pdf]

(joint work with Jaume Sempere)

Abstract

We present a model of a fiscal constitution (i.e., a transfer scheme between income classes) that is self-enforcing against a background in which predatory activities (‘revolutions’) are feasible. In this environment, a constitution self-enforces by structuring society’s interests in such a way that non-compliance necessarily results in a revolution which society would rather avoid.

Francoise Forges

Universite Paris Dauphine

  Tuesday, July 11, 2:00

Revealed Preferences in Market Games

Abstract

Afriat (1967) showed the equivalence of the strong axiom of revealed preference and the existence of a solution to a set of linear inequalities. From this solution he constructed a utility function rationalizing the choices of a competitive consumer. We extend Afriat’s theorem to a class of nonlinear budget sets. We thereby obtain testable implications of rational behavior for a wide class of economic environments, and a constructive method to derive individual preferences from observed choices. In an application to market games, we identify a set of observable restrictions characterizing Nash equilibrium outcomes.
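
For readers who want the linear inequalities being referred to (the standard Afriat system; the abstract does not restate it): given observed prices and chosen bundles (p^i, x^i), i = 1, ..., n, one seeks numbers U_i and \lambda_i > 0 such that

\[ U_j \;\le\; U_i + \lambda_i \, p^i \cdot (x^j - x^i) \quad \text{for all } i, j . \]

Any solution yields a concave, monotone utility rationalizing the data; the paper's extension replaces the linear budget sets behind these inequalities by a class of nonlinear ones.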

Guilherme Pereira de Freitas

IMPA

  Monday, July 10, 2:50, Session B

Collusion and entry deterrence in a patent-thicket industry    [pdf]

Abstract

When patent standards are low and almost any relevant technology is unlikely to be covered by a single patent, firms may create precautionary patent portfolios. This gives rise to the patent thicket problem: firms have to deal with innumerable uncertainties or negotiations in order to develop a new product or technology. We build a simple repeated game model that shows how the patent thicket may allow incumbent firms to keep competition away through litigation threats. The model displays a subgame perfect equilibrium where incumbents are cooperative towards each other, but aggressive towards potential entrants.

Hideki Fujiyama

Dokkyo University

  Friday, July 14, 5:15, Session C

Decisions on Exits : A Social Dilemma Experiment with Intergroup Mobility    [pdf]

(joint work with Jun Kobayashi; Yusuke Koyama; Hirokuni Oura)

Abstract

Using experimental data from a social dilemma experiment, we examine the differences in exit behavior between cooperators and non-cooperators. Our results show that the cooperators have a higher probability of choosing to exit than the non-cooperators. The non-cooperators are more sensitive to the cooperation rate of others in their groups. We also investigate the intentions behind the exit choices in terms of predicted payoffs and predicted group sizes. For the predicted payoffs, the non-cooperators try to move into groups with higher predicted payoffs. For the predicted group sizes, the cooperators try to move into smaller groups. These facts are consistent with Ehrhart and Keser (1999): non-cooperators try to enjoy free riding, and cooperators try to escape from free riding.

Emiko Fukuda

National Defense Academy in Japan

  Friday, July 14, 11:20, Session B

Cores for cooperative investment games    [pdf]

(joint work with Shigeo Muto)

Abstract

This paper deals with games where players invest fractions of their own resources to some joint projects and share the profit produced from the projects among them. In the case of linear production function, Molina and Tejada (2004) and Fukuda et al. (2005) studied a fuzzy game that incorporates players’ partial investment into Owen’s linear production game (LP game). However, in such games, players’ utilities from the remaining resources were not taken into account. Azrieli and Lehrer (2005) defined a cooperative investment game, which is similar to a fuzzy game. They focused on resources players leave in their hands after investment, and newly defined the comprehensive core of an investment game. While in the comprehensive core payoff allocations are assumed to be linear, Fukuda et al. (2005) showed that the core elements of a fuzzy game are linear only when the game is positively homogeneous of degree one. Then Muto et al. (2006) considered non-linear payoff schemes and generalized the core of Aubin. In this paper, we model a cooperative investment situation where each player can privately gain utility from his/her own resources. Specifically we define a new game where all players can invest a fraction of their own resources and simultaneously make profits from their own remaining resources. This game can be interpreted as an extended fuzzy game (or multi-choice game) in which players may gain profits from their remaining resources. Note that in this game it is not necessarily the case that full investment yields the maximum profit. Next we define efficiency and individual rationality for this game, and study its core. We also formulate this situation as a non-cooperative strategic form game and study its Nash equilibria. We find a condition under which efficient investment can be achieved as a Nash equilibrium. Finally we study relations between core elements and refinements of Nash equilibria.

Douglas Gale

New York University

  Thursday, July 13, 9:00

Structural Models of Boundedly Rational Behavior

Abstract

In an experimental study of choice under uncertainty, Choi, Fisman, Gale, and Kariv (CFGK) found that most subjects used one or more underlying “prototypical” heuristics, which they call types. These heuristics correspond to the behavior of individuals who are infinitely risk averse, risk neutral, and expected utility maximizers with intermediate (constant) relative risk aversion. Where several of these prototypical heuristics are used, consistent behavior requires subjects to choose among heuristics in a consistent manner as well as to behave consistently in applying a given heuristic.
Motivated by these patterns, CFGK proposed and estimated a type-mixture model (TMM) in which boundedly rational individuals use heuristics in their attempt to maximize an underlying preference ordering. In implementing this framework, they assume that individual preferences have an expected utility representation. Individuals are assumed to choose the heuristic that offers the highest payoff in a given decision, taking into account the possibility of making mistakes. Thus, the probability of choosing a particular heuristic is a function of the parameters of the budget set. CFGK found that a TMM, employing only the three heuristics mentioned above, helps to explain the choice of heuristics and allows us to estimate measures of risk aversion that agree with estimates from other studies. Although the TMM allows for the possibility of errors and the use of heuristics, there remains an important role for Expected Utility Theory in analyzing choice under uncertainty, because the choice of heuristics is motivated by the underlying expected utility representation.

Filomena Garcia

ISEG Universidade Tecnica de Lisboa

  Monday, July 10, 11:45, Session B

Endogenous heterogeneity in strategic models: symmetry breaking via strategic substitutes and nonconcavities    [pdf]

(joint work with Amir, Rabah and Malgorzata Knauff)

Abstract

This paper is an attempt to develop a unified approach to endogenous heterogeneity by constructing a general class of two-player symmetric games that always possess only asymmetric pure-strategy Nash equilibria. These classes of games are characterized in some abstract sense by two general properties: payoff non-concavities and some form of strategic substitutability. We provide a detailed discussion of the relationship of this work to Matsuyama's symmetry-breaking framework and to the business strategy literature. Our framework generalizes a number of models dealing with two-stage games, with long-term investment decisions in the first stage and product market competition in the second stage. We present the main examples that motivate this study to illustrate the generality of our approach.

Filomena Garcia

ISEG Universidade Tecnica de Lisboa

  Monday, July 10, Session A

Technology adoption with forward looking agents    [pdf]

(joint work with Paolo Colla)

Abstract

We investigate the effects of forward-looking behavior in technology adoption. Within an overlapping generations model, agents choose between two alternative networks taking into consideration both the installed base and the expected base. The latter element is the distinctive feature of our approach, and in general brings in multiple equilibria. We use results from the supermodular games literature to guarantee equilibrium existence and we prove uniqueness. We consider both the cases of incompatible and compatible technologies and show that technologies cannot lock in, while the adoption path exhibits hysteresis. Network choices are characterized both in terms of their long-run properties and expected time of adoption.

Andrey Garnaev

Saint Petersburg State University

  Monday, July 10, Session B

On an Inspection Game with a Fine    [pdf]

Abstract

In this paper we consider an inspection game generalizing a game of Rothenstein and Zamir. There are two players, the operator and the inspector, and the game is played over the time interval [0,1]. The operator plans to perform an illegal action (say, an illegal discharge of pollution) and the inspector is allowed one inspection to detect the violation.

The profit of the operator depends on the time during which he was undetected after performing the illegal action. The operator will have a choice either to act legally or to act illegally during the game. The inspection system detects a violation which occurred before the inspection with probability $1-\beta$. With probability $\alpha$ a false alarm is called, i.e., the inspector calls an alarm although no violation took place before the inspection. It is assumed that the inspection is silent, i.e. the event that an inspection took place is not known to the operator unless the inspector calls an alarm. If the inspector detects the violation the operator will be fined. The payoff to the operator is his total profit.

The difference between our setting and the scenario studied by Rothenstein and Zamir (Imperfect Inspection Games Over Time. Annals of Operations Research 109, 2002, pp. 175-192) is that we allow the operator to act legally as well as illegally (so the operator may also refrain from the illegal action altogether) and that the operator can be fined. The setting considered in this paper is modeled both as a zero-sum timing game and as a non-zero-sum timing game. It is shown that the players' optimal behavior depends essentially on the fine.

Andrew Gilpin

Carnegie Mellon University

  Friday, July 14, 4:00, Session E

A competitive Texas Hold'em poker player via automated abstraction and real-time equilibrium computation    [pdf]

(joint work with Tuomas Sandholm)

Abstract

We present our game theory-based heads-up Texas Hold'em poker player. To overcome the computational obstacles stemming from Texas Hold'em's gigantic game tree, our player employs automated abstraction techniques to reduce the complexity of the strategy computations. In addition to this state-space abstraction, our player uses round-based abstraction in conjunction with both offline and real-time equilibrium approximation. Texas Hold'em consists of four betting rounds. Our player solves a large linear program (offline) to compute strategies for the abstracted first and second rounds. After the second betting round, our player updates the probability of each possible hand based on the observed betting actions in the first two rounds as well as the revealed cards. Using these updated probabilities, our player computes in real-time an equilibrium approximation for the last two abstracted rounds. We demonstrate that our player, which does not directly incorporate any poker-specific expert knowledge, is competitive with leading poker-playing programs which do incorporate such domain-specific knowledge, as well as with advanced human players.
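
As an illustration of the Bayesian updating step described above, here is a minimal sketch, with hypothetical hand buckets and likelihood numbers that are not taken from the paper: the prior over opponent-hand categories is reweighted by the probability of the observed betting actions given each category.

def update_hand_beliefs(prior, likelihood_of_observed_actions):
    """Bayesian update of beliefs over opponent-hand buckets.
    prior: dict bucket -> prior probability
    likelihood_of_observed_actions: dict bucket -> P(observed betting | bucket)"""
    posterior = {h: prior[h] * likelihood_of_observed_actions[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

prior = {"strong": 0.2, "medium": 0.5, "weak": 0.3}        # hypothetical hand buckets
likelihood = {"strong": 0.8, "medium": 0.4, "weak": 0.1}   # hypothetical P(observed raise | bucket)
print(update_hand_beliefs(prior, likelihood))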

Bertrand Gobillard

PSE and University Paris 10 Nanterre

  Tuesday, July 11, 10:55, Session A

How large to be on a market? On (in)effective price dispersed arbitrage opportunities.    [pdf]

Abstract

In this paper we explore the market game with multiple trading posts per commodity type, in order to explain the principles on which the failure of the law of one price hinges and to clarify the role of liquidity constraints. The market game without wash-sales is used as a tool to stress the key role of agents' relative weights on different posts (when being too large on a given trading post, an agent may rather enter another post with a less interesting price) and the critical influence of wash-sales. We then show that the absence of liquidity considerations is not critical and extend to these situations the convergence to price uniformity as the number of agents increases, as well as the impact of the market structure. Mainly, the multiplicity of trading posts is not relevant if wash-sales are precluded.

Ziv Gorodeisky

The Hebrew University of Jerusalem

  Wednesday, July 12, 4:25, Session E

Stability of Mixed Equilibria    [pdf]

Abstract

We consider stability properties of equilibria in stochastic evolutionary dynamics. In particular, we study the stability of mixed equilibria in strategic form games. In these games, when the populations are small, all strategies may be stable. We prove that when the populations are large, the unique stable outcome of best-reply dynamics in 2 x 2 games with a unique Nash equilibrium that is completely mixed is the mixed equilibrium. The proof of this result is based on estimating transition times in Markov chains.

Olivier Gossner

Paris-Jourdan Sciences Economiques

  Tuesday, July 11, 11:45, Session C

Ascertaining irrationality: modeler's vs agent's perspective

(joint work with Jeremiah Juts)

Abstract

From a modeler's perspective, an agent exhibiting a non-partitional possibility correspondence may be considered not fully rational. We show that such correspondences arise when the agent's asserted set of possible states differs from the modeler's set of considered states, so that a modeler cannot distinguish an irrational agent from an agent having a partitional possibility correspondence over a larger state space than the one considered by the modeler.

Veronika Grimm

University of Cologne

  Tuesday, July 11, 4:05, Session A

Capacity Choice under Uncertainty: The Impact of Market Structure    [pdf]

(joint work with Gregor Zoettl)

Abstract

We analyze a market game where firms choose capacities under uncertainty about future market conditions and make output choices after uncertainty has been resolved. We show existence and uniqueness of equilibrium under imperfect competition and provide an intuitive characterization of equilibrium investment. We show that investment in oligopoly and in the first- and second-best solutions can be unambiguously ranked; in particular, investment is highest in the first-best solution and lowest under imperfect competition. We finally demonstrate that intervention of a social planner only at the production stage leads to strategic uncertainty at the investment stage and, moreover, decreases total investment below the level obtained under imperfect competition.

Brit Grosskopf

Texas A&M University

  Monday, July 10, 4:05, Session D

Is Reputation Good or Bad? An Experiment    [pdf]

(joint work with Rajiv Sarin)

Abstract

We design and conduct an experiment to determine whether reputation is good or bad. Our design nests traditional models that were designed to test whether reputation is good with more recent theoretical work in which reputational concerns can actually be harmful. We also study behavior in these two situations when no reputation building is possible. Our results suggest that reputation is neither as bad as theoretically predicted nor as good. We propose trust to be the major component that describes our observed behavior, and suggest trust to be a substitute for reputation.

Claus-Jochen Haake

IMW / Bielefeld University

  Friday, July 14, 10:55, Session C

Monotonicity and Nash Implementation in Matching Markets with Contracts    [pdf]

(joint work with Bettina Klaus)

Abstract

We consider general two-sided matching markets, so-called matching with contracts markets as introduced by Hatfield and Milgrom (2005) and analyze (Maskin) monotonic and Nash implementable solutions. We show that for matching with contracts markets the stable correspondence is monotonic and implementable (Theorems 1 and 3). Furthermore, any solution that is Pareto efficient, individually rational, and monotonic is a supersolution of the stable correspondence (Theorem 2). In other words, the stable correspondence is the minimal solution that is Pareto efficient, individually rational, and implementable.
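
For reference, the standard definition of (Maskin) monotonicity invoked above, stated for a generic outcome set: a solution $F$ is monotonic if, for all preference profiles $R$, $R'$ and every outcome $a$,
$$a \in F(R) \ \text{ and } \ \big[\forall i,\ \forall b:\ a \mathrel{R_i} b \Rightarrow a \mathrel{R'_i} b\big] \ \Longrightarrow\ a \in F(R').$$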

Sergiu Hart

Hebrew University of Jerusalem

  Wednesday, July 12, 2:00

Surely You're Using the Sure-Thing Principle!    [pdf]

(joint work with Robert J. Aumann, Motty Perry)

Abstract

Two papers will be discussed:
"Conditioning and the Sure-Thing Principle" (R. J. Aumann, S. Hart and M. Perry), which undertakes a careful examination of the concept of conditional probability and its use. The ideas are then applied to resolve a conceptual puzzle related to Savage's "Sure-Thing Principle."

and

"Agreeing on Decisions" (R. J. Aumann and S. Hart), which presents a coherent formalization of the Decision Agreement Theorem. The Decision Agreement Principle, originally promulgated by Cave and Bacharach (independently), asserts that if like-minded decision makers commonly know each other's decisions, then the decisions are the same. Subsequently, Moses and Nachum discovered a flaw in the reasoning underlying the principle. Here we provide a careful, coherent, and correct formulation of the principle, which avoids the Moses-Nachum flaw, and enables us to understand just when the principle applies, and when it doesn't.

Kevin Hasker

Bilkent University

  Friday, July 14, 4:25, Session C

Learning to play (Mixed) Equilibrium using Best Response Learning Dynamics.

(joint work with Kivanc Akoz)

Abstract

Since analysts began studying models of stochastic evolution (like Kandori, Mailath, and Rob (1993, KMR hereafter) and Young (1993)) it has been an open question whether populations using these models can learn to play mixed equilibria. Oechssler (1997) answered in the affirmative for some games using the learning model developed by KMR, but the general question has remained open.

We answer this question by determining the possible successors of a distribution given each learning rule. For KMR's model the answer depends critically on the assumed tie-breaking rule. If we assume (like KMR) that players do not change their strategy when it is a best response then populations can learn to play any mixed equilibrium. If we only assume that they do switch their strategy if it is not a best response then learning converges to the mixtures over a minimal CURB set.

We also consider a "noisy subsampling" model inspired by Young (1993).In this model players take a random sample from the population instead of observing the entire distribution. We show that independent of the coarseness of the subsample play still converges to the mixtures over a minimal CURB set.

If there is more than one population of players then these are the final answers. When there is only one population, a further refinement of our learning models may lead to convergence to equilibrium. In the birth-death model studied by Blume (2003) and in Young's model only a fraction of the population can switch strategies in any given period. We are still in the process of proving that, for a class of equilibria, as this fraction goes to zero play will converge to a (mixed) equilibrium that is in the interior of a minimal CURB set.
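
The following is a minimal sketch, under stated assumptions, of the kind of population best-reply dynamics discussed above: a symmetric 2x2 game whose only symmetric equilibrium is mixed (Hawk-Dove payoffs are used purely as an example), with the tie-breaking rule exposed as a parameter, echoing the point that the tie-breaking rule matters. This is an illustration, not the authors' model.

import random

# Payoff matrix A[i][j]: payoff to a player using strategy i against strategy j.
# Strategies: 0 = Hawk, 1 = Dove. The mixed-equilibrium share of Hawks here is 1/2.
A = [[-1.0, 2.0],
     [ 0.0, 1.0]]

def best_reply_step(population, keep_if_tied=True):
    """One step of best-reply dynamics: every player switches to a best reply
    against the current population distribution; `keep_if_tied` is the tie-breaking rule."""
    n = len(population)
    share_hawk = sum(1 for s in population if s == 0) / n
    payoff = [A[i][0] * share_hawk + A[i][1] * (1 - share_hawk) for i in (0, 1)]
    new_pop = []
    for s in population:
        if abs(payoff[0] - payoff[1]) < 1e-12:                 # tie between the two strategies
            new_pop.append(s if keep_if_tied else random.choice((0, 1)))
        else:
            new_pop.append(0 if payoff[0] > payoff[1] else 1)
    return new_pop

pop = [random.choice((0, 1)) for _ in range(100)]
for _ in range(20):
    pop = best_reply_step(pop)
print("share of Hawks after 20 steps:", sum(1 for s in pop if s == 0) / len(pop))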

Magnus Hennlock

Gothenburg University

  Monday, July 10, 3:15, Session E

Coasean Bargaining Games with Stochastic Stock Externalities    [pdf]

Abstract

The recent breakthrough on ‘subgame consistency’ in cooperative stochastic differential games by Yeung and Petrosjan (2006) and Yeung and Petrosjan (2004) is applied to the classical Coase theorem in the presence of stochastic stock externalities. The dynamic Coasean bargaining solution is identified, involving a negotiated plan of externality trade over time as well as a subgame consistent flow of Coasean liability payments under different assignments of property rights. The agent with the right to determine the externality has the advantage of choosing his own private equilibrium as the initial condition in the dynamic system of the Coasean bargaining solution. The dynamic Coasean bargaining solution is formulated, followed by an illustration showing an analytically tractable solution.

Dorothea Herreiner

Loyola Marymount University

  Friday, July 14, 5:15, Session E

The Relevance of Envy Freeness as Fairness Criterion    [pdf]

Abstract

This paper evaluates how relevant envy freeness (Foley 1967) is as an empirical concept of fairness. Several versions of an indivisible-good fair division problem are evaluated in a survey questionnaire. Participants had to determine the fairest allocations of the objects among individuals with different preferences. Each problem features two allocations that are identical in all aspects but envy freeness. Across all treatments and versions of the problem, the envy free allocation is chosen 3.5 times as frequently as the allocation with envy. However, as is shown, only some of these choices are based on a conscious use of the criterion of envy freeness. The relevance of other criteria for the choice of the envy free allocation is evaluated.

Daniel Hojman

Harvard University

  Monday, July 10, 4:05, Session B

Core and Periphery in Endogenous Networks    [pdf]

(joint work with Adam Szeidl)

Abstract

Many economic and social networks share two common organizing features: (1) a core-periphery structure; (2) positive correlation between network centrality and payoffs. In this paper, we build a model of network formation where these features emerge endogenously. In our model, the unique equilibrium network architecture is a periphery-sponsored star. In this equilibrium, one player, the center, maintains no links and achieves a high payoff, while all other players maintain a single link to the center and achieve lower payoffs. With heterogeneous groups, equilibrium networks are interconnected stars. We show that small minorities tend to integrate while large minorities are self-sufficient. Although any player can be the center in a static equilibrium, evolution selects the agent with most valuable resources as the center in the long run. In particular, even small inequalities in resources can lead to large payoff inequality because of the endogenous social structure. Our main results are robust to the introduction of transfers and bargaining over link costs.

Ed Hopkins

Edinburgh University

  Thursday, July 13, 4:00, Session E

Learning in Games with Unstable Equilibria    [pdf]

Abstract

We propose a new concept for the analysis of games, the TASP, which gives a precise prediction about non-equilibrium play in games whose Nash equilibria are mixed and are unstable under fictitious play-like learning processes. We show that, when players learn using weighted stochastic fictitious play and so place greater weight on more recent experience, the time average of play often converges in these “unstable” games, even while mixed strategies and beliefs continue to cycle. This time average, the TASP, is related to the best response cycle first identified by Shapley (1964). Though conceptually distinct from Nash equilibrium, for many games the TASP is close enough to Nash to create the appearance of convergence to equilibrium. We discuss how these theoretical results may help to explain data from recent experimental studies of price dispersion.

Farhad Husseinov

Bilkent University, Ankara

  Tuesday, July 11, 2:50, Session B

Existence of equilibrium, core and fair allocation in a heterogeneous divisible commodity exchange economy    [pdf]

Abstract

We consider the problem of exchange and allocation of a heterogeneous divisible commodity. A heterogeneous divisible commodity is modelled as an abstract measure space with nonatomic characteristic measures.
We call a model of exchange of such a commodity the Land Trading Economy. We show that a competitive equilibrium and a fair allocation exist in this economy with rather general unordered preferences. We derive from the existence of an equilibrium the existence of a weak core allocation. These results generalize many results in the literature.

Nicole Immorlica

Microsoft Research/MIT

  Friday, July 14, 5:15, Session A

Discriminatory pricing schemes in ascending auctions with anonymous bidders    [pdf]

(joint work with A. Karlin, M. Mahdian, and K. Talwar)

Abstract

A single auctioneer is offering identical copies of an indivisible good with fixed production cost and zero marginal cost to bidders with unit demand. Equivalently, one can imagine the auctioneer is offering an excludable public good of fixed cost. We suppose the auctioneer is restricted to sell the goods through the use of an ascending auction and would like to maximize his profit. If the bidders are distinguishable and the auctioneer has complete information about their types, then he can clearly extract the full surplus of the market with a discriminatory pricing scheme which charges each bidder according to his valuation. We study how much surplus can be extracted when bidders are ex-ante identical. It is natural to wonder whether the auctioneer can again employ discriminatory pricing to increase his profit beyond that of the simple uniform pricing scheme. We show that even in the case of interdependent values (i.e., values drawn according to an arbitrary symmetric joint distribution), no ascending auction extracts more than a constant times the revenue of the uniform pricing scheme.

Elena Inarra

University of the Basque Country

  Thursday, July 13, 4:35, Session B

Absorbing sets for roommate problems with strict preferences    [pdf]

(joint work with C. Larrea, E. Molis)

Abstract

The purpose of the paper is to determine the absorbing sets, a solution that generalizes the notion of core stability, for the entire class of roommate problems with strict preferences.

Maxim Ivanov

Pennsylvania State University

  Tuesday, July 11, 12:10, Session C

Optimal Strategic Communication: Can a Less Informed Expert be More Informative?    [pdf]

Abstract

This paper investigates an extended version of Crawford and Sobel's (1982) communication game in which the principal can control the quality of the expert's information. We prove that the optimal quality of information is always bounded away from full information and characterize the optimal information structure that maximizes players' ex-ante payoffs. Based on this, we show that our mechanism provides a superior ex-ante payoff for the principal, compared to both Crawford and Sobel's most informative equilibrium and optimal delegation. We then study multi-stage communication. This modification results in a further ex-ante Pareto improvement since it allows for the step-by-step refinement of the expert's information, preserving truth-telling communication at every stage. Finally, we construct a mechanism in which approximately full information is revealed for a large sub-interval of the state space.

Antonio Jimenez-Martinez

Universidad de Guanajuato

  Monday, July 10, Session C

A Model of Interim Information Sharing under Incomplete Information    [pdf]

Abstract

We propose a two-person game-theoretical model to study information sharing decisions at an interim stage when information is incomplete. The two agents have pieces of private information about the state of nature, and that information is improved by combining the pieces. Agents are both senders and receivers of information.
There is an institutional arrangement that fixes a transfer of wealth from an agent who lies about her private information.
In our model we show that (i) there is a positive relation between information revelation and the amount of the transfers, (ii) information revelation has a collective action structure, in particular, the incentives of an agent to reveal are decreasing with respect to the amount of information disclosed by the other.

James Schuyler Jordan

Penn State

  Wednesday, July 12, 11:20, Session D

Power and legitimacy in pillage games    [pdf]

Abstract

A pillage game is a formal model of Hobbesian anarchy as a coalitional game. The technology of pillage is specified by a power function that determines the power of each coalition as a function of its members and their wealth. A coalition can despoil any other coalition less powerful than itself. The present paper studies the extent to which the exercise of power can be constrained by a shared concept of legitimacy. The basic pillage game is augmented by a set of extrinsic variables that can convey information about past behavior. Depending on the power function, the illegitimate use of power can be inhibited by legitimizing the subsequent use of power against the transgressors. Legitimacy is modeled in a static sense, called quasi-legitimacy, using the stable set (von Neumann-Morgenstern solution) of the augmented pillage game, and in an explicitly dynamic sense, called simply legitimacy, using a concept of farsighted core. Quasi-legitimacy is shown to be a necessary but not sufficient condition for legitimacy. The sets of quasi-legitimate wealth allocations are characterized, and an iterative process is developed for constructing the largest quasi-legitimate set of allocations for each pillage game. If the power function gives enough weight to coalition size that no individual can be as powerful as the coalition of everyone else, then a natural augmentation of the basic pillage game can legitimize the set of all allocations. However, if the power of each coalition is determined by its total wealth alone, then even the weaker concept of quasi-legitimacy cannot stabilize anything other than the stable set of the basic pillage game.

Ruben Juarez

Rice University

  Monday, July 10, 4:40, Session C

The worst absolute surplus loss in the problem of commons: random priority vs. average cost    [pdf]

Abstract

A good is produced with increasing marginal cost. A group of agents each want at most one unit of that good. The two classic methods that solve this problem are average cost and random priority. In the first method users request a unit ex ante and every agent who gets a unit pays the average cost of the number of units produced. Under random priority users are ordered without bias and the mechanism successively offers the units at a price equal to marginal cost. We compare these mechanisms by the worst absolute surplus loss and find that random priority unambiguously performs better than average cost for any cost function and any number of agents. Fixing the cost function, we show that the ratio of worst absolute surplus losses is bounded by positive constants for any number of agents, hence the above advantage of random priority is not very large.
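
A minimal sketch of the random priority mechanism described above against the first-best benchmark, under an assumed quadratic cost function and hypothetical valuations (neither taken from the paper); the average cost mechanism is omitted here because its ex-ante participation decision requires an equilibrium computation.

import random

def marginal_cost(k, c=0.5):
    """Marginal cost of the k-th unit (increasing in k); total cost C(k) = c * k**2 is assumed."""
    return c * (2 * k - 1)

def efficient_surplus(values, c=0.5):
    """First best: serve agents in decreasing value order while value covers marginal cost."""
    surplus, k = 0.0, 0
    for v in sorted(values, reverse=True):
        if v >= marginal_cost(k + 1, c):
            k += 1
            surplus += v - marginal_cost(k, c)
    return surplus

def random_priority_surplus(values, c=0.5):
    """Random priority: agents are ordered at random; the k-th unit is offered at its
    marginal cost, and an agent buys iff her value covers that price."""
    order = list(values)
    random.shuffle(order)
    surplus, k = 0.0, 0
    for v in order:
        price = marginal_cost(k + 1, c)
        if v >= price:
            k += 1
            surplus += v - price   # price equals cost, so this adds exactly the social surplus
    return surplus

values = [random.uniform(0, 5) for _ in range(10)]   # hypothetical valuations
print(efficient_surplus(values), random_priority_surplus(values))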

Ehud Kalai

Northwestern University

  Tuesday, July 11, 9:45

An Approach to Bounded Rationality

(joint work with Eli Ben-Sasson, Adam Tauman Kalai)

Abstract

Bounded rationality was studied mostly in the context of repeated games. But the problem is important (and conceptually difficult) in many strategic games, including one-move games. We discuss examples that suggest a natural formulation of games with bounded rationality, and offer observations that are useful for 2-person constant-sum games and for n-person potential games.

Andrei P Karavaev

The Pennsylvania State University

  Friday, July 14, 4:25, Session A

Information Trading in Social Networks    [pdf]

Abstract

It has been shown that social network structure plays an important role in technology sharing and diffusion. For example, Foster, A. and M. Rosenzweig ("Learning by Doing and Learning from Others: Human Capital and Technical Change in Agriculture") demonstrate that for farmers in India, imperfect knowledge about the management of new high-yielding seed varieties is a significant barrier to adoption, and the farmers rely not only on official directions provided by the producer, but rather on their own experience and the experience of the people they know. The issues we address in the paper are how information diffuses in a social environment, how people trade, and the strategies they employ. The basic example we consider is an infinite linear network of agents without cycles. The main assumptions are that any agent is always aware of whether his neighbor has the information and that the transfer of knowledge takes some time (or involves some cost), and therefore the information will be traded. The key property of the information is that it can be resold to other agents without any loss of utility. We show that in a linear network there exists a non-trivial equilibrium with the following properties:
1. The strategies do not depend on time;
2. This equilibrium is a limit of equilibria with finite horizon;
3. If the information is not rare then we may observe some dispersion of prices.
We also show that there is a possibility of "informed neighbors trap": some agents may never get the information because their neighbors simultaneously make them offers which require reselling for a non-negative payoff (although there are no uninformed neighbors left). We compare equilibria in this social environment with those when the agents are matched randomly and show that the fixed network structure leads to higher prices.

Eiichiro Kazumori

The University of Tokyo

  Thursday, July 13, 11:20, Session E

A Strategic Theory of Markets    [pdf]

Abstract

A strategic theory of the market investigates existence of an equilibrium, price formation, and design of a market mechanism in an environment where each player has private information and can significantly affect the market outcome. In this paper we evaluate the performance of a double auction in a general exchange setting with multi-unit demand and supply and affiliated values. Our first result shows (by approximating the market with a continuum of players) that there exists a pure strategy equilibrium in nondecreasing strategies when there are sufficiently many players. The second result is that a player's equilibrium bid converges, as the number of players increases, to the value of the unit at the state where the bid is pivotal. Finally, the double auction mechanism is incentive efficient in the sense that there exist weights for the players and units such that there is no other incentive-compatible mechanism that outperforms the double auction mechanism.

Flip Klijn

Institute for Economic Analysis (CSIC)

  Monday, July 10, 12:10, Session E

Fairness in a Student Placement Mechanism with Restrictions on the Revelation of Preferences    [pdf]

(joint work with Guillaume Haeringer)

Abstract

We study situations of assigning students to schools based on exogenously fixed priorities (e.g., entrance exams). It is known that Gale-Shapley's (1962) student-optimal Deferred Acceptance algorithm yields a widely used fair mechanism that is (i) Pareto superior to any other fair mechanism (Balinski and Sönmez, 1999), and (ii) Pareto-efficient when priorities are acyclic (Ergin, 2002). When students can submit any preference list it is in their best interest to act truthfully (Dubins and Freedman, 1981; Roth, 1982). If, however, the school assignment procedure prevents students from submitting a preference list that contains all their acceptable schools, then simply submitting a preference list consisting of their most preferred schools may not be a weakly dominant strategy for a student. Thus, the student-optimal mechanism induces a non-trivial preference revelation game where students can only declare up to a fixed number (quota) of schools to be acceptable. We show that, except for the extreme quotas, even strong Nash equilibria in undominated "truncation" strategies may yield unfair assignments. Our main result identifies acyclicity as a necessary and sufficient condition on the priorities to guarantee fair Nash equilibrium outcomes. In particular, as a policy implication, our result suggests that fairness in the restrictive procedure is recovered through strategic interaction if the assignment of students is based on a centralized entrance exam.
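
For concreteness, here is a minimal sketch of the student-proposing Deferred Acceptance algorithm with the list-length quota discussed above; the example data and the quota value are hypothetical, and the sketch does not model the students' strategic choice of which schools to list.

def deferred_acceptance(student_prefs, school_prefs, capacities, quota=None):
    """Student-proposing Deferred Acceptance with lists truncated to `quota` schools.
    student_prefs: dict student -> ordered list of schools
    school_prefs: dict school -> ordered list of students (higher priority first)
    capacities: dict school -> number of seats
    Returns dict student -> assigned school (or None)."""
    prefs = {s: (p[:quota] if quota else p) for s, p in student_prefs.items()}
    rank = {c: {st: i for i, st in enumerate(school_prefs[c])} for c in school_prefs}
    next_choice = {s: 0 for s in prefs}
    held = {c: [] for c in capacities}
    free = [s for s in prefs]
    while free:
        s = free.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                                  # s has exhausted her (truncated) list
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda st: rank[c][st])      # keep the highest-priority students first
        if len(held[c]) > capacities[c]:
            free.append(held[c].pop())                # lowest-priority tentative student is rejected
    match = {s: None for s in prefs}
    for c, students in held.items():
        for s in students:
            match[s] = c
    return match

# Hypothetical example: three students, two schools with one seat each, quota of 1.
students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
schools  = {"A": ["s2", "s1", "s3"], "B": ["s1", "s3", "s2"]}
print(deferred_acceptance(students, schools, {"A": 1, "B": 1}, quota=1))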

Malgorzata Knauff

Warsaw School of Economics

  Tuesday, July 11, 2:50, Session A

Market transparency and Bertrand competition    [pdf]

Abstract

We investigate the effects of market transparency on prices in the Bertrand duopoly model for both the cases of strategic complementarities and strategic substitutes. For the former class of games the "conventional wisdom" concerning prices is confirmed, since they decrease. The consumers are always better off with higher transparency but changes in firms' profits are ambiguous. For the latter class of games, an increase in market transparency may lead to an increase in one of the prices, which implies ambiguity in consumers' utility and firms' profits.

Fuhito Kojima

Harvard University

  Thursday, July 13, 5:10, Session B

Incentives and Stability in Large Two-Sided Matching Markets    [pdf]

(joint work with Parag A. Pathak)

Abstract

The paper analyzes the scope for manipulation in many-to-one matching markets (college admission problems) under the student-optimal stable mechanism when the number of participants is large and the length of the preference list is bounded. Under a mild independence assumption on the distribution of preferences for students, the fraction of colleges that have incentives to misrepresent their preferences approaches zero as the market becomes large. We show that truthful reporting is an approximate equilibrium under the student-optimal stable mechanism in large markets that are sufficiently thick, a condition that allows for certain types of heterogeneity in the distribution of student preferences.

Tatiana Kornienko

University of Stirling

  Thursday, July 13, 10:55, Session A

Methods of Social Comparison in Games of Status    [pdf]

(joint work with Ed Hopkins (Edinburgh))

Abstract

This paper considers the effects of changes in the income distribution in an economy where agents’ utility depends both on consumption and on their rank in the distribution of consumption of a positional good. We introduce a new methodology to compare the behavior of agents that occupy the same rank in the two different income distributions but typically have different levels of incomes, and analyze equilibrium choices and welfare of every member of the society for continuous distributions with arbitrary, even disjoint, ranges. If an income transformation raises incomes at the lower end of the income distribution, the poor will typically be better off. But because such an income transformation also increases the degree of social competition, the middle class will typically be worse off - even if they have higher incomes as well. An increase in incomes can make all better off, but only if it is accompanied by an increase in income dispersion. Our new techniques highlight the importance of density of social space as we demonstrate that one can have an increase both in income and relative position but still be worse off.

Maurice Koster

University of Amsterdam

  Monday, July 10, 2:50, Session A

Consistent cost sharing and rationing    [pdf]

Abstract

A new concept of consistency for cost sharing models is discussed, analyzed, and related to the homonymous property within the rationing context. Central in the discussion is the Moulin-Shenker (1994) characterization of cost sharing mechanisms in terms of rationing methods. It is used to characterize the class of consistent incremental mechanisms, which includes most of the prevalent solutions such as average, serial, and Shapley-Shubik mechanisms.
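
For concreteness, here is a minimal sketch of the serial cost-sharing rule of Moulin and Shenker mentioned above, in its usual statement (agents ordered by demand); the quadratic cost function in the example is an assumption, and this sketch does not implement the paper's consistency construction.

def serial_cost_shares(demands, cost):
    """Serial cost sharing (Moulin-Shenker): `demands` is a list of individual demands,
    `cost` a nondecreasing cost function of total output. Returns shares in the
    original order of `demands`."""
    n = len(demands)
    order = sorted(range(n), key=lambda i: demands[i])   # indices sorted by demand
    q = [demands[i] for i in order]
    shares_sorted = []
    prev_s, prev_share, cum = 0.0, 0.0, 0.0
    for k in range(n):
        cum += q[k]
        s_k = cum + (n - k - 1) * q[k]                   # s_k = q_1 + ... + q_k + (n-k) q_k
        share = prev_share + (cost(s_k) - cost(prev_s)) / (n - k)
        shares_sorted.append(share)
        prev_s, prev_share = s_k, share
    shares = [0.0] * n
    for pos, i in enumerate(order):
        shares[i] = shares_sorted[pos]
    return shares

# Example with an assumed quadratic cost function; the shares sum to the total cost.
print(serial_cost_shares([1.0, 2.0, 4.0], cost=lambda x: x * x))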

Michal Krawczyk

University of Amsterdam

  Tuesday, July 11, 10:55, Session D

It hurts more to lose an unfair game. On dynamic psychological games of fairness.    [pdf]

Abstract

In this paper I present a new model aimed at predicting behavior in games involving risk. Departing from the standard consequentialist perspective, I look beyond sheer outcomes of interactions by incorporating also expected payoffs, given strategies. To account for this strategy-dependence, I make use of dynamic psychological game theory (Battigalli and Dufwenberg 2005). I assume that players maximize a motivation function depending on own payoff, actual share (i.e. the ratio of own payoff to the sum of all payoffs at a given endnode, a proxy for distributive justice) and expected share (i.e. the ratio of the expected own payoff to the sum of all expected payoffs, whereby the expectation is taken over all strategies of other players that admit the given endnode and all possible moves of nature). The latter is used as a proxy for procedural justice, as an unbiased random mechanism is a prototypical fair procedure. I assume that the motivation function is increasing in money and concave in actual and expected share. I also allow for an interaction between procedural and distributive justice, namely the marginal utility of actual share is non-increasing in expected share (see Brockner and Wiesenfeld (1996) for psychological evidence supporting this assumption). The model correctly predicts giving behavior (including the effect of beliefs) in the "solidarity game" of Selten and Ockenfels (1998). It also accounts for the importance of the intended offer in the randomly perturbed ultimatum game (Kramer et al. 1995) and for responses to randomly generated offers (Blount 1995, Bolton et al. 2005, Cox and Deck 2005), depending on (the direction of) the bias of the randomization procedure. No outcome-based models can explain these results and most of them are also difficult to account for in intention-based models. Some field applications, including an explanation of the link between equality of opportunities in a society and support for redistribution, are presented.

Vijay Krishna

Penn State University

  Wednesday, July 12, 9:45

Auctions with Resale

(joint work with Isa Hafalir)

Abstract

We study equilibria of first- and second-price auctions with resale in a model with independent private values. With asymmetric bidders, the resulting inefficiencies create a motive for post-auction trade. In our basic model, resale takes place via monopoly pricing---the winner of the auction makes a take-it-or-leave-it offer to the loser after updating his prior beliefs based on his winning. We show that a first-price auction with resale has a unique monotonic equilibrium. Our main result is that with resale, the expected revenue from a first-price auction exceeds that from a second-price auction. The results extend to other resale mechanisms: monopsony and, more generally, probabilistic k-double auctions. The inclusion of resale possibilities thus permits a general revenue ranking of the two auctions that is not available when these are excluded.

Takashi Kunimoto

McGill University

  Friday, July 14, 10:55, Session D

On the Non-Robustness of Nash Implementation    [pdf]

Abstract

I consider the implementation problem under complete information and employ Nash equilibrium as a solution concept. This paper asks for the maximal amount of incomplete information under which the canonical game form of Maskin (1999) remains robust. I establish a general impossibility result on robust Nash implementation. More precisely, under some mild condition on the social choice rules, one can construct a canonical perturbation of the complete information structure under which a sequence of Nash equilibria of the Maskinian game form converges to a non-Nash equilibrium outcome in the limit. Therefore, there is a precise sense in which the Maskinian game form is not robust to a very small amount of incomplete information.

Krishna Ladha

University of Mississippi

  Tuesday, July 11, 3:15, Session C

The Origin of Elections: An Economic Explanation

Abstract

What Madison said about liberty as being critical for the existence of factions could be said about decentralization of power as well. Decentralization of political power is essential for the self-preservation of competing factions; absolute power tends to be used to annihilate the opposition. The question that arises is this: of the various forms of decentralization, when is democracy with elections a preferred form? When do competing factions adopt democracy with elections as a means to sharing power rather than some other form of decentralization that does not involve elections? Decentralization in the absence of elections or voting is commonplace in large organizations and also within government. So why do we have elections? What fundamental purpose do elections serve that would not be served by alternative institutions? To answer these questions, this paper returns to 508 BCE, the year the Athenian democracy was established. At that time the contending factions had two main alternatives: decentralization through popular elections (the one they chose) and decentralization through bargaining, giving each faction the power to veto any change. This paper shows that elections are crucial as a means to decentralization for certain types of factions but not for others. The paper characterizes factions on the basis of attributes which are beyond the control of the factions. The characterization appears to correspond to reality both when decentralization is accompanied by elections and when it is not.

Ernest Kong-Wah Lai

University of Pittsburgh

  Friday, July 14, 4:00, Session B

Contest Architecture with Performance Revelation    [pdf]

(joint work with Alexander Matros)

Abstract

In this paper, we consider a two-stage elimination contest and ask how the revelation of the first-stage performance changes players' performance. First, we find a monotonic equilibrium. Second, we show that revelation of the first-stage performance always increases the expected individual and total effort in the first round and decreases the expected individual and total effort in the second round.

Claudia Landeo

University of Alberta

  Monday, July 10, 11:45, Session A

Split-Award Tort Reform, Firm's Level of Care, and Litigation Outcomes    [pdf]

(joint work with Maxim Nikitin)

Abstract

We investigate the effect of the split-award tort reform, where the state takes a share of the plaintiff's punitive damage award, on the firm's level of care, the likelihood of trial and the social costs of accidents. A decrease in the plaintiff's share of the punitive damage award reduces the firm's level of care and therefore, increases the probability of accidents. Conditions under which a decrease in the plaintiff's share of the punitive damage award reduces the probability of trial and the social cost of accidents are derived.

Stephan Lauermann

Bonn University

  Wednesday, July 12, 10:55, Session E

The Efficiency of Decentralized Trading    [pdf]

Abstract

In a decentralized market traders are matched into pairs and sellers make price offers. Traders have a finite life expectancy, exiting the market with a constant hazard rate d. With vanishing d it is shown that an equilibrium exists and that the market converges to the efficient competitive outcome. Additional assumptions that can be found in the literature and that are favorable to the efficient outcome are not needed.

Ron Lavi

California Institute of Technology

  Monday, July 10, 11:20, Session E

Online Ascending Auctions for Gradually Expiring Items    [pdf]

(joint work with Noam Nisan)

Abstract

We consider dynamic auction mechanisms for the allocation of multiple items. Items are identical, but have different expiration times, and each item must be allocated before it expires. Buyers are of dynamic nature, and arrive and depart over time. We are interested in situations where players act strategically and may mis-report their private parameters. Our goal is to design mechanisms that maximize the social welfare. We obtain three results. First, we design two detail-free auction mechanisms and prove that an approximately optimal allocation is obtained for a large class of ``semi-myopic'' selfish behavior of the players. Second, we provide a game-theoretic rational justification for acting in such a semi-myopic way. We suggest a notion of ``Set-Nash'' equilibria, where we cannot pin-point a single best-response strategy, but rather only a set of possible best-response strategies. We show that, in our setting, these strategies are all semi-myopic, hence our auctions perform well on {\em any} combination of these. Third, to further justify the shift to this new notion, we prove that no ex-post implementation can obtain a constant fraction of the optimum.

Duozhe Li

Chinese University of Hong Kong

  Friday, July 14, 4:50, Session B

Coalition-Proof Bargaining    [pdf]

Abstract

It is well-known that a natural generalization of the Rubinstein bilateral bargaining game to the N-player case leads to indeterminacy, by which every agreement can be sustained as a subgame perfect equilibrium outcome. Different authors have explored alternative bargaining procedures that give a unique SPE. A common feature among them is that the final agreement consists of a series of bilateral agreements that are reached either sequentially or simultaneously. In other words, unanimity is not respected in these procedures. It makes them vulnerable to collusion.

In this project we explore three different approaches. The first approach can be viewed as a refinement of the SPE of the generalized Rubinstein game. We make a “self-interest” restriction on strategies. A player’s strategy is self-interested if, whenever in the role of responder, he specifies a threshold value and accepts a proposal if and only if it gives him a share not lower than the threshold value. The self-interest restriction is much weaker than the restriction of stationarity. Surprisingly, there is only one SPE that satisfies it.

Then we propose a bargaining procedure that is similar to the one studied by Krishna and Serrano (1996). At each round, a player may leave the bargaining with the proposed share if it is unanimously endorsed by all players. The game is more complicated than KS’s game. Yet we manage to show that it also has a unique SPE, which generates the same outcome as that of KS’s game. Since unanimity is well respected, our bargaining procedure is robust to collusion.

We also consider a sequential demand game with commitment. In each round, players take turns announcing their demands. The game ends if it is feasible to grant all demands; otherwise the game proceeds to the next round, in which the order of announcements is rotated. Once a player has made a demand, he cannot subsequently increase it. We conjecture a unique and efficient SPE.

Ines Dagmar Lindner

Utrecht School of Economics

  Wednesday, July 12, 10:55, Session B

A Generalization of Condorcet’s Jury Theorem    [pdf]

Abstract

We extend Condorcet’s Jury Theorem (1785) to weighted voting games with voters of two kinds: a fixed (possibly empty) set of ‘major’ voters with fixed weights, and an ever-increasing number of ‘minor’ voters, whose total weight is also fixed, but where each individual’s weight becomes negligible. As our main result, we obtain the limiting probability that the jury will arrive at the correct decision as a function of the competence of the few major players. As in Condorcet’s result the quota q = 1/2 is found to play a prominent role.
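
A back-of-the-envelope Monte Carlo sketch of the setting described above (not the paper's analytical result): a few major voters with fixed weights plus a block of minor voters whose total weight is fixed, each voting correctly with her competence probability, and the quota q = 1/2 applied to total weight. All numbers are hypothetical.

import random

def correct_decision_prob(major_weights, major_p, minor_total_weight, minor_n,
                          minor_p, quota=0.5, trials=5000):
    """Estimate the probability that the weighted vote reaches the correct decision.
    Each voter votes correctly, independently, with her competence probability."""
    total_weight = sum(major_weights) + minor_total_weight
    minor_w = minor_total_weight / minor_n
    correct = 0
    for _ in range(trials):
        w = sum(wt for wt, p in zip(major_weights, major_p) if random.random() < p)
        w += sum(minor_w for _ in range(minor_n) if random.random() < minor_p)
        if w > quota * total_weight:
            correct += 1
    return correct / trials

# Hypothetical numbers: two major voters and many small ones.
print(correct_decision_prob(major_weights=[0.2, 0.1], major_p=[0.9, 0.7],
                            minor_total_weight=0.7, minor_n=200, minor_p=0.6))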

Zhen Liu

Stony Brook University

  Thursday, July 13, 11:45, Session B

On fair information disclosure considering asymmetric information and awareness

Abstract

The US Securities and Exchange Commission implemented Regulation Fair Disclosure in 2000. It requires that relevant information be made public as soon as a company wants to disclose its information to anyone.

We analyze the welfare effect of this regulation. In particular, we emphasize the economic implications of the unawareness of small investors. Under fair disclosure regulation, professional investors with full awareness do not acquire partial information publicly, in order to avoid others free-riding on their awareness. Thus, in contrast to standard models assuming full awareness, the optimality of Fair Disclosure may not hold in terms of the cost of capital and the cost of information acquisition.

The assumption of unawareness is not only supported by survey evidence, but also enables our model to explain interesting empirical findings in a way different from other papers on this topic.

Humberto Llavador

Universitat Pompeu Fabra

  Tuesday, July 11, 11:20, Session D

Voting with Preferences over Margins of Victory    [pdf]

Abstract

This paper analyzes a two-alternative voting model with the distinctive feature that voters have preferences over the support that each alternative receives, and not only over the identity of the winner. The main result of the paper is the existence of a unique equilibrium outcome in pure strategies with a very intuitive characterization: in equilibrium voters who prefer a higher support for one of the alternatives vote for such alternative. Its computation is equally simple: the equilibrium outcome is the unique fixed point of the 'connected' survival function associated to the distribution of the electorate. This characterization works for electorates with a finite number of citizens as well as with a continuum of agents, and for scenarios with and without abstention. Finally, 'strategic' voting (voting for the least preferred alternative) is common for a fraction of the electorate who favor electorally ``balanced" results.

Pei-yu Lo

Yale University

  Tuesday, July 11, 2:50, Session E

Common Knowledge of Language and Iterative Admissibility in a Sender-Receiver Game    [pdf]

Abstract

Cheap talk games usually possess a large class of equilibria. In particular, babbling equilibria always exist. In a babbling equilibrium, the Receiver always takes the same action, regardless of the message sent by the Sender. In this paper we show that this severe multiplicity of equilibria is caused largely by ignoring the role of literal meanings, because, for example, in traditional analysis the strategy where the word “yes” means “no” and the word “no” means “yes” is regarded as equally natural as the strategy where the word “yes” means “yes” and the word “no” means “no.” We propose a general framework as follows to formally model implications of common knowledge of language on cheap talk games. We first model language as a direct restriction on players’ strategy sets, assume that this language restriction is common knowledge, and then characterize the predictions of this restricted game under iterative admissibility (IA). We apply this framework to sender-receiver games a la Crawford and Sobel (1982), where the Receiver’s actions are linearly ordered. We incorporate two observations about natural language into the language assumption: 1) there always exists a natural expression to induce a certain action, if that action is indeed inducible by some message, 2) messages that are more different from each other induce actions that are weakly more different. This procedure eliminates outcomes where only a small amount of information is transmitted. Under certain conditions, all equilibrium outcomes are eliminated except the most informative one. However, the normal form procedure gives the highest priority to language, and hence may conflict with sequential rationality. To address this issue, this paper proposes an extensive form procedure and characterizes the solution.

Jingfeng Lu

National University of Singapore

  Thursday, July 13, 10:55, Session E

Auctions Design with Private Costs of Valuation Discovery    [pdf]

(joint work with Jingfeng Lu)

Abstract

This paper considers auction design in a general independent private value (IPV) setting where each potential bidder has a valuation discovery cost which is his private information. First, the revenue-maximizing auction generally involves an individual entry fee for each bidder which equals the hazard rate of his entry cost distribution, evaluated at the optimal entry threshold for him. The second-price auction with no entry fee and a reserve price equal to the seller's valuation remains the ex ante efficient auction. Second, even in the symmetric setting, it is in general an auction implementing asymmetric rather than symmetric entry across bidders that maximizes the total expected surplus or the seller's expected revenue. This result holds regardless of whether the entry costs are the bidders' private information or common knowledge. Third, if the distributions of entry costs are degenerate (common-knowledge costs), there is no loss of generality in considering entry patterns where every bidder participates with probability either 0 or 1, for the revenue-maximizing (also total-expected-surplus-maximizing) auction. The corresponding revenue-maximizing auction generally employs positive entry fees to extract the surplus of the participants.

Jingfeng Lu

National University of Singapore

  Friday, July 14, 4:50, Session A

When and How to Dismantle Nuclear Weapons    [pdf]

Abstract

This paper shows that allowing the seller the option of destroying the auctioned item improves the optimal auction when identity-dependent externalities exist between the seller and bidders. The necessary and sufficient conditions for item destruction to be optimal, both as a non-participation threat and as an allocation outcome, are also provided in this paper. A modified second-price sealed-bid auction with appropriately set nonnegative entry fees and reserve price is established as the optimal auction. The optimal auction induces full participation of bidders, and a feature of the optimal auction is that each losing bidder's payment includes a component (positive or negative) equal to the externality on him at the outcome of the auction. These components eliminate the impact of externalities on strategic bidding behavior. It thus follows intuitively that a second-price auction with these additional payments is optimal. The above findings hold when players have private information on the externalities they create for others, or when the externality to every bidder is proportional to the total payments of other bidders or of all bidders.

Xiao Luo

Institute of Economics, Academia Sinica

  Wednesday, July 12, 4:25, Session A

(Bayesian) Coalitional Rationalizability    [pdf]

(joint work with Chih-Chun Yang)

Abstract

We extend Ambrus' [QJE, 2006] concept of "coalitional rationalizability (c-rationalizability)" to situations where, in seeking mutually beneficial interests, players in groups (i) make use of Bayes rule in expectation calculations and (ii) contemplate various deviations - i.e. the validity of a deviation is checked not only against restricted subsets of strategies, but also against arbitrary sets of strategies. Following the Bernheim [Econometrica 52(1984), 1007-1028] and Pearce [Econometrica 52(1984), 1029-1051] approach, we offer an alternative notion of c-rationalizability suitable for such complicated interactions, and show that it possesses nice properties similar to those of conventional rationalizability. We also provide its epistemic foundation. JEL Classification: C70, C72, D81

Mohammad Mahdian

Microsoft Research

  Friday, July 14, 11:20, Session C

Marriage, Honesty, and Stability    [pdf]

(joint work with Nicole Immorlica)

Abstract

Many centralized two-sided markets form a matching between participants by running a stable marriage algorithm. It is a well-known fact that no matching mechanism based on a stable marriage algorithm can guarantee truthfulness as a dominant strategy for participants. However, as we will show in this paper, in a probabilistic setting where the preference lists of one side of the market are composed of only a constant (independent of the size of the market) number of entries, each drawn from an arbitrary distribution, the number of participants that have more than one stable partner is vanishingly small. This proves (and generalizes) a conjecture of Roth and Peranson. As a corollary of this result, we show that, with high probability, the truthful strategy is the best response for a given player when the other players are truthful. We also analyze equilibria of the deferred acceptance stable marriage game. We show that the game with complete information has an equilibrium in which a (1-o(1)) fraction of the strategies are truthful in expectation. In the more realistic setting of a game of incomplete information, we will show that the set of truthful strategies form a (1+o(1))-approximate Bayesian-Nash equilibrium. Our results have implications in many practical settings and were inspired by the work of Roth and Peranson on the National Residency Matching Program.

Michael Mandler

Royal Holloway College, University of London

  Thursday, July 13, 4:35, Session A

Strategies as states    [pdf]

Abstract

We define rationality and equilibrium when states specify agents’ actions and agents have arbitrary partitions over those states. Although it has been suggested that this natural modeling step necessarily leads to paradox, we argue that Bayesian equilibrium is well-defined and that natural conditions on information partitions eliminate any difficulties. The potential problem is that when agents know the equilibrium strategies of other players, an agent i’s move can reveal information to i about how other agents move. Specifically, if j’s partition informs j of i’s move and i knows j’s strategy, then the Bayesian inference that i makes about j’s move will vary as a function of i’s own move. It then follows that i can rationally play a dominated action. We categorize different routes by which the play of dominated actions can arise and specify plausible conditions that rule them out. But these conditions fall short of assuming that agents learn nothing substantive from their moves. Existence of equilibrium is more complicated than in standard game theory and there are robust nonexistence examples. But ε equilibria exist in a large class of cases and models with the more important information structures necessarily possess equilibria.

Alessandro Marchesiani

University of Tor Vergata

  Tuesday, July 11, 2:50, Session C

Search, bargaining and prices in an enlarged monetary union    [pdf]

(joint work with Pietro Senesi)

Abstract

This paper studies the existence and characterization of monetary equilibria of an enlarged monetary union within a search model with divisible commodities. An unbiased degree of integration between each member-country pair ensures the existence of accession equilibria, and is a necessary and sufficient condition both for the two monies to be perfect substitutes for each country's residents and for no arbitrage to exist from using the same money in different countries. Furthermore, monies are perfect substitutes within each single participating country in every accession equilibrium. While prices in each country are increasing in the amount of money issued, they are decreasing in the degree of integration between any country pair.

Cesar Martinelli

ITAM

  Monday, July 10, 4:05, Session A

Rational Ignorance and Voting Behavior    [pdf]

Abstract

We model a two-alternative election in which voters may acquire information about which is the best alternative for all voters. Voters differ in their cost of acquiring information. We show that as the number of voters increases, the fraction of voters who acquire information declines to zero. However, if the support of the cost distribution is not bounded away from zero, there is an equilibrium with some information acquisition for arbitrarily large electorates. This equilibrium dominates in terms of welfare any equilibrium without information acquisition--even though generally there is too little information acquisition with respect to an optimal strategy profile.

Ana Paula Martins

Universidade Catolica Portuguesa

  Friday, July 14, 5:15, Session B

Ideals in Sequential Bargaining Structures    [pdf]

Abstract

This note suggests possible extensions of the baseline Rubinstein sequential bargaining structure - applied to the negotiation of stationary infinitely termed contracts - that incorporate a direct reference to the “ideal” utilities of the players. This is a feature of the Kalai-Smorodinsky cooperative solution – even if not of the generalized Nash maximand; it is usually not encountered in non-cooperative equilibria.

Firstly, it is argued that bargaining protocols different from those conventionally staged are able to incorporate temporary all-or(and)-nothing splits of the pie. We advance scenarios where such episodes are interpreted either as out-of-bargaining war or unilateral appropriation events, or as free experience contracts.

Secondly, we experiment with some modifications to the Rubinstein infinite horizon paradigm allowing for mixed strategies under alternate offers, and matching or synchronous decisions in a simultaneous (yet, discrete) bargaining environment. We derive solutions where the reference to the winner-takes-it-all outcome arises as a parallel – out-of-the-protocol - outside option to the status quo point.

In some cases, we strived to derive the limiting maximand for instantaneous bargaining.

Ana Paula Martins

Universidade Catolica Portuguesa

  Wednesday, July 12, , Session A

Calls and Couples: Communication, Connections, Joint Consumption and Transfer Prices    [pdf]

Abstract

This research provides a formal characterization of general equilibrium and efficient allocation in an exchange economy where individuals value a pure private good and mixed one(s), the fractions of which must be shared with one and only one other individual in the community. Such a “shared” good, not necessarily attached to an externality (both individuals may have to pay or spend resources to enjoy it), involves joint consumption and reproduces private calls, one-to-one communication or information sharing. The initiating, “proposing”, party is identified, is (potentially) not irrelevantly valued by individuals, and there is continuous veto power at the end-side of a match. A decentralized equilibrium requires two general prices, adding up to a uniquely determined full price, and pair- (and direction-) specific transfer prices between intervening consumers for the shared good. Efficiency requires the Samuelson condition over marginal utilities.

Agent multiplicity, through utility patterns and corner solutions, sheds light on matchmaking and mating. Specific functional forms (two- and three-stage CES special cases, allowing for taste for variety as well as for unicity) generate interpretable conclusions, namely regarding the qualification of assortative mating.

A contrast with a multiple-external-effect good (one-to-many communication; or a good shared by a fixed number, more than two, of individuals; common property) and with a pure public good is also provided. Whereas paired consumption with end-point specificity generates, under reasonable assumptions, a unique decentralized equilibrium solution supporting an efficient allocation, multiple-agent sharing among more than two individuals and individual types requires, along with excludability, perfect differentiation of a larger number of consumption (partnership) roles.

The principles behind the theory also apply to input and cost sharing and pricing in partnerships, co-operative societies and joint ventures.

Andreu Mas-Colell

Universitat Pompeu Fabra

  Monday, July 10, 5:55

Multilateral bargaining from the strategic form

(joint work with Sergiu Hart)

Michael Maschler

Hebrew University of Jerusalem

  Wednesday, July 12, 11:50

Solved, Partly Solved and Not Yet Solved Issues in Cooperative Game Theory.

Abstract

As the title suggests, I will describe several issues, as time permits, covering challenging problems that have occurred to me over the years. Topics included are "coalition formation", "meaningful interpretation of some solution concepts", and "which solution concepts to use in any particular case".

Virginie Masson

University of Pittsburgh

Location, Information and Coordination    [pdf]

(joint work with Alexander Matros)

Abstract

In this paper, we consider a finite population of boundedly rational agents whose preferences differ. The interaction level among agents allows us to partition the population into local networks. In each local network, there exists a fixed agent, as defined by Glaeser et al., who shares, directly or indirectly, her information with all agents within the local network. Time is discrete, and in each period agents are paired to play a battle-of-the-sexes game. We show that in the short run each fixed agent plays a particular strategy, but only neighboring fixed agents need to coordinate on the same strategy. In the long run, however, all fixed agents coordinate on the same strategy, leading to a uniform convention, as defined by Young. Our main result shows that location leads to information access, which in turn leads to coordination. In particular, it shows that the outcome that prevails in a population of heterogeneous agents facing asymmetric information is decided by those agents who share their information most widely.

Laurent Alexandre Mathevet

California Institute of Technology

  Monday, July 10, 10:55, Session A

Nomination Processes and Policy Outcomes    [pdf]

(joint work with Matthew Jackson, Kyle Mattes)

Abstract

We model and compare three different processes by which political parties nominate candidates for a general election: nominations by party leaders, nominations by a vote of party members, and nominations by a spending competition among potential candidates. We show that in equilibrium, non-median outcomes can result when two parties compete using nominations via any of these processes. We also show that more extreme outcomes can emerge from spending competition than from nominations by votes or by party leaders. When voters (and potential nominees) are free to switch political parties, then median outcomes ensue when nominations are decided by a vote but not when nominations are decided by spending competition.

Alexander Matros

University of Pittsburgh

  Monday, July 10, 3:15, Session D

Contest when the winner gets her effort reimbursed    [pdf]

Abstract

We analyze n-player contests with one main prize. Suppose that the design of the contest is fixed, but the contest designer can transfer players' contributions from one player to another. How can the designer extract the most rent from the players? We find all such mechanisms if players have the same value for the prize. One of these mechanisms is the contest in which the winner gets her effort reimbursed.
We analyze in detail contests in which the winner gets her effort reimbursed. First, we consider the stochastic (Tullock) asymmetric model. All equilibria in pure strategies are found. Equilibria can be of two types: i-type equilibria (only player i exerts (high) positive effort and all other players exert zero effort) and internal-type equilibria (at least two players exert positive effort). Some n-player contests can have an internal equilibrium where all players exert positive effort. The simplest example of this kind is a contest where all players have the same value for the main prize. We demonstrate that the players' expected payoffs are zero in any internal equilibrium, and that a higher-value (stronger) player always exerts less effort than a lower-value (weaker) player and therefore has a lower chance of winning the contest in any internal equilibrium.
Second, we consider the deterministic model. The players' valuations for the main prize are commonly known among the players. Since the players' strategy space is unbounded, the existence of pure- and mixed-strategy equilibria is not guaranteed by any of the existing theorems for discontinuous games, since they all rely on the compactness of the strategy space. We derive a class of mixed-strategy equilibria. These equilibria have properties similar to the internal equilibria in the stochastic model: the players' expected payoffs are zero in any mixed-strategy equilibrium, and a higher-value (stronger) player exerts less expected effort than a lower-value (weaker) player and therefore has a lower expected chance of winning the auction.
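As a quick numerical illustration of the zero-payoff property claimed for internal equilibria in the symmetric stochastic case, the following minimal sketch assumes a standard Tullock lottery success function and a payoff in which the winner's effort is returned; these functional forms, the candidate effort level v/(n-1), and all parameter values are illustrative assumptions rather than the paper's exact specification.

# Minimal sketch (assumed specification, not taken from the paper):
# n symmetric players, prize value v, efforts e_1,...,e_n, win probability
# p_i = e_i / (e_1 + ... + e_n), and payoff u_i = p_i * (v + e_i) - e_i,
# i.e., the winner's effort is reimbursed.

def payoff(own_effort, rival_efforts, v):
    total = own_effort + sum(rival_efforts)
    if total == 0.0:
        return 0.0                      # convention: nobody exerts effort, no payoff
    p_win = own_effort / total
    return p_win * (v + own_effort) - own_effort

def check_symmetric_candidate(n=3, v=1.0, grid_steps=20000):
    # Candidate symmetric internal effort: the rivals' total effort then equals v,
    # and own payoff simplifies to e * (v - v) / (e + v) = 0 for every effort e,
    # so the candidate is a (weak) best response with zero expected payoff.
    e_star = v / (n - 1)
    rivals = [e_star] * (n - 1)
    eq_payoff = payoff(e_star, rivals, v)
    best_deviation = max(payoff(k * 3 * v / grid_steps, rivals, v)
                         for k in range(grid_steps + 1))
    return e_star, eq_payoff, best_deviation

if __name__ == "__main__":
    e_star, eq_payoff, best_dev = check_symmetric_candidate()
    print(f"candidate effort      : {e_star:.4f}")
    print(f"payoff at candidate   : {eq_payoff:.6f}")   # approximately 0
    print(f"best deviation payoff : {best_dev:.6f}")    # approximately 0 as well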

Ana Mauleon

Facultés Universitaires Saint-Louis

  Tuesday, July 11, 11:20, Session E

Contractually Stable Networks

(joint work with J.F. Caulier, J.J. Sempere-Monerris and V. Vannetelbosch)

Abstract

The aim of this paper is to develop a theoretical framework that allows us to study which bilateral links and coalition structures are going to emerge at equilibrium. We define the notion of Coalitional Network to represent a network and a coalition structure, where the network specifies the nature of the relationship each individual has with his coalition members and with individuals outside his coalition. A new solution concept is introduced: contractual pairwise stability. The idea of contractual pairwise stability is that adding or deleting a link needs the consent of coalition partners. Moreover, the formation of new coalition structures needs the consent of original coalition partners.

George McMillan

Impact Analytics

  Monday, July 10, , Session D

An Integrated Causal Model for the Social and Psychological Sciences    [doc]

Abstract

This paper introduces the methodology to create a baseline equation for the philosophical and social sciences in the behavioral-political-economic-demographic sequence. It shows that the two major political economic philosophies (Hume-Smith and Marx-Engels) were systematized into competing integrated three-dimensional behavioral-political-economic models. It argues that Hume-Smith’s empathy-sympathy behavioral assumptions are a sufficient starting point to create the integrated causal model sought by Tooby and Cosmides. The author then shows that the prerequisite advances in psychology and demographic studies now exist to generate the universal economic theory sought by von Neumann-Morgenstern and the integrated behavioral-economic method of Camerer, Loewenstein and Rabin – a psychological (i.e., behavioral) social economic model. By updating Hume-Smith’s work with a modern understanding of psychology, as presented by Fromm and others, a new, integrated societal model as postulated by Harsanyi can be created that intertwines the social and psychological sciences. The author argues that this fundamentally psychology-based model can also serve as a baseline equation for all social sciences, as desired by Leibniz-Wolff, Kant, and Mach, as well as the ahistorical philosophic model noted by Husserl, Heidegger, Tillich, and Strauss. The author concludes with a discussion of the necessary next steps to generating a detailed model that fuses these disciplines.

Jean-Francois Mertens

Université catholique de Louvain

  Monday, July 10, 9:00

Intergenerational Equity and the Discount Rate for Cost-Benefit Analysis.    [pdf]

Abstract

Current OMB guidelines use the interest rate as a basis for the discount rate, and have nothing to say about an intergenerationally fair discount rate. A traditional utilitarian approach leads to values for the latter that are too high, over a wide range. We propose to apply Relative Utilitarianism to derive the discount rate, and find that it should equal the growth rate of real per-capita consumption, independent of the interest rate.

Matteo Migheli

University of Torino and Catholic University of Leuven

  Tuesday, July 11, 3:15, Session E

The Importance of Formal and Informal Networks on Generalized Trust in Flanders: an Experimental Approach to Social Capital    [pdf]

Abstract

This paper analyzes how social capital affects people’s behaviour. In particular, we know that in several experiments the Nash equilibrium is almost never observed. We consider here a trust game, and the deviations from the Nash equilibrium. Our indicator for social capital is composed of memberships in voluntary organizations (formal networks), to which we add time spent out with friends, phone calls and active Internet usage (informal networks). These last components constitute our innovative contribution to classical measures. Subjective ethnic affiliation is also taken into account. We implemented our experiment at the Catholic University of Leuven. Participants were all undergraduate students, recruited at the beginning of a regular class and asked by the professor to participate in a project (not defined more precisely). No further intervention by the professor took place. We ended up with 129 pairs. Each of them received written instructions, and they were told that this information was common knowledge and identical for both groups. First they played the game and then filled in the questionnaire. We investigate the link between the declared stock of social capital and the observed behaviour through both Hotelling’s tests and econometrics (GMM, due to simultaneity). Our fundamental hypothesis to be tested is that higher values of social capital (either formal or informal) correspond to higher trust and reciprocity, and thus generate (stronger) deviations from the Nash equilibrium. When examining these, we also account for the possible existence of anger. Our analysis suggests that social capital contributes to creating deviations from the Nash equilibrium. In particular, it appears to increase the amounts of money passed in both directions.

Fausto Mignanego

Catholic University of Milan

  Monday, July 10, 2:50, Session C

American Options in Incomplete Markets    [pdf]

(joint work with Sabrina Mulinacci)

Abstract

It is well known that in an incomplete market, any contingent claim has a set of arbitrage-free prices given by an interval of the real line. Depending on the behaviour of the buyer, his exercise time may be different for different arbitrage-free prices. In the paper, we study what happens if the seller knows perfectly the stopping time chosen by the buyer for each possible price. We show that it is convenient for both the buyer and the seller if the seller knows perfectly the stopping strategy of the buyer. The main contribution of the work is in fact in showing that the Stackelberg game is an appropriate tool for studying American options in an incomplete market.

Nolan Miller

Harvard University

  Tuesday, July 11, 11:20, Session C

Efficient Design with Multidimensional Continuous Types and Interdependent Valuations    [pdf]

(joint work with Scott Johnson, John Pratt, Richard Zeckhauser)

Abstract

We consider the mechanism design problem when agents' types are multidimensional and continuous, and their valuations are interdependent. If there are at least three agents whose types satisfy a weak correlation condition, then for any decision rule there exist balanced transfers that render truthful revelation a Bayesian ε-equilibrium. A slightly stronger correlation condition ensures that balanced transfers exist that induce a Bayesian Nash equilibrium in which agents' strategies are nearly truthful.

Daniel Monte

Yale University

  Tuesday, July 11, 11:45, Session B

Reputation and Bounded Memory    [pdf]

Abstract

This paper is a study of bounded memory in a reputation game, in particular a repeated cheap-talk game with incomplete information about the sender’s type. The receiver is assumed to be constrained to a finite number of memory states, and the memory rule is itself part of his strategy. First, we show that in this reputation game the updating rule will be rather simple: monotonic and increasing. Second, we show that when memory constraints are severe the updating rule will involve randomization before reaching the extreme states. The key intuition is that in a two-player game with incomplete information, randomization is used as a memory-saving device and also as a strategic element: to test the opponent and give incentives for types to be revealed early in the game. The results in this paper extend to general reputation games where the normal type is a zero-sum player and the commitment type plays a pure strategy.

Joao Montez

 

  Tuesday, July 11, 3:40, Session A

Downstream mergers and producer's capacity choice: why bake a larger pie when getting a smaller slice?    [pdf]

Abstract

We study the effect of downstream horizontal mergers on the upstream producer's capacity choice. Contrary to conventional wisdom, we find a non-monotonic relationship: such mergers induce a lower upstream capacity if the cost of capacity is high; a higher upstream capacity if this cost is low. We explain the result by decomposing the total effect into two distinct effects: a change in hold-up and a change in bargaining erosion.

Antonio Morales

University of Malaga

  Monday, July 10, 4:40, Session E

Complexity constraints in two armed bandit problems: an example    [pdf]

(joint work with Tilman Börgers)

Abstract

This paper derives the optimal strategy for a two-armed bandit problem under the constraint that the strategy must be implemented by a finite automaton with an exogenously given, small number of states. The idea is to find learning rules for bandit problems that are optimal subject to the constraint that they must be simple. Our main results show that the optimal rule involves an arbitrary initial bias, and random experimentation. We also show that the probability of experimentation need not be monotonically increasing in the discount factor, and that very patient decision makers suffer almost no loss from the complexity constraint.

Humberto Moreira

EPGE

  Thursday, July 13, 4:00, Session C

Common Agency with Informed Principals    [pdf]

(joint work with David Martimort)

Abstract

We analyze a symmetric common agency game between two privately informed principals. Principals offer contributions to a common agent who produces a public good on their behalf. Asymmetric information introduces incentive compatibility constraints which replace the requirement of truthfulness found in the earlier common agency literature under complete information. There exists a large class of differentiable equilibria which are ex post inefficient. Inefficiencies come from the fact that each principal wants to reduce public good production in order to induce the agent to reveal the types of the other principals, which the agent has learned from observing their contributions. This screening problem in games with voluntary contributions highlights a new source of inefficiency in public good provision which differs from the usual free-riding problem. For distributions having a linear hazard rate, closed-form equilibria are obtained. Those equilibria are interim efficient for some distributions of social weights on the different types of principals. Introducing asymmetric information on the agent’s cost of producing the public good might also help to select a unique equilibrium under some circumstances.

John Morgan

University of California, Berkeley

  Thursday, July 13, 4:35, Session D

Efficient Information Aggregation with Costly Voting

Abstract

In this paper, we analyze the common values case under costly voting. In our basic model, voters each have privately known and independently and identically distributed voting costs. In a variation of the model we analyze the case where voters have commonly known, identical and fixed voting costs. As we show, the distinction matters to the informational efficiency of an election when voters are strategic. In particular,
1. With private voting costs, majority-rule elections are informationally efficient in the limit: as the size of the electorate grows large, the expected number of voters is infinite and so the correct candidate is elected with probability one.
2. With common and fixed voting costs, majority-rule elections are informationally inefficient even in the limit: as the size of the electorate grows large, the expected number of voters converges to a finite limit and so the wrong candidate is elected with positive probability.

Scott Moser

Carnegie Mellon University

  Monday, July 10, 11:20, Session D

Efficiency, Networks and Evolution of Conventions    [pdf]

(joint work with Alexander Matros)

Abstract

In this paper we present a simple evolutionary model of mobile agents where different 2x2 games exist at different locations. The role of information, mobility, and the payoff structure is examined for achieving global efficiency. We apply our results to network design and institutional choice. Our model is an extension of Ely's (2002) model in two directions: (1) we explicitly consider different network structures; (2) we model heterogeneous payoff structures. While reproducing previous results, we find several new results as well. Consistent with the previous literature, we find that players coordinate in each game. However, we differ from earlier models by showing that not all short-run predictions are conventions. Additionally, we find that more information purifies the short-run predictions. In the long run, we depart from the literature by showing under what conditions the globally efficient outcome is stable. That is, we observe Pareto dominant outcomes being stochastically stable in some informational settings, and not in others. In addition, we examine the role of the spread of information on efficiency. As it turns out, there is a trade-off between knowledge of the payoff structure (that is, knowledge of which game is at which location) and allowing the spread of information: a social planner may have minimal knowledge about the details of the payoff structure and still achieve 'near-efficiency' by allowing information to flow maximally between locations. Alternatively, we show how a planner may achieve near-efficiency using a sparse network, provided she has sufficient knowledge about the payoff structure.

Awni Mufleh

Associate Research Scientist

  Monday, July 10, 10:55, Session C

Decision Making in Presence of Risk: Prospect Theory Applications in Kuwaiti Students Case    [doc]

(joint work with May AL-Asfoor)

Abstract

This paper uses prospect theory as a descriptive model of decision making under risk to assess the decisions made and the behavior of a sample of 36 economics students from an intermediate microeconomics class, taken as a case study at the Gulf University for Science and Technology (GUST), Kuwait, in December 2005, in a two-session experiment (each session consisting of 18 students). The main objective of the paper is to investigate whether the subjects (students) were risk averse, risk neutral, or risk loving, and whether student behavior is consistent with the prospect theory assumption that students tend to overweight small probabilities and underweight moderate and high probabilities. The paper argues that if GUST students follow the prediction of prospect theory, then they will have an irrational tendency to be less willing to gamble with profits than with losses: selling quickly when they earn profits but not selling if they are running losses. The experiment in session one revealed that the majority of subjects (students) were risk averse, because they over-weighted the outcomes of the simple lottery over compound lotteries with the same expected value: LS = (0, 2; 1/2, 1/2) relative to LC1 = (0, 4; 3/4, 1/4) or LC2 = (0, 12; 11/12, 1/12). The students in this session comply with the certainty effect, which contributed to risk aversion in choices involving sure gains. And they were risk seeking with the negative lotteries (i.e., they preferred the compound lottery to the simple lottery with larger risk).
In session two, 18 students were required to make a decision between the following: LS = (0, -2; 1/2, 1/2); LC1 = (0, -4; 3/4, 1/4); LC2 = (0, -12; 11/12, 1/12). The majority of subjects (students) in this session preferred LC2 over LC1 over LS; the subjects were risk seeking in choices between negative prospects, and also comply with the certainty effect, so they were risk seeking in choices involving sure losses. Both the preferences between positive prospects and the preferences between negative prospects violate the expectation principle in the same manner. The paper finds that the majority of the students tried to maximize their utility by minimizing the risk involved in each lottery.
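For reference, the stated equality of expected values across the session-one lotteries can be checked directly (the session-two lotteries are their mirror images, each with expected value -1):
\[
E[L_S] = \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot 2 = 1, \qquad
E[L_{C1}] = \tfrac{3}{4}\cdot 0 + \tfrac{1}{4}\cdot 4 = 1, \qquad
E[L_{C2}] = \tfrac{11}{12}\cdot 0 + \tfrac{1}{12}\cdot 12 = 1.
\]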

Felix Munoz-Garcia

University of Pittsburgh

  Tuesday, July 11, 11:45, Session D

Information gathering in common agency games    [pdf]

Abstract

Most advances in the common agency literature assume that the agent acquires her private information before the contract is offered by the principals. I find this assumption unrealistic, since acquiring this information a long time before the contract is even offered is normally very costly for the agent.
I propose a common agency model where the agent can strategically decide whether or not to gather information before receiving the contract offers from two principals. The results of this paper show some differences from the standard common agency model, resulting from allowing the agent to gather such information, and relevant differences with respect to information-gathering models, due to the introduction of a second principal.

Felix Munoz-Garcia

University of Pittsburgh

  Wednesday, July 12, , Session B

'Rising stars' should shine    [pdf]

Abstract

The literature on sabotage in contests allows the players (workers) to exert both productive and negative (sabotaging) activities. By doing so, promotion tournaments in which workers' promotion is based on relative performance are enriched with the common practice of sabotage in organizations. This literature assumes, however, that workers' abilities in productive activities are common knowledge among the players.
In contrast, I assume that a worker's productive ability is unknown to his job colleagues. I construct a first stage where workers signal their productive ability to their job colleagues, and a second stage where workers are called to participate in a promotion tournament with sabotage. In this paper, I show that, by creating this first stage, the tournament designer (the firm manager) can minimize the occurrence of the so-called "rising star" paradoxes, where the ablest worker is not in the end the one promoted. Instead, if there is at least one able worker in the firm, the design of our tournament ensures that he will be the player with the highest probability of being promoted. In other words, the "rising star" of the firm should finally shine.

Roger Myerson

University of Chicago

  Wednesday, July 12, 9:00

On the Foundations of Institutions    [pdf]

Tymofiy Mylovanov

University of Bonn

  Tuesday, July 11, 4:05, Session E

Negative value of information in an informed principal problem with independent private values    [pdf]

Abstract

This note demonstrates that in a principal-agent environment with independent private values and generic payoffs, the mechanism implemented by an informed principal is not ex-ante optimal. This result implies that in (generic) settings where the principal can covertly acquire private information before selecting a mechanism, she will fail to select an ex-ante optimal mechanism. Furthermore, the principal is indifferent between becoming informed before or after selecting a mechanism.

Keisuke Nakao

Boston University

  Thursday, July 13, 10:55, Session C

The Construction of Social Orders and Inter-Ethnic Conflict    [pdf]

Abstract

This paper seeks to explain variations in inter-ethnic conflict by variations in intra-ethnic 'social order', represented by the effectiveness of intra-group sanctions for inter-ethnic transgressions. This is in contrast to the widely accepted (e.g., Fearon and Laitin (1996)) approach which relates inter-ethnic conflict to inter-ethnic social order, represented by lack of information among members in the victim's group concerning the identity of transgressors, as a consequence of which transgressions result in all-out inter-ethnic conflict. In contrast, in our theory inter-ethnic transgressions are disciplined primarily by intra-group sanctions, with inter-ethnic conflict resulting only when these intra-group sanctions fail to be implemented. The success of inter-ethnic cooperation then hinges heavily on the efficacy of intra-group policing. As a consequence, groups with weak internal social controls tend to have more frequent and longer disputes with other groups.

John Nash

Princeton University

  Thursday, July 13, 5:45

Continued Studies of the Agencies Method for Modeling Coalitions and Cooperation in Games

Abstract

My work in this area really goes back, in inspiration, to 1996, when I thought of the basic idea. And since then I have been involved in concretely developing and testing this method (or approach) through computations relating to mathematical models of games of at first two and then three players.
The scheme of agencies allows that a coalition of two of the players can be effectively formed if either of the two elects the other player to become the authorized agent representing the interests of both of the two players. In the case of a game of two players that elected agent would already represent "the grand coalition" (or all the cooperative possibilities for the game). And in the case of a game of three or more players we naturally introduce the possibility for a second stage of cooperative coalescence so that an elected agent-player can either elect to accept representation by some other agency or, alternatively, that agent-player may be elected to also represent another player or agency.
We model the game situation as that of a repeated game (as if the game is infinitely or indefinitely repeated with no "discounting") and we seek to find an equilibrium in that context. So this equilibrium concept is quite parallel to the concept of equilibrium under evolutionary pressures (or "natural selection") in Nature. Then we seek to find for the players, which are parties that have quite limited actual opportunities for bargaining and negotiative actions (because they have nothing like the full range of human verbal communications possibilities that they could use in the process of optimizing), a type of equilibrium such that each of the game parties is not capable of making any refinement of his/her strategic behavior pattern that would improve his/her payoff expectations prospect.
The mathematical work, to find the equilibria (and to find how they vary in three-person games as the strengths of the sub-coalitions of two players are varied), becomes a matter of complex mathematical computations to find numerical approximate solutions of equations with as many as 42 variables, and this work is made feasible by modern computer resources and by software like Mathematica.
In principle, this work is equivalent to carrying out an exhaustive experiment on the behavior of specialized robot players that are designed to effectively bargain or negotiate for obtaining favorable outcomes in terms of “the division of the spoils” as regards the payoffs realized from the cooperative game.
Now I am moving on into the consideration of a variant modeling for the same type of three person games (which are “CF” described entirely by the listing of a “characteristic function” giving the payoffs realizable by the separate action of the members of any sub-coalition and that available to the “grand coalition” of all players). This is a model involving “attorney-agents” that enter into the game like new players and represent specific coalitions. They are entirely robotic in their motivation and that simplifies the analysis. So we hope to obtain very illuminating data valuable for making comparisons of different model variants.
And one of the paradoxes connected with the possibility of modeling the coalition formation actions with the robotic attorney-agents is that this can lead to an actual reduction in the number of strategic parameters that would need to be determined to calculate an equilibrium. (When we first considered the possibility of attorney-agents it SEEMED as if using them would lead to greatly more complex models!)

Abraham Neyman

Hebrew University of Jerusalem

  Friday, July 14, 9:00

Repeated Games with Bounded Complexity

Abstract

The talk surveys recent developments and open problems in the theory of repeated games with feasible strategy sets that are implementable by finite-state machines.

Thang Nguyen

University of Texas at Austin

  Tuesday, July 11, 4:05, Session B

Technological Progress in Races for Product Supremacy    [pdf]

Abstract

How does market organization affect quality innovation efforts and social welfare? Three stochastic dynamic market structures considered are monopoly, duopoly, and social planning. Products can be either linearly or nonlinearly substitutable. The introduction of a step function allows richer innovation strategies. First, given nonlinear substitution, a duopoly may follow an unbalanced evolution path and have a technology frontier not dominated by that in social planning. This result does not hold for the linear substitution case. Second, ex ante and long-run welfare values are always the highest in social planning and the lowest in monopoly. Thus, policies should encourage static and dynamic competition.

Muriel Niederle

Stanford University

  Wednesday, July 12, 4:00, Session D

Signaling in Matching Markets    [pdf]

(joint work with Peter Coles)

Abstract

We evaluate the effect of preference signaling in two-sided matching markets. Firms and workers have strict preferences over members of the other side of the market. Each firm makes an offer to exactly one worker. Workers select the best offer from those available to them. The short time frame produces congestion, and the market fails to reach a stable outcome. But if workers are able to signal their preferences (i.e., their top-choice firm), firms may use this information as guidance for their offer choices. We find that in this signaling setting, it is optimal for firms to make use of these signals in the form of cutoff strategies. However, making use of signals imposes a negative externality on other firms. We find that introducing a signaling technology increases the average number of matches, one possible measure of social welfare.

Ricardo Nieva

University of New Brunswick Saint John

  Monday, July 10, , Session E

Sequentially Nash Credible Joint Plans in Strategic Networks    [pdf]

Abstract

I define Sequentially Nash Credible Joint Plans (SN), an extension of neologism-proofness and a refinement of subgame-perfect publicly correlated equilibrium. It applies to three-player network games with cheap talk in which pairs, in a finite rule of order, select communication links and actions, have a pre-existing common language, and bargain so that the literal meaning of unexpected, "non-Nash", simultaneous messages is clear, as they signal bilateral cooperation. Multiplicity is instead obtained in standard networks with bargaining. SN are an alternative to evolutionary equilibrium selection in strategic network games. Uniqueness or existence of Ferreira's related "non-bargained" communication-proof equilibrium is often not the case. As pairs can threaten credibly with the unique Harsanyi-Selten (HS) payoff and form a different link, the smoothed Nash demand game is SN's novel non-cooperative foundation. In contrast to HS, a companion paper finds SN in a variation of the Aumann-Myerson game with infinite action sets; in the simple majority game, the nucleolus is predicted. A version of SN "should" exist for n-person games.

Ricardo Nieva

University of New Brunswick Saint John

  Monday, July 10, 10:55, Session D

An Analytical Solution for Networks of Oldest Friends    [pdf]

Abstract

Sequentially Nash Credible Joint Plans (SN) as in Nieva (February 2006) are shown to exist also whenever action sets are infinite, in a modification of all three-player Aumann-Myerson (1988) (A-M) bilateral link formation games. In contrast to A-M, binding transfers can occur if pairs match pairs of non-negative payoff proposals out of the sum of their Myerson (1977) values in the prospective network. Pairs can also enunciate simultaneous negotiation statements about payoff-relevant play and bargain cooperatively over payoffs induced by tenable and reliable joint plans, where the disagreement plan suggests link rejection. An SN is for the most part the one that credibly suggests (and so follows through on) the Nash solution in the bargaining game. In contrast to the bargaining network literature and the transfer game in Bloch and Jackson (2005), the game here is bilateral and sequential and has unique payoff predictions. In strictly superadditive cooperative games the complete graph never forms. The simple majority game yields the nucleolus in coalition structure.

Maxim Nikitin

ICEF, SU-HSE (Moscow)

  Thursday, July 13, 11:20, Session A

Deterrence, Lawsuits, and Litigation Outcomes Under Court Errors    [pdf]

(joint work with Claudia M. Landeo, Scott Baker)

Abstract

This paper presents a strategic model of liability and litigation under court errors. Our framework allows for endogenous choice of level of care and endogenous likelihood of filing and disputes. We derive sufficient conditions for a unique universally-divine mixed-strategy perfect Bayesian equilibrium under low court errors. In this equilibrium, some defendants choose to be grossly negligent; some cases are filed; and, some lawsuits are dropped, some are resolved out-of-court and some go to trial. We find that court errors in the size of the award, as well as damage caps and split-awards, reduce the likelihood of trial but increase filing and reduce the deterrence effect of punitive damages. We derive conditions under which the adoption of the English rule for allocating legal costs reduces filing.

Yuichi Noguchi

Kanto Gakuin University

  Thursday, July 13, 11:45, Session D

Bayesian Learning with Bounded Rationality: Convergence to Nash equilibrium

Abstract

We provide a class of prior beliefs that (almost surely) lead to playing approximate Nash equilibrium, combined with bounded rationality, i.e., smooth approximately optimal behaviors, in any infinitely repeated game with perfect monitoring: converging to ε-Nash equilibrium for any (finite normal form) stage game, any discount factors (less than one), and any ε > 0. Furthermore, the class of prior beliefs is quite smart in the sense that, for any learnable set of opponents’ strategies, a prior belief in the class weakly merges with all opponents’ strategies in the learnable set. We also discuss the implications of our positive result for the impossibility results of Nachbar (1997, 2005) and Foster and Young (2001). Specifically, we point out that the impossibility in Nachbar (1997, 2005) is obtained because the learnability condition there requires uniform learning, such that players’ priors weakly merge with various opponents’ strategies uniformly over players’ various strategies, and that the impossibility in Foster and Young (2001) crucially depends on perfect rationality, i.e., exactly optimal behaviors (with respect to prior beliefs), which is obvious from our result.

Thomas Norman

Oxford University

  Thursday, July 13, 5:10, Session E

Learning, Hypothesis Testing and the Folk Theorem    [pdf]

Abstract

If players learn to play an infinitely repeated game using Foster and Young's (Games and Economic Behavior 45, 2003, 73-96) hypothesis testing, then their strategies almost always approximate equilibria of the repeated game. If, in addition, they are sufficiently "conservative" in their hypothesis revision, then almost all of the time is spent approximating an efficient subset of "forgiving" equilibria.

Anton Noskov

St. Petersburg State University

  Wednesday, July 12, , Session C

The problem of Nash Equilibrium Selection in games of three persons with two strategies for each player    [doc]

Abstract

The paper investigates the application of the linear tracing procedure to a class of non-antagonistic games with three players and two strategies for each player. The stability set is built for each possible equilibrium. In addition, a structural algorithm for Nash equilibrium selection is constructed.

Bruno Oliveira

Universidade do Porto

  Thursday, July 13, 10:55, Session B

The effect of a Prisoner's Dilemma in an Edgeworthian Economy    [pdf]

(joint work with Ferreira LMMS, Finkenstadt, Pinto AA)

Abstract

We present a model of an Edgeworthian exchange economy where two goods are traded in a marketplace. The novelty of our model is that we associate a greediness factor with each participant, which brings a game akin to the prisoner's dilemma into the usual Edgeworth exchange economy. Over time, random pairs of participants are chosen, and they trade or not according to their greediness. If the two participants trade, then their new allocations are in the core determined by their Cobb-Douglas utility functions. The exact location in the core is decided by their greediness, with an advantage to the greedier participant. However, if both participants are too greedy, they are penalized by not trading. We analyze the effect of the greediness factors on the variations of the individual amounts of goods and on the increase in the value of their utilities. We show that it is better to be in the minority. For instance, if there are more greedy participants, the increase in the value of their utilities is smaller than the increase in the value of the utilities of the non-greedy participants.
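A minimal sketch of a single pairwise trade in this spirit is given below, under illustrative assumptions that are not spelled out in the abstract: both participants have Cobb-Douglas utilities with the same exponent (so the contract curve is the diagonal of the Edgeworth box), the location inside the core is a simple linear weighting of the two greediness factors, and the pair is penalized by not trading when their combined greediness exceeds a threshold; the function and parameter names are hypothetical.

def cobb_douglas(x, y, a):
    return x**a * y**(1 - a)

def trade(endow_1, endow_2, greed_1, greed_2, a=0.5, greed_cap=1.5):
    """Return the pair's post-trade allocations (or the endowments if no trade)."""
    if greed_1 + greed_2 > greed_cap:           # both too greedy: the trade breaks down
        return endow_1, endow_2
    X = endow_1[0] + endow_2[0]                 # total amount of good 1 in the box
    Y = endow_1[1] + endow_2[1]                 # total amount of good 2 in the box
    u_total = cobb_douglas(X, Y, a)
    # With a common exponent the contract curve is the diagonal: agent 1 gets
    # (t*X, t*Y) with utility t*u_total, so the core is the interval [t_min, t_max]
    # of t keeping both agents at least as well off as at their endowments.
    t_min = cobb_douglas(endow_1[0], endow_1[1], a) / u_total
    t_max = 1 - cobb_douglas(endow_2[0], endow_2[1], a) / u_total
    if t_max < t_min:                           # no mutually beneficial trade exists
        return endow_1, endow_2
    w = greed_1 / (greed_1 + greed_2)           # greedier agent pulls the split her way
    t = t_min + w * (t_max - t_min)
    return (t * X, t * Y), ((1 - t) * X, (1 - t) * Y)

if __name__ == "__main__":
    print(trade(endow_1=(8.0, 2.0), endow_2=(2.0, 8.0), greed_1=0.9, greed_2=0.3))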

Hatice Ozsoy

Rice University

  Monday, July 10, 4:05, Session C

A characterization of Bird's rule    [pdf]

Abstract

We propose a new test for cost allocation rules in minimum cost spanning tree games. The Shapley value, as well as some recent rules proposed in the literature, such as the equal remaining obligation rule (Feltkamp, Tijs and Muto (1994)) and the Dutta-Kar rule (Dutta and Kar, 2004), are vulnerable to merging maneuvers by users. Bird's rule, on the other hand, passes this test. We give a characterization of Bird's rule based on this property.

Selcuk Ozyurt

New York University

  Wednesday, July 12, 10:55, Session C

Repeated Games with Forgetful Players    [pdf]

Abstract

We present a model which investigates the behavior of forgetful players in infinitely repeated games. We assume that each player may forget the entire history of play with a fixed probability. Our modeling specifications make a clear distinction between absentminded and forgetful players. We consider two extreme cases regarding the correlation of the players' forgetfulness. In the first case, forgetfulness is simultaneous: if a player forgets, so do the rest. For this part, we are able to prove two Folk theorems. In the other extreme, we consider a case where forgetfulness is independent, so the players' memory states are no longer common knowledge. We focus on conditionally belief-free strategies to recapture the recursive structure in the sense of Abreu, Pearce and Stacchetti (1986, 1990). Utilizing their method, we present characterization results for the payoff set of conditionally belief-free strategies.

Frank H. Page, Jr.

University of Alabama

  Monday, July 10, 5:15, Session B

Club Formation Games with Farsighted Agents    [pdf]

(joint work with Myrna H. Wooders)

Abstract

Modeling club structures as bipartite networks, we formulate the problem of club formation as a game of network formation and identify those club networks that are stable if agents behave farsightedly in choosing their club memberships. Using the farsighted core as our stability notion, we show that if agents’ payoffs are single-peaked and agents agree on the peak club size (i.e., agents agree on the optimal club size), and if there are sufficiently many clubs to allow for the partition of agents into clubs of optimal size, then a necessary and sufficient condition for the farsighted core to be nonempty is that agents who end up in smaller-than-optimal-size clubs have no incentive to switch their memberships to already existing clubs of optimal size. In contrast, we show via an example that if there are too few clubs relative to the number of agents, then the farsighted core may be empty. Contrary to prior results in the literature involving myopic behavior, our example shows that overcrowding and farsightedness lead to instability in club formation.

Konstantinos Papadopoulos

Aristotle University of Thessaloniki, Greece

  Wednesday, July 12, 4:25, Session B

Exchange Rates and Purchasing Power Parity in Imperfectly Competitive Markets    [pdf]

Abstract

In this paper we make use of some recent results in the strategic market games literature in order to study the validity of the Purchasing Power Parity (PPP) theory in a frictionless N-country exchange economy where agents have market power over commodity and currency markets. We identify individual equilibrium strategies that are compatible with the failure of PPP and result in exchange rate inconsistency. We then show that equilibrium PPP deviations and inconsistencies tend to zero as the number of agents in the economy increases.

Jee-Hyeong Park

Seoul National University

  Monday, July 10, 2:50, Session D

Private Trigger Strategies in the Presence of Concealed Trade Barriers    [pdf]

Abstract

To analyze the issue of enforcing international trade agreements in the presence of potential deviations of which countries receive imperfect and private signals, this paper analyzes a repeated bilateral trade relationship where each country can secretly raise its protection level through concealed trade barriers. In particular, it explores the possibility that countries adopt private trigger strategies (PTS), under which each country triggers an explicit tariff war based on its privately observed imperfect signals of the potential use of concealed trade barriers. Based on a full characterization of the optimal protection sequence that each country can take under PTS, the analysis establishes that symmetric countries may restrain the use of concealed trade barriers under symmetric PTS as long as their private signals are sensitive enough to concealed protection. The analysis of symmetric PTS also reveals that it is not optimal to push the cooperative protection level down to its minimum attainable level. The paper identifies two factors that may limit the use of PTS; one is a reduction in each country’s time lag to adjust its protection level in response to the other country’s initiation of an explicit tariff war, and the other is asymmetry among countries. Both of these factors may limit the level of cooperation attainable under PTS by reducing the length of the tariff war phase that countries can employ against potential deviations.

Alberto Adrego Pinto

Faculdade de Ciencias da Universidade do Porto

  Tuesday, July 11, 11:45, Session E

Dynamics of R&D investment strategies in duopoly competitions    [pdf]

(joint work with Fernanda A. Ferreira, Flávio Ferreira, Miguel Ferreira, Bruno Oliveira)

Abstract

We present new and simple deterministic and stochastic dynamics on the production costs of Cournot and Stackelberg competitions, determined by R&D investment strategies with and without uncertainty. At every period of time, the duopoly competition with R&D investment programs consists of two subgames. In the first subgame, both firms have initial production costs and choose R&D investment strategies, either with or without complete information, to obtain new production costs. The second subgame is either a Cournot or Stackelberg competition with parameters determined by the R&D investment program. We prove that the Cournot game presents either one, two or three Nash investment equilibria in the parameter regions studied. The Nash investment equilibria vary continuously with the initial production costs and with the differentiation of the goods. The deterministic dynamics, period after period, on the production costs of the duopoly competition appear from the firms deciding to play the Nash investment equilibria in the Cournot competition with R&D investment programs. We study some behaviours of the firms in the case of similar firms and in the case of non-identical firms with different R&D programs. We study the transients and the asymptotic limits of the deterministic dynamics on the production costs of the duopoly competition. Curiously, we prove that there is a piecewise smooth curve of stable equilibria which is robust under small parameter perturbations. We analyse the loss in the profits of one firm if this firm decides not to invest in R&D. The stochastic dynamics on the production costs of the firms in a duopoly competition appear if we consider incomplete information in the R&D investment programs. We observe that the uncertainty deviates the mean of the stochastic trajectories from the deterministic trajectories of the production costs.

Joe Podwol

Cornell University

  Tuesday, July 11, 11:45, Session A

Why Use a 99-cent Reserve Price on eBay?    [pdf]

Abstract

A number of empirical studies of eBay auctions have shown that a non-binding reserve price of 99 cents is both the most popular and the revenue-maximizing reserve price for a wide range of products. This result contradicts the predictions of optimal auction design à la Myerson (1981) and Riley & Samuelson (1981). To date, no formal game-theoretic models have addressed the issue. Several authors have assumed the explanation lies in either a common value model or a model of affiliated signals. eBay sellers claim irrationality of buyers. The current paper explains the anomaly of the 99-cent reserve with an independent private values model in which the seller cannot commit to a reserve price. That is, if the item fails to sell at the initial reserve, the seller cannot constrain herself in advance from offering the item at a reduced reserve in the following period. This failure to commit leads to a “Coasian dynamic” in which buyers wait for the reserve price to become sufficiently small before submitting a bid. Analogous to the literature on durable-good monopoly, as the buyers’ discount factor approaches unity, the initial reserve converges to the minimum valuation type. Even in markets where the minimum valuation type may still be quite large, the seller will further reduce the reserve to 99 cents in order to minimize the “listing fee.”

Andrew Postlewaite

University of Pennsylvania

  Friday, July 14, 2:00

Pricing Matching Markets

(joint work with George Mailath, Larry Samuelson)

Abstract

We study why different markets are cleared by different types of prices: a universal price for all buyers and sellers in some markets, seller-specific prices that are uniform across buyers in others, and personalized prices tailored to both the buyer and the seller in yet others. We link these prices to differences in the ownership of the default shares (the shares of the surplus owned by the buyer and seller in the absence of transfers) created by the buyer-seller match. The results point to a theory of designing markets to allow effective pricing.

Rohit Prasad

MDI, Gurgaon

  Thursday, July 13, 11:45, Session E

Beware of doles: Welfare in a monetary corn model    [pdf]

Abstract

This paper examines the welfare implications of expansionary macro-policy in the context of a monetary corn model. It shows that under the assumption of decreasing returns to scale, output growth makes the worker worse off and the entrepreneur better off, even when the growth is triggered by a dole to the worker. In the same spirit, a positive technology shock that results in higher output and higher employment results in an improvement in the worker’s welfare only if the magnitude of the shock is greater than a certain threshold. Expansionary monetary policy can result in a Pareto improvement via a decline in the interest rate.

Marek Pycia

MIT

  Thursday, July 13, 10:55, Session D

Many-to-One Matching without Substitutability    [pdf]

Abstract

This paper studies many-to-one matching such as matching between students and colleges, interns and hospitals, and workers and firms. A major question that arises in such settings is the stability of matchings. A matching is stable if no agent or pair of agents can profitably deviate. The paper provides a novel sufficient and, in a certain sense, necessary condition for stability that may be used even when there are complementarities and peer effects. The condition is particularly suited to study settings in which agents are unable to enter binding agreements. In these settings, the agents are matched and then their payoffs are determined via mechanisms such as various games, bargaining, and sharing rules. A stable matching exists for all preference profiles induced by the mechanisms if, and only if, the preferences are pairwise aligned. Agents' preferences are pairwise aligned if any two agents in the intersection of any two coalitions prefer the same one of the two coalitions. For example, a stable matching exists if agents' payoffs are determined after the matching in Nash bargaining.

Cheng-Zhong Qin

UC Santa Barbara

  Wednesday, July 12, 11:20, Session A

Bid and Guess: A Nested Mechanism for King Solomon’s Dilemma    [pdf]

(joint work with Chun-Lei Yang)

Abstract

In this paper we propose a mechanism to resolve King Solomon’s dilemma about allocating an indivisible good at no cost to the participating agents. A distinctive feature of our mechanism is the design of a two-part contest that makes the agents guess each other’s bids in a second-price auction. The accuracy of an agent’s guess of the other agent’s bid endogenously determines how much she pays for participating in the contest. The truthfully bidding Bayesian-Nash equilibrium of the contesting game results in a reduced game, which has a unique and strict Bayesian-Nash equilibrium that implements the efficient outcome.

Pablo Revilla

Universidad Pablo de Olavide, Sevilla

  Monday, July 10, 11:45, Session E

Many-to-one Matching When Colleagues Matter    [pdf]

Abstract

This paper studies many-to-one matching markets in which each agent’s preferences depend not only on the institution that hires her, but also on the group of her colleagues, who are matched to the same institution. With an unrestricted domain of preferences, the non-emptiness of the core is not guaranteed. We show that under certain conditions on agents' preferences, two situations emerge in which at least one stable allocation exists. The first reflects real-life situations in which agents are more concerned about an acceptable set of colleagues than about the firm hiring them. The second refers to markets in which a ranking of workers is accepted by the workers and firms present in the market.

Jose Alvaro Rodrigues Neto

Central Bank of Brasilia

  Wednesday, July 12, , Session D

Optimal Target for Future Inflation: A Simple Game-Theoretic Approach    [pdf]

Abstract

In an inflation targeting regime, we study the strategic interaction of a continuum of anonymous and myopic market participants with the monetary authority, typically the central bank, modeled as the long-run player. A sufficiently patient central bank does not need to play the tightest monetary policy all the time in order to implement its preferred equilibrium.

Jose Alvaro Rodrigues Neto

Central Bank of Brasilia

  Tuesday, July 11, 3:40, Session D

From Posteriors to Priors via Cycles    [pdf]

Abstract

We present necessary and sufficient conditions for checking whether certain players' posteriors can be rationalized by a common prior. We propose a simple diagrammatic device to calculate the join and meet of players' knowledge partitions. Each cycle in the diagram has a corresponding cycle equation that must be satisfied. Besides having a geometric interpretation, our conditions differ from the literature because they do not use infinities in any sense, not even indirectly, and they characterize the set of players' partitions that automatically allow any posteriors to be rationalized by a common prior. This indicates that to assume the existence of a common prior may be a different assumption in different games. We also prove that to assume that posteriors can be rationalized by a common prior is equivalent to assuming that players have the same degree of optimism. We show how to construct a bet (in which it is always common knowledge that all players have positive expected gains) over any cycle whose corresponding equation is not satisfied. A common prior will exist when each player's posterior about her opponents' types is independent of her own type.
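To give a flavor of what a cycle equation checks, here is a minimal sketch with a hypothetical example (two players, four states, and specific partitions chosen purely for illustration; the paper's general diagram and conditions are richer): when posteriors do come from a common prior, each posterior ratio within a shared partition cell equals the corresponding prior ratio, so the product of ratios around a cycle telescopes to one.

# Illustrative example (not from the paper): states {1,2,3,4}, player A's
# partition {{1,2},{3,4}}, player B's partition {{2,3},{4,1}}.  The cycle
# 1 -(A)- 2 -(B)- 3 -(A)- 4 -(B)- 1 alternates between cells of the two players.

def posteriors(prior, partition):
    """Posterior probability of each state conditional on its own cell."""
    post = {}
    for cell in partition:
        mass = sum(prior[s] for s in cell)
        for s in cell:
            post[s] = prior[s] / mass
    return post

prior = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}        # a common prior, for illustration
q_A = posteriors(prior, [(1, 2), (3, 4)])
q_B = posteriors(prior, [(2, 3), (4, 1)])

# Cycle equation: take each ratio inside the cell shared by consecutive states.
cycle_product = (q_A[2] / q_A[1]) * (q_B[3] / q_B[2]) \
              * (q_A[4] / q_A[3]) * (q_B[1] / q_B[4])
print(cycle_product)                             # ~ 1.0, since a common prior exists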

Orit Ronen

The Hebrew University of Jerusalem

  Wednesday, July 12, 11:20, Session E

Different Learning Methods Under Uncertainty and Uninformed Choice With a Social Planner    [doc]

Abstract

Some people invest more time and effort in deciding which electric appliance to buy than in which pension plan to choose. This paper presents a game-theoretic model that rationally explains this behavior. For both products, utility is gained from different categories, such as design, electricity consumption and efficiency for the first, and risk, timing and return for the second. The process of learning reveals not only the values of the available products, but also the consumer’s preferences over them. Learning can be done vertically or horizontally, i.e., one can learn all about one product or all about one category. Due to a lack of information regarding how to learn and to difficulties with processing knowledge, there is less investment in learning in the pension plan case than for other goods.
This model serves as a benchmark for a social planner intervention setting. A subset of all possible alternatives is offered by the planner and serves as a signal both for how much and in which way to learn and for which alternatives to learn about. When played once, the result of this game is that the players invest more in learning and make a better choice. In some cases it is beneficial for the planner to deceive the players and to send a signal that motivates them to invest in learning more than is optimal for them.
When played repeatedly, the players interpret the signal according to the planner’s reputation. That leads to very interesting dynamics between the planner and the “public”, described broadly in the paper. These dynamics are based on a dual motivation that the planner sometimes faces: to deceive today’s public or to keep a high reputation for tomorrow’s public.

Alvin Roth

Harvard University

  Monday, July 10, 2:00

The Design of School Choice Systems in NYC and Boston: The Game-Theoretic Issues

(joint work with Atila Abdulkadiroglu, Parag Pathak, Tayfun Sonmez)

Abstract

Teams of game theorists have been heavily involved in the redesign of the school choice systems for New York City high schools, and for Boston public schools.

In New York, a chief problem of the old system was that it was highly congested, did not produce stable matchings, and was, in consequence, heavily gamed. High school principals often withheld capacity from the system in order to better control admissions to their schools. The new system produces stable matches, and the first three years of performance suggest a steady return of capacity to the system.

In Boston, the chief problem of the old system was that it was not strategy-proof. This came to be seen as an issue of equity and equal access, since some parent groups seemed to devote considerable resources to gaming the system, while others appeared to suffer from not doing so, or from not doing it well.

In both school systems, a novel theoretical issue is that schools' preferences over students are not strict (there are large indifference classes). There is a tradeoff between strategy-proofness and efficiency, which plays out differently in Boston (where the system is one-sided, since schools are not strategic players) and in New York (where the system is two-sided).

Anna Rubinchik-Pessach

University of Colorado at Boulder

  Monday, July 10, 10:55, Session E

Contests with Heterogeneous Agents    [pdf]

(joint work with Sérgio O. Parreiras)

Abstract

We study tournaments with many ex-ante asymmetric contestants whose valuations for the prize are independently distributed. First, we characterize the equilibria in monotone strategies; second, we provide sufficient conditions for equilibrium uniqueness; and finally, we reconcile the experimental evidence documenting ‘workaholic’ behavior in contests with the related theory by introducing heterogeneity among participants. It is a ‘weak’ participant who might become a ‘workaholic’ in equilibrium, that is, whose effort density might increase at the highest valuation. A participant is weak either because he is more risk averse or because his rivals consider it very unlikely that he has a high value for the prize. In contrast, effort densities are always decreasing under symmetry with identically distributed values for the prize and identical attitudes towards risk (in the CARA case), as well as in contests with only two participants. Moreover, we show that for low valuations, more risk-averse agents are less likely to exert low effort than their ‘strong’ rivals, while agents with a dominated distribution of prize valuations are more likely to do so. An explicit solution for the uniform distribution case with contestant-specific supports is provided as well.

Dov Samet

Tel Aviv University

  Thursday, July 13, 2:00

Where do partitions come from?

Francisco Sanchez Sanchez

CIMAT

  Friday, July 14, 10:55, Session B

Values for Team Games    [pdf]

(joint work with Luis Hernández-Lamoneda)

Abstract

In this presentation we consider cooperative games in which the characteristic function can be nonzero only on coalitions of a prescribed cardinality. We call these games team games. We assume that an exogenous amount c is to be distributed among the players. Such situations arise when a group of individuals is organized into several teams with an equal number of players in each, e.g. the distribution among the teams in a league of the television rights for the transmission of a tournament.
We propose several solutions satisfying certain desirable properties for this kind of game. First, we obtain an explicit expression for every linear symmetric solution; the space of all such solutions turns out to be three-dimensional. We also obtain formulas for solutions that further satisfy either the efficiency axiom or the natural (inessential) axiom. Finally, it is shown that there exists a unique linear solution that is symmetric, natural and efficient.
We conclude the presentation by studying bankruptcy games as a particular case of this type of game.

William Sandholm

University of Wisconsin

  Thursday, July 13, 4:35, Session E

Survival of dominated strategies under evolutionary dynamics

(joint work with Josef Hofbauer, University of Vienna)

Abstract

We show that any evolutionary dynamic that satisfies three mild requirements - continuity, positive correlation, and innovation - does not eliminate strictly dominated strategies in all games.

Alvaro Sandroni

University of Pennsylvania

  Thursday, July 13, 5:10, Session D

The pivotal-vote model

(joint work with Tim Feddersen)

Abstract

In this paper we propose a simple experiment in which voters have to decide between policies that benefit them and policies that benefit others. We show that if the choice is affected by pivotal probabilities (i.e., the probability that the choice is implemented), then first-order stochastic dominance fails.

Rene Saran

Brown University

  Tuesday, July 11, 4:05, Session C

In Bargaining We Trust    [pdf]

Abstract

This paper studies the influence of trust in a bilateral trading problem by introducing trustworthy types of players. It shows that the effects of degree and distribution of trust are notably different in direct mechanisms vis-à-vis k-double auctions. If either the degree of trust increases or the distribution of trust changes so that high-surplus types are now more likely among trustworthy types, then we can design direct mechanisms with higher probability of trade. In fact, with a high enough degree of trust, it is possible to construct direct mechanisms that are ex-post efficient. None of these results are true for k-double auctions.

Rajiv Sarin

Texas A&M University

  Friday, July 14, 4:00, Session D

Learning and Risk Aversion    [pdf]

(joint work with Carlos Oyarzun)

Abstract

We provide three definitions of when a learning rule may be said to be risk averse. We show that the three notions are nested and characterize the learning rules that satisfy the weakest and the strongest criteria. The weakest is analogous to the definition used in decision theory. The paper hence provides a bridge between risk aversion in decision theory and risk aversion in learning theory.

Marco Scarsini

Universita di Torino

  Thursday, July 13, 11:20, Session C

Repeated Games with Public Signal and Bounded Recall    [pdf]

(joint work with Jerome Renault and Tristan Tomala)

Abstract

This paper studies repeated games with public signals, symmetric bounded recall and pure strategies. Examples of equilibria for such games are provided, and the convergence of the set of equilibrium payoffs is studied as the size of the recall increases. Convergence to the set of equilibria of the infinitely repeated game does not hold in general, but only for particular signals and games. The difference between private and public strategies is relevant, and the corresponding sets of equilibria behave differently.

Thomas Schelling

University of Maryland

  Tuesday, July 11, 5:55

Am I a Game Theorist?

Karl Schlag

European University Institute

  Thursday, July 13, 12:10, Session D

Eleven – Designing Randomized Experiments under Minimax Regret    [pdf]

Abstract

Assume that there are two alternative treatments, each with uncertain outcome and where it is only known that each treatment yields a random outcome in [0,1]. Consider the objective to design an experiment involving a given number of tests and then to estimate which of the two treatments is more successful in terms of yielding the higher mean.
We present the binomial average rule, which minimizes maximal regret for any number of tests. Eleven tests are needed to push maximal regret below 5%. Neither conditioning later tests on earlier outcomes nor the availability of counterfactual evidence reduces the number of tests needed. We also show how to attain minimax regret when there is covariate information.
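
For intuition, here is a minimal Monte Carlo sketch in Python of a binomial-average-style rule; the binarize-then-compare form, the Beta test distributions and the sample sizes below are illustrative assumptions, not the paper's exact rule or its worst-case calculation.

import random

def binomial_average_rule(outcomes_a, outcomes_b):
    """Hypothetical rule: turn each outcome x in [0,1] into a Bernoulli(x) success
    and recommend the treatment with more successes, breaking ties at random."""
    succ_a = sum(random.random() < x for x in outcomes_a)
    succ_b = sum(random.random() < x for x in outcomes_b)
    if succ_a != succ_b:
        return 'A' if succ_a > succ_b else 'B'
    return random.choice(['A', 'B'])

def estimated_regret(draw_a, mean_a, draw_b, mean_b, n, trials=20000):
    """Expected regret of the rule for one particular pair of outcome distributions
    (regret = mean of the best treatment minus mean of the recommended one)."""
    best = max(mean_a, mean_b)
    total = 0.0
    for _ in range(trials):
        rec = binomial_average_rule([draw_a() for _ in range(n)],
                                    [draw_b() for _ in range(n)])
        total += best - (mean_a if rec == 'A' else mean_b)
    return total / trials

if __name__ == "__main__":
    # Beta(3,2) has mean 0.6 and Beta(2,3) has mean 0.4: one test case only,
    # not a maximization over all pairs of distributions on [0,1].
    for n in (3, 6, 11):
        print(n, round(estimated_regret(lambda: random.betavariate(3, 2), 0.6,
                                        lambda: random.betavariate(2, 3), 0.4, n), 4))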

Jesse Schwartz

Kennesaw State University

  Monday, July 10, 3:15, Session A

A Subsidized Vickrey Auction for Cost Sharing    [pdf]

(joint work with Quan Wen)

Abstract

In a cost sharing situation, a group of players jointly produce some good and must decide how much each player consumes and how much each player pays. Two well-known solutions are the average and serial cost mechanisms. Although both mechanisms raise exactly enough money to pay the production costs, they involve complicated equilibrium strategies and are not allocatively efficient. On the other hand, the standard Vickrey auction induces dominant strategies and allocative efficiency, but generates revenue in excess of production costs. Our paper amends the Vickrey auction so that some of the surplus revenue subsidizes additional production of the good, in a way that preserves the dominant strategies. This subsidized Vickrey auction is allocatively inefficient, but Pareto dominates the standard Vickrey auction.

Paul Schweinzer

University of Bonn

  Thursday, July 13, 4:35, Session C

When queueing is better than push and shove    [pdf]

(joint work with Alex Gershkov)

Abstract

We address the scheduling problem of reordering an existing queue into its efficient order through trade. To that end, we consider individually rational, budget-balanced direct and indirect mechanisms. We show that this class of mechanisms allows us to form efficient queues provided that existing property rights for the service are small enough to enable trade between the agents. In particular, we show on the one hand that no queue under a fully deterministic service schedule, such as first-come, first-served, can be dissolved efficiently while meeting our requirements. If, on the other hand, the alternative is service anarchy (i.e., a random queue), every existing queue can be transformed into an efficient order.

Ella Segev

Technion, Israel.

  Thursday, July 13, 12:10, Session C

Reputation for Toughness in Bargaining with Incomplete Information    [pdf]

Abstract

This paper addresses the question of whether one can achieve a reputation for being a tough bargainer when bargaining under incomplete information. I make the assumption, common in the reputation literature, that a small portion of the players are irrational players who indeed act tough while bargaining, sometimes above and beyond rational decision making. Given the existence of such players, I state the conditions under which a rational player will pretend to be tough in equilibrium, earning the desired reputation and consequently a better agreement. I argue that players' assumption that a small share of the population is irrational might explain why they deviate from equilibrium strategies (which are derived under the assumption that it is common knowledge that all players are rational), as we see in experimental results.

Abhijit Sengupta

Unilever Corporate Research, UK

  Friday, July 14, 4:50, Session D

Achieving Efficiency in an Oligopoly under Incomplete Information

Abstract

This paper analyzes the role of a benevolent regulator in an oligopolistic market that is operating inefficiently. The regulator, R, aims to induce the competitive outcome using zero-deficit incentive contracts in a situation where it has incomplete information about the marginal costs of the operating firms. We have two firms, each with incomplete information about its rival's cost. The distributions from which the marginal costs are drawn are known to all players in the market, including R.

The first part of the paper shows the existence of an efficient mechanism that can be used to allocate contracts to the more efficient firms in the market. It is shown that, in equilibrium, all firms in the market announce their costs truthfully. If the firms are symmetric, one of them is awarded a specific contract at random; if the firms are asymmetric, the most efficient firm is awarded the contract. The resulting Cournot competition among the subsidized and unsubsidized firms yields the competitive outcome if the firms are symmetric. It is also shown that such a mechanism results in zero deficit for R in equilibrium and an arbitrarily small positive deficit off the equilibrium path. Social welfare improves unconditionally in all states of the world.

The second part of the paper looks at implementing efficiency using a first-price auction. An exclusive per-unit subsidy contract is auctioned among the firms in the market. It is shown that R can break even in expectation and still achieve an improvement in welfare in all states of the world. The unsubsidized firms may continue to produce in the market.

Roberto Serrano

Brown University

  Thursday, July 13, 2:45

Marginal Contributions and Externalities in the Value

(joint work with Geoffroy de Clippel)

Abstract

For games in partition function form, we explore the implications of distinguishing between the concepts of intrinsic marginal contributions and externalities. If one requires efficiency for the grand coalition, we provide several results concerning extensions of the Shapley value. Using the axioms of efficiency, anonymity, marginality and monotonicity, we provide upper and lower bounds to players' payoffs when affected by external effects, and a characterization of an ``externality-free'' value. If the grand coalition does not form, we characterize a payoff configuration on the basis of the principle of balanced contributions. We also analyze a game of coalition formation that yields sharp predictions.

Jeff Shamma

University of California, Los Angeles

  Wednesday, July 12, 4:00, Session E

Joint Strategy Fictitious Play with Inertia for Potential Games    [pdf]

(joint work with Jason Marden and Gurdal Arslan)

Abstract

We consider finite multi-player repeated games involving a large number of players with large strategy spaces and enmeshed utility structures. In these “large-scale” games, players are inherently faced with limitations in both their observational and computational capabilities. Accordingly, players in large-scale games need to make their decisions using algorithms that accommodate limitations in information gathering and processing. A motivating example is a congestion game in a complex transportation system, in which a large number of vehicles make daily routing decisions to optimize their own objectives in response to their observations. In this setting, observing and responding to the individual actions of all vehicles on a daily basis would be a formidable task for any individual driver. This disqualifies some of the well known decision making models such as fictitious play (FP) as suitable models for driver routing behavior. A more realistic assumption on the information tracked and processed by an individual driver is the daily aggregate congestion on the specific roads that are of interest to that driver. We will show that Joint Strategy Fictitious Play (JSFP), a close variant of FP, when modified to include some sort of inertia, accommodates such information aggregation and resembles traffic decision models. We establish the convergence of JSFP with inertia to a pure Nash equilibrium in congestion games, or equivalently in finite potential games, in both cases of averaged or exponentially discounted historical data. We illustrate JSFP with inertia on a distributed traffic routing problem and derive tolling procedures that can lead to optimized total traffic congestion.
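
The following Python sketch illustrates JSFP with inertia on a toy congestion game; the cost function (the cost of a route equals its load), the inertia probability and the switching rule are illustrative assumptions rather than the paper's exact specification.

import random

def jsfp_with_inertia(n_players=20, n_routes=3, periods=200, inertia=0.3, seed=0):
    """Each player tracks the running average of the hypothetical cost of every
    route given the observed aggregate loads, and best-responds to that average
    only with probability 1 - inertia (and only if the switch strictly helps)."""
    rng = random.Random(seed)
    actions = [rng.randrange(n_routes) for _ in range(n_players)]
    avg_cost = [[0.0] * n_routes for _ in range(n_players)]
    for t in range(1, periods + 1):
        loads = [actions.count(r) for r in range(n_routes)]
        for i in range(n_players):
            for r in range(n_routes):
                # Cost player i would have faced on route r, holding the others fixed.
                hyp = loads[r] + (0 if actions[i] == r else 1)
                avg_cost[i][r] += (hyp - avg_cost[i][r]) / t
        new_actions = list(actions)
        for i in range(n_players):
            if rng.random() < inertia:
                continue  # inertia: keep yesterday's route
            best = min(range(n_routes), key=lambda r: avg_cost[i][r])
            if avg_cost[i][best] < avg_cost[i][actions[i]]:
                new_actions[i] = best  # switch only when strictly better on average
        actions = new_actions
    return [actions.count(r) for r in range(n_routes)]

if __name__ == "__main__":
    print("final route loads:", jsfp_with_inertia())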

Dmitry Shapiro

Yale University

  Monday, July 10, 4:40, Session D

Separating non-monetary and strategic motives in public good games    [pdf]

Abstract

A well-established finding in experimental economics is that people tend to be considerably more cooperative than individual payoff maximization would suggest. Recently, behavioral theories have addressed this issue by stressing different factors such as fairness, altruism and reciprocity. In my paper I try to understand the importance of these factors in people's reasoning. I divide the theories suggested in the literature into three groups: utility interdependence or UI (concern about other people's payoffs, e.g. fairness, altruism); action interdependence or AI (subjects want to influence future opponents' decisions or to reciprocate past actions, e.g. reciprocation, encouragement, reputation); and learning. I compare subjects' behavior in three different treatments: a benchmark treatment; a phantom treatment in which UI and AI are not applicable; and a two-type treatment in which only UI is not applicable.
The benchmark treatment is a standard public-good game in which contributing zero is a dominant strategy. In the phantom treatment, subjects are randomly matched with decisions that were made in the (separate) benchmark treatment. Since these opponents do not receive any payoff and their actions cannot be influenced, this removes both UI and AI considerations from the subjects' behavior. In the two-type treatment, one type of subject gets a fixed payment regardless of the outcome, while the second type has a standard payoff function, with only the opponent's type being known. Thus, when subjects are matched with a person who gets a fixed payoff, they do not have UI considerations. The advantage of the suggested treatments is that they do not change the main aspects of the game, such as strategic uncertainty and the information and payoff structures. The main result is that non-monetary and strategic considerations have a rather modest effect: they explain less than half of the over-contribution.

Lloyd Shapley

University of California, Los Angeles

  Friday, July 14, 5:45

Selected Short Subjects

Martin Shubik

Yale University

  Wednesday, July 12, 4:55

Game Theory and Mathematical Institutional Economics

Abstract

The relationship between Market Games and Strategic Market Games is considered and related to General Equilibrium results in an exchange economy. This relationship is further considered in modeling the characteristic function from the strategic form given in a strategic market game. The connection between side-payment and no-side-payment games is examined in terms of the distinction among an ideal commodity money in sufficient supply, a fiat money with default conditions, and personal credit with an ideal clearinghouse.

It is suggested that strategic market games, even at their simplest, are intrinsically institutional, as the institutions emerge from the rules needed to define the strategic form. They are the carriers of process, even if they are solved for static equilibrium. In the study of dynamics, the institutional structure of the game cannot be avoided. The best we can do is to consider the extreme or minimal institutions. These institutions can be used to construct modified market games that can be analyzed mathematically for the core and value; hence the term mathematical institutional economics is utilized. Some limiting bounding results for all institutional structures can be obtained, but it is suggested that if the dynamics of a specific part of the economy are to be examined, detailed ad hoc modeling cannot be avoided.

Peter Streufert

University of Western Ontario

  Friday, July 14, 11:20, Session E

Characterizing Consistency with Monomials    [pdf]

Abstract

Beliefs are shown to be consistent iff monomials can be assigned to actions in such a way that (a) the strategy at each information set is the limit of the monomials assigned to the actions at that information set and (b) the belief at each information set is found by calculating the product of the monomials along the paths leading to each of the nodes in the information set. This characterization seems relatively tractable and is derived from the definition of consistency by means of linear algebra alone.
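
A toy example in the spirit of this characterization (the game and the monomials are made up for illustration, not taken from the paper): suppose player 1 chooses among L, M and R, and player 2 moves at an information set {x_M, x_R} reached after M or R. Assign the monomials $1$, $\varepsilon$ and $\varepsilon^{2}$ to L, M and R. As $\varepsilon \to 0$ these converge to the strategy $(1,0,0)$, while the products of monomials along the paths to $x_M$ and $x_R$ give, after normalization within the information set,
\[
\mu(x_M) \;=\; \lim_{\varepsilon \to 0} \frac{\varepsilon}{\varepsilon + \varepsilon^{2}} \;=\; 1,
\qquad
\mu(x_R) \;=\; \lim_{\varepsilon \to 0} \frac{\varepsilon^{2}}{\varepsilon + \varepsilon^{2}} \;=\; 0,
\]
so the assessment that puts probability one on $x_M$ is consistent with the strategy $(1,0,0)$, whereas supporting a belief with $\mu(x_R) > 0$ would require a different assignment of monomials.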

The paper also applies its monomial characterization to repair a nontrivial fallacy in the proofs of Kreps and Wilson's insightful theorems.

William Sudderth

University of Minnesota

  Monday, July 10, 2:50, Session E

Subgame perfect equilibria for stochastic games    [pdf]

(joint work with A. Maitra)

Abstract

For an n-person stochastic game with Borel state space S and compact metric action sets A1, A2, ..., An, sufficient conditions are given for the existence of subgame perfect equilibria. One result is that such equilibria exist if the law of motion q(·|s, a) is, for fixed s, continuous in a = (a1, a2, ..., an) for the total variation norm, and the payoff functions f1, f2, ..., fn are bounded, Borel measurable functions of the sequence of states (s1, s2, ...) ∈ S^N and, in addition, are continuous when S^N is given the product of discrete topologies on S.

Yong Sui

University of Pittsburgh

  Friday, July 14, 4:25, Session B

All-Pay Auction with a Resale Market    [pdf]

Abstract

This paper studies all-pay auctions with a resale market. The equilibrium bidding strategies are characterized. The existence of an active resale market creates a signaling incentive for primary bidders. The information linkage between the primary bids and the resale price is examined.
We analyze the first-price all-pay auction and the second-price all-pay auction. Provided that symmetric equilibria exist, it can be shown that the latter may generate higher expected revenue than the former. If the auctioneer has some private information that is affiliated with the bidders' signals, she may benefit from publicly announcing that information.

Arun Sundararajan

New York University

  Monday, July 10, 12:10, Session D

Local Network Effects and Complex Network Structure    [pdf]

Abstract

This paper presents a model of local network effects in which agents connected in a social network each value the adoption of a product by a heterogeneous subset of other agents in their 'neighborhood', and have incomplete information about the structure and strength of adoption complementarities between all other agents. I show that the symmetric Bayes-Nash equilibria of a general adoption game are in monotone strategies, can be strictly Pareto-ranked based on a scalar neighbor-adoption probability value, and that the greatest such equilibrium is uniquely coalition-proof. Each Bayes-Nash equilibrium has a corresponding fulfilled-expectations equilibrium under which agents form local adoption expectations. Examples illustrate cases in which the social network is an instance of a Poisson random graph, when it is a complete graph, a standard model of network effects, and when it is a generalized random graph. A generating function describing the structure of networks of adopting agents is characterized as a function of the Bayes-Nash equilibrium they play, and empirical implications of this characterization are discussed.
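
For background (a standard random-graph fact, not a derivation from the paper), the Poisson case can be summarized by a probability generating function: with mean degree $z$, the degree distribution has
\[
G_0(x) \;=\; \sum_{k \ge 0} e^{-z} \frac{z^{k}}{k!}\, x^{k} \;=\; e^{z(x-1)},
\]
and if each neighbour adopts independently with probability $\alpha$ (as in a symmetric equilibrium indexed by a scalar adoption probability), the number of adopting neighbours has generating function $G_0\bigl(1 + \alpha(x-1)\bigr) = e^{z\alpha(x-1)}$.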

Satoru Takahashi

Harvard University

  Tuesday, July 11, 10:55, Session B

Multi-sender cheap talk with restricted state space    [pdf]

(joint work with Attila Ambrus)

Abstract

This paper analyzes multi-sender cheap talk in multidimensional environments. Battaglini (2002) shows that if the state space is a multidimensional Euclidean space, then generically there exists a fully revealing equilibrium. We show that if the state space is restricted, either because the policy space is restricted or the set of rationalizable policies of the receiver is not the whole space, then Battaglini’s equilibrium construction is in general not valid. We provide a necessary and sufficient condition for the existence of fully revealing equilibrium for any state space. For compact state spaces, we show that in the limit as the magnitudes of biases go to infinity, the existence of such equilibrium depends on whether the biases are of similar directions, where the similarity relation between biases depends on the shape of the state space. Our results imply that similar qualitative conclusions hold for the existence of fully revealing equilibrium for one-dimensional and multidimensional state spaces. We investigate the issue of how much information can be revealed in equilibrium if full revelation is not possible, and we address the question of robustness of equilibria.

William Thomson

University of Rochester

  Monday, July 10, 9:45

Borrowing-Proofness for Assignment Games

Abstract

I consider the possibility that an agent may try to manipulate an allocation rule by augmenting his endowment through temporarily borrowing resources. After the rule is applied and the agent has returned the resources he borrowed, he may be better off than if he had not borrowed. This may prevent the rule from yielding the allocations it is intended to yield. I investigate the existence of rules that are immune to this sort of behavior. I distinguish between "open-economy borrowing-proofness", which pertains to situations in which an agent can borrow from agents outside the group with whom he is supposed to trade, and "closed-economy borrowing-proofness", which pertains to situations in which he is limited to borrowing from one of his fellow traders (who of course should be provided the incentive to lend).

Elias Tsakas

University of Göteborg

  Friday, July 14, 10:55, Session E

Is partitional information always correct?    [pdf]

(joint work with Mark Voorneveld)

Abstract

In the present paper we study decision making under a non-partitional information structure. We examine the epistemic conditions of such models and study decisions in both the planning stage and the action stage. We show that, unlike in the planning stage, where an optimal behavioral strategy is ensured, in the action stage Nash equilibria of the multi-agent normal form game might not exist. We also study a perfectness refinement of the equilibrium concept in the action stage and examine its relationship with the optimal behavioral strategies of the original extensive form decision problem.

Aljaz Ule

University of Amsterdam

  Monday, July 10, 11:45, Session D

Network formation and cooperation in finitely repeated games    [pdf]

Abstract

A finitely repeated multi-player prisoner's dilemma game has a unique Nash equilibrium outcome, in which all players defect, when played in a fixed group or on a fixed network. This paper shows that, in contrast, cooperation can be achieved in a subgame-perfect Nash equilibrium of a finitely repeated prisoner's dilemma game played over an endogenously formed network. The following game is finitely repeated: in each period players simultaneously establish the network and play the prisoner's dilemma game with their neighbors in the network. Link formation is either mutual or unilateral. Cooperation can be achieved in a subgame-perfect equilibrium when either (i) linking costs are marginally increasing, or (ii) linking is costless but constrained.

Neslihan Uler

New York University

  Wednesday, July 12, 4:00, Session C

Public Goods Provision in Egalitarian Societies    [pdf]

Abstract

I consider a voluntary public good provision problem in a society where egalitarian social norms force rich individuals to share part of their wealth with their poor relatives. I study the level of public good provision while varying the degree of egalitarianism and ex-ante income inequality. I show that contributions to the public good are increasing in the degree of ex-ante income inequality, but the effect of the degree of egalitarianism is ambiguous. Surprisingly, purely egalitarian societies have efficient levels of public good provision and the highest total welfare, independent of the initial income distribution. In addition, the model predicts that societies that are heterogeneous in terms of ethnicity have lower contribution rates than homogeneous societies.

I also plan to conduct an experiment in order to test the effects of egalitarian norms on public goods provision. I impose a sharing rule in the following way: each individual receives a transfer if her net income is lower than the average net income of all individuals, and makes a transfer if her net income is higher than the average. My design allows me to test the following predictions of my model:
i) All contributors consume the same amount of the private good; non-contributors, however, consume less than contributors in equilibrium.
ii) The private consumption of contributors decreases in the degree of egalitarianism.
iii) If everyone contributes to the public good at some level of redistribution, then contributions to the public good increase as the level of redistribution increases.
iv) The total amount of contributions may go down as the degree of redistribution increases if the ex-ante inequality is very high.
v) In the limit case (norms that impose perfect equality), public good provision is efficient.
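
A stylized version of the sharing rule described above (the linear parameterization by a degree of egalitarianism $\alpha \in [0,1]$ is an assumption for illustration, not necessarily the paper's exact design): with gross income $y_i$, contribution $g_i$, and average net income $\bar{y} = \tfrac{1}{n}\sum_j (y_j - g_j)$, transfers and private consumption could take the form
\[
\tau_i \;=\; \alpha\bigl(\bar{y} - (y_i - g_i)\bigr),
\qquad
c_i \;=\; (1-\alpha)(y_i - g_i) + \alpha \bar{y},
\]
so that $\tau_i > 0$ exactly when net income is below the average, the transfers sum to zero, and $\alpha = 1$ corresponds to the perfectly egalitarian limit in prediction v).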

Amparo Urbano

University of Valencia

  Monday, July 10, 5:15, Session E

Communication through Noisy Channels    [pdf]

(joint work with Penélope Hernández, Jose Vila)

Abstract

We study two-player coordination games in which one player is better informed than the other. Specifically, let Ω={ω1,…,ωk} be the states of nature and, for each l=1,2,…,k, let Πl=(A1,A2,ul1,ul2) be the strategic form game associated with ωl, where Ai and uli are the action set and the payoff function of player i in Πl. In this model, nature chooses one state ωl (and hence the game Πl) using a commonly known probability distribution and informs only player 1 of its choice. The informed player then transmits his private information through a noisy channel, after which actions are chosen and payoffs realized. We assume that there is a unique optimal play (âl1, âl2) ∈ A1 × A2 in each Πl, and that players communicate by using a discrete memoryless noisy channel repeatedly, n times. A discrete channel is a system consisting of an input alphabet X, an output alphabet Y, and a probability transition matrix p(y|x) that expresses the probability of observing the output symbol y given that the symbol x was sent. We establish a simple deterministic coding rule, the “block coding” rule. We then define a partition of the output set with the property that each set of the partition satisfies a closeness condition with respect to the feasible input sequences.
Although these coding/decoding rules are suboptimal for a finite number of repetitions of the channel, they provide a very simple procedure (polynomial in n) for designing noisy communication protocols. Thus, our protocol is much less complex than checking all pairs of feasible coding/decoding rules to find the optimal ones, yet it still allows the players to transmit enough information to achieve coordination outcomes.
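
A minimal Python sketch of block coding over a noisy channel; it uses a plain repetition code with majority decoding over a binary symmetric channel, which is a generic stand-in, not the specific protocol constructed in the paper.

import random

def bsc(bits, flip_prob, rng):
    """Binary symmetric channel: each input bit is flipped with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def encode_state(state, n_states, reps):
    """Write the state index in binary and repeat every bit `reps` times."""
    width = max(1, (n_states - 1).bit_length())
    bits = [(state >> k) & 1 for k in range(width)]
    return [b for b in bits for _ in range(reps)]

def decode_state(received, n_states, reps):
    """Majority-decode each block of `reps` repetitions back into one bit."""
    width = max(1, (n_states - 1).bit_length())
    bits = [int(sum(received[k * reps:(k + 1) * reps]) * 2 > reps) for k in range(width)]
    return sum(b << k for k, b in enumerate(bits))

if __name__ == "__main__":
    rng = random.Random(1)
    n_states, reps, flip_prob, trials = 4, 9, 0.2, 10000
    coordinated = 0
    for _ in range(trials):
        state = rng.randrange(n_states)   # nature's choice, observed by player 1 only
        noisy = bsc(encode_state(state, n_states, reps), flip_prob, rng)
        coordinated += decode_state(noisy, n_states, reps) == state
    print("coordination rate:", coordinated / trials)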

Johannes Rene Van den Brink

Free University Amsterdam

  Thursday, July 13, 4:00, Session A

Characterisations of the Beta- and the Degree Network Power Measure

(joint work with Peter Borm, Ruud Hendrickx, Guillermo Owen)

Abstract

The purpose of this paper is to measure the power or control of positions in symmetric networks represented by undirected graphs. For every symmetric network we define a cooperative transferable utility game that measures the worth or power of coalitions of positions. In the cooperative game theoretic tradition, we take a conservative approach to measuring the worth of a coalition, assigning to every coalition of positions the number of neighbours in that coalition that have no neighbours outside that coalition. Applying the Shapley value (Shapley (1953)) to this network power game yields the beta-measure, which is discussed in van den Brink and Gilles (2000) and van den Brink and Borm (2002) as a network power measure for asymmetric networks. Applied to symmetric networks, the beta-measure distributes the weight of each position equally among its neighbours. We provide an axiomatic characterisation of the beta-measure using six properties.

As mentioned before, the idea behind the beta-measure is that each position in a network has an initial weight equal to 1, and measuring power is seen as fairly redistributing this weight to its neighbours. Instead of taking initial weights equal to 1, it seems natural to take weights that already reflect some power of the positions. In this way one obtains weighted beta-measures. Similarly to what Borm, van den Brink and Slikker (2002) do for asymmetric networks, we consider a sequence of weighted beta-measures. Starting with the (unweighted) beta-measure, we compute in each step a new weighted beta-measure, taking the outcome of the previous step as the input weights. This sequence of measures has a limit, which equals the well-known degree measure assigning to every position its number of direct neighbours.
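
A small numerical sketch of this iteration in Python (the network is an arbitrary example, and the update rule, each position passing its current weight in equal shares to its neighbours, is one reading of the weighted beta-measure):

def weighted_beta_step(weights, neighbours):
    """One step: every position redistributes its current weight equally among its neighbours."""
    new = {i: 0.0 for i in neighbours}
    for i, nbrs in neighbours.items():
        share = weights[i] / len(nbrs)
        for j in nbrs:
            new[j] += share
    return new

if __name__ == "__main__":
    # A small symmetric network: position 1 is linked to 2, 3 and 4; 2 is also linked to 3.
    neighbours = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2], 4: [1]}
    weights = {i: 1.0 for i in neighbours}      # every position starts with weight 1
    for _ in range(200):
        weights = weighted_beta_step(weights, neighbours)
    degrees = {i: len(nbrs) for i, nbrs in neighbours.items()}
    scale = sum(degrees.values()) / sum(weights.values())
    print("rescaled limit:", {i: round(w * scale, 3) for i, w in weights.items()})
    print("degrees:       ", degrees)

On this example the rescaled limit matches the degree of each position, in line with the convergence to the degree measure described above.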

Besides characterizing the degree measure as the limit of the weighted beta-measures, we provide an axiomatic characterisation similar to that of the beta-measure, where the only difference is the normalization.

Vincent Vannetelbosch

CORE

  Monday, July 10, 4:40, Session B

Farsightedly Stable Networks    [pdf]

(joint work with Jean Jacques Herings, Ana Mauleon and Vincent Vannetelbosch)

Abstract

We propose a new concept, the pairwise farsighted stable set, in order to predict which networks may be formed among farsighted players. A set of networks G is pairwise farsighted stable if (i) all possible pairwise deviations from any network g belonging to G are deterred by the threat of ending up worse off or equally well off, (ii) there exists a farsighted improving path from any network outside the set leading to some network in the set, and (iii) there is no proper subset of G satisfying conditions (i) and (ii). We show that a pairwise farsighted stable set always exists, and we provide a necessary and sufficient condition for the existence of a unique pairwise farsighted stable set consisting of a single network. We find that the pairwise farsighted stable sets and the set of strongly efficient networks, those which are socially optimal, may be disjoint if the allocation rules have nice properties. Finally, we study the relationship between pairwise farsighted stability and other concepts such as the largest consistent set.

Felix Vardy

International Monetary Fund

  Friday, July 14, 5:15, Session D

The Value of Commitment in Contests and Tournaments when Observation is Costly    [pdf]

(joint work with John Morgan)

Abstract

We study the value of commitment in sequential contests when the follower faces small costs to observe the leader's effort. We show that the value of commitment vanishes entirely in this class of games. By contrast, in sequential tournaments, games where, at a cost, the follower can observe the effectiveness of the leader's effort, the value of commitment is preserved completely provided that the observation costs are sufficiently small.

Silvinha Pinto Vasconcelos

Federal University of Rio Grande

  Friday, July 14, 4:50, Session E

Design of contracts by the Brazilian antitrust authority: the case of the cease-and-desist commitment (CCP)    [pdf]

(joint work with Francisco de Sousa Ramos)

Abstract

The cease-and-desist commitment (CCP, a mechanism equivalent to a consent decree in the United States) is an agreement between the Administrative Council for Economic Defense (CADE) and an anticompetitive firm under which the firm commits to cease an anticompetitive practice for a certain period of time. During this agreement, the lawsuit is withdrawn. If the firm does not respect the CCP, fines and reputation sanctions can be applied. Considering that the use of the CCP is still new in Brazil, as is the literature on the theme, the objective of this paper is to analyze the conditions under which a firm signs the CCP, in a game with incomplete information. The results indicate that the firm should comply with the CCP if the loss of reputation and the fines are large enough and the profits from the infraction are small relative to normal profits; that the antitrust authority should offer the CCP when the benefits of this proposal are greater than the losses of the firm; and that the antitrust authority should offer the CCP when it believes that the firm is of the low-cost type.

Cori Vilella

Universitat Rovira i Virgili

  Tuesday, July 11, 3:15, Session D

Strong constrained egalitarian allocations: How to find them    [pdf]

(joint work with Francesc Llerena and Carles Rafels)

Abstract

This paper provides a geometric decomposition theorem for the strong Lorenz core (Dutta and Ray, 1991). As a consequence, we characterize the existence of the set of strong constrained egalitarian allocations and define an algorithm to find, for any TU game, the strong constrained egalitarian allocations. Moreover, we characterize the connectivity of the strong Lorenz core.

Bernhard Von Stengel

London School of Economics

  Thursday, July 13, 9:45

Games, Geometry and Finding Equilibria

Abstract

Game-theoretic problems have found massive recent interest in computer science. Obvious applications arise from the internet, for example the study of online auctions. In theoretical computer science, some of the most intriguing open problems concern the time that an algorithm needs to find a Nash equilibrium of a game. We give a survey of these open problems, which depend on the type of game considered. For bimatrix games, Nash equilibria are best understood geometrically, but recent results seem to make it unlikely that a "polynomial-time" algorithm for finding a Nash equilibrium exists. Zero-sum "simple stochastic games", on the other hand, are likely to be solvable in polynomial time, but a corresponding algorithm continues to be elusive. The talk is directed at a general audience, not at experts in computational complexity.

Maja Vujovic

Faculty of Economics

  Monday, July 10, 11:20, Session C

Strategic Decision-Making Using Game Theory    [doc]

(joint work with Nikola Dacic)

Abstract

This work explores the applicability of game theory to decision making. Many aspects of strategy can be studied and systematized with game theory. Game theory offers managers the ability to recognize the similarities between simple games and many complicated business situations, and the games illustrate general principles of behavior. Managers rarely make decisions in a vacuum, and their choices depend on the choices made by others. Game theory therefore offers a systematic way of analyzing strategic decision making in interactive situations, and it improves the ability to think strategically in complex, interactive settings. In many everyday situations managers hold different information, and in some games concealed information plays a crucial role. Game theory provides tools for the formal analysis of situations where decision makers have conflicting or partially conflicting interests. This work reviews earlier game-theoretic studies and presents a general pattern for game-theoretic modeling of real business situations. Finally, we propose some models that can improve the efficiency of the decision-making process.

Jana Vyrastekova

Tilburg University

  Wednesday, July 12, 4:25, Session C

Coalition formation in a common pool resource game: An experiment

(joint work with Yukihiko Funaki and Daan van Soest)

Abstract

We present experimental data on coalition formation in a social dilemma game. Subjects play the underlying common pool resource game while having the possibility to commit themselves to an action profile in the coalition they form. The coalition formation process follows the rules of the continuous-time game of Perry and Reny (Econometrica 1994), designed to implement the core of a cooperative TU game. We observe that subjects predominantly form the grand coalition and agree on the efficient and egalitarian payoff vector. When the option to form coalitions is removed, the incentives to free-ride are triggered again. Group welfare then drops significantly, as is known from experiments imposing individual decision-making.

Jun Wako

Gakushuin University

  Wednesday, July 12, 4:25, Session D

On a non-existence example of a wdom-vNM set in the Shapley-Scarf housing economy    [pdf]

(joint work with Kiiko Matsumoto, Toshiharu Irisawa)

Abstract

We consider the exchange economy E of Shapley and Scarf (1974). Economy E has a finite number of agents, each endowed with one differentiated object, such as a house. Agents exchange objects to obtain preferable ones. Each agent needs exactly one object, and his preference ordering over the objects may contain indifferences. No monetary transfers are allowed.

For economy E, a von Neumann-Morgenstern (vNM) set defined by strong domination may not exist even if the core is nonempty. On the other hand, the strict core is the unique vNM set under weak domination (wdom-vNM set) whenever it is nonempty. Roth and Postlewaite (1977) proved this property assuming strict preferences; Wako (1991) proved it while allowing indifferences. However, if indifferences are allowed, the strict core may be empty. For such cases, it was unknown whether a wdom-vNM set always exists in economy E.

We give an example in which each feasible allocation is individually rational (IR) and no wdom-vNM set exists. A nice property of vNM sets enabled us to find the example in much less time than a full check would require. Let X be the set of Pareto efficient allocations of a given example, and X' the set obtained from X by removing each allocation that does not weakly dominate any allocation in X. We can show that a wdom-vNM set exists in X if and only if a wdom-vNM set exists in X', and that each wdom-vNM set in X can be recovered from the wdom-vNM sets in X'. Applying this property to X iteratively, we could examine the existence of a wdom-vNM set by checking a reduced subset of X. Konishi, Quint and Wako (2001) considered extended models of economy E and showed examples with empty cores; their examples have no wdom-vNM sets. The existence or non-existence of a wdom-vNM set in the original economy E was thus a remaining question, which is answered by our example. Our investigation also found an example with a unique wdom-vNM set that consists only of allocations that are not IR.
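
A generic brute-force sketch, in Python, of the two checks involved (stated for an abstract weak-dominance relation on a finite outcome set, with a made-up relation; it is not the housing-economy computation itself, and the existence equivalence above is the paper's result for its setting, not a general property of this reduction):

from itertools import combinations

def is_vnm_stable(candidate, outcomes, dominates):
    """Internal stability: no outcome in the set dominates another one inside.
    External stability: every outcome outside the set is dominated from inside."""
    inside = set(candidate)
    internal = not any(dominates(x, y) for x in inside for y in inside if x != y)
    external = all(any(dominates(x, z) for x in inside)
                   for z in outcomes if z not in inside)
    return internal and external

def vnm_stable_sets(outcomes, dominates):
    outcomes = list(outcomes)
    return [set(c) for r in range(1, len(outcomes) + 1)
            for c in combinations(outcomes, r)
            if is_vnm_stable(c, outcomes, dominates)]

def reduce_outcomes(outcomes, dominates):
    """Drop every outcome that does not weakly dominate any other outcome."""
    return [x for x in outcomes if any(dominates(x, y) for y in outcomes if y != x)]

if __name__ == "__main__":
    # Toy relation: a 3-cycle a > b > c > a plus an isolated outcome d.
    edges = {('a', 'b'), ('b', 'c'), ('c', 'a')}
    dom = lambda x, y: (x, y) in edges
    outcomes = ['a', 'b', 'c', 'd']
    reduced = reduce_outcomes(outcomes, dom)
    print("reduced set:", reduced)
    print("stable sets in the full set:   ", vnm_stable_sets(outcomes, dom))
    print("stable sets in the reduced set:", vnm_stable_sets(reduced, dom))

In this toy relation neither the full set nor the reduced set admits a stable set, which is exactly the kind of non-existence the reduction is meant to detect with fewer checks.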

Fan Wang

Stony Brook University

  Wednesday, July 12, 10:55, Session D

Social Learning and the Role of Authority

Abstract

Learning and knowledge sharing are important functions of a society. Modern societies have made significant progress in scientific exploration and have developed sophisticated social institutions. However, the journey to understand physical and human nature will never end. Along the way, existing knowledge serves as our framework, and it is also challenged by new ways of thinking. We are in a constant state of confusion, trying to re-distinguish reality from fiction both as individuals and as a society. Authority opinion represents the status quo of knowledge. In this paper, social learning is modeled as an evolutionary game. It shows that, on the one hand, society can function more efficiently by identifying capable individuals and letting laymen rely on their opinions as authorities. On the other hand, this social structure of knowledge is liable to bias, due either to erroneous thinking or to intentional intellectual distortion by this select few.

Megha Watugala

Texas A&M University

  Monday, July 10, Session A

First-Best Allocations & the Signup Game: A New Look at Incomplete Information    [pdf]

Abstract

Mechanism design problems are sensitive to the solution concept used. It is well known that by assuming a dominant strategy solution concept for principal-agent problems with incomplete information, the principal can extract rent up to the second-best allocations. This corresponds to second-degree price discrimination in monopolies with incomplete information.
Piketty (JET, 1993) shows that for a finite agent population, if the agents' underlying distribution is known, a mechanism (a game) can be designed whose solution yields the first-best allocations while the principal extracts all the rent. The solution can be derived through iterative elimination of strictly dominated strategies and is applicable to any principal-agent problem.
This paper, through the problem of monopolies with incomplete information, shows that in a finite agent population, if the agents' realized distribution is one from a set of possible distributions that do not (first-order) stochastically dominate each other, the principal can construct a sign-up game whose Pareto-superior equilibrium has the principal extracting the rent of the first-best allocations whenever any of the possible distributions is realized. Furthermore, this equilibrium results if agents play weakly dominant strategies from the set of strategies that survive iterative elimination of weakly dominated strategies. The result gives interesting insight into the incompleteness of the possible information sets available to the principal.

Jonathan Weinstein

Northwestern

  Thursday, July 13, 11:45, Session A

Two Notes on the Blotto Game    [pdf]

Abstract

We exhibit a new equilibrium of the classic Blotto game, in which players allocate one unit of resources among three coordinates and try to defeat their opponent on two out of three. This game has often been used as a simple model of electoral competition or warfare. It is well known that a mixed strategy is an equilibrium strategy if the marginal distribution on each coordinate is U[0, 2/3]. All known examples of such distributions have two-dimensional support. Here we exhibit a distribution that has one-dimensional support and is simpler to describe than previous examples. The construction generalizes to give one-dimensional distributions with the same property in higher-dimensional simplexes as well.
As our second note, we give some results on the equilibrium payoffs when the game is modified so that one player has greater available resources. Our results suggest a criterion for equilibrium selection in the original symmetric game, in terms of robustness with respect to a small asymmetry in resources.

Uri Weiss

Tel Aviv

  Monday, July 10, 11:20, Session B

The Regressive Effect of Legal Uncertainty    [pdf]

Abstract

Legal uncertainty has a regressive distributive effect. There are parties who gain from increasing legal uncertainty and parties who lose from it. Legal uncertainty leads to regressive settlements: a shift from a certain legal regime to an uncertain one transfers wealth from risk-averse parties to risk-neutral parties via the settlement. Thus, since poor people are more risk averse than rich people, legal uncertainty leads to a transfer of wealth from the poor to the rich. Also, since women are, at least perceived to be, more risk averse than men, legal uncertainty leads to a transfer of wealth from women to men. This means that legal uncertainty has both a class-regressive and a gender-regressive effect.

Andreas Westermark

 

  Monday, July 10, 10:55, Session B

Bargaining with Externalities    [pdf]

(joint work with Jonas Björnerstedt)

Abstract

This paper studies bargaining between a seller and multiple buyers with externalities. A full characterization of the stationary subgame perfect equilibria in generic games is presented. Equilibria exist for generic parameter values, with delay only for strong positive externalities. The outcome is efficient if externalities are not too positive. Increasing the bargaining power of the seller decreases the set of parameter values for which only efficient equilibria exist.
The paper generalizes the model presented in Jehiel & Moldovanu (1995a and 1995b). Where they find delay equilibria, we find mixed equilibria, except for a region where no stationary equilibria exist. These mixed equilibria entail no delay, and the equilibrium strategies converge to pure strategies as the discount factor approaches one. We show existence of stationary equilibria under a reasonable restriction on parameters. We find delay with strong positive externalities, due to a hold-up problem. All equilibria without delay have the property that, in the limit, agreement is reached with a specific buyer.

Myrna Wooders

Vanderbilt University and University of Warwick

  Monday, July 10, 12:10, Session B

Correlated Subjective Equilibrium with Stereotyping

(joint work with Edward Cartwright)

Abstract

Stereotyping of groups of individuals by other individuals appears to be a common phenomenon. For example, statements such as "rednecks drive trucks", "the upper classes are snobs", and "women are more concerned about their appearance than men" all equate the behavior of individuals who are, in some perhaps focal respect(s), similar. We leave it to the reader to recall statements he may have heard that stereotype the behavior of members of religious or ethnic groups.

This presentation will introduce a notion of subjective correlated equilibrium with stereotyping and show that 'near' to any set of correlated equilibrium beliefs (where the beliefs of different individuals may be based on different correlated equilibria) there is an approximate subjective correlated equilibrium with stereotyping. With some, perhaps arguably mild, additional restrictions we also demonstrate existence of exact subjective correlated equilibrium with stereotyping.

This paper is based, in part, on another paper, "Conformity, correlation and equity," by the same authors, which contains additional results on correlated equilibrium and conformity. A version of that paper is available online at
http://www.vanderbilt.edu/Econ/wparchive/working05.html
(Vanderbilt Working Paper #05-W26).
See also Cartwright and Wooders, University of Warwick Working Paper 687,
http://www2.warwick.ac.uk/fac/soc/economics/research/papers/2003_publications/

Huan Xie

University of Pittsburgh

  Tuesday, July 11, 3:40, Session C

Repeated Bargaining under Uncertainty of Value Distribution    [pdf]

Abstract

This paper investigates a repeated bargaining problem in which the seller does not know whether the buyer's value is drawn from a favorable or an unfavorable distribution. The seller rents a durable good to the buyer and proposes a take-it-or-leave-it offer in each period. In the two-period case, we find that there exist multiple equilibria and that the set of equilibria depends critically on the seller's prior. In particular, when the seller's prior on the favorable distribution is high, the seller always charges a high price in both periods and the high-type buyer in each period accepts the offer. In a finite horizon, a similar equilibrium exists, with the required prior contingent on the horizon. This result weakens the findings of Hart and Tirole's (1988) rental model and of gap-case Coasian dynamics, according to which the monopolist can hardly earn a profit higher than the lowest value of the buyer.

Hadi Yektas

University of Pittsburgh

  Friday, July 14, 11:20, Session A

Optimal Multi-Object Auctions with Risk Averse Buyers    [pdf]

(joint work with Cagri S. Kumru)

Abstract

We analyze the optimal auction of multiple non-identical objects when buyers are risk averse. We show that when only the downward incentive constraints matter, independent auctions are never optimal. This result contrasts sharply with the risk-neutral environment (Armstrong, 2000). On the other hand, the optimal auction remains weakly efficient, in the sense that each object is sold to a buyer who is eager for it (has a high valuation), if such a buyer exists. We further describe the properties of the optimal auction: the seller perfectly insures all buyers against the risk of losing the object(s) for which they have a high valuation. While buyers who are eager to win both objects are compensated if they win neither object, buyers who are reluctant (have a low valuation) for both objects make a positive payment in the same event.

Shmuel Zamir

Hebrew University of Jerusalem

  Friday, July 14, 2:45

Playing Against the Field and ``Visibility'' of Mixed Strategies

(joint work with Rosemarie Nagel, Ingrid M.T. Rohde)

Abstract

We present an experimental design in the spirit of population games, in which a player plays simultaneously against a whole population (the field) of players with unknown individual identities. We tested this design in 2x2 games with a unique Nash equilibrium in mixed strategies. While the hypothesis that individual players play i.i.d. mixed strategies is rejected, the mixed strategies become 'visible' when aggregating over the population. Convergence to mixed-strategy play, often very close to the Nash equilibrium of the game, is 'clearer' and faster than in results reported in the literature.

Andriy Zapechelnyuk

The Hebrew University

  Tuesday, July 11, 3:40, Session E

Optimal Mechanisms for an Auction Mediator    [pdf]

(joint work with Alexander Matros)

Abstract

We consider a multi-period auction with a seller who has a single object for sale, a large population of potential buyers, and a mediator of the trade. The seller and every buyer have independent private values for the object. The mediator designs an auction mechanism that maximizes her revenue subject to certain constraints for the traders. In each period the seller auctions the object to a set of buyers drawn at random from the population. The seller can re-auction the object (infinitely many times) if it is not sold in previous interactions. We characterize the class of mediator-optimal auction mechanisms. One such mechanism is a Vickrey auction with a reserve price in which the seller pays the mediator a fixed percentage of the closing price.
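
For concreteness, a minimal Python sketch of the mechanism mentioned in the last sentence; the reserve price, the fee rate and the bid values below are made-up illustrative numbers, not parameters derived in the paper.

def vickrey_with_reserve_and_fee(bids, reserve, fee_rate):
    """Second-price (Vickrey) auction with a reserve price; the mediator keeps a
    fixed percentage of the closing price and the seller receives the rest.
    Returns (winner index or None, price, mediator fee, seller revenue)."""
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return None, 0.0, 0.0, 0.0   # no sale this period; the seller may re-auction
    eligible.sort(reverse=True)
    winner = eligible[0][1]
    # The winner pays the larger of the reserve and the second-highest eligible bid.
    price = max(reserve, eligible[1][0]) if len(eligible) > 1 else reserve
    fee = fee_rate * price
    return winner, price, fee, price - fee

if __name__ == "__main__":
    print(vickrey_with_reserve_and_fee([0.35, 0.80, 0.62], reserve=0.5, fee_rate=0.1))
    # Buyer 1 wins and pays 0.62; the mediator keeps roughly 0.062 and the seller the rest.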

Richard Zeckhauser

Harvard University

  Tuesday, July 11, 2:50, Session D

The Elasticity of Trust: Evidence from Kuwait, Oman, Switzerland, the United Arab Emirates and the United States    [pdf]

(joint work with Iris Bohnet, Benedikt Herrmann)

Abstract

How effective are arrangements that increase the expected returns from trusting—either reducing the cost or the likelihood of betrayal—for fostering trust in three Gulf countries (Kuwait, Oman and the United Arab Emirates) and two Western countries (Switzerland and the United States)? In experimental studies, trust proves more elastic to the likelihood or the cost of betrayal in the West than in the Gulf. In order to trust, participants in the Gulf require greater likelihoods of trustworthiness than do Westerners, and they hardly adjust when the returns from trusting increase. Risk and betrayal aversion contribute to these cross-regional differences.

Lingling Zhang

McGill University

  Friday, July 14, 4:25, Session D

Bidding and Coalition Formation in Environments with Externalities    [pdf]

(joint work with Licun Xue)

Abstract

This paper proposes several multilateral negotiation mechanisms for studying simple coalition formation problems with externalities (see, e.g., Bloch (1996) and Gomes and Jehiel (2005)) where each agent’s payoff depends not only on the coalition he/she belongs to but also on how others partition themselves. We analyze and compare the efficiency properties of the equilibrium outcomes in these mechanisms.

As in Bloch (1996), we assume that coalitions form sequentially and that, once a coalition forms, it cannot dissolve, nor can its members forge new coalitions with the rest of the agents. However, we depart from Bloch's assumptions that agents move in an exogenously given order and that no transfer payments are possible in the process of coalition formation. In particular, we study two types of bidding mechanisms. In the first, agents bid for the right to propose; in the second, agents bid over proposals, each of which comprises a coalition that is to form and transfers among the agents.

We prove that the sequential coalition formation game associated with each of our mechanisms admits a Markov perfect equilibrium, which is in contrast to Bloch (1996) where a Markov perfect equilibrium may fail to exist. Moreover, each of our games has a dynamically efficient equilibrium that maximizes the total present value.

Ben Zissimos

Vanderbilt University

  Thursday, July 13, 12:10, Session E

Foundations of One World Market    [pdf]

Abstract

In an international trading economy where countries behave strategically, this paper provides sufficient conditions under which all countries choose to trade on one `world' market. The paper shows that the economic structure underlying a standard strategic trade model corresponds to a Shapley-Shubik market game. Using new results from monotone comparative statics in a Shapley-Shubik market game, replication of such an international trading economy is studied. Sufficient conditions are established under which all countries in the replica economy would choose to trade with the countries in the original economy.
