Speakers

Christina Achampong

Penn State University

The Effect of Belief on Performance and of Encounter History on Beliefs in Hawk-Dove Competitions

(joint work with Christopher Byrne)

Abstract

In earlier work, Byrne and Kurland modeled self-deception as an unconscious mechanism for resolving conflict between competing decision algorithms in a Minsky-style modular mind model. A plausible, if high-level, model was used to demonstrate that two types of self-deception, ignoring the value of a resource (suppressing hunger) or ignoring the cost of a fight (suppressing fear), could on average be fitness-enhancing in hawk-dove encounters even though they carried the risks of needlessly sacrificing the resource or engaging in a fight with a far stronger opponent. The fitness benefit in this earlier work derived solely from the increased persuasiveness of a bluffer who believes his own bluff: if a fight ensued, a player’s fearlessness did not help him win.

In the present work, this model is extended by allowing players’ beliefs to affect their performance, either positively or negatively. A non-trivial treatment of belief effects on performance must model some limitation on beliefs, or everyone would simply believe in victory all the time. The present work models a “capacity for faith”: a ceiling on belief in victory, unique to each player, that adapts dynamically to the player’s history of encounters within a lifetime, or generation, in an evolutionary context. If the outcome of an encounter reinforces the faith, e.g. if a player believes he will win and he does, then his capacity for faith in victory increases; whereas if he believes he will win and he loses, his capacity for faith decreases and in the next contest he will not be capable of as much faith in victory, even when suppressing fear due to self-deception. The rate at which the capacity for faith increases or decreases is controlled by an adaptive learning damping sequence, and the model is structured to allow experiments on the choice of damping sequence as well as other parameters.
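
For illustration only (not the authors' implementation): a minimal Python sketch of how such a belief ceiling might be updated after each encounter, where the damping sequence and all names are assumptions introduced here.

def damping(t):
    # Hypothetical adaptive learning damping sequence (e.g. 1/(t+1));
    # the abstract treats its choice as an experimental parameter.
    return 1.0 / (t + 1)

def update_capacity(capacity, believed_win, won, t):
    # Raise the ceiling on belief in victory when the belief is reinforced,
    # lower it when the belief is contradicted; keep it within [0, 1].
    if believed_win and won:
        capacity += damping(t) * (1.0 - capacity)
    elif believed_win and not won:
        capacity -= damping(t) * capacity
    return min(max(capacity, 0.0), 1.0)

Belief in victory in the next contest would then be capped at the returned capacity, even when fear is suppressed through self-deception.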

Josune Albizuri

Basque Country University

Values and Coalition Configurations    [pdf]

(joint work with Juan Vidal-Puga)

Abstract

Albizuri et al. (2006) considered the concept of a coalition configuration to model some negotiations, defining a coalition configuration as a family of coalitions, not necessarily disjoint, whose union is the grand coalition. They generalized the Shapley value to coalition configurations and, simultaneously, the notion proposed by Owen (1977); in fact they obtained two generalizations of the Owen value. In both cases it is supposed that a player can cooperate in as many coalitions as he wants, and there is no restriction on these cooperations depending on the coalitions a player belongs to. Furthermore, there is only one possibility of cooperation once a player belongs to a coalition.
In this paper our aim is to consider the above issues, and in this way we give an alternative generalization of the Shapley value and of the Owen value. In fact we obtain two families of values, the second family being the dual of the first. We obtain them via a probabilistic approach based on orderings of the players, and we include an axiomatic system for each family. We characterize the new values by adapting Owen’s (1977) axioms and adding some other specific axioms; one axiom is related to the Merger axiom employed by Albizuri et al. (2006). We also develop the new values by following Owen’s heuristic procedure of considering the bargaining among the coalitions and within each coalition.

Attila Ambrus

Harvard University

Hierarchical cheap talk    [pdf]

(joint work with Eduardo Azevedo and Yuichiro Kamada)

Abstract

We investigate situations in which agents can only communicate to each other through a chain of intermediators, for example because they have to obey institutionalized communication protocols. We assume that everyone involved in the communication is strategic and might want to influence the action taken by the final receiver. The set of outcomes that can be induced in pure strategy perfect Bayesian Nash equilibrium is a subset of the equilibrium outcomes that can be induced in direct communication, characterized by Crawford and Sobel (1982). Moreover, the set of supportable outcomes in pure equilibria is monotonic in each intermediator's bias, and the intermediator with the largest bias serves as a bottleneck for the information flow. On the other hand, there can be mixed strategy equilibria of intermediated communication that ex ante Pareto-dominate all equilibria in direct communication, as mixing by an intermediator can relax the incentive compatibility constraints on the sender. We provide a partial characterization of all mixed strategy equilibria, and show that the order of intermediators matters with respect to mixed equilibria, as opposed to pure strategy ones.

Rabah Amir

Université catholique de Louvain

Network Effects, Market Structure and Industry Performance    [pdf]

(joint work with Natalia Lazzati)

Abstract

This paper provides a thorough analysis of oligopolistic markets with positive demand-side network externalities and perfect compatibility. The minimal structure imposed on the model primitives is such that industry output increases in a firm’s rivals’ total output as well as in the expected network size. This leads to a generalized equilibrium existence treatment that includes guarantees for a nontrivial equilibrium, and some insight into possible multiplicity of equilibria.

We formalize the concept of industry viability and show that it is always enhanced by having more firms in the market. We also characterize the effects of market structure on industry performance, with an emphasis on departures from standard markets. As per-firm profits need not be monotonic in the number of competitors, we revisit the concept of free entry equilibrium for network industries. The approach relies on lattice-theoretic methods, which allow for a unified treatment of various general results in the literature on network goods. Several illustrative examples with closed-form solutions are also provided.

Salomon Antoine

LAGA Université Paris 13

Large Bandit Games    [pdf]

Abstract

We study a multi-player one-arm bandit game: for infinitely many stages, players choose between playing a risky action and dropping out irreversibly to a safe action. Each player observes only her own payoffs and the other players' actions. We study equilibria of the game as the number of players gets large. We argue that the limit equilibrium can exhibit aggregate randomness, and we provide a characterization of the games in which players' behavior leads to a swift determination of the value of the risky action.

Itai Arieli

Hebrew University of Jerusalem

Rationalizability in Continuous Games    [pdf]

Abstract

Define a continuous game to be one in which every player's strategy set is a Polish space and the payoff function of each player is bounded and continuous. We prove that in this class of games the process of sequentially eliminating "never-best-reply" strategies terminates no later than the first uncountable ordinal, and that this bound is tight. We also examine the connection between this process and common belief of rationality in the universal type space of Mertens and Zamir.

Georgy Artemov

University of Melbourne

Finitely Repeated Bilateral Trade    [pdf]

(joint work with Sergei Guriev, Dmitriy Kvasov)

Abstract

We study a bilateral trade problem that is repeated finitely many times. In each period a seller may sell a buyer one unit of an indivisible good; the valuations for the goods are independent both across agents and across periods. We assume that the budget must be balanced in every period. After each period, either player can refuse the exchange; thus, we impose \textit{per-period ex-post} IR constraints. In the last period of the relationship, imposing budget balance, IC and IR leads to inefficient trade. However, in any period but the last, the agents value the future relationship. The ex-ante surplus that this relationship generates enters the per-period ex-post IR constraint, relaxing it and allowing for more trade in every round but the last. We show that if the relationship lasts long enough, trade in the first periods is fully efficient. Our result does not rely on the assumption that, if an agent deviates and does not follow the prescription of the mechanism, her future surplus is set to zero. In fact, we assume that agents are able to return to the equilibrium path by the start of the next period in case of a deviation. However, the mechanism sets a disagreement point for the deviation: it prescribes that the continuation play after the deviation is take-it-or-leave-it offers, made by the party who has not deviated, until the end of the relationship. As the ex-ante surplus from take-it-or-leave-it offers is higher for the party making the offers than the ex-ante surplus from an efficient mechanism, that party needs to be compensated to return to the equilibrium path. This compensation provides the ex-ante surplus that enters the per-period ex-post IR constraint and allows higher levels of trade to be supported.
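
Schematically, in notation introduced here rather than the authors': writing u_i for agent i's within-period payoff, \delta W_i for her continuation surplus on the equilibrium path, and \delta \underline{W}_i for her continuation surplus under the post-deviation take-it-or-leave-it regime, the relaxed per-period ex-post IR constraint in a non-terminal period can be sketched as

\[ u_i(\text{trade in period } t) + \delta W_i \;\ge\; u_i(\text{no trade in period } t) + \delta \underline{W}_i , \]

so the larger the gap W_i - \underline{W}_i, the more within-period trade can be supported.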

Helena Aten

Georgetown University

Competing Informed Principals and Representative Democracy    [pdf]

Abstract

This paper proposes a model in which representative democracy can be preferable to direct democracy. Voters are uninformed about the value of a policy-relevant state. Two informed politicians compete for votes by committing to platforms that may or may not reveal information about the underlying state.

We find that if voters' policy preferences are not too sensitive to changes in the state, then the two politicians offer divergent policy platforms. In addition, our main result characterizes Perfect Bayesian Equilibria in which the offered platforms are non-revealing menu contracts, and the resulting welfare is higher than in any separating equilibrium. The result may be viewed as a welfare explanation for why voters may defer policy choices to an elected representative rather than directly select policy based on the information revealed by the political competition itself.

Yaron Azrieli

The Ohio State University

Characterization of Multidimensional Spatial Models of Elections with a Valence Dimension    [pdf]

Abstract

Spatial models of political competition are typically based on two assumptions. One is that all the voters identically perceive the platforms of the candidates and agree about their score on a ``valence'' dimension. The second is that each voter's preferences over policies are decreasing in the distance from that voter's ideal point, and that valence scores enter the utility function in an additively separable way.

The goal of this paper is to examine the restrictions that these two assumptions impose, starting from more primitive (and observable) data. Specifically, we consider the case where only the ideal point in the policy space and the ranking over candidates are known for each voter. We provide necessary and sufficient conditions for this collection of preference relations to be consistent with utility maximization as in the standard models described above. That is, we characterize the case where there are policies x_1,...,x_m for the m candidates and numbers v_1,...,v_m representing valence scores, such that a voter with an ideal policy y ranks the candidates according to v_i-||x_i-y||^2.
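
As a minimal illustration of the quadratic specification quoted above (function and variable names are ours, not the paper's), the following Python sketch checks whether one voter's reported ranking is consistent with given candidate positions and valence scores:

import numpy as np

def ranking_consistent(ideal, positions, valences, reported_ranking):
    # Score each candidate i by v_i - ||x_i - y||^2 for a voter with ideal point y,
    # then compare the implied ordering (best first) with the reported one.
    scores = [v - float(np.sum((np.asarray(x) - np.asarray(ideal)) ** 2))
              for x, v in zip(positions, valences)]
    implied = sorted(range(len(scores)), key=lambda i: -scores[i])
    return implied == list(reported_ranking)

The paper's question is the converse: given only ideal points and rankings for all voters, when do positions x_1,...,x_m and valences v_1,...,v_m exist that make every such check succeed.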

Yakov Babichenko

Hebrew University of Jerusalem, Center for the Study of Rationality.

Completely Uncoupled Dynamics and Nash Equilibrium    [pdf]

Abstract

A completely uncoupled dynamic is a repeated play of a game in which, at any given time, the action of every player depends only on his own past payoffs. In this paper we try to formulate the minimal set of conditions necessary to guarantee convergence to a Nash equilibrium in a completely uncoupled model.
The main results are:
1. Convergence to a Nash equilibrium cannot be guaranteed with finite-memory strategies, even in generic games.
2. Convergence to an ε-Nash equilibrium almost all of the time can be guaranteed with finite-memory strategies in generic games.

Paulo Barelli

University of Rochester

On the Existence of Nash Equilibria in Discontinuous and Qualitative Games    [pdf]

(joint work with Idione Soza)

Abstract

We show that compact games have pure strategy Nash equilibria if conditions C and Q are satisfied. Condition C states that, whenever a profile of strategies x is not an equilibrium, there exists an open neighborhood V of x and well behaved maps f_i, one for each player i, mapping V to each player's strategies, and satisfying the following property: for any profile of strategies y in the neighborhood V, there exists one player i such that f_i(y) belongs to player i's strict upper contour set, while f_j(y) is unrestricted for players j other than i. For other profiles y' in V, the chosen player need not be player i. Condition Q is a weakening of own-strategy quasiconcavity. This result unifies and generalizes results establishing existence of pure strategy Nash equilibria in the literatures on discontinuous quasiconcave games and on qualitative convex games.

Sourav Bhattacharya

University of Pittsburgh

Preference Monotonicity and Information Aggregation in Elections    [pdf]

Abstract

If voter preferences depend on a noisy state variable, under what conditions do large elections deliver outcomes "as if" the state were common knowledge? While the existing literature models elections using the jury metaphor, where a change in information regarding the state induces voters to switch in favour of only one alternative, I allow for more general preferences where a change in information can induce a switch in favour of either alternative. I show that information is aggregated for any voting rule if and only if, for any change in information, the probability of a switch in favour of one alternative is strictly greater than the probability of a switch away from that alternative. In other words, unless preferences closely conform to the jury metaphor, for large classes of voting rules there are equilibria that produce outcomes different from the full-information outcome with high probability. This condition is very fragile and may be easily violated in spatial elections if the policy space is multidimensional. I conclude that state-contingent conflict in voter preferences may often lead to a failure of information aggregation.

David Laurens Bijl

Delft University of Technology

A Model of Consensus in the European Commission    [pdf]

(joint work with Scott W Cunningham)

Abstract

The European Union is an interesting field of study for game theorists, because of the impact of decisions made at that level, and the complexity of the institutions. Several models have been constructed to analyze the interactions within the Council of Ministers, and between the Council and other European decision-making bodies including the Commission, the Parliament, the European Council, the European Court of Justice, and lobbyists.

This paper adds to the current research by extending an existing model and applying it to the \emph{internal} decision making of the European Commission. If models of the Commission are to be linked with models of other European institutions, the ``factual" outcome may be as important to model as the payoffs. To accommodate this, our model extends the solution concepts of the core and $\varepsilon$-core to include the decision outcomes. For the Commission, these outcomes take the form of proposals for new legislation that are then accepted, rejected or amended by the Council (and in some cases, the Parliament).

The remainder of this paper is structured as follows. First, the proposed model is related to previous literature. The next section briefly describes the various European institutions and the features of the European Commission that make it a unique modeling object. Section 4 introduces the mathematical definitions of the model and our extension of the standard definitions of solution concepts. The last section offers conclusions about the presented model and a number of recommendations for further research.

Francis Bloch

Université catholique de Louvain

Dynamic assignment of durable objects    [pdf]

Abstract

We analyze the assignment of durable objects (positions, offices, dorm rooms) to successive generations of agents. Because agents have temporary property rights over the objects, the assignment mechanism must satisfy an individual rationality constraint, and the assignment process is dynamic. We first characterize fair assignment rules in a model with homogeneous agents, and show that the seniority and rank rules are the only fair rules satisfying a condition of independence. When agents are heterogeneous, we exhibit a conflict between efficiency and fairness, but show the existence of fair and efficient rules in dichotomic societies. We also analyze dynamically efficient assignments in a model with a continuum of types, and show that the planner always prefers to assign the object to older agents. The dynamically efficient assignment is characterized by a stationary selectivity rule, which measures the gap between the types of young and old agents for which goods are assigned to young agents.

Louis Boguchwal

University of St Andrews

A System for Modeling Strategy Change, Demonstrated with the Ultimatum Game    [pdf]

Abstract

A system has been created that provides a new framework for analyzing Ultimatum Games based on modeling strategy change throughout a game. It models strategy change of individuals, groups, or populations and characterizes dynamics by quantifying newly defined rates, accelerations, and stability indices of strategy change. To better model real situations, the system has been generalized to ultimatum bargaining games with multi-dimensional strategies. It also applies to experimental environments.

Steven Brams

New York University

The Undercut Procedure: An Algorithm for the Envy-Free Division of Indivisible Items    [pdf]

(joint work with D. Marc Kilgour and Christian Klamler)

Abstract

We propose a procedure for dividing indivisible items between two players in which each player ranks the items from best to worst and has no information about the other player’s ranking. It ensures that each player receives a subset of items that it values more than the other player’s complementary subset, given that such an envy-free division is possible. We show that the possibility of one player’s undercutting the other’s proposal, and implementing the reduced subset for himself or herself, makes the proposer “reasonable” and generally leads to an envy-free division, even when the players rank items exactly the same. Although the undercut procedure is manipulable, each player’s maximin strategy is to be truthful. Applications of the undercut procedure are briefly discussed.

Kristina Buzard

University of California, San Diego

Contracting Problems and the Technology of Trade: A Robustness Result with Application to Hold-Up    [pdf]

(joint work with Kristy Buzard and Joel Watson)

Abstract

Watson (2007) demonstrates the importance of modeling trade actions as individual instead of public for implementation in contractual settings with complete but unverifiable information, verifiable trade actions, and nondurable trading opportunities. This paper examines the robustness of the result by developing simple tools that allow easy calculation of the ``punishment values'' that determine the sets of implementable value functions and then providing general conditions under which the modeling choice leads to differences in the implementable sets. We show that, with a minimal amount of structure along the lines of what is typically assumed, every contractual setting has this property and so one must model the trade action as individual in order to correctly characterize the set of implementable value functions for a wide array of settings. However, we find that although the implementable set under ex-post renegotiation is generally larger when one models trade actions as individual instead of public, the additional implementable outcomes do not necessarily include the efficient outcome and so hold-up remains an obstacle to efficient implementation for some classes of trade technologies.

Christopher Byrne

Penn State University

Size Dependence in an Evolutionary Game Model of Self-Deception

(joint work with Christopher C. Byrne, Matthew P. Haney, Christina Achampong)

Abstract

In earlier work, Byrne and Kurland modeled self-deception as an unconscious mechanism for resolving conflict between competing decision algorithms in a Minsky-style modular mind model. A plausible, if high-level, model was used to demonstrate that two types of self-deception, ignoring the value of a resource (suppressing hunger) or ignoring the cost of a fight (suppressing fear), could on average be fitness-enhancing in hawk-dove encounters even though they carried the risks of needlessly sacrificing the resource or engaging in a fight with a far stronger opponent. The fitness benefit in this earlier work derived solely from the increased persuasiveness of a bluffer who believes his own bluff: if a fight ensued, a player’s fearlessness did not help him win.

In the present work, a higher fidelity model computes the fitness benefits of self-deception separately for players of different sizes to test whether the different self-deceptive tendencies might correlate with the size of a player, where size is a proxy for competitive ability. While the original model focused on the evolution of self-deception in a player of average size and concluded that the entire population would eventually be self-deceiving, the present size dependent model indicates that the type of self-deception that evolves is highly dependent on player size.

Bo Chen

Southern Methodist University

Optimal Time-Contingent Contract Design    [pdf]

(joint work with Bo Chen; Zaifu Yang)

Abstract

This paper studies a contract design problem in a setting where time clauses are important. A principal hires an agent to complete a project within a fixed time horizon and prefers to have the project done as early as possible. The agent, whose effort is not contractible, has a tendency to shirk and to delay exerting effort. We show that in the principal's optimal contract, deadlines and payment schemes can be used jointly as effective instruments to motivate the agent to exert effort and to avoid delay, so that a better outcome can be achieved for the principal. Specifically, if an early successful completion time is not verifiable and thus a time-contingent wage scheme is infeasible, then a stochastic deadline can be strictly optimal for the principal. On the other hand, if an early successful completion time is verifiable so that the principal can adopt a time-contingent wage scheme, then the principal's optimal deadline is deterministic and the optimal wage scheme features a bonus for early completion.

Ying-Ju Chen

University of California, Berkeley

Contractual Traps    [pdf]

(joint work with Xiaojian Zhao)

Abstract

In numerous economic scenarios, contracting parties may not have a clear picture of all the relevant aspects. When confronted with these unawareness issues, the strategic decisions of the contracting parties depend critically on their sophistication. A contracting party may be unaware of what she is entitled to determine. Therefore, she can only infer some missing pieces from the contract offered by other parties and decide whether to accept the contract based on her own evaluation of how reasonable the contract is. Further, a contracting party may actively gather information and collect evidence about all possible contingencies to avoid being trapped into the contractual agreement. In this paper, we propose a general framework to investigate these strategic interactions with unawareness, reasoning, and cognition. We build our conceptual framework upon the classical principal-agent relationship and compare the equilibrium behaviors under various degrees of the unaware agent's sophistication. Several implications regarding optimal contract design, possible exploitation, and cognitive thinking are also presented.

Hsiao-Chi Chen

National Taipei University

Imitation, Local Interaction, and Coordination    [pdf]

(joint work with Yunshyong Chow and Li-Chau Wu)

Abstract

This paper analyzes players' long-run behavior in evolutionary coordination games with one-dimensional local interaction and imitation. Differently from Al\'os-Ferrer and Weidenholzer's study (JET, 2008), players in our model are assumed to extract valuable information from their interaction neighbors only. It is found that the payoff-dominant equilibrium can survive in the long run with a positive less-than-one probability. We derive the conditions under which both risk-dominant-strategy and payoff-dominant-strategy takers coexist in the long run; in the remaining cases the risk-dominant equilibrium is the unique long-run equilibrium. These results supplement the findings of Al\'os-Ferrer and Weidenholzer. Finally, the convergence rates to all equilibria are reported.

Chia-Hui Chen

Massachusetts Institute of Technology

Name Your Own Price at Priceline.com: Strategic Bidding and Lockout Periods    [pdf]

Abstract

Priceline.com, a website helping travelers obtain discount rates for travel-related items, gained prominence for its Name Your Own Price system. Under Name Your Own Price, a traveler names his price for airline tickets, hotel rooms, or car rentals. Priceline then checks if there is any seller willing to accept the offer. If no one accepts, the buyer has to wait for a certain period of time (the lockout period) before rebidding. This paper builds a one-to-many dynamic model without commitment to examine the buyer's and the sellers' equilibrium strategies. We show that without a lockout period, in equilibrium, the sellers with different costs are either almost fully discriminated or pooled in intervals except the one with the lowest possible cost. In the latter case, the buyer does not raise the bids much until the very end, so the price pattern is convexly increasing, consistent with the empirical finding, and most transactions occur just before the day of the trip, which illustrates the deadline effect that is observed in many negotiation processes. The lockout period restriction, which limits the buyer's bidding chances and seems to hurt the buyer, thus moves the transactions forward and can actually benefit a buyer in some circumstances.

Daniele Condorelli

UCL and Northwestern University

Value, Willingness to Pay and the Allocation of Scarce Resources    [pdf]

Abstract

A fair share of public resources that produce private benefits is not awarded to those who have the highest willingness to pay for them. Rather, the distribution takes place through some non-market mechanism. Moreover, when this is the case, resale of the goods is generally prohibited. Examples include the allocation of scarce medical resources, research funds and housing subsidies. I argue that the case against competitive bidding can be made when (i) people face different opportunity costs of money, and (ii) the designer is interested in maximizing the value of the allocation and not the welfare of the people.

In particular, I study the problem of designing an ex-ante optimal mechanism for the initial allocation of a number of indivisible and identical goods to a set of agents who have private values for the goods. I consider a linear-in-money environment where agents are not budget constrained but face heterogeneous opportunity costs of borrowing, which are unknown to the designer. The designer wishes to maximize the total value of the allocation. However, values are only imperfectly reflected in the willingness to pay for the goods, which are the only observable variables.

I show that a first best can never be achieved, and that extracting information on the willingness to pay is almost always beneficial to the designer. Further, I characterize the cases in which running a standard auction produces an optimal allocation. I also show that when simple auctions are not optimal (e.g. if the value and the opportunity cost of money are highly correlated), a no-resale clause must be imposed to implement the second best.

Scott Woodroofe Cunningham

Delft University of Technology

Strategic Transmission of Information and the Framing of Environmental Regulation    [pdf]

(joint work with Joop Koppenjan and Scott W. Cunningham)

Abstract

The paper explores the idea that a government's willingness to commit to new research in support of its environmental legislation is itself an instrument of strategy. If it always promises to commit to new research when under protest, the government is subject to the hazard of never-ending dispute at the hands of those it would regulate. On the other hand, a government that never investigates the validity of its legislation is subject to the resultant social costs of inadequate legislation. This paper sets up a two-player game between government and industry, subject to uncertainty, and considers the resulting perfect Bayesian equilibrium as players evaluate the tenor of the resulting claims and counterclaims.

Geoffroy De Clippel

Brown University

Egalitarianism and Egalitarian Equivalence under Asymmetric Information    [pdf]

(joint work with David Perez-Castrillo and David Wettstein)

Abstract

The theory of social choice has been applied extensively to determine collective actions. Nevertheless, the implications of informational constraints are not yet well understood. This is an important limitation: in many practical scenarios the participants already have some private information when they engage in the cooperative process. Extending the theory of social choice to characterize selection criteria that are applicable to the mechanism design problem is thus an important research agenda. As a step in that direction, we discuss in a first paper (joint with David Wettstein) possible extensions of the egalitarian solution to environments with asymmetric information. In a second paper (joint with David Perez-Castrillo and David Wettstein) we avoid interpersonal comparisons of interim utilities by studying egalitarian equivalence in an exchange economy under incomplete information. Both papers are work in progress at the time of submission to the conference, but I will submit a polished version of these papers before the conference, if accepted.

Massimo De Francesco

University of Siena

The Competitive Outcome in a Dynamic Entry and Price Game with Capacity Indivisibility    [pdf]

Abstract

Strategic market interaction is here modelled as a two-stage game in which potential entrants choose capacities and active firms compete in prices. Due to capital indivisibility, the capacity choice is made from a finite grid and there are substantial economies of scale. In the simplest version of the model assuming a single production technique, the equilibrium of the game is shown to depend on the level of total demand at a price equal to the minimum of average cost: with a sufficiently large market, the competitive price (a price equal to the minimum of average cost) emerges at a subgame-perfect equilibrium of the game; failing the large market condition, the firms randomize in prices on the equilibrium path of the game. Generalizations are provided for the case of two techniques.

Kris De Jaegher

Utrecht University, Utrecht School of Economics

All Purpose Minimal Sufficient Networks in the Threshold Game    [pdf]

Abstract

This paper considers a multi-player stag hunt where players differ in their degree of conservatism, i.e. in the threshold of players that need to act along with them before they see benefits in collective action. Additionally, any player is either available for action or not. Minimal sufficient networks, which depending on their thresholds allow players to achieve just enough interactive knowledge about each other’s availability to act, take the form of hierarchies of cliques (Chwe, RES, 2000). We show that any typical threshold game has a plethora of such networks, so that players seem to face a large degree of strategic uncertainty over which network to use. The plethora of networks includes cases where the structure of the network infects players into acting more conservatively than is reflected in their thresholds. An extreme case of this is the core-periphery network, where each player acts as conservatively as the most conservative player that can exist in the population. Because of this feature, the core-periphery network is minimal sufficient for all possible populations. Players can thus solve the strategic uncertainty arising from the multiplicity of minimal sufficient networks by using the all-purpose core-periphery network.

Dinko Dimitrov

University of Munich

How to Connect under Incomplete Information    [pdf]

(joint work with Dinko Dimitrov and Claus-Jochen Haake)

Abstract

We study how players' incomplete information about neighbors affects the structure of a network. In our setup, a player's type is the set of players he would like to be connected with, while a social planner designs mechanisms assigning an undirected network to each profile of types. We suppose that players enter into coalitional contracts either at the ex ante or at the interim stage, and show that the ex ante incentive compatible core and the interim incentive compatible coarse core are both non-empty in the presence of link-specific costs and benefits.

Emin Dokumaci

University of Wisconsin-Madison

Schelling Redux: An Evolutionary Dynamic Model of Residential Segregation    [pdf]

(joint work with William H. Sandholm)

Abstract

Schelling (1971) introduces a seminal model of the dynamics of residential segregation in an isolated neighborhood. His model combines agent heterogeneity with explicit behavior dynamics; as such it is presented informally, and with the use of "semi-equilibrium" restrictions on out-of-equilibrium play. In this paper, we use recent techniques from evolutionary game theory to introduce a formal version of Schelling's model, one that dispenses with equilibrium restrictions on the adjustment process. We show that key properties of the resulting infinite-dimensional dynamic can be derived using a simple finite-dimensional dynamic that captures aggregate behavior. We determine conditions for the stability of integrated equilibria, and we derive a strong restriction on out-of-equilibrium dynamics that implies global convergence to equilibrium: along any solution trajectory, one population's aggregate behavior adjusts monotonically, while the other's changes direction at most once. We present a variety of examples, and we show how extensions of the basic model can be used to study both alternative specifications of agents' preferences and policies to promote integration.

Miguel A Duran

University of Malaga

The Economics of Favoritism    [pdf]

(joint work with Miguel A. Duran, Antonio J. Morales)

Abstract

This paper analyzes why agents are interested in belonging to a group of friends which is used as an alternative search channel by both employers and workers. We use a principal-agent model with two types of workers (high and low productivity workers) and two effort levels, excluding any friendship-related externality in individuals' utility function. In this setting, we show that, for certain values of the relevant variables, the equilibrium involves workers hired by friends shirking, although they would have provided a high-effort level in a competitive labor market. That is, we show that the use of social links in matching processes might reduce the effort level that employees exert. In addition, by contrast to previous theoretical contributions in this area, we give theoretical support to the empirical evidence that points out that workers hired by friends or relatives receive lower salaries.

Wioletta Dziuda

Northwestern University

Dynamic Policy-Making with Endogenous Default    [pdf]

(joint work with Wioletta Dziuda, Antoine Loeper)

Abstract

We analyze a dynamic, infinite-horizon model of policy-making with an endogenous default. Every period one party proposes a new policy together with a scheme of transfers, and a vote takes place. If a sufficient set of players votes in favor of the proposal, the new policy is implemented and the transfers are exchanged. Otherwise, the previous period's policy, called the status quo, remains in place. In the subsequent period the policy implemented in the last period becomes the default option and the legislative process repeats itself. The preferences of the parties evolve over time, and this generates the need for policy updates. We allow the transfers to be arbitrarily inefficient to capture the common assumption that pork-barrel spending and other distributive policies may come at a cost. In particular, infinitely inefficient transfers represent the case in which no transfers are allowed.
We show that when transfers are efficient, there exists a unique stationary Markov equilibrium, in which the strongly efficient policy is chosen in each period. With inefficient transfers, the policy function is never strongly efficient in each state, and the optimality of the implemented policy decreases with the inefficiency of transfers. When transfers are not allowed, the policy function is not even Pareto efficient. Hence, transfers between the legislators, however inefficient, may increase the welfare of the citizens but also increase the volatility of the implemented policies.
We show that when the distribution of the players' peaks is identical and unimodal, the policies will be closer to the mean than the strongly optimal policy. This implies that, in extreme situations, the policies will be less efficient. We analyze the properties of the optimal policy for different distributions of preferences.

Omer Edhan

The Hebrew University

Continuous Values of Exact Market Games    [pdf]

Abstract

We study the uniqueness of continuous values on HM, the space generated by exact market games with a finite-dimensional strictly convex core. We first prove a representation theorem for values on HM. We then prove that every value on HM is determined by its values on HM', the subspace of HM containing games of the form f(P), where the vector measure P is the image under some affine map of a vector of countably many mutually singular measures. Finally, we prove that the Mertens value is the unique continuous value on HM which obeys the "increasing subgame axiom". Though previous works studied the uniqueness of the value on spaces of non-differentiable market games (e.g., [Haimanko 2000], [Haimanko 2001], [Haimanko 2002]), they did so under the assumption that the games were of the form f(P) where P is a vector of mutually singular measures, and this assumption was essential in those studies. The present paper contributes to the study of the uniqueness of the value in the more general case, i.e., when P is not necessarily a vector of mutually singular measures. This is in fact the first contribution in this direction.
[Haimanko 2000] O. Haimanko, Values of Games with a Continuum of Players, PhD thesis, The Hebrew University of Jerusalem, 2000.
[Haimanko 2001] O. Haimanko, Cost sharing: The non-differentiable case, Journal of Mathematical Economics 35, 2001, 445-462.
[Haimanko 2002] O. Haimanko, Payoffs in non-differentiable perfectly competitive TU economies, Journal of Economic Theory 106, 2002, 17-39.

Micael Ehn

Mälardalen University

Why Social Stratification is to be Expected    [pdf]

Abstract

Social stratification is present in all modern societies and humans have developed systems where individuals can greatly improve their lot through strategies such as education. Thus social stratification in modern societies presents us with a dilemma: why don't people with low income simply change their strategies to mimic the high earners?

This paper uses a mathematical model with minimal assumptions to show how social stratification might evolve from education when people are equal and discount their future payoffs. The model is shown to fit well with statistical data on income and education in several countries, suggesting that the kind of social stratification we observe is to be expected to appear endogenously, whether or not individuals have equal chances. Furthermore, the results yield concrete suggestions on how to increase the proportion of educated people in society.

Kfir Eliaz

Brown University

Reason-Based Choice: A Bargaining Rationale for the Attraction and Compromise Effects

(joint work with Geoffroy de Clippel)

Abstract

Among the most important and robust violations of rationality are the attraction and the compromise effects. The compromise effect refers to the tendency of individuals to choose an intermediate option in a choice set, while the attraction effect refers to the tendency to choose an option that dominates some other options in the choice set. This paper argues that both effects may result from an individual's attempt to overcome the difficulty of making a choice in the absence of a single criterion for ranking the options. Moreover, we propose to view the resolution of this choice problem as a cooperative solution to an intra-personal bargaining problem among different selves of an individual, where each self represents a different criterion for choosing. We first identify a set of properties that characterize those choice correspondences that coincide with our bargaining solution for some pair of preference relations. Second, we provide a revealed-preference foundation for our bargaining solution and characterize the extent to which these two preference relations can be uniquely identified. Alternatively, our analysis may be reinterpreted as a study of (inter-personal) bilateral bargaining over a finite set of options. In that case, our results provide a new characterization, as well as testable implications, of an ordinal bargaining solution that has been previously discussed in the literature under the various names of fallback bargaining, unanimity compromise, Rawlsian arbitration rule and Kant-Rawls social compromise.

Matthew Elliott

Stanford University

Inefficiencies in Trade Networks    [pdf]

Abstract

Buyers and sellers make relationship-specific investments to enable trade, which is modeled as a network formation problem. Inefficiencies are investigated and depend on bargaining power and the investment protocol: whether buyers and sellers must make fixed, non-substitutable exogenous investments, or whether they can endogenously negotiate individual contributions. It is shown that inefficiencies can consume all the gains from trade, except when exogenous investments are made in proportion to bargaining power. Inefficiencies are partitioned into three types: over-investment in relationships used only to generate outside options, under-investment in relationships that should be used for trade, and coordination inefficiencies. With exogenous investments, under-investment inefficiency can consume all the gains from trade whenever investment shares are not exactly proportional to bargaining power, whilst over-investment inefficiency is bounded. With endogenous investment, there is no under-investment inefficiency, but over-investment inefficiency can consume all the gains from trade.

Jeffrey Ely

Northwestern University

Sunk-cost Bias: A Memory Kludge    [pdf]

(joint work with Sandeep Baliga)

Abstract

We study a sequential investment model and offer a theory of the sunk cost fallacy as an optimal response to limited memory. As new information arrives, a decision-maker may not remember all the reasons he began a project. The initial sunk cost gives additional information about future net profits and should inform subsequent decisions. We show that in different environments, this can generate two forms of sunk cost bias. The Concorde effect makes the investor more eager to complete projects when sunk costs are high and the pro-rata effect makes the investor less eager. The relative magnitude of these effects determines the overall direction of the sunk cost bias. In a controlled experiment we had subjects play a simple version of the model. In a baseline treatment with no memory constraints subjects exhibit the pro-rata bias. When we induce memory constraints the effect reverses and the subjects exhibit the Concorde bias.

Mahmoud Farrokhi Kashani

Institute of Mathematical Economics, Bielefeld University

Coalition Formation in the Airport Problem    [pdf]

Abstract

We study the incentives to form coalitions in the Airport Problem. It is shown that in this class of games, if coalitions form freely, the Shapley value does not lead to the formation of the grand coalition or of coalitions with many players. Rather, a coalition with a small number of players forms to act as the producer, while the other players become consumers of the product. We identify the two-member coalition that forms and check its stability.

Emel Filiz Ozbay

University of Maryland

Multi‐unit Auctions with Resale    [pdf]

(joint work with Erkut Ozbay)

Abstract

We study multi-unit auctions in the presence of resale opportunities among bidders. There are two types of bidders in terms of the number of units they are interested in: large and small bidders. Large and small bidders have independent private values; the small bidders are symmetric in terms of their valuations but demand different single units. The large bidders are interested in multiple units and there are complementarities among the units. We analyze equilibrium bids and expected revenues for First- and Second-Price Sealed Bid Auctions that are run separately for each unit of the good, and for the Generalized Vickrey Auction where bidders submit package bids for the units in a single auction. If the allocation by an auction is not efficient, then post-auction trades may occur among bidders via monopoly pricing by the winners of the auction. We show that the Generalized Vickrey Auction with or without resale markets allocates the units efficiently but does not maximize revenue; that although truth-telling is an equilibrium of the Second-Price Auction without resale, it is no longer so when resale markets are present; that the Second-Price Auction with resale generates higher revenue than the Second-Price Auction without resale; and that, for particular distributions, the First-Price Auction with resale generates higher revenue than the First-Price Auction without resale. (JEL D44)

Guillermo Flores

Pontificia Universidad Católica del Perú

Corruption Efficiency: Corruptible Bureaucratic Systems and Implementation of Governmental Solutions    [pdf]

(joint work with Mendighetti, Alejandro and Necochea, Romina)

Abstract

The government provides services to citizens at a low price (close to the production cost) and on a first-come, first-served basis. In some cases, the government officers in charge of distributing those services may require a bribe from the citizens, which constitutes a “corruption act”. In limited cases, a “corruption act” can be considered “efficient” if it allows a better distribution of the services and satisfies the real expectations of the citizens without breaking the law or creating negative externalities for society. By applying game theory, we demonstrate that the positive effects of the “efficient corruption act” can be retained and incorporated into the current bureaucratic system, even while the “corruption act” itself is eliminated, by promoting (i) “whistle-blowing behavior” among government officers; or (ii) a “double window system” with differentiated costs.

Francoise Forges

Universite Paris Dauphine

  ,

Core-stable bidding rings    [pdf]

(joint work with Omer Biran)

Drew Fudenberg

Harvard University

Repeated Unknown Games

(joint work with Yuichi Yamamoto)

Abstract

We study repeated games with imperfectly observed actions in which the state of the world, chosen by Nature at the beginning of the play, influences the distribution of public signals and/or the payoff functions of the stage game. To do so, we introduce the concepts of perfect public ex-post equilibrium and type-contingent perfect public ex-post equilibrium. These equilibria have a recursive structure, so we are able to characterize the equilibrium payoffs for patient players by extending the linear programming techniques of Fudenberg and Levine [1994], and provide sufficient conditions for various sorts of folk theorem. We can also provide weaker sufficient conditions for the existence of these ex-post equilibria.

Takako Fujiwara-Greve

Keio University

Cooperation in Repeated Prisoner's Dilemma with Outside Options    [pdf]

(joint work with Yosuke Yasuda)

Abstract

We examine variants of repeated Prisoner's Dilemma from which players can exit by taking an outside option and investigate effects of outside option structures on the sustainability of cooperation. Although mutual cooperation becomes more difficult in the presence of outside options than in ordinary repeated games, whether the options are perturbed or not makes a difference.
Stochastic outside options enhance cooperation as compared to deterministic ones, when the possibility of an attractive option tomorrow makes players patient today. This logic applies to both one-sided and two-sided outside option models, but the effects of stochasticity are weaker in the latter.

Andrey Garnaev

Saint Petersburg State University

Jamming in Wireless Networks with Cooperative Jammers    [pdf]

Abstract

The problem of jamming plays a very important role in ensuring the quality and security of wireless communications, especially now that wireless networks are quickly becoming ubiquitous. Since the jamming phenomenon can be considered as a game in which a player (the jammer) plays against a user (the transmitter), game theory is an appropriate tool for dealing with the jamming problem. In this paper we study how an increasing number of jammers impacts the game. Namely, we consider scenarios with M jammers and take the SINR and the Shannon capacity as the user's objective functions. For both objective functions we show that the jammers employ time-sharing strategies to inflict the maximal harm, and we provide a finite-step algorithm for finding the saddle point in closed form.
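
For concreteness, a textbook form of the two objectives mentioned above (the paper's exact model may differ; all names here are illustrative):

import math

def sinr(user_power, user_gain, noise, jam_powers, jam_gains):
    # Signal-to-interference-plus-noise ratio faced by the user when
    # M jammers transmit with powers jam_powers over channels with gains jam_gains.
    interference = sum(g * p for g, p in zip(jam_gains, jam_powers))
    return user_gain * user_power / (noise + interference)

def shannon_rate(user_power, user_gain, noise, jam_powers, jam_gains):
    # Shannon capacity objective log(1 + SINR) for the same configuration.
    return math.log(1.0 + sinr(user_power, user_gain, noise, jam_powers, jam_gains))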

Gagan Pratap Ghosh

University of Iowa

Efficiency in a Class of Multi-Unit Auctions    [pdf]

Abstract

We analyze discriminatory auctions with symmetric bidders having demand for two units and single-dimensional signals of private valuations; that is, the valuation for the second unit is a known function of the valuation for the first unit. We show that if the distribution of signals and the valuation function are differentiable, then there exists a unique symmetric equilibrium, which is differentiable. This equilibrium leads to an inefficient allocation with positive probability.

Wolf Gick

Harvard University

Like-Biased Experts And Noisy Signals    [pdf]

Abstract

This paper revisits the literature on strategic information transmission with multiple senders of like biases. We impose restrictions on information transmission by assuming that experts observe noisy signals. The paper compares fully revealing equilibria with partition equilibria, to reach some findings that are of use for the design of practically feasible game forms. We find that fully revealing equilibria do not survive our perturbation, while the class of partition equilibria that we study is robust. In the proposed equilibrium with two senders, the decision maker designs a communication protocol that gives the second sender a reduced message space, and after the disclosure the decision maker best responds to his beliefs. The paper thus highlights the value of partition strategies for viable game forms of expertise, vis-a-vis fully revealing equilibria, and closes a lacuna that has remained open in the literature on strategic information transmission with like-biased senders.

Gilles Grandjean

University of Louvain (UCL)

Strongly Rational Sets for Normal Form Games    [pdf]

(joint work with Ana Mauleon, Vincent Vannetelbosch)

Abstract

Curb sets [Basu and Weibull, Econ. Letters 36 (1991), 141-146] are product sets of pure strategies containing all individual best-responses against beliefs restricted to the recommendations to the remaining players. Prep sets [Voorneveld, Games Econ. Behav. 48 (2004), 403-414] only require that the product sets contain at least one best-response to such beliefs. While the concepts of curb and prep sets are set-theoretic coarsenings of the notion of Nash equilibrium, we introduce the concepts of strong curb sets and strong prep sets which are set-theoretic coarsenings of the notion of strong Nash equilibrium. We require the set to be immune not only against individual deviations, but also against group deviations. We show that every game has at least one minimal strong curb (prep) set. Minimal strong curb (prep) sets are compared with strong Nash equilibria, coalition-proof Nash equilibria and the set of coalitionally rationalizable strategies. Finally, we provide a dynamic learning process leading the players to playing strategies from a minimal strong curb set.

Jacob Goeree

California Institute of Technology

Threshold versus Exposure in Simultaneous Ascending Auctions    [pdf]

(joint work with Yuanchuan Lien)

Abstract

We consider environments where a single global bidder, interested only in the package that contains all items, competes with local bidders interested in only a single item. This environment creates a severe "exposure problem" for the global bidder in the simultaneous ascending auction (SAA), where competition takes place on an item-by-item basis. We derive the Bayes-Nash equilibrium for this setup and illustrate the degree to which efficiency and revenue are suppressed as a result of the exposure problem. We also consider a variant of the simultaneous ascending auction that allows for package bidding (SAAPB). Our environment creates a severe "threshold" or free-riding problem for the local bidders, since all that matters is that as a group they outbid the global bidder. We derive the Bayes-Nash equilibrium for the SAAPB and illustrate the extent to which efficiency and revenue are suppressed as a result of the threshold problem. We also report the results of experiments in which two or five local bidders compete with a single global bidder in either the SAA or the SAAPB. While the experimental results closely match the theoretical predictions for the SAA, we find little evidence for the threshold problem under the SAAPB. As a result, the SAAPB performs as well as the SAA in an environment where it is supposed to do much worse. These findings can be explained by considering the feedback effects of deviations from the Bayes-Nash equilibrium: in the SAAPB, the naive bidding strategy of bidding up to one's value is an "almost equilibrium." In contrast, when the global bidder deviates from the Bayes-Nash equilibrium in the SAA, there are no feedback effects for the local bidders, who follow a simple dominant strategy.

Russell Golman

University of Michigan

Quantal Response Equilibria with Heterogeneous Agents    [pdf]

Abstract

I examine the use of single-agent and representative-agent models to describe the aggregate behavior of heterogeneous quantal responders. I consider heterogeneous quantal response functions arising from a distribution of distributions of payoff shocks. A representative agent would have the population-average quantal response function. Weakening a standard assumption about the admissible distributions of payoff shocks, I show the existence of a representative agent. However, this representative agent does not have a representative distribution of payoff shocks, nor any iid distribution in large enough games. Almost all applications fitting quantal response equilibrium to data have assumed iid payoff disturbances; my result suggests we should allow noise terms that are jointly dependent across actions. I consider a specific case of heterogeneous logit responders and find that a mis-specified homogeneous logit parameter will have a downward bias. This means that a single-agent logit model underestimates the average level of rationality in a heterogeneous player pool.
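
A minimal sketch of the objects being compared, under the common logit specification (parameter names are ours): an individual logit quantal response and the population-average response of heterogeneous logit agents, which is exactly the response function a representative agent would need to have.

import numpy as np

def logit_response(payoffs, lam):
    # Logit quantal response: choice probabilities proportional to exp(lam * payoff).
    z = np.exp(lam * (np.asarray(payoffs, dtype=float) - np.max(payoffs)))
    return z / z.sum()

def average_response(payoffs, lams, weights):
    # Population-average quantal response of heterogeneous logit agents;
    # per the abstract, no single logit parameter correctly reproduces this averaged function.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * logit_response(payoffs, li) for wi, li in zip(w, lams))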

Olivier Gossner

Paris School of Economics & London School of Economics

  ,

A Reasoning Approach to Knowledge

(joint work with Elias Tsakas (Maastricht University))

Abstract

We study the knowledge of a reasoning agent who assumes consciousness of all primitives: for each primitive proposition, the agent believes that he knows whether he knows if this proposition is true.

If the agent is really conscious of all primitive propositions, we show that the agent is actually conscious of all propositions, in which case positive and negative introspection hold for every proposition. This result provides a foundation for introspection based on the assumptions that 1) the agent can derive knowledge using a reasoning process, 2) in this reasoning process, the agent assumes that he is conscious of primitive propositions, and 3) the agent is indeed conscious of these primitive propositions.

If the agent is not conscious of all primitive propositions, but thinks he is, we show that the agent is necessarily either unaware of some primitive proposition, or unaware of his knowledge of a primitive proposition, or exhibits delusion about his own knowledge. In this case, bounded rationality arises as the outcome of the agent making an unfounded assumption about the structure of his own knowledge, assuming consciousness of primitive propositions when this property does not hold.

What distinguishes the rational agent's knowledge from the boundedly rational agent's is not their mental processes, but rather the level of familiarity these agents have with their environments.

Finally, we show that the complexity of the state space we study is low in the sense that each state can be described through the value of primitive propositions and the knowledge of the agent on a limited number of propositions at that state. This shows that our model, while encompassing both the rational agent and the unaware one, remains tractable.

Konrad Grabiszewski

Instituto Tecnológico Autónomo de México

  ,

Procedural Type Spaces    [pdf]

Abstract

Type space is of fundamental importance in epistemic game theory. This paper shows how to build a type space if players approach the game in the procedural way advocated by rationalizability. If an agent fixes a strategy profile of her opponents and ponders which of their beliefs about her set of strategies make this profile optimal, such an analysis is represented by transition probabilities and yields disintegrable beliefs. Our construction requires that the underlying space be separable.

Amy Greenwald

Brown University

  ,

An Algorithm to Compute the Stochastically Stable Distribution of a Perturbed Markov Matrix    [pdf]

(joint work with John Wicks)

Abstract

We present a novel state aggregation technique, which we use to give the first (to our knowledge) scalable, exact algorithm for computing the stochastically stable distribution of a perturbed Markov matrix. Since it is not combinatorial in nature, our algorithm is computationally feasible even for high-dimensional models.
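
A brute-force numerical illustration of the object being computed (emphatically not the authors' aggregation algorithm, which is designed to avoid exactly this kind of naive limit-taking): fix a perturbed Markov matrix P(eps), compute its stationary distribution for shrinking eps, and read off the limit. The 3-state chain below is hypothetical.

    import numpy as np

    def stationary_distribution(P):
        """Stationary distribution of a row-stochastic matrix via the eigenvector for eigenvalue 1."""
        vals, vecs = np.linalg.eig(P.T)
        k = int(np.argmin(np.abs(vals - 1.0)))
        pi = np.real(vecs[:, k])
        return pi / pi.sum()

    def perturbed_matrix(eps):
        """Hypothetical 3-state perturbed chain: an unperturbed dynamic plus uniform eps-mutations."""
        P0 = np.array([[1.0, 0.0, 0.0],
                       [1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0]])           # unperturbed dynamics
        U = np.ones((3, 3)) / 3.0                   # uniform perturbation
        return (1.0 - eps) * P0 + eps * U

    for eps in [1e-1, 1e-3, 1e-6]:
        print(eps, np.round(stationary_distribution(perturbed_matrix(eps)), 4))
    # The eps -> 0 limit of these distributions is the stochastically stable distribution.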

Sergiu Hart

Hebrew University of Jerusalem

  ,

Dynamics and Equilibrium    [pdf]

Abstract

It is a fact that in the existing literature there are no general natural dynamics leading to Nash equilibria. The talk provides an overview of research that sheds light on this and related issues.

Ziv Hellman

Hebrew University

  ,

How Common are Common Priors?    [pdf]

(joint work with Ziv Hellman; Dov Samet)

Abstract

To answer the question in the title we vary agents' beliefs against the background of a fixed knowledge space, that is, a state space with a partition for each agent. Beliefs are the posterior probabilities of agents, which we call type profiles. We then ask what is the topological size of the set of consistent type profiles, those that are derived from a common prior (or a common improper prior in the case of an infinite state space). The answer depends on what we term the tightness of the partition profile. A partition profile is tight if in some state it is common knowledge that any increase of any single agent's knowledge results in an increase in common knowledge. We show that for partition profiles which are tight the set of consistent type profiles is topologically large, while for partition profiles which are not tight this set is topologically small.

Penelope Hernandez

University of Valencia

  ,

Bounded Memory Equilibrium    [pdf]

(joint work with Penelope Hernandez and Eilon Solan)

Dorothea Herreiner

Loyola Marymount University

  ,

Do Intentions Matter for Empowerment? Procedural Justice in Simple Bargaining Games

Abstract

Giving an affected person some control in a decision-making process generally increases satisfaction with the outcome, because participation enhances procedural justice. Empowering a receiver in a simple bargaining game by providing the option to reject a proposal (ultimatum game) instead of imposing a proposal (dictator game) leads to more equitable outcomes, as Shor (2007) shows. Whether empowerment itself matters, i.e. the fact that the receiver can influence the outcome, or rather the implicit recognition by the proposer that the receiver is disadvantaged, i.e. the intention behind the empowerment, remains an open question addressed in this experimental study. Several variants of Shor's empowerment game (choice between ultimatum and dictator game) are run in which the choice to empower the receiver is made by the proposer, at random, or by a third party. Significant differences emerge between proposals depending on whether the receiver is empowered and on the frequency with which the receiver is empowered; the intentionality behind the empowerment decision, however, does not seem to make a significant difference.

Brent Hickman

University of Iowa

  ,

Effort, Achievement Gaps, and Affirmative Action: A New Look at College Admissions    [pdf]

Abstract

I construct a strategic model of incomplete information where many heterogeneous students compete for seats at colleges and universities of varying prestige. I argue that the model is strategically equivalent to an all-pay auction where agents differ with respect to their cost of bidding. Using tools from auction theory, I characterize equilibrium behavior by deriving a set of equations that approximate equilibrium strategies when the number of players is large. I use the model to analyze the effects of Affirmative Action policies on effort choice and achievement gaps. I also compare the performance of two common implementations of Affirmative Action: quotas and admission preferences. I show that these policies have very different effects on effort, achievement gaps, and allocation of college admissions in equilibrium. The model suggests that admissions preferences (such as those previously used in undergraduate admissions at the University of Michigan) are unambiguously bad for effort choice, and ineffective as an allocational mechanism. These findings differ from those of previous theoretical work on Affirmative Action. Quotas seem to perform better than admissions preferences, with some positive and some negative effects on effort and achievement gaps. Both policies widen the achievement gap among the best and brightest students.

Magnus Hoffmann

University of Magdeburg

  ,

Do I Want It All? A Simple Model of Satiation in Contests    [pdf]

(joint work with Vilen Lipatov)

Abstract

We present a formal analysis of a contest in which contestants may be satiated in the prize they might obtain. We consider contests with a ratio-form contest success function and risk-neutral, possibly asymmetric players. After laying out the Nash equilibria and the solution of the sequential-move game, we extend the analysis to endogenize the order of moves. We find that if the players are sufficiently asymmetric, the effort level is zero in equilibrium, even though players' aggregate demand exceeds the value of the prize.

Sunghoon Hong

Vanderbilt University

  ,

Enhancing Transportation Security against Terrorist Attacks    [pdf]

(joint work with Sunghoon Hong)

Abstract

We study a model of strategic interaction between a terrorist organization and a security agency in a transportation network carrying passengers and freight between locations. By carrying explosives to a target location through the transportation network, the terrorist organization can damage the target and disrupt the operation of the network. While gaining utility from the damage of the target and from the disruption of the network, the terrorist organization incurs the cost of carrying explosives. A security agency is informed of the terrorist attack. By shutting down some transportation routes in the network, the security agency can protect the target from the attack. Since the shutdown of routes disrupts the operation of the network, the security agency incurs the cost of shutting down transportation routes. The security agency also loses utility from the damage of the target. In this model we find an optimal security policy under which the security agency can protect the target from devastating terrorism and effectively operate the network. To understand how the terrorist organization commits terrorism under the optimal security policy, we find a class of subgame perfect equilibria of this model. We also introduce algorithms to find a maximum flow and a minimum cut in a transportation network.

Matias Iaryczower

California Institute of Technology

  ,

Choosing Records: Flip-Flops and Cronies    [pdf]

(joint work with Andrea Mattozzi)

Younghwan In

National University of Singapore

  ,

Signaling Private Choices    [pdf]

(joint work with Younghwan In and Julian Wright)

Abstract

For a number of important applications of signaling, it is sometimes more reasonable to assume that the sender rather than nature chooses its unobservable features (e.g. its private choice of quality). In other situations, it makes no sense at all for nature to determine the sender's unobservable features (e.g. its private choice of capacity, investment, contract or price). This paper provides a framework to analyze a wide range of such endogenous signaling problems. An equilibrium concept (Reordering Invariance) is proposed which is powerful in eliminating unreasonable equilibria and relatively easy to apply. A class of monotone endogenous signaling games is characterized, in which the sender can influence the receivers' actions to its benefit through signaling. For such games, we show that a sender's private choice can still have some commitment value even though it is not observed, and that in equilibrium, the sender's signals must be exaggerated. These points are illustrated with a simple model of costly announcements that applies to the classic time inconsistency problem of monetary policy. The paper also explains how to apply our framework to more complicated settings, including to situations which have not previously been considered as signaling problems (e.g. to loss leader pricing and to the opportunism problem that arises when a manufacturer sells to competing retailers through secret contracts).

Elena Inarra

University of the Basque Country

  ,

Deriving Nash Equilibria as the Supercore for a Relational System    [pdf]

(joint work with E Inarra, C Larrea and A Saracho)

Abstract

In this paper, using a binary relation that refines the standard relation accounting only for single profitable deviations, we show that the set of Nash equilibrium strategy profiles of every finite non-cooperative game in normal form coincides with the supercore (Roth, 1976) of its associated abstract system. Further, under the standard relation we show when these two solution concepts coincide.

Tanguy Isaac

Université Catholique de Louvain

  ,

Information Revelation in Markets with Pairwise Meetings : Complete Revelation in Dynamic Analysis    [pdf]

Abstract

We study information revelation in markets with pairwise meetings. We focus on the one-sided case and perform a dynamic analysis of a constant entry flow model. The same question has been studied in an identical framework by Serrano and Yosha (1993), but they limit their analysis to stationary steady states. Blouin and Serrano (2001) study information revelation in a one-time entry model and obtain results different from those of Serrano and Yosha (1993). We establish that the main difference is due not to the steady-state analysis but to the differing entry assumptions.

Reinoud Joosten

University of Twente

  ,

Generalized Projection Dynamics in Evolutionary Game Theory    [pdf]

(joint work with Reinoud Joosten & Berend Roorda)

Abstract

We introduce the ray-projection dynamics in evolutionary game theory by employing a ray projection of the relative fitness (vector) function, both locally and globally. By a global (local) ray projection we mean a projection of the vector (close to the unit simplex) onto the unit simplex along a ray through the origin. For these dynamics, we prove that every interior evolutionarily stable strategy is an asymptotically stable fixed point, and that every strict equilibrium is an evolutionarily stable state and an evolutionarily stable equilibrium.
Then, we employ these projections on a set of functions related to the relative fitness function, which yields a class containing, e.g., the best-response, logit, replicator, and Brown-von Neumann dynamics.
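
On one natural reading of the abstract, the global ray projection of a vector with positive coordinate sum is simply that vector rescaled so its coordinates sum to one, i.e. the point where the ray from the origin through the vector meets the unit simplex. A minimal sketch under that reading (an assumption on my part, not the authors' definition):

    import numpy as np

    def ray_project(v):
        """Project a vector with positive coordinate sum onto the unit simplex
        along the ray through the origin, i.e. rescale so coordinates sum to one."""
        s = v.sum()
        if s <= 0:
            raise ValueError("ray through the origin does not meet the simplex")
        return v / s

    v = np.array([0.5, 0.3, 0.4])   # a point close to, but not on, the unit simplex
    print(ray_project(v))           # [0.41666667 0.25 0.33333333]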

Ruben Juarez

University of Hawaii

  ,

Monotonic Solutions to the Experts Aggregation Problem    [pdf]

Abstract

An amount of money needs to be divided among a group of tasks. A group of experts (judges) provides impartial recommendations on how to divide this money. An aggregator takes these recommendations into account to provide an exact division of the money. This paper characterizes the class of rules that meet unanimity and monotonicity in the experts' opinions. If all the experts specialize in all tasks, only the linear aggregators meet these two properties. On the other hand, if experts share some expertise, though perhaps not the same, then a large class of rules meets the above properties. Only quasi-proportional aggregators are unanimous and strongly monotonic.
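
Under one reading of the full-specialization case, a linear aggregator is a fixed convex combination of the experts' recommendations, applied task by task; this reading, and the numbers below, are illustrative assumptions rather than the paper's definitions.

    import numpy as np

    def linear_aggregator(recommendations, weights):
        """Convex-combination aggregator: a fixed weight per expert, applied to every task.
        Unanimity holds because the weights are nonnegative and sum to one."""
        R = np.asarray(recommendations, dtype=float)   # shape: experts x tasks, rows sum to the budget
        w = np.asarray(weights, dtype=float)
        assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
        return w @ R

    # Two experts dividing a budget of 100 across three tasks (illustrative numbers).
    recs = [[50.0, 30.0, 20.0],
            [20.0, 40.0, 40.0]]
    print(linear_aggregator(recs, weights=[0.75, 0.25]))   # [42.5 32.5 25. ]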

Adam (Tauman) Kalai

Microsoft Research

  ,

Bargaining in Strategic Games with Private Information    [txt]

(joint work with Ehud Kalai)

Abstract

The difficulty of achieving efficiency in strategic games, especially in the presence of private information, arises from the inherent tension between *cooperation* and *competition*. To address this, we propose the "coco" solution for two-person private-information strategic games with side payments. For zero-sum games, the coco solution coincides with the von Neumann minmax value. For TU variable-threat bargaining, it coincides with the Nash (1953), Raiffa (1953), Kalai-Smorodinsky (1975), and egalitarian solutions.

Following Selten (1960), we justify the coco value by axioms of monotonicity and efficiency, imposed directly on the class of Bayesian strategic games.

Finally, we introduce an incentive-compatible, efficient mechanism that implements the coco solution in a broad class of games. The mechanism is a simple two-part agreement between the players: (a) they form a team and share payoffs equally, thereby achieving complete efficiency, and (b) a separate side payment is made to compensate the player with the strategic advantage.

Ehud Kalai

Northwestern University

  ,

A cooperative/competitive solution to a class of strategic games

(joint work with Adam Tauman Kalai)

Abstract

Cooperative game theory (implicitly) allows communications and binding agreements, but it leaves out important strategic details. Strategic game theory is rich in strategic details, but it leaves out the possibility of communications and binding agreements. The semi-cooperative approach used in this paper combines the strategic details with the possibility of communications and binding agreements.

For two-person Bayesian games with side payments, we introduce a cooperative/competitive (coco) value that has the following properties: (1) It is efficient, fair and easy to compute. (2) It generalizes the minmax value from zero-sum to general-sum games. (3) It extends the major bargaining solutions and their variable threat versions. (4) It is justified by natural axioms imposed directly on Bayesian games. And (5) it is Nash implementable by protocols that are similar to real life partnerships.
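
For the complete-information case, the coco value is commonly computed by splitting the game into a cooperative "team" component and a zero-sum "advantage" component; the sketch below follows that standard decomposition, which I am assuming is the one intended here, and does not attempt the Bayesian, private-information extension described above.

    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_value(M):
        """Value of the zero-sum game with payoff matrix M (row player maximizes)."""
        m, n = M.shape
        c = np.zeros(m + 1); c[-1] = -1.0                       # maximize v
        A_ub = np.hstack([-M.T, np.ones((n, 1))])               # v <= x' M e_j for every column j
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # x is a probability vector
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[-1]

    def coco_values(A, B):
        """Coco values under the team/advantage decomposition (complete-information case)."""
        team = np.max((A + B) / 2.0)                # efficient joint payoff, split equally
        advantage = zero_sum_value((A - B) / 2.0)   # minmax value of the difference game
        return team + advantage, team - advantage

    # Hypothetical 2x2 bimatrix game (rows: player 1, columns: player 2) -- a Prisoner's Dilemma.
    A = np.array([[3.0, 0.0], [5.0, 1.0]])
    B = np.array([[3.0, 5.0], [0.0, 1.0]])
    print(coco_values(A, B))

For the symmetric Prisoner's Dilemma used here the advantage game has value zero, so the coco value simply splits the efficient joint payoff equally.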

Marek Kaminski

University of California, Irvine

  ,

Generalized Backward Induction    [pdf]

Abstract

I introduce, axiomatically, infinite sequential games that extend von Neumann's and Kuhn's classic axiomatic frameworks. Within this setup, I define a modified backward induction procedure that is applicable to all games. A strategy profile that survives backward pruning is called a backward induction equilibrium (BIE). The main result compares the sets of BIE and subgame perfect equilibria (SPE). Remarkably, and similarly to finite games of perfect information, BIE and SPE coincide both for pure strategies and for a large class of behavioral strategies. This result justifies the "folk algorithm" of using backward induction to find SPEs in all games.

Michihiro Kandori

University of Tokyo

  ,

Revision Games    [pdf]

(joint work with Yuichiro Kamada, Harvard University)

Abstract

We analyze a situation where players prepare their actions in a game in advance. After the initial preparation, they have some opportunities, arriving stochastically, to revise their actions. Prepared actions are assumed to be mutually observable. We show that players can achieve a certain level of cooperation in such an environment.

Kamalakar Karlapalem

International Institute of Information Technology, Hyderabad, India

  ,

Games with Minimalistic Agents    [pdf]

(joint work with Asrar Ahmed and Kamalakar Karlapalem)

Abstract

In this paper we study solution concepts when agents are interested in attaining a threshold utility, or cutoff, above which they choose to benefit the system. Such behavior is most relevant when we want agents to make socially responsible decisions. For example, when agents mediate on behalf of humans, or interact with humans themselves, we would prefer agents to have such an attribute. We consider such behavior to be closer to human nature than maximizing one's own utility, as self-interested agents do, or always choosing actions for the benefit of the system, as altruistic agents do. To this end we extend the notion of satisficing and present a formal analysis of games when agents' preferences reflect such characteristics. Apart from discussing the solution concept for n-player normal form games, we also consider the issues that arise when not all agents can attain their minima. We then discuss the case when agents defect from the solution concept.

Eiichiro Kazumori

University of Tokyo

  ,

Dynamic Limit Order Book Markets    [pdf]

Abstract

The limit order book mechanisms (dynamic double auctions) are used in more than half of the world's stock exchanges such as Euronext, Helsinki, Tokyo, and Toronto. We develop a model of the dynamic limit order book markets among multiple informed traders with private information that builds on the previous models of hybrid markets with designated market makers (e.g. Kyle (1985, Econometrica), Foster and Viswanathan (1996, Journal of Finance), and Back, Kao, and Willard (2000, Journal of Finance)). We explicitly solve for a linear closed-form equilibrium strategy and find a close connection between the limit order book markets and the hybrid markets with market makers.
We then examine the effect of the number of market participants, the informativeness of the signal, and the effect of liquidity shocks on the equilibrium outcome. We also discuss connections with the empirical literature.

Suntak Kim

University of Pittsburgh

  ,

Divergence in Pre-Electoral Campaign Promises with Post-Electoral Policy Bargaining    [pdf]

Abstract

This paper investigates a relationship between electoral outcomes and post-electoral political process. In particular, the present paper is interested in how electoral announcements by politicians or political parties will be shaped if they cannot commit to the policy to be implemented before the election, but know that they should bargain over the final policy after the election, based on their pre-electoral campaign promises. The central question is whether consideration for post-electoral bargaining would let the political parties make divergent promises or announcements, contrary to the prediction of the median voter theorem. One lesson to be learned is that politicians are neither fully committed to nor completely irresponsible for pre-electoral campaign promises.

Min Kim

University of Southern California

  ,

Information Asymmetry and Incentives for Active Management    [pdf]

Abstract

This paper presents a model for delegated portfolio management, given incomplete information about managerial skills and efforts. I show that under information asymmetry, equilibrium outcomes depend on compensation structure and heterogeneity of skills. A performance fee can screen managers of differing ability and lead to a separating equilibrium (high skill managers actively manage funds while low skill managers track indexes), provided that skills are sufficiently superior. Otherwise, a pooling equilibrium arises in which managers track indexes. The model suggests that the recent growth in passive management (e.g., closet-indexing) in the mutual fund industry could stem from its lower level of skills, for example, due to a brain drain to the hedge fund industry.

Nicolas (Alexandre) Klein

University of Munich

  ,

Free-Riding And Delegation In Research Teams    [pdf]

Abstract

This paper analyzes a two-player game of strategic experimentation with three-armed exponential bandits in continuous time. Players face replica bandits, with one arm that is safe in that it generates a known payoff, whereas the likelihood of the risky arms' yielding a positive payoff is initially unknown. It is common knowledge that the types of the two risky arms are perfectly negatively correlated. I show that the efficient policy is incentive-compatible if, and only if, the stakes are high enough. Moreover, learning will be complete in any Markov perfect equilibrium if, and only if, the stakes exceed a certain threshold.

Yukio Koriyama

Ecole Polytechnique

  ,

Freedom to Not Join: A Voluntary Participation Game of a Discrete Public Good    [pdf]

Abstract

The provision of a discrete public good is considered. All members of the society are homogeneous and they decide simultaneously whether to contribute to the provision. The contribution cost per person is fixed and non-refundable. Because of the free-rider problem, inefficiency in the provision is inevitable, even in the most efficient symmetric Nash equilibrium. However, when we add a pre-stage game in which all members decide simultaneously whether to voluntarily participate in the original contribution game, expected social welfare can be improved in the symmetric subgame perfect equilibrium. It turns out that the improvement is always possible when the cost of contribution is sufficiently high.

Nagarajan Krishnamurthy

Chennai Mathematical Institute, India

  ,

Orderfield Property of Stochastic Games via Dependency Graphs    [pdf]

(joint work with T Parthasarathy, G Ravindran)

Abstract

We propose the concept of dependency graphs of stochastic games and of mixtures of classes of stochastic games. We use this concept to derive conditions which are sufficient for a class of stochastic games (with rational inputs) to possess the orderfield property. By analyzing simple structural properties of these dependency graphs, such as the existence of cycles involving different classes, we determine whether (certain subclasses of) mixtures of stochastic games, including complicated mixtures such as those involving different numbers of players, those involving both zero-sum and non-zero-sum games, and those involving both discounted and undiscounted games, have the orderfield property. Given a stochastic game with rational inputs or a class of stochastic games, these sufficient conditions can be verified in polynomial time using a simple algorithm.
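
The final verification claim amounts to checking simple structural properties, such as cycles, of a directed graph in polynomial time; the sketch below shows only a generic cycle check, with the construction of the dependency graph from a game (the substance of the paper) left entirely hypothetical, including the class names used as nodes.

    # Minimal sketch: detect whether a directed "dependency graph" (nodes = classes of a
    # stochastic game, edges = dependencies between classes) contains a cycle. How nodes
    # and edges are derived from a given game is the substance of the paper and is not shown.

    def has_cycle(graph):
        """graph: dict mapping node -> list of successor nodes (all nodes appear as keys)."""
        WHITE, GREY, BLACK = 0, 1, 2
        color = {v: WHITE for v in graph}

        def dfs(v):
            color[v] = GREY
            for w in graph.get(v, []):
                if color[w] == GREY:
                    return True                  # back edge: a cycle
                if color[w] == WHITE and dfs(w):
                    return True
            color[v] = BLACK
            return False

        return any(color[v] == WHITE and dfs(v) for v in graph)

    # Hypothetical node labels standing in for classes in a mixture of stochastic games.
    example = {"class-A": ["class-B"], "class-B": ["class-C"], "class-C": []}
    print(has_cycle(example))   # False: no cycle across the classes in this toy mixture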

Rida Laraki

Centre national de la recherche scientifique, Ecole Polytechnique

  ,

Explicit Formulas for Repeated Games with Absorbing States    [pdf]

Abstract

Explicit formulas for the asymptotic value and the asymptotic minmax of finite discounted absorbing games are provided as the discount factor goes to zero (i.e., as the players become more and more patient). New, simple proofs for the existence of the limits are given. Similar characterizations for stationary Nash equilibrium payoffs are obtained. The results may be extended to absorbing games with compact action sets and jointly continuous payoff functions.

Rida Laraki

Centre national de la recherche scientifique, Ecole Polytechnique

  ,

Majority Judgment Strategic Analysis

Emiliya Lazarova

Queen's University Belfast

  ,

Coalitional Matchings    [pdf]

(joint work with Dinko Dimitrov and Emiliya Lazarova)

Abstract

In a coalitional two-sided matching problem, agents on each side of the market may form coalitions, such as student groups and research teams, which -- when matched -- form universities. We assume that each researcher has preferences over the research teams he would like to work in and over the student groups he would like to teach. Correspondingly, each student has preferences over the groups of students he wants to study with and over the teams of researchers he would like to learn from. In this setup, we examine how the existence of core stable partitions on the distinct market sides, the restriction of agents' preferences over groups to strict orderings, and the extent to which individual preferences respect common rankings shape the existence of core stable coalitional matchings.

SangMok Lee

California Institute of Technology

  ,

The Testable Implications of Zero-sum Games    [pdf]

Abstract

We study Nash-rationalizable joint choice behavior under restriction on zero-sum games. We show that interchangeability of choice behavior is the only additional condition which distinguishes zero-sum games from general non-cooperative games with respect to testable implications. This observation implies that in some sense interchangeability is not only a necessary but also a sufficient property which differentiates zero-sum games.

Yehuda Levy

Hebrew University

  ,

Stochastic Games with Information Lag    [pdf]

Abstract

Two-player zero-sum stochastic games with finite state and action spaces, as well as two-player zero-sum absorbing games with compact metric action spaces, are known to have undiscounted values. We study such games under the assumption that one or both players observe the actions of their opponent after some time-dependent delay. We develop criteria for the rate of growth of the delay such that a player subject to such an information lag can still guarantee himself in the undiscounted game as much as he could have with perfect monitoring. We also demonstrate that the player in the Big Match with the absorbing action subject to information lags which grow too rapidly, according to certain criteria, will not be able to guarantee as much as he could have in the game with perfect monitoring.

Wooyoung Lim

University of Pittsburgh

  ,

Communication in Bargaining over Decision Rights    [pdf]

Abstract

This paper develops a model of bargaining over decision rights between an uninformed principal and an informed but self-interested agent in which the uninformed principal makes a price offer to the agent who then decides either to accept or to reject the offer. Contrary to the Coase Theorem prediction, actions induced in the unique perfect Bayesian equilibrium do not always satisfy ex-post efficiency. Once we introduce explicit communication into the model, however, there exists a truth-telling perfect Bayesian equilibrium, in which induced actions always satisfy ex-post efficiency. Moreover, the truth-telling equilibrium is always neologism proof in the sense of Farrell (1993). This equilibrium outcome is ex-ante Pareto superior to that of several dispute resolution schemes studied in the framework of Crawford and Sobel (1982) and Holmström (1977).

Shi-Miin Liu

National Taipei University

  ,

Commitment or No-Commitment to Monitoring in Emission Tax Systems?    [pdf]

(joint work with Hsiao-Chi Chen)

Abstract

This paper analyzes and compares behavior of the regulator and polluting firms in emission tax systems with and without commitment to monitoring. In the commitment case, firms are found noncompliant at all equilibria. It means that there exists no paradox of ex ante commitment to monitoring as shown in principal-agent models. We also discover that the commitment to monitoring system is at least as efficient as the no-commitment to monitoring system. It implies that the regulator may face efficiency loss when she can commit but chooses not to. Accordingly, the regulator has stronger incentive to adopt the commitment system. Finally, relative magnitudes of firms' optimal emissions as well as equilibrium monitoring probabilities in the two systems are uncertain unless firms' weight in the social cost function is no less than one.

Fernando M. Louge

University of Wisconsin - Madison

  ,

Evolution with Private Information: Caution, Contrarianism and Herding    [pdf]

Abstract

This paper considers a model where agents receive private signals correlated with the unknown state of the world. The standard approach to this problem is to assume that agents maximize their (objective) expected utility based on their Bayesian posteriors. We present a repeated, non-strategic version of this model and show that the expected utility rule is evolutionarily suboptimal. We provide a characterization of the evolutionarily optimal rule. Compared to the behavior rule that maximizes the expected utility, our evolutionary criterion provides more ‘smoothing’ of the population growth rate across states of the world. This translates into two properties of the optimal behavior rule: contrarian behavior and caution. Contrarian behavior consists of a probabilistic bias towards actions that defy the ‘common wisdom’ embedded in the prior beliefs. Agents exhibit caution when, compared to expected utility maximizers, a more extreme prior is required before disregarding their private information. We extend the model of social learning of Smith and Sørensen (2000) to a general class of behavior rules that includes the evolutionary and the expected utility behavior rules. We show that the qualitative properties of the model are preserved within this class. In particular, herds eventually arise. The limit distributions of public beliefs, however, are different. We find that our evolutionary-founded rule induces herding on the optimal action with higher probability than the expected utility rule.

Jason Marden

California Institute of Technology

  ,

Distributed Welfare Games    [pdf]

(joint work with Adam Wierman)

Abstract

We consider a variation of the resource allocation problem. In the traditional problem, there is a global planner who would like to assign a set of players to a set of resources so as to maximize welfare. We consider the situation where the global planner does not have the authority to assign players to resources; rather, players are self-interested. The question that emerges is how the global planner can entice the players to settle on a desirable allocation with respect to the global welfare. To study this question, we focus on a class of games that we refer to as distributed welfare games. Within this context, we investigate how the global planner should distribute the welfare to the players. We measure the efficacy of a distribution rule in two ways: (i) Does a pure Nash equilibrium exist? (ii) How does the welfare associated with a pure Nash equilibrium compare to the global welfare associated with the optimal allocation? In this paper we explore the applicability of cost sharing methodologies for distributing welfare in such resource allocation problems. We demonstrate that obtaining desirable distribution rules, such as distribution rules that are budget balanced and guarantee the existence of a pure Nash equilibrium, often comes at a significant informational and computational cost. In light of this, we derive a systematic procedure for designing desirable distribution rules with a minimal informational and computational cost for a special class of distributed welfare games. Furthermore, we derive a bound on the price of anarchy for distributed welfare games in a variety of settings. Lastly, we highlight the implications of these results using the problem of sensor coverage.
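
One textbook distribution rule against which the two questions above can be illustrated is paying each player its marginal contribution to the welfare. The toy sensor-coverage instance below (my own illustrative numbers, not the paper's) enumerates profiles and confirms that a pure Nash equilibrium exists under this rule; note that the payments need not sum to the realized welfare at every profile, which is the budget-balance tension the paper studies.

    from itertools import product

    # Toy sensor-coverage instance: each player picks one location to cover; global
    # welfare is the total value of covered locations (duplicate coverage adds nothing).
    values = {"A": 10.0, "B": 6.0, "C": 3.0}
    players = [0, 1]
    choices = list(values)

    def welfare(profile):
        return sum(values[loc] for loc in set(profile))

    def marginal_contribution(i, profile):
        """Candidate distribution rule: pay each player its marginal contribution to welfare."""
        without_i = profile[:i] + profile[i + 1:]
        return welfare(profile) - welfare(without_i)

    def is_pure_nash(profile):
        for i in players:
            current = marginal_contribution(i, profile)
            for alt in choices:
                deviation = profile[:i] + (alt,) + profile[i + 1:]
                if marginal_contribution(i, deviation) > current + 1e-12:
                    return False
        return True

    nash = [p for p in product(choices, repeat=len(players)) if is_pure_nash(p)]
    print(nash)   # e.g. ('A', 'B') and ('B', 'A'): the two most valuable spots get covered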

Laurent Mathevet

University of Texas at Austin

  ,

Designing Stable Mechanisms in Economic Environments    [pdf]

(joint work with PJ Healy)

Abstract

We study the design of mechanisms that Nash-implement Walrasian or Lindahl allocations and induce supermodular games for a wide class of economies. Such mechanisms are robust to the presence of myopic agents who use adaptive learning rules to choose their strategies. We proceed in three steps: First, we identify strong necessary conditions on the functional form of any mechanism that implements Walrasian or Lindahl equilibria. Second, we use these necessary conditions to identify impossibility results for mechanisms with small strategy spaces. Finally, we show how to use additional dimensions in the strategy space to turn any Walrasian or Lindahl mechanism into a supermodular mechanism.

Alexander Matros

University of Pittsburgh

  ,

Raising Revenue With Raffles: Evidence from a Laboratory Experiment    [pdf]

(joint work with Wooyoung Lim and Theodore Turocy)

Abstract

Lottery and raffle mechanisms have a long history as economic institutions for raising funds. In a series of laboratory experiments we find that total spending in raffles is much higher than Nash equilibrium predicts. Moreover, this overspending is persistent as the number of participants in the raffle increases. Subjects as a group do not strategically reduce spending as group sizes increase, in contrast to the comparative statics that theory provides. The lack of strategic response cannot be explained by learning direction theory or level-k reasoning models, although quantal response equilibrium can fit the observed distribution of choices. Much of the observed spending in the larger groups cannot be explained by financial incentives.

Ana Mauleon

Facultés Universitaires Saint-Louis

  ,

Von Neumann-Morgenstern Farsightedly Stable Sets in Two-Sided Matching    [pdf]

(joint work with Ana Mauleon, Vincent Vannetelbosch and Wouter Vergote)

Abstract

We adopt the notion of von Neumann-Morgenstern (vNM) farsightedly stable sets to predict which matchings are possibly stable when agents are farsighted in one-to-one matching problems. We provide the characterization of vNM farsightedly stable sets: a set of matchings is a vNM farsightedly stable set if and only if it is a singleton set and its element is a corewise stable matching. Thus, contrary to the vNM (myopically) stable sets [Ehlers, J. of Econ. Theory 134 (2007), 537-547], vNM farsightedly stable sets cannot include matchings that are not corewise stable. Moreover, we show that our main result is robust to many-to-one matching problems with responsive preferences.

Emerson Melo

California Institute of Technology

  ,

Congestion Pricing and Learning in Traffic Networks Games    [pdf]

Abstract

A stochastic model describing the learning process and adaptive behavior of finitely many users in a congested traffic network with parallel links is used to prove almost sure convergence to an efficient equilibrium of a related game. To prove this result we assume that the social planner charges the marginal-cost price on every route, without knowing the efficient equilibrium. The result is a dynamic version of Pigou's solution, in which the implementation is decentralized and the information about players gathered by the social planner is minimal. Our result and setting may be extended to the general case of negative externalities.
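
A static illustration of the pricing rule the convergence result relies on: marginal-cost (Pigouvian) tolls t_e(x) = x * c_e'(x) on a two-link parallel network with made-up linear latencies. The sketch checks that the tolled equilibrium split coincides with the socially optimal split; the stochastic learning process itself is not modeled here.

    import numpy as np

    # Two parallel links with latency c_e(x) = a_e + b_e * x and total (continuous) demand D.
    a = np.array([1.0, 0.0])
    b = np.array([1.0, 2.0])
    D = 1.0

    x1 = np.linspace(0.0, D, 100001)           # flow on link 1; link 2 carries D - x1
    x = np.vstack([x1, D - x1])

    # Social optimum: minimize total travel cost sum_e x_e * c_e(x_e).
    social_cost = (x * (a[:, None] + b[:, None] * x)).sum(axis=0)
    x_opt = x1[np.argmin(social_cost)]

    # Tolled user cost on each link: latency plus the marginal-cost toll x_e * c_e'(x_e).
    tolled_cost = a[:, None] + b[:, None] * x + b[:, None] * x
    x_eq = x1[np.argmin(np.abs(tolled_cost[0] - tolled_cost[1]))]   # interior equilibrium equalizes costs

    print("socially optimal split:", x_opt)
    print("tolled equilibrium split:", x_eq)   # coincides with the optimum up to grid resolution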

Chun-Hui Miao

University of South Carolina

  ,

Sequential Innovation, Technology Leakage and the Duration of Technology Licensing    [pdf]

(joint work with John Gordanier and Chun-Hui Miao)

Abstract

A large literature has examined the optimal payment scheme in technology licensing. In this paper, by assuming that technology transfer is not completely reversible, we consider the payment scheme and the duration of licensing contracts offered by an innovator with a sequence of possible innovations. We find that it may be optimal to license the innovation for less than the full length of the patent. We also propose a new rationale for the use of royalty contracts: they resolve a time-inconsistency problem faced by the innovator. Our results suggest that licensing contracts based on royalty have a longer duration than fixed-fee licenses and are more likely to be used in industries where sequential innovations are frequent.

Maximilian Mihm

Cornell University

  ,

What Goes Around Comes Around: A theory of strategic indirect reciprocity in networks    [pdf]

(joint work with Russell Toth and Corey Lang)

Abstract

We consider strategic interaction on a network of heterogeneous, long-term relationships. The bilateral relationships are independent of each other in terms of actions and realized payoffs, and we assume that information regarding outcomes is private to the two parties involved. In spite of this, the network can induce strategic interdependencies between relationships, which facilitate efficient outcomes. We derive necessary and sufficient conditions that characterize efficient equilibria of the network game in terms of the architecture of the underlying network, and interpret these structural conditions in light of empirical regularities observed in many social and economic networks.

Dylan Minor

University of California, Berkeley

  ,

When Second Best is Best: on the Optimality of Offering a Larger Second Prize    [pdf]

Abstract

Contests are ubiquitous; from true tournaments to auctions, and from competing employees to competing firms, much can be cast as a contest. Consequently, there is a large literature examining the optimal design of contests. Interestingly, few studies have considered the role of convex costs or concave designer benefits, which is how we model many real-world applications found in a contest setting. We show that in such a setting, given a sufficient number of participants, it is best to offer a larger second prize than first prize. However, in such a contest non-monotonicities emerge, for which we propose a mechanism dubbed the generalized second prize contest. We then examine indivisible prizes, finding again that, with a sufficient number of participants, it is best to offer the sole prize to second place instead of first. Finally, we provide some applications of our findings, ranging from regulation to innovation.

Toshiji Miyakawa

Osaka University of Economics

  ,

On the Bilateral Contracting Process in Economies with Externalities    [pdf]

Abstract

This paper examines whether an efficient outcome can be achieved through the bilateral contracting processes in a noncooperative coalitional bargaining game model with externalities and renegotiations. We describe the bargaining situation in a strategic form game. When the members of coalitions make binding agreements about their actions and transfers in the coalition formation process, almost all Markov perfect equilibria converge to the efficient state.
On the other hand, in the partition function form game situation, all equilibria may remain in an inefficient state forever even if the grand coalition is efficient.

Key words: Coalitional bargaining, bilateral contracting, externalities, strategic form game

Subhasish Modak Chowdhury

Purdue University

  ,

The All-pay Auction with Non-monotonic Payoff    [pdf]

Abstract

This article analyzes a two-bidder first-price all-pay auction under complete information where the winning payoff is non-monotonic in the bidder's own bid. We derive the conditions for the existence of pure strategy Nash equilibria and fully characterize the unique mixed strategy Nash equilibrium when the pure strategy equilibria do not exist. Unlike the standard all-pay auction results, as in Baye et al. (1996) or Siegel (2009), under this non-monotonic payoff structure the stronger bidder has two distinct mass points in his/her equilibrium mixed strategy and the equilibrium support of the weaker player is not continuous. When the bidders face a common value, both bidders place mass points at the same point of the support in the equilibrium mixed strategy. The equilibrium payoff conditions stated in Siegel (2009) do not hold in the case of pure strategy Nash equilibria. Possible real-life applications are discussed.

Herve Moulin

Rice University

  ,

Pricing Traffic in a Spanning Network    [pdf]

Abstract

Each user of the network needs to connect a pair of target nodes. There are no variable congestion costs, only a direct connection cost for each pair of nodes. A centralized mechanism elicits target pairs from users, and builds the cheapest forest meeting all demands. We look for cost sharing rules satisfying

Routing-proofness: no user can lower its cost by reporting as several users along an alternative path connecting his target nodes;

Stand Alone core stability: no group of users pay more than the cost of a subnetwork meeting all connection needs of the group.

We first construct two core stable and routing-proof rules for the case in which connecting costs are all 0 or 1. One is derived from the random spanning tree weighted by the volume of traffic on each edge; the other is the weighted Shapley value of the Stand Alone cooperative game.
For arbitrary connecting costs, we prove that the core is nonempty if the graph of target pairs connects all pairs of nodes. We then extend both rules above by the piecewise-linear technique. The former rule is computable in polynomial time; the latter is not.

Ahuva Mu'alem

California Institute of Technology

  ,

On Multi-Dimensional Envy-Free Mechanisms

Abstract

We consider "fairness design" scenarios in which each bidder follows the global goal of the mechanism designer only if the resulting allocation would be fair from his own point of view. More formally, we focus on approximation algorithms for indivisible items with supporting envy-free bundle prices. We study the canonical problem of makespan-minimizing unrelated machine scheduling in an envy-free manner. Tight algorithmic bounds are given for the interesting special case of related machines.

Victor Naroditskiy

Brown University

  ,

Destroy to Save    [pdf]

(joint work with Geoffroy de Clippel, Victor Naroditskiy, and Amy Greenwald)

Abstract

We study the problem of how to allocate m identical items among n>m agents, assuming each agent desires exactly one item and has a private value for consuming the item. We assume the items are jointly owned by the agents, not by one uninformed center, so an auction cannot be used to solve our problem. Instead, the agents who receive items compensate those who do not.
This problem has been studied by others recently, and their solutions have modified the classic VCG mechanism. Advantages of this approach include strategy-proofness and allocative efficiency. Further, in an auction setting, VCG guarantees budget balance, because payments are absorbed by the center. In our setting, however, where payments are redistributed to the agents, some money must be burned in order to retain strategy-proofness.
However, there is no reason to restrict attention to VCG mechanisms. In fact, allocative efficiency (allocating the m items to those that desire them most) is not necessarily an appropriate goal in our setting. Rather, we contend that maximizing social surplus is. In service of this goal, we study a class of mechanisms that may burn not only money but destroy items as well. Our key finding is that destroying items can save money, and hence lead to greater social surplus.
More specifically, our first observation is that a mechanism is strategy-proof iff it admits a threshold representation. Given this observation, we restrict attention to specific threshold and payment functions for which we can numerically solve for an optimal mechanism. Whereas the worst-case ratio of the realized social surplus to the maximum possible is close to 1 when m=1 and 0 when m=n-1 under the VCG mechanism, the best mechanism we find coincides with VCG when m=1 but has a ratio approaching 1 when m=n-1 as n increases.
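
As a baseline for the comparison drawn above, a sketch of the textbook VCG mechanism for m identical items and unit-demand agents: the m highest-value agents win, each pays the (m+1)-st highest value, and absent redistribution those payments are the money to be burned. The paper's threshold and redistribution mechanisms themselves are not reproduced; the values below are hypothetical.

    import numpy as np

    def vcg_unit_demand(values, m):
        """Textbook VCG for m identical items, unit-demand agents: the m highest-value
        agents win and each pays the (m+1)-st highest value (the externality it imposes)."""
        values = np.asarray(values, dtype=float)
        order = np.argsort(-values)
        winners = order[:m]
        price = values[order[m]] if m < len(values) else 0.0
        payments = {int(i): price for i in winners}
        return winners, payments

    values = [9.0, 7.0, 4.0, 2.0]       # hypothetical private values, n = 4 agents
    winners, payments = vcg_unit_demand(values, m=2)
    print(winners, payments)
    print("gross surplus:", sum(values[i] for i in winners),
          " payments to be redistributed or burned:", sum(payments.values()))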

Barry O'Neill

University of California, Los Angeles

  ,

Vagueness in Communication    [doc]

Abstract

Vagueness in communication is modeled by certain global games. Speaking vaguely is different from speaking non-specifically. "The weather is hot" has a boundary of truth that is not commonly known by the speaker and listener, and so is vague, whereas "The temperature is above 21°" is non-specific but has a sharp boundary. Some norms that constrain social behaviour have clear lines separating what is permitted from what is forbidden. They are more stable for that reason, less prone to "slippery slopes." Some taboo behaviours naturally admit a clear line -- cannibalism, incest, the use of nuclear weapons in war -- while others do not -- enhanced interrogation versus torture, attacks on civilians versus combatants as military targets. The game model provides a non-metaphorical understanding of a "clear line": it means a distinction that can be referred to non-vaguely in communications.

Marius-Ionut Ochea

University of Amsterdam

  ,

Evolution in Repeated Prisoner's Dilemma under Perturbed Best-Reply Dynamics    [pdf]

Abstract

In an evolutionary set-up, we augment an ecology of iterated Prisoner's Dilemma (IPD) strategies, consisting of unconditional cooperators (AllC), unconditional defectors (AllD) and reactive players (TFT), with two repeated-game strategies that seem to receive less attention in the evolutionary IPD literature: the error-proof, "generous" tit-for-tat (GTFT), which, with a certain probability, re-establishes cooperation after a (possibly mistaken) defection of the opponent, and the penitent, "stimulus-response" (Pavlov) strategy, which resets cooperation after the opponent has punished a defection. Stable oscillations in the frequencies of both the forgiving (GTFT) and the repentant (Pavlov) strategy, along with chaotic behavior, emerge under a perturbed version of best-response dynamics, the logit dynamics.
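
The perturbed best-reply (logit) dynamics referred to above adjust the population state toward the logit response to current average payoffs. The discrete-time sketch below uses an arbitrary 3x3 payoff matrix as a stand-in for the repeated-game payoffs of AllC, AllD and TFT (which depend on continuation and error probabilities not given here), so it illustrates only the dynamic, not the paper's findings.

    import numpy as np

    def logit_dynamics(A, x0, eta=0.1, steps=200):
        """Discrete-time logit dynamics: the state moves partway toward the logit
        (perturbed best) response to the payoffs generated by the current mixture."""
        x = np.array(x0, dtype=float)
        path = [x.copy()]
        for _ in range(steps):
            payoffs = A @ x
            br = np.exp(payoffs / eta)
            br /= br.sum()
            x = x + 0.2 * (br - x)        # partial adjustment toward the logit response
            path.append(x.copy())
        return np.array(path)

    # Hypothetical average payoffs of AllC, AllD, TFT against each other (illustrative numbers only).
    A = np.array([[3.0, 0.0, 3.0],
                  [5.0, 1.0, 1.2],
                  [3.0, 1.0, 3.0]])
    path = logit_dynamics(A, x0=[1/3, 1/3, 1/3])
    print(np.round(path[-1], 3))          # long-run mixture under these illustrative payoffs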

David Ong

University of California

  ,

Fishy Gifts: Bribing with Shame and Guilt    [pdf]

Abstract

The following proposes a psychological mechanism by which the trust vested in fiduciaries (experts with wide unobservable discretion) might be exploited by third parties. The motivation is the $250 billion prescription drug industry, which spends $19 billion per year on marketing to US doctors, mostly on `gifts' and often with no monitoring for reciprocation. In one incident, a pharmaceutical firm representative closed her presentation to Yale medical residents by handing out $150 medical textbooks and remarking, "one hand washes the other." By the next day, half the textbooks were returned. I model such bribing of fiduciaries as a one shot psychological trust game with double-sided asymmetric information. I show that the `shame' of acceptance of a possible bribe, rather than being an impediment to bribing, can screen for reciprocating `guilt' -- and that an announcement of the expectation of reciprocation can extend the effect. Current policies to deter reciprocation might aid such screening.
This paper is a part of a series of papers that I am writing on how moral hazard is dealt with in fiduciary professions like accounting, credit rating and subprime mortgage lending, where wide unobservable discretion is exercised. The results of Fishy Gifts were tested in a controlled laboratory setting in "Sorting with Shame in the Laboratory". Both are available at www.davidong.net.

Ram Orzach

Oakland University

  ,

Revenue Comparison in Common-Value Auctions: Two Examples    [pdf]

(joint work with David A. Malueg)

Abstract

Milgrom and Weber (1982) established that for symmetric auction environments in which players' (affiliated) values are symmetrically distributed, expected revenue in the second-price sealed-bid auction is at least as large as in the first-price sealed bid auction. We provide two simple examples of a common-value environment showing this ranking can fail when players are asymmetrically informed.

Antonio Miguel Osorio-Costa

University Carlos III Madrid

  ,

Efficiency Gains in Repeated Games at Random Moments in Time    [pdf]

(joint work with Antonio M. Osorio-Costa)

Abstract

This paper studies repeated games where the timing of the repetitions of the stage game is neither known nor controlled by the players. Many economic situations of interest in which players repeatedly interact share this feature: players do not know exactly when they will next be called upon to play. We call this feature random monitoring. We show that perfect random monitoring is always superior to perfect deterministic monitoring when the players' discount function is convex in the time domain. Surprisingly, when monitoring is imperfect but public, the result does not extend in the same absolute sense. The positive effect on the players' discounting is not sufficient to compensate for a larger probability of punishment at all frequencies of play. However, we establish conditions under which random monitoring allows efficiency gains in the value of the best strongly symmetric equilibrium payoffs, compared with the classic deterministic approach.

Eduardo Perez

Stanford University

  ,

Competing with Equivocal Information: The Importance of Weak Candidates    [pdf]

Abstract

In the usual persuasion game framework, where an informed sender tries to persuade an uninformed receiver to take a certain action by selectively communicating verifiable information, all the relevant information is revealed in equilibrium because any action of the sender can be outguessed by the receiver. If the sender is unable to interpret her own information, however, this classical unraveling argument breaks down. When the receiver is sufficiently inclined to act as the sender wishes without any information, the sender has no incentive to inform her. This paper examines whether full disclosure can be restored through competition between multiple senders. In the model, the senders compete for a limited number of prizes allocated by the receiver. Full disclosure can be restored only in the presence of weak candidates, that is, ex ante unpromising candidates. With sufficiently many weak candidates, it is always possible to ensure full disclosure.

Wolfgang Pesendorfer

Princeton University

  ,

Measurable Ambiguity

(joint work with Faruk Gul)

Abstract

We introduce subjective expected uncertain utility theory (SEUU). In SEUU the decision maker uses a semiprobability to assess the likelihood of events. The semiprobability allows the decision maker to reduce acts to bilotteries. A bilottery specifies for each interval of monetary prizes [x, y] the probability that the decision maker will end up with a prize in this interval. A bilottery allows for the possibility that the probability of receiving a prize in the interval [x, y] cannot be reduced to the probabilities of receiving a prize in the subintervals [x, w) and [w, y]. The decision maker evaluates bilotteries by taking the expectation of a utility index u that specifies a utility for each interval [x, y]. We provide a Savage-style representation theorem for SEUU theory, define uncertainty aversion and characterize the corresponding order on bilotteries.
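
For a bilottery π with finite support, the evaluation step described above presumably takes the form

    V(π) = Σ_[x,y] π([x,y]) · u([x,y]),

with the sum replaced by an integral in the general case; an act is then evaluated by first reducing it, via the semiprobability, to the bilottery it induces and then taking this expectation. This reading is my own gloss on the abstract, not a statement of the paper's formal definition.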

Gwenael Piaser

Université du Luxembourg

  ,

Moral Hazard: Deterministic Indirect Mechanisms and Efficiency    [pdf]

(joint work with Andrea Attar, Eloisa Campion, Uday Rajan)

Abstract

In this paper we examine strategic interactions between a principal
and several agents under moral hazard. We show how (messages)
communication may improve on efficiency even in models of complete
information. Messages are useful two main reasons. First, if the
principal cannot use stochastic mechanisms, mechanisms with messages
can sustain mixed strategies and hence indirectly a stochastic
outcome. Second, even if stochastic mechanisms are allowed, messages
can be used to induce correlation between efforts and outcome.
Finally, we provide sufficient conditions under which an equilibrium
allocation supported by a stochastic direct mechanism, can be
sustained by a deterministic indirect mechanism.

Brijesh Preston Pinto

University of Southern California

  ,

Strongly Stable Matchings with Cyclic Preferences

Abstract

We construct an order-reversing function on a lattice whose fixed points correspond to strongly stable matchings when preferences are cyclic in three or more dimensions. We then demonstrate that the algorithm of Echenique and Yenmez (2007) can be used to compute all the fixed points of this function. We discuss applications to the paired kidney exchange problem.

Brennan Platt

Brigham Young University

  ,

Pay-to-Bid Auctions    [pdf]

(joint work with Joseph Price, Henry Tappen)

Abstract

We analyze an auction format in which bidders pay a fee each time they increase the auction price. The bidding fees are the primary source of revenue for the seller, but result in expected revenue equivalent to that of standard auctions. Our model predicts a particular distribution of ending prices, which we are able to test against observed auction data. Our model fits the data well for over three-fourths of the items that are routinely auctioned. The notable exceptions are items related to video game systems; these result in more aggressive bidding and higher expected revenue. By incorporating mild risk-loving preferences into the model, we can explain nearly all of the auctions.

Roland Pongou

Brown University

  ,

A Dynamic Theory of Fidelity Networks with an Application to the Spread of HIV/AIDS    [pdf]

(joint work with Roberto Serrano)

Abstract

We study the dynamic stability of fidelity networks, which are networks that form in a mating economy of agents of two types (say men and women), where each agent enjoys having direct links with agents of the opposite type, while engaging in multiple partnerships is punished if detected by the cheated partner. We assume that such punishment is more severe for women than for men, with the result that women's optimal number of partners is smaller than men's. We define two dynamic and stochastic matching processes in which agents form and sever links based on the reward from doing so, but with small probability take actions that are not beneficial. In defining the probability of such actions, the first process relies on the intuition that an individual who invests more time in a relationship makes it stronger and harder for his/her partner to break, while in the second process such an individual is perceived as weak. We find that in the long run, only egalitarian pairwise stable networks, in which all agents have the same number of partners, are stable under the first process; while under the second process, only anti-egalitarian pairwise stable networks, in which all women have their desired number of partners and are matched to a small number of men, are stable. Next, we apply these results to find that under the first process men and women are equally vulnerable to HIV/AIDS, while under the second process women are more vulnerable. The key implication is that even if the prevalence of HIV/AIDS is lower among women than among men at some point in time, the number of infected women will grow over time to reach and possibly exceed the number of infected men. Our analysis lends support to the hypothesis that anti-female discrimination is a key factor in the greater vulnerability of women to HIV/AIDS often observed in real data.

Daniel Quint

University of Wisconsin

  ,

Bargaining with Endogenous Information    [pdf]

(joint work with Ricardo Serrano-Padial (University of Wisconsin))

Abstract

Imagine a consulting firm pitching a project to a potential customer. While the deliverables may be clear, the firm may not know exactly how costly they will be to complete; similarly, the buyer may not know exactly how much benefit they will get. Prior to negotiations, both sides may choose to invest time and resources in sharpening their estimate. In the current paper, we study how each side's bargaining power in the negotiations over price interacts with the incentives to invest ex-ante in information.

We find that in a private values setting, most of the time (though not always), information and bargaining power are strategic complements. Facing a better-informed opponent generally increases your payoff, but decreases your incentive to gather information. We compare equilibrium information acquisition when your choice of how much to invest in information-gathering is observable to your opponent and when it is not, and find complementarity between bargaining power and secrecy -- the party with bargaining power prefers information-gathering to be unobservable. We also find that when information acquisition is observable, there exist parameters such that the party with all the bargaining power ex-post gets a lower ex-ante expected payoff, due to his spending on information acquisition in equilibrium.

In future work, we plan to simultaneously endogenize the bargaining protocol (including the two sides' bargaining power) and information, by considering the more general question of bilateral mechanism design with endogenous private information.

Javier Rivas

University of Leicester

  ,

Cooperation, Imitation and Correlated Matching    [pdf]

Abstract

We study a setting where players are matched into pairs to play a Prisoners' Dilemma game. Players are not rational in that they simply imitate the more successful actions they observe. Furthermore, a certain correlation is added to the matching process: players belonging to a pair where both parties cooperate keep the same partner next period, while all other players are randomly matched into pairs. While under completely random matching cooperation vanishes for any interior initial condition, the correlation in the matching process considered in this paper makes a significant amount of cooperation the unique outcome under mild conditions. Furthermore, it is shown that no separating equilibrium, i.e. a situation where cooperators and defectors are not matched together, exists.

Artus Philipp Rosenbusch

Darmstadt University of Technology

  ,

Satisfiable Fairness in Cooperative Games with Transferable Utility    [pdf]

Abstract

The driving question behind cooperative game theory is how to divide a cooperatively generated value among the players. The dominant solution concept is presently the core. Other solution concepts include the tau-value, the Shapley value and the egalitarian core.

Wherever cooperative game theory is used to model human behavior, the question arises as to whether the modeled solutions can be considered FAIR. Now, while some solution concepts are motivated by certain notions of fairness, the term itself cannot be accurately defined. The word carries a range of semantics as diverse as EQUITY OF NEEDS, PERFORMANCE FAIRNESS and EQUAL OPPORTUNITIES. In addition, the degree of personal inequity aversion varies between cultures.

This paper provides a sanity condition for different fairness notions called SATISFIABILITY. Furthermore, different fairness predicates on the imputation space are defined and their satisfiability is discussed.

The proposed fairness concepts include respecting a pre-order of relative value on the player set as given by the game’s payoff function; compatibility with splitting the game into a purely cooperative and a trivial component; and respecting a pre-order of relative value on the lattice of coalitions, which can be thought of as the formation of labor unions. A discussion of whether the solution concepts mentioned above meet these fairness predicates is also included.

Evangelos Rouskas

Athens University of Economics and Business

  ,

Efficient Delay in Decision Making    [pdf]

Abstract

This research places standard price dispersion models in a dynamic perspective and offers an explanation for the frequent delay in durables' consumption evidenced by surveys of purchase intent. First, I introduce a dynamic price information clearinghouse setup with endogenous acquisition of information and demonstrate that, depending on parameter values, there exist equilibria where buyers purchase early on or delay purchases and are better off, worse off, or indifferent compared to the static benchmark. Second, in a dynamic game with a small number of capacity-constrained sellers and a small number of buyers with growing demand, I identify unexplored equilibria which, contrary to known results, confirm that on some occasions deliberately putting off consumption to a later time is more beneficial for buyers than the static outcome. Third, I argue that dynamics clearly confer advantages to the supply side only in underground markets, such as the retail market for illicit drugs, where there is significant variation in the price/quality ratio.

Asha Sadanand

University of Guelph

  ,

Outside Options and Investment    [pdf]

(joint work with Patrick Martin)

Abstract

When investment is relation-specific, we encounter the familiar hold-up problem: too little investment occurs because the investor is concerned about losing bargaining power by investing. In the literature, many models assume that the outcome of the bargaining stage, which occurs after the investments are undertaken, is directly affected by outside options in a particular manner, such as by moving the disagreement point in Nash bargaining. With such a mechanism in place, anticipating the bargaining outcome typically leads each party to suboptimal investment. In this paper we drop the ad hoc bargaining assumption and instead first solve the theoretical problem of finding subgame perfect equilibrium in bargaining when each party potentially has different outside options. We then apply the solution to the hold-up problem to characterize the conditions under which first-best levels of investment are possible in a subgame perfect Nash equilibrium.

Siddhartha Sahi

Rutgers University

  ,

The Allocation of a Prize

(joint work with Pradeep Dubey)

Abstract

Consider agents who undertake costly effort to produce stochastic outputs observable by a principal. The principal can award a prize deterministically to the agent with the highest output or to all of them with probabilities that are proportional to their outputs. We show that the deterministic prize elicits more (expected, total) output when agents’ abilities are evenly matched, otherwise the proportional prize does better. Therefore if agents’ characteristics are sufficiently diverse compared to the noise on output, and are not heavily correlated (e.g., because they are picked i.i.d.), then the proportional prize will elicit more output. We in fact show that this is the case when any Nash selection (under the proportional prize) is compared with any individually rational strategy selection (under the deterministic prize), provided agents know each others’ characteristics (the complete information case). When there is incomplete information, the same conclusion holds (but now we must restrict to Nash selections for both prizes). In the event that the principal knows the distribution of agents’ characteristics, we also compute the optimal scheme for awarding the prize (among all schemes conceivable).

Ahmet Sahin

Kahramanmaras Sutcu Imam University

  ,

An Application of Game Theory to Producers in Competition with Production and Market Price Risks: The Case of Turkey    [pdf]

(joint work with Bulent Miran, Ibrahim Yildirim, Murat Cankurt)

Abstract

In deciding on production patterns that will be profitable and sustainable, it is essential to develop strategies against fluctuations in production and market prices. Game theory offers a way to choose strategies under such uncertainty and can be used to solve problems of competition in which a conflict of interest occurs among decision makers.
This study aims to determine the plant enterprises with the highest gross profit that operate under production and market price risk conditions. Data for 2006 were collected from 162 producers of 279 different plant products in Bayındır, İzmir, Turkey.
The Maximax, Wald, Regret, Hurwicz, Utility and Laplace criteria of game theory were used in this study. These criteria were considered to represent major characteristics of producers.
Tomatoes had the highest gross profit per decare, at 554.39 US dollars, when the Maximax, Hurwicz and Laplace criteria were applied, followed by peppers with 182.50 US dollars when the Wald and Utility criteria were employed. Taking these results into consideration, we recommend that producers in the area concentrate on tomato and pepper activities under the given production and market price conditions. We are of the opinion that producers may make good use of the results of this study to develop rational product pattern planning. The database may also be beneficial for the related sectors and policy makers in their decisions.

Key Words: Game Theory, Products Pattern, Producers Competition against Risks
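For illustration, the criteria listed above (except the Utility criterion, which requires an elicited utility function) can be computed from a small gross-profit matrix. The figures below are hypothetical and are not the study's data.

```python
# Illustrative sketch of the decision criteria named above, applied to a
# hypothetical crop-choice gross-profit matrix (rows: crops, columns: states
# of nature such as price/yield scenarios). All figures are made up.
payoffs = {
    "tomato": [650, 420, 180],
    "pepper": [300, 260, 200],
    "cotton": [280, 250, 220],
}
alpha = 0.6  # Hurwicz optimism coefficient (assumed)

maximax = max(payoffs, key=lambda c: max(payoffs[c]))
wald = max(payoffs, key=lambda c: min(payoffs[c]))  # maximin
hurwicz = max(payoffs, key=lambda c: alpha * max(payoffs[c]) + (1 - alpha) * min(payoffs[c]))
laplace = max(payoffs, key=lambda c: sum(payoffs[c]) / len(payoffs[c]))

# Savage regret: in each state, regret = best payoff in that state minus own payoff.
n_states = len(next(iter(payoffs.values())))
best_by_state = [max(payoffs[c][s] for c in payoffs) for s in range(n_states)]

def max_regret(crop):
    return max(best_by_state[s] - payoffs[crop][s] for s in range(n_states))

regret_choice = min(payoffs, key=max_regret)  # minimax regret

print("Maximax:", maximax, "| Wald:", wald, "| Hurwicz:", hurwicz,
      "| Laplace:", laplace, "| Regret:", regret_choice)
```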

Marco Scarsini

Libera Università Internazionale degli Studi Sociali

  ,

Repeated Congestion Games with Local Information    [pdf]

(joint work with Tristan Tomala)

Abstract

In congestion games considerable attention has been devoted to the inefficiency of Nash equilibria and to the relation between equilibrium costs and efficient costs. In particular two measures of inefficiency have been studied: the price of anarchy, i.e., the ratio of the worst Nash equilibrium cost to the optimal cost, and the price of stability, i.e., the ratio of the best Nash equilibrium cost to the optimal cost.
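A standard textbook example makes the two measures concrete; the sketch below uses Pigou's two-link network rather than an example from the paper.

```python
# Worked example of the two measures defined above, using Pigou's two-link
# network (not an example from the paper): a unit mass of traffic chooses
# between link A with constant cost 1 and link B with cost x, where x is the
# fraction of traffic on B.
def total_cost(x_on_B):
    return (1 - x_on_B) * 1.0 + x_on_B * x_on_B  # mass*cost on A + mass*cost on B

eq_cost = total_cost(1.0)   # unique equilibrium: everyone uses B, total cost 1
opt_cost = min(total_cost(x / 1000) for x in range(1001))  # optimum near x = 1/2, cost 0.75

# With a unique equilibrium, the two ratios coincide here.
print(f"price of anarchy = price of stability = {eq_cost / opt_cost:.3f}")  # about 4/3
```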

One motivation for this work is to study the inefficiency of equilibria in repeated congestion games. While it is impossible to reduce the price of anarchy, since the one-shot equilibrium is also an equilibrium of the repeated game, a folk-theorem argument shows that the price of stability can often be reduced to 1 in the repeated game.

The difficulty in proving a folk theorem for congestion games resides in the fact that the monitoring is only local, i.e., players observe only the routes that they go through. Our model does not allow us to use any of the various versions of the folk theorem under imperfect monitoring. Hence we explicitly define a punishment strategy that implements an efficient equilibrium.

Burkhard C Schipper

University of California, Davis

  ,

Unbeatable Imitation    [pdf]

(joint work with Peter Duersch, Joerg Oechssler, Burkhard C. Schipper)

Abstract

We show that the simple decision rule ``imitate-the-best'' cannot be beaten even by a dynamic relative payoff optimizer in many classes of symmetric games. These classes comprise all symmetric 2x2 games, games for which the relative payoff function is quasiconcave or a valuation, aggregative quasiconcave quasisubmodular games, and symmetric quasiconcave zero-sum games. Examples include Cournot oligopoly, rent seeking, public goods games, common pool resource games, minimum effort coordination games, and arms races. This suggests that prior theoretical studies of imitation in those games are less ad hoc than previously thought.
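A small simulation, not taken from the paper, illustrates the flavor of the result in a linear Cournot duopoly; here the imitator's opponent is a myopic best responder, used as a simpler stand-in for the paper's dynamic relative payoff optimizer.

```python
# Simulation sketch (not from the paper): linear Cournot duopoly with inverse
# demand P = max(0, a - q1 - q2) and zero costs. Player 1 uses
# "imitate-the-best"; player 2 is a myopic best responder.
a = 12.0

def profits(q1, q2):
    p = max(0.0, a - q1 - q2)
    return q1 * p, q2 * p

q1, q2 = 1.0, 5.0          # arbitrary starting quantities
gap = 0.0                  # cumulative (opponent profit minus imitator profit)
for _ in range(200):
    pi1, pi2 = profits(q1, q2)
    gap += pi2 - pi1
    q1 = q1 if pi1 >= pi2 else q2        # copy whichever quantity earned more
    q2 = max(0.0, (a - q1) / 2)          # myopic best response to the imitator

print(f"long-run quantities: q1 = {q1:.2f}, q2 = {q2:.2f}")
print(f"opponent's average payoff advantage over the imitator: {gap / 200:.3f}")  # <= 0
```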

Karl Schlag

Universitat Pompeu Fabra

  ,

Can Sanctions Induce Pessimism? An Experiment    [pdf]

(joint work with Roberto Galbiati, Karl H. Schlag, Joel van der Weele)

Abstract

We experimentally investigate the effects of sanctions when there are multiple equilibria. Two subjects play a two-period minimum effort game in the presence of a third player (the principal). The principal benefits from coordination on higher effort and is the only one informed of previous choices. We contrast introducing an exogenously imposed sanction in the second round with the case where the principal is allowed to decide whether or not, at a small cost, to impose a sanction. We find that exogenously introduced sanctions are effective in inducing optimistic beliefs about others and help coordination on more efficient equilibria. On the other hand, endogenously introduced sanctions negatively influence beliefs about the effort of the other player. The results support the idea that sanctions have an expressive dimension which can undermine their effectiveness by discouraging optimistic players.

Sergei Severinov

University of British Columbia and Essex University

  ,

Multidimensional Screening with One-Dimensional Allocation Space    [pdf]

(joint work with Raymond Deneckere, University of Wisconsin-Madison)

Abstract

We develop a general method for solving multi-dimensional screening problems in which the 'physical' allocation space is one-dimensional, and provide necessary and sufficient conditions for the existence of exclusion in the optimal mechanism. Our method is based on identifying the 'isoquants' - the sets of types who obtain the same allocation. We provide a general characterization of the optimal mechanism, and apply it to explicitly compute the solution when the utility function is linear-quadratic and the types are distributed uniformly. Interestingly, the optimal solution exhibits a discontinuity at the boundary between the exclusion and non-exclusion regions for a large set of parameter values.

Itai Sher

University of Minnesota

  ,

Optimal Shill Bidding in the VCG Mechanism    [pdf]

Abstract

This paper studies shill bidding in the VCG mechanism applied to combinatorial auctions. Shill bidding is a strategy whereby a single decision-maker enters the auction under the guise of multiple identities (Sakurai, Yokoo, and Matsubara 1999). I formulate the problem of optimal shill bidding for a bidder who knows the aggregate bid of her opponents. A key to the analysis is a subproblem--the cost minimization problem--which searches for the cheapest way to win a given package using shills. This formulation leads to an exact characterization of the aggregate bids b such that some bidder would have an incentive to shill bid against b. It is well known that when goods are substitutes, there is no incentive to shill bid. In contrast, I show that when goods are pure complements, the incentive to shill takes a simple form: there is an incentive to disintegrate and bid for each item using a different identity. With a mix of substitutes and complements, I show that the winner determination problem (for single-minded bidders)--the problem of finding an efficient allocation in a combinatorial auction--can be embedded into the optimal shill bidding problem. Shill bidding is closely related to collusion. Setting aside the ordinary incentive to suppress competition, the disincentive to disintegrate using shills when facing a substitutes valuation is shown to translate into an incentive to merge for a coalition facing the same valuation. Only when valuations are additive can the incentives to shill and merge simultaneously disappear. The paper also shows that there does not exist a dominant strategy in the VCG mechanism when shill bidding is possible. I find a large class of shill bidding strategies which sometimes outperform truthful bidding, but also show that no shill bidding strategy dominates truthful bidding.
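The disintegration incentive under pure complements can be illustrated with a brute-force VCG computation on two items; the valuations below are made up for illustration and are not taken from the paper.

```python
from itertools import product

# Brute-force VCG for two items, with made-up valuations, illustrating the
# disintegration (shill) incentive under pure complements described above.
ITEMS = ("A", "B")

def vcg(bidders):
    """bidders: name -> {frozenset(bundle): value}; returns (allocation, payments)."""
    names = list(bidders)

    def value(name, bundle):
        return bidders[name].get(frozenset(bundle), 0)

    def best(excluded=None):
        # Enumerate assignments of each item to one bidder or to nobody (None).
        best_w, best_alloc = -1, None
        for assign in product(names + [None], repeat=len(ITEMS)):
            if excluded is not None and excluded in assign:
                continue
            bundles = {n: [it for it, owner in zip(ITEMS, assign) if owner == n]
                       for n in names}
            w = sum(value(n, bundles[n]) for n in names)
            if w > best_w:
                best_w, best_alloc = w, bundles
        return best_w, best_alloc

    total, alloc = best()
    payments = {}
    for n in names:
        others_without_n, _ = best(excluded=n)
        others_with_n = total - value(n, alloc[n])
        payments[n] = others_without_n - others_with_n  # Clarke pivot payment
    return alloc, payments

# Opponents' aggregate bid treats the items as pure complements.
rival = {frozenset({"A", "B"}): 10}

# One identity, truthful package bid: wins {A, B} and pays 10.
print(vcg({"us": {frozenset({"A", "B"}): 12}, "rival": rival}))
# Two shill identities, one per item: the same decision-maker wins both and pays 0.
print(vcg({"shill1": {frozenset({"A"}): 10},
           "shill2": {frozenset({"B"}): 10},
           "rival": rival}))
```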

Eran Shmaya

Kellogg School of Management

  ,

The Determinacy of Infinite Games with Eventual Perfect Monitoring    [pdf]

Abstract

An infinite two-player zero-sum game with a Borel winning set, in which the opponent's actions are monitored eventually but not necessarily immediately after they are played, admits a value. The proof relies on a representation of the game as a stochastic game with perfect information, in which Nature operates as a delegate for the players and performs the randomizations for them.

John Smith

Rutgers-Camden

  ,

Not So Cheap Talk: A Model of Advice with Communication Costs    [pdf]

(joint work with Jo Hertel and John Smith)

Abstract

We model a game similar to the interaction between an academic advisor and advisee. Like the classic cheap talk setup, an informed player sends information to an uninformed receiver who is to take an action which affects the payoffs of both sender and receiver. However, unlike the classic cheap talk setup, the preferences regarding the receiver's actions are identical for both sender and receiver. Additionally, the sender incurs a communication cost which is increasing in the complexity of the message sent. We characterize the resulting equilibria. We show that if communication is costly then there is no equilibrium in which communication is complete. Under one out-of-equilibrium condition, our equilibrium is analogous to that found in Crawford and Sobel (1982). Under a more restrictive out-of-equilibrium condition, our equilibrium is analogous to that under the No Incentive to Separate (NITS) condition as discussed in Chen, Kartik and Sobel (2008). Finally, we model the competency of the advisee by the probability that the action is selected by mistake. We show that the informativeness of the sender is decreasing in the likelihood of the mistake. Therefore, we expect the informativeness of the relationship to be increasing in the competency of the advisee.

Noah Stein

Massachusetts Institute of Technology

  ,

Games on Manifolds    [pdf]

(joint work with Noah D. Stein, Pablo A. Parrilo, and Asuman Ozdaglar)

Abstract

We consider games played on smooth manifolds with smooth utilities. All strategically relevant information can be expressed in terms of a smooth 1-form, which we call a game form. We characterize which 1-forms correspond to game forms and which of those correspond to exact potential games. We then show that this topological setup leads naturally to generalizations of the notion of Nash equilibrium defined in terms of local deviations. These in turn suggest definitions for generalized classes of ``games'' with non-transitive preferences. These preferences cannot be written in terms of utility functions but can be expressed naturally using 1-forms. We examine the existence and non-existence of equilibria in this more general setting.

Nichalin Suakkaphong

University of Arizona

  ,

Competition and Cooperation in Decentralized Distribution    [pdf]

(joint work with Nichalin Suakkaphong, Moshe Dror)

Abstract

Any decentralized retail or wholesale system of competing entities has a benefit-sharing arrangement when collaborating with regard to demand realizations. We study a distribution system similar to the observed behavior of independent car dealerships. If a dealership does not have in stock the vehicle requested by a customer, it might consider acquiring it from a competing dealer. This raises questions about procurement strategies that achieve a system-optimal (first-best) outcome. We examine such a decentralized distribution system with respect to: (a) Does a unique first-best solution imply unique Nash equilibrium procurement strategies? (b) If some of the participants do not select Nash procurement strategies, what are the implications for benefit sharing? (c) When demand parameters are not common knowledge, the system might not encourage truthful revelation. (d) How are the above results affected if we relax the assumption of satisfying local demand first? We show that profit-sharing rules like the ones found in the literature will result in a stable collaborative outcome that achieves first-best only if (i) individual demand parameters satisfy a number of restrictive conditions, (ii) the complete information assumption holds, and (iii) all parties select Nash equilibrium strategies.

Yong Sui

Shanghai Jiao Tong University

  ,

All-pay Auctions with Private Values and Resale    [pdf]

(joint work with Qiang Gong)

Abstract

This paper studies all-pay auctions with resale opportunities within an independent-private-value framework. Given the existence of a resale market, the primary players will compete more aggressively over an indivisible prize. We characterize a symmetric equilibrium for all-pay auctions with private values and resale and derive a revenue-ranking result for all-pay auctions with and without resale opportunities. Depending on the first-stage winner's ability to extract surplus in the resale stage, the initial seller may or may not benefit from excluding some players from the primary competition thus creating a resale market.

Ching-jen Sun

Deakin University

  ,

Robustness of Intermediate Agreements and Bargaining Solutions    [pdf]

(joint work with Nejat Anbarci)

Abstract

Most real-life bargaining is resolved gradually; two parties reach intermediate agreements without knowing the whole range of possibilities. These intermediate agreements serve as disagreement points in subsequent rounds. Cooperative bargaining solutions ignore this dynamic and therefore can yield accurate predictions only if they are robust to its specification. We identify robustness criteria that four of the best-known bargaining solutions, Nash, Kalai-Smorodinsky, Proportional and Discrete Raiffa, satisfy. We show that "robustness of intermediate agreements" plus well-known and plausible additional axioms provides the first characterization of the Discrete Raiffa solution and novel axiomatizations of the other three solutions. Hence, we provide a unified framework for comparing these solutions' bargaining theories.

Nobue Suzuki

Komazawa University

  ,

Voluntarily Separable Repeated Prisoner's Dilemma with Shared Belief    [pdf]

(joint work with Takako Fujiwara-Greve (Keio University) and Masahiro Okuno-Fujiwara (University of Tokyo))

Abstract

In Fujiwara-Greve and Okuno-Fujiwara (2009), an evolutionary stability concept was defined by allowing mutations of any strategy. However, in human societies, not all strategies are likely to be tried out when a player considers what happens in the future. In this paper we introduce the ``shared belief'' of potential continuation strategies, generated and passed on in a society, and restrict mutations to best responses against the shared belief. We show that a myopic strategy becomes part of a bimorphic equilibrium under a shared belief and contributes to a higher payoff than ordinary neutrally stable distributions.

Yair Tauman

SUNY Stony Brook and Interdisciplinary Center Herzliya (IDC)

  ,

The Decision to Attack a Nuclear Facility: The Role of Intelligence

(joint work with Dov Biran)

Abstract

The paper analyzes the impact of intelligence in a simple model of two rival countries (players). Player 1 wishes to develop a nuclear bomb and Player 2's aim is to frustrate Player 1's intention, even if this requires attacking and destroying his facilities. But before launching an attack, Player 2 wants to be convinced with high probability that her rival is indeed developing the bomb. For this purpose Player 2 operates a spying device or intelligence system (IS) of a certain precision a > 0.5. The preferences of the players over the four possible outcomes are as follows. Player 2's best outcome is that Player 1 does not build a bomb and Player 2 does not attack him. The second-best outcome for Player 2 is that a bomb is built and destroyed; this outcome is better for her than the one where she unjustifiably attacks Player 1. Letting Player 1 build a bomb is the worst outcome for Player 2. As for Player 1, he prefers not to be attacked irrespective of his actions. He most prefers the outcome where he builds a bomb but is not attacked. His worst outcome is when he builds a bomb and it is destroyed. Note that this game is not a zero-sum game.
Several results are quite surprising. Consider first the case where a is commonly known. In equilibrium both players benefit from a higher-precision IS. While it is not surprising why this is so for Player 2, the benefit to Player 1 is less clear, since 1's actions are now better monitored by 2. Moreover, it is shown that both players are better off with the IS than without it, irrespective of its precision. The best equilibrium outcome for both players, as a function of the precision of the IS, is obtained with a perfect IS (a = 1), namely when the IS sends a perfectly accurate signal. In this case Player 1 will not build a bomb and Player 2 will not attack him. This is the first-best outcome for Player 2 and the second-best outcome for Player 1. The implication is that, if necessary, Player 1 is best off subsidizing Player 2's building of an IS that is as accurate as possible, even though this means that Player 2 will be better able to monitor Player 1's actions. In fact, the best equilibrium outcome can easily be implemented: Player 1 should not build a bomb and, in addition, should prove this to Player 2 by opening up his nuclear facility for inspection. Saddam Hussein apparently made the mistake of implementing only the first part of this strategy.
Next, if the IS is sufficiently accurate (a exceeds a certain threshold), Player 2, as expected, will choose not to attack Player 1 if the signal is "nb". But if the signal is "b", Player 2 will still refrain from attacking Player 1 with significant probability, even though the worst case for Player 2 is to allow Player 1 to have a bomb. On the other hand, if the IS is less accurate (a is smaller than that threshold), Player 2 will act aggressively. She will attack Player 1 with probability 1 if the signal is "b" and she will even attack him with positive probability if the signal is "nb". Nevertheless, in this region of a the probability that Player 1 builds a bomb increases with the precision of the IS, even though he is more likely to be detected.
Let us provide some intuition for these results. If the precision of the IS is relatively high, Player 1 knows that if he chooses to build a bomb, Player 2 will detect it with high probability and is likely to attack him. Hence, Player 1 is better off building a bomb with small probability. But then, if the IS sends the signal "b" suggesting that Player 1 is building a bomb, the signal becomes less reliable and Player 2 hesitates to act aggressively. Indeed, if Player 2 obtains the signal "b", she attacks Player 1 with a probability that is bounded away from 1 and that decreases with the precision of the IS. On the other hand, if the IS is less accurate, Player 1 builds a bomb with significant probability, knowing that there is a good chance he will not be detected. In an attempt to avoid the worst-case scenario, Player 2, conditional on the signal "b", attacks Player 1 with no hesitation. Moreover, with positive probability she attacks him even if the signal is "nb" (in this case the probability that Player 1 builds a bomb is also significant).
It is also shown that in equilibrium the unconditional probability that Player 2 attacks Player 1 decreases as the precision of the IS increases. This is in line with the result that Player 1 benefits from a higher-precision IS.
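The reliability argument in the intuition above is essentially Bayesian; the sketch below, with hypothetical numbers rather than the paper's equilibrium values, shows how the credibility of a "b" signal depends jointly on the IS precision and the probability that Player 1 builds.

```python
# Minimal Bayesian illustration (hypothetical numbers, not the paper's
# equilibrium values) of the reliability logic above: how much a "b" signal
# should be believed depends on both the IS precision a and the probability
# that Player 1 builds a bomb.
def posterior_build_given_b(p_build, a):
    """Pr(a bomb is being built | signal 'b') for an IS of precision a."""
    return p_build * a / (p_build * a + (1 - p_build) * (1 - a))

for a in (0.6, 0.8, 0.95):
    for p in (0.05, 0.3):
        post = posterior_build_given_b(p, a)
        print(f"a = {a:.2f}, Pr(build) = {p:.2f} -> Pr(build | 'b') = {post:.2f}")
```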

Tristan Tomala

HEC Paris

  ,

Existence of Belief-free Equilibria in Repeated Games with Incomplete Information and Known-own Payoffs    [pdf]

(joint work with Johannes Hörner and Stefano Lovo)

Abstract

In this work, we first characterize belief-free equilibrium payoffs in infinitely repeated games with incomplete information. We define a set of payoffs that contains all the belief-free equilibrium payoffs; conversely, any point in the interior of this set is a belief-free equilibrium payoff vector when players are sufficiently patient. This generalizes Hörner and Lovo (2009) who consider the two-player case.
Second, we consider repeated games with known-own-payoffs and study the existence of belief-free equilibria. We prove that if two players have finer information than any other player, and if these two players' information structures are comparable, then belief-free equilibria exist. This extends the 2-player result of Shalev (1994) and provides new conditions for existence of equilibria of undiscounted n-player repeated games with incomplete information.
The talk will emphasize the second issue.

Amparo Urbano

University of Valencia

  ,

Pragmatic Languages and Universal Grammars: An Equilibrium Approach    [pdf]

(joint work with Penélope Hernández and José E.Vila)

Abstract

The aim of this paper is to explore the role of a pragmatic Language with a universal grammar as a coordination device under communication misunderstandings. Such a language plays a key role in achieving efficient outcomes in sender-receiver games where there may be noisy information transmission. The Language is pragmatic in the sense that the Receiver's best response depends on the context, i.e., on the payoffs and on the initial probability distribution of the states of nature of Γ. The Language has a universal grammar because the Sender's coding rule does not depend on such specific parameters of Γ and can therefore be applied to any sender-receiver game with noisy communication.
The common-knowledge "corpus", or set of standard prototypes designed by the Sender, together with the Receiver's "pragmatic variations" around the standard prototypes, generates an equilibrium pragmatic Language. Furthermore, such a Language is efficient: in spite of initial misunderstandings, the Receiver is able to infer with high probability the Sender's meaning, and thus expected payoffs are close to those of communication without noise.

Cornelia F.A. Van Wesenbeeck

VU University Amsterdam

  ,

The Primal Auction: a New Design for Multi-commodity Double Auctions    [pdf]

(joint work with M.A. Keyzer)

Abstract

In this paper, we propose an auction design for a multi-commodity double auction in which participants simultaneously submit their valuations (bids) for the commodities. We label this the Primal Auction (PA) mechanism. The auctioneer computes the prevailing market price as the average over the bids and allocates the goods over the bidders in accordance with the relative bid of each bidder compared to this market price. Under the assumption of money-metric utility functions, we show convergence of this process to an efficient equilibrium, but only if truth telling by all participants can be enforced. Commitment of all players to pay the prevailing market price at each round of the auction for the commodities allocated to them provides a strong incentive for truthful revelation, since lying means that the bidder has to pay the market price for a non-optimal quantity. However, to address concerns about shill bidding and bid shielding, we implement a stronger test of truth telling by endowing the auctioneer with the power to inactivate bids that are inconsistent with revealed preference. If bids cannot be refuted under this rule, then the existence of a concave utility function cannot be ruled out, and this is a sufficient condition for convergence of the projected gradient path represented by the auction design. There is no need to actually estimate this utility function: it is sufficient that bids are rationalizable. The PA mechanism can be extended to include a learning phase after which automata can finish the auction, which also makes it a suitable design for Internet auctions such as eBay. Finally, we link the PA mechanism to general equilibrium theory by showing that it is the dual of Walrasian tatonnement procedures, with the important advantage that at each step commodity balances are maintained.
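The following sketch of a single PA round for one commodity illustrates the pricing and allocation step under one plausible reading of the rule described above (quantities proportional to each bid relative to the average-bid price); the paper's exact rule may differ in detail.

```python
# Sketch of a single PA round for one commodity under an assumed reading of the
# rule above: the price is the average bid and quantities are proportional to
# each bid relative to that price.
def pa_round(bids, supply):
    """bids: buyer -> bid; returns (market price, allocation, payments)."""
    price = sum(bids.values()) / len(bids)                      # prevailing market price
    relative = {name: b / price for name, b in bids.items()}    # bid relative to price
    scale = supply / sum(relative.values())                     # keep commodity balance
    alloc = {name: scale * r for name, r in relative.items()}
    payments = {name: price * q for name, q in alloc.items()}   # pay the market price
    return price, alloc, payments

print(pa_round({"buyer1": 12.0, "buyer2": 8.0, "buyer3": 10.0}, supply=60.0))
```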

Vincent Vannetelbosch

CORE

  ,

Connections among Farsighted Agents    [pdf]

(joint work with Gilles Grandjean, Ana Mauleon, Vincent Vannetelbosch)

Abstract

We study the stability and efficiency of social and economic networks when players are farsighted. In particular, we examine whether the networks formed by farsighted players are different from those formed by myopic players. We adopt Herings, Mauleon and Vannetelbosch's (Games and Economic Behavior, forthcoming) notion of pairwise farsightedly stable set. We first investigate in some classical models of social and economic networks whether the pairwise farsightedly stable sets of networks coincide with the set of pairwise (myopically) stable networks and the set of strongly efficient networks. We then provide some primitive conditions on value functions and allocation rules so that the set of strongly efficient networks is the unique pairwise farsightedly stable set. Under the componentwise egalitarian allocation rule, the set of strongly efficient networks and the set of pairwise (myopically) stable networks that are immune to coalitional deviations are the unique pairwise farsightedly stable set if and only if the value function is top convex.

Rodrigo Velez

University of Rochester

  ,

Are Incentives against Justice    [pdf]

Abstract

This paper analyzes incentives for the truthful revelation of preferences in the problem of fairly allocating a set of objects when monetary compensations are possible. An example is the allocation of rooms and rent among housemates. We investigate the manipulability of a family of solutions that are efficient, attain some intuitive form of distributive justice [Rawls J., 1972, A Theory of Justice, Harvard U. Press], and satisfy a strong form of solidarity under budget changes: the Generalized Money Rawlsian Fair (GMRF) correspondences [Alkan A., Demange G., Gale D., Fair allocation of indivisible goods and criteria of justice. Econometrica 59, 1023-1039]. A solution is strategy-proof if no agent can benefit by misrepresenting her preferences. (i) We show that even though no selection from these correspondences is strategy-proof, the Nash and strong Nash equilibrium outcomes of the “preference revelation game form” associated with each correspondence retain the basic objectives of fairness and efficiency. Thus, even though each agent has an incentive to lie if the others truthfully report their preferences, in equilibrium no agent prefers another agent’s allotment to hers according to her true preferences; moreover, in equilibrium, efficiency is preserved according to agents’ true preferences. (ii) As a corollary, we show that GMRF correspondences “naturally implement” the fair and efficient correspondence in both Nash and strong Nash equilibria.

Bernhard Von Stengel

London School of Economics

  ,

Pathways to Equilibria, Pretty Pictures and Diagrams (PPAD)

(joint work with Jack Edmonds)

Abstract

Existence of equilibria in various economic models is closely related to fixed point theorems, for example Brouwer's theorem that a continuous function from a simplex to itself has a fixed point. Approximate fixed points exist by Sperner's lemma, which asserts the existence of a properly colored simplex in certain colored simplicial subdivisions. An algorithmic proof follows a path of simplices. Many such path-following proofs are known, not only for approximate Brouwer fixed points, but also for Nash equilibria of two-player games (Lemke and Howson 1964) or for the core of a balanced N-person game (Scarf 1965).

Our goal is to find the right mathematical abstraction to explain the relationship between these concepts. Computationally, they seem equally difficult. A recent famous result due to Chen and Deng states that finding a Nash equilibrium of a bimatrix game is "PPAD-complete", that is, already as hard as finding an approximate Brouwer fixed point, a seemingly much harder problem. We explain the theorem but not its proof. We study how to capture the directedness of the path (the "D" in "PPAD") by oriented topological manifolds in a suitable abstraction.

Our exposition will use colorful pictures and examples wherever possible.

Uri Weiss

The Center for The Study of Rationality, The Hebrew University

  ,

The Robber Asks to be Punished    [doc]

Abstract

We have a strong intuition that increasing punishment leads to less crime. Let us shift our attention from the punishment for the crime itself to the punishment for the attempt to commit a crime. The more severe the punishment for the attempt to rob, i.e. for the threat "give me the money or…", the more robberies and the more attempts will take place. That is because the punishment for the attempt makes withdrawing from it more expensive for the criminal, making the relative cost of committing the crime lower. Hence, punishing the attempt turns it into a commitment device for the robber, and makes incredible threats credible. Therefore, the robber has a strong interest in increasing the punishment for the attempt.
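The commitment logic can be made concrete with a toy backward-induction example; all payoffs below are hypothetical and are not taken from the paper.

```python
# Toy backward-induction sketch of the argument above; all payoffs are
# hypothetical. The robber threatens; if the victim resists, the robber either
# carries out the robbery (expected punishment cost p_robbery_cost) or
# withdraws (expected punishment cost p_attempt_cost for the attempt/threat).
def outcome(loot, p_robbery_cost, p_attempt_cost):
    # Stage 2: after resistance, rob only if it beats withdrawing and being
    # punished for the mere attempt.
    carry_out = loot - p_robbery_cost > -p_attempt_cost
    # Stage 1: the victim complies exactly when the threat is credible
    # (assuming being robbed is no better for the victim than handing over the money).
    if carry_out:
        return "threat credible: victim hands over the money"
    return "threat incredible: victim resists and the robber withdraws"

print(outcome(loot=100, p_robbery_cost=150, p_attempt_cost=0))   # no attempt penalty
print(outcome(loot=100, p_robbery_cost=150, p_attempt_cost=80))  # harsh attempt penalty
```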

Zhen Xu

Stony Brook University

  ,

The Contrary Effects of Listing Fee

(joint work with Yair Tauman)

Abstract

Online buying and selling have become quite popular in recent years. Different online shops operate their businesses in different ways and charge different fees for selling through their webpages. Online shops seldom charge sellers no fees at all (e.g. Taobao in China). There are usually two kinds of fees. A success fee (or final value fee on eBay) is charged when sellers successfully sell their goods. A listing fee (or insertion fee, also on eBay) is charged when sellers put their objects on the webpage of the online shop. In this paper, I mainly examine the effects of the listing fee.

Duygu Yengin

University of Adelaide

  ,

Appointment Games in Fixed-Route Traveling Salesman Problems and The Shapley Value    [pdf]

Abstract

Starting from her home, a service provider visits several customers, following a predetermined route, and returns home after all customers are visited. The problem is to find a fair allocation of the total cost of this tour among the customers served. A transferable-utility cooperative game can be associated with this cost allocation problem. We introduce a new class of games, which we refer to as fixed-route traveling salesman games with appointments. We study the Shapley value in this class and show that it is in the core. Our first characterization of the Shapley value involves a property which requires that sponsors do not benefit from mergers or from splitting into a set of sponsors. Our second theorem involves a property which requires that the cost shares of two sponsors who get connected are equally affected. We also show that, except for our second theorem, none of our results for appointment games extend to the class of routing games (Potters, Curiel, Tijs, Mathematical Programming, 1992).

Akira Yokotani

University of Rochester

  ,

The Sequential Belief Representation of Harsanyi Type Spaces with Redundancy    [pdf]

Abstract

In the epistemic analysis of games, the existence of redundant types, that is, distinct Harsanyi types that represent the same sequential beliefs over the basic uncertainty, has been an obstacle. To resolve the redundancy of types, we consider sequential beliefs over an augmented uncertainty: not only the basic uncertainty but also a newly added payoff-irrelevant parameter space C. We show that any Harsanyi type space, even if it has redundant types, can be isomorphically embedded into the extended sequential belief space when C = {0,1} or any space of larger cardinality. Based on this result, we also show that there exists a so-called universal type space into which we can isomorphically embed any Harsanyi type space. Finally, as an application, we define the intrinsic correlation of Brandenburger and Friedenberg (2008, JET) in terms of redundancy and show that our result can be applied to obtain the same result as theirs.

Jung You

Rice University

  ,

Envy-free and Incentive Compatible division of a commodity    [pdf]

Abstract

This article proposes a new mechanism for allocating a divisible commodity among a number of buyers. Buyers are assumed to behave as price anticipators rather than as price takers. The proposed mechanism is as parsimonious as possible, in the sense that it asks participants to report a single-dimensional message instead of an entire utility function, as requested by VCG mechanisms. This article shows that the mechanism yields efficient allocations in Nash equilibria and, moreover, that these equilibria are envy-free. Additionally, it shows that this mechanism is the only simple VCG-like mechanism that both implements efficient Nash equilibria and satisfies the No Envy axiom of fairness. Furthermore, the mechanism's Nash equilibria are shown to satisfy the fairness properties of both Ranking and Voluntary Participation.

Peyton Young

University of Oxford

  ,

Gaming Performance Fees by Portfolio Managers: An Application of Game Theory to Finance

Abstract

It is widely believed that performance bonuses contributed to the recent economic crisis by creating incentives for financial managers to take on excessive risk. This paper shows that reforming the incentives is not going to be easy: any incentive scheme that pays managers for high performance can be 'gamed' by charlatans who do not offer above-average returns but nevertheless capture a sizable amount of the fees intended for the managers who do. We estimate the extent to which any reward scheme can be gamed, and show that it is impossible to design a scheme that separates the charlatans from the managers with superior skill.

Shmuel Zamir

Center for the Study of Rationality, The Hebrew University of Jerusalem

  ,

Condorcet Jury Theorem: The Dependent Case    [pdf]

(joint work with Bezalel Peleg and Shmuel Zamir)

Abstract

See abstract in the paper attached.
There are too many mathematical symbols to type online.

Andriy Zapechelnyuk

University of Bonn

  ,

Bargaining Against a Status Quo: the Algebra of Strikes

(joint work with Yair Tauman)

Abstract

A common view in the economic literature on strikes, as a phenomenon of failed negotiations, is that strikes occur only due to asymmetric information between the parties involved. Contrary to this opinion, we show that incomplete information is not a necessary requirement for a strike to occur. We model a wage negotiation process between an employer and an employees' union as a game of complete information. We show that, under plausible assumptions, (i) in every equilibrium, with positive probability an agreement is not reached in the first period, and (ii) not only is the strike a credible threat, but it also occurs in equilibrium with positive probability.

José Manuel Zarzuelo

Basque Country University

  ,

The Bilateral Consistent Prekernel and the Core on NTU Games and Exchange Economies

(joint work with G. Orshan, P. Sudhölter and J. M. Zarzuelo)

Abstract


It is shown that the bilateral consistent prekernel (BCPK), an NTU solution concept that generalizes the Nash bargaining solution by means of a principle of bilateral consistency, intersects the core for the class of balanced games. A second contribution of this paper is the definition of an ordinal solution concept on exchange economies. This solution incorporates the bilateral consistency property and is always in the core of the economy.

Ping Zhang

University of Nottingham

  ,

Collusion in Share Auctions: Mechanism Design and Communication among Bidders    [doc]

(joint work with Martin Sefton)

Abstract

We use laboratory experiments to compare the impact of alternative allocation rules on revenue in uniform price share auctions. In standard uniform price auctions there is a tacit collusion equilibrium that results in arbitrarily large underpricing. Kremer and Nyborg (2004) argue that this can be eliminated by modifying the allocation rule. A uniform allocation rule, where rationing applies to all winning bids (bids placed either above or at the market price), eliminates the tacit collusion equilibrium, though another equilibrium that results in large underpricing can occur when bidders have capacity constraints. They also show that a hybrid allocation rule eliminates both types of unappealing equilibria and drives the market price close to the market value. We examine these three allocation rules in two environments: in one, bidders are allowed to communicate with one another before placing bids; in the other, they cannot. Our laboratory experiments provide little evidence of revenue differences among the three allocation rules. Without communication, market prices appear competitive in all treatments. When bidders are allowed to communicate, collusive outcomes predominate under all three allocation rules. We conclude that the scope for using mechanism design considerations to improve auction revenue is limited by the communication possibilities available to bidders.

Xiaojian Zhao

University of Mannheim

  ,

Strategic Mis-selling and Pre-Contractual Cognition    [pdf]

Abstract

The paper studies asymmetric awareness of the appropriateness of a status quo product between a seller and a buyer, where the latter can invest cognitive resources before contracting à la Tirole (2008). In the one-shot interaction, we show that there is no separating equilibrium in which the seller always truthfully reports the appropriate product. If the extent of mis-selling and the transfer from the seller to the buyer in the case of mis-selling are low, we have a pooling equilibrium where the seller always announces that the status quo product is appropriate. Otherwise, we obtain a semi-separating equilibrium where the seller randomizes between telling the truth and mis-selling if the status quo product is inappropriate. The transaction cost of pre-contractual cognition increases with the extent of mis-selling when the extent of mis-selling is small and decreases thereafter. Finally, reputation with a “tip” mechanism or competition between sellers may yield a separating equilibrium where the transaction cost vanishes.

Charles Zheng

Iowa State University

  ,

A Noncooperative Reformulation of the Core    [pdf]

Abstract

The concept of the core is reformulated to handle externality problems where the total payoff for a coalition depends upon the actions of the players outside the coalition. A deviating coalition has rational expectations of the outsiders' individual attempts to stop the deviation and their coalitional response if the deviation cannot be stopped, with the coalitional response itself a core solution among the outsiders. A noncooperative game of competing principals is designed and the set of its subgame perfect equilibria, with two refinement conditions, is equal to the reformulated core. Applied to a problem of pollution externality, the new concept of the core prescribes a way to maintain Pareto optimal cooperation without repeated games or a central planner.
