NEP: New Economics Papers on Microeconomics
By: | João Thereze (Fuqua School of Business, Duke University); Udayan Vaidya (Fuqua School of Business, Duke University) |
Abstract: | A principal seeks to contract with an agent but must do so through an informed delegate. Although the principal cannot directly mediate the interaction, she can constrain the menus of contracts the delegate may offer. We show that the principal can implement any outcome that is implementable through a direct mechanism satisfying dominant strategy incentive compatibility and ex-post participation for the agent. We apply this result to several settings. First, we show that a government that delegates procurement to a budget-indulgent agency should delegate an interval of screening contracts. Second, we show that a seller can delegate sales to an intermediary without revenue loss, provided she can commit to a return policy. Third, in contrast to centralized mechanism design, we demonstrate that no partnership can be efficiently dissolved in the absence of a mediator. Finally, we discuss when delegated contracting obstructs efficiency, and when choosing the right delegate may help restore it. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.19326 |
By: | Hiroto Sato; Konan Shimizu |
Abstract: | In social learning environments, agents acquire information from both private signals and the observed actions of predecessors, referred to as history. We define the value of history as the gain in expected payoff from accessing both the private signal and history, compared to relying on the signal alone. We first characterize the information structures that maximize this value, showing that it is highest under a mixture of full information and no information. We then apply these insights to a model of markets for history, where a monopolistic data seller collects and sells access to history. In equilibrium, the seller's dynamic price for each agent equals that agent's value of history. This gives the seller an incentive to increase the value of history by designing the information structure. The seller-optimal information structure discloses less information than the socially optimal level. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.11029 |
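A compact statement of the object defined in the abstract above, offered as a sketch only; the notation (signal s, history h, action a, state ω, payoff u) is illustrative and not taken from the paper:

\[
V_{\text{history}} \;=\; \mathbb{E}\!\left[\max_{a}\,\mathbb{E}\big[u(a,\omega)\mid s,h\big]\right] \;-\; \mathbb{E}\!\left[\max_{a}\,\mathbb{E}\big[u(a,\omega)\mid s\big]\right],
\]

i.e., the expected-payoff gain from conditioning the action on both the private signal and history rather than on the signal alone.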
By: | Maxwell Rosenthal |
Abstract: | This paper develops a data-driven approach to Bayesian persuasion. The receiver is privately informed about the prior distribution of the state of the world, the sender knows the receiver's preferences but does not know the distribution of the state variable, and the sender's payoffs depend on the receiver's action but not on the state. Prior to interacting with the receiver, the sender observes the distribution of actions taken by a population of decision makers who share the receiver's preferences in best response to an unobserved distribution of messages generated by an unknown and potentially heterogeneous signal. The sender views any prior that rationalizes this data as plausible and seeks a signal that maximizes her worst-case payoff against the set of all such distributions. We show positively that the two-state many-action problem has a saddle point and negatively that the two-action many-state problem does not. In the former case, we identify adversarial priors and optimal signals. In the latter, we characterize the set of robustly optimal Blackwell experiments. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.03203 |
By: | Georgy Lukyanov; Samuel Safaryan |
Abstract: | We study public persuasion when a sender faces a mass audience that can verify the state at heterogeneous costs. The sender commits ex ante to a public information policy but must satisfy an ex post truthfulness constraint on verifiable content (EPIC). Receivers verify selectively, generating a verifying mass that depends on the public posterior μ. This yields an indirect value v(μ; F) and a concavification problem under implementability. Our main result is a reverse comparative static: when verification becomes cheaper (an FOSD improvement in F), v becomes more concave and the optimal public signal is strictly less informative (Blackwell). Intuitively, greater verifiability means that extreme claims invite scrutiny, so the sender optimally coarsens information - "confusion as strategy." We extend the model to two ex post instruments: falsification (continuous manipulation) and violence (a fixed-cost discrete tool), and characterize threshold substitutions from persuasion to manipulation and repression. The framework speaks to propaganda under improving fact-checking. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.19682 |
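The "concavification problem under implementability" mentioned above has the familiar persuasion form sketched below; the notation (prior μ0, distribution over posteriors τ) is generic, and the implementability restriction is left abstract since the paper's EPIC set is not reproduced here:

\[
\sup_{\tau}\; \mathbb{E}_{\mu\sim\tau}\big[v(\mu;F)\big]
\quad \text{s.t.} \quad \mathbb{E}_{\mu\sim\tau}[\mu]=\mu_{0}
\ \ \text{and}\ \ \tau \ \text{implementable under EPIC}.
\]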
By: | Yaron Azrieli; Ritesh Jain; Semin Kim |
Abstract: | We study the design of voting mechanisms in a binary social choice environment where agents' cardinal valuations are independent but not necessarily identically distributed. The mechanism must be anonymous -- the outcome is invariant to permutations of the reported values. We show that if there are two agents then expected welfare is always maximized by an ordinal majority rule, but with three or more agents there are environments in which cardinal mechanisms that take into account preference intensities outperform any ordinal mechanism. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.08055 |
By: | Zihao Li |
Abstract: | We study a model of dynamic monopoly with differentiated goods that buyers can freely dispose of. The model extends the framework of Coasian bargaining to situations in which the quantity or quality of the good is endogenously determined. Our main result is that when players are patient, the seller can sustain in equilibrium any payoff between the lowest possible buyer valuation and (approximately) the highest payoff achievable with commitment power, thus establishing a folk theorem. We apply our model to data markets, where data brokers sell marketing lists to producers. Methodologically, we leverage the connection between sequential bargaining and static mechanism design. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.13137 |
By: | Christoph Carnehl; Anton Sobolev; Konrad Stahl; André Stenzel |
Abstract: | We study information design in a vertically differentiated market. Two firms offer products of ex-ante unknown qualities. A third party designs a system to publicly disclose information. More precise information guides consumers toward their preferred product but increases expected product differentiation, allowing firms to raise prices. Full disclosure of the product ranking alone suffices to maximize industry profits. Consumer surplus is maximized, however, whenever no information about the product ranking is disclosed, as the benefit of competitive pricing always dominates the loss from suboptimal choices. The provision of public information on product quality becomes questionable. |
Keywords: | Information Design, Vertical Product Differentiation, Quality Rankings, Competition |
JEL: | D43 D82 L13 L15 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2025_700 |
By: | Robin Ng; Greg Taylor |
Abstract: | We study how content moderation facilitates communication on online platforms. A sender transmits information to a receiver, exerting effort to signal their truthfulness. Communication fails without moderation because the effort required is prohibitive. Moderation resolves this problem by making effort a more powerful signal of veracity. However, moderation crowds out sender effort, decreasing content quality on the platform. A socially optimal or profit-maximizing policy may therefore involve limited moderation. We study the choice between being a platform or broadcaster, how moderation influences competition for attention, and the effects of misinformation actors, AI-generated content, and moderator errors on the sustainability of communication. |
Keywords: | user-generated content, content moderation, creator economy, media platforms, misinformation |
JEL: | D83 L82 L86 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2025_698 |
By: | Georgy Lukyanov; Vasilii Ivanik |
Abstract: | We embed a taste for nonconformism into a canonical Bikhchandani-Hirshleifer-Welch social-learning model. Agents value both correctness and choosing the minority action (fixed or proportion-based bonus). We study exogenous signals and endogenous acquisition with a fixed entry cost and convex cost of precision in a Gaussian-quadratic specification. Contrarian motives shift equilibrium cutoffs away from 1/2 and expand the belief region where information is purchased, sustaining informative actions; conditional on investing, chosen precision is lower near central beliefs. Welfare is shaped by a trade-off: mild contrarianism counteracts premature herding, whereas strong contrarianism steers actions against informative social signals and induces low-value experimentation. A tractable characterization delivers closed-form cutoffs, comparative statics, and transparent welfare comparisons. Applications include scientific priority races and academic diffusion, where distinctiveness yields rents yet excessive contrarianism erodes information aggregation. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.21446 |
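As a reading aid for the entry above, a per-period payoff with a minority bonus can be sketched as follows (fixed-bonus version; the symbols a for action, ω for state, b for the bonus, and σ(a) for the population share choosing a are illustrative, not the paper's notation):

\[
u(a,\omega) \;=\; \mathbf{1}\{a=\omega\} \;+\; b\,\mathbf{1}\{\sigma(a)<\tfrac12\}, \qquad b>0,
\]

with the proportion-based variant replacing the second indicator by a bonus increasing in $1-\sigma(a)$.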
By: | Andrea Benso |
Abstract: | We consider a repeated game in which players, viewed as nodes of a network, are connected. Each player observes only her neighbors' moves. Thus, monitoring is private and imperfect. Players can communicate with their neighbors at each stage; each player, for any subset of her neighbors, sends the same message to every player in that subset. Thus, communication is local and can be both public and private. Both the communication and monitoring structures are given by the network. The solution concept is perfect Bayesian equilibrium. We show that, for any number of players, a folk theorem holds if and only if the network is 2-connected. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.10148 |
By: | Hinata Kurashita; Ryosuke Sakai |
Abstract: | We study the problem of allocating homogeneous and indivisible objects among agents with money. In particular, we investigate the relationship between egalitarian-equivalence (Pazner and Schmeidler, 1978), as a fairness concept, and efficiency under agents' incentive constraints. As a first result, we characterize the class of mechanisms that satisfy egalitarian-equivalence, strategy-proofness, individual rationality, and no subsidy. Our characterization reveals a strong tension between egalitarian-equivalence and efficiency: under these properties, the mechanisms allocate objects only in limited cases. To address this limitation, we replace strategy-proofness with the weaker incentive property of non-obvious manipulability (Troyan and Morrill, 2020). We show that this relaxation allows us to design mechanisms that achieve efficiency while still ensuring egalitarian-equivalence. Furthermore, among the mechanisms that achieve efficiency, we identify the agent-optimal mechanism in the characterized class. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.09152 |
By: | Georgy Lukyanov; Anna Vlasova; Maria Ziskelevich |
Abstract: | We study expert advice under reputational incentives, with sell-side equity research as the lead application. A long-lived analyst receives a continuous private signal about a binary payoff and recommends a risky (Buy) or safe action. Recommendations and outcomes are public, and clients' implementation effort depends on current reputation. In a recursive, belief-based equilibrium: (i) advice follows a cutoff in the signal; (ii) under a simple diagnosticity asymmetry, the cutoff is (weakly) increasing in reputation (reputational conservatism); and (iii) comparative statics are transparent - higher signal precision or a higher success prior lowers the cutoff, whereas stronger career concerns raise it. A success-contingent bonus implements any target experimentation rate via a closed-form mapping. The model predicts that high-reputation analysts make fewer risky calls yet attain higher conditional hit rates, and it clarifies how committee thresholds and monitoring regimes shift behavior. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.19707 |
By: | Zhonghong Kuang; Yi Liu; Dong Wei |
Abstract: | We study the optimal design of relational contracts that incentivize an expert to share specialized knowledge with a novice. While the expert fears that a more knowledgeable novice may later erode his future rents, a third-party principal is willing to allocate her resources to facilitate knowledge transfer. In the unique profit-maximizing contract between the principal and the expert, the expert is asked to train the novice as much as possible, for free, in the initial period; knowledge transfers then proceed gradually and perpetually, with the principal always compensating the expert for his future losses immediately upon verifying the training he provided; even in the long run, a complete knowledge transfer might not be attainable. We further extend our analysis to an overlapping-generation model, accounting for the retirement of experts and the career progression of novices. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.11018 |
By: | Yaron Azrieli; Rachana Das |
Abstract: | We study a model of persuasion in which the receiver is a `conservative Bayesian' whose updated belief is a convex combination of the prior and the correct Bayesian posterior. While in the classic Bayesian case providing information sequentially is never valuable, we show that the sender gains from sequential persuasion in many of the environments considered in the literature on strategic information transmission. We also consider the case in which the sender and receiver are both biased and prove that the maximal expected payoff for the sender under sequential persuasion is the same as in the case where neither of them is biased. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.09464 |
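The "conservative Bayesian" updating in the entry above admits a one-line statement; the weight λ below is an illustrative parameter name rather than the paper's notation:

\[
\mu_{\text{updated}}(\cdot\mid s) \;=\; \lambda\,\mu_{\text{Bayes}}(\cdot\mid s) \;+\; (1-\lambda)\,\mu_{0}(\cdot), \qquad \lambda\in[0,1],
\]

where μ0 is the prior; λ = 1 recovers the classical Bayesian receiver and λ = 0 a receiver who never updates.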
By: | Hendrik Rommeswinkel |
Abstract: | Decision makers may face situations in which they cannot observe the consequences that result from their actions. In such decisions, motivations other than the expected utility of consequences may play a role. The present paper axiomatically characterizes a decision model in which the decision maker cares about whether it can be verified ex post that a good consequence has been achieved. Preferences over acts uniquely characterize a set of events that the decision maker expects to be able to verify in case they occur. The decision maker chooses the act that maximizes, in expectation across verifiable events, the utility of the worst possible consequence that may have occurred. For example, a firm choosing between different carbon emission reduction technologies may find that some technologies leave more ex post uncertainty about the level of emission reduction than others. The firm may care about proving to its stakeholders that a certain amount of carbon reduction has been achieved and may employ privately obtained evidence to do so. It may choose technologies that are less efficient in expectation if the achieved carbon reduction is more easily verified using the expected future evidence. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.19585 |
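The representation described in the entry above (expected utility, across verifiable events, of the worst consequence that may have occurred) can be sketched with illustrative notation (act f, state s, utility u, probability p, and a collection ℰ of verifiable events):

\[
V(f) \;=\; \sum_{E\in\mathcal{E}} p(E)\,\min_{s\in E} u\big(f(s)\big).
\]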
By: | Krzysztof Mierzewski |
Abstract: | The stability rule for belief, advocated by Leitgeb [Annals of Pure and Applied Logic 164, 2013], is a rule for rational acceptance that captures categorical belief in terms of $\textit{probabilistically stable propositions}$: propositions to which the agent assigns resiliently high credence. The stability rule generates a class of $\textit{probabilistically stable belief revision}$ operators, which capture the dynamics of belief that result from an agent updating their credences through Bayesian conditioning while complying with the stability rule for their all-or-nothing beliefs. In this paper, we prove a representation theorem that yields a complete characterisation of such probabilistically stable revision operators and provides a `qualitative' selection function semantics for the (non-monotonic) logic of probabilistically stable belief revision. Drawing on the theory of comparative probability orders, this result gives necessary and sufficient conditions for a selection function to be representable as a strongest-stable-set operator on a finite probability space. The resulting logic of probabilistically stable belief revision exhibits strong monotonicity properties while failing the AGM belief revision postulates and satisfying only very weak forms of case reasoning. In showing the main theorem, we prove two results of independent interest to the theory of comparative probability: the first provides necessary and sufficient conditions for the joint representation of a pair of (respectively, strict and non-strict) comparative probability orders. The second result provides a method for axiomatising the logic of ratio comparisons of the form ``event $A$ is at least $k$ times more likely than event $B$''. In addition to these measurement-theoretic applications, we point out two applications of our main result to the theory of simple voting games and to revealed preference theory. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.02495 |
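For readers unfamiliar with Leitgeb's stability rule cited above, the threshold-1/2 notion of a probabilistically stable proposition is usually stated as below (a standard formulation offered for orientation; the paper's exact definitions are not reproduced here):

\[
A \ \text{is } P\text{-stable} \iff P(A\mid E) > \tfrac12 \ \text{ for every event } E \text{ with } E\cap A\neq\emptyset \text{ and } P(E)>0.
\]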
By: | Shashwat Khare; Souvik Roy; Ton Storcken |
Abstract: | This paper studies matching markets where institutions are matched with possibly more than one individual. The matching market contains some couples who view the pair of jobs as complements. First, we show by means of an example that a stable matching may fail to exist even when both couples and institutions have responsive preferences. Next, we provide conditions on couples' preferences that are necessary and sufficient to ensure a stable matching for every preference profile where institutions may have any responsive preference. Finally, we do the same with respect to institutions' preferences, that is, we provide conditions on institutions' preferences that are necessary and sufficient to ensure a stable matching for every preference profile where couples may have any responsive preference. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.07501 |
By: | Qing Hu (Kansai University); Ryo Masuyama (Kushiro Public University of Economics); Tomomichi Mizuno (Kobe University) |
Abstract: | It is well known that common ownership lessens competition, which tends to decrease consumer and total surpluses. This study challenges this well-known result by introducing downstream firms' voluntary investment. We consider a vertical market with one upstream firm and two downstream firms, where the downstream firms engage in voluntary investment that can reduce the upstream firm's marginal cost. We show that common ownership may increase consumer and total surpluses if the upstream marginal cost without investment is sufficiently high and the investment is sufficiently efficient. We also find that our results are robust even in a market with two supply chains. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:koe:wpaper:2520 |
By: | Simon Finster (FAIRPLAY: CREST, ENSAE/ENSAI, Institut Polytechnique de Paris, CNRS, Criteo AI Lab, Inria); Patrick Loiseau (FAIRPLAY); Simon Mauras (FAIRPLAY); Mathieu Molina (FAIRPLAY); Bary Pradelski (CNRS, Maison Française d'Oxford; LIG and POLARIS, Université Grenoble Alpes, Inria) |
Abstract: | We initiate the study of how auction design affects the division of surplus among buyers. We propose a parsimonious measure for equity and apply it to the family of standard auctions for homogeneous goods. Our surplus-equitable mechanism is efficient, Bayesian-Nash incentive compatible, and achieves surplus parity among winners ex-post. The uniform-price auction is equity-optimal if and only if buyers have a pure common value. Against intuition, the pay-as-bid auction is not always preferred in terms of equity if buyers have pure private values. In auctions with price mixing between pay-as-bid and uniform prices, we provide prior-free bounds on the equity-preferred pricing rule under a common regularity condition on signals. |
Keywords: | uniform price, pay-as-bid, mechanism design, equity, auctions, common value |
Date: | 2025–07–07 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05225702 |
By: | Georgy Lukyanov; Anna Vlasova |
Abstract: | We study dynamic delegation with reputation feedback: a long-lived expert advises a sequence of implementers whose effort responds to current reputation, altering outcome informativeness and belief updates. We solve for a recursive, belief-based equilibrium and show that advice is a reputation-dependent cutoff in the expert's signal. A diagnosticity condition - failures at least as informative as successes - implies reputational conservatism: the cutoff (weakly) rises with reputation. Comparative statics are transparent: greater private precision or a higher good-state prior lowers the cutoff, whereas patience (value curvature) raises it. Reputation is a submartingale under competent types and a supermartingale under less competent types; we separate boundary hitting into learning (news generated infinitely often) versus no-news absorption. A success-contingent bonus implements any target experimentation rate with a plug-in calibration in a Gaussian benchmark. The framework yields testable predictions and a measurement map for surgery (operate vs. conservative care). |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.19676 |
By: | Asen Kochov; Yangwei Song |
Abstract: | We study infinitely repeated games in which the players’ rates of time preference may evolve endogenously in the course of the game. Our goal is to strengthen the folk theorem of Kochov and Song (2023) by relaxing the assumption of observable mixtures. To that end, we identify and impose a new sufficient condition on preferences. The condition holds automatically in the standard case of time-separable utilities and a common discount factor, while being generic in ours. |
Keywords: | folk theorem, recursive utility, endogenous discounting, unobserved mixtures |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_12066 |
By: | Georgy Lukyanov; Konstantin Popov; Shubh Lashkery |
Abstract: | We analyze a dynamic labor market in which a worker with career concerns chooses each period between (i) self-employment that makes output publicly observable and (ii) employment at a firm that pays a flat wage but keeps individual performance hidden from outside observers. Output is binary and the worker is risk averse. The worker values future opportunities through a reputation for talent; firms may be benchmark (myopic, ignoring the informational content of an application) or equilibrium (updating beliefs from the very act of applying). Three forces shape equilibrium: an insurance-information trade-off, selection by reputation, and inference from application decisions. We show that (i) an absorbing employment region exists in which low-reputation workers strictly prefer the firm's insurance and optimally cease producing public information; (ii) sufficiently strong reputation triggers self-employment in order to generate public signals and preserve future outside options; and (iii) with equilibrium firms, application choices act as signals that shift hiring thresholds and wages even when in-firm performance remains opaque. Comparative statics deliver sharp, testable predictions for the prevalence of self-employment, the cyclicality of switching, and wage dynamics across markets with different degrees of performance transparency. The framework links classic career-concerns models to contemporary environments in which some tasks generate portable, public histories while firm tasks remain unobserved by the outside market (e.g., open-source contributions, freelancing platforms, or sales roles with standardized public metrics). Our results rationalize recent empirical findings on the value of public performance records and illuminate when opacity inside firms dampens or amplifies reputational incentives. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.01265 |
By: | Guillermo Alonso Alvarez; Erhan Bayraktar; Ibrahim Ekren |
Abstract: | We study a principal-agent model involving a large population of heterogeneously interacting agents. By extending existing methods, we find the optimal contracts assuming a continuum of agents, and show that, when the number of agents is sufficiently large, the optimal contracts for the problem with a continuum of agents are near-optimal for the finite-agent problem. We derive comparative statics and provide numerical simulations to analyze how the agents' connectivity affects the principal's value, the effort of the agents, and the optimal contracts. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.09415 |
By: | Jonathan Bendor; Lukas Bolte; Nicole Immorlica; Matthew O. Jackson |
Abstract: | It is socially beneficial for teams to cooperate in some situations (``good games'') and not in others (``bad games;'' e.g., those that allow for corruption). A team's cooperation in any given game depends on expectations of cooperation in future iterations of both good and bad games. We identify when sustaining cooperation on good games necessitates cooperation on bad games. We then characterize how a designer should optimally assign workers to teams and teams to tasks that involve varying arrival rates of good and bad games. Our results show how organizational design can be used to promote cooperation while minimizing corruption. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.03030 |
By: | Volker Nocke; Nicolas Schutz |
Abstract: | We adopt a potential games approach to study multiproduct-firm pricing games. We introduce the new concept of transformed potential and characterize the classes of demand systems that give rise to pricing games admitting such a potential. The resulting demand systems may contain nests (of closer substitutes) or baskets (of products that are purchased jointly), or combinations thereof. These demand systems allow for flexible substitution patterns, and can feature product complementarities arising from joint purchases and substitution away from the outside option. Combining the potential games approach with a competition-in-utility approach, we derive powerful results on the existence of pure-strategy Nash equilibria. |
Keywords: | Multiproduct firms, potential game, oligopoly pricing, complementary goods, joint purchases, nests |
JEL: | L13 D43 |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2025_644_v2 |
By: | Daniel Rehsmann; Béatrice Roussillon; Paul Schweinzer |
Abstract: | We model competition on a credence goods market governed by an imperfect label, signaling high quality, as a rank-order tournament between firms. In this market interaction, asymmetric firms jointly and competitively control the aggregate precision of a label ranking the competitors' qualities by releasing individual information. While the labels and the aggregated information they are based on can be seen as a public good guiding the consumers' purchasing decisions, individual firms have incentives to strategically amplify or counteract the competitors' information emission, thereby manipulating the aggregate precision of product labeling, i.e., the underlying ranking's discriminatory power. Elements of the introduced theory are applicable to several (credence-good) industries that employ labels or rankings, including academic departments, ``green'' certification, movies, and investment opportunities. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.19837 |
By: | Abinash Panda; Anup Pramanik; Ragini Saxena |
Abstract: | We investigate preference domains where every unanimous and locally strategy-proof social choice function (scf) is dictatorial. We identify a condition on domains, called "connected with two distinct neighbours", which is necessary for every unanimous and locally strategy-proof scf to be dictatorial. Further, we show that this condition is sufficient within the class of domains where every unanimous and locally strategy-proof scf is tops-only. While a complete characterization remains open, we make significant progress by showing that on domains that are connected with two distinct neighbours, unanimity and strategy-proofness (a stronger requirement) guarantee dictatorship. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.00913 |
By: | Florian Brandl; Andrew Mackenzie |
Abstract: | A perfectly divisible cake is to be divided among a group of agents. Each agent is entitled to a share between zero and one, and these entitlements are compatible in that they sum to one. The mediator does not know the preferences of the agents, but can query the agents to make cuts and appraise slices in order to learn. We prove that if one of the entitlements is irrational, then the mediator must use a protocol that involves an arbitrarily large number of queries in order to construct an allocation that respects the entitlements regardless of preferences. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.09004 |
By: | Teddy Mekonnen |
Abstract: | This paper examines how a profit-maximizing monopolist competes against a free but capacity-constrained public option. The monopolist strategically restricts its supply beyond standard monopoly levels, thereby intensifying congestion at the public option and increasing consumers' willingness-to-pay for guaranteed access. Expanding the capacity of the public option always reduces producer welfare and, counterintuitively, may also reduce consumer welfare. In contrast, introducing a monopolist to a market served only by a capacity-constrained public option unambiguously improves consumer welfare. These findings have implications for mixed public-private markets, such as housing, education, and healthcare. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.12779 |
By: | Jorge Arenas M. |
Abstract: | In this paper, I analyze pricing problems of a monopolistic platform that offers matching services to agents with heterogeneous preferences in multi-sided markets. First, I extend the Marx and Schummer (2021) model to multi-sided markets and show that its main result holds: the allocation of the price level across the sides of the market is not affected by the size imbalance across these sides. I then use preference simulations to address the price level problem in two-sided markets. I find that the optimal price level depends positively on: (i) the size of the market when it is balanced; and (ii) the imbalance of the market when it is unbalanced. The simulations also yield important implications for the relationship between the percentage of unmatched agents and market size and imbalance. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:chb:bcchwp:1033 |
By: | Federico Echenique; Sumit Goel; SangMok Lee |
Abstract: | We study fairness in the allocation of discrete goods. Exactly fair (envy-free) allocations are impossible, so we discuss notions of approximate fairness. In particular, we focus on allocations in which the swap of two items serves to eliminate any envy, either for the allocated bundles or with respect to a reference bundle. We propose an algorithm that, under some restrictions on agents' preferences, achieves an allocation with ``swap bounded envy.'' |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.09290 |
By: | Joshua S. Gans |
Abstract: | When should travellers leave for the airport? This paper develops a model for optimal airport arrival timing when travellers face uncertain travel times and can potentially board earlier flights. We show that access to earlier flights creates a ``recourse option" that fundamentally changes optimal behaviour. While earlier flights always reduce the probability of missing one's scheduled departure, they may paradoxically increase expected waiting time when travellers adjust their arrival strategies. Using renewal theory, we establish that with frequent service, the expected waiting time converges to half the headway between flights—a fundamental limit that cannot be improved through better planning. We connect the problem to newsvendor theory, showing how the fixed penalty for missing flights (rather than linear costs) leads to distinct optimality conditions. Our results explain why rational travellers should occasionally miss flights and provide practical guidelines for airlines designing standby policies and for travellers making departure decisions. |
JEL: | C44 D81 L93 R41 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34169 |
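The half-headway limit quoted in the abstract above is easy to check numerically. The snippet below is an illustrative simulation, not the paper's model: travel time is noisy, flights depart every `headway` minutes, and the traveller boards the first departure after arriving; all parameter values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_wait(headway, travel_sd, n=200_000):
    """Average wait (minutes) until the next departure when flights leave
    every `headway` minutes and travel time has normal noise with sd `travel_sd`."""
    planned_arrival = 60.0                    # illustrative target arrival time (min)
    arrival = planned_arrival + rng.normal(0.0, travel_sd, n)
    wait = (-arrival) % headway               # time until the next multiple of `headway`
    return wait.mean()

for headway in (5, 10, 20):
    print(headway, round(mean_wait(headway, travel_sd=15.0), 2), headway / 2)
```

With frequent service (headway small relative to the travel-time noise), the simulated mean wait is close to headway/2, matching the renewal-theory limit cited in the abstract.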
By: | Mikhail Drugov; Dmitry Ryvkin |
Abstract: | We characterize robust tournament design -- the prize scheme that maximizes the lowest effort in a rank-order tournament where the distribution of noise is unknown, except for an upper bound, $\bar{H}$, on its Shannon entropy. The robust tournament scheme awards positive prizes to all ranks except the last, with a distinct top prize. Asymptotically, the prizes follow the harmonic number sequence and induce an exponential distribution of noise with rate parameter $e^{-\bar{H}}$. The robust prize scheme is highly unequal, especially in small tournaments, but becomes more equitable as the number of participants grows, with the Gini coefficient approaching $1/2$. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.16348 |
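A quick numerical reading of the asymptotics quoted above: if one interprets "prizes follow the harmonic number sequence" as the rank-k prize being proportional to H_n - H_k (an interpretation assumed here purely for illustration; the paper's exact finite-n schedule, with its distinct top prize, is not reproduced), the Gini coefficient of the prize vector can be computed and is close to 1/2 in large tournaments.

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted-rank formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

def harmonic_prizes(n):
    """Illustrative schedule: prize for rank k proportional to H_n - H_k (last rank gets 0)."""
    H = np.cumsum(1.0 / np.arange(1, n + 1))
    return H[-1] - H

for n in (100, 1_000, 10_000, 100_000):
    print(n, round(gini(harmonic_prizes(n)), 3))
```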
By: | Ran Spiegler |
Abstract: | Can players sustain long-run trust when their equilibrium beliefs are shaped by machine-learning methods that penalize complexity? I study a game in which an infinite sequence of agents with one-period recall decides whether to place trust in their immediate successor. The cost of trusting is state-dependent. Each player's best response is based on a belief about others' behavior, which is a coarse fit of the true population strategy with respect to a partition of relevant contingencies. In equilibrium, this partition minimizes the sum of the mean squared prediction error and a complexity penalty proportional to its size. Relative to symmetric mixed-strategy Nash equilibrium, this solution concept significantly narrows the scope for trust. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.10363 |
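The belief-formation step described above (choose the partition of contingencies that minimizes mean squared prediction error plus a penalty proportional to the partition's size) can be sketched generically. The code below is an illustrative model-selection routine over partitions of a small contingency set; the data, penalty weight, and variable names are made up for the example and are not the paper's game.

```python
import numpy as np

def partitions(items):
    """Enumerate all set partitions of a small list (fine for up to ~8 items)."""
    items = list(items)
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for sub in partitions(rest):
        yield [[first]] + sub                                   # `first` in its own cell
        for i in range(len(sub)):                               # or merged into an existing cell
            yield sub[:i] + [[first] + sub[i]] + sub[i + 1:]

def penalized_loss(cells, outcomes, counts, lam):
    """Mean squared error of the coarse fit (cell-average prediction) plus lam * |partition|."""
    total = sum(counts.values())
    mse = 0.0
    for cell in cells:
        w = np.array([counts[c] for c in cell], dtype=float)
        y = np.array([outcomes[c] for c in cell], dtype=float)
        yhat = np.average(y, weights=w)          # coarse prediction on this cell
        mse += np.sum(w * (y - yhat) ** 2) / total
    return mse + lam * len(cells)

# Illustrative data: four contingencies, their frequencies, and the behaviour to be predicted.
contingencies = ["c1", "c2", "c3", "c4"]
counts = {"c1": 40, "c2": 30, "c3": 20, "c4": 10}
outcomes = {"c1": 0.9, "c2": 0.8, "c3": 0.2, "c4": 0.1}

lam = 0.05  # complexity penalty per cell
best = min(partitions(contingencies), key=lambda p: penalized_loss(p, outcomes, counts, lam))
print(best)  # groups similar contingencies, e.g. [['c1', 'c2'], ['c3', 'c4']]
```

Raising lam pushes the chosen partition toward coarser cells, illustrating how a complexity penalty coarsens the fitted belief.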
By: | Bontems, Philippe; Hamilton, Stephen F.; Lepore, Jason |
Abstract: | Multisided platforms have emerged as an increasingly important market structure with the rise of the digital economy. In this paper, we consider sequential price-setting behavior by platforms and demonstrate that sequential pricing outcomes Pareto dominate simultaneous pricing outcomes in terms of firm and industry profits. We compare policy implications and find that prices are more balanced across the platform and that average prices are higher under sequential pricing than under simultaneous pricing. We also demonstrate that pricing power can be considered independently on each side of the market under multihoming behavior. |
Keywords: | Network Effects; Two-Sided Markets; Platform Competition |
JEL: | D43 L13 L40 L86 |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:tse:wpaper:130858 |
By: | William A. Brock; Bo Chen; Steven N. Durlauf; Shlomo Weber |
Abstract: | This paper analyzes patterns of majority language acquisition in an economy consisting of a majority group and multiple minority groups. We consider contexts in which individuals choose among three options: full learning, partial learning, or no learning of the majority language. The key innovation in our approach is the introduction of a conformity factor in language acquisition, where peer pressure and community status may outweigh communicative and economic incentives for some individuals. Notably, we identify a non-monotonic relationship between the level of conformity and the distribution of full learners, partial learners, and non-learners in equilibrium. This finding is significant for policy considerations, as small adjustments in language acquisition costs may unpredictably influence language acquisition patterns across minority groups. |
JEL: | C72 D61 J15 Z13 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34138 |
By: | José Ignacio Cuesta; Pietro Tebaldi |
Abstract: | A common approach to markets with adverse selection is to regulate competition to minimize inefficiencies, while preserving consumer choice among firms. We study the role of procurement auctions—leading to sole provision by the winning firm—as an alternative market design. Relative to regulated competition, auctions affect product variety, quality, markups, and remove cream-skimming incentives. We develop a framework to study this comparison and apply it to individual health insurance in the US. We find that procurement auctions would increase consumer welfare in most markets, mainly by limiting inefficiencies from adverse selection and market power, and by increasing quality. |
JEL: | H42 I13 L13 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34141 |
By: | Kawaguchi, Kohei; Qiu, Jeff; Yi, Zhang |
Abstract: | This paper analyzes how retailer competition affects the welfare implications of resale price maintenance (RPM) under demand uncertainty. We extend the classic model of Deneckere et al. (1997) by introducing imperfect competition among retailers, which creates tension between double marginalization and business-stealing effects. Our analysis reveals four distinct regimes determined by demand uncertainty and market concentration. In highly uncertain, competitive markets, minimum RPM enhances efficiency by encouraging inventory holding. However, in markets with lower uncertainty or more concentrated retail sectors, maximum RPM better promotes competition by mitigating double marginalization. The effectiveness of each RPM type depends on whether retailers optimize for all demand states or focus primarily on high-demand scenarios. These findings suggest that antitrust authorities should evaluate RPM cases by considering both the level of demand uncertainty and the degree of retail competition, as different market conditions may call for different forms of vertical price restrictions. For managers, our results provide actionable guidance on selecting the appropriate RPM strategy based on market structure and demand predictability. |
Date: | 2025–09–01 |
URL: | https://d.repec.org/n?u=RePEc:osf:socarx:7tcha_v1 |
By: | Hendrik Rommeswinkel |
Abstract: | The paper characterizes the Shannon (1948) and Tsallis (1988) entropies in a standard framework of decision theory, mixture sets. Procedural mixture sets are introduced as a variant of mixture sets in which a mixture of two identical elements need not yield the same element. This allows the process of mixing itself to have an intrinsic value. The paper proves the surprising result that simply imposing the standard von Neumann-Morgenstern axioms on preferences over a procedural mixture set yields the entropy as a representation of procedural value. An application of the theorem to decision processes and the relation between choice probabilities and decision times elucidates the difficulty of extending the drift-diffusion model to multi-alternative choice. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.07588 |
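For reference, the two entropy families named in the entry above are, in their textbook forms (standard definitions, not notation from the paper):

\[
H(p) \;=\; -\sum_i p_i \log p_i \quad \text{(Shannon, 1948)},
\qquad
S_q(p) \;=\; \frac{1-\sum_i p_i^{\,q}}{q-1} \quad \text{(Tsallis, 1988)},
\]

with $S_q \to H$ as $q \to 1$.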
By: | Joshua S. Gans |
Abstract: | This paper develops a unified model of the cognitive division of labour in a knowledge economy. Building on recent frameworks for knowledge creation and decision making under uncertainty, it distinguishes between specialists, who engage in costly “on the spot” reasoning to generate tacit knowledge around a focal point, and generalists, who search for and interpolate existing knowledge but deliver answers subject to error. The model characterises how these two types of workers should be allocated across a continuum of questions, given the location of codified knowledge points and the distribution of problems. It shows that optimal task assignment depends on the cognitive process through which information is processed rather than on skill endowments or task complexity. When specialists operate around both knowledge points, their allocation is shaped by their absolute advantage over generalists, leading to non‐contiguous specialist domains interspersed with generalist regions. When specialists cluster around a single point, a natural boundary emerges between specialist and generalist domains that shifts but persists despite changes in question distribution. Extending the analysis to a two‐period setting, the paper identifies when specialists should sacrifice static efficiency to codify their tacit discoveries, creating bridges that allow generalists to operate more effectively in the future. These results provide a formal microfoundation for Babbage’s insights into the division of cognitive labour and offer predictions about how knowledge work responds to changes in the knowledge environment, the distribution of questions, and the patience of capital. |
JEL: | D83 J24 L23 L25 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34145 |