87 results for Adverse selection, contract theory, experiment, principal-agent problem
at Université de Lausanne, Switzerland
Abstract:
This paper aims to provide empirical support for the use of the principal-agent framework in the analysis of the public sector and public policies. After reviewing the conditions to be met for a relevant analysis of the relationship between population and government using principal-agent theory, the paper focuses on the assumption of conflicting goals between the principal and the agent. A principal-agent analysis assumes that inefficiencies may arise because the principal and the agent pursue different goals. Using data collected during an amalgamation project involving two Swiss municipalities, we show the existence of a gap between the goals of the population and those of the government. Consequently, inefficiencies as predicted by the principal-agent model may arise during the implementation of a public policy, in this case an amalgamation project. In a context of direct democracy, where policies are regularly subjected to referendum, the conflict of objectives may even lead to an outright failure of the policy at the polls.
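In principal-agent notation (ours, not the paper's), the conflicting-goals assumption can be stated compactly: if the population (principal) ranks policy options a by V(a) and the government (agent) ranks them by U(a), inefficiency arises whenever

$$ a^{*}_{\text{population}} = \arg\max_a V(a) \;\ne\; \arg\max_a U(a) = a^{*}_{\text{government}}, $$

since the implemented policy then differs from the one the principal would choose, and under direct democracy the principal can reject it at the polls.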
Abstract:
In this article, we analyze the rationale for introducing outlier payments into a prospective payment system for hospitals under adverse selection and moral hazard. The payer has only two instruments: a fixed price for patients whose treatment cost is below a threshold and a cost-sharing rule for outlier patients. We show that a fixed-price policy is optimal when the hospital is sufficiently benevolent. When the hospital is weakly benevolent, a mixed policy solving a trade-off between rent extraction, efficiency, and dumping deterrence is preferable. We show how the optimal combination of fixed price and partially cost-based payment depends on the degree of benevolence of the hospital, the social cost of public funds, and the distribution of patient severity. [Authors]
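For orientation, the two instruments can be written as a single transfer schedule (notation ours, not the article's): with fixed price p, outlier threshold c̄, and cost-sharing rate α, a hospital treating a patient at realized cost c receives

$$ T(c) = \begin{cases} p & \text{if } c \le \bar{c}, \\ p + \alpha\,(c - \bar{c}) & \text{if } c > \bar{c}, \end{cases} \qquad \alpha \in [0,1]. $$

The pure fixed-price policy is the special case α = 0; the mixed policy chooses α > 0 to trade off rent extraction, efficiency, and dumping deterrence.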
Abstract:
The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term denotes, among other things, the cognition episode of the hearer, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers submit the meanings of the component items of a sentence, and their relationships, to a thorough examination, and represent the content of the resulting cognition as a paraphrase centred on one meaning element, taken as the principal qualificand (mukhyaviśeṣya), which is qualified by the other meaning elements. This analysis has been the object of continuous debate, over a period of more than a thousand years, between the philosophers of the schools of Mīmāṃsā, Nyāya (mainly in its Navya form) and Vyākaraṇa. While these philosophers are in complete agreement that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by the other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of the theory. The Mīmāṃsakas argue that the principal qualificand is what they call bhāvanā ('bringing into being', 'efficient force' or 'productive operation'), expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaraṇas take it to be the operation expressed by the verbal root. All the participants rely on the Pāṇinian grammar, insofar as the Mīmāṃsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but they use different interpretive strategies to justify their views, which are often in overt contradiction with the interpretation of the Pāṇinian rules accepted by the Vaiyākaraṇas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate, and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts of each school, present very complex versions of the theory, yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, in spite of extensive argumentation involving a great deal of linguistic and logical technicalities. Historically, the theory arises first in Mīmāṃsā (with Śabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a by-product of the debate over Vedic authority. The Navya-Vaiyākaraṇas enter the debate last (with Bhaṭṭoji Dīkṣita and Kauṇḍa Bhaṭṭa), with the declared aim of refuting the arguments of the Mīmāṃsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāṇinian grammar.
The central argument focuses on the capacity of these initial contexts, together with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justifications of the classical and late stages of this debate. Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but also makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory but also (though this is not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying those arguments.
Abstract:
Theory predicts that if most mutations are deleterious to both overall fitness and condition-dependent traits affecting mating success, sexual selection will purge mutation load and increase nonsexual fitness. We explored this possibility with populations of mutagenized Drosophila melanogaster exhibiting elevated levels of deleterious variation and evolving in the presence or absence of male-male competition and female choice. After 60 generations of experimental evolution, monogamous populations exhibited higher total reproductive output than polygamous populations. Parental environment also affected fitness measures: flies that evolved in the presence of sexual conflict showed reduced nonsexual fitness when their parents experienced a polygamous environment, indicating trans-generational effects of male harassment and highlighting the importance of a common-garden design. This cost of parental promiscuity was nearly absent in monogamous lines, providing evidence for the evolution of reduced sexual antagonism. There was no overall difference in egg-to-adult viability between selection regimes. If mutation load was reduced by the action of sexual selection in this experiment, the resultant gain in fitness was not sufficient to overcome the costs of sexual antagonism.
Abstract:
In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of the learning rules producing behavior. Owing to the intrinsically stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and to formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, in which the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory to derive differential equations governing action play probabilities, which turn out to have the qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions under which the deterministic approximation is closest to the original stochastic learning process for standard 2-action, 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory to the study of animal learning.
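For concreteness, experience-weighted attraction (EWA) learning has a standard parametric form (due to Camerer and Ho); the following is a minimal sketch of one update step and the logit choice rule, with illustrative parameter values (the paper's exact specification may differ).

import numpy as np

def ewa_step(A, N, chosen, payoffs, phi=0.9, rho=0.9, delta=0.5):
    """One experience-weighted attraction (EWA) update.

    A       -- attraction vector, one entry per action
    N       -- scalar experience weight
    chosen  -- index of the action actually played this round
    payoffs -- payoff each action would have earned against the
               opponent's realized action (foregone payoffs included)

    delta = 0 recovers cumulative reinforcement learning; delta = 1
    with rho = phi approximates belief-based (fictitious-play) learning.
    """
    N_new = rho * N + 1.0
    weights = delta + (1.0 - delta) * (np.arange(len(A)) == chosen)
    A_new = (phi * N * A + weights * payoffs) / N_new
    return A_new, N_new

def choice_probs(A, lam=2.0):
    """Logit (softmax) rule mapping attractions to play probabilities."""
    z = np.exp(lam * (A - A.max()))  # shift by max for numerical stability
    return z / z.sum()

# Illustrative round of a 2-action game:
A, N = np.zeros(2), 1.0
A, N = ewa_step(A, N, chosen=0, payoffs=np.array([1.0, 0.3]))
print(choice_probs(A))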
Abstract:
This research examines the impacts of the Swiss reform of the allocation of tasks, which was accepted in 2004 and implemented in 2008 to "re-assign" responsibilities between the federal government and the cantons. The public tasks were redistributed according to the leading and fundamental principle of subsidiarity. Seven tasks came under exclusive federal responsibility; ten came under the control of the cantons; and twenty-two "common tasks" were allocated to both the Confederation and the cantons. For these common tasks it was not possible to separate the management and the implementation. To deal with nineteen of them, the reform introduced the conventions-programs (CPs), which are public-law contracts signed by the Confederation with each canton. These CPs are generally valid for periods of four years (2008-11, 2012-15 and 2016-19, respectively); the third period is currently being prepared. Using principal-agent theory, I examine how contracts can improve political relations between a principal (the Confederation) and an agent (a canton). I also provide a first qualitative analysis of the impacts of these contracts on vertical cooperation and on the involvement of the different actors, focusing on five CPs - protection of cultural heritage and conservation of historic monuments, encouragement of the integration of foreigners, economic development, protection against noise, and protection of nature and landscape - applied in five cantons, which represents twenty-five case studies.
Abstract:
Summary: Throughout my thesis, I elaborate on how real and financing frictions affect corporate decision-making under uncertainty, and I explore how firms time their investment and financing decisions given such frictions. While the macroeconomics literature has focused on the impact of real frictions on investment decisions, assuming all-equity-financed firms, the financial economics literature has mainly focused on the study of financing frictions. My thesis therefore assesses the joint interaction of real and financing frictions in firms' dynamic investment and financing decisions. My work provides a rationale for the documented poor empirical performance of neoclassical investment models, based on the joint effect of real and financing frictions on investment. A major observation lies in how the infrequency of corporate decisions may affect standard empirical tests. My thesis suggests that the book-to-market sorts commonly used in the empirical asset pricing literature have economic content, as they control for the lumpiness in firms' optimal investment policies. My work also elaborates on the effects of asymmetric information and strategic interaction on firms' investment and financing decisions. I study how firms time their decision to raise public equity when outside investors lack information about their future investment prospects. I derive a real-options model that predicts either cold or hot markets for new stock issues, conditional on adverse selection, and I provide a rational approach to studying jointly the market timing of corporate decisions and announcement effects in stock returns. My doctoral dissertation therefore contributes to our understanding of how real and financing frictions may bias standard empirical tests, elaborates on how adverse selection may induce hot and cold markets for new issues, and suggests how the underlying economic behaviour of firms may induce alternative patterns in stock prices.
Abstract:
This article investigates the allocation of demand risk within an incomplete contract framework. We consider an incomplete contractual relationship between a public authority and a private provider (i.e. a public-private partnership), in which the latter invests in non-verifiable cost-reducing efforts and the former invests in non-verifiable adaptation efforts to respond to changing consumer demand over time. We show that the party that bears the demand risk has fewer hold-up opportunities and that this leads the other contracting party to make more effort. Thus, in our model, bearing less risk can lead to more effort, which we describe as a new example of 'counter-incentives'. We further show that when the benefits of adaptation are important, it is socially preferable to design a contract in which the demand risk remains with the private provider, whereas when the benefits of cost-reducing efforts are important, it is socially preferable to place the demand risk on the public authority. We then apply these results to explain two well-known case studies.
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation conducted to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to only a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were inclined to conclude that the algorithm we propose yields a portfolio return distribution that second-order stochastically dominates those obtained from virtually all of the individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results concerning the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
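The second-order stochastic dominance check just described can be sketched as follows (illustrative code, not the thesis' implementation): the empirical absolute Lorenz curve is the running average of sorted returns, which is proportional to the expected shortfall across quantile levels, and series X dominates series Y at second order when its curve lies pointwise above Y's.

import numpy as np
from scipy.stats import ks_2samp

def absolute_lorenz(returns):
    """Empirical absolute Lorenz curve at quantiles k/n: cumulative
    sums of sorted returns divided by the sample size (proportional
    to expected shortfall across quantile levels)."""
    x = np.sort(np.asarray(returns, dtype=float))
    return np.cumsum(x) / len(x)

def ssd_dominates(x, y, tol=1e-12):
    """True if x second-order stochastically dominates y; the pointwise
    comparison assumes equal sample sizes (a common quantile grid)."""
    return bool(np.all(absolute_lorenz(x) >= absolute_lorenz(y) - tol))

# Illustrative use on simulated monthly return series:
rng = np.random.default_rng(0)
aggregated = rng.normal(0.007, 0.04, 1200)  # stand-in: aggregated-measure rule
single = rng.normal(0.004, 0.05, 1200)      # stand-in: single-measure rule
print(ks_2samp(aggregated, single).pvalue)  # are the distributions different?
print(ssd_dominates(aggregated, single))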
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. Indeed, a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far. In the case of imperfect information, by contrast, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. First, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in more detail as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Second, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of a normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of a descriptive character. This rather recent discipline analyzes the relation between the knowledge, beliefs and choices of game-playing agents in an epistemic framework. The description of the players' choices in a given game, relative to various epistemic assumptions, constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account of dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness.
Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator "limit knowledge" is defined and some of its implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
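As an illustration of the backward-induction solution concept discussed above, here is a minimal sketch on a finite perfect-information game tree; the node encoding and payoffs are hypothetical, not taken from the thesis.

def backward_induction(node):
    """Solve a finite perfect-information game tree by backward induction.
    A node is either {'payoffs': (u0, u1, ...)} (terminal) or
    {'player': i, 'children': {action_label: subtree}}.
    Returns (payoff_profile, backward_induction_path)."""
    if "payoffs" in node:
        return node["payoffs"], []
    best = None
    for action, child in node["children"].items():
        payoffs, path = backward_induction(child)
        if best is None or payoffs[node["player"]] > best[0][node["player"]]:
            best = (payoffs, [action] + path)
    return best

# Player 0 moves first, then player 1 responds (hypothetical payoffs):
game = {"player": 0, "children": {
    "L": {"player": 1, "children": {"l": {"payoffs": (2, 1)},
                                    "r": {"payoffs": (0, 0)}}},
    "R": {"player": 1, "children": {"l": {"payoffs": (1, 2)},
                                    "r": {"payoffs": (3, 0)}}},
}}
print(backward_induction(game))  # -> ((2, 1), ['L', 'l'])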