981 results for Asymmetric Information


Relevance: 60.00%

Publisher:

Abstract:

This article analyzes the determinants of the primary yield of mortgage-backed securities (MBS) issued in Spain over the period 1993-2011, during which the Spanish market became the largest in continental Europe. The results, obtained from the full population of MBS issued (262 tranches structured over 94 securitization funds), indicate that the multi-tranche structuring of MBS helped reduce the overall perceived risk of MBS issues, by generating more complete markets and by mitigating the problems arising from the information asymmetries implicit in the originator's selection of the transferred assets. This reduction in perceived risk had a direct effect on the yield offered by the securitization bonds issued. Moreover, no evidence is found that MBS issuance pursued an effective transfer of risk; rather the contrary. Credit institutions generally retained the first-loss tranches, which helped keep the yield offered by MBS at very low levels (below the yield on sovereign debt). Indeed, the narrow spread offered by the securitization bonds is precisely due to the fact that the retained tranches did not offer risk-adjusted yield premiums.

Relevance: 60.00%

Publisher:

Abstract:

This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.

In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.

In chapter three, I examine Community-Driven Development (CDD), a tool intended to empower communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player, the targeted community member, decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies that claim otherwise.
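The backward-induction logic of a successive-split game of this kind can be sketched in a few lines. The whistle-blowing payoff, and the assumption that whistle-blowing destroys all transfers, are illustrative simplifications of my own, not the paper's exact specification:

```python
# Subgame-perfect outcome of a stylized three-player "aid ultimatum" game.
# Hypothetical parametrization: whistle-blowing destroys all transfers and
# yields the community member a payoff of `whistle_payoff`.

def solve(budget=1.0, whistle_payoff=0.1):
    # Stage 3: player 3 (community member) accepts any share >= whistle_payoff,
    # since blowing the whistle destroys the transfers and yields whistle_payoff.
    s3 = whistle_payoff
    # Stage 2: player 2 (local official) passes on only the minimum that
    # avoids the whistle, keeping the remainder of whatever player 1 sends.
    s2 = 0.0
    # Stage 1: player 1 (elite) sends just enough to cover player 3's offer.
    s1 = budget - s3 - s2
    return s1, s2, s3

shares = solve()
print(shares)  # some aid reaches the targeted recipient despite elite capture
```

Even under full elite capture, the threat of whistle-blowing forces a positive transfer to the targeted recipient, which is the flavor of result the chapter describes.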

In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
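The simulated-moments idea outlined here can be illustrated with a deliberately simple stand-in model: simulate the model at candidate parameters, compute auxiliary ("pseudo model") statistics, and pick the parameters whose simulated statistics best match those of the data. The structural model, auxiliary statistics, and grid below are hypothetical placeholders for the far richer limit-order setting of the chapter:

```python
import random, statistics

def simulate(theta, n=2000, seed=0):
    # Structural model stand-in: i.i.d. normal draws with mean/sd in theta.
    # The fixed seed keeps the simulation draws common across candidate thetas.
    rng = random.Random(seed)
    mu, sigma = theta
    return [rng.gauss(mu, sigma) for _ in range(n)]

def moments(xs):
    # Auxiliary statistics of the pseudo model: mean and standard deviation.
    return (statistics.fmean(xs), statistics.pstdev(xs))

def msm_objective(theta, data_moments):
    m_sim = moments(simulate(theta))
    return sum((a - b) ** 2 for a, b in zip(m_sim, data_moments))

# "Data" generated at true parameters (1.0, 2.0); grid-search the objective.
data_m = moments(simulate((1.0, 2.0), seed=42))
grid = [(mu / 4, sig / 4) for mu in range(0, 9) for sig in range(4, 13)]
theta_hat = min(grid, key=lambda th: msm_objective(th, data_m))
print(theta_hat)  # close to the true (1.0, 2.0)
```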

Relevance: 60.00%

Publisher:

Abstract:

This dissertation contains three essays on mechanism design. The common goal of these essays is to assist in the solution of different resource allocation problems where asymmetric information creates obstacles to the efficient allocation of resources. In each essay, we present a mechanism that satisfactorily solves the resource allocation problem and study some of its properties. In our first essay, "Combinatorial Assignment under Dichotomous Preferences", we present a class of problems akin to time scheduling without a pre-existing time grid, and propose a mechanism that is efficient, strategy-proof and envy-free. Our second essay, "Monitoring Costs and the Management of Common-Pool Resources", studies what can happen to an existing mechanism, the individual tradable quotas (ITQ) mechanism (also known as the cap-and-trade mechanism), when quota enforcement is imperfect and costly. Our third essay, "Vessel Buyback", coauthored with John O. Ledyard, presents an auction design that can be used to buy back excess capital in overcapitalized industries.

Relevance: 60.00%

Publisher:

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
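The Witsenhausen counterexample is easy to probe numerically. The Monte Carlo sketch below, at the classic parameters k = 0.2 and sigma = 5, compares a representative affine strategy (no first-stage control followed by linear estimation) with a two-point signaling strategy; the specific strategies and simulation setup are illustrative choices of mine, not drawn from the thesis:

```python
import random, math

# Witsenhausen setup: x0 ~ N(0, sigma^2), x1 = x0 + u1, the second controller
# sees y = x1 + v with v ~ N(0, 1) and applies u2(y); cost = k^2*u1^2 + (x1 - u2)^2.

def cost(strategy, n=200_000, k=0.2, sigma=5.0, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x0 = rng.gauss(0, sigma)
        v = rng.gauss(0, 1)
        if strategy == "affine":
            u1 = 0.0                                        # no first-stage control
            x1 = x0 + u1
            u2 = (sigma**2 / (sigma**2 + 1)) * (x1 + v)     # linear MMSE of x1
        else:                                               # two-point signaling
            x1 = sigma if x0 >= 0 else -sigma
            u1 = x1 - x0
            u2 = sigma * math.tanh(sigma * (x1 + v))        # MMSE for x1 = +/-sigma
        total += k**2 * u1**2 + (x1 - u2)**2
    return total / n

print(cost("affine"), cost("signal"))  # roughly 0.96 vs 0.40
```

The nonlinear signaling strategy beats the affine benchmark by more than a factor of two, which is exactly the phenomenon that makes this seemingly simple problem so hard.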

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis are part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories: controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results concerns controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given; indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem.

Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
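A basic building block of nuclear norm minimization of the kind invoked above is the proximal operator of the nuclear norm, i.e. singular value soft-thresholding. The toy low-rank-plus-noise separation below is only a sketch of that operator (it assumes NumPy and a synthetic rank-1 "global" component), not the identification algorithm proposed in the thesis:

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy separation: a rank-1 "global" component plus small dense "local" noise.
rng = np.random.default_rng(0)
low_rank = np.outer(rng.normal(size=30), rng.normal(size=30))
M = low_rank + 0.01 * rng.normal(size=(30, 30))

L = svt(M, tau=0.5)  # threshold well above the noise singular values
rank_L = np.linalg.matrix_rank(L, tol=1e-6)
print(rank_L)  # the thresholded estimate recovers the rank-1 structure
```

Iterating this prox step inside a splitting method (e.g. proximal gradient) is the standard way such nuclear norm programs are solved in practice.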

Relevance: 60.00%

Publisher:

Abstract:

This dissertation aims to offer an academic contribution on the level of federal public transparency in annual management reports and the incentives (political, institutional, governmental, social and financial) associated with the disclosure of information. It adopts the perspective of the agency conflict, information asymmetry and public accountability in the disclosure of public information; that is, government managers tend to make asymmetric information available to citizens. The research is empirical-analytical, using multiple linear regression and cross-sectional analysis of the 2010 management reports of 115 federal public entities. To that end, a federal public transparency index (total, mandatory and voluntary) was constructed, both dichotomous (binary) and polychotomous (weighted), based on previous studies and national legislation and adapted to the Brazilian setting. The results point to a low level of federal public transparency (50% of disclosure items), deficient compliance with mandatory disclosure practices (80%), and low adherence to voluntary disclosure practices (19%). Furthermore, the disclosure of public information (total, mandatory and voluntary) was uniform among entities of the indirect administration (autarchies 54% and foundations 55%), but statistically significant differences were found between these and entities of the direct administration (public agencies 46%), which tend to disclose less information. Regarding incentives, entity type (governmental incentive), accessibility (social incentive) and staff demographics (institutional incentive) show a positive relation with the federal public transparency index, while public bureaucracy (governmental incentive) shows a negative relation. However, size (political incentive), size of the management core (institutional incentive), budget revenue and federal dependence (financial incentive) showed no relation with the index. The study's contribution is thus to reveal the current stage of public transparency among federal public entities, as well as the associated incentives; this information may point to opportunities for improving the disclosure of public information in annual management reports.

Relevance: 60.00%

Publisher:

Abstract:

Internal control is associated with the context of organizational governance. In Brazilian public administration, the Executive, Legislative and Judicial branches are responsible for maintaining an integrated internal control system, as provided for in the Federal Constitution. Governance-related aspects are addressed by Agency Theory, in which the relationship between principal and agent is marked by information asymmetry and conflicts of interest. The objective of this study is to investigate the disclosure of governance principles in the audit reports prepared by the internal control body of the Brazilian Navy. This is descriptive, documentary, ex post facto research, conducted as a case study of the Centro de Controle Interno da Marinha (CCIMAR). Given the amount of material made available by the body, the study was limited to the 2012 management evaluation audit reports, the audited units having been previously selected by the Tribunal de Contas da União (TCU). In 2012, CCIMAR produced six management evaluation audit reports, which therefore constitute this study's convenience sample. To guide the investigation, a frame of reference was defined, encompassing and integrating the governance principles addressed by the following studies: Cadbury Committee (1992); Nolan Committee (1995); Netherlands Ministry of Finance, Timmers (2000); IFAC (2001); ANAO (2003); OECD (2004); and IBGC (2009). The principles finally selected for investigation were Accountability, Fairness, Integrity and Transparency, associated respectively with the keywords rendering of accounts / accountability, fair treatment, reliability / trustworthiness of information / data, and availability / disclosure of information / data, as defined by the contexts of the meanings highlighted in the frame of reference. The principles and keywords thus formed the analytical framework used to investigate the audit reports, and received quantitative and qualitative treatment. After examining the occurrences of the principles and keywords in the reports reviewed, the results indicated that: (1) the Accountability principle was associated with compliance with the deadlines and legal formalities required in public accountability processes; (2) the Fairness principle was evidenced essentially from the internal perspective of the audited units, being perceived in recommendations calling for a more consistent and effective role of the respective management councils in managing the organizations; (3) the Integrity principle was addressed in the reports both as a personal attribute (moral integrity) of public agents and as a necessary characteristic of the information reported in documents issued by public bodies; and (4) Transparency was mentioned as the principle that reduces information asymmetry among stakeholders, allowing them access to relevant information, such as the use of public resources allocated to organizations of the Brazilian Navy.

Relevance: 60.00%

Publisher:

Abstract:

Consistent with the implications of a simple asymmetric information model for the bid-ask spread, we present empirical evidence that the size of the bid-ask spread in the foreign exchange market is positively related to the underlying exchange rate uncertainty. The estimation results are based on an ordered probit analysis that captures the discreteness of the spread distribution, with the uncertainty of the spot exchange rate quantified through a GARCH-type model. The data set consists of more than 300,000 continuously recorded Deutschemark/dollar quotes over the period from April 1989 to June 1989. © 1994.
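The GARCH-type quantification of exchange rate uncertainty mentioned here boils down to a simple conditional variance recursion. The sketch below uses illustrative parameter values, not the paper's estimates:

```python
# GARCH(1,1) conditional variance recursion:
#   sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
# omega, alpha and beta are illustrative values, not estimated coefficients.

def garch_variances(returns, omega=0.02, alpha=0.1, beta=0.85):
    sigma2 = omega / (1 - alpha - beta)  # start at the unconditional variance
    out = []
    for r in returns:
        out.append(sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return out

rets = [0.1, -0.5, 0.2, 0.0, 0.9, -0.3]
vars_ = garch_variances(rets)
print(vars_[0])  # approximately 0.4, the unconditional variance omega/(1-alpha-beta)
```

In the paper's setting, a fitted series of this kind would serve as the uncertainty regressor in the ordered probit for the spread.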

Relevance: 60.00%

Publisher:

Abstract:

Agency problems within the firm are a significant hindrance to efficiency. We propose trust between coworkers as a superior alternative to the standard tools used to mitigate agency problems: increased monitoring and incentive-based pay. We model trust as mutual, reciprocal altruism between pairs of coworkers and show how it induces employees to work harder, relative to those at firms that use the standard tools. In addition, we show that employees at trusting firms have higher job satisfaction, and that these firms enjoy lower labor cost and higher profits. We conclude by discussing how trust may also be easier to use within the firm than the standard agency-mitigation tools. © 2002 Elsevier Science B.V. All rights reserved.

Relevance: 60.00%

Publisher:

Abstract:

This paper represents the first research attempt to estimate the probabilities for Vietnamese patients to fall into destitution when facing the financial burdens incurred during their curative stay in hospital. The study models the risk against such factors as level of insurance coverage, location of patient, and costliness of treatment, among others. The results show that very high probabilities of destitution, approximately 70%, apply to a large group of patients who are nonresident, poor and ineligible for significant insurance coverage. There is also a probability of 58% that low-income patients who are seriously ill and face higher health care costs will quit their treatment. These facts will put the Vietnamese government's ambitious plan of increasing both universal coverage (UC) to 100% of expenditure and the rate of UC beneficiaries to 100% to a serious test. The study also raises issues of asymmetric information and alternative financing options for the poor, who are most exposed to the risk of destitution following market-based health care reforms.

Relevance: 60.00%

Publisher:

Abstract:

The main sources of financing for small and medium-sized enterprises (SMEs) are equity (internally generated cash), trade credit paid on time, long- and short-term bank credit, delayed payment on trade credit, and other debt. The marginal costs of each financing instrument are driven by asymmetric information (the cost of gathering and analysing information) and the transaction costs associated with non-payment (the costs of collecting and selling collateral). According to the Pecking Order Theory, firms choose the cheapest source in terms of cost. Under the static trade-off theory, firms choose finance so that the marginal costs across financing sources are all equal; an additional euro of financing is thus obtained from all the sources, whereas under the Pecking Order Theory the source is determined by how far down the pecking order the firm is presently located. In this paper, we argue that both of these theories miss the point that the marginal costs depend on the use of the funds, and that the asset side of the balance sheet primarily determines the financing source for an additional euro. An empirical analysis of a unique dataset of Portuguese SMEs confirms that the composition of the asset side of the balance sheet has an impact on the type of financing used, and both the Pecking Order Theory and the traditional static trade-off theory are rejected.

Relevance: 60.00%

Publisher:

Abstract:

Real estate appraisal recently contributed to the collapse of financial institutions and to the subprime crisis. This research aims to identify the predominant factors in real estate appraisal. The work addresses the problem of information asymmetry, the different real estate appraisal methods, and the importance of externalities. Empirically, several cases are analyzed using linear regression, cluster analysis, and the principal component analysis of factor analysis. The first case concerns the valuation of externalities, where the results indicate the following main positive externalities: marina views are valued more highly than sea views, frontal views are valued more highly than lateral views, and floor-level valuations differ according to housing type (residence or holiday home). The second study analyzes how the income method helps explain the Portuguese market; three clusters of rents and three clusters of yields were obtained for each sample. The results show that (a) the yield and rent clusters are formed by different elements, and (b) the asking price is explained by the income method, the yield cluster, and population density. In the third study, 427 individuals seeking an apartment as a residence were surveyed. The principal component analysis of the factor analysis yielded seven determinant factors in apartment demand: negative externalities, positive externalities, the location of businesses on the ground floor of the apartment building, rational proximity interests, secondary variables in the use of the building, income variables, and personal-interest variables. The main conclusion is that, because this is a transdisciplinary area, it is difficult to arrive at a single model incorporating both the appraisal methods and the different dynamics of demand. The appraiser must analyze and produce a scoring, balancing the science of appraisal with the art of judgment.

Relevance: 60.00%

Publisher:

Abstract:

Drawing on the extant literature, this paper explores prevalent consumer opportunism in insurance transactions, its links to consumers' perceptions, and the relevance of marketing strategies in curbing the menace. It shows that insurance opportunism can be perpetrated by any party in the insurance transaction system and at any stage of the process. Among the factors identified as prompting this problem are economic motives, resentment towards insurance companies, laxity in application processing and asymmetric information, and insider collaboration. Nonetheless, the paper suggests that a strong commitment by insurance marketers to creating and delivering value to customers, through a proactive and all-embracing implementation of marketing strategies vis-à-vis relationship marketing, could significantly enhance consumers' positive perception of the insurance business and consequently result in a healthier insurance industry.

Relevance: 60.00%

Publisher:

Abstract:

This thesis consists of three essays in forest economics. The first two deal with setting the optimal royalty faced by the owner of a forest resource in a context of asymmetric information. The third analyzes the long-run impact of recycling on the land area allocated to forest. Managing forest resources often involves the delegation of harvesting rights by the forest owner to an operating firm. This delegation takes the form of a concession contract under which the forest owner grants harvesting rights to forestry companies in exchange for a royalty (a monetary transfer). Harvesting rights are generally granted under several arrangements, the most common being public auctions and negotiated contracts, in which the forest owner and the operating firm specify, among other things, the royalty in the harvesting clauses. To determine the optimal mechanism (choice of firm, harvest age and royalty), the forest owner would ideally need to know the harvesting and replanting costs. In reality, however, firms are better informed about their costs than the forest owner. In this context of asymmetric information, the optimal mechanism must therefore take informational constraints into account. The first two essays characterize, under these conditions, the optimal harvest age (the optimal rotation) and the optimal royalty.

The first essay examines the optimal contract when the forest owner grants harvesting rights to a firm through a negotiated agreement or a second-price public auction. The problem is analyzed first in a static setting, in the sense that harvesting costs are perfectly correlated over time, and then in a dynamic setting, where costs are independent over time. Both analyses show that the optimal rotation satisfies a modified version of the Faustmann rule that would prevail under symmetric information. This modification is needed to induce the firm to reveal its true costs. In the static case, it follows that the optimal rotation is longer under asymmetric information than under full information. We also show how the maximum cost threshold can be endogenized, allowing the owner to increase expected profit by ensuring that unprofitable forests are not harvested. We then compare the optimal royalty under asymmetric and symmetric information. Since forest royalties in negotiated arrangements are, in practice, generally a linear function of timber volume, we derive the optimal contract under such a royalty and characterize the loss in expected profit from using this type of contract rather than the more general nonlinear one. Finally, still in the static setting, we show through an optimal second-price auction mechanism that introducing competition among firms increases the forest owner's expected profit. The results obtained in the dynamic setting differ, for the most part, from those of the static case. We show that the optimal contract then grants every firm type, including the one with the highest cost, a strictly positive rent, which increases over time. This is necessary to obtain, at least cost, revelation of the firm's true type in the current period. As an implication, the optimal rotation also increases over time. Finally, we show that under asymmetric information there is a distortion relative to the full-information optimum even for the lowest cost (the most favorable realization).

The competition introduced in the first essay through a second-price auction assumes that each firm knows its own harvesting cost exactly. The second essay relaxes this assumption. In reality, neither the forest owner nor the firms know harvesting costs precisely. Each firm privately observes a signal about its cost; for example, each firm may be allowed to visit a lot to obtain an estimate (signal) of its harvesting cost, but this assessment is approximate. Each firm's cost thus depends on the estimates (signals) of the other participating firms, giving a mechanism with interdependent values, in which the value of an allocation depends on the signals of all firms. The optimal mechanism (allocation of harvesting rights, royalty and harvest age) is explored. We determine the conditions under which the optimal mechanism can be implemented by a second-price auction, and we derive the optimal rotation and the reserve price in that auction.

The third essay analyzes the long-run impact of recycling on the land area allocated to forest. One of the main arguments in favor of recycling is that it would reduce timber harvesting and thus spare trees, the objective being a larger number of trees than in the absence of recycling. The rationale for increasing the tree stock is that forests generate externalities: they provide a flow of recreational services, slow the erosion of soils and riverbanks, and absorb carbon dioxide from the atmosphere. Given these externalities, the market equilibrium would result in too few trees, justifying policies aimed at increasing their number. The purpose of this third essay is to assess to what extent promoting recycling is an appropriate instrument for achieving such a goal; in other words, how does recycling affect the forested land area and the harvest age in the long run? We study this question with a dynamic model in which a private forest owner allocates a given parcel of land between forest and an alternative land use, such as agriculture. Once the trees are cut, the owner decides on a new allocation of the land, and does so indefinitely, as in the Faustmann model. Harvested timber is transformed into a final product that is partly recycled as a substitute for virgin timber, so past outputs affect the current price. We show that, paradoxically, an increase in the recycling rate will in the long run reduce the forested area and hence the number of trees planted, while the optimal harvest age will increase. The net effect on the volume of timber supplied to the market is ambiguous. The main message, however, is that in the long run recycling leads to a smaller, not larger, forested area. Thus, if the goal is to increase the forested area, it may be preferable to use policy instruments other than encouraging recycling.
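The Faustmann rule that recurs throughout these essays can be illustrated with a small numerical sketch: pick the harvest age T maximizing the land expectation value over an infinite cycle of identical rotations. The growth curve, timber price, replanting cost and discount rate below are all hypothetical:

```python
import math

def volume(T):
    # Hypothetical S-shaped timber growth curve (volume in arbitrary units).
    return 200.0 / (1.0 + math.exp(-0.1 * (T - 40)))

def land_value(T, price=1.0, replant_cost=20.0, r=0.03):
    # Faustmann land expectation value: the discounted net revenue of one
    # rotation, compounded over an infinite sequence of rotations.
    return (price * volume(T) * math.exp(-r * T) - replant_cost) / (1.0 - math.exp(-r * T))

# Grid search over candidate harvest ages.
T_star = max(range(1, 121), key=land_value)
print(T_star)  # an interior optimum (mid-40s for these illustrative numbers)
```

The essays' point is that asymmetric information distorts exactly this rotation: the incentive-compatible contract modifies the rule above, typically lengthening T relative to the full-information Faustmann solution.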

Relevance: 60.00%

Publisher:

Abstract:

This paper discusses the problem of the optimal design of a jurisdiction structure from the viewpoint of a utilitarian social planner when individuals with identical utility functions for a non-rival public good and private consumption have private information about their contributive capacities. It shows that the superiority of centralized provision of a non-rival public good over federal provision does not always hold. Specifically, when differences in individuals' contributive capacities are large, it is better to provide the public good in several distinct jurisdictions rather than to pool these jurisdictions into a single one. In the specific situation where individuals have logarithmic utilities, the paper provides a complete characterization of the optimal jurisdiction structure in the two-type case.

Relevance: 60.00%

Publisher:

Abstract:

This paper proposes two types of contracts for interbank loans so that banks can smooth their liquidity shocks through the interbank market. In particular, it studies the situation in which banks with liquidity shortfalls and low credit risk leave the market because the interest rate is high relative to their alternative source of funding. Asymmetric information about credit risk prevents banks with liquidity surpluses from adjusting the interest rate to the risk of their counterparty. Given this, two contracts for interbank loans are designed that differ in the interest rates charged: whenever a bank posts a deposit, it can obtain liquidity at low interest rates; otherwise, the rate will be higher.