50 results for Pareto optimality

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of allocating an infinitely divisible commodity among a group of agents with single-peaked preferences. A rule that has played a central role in the analysis of this problem is the so-called uniform rule. Chun (2001) proves that the uniform rule is the only rule satisfying Pareto optimality, no-envy, separability, and continuity (with respect to the social endowment). We obtain an alternative characterization by using a weak replication-invariance condition, called duplication-invariance, instead of continuity. Furthermore, we prove that Pareto optimality, the equal-division lower bound, and separability imply no-envy. Using this result, we strengthen one of Chun's (2001) characterizations of the uniform rule by showing that the uniform rule is the only rule satisfying Pareto optimality, the equal-division lower bound, separability, and either continuity or duplication-invariance.
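As a concrete illustration of the rule this abstract characterizes, here is a minimal sketch of the uniform rule: each agent receives their peak, capped (or topped up) at a common bound chosen so the shares exhaust the endowment. The function name and the bisection search are illustrative choices, not taken from the paper.

```python
def uniform_rule(peaks, endowment):
    # Uniform rule sketch: under excess demand every agent gets
    # min(peak, lam); under excess supply, max(peak, lam); the common
    # bound lam is found by bisection so the shares sum to the endowment.
    excess_demand = sum(peaks) >= endowment

    def share(lam):
        if excess_demand:
            return [min(p, lam) for p in peaks]
        return [max(p, lam) for p in peaks]

    lo, hi = 0.0, max(max(peaks), endowment)
    for _ in range(200):  # bisection on the common bound
        lam = (lo + hi) / 2.0
        if sum(share(lam)) < endowment:
            lo = lam
        else:
            hi = lam
    return share((lo + hi) / 2.0)
```

For example, with peaks (1, 3) and endowment 2 there is excess demand and both agents receive 1; with peaks (1, 2) and endowment 5 there is excess supply and both receive 2.5.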

Relevance:

60.00%

Publisher:

Abstract:

This paper studies the efficiency of equilibria in a productive OLG economy where the process of financial intermediation is characterized by costly state verification. Both competitive equilibria and Constrained Pareto Optimal allocations are characterized. It is shown that market outcomes can be socially inefficient, even when a weaker notion than Pareto optimality is considered.

Relevance:

60.00%

Publisher:

Abstract:

This paper looks at the dynamic management of risk in an economy with discrete-time consumption and endowments and continuous trading. I study how agents in such an economy deal with all the risk in the economy and attain their Pareto optimal allocations by trading in a few natural securities: private insurance contracts and a common set of derivatives on the aggregate endowment. The parsimonious set of securities needed for Pareto optimality suggests that, in such contexts, complete markets is a very reasonable assumption.

Relevance:

20.00%

Publisher:

Abstract:

We show a standard model where the optimal tax reform is to cut labor taxes and leave capital taxes very high in the short and medium run. Only in the very long run would capital taxes be zero. Our model is a version of Chamley's, with heterogeneous agents, without lump-sum transfers, an upper bound on capital taxes, and a focus on Pareto-improving plans. For our calibration, labor taxes should be low for the first ten to twenty years, while capital taxes should be at their maximum. This policy ensures that all agents benefit from the tax reform and that capital grows quickly after the reform begins. Therefore, the long-run optimal tax mix is the opposite of the short- and medium-run tax mix. The initial labor tax cut is financed by deficits that lead to a positive long-run level of government debt, reversing the standard prediction that government accumulates savings in models with optimal capital taxes. If labor supply is somewhat elastic, the benefits from tax reform are high and can be shifted entirely to capitalists or workers by varying the length of the transition. With inelastic labor supply there is an increasing part of the equilibrium frontier, which means that the scope for benefiting the workers is limited and the total benefits from reforming taxes are much lower.

Relevance:

20.00%

Publisher:

Abstract:

We consider linear optimization over a nonempty convex semi-algebraic feasible region F. Semidefinite programming is an example. If F is compact, then for almost every linear objective there is a unique optimal solution, lying on a unique "active" manifold, around which F is "partly smooth", and the second-order sufficient conditions hold. Perturbing the objective results in smooth variation of the optimal solution. The active manifold consists, locally, of these perturbed optimal solutions; it is independent of the representation of F, and is eventually identified by a variety of iterative algorithms such as proximal and projected gradient schemes. These results extend to unbounded sets F.
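The identification property mentioned above can be seen in the simplest case, linear optimization over a box, where projected gradient iterates land on the optimal face after finitely many steps. This is only a sketch with illustrative names; the box stands in for a general convex semi-algebraic F.

```python
def projected_gradient_box(c, lo, hi, x0, step=0.1, iters=200):
    # minimize the linear objective c.x over the box [lo, hi] by
    # projected gradient; coordinates stuck at a bound reveal the
    # identified active manifold (here, a face of the box)
    x = list(x0)
    for _ in range(iters):
        x = [min(max(xi - step * ci, l), h)
             for xi, ci, l, h in zip(x, c, lo, hi)]
    active = [i for i, (xi, l, h) in enumerate(zip(x, lo, hi))
              if xi == l or xi == h]
    return x, active
```

Minimizing x1 - x2 over the unit square from the interior point (0.5, 0.5) drives the iterate to the vertex (0, 1), with both coordinates identified as active.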

Relevance:

20.00%

Publisher:

Abstract:

Assume that the problem Qo is not solvable in polynomial time. For theories T containing a sufficiently rich part of true arithmetic, we characterize T ∪ {ConT} as the minimal extension of T proving, for some algorithm, that it decides Qo as fast as any algorithm B with the property that T proves that B decides Qo. Here, ConT asserts the consistency of T. Moreover, we characterize problems with an optimal algorithm in terms of arithmetical theories.

Relevance:

20.00%

Publisher:

Abstract:

We consider the agency problem of a staff member managing microfinancing programs, who can abuse his discretion to embezzle borrowers' repayments. The fact that most borrowers of microfinancing programs are illiterate and live in rural areas where transportation costs are very high makes staff embezzlement particularly relevant, as documented by Mknelly and Kevane (2002). We study the trade-off between the optimal rigid lending contract and the optimal discretionary one and find that a rigid contract is optimal when the audit cost is larger than the gains from insurance. Our analysis explains the rigid repayment schedules used by the Grameen bank as an optimal response to the bank staff's agency problem. Joint liability reduces borrowers' burden of respecting the rigid repayment schedules by providing them with partial insurance. However, the same insurance can be provided by borrowers themselves under individual liability through a side-contract.

Relevance:

20.00%

Publisher:

Abstract:

The approximants to regular continued fractions constitute "best approximations" to the numbers they converge to in two ways, known as approximations of the first and the second kind. This property of continued fractions provides a solution to Gosper's problem of the batting average: if the batting average of a baseball player is 0.334, what is the minimum number of times he has been at bat? In this paper, we tackle the inverse question: given a rational number P/Q, what is the set of all numbers for which P/Q is a "best approximation" of one or the other kind? We prove that in both cases these "optimality sets" are intervals, and we give a precise description of their endpoints.
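Gosper's batting-average problem quoted above can also be answered by a direct scan over denominators with exact rational arithmetic. This is a sketch, not the paper's method: `min_at_bats` is a hypothetical helper, and rounding half-up at three decimals is assumed.

```python
from fractions import Fraction
from math import ceil

def min_at_bats(avg="0.334"):
    # smallest at-bat count q such that some hit count p gives a
    # batting average p/q that rounds (half-up) to `avg` at 3 decimals
    target = Fraction(avg)
    half = Fraction(1, 2000)          # half a unit in the last place
    lo, hi = target - half, target + half
    q = 1
    while True:
        p = ceil(lo * q)              # fewest hits reaching the interval
        if Fraction(p, q) < hi:
            return p, q
        q += 1

print(min_at_bats("0.334"))  # -> (96, 287): 96 hits in 287 at bats
```

Exact `Fraction` arithmetic matters here: floating-point comparisons near the interval endpoints could accept or reject a denominator incorrectly.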

Relevance:

20.00%

Publisher:

Abstract:

In the analysis of equilibrium policies in a differential game, if agents have different time preference rates, the cooperative (Pareto optimum) solution obtained by applying Pontryagin's Maximum Principle becomes time inconsistent. In this work we derive a set of dynamic programming equations (in discrete and continuous time) whose solutions are time-consistent equilibrium rules for N-player cooperative differential games in which agents differ in their instantaneous utility functions and also in their discount rates of time preference. The results are applied to the study of a cake-eating problem describing the management of a common-property exhaustible natural resource. The extension of the results to a simple common-property renewable natural resource model in infinite horizon is also discussed.
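As a benchmark for the cake-eating application, the single-agent case with a common discount factor has a closed form under log utility: maximizing the sum of beta**t * log(c_t) subject to the consumptions summing to the cake W gives c_{t+1} = beta * c_t, so shares are proportional to beta**t. The heterogeneous-discounting rules derived in the paper generalize this; the code below is only this classical benchmark, with illustrative names.

```python
def cake_eating_path(W, beta, T):
    # log-utility cake eating over T periods: first-order conditions
    # give c_{t+1} = beta * c_t, so shares are proportional to beta**t
    weights = [beta ** t for t in range(T)]
    total = sum(weights)
    return [W * w / total for w in weights]
```

With beta = 0.9 and a unit cake over five periods, consumption declines geometrically and the path exhausts the cake exactly.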


Relevance:

20.00%

Publisher:

Abstract:

Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed Poisson distributed. The wave height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (the Poisson rate and the shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters yields posterior distributions of derived quantities such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
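Under the standard model described (Poisson occurrences, GPD excesses), the return period of a given wave height follows from the GPD survival function: 1 / (rate × P(excess)). The sketch below is a point-estimate version only; the paper works with full posterior distributions, and all parameter values and names here are made up for illustration.

```python
import math

def gpd_survival(excess, sigma, xi):
    # P(excess over the threshold exceeds `excess`) under a GPD tail
    # with scale sigma and shape xi (xi = 0 is the exponential limit)
    if xi == 0.0:
        return math.exp(-excess / sigma)
    return max(0.0, 1.0 + xi * excess / sigma) ** (-1.0 / xi)

def return_period(height, threshold, rate, sigma, xi):
    # exceedances of the threshold arrive at Poisson `rate` per year;
    # mean years between events whose wave height exceeds `height`
    return 1.0 / (rate * gpd_survival(height - threshold, sigma, xi))
```

At the threshold itself the survival probability is 1, so the return period is just the mean inter-event time 1/rate; it grows monotonically with the level considered.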

Relevance:

20.00%

Publisher:

Abstract:

Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles.
We demonstrate the usefulness of our approach by means of a case study that optimizes the ethanol production in the fermentation of Saccharomyces cerevisiae.
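The Pareto-filtering step can be illustrated with a plain nondominated filter over candidate objective vectors. This is a generic sketch assuming every objective is maximized; function names are illustrative, and the paper's filters are not necessarily this one.

```python
def dominates(a, b):
    # a dominates b: at least as good in every objective, better in one
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_filter(points):
    # keep only the nondominated objective vectors (the Pareto front)
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, among the vectors (1, 2), (2, 1), and (1, 1), the first two are incomparable and survive, while (1, 1) is dominated and is filtered out.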

Relevance:

10.00%

Publisher:

Abstract:

I analyze an economy with uncertainty in which a set of indivisible objects and a certain amount of money are to be distributed among agents. The set of intertemporally fair social choice functions based on envy-freeness and Pareto efficiency is characterized. I give a necessary and sufficient condition for its non-emptiness and propose a mechanism that implements the set of intertemporally fair allocations in Bayes-Nash equilibrium. Implementation at the ex ante stage is considered too. I also generalize the existence result obtained with envy-freeness by using a broader fairness concept, introducing the aspiration function.
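The envy-freeness underlying the fairness notion above has a simple computational statement: each agent must weakly prefer their own object-plus-money bundle to everyone else's. The sketch below assumes quasi-linear utilities (value of the object plus the money transfer); the names and that functional form are assumptions, not the paper's.

```python
def envy_free(valuations, assignment, transfers):
    # valuations[i][o]: agent i's value for object o; agent i receives
    # object assignment[i] plus money transfers[i]; no agent may envy
    # another agent's object-plus-money bundle
    n = len(valuations)
    return all(
        valuations[i][assignment[i]] + transfers[i]
        >= valuations[i][assignment[j]] + transfers[j]
        for i in range(n) for j in range(n)
    )
```

With two agents whose favorite objects differ, assigning each their favorite with zero transfers is envy-free, while swapping the objects creates envy.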