50 results for Pareto optimality


Relevance:

10.00%

Publisher:

Abstract:

In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariant-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of the Mean Squared Error (MSE) of both small and large area parameters. We find that the adoption of small area composite estimators opens the possibility of (1) reducing the sample size when precision is given, or (2) improving precision for a given sample size.
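As a rough illustration of the mixed sample design described above, the sketch below splits a total sample size between an even component and a proportional component across small areas. The mixing fraction `alpha` and the area populations are illustrative assumptions, not values from the article:

```python
# Minimal sketch of a mixed (fixed + proportional) sample allocation.
# Area populations and the mixing fraction alpha are assumed for illustration.

def mixed_allocation(total_n, populations, alpha=0.5):
    """Allocate total_n sample units across areas.

    A fraction alpha is distributed proportionally to area population;
    the remaining (1 - alpha) is spread evenly across the areas.
    """
    k = len(populations)
    total_pop = sum(populations)
    even_share = total_n * (1 - alpha) / k
    return [
        even_share + total_n * alpha * pop / total_pop
        for pop in populations
    ]

# Example: 1000 units over three areas of very different sizes.
print(mixed_allocation(1000, [100, 900, 9000], alpha=0.5))
# alpha=0 gives a purely even allocation; alpha=1 a purely proportional one.
```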

Relevance:

10.00%

Publisher:

Abstract:

This article studies the effects of interest rate restrictions on loan allocation. The British government tightened the usury laws in 1714, reducing the maximum permissible interest rate from 6% to 5%. A sample of individual loan transactions reveals that average loan size and minimum loan size increased strongly, while access to credit worsened for those with little social capital. Collateralised credits, which had accounted for a declining share of total lending, returned to their former role of prominence. Our results suggest that the usury laws distorted credit markets significantly; we find no evidence that they offered a form of Pareto-improving social insurance.

Relevance:

10.00%

Publisher:

Abstract:

Dubey and Geanakoplos [2002] have developed a theory of competitive pooling, which incorporates adverse selection and signaling into general equilibrium. By recasting the Rothschild-Stiglitz model of insurance in this framework, they find that a separating equilibrium always exists and is unique. We prove that their uniqueness result is not a consequence of the framework, but rather of their definition of refined equilibria. When other types of perturbations are used, the model allows many pooling allocations to be supported as equilibria: in particular, this is the case for pooling allocations that Pareto dominate the separating equilibrium.

Relevance:

10.00%

Publisher:

Abstract:

A new direction of research in Competitive Location theory incorporates theories of Consumer Choice Behavior in its models. Following this direction, this paper studies the importance of consumer behavior with respect to distance or transportation costs in the optimality of locations obtained by traditional Competitive Location models. To do this, it considers different ways of defining a key parameter in the basic Maximum Capture model (MAXCAP). This parameter reflects various ways of taking distance into account, based on several Consumer Choice Behavior theories. For each model, we compute the optimal locations and the deviation in demand captured when the optimal locations of the other models are used instead of the true ones. A metaheuristic based on GRASP and Tabu search procedures is presented to solve all the models. Computational experience and an application to a 55-node network are also presented.
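A minimal sketch of a GRASP skeleton of the kind referred to above, applied to a generic p-facility maximum-capture objective. The `captured` callable, the candidate handling, and all parameters are illustrative assumptions, not the paper's procedure (which also incorporates Tabu search):

```python
# Sketch of a GRASP loop for a p-facility maximum-capture problem.
# The objective `captured` is an assumed placeholder, not the paper's model.
import random

def grasp_maxcap(nodes, p, captured, iters=100, rcl_size=3):
    """Greedy randomized construction + local search (single-node swaps)."""
    best, best_val = None, float("-inf")
    for _ in range(iters):
        # Construction: repeatedly pick one of the rcl_size best
        # candidate nodes (the restricted candidate list).
        sol = set()
        while len(sol) < p:
            cands = sorted((n for n in nodes if n not in sol),
                           key=lambda n: captured(sol | {n}), reverse=True)
            sol.add(random.choice(cands[:rcl_size]))
        # Local search: accept the first improving swap, repeat until none.
        improved = True
        while improved:
            improved = False
            for out in list(sol):
                for inn in nodes:
                    if inn in sol:
                        continue
                    cand = (sol - {out}) | {inn}
                    if captured(cand) > captured(sol):
                        sol = cand
                        improved = True
                        break
                if improved:
                    break
        if captured(sol) > best_val:
            best, best_val = sol, captured(sol)
    return best, best_val
```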

Relevance:

10.00%

Publisher:

Abstract:

We analyze a mutual fire insurance mechanism used in Andorra, which is called La Crema in the local language. This mechanism relies on households' announced property values to determine how much a household is reimbursed in the case of a fire and how payments are apportioned among the other households. The only Pareto efficient allocation reachable through the mechanism requires that all households honestly report the true value of their property. However, such honest reporting is not an equilibrium except in the extreme case where the property values are identical for all households. Nevertheless, as the size of the society becomes large, the benefits from deviating from truthful reporting vanish, and all of the non-degenerate equilibria of the mechanism are nearly truthful and approximately Pareto efficient.
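A rough sketch of the announcement-based apportionment described above, under our simplifying reading that a burned household is reimbursed its announced value and that every household (including the burned one) contributes in proportion to its own announcement:

```python
# Sketch of a La Crema-style apportionment (our reading of the abstract,
# not the paper's exact rules): burned households receive their announced
# value; everyone contributes in proportion to their announcement.

def apportion(announced, burned):
    """announced: dict household -> announced value; burned: set of households.
    Returns each household's net transfer (negative = pays in).
    Net transfers sum to zero by construction."""
    pool = sum(announced[h] for h in burned)   # total to be reimbursed
    total = sum(announced.values())
    net = {}
    for h, v in announced.items():
        contribution = pool * v / total        # share of the fire bill
        reimbursement = v if h in burned else 0.0
        net[h] = reimbursement - contribution
    return net

# Example: three households, one fire. Over-announcing raises your payout
# when you burn, but also your share of every fire bill.
print(apportion({"a": 100.0, "b": 100.0, "c": 200.0}, {"a"}))
```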

Relevance:

10.00%

Publisher:

Abstract:

This paper shows how risk may aggravate fluctuations in economies with imperfect insurance and multiple assets. A two-period job matching model is studied, in which risk averse agents act both as workers and as entrepreneurs. They choose between two types of investment: one type is riskless, while the other is a risky activity that creates jobs. Equilibrium is unique under full insurance. If investment is fully insured but unemployment risk is uninsured, then precautionary saving behavior dampens output fluctuations. However, if both investment and employment are uninsured, then an increase in unemployment gives agents an incentive to shift investment away from the risky asset, further increasing unemployment. This positive feedback may lead to multiple Pareto-ranked equilibria. An overlapping generations version of the model may exhibit poverty traps or persistent multiplicity. Greater insurance is doubly beneficial in this context since it can both prevent multiplicity and promote risky investment.
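As a toy illustration of the feedback loop described above (our construction, not the paper's model), a steep enough investment response to unemployment produces a dynamic with multiple stable fixed points:

```python
# Toy illustration (not the paper's model) of the feedback: higher
# unemployment -> less risky investment -> fewer jobs -> higher
# unemployment. A steep enough response yields multiple equilibria.
import math

def next_unemployment(u, steepness=10.0):
    """Risky investment share falls with u (logistic); unemployment is
    what remains once the risky, job-creating investment is made."""
    risky_share = 1.0 / (1.0 + math.exp(steepness * (u - 0.5)))
    return 1.0 - risky_share

# Iterating from different starting points reaches different equilibria,
# which are Pareto-ranked: low unemployment is better for everyone.
for u0 in (0.2, 0.8):
    u = u0
    for _ in range(100):
        u = next_unemployment(u)
    print(f"start {u0}: converges to unemployment of about {u:.3f}")
```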

Relevance:

10.00%

Publisher:

Abstract:

Let a class $\mathcal{F}$ of densities be given. We draw an i.i.d. sample from a density $f$ which may or may not be in $\mathcal{F}$. After every $n$, one must guess whether $f \in \mathcal{F}$ or not. A class is almost surely testable if there exists a testing sequence such that, for any $f$, we make only finitely many errors almost surely. In this paper, several results are given that allow one to decide whether a class is almost surely testable. For example, continuity and square integrability are not testable, but unimodality, log-concavity, and boundedness by a given constant are.
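To make the notion precise, the definition sketched above can be formalized as follows (our transcription of the abstract's wording, with $T_n$ denoting the guess made after $n$ observations):

```latex
% Formalization (our reading) of "almost surely testable".
% X_1, X_2, \dots are i.i.d. with density f; T_n \in \{0,1\} is the
% guess after n observations, with 1 meaning "f \in \mathcal{F}".
\[
\mathcal{F} \text{ is a.s.\ testable} \iff
\exists\, \{T_n\}_{n \ge 1},\; T_n = T_n(X_1,\dots,X_n), \text{ such that }
\forall f:\;
\mathbb{P}_f\!\left( \sum_{n=1}^{\infty}
\mathbf{1}\{ T_n \neq \mathbf{1}\{ f \in \mathcal{F} \} \} < \infty \right) = 1 .
\]
```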

Relevance:

10.00%

Publisher:

Abstract:

Previous works on asymmetric information in asset markets tend to focus on the potential gains in the asset market itself. We focus on the market for information and conduct an experimental study to explore, in a game of finite but uncertain duration, whether reputation can be an effective constraint on deliberate misinformation. At the beginning of each period, an uninformed potential asset buyer can purchase information, at a fixed price and from a fully-informed source, about the value of the asset in that period. The informational insiders cannot purchase the asset and are given short-term incentives to provide false information when the asset value is low. Our model predicts that, in accordance with the Folk Theorem, Pareto-superior outcomes featuring truthful revelation should be sustainable. However, this depends critically on beliefs about rationality and behavior. We find that, overall, sellers are truthful 89% of the time. More significantly, the observed frequency of truthfulness is 81% when the asset value is low. Our result is consistent with both mixed-strategy and trigger strategy interpretations and provides evidence that most subjects correctly anticipate rational behavior. We discuss applications to financial markets, media regulation, and the stability of cartels.
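The appeal to the Folk Theorem rests on standard trigger-strategy logic: truthful reporting is sustainable when the one-shot gain from lying is outweighed by the discounted value of continued information sales. In symbols (a textbook condition, not taken from the paper):

```latex
% Standard grim-trigger sustainability condition (textbook, not the paper's):
% g = one-shot gain from misreporting when the asset value is low,
% v = per-period rent from continued information sales, \delta = discount factor.
\[
g \;\le\; \frac{\delta}{1-\delta}\, v
\qquad \Longleftrightarrow \qquad
\delta \;\ge\; \frac{g}{g+v}.
\]
```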

Relevance:

10.00%

Publisher:

Abstract:

We consider an agent who has to repeatedly make choices in an uncertain and changing environment, who has full information about the past, who discounts future payoffs, but who has no prior. We provide a learning algorithm that performs almost as well as the best of a given finite number of experts or benchmark strategies, and does so at any point in time, provided the agent is sufficiently patient. The key is to find the appropriate degree of forgetting of the distant past. Standard learning algorithms that treat the recent and distant past equally do not have this sequential epsilon-optimality property.
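One standard way to implement such forgetting is to discount older payoffs geometrically in an exponential-weights update. The sketch below is a generic discounted variant under assumed parameters `eta` and `beta`, not the paper's algorithm:

```python
# Sketch of exponential weights with a forgetting (discount) factor.
# Not the paper's algorithm: a generic illustration of how discounting
# old payoffs lets the learner track the currently best expert.
import math

def discounted_exp_weights(payoff_history, eta=0.5, beta=0.9):
    """payoff_history: list of per-round payoff vectors, one entry per expert.
    beta < 1 geometrically down-weights old rounds; beta = 1 never forgets.
    Returns the current probability weights over experts."""
    n_experts = len(payoff_history[0])
    scores = [0.0] * n_experts
    for payoffs in payoff_history:
        scores = [beta * s + p for s, p in zip(scores, payoffs)]
    weights = [math.exp(eta * s) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

# Example: expert 0 was best early on, expert 1 is best recently.
history = [[1, 0]] * 20 + [[0, 1]] * 10
print(discounted_exp_weights(history, beta=0.9))   # leans toward expert 1
print(discounted_exp_weights(history, beta=1.0))   # still favors expert 0
```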

Relevance:

10.00%

Publisher:

Abstract:

We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assures that, for all densities, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel, plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
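In symbols, the kernel-case guarantee reads roughly as follows (our transcription of the abstract's bound, with $f_{n,h,K}$ the kernel estimate built with bandwidth $h$ and kernel $K$, and $(\hat h, \hat K)$ the selected pair):

```latex
% Transcription (our notation) of the L1 bound stated in the abstract:
% the selected bandwidth/kernel pair is within a factor of about 3 of optimal.
\[
\int \bigl| f_{n,\hat h,\hat K} - f \bigr|
\;\le\; 3 \inf_{h,\,K} \int \bigl| f_{n,h,K} - f \bigr|
\;+\; C \sqrt{\frac{\log n}{n}},
\]
% where C depends only on the complexity of the family of kernels used.
```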

Relevance:

10.00%

Publisher:

Abstract:

In a previous paper a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers for the first time multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto optimal set that can include more solutions than it has been possible to find in the publications surveyed. To solve the GMM-model, this paper proposes a multi-objective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
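At the core of any SPEA-inspired MOEA is the Pareto dominance test used to maintain an archive of non-dominated solutions. A minimal generic sketch (not the paper's implementation), assuming all objectives are to be minimized:

```python
# Sketch of Pareto dominance and non-dominated filtering, the building
# block of SPEA-style archives. Generic illustration; objectives minimized.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Example with two objectives (say, cost and delay of a multicast tree).
pop = [(1, 5), (2, 2), (3, 1), (2, 4), (4, 4)]
print(non_dominated(pop))  # [(1, 5), (2, 2), (3, 1)]
```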

Relevance:

10.00%

Publisher:

Abstract:

Considering a pure coordination game with a large number of equivalent equilibria, we argue, first, that a focal point that is itself not a Nash equilibrium and is Pareto dominated by all Nash equilibria may attract the players' choices. Second, we argue that such a non-equilibrium focal point may act as an equilibrium selection device that the players use to coordinate on a closely related small subset of Nash equilibria. We present theoretical as well as experimental support for these two new roles of focal points as coordination devices.

Relevance:

10.00%

Publisher:

Abstract:

Sometimes, a degree of monopoly is concealed behind entrepreneurial profit. What happens to investment evaluation models when the entrepreneur's skill consists of the capacity to capture the regulator?

Relevance:

10.00%

Publisher:

Abstract:

In Catalan, sequences of sibilants are never pronounced as such. In most contexts all varieties coincide in the «strategies» used to avoid these sequences, namely epenthesis or deletion. Variation is only found in the domain of pronominal clitics (but not with other types of clitics). One source of variation is accounted for by decomposing a general constraint into two specific ones, which implies partial constraint reranking. The other source of variation, which involves a case of apparent opacity, is explained through an Output-Output constraint that makes reference to paradigmatic relations.

Relevance:

10.00%

Publisher:

Abstract:

Daily precipitation is recorded as the total amount of water collected by a rain-gauge in 24 h. Events are modelled as a Poisson process and the 24 h precipitation by a Generalized Pareto Distribution (GPD) of excesses. Hazard assessment is complete when estimates of the Poisson rate and the distribution parameters, together with a measure of their uncertainty, are obtained. The shape parameter of the GPD determines the support of the variable: the Weibull domain of attraction (DA) corresponds to variables with finite support, as it should be for natural phenomena. However, the Fréchet DA has been reported for daily precipitation, which implies an infinite support and a heavy-tailed distribution. We use the fact that a log-scale is better suited to the type of variable analyzed to overcome this inconsistency, thus showing that using the appropriate natural scale can be extremely important for proper hazard assessment. The approach is illustrated with precipitation data from the Eastern coast of the Iberian Peninsula affected by severe convective precipitation. The estimation is carried out using Bayesian techniques.
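For reference, the GPD of excesses over a threshold and the role of the shape parameter can be written in the standard parametrization (our notation, not specific to the paper):

```latex
% Standard GPD for excesses y = x - u over a threshold u (our notation):
\[
H(y \mid \xi, \sigma) = 1 - \left( 1 + \frac{\xi y}{\sigma} \right)^{-1/\xi},
\qquad \sigma > 0,
\]
% with the \xi = 0 case read as the limit 1 - \exp(-y/\sigma).
% \xi < 0: Weibull domain of attraction, finite support y \in [0, -\sigma/\xi];
% \xi > 0: Fréchet domain of attraction, infinite support and a heavy tail.
```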