99 results for Distorted probabilities
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined and the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities, obtained by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
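For orientation, a sketch (not taken verbatim from the cited papers) of the structure this abstract alludes to: the Aitchison inner product on the D-part simplex and its natural analogue for densities on a bounded support I, into which the simplex expression passes as the partition is refined.

```latex
\[
\langle \mathbf{x},\mathbf{y}\rangle_a
  = \frac{1}{2D}\sum_{i=1}^{D}\sum_{j=1}^{D}
    \ln\frac{x_i}{x_j}\,\ln\frac{y_i}{y_j},
  \qquad \mathbf{x},\mathbf{y}\in S^D,
\]
\[
\langle f,g\rangle
  = \frac{1}{2\,\lambda(I)}\int_I\!\!\int_I
    \ln\frac{f(s)}{f(t)}\,\ln\frac{g(s)}{g(t)}\,\mathrm{d}s\,\mathrm{d}t ,
\]
```

where λ(I) denotes the length of the support; the exact normalization conventions should be checked against the cited references.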
Abstract:
Interview with Carles Cuadras Avellana
Abstract:
In this paper a novel methodology is introduced that aims at minimizing the probability of network failure and the failure impact (in terms of QoS degradation) while optimizing resource consumption. A detailed study of MPLS recovery techniques and their GMPLS extensions is also presented. In this scenario, some features for reducing the failure impact while offering minimum failure probabilities are also analyzed. Novel two-step routing algorithms using this methodology are proposed. Results show that these methods offer high protection levels with optimal resource consumption.
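As a purely illustrative sketch (not the paper's algorithm), a two-step routing scheme in this spirit can be built from per-link failure probabilities: first a working path minimizing the end-to-end failure probability, then a link-disjoint backup path on the remaining links. The graph representation and the `p_fail` link attribute below are assumptions of the example.

```python
import heapq
from math import log

def dijkstra(graph, src, dst, weight):
    """Generic Dijkstra; graph maps node -> {neighbor: link_data}."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, link in graph[u].items():
            nd = d + weight(link)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def two_step_route(graph, src, dst):
    """Step 1: working path with minimal end-to-end failure probability,
    obtained by minimizing the sum of -log(1 - p_fail) over links.
    Step 2: link-disjoint backup path on the remaining links."""
    working = dijkstra(graph, src, dst, lambda l: -log(1.0 - l["p_fail"]))
    if working is None:
        return None, None
    used = set(zip(working, working[1:])) | set(zip(working[1:], working))
    residual = {u: {v: l for v, l in nbrs.items() if (u, v) not in used}
                for u, nbrs in graph.items()}
    backup = dijkstra(residual, src, dst, lambda l: -log(1.0 - l["p_fail"]))
    return working, backup
```

Minimizing the sum of -log(1 - p_fail) is equivalent to maximizing the product of link availabilities along the path.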
Abstract:
This paper studies the limits of discrete-time repeated games with public monitoring. We solve and characterize the Abreu, Milgrom and Pearce (1991) problem. We find that in the "bad" ("good") news model the lower (higher) magnitude events suggest cooperation, i.e., zero punishment probability, while the higher (lower) magnitude events suggest defection, i.e., punishment with probability one. Public correlation is used to connect these two sets of signals and to make the enforceability constraint bind. The dynamic and limit behavior of the punishment probabilities for variations in ... (the discount rate) and ... (the time interval) are characterized, as well as the limit payoffs for all these scenarios (we also introduce uncertainty in the time domain). The obtained ... limits are, to the best of our knowledge, new. The obtained ... limits coincide with Fudenberg and Levine (2007) and Fudenberg and Olszewski (2011), with the exception that we clearly state the precise informational conditions that cause the limit to converge from above, to converge from below, or to degenerate. JEL: C73, D82, D86. KEYWORDS: Repeated Games, Frequent Monitoring, Random Public Monitoring, Moral Hazard, Stochastic Processes.
Abstract:
The log-ratio methodology makes available powerful tools for analyzing compositional data. Nevertheless, the use of this methodology is only possible for those data sets without null values. Consequently, in those data sets where zeros are present, a previous treatment becomes necessary. Recent advances in the treatment of compositional zeros have centered especially on zeros of a structural nature and on rounded zeros. These tools do not contemplate the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and we introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and we estimate the posterior probabilities. Then we apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied. Key words: count data, multiplicative replacement, composition, log-ratio analysis
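A minimal numerical sketch of the Bayesian-multiplicative idea, assuming a symmetric Dirichlet prior (the paper's exact prior strength and adjustment may differ): zero counts receive their posterior estimate and the remaining parts are rescaled multiplicatively so that their ratios are preserved and the composition still sums to one.

```python
import numpy as np

def bayes_mult_zero_replacement(counts, s=None, t=None):
    """Sketch of a Bayesian-multiplicative treatment of count zeros."""
    counts = np.asarray(counts, dtype=float)
    D, N = counts.size, counts.sum()
    t = np.full(D, 1.0 / D) if t is None else np.asarray(t, dtype=float)  # prior weights
    s = float(D) if s is None else float(s)                               # prior strength
    c = counts / N                                  # observed proportions
    post = t * s / (N + s)                          # prior mass assigned to each part
    zero = counts == 0
    x = np.empty(D)
    x[zero] = post[zero]                            # replace zeros by posterior estimate
    x[~zero] = c[~zero] * (1.0 - post[zero].sum())  # multiplicative adjustment of non-zeros
    return x

print(bayes_mult_zero_replacement([12, 0, 5, 3]))
```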
Abstract:
This paper focuses on one of the methods for bandwidth allocation in an ATM network: the convolution approach. The convolution approach permits an accurate study of the system load in statistical terms by accumulated calculations, since probabilistic results of the bandwidth allocation can be obtained. Nevertheless, the convolution approach has a high cost in terms of calculation and storage requirements. This makes real-time calculations difficult, so many authors do not consider this approach. With the aim of reducing the cost, we propose using the multinomial distribution function: the enhanced convolution approach (ECA). This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements and makes a simple deconvolution process possible. The ECA is used in connection acceptance control, and some results are presented.
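For illustration, a sketch of the plain convolution approach that the ECA refines (the multinomial-based enhancement and the deconvolution step are not reproduced here); the traffic classes of independent on/off sources and the example figures are assumptions.

```python
import numpy as np
from scipy.stats import binom

def aggregate_bw_distribution(source_types):
    """Plain convolution approach (sketch): each class has n independent on/off
    sources, active with probability p and demanding b bandwidth units, so the
    class demand is b * Binomial(n, p); the aggregate demand is the convolution
    of the class distributions."""
    dist = np.array([1.0])                        # P(aggregate demand = 0) = 1 initially
    for n, p, b in source_types:
        pmf = binom.pmf(np.arange(n + 1), n, p)   # class demand in multiples of b
        expanded = np.zeros(n * b + 1)
        expanded[::b] = pmf                       # place mass at 0, b, 2b, ...
        dist = np.convolve(dist, expanded)        # accumulate classes
    return dist                                   # dist[k] = P(aggregate demand = k units)

def overflow_probability(source_types, link_capacity):
    dist = aggregate_bw_distribution(source_types)
    return dist[link_capacity + 1:].sum()         # P(demand exceeds the link capacity)

# Example: 30 sources at p=0.1 needing 2 units, 10 sources at p=0.3 needing 5 units
print(overflow_probability([(30, 0.1, 2), (10, 0.3, 5)], link_capacity=40))
```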
Abstract:
The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. With the aim of reducing the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements, which in turn makes a simple deconvolution process possible. Moreover, under certain conditions additional improvements may be achieved.
Abstract:
A novel test of spatial independence of the distribution of crystals or phases in rocks based on compositional statistics is introduced. It improves and generalizes the common joins-count statistics known from map analysis in geographic information systems. Assigning phases independently to objects in R^D is modelled by a single-trial multinomial random function Z(x), where the probabilities of phases add to one and are explicitly modelled as compositions in the K-part simplex S^K. Thus, apparent inconsistencies of the tests based on the conventional joins-count statistics and their possibly contradictory interpretations are avoided. In practical applications we assume that the probabilities of phases do not depend on the location but are identical everywhere in the domain of definition. Thus, the model involves the sum of r independent identically multinomially distributed 1-trial random variables, which is an r-trial multinomially distributed random variable. The probabilities of the distribution of the r counts can be considered as a composition in the Q-part simplex S^Q. They span the so-called Hardy-Weinberg manifold H, which is proved to be a (K-1)-affine subspace of S^Q. This is a generalisation of the well-known Hardy-Weinberg law of genetics. If the assignment of phases accounts for some kind of spatial dependence, then the r-trial probabilities do not remain on H. This suggests using the Aitchison distance from the observed probabilities to H to test dependence. Moreover, when there is a spatial fluctuation of the multinomial probabilities, the observed r-trial probabilities move on H. This shift can be used to check for these fluctuations. A practical procedure and an algorithm to perform the test have been developed. Some cases applied to simulated and real data are presented. Key words: spatial distribution of crystals in rocks, spatial distribution of phases, joins-count statistics, multinomial distribution, Hardy-Weinberg law, Hardy-Weinberg manifold, Aitchison geometry
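A rough numerical sketch of the ingredients of such a test, for the simplest case of K = 2 phases. The actual statistic measures the Aitchison distance from the observed composition to the whole manifold H; here only the distance to a single fitted point on H is evaluated, and the data are made up for illustration.

```python
import numpy as np
from math import comb

def clr(x):
    """Centred log-ratio transform of a composition (zeros treated beforehand)."""
    lx = np.log(np.asarray(x, dtype=float))
    return lx - lx.mean()

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr coefficients."""
    return np.linalg.norm(clr(x) - clr(y))

def multinomial_composition(p, r):
    """Composition of r-trial probabilities for K = 2 phases (binomial case),
    i.e. a point on the Hardy-Weinberg-type manifold H described above."""
    return np.array([comb(r, k) * p**k * (1 - p)**(r - k) for k in range(r + 1)])

# Observed frequencies of the counts 0..r in r-object cells versus a fitted point on H
observed = np.array([0.28, 0.40, 0.22, 0.10])   # hypothetical data, r = 3
p_hat = 0.35                                    # estimated phase probability
print(aitchison_distance(observed, multinomial_composition(p_hat, r=3)))
```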
Abstract:
We study the social, demographic and economic origins of social security. The data for the U.S. and for a cross section of countries suggest that urbanization and industrialization are associated with the rise of social insurance. We describe an OLG model in which demographics, technology, and social security are linked together in a political economy equilibrium. In the model economy, there are two locations (sectors), the farm (agricultural) and the city (industrial) and the decision to migrate from rural to urban locations is endogenous and linked to productivity differences between the two locations and survival probabilities. Farmers rely on land inheritance for their old age and do not support a pay-as-you-go social security system. With structural change, people migrate to the city, the land loses its importance and support for social security arises. We show that a calibrated version of this economy, where social security taxes are determined by majority voting, is consistent with the historical transformation in the United States.
Abstract:
The dynamics of homogeneously heated granular gases which fragment due to particle collisions is analyzed. We introduce a kinetic model which accounts for correlations induced at the grain collisions and analyze both the kinetics and the relevant distribution functions these systems develop. The work combines analytical and numerical studies based on direct simulation Monte Carlo calculations. A broad family of fragmentation probabilities is considered, and its implications for the system kinetics are discussed. We show that generically these driven materials evolve asymptotically into a dynamical scaling regime. If the fragmentation probability tends to a constant, the grain number diverges at a finite time, leading to a shattering singularity. If the fragmentation probability vanishes, then the number of grains grows monotonically as a power law. We consider different homogeneous thermostats and show that the kinetics of these systems depends weakly on both the grain inelasticity and the driving. We observe that fragmentation plays a relevant role in the shape of the velocity distribution of the particles. When the fragmentation is driven by local stochastic events, the long-velocity tail is essentially exponential, independently of the heating frequency and the breaking rule. However, for a Lowe-Andersen thermostat, numerical evidence strongly supports the conjecture that the scaled velocity distribution follows a generalized exponential behavior f(c) ~ exp(−cⁿ), with n ≈ 1.2, regardless of the fragmentation mechanism.
Abstract:
In this paper we present a simple theory-based measure of the variations in aggregate economic efficiency: the gap between the marginal product of labor and the household's consumption/leisure tradeoff. We show that this indicator corresponds to the inverse of the markup of price over social marginal cost, and give some evidence in support of this interpretation. We then show that, with some auxiliary assumptions, our gap variable may be used to measure the efficiency costs of business fluctuations. We find that the latter costs are modest on average. However, to the extent that the flexible price equilibrium is distorted, the gross efficiency losses from recessions and gains from booms may be large. Indeed, we find that the major recessions involved large efficiency losses. These results hold for reasonable parameterizations of the Frisch elasticity of labor supply, the coefficient of relative risk aversion, and steady state distortions.
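Schematically, and with the exact timing and definitions left to the paper, the gap described above can be written in logs as

```latex
\[
gap_t \;=\; mrs_t - mpn_t \;=\; -\log \mu_t ,
\qquad
\mu_t \;=\; \frac{MPN_t}{MRS_t},
\]
```

where mrs_t and mpn_t are the logs of the household's marginal rate of substitution between consumption and leisure and of the marginal product of labor, and μ_t is the markup of price over social marginal cost, so the gap corresponds to the (log) inverse markup.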
Abstract:
One of the assumptions of the Capacitated Facility Location Problem (CFLP) is that demand is known and fixed. Most often, this is not the case when managers take strategic decisions such as locating facilities and assigning demand points to those facilities. In this paper we consider demand as stochastic and we model each of the facilities as an independent queue. Stochastic models of manufacturing systems and deterministic location models are put together in order to obtain a formula for the backlogging probability at a potential facility location. Several solution techniques have been proposed to solve the CFLP. One of the most recently proposed heuristics, a Reactive Greedy Adaptive Search Procedure, is implemented in order to solve the model formulated. We present some computational experiments in order to evaluate the heuristic's performance and to illustrate the use of this new formulation for the CFLP. The paper finishes with a simple simulation exercise.
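The abstract does not spell out the queueing model, so as a generic stand-in (the paper derives its own formula), an M/M/1 sketch of a backlogging probability at a candidate facility:

```python
def backlog_probability(arrival_rate, service_rate, max_queue):
    """Hypothetical M/M/1 sketch: probability that demand assigned to a facility
    finds more than `max_queue` orders already in the system (i.e. is backlogged).
    For a stable M/M/1 queue with utilization rho = lambda/mu, P(N > K) = rho**(K+1)."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        return 1.0          # unstable facility: backlog with certainty in the long run
    return rho ** (max_queue + 1)

# Example: a facility receiving 8 orders/day, serving 10/day, tolerating 5 waiting orders
print(backlog_probability(8.0, 10.0, 5))
```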
Abstract:
Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, each itinerary and fare combination for an airline) as sale restrictions and demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. In this paper, we propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit and that our upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid price control policy. Computational experiments indicate that our approach is quite fast, able to scale to industrial problems and can provide significant improvements over standard benchmarks.
Abstract:
Scoring rules that elicit an entire belief distribution through the elicitation of point beliefs are time-consuming and demand considerable cognitive effort. Moreover, the results are valid only when agents are risk-neutral or when one uses probabilistic rules. We investigate a class of rules in which the agent has to choose an interval and is rewarded (deterministically) on the basis of the chosen interval and the realization of the random variable. We formulate an efficiency criterion for such rules and present a specific interval scoring rule. For single-peaked beliefs, our rule gives information about both the location and the dispersion of the belief distribution. These results hold for all concave utility functions.
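The paper's specific rule is not reproduced in the abstract; the following is a generic, hypothetical deterministic interval rule that conveys the trade-off such rules create between covering the realization and reporting a tight interval.

```python
def interval_score(lower, upper, outcome, penalty=0.5):
    """Hypothetical deterministic interval rule (not necessarily the paper's):
    the agent earns 1 if the realization falls inside the reported interval,
    minus a penalty proportional to the interval's length."""
    hit = 1.0 if lower <= outcome <= upper else 0.0
    return hit - penalty * (upper - lower)

# A wider interval is hit more often but costs more:
print(interval_score(0.2, 0.6, outcome=0.45))   # hit, length 0.4 -> score 0.8
print(interval_score(0.0, 1.0, outcome=0.45))   # hit, length 1.0 -> score 0.5
```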
Abstract:
This paper explores biases in the elicitation of utilities under risk and the contribution that generalizations of expected utility can make to the resolution of these biases. We used five methods to measure utilities under risk and found clear violations of expected utility. Of the theories studied, prospect theory was most consistent with our data. The main improvement of prospect theory over expected utility was in comparisons between a riskless and a risky prospect (riskless-risk methods). We observed no improvement over expected utility in comparisons between two risky prospects (risk-risk methods). An explanation for why we found no improvement of prospect theory over expected utility in risk-risk methods may be that there was less overweighting of small probabilities in our study than has commonly been observed.
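For context, a common parametric probability weighting function used in prospect theory (Tversky and Kahneman, 1992), which makes the over/underweighting pattern explicit; the paper's own parametric choices may differ.

```python
def tk_probability_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function; gamma = 0.61 is
    their estimate for gains. Small probabilities are overweighted (w(p) > p),
    moderate-to-large probabilities are underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.1, 0.5, 0.9):
    print(p, round(tk_probability_weight(p), 3))
```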