951 results for Algorithmic Probability
Abstract:
Voting power is commonly measured using a probability. But what kind of probability is this? Is it a degree of belief or an objective chance or some other sort of probability? The aim of this paper is to answer this question. The answer depends on the use to which a measure of voting power is put. Some objectivist interpretations of probabilities are appropriate when we employ such a measure for descriptive purposes. By contrast, when voting power is used to normatively assess voting rules, the probabilities are best understood as classical probabilities, which count possibilities. This is so because, from a normative stance, voting power is most plausibly taken to concern rights and thus possibilities. The classical interpretation also underwrites the use of the Bernoulli model upon which the Penrose/Banzhaf measure is based.
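As a minimal illustration (not from the paper) of how the classical interpretation counts possibilities, the Penrose/Banzhaf measure under the Bernoulli model can be computed by enumerating all equally likely yes/no voting profiles and recording how often each voter is decisive. The weighted voting rule below is hypothetical:

from itertools import product

def banzhaf(weights, quota):
    """Penrose/Banzhaf power: for each voter, the fraction of the equally
    likely yes/no profiles of the other voters (the Bernoulli model) in
    which that voter's vote is decisive."""
    n = len(weights)
    swings = [0] * n
    for profile in product((0, 1), repeat=n):   # all 2^n voting profiles
        total = sum(w for w, v in zip(weights, profile) if v)
        for i in range(n):
            # voter i swings: the coalition wins with i but loses without i
            if profile[i] == 1 and total >= quota and total - weights[i] < quota:
                swings[i] += 1
    # each voter votes 'yes' in 2^(n-1) of the profiles
    return [s / 2 ** (n - 1) for s in swings]

# Hypothetical rule: three voters with weights 4, 2, 1 and quota 5
print(banzhaf([4, 2, 1], quota=5))   # [0.75, 0.25, 0.25]

Note how the measure assigns voters 2 and 3 equal power despite their unequal weights: under the classical, possibility-counting reading, they swing exactly the same number of coalitions.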
Abstract:
This article provides an importance sampling algorithm for computing the probability of ruin with recuperation of a spectrally negative Lévy risk process with light-tailed downward jumps. Ruin with recuperation corresponds to the following double passage event: for some t ∈ (0, ∞), the risk process starting at level x ∈ [0, ∞) falls below the null level during the period [0, t] and returns above the null level at the end of the period t. The proposed Monte Carlo estimator is logarithmically efficient, as t, x → ∞, when y = t/x is constant and below a certain bound.
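The abstract does not spell out the estimator, but the standard template it builds on is exponential tilting. The sketch below shows Siegmund's importance sampling scheme for the much simpler infinite-horizon ruin probability of a Cramér-Lundberg process with Exp(mu) claims, where a closed form is available as a check; all parameter values are hypothetical and this is not the paper's algorithm:

import random, math

def ruin_prob_is(x, lam=1.0, mu=1.0, c=1.5, n_paths=50_000):
    """Siegmund's exponential-tilting importance sampling for psi(x),
    the infinite-horizon ruin probability of a Cramér-Lundberg process
    with Poisson(lam) arrivals, Exp(mu) claims and premium rate c.
    For Exp(mu) claims the adjustment coefficient is gamma = mu - lam/c."""
    gamma = mu - lam / c              # Lundberg adjustment coefficient
    lam_t = lam * mu / (mu - gamma)   # tilted claim-arrival rate (= c*mu)
    mu_t = mu - gamma                 # tilted claims are Exp(mu - gamma)
    total = 0.0
    for _ in range(n_paths):
        s = 0.0                       # claim surplus S(t) = sum(Y_i) - c*t
        while s <= x:                 # under the tilt, ruin is certain
            s -= c * random.expovariate(lam_t)  # premium until next claim
            s += random.expovariate(mu_t)       # tilted claim size
        total += math.exp(-gamma * s) # likelihood ratio e^{-gamma * S(tau)}
    return total / n_paths

x = 20.0
print(ruin_prob_is(x))                           # IS estimate
print((1.0 / 1.5) * math.exp(-(1 - 1 / 1.5) * x))  # exact (lam/(c*mu)) e^{-gamma*x}

Because ruin is certain under the tilted dynamics and the likelihood ratio is bounded by e^{-gamma*x}, the estimator keeps a controlled relative error even when the target probability is tiny, which is exactly what plain Monte Carlo cannot do.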
Abstract:
A large-deviations-type approximation to the probability of ruin within a finite time for the compound Poisson risk process perturbed by diffusion is derived. This approximation is based on the saddlepoint method and generalizes the approximation for the non-perturbed risk process by Barndorff-Nielsen and Schmidli (Scand Actuar J 1995(2):169–186, 1995). An importance sampling approximation to this probability of ruin is also provided. Numerical illustrations assess the accuracy of the saddlepoint approximation using importance sampling as a benchmark. The relative deviations between the saddlepoint approximation and importance sampling are very small, even for extremely small probabilities of ruin. The saddlepoint approximation is, however, substantially faster to compute.
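As a hedged illustration of why the saddlepoint route is so fast, the sketch below applies the Lugannani-Rice form of the saddlepoint approximation to a much simpler, non-perturbed quantity: the upper tail of aggregate compound Poisson claims with Exp(mu) sizes over a horizon t, for which the cumulant generating function K(s) = lam*t*(mu/(mu-s) - 1) has an explicit saddlepoint. The parameters are hypothetical and this is not the paper's approximation:

import math

def saddlepoint_tail(a, lam=1.0, mu=1.0, t=10.0):
    """Lugannani-Rice approximation to P(S > a) for S compound Poisson
    with rate lam*t and Exp(mu) claim sizes; valid above the mean."""
    lt = lam * t
    assert a > lt / mu, "use above the mean K'(0) = lam*t/mu"
    s_hat = mu - math.sqrt(lt * mu / a)           # solves K'(s) = a
    K = lt * (mu / (mu - s_hat) - 1.0)            # K(s_hat)
    K2 = 2.0 * lt * mu / (mu - s_hat) ** 3        # K''(s_hat)
    w = math.copysign(math.sqrt(2.0 * (s_hat * a - K)), s_hat)
    u = s_hat * math.sqrt(K2)
    Phi = 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))   # standard normal CDF
    phi = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
    return 1.0 - Phi + phi * (1.0 / u - 1.0 / w)

print(saddlepoint_tail(30.0))   # tail far above the mean lam*t/mu = 10

The whole computation is a handful of closed-form evaluations, whereas an importance sampling benchmark of the same tail needs many simulated paths; this is the speed gap the abstract reports.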
Abstract:
To reduce the costs and labor associated with predicting the genotypic mean (GM) of a synthetic variety (SV) of maize (Zea mays L.), breeders can develop SVs from L lines and s single crosses (SynL,SC) instead of L+2s lines (SynL). The objective of this work was to derive and study formulae for the inbreeding coefficient (IC) and GM of SynL,SC, SynL, and the SV derived from (L+2s)/2 single crosses (SynSC). All SVs were derived from the same L+2s unrelated lines whose IC is FL, and each parent of an SV was represented by m plants. An a priori probability equation for the IC was used. The main results were: 1) the largest and smallest GMs correspond to SynL and SynL,SC, respectively; 2) the GM predictors with the largest and intermediate precision are those for SynL and SynL,SC, respectively; 3) only when FL = 1, or when m is large, are SynL and SynSC the same population, but only with SynSC do prediction costs and labor undergo the maximum decrease, although its prediction precision is the lowest. To decide which SV to develop, breeders should also consider the availability of lines, single crosses, manpower, and land area, as well as budget, target farmers, target environments, etc.
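The paper's derived formulae are not reproduced in the abstract; the classical baseline for this kind of prediction is Wright's equation for a synthetic formed from n parents, GM_hat = F1_mean - (F1_mean - parent_mean)/n, where F1_mean is the mean of all pairwise crosses among the parents. A minimal sketch with hypothetical yield values:

def wright_prediction(f1_mean, parent_mean, n):
    """Wright's classical prediction of a synthetic's genotypic mean
    from n parents: GM_hat = F1_mean - (F1_mean - parent_mean) / n.
    This is background only, not the paper's derived formulae."""
    return f1_mean - (f1_mean - parent_mean) / n

# Hypothetical values (t/ha): mean of all crosses 8.0, mean of parent lines 5.0
print(wright_prediction(8.0, 5.0, n=8))   # 7.625

The formulae studied in the paper refine this idea for parents that are mixtures of lines and single crosses, with finite m plants per parent and parental inbreeding FL.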
Abstract:
Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and in the spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial-resolution DEMs on coastal inundation mapping. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. Each of 1,000 simulated error surfaces was added to the original DEM, and the result was reclassified using a hydrologically correct bathtub method. The probability of inundation under a scenario combining a 1-in-100-year storm event with a 1 m SLR was calculated as the proportion of the 1,000 simulations in which a location was inundated. This probabilistic approach can be used in a risk-averse decision-making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% exceedance probability, the inundated area was approximately 11% larger than that mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey the uncertainties inherent in spatial data and analysis.
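The Monte Carlo step reduces to: perturb the DEM with a spatially correlated error field, threshold it, and count exceedances per cell. The sketch below fakes the correlated errors by smoothing white noise (the study itself uses regression-kriging and sequential Gaussian simulation) and applies a simple threshold bathtub without the hydrological connectivity check; the DEM, water level, and error parameters are all hypothetical:

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
dem = rng.uniform(0.0, 3.0, size=(200, 200))   # hypothetical DEM (m)
water_level = 1.8                              # storm tide plus 1 m SLR (m)
error_sd, corr_px, n_sims = 0.15, 5, 1000      # assumed error model

counts = np.zeros_like(dem)
for _ in range(n_sims):
    # smoothed white noise as a stand-in for sequential Gaussian simulation
    noise = gaussian_filter(rng.normal(size=dem.shape), sigma=corr_px)
    noise *= error_sd / noise.std()            # rescale to the target error SD
    counts += (dem + noise <= water_level)     # simple bathtub threshold

p_inundation = counts / n_sims                 # per-cell inundation probability
print((p_inundation >= 0.01).mean())           # share of cells at >= 1% probability

Mapping p_inundation rather than a single deterministic line is what lets planners pick an exceedance probability matching their risk tolerance, as in the 1% example above.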