63 results for Probabilities.
Abstract:
We consider evaluating the UK Monetary Policy Committee's inflation density forecasts using probability integral transform goodness-of-fit tests. These tests evaluate the whole forecast density. We also consider whether the probabilities assigned to inflation being in certain ranges are well calibrated, where the ranges are chosen to be those of particular relevance to the MPC, given its remit of maintaining inflation rates in a band around per annum. Finally, we discuss the decision-based approach to forecast evaluation in relation to the MPC forecasts.
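The probability integral transform (PIT) evaluation described above can be sketched in a few lines: each outcome is passed through its forecast CDF, and if the density forecasts are correct the transformed values are iid uniform on [0, 1], which can be checked with a uniformity test. A minimal illustration on synthetic data (the Gaussian forecast densities and all numbers are hypothetical, not the MPC's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated outcomes and (hypothetical) Gaussian density forecasts
y = rng.normal(2.0, 1.0, size=200)   # realized inflation rates
mu, sigma = 2.0, 1.0                 # forecast mean and standard deviation

# Probability integral transform: z_t = F_t(y_t)
z = stats.norm.cdf(y, loc=mu, scale=sigma)

# If the forecast densities are correct, z should be iid U(0,1);
# a Kolmogorov-Smirnov test checks uniformity.
ks_stat, p_value = stats.kstest(z, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```

Since the simulated forecast density matches the data-generating process here, the test should not reject uniformity.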
Abstract:
Techniques are proposed for evaluating forecast probabilities of events. The tools are especially useful when, as in the case of the Survey of Professional Forecasters (SPF) expected probability distributions of inflation, recourse cannot be made to the method of construction in the evaluation of the forecasts. The tests of efficiency and conditional efficiency are applied to the forecast probabilities of events of interest derived from the SPF distributions, and supplement a whole-density evaluation of the SPF distributions based on the probability integral transform approach.
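A basic calibration check for forecast probabilities of events, in the spirit of the efficiency tests mentioned above, regresses the realized event indicator on the forecast probability; well-calibrated forecasts give an intercept near zero and a slope near one. A minimal sketch on synthetic, well-calibrated-by-construction data (not SPF data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical forecast probabilities of an event and realized outcomes.
n = 300
p_forecast = rng.uniform(0.05, 0.95, size=n)
outcome = (rng.random(n) < p_forecast).astype(float)  # calibrated by design

# Calibration regression: outcome_t = a + b * p_t + error.
# Forecasts are well calibrated when a is near 0 and b near 1.
X = np.column_stack([np.ones(n), p_forecast])
(a, b), *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"intercept a = {a:.2f}, slope b = {b:.2f}")
```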
Abstract:
We consider different methods for combining probability forecasts. In empirical exercises, the data generating process of the forecasts and the event being forecast is not known, and therefore the optimal form of combination will also be unknown. We consider the properties of various combination schemes for a number of plausible data generating processes, and indicate which types of combinations are likely to be useful. We also show that whether forecast encompassing is found to hold between two rival sets of forecasts or not may depend on the type of combination adopted. The relative performances of the different combination methods are illustrated, with an application to predicting recession probabilities using leading indicators.
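Two standard schemes for combining event probabilities are the linear opinion pool (a weighted average) and the logarithmic opinion pool (a renormalized weighted geometric mean). A minimal sketch, with illustrative weights and forecast values only:

```python
import numpy as np

def linear_pool(probs, weights):
    """Linear opinion pool: weighted average of forecast probabilities."""
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(weights @ probs)

def log_pool(probs, weights):
    """Logarithmic opinion pool: weighted geometric mean, renormalized
    so the combined event/non-event probabilities sum to one."""
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    num = np.prod(probs ** weights)
    den = num + np.prod((1.0 - probs) ** weights)
    return float(num / den)

# Two rival recession-probability forecasts, equally weighted
p = [0.2, 0.6]
w = [0.5, 0.5]
print(linear_pool(p, w))  # ~0.4
print(log_pool(p, w))     # slightly lower: the log pool is less moderate
```

The two pools generally disagree, which is one reason conclusions about forecast encompassing can depend on the combination scheme adopted.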
Abstract:
Climate model ensembles are widely heralded for their potential to quantify uncertainties and generate probabilistic climate projections. However, such technical improvements to modeling science will do little to deliver on their ultimate promise of improving climate policymaking and adaptation unless the insights they generate can be effectively communicated to decision makers. While some of these communicative challenges are unique to climate ensembles, others are common to hydrometeorological modeling more generally, and to the tensions arising between the imperatives for saliency, robustness, and richness in risk communication. The paper reviews emerging approaches to visualizing and communicating climate ensembles and compares them to the more established and thoroughly evaluated communication methods used in the numerical weather prediction domains of day-to-day weather forecasting (in particular probabilities of precipitation), hurricane and flood warning, and seasonal forecasting. This comparative analysis informs recommendations on best practice for climate modelers, as well as prompting some further thoughts on key research challenges to improve the future communication of climate change uncertainties.
Abstract:
Useful probabilistic climate forecasts on decadal timescales should be reliable (i.e. forecast probabilities match the observed relative frequencies), but this is seldom examined. This paper assesses a necessary condition for reliability, that the ratio of ensemble spread to forecast error be close to one, for seasonal to decadal sea surface temperature retrospective forecasts from the Met Office Decadal Prediction System (DePreSys). Factors which may affect reliability are diagnosed by comparing this spread-error ratio for an initial condition ensemble and two perturbed physics ensembles for initialized and uninitialized predictions. At lead times of less than 2 years, the initialized ensembles tend to be under-dispersed, and hence produce overconfident and therefore unreliable forecasts. For longer lead times, all three ensembles are predominantly over-dispersed. Such over-dispersion is primarily related to excessive inter-annual variability in the climate model. These findings highlight the need to carefully evaluate simulated variability in seasonal and decadal prediction systems.
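The spread-error condition can be computed in a few lines: compare the mean intra-ensemble spread to the RMSE of the ensemble mean. A minimal sketch on synthetic data, ignoring finite-ensemble corrections (no DePreSys output is used; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical retrospective forecasts: n_starts forecast dates,
# n_members ensemble members, plus the verifying observations.
n_starts, n_members = 40, 10
ensemble = rng.normal(0.0, 1.0, size=(n_starts, n_members))
obs = rng.normal(0.0, 1.0, size=n_starts)

# Ensemble spread: square root of the mean intra-ensemble variance.
spread = np.sqrt(np.mean(np.var(ensemble, axis=1, ddof=1)))

# Forecast error: RMSE of the ensemble mean against observations.
rmse = np.sqrt(np.mean((ensemble.mean(axis=1) - obs) ** 2))

# A ratio near one is a necessary condition for reliable probabilities;
# < 1 indicates under-dispersion (overconfidence), > 1 over-dispersion.
ratio = spread / rmse
print(f"spread/error ratio = {ratio:.2f}")
```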
Abstract:
Solar wind/magnetosheath plasma in the magnetosphere can be identified using a component that has a higher charge state, lower density and, at least soon after its entry into the magnetosphere, lower energy than plasma from a terrestrial source. We survey here observations taken over 3 years of He2+ ions made by the Magnetospheric Ion Composition Sensor (MICS) of the Charge and Mass Magnetospheric Ion Composition Experiment (CAMMICE) instrument aboard POLAR. The occurrence probability of these solar wind ions is then plotted as a function of Magnetic Local Time (MLT) and invariant latitude (Λ) for various energy ranges. For all energies observed by MICS (1.8–21.4 keV) and all solar wind conditions, the occurrence probabilities peaked around the cusp region and along the dawn flank. The solar wind conditions were filtered to see if this dawnward asymmetry is controlled by the Svalgaard-Mansurov effect (and so depends on the BY component of the interplanetary magnetic field, IMF) or by Fermi acceleration of He2+ at the bow shock (and so depends on the IMF ratio BX/BY). It is shown that the asymmetry remained persistently on the dawn flank, suggesting it was not due to effects associated with direct entry into the magnetosphere. This asymmetry, with enhanced fluxes on the dawn flank, persisted for lower energy ions (below a “cross-over” energy of about 23 keV) but reversed sense to give higher fluxes on the dusk flank at higher energies. This can be explained by the competing effects of gradient/curvature drifts and the convection electric field on ions that are convecting sunward on re-closed field lines. The lower-energy He2+ ions E × B drift dawnwards as they move earthward, whereas the higher energy ions curvature/gradient drift towards dusk. The convection electric field in the tail is weaker for northward IMF. Ions then need less energy to drift to the dusk flank, so that the cross-over energy, at which the asymmetry changes sense, is reduced.
Abstract:
Traditionally, the cusp has been described in terms of a time-stationary feature of the magnetosphere which allows access of magnetosheath-like plasma to low altitudes. Statistical surveys of data from low-altitude spacecraft have shown the average characteristics and position of the cusp. Recently, however, it has been suggested that the ionospheric footprint of flux transfer events (FTEs) may be identified as variations of the “cusp” on timescales of a few minutes. In this model, the cusp can vary in form between a steady-state feature in one limit and a series of discrete ionospheric FTE signatures in the other limit. If this time-dependent cusp scenario is correct, then the signatures of the transient reconnection events must be able, on average, to reproduce the statistical cusp occurrence previously determined from the satellite observations. In this paper, we predict the precipitation signatures which are associated with transient magnetopause reconnection, following recent observations of the dependence of dayside ionospheric convection on the orientation of the IMF. We then employ a simple model of the longitudinal motion of FTE signatures to show how such events can easily reproduce the local time distribution of cusp occurrence probabilities, as observed by low-altitude satellites. This is true even in the limit where the cusp is a series of discrete events. Furthermore, we investigate the existence of double cusp patches predicted by the simple model and show how these events may be identified in the data.
Abstract:
Linear theory, model ion-density profiles and MSIS neutral thermospheric predictions are used to investigate the stability of the auroral, topside ionosphere to oxygen cyclotron waves: variations of the critical height, above which the plasma is unstable, with field-aligned current, thermal ion density and exospheric temperature are considered. In addition, probabilities are assessed that interactions with neutral atomic gases prevent O+ ions from escaping into the magnetosphere after they have been transversely accelerated by these waves. The two studies are combined to give a rough estimate of the total O+ escape flux as a function of the field-aligned current density for an assumed rise in the perpendicular ion temperature. Charge exchange with neutral oxygen, not hydrogen, is shown to be the principal limitation to the escape of O+ ions, which occurs when the waves are driven unstable down to low altitudes. It is found that the largest observed field-aligned current densities can heat a maximum of about 5×10^14 O+ ions m^-2 to a threshold above which they are subsequently able to escape into the magnetosphere in the following 500 s. Averaged over this period, this would constitute a flux of 10^12 m^-2 s^-1, and in steady state the peak outflow would then be limited to about 10^13 m^-2 s^-1 by frictional drag on thermal O+ at lower altitudes. Maximum escape is at low plasma density unless the O+ scale height is very large. The outflow decreases with decreasing field-aligned current density and, to a lesser extent, with increasing exospheric temperature. Upward flowing ion events are evaluated as a source of O+ ions for the magnetosphere and as an explanation of the observed solar cycle variation of ring current O+ abundance.
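The quoted average flux is simple arithmetic: dividing the heated column content by the heating interval reproduces the figure in the abstract.

```python
# Back-of-envelope check of the abstract's flux estimate: 5e14 O+ ions
# per m^2 heated above the escape threshold over a 500 s interval.
heated_column = 5e14   # ions m^-2
interval = 500.0       # s
flux = heated_column / interval
print(f"{flux:.0e} ions m^-2 s^-1")  # 1e+12
```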
Abstract:
This paper takes the concept of a discouraged borrower originally formulated by Kon and Storey [Kon, Y., Storey, D.J., 2003. A theory of discouraged borrowers. Small Business Economics 21, 37–49] and examines whether discouragement is an efficient self-rationing mechanism. Using US data, it finds that riskier borrowers have higher probabilities of discouragement, which increase with longer financial relationships, suggesting that discouragement is an efficient self-rationing mechanism. It also finds that low risk borrowers are less likely to be discouraged in concentrated markets than in competitive markets and that, in concentrated markets, high risk borrowers are more likely to be discouraged the longer their financial relationships. We conclude that discouragement is more efficient in concentrated than in competitive markets.
Abstract:
We report between-subject results on the effect of monetary stakes on risk attitudes. While we find the typical risk seeking for small probabilities, risk seeking is reduced under high stakes. This suggests that utility is not consistently concave.
Abstract:
We systematically explore decision situations in which a decision maker bears responsibility for somebody else's outcomes as well as for her own in situations of payoff equality. In the gain domain we confirm the intuition that being responsible for somebody else's payoffs increases risk aversion. This is, however, not attributable to a 'cautious shift' as often thought. Indeed, looking at risk attitudes in the loss domain, we find an increase in risk seeking under responsibility. This raises issues about the nature of various decision biases under risk, and to what extent changed behavior under responsibility may depend on a social norm of caution in situations of responsibility versus naive corrections from perceived biases. To further explore this issue, we designed a second experiment investigating risk-taking behavior for gain prospects offering very small or very large probabilities of winning. For large probabilities, we find increased risk aversion, thus confirming our earlier finding. For small probabilities, however, we find an increase of risk seeking under conditions of responsibility. The latter finding thus discredits hypotheses of a social rule dictating caution under responsibility, and can be explained through flexible self-correction models predicting an accentuation of the fourfold pattern of risk attitudes predicted by prospect theory. An additional accountability mechanism does not change risk behavior, except for mixed prospects, in which it reduces loss aversion. This indicates that loss aversion is of a fundamentally different nature than probability weighting or utility curvature. Implications for debiasing are discussed.
Abstract:
Economic theory makes no predictions about social factors affecting decisions under risk. We examine situations in which a decision maker decides for herself and another person under conditions of payoff equality, and compare them to individual decisions. By estimating a structural model, we find that responsibility leaves utility curvature unaffected, but accentuates the subjective distortion of very small and very large probabilities for both gains and losses. We also find that responsibility reduces loss aversion, but that these results only obtain under some specific definitions of the latter. These results serve to generalize and reconcile some of the still largely contradictory findings in the literature. They also have implications for financial agency, which we discuss.
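The probability distortion estimated in such structural models is commonly parameterized by a weighting function such as Tversky and Kahneman's (1992), which overweights small probabilities and underweights large ones; a stronger distortion corresponds to a lower curvature parameter. A minimal sketch (the γ value below is the original 1992 median estimate for gains, used purely as an illustration):

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1.0 / gamma)

ps = np.array([0.01, 0.5, 0.99])
ws = tk_weight(ps)
# Small probabilities are overweighted (w > p),
# moderate-to-large ones underweighted (w < p).
print(ws)
```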
Abstract:
An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but previous studies suggest that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, a lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures from both peak flood wrack mark data and aerial photography captured post-peak. These measures are used in weighting the parameter space to produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method assessing the binary pattern of flooding, and a method developed in this paper to assess predictions of water surface elevation. This study finds that the water surface elevation method has better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
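Weighting the parameter space to obtain cell-wise inundation probabilities can be sketched in a GLUE-like fashion: each parameter set's binary flood map is weighted by its normalized performance score, and the probability at a cell is the weighted vote. The ensemble, scores, and map size below are synthetic stand-ins, not LISFLOOD-FP output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ensemble: n_sets parameter sets, each producing a binary
# inundation map over n_cells, plus a performance score per set.
n_sets, n_cells = 50, 100
flooded = rng.random((n_sets, n_cells)) < 0.3   # binary flood maps
scores = rng.random(n_sets)                     # e.g. fit to wrack marks

# Normalize scores to weights; the inundation probability at each cell
# is then the score-weighted fraction of ensemble members flooding it.
weights = scores / scores.sum()
prob_map = weights @ flooded.astype(float)
print(prob_map[:5])
```

Different choices of performance measure change `scores`, and hence the probability map, which is the sensitivity the study evaluates.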
Abstract:
Sclera segmentation is shown to be of significant importance for eye and iris biometrics. However, sclera segmentation has not been extensively researched as a separate topic, but mainly summarized as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at the pixel level. By exploring various colour spaces, the proposed approach is made robust to image noise and different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier. At the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the classifiers at stage 1. The proposed method ranked first in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% corresponding to a recall of 94.56%.
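The two-stage idea, simple stage-1 classifiers whose output probabilities feed a trained stage-2 combiner, can be sketched as follows. Random toy features and a logistic combiner stand in for the paper's colour-space features and neural network; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for per-pixel features (e.g. values in several colour spaces).
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # sclera / non-sclera label

# Stage 1: a bank of simple per-feature classifiers, each emitting
# a crude probability that the pixel belongs to the sclera.
stage1 = np.stack([1 / (1 + np.exp(-X[:, j])) for j in range(X.shape[1])],
                  axis=1)

# Stage 2: a logistic combiner trained by gradient descent on the
# stage-1 probability space (a tiny stand-in for the paper's network).
w = np.zeros(stage1.shape[1])
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(stage1 @ w + b)))
    w -= 0.5 * (stage1.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

p = 1 / (1 + np.exp(-(stage1 @ w + b)))
accuracy = np.mean((p > 0.5) == (y == 1))
print(f"stage-2 accuracy: {accuracy:.2f}")
```

The stage-2 model never sees the raw features, only the stage-1 probabilities, which is what makes the scheme a stacking-style two-stage classifier.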
Abstract:
In 2013 the Warsaw International Mechanism (WIM) for loss and damage (L&D) associated with climate change impacts was established under the United Nations Framework Convention on Climate Change (UNFCCC). For scientists, L&D raises questions around the extent to which such impacts can be attributed to anthropogenic climate change, which may generate complex results and be controversial in the policy arena. This is particularly true in the case of probabilistic event attribution (PEA) science, a new and rapidly evolving field that assesses whether changes in the probabilities of extreme events are attributable to GHG emissions. If the potential applications of PEA are to be considered responsibly, dialogue between scientists and policy makers is fundamental. Two key questions are considered here through a literature review and key stakeholder interviews with representatives from the science and policy sectors underpinning L&D. These provided the opportunity for in-depth insights into stakeholders' views on, firstly, how much is known and understood about PEA by those associated with the L&D debate, and secondly, how PEA might inform L&D and wider climate policy. Results show debate within the climate science community, and limited understanding among other stakeholders, around the sense in which extreme events can be attributed to climate change. However, stakeholders do identify and discuss potential uses for PEA in the WIM and wider policy, but it remains difficult to explore precise applications given the ambiguity surrounding L&D. This implies a need for stakeholders to develop a greater understanding of alternative conceptions of L&D and the role of science, and also to identify how PEA can best be used to support policy, and address associated challenges.