63 results for GLAUCOMA PROBABILITY SCORE
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated in terms of small numbers of forecasts and of model estimation sample sizes. We show the usefulness of the tests for evaluating recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
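As a minimal sketch of the encompassing idea under a quadratic (Brier) scoring rule, one can regress the first model's forecast error on the difference between the rival and primary forecasts and test whether the slope is zero; the construction, function names and simulated data below are illustrative assumptions, not the paper's exact statistics.

```python
import numpy as np
from scipy import stats

def encompassing_test_quadratic(y, p1, p2):
    """Test whether probability forecast p1 encompasses rival p2 under a
    quadratic (Brier) loss.  In the combination (1 - lam)*p1 + lam*p2 the
    null of encompassing is lam = 0, so regress the error of p1 on
    (p2 - p1) and test the slope with a White-type standard error."""
    e1 = y - p1                    # forecast errors of the primary model
    d = p2 - p1                    # direction in which the rival differs
    lam = np.sum(d * e1) / np.sum(d * d)      # OLS slope (no intercept)
    resid = e1 - lam * d
    se = np.sqrt(np.sum((d * resid) ** 2)) / np.sum(d * d)
    t = lam / se
    return lam, t, 2 * (1 - stats.norm.cdf(abs(t)))  # asymptotic p-value

rng = np.random.default_rng(0)
p1 = rng.uniform(0.05, 0.95, 500)                       # primary forecasts
p2 = np.clip(p1 + rng.normal(0, 0.1, 500), 0.01, 0.99)  # noisy rival
y = (rng.uniform(size=500) < p1).astype(float)          # outcomes follow p1
print(encompassing_test_quadratic(y, p1, p2))           # lam should be near 0
```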
Abstract:
A new sparse kernel density estimator is introduced. Our main contribution is to develop a recursive algorithm for the selection of significant kernels one at a time using the minimum integrated square error (MISE) criterion for kernel selection. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with that of existing kernel density estimators.
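The recursive one-at-a-time selection can be illustrated with a greedy sketch: add the kernel centre that most reduces the integrated squared error (ISE) against the full Parzen estimate, using the analytic overlap integral of two Gaussians. Equal mixing weights and the function names are simplifications of ours, not the paper's algorithm.

```python
import numpy as np

def gauss(d2, s2):
    """1-D Gaussian density at squared distance d2, variance s2."""
    return np.exp(-d2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)

def sparse_kde_centres(x, width, n_kernels):
    """Greedily pick kernel centres minimising ISE to the full Parzen
    estimate; uses int N(.;mu_i,h^2)N(.;mu_j,h^2) = N(mu_i - mu_j; 0, 2h^2)."""
    d2 = (x[:, None] - x[None, :]) ** 2     # pairwise squared distances
    C = gauss(d2, 2.0 * width ** 2)         # cross-kernel overlap integrals
    full = C.mean(axis=1)                   # overlap with the full estimate
    chosen = []
    for _ in range(n_kernels):
        best, best_ise = None, np.inf
        for j in range(len(x)):
            if j in chosen:
                continue
            idx = chosen + [j]
            w = 1.0 / len(idx)              # equal weights for simplicity
            # ISE = w'Cw - 2*w'full + const; the constant is dropped
            ise = w * w * C[np.ix_(idx, idx)].sum() - 2.0 * w * full[idx].sum()
            if ise < best_ise:
                best, best_ise = j, ise
        chosen.append(best)
    return x[chosen]

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 0.5, 200)])
print(sparse_kde_centres(data, width=0.4, n_kernels=8))
```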
Abstract:
The paper traces the evolution of the tally from a receipt for cash payments into the treasury, to proof of payments made by royal officials outside of the treasury, and finally to an assignment of revenue to be paid out by royal officials. Each of these processes is illustrated with examples drawn from the Exchequer records, and their significance is explained both for royal finance and for historians working on the Exchequer records.
Abstract:
In this paper the properties of a hydro-meteorological forecasting system for forecasting river flows have been analysed using a probabilistic forecast convergence score (FCS). The focus on fixed-event forecasts provides a forecaster's approach to system behaviour and adds an important perspective to the suite of forecast verification tools commonly used in this field. A low FCS indicates a more consistent forecast. It is shown that the annual maximum FCS has decreased over the last 10 years. With lead time, the FCS of the ensemble forecast decreases, whereas those of the control and high-resolution forecasts increase. The FCS is influenced by lead time, threshold, and catchment size and location, which indicates that seasonality-based decision rules should be used to issue flood warnings.
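The FCS has a specific definition in the paper; as a loudly hedged stand-in in the same spirit, the sketch below scores the consistency of successive forecasts issued for one fixed event as the mean absolute revision between consecutive issues, so that jumpy forecast sequences score high and steady ones score low.

```python
import numpy as np

def convergence_score(probs):
    """Mean absolute revision between successive forecasts of the SAME
    fixed event (e.g. P(flow > threshold) issued 5, 4, ..., 1 days
    ahead).  Low values = consistent forecasts.  An illustrative
    stand-in, not the published FCS formula."""
    probs = np.asarray(probs, dtype=float)
    return float(np.mean(np.abs(np.diff(probs))))

print(convergence_score([0.2, 0.5, 0.3, 0.8, 0.9]))    # jumpy -> high
print(convergence_score([0.6, 0.65, 0.7, 0.75, 0.8]))  # steady -> low
```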
Abstract:
Techniques are proposed for evaluating forecast probabilities of events. The tools are especially useful when, as in the case of the Survey of Professional Forecasters (SPF) expected probability distributions of inflation, recourse cannot be made to the method of construction in the evaluation of the forecasts. The tests of efficiency and conditional efficiency are applied to the forecast probabilities of events of interest derived from the SPF distributions, and supplement a whole-density evaluation of the SPF distributions based on the probability integral transform approach.
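The probability integral transform (PIT) step mentioned at the end is easy to sketch: push each realised outcome through its forecast CDF and test the transformed values for uniformity. The Kolmogorov-Smirnov test below is a generic stand-in for the paper's whole-density evaluation, and all names are illustrative.

```python
import numpy as np
from scipy import stats

def pit_values(cdfs, grid, outcomes):
    """Evaluate each forecast CDF (tabulated on a common grid) at the
    realised outcome; well-calibrated densities give U(0,1) PITs."""
    return np.array([np.interp(y, grid, cdf)
                     for cdf, y in zip(cdfs, outcomes)])

rng = np.random.default_rng(2)
grid = np.linspace(-5, 5, 501)
outcomes = rng.normal(0, 1, 300)
cdfs = [stats.norm.cdf(grid)] * 300        # forecasts match the truth here
pits = pit_values(cdfs, grid, outcomes)
print(stats.kstest(pits, "uniform"))       # should not reject uniformity
```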
Abstract:
We consider different methods for combining probability forecasts. In empirical exercises, the data generating process of the forecasts and the event being forecast is not known, and therefore the optimal form of combination will also be unknown. We consider the properties of various combination schemes for a number of plausible data generating processes, and indicate which types of combinations are likely to be useful. We also show that whether forecast encompassing is found to hold between two rival sets of forecasts or not may depend on the type of combination adopted. The relative performances of the different combination methods are illustrated, with an application to predicting recession probabilities using leading indicators.
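Two standard combination schemes can be made concrete: the linear opinion pool and a logarithmic-odds pool, with the combination weight chosen on past data. Which pool is best depends on the unknown data generating process, which is the abstract's point; the grid-search weight estimation below is our simplification.

```python
import numpy as np

def linear_pool(p1, p2, w):
    """Linear opinion pool: convex combination of the probabilities."""
    return w * p1 + (1.0 - w) * p2

def log_odds_pool(p1, p2, w):
    """Combine in log-odds space and map back; usually sharper than
    the linear pool."""
    z = w * np.log(p1 / (1 - p1)) + (1 - w) * np.log(p2 / (1 - p2))
    return 1.0 / (1.0 + np.exp(-z))

def fit_weight(pool, p1, p2, y):
    """Pick the weight minimising the in-sample Brier score on a grid
    (a simple stand-in for a proper estimation scheme)."""
    brier = lambda p: np.mean((p - y) ** 2)
    return min(np.linspace(0, 1, 101), key=lambda w: brier(pool(p1, p2, w)))
```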
Abstract:
We consider whether survey respondents’ probability distributions, reported as histograms, provide reliable and coherent point predictions, when viewed through the lens of a Bayesian learning model. We argue that a role remains for eliciting directly reported point predictions in surveys of professional forecasters.
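For concreteness, a histogram's implied point prediction under the common uniform-within-bin convention (an assumption of ours, not necessarily the authors' Bayesian model) can be computed as below; comparing such implied values with directly reported point predictions is what the abstract's argument turns on.

```python
import numpy as np

def histogram_mean(bin_edges, bin_probs):
    """Mean implied by a reported histogram, assuming probability mass
    is uniform within each bin (so each bin contributes its midpoint)."""
    edges = np.asarray(bin_edges, dtype=float)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return float(np.dot(mids, bin_probs))

# inflation histogram: P(1-2%) = 0.2, P(2-3%) = 0.5, P(3-4%) = 0.3
print(histogram_mean([1, 2, 3, 4], [0.2, 0.5, 0.3]))   # -> 2.6
```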
Abstract:
We discuss the characteristics of magnetosheath plasma precipitation in the “cusp” ionosphere for the case in which reconnection at the dayside magnetopause takes place only in a series of pulses. It is shown that even in this special case, the low-altitude cusp precipitation is continuous, unless the intervals between the pulses are longer than observed intervals between magnetopause flux transfer event (FTE) signatures. We use FTE observation statistics to predict, for this case of entirely pulsed reconnection, the occurrence frequency, the distribution of latitudinal widths, and the number of ion dispersion steps of the cusp precipitation for a variety of locations of the reconnection site and a range of values of the local de Hoffmann-Teller velocity. It is found that the cusp occurrence frequency is comparable with observed values for virtually all possible locations of the reconnection site. The distribution of cusp width is also comparable with observations and is shown to be largely dependent on the distribution of the mean reconnection rate, although pulsing the reconnection very slightly increases the width of that distribution compared with the steady-state case. We conclude that neither cusp occurrence probability nor width can be used to evaluate the relative occurrence of reconnection behaviors that are entirely pulsed, pulsed but continuous, and quasi-steady. We show that the best test of the relative frequency of these three types of reconnection is to survey the distribution of steps in the cusp ion dispersion characteristics.
Abstract:
There is an ongoing debate on the environmental effects of genetically modified crops, to which this paper aims to contribute. First, data on the environmental impacts of genetically modified (GM) and conventional crops are collected from peer-reviewed journals; secondly, an analysis is conducted to examine which crop type is less harmful to the environment. Published data on environmental impacts are measured using an array of indicators, and their analysis requires normalisation and aggregation. Drawing on the composite-indicators literature, this paper builds composite indicators to measure the impact of GM and conventional crops in three dimensions: (1) non-target key species richness, (2) pesticide use, and (3) aggregated environmental impact. The comparison between the three composite indicators for both crop types allows us to establish not only a ranking of which crop type is preferable for the environment but also the probability that one crop type outperforms the other from an environmental perspective. Results show that GM crops tend to cause lower environmental impacts than conventional crops for the analysed indicators.
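The normalisation-and-aggregation step can be sketched generically: rescale each indicator to a common range, aggregate with weights, and estimate the probability that one crop type outperforms the other by resampling. Min-max normalisation, the weights and the bootstrap construction below are illustrative assumptions, not the paper's exact methodology.

```python
import numpy as np

def min_max(x):
    """Rescale an indicator to [0, 1] (one of several normalisation
    options in the composite-indicators literature)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def composite(indicators, weights):
    """Weighted linear aggregation of normalised sub-indicators into a
    single composite score per observation."""
    normed = np.column_stack([min_max(v) for v in indicators])
    return normed @ np.asarray(weights, dtype=float)

def outperformance_probability(scores_a, scores_b, n_boot=10_000, seed=0):
    """Bootstrap estimate of P(type A has lower impact than type B)."""
    rng = np.random.default_rng(seed)
    a = rng.choice(scores_a, n_boot)
    b = rng.choice(scores_b, n_boot)
    return float(np.mean(a < b))
```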
Abstract:
While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively and quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with several empirical models. Both the ENSEMBLES models and a “dynamic climatology” empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models and clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal, which is not available in model–model intercomparisons.
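A minimal sketch of contrasting probabilistic skill with a static climatology, using the ignorance (negative log) score for Gaussian forecasts; the Gaussian form and the names are our assumptions, and a "dynamic climatology" would additionally condition the benchmark on recent observations.

```python
import numpy as np
from scipy import stats

def ignorance(mu, sigma, obs):
    """Mean negative log score of Gaussian probabilistic forecasts --
    a proper score commonly used for temperature hindcasts."""
    return -np.mean(stats.norm.logpdf(obs, loc=mu, scale=sigma))

def skill_vs_climatology(model_mu, model_sigma, obs, past_obs):
    """Positive values: the model adds information beyond a static
    climatology fitted to past observations."""
    clim_mu, clim_sigma = np.mean(past_obs), np.std(past_obs, ddof=1)
    return (ignorance(clim_mu, clim_sigma, obs)
            - ignorance(model_mu, model_sigma, obs))
```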
Abstract:
A new class of parameter estimation algorithms is introduced for Gaussian process regression (GPR) models. It is shown that the integration of the GPR model with probability distance measures, namely (i) the integrated square error and (ii) the Kullback–Leibler (K–L) divergence, is analytically tractable. An efficient coordinate descent algorithm is proposed to iteratively estimate the kernel width using golden section search, with a fast gradient descent algorithm as an inner loop to estimate the noise variance. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
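The coordinate-descent structure can be sketched as follows: a golden-section line search over the (log) kernel width alternates with a small numerical-gradient step on the noise variance. For self-containedness the objective below is the GP negative log marginal likelihood, whereas the paper's criteria are the ISE and K-L distance measures, so treat this strictly as a structural sketch with illustrative names.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def objective(x, y, width, noise):
    """GP negative log marginal likelihood (constant dropped) with an
    RBF kernel -- a stand-in for the paper's ISE / K-L criteria."""
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * width ** 2))
    K += noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def coordinate_descent(x, y, width=1.0, noise=0.1, iters=10):
    """Alternate golden-section search over log-width with a crude
    numerical-gradient descent step on the noise variance."""
    for _ in range(iters):
        res = minimize_scalar(
            lambda t: objective(x, y, np.exp(t), noise),
            bracket=(np.log(width) - 1.0, np.log(width) + 1.0),
            method="golden")
        width = float(np.exp(res.x))
        eps, lr = 1e-5, 1e-2
        g = (objective(x, y, width, noise + eps)
             - objective(x, y, width, noise - eps)) / (2.0 * eps)
        noise = max(noise - lr * g, 1e-4)   # keep the variance positive
    return width, noise

rng = np.random.default_rng(3)
x = np.linspace(0.0, 5.0, 40)
y = np.sin(x) + 0.1 * rng.normal(size=40)
print(coordinate_descent(x, y))
```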
Abstract:
This paper examines the impact of the auction process on residential properties that, whilst unsuccessful at auction, sold subsequently. The empirical analysis considers both the probability of sale and the premium of the subsequent sale price over the guide price, reserve and opening bid. The findings highlight that the final achieved sale price is influenced by key price variables revealed both prior to and during the auction itself. Factors such as auction participation, the number of individual bidders and the number of bids are significant in a number of the alternative specifications.
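The probability-of-sale component lends itself to a logit sketch; the covariates named below and the plain gradient-ascent fitter are illustrative stand-ins, not the paper's specification.

```python
import numpy as np

def fit_logit(X, y, iters=2000, lr=0.5):
    """Fit P(sale | covariates) by gradient ascent on the mean
    log-likelihood of a logit model (intercept added internally)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)   # score of the logit model
    return beta

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))   # e.g. premium over guide, bidders, bids
true = 0.5 + X @ np.array([1.0, 0.5, -0.3])
y = (rng.uniform(size=300) < 1.0 / (1.0 + np.exp(-true))).astype(float)
print(fit_logit(X, y))          # roughly (0.5, 1.0, 0.5, -0.3)
```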
Abstract:
Cocoa flavanol (CF) intake improves endothelial function in patients with cardiovascular risk factors and disease. We investigated the effects of CF on surrogate markers of cardiovascular health in low risk, healthy, middle-aged individuals without history, signs or symptoms of CVD. In a 1-month, open-label, one-armed pilot study, bi-daily ingestion of 450 mg of CF led to a time-dependent increase in endothelial function (measured as flow-mediated vasodilation (FMD)) that plateaued after 2 weeks. Subsequently, in a randomised, controlled, double-masked, parallel-group dietary intervention trial (Clinicaltrials.gov: NCT01799005), 100 healthy, middle-aged (35–60 years) men and women consumed either the CF-containing drink (450 mg) or a nutrient-matched CF-free control bi-daily for 1 month. The primary end point was FMD. Secondary end points included plasma lipids and blood pressure, thus enabling the calculation of Framingham Risk Scores and pulse wave velocity. At 1 month, CF increased FMD over control by 1·2 % (95 % CI 1·0, 1·4 %). CF decreased systolic and diastolic blood pressure by 4·4 mmHg (95 % CI 7·9, 0·9 mmHg) and 3·9 mmHg (95 % CI 6·7, 0·9 mmHg), pulse wave velocity by 0·4 m/s (95 % CI 0·8, 0·04 m/s), total cholesterol by 0·20 mmol/l (95 % CI 0·39, 0·01 mmol/l) and LDL-cholesterol by 0·17 mmol/l (95 % CI 0·32, 0·02 mmol/l), whereas HDL-cholesterol increased by 0·10 mmol/l (95 % CI 0·04, 0·17 mmol/l). By applying the Framingham Risk Score, CF predicted a significant lowering of 10-year risk for CHD, myocardial infarction, CVD, death from CHD and CVD. In healthy individuals, regular CF intake improved accredited cardiovascular surrogates of cardiovascular risk, demonstrating that dietary flavanols have the potential to maintain cardiovascular health even in low-risk subjects.
Abstract:
We report between-subject results on the effect of monetary stakes on risk attitudes. While we find the typical risk seeking for small probabilities, risk seeking is reduced under high stakes. This suggests that utility is not consistently concave.