65 results for extinction probability
Abstract:
Liquid clouds play a profound role in the global radiation budget, but it is difficult to retrieve their vertical profile remotely. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds, but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and the total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables returns from spaceborne lidar and radar subject to multiple scattering to be exploited more rigorously than previously possible.
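The Gauss-Newton scheme compared in this abstract can be sketched in a few lines. The following is a minimal illustration on a toy least-squares problem; the function names and the two-parameter forward model are hypothetical, and the paper's actual variational scheme additionally carries prior terms and an adjoint-based gradient:

```python
import numpy as np

def gauss_newton(forward, jacobian, y_obs, x0, n_iter=20):
    """Minimal Gauss-Newton iteration for a least-squares retrieval:
    repeatedly linearize the forward model and solve for the update."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        r = forward(x) - y_obs           # residual against the observations
        J = jacobian(x)                  # Jacobian of the forward model at x
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # solve J @ step = r
        x = x - step
    return x
```

On a toy problem with forward model f(x) = (x0², x1), the scheme recovers the state from the "observations" in a handful of iterations.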
Abstract:
Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation of the score allows a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample average) score over an archive of forecast–observation pairs. This empirical decomposition, however, provides an overly optimistic estimate of the potential score (i.e. the optimum score that could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
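The empirical decomposition discussed above can be made concrete for the Brier score, where the sample average splits into reliability, resolution and uncertainty terms. A minimal sketch, assuming forecasts are grouped into equal-width probability bins (the function name and binning choice are illustrative, not the paper's estimators):

```python
import numpy as np

def brier_decomposition(p, y, n_bins=10):
    """Empirical Murphy decomposition of the Brier score:
    BS = reliability - resolution + uncertainty
    (exact when forecasts within each bin are identical)."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    ybar = y.mean()
    uncertainty = ybar * (1.0 - ybar)
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    reliability = resolution = 0.0
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        w = mask.mean()          # fraction of cases falling in this bin
        p_b = p[mask].mean()     # mean forecast probability in the bin
        o_b = y[mask].mean()     # observed relative frequency in the bin
        reliability += w * (p_b - o_b) ** 2
        resolution += w * (o_b - ybar) ** 2
    return reliability, resolution, uncertainty
```

When forecasts within a bin coincide, reliability − resolution + uncertainty reproduces the sample Brier score exactly; with heterogeneous bins the binned estimate is biased, which is one face of the over-optimism the abstract describes.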
Abstract:
The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS score and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
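The ensemble-as-CDF reading described above leads to a compact way to compute the CRPS, via the standard kernel representation CRPS = E|X − y| − ½ E|X − X′| for the piecewise-constant ensemble CDF (the function name is illustrative):

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of a raw ensemble interpreted as a piecewise-constant CDF,
    using the kernel form CRPS = E|X - y| - 0.5 * E|X - X'|."""
    x = np.asarray(members, float)
    term1 = np.abs(x - obs).mean()                   # E|X - y|
    term2 = np.abs(x[:, None] - x[None, :]).mean()   # E|X - X'|
    return term1 - 0.5 * term2
```

For a one-member ensemble this reduces to the absolute error, and for larger ensembles it equals the integral of the squared difference between the step-function CDF and the observation's Heaviside function.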
Abstract:
1. It has been postulated that climate warming may pose the greatest threat to species in the tropics, where ectotherms have evolved more thermally specialist physiologies. Although species could respond rapidly to environmental change through adaptation, little is known about the potential for thermal adaptation, especially in tropical species.
2. In the light of the limited empirical evidence available and predictions from mutation-selection theory, we might expect tropical ectotherms to have limited genetic variance to enable adaptation. However, as a consequence of thermodynamic constraints, we might expect this disadvantage to be at least partially offset by a fitness advantage: the ‘hotter-is-better’ hypothesis.
3. Using an established quantitative genetics model and metabolic scaling relationships, we integrate the consequences of the opposing forces of thermal specialization and thermodynamic constraints on adaptive potential by evaluating extinction risk under climate warming. We conclude that the potential advantage of a higher maximal development rate can in theory more than offset the potential disadvantage of lower genetic variance associated with a thermal specialist strategy.
4. Quantitative estimates of extinction risk are fundamentally very sensitive to estimates of generation time and genetic variance. However, our qualitative conclusion that the relative risk of extinction is likely to be lower for tropical than for temperate species is robust to assumptions regarding the effects of effective population size, mutation rate and birth rate per capita.
5. With a view to improving ecological forecasts, we use this modelling framework to review the sensitivity of our predictions to the model's underpinning theoretical assumptions and to the empirical basis of the macroecological patterns that suggest thermal specialization and fitness increase towards the tropics. We conclude by suggesting priority areas for further empirical research.
Abstract:
In this paper I analyze general equilibrium in a random Walrasian economy. Dependence among agents is introduced in the form of dependency neighborhoods. Under uncertainty, an agent may fail to survive due to a meager endowment in a particular state (a direct effect), as well as due to an unfavorable equilibrium price system at which the value of the endowment falls short of the minimum needed for survival (an indirect terms-of-trade effect). To illustrate the main result, I compute the stochastic limit of the equilibrium price and the probability of survival of an agent in a large Cobb-Douglas economy.
Abstract:
We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegativity and summing-to-unity constraints on the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm that selects significant kernels one at a time based on the minimum integrated square error (MISE) criterion, for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is of the order of the number of training data points N, which is much lower than the order-N² complexity of the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to that of the classical Parzen window estimate and other existing sparse kernel density estimators.
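The forward-selection idea can be illustrated with a deliberately simplified greedy sketch: kernels are added one at a time so as to minimize the squared error against the full Parzen estimate on a grid (a stand-in for the MISE criterion). Equal mixing weights trivially satisfy the nonnegativity and sum-to-unity constraints here; the actual algorithm also estimates the weights recursively, and all names below are illustrative:

```python
import numpy as np

def gauss(x, c, h):
    """Gaussian kernel with centre c and bandwidth h."""
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def sparse_kde(data, n_kernels, h, grid):
    """Greedy forward selection of kernel centres: at each step add the
    data point whose kernel, mixed with equal weights, best matches the
    full Parzen window estimate in squared error on the grid."""
    target = np.mean([gauss(grid, d, h) for d in data], axis=0)  # Parzen estimate
    chosen = []
    for _ in range(n_kernels):
        best, best_err = None, np.inf
        for d in data:
            if d in chosen:
                continue
            est = np.mean([gauss(grid, c, h) for c in chosen + [d]], axis=0)
            err = np.mean((est - target) ** 2)   # grid proxy for the ISE
            if err < best_err:
                best, best_err = d, err
        chosen.append(best)
    weights = np.full(len(chosen), 1.0 / len(chosen))  # nonnegative, sum to one
    return np.array(chosen), weights
```

This brute-force variant costs far more than the paper's O(N) recursion; it is only meant to show the one-kernel-at-a-time selection structure.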
Abstract:
This paper examines the auction process for residential properties that, whilst unsuccessful at auction, sold subsequently. The empirical analysis considers both the probability of sale and the premium of the subsequent sale price over the guide price, reserve and opening bid. The findings highlight that the final achieved sale price is influenced by key price variables revealed both prior to and during the auction itself. Factors such as auction participation, the number of individual bidders and the number of bids are significant in a number of the alternative specifications.
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
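A common way to operationalize forecast encompassing is through a combining regression: regress the outcome on both forecasts and examine the weight attached to the rival forecast, which should be near zero under the null that the first forecast encompasses the second. The sketch below is an illustration of that generic idea only, not the paper's proposed test statistics or their limiting distributions:

```python
import numpy as np

def rival_weight(y, p1, p2):
    """Least-squares weight on the rival forecast p2 in a combining
    regression y ~ const + p1 + p2. Under the null that p1 encompasses
    p2, this coefficient should be close to zero."""
    X = np.column_stack([np.ones_like(p1), p1, p2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]   # coefficient on the rival forecast
```

With real data, y would be the binary event indicator; in the check below the outcome coincides with the first forecast, so the rival's weight is exactly zero.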
Abstract:
A new sparse kernel density estimator is introduced. Our main contribution is to develop a recursive algorithm for the selection of significant kernels one at a time, using the minimum integrated square error (MISE) criterion for both kernel selection and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with competitive accuracy to existing kernel density estimators.
Abstract:
Techniques are proposed for evaluating forecast probabilities of events. The tools are especially useful when, as in the case of the Survey of Professional Forecasters (SPF) expected probability distributions of inflation, recourse cannot be made to the method of construction in the evaluation of the forecasts. The tests of efficiency and conditional efficiency are applied to the forecast probabilities of events of interest derived from the SPF distributions, and supplement a whole-density evaluation of the SPF distributions based on the probability integral transform approach.
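The probability integral transform underlying the whole-density evaluation is simple to compute: evaluate each forecast CDF at the corresponding realised outcome; for a calibrated forecaster the resulting values are uniform on [0, 1]. A minimal sketch with a hypothetical Gaussian forecast distribution (the function names are illustrative):

```python
import math

def normal_cdf(mu, sigma):
    """CDF of a hypothetical Gaussian forecast distribution."""
    return lambda y: 0.5 * (1.0 + math.erf((y - mu) / (sigma * math.sqrt(2.0))))

def pit_values(cdfs, outcomes):
    """Probability integral transform: evaluate each forecast CDF at the
    corresponding outcome. Departures of these values from uniformity
    (e.g. in a histogram) indicate miscalibration."""
    return [F(y) for F, y in zip(cdfs, outcomes)]
```

When each outcome equals the forecast median, every PIT value is 0.5; a U-shaped or hump-shaped histogram of PIT values would instead signal under- or overdispersed forecast densities.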
Abstract:
We consider different methods for combining probability forecasts. In empirical exercises, the data generating process of the forecasts and the event being forecast is not known, and therefore the optimal form of combination will also be unknown. We consider the properties of various combination schemes for a number of plausible data generating processes, and indicate which types of combinations are likely to be useful. We also show that whether forecast encompassing is found to hold between two rival sets of forecasts or not may depend on the type of combination adopted. The relative performances of the different combination methods are illustrated, with an application to predicting recession probabilities using leading indicators.
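The simplest of the combination schemes considered in this literature is the linear opinion pool, a convex combination of the rival probability forecasts. A minimal sketch; the weights here are hypothetical and would in practice be estimated, for example from past forecast performance:

```python
def linear_pool(forecasts, weights):
    """Linear opinion pool: a convex combination of probability forecasts.
    `forecasts` is a list of equal-length sequences of event probabilities;
    `weights` must be nonnegative and sum to one."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    return [sum(w * f[i] for w, f in zip(weights, forecasts))
            for i in range(len(forecasts[0]))]
```

As the abstract notes, conclusions about encompassing can hinge on the chosen scheme: a forecast redundant in a linear pool need not be redundant under, say, a logarithmic (log-odds) pool.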
Abstract:
We consider whether survey respondents’ probability distributions, reported as histograms, provide reliable and coherent point predictions, when viewed through the lens of a Bayesian learning model. We argue that a role remains for eliciting directly-reported point predictions in surveys of professional forecasters.