904 results for streamflow forecasts
Abstract:
The CWRF is developed as a climate extension of the Weather Research and Forecasting model (WRF) by incorporating numerous improvements in the representation of physical processes and in the integration of external (top, surface, lateral) forcings that are crucial at climate scales, including interactions between land, atmosphere, and ocean; convection and microphysics; cloud, aerosol, and radiation; and system consistency throughout all process modules. This extension inherits all WRF functionalities for numerical weather prediction while enhancing the capability for climate modeling. As such, CWRF can be applied seamlessly to weather forecasting and climate prediction. The CWRF is built with a comprehensive ensemble of alternative parameterization schemes for each of the key physical processes, including surface (land, ocean), planetary boundary layer, cumulus (deep, shallow), microphysics, cloud, aerosol, and radiation, and their interactions. This facilitates the use of an optimized physics ensemble approach to improve weather or climate prediction along with a reliable uncertainty estimate. The CWRF also emphasizes its societal service capability, providing impact-relevant information by coupling with detailed models of terrestrial hydrology, coastal ocean, crop growth, air quality, and a recently expanded interactive water quality and ecosystem model. This study provides a general CWRF description and a basic skill evaluation based on a continuous integration over the period 1979–2009 as compared with that of WRF, using a 30-km grid spacing over a domain that includes the contiguous United States plus southern Canada and northern Mexico. In addition to its greater application capability, CWRF improves performance in radiation and terrestrial hydrology over WRF and other regional models. Precipitation simulation, however, remains a challenge for all of the tested models.
Abstract:
We consider the forecasting performance of two SETAR exchange rate models proposed by Kräger and Kugler [J. Int. Money Fin. 12 (1993) 195]. Assuming that the models are good approximations to the data generating process, we show that whether the non-linearities inherent in the data can be exploited to forecast better than a random walk depends on both how forecast accuracy is assessed and on the ‘state of nature’. Evaluation based on traditional measures, such as (root) mean squared forecast errors, may mask the superiority of the non-linear models. Generalized impulse response functions are also calculated as a means of portraying the asymmetric response to shocks implied by such models.
Abstract:
A number of methods of evaluating the validity of interval forecasts of financial data are analysed, and illustrated using intraday FTSE100 index futures returns. Some existing interval forecast evaluation techniques, such as the Markov chain approach of Christoffersen (1998), are shown to be inappropriate in the presence of periodic heteroscedasticity. Instead, we consider a regression-based test, and a modified version of Christoffersen's Markov chain test for independence, and analyse their properties when the financial time series exhibit periodic volatility. These approaches lead to different conclusions when interval forecasts of FTSE100 index futures returns generated by various GARCH(1,1) and periodic GARCH(1,1) models are evaluated.
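For intuition, the unconditional-coverage component of Christoffersen's (1998) interval-forecast evaluation can be sketched as a likelihood-ratio test on the sequence of interval violations. The sketch below is an illustrative stdlib-Python version, not the paper's full Markov-chain procedure (which additionally tests independence of violations, the part modified in this study); the function name is ours.

```python
import math

def lr_unconditional_coverage(hits, p):
    """Likelihood-ratio test that the empirical violation rate of an
    interval forecast matches its nominal failure rate p.
    `hits` is a 0/1 sequence: 1 when the outcome falls outside the
    forecast interval. Returns the LR statistic, asymptotically
    chi-squared with 1 degree of freedom under correct coverage."""
    n1 = sum(hits)            # number of violations
    n0 = len(hits) - n1       # number of non-violations
    pi_hat = n1 / len(hits)   # empirical violation rate (the MLE)

    def loglik(q):
        # Bernoulli log-likelihood, with the 0 * log(0) = 0 convention
        # when the MLE sits on the boundary.
        if q == 0.0:
            return 0.0 if n1 == 0 else float("-inf")
        if q == 1.0:
            return 0.0 if n0 == 0 else float("-inf")
        return n0 * math.log(1.0 - q) + n1 * math.log(q)

    return -2.0 * (loglik(p) - loglik(pi_hat))
```

A well-calibrated 95% interval with exactly 5 violations in 100 observations yields an LR statistic of zero, while a 20% violation rate is strongly rejected against the 3.84 chi-squared critical value.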
Abstract:
We consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP). Practical experience suggests that combining forecasts adds value, and can even dominate the best individual device. We show why this can occur when forecasting models are differentially mis-specified, and is likely to occur when the DGP is subject to location shifts. Moreover, averaging may then dominate over estimated weights in the combination. Finally, it cannot be proved that only non-encompassed devices should be retained in the combination. Empirical and Monte Carlo illustrations confirm the analysis.
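The mechanism can be illustrated with a small simulation: two differentially mis-specified forecasting devices, a location shift in the DGP, and an equal-weight combination. Everything below (the constant-forecast "models", the shift size, the sample split) is a stylised sketch of the idea, not the paper's Monte Carlo design.

```python
import random
import statistics

def msfe(forecasts, outcomes):
    """Mean squared forecast error."""
    return statistics.fmean((f - y) ** 2 for f, y in zip(forecasts, outcomes))

random.seed(0)
# DGP with a location shift halfway through the evaluation sample.
outcomes = [random.gauss(0.0, 1.0) for _ in range(100)] + \
           [random.gauss(2.0, 1.0) for _ in range(100)]

# Two differentially mis-specified devices: each matches the DGP mean
# in only one regime.
model_a = [0.0] * 200   # right before the shift, wrong after
model_b = [2.0] * 200   # wrong before the shift, right after

# Equal-weight average: hedges across the shift without estimating weights.
combo = [(a + b) / 2 for a, b in zip(model_a, model_b)]
```

In this setup each individual device incurs a large squared bias in one regime, while the average splits the bias across both, so the simple mean dominates either component in MSFE, consistent with the paper's argument.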
Abstract:
We consider evaluating the UK Monetary Policy Committee's inflation density forecasts using probability integral transform goodness-of-fit tests. These tests evaluate the whole forecast density. We also consider whether the probabilities assigned to inflation being in certain ranges are well calibrated, where the ranges are chosen to be those of particular relevance to the MPC, given its remit of keeping the annual inflation rate within a band around its target. Finally, we discuss the decision-based approach to forecast evaluation in relation to the MPC forecasts.
Abstract:
Techniques are proposed for evaluating forecast probabilities of events. The tools are especially useful when, as in the case of the Survey of Professional Forecasters (SPF) expected probability distributions of inflation, recourse cannot be made to the method of construction in the evaluation of the forecasts. The tests of efficiency and conditional efficiency are applied to the forecast probabilities of events of interest derived from the SPF distributions, and supplement a whole-density evaluation of the SPF distributions based on the probability integral transform approach.
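The probability integral transform approach that several of these abstracts rely on is easy to sketch: if the forecast densities are correct, the transforms z_t = F_t(y_t) should be i.i.d. uniform on [0, 1]. Below is a minimal stdlib-Python illustration using an ad hoc Kolmogorov-Smirnov distance as the uniformity diagnostic; the papers' actual test statistics differ, and the over-dispersed "wrong" density is our own toy example.

```python
import random
from statistics import NormalDist

def pit_values(outcomes, cdfs):
    """Probability integral transforms z_t = F_t(y_t); i.i.d. U(0,1)
    when the forecast densities coincide with the true densities."""
    return [cdf(y) for y, cdf in zip(outcomes, cdfs)]

def ks_uniform(z):
    """Kolmogorov-Smirnov distance between the empirical cdf of the
    PITs and the U(0,1) cdf."""
    z = sorted(z)
    n = len(z)
    return max(max((i + 1) / n - zi, zi - i / n) for i, zi in enumerate(z))

random.seed(1)
correct = NormalDist(0, 1)   # matches the DGP below
wrong = NormalDist(0, 2)     # over-dispersed forecast density

ys = [random.gauss(0, 1) for _ in range(2000)]
d_correct = ks_uniform(pit_values(ys, [correct.cdf] * len(ys)))
d_wrong = ks_uniform(pit_values(ys, [wrong.cdf] * len(ys)))
```

The over-dispersed density piles its PITs around 0.5, producing a markedly larger distance from uniformity than the correct density.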
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
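The two ways of turning a volatility forecast into a quantile forecast described above can be sketched as follows. Both helpers are illustrative (assuming zero-mean returns, with a deliberately crude order-statistic index), not the paper's exact estimators.

```python
from statistics import NormalDist

def var_normal(sigma_forecast, q=0.01):
    """Quantile (Value-at-Risk) forecast under the distributional
    assumption that returns are N(0, sigma^2)."""
    return sigma_forecast * NormalDist().inv_cdf(q)

def var_empirical(sigma_forecast, standardized_returns, q=0.01):
    """Quantile forecast that scales the empirical q-quantile of past
    standardized returns r_t / sigma_t by the volatility forecast.
    Uses a crude order-statistic index purely for illustration."""
    z = sorted(standardized_returns)
    k = max(0, min(len(z) - 1, int(q * len(z))))
    return sigma_forecast * z[k]
```

The empirical variant inherits whatever fat tails or skew the standardized returns exhibit, which is why it can outperform the normal assumption when volatility shifts leave the standardized distribution non-Gaussian.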
Abstract:
A comparison of the point forecasts and the probability distributions of inflation and output growth made by individual respondents to the US Survey of Professional Forecasters indicates that the two sets of forecasts are sometimes inconsistent. We evaluate a number of possible explanations, and find that not all forecasters update their histogram forecasts as new information arrives. This is supported by the finding that the point forecasts are more accurate than the histograms in terms of first-moment prediction.
Abstract:
We consider different methods for combining probability forecasts. In empirical exercises, the data generating process of the forecasts and the event being forecast is not known, and therefore the optimal form of combination will also be unknown. We consider the properties of various combination schemes for a number of plausible data generating processes, and indicate which types of combinations are likely to be useful. We also show that whether forecast encompassing is found to hold between two rival sets of forecasts or not may depend on the type of combination adopted. The relative performances of the different combination methods are illustrated, with an application to predicting recession probabilities using leading indicators.
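Two standard schemes for combining event probabilities, the linear and the logarithmic opinion pool, can be sketched as below. The paper considers a wider menu of combinations, so treat these as representative examples only; the function names are ours.

```python
import math

def linear_pool(probs, weights=None):
    """Linear opinion pool: a weighted average of the individual
    forecast probabilities (equal weights by default)."""
    w = weights or [1 / len(probs)] * len(probs)
    return sum(wi * pi for wi, pi in zip(w, probs))

def log_pool(probs, weights=None):
    """Logarithmic opinion pool: a normalised weighted geometric mean,
    applied to both the event and its complement."""
    w = weights or [1 / len(probs)] * len(probs)
    num = math.prod(pi ** wi for wi, pi in zip(w, probs))
    den = math.prod((1 - pi) ** wi for wi, pi in zip(w, probs))
    return num / (num + den)
```

The two pools generally disagree away from symmetric inputs, which is one reason encompassing conclusions can depend on the combination type chosen.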
Abstract:
We consider whether survey respondents’ probability distributions, reported as histograms, provide reliable and coherent point predictions, when viewed through the lens of a Bayesian learning model. We argue that a role remains for eliciting directly-reported point predictions in surveys of professional forecasters.
Abstract:
Useful probabilistic climate forecasts on decadal timescales should be reliable (i.e. forecast probabilities match the observed relative frequencies), but this is seldom examined. This paper assesses a necessary condition for reliability, that the ratio of ensemble spread to forecast error be close to one, for seasonal to decadal sea surface temperature retrospective forecasts from the Met Office Decadal Prediction System (DePreSys). Factors which may affect reliability are diagnosed by comparing this spread-error ratio for an initial condition ensemble and two perturbed physics ensembles, for initialized and uninitialized predictions. At lead times of less than 2 years, the initialized ensembles tend to be under-dispersed, and hence produce overconfident and therefore unreliable forecasts. For longer lead times, all three ensembles are predominantly over-dispersed. Such over-dispersion is primarily related to excessive inter-annual variability in the climate model. These findings highlight the need to carefully evaluate simulated variability in seasonal and decadal prediction systems.
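The spread-error ratio discussed above can be sketched as follows. This is one common formulation (mean ensemble standard deviation over the RMSE of the ensemble-mean forecast), not necessarily DePreSys's exact diagnostic, and the simulated "reliable" and under-dispersed ensembles are our own toy illustration.

```python
import math
import random
import statistics

def spread_error_ratio(ensembles, observations):
    """Mean ensemble spread (std. dev. about the ensemble mean) divided
    by the RMSE of the ensemble-mean forecast. Roughly 1 for a reliable
    ensemble; below 1 when under-dispersed (overconfident); above 1
    when over-dispersed."""
    spreads, sq_errors = [], []
    for members, obs in zip(ensembles, observations):
        mean = statistics.fmean(members)
        spreads.append(statistics.pstdev(members))
        sq_errors.append((mean - obs) ** 2)
    return statistics.fmean(spreads) / math.sqrt(statistics.fmean(sq_errors))

# Toy ensembles: the 'reliable' one draws members and the observation
# from the same distribution; 'narrow' shrinks the member spread,
# mimicking the under-dispersion found at short lead times.
random.seed(2)
reliable, narrow, obs = [], [], []
for _ in range(300):
    t = random.gauss(0, 1)             # unobserved true state
    obs.append(t + random.gauss(0, 1))
    reliable.append([t + random.gauss(0, 1) for _ in range(20)])
    narrow.append([t + random.gauss(0, 0.3) for _ in range(20)])
```

On this toy data the reliable ensemble's ratio sits near one, while the narrow ensemble's ratio falls well below it, flagging overconfident forecasts.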
Abstract:
Research has highlighted the usefulness of the Gilt–Equity Yield Ratio (GEYR) as a predictor of UK stock returns. This paper extends recent studies by endogenising the threshold at which the GEYR switches from being low to being high or vice versa, thus improving on the arbitrary threshold determination employed in the extant literature. It is observed that a decision rule for investing in equities or bonds, based on the forecasts from a regime switching model, yields higher average returns with lower variability than a static portfolio containing any combination of equities and bonds. A closer inspection of the results reveals that the model has power to forecast when investors should steer clear of equities, although the trading profits generated are insufficient to outweigh the associated transaction costs.
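A stylised version of such a switching decision rule reads the lagged GEYR against a threshold and holds the corresponding asset over the next period. Here the threshold is taken as given rather than estimated from a regime-switching model, and all names are illustrative.

```python
def geyr_switching_returns(geyr, eq_returns, bond_returns, threshold):
    """Hold equities in the period after the GEYR is below the threshold
    (equities look cheap relative to gilts), bonds otherwise. Returns
    the realised return series of the switching portfolio; ignores
    transaction costs, which the paper finds to be decisive."""
    realised = []
    for g_prev, re, rb in zip(geyr, eq_returns[1:], bond_returns[1:]):
        realised.append(re if g_prev < threshold else rb)
    return realised
```

With a low GEYR in period 1 and a high GEYR in period 2, the rule picks up the period-2 equity return and then the period-3 bond return.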