891 results for "Automatic forecasting"
Abstract:
It is widely accepted that some of the most accurate Value-at-Risk (VaR) estimates are based on an appropriately specified GARCH process. But when the forecast horizon is greater than the frequency of the GARCH model, such predictions have typically required time-consuming simulations of the aggregated returns distributions. This paper shows that fast, quasi-analytic GARCH VaR calculations can be based on new formulae for the first four moments of aggregated GARCH returns. Our extensive empirical study compares the Cornish–Fisher expansion with the Johnson SU distribution for fitting distributions to analytic moments of normal and Student t, symmetric and asymmetric (GJR) GARCH processes to returns data on different financial assets, for the purpose of deriving accurate GARCH VaR forecasts over multiple horizons and significance levels.
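The Cornish–Fisher step described above can be made concrete. Below is a minimal sketch, assuming the first four moments of the aggregated returns are already available (in the paper they come from the new analytic GARCH formulae); the function name and example inputs are illustrative, not the authors' implementation.

```python
# Minimal sketch: given mean, standard deviation, skewness, and excess
# kurtosis of h-period aggregated returns, adjust the normal quantile
# via the Cornish-Fisher expansion and read off VaR.
from scipy.stats import norm

def cornish_fisher_var(mean, std, skew, ex_kurt, alpha=0.01):
    """VaR at significance level alpha via a Cornish-Fisher quantile."""
    z = norm.ppf(alpha)                       # standard normal quantile
    z_cf = (z
            + (z**2 - 1) * skew / 6           # third-moment correction
            + (z**3 - 3*z) * ex_kurt / 24     # fourth-moment correction
            - (2*z**3 - 5*z) * skew**2 / 36)  # second-order skew term
    return -(mean + std * z_cf)               # loss quantile, reported positive

# Illustrative inputs: 10-day aggregated returns with negative skew and fat tails
print(cornish_fisher_var(mean=0.002, std=0.045, skew=-0.6, ex_kurt=1.8, alpha=0.01))
```

The Johnson SU alternative compared in the paper would instead fit a four-parameter distribution matching the same four moments and take the quantile from that fitted distribution.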
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; and we then use these estimates of our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
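As a rough illustration of the semi-automatic construction, the sketch below uses a pilot simulation stage to regress parameters on simulated data, takes the fitted values (estimates of the posterior mean) as the summary statistic, and plugs them into plain rejection ABC. The toy normal-location model, the linear regression on raw data points, and all names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 20

def simulate(theta, n=n_obs):
    return rng.normal(theta, 1.0, size=n)

obs = simulate(2.5)  # pretend this is the observed data

# Pilot stage: regress theta on data features to approximate E[theta | data]
theta_pilot = rng.uniform(-5, 5, size=2000)
X = np.array([simulate(t) for t in theta_pilot])
X1 = np.column_stack([np.ones(len(X)), X])               # add intercept
beta, *_ = np.linalg.lstsq(X1, theta_pilot, rcond=None)  # least-squares fit

def summary(data):
    # Fitted value from the pilot regression: an estimated posterior mean
    return np.concatenate([[1.0], data]) @ beta

# Rejection ABC with the learned one-dimensional summary statistic
theta_prop = rng.uniform(-5, 5, size=20000)
dist = np.array([abs(summary(simulate(t)) - summary(obs)) for t in theta_prop])
accepted = theta_prop[dist < np.quantile(dist, 0.01)]    # keep closest 1%
print(accepted.mean(), accepted.std())
```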
Abstract:
We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies, such as intercept corrections or differencing, when location shifts occur as in the recent financial crisis.
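For orientation, the sketch below shows the factor-forecasting baseline that such comparisons build on: extract principal components from a standardized predictor panel and run a direct h-step regression of the target on the factors. Autometrics model selection and impulse-indicator saturation are not reproduced here; the simulated panel and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, h, k = 200, 50, 4, 3            # sample size, panel width, horizon, factors

X = rng.normal(size=(T, N))           # stand-in for a Stock-Watson-style panel
y = rng.normal(size=T)                # stand-in for GDP growth

Z = (X - X.mean(0)) / X.std(0)        # standardize before extracting components
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
F = U[:, :k] * S[:k]                  # first k principal-component factors

# Direct h-step regression: y_{t+h} on factors at time t (plus intercept)
A = np.column_stack([np.ones(T - h), F[:-h]])
b, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
forecast = np.concatenate([[1.0], F[-1]]) @ b   # h-step-ahead forecast
print(forecast)
```

A robust device of the kind the abstract mentions, such as an intercept correction, would add the most recent in-sample forecast error back onto this forecast after a suspected location shift.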
Abstract:
Many macroeconomic series, such as U.S. real output growth, are sampled quarterly, although potentially useful predictors are often observed at a higher frequency. We look at whether a mixed-data sampling (MIDAS) approach can improve forecasts of output growth. The MIDAS specification used in the comparison includes an autoregressive term in a novel way. We find that the use of monthly data on the current quarter leads to significant improvement in forecasting current and next quarter output growth, and that MIDAS is an effective way to exploit monthly data compared with alternative methods.
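A generic MIDAS-AR regression of the kind described can be sketched as follows: quarterly growth is regressed on an exponential-Almon weighted sum of monthly indicator lags plus an autoregressive term, estimated by nonlinear least squares. This is a textbook MIDAS form on simulated data, not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
Tq, m = 120, 12                       # quarters, monthly lags per quarter

x = rng.normal(size=(Tq, m))          # monthly lags aligned to each quarter
y = rng.normal(size=Tq)               # quarterly growth (stand-in data)

def almon_weights(theta, m=m):
    j = np.arange(1, m + 1)
    w = np.exp(theta[0] * j + theta[1] * j**2)
    return w / w.sum()                # normalized exponential Almon weights

def residuals(p):
    c, beta, rho, t1, t2 = p
    w = almon_weights([t1, t2])
    yhat = c + beta * (x[1:] @ w) + rho * y[:-1]   # MIDAS term + AR(1) term
    return y[1:] - yhat

fit = least_squares(residuals, x0=[0.0, 1.0, 0.1, -0.05, -0.001])
print(fit.x)                          # intercept, slope, AR coefficient, Almon params
```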
Abstract:
In the ten years since the first edition of this book appeared there have been significant developments in food process engineering, notably in biotechnology and membrane application. Advances have been made in the use of sensors for process control, and the growth of information technology and on-line computer applications continues apace. In addition, plant investment decisions are increasingly determined by quality assurance considerations and have to incorporate a greater emphasis on health and safety issues. The content of this edition has been rearranged to include descriptions of recent developments and to reflect the influence of new technology on the control and operations of automated plant. Original examples have been retained where relevant and these, together with many new illustrations, provide a comprehensive guide to good practice.
Abstract:
Causal attribution has been one of the most influential frameworks in the literature on achievement motivation, but previous studies treated achievement attribution as a relatively deliberate and effortful process. In the current study, we tested the hypothesis that people automatically attribute their achievement failure to their ability, but reduce the ability attribution in a controlled manner. To address this hypothesis, we measured participants’ causal attribution beliefs for their task failure either under cognitive load (load condition) or with full attention (no-load condition). Across two studies, participants attributed task performance to their ability more in the load condition than in the no-load condition. The increased ability attribution under cognitive load further affected intrinsic motivation. These results indicate that the cognitive resources available after feedback play a crucial role in determining causal attribution beliefs, as well as achievement motivation.
Effects of temporal resolution of input precipitation on the performance of hydrological forecasting
Abstract:
Flood prediction systems rely on good-quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data scored better than the DP data when evaluated against the forecasts for lead times up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two datasets. The flood forecasting study showed that both datasets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialization of hydrological models can improve flood forecasting.
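The verification scores mentioned here are simple to compute from a 2×2 contingency table of forecast and observed threshold exceedances. The sketch below assumes the common convention that FAR denotes the false alarm ratio (false alarms divided by all warnings issued); the threshold and data are illustrative.

```python
import numpy as np

def hit_rate_far(forecast, observed, threshold):
    f = np.asarray(forecast) >= threshold    # warning issued
    o = np.asarray(observed) >= threshold    # flood occurred
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    hit_rate = hits / (hits + misses)              # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio convention
    return hit_rate, far

# Example with simulated discharge series (m^3/s) and a flood threshold
rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 50.0, size=365)
fcst = obs + rng.normal(0, 20, size=365)
print(hit_rate_far(fcst, obs, threshold=200.0))
```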
Abstract:
We present a case study using the TIGGE database for flood warning in the Upper Huai catchment (ca. 30,672 km²). TIGGE ensemble forecasts from six meteorological centres with 10-day lead time were extracted and disaggregated to drive the Xinanjiang model to forecast discharges for flood events in July–September 2008. The results demonstrated satisfactory flood forecasting skill, with clear signals of floods up to 10 days in advance. The forecasts occasionally showed discrepancies in both time and space. Forecasting quality could potentially be improved by applying temporal and spatial corrections to the forecasted precipitation.
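The abstract does not specify the disaggregation scheme, so the sketch below shows the simplest possibility for illustration: spreading each coarse-step precipitation total uniformly across finer steps before driving a rainfall-runoff model.

```python
import numpy as np

def disaggregate_uniform(totals, substeps=6):
    """Spread each coarse-step total evenly over `substeps` finer steps."""
    totals = np.asarray(totals, dtype=float)
    return np.repeat(totals / substeps, substeps)  # mass-preserving split

six_hourly = [12.0, 0.0, 3.6]              # mm per 6-hour step (illustrative)
print(disaggregate_uniform(six_hourly))    # 18 hourly values, totals preserved
```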
Abstract:
Following trends in operational weather forecasting, where ensemble prediction systems (EPS) are now increasingly the norm, flood forecasters are beginning to experiment with using similar ensemble methods. Most of the effort to date has focused on the substantial technical challenges of developing coupled rainfall-runoff systems to represent the full cascade of uncertainties involved in predicting future flooding. As a consequence, much less attention has been given to the communication and eventual use of EPS flood forecasts. Drawing on interviews and other research with operational flood forecasters from across Europe, this paper highlights a number of challenges to communicating and using ensemble flood forecasts operationally. It is shown that operational flood forecasters understand the skill, operational limitations, and informational value of EPS products in a variety of different and sometimes contradictory ways. Despite the efforts of forecasting agencies to design effective ways to communicate EPS forecasts to non-experts, operational flood forecasters were often skeptical about the ability of forecast recipients to understand or use them appropriately. It is argued that better training and closer contacts between operational flood forecasters and EPS system designers can help ensure that the uncertainty represented by EPS forecasts is communicated in ways that are most appropriate and meaningful for their intended consumers. Some fundamental political and institutional challenges to using ensembles are also highlighted, such as differing attitudes to false alarms and to responsibility for the management of blame in the event of poor or mistaken forecasts.
Abstract:
Operational medium-range flood forecasting systems are increasingly moving towards the adoption of ensembles of numerical weather predictions (NWP), known as ensemble prediction systems (EPS), to drive their predictions. We review the scientific drivers of this shift towards ‘ensemble flood forecasting’ and discuss several of the questions surrounding best practice in using EPS in flood forecasting systems. We also review the literature evidence of the ‘added value’ of flood forecasts based on EPS and point to remaining key challenges in using EPS successfully.