982 results for Forecasting methods
Abstract:
Due to the variability of wind power, it is imperative to forecast wind generation accurately and in a timely manner to enhance the flexibility and reliability of real-time power system operation and control. Special events such as ramps and spikes are hard to predict with traditional methods that use only recently measured data. In this paper, a new Gaussian Process model with hybrid training data, taken from both the local time window and the historical dataset, is proposed and applied to make short-term predictions from 10 minutes to one hour ahead. A key idea is that historical data with similar patterns are properly selected and embedded in the Gaussian Process model to make predictions. The results of the proposed algorithms are compared to those of the standard Gaussian Process model and the persistence model. It is shown that the proposed method reduces not only magnitude error but also phase error.
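The hybrid-training idea described above can be sketched as follows. This is a toy illustration, not the paper's model: the lagged-window representation, Euclidean pattern similarity, RBF kernel, window length and hyperparameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic wind-power-like series: a diurnal cycle plus noise (illustrative).
t = np.arange(600)
series = np.sin(2 * np.pi * t / 48) + 0.1 * rng.standard_normal(t.size)

LAG = 6          # length of the input pattern (assumed)
K_SIMILAR = 20   # number of similar historical patterns to borrow (assumed)

def make_pairs(x, lag):
    """Turn a series into (lagged-window, next-value) training pairs."""
    X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
    return X, x[lag:]

X_hist, y_hist = make_pairs(series[:550], LAG)    # full historical record
X_loc, y_loc = make_pairs(series[500:550], LAG)   # recent local data

# Select the K historical patterns closest to the latest observed window.
query = series[550 - LAG:550]
dist = np.linalg.norm(X_hist - query, axis=1)
idx = np.argsort(dist)[:K_SIMILAR]

# Hybrid training set: local pairs plus the selected historical pairs.
X_tr = np.vstack([X_loc, X_hist[idx]])
y_tr = np.concatenate([y_loc, y_hist[idx]])

def gp_predict(X_tr, y_tr, X_te, ell=1.0, noise=0.1):
    """Plain zero-mean GP regression with an RBF kernel."""
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)
    K = kern(X_tr, X_tr) + noise ** 2 * np.eye(len(X_tr))
    return kern(X_te, X_tr) @ np.linalg.solve(K, y_tr)

pred = gp_predict(X_tr, y_tr, query[None, :])[0]
persistence = series[549]   # benchmark: last observed value carried forward
truth = series[550]
```

The persistence benchmark here is simply the last observation, which is the standard baseline the abstract compares against.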
Abstract:
This paper applies Gaussian estimation methods to continuous time models for modelling overseas visitors to the UK. Continuous time modelling is widely used in economics and finance but not in tourism forecasting. Using monthly data for 1986–2010, various continuous time models are estimated and compared to autoregressive integrated moving average (ARIMA) and autoregressive fractionally integrated moving average (ARFIMA) models. Dynamic forecasts are obtained over different periods. The empirical results show that the ARIMA model performs very well, but that the constant elasticity of variance (CEV) continuous time model has the lowest root mean squared error (RMSE) over a short period.
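The dynamic-forecast RMSE comparison described above can be illustrated with a toy example. The series, the AR(1) model and the 12-month hold-out are invented for illustration; the paper's CEV and ARFIMA models are not reproduced here.

```python
import math, random

random.seed(1)

# Simulated monthly visitor-style series: AR(1) around a mean (illustrative).
phi_true, mu = 0.8, 100.0
y = [mu]
for _ in range(299):
    y.append(mu + phi_true * (y[-1] - mu) + random.gauss(0, 2.0))

train, test = y[:288], y[288:]   # hold out the last 12 "months"

# Fit AR(1) by ordinary least squares: y_t - ybar = phi * (y_{t-1} - ybar).
ybar = sum(train) / len(train)
num = sum((train[t] - ybar) * (train[t - 1] - ybar) for t in range(1, len(train)))
den = sum((train[t - 1] - ybar) ** 2 for t in range(1, len(train)))
phi = num / den

# Dynamic (multi-step) forecasts: feed forecasts back in, never held-out data.
f, last = [], train[-1]
for _ in range(len(test)):
    last = ybar + phi * (last - ybar)
    f.append(last)

def rmse(forecast, actual):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(forecast, actual)) / len(actual))

rmse_ar = rmse(f, test)
rmse_naive = rmse([train[-1]] * len(test), test)   # no-change benchmark
```

"Dynamic" here means each forecast is built on previous forecasts rather than on realised values, which is how multi-period out-of-sample comparisons such as the one in the abstract are usually run.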
Abstract:
Load forecasting has gradually become a major field of research in the electricity industry. It is extremely important for the electric sector in a deregulated environment, as it provides useful support to power system management. Accurate load forecasting models are required for the operation and planning of a utility company, and they have received increasing attention from researchers in this field. Many mathematical methods have been developed for load forecasting. This work aims to develop and implement a method for short-term load forecasting (STLF) based on Holt-Winters exponential smoothing and an artificial neural network (ANN). One of the main contributions of this paper is the application of the Holt-Winters exponential smoothing approach to the forecasting problem; building on past forecasting work, data mining techniques are also applied to short-term load forecasting. Both the ANN and Holt-Winters exponential smoothing approaches are compared and evaluated.
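A textbook additive Holt-Winters smoother of the kind named above might look like this. The initialisation scheme, smoothing constants and the synthetic hourly load are assumptions, not the paper's configuration.

```python
import math

def holt_winters_additive(y, season, alpha=0.3, beta=0.05, gamma=0.2, h=24):
    """Additive Holt-Winters: level + trend + seasonal components."""
    # Initialise from the first season: level = mean, trend = 0,
    # seasonals = deviations from that mean.
    level = sum(y[:season]) / season
    trend = 0.0
    seas = [v - level for v in y[:season]]

    for t in range(season, len(y)):
        s = seas[t % season]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seas[t % season] = gamma * (y[t] - level) + (1 - gamma) * s

    # h-step-ahead forecasts: extrapolate level/trend, reuse seasonals.
    n = len(y)
    return [level + (k + 1) * trend + seas[(n + k) % season] for k in range(h)]

# Hourly "load" with a daily cycle (synthetic, for illustration only).
load = [100 + 20 * math.sin(2 * math.pi * t / 24) + 0.5 * math.sin(t)
        for t in range(240)]
fc = holt_winters_additive(load, season=24, h=24)
```

For STLF with a dominant daily cycle, a season length of 24 hours is the natural choice; real deployments often layer a weekly seasonal on top.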
Abstract:
For the past 20 years, researchers have applied the Kalman filter to modeling and forecasting the term structure of interest rates. Despite its impressive performance in in-sample fitting of yield curves, little research has focused on out-of-sample forecasting of yield curves using the Kalman filter. The goal of this thesis is to develop a unified dynamic model based on Diebold and Li (2006) and Nelson and Siegel's (1987) three-factor model, and to estimate this dynamic model using the Kalman filter. We compare both the in-sample and out-of-sample performance of our dynamic methods with various other models in the literature. We find that our dynamic model dominates existing models in medium- and long-horizon yield curve predictions. However, the dynamic model should be used with caution when forecasting short maturity yields.
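The filtering machinery behind such state-space estimation can be sketched in a scalar analogue: a single AR(1) latent factor observed with noise. The three-factor Nelson-Siegel measurement equation of the thesis is not reproduced; all parameters below are illustrative.

```python
import math, random

random.seed(2)

# Scalar state-space analogue of a dynamic factor model:
#   state:       x_t = a * x_{t-1} + w_t,  w ~ N(0, q)
#   observation: y_t = x_t + v_t,          v ~ N(0, r)
a, q, r = 0.9, 0.5, 1.0
x, obs, states = 0.0, [], []
for _ in range(500):
    x = a * x + random.gauss(0, math.sqrt(q))
    states.append(x)
    obs.append(x + random.gauss(0, math.sqrt(r)))

# Kalman filter recursions (predict / update).
m, p = 0.0, 1.0          # current state mean and variance
filt = []
for y in obs:
    m, p = a * m, a * a * p + q        # predict
    k = p / (p + r)                    # Kalman gain
    m = m + k * (y - m)                # update mean with the innovation
    p = (1 - k) * p                    # update variance
    filt.append(m)

# The filtered estimate should track the state better than raw observations.
mse_filt = sum((f - s) ** 2 for f, s in zip(filt, states)) / len(states)
mse_obs = sum((y - s) ** 2 for y, s in zip(obs, states)) / len(states)
```

In the full model the state is the vector of three Nelson-Siegel factors and the observation equation maps factors to yields at many maturities, but the predict/update structure is identical.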
Abstract:
Production Planning and Control (PPC) systems have grown and changed because of developments in planning tools and models as well as the use of computers and information systems in this area. Although much is available in research journals, the practice of PPC lags behind and makes little use of published research. PPC practices in SMEs lag behind for many reasons, which need to be explored. This research work deals with the effect on firm performance of identified variables such as the forecasting, planning and control methods adopted, the demographics of the key person, the standardization practices followed, and the effect of training, learning and IT usage. A model and framework have been developed based on the literature. The model was tested empirically using data collected through a questionnaire administered to selected respondents from Small and Medium Enterprises (SMEs) in India; the final dataset included 382 responses. Hypotheses linking SME performance with the use of forecasting, planning and control were formulated and tested. Exploratory factor analysis was used for data reduction and for identifying the factor structure. High- and low-performing firms were classified using a logistic regression model. A confirmatory factor analysis was used to study the structural relationship between firm performance and the dependent variables.
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
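The transform-model-back-transform cycle described above can be sketched with the additive log-ratio (ALR) transform from Compositional Data Analysis. The four-group composition and the drift step are purely illustrative; the paper's Lee-Carter-style dynamics are not reproduced.

```python
import math

def alr(comp):
    """Additive log-ratio: map a composition (positive, sums to 1) to R^(D-1)."""
    last = comp[-1]
    return [math.log(c / last) for c in comp[:-1]]

def alr_inv(z):
    """Inverse ALR: back to a positive vector honouring the unit-sum constraint."""
    e = [math.exp(v) for v in z] + [1.0]
    s = sum(e)
    return [v / s for v in e]

# Toy "density of deaths" over four age groups (sums to one).
dx = [0.05, 0.15, 0.50, 0.30]
z = alr(dx)

# Any linear operation (e.g. a forecast step) can be done in real space...
z_forecast = [v + 0.1 for v in z]    # illustrative drift, not Lee-Carter

# ...and the back-transformed result is again a valid composition.
dx_forecast = alr_inv(z_forecast)
```

The point of the transform is exactly what the abstract states: ordinary multivariate time-series machinery can be applied in real space, and the inverse map guarantees positive values summing to one.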
Abstract:
Fire blight is a disease affecting plants of the Rosaceae family, caused by the bacterium Erwinia amylovora. Its host range includes fruit trees, such as pear, apple and quince, and ornamental plants of great commercial and economic interest. The disease has now spread and is widely distributed throughout the temperate regions of the world. In Spain, where the disease is not endemic, fire blight was first detected in 1995 in the north of the country (Basque Country); several outbreaks subsequently appeared in other locations and were duly eradicated. Control of fire blight is very ineffective in plants already affected by the disease, so it is based on measures aimed at preventing the dispersal of the pathogen and the introduction of the disease into non-endemic regions. In this work, thermotherapy was evaluated as a method for eradicating E. amylovora from asymptomatic plant propagation material. Thermotherapy was shown to be a viable method of eradicating E. amylovora from propagation material. Nearly all Rosaceae species and varieties kept under humid conditions survived 7 hours at 45 ºC and more than 3 hours at 50 ºC, whereas more than 1 hour of exposure to 50 ºC with dry heat damaged the plant material and reduced sprouting. Treatments of 60 min at 45 ºC or 30 min at 50 ºC were sufficient to reduce the epiphytic population of E. amylovora to undetectable levels (5 × 10^2 cfu g^-1 fresh weight) on pear branches. Phosphonate derivatives and benzothiadiazole are effective in controlling fire blight in pear and apple under laboratory, greenhouse and field conditions. Plant defence inducers reduce disease levels by 40-60%.
The minimum time intervals to achieve the best disease control were 5 days for fosetyl-Al and 7 days for ethephon and benzothiadiazole, and the optimal doses for fosetyl-Al and benzothiadiazole were 3.72 g HPO3^2- L^-1 and 150 mg a.i. L^-1, respectively. The efficacy of fosetyl-Al and benzothiadiazole in fire blight control is improved when they are combined with antibiotics at half the dose of the latter. Although the strategy of mixing products is more practical and easier to implement in the field than the strategy of combining products, the best level of disease control is achieved with the combination strategy. The effect of benzothiadiazole and phosphonates on the Erwinia amylovora-pear interaction was analysed at the histological and ultrastructural level. Neither benzothiadiazole, fosetyl-Al nor ethephon induced structural changes in pear tissues 7 days after application. However, after inoculation with E. amylovora, structural cell disorganization was observed in plants treated with fosetyl-Al and ethephon, whereas in plants treated with benzothiadiazole these tissue alterations were delayed. Two models (Maryblyt, Cougarblight) were evaluated in a field in Spain affected by the disease, to determine the accuracy of their predictions. Two models were used to produce the risk map: the combined BRS-Powell and the modified BIS95. The results showed two zones of high and low disease risk. Maryblyt and Cougarblight are easy-to-use models, although their implementation in disease management programmes requires that they be evaluated and validated over a longer period and in areas where the disease is present.
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
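A minimal exact Gauss-Newton iteration for a tiny nonlinear least squares problem, solving each inner linear problem via the 2×2 normal equations (fine at this scale; large-scale data assimilation uses the iterative inner solvers the abstract discusses). The exponential test function is an assumption for illustration.

```python
import math

def gauss_newton(residual, jacobian, x0, iters=20):
    """Exact Gauss-Newton for a 2-parameter nonlinear least squares problem.

    Each step solves J dx = -r in the least squares sense; no second-order
    derivatives of the residuals are needed, which is the method's appeal."""
    x = list(x0)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Assemble the 2x2 normal equations J^T J dx = -J^T r.
        a = sum(j[0] * j[0] for j in J)
        b = sum(j[0] * j[1] for j in J)
        c = sum(j[1] * j[1] for j in J)
        g0 = sum(j[0] * ri for j, ri in zip(J, r))
        g1 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a * c - b * b
        x = [x[0] - (c * g0 - b * g1) / det,
             x[1] - (a * g1 - b * g0) / det]
    return x

# Fit y = p0 * exp(p1 * t) to noiseless data generated from p = (2, -1).
ts = [0.1 * i for i in range(20)]
data = [2.0 * math.exp(-1.0 * t) for t in ts]

def residual(p):
    return [p[0] * math.exp(p[1] * t) - y for t, y in zip(ts, data)]

def jacobian(p):
    # Rows are [dr_i/dp0, dr_i/dp1].
    return [[math.exp(p[1] * t), p[0] * t * math.exp(p[1] * t)] for t in ts]

p = gauss_newton(residual, jacobian, [1.0, 0.0])
```

A "truncated" variant would stop the inner solve early, and a "perturbed" variant would replace J with a simplified operator; both reuse exactly this outer loop.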
Abstract:
A new method of clear-air turbulence (CAT) forecasting based on the Lighthill–Ford theory of spontaneous imbalance and emission of inertia–gravity waves has been derived and applied on episodic and seasonal time scales. A scale analysis of this shallow-water theory for midlatitude synoptic-scale flows identifies advection of relative vorticity as the leading-order source term. Examination of leading- and second-order terms elucidates previous, more empirically inspired CAT forecast diagnostics. Application of the Lighthill–Ford theory to the Upper Mississippi and Ohio Valleys CAT outbreak of 9 March 2006 results in good agreement with pilot reports of turbulence. Application of Lighthill–Ford theory to CAT forecasting for the 3 November 2005–26 March 2006 period using 1-h forecasts of the Rapid Update Cycle 2 (RUC-2) 1500 UTC model run leads to superior forecasts compared to the current operational version of the Graphical Turbulence Guidance (GTG1) algorithm, the most skillful operational CAT forecasting method in existence. The results suggest that major improvements in CAT forecasting could result if the methods presented herein become operational.
Abstract:
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub-optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out-of-sample forecasting performance of various linear and GARCH-type models of volatility is compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision-making.
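A one-step GARCH(1,1) volatility recursion, the simplest member of the GARCH family compared in such studies, can be sketched as follows. The parameters are fixed and illustrative rather than estimated, and this is a univariate sketch, not the paper's multivariate approach.

```python
import math, random

random.seed(3)

# GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
omega, alpha, beta = 0.05, 0.10, 0.85   # illustrative, not estimated

# Simulate returns from the model, running the variance recursion alongside
# as a one-step-ahead volatility forecaster.
sigma2 = omega / (1 - alpha - beta)     # start at the unconditional variance
rets, sig2s = [], []
for _ in range(1000):
    r = math.sqrt(sigma2) * random.gauss(0, 1)
    rets.append(r)
    sig2s.append(sigma2)
    sigma2 = omega + alpha * r * r + beta * sigma2

# One-step-ahead variance forecast after the sample...
sigma2_next = omega + alpha * rets[-1] ** 2 + beta * sig2s[-1]

# ...and the long-run level that multi-step forecasts decay towards,
# at geometric rate (alpha + beta) per step.
long_run = omega / (1 - alpha - beta)
sigma2_10step = long_run + (alpha + beta) ** 10 * (sigma2_next - long_run)
```

Evaluating such variance forecasts with MSE versus a risk-management criterion (e.g. Value-at-Risk exceedances) is precisely where the abstract finds the rankings diverge.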
Abstract:
A set of random variables is exchangeable if its joint distribution function is invariant under permutation of the arguments. The concept of exchangeability is discussed, with a view towards potential application in evaluating ensemble forecasts. It is argued that the paradigm of ensembles being independent draws from an underlying distribution function is probably too narrow; allowing ensemble members to be merely exchangeable might be a more versatile model. The question of whether established methods of ensemble evaluation need alteration under this model is discussed, with reliability given particular attention. It turns out that the standard methodology of rank histograms can still be applied. As a first application of the exchangeability concept, it is shown that the method of minimum spanning trees to evaluate the reliability of high-dimensional ensembles is mathematically sound.
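The rank-histogram check that remains valid under exchangeability can be sketched as follows: when the observation and the ensemble members are exchangeable draws, every rank of the observation within the ensemble is equally likely, so the histogram is flat. The ensemble size and sample count below are arbitrary.

```python
import random

random.seed(4)

def rank_of_obs(obs, ensemble):
    """Rank of the observation within the ensemble (ties broken at random)."""
    below = sum(1 for m in ensemble if m < obs)
    ties = sum(1 for m in ensemble if m == obs)
    return below + random.randint(0, ties)

M = 9                    # ensemble size -> M + 1 possible rank bins
counts = [0] * (M + 1)
for _ in range(5000):
    # Exchangeable case: observation and members drawn from the same law.
    draws = [random.gauss(0, 1) for _ in range(M + 1)]
    obs, ens = draws[0], draws[1:]
    counts[rank_of_obs(obs, ens)] += 1

# Under exchangeability each of the M + 1 ranks is equally likely.
expected = 5000 / (M + 1)
max_dev = max(abs(c - expected) for c in counts)
```

Note that nothing in this construction requires the members to be independent of one another, only that observation and members can be permuted without changing the joint law, which is the abstract's point.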
Abstract:
The Normal Quantile Transform (NQT) has been used in many hydrological and meteorological applications in order to make the Cumulative Distribution Function (CDF) of observed, simulated and forecast river discharge, water level or precipitation data Gaussian. It is also at the heart of the meta-Gaussian model for assessing the total predictive uncertainty of the Hydrological Uncertainty Processor (HUP) developed by Krzysztofowicz. In the field of geo-statistics this transformation is better known as the Normal-Score Transform. In this paper some possible problems caused by small sample sizes when applying the NQT in flood forecasting systems are discussed, and a novel way to solve the problem is outlined, combining extreme value analysis and non-parametric regression methods. The method is illustrated by examples of hydrological stream-flow forecasts.
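An empirical NQT maps each value to the standard normal quantile of its plotting position. A minimal sketch follows; the Weibull plotting position i/(n+1) and the log-normal toy data are assumptions, and the small-sample tail extrapolation the paper addresses is deliberately left out.

```python
import random
from statistics import NormalDist

random.seed(5)
nd = NormalDist()

def nqt(sample):
    """Empirical Normal Quantile Transform: replace each value by the
    standard normal quantile of its plotting position i / (n + 1)."""
    n = len(sample)
    order = sorted(range(n), key=lambda i: sample[i])
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = nd.inv_cdf(rank / (n + 1))
    return z

# Skewed "streamflow-like" data: exponential of a normal variate.
flows = [2.718 ** random.gauss(0, 1) for _ in range(500)]
z = nqt(flows)

mean_z = sum(z) / len(z)
var_z = sum(v * v for v in z) / len(z)
```

The transform is rank-based, so it is monotone and distribution-free; its weakness, as the abstract notes, is that values beyond the largest observed flow have no empirical quantile, which is where extreme value analysis comes in.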
Abstract:
Many macroeconomic series, such as U.S. real output growth, are sampled quarterly, although potentially useful predictors are often observed at a higher frequency. We look at whether a mixed data sampling (MIDAS) approach can improve forecasts of output growth. The MIDAS specification used in the comparison uses a novel way of including an autoregressive term. We find that the use of monthly data on the current quarter leads to significant improvement in forecasting current and next quarter output growth, and that MIDAS is an effective way to exploit monthly data compared with alternative methods.
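The core MIDAS device is a parsimonious weight function that aggregates high-frequency observations into a regressor for the low-frequency equation. A sketch with exponential Almon weights follows; every coefficient below is hypothetical, and this is not the paper's exact specification.

```python
import math

def exp_almon_weights(K, theta1, theta2):
    """Exponential Almon lag polynomial: nonnegative weights summing to one,
    so K monthly lags cost only two parameters."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(1, K + 1)]
    s = sum(raw)
    return [w / s for w in raw]

# Weight three monthly observations of an indicator into one quarterly regressor.
w = exp_almon_weights(3, 0.1, -0.2)          # theta values are hypothetical

monthly_indicator = [1.2, 0.8, 1.5]          # months within the current quarter
x_quarter = sum(wi * mi for wi, mi in zip(w, monthly_indicator))

# The MIDAS regression then uses x_quarter, plus an autoregressive term as in
# the paper, to predict quarterly growth; coefficients here are illustrative.
beta0, beta1, rho = 0.5, 0.4, 0.3
y_prev = 0.6                                 # last quarter's growth
y_hat = beta0 + rho * y_prev + beta1 * x_quarter
```

Tying the monthly weights to a two-parameter polynomial is what keeps the regression estimable even when many high-frequency lags enter.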
Abstract:
Following trends in operational weather forecasting, where ensemble prediction systems (EPS) are now increasingly the norm, flood forecasters are beginning to experiment with using similar ensemble methods. Most of the effort to date has focused on the substantial technical challenges of developing coupled rainfall-runoff systems to represent the full cascade of uncertainties involved in predicting future flooding. As a consequence much less attention has been given to the communication and eventual use of EPS flood forecasts. Drawing on interviews and other research with operational flood forecasters from across Europe, this paper highlights a number of challenges to communicating and using ensemble flood forecasts operationally. It is shown that operational flood forecasters understand the skill, operational limitations, and informational value of EPS products in a variety of different and sometimes contradictory ways. Despite the efforts of forecasting agencies to design effective ways to communicate EPS forecasts to non-experts, operational flood forecasters were often skeptical about the ability of forecast recipients to understand or use them appropriately. It is argued that better training and closer contacts between operational flood forecasters and EPS system designers can help ensure the uncertainty represented by EPS forecasts is represented in ways that are most appropriate and meaningful for their intended consumers, but some fundamental political and institutional challenges to using ensembles, such as differing attitudes to false alarms and to responsibility for management of blame in the event of poor or mistaken forecasts are also highlighted. Copyright © 2010 Royal Meteorological Society.
Abstract:
We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical of macroeconomic data.
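A two-regime SETAR(1) data-generating process of the kind used in such Monte Carlo comparisons can be sketched as follows, together with one-step point forecasts from the true SETAR model and from a fitted linear AR(1). The threshold, delay and regime coefficients are illustrative, not the paper's design.

```python
import random

random.seed(6)

# Two-regime SETAR(1) with delay 1 and threshold 0:
#   y_t = 0.9 * y_{t-1} + e_t   if y_{t-1} <= 0
#   y_t = 0.3 * y_{t-1} + e_t   if y_{t-1} >  0
def setar_mean(y_prev):
    return (0.9 if y_prev <= 0 else 0.3) * y_prev

y = [0.0]
for _ in range(2000):
    y.append(setar_mean(y[-1]) + random.gauss(0, 1))

# Linear AR(1) slope by OLS: it averages the two regimes.
num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
phi = num / den

# One-step point forecasts from both models over the sample.
mse_setar, mse_ar = 0.0, 0.0
for t in range(1, len(y)):
    mse_setar += (y[t] - setar_mean(y[t - 1])) ** 2
    mse_ar += (y[t] - phi * y[t - 1]) ** 2
mse_setar /= len(y) - 1
mse_ar /= len(y) - 1
```

The true SETAR conditional mean beats the best linear forecast in MSE, yet the gap can be modest; that modest gap is why, as the abstract reports, interval and density evaluation on macro-sized samples often fails to reject the linear model.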