888 results for Forecasting accuracy
Abstract:
Surveys of economic climate collect opinions of managers about the short-term future evolution of their business. Interviews are carried out on a regular basis, and responses record optimistic, neutral or pessimistic views about the economic outlook. We propose a method to evaluate the sampling error of the average opinion derived from a particular type of survey data. Our variance estimate is useful for interpreting historical trends and for deciding whether changes in the index from one period to another reflect a structural change or whether ups and downs can be attributed to sampling randomness. An illustration using real data from a survey of business managers' opinions is discussed.
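The abstract does not reproduce the estimator itself; as a point of reference, a standard sketch for the balance statistic under multinomial sampling (our notation, not necessarily the paper's) is
\[
\hat{B} = \hat{p}^{+} - \hat{p}^{-}, \qquad
\widehat{\operatorname{Var}}(\hat{B}) = \frac{\hat{p}^{+} + \hat{p}^{-} - (\hat{p}^{+} - \hat{p}^{-})^{2}}{n},
\]
where \(\hat{p}^{+}\) and \(\hat{p}^{-}\) are the sample shares of optimistic and pessimistic responses among \(n\) respondents. A change in the index between two periods can then be screened against, say, two standard errors before being read as structural.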
Abstract:
A network of 25 sonic stage sensors was deployed in the Squaw Creek basin upstream of Ames, Iowa, to determine whether the state-of-the-art distributed hydrological model CUENCAS can produce reliable information for all road crossings, including those that cross small creeks draining basins as small as 1 sq. mile. A hydraulic model was implemented for the major tributaries of the Squaw Creek where IFC sonic instruments were deployed, and it was coupled to CUENCAS to validate the predictions made at small tributaries in the basin. This study demonstrates that the predictions made by the hydrological model at internal locations in the basin are as accurate as the predictions made at the outlet of the basin. Final rating curves based on surveyed cross sections were developed for the 22 IFC-bridge sites that are currently operating, and routine forecasts are provided at those locations (see IFIS). Rating curves were developed for 60 additional bridge locations in the basin; however, we do not use those rating curves for routine forecasting because LiDAR-derived cross sections lack the required accuracy. The results of our work form the basis for two papers that have been submitted for publication to the Journal of Hydrological Engineering. Peer review of our work will give a strong footing to our ability to expand our results from the pilot Squaw Creek basin to all basins in Iowa.
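The abstract does not give the form of the rating curves; a minimal sketch of the common power-law fit to surveyed stage-discharge pairs (all numbers and the cease-to-flow stage are hypothetical) is:

```python
import numpy as np

# Hypothetical surveyed pairs of stage (m) and discharge (m^3/s)
stage = np.array([0.8, 1.1, 1.5, 2.0, 2.6])
discharge = np.array([2.1, 5.0, 11.8, 24.5, 46.0])

h0 = 0.5  # assumed cease-to-flow stage (site-specific)
# Fit log Q = log C + b * log(h - h0) by least squares
b, logC = np.polyfit(np.log(stage - h0), np.log(discharge), 1)

def rating_curve(h):
    """Power-law rating curve Q = C * (h - h0)**b."""
    return np.exp(logC) * (h - h0) ** b

print(rating_curve(1.8))  # discharge estimate for a sonic stage reading
```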
Abstract:
The main objective of this master's thesis was to quantitatively study the reliability of the market and sales forecasts of a certain company by measuring the bias, precision and accuracy of these forecasts against actual values. Secondly, differences in bias, precision and accuracy between markets were explained by various macroeconomic variables and market characteristics. The accuracy and precision of the forecasts seem to vary significantly depending on the market being forecast, the variable being forecast, the estimation period, the length of the estimated period, the forecast horizon and the granularity of the data. High inflation, a low income level and high year-on-year market volatility seem to be associated with higher annual market forecast uncertainty, and high year-on-year sales volatility with higher sales forecast uncertainty. When quarterly market size is forecast, the correlation between macroeconomic variables and forecast errors weakens. The uncertainty of the sales forecasts cannot be explained by macroeconomic variables. Longer forecasts are more uncertain, a shorter estimated period leads to higher uncertainty, and more recent market forecasts are usually less uncertain. Sales forecasts seem to be more uncertain than market forecasts because they incorporate both market size and market share risks. When the lead time is more than one year, forecast risk seems to grow as a function of the square root of the forecast horizon. When the lead time is less than a year, sequential error terms are typically correlated, and therefore forecast errors trend or mean-revert. The bias of the forecasts seems to change in cycles, so future forecasts cannot be systematically adjusted for it. The MASE cannot be used to measure whether a forecast anticipates year-on-year volatility; instead, we constructed a new relative accuracy measure to cope with this particular situation.
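For reference, the MASE that the thesis finds unsuitable for this purpose is the mean absolute scaled error of Hyndman and Koehler (2006):
\[
\mathrm{MASE} = \frac{\tfrac{1}{m}\sum_{t=1}^{m} \lvert e_{t} \rvert}{\tfrac{1}{n-1}\sum_{t=2}^{n} \lvert Y_{t} - Y_{t-1} \rvert},
\]
where the \(e_t\) are the forecast errors being evaluated and the denominator is the in-sample mean absolute error of the naive one-step forecast. Because the scaling term is blind to whether a forecast tracks year-on-year swings, a relative measure with a different benchmark is needed.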
Abstract:
Budget forecasts have become increasingly important as a tool of fiscal management for influencing the expectations of bond markets and the public at large. The inherent difficulty of projecting macroeconomic variables, together with political bias, thwarts the accuracy of budget forecasts. We improve accuracy by combining the forecasts of both private and public agencies for Italy over the period 1993-2012. A weighted combined forecast of the deficit/GDP ratio is superior to any single forecast. Deficits are hard to predict due to shifting economic conditions and political events. We test and compare predictive accuracy over time, and although the weighted combined forecast is robust to breaks, there is no significant improvement over a simple random walk (RW) model.
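The abstract does not state the weighting scheme; a minimal sketch of one common choice, inverse-MSE weights over the agencies' past errors (all names and numbers hypothetical), is:

```python
import numpy as np

def inverse_mse_weights(errors):
    """Weight each forecaster by the inverse of its historical MSE.

    errors: (T, k) array of past forecast errors from k agencies.
    A hypothetical scheme; the paper's actual weighting may differ.
    """
    mse = np.mean(errors ** 2, axis=0)
    weights = 1.0 / mse
    return weights / weights.sum()

# Hypothetical deficit/GDP forecasts (percent) from three agencies
past_errors = np.array([[0.4, -0.2, 0.6],
                        [0.1, -0.5, 0.3],
                        [-0.3, 0.2, 0.5]])
current_forecasts = np.array([2.8, 3.1, 3.4])
weights = inverse_mse_weights(past_errors)
print(weights @ current_forecasts)   # weighted combined forecast
```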
Abstract:
The growing population of cities increases energy demand and affects the environment through higher carbon emissions. Information and communications technology solutions that enable energy optimization are needed to address this growing energy demand in cities and to reduce carbon emissions. District heating systems optimize energy production by reusing waste energy in combined heat and power plants. Forecasting the heat load demand of residential buildings assists in optimizing energy production and consumption in a district heating system. However, the presence of a large number of factors, such as the weather forecast, district heating operational parameters and user behavioural parameters, makes heat load forecasting a challenging task. This thesis proposes a probabilistic machine learning model using a Naive Bayes classifier to forecast the hourly heat load demand of three residential buildings in the city of Skellefteå, Sweden, over the winter and spring seasons. The district heating data collected from sensors installed at the residential buildings in Skellefteå is used to build a Bayesian network that forecasts the heat load demand for horizons of 1, 2, 3, 6 and 24 hours. The proposed model is validated using four cases that study the influence of various parameters on the heat load forecast, through trace-driven analysis in Weka and GeNIe. Results show that current heat load consumption and the outdoor temperature forecast are the two parameters with the most influence on the heat load forecast. The proposed model achieves average accuracies of 81.23% and 76.74% for a forecast horizon of 1 hour in the three buildings for the winter and spring seasons, respectively. The model also achieves an average accuracy of 77.97% across the three buildings and both seasons for the 1-hour forecast horizon while using only 10% of the training data. The results indicate that even a simple model such as a Naive Bayes classifier can forecast heat load demand with little training data.
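The thesis builds its Bayesian network in Weka and GeNIe; as a rough, hypothetical analogue of the classification step, a Gaussian Naive Bayes model over the two most influential inputs might look like this (synthetic data, scikit-learn standing in for Weka):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-ins for the two most influential inputs reported in
# the thesis: current heat load and the outdoor temperature forecast.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                   # [current_load, temp_forecast]
y = (X[:, 0] - 0.8 * X[:, 1] > 0).astype(int)   # next-hour load class (binned)

model = GaussianNB().fit(X[:50], y[:50])        # ~10% of data, as in the thesis
print(model.score(X[50:], y[50:]))              # held-out classification accuracy
```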
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular, ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved without adding computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of the state space. The advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation with deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models: an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed, and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion of the mean value of the function. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of medium-term forecasts is increased by using the RAW filter.
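The RAW filter itself is simple to state: the Robert-Asselin displacement is split between time levels n and n+1 so that the three-level mean is (nearly) preserved. A minimal leapfrog sketch on a toy ODE, following Williams (2009); parameter values are illustrative, not those used in SPEEDY:

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration with the Robert-Asselin-Williams filter.

    The RA displacement d is split between time level n (fraction alpha)
    and level n+1 (fraction alpha - 1); alpha = 1 recovers the classical
    Robert-Asselin filter, alpha = 0.5 preserves the three-level mean.
    """
    x_prev = x0
    x_curr = x0 + dt * f(x0)                 # start-up: one forward Euler step
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_prev = x_curr + alpha * d          # filtered level n
        x_curr = x_next + (alpha - 1.0) * d  # adjusted level n+1
    return x_curr

# Toy test: damped oscillation with exact solution exp((1j - 0.1) t)
approx = leapfrog_raw(lambda x: (1j - 0.1) * x, 1.0 + 0.0j, 0.01, 1000)
print(approx, np.exp((1j - 0.1) * 10.0))
```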
Abstract:
We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on 'lightly revised' data instead of data from the latest available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2-4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts.
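As a sketch of the estimation step being compared, an AR model fitted to one chosen vintage with statsmodels; the vintage construction here is synthetic, whereas real-time databases supply the actual triangles of releases:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Synthetic stand-in for a real-time data set: a first release plus a
# small revision observed a few vintages later ('lightly revised' data).
rng = np.random.default_rng(1)
first_release = rng.normal(2.0, 1.0, 120)
lightly_revised = first_release + rng.normal(0.0, 0.1, 120)

fit = AutoReg(lightly_revised, lags=1).fit()   # estimate on lightly revised data
print(fit.params)                              # intercept and AR(1) coefficient
print(fit.forecast(steps=1))                   # one-step-ahead forecast
```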
Abstract:
This paper reviews nine software packages with particular reference to their GARCH model estimation accuracy when judged against a respected benchmark. We consider the numerical consistency of GARCH and EGARCH estimation and forecasting. Our results have a number of implications for published research and future software development. Finally, we argue that the establishment of benchmarks for other standard non-linear models is long overdue.
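A benchmarking exercise of this kind boils down to estimating the same GARCH(1,1) on a fixed series in each package and comparing the fitted parameters. In Python's arch package (not one of the packages reviewed, and with simulated returns standing in for the benchmark series) the step looks like:

```python
import numpy as np
from arch import arch_model

# Simulated returns; a benchmark exercise would instead use a fixed
# reference series shared across all packages.
rng = np.random.default_rng(42)
returns = rng.standard_t(df=8, size=2000) * 0.5

am = arch_model(returns, vol="GARCH", p=1, q=1, mean="Constant")
res = am.fit(disp="off")
print(res.params)   # omega, alpha[1], beta[1] -- the values compared across packages
print(res.forecast(horizon=5).variance.iloc[-1])   # 5-step variance forecasts
```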
Abstract:
This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out-of-sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with that of forecasts generated by the commonly used GARCH(1,1) model. An examination of the orders of the models selected by the criteria reveals that (1,1) models are chosen less than 20% of the time.
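The selection step can be sketched as a grid search over (p, q) minimizing an information criterion; note that the paper uses modified criteria, which this hypothetical sketch does not reproduce:

```python
import numpy as np
from arch import arch_model

def select_garch_order(returns, max_p=3, max_q=3):
    """Pick (p, q) for a GARCH model by minimizing the BIC.

    A sketch of the selection step only; the paper's modified
    information criteria would replace res.bic here.
    """
    best = None
    for p in range(1, max_p + 1):
        for q in range(1, max_q + 1):
            res = arch_model(returns, vol="GARCH", p=p, q=q).fit(disp="off")
            if best is None or res.bic < best[0]:
                best = (res.bic, p, q)
    return best[1:]

rng = np.random.default_rng(7)
print(select_garch_order(rng.standard_normal(1500)))
```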
Abstract:
We present a refined parametric model for forecasting electricity demand which performed particularly well in the recent Global Energy Forecasting Competition (GEFCom 2012). We begin by motivating and presenting a simple parametric model that treats electricity demand as a function of the temperature and the date. We then set out a series of refinements of the model, explaining the rationale for each and using the competition scores to demonstrate that each successive refinement increases the accuracy of the model's predictions. These refinements include combining models from multiple weather stations, removing outliers from the historical data, and special treatment of public holidays.
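A starting model of that shape can be sketched as a regression of demand on a temperature polynomial plus calendar dummies (synthetic data; the competition model's exact terms are in the paper):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic hourly history: demand driven by temperature and day of week.
rng = np.random.default_rng(3)
temp = rng.uniform(-5, 30, 2000)
dow = rng.integers(0, 7, 2000)
demand = 50 + 0.08 * (18 - temp) ** 2 + 3 * (dow < 5) + rng.normal(0, 2, 2000)

# Design matrix: cubic in temperature plus day-of-week dummies, echoing
# 'demand as a function of the temperature and the date'.
X = np.column_stack([temp, temp ** 2, temp ** 3, np.eye(7)[dow]])
model = LinearRegression().fit(X, demand)
print(model.score(X, demand))   # in-sample R^2 of the simple parametric fit
```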
Abstract:
Factor forecasting models are shown to deliver real-time gains over autoregressive models for US real activity variables during the recent period, but are less successful for nominal variables. The gains are largely due to the Financial Crisis period, and are primarily at the shortest (one quarter ahead) horizon. Excluding the pre-Great Moderation years from the factor forecasting model estimation period (but not from the data used to extract factors) results in a marked fillip in factor model forecast accuracy, but does the same for the AR model forecasts. The relative performance of the factor models compared to the AR models is largely unaffected by whether the exercise is in real time or is pseudo out-of-sample.
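A pseudo out-of-sample comparison of this kind pairs an AR benchmark with a factor-augmented regression on principal components; a compact synthetic sketch (not the paper's data set or specification):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Synthetic panel of 50 standardized series over 200 quarters; the target
# plays the role of a real activity variable.
rng = np.random.default_rng(5)
panel = rng.normal(size=(200, 50))
target = panel[:, 0]

factors = PCA(n_components=4).fit_transform(panel)   # extracted factors

# One-quarter-ahead forecasts: factor-augmented model vs. AR benchmark
X_fa = np.column_stack([target[:-1], factors[:-1]])
X_ar = target[:-1].reshape(-1, 1)
y = target[1:]
fa = LinearRegression().fit(X_fa[:150], y[:150])
ar = LinearRegression().fit(X_ar[:150], y[:150])
print(np.mean((fa.predict(X_fa[150:]) - y[150:]) ** 2))   # factor model MSE
print(np.mean((ar.predict(X_ar[150:]) - y[150:]) ** 2))   # AR benchmark MSE
```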
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, since water level observations (WLOs) at the flood boundary can then be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
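The intersection step can be sketched directly: boundary pixels of the SAR-derived flood extent inherit their heights from the DTM, giving candidate WLOs. A minimal numpy sketch on a synthetic grid:

```python
import numpy as np

def water_level_observations(dtm, flood_mask):
    """Extract candidate WLOs along a flood boundary (minimal sketch).

    dtm: 2-D array of terrain heights; flood_mask: boolean array of the
    SAR-derived flood extent on the same grid. Boundary cells (flooded
    cells with at least one dry 4-neighbour) take their height from the
    DTM as an approximation of the local water surface elevation.
    """
    padded = np.pad(flood_mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = flood_mask & ~interior
    return dtm[boundary]

dtm = np.random.default_rng(9).uniform(10.0, 12.0, (50, 50))
mask = np.zeros((50, 50), dtype=bool)
mask[20:30, 15:35] = True                      # synthetic flood extent
print(water_level_observations(dtm, mask).mean())
```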
Abstract:
In this paper we assess opinion polls, prediction markets, expert opinion and statistical modelling over a large number of US elections in order to determine which performs best in terms of forecasting outcomes. In line with the existing literature, we bias-correct opinion polls. We consider accuracy, bias and precision over different time horizons before an election, and we conclude that prediction markets appear to provide the most precise forecasts and are similar in terms of bias to opinion polls. We find that our statistical model struggles to provide competitive forecasts, while expert opinion appears to be of value. Finally, we note that the forecast horizon matters: whereas prediction market forecasts tend to improve the nearer an election is, opinion polls appear to perform worse, while expert opinion performs consistently throughout. We thus contribute to the growing literature comparing the election forecasts of polls and prediction markets.
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
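The standard version of the decomposition, and of a jump-augmented HAR specification (the paper's exact variants may differ), is
\[
\mathrm{RV}_{t} = \sum_{i=1}^{M} r_{t,i}^{2}, \qquad
\mathrm{BV}_{t} = \frac{\pi}{2}\sum_{i=2}^{M} \lvert r_{t,i}\rvert\,\lvert r_{t,i-1}\rvert, \qquad
J_{t} = \max(\mathrm{RV}_{t} - \mathrm{BV}_{t},\, 0),
\]
where the \(r_{t,i}\) are intraday returns and bipower variation \(\mathrm{BV}_t\) is robust to jumps, so \(J_t\) isolates the discontinuous component. A HAR-RV-J forecast then regresses future realized variance on daily, weekly and monthly averages plus the jump term:
\[
\mathrm{RV}_{t+1} = \beta_{0} + \beta_{d}\,\mathrm{RV}_{t} + \beta_{w}\,\overline{\mathrm{RV}}_{t-4:t} + \beta_{m}\,\overline{\mathrm{RV}}_{t-21:t} + \beta_{j}\,J_{t} + \varepsilon_{t+1}.
\]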