915 results for Forecasting Volatility


Relevance:

20.00%

Publisher:

Abstract:

Electrical load forecasting plays a vital role in realizing next-generation power system concepts such as the smart grid, efficient energy management and better power system planning. As a result, high forecast accuracy is required for the multiple time horizons associated with regulation, dispatching, scheduling and unit commitment of the power grid. Artificial Intelligence (AI) based techniques are being developed and deployed worldwide in a variety of applications because of their superior capability to handle complex input-output relationships. This paper provides a comprehensive and systematic literature review of AI-based short-term load forecasting techniques. The major objective of this study is to review, identify, evaluate and analyze the performance of AI-based load forecast models and the associated research gaps. The accuracy of ANN-based forecast models is found to depend on a number of factors, such as the forecast model architecture, the input combination, the activation functions and training algorithm of the network, and other exogenous variables affecting the forecast model inputs. The published literature reviewed in this paper shows the potential of AI techniques for effective load forecasting in pursuit of the smart grid and smart buildings.
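
A minimal sketch of the kind of ANN-based short-term load forecaster the review surveys, in Python with NumPy; the synthetic daily load cycle, the three-lag input combination, the single tanh hidden layer and the gradient-descent training loop are all illustrative assumptions, not a model from any reviewed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly "load" series: a daily cycle plus noise (illustrative only).
t = np.arange(500)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

# Input combination: the previous 3 hours predict the next hour.
X = np.stack([load[i:i + 3] for i in range(len(load) - 3)])
y = load[3:]

# Normalise inputs and target to stabilise training.
Xn = (X - X.mean()) / X.std()
yn = (y - y.mean()) / y.std()

# One hidden tanh layer, trained by batch gradient descent on squared error.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred0 = forward(Xn)
mse0 = np.mean((pred0 - yn) ** 2)      # error before training

lr = 0.05
for _ in range(500):
    H, pred = forward(Xn)
    err = pred - yn                     # gradient of the loss w.r.t. pred
    gW2 = H.T @ err / len(err)
    gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2)   # backprop through tanh
    gW1 = Xn.T @ dH / len(err)
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(Xn)
mse = np.mean((pred - yn) ** 2)        # error after training
```

Even this toy network illustrates the review's point that accuracy hinges on the input combination, activation function and training algorithm chosen.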

Relevance:

20.00%

Publisher:

Abstract:

Wind farms produce a considerable portion of the world's renewable energy. Since the output power of any wind farm is highly dependent on wind speed, the power extracted from a wind park is not constant. To maintain a non-disruptive supply of electricity, it is important to have a good scheduling and forecasting system for the energy output of any wind park. In this paper, a new hybrid swarm technique (HAP) is used to forecast the energy output of a real wind farm located in Binaloud, Iran. The technique hybridizes ant colony optimization (ACO) and particle swarm optimization (PSO), two meta-heuristic techniques in the swarm intelligence family. Hybridizing the two algorithms to optimize the forecasting model leads to a higher quality result with a faster convergence profile. The empirical hourly wind power output of the Binaloud Wind Farm for 364 days is collected and used to train and test the model. Meteorological data consisting of wind speed and ambient temperature are used as inputs to the mathematical model. The results indicate that the proposed technique can estimate the output wind power from wind speed and ambient temperature with a MAPE of 3.513%.
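
The reported accuracy metric, MAPE, is straightforward to compute; the Python sketch below uses made-up wind-power values rather than the Binaloud data, and simply shows the formula the 3.513% figure refers to.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Toy hourly wind-power values (kW); numbers are illustrative only.
actual   = [120.0, 150.0, 90.0, 200.0]
forecast = [115.0, 160.0, 85.0, 210.0]
print(round(mape(actual, forecast), 3))  # → 5.347
```

Note that MAPE is undefined when an actual value is zero, which matters for wind power during calm periods.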

Relevance:

20.00%

Publisher:

Abstract:

This study examines the relation between aggregate volatility risk and the cross-section of stock returns in Australia. We use a stock's sensitivity to innovations in the ASX200 implied volatility (VIX) as a proxy for aggregate volatility risk. Consistent with theoretical predictions, aggregate volatility risk is negatively related to the cross-section of stock returns only when market volatility is rising. The asymmetric volatility effect is persistent throughout the sample period and is robust after controlling for size, book-to-market, momentum, and liquidity issues. There is some evidence that aggregate volatility risk is a priced factor, especially in months with increasing market volatility.
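
A stock's sensitivity to innovations in implied volatility is typically estimated as the loading on volatility-index changes in a time-series regression. A minimal Python sketch on simulated data (all numbers are illustrative, and the paper's actual factor construction may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 250

# Simulated daily data (illustrative): market return and volatility-index innovations.
mkt = rng.normal(0.0004, 0.01, T)
dvix = rng.normal(0.0, 1.0, T)           # daily change in the implied volatility index
true_beta_vix = -0.002                    # assumed volatility-risk loading
stock = 0.0002 + 1.1 * mkt + true_beta_vix * dvix + rng.normal(0, 0.005, T)

# OLS of stock returns on [1, mkt, dVIX]; the dVIX coefficient proxies
# the stock's exposure to aggregate volatility risk.
X = np.column_stack([np.ones(T), mkt, dvix])
coef, *_ = np.linalg.lstsq(X, stock, rcond=None)
beta_vix = coef[2]
```

Sorting stocks by such a beta is the standard way to form the cross-sectional portfolios the study examines.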

Relevance:

20.00%

Publisher:

Abstract:

Forecasting bike sharing demand is of paramount importance for fleet management at the city level. Rapidly changing demand in this service is driven by a number of factors, including workdays, weekends, holidays and weather conditions. These nonlinear dependencies make prediction a difficult task. This work shows that type-1 and type-2 fuzzy inference-based prediction mechanisms can capture this highly variable trend with good accuracy. The Wang-Mendel rule generation method is used to build the rule base, and then only current information, such as date-related attributes and weather conditions, is used to forecast bike share demand at any given point in the future. Simulation results reveal that fuzzy inference predictors can potentially outperform a traditional feed-forward neural network in terms of prediction accuracy.
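
A compact Python sketch of Wang-Mendel rule generation for a single input; the triangular partitions, the five-region grid and the toy demand curve are illustrative assumptions, and the paper's systems use multiple inputs and, for type-2, interval membership functions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Fuzzy partition: 5 overlapping triangles on [0, 1], shared by input and output.
centers = np.linspace(0, 1, 5)
step = centers[1] - centers[0]

def memberships(x):
    return np.array([tri(x, c - step, c, c + step) for c in centers])

# Toy demand data (illustrative): demand rises with one "time of day" feature.
X = np.linspace(0, 1, 50)
Y = 0.2 + 0.6 * X

# Wang-Mendel rule generation: each data pair votes for the (input, output)
# regions with the highest membership; conflicting rules with the same
# antecedent are resolved by keeping the highest rule degree.
rules, degrees = {}, {}
for x, y in zip(X, Y):
    mu_x, mu_y = memberships(x), memberships(y)
    ai, ci = int(np.argmax(mu_x)), int(np.argmax(mu_y))
    d = mu_x[ai] * mu_y[ci]               # rule degree
    if d > degrees.get(ai, -1.0):
        rules[ai], degrees[ai] = ci, d

def predict(x):
    """Defuzzify: average rule-consequent centers weighted by firing strength."""
    mu = memberships(x)
    ants = list(rules)
    w = mu[ants]
    outs = centers[[rules[a] for a in ants]]
    return float((w * outs).sum() / (w.sum() + 1e-12))
```

The discretisation error at the ends of the range shows why rule-base resolution (the number of fuzzy regions) matters in practice.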

Relevance:

20.00%

Publisher:

Abstract:

This paper makes use of the idea of prediction intervals (PIs) to capture the uncertainty associated with wind power generation in power systems. Since the forecasting errors cannot be appropriately modeled using probability distribution functions, we employ a powerful nonparametric approach, the lower upper bound estimation (LUBE) method, to construct the PIs. The proposed LUBE method uses a new framework based on a combination of PIs to overcome the performance instability of the neural networks (NNs) used in the LUBE method. Also, a new fuzzy-based cost function is proposed with the purpose of having more freedom and flexibility in adjusting the NN parameters used for construction of the PIs. In comparison with other cost functions in the literature, this new formulation allows decision-makers to apply their preferences for satisfying the PI coverage probability and the PI normalized average width individually. As the optimization tool, a bat algorithm with a new modification is introduced to solve the problem. The feasibility and satisfactory performance of the proposed method are examined using datasets taken from different wind farms in Australia.
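
The two PI quality measures that the cost function trades off, coverage probability (PICP) and normalized average width (PINAW), can be computed as below; the Python sketch uses made-up bounds, not the Australian wind-farm data.

```python
import numpy as np

def picp(y, lower, upper):
    """PI coverage probability: fraction of targets inside [lower, upper]."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return float(np.mean((y >= lower) & (y <= upper)))

def pinaw(y, lower, upper):
    """PI normalized average width: mean interval width over the target range."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return float(np.mean(upper - lower) / (y.max() - y.min()))

# Toy wind-power targets and interval bounds (illustrative numbers only).
y     = np.array([10.0, 12.0, 15.0, 9.0, 14.0])
lower = np.array([ 8.0, 11.0, 13.0, 9.5, 12.0])
upper = np.array([12.0, 14.0, 16.0, 12.0, 15.0])
print(picp(y, lower, upper), pinaw(y, lower, upper))
```

A good PI construction pushes PICP up to its nominal level while keeping PINAW small; the two pull in opposite directions, which is why the paper lets decision-makers weight them separately.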

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel design of interval type-2 fuzzy logic systems (IT2FLS) that utilizes the theory of the extreme learning machine (ELM) for electricity load demand forecasting. ELM has become a popular learning algorithm for single-hidden-layer feed-forward neural networks (SLFN). From the functional equivalence between the SLFN and the fuzzy inference system, hybrids of fuzzy systems and ELM have gained the attention of researchers. This paper extends the concept of fuzzy-ELM to an IT2FLS based on ELM (IT2FELM). In the proposed design, the antecedent membership function parameters of the IT2FLS are generated randomly, whereas the consequent part parameters are determined analytically by the Moore-Penrose pseudoinverse. The ELM strategy ensures fast learning of the IT2FLS as well as optimality of the parameters. The effectiveness of the proposed design is demonstrated by forecasting nonlinear and chaotic data sets. Nonlinear electricity load data from the Australian National Electricity Market for the Victoria region and from the Ontario Electricity Market are considered here. The proposed model is also applied to forecast the Mackey-Glass chaotic time series. A comparative analysis of the proposed model is conducted against traditional models such as neural networks (NN) and the adaptive neuro-fuzzy inference system (ANFIS). To verify the structure of the proposed design, an alternate design of the IT2FLS based on a Kalman filter (KF) is also used for comparison purposes.
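
The core ELM step, random hidden-layer parameters plus an analytic Moore-Penrose solution for the output weights, can be sketched in a few lines of Python. This is plain SLFN regression on a toy target, not the IT2FELM itself, whose randomly generated parameters are antecedent membership functions rather than neuron weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative): learn y = sin(x) on [0, pi].
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# ELM for an SLFN: hidden-layer weights are drawn randomly and frozen...
n_hidden = 20
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                  # hidden-layer output matrix

# ...and the output weights are solved analytically with the
# Moore-Penrose pseudoinverse, mirroring the paper's consequent-part step.
beta = np.linalg.pinv(H) @ y
rmse = float(np.sqrt(np.mean((H @ beta - y) ** 2)))
```

There is no iterative training at all, which is what makes the ELM strategy fast: one random draw and one least-squares solve.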

Relevance:

20.00%

Publisher:

Abstract:

As a highly urbanized and flood-prone region, Flanders has experienced multiple floods causing significant damage in the past. In response to the floods of 1998 and 2002, the Flemish Environment Agency, responsible for managing 1 400 km of unnavigable rivers, started setting up a real-time flood forecasting system in 2003. Currently the system covers almost 2 000 km of unnavigable rivers, for which flood forecasts are accessible online (www.waterinfo.be). The forecasting system comprises more than 1 000 hydrologic and 50 hydrodynamic models, which are supplied with radar rainfall, rainfall forecasts and on-site observations. Forecasts for the next 2 days are generated hourly, while 10-day forecasts are generated twice a day. Additionally, twice-daily simulations based on percentile rainfall forecasts (from EPS predictions) provide uncertainty bands for the latter. Each flood forecast thus uses the most recent rainfall predictions and observed parameters while taking longer-term uncertainty into account. The flood forecasting system produces high-resolution dynamic flood maps and graphs at about 200 river gauges and more than 3 000 forecast points. A customized emergency response system generates phone calls and text messages to a team of hydrologists, initiating a pro-active response to prevent upcoming flood damage. The flood forecasting system of the Flemish Environment Agency is constantly evolving and has proven to be an indispensable tool in flood crisis management. This was clearly the case during the November 2010 floods, when the agency issued a press release 2 days in advance, allowing water managers, emergency services and civilians to take measures.

Relevance:

20.00%

Publisher:

Abstract:

The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, which leverages numerous software libraries in a variety of languages, while the legacy option of custom tab-separated text formats is retained for researchers who prefer that access arrangement. By decoupling the data model and data persistence, it is much easier to use, for instance, relational databases interchangeably, providing stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle SWIFT time series efficiently.
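
The decoupling of the data model from its persistence can be illustrated with a toy Python sketch: one in-memory configuration object, two interchangeable serialisation backends. The `SubareaConfig` fields and the model name are hypothetical stand-ins, not SWIFT's actual schema.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical, minimal stand-in for a subarea configuration; the field
# names are illustrative and do not reflect SWIFT's real data model.
@dataclass
class SubareaConfig:
    name: str
    model: str
    area_km2: float

def to_json(cfg: SubareaConfig) -> str:
    """JSON persistence backend (the research-side preference)."""
    return json.dumps(asdict(cfg), indent=2)

def to_tsv(cfg: SubareaConfig) -> str:
    """Legacy tab-separated backend; same data model, different persistence."""
    return "\t".join(str(v) for v in asdict(cfg).values())

cfg = SubareaConfig("upper_catchment", "GR4J", 42.5)
```

Because both backends consume the same `asdict` view of the object, adding a third backend (say, a relational database for operational audit trails) requires no change to the data model itself.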

Relevance:

20.00%

Publisher:

Abstract:

Using vector autoregressive (VAR) models and Monte-Carlo simulation methods, we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed in the literature on cointegration; the second reduces the parameter space by imposing short-term restrictions, as discussed in the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and the cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length; criteria selecting lag and rank simultaneously perform better in this case. Second, this translates into superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even for long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
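
The first exercise, choosing a VAR's lag length with an information criterion, can be sketched as follows in Python; the bivariate VAR(1) data-generating process and the plain BIC formula are standard illustrative choices, not the paper's modified criteria or its cointegrated setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t (illustrative DGP).
A = np.array([[0.5, 0.1], [0.2, 0.4]])
T = 400
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

def var_bic(y, p):
    """OLS-fit a VAR(p) with intercept and return its BIC."""
    T, k = y.shape
    rows = T - p
    # Stack the p lags of y as regressors, newest lag first.
    X = np.hstack([y[p - i - 1:T - i - 1] for i in range(p)])
    X = np.hstack([np.ones((rows, 1)), X])
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    E = Y - X @ B
    sigma = E.T @ E / rows              # residual covariance
    n_par = k * (k * p + 1)
    return np.log(np.linalg.det(sigma)) + n_par * np.log(rows) / rows

best_p = min(range(1, 5), key=lambda p: var_bic(y, p))
```

With the true lag length equal to one and a consistent criterion, `best_p` recovers it; the paper's point is that imposing or ignoring SCCF restrictions changes how reliably such criteria behave.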

Relevance:

20.00%

Publisher:

Abstract:

We compare three frequently used volatility modelling techniques: GARCH, Markovian switching and cumulative daily volatility models. Our primary goal is to highlight a practical and systematic way to measure the relative effectiveness of these techniques. The evaluation comprises an analysis of the validity of the statistical requirements of the various models and of their performance in simple options hedging strategies; the latter puts them to the test in a "real life" application. Though there was not much difference between the three techniques, a tendency in favour of the cumulative daily volatility estimates, based on tick data, seems clear. As the improvement is not very big, the message for the practitioner, on the restricted evidence of our experiment, is that he will probably not lose much by working with the Markovian switching method. This highlights that, in terms of volatility estimation, no clear winner exists among the more sophisticated techniques.
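
Of the three techniques, GARCH is the easiest to sketch: the GARCH(1,1) conditional-variance recursion in Python below. The parameter values and the return series are illustrative; in practice the parameters are fit by maximum likelihood.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional-variance recursion of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    r = np.asarray(returns, float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                  # a common initialisation choice
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Illustrative daily returns and parameters (not fitted to any data).
r = np.array([0.01, -0.02, 0.015, -0.03, 0.02])
s2 = garch11_variance(r, omega=1e-5, alpha=0.1, beta=0.85)
```

The recursion makes the model's statistical requirements concrete: positivity needs omega > 0 and alpha, beta >= 0, and covariance stationarity needs alpha + beta < 1.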

Relevance:

20.00%

Publisher:

Abstract:

By mixing together inequalities based on cyclical variables, such as unemployment, and on structural variables, such as education, usual measurements of income inequality add objects of a different economic nature. Since jobs are not acquired or lost as fast as education or skills, this aggregation leads to a loss of relevant economic information. Here I propose a different procedure for the calculation of inequality. The procedure uses economic theory to construct an inequality measure of a long-run character, the calculation of which can nevertheless be performed with just one set of cross-sectional observations. Technically, the procedure is based on the uniqueness of the invariant distribution of wage offers in a job-search model. Workers should be pre-grouped by the distribution of wage offers they see, and only between-group inequalities should be considered. This construction incorporates the fact that the average wages of all workers in the same group tend to be equalized by the continuous turnover in the job market.
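
The restriction to between-group inequality can be illustrated with a standard variance decomposition; in the Python sketch below the groups, wages and group sizes are made up, and the variance of log wages stands in for whatever inequality index the procedure ultimately adopts.

```python
import numpy as np

# Illustrative log wages for workers pre-grouped by the wage-offer
# distribution they face (group labels and numbers are invented).
groups = {
    "group_A": np.array([2.0, 2.1, 1.9, 2.2]),
    "group_B": np.array([2.6, 2.5, 2.7]),
    "group_C": np.array([3.0, 3.1, 2.9, 3.0, 3.0]),
}

all_w = np.concatenate(list(groups.values()))
grand_mean = all_w.mean()

# Between-group component: dispersion of group means around the grand mean,
# weighted by group size -- the only part the proposed measure keeps.
between = sum(len(w) * (w.mean() - grand_mean) ** 2
              for w in groups.values()) / len(all_w)
# Within-group component: discarded, since turnover equalizes wages in-group.
within = sum(((w - w.mean()) ** 2).sum() for w in groups.values()) / len(all_w)
total = all_w.var()                     # between + within == total
```

The identity `between + within == total` makes explicit what information the proposed long-run measure deliberately drops.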

Relevance:

20.00%

Publisher:

Abstract:

Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, and studying monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have been adopted mainly to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of those two classes of models to test whether no-arbitrage affects forecasting. We construct cross-section (allowing arbitrages) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and Root Mean Square Errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of the no-arbitrage versions is due to better identification of the bond risk premium.
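
The forecast comparison rests on bias and Root Mean Square Error; the Python sketch below computes both for two hypothetical forecast series (all yield numbers are made up, not the U.S. Treasury results).

```python
import numpy as np

def bias_and_rmse(actual, forecast):
    """Forecast bias (mean error) and root mean square error."""
    e = np.asarray(forecast, float) - np.asarray(actual, float)
    return float(e.mean()), float(np.sqrt((e ** 2).mean()))

# Illustrative out-of-sample yield forecasts (percent); numbers are invented.
actual    = [4.0, 4.2, 4.1, 4.3, 4.5]
cross_sec = [4.3, 4.5, 4.3, 4.6, 4.9]   # cross-section version (allows arbitrage)
arb_free  = [4.1, 4.2, 4.2, 4.4, 4.5]   # no-arbitrage restricted version

b1, r1 = bias_and_rmse(actual, cross_sec)
b2, r2 = bias_and_rmse(actual, arb_free)
```

In the invented numbers, as in the paper's findings, the restricted version shows both a smaller bias and a smaller RMSE.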

Relevance:

20.00%

Publisher:

Abstract:

Estimating the parameters of the instantaneous spot interest rate process is of crucial importance for pricing fixed income derivative securities. This paper presents an estimation of the parameters of the Gaussian interest rate model for pricing fixed income derivatives based on the term structure of volatility. We estimate the term structure of volatility for US Treasury rates for the period 1983-1995, based on a history of yield curves. We estimate both conditional and first-difference term structures of volatility and subsequently estimate the implied parameters of the Gaussian model by non-linear least squares. Results for bond options illustrate the effects of the differing parameters on pricing.
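
A sketch of the estimation step in Python: fit the yield-volatility term structure implied by a one-factor Gaussian (Vasicek-type) model by non-linear least squares. The functional form, parameter values and noise level are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy.optimize import curve_fit

# Yield-volatility term structure implied by a one-factor Gaussian model:
# sigma(tau) = sigma * (1 - exp(-kappa * tau)) / (kappa * tau).
def vol_curve(tau, kappa, sigma):
    return sigma * (1.0 - np.exp(-kappa * tau)) / (kappa * tau)

# "Observed" volatilities generated from known parameters plus noise
# (stand-in for the term structure estimated from historical yield curves).
rng = np.random.default_rng(0)
tau = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10], float)
obs = vol_curve(tau, 0.3, 0.02) + rng.normal(0, 1e-4, tau.size)

# Non-linear least squares recovers the implied (kappa, sigma).
(kappa_hat, sigma_hat), _ = curve_fit(vol_curve, tau, obs, p0=(0.5, 0.01))
```

The fitted mean-reversion speed and volatility are exactly the inputs a Gaussian-model bond-option pricer needs, which is how the estimation feeds the paper's pricing results.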

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics, we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., it has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, the theoretical and empirical results show promise for the feasible bias-corrected average forecast.
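
A stylised Python sketch of the feasible bias-corrected average forecast: estimate the average bias on a training window, subtract it from the cross-sectional average forecast, and compare mean-squared errors. The forecaster model and all numbers are illustrative, not the paper's asymptotic framework.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 60, 25                          # time periods and number of forecasters

# Each forecaster reports the truth plus an individual bias plus noise
# (a stylised stand-in for the panel of survey forecasts).
truth = rng.normal(0, 1, T)
bias = rng.normal(0.5, 0.3, N)         # heterogeneous individual biases
forecasts = truth[:, None] + bias[None, :] + rng.normal(0, 0.5, (T, N))

# Feasible bias-corrected average forecast: estimate the average bias on a
# training window, then subtract it from the cross-sectional average.
train = slice(0, 40)
avg_bias_hat = (forecasts[train] - truth[train, None]).mean()
plain_avg = forecasts[40:].mean(axis=1)
bcaf = plain_avg - avg_bias_hat

mse_avg = np.mean((plain_avg - truth[40:]) ** 2)
mse_bcaf = np.mean((bcaf - truth[40:]) ** 2)
```

Averaging across forecasters kills the idiosyncratic noise but not the common bias; subtracting the estimated average bias removes that remaining term, which is the intuition behind the limiting-MSE result.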

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics, we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero-limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte-Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
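
The zero-mean test for the average bias amounts, in its simplest form, to a t-test that the mean forecast error is zero; a minimal Python sketch on simulated errors (the normal error model and the bias value are illustrative, and the paper's panel version uses a more elaborate asymptotic framework).

```python
import numpy as np

def zero_mean_tstat(errors):
    """t-statistic for H0: the average forecast bias is zero."""
    e = np.asarray(errors, float)
    return e.mean() / (e.std(ddof=1) / np.sqrt(len(e)))

rng = np.random.default_rng(2)
unbiased = rng.normal(0.0, 1.0, 200)   # forecast errors centred on zero
biased = rng.normal(0.5, 1.0, 200)     # errors with an average bias of 0.5

t0 = zero_mean_tstat(unbiased)         # should be small in magnitude
t1 = zero_mean_tstat(biased)           # should be large and positive
```

When the test rejects, the bias correction of the previous step has something to remove; when it does not, the plain average forecast is already adequate.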