915 results for Forecasting Volatility


Relevance:

20.00%

Publisher:

Abstract:

This study examines the information content of alternative implied volatility measures for the 30 components of the Dow Jones Industrial Average Index from 1996 until 2007. Along with the popular Black-Scholes and "model-free" implied volatility expectations, the recently proposed corridor implied volatility (CIV) measures are explored. For all pair-wise comparisons, a CIV measure that is closely related to the model-free implied volatility nearly always delivers the most accurate forecasts for the majority of the firms. This finding remains consistent across forecast horizons, volatility definitions, loss functions and forecast evaluation settings.
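The model-free and corridor measures differ only in the strike range over which out-of-the-money option prices are integrated. As a rough illustration (not the paper's implementation), the sketch below prices options under Black-Scholes with hypothetical parameters (spot 100, volatility 20%, zero rates) and recovers the variance by numerical integration; the corridor measure simply truncates the strike range:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(S, K, T, sigma, call=True):
    # Black-Scholes price with zero rates and dividends
    d1 = (math.log(S / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if call:
        return S * norm_cdf(d1) - K * norm_cdf(d2)
    return K * norm_cdf(-d2) - S * norm_cdf(-d1)

def corridor_implied_var(S, T, sigma, k_lo, k_hi, n=4000):
    # (2/T) * integral over [k_lo, k_hi] of Q(K)/K^2 dK, where Q(K) is the
    # out-of-the-money option price (puts below spot, calls above)
    total, dk = 0.0, (k_hi - k_lo) / n
    for i in range(n + 1):
        K = k_lo + i * dk
        q = bs_price(S, K, T, sigma, call=(K >= S))
        w = 0.5 if i in (0, n) else 1.0   # trapezoid weights
        total += w * q / K**2 * dk
    return 2.0 * total / T

sigma = 0.20
full = corridor_implied_var(100.0, 0.25, sigma, 1.0, 400.0)   # ~ model-free IV
corr = corridor_implied_var(100.0, 0.25, sigma, 80.0, 125.0)  # corridor IV
```

With the full strike range the integral recovers the true variance (sigma**2 = 0.04 here); a corridor gives a smaller number that depends only on strikes that are actually quoted, which is the practical appeal of the CIV measures.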


We model the effects of quantitative easing (QE) on the volatility of returns to individual gilts, examining both the effects of QE overall and of the specific days of asset purchases. QE successfully neutralized the six-fold increase in volatility that gilts had experienced since the start of the financial crisis. The volatility of longer-term bonds fell more quickly than that of short- to medium-term bonds. The reversion of shorter-term bond volatility to pre-crisis levels was found to be more sensitive to the specific operational actions of QE, particularly where those bonds experienced relatively greater purchase activity.


The article deals with problems of forecasting economic macroparameters on the basis of the principle of "subjective multideterminism", i.e. expert accounting for the maximal number of interrelated "objective" and "subjective" causes. A decision-support system for forecasting the level of inflation and gross domestic product on the basis of the decision-tree method is described.


This article is dedicated to the vital problem of creating GIS systems for the monitoring, forecasting and control of natural and technogenic catastrophes. Reducing risks, protecting economic assets and preventing human casualties caused by the dynamics of avalanche centers all depend on the effectiveness of the avalanche-danger forecasting procedures used. The article develops the structure of the information input for a forecasting subsystem and proposes a technology for the complex forecasting of avalanche-prone situations.


This article considers problems of forecasting economic macroparameters, first of all the inflation index. A concept is proposed for developing synthetic forecasting methods that combine directly specified expert information with results computed from objective economic-mathematical models for forecasting individual "slowly changing" parameters. The article also discusses problems of managing macroparameters on the basis of an analysis of the obtained forecast values.


2002 Mathematics Subject Classification: 62M20, 62-07, 62J05, 62P20.


2000 Mathematics Subject Classification: 62M20, 62M10, 62-07.


Since wind has an intrinsically complex and stochastic nature, accurate wind power forecasts are necessary for the safety and economics of wind energy utilization. In this paper, we investigate a combination of numeric and probabilistic models: one-day-ahead wind power forecasts were made with Gaussian Processes (GPs) applied to the outputs of a Numerical Weather Prediction (NWP) model. Firstly, the wind speed data from the NWP model were corrected by a GP. Then, as there is always a defined limit on the power generated by a wind turbine due to the turbine control strategy, a Censored GP was used to model the relationship between the corrected wind speed and power output. To validate the proposed approach, two real-world datasets were used for model construction and testing. The simulation results were compared with the persistence method and Artificial Neural Networks (ANNs); the proposed model achieves about 11% improvement in forecasting accuracy (Mean Absolute Error) over the ANN model on one dataset, and nearly 5% improvement on the other.
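The two-stage setup (GP correction of NWP wind speed, then a power model) can be illustrated with a minimal NumPy sketch. This is not the paper's model: the data are synthetic, the kernel and noise level are arbitrary choices, and the censored-GP power stage is omitted (the rated-power cap that motivates it is only noted in a comment).

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=5.0):
    # Squared-exponential kernel on wind speeds (length scale is arbitrary)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Synthetic stand-in for NWP output: forecast speeds with a systematic bias
true_speed = rng.uniform(3.0, 18.0, 200)
nwp_speed = 1.15 * true_speed + 0.8 + rng.normal(0.0, 0.4, 200)

x_tr, y_tr = nwp_speed[:150], true_speed[:150]
x_te, y_te = nwp_speed[150:], true_speed[150:]

# GP posterior mean: corrected = K(x*, X) (K(X, X) + s^2 I)^{-1} (y - ybar)
noise_var = 0.16
K = rbf(x_tr, x_tr) + noise_var * np.eye(x_tr.size)
ybar = y_tr.mean()
alpha = np.linalg.solve(K, y_tr - ybar)
corrected = ybar + rbf(x_te, x_tr) @ alpha

mae_raw = np.mean(np.abs(x_te - y_te))
mae_gp = np.mean(np.abs(corrected - y_te))

# A real pipeline would then map corrected speed to power with a Censored GP,
# since output saturates at the turbine's rated power.
```

On this synthetic data the GP correction removes most of the systematic bias, so `mae_gp` comes out well below `mae_raw`.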


2000 Mathematics Subject Classification: 65M06, 65M12.


Technology changes rapidly over the years, continuously providing more options for computer alternatives and making economic, intra-relational and any other transactions easier. However, the introduction of new technology "pushes" old Information and Communication Technology (ICT) products out of use. E-waste is defined as the quantity of ICT products no longer in use; it is a bivariate function of the quantities sold and the probability that a specific quantity of computers will be regarded as obsolete. In this paper, an e-waste generation model is presented and applied to the following regions: Western and Eastern Europe, Asia/Pacific, Japan/Australia/New Zealand, and North and South America. Furthermore, cumulative computer sales were retrieved for selected countries of these regions in order to compute obsolete computer quantities. To provide robust results for the forecasted quantities, a selection of forecasting models, namely (i) Bass, (ii) Gompertz, (iii) Logistic, (iv) Trend model, (v) Level model, (vi) AutoRegressive Moving Average (ARMA), and (vii) Exponential Smoothing, was applied, selecting for each country the model that provides the best results in terms of minimum error indices (Mean Absolute Error and Mean Square Error) for the in-sample estimation. As new technology does not diffuse in all regions of the world at the same speed, owing to different socio-economic factors, the lifespan distribution, which gives the probability of a certain quantity of computers being considered obsolete, is not adequately modeled in the literature. The time horizon for the forecasted quantities is 2014-2030; the results show a very sharp increase in the USA and the United Kingdom, owing to decreasing computer lifespans and increasing sales.
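The modelling chain described above (fit a diffusion curve to cumulative sales, score it in-sample with MAE/MSE, extrapolate, then convolve with a lifespan distribution to get obsolete quantities) can be sketched for the logistic case. All numbers here are hypothetical, including the lifespan probabilities:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, k, t0):
    # Cumulative adoptions: saturation level m, growth rate k, midpoint t0
    return m / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical cumulative computer sales (millions), one point per year
years = np.arange(2000, 2014, dtype=float)
sales = logistic(years, 120.0, 0.45, 2007.0) \
    + np.random.default_rng(1).normal(0.0, 1.0, years.size)

popt, _ = curve_fit(logistic, years, sales, p0=[100.0, 0.5, 2006.0])

# In-sample error indices used for model selection in the paper
fit = logistic(years, *popt)
mae = np.mean(np.abs(fit - sales))
mse = np.mean((fit - sales) ** 2)

# Out-of-sample extrapolation over the 2014-2030 forecast horizon
forecast_2030 = logistic(2030.0, *popt)

# Obsolete quantities: convolve yearly sales increments with a lifespan pmf
# (the first increment lumps all pre-2000 cumulative sales into one year)
yearly = np.diff(np.concatenate([[0.0], fit]))
lifespan_pmf = np.array([0.0, 0.05, 0.15, 0.30, 0.30, 0.20])  # hypothetical
obsolete = np.convolve(yearly, lifespan_pmf)[:years.size]
```

In the paper the same fit-and-score loop is run for all seven model families and each country keeps whichever minimizes the in-sample MAE/MSE.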


2000 Mathematics Subject Classification: 37F21, 70H20, 37L40, 37C40, 91G80, 93E20.


This paper describes the potential of pre-setting 11 kV overhead line ratings over a time period of sufficient length to be useful for the real-time management of overhead lines. The forecast is based on freely available short- and long-term weather forecasts and is used to help investigate the potential for realising dynamic rating benefits on the electricity network. A comparison of the realisable rating benefits using this forecast data over the period of a year has been undertaken.
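Why weather forecasts translate into rating headroom: a conductor's current limit comes from a steady-state heat balance, and cool, windy conditions shed more heat at the same temperature limit. The abstract does not describe its rating calculation, so the sketch below is a generic illustration with made-up coefficients, not the IEEE 738 correlations:

```python
import math

def ampacity(ambient_c, wind_ms, solar_wm2,
             conductor_temp_c=75.0, r_ohm_per_m=8.7e-5, diameter_m=0.028):
    # Simplified steady-state heat balance: I^2 R = q_conv + q_rad - q_solar.
    # The convective term is an illustrative stand-in, not a standard correlation.
    dt = conductor_temp_c - ambient_c
    q_conv = (0.4 + 1.2 * math.sqrt(max(wind_ms, 0.1))) * dt          # W/m
    q_rad = 0.8 * 5.67e-8 * math.pi * diameter_m * (
        (conductor_temp_c + 273.0) ** 4 - (ambient_c + 273.0) ** 4)   # W/m
    q_solar = 0.8 * solar_wm2 * diameter_m                            # W/m
    net = max(q_conv + q_rad - q_solar, 0.0)
    return math.sqrt(net / r_ohm_per_m)                               # amps

static_rating = ampacity(20.0, 0.5, 1000.0)   # conservative static assumptions
dynamic_rating = ampacity(10.0, 5.0, 200.0)   # cool, windy forecast day
```

The gap between the two numbers is the dynamic-rating benefit that forecast-driven pre-setting tries to capture.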


This article presents out-of-sample inflation forecasting results based on relative price variability and skewness. It is demonstrated that forecasts at long horizons of 1.5-2 years are significantly improved if the forecast equation is augmented with skewness. © 2010 Taylor & Francis.
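The design of such an exercise, comparing out-of-sample errors of a baseline forecast equation against the same equation augmented with skewness, can be sketched on simulated data. Everything here is hypothetical (the data-generating process is built so that lagged skewness genuinely matters), so it illustrates the evaluation mechanics, not the article's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 240  # hypothetical monthly observations

# Simulated data: inflation driven partly by the lagged skewness of the
# cross-section of relative price changes (the article's augmenting regressor)
skew = rng.normal(0.0, 1.0, n)    # cross-sectional skewness each month
rpv = rng.normal(1.0, 0.3, n)     # relative price variability
infl = 0.2 + 0.3 * np.roll(skew, 1) + rng.normal(0.0, 0.2, n)

def oos_rmse(X, y, split=180):
    # OLS on the first `split` observations, out-of-sample errors on the rest
    beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    resid = y[split:] - X[split:] @ beta
    return np.sqrt(np.mean(resid ** 2))

ones = np.ones(n)
X_base = np.column_stack([ones, np.roll(rpv, 1)])                    # RPV only
X_aug = np.column_stack([ones, np.roll(rpv, 1), np.roll(skew, 1)])   # + skew

rmse_base = oos_rmse(X_base[1:], infl[1:])   # drop t=0 (undefined lag)
rmse_aug = oos_rmse(X_aug[1:], infl[1:])
```

Because the simulated inflation actually loads on lagged skewness, the augmented equation produces a visibly lower out-of-sample RMSE, mirroring the kind of comparison the article reports.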


This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early-to-mid 2000s. We explore a wide range of definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques that are new to macroeconomics: recurrent neural networks and kernel recursive least squares regression. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naïve random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies. © 2010 Elsevier B.V. All rights reserved.
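The winning model class, a nonlinear autoregression estimated with finite-memory kernel methods and benchmarked against a naïve random walk, can be illustrated in miniature. As a stand-in for the paper's kernel recursive least squares, the sketch below re-solves a kernel ridge regression on a sliding window (same finite-memory idea, less efficient); the data are simulated, and the kernel width, window and penalty are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, ell=0.3):
    # Gaussian (RBF) kernel between two sets of 1-D inputs
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell ** 2)

# Simulated inflation series with mildly nonlinear AR(1) dynamics
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = -0.5 * y[t - 1] + 0.2 * y[t - 1] ** 2 + 0.1 * rng.normal()

x = y[:-1]         # lagged inflation as the only regressor
target = y[1:]

window, lam = 100, 1e-3   # finite memory length and ridge penalty
preds, naive = [], []
for t in range(window, len(target)):
    xw, yw = x[t - window:t], target[t - window:t]
    K = rbf(xw, xw)
    alpha = np.linalg.solve(K + lam * np.eye(window), yw)
    preds.append(float(rbf(x[t:t + 1], xw) @ alpha))
    naive.append(target[t - 1])     # random-walk benchmark

preds, naive = np.array(preds), np.array(naive)
actual = target[window:]
rmse_kernel = np.sqrt(np.mean((preds - actual) ** 2))
rmse_rw = np.sqrt(np.mean((actual - naive) ** 2))
```

On this mean-reverting simulated series the kernel autoregression beats the random walk comfortably; the paper's point is that adding monetary aggregates to such a model adds little further accuracy.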