874 results for volatility forecasting
Abstract:
The aim of this paper is to analyze the forecasting ability of the CARR model proposed by Chou (2005) using the S&P 500. We extend the data sample, allowing for the analysis of different stock market circumstances, and propose the use of various range estimators in order to compare their forecasting performance. Our results show that two range-based models outperform the forecasting ability of the GARCH model: the Parkinson model is better for upward trends and for volatility levels above or below the mean, while the CARR model is better for downward trends and for volatility around its mean.
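The range estimators this abstract refers to are simple to compute. A minimal sketch of the Parkinson estimator from daily highs and lows (function and argument names are illustrative, not the paper's):

```python
import math

def parkinson_vol(highs, lows):
    """Parkinson range-based volatility:
    sigma^2 = mean( ln(H/L)^2 ) / (4 ln 2)."""
    sq_log_ranges = [math.log(h / l) ** 2 for h, l in zip(highs, lows)]
    mean_sq = sum(sq_log_ranges) / len(sq_log_ranges)
    return math.sqrt(mean_sq / (4.0 * math.log(2.0)))
```

Because it uses the full high-low range rather than close-to-close returns, this estimator is more efficient than the squared-return proxy under the same diffusion assumptions.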
Abstract:
Research report presented to the Faculté des arts et des sciences in partial fulfillment of the requirements for the degree of Master in Economics.
Abstract:
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions may prefer models which are sub-optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out-of-sample forecasting performance of various linear and GARCH-type models of volatility is compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision-making.
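Two of the standard statistical loss functions this literature evaluates forecasts with can be sketched in a few lines (a minimal illustration; `proxy` is an ex-post variance proxy, `forecasts` the model's variance forecasts, and the names are mine):

```python
import math

def mse_loss(proxy, forecasts):
    """Mean squared error between a variance proxy and variance forecasts."""
    return sum((p - f) ** 2 for p, f in zip(proxy, forecasts)) / len(proxy)

def qlike_loss(proxy, forecasts):
    """QLIKE loss, robust to noise in the variance proxy (Patton, 2011)."""
    return sum(p / f - math.log(p / f) - 1.0
               for p, f in zip(proxy, forecasts)) / len(proxy)
```

MSE penalizes over- and under-prediction symmetrically, while QLIKE penalizes under-prediction of variance more heavily, which is one reason model rankings can flip depending on the loss function used.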
Abstract:
In this paper, we study the role of the volatility risk premium for the forecasting performance of implied volatility. We introduce a non-parametric and parsimonious approach to adjust the model-free implied volatility for the volatility risk premium and implement this methodology using more than 20 years of options and futures data on three major energy markets. Using regression models and statistical loss functions, we find compelling evidence to suggest that the risk premium adjusted implied volatility significantly outperforms other models, including its unadjusted counterpart. Our main finding holds for different choices of volatility estimators and competing time-series models, underscoring the robustness of our results.
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
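The model-free decomposition described here is typically built from realized variance and bipower variation; a minimal sketch under that assumption (illustrative names, and the paper's exact estimators may differ):

```python
import math

def realized_variance(returns):
    """RV: sum of squared intraday returns."""
    return sum(r * r for r in returns)

def bipower_variation(returns):
    """BV: jump-robust estimator of the continuous component.
    Scaling uses mu1^2 = (E|Z|)^2 = 2/pi for standard normal Z."""
    mu1_sq = 2.0 / math.pi
    return sum(abs(a) * abs(b)
               for a, b in zip(returns[1:], returns[:-1])) / mu1_sq

def jump_component(returns):
    """Discontinuous part of RV, truncated at zero."""
    return max(realized_variance(returns) - bipower_variation(returns), 0.0)
```

Since BV is robust to jumps while RV is not, the truncated difference RV - BV serves as a nonparametric estimate of the jump contribution on a given day.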
Abstract:
Unpublished.
Abstract:
This paper evaluates the forecasting performance of a continuous stochastic volatility model with two factors of volatility (SV2F) and compares it to those of GARCH and ARFIMA models. The empirical results show that the volatility forecasting ability of the SV2F model is better than that of the GARCH and ARFIMA models, especially when volatility seems to change pattern. We use ex-post volatility as a proxy for the realized volatility obtained from intraday data, and the forecasts from the SV2F are calculated using the reprojection technique proposed by Gallant and Tauchen (1998).
Abstract:
This paper presents gamma stochastic volatility models and investigates their distributional and time series properties. The parameter estimators obtained by the method of moments are shown analytically to be consistent and asymptotically normal. The simulation results indicate that the estimators behave well. The in-sample analysis shows that return models with gamma autoregressive stochastic volatility processes capture the leptokurtic nature of return distributions and the slowly decaying autocorrelation functions of squared stock index returns for the USA and UK. In comparison with GARCH and EGARCH models, the gamma autoregressive model picks up the persistence in volatility for the US and UK index returns but not the volatility persistence for the Canadian and Japanese index returns. The out-of-sample analysis indicates that the gamma autoregressive model has a superior volatility forecasting performance compared to GARCH and EGARCH models.
Abstract:
We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times, compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
Abstract:
The goal of this paper is twofold. First, using five of the most actively traded stocks in the Brazilian financial market, this paper shows that the normality assumption commonly used in the risk management area to describe the distributions of returns standardized by volatilities is not compatible with volatilities estimated by EWMA or GARCH models. In sharp contrast, when the information contained in high frequency data is used to construct the realized volatility measures, we recover the normality of the standardized returns, giving promise of improvements in Value at Risk statistics. We also describe the distributions of volatilities of the Brazilian stocks, showing that the distributions of volatilities are nearly lognormal. Second, we estimate a simple linear model for the log of realized volatilities that differs from the ones in other studies. The main difference is that we do not find evidence of long memory. The estimated model is compared with commonly used alternatives in an out-of-sample experiment.
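The EWMA filter used to standardize returns can be sketched as a RiskMetrics-style recursion (lambda = 0.94 is the conventional daily value; the function names and the seeding choice are mine, not the paper's):

```python
def ewma_variance_path(returns, lam=0.94):
    """EWMA recursion: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2,
    seeded here with the first squared return."""
    var = returns[0] ** 2
    path = []
    for r in returns:
        path.append(var)          # variance known at the start of period t
        var = lam * var + (1.0 - lam) * r * r
    return path

def standardized_returns(returns, variances):
    """Returns divided by their conditional standard deviation."""
    return [r / v ** 0.5 for r, v in zip(returns, variances)]
```

The paper's point is that these standardized returns remain fat-tailed under EWMA or GARCH variances, but look close to Gaussian when realized variances from intraday data are used instead.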
Abstract:
Master's dissertation, Universidade de Brasília, Departamento de Administração, Programa de Pós-graduação em Administração, 2016.
Abstract:
This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out-of-sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with those generated by the commonly used GARCH(1, 1) model. An examination of the orders of models selected by the criteria reveals that (1, 1) models are typically selected less than 20% of the time.
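For reference, the GARCH(1,1) benchmark that these selection criteria are compared against follows a one-line variance recursion; a minimal sketch with illustrative parameter values:

```python
def garch11_variance_path(returns, omega, alpha, beta, init_var):
    """GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    var = init_var
    path = [var]
    for r in returns:
        var = omega + alpha * r * r + beta * var
        path.append(var)
    return path
```

With omega = 0.1, alpha = 0.1, beta = 0.8, the unconditional variance omega / (1 - alpha - beta) equals 1.0, so starting at unit variance a unit shock leaves the forecast at that level. Higher-order GARCH(p, q) models, which the paper's criteria often prefer, simply add further lags of the squared return and the conditional variance.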
Abstract:
The goal of this paper is to present a comprehensive empirical analysis of the return and conditional variance of four Brazilian financial series using models of the ARCH class. Selected models are then compared regarding forecasting accuracy and goodness-of-fit statistics. To help understanding of the empirical results, a self-contained theoretical discussion of ARCH models is also presented in such a way that it is useful for the applied researcher. Empirical results show that although all series exhibit ARCH effects and are leptokurtic relative to the Normal, the return on the US$ clearly exhibits regime switching and no asymmetry in the variance, the return on COCOA has no asymmetry, while the returns on the CBOND and TELEBRAS show clear signs of asymmetry favoring the leverage effect. Regarding forecasting, the best model overall was the EGARCH(1, 1) in its Gaussian version. Regarding goodness-of-fit statistics, the SWARCH model did well, followed closely by the Student-t GARCH(1, 1).
Abstract:
Aiming at empirical findings, this work focuses on applying the HEAVY model for daily volatility with financial data from the Brazilian market. Quite similar to GARCH, this model seeks to harness high frequency data in order to achieve its objectives. Four variations of it were then implemented and their fit compared to GARCH equivalents, using metrics present in the literature. Results suggest that, in such a market, HEAVY does seem to specify daily volatility better, but does not necessarily produce better predictions for it, which is normally the ultimate goal. The dataset used in this work consists of intraday trades of U.S. Dollar and Ibovespa future contracts from BM&FBovespa.
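The HEAVY variance equation (Shephard and Sheppard, 2010) differs from GARCH mainly in that the recursion is driven by a realized measure built from intraday data rather than by the squared daily return; a minimal sketch with illustrative parameter values:

```python
def heavy_variance_path(realized_measures, omega, alpha, beta, init_var):
    """HEAVY: sigma2_t = omega + alpha * RM_{t-1} + beta * sigma2_{t-1},
    where RM is a realized measure (e.g. realized variance) from intraday data."""
    var = init_var
    path = [var]
    for rm in realized_measures:
        var = omega + alpha * rm + beta * var
        path.append(var)
    return path
```

Replacing the squared return with a realized measure lets the conditional variance react faster to volatility changes, which is consistent with the abstract's finding of better in-sample specification even without better forecasts.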
Abstract:
In recent years fractionally differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper considers a class of models generated by Gegenbauer polynomials, incorporating the long memory in stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the statistical properties of the new model, suggest spectral likelihood estimation for long memory processes, and investigate the finite sample properties via Monte Carlo experiments. We apply the model to three exchange rate return series. Overall, the results of the out-of-sample forecasts show the adequacy of the new GLMSV model.