916 results for Multivariate volatility models
Abstract:
We develop a general model to price VIX futures contracts. The model is adapted to test both the constant elasticity of variance (CEV) and the Cox–Ingersoll–Ross formulations, with and without jumps. Empirical tests on VIX futures prices provide out-of-sample estimates within 2% of the actual futures price for almost all futures maturities. We show that although jumps are present in the data, the models with jumps do not typically outperform the others; in particular, we demonstrate the important benefits of the CEV feature in pricing futures contracts. We conclude by examining errors in the model relative to the VIX characteristics.
Abstract:
Internal risk management models of the kind popularized by J. P. Morgan are now used widely by the world’s most sophisticated financial institutions as a means of measuring risk. Using the returns on three of the most popular futures contracts on the London International Financial Futures Exchange, in this paper we investigate the possibility of using multivariate generalized autoregressive conditional heteroscedasticity (GARCH) models for the calculation of minimum capital risk requirements (MCRRs). We propose a method for the estimation of the value at risk of a portfolio based on a multivariate GARCH model. We find that the consideration of the correlation between the contracts can lead to more accurate, and therefore more appropriate, MCRRs compared with the values obtained from a univariate approach to the problem.
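The multivariate step that distinguishes this approach from a univariate one can be sketched as follows. The covariance matrix below is an invented one-day-ahead forecast (in practice it would come from the fitted multivariate GARCH model), and the quantile step assumes normality, which is only one of several possible choices:

```python
import numpy as np
from statistics import NormalDist

def portfolio_var(weights, cov, alpha=0.01, horizon=1):
    """Value-at-Risk of a portfolio given a forecast covariance matrix.

    Portfolio variance w' Sigma w uses the full covariance matrix, so
    cross-contract correlations enter the risk number, unlike a
    univariate approach that treats each contract in isolation.
    """
    sigma_p = np.sqrt(weights @ cov @ weights)   # portfolio volatility
    z = NormalDist().inv_cdf(alpha)              # lower-tail normal quantile
    return -z * sigma_p * np.sqrt(horizon)       # report VaR as a positive number

# Invented one-day-ahead covariance forecast for three futures contracts
cov = np.array([[0.0004, 0.0002, 0.0001],
                [0.0002, 0.0009, 0.0003],
                [0.0001, 0.0003, 0.0006]])
w = np.array([1 / 3, 1 / 3, 1 / 3])
daily_var = portfolio_var(w, cov, alpha=0.01)
```

The MCRR itself would then be a multiple of such a quantile over the regulatory holding period; the scaling conventions are simplified here.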
Abstract:
We discuss the modeling of dielectric responses of electromagnetically excited networks which are composed of a mixture of capacitors and resistors. Such networks can be employed as lumped-parameter circuits to model the response of composite materials containing conductive and insulating grains. The dynamics of the excited network systems are studied using a state space model derived from a randomized incidence matrix. Time and frequency domain responses from synthetic data sets generated from state space models are analyzed for the purpose of estimating the fraction of capacitors in the network. Good results were obtained by using either the time-domain response to a pulse excitation or impedance data at selected frequencies. A chemometric framework based on a Successive Projections Algorithm (SPA) enables the construction of multiple linear regression (MLR) models which can efficiently determine the ratio of conductive to insulating components in composite material samples. The proposed method avoids restrictions commonly associated with Archie’s law, the application of percolation theory or Kohlrausch-Williams-Watts models and is applicable to experimental results generated by either time domain transient spectrometers or continuous-wave instruments. Furthermore, it is quite generic and applicable to tomography, acoustics as well as other spectroscopies such as nuclear magnetic resonance, electron paramagnetic resonance and, therefore, should be of general interest across the dielectrics community.
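The SPA/MLR calibration idea, regressing a known property on responses at a few selected frequencies, can be illustrated with a generic least-squares sketch. The two "impedance features" and the coefficients below are synthetic stand-ins for illustration only, not values from the paper:

```python
import numpy as np

def fit_mlr(X, y):
    """Multiple linear regression by ordinary least squares, with intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_mlr(coef, X):
    """Predict the target property for new samples from fitted coefficients."""
    return np.column_stack([np.ones(len(X)), X]) @ coef

# Synthetic calibration set: two invented impedance features at selected
# frequencies, linearly related to the capacitor fraction of each sample.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 2))
y = 0.5 + X @ np.array([0.2, -0.1])
coef = fit_mlr(X, y)
```

In the actual framework, SPA would first select which frequencies enter `X`; that variable-selection step is omitted here.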
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
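The two quantile-computation routes the abstract contrasts, a distributional assumption versus the empirical distribution of standardized returns, can be sketched as follows. The data are simulated and normality is assumed for the illustration; the paper's volatility-forecasting models themselves are not reproduced here:

```python
import numpy as np
from statistics import NormalDist

def quantile_forecast_normal(vol_forecast, alpha=0.05):
    """Quantile forecast assuming future daily returns are normal."""
    return NormalDist().inv_cdf(alpha) * vol_forecast

def quantile_forecast_empirical(vol_forecast, past_returns, past_vols, alpha=0.05):
    """Quantile forecast from the empirical distribution of standardized
    returns: standardize past returns by their predicted volatilities,
    take the empirical alpha-quantile, rescale by the new forecast."""
    z = np.asarray(past_returns, float) / np.asarray(past_vols, float)
    return np.quantile(z, alpha) * vol_forecast

# Simulated stand-in for daily FX returns with a flat volatility path
rng = np.random.default_rng(0)
vols = np.full(5000, 0.006)
rets = rng.normal(0.0, 0.006, size=5000)
q_norm = quantile_forecast_normal(0.006)
q_emp = quantile_forecast_empirical(0.006, rets, vols)
```

With normal simulated data the two routes nearly agree; the empirical route pays off when standardized returns are fat-tailed or shift over time.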
Abstract:
We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts. Copyright © 2012 John Wiley & Sons, Ltd.
Abstract:
In 2007, futures contracts were introduced based upon the listed real estate market in Europe. Since their launch they have received increasing attention from property investors, yet few studies have considered the impact of their introduction. This study considers two key elements. Firstly, a traditional Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, the approach of Bessembinder & Seguin (1992) and Gray's (1996) Markov-switching GARCH model are used to examine the impact of futures trading on the European real estate securities market. The results show that futures trading did not destabilize the underlying listed market. Importantly, the results also reveal that the introduction of a futures market has improved the speed and quality of information flowing to the spot market. Secondly, we assess the hedging effectiveness of the contracts using two alternative strategies (naïve and Ordinary Least Squares models). The empirical results also show that the contracts are effective hedging instruments, leading to a reduction in risk of 64%.
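The OLS hedging strategy mentioned above amounts to regressing spot on futures returns; a minimal sketch, with simulated return series standing in for the real estate data:

```python
import numpy as np

def ols_hedge_ratio(spot, futures):
    """Minimum-variance hedge ratio from OLS: h* = Cov(s, f) / Var(f)."""
    s = np.asarray(spot, float)
    f = np.asarray(futures, float)
    return np.cov(s, f)[0, 1] / np.var(f, ddof=1)

def variance_reduction(spot, futures, h):
    """Risk reduction of the hedged position s - h*f relative to unhedged s."""
    s = np.asarray(spot, float)
    f = np.asarray(futures, float)
    return 1.0 - np.var(s - h * f, ddof=1) / np.var(s, ddof=1)

# Simulated spot and futures returns (true exposure 0.9, independent noise)
rng = np.random.default_rng(0)
f = rng.normal(0.0, 0.01, size=2000)
s = 0.9 * f + rng.normal(0.0, 0.005, size=2000)
h_ols = ols_hedge_ratio(s, f)       # close to the true exposure of 0.9
```

The naïve strategy simply sets `h = 1`; in-sample, the OLS ratio can never reduce variance by less than the naïve one, since it minimizes hedged variance by construction.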
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
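An exponentially weighted moving-average hedge ratio can be sketched as the ratio of EWMA covariance to EWMA variance. The decay factor 0.94 below is the common RiskMetrics-style default for daily data, not necessarily the value used in the article:

```python
import numpy as np

def ewma_hedge_ratio(spot, futures, lam=0.94):
    """Hedge ratio from exponentially weighted second moments:
    h = EWMA-Cov(spot, futures) / EWMA-Var(futures), decay factor lam."""
    s = np.asarray(spot, float)
    f = np.asarray(futures, float)
    cov = s[0] * f[0]          # initialize with the first observation
    var = f[0] ** 2
    for t in range(1, len(s)):
        cov = lam * cov + (1 - lam) * s[t] * f[t]
        var = lam * var + (1 - lam) * f[t] ** 2
    return cov / var

# Toy return series; in practice these would be daily currency returns
f = np.array([0.01, -0.02, 0.015, 0.005, -0.01])
```

Unlike a rolling OLS estimate, the EWMA ratio down-weights old observations smoothly, which is one plausible reason it adapts well across hedging horizons.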
Abstract:
This paper explores a number of statistical models for predicting the daily stock return volatility of an aggregate of all stocks traded on the NYSE. An application of linear and non-linear Granger causality tests highlights evidence of bidirectional causality, although the relationship is stronger from volatility to volume than the other way around. The out-of-sample forecasting performance of various linear, GARCH, EGARCH, GJR and neural network models of volatility are evaluated and compared. The models are also augmented by the addition of a measure of lagged volume to form more general ex-ante forecasting models. The results indicate that augmenting models of volatility with measures of lagged volume leads only to very modest improvements, if any, in forecasting performance.
Abstract:
This article examines the role of idiosyncratic volatility in explaining the cross-sectional variation of size- and value-sorted portfolio returns. We show that the premium for bearing idiosyncratic volatility varies inversely with the number of stocks included in the portfolios. This conclusion is robust within various multifactor models based on size, value, past performance, liquidity and total volatility and also holds within an ICAPM specification of the risk–return relationship. Our findings thus indicate that investors demand an additional return for bearing the idiosyncratic volatility of poorly-diversified portfolios.
Abstract:
In this paper, we study the role of the volatility risk premium for the forecasting performance of implied volatility. We introduce a non-parametric and parsimonious approach to adjust the model-free implied volatility for the volatility risk premium and implement this methodology using more than 20 years of options and futures data on three major energy markets. Using regression models and statistical loss functions, we find compelling evidence to suggest that the risk-premium-adjusted implied volatility significantly outperforms other models, including its unadjusted counterpart. Our main finding holds for different choices of volatility estimators and competing time-series models, underlining the robustness of our results.
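One simple way to make such a non-parametric premium adjustment concrete, not necessarily the authors' exact construction, is to subtract the historical mean gap between implied volatility and the realized volatility that subsequently materialized:

```python
import numpy as np

def premium_adjusted_iv(iv_today, past_iv, past_rv):
    """Adjust model-free implied volatility for the volatility risk premium.

    Sketch: estimate the premium as the historical mean gap between implied
    volatility and subsequently realized volatility, then subtract that gap
    from today's implied volatility.
    """
    premium = np.mean(np.asarray(past_iv, float) - np.asarray(past_rv, float))
    return iv_today - premium
```

For example, if implied volatility has historically exceeded realized volatility by five points on average, `premium_adjusted_iv(0.30, [0.25, 0.27], [0.20, 0.22])` yields 0.25.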
Abstract:
We consider the forecasting of macroeconomic variables that are subject to revisions, using Bayesian vintage-based vector autoregressions. The prior incorporates the belief that, after the first few data releases, subsequent ones are likely to consist of revisions that are largely unpredictable. The Bayesian approach allows the joint modelling of the data revisions of more than one variable, while keeping the concomitant increase in parameter estimation uncertainty manageable. Our model provides markedly more accurate forecasts of post-revision values of inflation than do other models in the literature.
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
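The model-free decomposition and a HAR-with-jumps regression can be sketched as follows. The bipower-variation jump estimator and the daily/weekly/monthly HAR structure are standard, but details such as sampling frequency and significance thresholds for jumps are simplified relative to the paper:

```python
import numpy as np

def jump_component(rv_day, intraday_returns):
    """Model-free jump component for one day: realized variance minus
    bipower variation, truncated at zero."""
    r = np.abs(np.asarray(intraday_returns, float))
    mu1 = np.sqrt(2.0 / np.pi)                      # E|Z| for a standard normal
    bv = (1.0 / mu1 ** 2) * np.sum(r[1:] * r[:-1])  # bipower variation
    return max(rv_day - bv, 0.0)

def har_design(rv, jumps):
    """HAR-RV-J regressors: lagged daily RV, 5-day and 22-day average RV,
    and the lagged jump component."""
    rv = np.asarray(rv, float)
    jumps = np.asarray(jumps, float)
    rows, y = [], []
    for t in range(22, len(rv)):
        rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(),
                     rv[t - 22:t].mean(), jumps[t - 1]])
        y.append(rv[t])
    return np.array(rows), np.array(y)

def har_fit(rv, jumps):
    """Least-squares estimates of the HAR-RV-J coefficients."""
    X, y = har_design(rv, jumps)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

Bipower variation is robust to jumps, so a day with a single large return leaves a positive gap between realized variance and bipower variation, which is the detected jump.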
Abstract:
In this paper, we introduce a Bayesian analysis for multivariate survival data in the presence of a covariate vector and censored observations. Different "frailties" or latent variables are considered to capture the correlation among the survival times of the same individual. We assume Weibull or generalized Gamma distributions for right-censored lifetime data. We develop the Bayesian analysis using Markov Chain Monte Carlo (MCMC) methods.
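A minimal sketch of one ingredient of such an analysis, the right-censored Weibull likelihood paired with a random-walk Metropolis sampler, is shown below; the paper's frailty terms, covariates, and generalized Gamma alternative are omitted:

```python
import numpy as np

def weibull_loglik(params, t, event):
    """Log-likelihood for right-censored Weibull lifetimes.
    params = (log shape, log scale); event[i] = 1 observed, 0 censored."""
    k, lam = np.exp(params)
    z = (t / lam) ** k
    log_f = np.log(k / lam) + (k - 1.0) * np.log(t / lam) - z   # log density
    log_S = -z                                                   # log survival
    return float(np.sum(event * log_f + (1.0 - event) * log_S))

def metropolis(t, event, n_iter=4000, step=0.08, seed=1):
    """Random-walk Metropolis on (log shape, log scale), flat priors on logs."""
    rng = np.random.default_rng(seed)
    cur = np.zeros(2)                       # start at shape = scale = 1
    cur_ll = weibull_loglik(cur, t, event)
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = cur + step * rng.normal(size=2)
        prop_ll = weibull_loglik(prop, t, event)
        if np.log(rng.uniform()) < prop_ll - cur_ll:   # accept/reject
            cur, cur_ll = prop, prop_ll
        draws[i] = cur
    return np.exp(draws[n_iter // 2:])      # drop burn-in, back-transform

# Synthetic lifetimes: Weibull with shape 2, scale 1, no censoring
rng = np.random.default_rng(0)
t = rng.weibull(2.0, size=500)
event = np.ones(500)
draws = metropolis(t, event)
```

Sampling on the log scale keeps both parameters positive without boundary handling; a full frailty model would add a latent variable per individual to this state vector.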
Abstract:
In this paper, we introduce a Bayesian analysis for bioequivalence data assuming multivariate pharmacokinetic measures. With the introduction of correlation parameters between the pharmacokinetic measures or between the random effects in the bioequivalence models, we observe a good improvement in the bioequivalence results. These results are of great practical interest, since they can yield higher accuracy and reliability for the bioequivalence tests usually required by regulatory agencies. An example is introduced to illustrate the proposed methodology by comparing the usual univariate bioequivalence methods with multivariate bioequivalence. We also consider some existing Bayesian discrimination methods to choose the best model to be used in bioequivalence studies.
Abstract:
The multivariate skew-t distribution (J Multivar Anal 79:93-113, 2001; J R Stat Soc, Ser B 65:367-389, 2003; Statistics 37:359-363, 2003) includes the Student t, skew-Cauchy and Cauchy distributions as special cases and the normal and skew-normal ones as limiting cases. In this paper, we explore the use of Markov Chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of repeated-measures, pretest/post-test data under a multivariate null-intercept measurement error model (J Biopharm Stat 13(4):763-771, 2003) in which the random errors and the unobserved value of the covariate (latent variable) follow Student t and skew-t distributions, respectively. The results and methods are numerically illustrated with an example in the field of dentistry.