39 results for eigenfunction stochastic volatility models
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
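The two quantile-computation methods this abstract contrasts can be sketched in a few lines: scale a Gaussian quantile by the volatility forecast, or apply the forecast to the empirical quantile of standardized returns. The sketch below is illustrative only; the returns are simulated and the one-day-ahead forecast `sigma_fcst` is a made-up stand-in, not anything from the paper.

```python
# Minimal sketch: two ways to turn a volatility forecast into a 1% VaR quantile.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=1000) * 0.006   # stand-in for daily FX returns
sigma_hist = returns.std()                          # stand-in for a model's fitted vols
sigma_fcst = 0.007                                  # hypothetical one-day-ahead forecast

# 1) Distributional assumption: Gaussian quantile scaled by the forecast.
var_gaussian = sigma_fcst * norm.ppf(0.01)

# 2) Empirical method: quantile of standardized returns, rescaled by the forecast.
z = returns / sigma_hist                            # predicted standardized returns
var_empirical = sigma_fcst * np.quantile(z, 0.01)

print(var_gaussian, var_empirical)
```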
Abstract:
In 2007 futures contracts were introduced based upon the listed real estate market in Europe. Following their launch they have received increasing attention from property investors; however, few studies have considered the impact of their introduction. This study considers two key elements. Firstly, a traditional Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, the approach of Bessembinder & Seguin (1992) and Gray's (1996) Markov-switching GARCH model are used to examine the impact of futures trading on the European real estate securities market. The results show that futures trading did not destabilize the underlying listed market. Importantly, the results also reveal that the introduction of a futures market has improved the speed and quality of information flowing to the spot market. Secondly, we assess the hedging effectiveness of the contracts using two alternative strategies (naïve and Ordinary Least Squares models). The empirical results also show that the contracts are effective hedging instruments, leading to a reduction in risk of 64%.
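The minimum-variance (OLS) hedge ratio behind the second part of the study can be illustrated directly: regress spot returns on futures returns and compare the variance of the hedged position with a naïve one-to-one hedge. The sketch below uses simulated stand-ins for the spot and futures return series.

```python
# Minimal sketch of the two hedging strategies: a naive 1:1 hedge and an OLS
# (minimum-variance) hedge ratio estimated from spot and futures returns.
import numpy as np

rng = np.random.default_rng(1)
futures = rng.normal(0, 0.01, 500)
spot = 0.8 * futures + rng.normal(0, 0.004, 500)         # correlated spot returns

beta = np.cov(spot, futures)[0, 1] / np.var(futures, ddof=1)  # OLS hedge ratio

for h, label in [(1.0, "naive"), (beta, "OLS")]:
    hedged = spot - h * futures                          # hedged position returns
    reduction = 1 - hedged.var() / spot.var()            # variance reduction
    print(f"{label}: risk reduction = {reduction:.0%}")
```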
Abstract:
This paper models the transmission of shocks between the US, Japanese and Australian equity markets. Tests for the existence of linear and non-linear transmission of volatility across the markets are performed using parametric and non-parametric techniques. In particular, the size and sign of return innovations are important factors in determining the degree of spillovers in volatility. It is found that a multivariate asymmetric GARCH formulation can explain almost all of the non-linear causality between markets. These results have important implications for the construction of models and forecasts of international equity returns.
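The asymmetry the paper highlights (the size and sign of innovations matter) is the defining feature of GJR-type asymmetric GARCH models. As a hedged illustration, the univariate recursion below shows how a negative innovation raises next period's conditional variance by an extra `gamma`; the paper's actual model is multivariate, and all parameter values here are made up.

```python
# Sketch of the asymmetry mechanism in a GJR-type GARCH(1,1): negative return
# innovations add `gamma` to the news impact. Univariate, illustrative parameters.
import numpy as np

def gjr_garch_variance(eps, omega=1e-6, alpha=0.03, gamma=0.08, beta=0.90):
    """Conditional variance path given return innovations `eps`."""
    h = np.empty_like(eps)
    h[0] = eps.var()                           # initialize at the sample variance
    for t in range(1, len(eps)):
        neg = 1.0 if eps[t - 1] < 0 else 0.0   # indicator for a negative shock
        h[t] = omega + (alpha + gamma * neg) * eps[t - 1] ** 2 + beta * h[t - 1]
    return h

eps = np.random.default_rng(2).normal(0, 0.01, 1000)
print(gjr_garch_variance(eps)[:5])
```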
Abstract:
This paper explores a number of statistical models for predicting the daily stock return volatility of an aggregate of all stocks traded on the NYSE. An application of linear and non-linear Granger causality tests highlights evidence of bidirectional causality, although the relationship is stronger from volatility to volume than the other way around. The out-of-sample forecasting performance of various linear, GARCH, EGARCH, GJR and neural network models of volatility is evaluated and compared. The models are also augmented by the addition of a measure of lagged volume to form more general ex-ante forecasting models. The results indicate that augmenting models of volatility with measures of lagged volume leads only to very modest improvements, if any, in forecasting performance.
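Augmenting a volatility model with lagged volume can be sketched, in its simplest regression form, as adding lagged volume to an autoregression for squared returns. The sketch below uses simulated stand-ins and ordinary least squares; the paper's models (GARCH, EGARCH, GJR, neural networks) embed the volume term differently.

```python
# Sketch of augmenting a volatility forecast with lagged volume: regress
# squared returns on their own lag and on lagged volume. Simulated data.
import numpy as np

rng = np.random.default_rng(3)
r2 = rng.gamma(2.0, 0.5e-4, 1001)       # proxy for daily squared returns
vol = rng.lognormal(10, 0.3, 1001)      # proxy for daily trading volume

y = r2[1:]                              # tomorrow's volatility proxy
X = np.column_stack([np.ones(1000), r2[:-1], vol[:-1]])  # lagged r^2 + lagged volume
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)                             # intercept, lagged r^2, lagged volume
```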
Abstract:
This article examines the role of idiosyncratic volatility in explaining the cross-sectional variation of size- and value-sorted portfolio returns. We show that the premium for bearing idiosyncratic volatility varies inversely with the number of stocks included in the portfolios. This conclusion is robust within various multifactor models based on size, value, past performance, liquidity and total volatility and also holds within an ICAPM specification of the risk–return relationship. Our findings thus indicate that investors demand an additional return for bearing the idiosyncratic volatility of poorly diversified portfolios.
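Idiosyncratic volatility in this setting is typically measured as the residual standard deviation from a factor-model regression of portfolio returns on common factors. A minimal sketch, with simulated factors and an illustrative three-factor loading vector (none of it from the paper):

```python
# Sketch: idiosyncratic volatility as the residual standard deviation from a
# factor-model regression. Factors and loadings are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(4)
factors = rng.normal(0, 0.01, (500, 3))          # e.g. market, size, value
betas = np.array([1.0, 0.4, 0.2])                # hypothetical loadings
port = factors @ betas + rng.normal(0, 0.005, 500)

X = np.column_stack([np.ones(500), factors])
coef, *_ = np.linalg.lstsq(X, port, rcond=None)
resid = port - X @ coef                          # factor-model residuals
idio_vol = resid.std(ddof=X.shape[1])            # idiosyncratic volatility
print(idio_vol)
```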
Abstract:
In this paper, we study the role of the volatility risk premium for the forecasting performance of implied volatility. We introduce a non-parametric and parsimonious approach to adjust the model-free implied volatility for the volatility risk premium and implement this methodology using more than 20 years of options and futures data on three major energy markets. Using regression models and statistical loss functions, we find compelling evidence to suggest that the risk premium adjusted implied volatility significantly outperforms other models, including its unadjusted counterpart. Our main finding holds for different choices of volatility estimators and competing time-series models, underscoring the robustness of our results.
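The abstract does not spell out the non-parametric adjustment, so the sketch below shows one simple stand-in, stated as an assumption rather than the paper's estimator: rescale the implied variance by the trailing mean ratio of realized to implied variance, which removes the average premium over the estimation window. The window length and data are likewise illustrative.

```python
# Sketch of one simple risk-premium adjustment (an assumption, not the paper's
# exact estimator): scale implied variance by the trailing mean realized/implied
# ratio, so the adjusted forecast is premium-free on average over the window.
import numpy as np

rng = np.random.default_rng(5)
iv2 = rng.gamma(4.0, 0.01, 252)                 # model-free implied variances
rv2 = 0.8 * iv2 * rng.lognormal(0, 0.2, 252)    # realized variances (premium present)

window = 60
ratio = np.mean(rv2[-window:] / iv2[-window:])  # trailing premium estimate
iv2_next = 0.045                                # hypothetical next implied variance
adjusted_forecast = ratio * iv2_next
print(ratio, adjusted_forecast)
```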
Abstract:
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
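The simplest concrete instance of the reduced-order stochastic modeling described above is a Hasselmann-type model, in which fast unresolved "weather" forces a slow climate variable as white noise, giving an Ornstein-Uhlenbeck process. A minimal sketch with illustrative parameters:

```python
# Sketch of the simplest stochastic reduced-order climate model: deterministic
# damping of a slow variable plus white-noise forcing from unresolved scales
# (Hasselmann-type). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(6)
dt, lam, sigma = 0.01, 0.5, 0.3    # time step, damping rate, noise amplitude
x = np.zeros(10_000)
for t in range(1, len(x)):
    # Euler-Maruyama step: damping + stochastic forcing
    x[t] = x[t - 1] - lam * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
print(x.std())                     # approaches sqrt(sigma**2 / (2 * lam))
```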
Abstract:
Individual-based models (IBMs) can simulate the actions of individual animals as they interact with one another and the landscape in which they live. When used in spatially explicit landscapes, IBMs can show how populations change over time in response to management actions. For instance, IBMs are being used to design strategies for conservation and for the exploitation of fisheries, and for assessing the effects on populations of major construction projects and of novel agricultural chemicals. In such real-world contexts, it becomes especially important to build IBMs in a principled fashion, and to approach calibration and evaluation systematically. We argue that insights from physiological and behavioural ecology offer a recipe for building realistic models, and that Approximate Bayesian Computation (ABC) is a promising technique for the calibration and evaluation of IBMs. IBMs are constructed primarily from knowledge about individuals. In ecological applications the relevant knowledge is found in physiological and behavioural ecology, and we approach these from an evolutionary perspective by taking into account how physiological and behavioural processes contribute to life histories, and how those life histories evolve. Evolutionary life history theory shows that, other things being equal, organisms should grow to sexual maturity as fast as possible, and then reproduce as fast as possible, while minimising per capita death rate. Physiological and behavioural ecology are largely built on these principles together with the laws of conservation of matter and energy. To complete the construction of an IBM, information is also needed on the effects of competitors, conspecifics and food scarcity; the maximum rates of ingestion, growth and reproduction; and life-history parameters. Using this knowledge about physiological and behavioural processes provides a principled way to build IBMs, but model parameters vary between species and are often difficult to measure. A common solution is to manually compare model outputs with observations from real landscapes and so obtain parameters which produce acceptable fits of model to data. However, this procedure can be convoluted and lead to over-calibrated and thus inflexible models. Many formal statistical techniques are unsuitable for use with IBMs, but we argue that ABC offers a potential way forward. It can be used to calibrate and compare complex stochastic models and to assess the uncertainty in their predictions. We describe methods used to implement ABC in an accessible way and illustrate them with examples and discussion of recent studies. Although much progress has been made, theoretical issues remain, and some of these are outlined and discussed.
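Rejection-sampling ABC, the simplest variant of the technique advocated above, can be sketched in a few lines: draw parameters from the prior, run the stochastic model, and keep draws whose summary statistics fall within a tolerance of the observed ones. The toy model, prior, summary statistic, and tolerance below are all illustrative, not taken from any cited study.

```python
# Sketch of rejection-sampling ABC for calibrating a stochastic model.
import numpy as np

rng = np.random.default_rng(7)

def simulate(growth_rate, n=200):
    """Toy stochastic stand-in for an IBM: counts with demographic noise."""
    return rng.poisson(growth_rate, n)

observed = simulate(4.0)                      # pretend field data
obs_summary = observed.mean()                 # summary statistic

accepted = []
for _ in range(10_000):
    theta = rng.uniform(0.0, 10.0)            # draw from a uniform prior
    sim_summary = simulate(theta).mean()
    if abs(sim_summary - obs_summary) < 0.1:  # keep if within tolerance
        accepted.append(theta)

print(np.mean(accepted), np.std(accepted))    # approximate posterior for theta
```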
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
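The model-free decomposition referenced here typically rests on bipower variation, which is robust to jumps, so that the truncated difference RV − BV estimates the jump component feeding the HAR jump models. A minimal sketch on simulated intraday returns with one injected jump:

```python
# Sketch of the realized-variance decomposition: realized variance (RV) from
# intraday returns, bipower variation (BV) as the jump-robust continuous part,
# and max(RV - BV, 0) as the jump component. Simulated 5-minute returns.
import numpy as np

rng = np.random.default_rng(8)
r = rng.normal(0, 0.001, 288)          # intraday returns on a 5-minute grid
r[100] += 0.01                         # inject one jump

rv = np.sum(r ** 2)                                        # realized variance
mu1 = np.sqrt(2 / np.pi)                                   # E|Z| for standard normal
bv = mu1 ** -2 * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))    # bipower variation
jump = max(rv - bv, 0.0)                                   # jump component
print(rv, bv, jump)
```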