12 results for "Multivariate volatility models"

in Aston University Research Archive


Relevance: 100.00%

Abstract:

We investigate the integration of the European peripheral financial markets with Germany, France, and the UK using a combination of tests for structural breaks and return correlations derived from several multivariate stochastic volatility models. Our findings suggest that financial integration intensified in anticipation of the Euro, strengthened further with the inception of the EMU, and amplified in response to the 2007/2008 financial crisis. Hence, we find no evidence that the equity markets of the more troubled European countries decoupled from the core. Interestingly, the UK, despite staying outside the EMU, is no less integrated with the GIPSI countries than Germany or France. © 2013 Elsevier B.V.
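
As a rough illustration of the kind of time-varying correlation such models deliver, the sketch below tracks an exponentially weighted (RiskMetrics-style) correlation between two synthetic return series standing in for a core and a peripheral market. The decay parameter and the data are assumptions for illustration only, not the multivariate stochastic volatility estimator used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 1000
    # Synthetic daily returns for a "core" and a "peripheral" market (illustration only)
    core = rng.normal(0, 0.01, T)
    periph = 0.6 * core + rng.normal(0, 0.008, T)

    lam = 0.94  # RiskMetrics-style decay; an assumed value, not from the paper
    cov = np.cov(np.vstack([core, periph]))  # initialise with the sample covariance
    corr_path = np.empty(T)
    for t in range(T):
        x = np.array([core[t], periph[t]])
        cov = lam * cov + (1 - lam) * np.outer(x, x)
        corr_path[t] = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

    print(f"mean time-varying correlation: {corr_path.mean():.3f}")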

Relevance: 90.00%

Abstract:

Linear models reach their limitations in applications with nonlinearities in the data. In this paper new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in order to exploit the full potential of the NNs. It is also investigated whether the historically poor performance of the theoretically superior Divisia measure of the monetary services flow, relative to the traditional Simple Sum measure, could be attributed in part to the evaluation of these indices within a linear framework. The results suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are effectively a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
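
The sketch below illustrates the linear-versus-non-linear comparison on a synthetic series: a univariate ARIMA benchmark against a small MLP fitted to lagged values, with iterated one-step forecasts. The lag order, network size and data are illustrative assumptions, not the Euro inflation setup of the paper.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    T = 400
    y = np.zeros(T)
    for t in range(2, T):  # synthetic series with a mild nonlinearity
        y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.3 * np.tanh(y[t-1]) + rng.normal(0, 0.1)

    train, test = y[:350], y[350:]

    # Linear benchmark: ARIMA(2,0,0)
    lin = ARIMA(train, order=(2, 0, 0)).fit()
    lin_fc = lin.forecast(steps=len(test))

    # Non-linear benchmark: MLP on two lagged inputs, forecasts iterated one step at a time
    X_train = np.column_stack([train[1:-1], train[:-2]])   # [y_{t-1}, y_{t-2}]
    y_train = train[2:]
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_train, y_train)
    hist = list(train)
    nn_fc = []
    for _ in range(len(test)):
        pred = nn.predict([[hist[-1], hist[-2]]])[0]
        nn_fc.append(pred)
        hist.append(pred)

    def rmse(forecast):
        return np.sqrt(np.mean((test - np.asarray(forecast)) ** 2))

    print(f"ARIMA RMSE: {rmse(lin_fc):.4f}  MLP RMSE: {rmse(nn_fc):.4f}")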

Relevance: 80.00%

Abstract:

Enterprise Risk Management (ERM) and Knowledge Management (KM) both encompass top-down and bottom-up approaches to developing and embedding risk knowledge concepts and processes in strategy, policies, risk appetite definition, the decision-making process and business processes. The capacity to transfer risk knowledge affects all stakeholders, and understanding the risk knowledge about the enterprise's value is a key requirement for identifying protection strategies for business sustainability. Various factors affect this capacity for transfer and understanding. Previous work has established that there is a difference between the influence of KM variables on Risk Control and on the perceived value of ERM. Communication among groups appears as a significant variable in improving Risk Control but only as a weak factor in improving the perceived value of ERM. However, the ERM mandate requires for its implementation a clear understanding of risk management (RM) policies, actions and results, and the use of the integral view of RM as a governance and compliance program to support the value-driven management of the organization. Furthermore, ERM implementation demands better capabilities for unifying the criteria of risk analysis and aligning policies and protection guidelines across the organization. These capabilities can be affected by risk knowledge sharing between the RM group and the Board of Directors and other executives in the organization. This research presents an exploratory analysis of risk knowledge transfer variables used in risk management practice. A survey of risk management executives from 65 firms in various industries was undertaken and 108 responses were analyzed. Potential relationships among the variables are investigated using descriptive statistics and multivariate statistical models. The level of understanding of risk management policies and reports by the board is related to the quality of the flow of communication in the firm and the perceived level of integration of the risk policy in the business processes.
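
A minimal sketch of the style of analysis described, using synthetic Likert-scale responses: descriptive statistics followed by a multivariate regression of board-level understanding on communication quality and policy integration. The variable names and data are hypothetical; only the number of analysed responses (108) is taken from the abstract.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 108  # number of analysed survey responses, as reported in the abstract
    df = pd.DataFrame({
        "communication_quality": rng.integers(1, 6, n),   # 1-5 Likert scale (synthetic)
        "policy_integration":    rng.integers(1, 6, n),
    })
    # Synthetic outcome loosely tied to the two predictors
    df["board_understanding"] = (
        0.5 * df["communication_quality"] + 0.3 * df["policy_integration"]
        + rng.normal(0, 1, n)
    )

    print(df.describe())  # descriptive statistics

    X = sm.add_constant(df[["communication_quality", "policy_integration"]])
    model = sm.OLS(df["board_understanding"], X).fit()
    print(model.summary())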

Relevance: 80.00%

Abstract:

Background: Previous experimental models suggest that vitamin E may ameliorate periodontitis. However, epidemiologic studies show inconsistent evidence supporting this plausible association. Objective: We aimed to investigate the association between serum α-tocopherol (αT) and γ-tocopherol (γT) and periodontitis in a large cross-sectional US population. Methods: This study included 4708 participants in the 1999–2001 NHANES. Serum tocopherols were measured by HPLC and values were adjusted by total cholesterol (TC). Periodontal status was assessed by mean clinical attachment loss (CAL) and probing pocket depth (PPD). Total periodontitis (TPD) was defined as the sum of mild, moderate, and severe periodontitis. All measurements were performed by NHANES. Results: Means ± SDs of serum αT:TC ratio from low to high quartiles were 4.0 ± 0.4, 4.8 ± 0.2, 5.7 ± 0.4, and 9.1 ± 2.7 μmol/mmol. In multivariate regression models, αT:TC quartiles were inversely associated with mean CAL (P-trend = 0.06), mean PPD (P-trend < 0.001), and TPD (P-trend < 0.001) overall. Adjusted mean differences (95% CIs) between the first and fourth quartile of αT:TC were 0.12 mm (0.03, 0.20; P-difference = 0.005) for mean CAL and 0.12 mm (0.06, 0.17; P < 0.001) for mean PPD, whereas the corresponding OR for TPD was 1.65 (95% CI: 1.26, 2.16; P-difference = 0.001). In a dose-response analysis, a clear inverse association between αT:TC and mean CAL, mean PPD, and TPD was observed among participants with relatively low αT:TC. No differences were seen in participants with higher αT:TC ratios. Participants with a γT:TC ratio in the interquartile range showed a significantly lower mean PPD than those in the highest quartile. Conclusions: A nonlinear inverse association was observed between serum αT and severity of periodontitis, which was restricted to adults with normal but relatively low αT status. These findings warrant further confirmation in longitudinal or intervention settings.
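
A minimal sketch of a quartile-based analysis of this kind on synthetic data: a cholesterol-adjusted tocopherol ratio is cut into quartiles and a periodontal outcome is regressed on the quartile indicators plus a covariate, with a simple ordinal-score P-trend. Variable names, covariates and values are illustrative assumptions, not the NHANES variables or the paper's adjustment set.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 4708  # sample size from the abstract; the values below are synthetic
    df = pd.DataFrame({
        "at_tc": rng.gamma(shape=20, scale=0.28, size=n),  # alpha-tocopherol : cholesterol ratio
        "age": rng.integers(20, 80, n),
    })
    # Synthetic mean probing pocket depth, weakly decreasing in at_tc
    df["mean_ppd"] = 1.8 - 0.02 * df["at_tc"] + 0.004 * df["age"] + rng.normal(0, 0.3, n)

    df["at_tc_q"] = pd.qcut(df["at_tc"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

    # Quartile contrasts (Q1 as reference) adjusted for age
    fit = smf.ols("mean_ppd ~ C(at_tc_q) + age", data=df).fit()
    print(fit.params)

    # Simple P-trend: treat the quartile as an ordinal score 1-4
    df["q_score"] = df["at_tc_q"].cat.codes + 1
    trend = smf.ols("mean_ppd ~ q_score + age", data=df).fit()
    print(f"P-trend: {trend.pvalues['q_score']:.3g}")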

Relevance: 80.00%

Abstract:

Previous experimental models suggest that vitamin E may ameliorate periodontitis. However, epidemiologic studies show inconsistent evidence supporting this plausible association. We aimed to investigate the association between serum α-tocopherol (αT) and γ-tocopherol (γT) and periodontitis in a large cross-sectional US population. This study included 4708 participants in the 1999–2001 NHANES. Serum tocopherols were measured by HPLC and values were adjusted by total cholesterol (TC). Periodontal status was assessed by mean clinical attachment loss (CAL) and probing pocket depth (PPD). Total periodontitis (TPD) was defined as the sum of mild, moderate, and severe periodontitis. All measurements were performed by NHANES. Means ± SDs of serum αT:TC ratio from low to high quartiles were 4.0 ± 0.4, 4.8 ± 0.2, 5.7 ± 0.4, and 9.1 ± 2.7 μmol/mmol. In multivariate regression models, αT:TC quartiles were inversely associated with mean CAL (P-trend = 0.06), mean PPD (P-trend < 0.001), and TPD (P-trend < 0.001) overall. Adjusted mean differences (95% CIs) between the first and fourth quartile of αT:TC were 0.12 mm (0.03, 0.20; P-difference = 0.005) for mean CAL and 0.12 mm (0.06, 0.17; P < 0.001) for mean PPD, whereas the corresponding OR for TPD was 1.65 (95% CI: 1.26, 2.16; P-difference = 0.001). In a dose-response analysis, a clear inverse association between αT:TC and mean CAL, mean PPD, and TPD was observed among participants with relatively low αT:TC. No differences were seen in participants with higher αT:TC ratios. Participants with a γT:TC ratio in the interquartile range showed a significantly lower mean PPD than those in the highest quartile. A nonlinear inverse association was observed between serum αT and severity of periodontitis, which was restricted to adults with normal but relatively low αT status. These findings warrant further confirmation in longitudinal or intervention settings.

Relevance: 40.00%

Abstract:

This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split into two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
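
A minimal sketch of the single-time-step emulation idea on the Lorenz 63 system: fit a Gaussian process to input-output pairs of one integration step, then iterate the emulator and compare it with the simulator. scikit-learn's GaussianProcessRegressor, the design size and the kernel are stand-ins for the MUCM emulation machinery, not the report's implementation.

    import numpy as np
    from scipy.integrate import solve_ivp
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def lorenz63(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    def step(state, dt=0.05):
        """One time step of the simulator."""
        sol = solve_ivp(lorenz63, (0, dt), state, rtol=1e-8, atol=1e-8)
        return sol.y[:, -1]

    # Design: random states in a box around the attractor, plus their one-step images
    rng = np.random.default_rng(4)
    X = rng.uniform([-20, -25, 0], [20, 25, 50], size=(200, 3))
    Y = np.array([step(s) for s in X])

    # One GP per output dimension (an assumption; the report also treats multi-output emulators)
    gps = [GaussianProcessRegressor(kernel=RBF(length_scale=10.0), normalize_y=True).fit(X, Y[:, d])
           for d in range(3)]

    def emulate_step(state):
        return np.array([gp.predict(state.reshape(1, -1))[0] for gp in gps])

    # Iterate the emulator and compare with the simulator over a short horizon
    s_sim = np.array([1.0, 1.0, 20.0])
    s_emu = s_sim.copy()
    for _ in range(10):
        s_sim = step(s_sim)
        s_emu = emulate_step(s_emu)
    print("simulator:", np.round(s_sim, 2), "emulator:", np.round(s_emu, 2))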

Relevance: 30.00%

Abstract:

It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
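
A minimal PyTorch sketch of the underlying idea of a density network for variance: a small network outputs a conditional mean and log-variance and is trained by maximum likelihood with a Gaussian negative log-likelihood. This is the single-component special case of a Mixture Density Network, not the Bayesian variant cited in the paper; the data and architecture are illustrative.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    # Synthetic heteroscedastic data: the noise level depends on the input
    x = torch.linspace(-3, 3, 500).unsqueeze(1)
    y = torch.sin(x) + torch.randn_like(x) * (0.1 + 0.3 * torch.sigmoid(x))

    class MeanVarNet(nn.Module):
        """Predicts the conditional mean and log-variance of y given x."""
        def __init__(self, hidden=32):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 2))

        def forward(self, inp):
            out = self.body(inp)
            return out[:, :1], out[:, 1:]  # mean, log-variance

    def gaussian_nll(mean, logvar, target):
        return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

    net = MeanVarNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for epoch in range(2000):
        mean, logvar = net(x)
        loss = gaussian_nll(mean, logvar, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        mean, logvar = net(x)
        print("estimated noise std at x=-3 and x=3:",
              logvar[0].exp().sqrt().item(), logvar[-1].exp().sqrt().item())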

Relevance: 30.00%

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. We first present a method using maximum likelihood estimation and propose a simple algorithm which is capable of identifying associations between variables. We also adopt an information-theoretic approach and develop a novel procedure for training HMMs to maximise the mutual information between delayed time series. Both methods are successfully applied to real data. We model the oil drilling process with HMMs and estimate a crucial parameter, namely the lag for return.
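
A simplified, model-free sketch of the information-theoretic idea: scan candidate delays and keep the one that maximises the estimated mutual information between the lagged series. It uses scikit-learn's nearest-neighbour MI estimator on synthetic data rather than the HMM training procedure developed in the paper.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(5)
    T, true_lag = 2000, 7
    x = rng.normal(size=T).cumsum()                    # a slowly varying driver series
    y = np.roll(x, true_lag) + rng.normal(0, 0.5, T)   # delayed, noisy response

    def mi_at_lag(x, y, d):
        """Mutual information between x[t] and y[t + d]."""
        return mutual_info_regression(x[:T - d].reshape(-1, 1), y[d:], random_state=0)[0]

    lags = range(0, 20)
    scores = [mi_at_lag(x, y, d) for d in lags]
    print("estimated lag:", lags[int(np.argmax(scores))])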

Relevance: 30.00%

Abstract:

This paper investigates whether equity market volatility in one major market is related to volatility elsewhere. The daily conditional volatility of market-wide equity returns is modelled as a GARCH(1,1) process, which captures the changing nature of the conditional variance through time. It is found that the correlation between the conditional variances of major equity markets has increased substantially over the last two decades. This supports work on conditional mean returns indicating an increase in equity market integration.
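
A minimal sketch of the mechanics: the GARCH(1,1) variance recursion is applied to two synthetic return series and the resulting conditional-variance paths are correlated. The parameters are assumed values for illustration, not estimates from equity market data as in the paper.

    import numpy as np

    def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.9):
        """Filter the GARCH(1,1) conditional variance: h[t] = omega + alpha*r[t-1]^2 + beta*h[t-1]."""
        h = np.empty_like(returns)
        h[0] = returns.var()
        for t in range(1, len(returns)):
            h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
        return h

    rng = np.random.default_rng(6)
    common = rng.normal(0, 0.01, 2500)        # shared shock linking the two markets
    r1 = common + rng.normal(0, 0.005, 2500)
    r2 = 0.8 * common + rng.normal(0, 0.006, 2500)

    h1, h2 = garch11_variance(r1), garch11_variance(r2)
    print(f"correlation of conditional variances: {np.corrcoef(h1, h2)[0, 1]:.3f}")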

Relevance: 30.00%

Abstract:

Amongst all the objectives in the study of time series, uncovering the dynamic law of their generation is probably the most important. When the underlying dynamics are not available, time series modelling consists of developing a model which best explains a sequence of observations. In this thesis, we consider hidden space models for analysing and describing time series. We first provide an introduction to the principal concepts of hidden state models and draw an analogy between hidden Markov models and state space models. Central ideas such as hidden state inference or parameter estimation are reviewed in detail. A key part of multivariate time series analysis is identifying the delay between different variables. We present a novel approach for time delay estimation in a non-stationary environment. The technique makes use of hidden Markov models and we demonstrate its application for estimating a crucial parameter in the oil industry. We then focus on hybrid models that we call dynamical local models. These models combine and generalise hidden Markov models and state space models. Probabilistic inference in these models is unfortunately computationally intractable, and we show how to make use of variational techniques for approximating the posterior distribution over the hidden state variables. Experimental simulations on synthetic and real-world data demonstrate the application of dynamical local models for segmenting a time series into regimes and providing predictive distributions.
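
A minimal sketch of hidden state inference in a linear-Gaussian state space model, one of the central ideas reviewed in the thesis: a scalar Kalman filter computes the posterior mean of the hidden state from noisy observations. The model coefficients and noise variances are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    T = 200
    A, C = 0.95, 1.0          # state transition and observation coefficients (scalar model)
    Q, R = 0.1, 0.5           # process and observation noise variances

    # Simulate the hidden state and noisy observations
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        x[t] = A * x[t - 1] + rng.normal(0, np.sqrt(Q))
        y[t] = C * x[t] + rng.normal(0, np.sqrt(R))

    # Kalman filter: recursively compute p(x_t | y_1..y_t)
    m, P = 0.0, 1.0
    means = np.zeros(T)
    for t in range(T):
        m_pred, P_pred = A * m, A * P * A + Q          # predict
        K = P_pred * C / (C * P_pred * C + R)          # Kalman gain
        m = m_pred + K * (y[t] - C * m_pred)           # update
        P = (1 - K * C) * P_pred
        means[t] = m

    print(f"RMSE of filtered mean vs true state: {np.sqrt(np.mean((means - x) ** 2)):.3f}")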

Relevance: 30.00%

Abstract:

The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistics (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistics (r2 and SEE) ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
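
A minimal sketch of the additive PLS idea on synthetic data: each peptide position is encoded as amino-acid indicator variables and a PLS regression is fitted to binding affinities, scored with leave-one-out q2. The iterative self-consistent step that handles variable peptide lengths is omitted, and the peptides and affinities are simulated, not taken from AntiJen.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(8)
    amino_acids = list("ACDEFGHIKLMNPQRSTVWY")
    n_peptides, length = 60, 9

    # Synthetic 9-mer peptides and affinities driven by position-specific contributions
    peptides = rng.choice(amino_acids, size=(n_peptides, length))

    def onehot(p):
        return np.concatenate([(np.array(amino_acids) == aa).astype(float) for aa in p])

    X = np.array([onehot(p) for p in peptides])          # n_peptides x (20 * length)
    true_w = rng.normal(0, 0.5, X.shape[1])
    y = X @ true_w + rng.normal(0, 0.2, n_peptides)      # stand-in for log binding affinity

    pls = PLSRegression(n_components=4)
    pred = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
    q2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"leave-one-out q2: {q2:.3f}")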

Relevance: 30.00%

Abstract:

This paper applies the vector AR-DCC-FIAPARCH model to the daily returns of eight national stock market indices from 1988 to 2010, taking into account the structural breaks in each time series linked to the Asian and the recent global financial crisis. We find significant cross effects, as well as long-range volatility dependence, an asymmetric volatility response to positive and negative shocks, and the power of returns that best fits the volatility pattern. One of the main findings of the model analysis is the higher dynamic correlations of the stock markets after a crisis event, which indicates increased contagion effects between the markets. The fact that the conditional correlations remain at a high level during a crisis points to continuous herding behaviour during these periods of increased market volatility. Finally, during the recent global financial crisis the correlations remained at a much higher level than during the Asian financial crisis.
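
A minimal sketch of the DCC correlation recursion at the heart of this model family: standardized residuals drive a quasi-correlation matrix Q_t, which is rescaled to a correlation matrix R_t. The DCC parameters are assumed values, the residuals are synthetic, and the FIAPARCH variance stage and structural-break handling are omitted.

    import numpy as np

    rng = np.random.default_rng(9)
    T, a, b = 1500, 0.05, 0.90      # assumed DCC parameters (a + b < 1)
    common = rng.normal(size=T)
    z = np.column_stack([common + rng.normal(size=T),
                         0.7 * common + rng.normal(size=T)])
    z = (z - z.mean(axis=0)) / z.std(axis=0)   # stand-in for GARCH-standardized residuals

    Qbar = np.corrcoef(z.T)         # unconditional correlation target
    Q = Qbar.copy()
    dyn_corr = np.empty(T)
    dyn_corr[0] = Qbar[0, 1]
    for t in range(1, T):
        zt = z[t - 1][:, None]
        Q = (1 - a - b) * Qbar + a * (zt @ zt.T) + b * Q   # DCC recursion
        d = np.sqrt(np.diag(Q))
        R = Q / np.outer(d, d)       # rescale to a valid correlation matrix
        dyn_corr[t] = R[0, 1]

    print(f"dynamic correlation: min {dyn_corr.min():.2f}, max {dyn_corr.max():.2f}")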