916 results for Multivariate volatility models
Abstract:
This paper investigates whether equity market volatility in one major market is related to volatility elsewhere. The daily conditional volatility of market-wide equity returns is modelled as a GARCH(1,1) process, which captures the changing nature of the conditional variance through time. It is found that the correlation between the conditional variances of major equity markets has increased substantially over the last two decades. This supports earlier work on conditional mean returns indicating an increase in equity market integration.
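For reference, a GARCH(1,1) specification for the conditional variance of the return innovation takes the standard textbook form below (generic notation, not necessarily the paper's exact parameterisation):

```latex
r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \quad z_t \sim \text{i.i.d.}(0,1),
\qquad
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
\qquad \omega > 0,\; \alpha,\beta \ge 0,\; \alpha + \beta < 1 .
```

A persistence $\alpha + \beta$ close to one is what allows the model to track slowly changing volatility levels of the kind compared across markets here.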
Abstract:
Amongst all the objectives in the study of time series, uncovering the dynamic law of its generation is probably the most important. When the underlying dynamics are not available, time series modelling consists of developing a model which best explains a sequence of observations. In this thesis, we consider hidden space models for analysing and describing time series. We first provide an introduction to the principal concepts of hidden state models and draw an analogy between hidden Markov models and state space models. Central ideas such as hidden state inference or parameter estimation are reviewed in detail. A key part of multivariate time series analysis is identifying the delay between different variables. We present a novel approach for time delay estimation in a non-stationary environment. The technique makes use of hidden Markov models and we demonstrate its application to estimating a crucial parameter in the oil industry. We then focus on hybrid models that we call dynamical local models. These models combine and generalise hidden Markov models and state space models. Exact probabilistic inference in these models is unfortunately computationally intractable, and we show how to make use of variational techniques to approximate the posterior distribution over the hidden state variables. Experimental simulations on synthetic and real-world data demonstrate the application of dynamical local models for segmenting a time series into regimes and providing predictive distributions.
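As a minimal sketch of hidden-state inference in a discrete-state HMM (the generic forward-filtering recursion only; the transition matrix, emission likelihoods and function name below are illustrative and not taken from the thesis):

```python
import numpy as np

def hmm_forward_filter(init, trans, emit_lik):
    """Forward filtering for a discrete-state HMM.

    init     : (K,) initial state probabilities
    trans    : (K, K) transition matrix, trans[i, j] = P(z_t = j | z_{t-1} = i)
    emit_lik : (T, K) emission likelihoods, emit_lik[t, k] = p(x_t | z_t = k)
    Returns filtered posteriors P(z_t | x_{1:t}) and the data log-likelihood.
    """
    T, K = emit_lik.shape
    alpha = np.zeros((T, K))
    loglik = 0.0
    pred = init
    for t in range(T):
        joint = pred * emit_lik[t]      # combine prediction with evidence at time t
        norm = joint.sum()
        alpha[t] = joint / norm         # filtered posterior at time t
        loglik += np.log(norm)
        pred = alpha[t] @ trans         # one-step-ahead prediction for t + 1
    return alpha, loglik

# Toy usage with a two-state chain and made-up emission likelihoods
init = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
emit_lik = np.array([[0.7, 0.1], [0.6, 0.2], [0.1, 0.8]])
posteriors, ll = hmm_forward_filter(init, trans, emit_lik)
print(posteriors, ll)
```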
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistics (q², SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistics r² and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
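As a rough illustration of PLS regression with leave-one-out validation of the kind used to compute q² (a hedged sketch using scikit-learn's PLSRegression; the indicator encoding of peptides, the data, and the number of components are made up and this is not the published ISC-PLS implementation):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

# Hypothetical data: each row encodes one peptide as 0/1 indicators of which
# amino acid occupies each position; y is a measured binding affinity.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 9 * 20)).astype(float)   # 9 positions x 20 amino acids
y = rng.normal(size=60)

# Leave-one-out cross-validation to compute a q2-style statistic
preds = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    model = PLSRegression(n_components=4)   # NC fixed by hand in this sketch
    model.fit(X[train], y[train])
    preds[test] = model.predict(X[test]).ravel()

press = np.sum((y - preds) ** 2)            # predictive residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)
q2 = 1.0 - press / ss_tot
print(f"LOO q2 = {q2:.3f}")
```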
Abstract:
This paper applies the vector AR-DCC-FIAPARCH model to the daily returns of eight national stock market indices from 1988 to 2010, taking into account the structural breaks of each time series linked to the Asian and the recent Global financial crisis. We find significant cross effects, as well as long-range volatility dependence and an asymmetric volatility response to positive and negative shocks, and we estimate the power of returns that best fits the volatility pattern. One of the main findings of the model analysis is the higher dynamic correlations of the stock markets after a crisis event, which implies increased contagion effects between the markets. The fact that the conditional correlations remain at a high level during a crisis indicates continuous herding behaviour during these periods of increased market volatility. Finally, during the recent Global financial crisis the correlations remain at a much higher level than during the Asian financial crisis.
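For context, the DCC component of such models typically updates the conditional correlation matrix through a recursion of the following standard DCC(1,1) form (a textbook statement, not the paper's exact vector AR-DCC-FIAPARCH specification):

```latex
Q_t = (1 - a - b)\,\bar{Q} + a\, u_{t-1} u_{t-1}^{\top} + b\, Q_{t-1},
\qquad
R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2},
```

where $u_t$ is the vector of standardized residuals, $\bar{Q}$ its unconditional covariance matrix, and $R_t$ the resulting time-varying correlation matrix whose post-crisis behaviour is examined in the paper.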
Abstract:
Our approach to knowledge representation is based on the idea of an expert system shell. First, we build a graph shell of both possible dependencies and possible actions. Then, reasoning by means of log-linear models, we activate some nodes and some directed links. In this way, a Bayesian network and networks representing log-linear models are generated.
Abstract:
2000 Mathematics Subject Classification: 62G08, 62P30.
Abstract:
The aim of this article is to give an overview of some of the main stages of the process set in motion by the papers of Black, Scholes and Merton on option pricing in the early 1970s, a process which revolutionised both the developed Western financial markets and financial theory. / === / This review article compares the development of financial theory within and outside Hungary in the last three decades, starting with the Black-Scholes revolution. Problems like the term structure of interest rate volatilities, which is the focus of much international research, have not received proper attention among Hungarian economists. The article gives an overview of no-arbitrage pricing, the partial differential equation approach and the related numerical techniques, such as lattice methods, for pricing financial derivatives. The relevant concepts of the martingale approach are reviewed. There is a special focus on the HJM framework for modelling the evolution of interest rates. The idea that volatility and correlation can be traded opens a new horizon for the Hungarian capital market.
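For reference, the Black-Scholes price of a European call, which underlies the no-arbitrage and PDE approaches reviewed in the article, has the standard textbook form (no dividends, constant volatility):

```latex
C(S,t) = S\,N(d_1) - K e^{-r(T-t)} N(d_2),
\qquad
d_1 = \frac{\ln(S/K) + \left(r + \tfrac{1}{2}\sigma^2\right)(T-t)}{\sigma\sqrt{T-t}},
\qquad
d_2 = d_1 - \sigma\sqrt{T-t},
```

where $S$ is the spot price, $K$ the strike, $r$ the risk-free rate, $\sigma$ the volatility and $N(\cdot)$ the standard normal distribution function.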
Abstract:
Individuals of Hispanic origin are the nation's largest minority (13.4%). Therefore, there is a need for models and methods that are culturally appropriate for mental health research with this burgeoning population. This is an especially salient issue when applying family systems theories to Hispanics, who are heavily influenced by family bonds in a way that appears to be different from the more individualistic non-Hispanic White culture. Bowen asserted that his family systems concept of differentiation of self, which values both individuality and connectedness, could be universally applied. However, there is a paucity of research systematically assessing the applicability of the differentiation of self construct in ethnic minority populations. This dissertation tested a multivariate model of differentiation of self with a Hispanic sample. The manner in which the construct of differentiation of self was being assessed and how accurately it represented this particular ethnic minority group's functioning was examined. Additionally, the proposed model included key contextual variables (e.g., anxiety, relationship satisfaction, attachment and acculturation related variables) which have been shown to be related to the differentiation process. The results from structural equation modeling (SEM) analyses confirmed and extended previous research, and helped to illuminate the complex relationships between key factors that need to be considered in order to better understand individuals with this cultural background. Overall, results indicated that the manner in which Hispanic individuals negotiate the boundaries of interconnectedness with a sense of individual expression appears to differ from their non-Hispanic White counterparts in some important ways. These findings illustrate the need for research on Hispanic individuals that provides a more culturally sensitive framework.
Abstract:
Key life history traits such as breeding time and clutch size are frequently both heritable and under directional selection, yet many studies fail to document micro-evolutionary responses. One general explanation is that selection estimates are biased by the omission of correlated traits that have causal effects on fitness, but few valid tests of this exist. Here we show, using a quantitative genetic framework and six decades of life-history data on two free-living populations of great tits Parus major, that selection estimates for egg-laying date and clutch size are relatively unbiased. Predicted responses to selection based on the Robertson-Price Identity were similar to those based on the multivariate breeder’s equation, indicating that unmeasured covarying traits were not missing from the analysis. Changing patterns of phenotypic selection on these traits (for laying date, linked to climate change) therefore reflect changing selection on breeding values, and genetic constraints appear not to limit their independent evolution. Quantitative genetic analysis of correlational data from pedigreed populations can be a valuable complement to experimental approaches to help identify whether apparent associations between traits and fitness are biased by missing traits, and to parse the roles of direct versus indirect selection across a range of environments.
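The two prediction approaches compared in this abstract can be written in their standard quantitative-genetics forms (textbook statements in generic notation, not reproduced from the paper):

```latex
% Multivariate breeder's equation: predicted response from G and P matrices
\Delta \bar{\mathbf{z}} = \mathbf{G}\,\boldsymbol{\beta} = \mathbf{G}\,\mathbf{P}^{-1}\mathbf{s},
\qquad
% Robertson-Price identity: response as additive genetic covariance with relative fitness
\Delta \bar{z} = \sigma_{A}(w, z),
```

where $\mathbf{G}$ and $\mathbf{P}$ are the additive genetic and phenotypic covariance matrices, $\mathbf{s}$ the vector of selection differentials, and $\sigma_{A}(w,z)$ the additive genetic covariance between relative fitness and the trait. Agreement between the two predictions is what indicates that the selection gradients are not strongly biased by unmeasured correlated traits.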
Abstract:
People go through their life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
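As background, dynamic discrete choice models of this kind compute choice probabilities from value functions defined by a logsumexp Bellman recursion; in a recursive route choice setting with i.i.d. extreme value errors this takes the familiar form (a standard formulation, not necessarily the exact specification used in the articles):

```latex
V(k) = \mu \ln \!\! \sum_{a \in A(k)} \!\! \exp\!\Big( \tfrac{1}{\mu}\big( v(a \mid k) + V(a) \big) \Big),
\qquad
P(a \mid k) = \frac{\exp\!\big( \tfrac{1}{\mu}( v(a \mid k) + V(a) )\big)}
                   {\sum_{a' \in A(k)} \exp\!\big( \tfrac{1}{\mu}( v(a' \mid k) + V(a') )\big)},
```

where $A(k)$ is the set of actions (outgoing links) available in state $k$, $v(a \mid k)$ the instantaneous utility and $\mu$ the scale. Solving this fixed point by dynamic programming is what makes estimation and prediction tractable on large networks.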
Abstract:
We estimate the monthly volatility of the US economy from 1968 to 2006 by extending the coincident index model of Stock and Watson (1991). Our volatility index, which we call VOLINX, has four applications. First, it sheds light on the Great Moderation. VOLINX captures the decrease in the volatility in the mid-80s as well as the different episodes of stress over the sample period. In the 70s and early 80s the stagflation and the two oil crises marked the pace of the volatility, whereas 09/11 is the most relevant shock after the moderation. Second, it helps to understand the economic indicators that cause volatility. While the main determinant of the coincident index is industrial production, VOLINX is mainly affected by employment and income. Third, it adapts the confidence bands of the forecasts. In- and out-of-sample evaluations show that the confidence bands may differ up to 50% with respect to a model with constant variance. Last, the methodology we use permits us to estimate monthly GDP, which has conditional volatility that is partly explained by VOLINX. These applications can be used by policy makers for monitoring and surveillance of the stress of the economy.
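For context, the Stock and Watson (1991) single-index model that the paper extends relates observed coincident indicators to a common latent factor roughly as follows (a schematic of the baseline model in generic notation; the volatility extension itself is not reproduced here):

```latex
\Delta y_{it} = \gamma_i \,\Delta c_t + e_{it},
\qquad
\phi(L)\,\Delta c_t = \delta + \eta_t,
\qquad
\psi_i(L)\, e_{it} = \varepsilon_{it},
```

where $\Delta y_{it}$ are the growth rates of the observed indicators, $\Delta c_t$ the growth of the latent coincident index, and the idiosyncratic and common disturbances follow autoregressive processes.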
Abstract:
This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are part of the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Overall, this work contributes to the literature in transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take into account the survey design. All methods are rigorously validated on both real and simulated data.
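For illustration, the kind of multivariate normal rectangle probability that the Genz algorithm approximates can be evaluated with SciPy, whose multivariate normal CDF uses a numerical integration scheme in the spirit of Genz's method (the correlation matrix and cut-offs below are made up, not taken from the dissertation):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 3-dimensional probit-style problem: probability that a trivariate
# normal vector falls below a vector of cut-offs, given a correlation matrix.
corr = np.array([
    [1.0, 0.4, 0.2],
    [0.4, 1.0, 0.3],
    [0.2, 0.3, 1.0],
])
cutoffs = np.array([0.5, -0.1, 1.2])

# The CDF value is a numerical approximation of the rectangle probability.
p = multivariate_normal(mean=np.zeros(3), cov=corr).cdf(cutoffs)
print(f"P(X1 < 0.5, X2 < -0.1, X3 < 1.2) = {p:.4f}")
```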
Abstract:
This paper forecasts the volatility of daily sugar price returns over the period from 1 June 2011 to 24 October 2013. The daily data used were sugar prices, ethanol prices, and the exchange rate of the Brazilian currency (the real) against the US dollar. Multivariate generalized autoregressive conditional heteroskedasticity models were used. From the sugar price forecasts, the minimum-variance hedge ratio is calculated. The results show that the hedge ratio is 0.37, which means that if a risk-averse producer, who intends to eliminate a share of the volatility of daily returns in the world sugar market, expects to sell 25 sugar contracts of 50.84 tonnes each (1,271 tonnes in total), the optimal number of contracts hedged with futures will be 9 and the number of unhedged (spot) contracts will be 16.
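The minimum-variance hedge ratio and the contract split reported above follow from the standard relations (textbook formulas in generic notation):

```latex
h^{*} = \frac{\operatorname{Cov}(\Delta S_t, \Delta F_t)}{\operatorname{Var}(\Delta F_t)} \approx 0.37,
\qquad
N_{\text{hedged}} = h^{*} \times 25 \approx 9,
\qquad
N_{\text{unhedged}} = 25 - 9 = 16,
```

where $\Delta S_t$ and $\Delta F_t$ are the changes in the spot and futures prices; 0.37 of the 25 contracts (9.25, rounded to 9) are covered with futures and the remainder is left unhedged.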