900 results for bivariate GARCH-M
Abstract:
A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations using the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125] for breakup and aggregation of particles. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The resulting quadrilateral bins are triangulated by straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid, but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing constant-mass lines at the boundaries. The results show that the predictions of particle size distribution on a fixed X-grid are in better agreement with the analytical solution than those obtained with earlier techniques. Simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process reduces the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort. (C) 2011 Elsevier Ltd. All rights reserved.
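The geometry of the X-grid can be illustrated with a short sketch (Python); the composition fractions and mass levels below are arbitrary placeholders, not the spacings used in the paper. Each pivot sits at the intersection of a constant-composition ray through the origin with a constant-mass line x1 + x2 = m of slope -1.

```python
# Illustrative sketch of an X-grid of pivots (placeholder spacings, not the
# paper's discretization): pivots are intersections of constant-composition
# radial lines x2/x1 = (1 - f)/f with constant-mass lines x1 + x2 = m.
import numpy as np

def x_grid_pivots(composition_fractions, mass_levels):
    """Return (x1, x2) pivot coordinates for the given rays and mass lines."""
    pivots = []
    for m in mass_levels:                 # constant-mass lines of slope -1
        for f in composition_fractions:   # mass fraction of constituent 1
            pivots.append((f * m, (1.0 - f) * m))
    return np.array(pivots)

if __name__ == "__main__":
    fractions = [0.1, 0.25, 0.5, 0.75, 0.9]   # arbitrarily spaced composition rays
    masses = [1.0, 2.0, 4.0, 8.0]             # constant-mass (boundary) lines
    print(x_grid_pivots(fractions, masses))
```

Adding or removing entries at either end of the mass-level list corresponds to the expansion and contraction of the computational domain described above.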
Abstract:
The solution of a bivariate population balance equation (PBE) for aggregation of particles requires a large 2-d domain to be covered. A correspondingly large number of discretized equations for particle populations on pivots (representative sizes for bins) are solved, although in the end only a relatively small number of pivots are found to participate in the evolution process. In the present work, we initiate the solution of the governing PBE on a small set of pivots that can represent the initial size distribution. New pivots are added to expand the computational domain in the directions in which the evolving size distribution advances. A self-sufficient set of rules is developed to automate the addition of pivots, taken from an underlying X-grid formed by the intersection of lines of constant composition and constant particle mass. To test the robustness of the rule-set, simulations carried out with pivot-wise expansion of the X-grid are compared with those obtained using sufficiently large fixed X-grids for a number of composition-independent and composition-dependent aggregation kernels and initial conditions. The two techniques lead to identical predictions, with the former requiring only a fraction of the computational effort. The rule-set automatically reduces aggregation of particles of the same composition to a 1-d problem. A midway change in the direction of expansion of the domain, effected by the addition of particles of a different mean composition, is captured correctly by the rule-set. The evolving shape of the computational domain carries with it the signature of the aggregation process, which can be insightful in complex and time-dependent aggregation conditions. (c) 2012 Elsevier Ltd. All rights reserved.
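A purely hypothetical illustration of the kind of rule such a rule-set may contain (not the authors' actual rules): when an aggregation event produces a particle whose total mass lies beyond the outermost constant-mass line, new mass levels are appended so that pivots exist to receive it.

```python
# Hypothetical sketch of pivot-wise domain expansion (not the paper's rule-set):
# append constant-mass lines until the newly formed aggregate mass is covered.
def expand_mass_levels(mass_levels, new_total_mass, ratio=2.0):
    """Grow the sorted list of mass levels until new_total_mass is covered.

    ratio is a placeholder spacing factor between successive boundary lines.
    """
    levels = sorted(mass_levels)
    while levels[-1] < new_total_mass:
        levels.append(levels[-1] * ratio)
    return levels

# Example: aggregating particles of total mass 6.0 and 3.5 gives a particle of
# mass 9.5, beyond the current outermost line at 8.0, so a line at 16.0 is added.
print(expand_mass_levels([1.0, 2.0, 4.0, 8.0], 6.0 + 3.5))
```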
Abstract:
The recent focus of flood frequency analysis (FFA) studies has been on the development of methods to model the joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data driven, flexible, and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared with the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and in extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
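For context, the conventional bivariate Gaussian kernel against which the D-kernel is benchmarked can be written in a few lines; the data below are synthetic placeholders standing in for (peak flow, volume) pairs, not the records analysed in the paper.

```python
# Conventional bivariate Gaussian KDE baseline (not the D-kernel itself).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
peak = rng.lognormal(mean=6.0, sigma=0.5, size=200)             # placeholder peak flows
volume = 50.0 * peak**0.8 * rng.lognormal(0.0, 0.2, size=200)   # correlated placeholder volumes

# gaussian_kde uses a normal-reference (Scott) bandwidth by default; this
# rule-of-thumb bandwidth and the kernel's unbounded support near the origin
# are among the shortcomings the D-kernel is designed to overcome.
kde = gaussian_kde(np.vstack([peak, volume]))
print(kde([[np.median(peak)], [np.median(volume)]]))   # joint density at the medians
```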
Abstract:
This paper models the mean and volatility spillovers of prices within the integrated Iberian electricity market and between the interconnected Spanish and French electricity markets. Using constant conditional correlation (CCC) and dynamic conditional correlation (DCC) bivariate models with three different specifications of the univariate variance processes, we study the extent to which increasing interconnection and harmonization in regulation have favoured price convergence. The data consist of daily prices calculated as the arithmetic mean of the hourly prices over the period from July 1st, 2007 to February 29th, 2012. The DCC model in which the variances of the univariate processes are specified as a VARMA(1,1) fits the data best for the integrated MIBEL, whereas a CCC model with a GARCH(1,1) specification for the univariate variance processes is selected to model the price series in Spain and France. Results show that there are significant mean and volatility spillovers in the MIBEL, indicating strong interdependence between the two markets, while there is weaker evidence of integration between the Spanish and French markets. We provide new evidence that the EU target of achieving a single electricity market largely depends on increasing trade between countries and on homogeneous rules of market functioning.
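For reference, the bivariate conditional-correlation structure referred to above takes the following general form (a sketch in our own notation, with the GARCH(1,1) univariate variances shown as one of the specifications considered):

```latex
% Sketch of bivariate CCC/DCC conditional-correlation models for the price
% series r_t = (r_{1t}, r_{2t})'; H_t is the conditional covariance matrix.
\begin{align*}
  r_t \mid \mathcal{F}_{t-1} &\sim (\mu_t,\, H_t), \qquad
  H_t = D_t R_t D_t, \qquad
  D_t = \operatorname{diag}\!\big(h_{1t}^{1/2},\, h_{2t}^{1/2}\big),\\
  h_{it} &= \omega_i + \alpha_i \varepsilon_{i,t-1}^{2} + \beta_i h_{i,t-1}
  \qquad \text{(univariate GARCH(1,1) variances)},\\
  \text{CCC:}\;\; R_t &= R \;\;\text{(constant correlation matrix)},\\
  \text{DCC:}\;\; Q_t &= (1 - a - b)\,\bar{Q} + a\, z_{t-1} z_{t-1}' + b\, Q_{t-1},
  \qquad R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2},
\end{align*}
% where z_t = D_t^{-1}(r_t - \mu_t) are the standardized residuals and
% \bar{Q} is their unconditional covariance.
```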
Abstract:
The Logit-Logistic (LL), Johnson's SB, and the Beta (GBD) are flexible four-parameter probability distribution models in terms of the skewness-kurtosis region covered, and each has been used for modeling tree diameter distributions in forest stands. This article compares bivariate forms of these models in terms of their adequacy in representing empirical diameter-height distributions from 102 sample plots. Four bivariate models are compared: SBB, the natural, well-known, and much-used bivariate generalization of SB, and the bivariate distributions with LL, SB, and Beta as marginals, constructed using Plackett's method (LL-2P, SB-2P, and GBD-2P). All models are fitted using maximum likelihood, and their goodness of fit is compared using the minus log-likelihood (equivalent to Akaike's Information Criterion, the AIC). The performance ranking in this case study was SBB, LL-2P, GBD-2P, and SB-2P.
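Plackett's method couples any two marginal CDFs, here F(d) for diameter and G(h) for height, through a single association parameter; a sketch of the construction in our notation:

```latex
% Plackett's bivariate construction for marginal CDFs F (diameter) and G (height),
% with association parameter \psi > 0 (\psi = 1 recovers independence).
\[
  H_{\psi}(d, h) =
  \begin{cases}
    \dfrac{S - \sqrt{S^{2} - 4\,\psi(\psi - 1)\, F(d)\, G(h)}}{2(\psi - 1)},
      & \psi \neq 1,\\[1.4ex]
    F(d)\, G(h), & \psi = 1,
  \end{cases}
  \qquad S = 1 + (\psi - 1)\big(F(d) + G(h)\big).
\]
```

Substituting the Logit-Logistic, SB, or Beta CDF for the marginals yields the LL-2P, SB-2P, and GBD-2P models compared above.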
Abstract:
Background: Heckman-type selection models have been used to control HIV prevalence estimates for selection bias when participation in HIV testing and HIV status are associated after controlling for observed variables. These models typically rely on the strong assumption that the error terms in the participation and the outcome equations that comprise the model are distributed as bivariate normal.
Methods: We introduce a novel approach for relaxing the bivariate normality assumption in selection models using copula functions. We apply this method to estimate HIV prevalence, with new confidence intervals (CIs), in the 2007 Zambia Demographic and Health Survey (DHS), using interviewer identity as the selection variable that predicts participation (consent to test) but not the outcome (HIV status).
Results: We show in a simulation study that selection models can generate biased results when the bivariate normality assumption is violated. In the 2007 Zambia DHS, HIV prevalence estimates are similar irrespective of the structure of the association assumed between participation and outcome. For men, we estimate a population HIV prevalence of 21% (95% CI = 16%–25%) compared with 12% (11%–13%) among those who consented to be tested; for women, the corresponding figures are 19% (13%–24%) and 16% (15%–17%).
Conclusions: Copula approaches to Heckman-type selection models are a useful addition to the methodological toolkit of HIV epidemiology and of epidemiology in general. We develop the use of this approach to systematically evaluate the robustness of HIV prevalence estimates based on selection models, both empirically and in a simulation study.
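Schematically, the relaxation replaces the bivariate normal joint distribution of the participation and outcome errors with a copula-based joint that keeps the marginals but frees the dependence structure (a sketch in our notation; the copula may be Frank, Clayton, Gaussian, etc.):

```latex
% Standard Heckman-type selection model: the participation error u and the
% outcome error e are assumed jointly normal with correlation \rho,
%   \Pr(u \le a,\ e \le b) = \Phi_2(a, b; \rho).
% Copula relaxation: keep marginal CDFs F_u and F_e but let a copula C_\theta
% carry the dependence between participation (consent to test) and outcome
% (HIV status):
\[
  \Pr(u \le a,\ e \le b) \;=\; C_{\theta}\!\big(F_u(a),\, F_e(b)\big),
\]
% with participation modelled as s = \mathbf{1}\{z'\gamma + u > 0\}, where
% interviewer identity enters z as the selection variable, and the outcome
% is observed only when s = 1.
```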
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's Double Degree in Finance from the NOVA School of Business and Economics / Master's Degree in Economics from Insper.
Abstract:
Research report
Abstract:
The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is done in the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models, and continuous-time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous-time GARCH modeling can be recovered in our framework.
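The contrast at the heart of the paper can be sketched as follows (our notation): a GARCH(1,1) conditional variance is an exact function of its own past and of past squared innovations, whereas the SR-SARV specification only requires the conditional variance, taken with respect to a possibly larger information set, to follow AR(1)-type dynamics, leaving room for an innovation of its own.

```latex
% GARCH(1,1): the conditional variance is measurable w.r.t. past returns and is
% tied deterministically to past squared innovations,
\[
  \sigma_t^{2} \;=\; \operatorname{Var}\!\big(\varepsilon_{t+1} \mid \varepsilon_t, \varepsilon_{t-1}, \dots\big)
  \;=\; \omega + \alpha\, \varepsilon_t^{2} + \beta\, \sigma_{t-1}^{2}.
\]
% SR-SARV(1) (sketch): \varepsilon_{t+1} is a martingale difference w.r.t. an
% enlarged filtration J_t (possibly containing latent state variables), and its
% conditional variance f_t = \operatorname{Var}(\varepsilon_{t+1} \mid J_t) is
% only required to satisfy AR(1)-type dynamics,
\[
  \operatorname{E}\!\big(f_{t+1} \mid J_t\big) \;=\; \omega + \gamma\, f_t,
  \qquad 0 \le \gamma < 1,
\]
% a property that survives reduction of the information set and temporal
% aggregation, unlike the exact GARCH(1,1) recursion.
```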