9 results for Bivariate BEKK-GARCH

at Indian Institute of Science - Bangalore - India


Relevance:

20.00%

Publisher:

Abstract:

A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations using the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125] for breakup and aggregation of particles. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant composition radial lines and constant mass lines of slope -1. The quadrilaterals are triangulated by using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing the constant mass lines at the boundaries. The results show that the predictions of particle size distribution on a fixed X-grid are in better agreement with the analytical solution than those obtained with the earlier techniques. The simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process brings down the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort. (C) 2011 Elsevier Ltd. All rights reserved.
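
As a rough illustration of the grid construction described above (a sketch, not the authors' implementation), the snippet below generates the nodes of an X-grid as the intersections of constant-composition radial lines with constant-mass lines of slope -1 in the plane of the two particle constituents; the composition and mass levels used here are hypothetical.

```python
import numpy as np

# Hypothetical discretization levels (not taken from the paper):
# arbitrarily spaced constant-composition radial lines and
# geometrically spaced constant-mass lines of slope -1.
compositions = np.array([0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0])   # x1 / (x1 + x2)
masses = np.array([2.0**k for k in range(6)])                   # x1 + x2

def x_grid_nodes(compositions, masses):
    """Return the (x1, x2) coordinates of the X-grid nodes, i.e. the
    intersections of the radial composition lines with the constant-mass
    lines of slope -1."""
    nodes = []
    for m in masses:
        for c in compositions:
            nodes.append((c * m, (1.0 - c) * m))   # x1 = c*m, x2 = (1-c)*m
    return np.array(nodes)

nodes = x_grid_nodes(compositions, masses)
print(nodes.shape)   # (len(masses) * len(compositions), 2)
```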

Relevance:

20.00%

Publisher:

Abstract:

The solution of a bivariate population balance equation (PBE) for aggregation of particles necessitates a large 2-d domain to be covered. A correspondingly large number of discretized equations for particle populations on pivots (representative sizes for bins) are solved, although at the end only a relatively small number of pivots are found to participate in the evolution process. In the present work, we initiate the solution of the governing PBE on a small set of pivots that can represent the initial size distribution. New pivots are added to expand the computational domain in the directions in which the evolving size distribution advances. A self-sufficient set of rules is developed to automate the addition of pivots, taken from an underlying X-grid formed by the intersection of the lines of constant composition and constant particle mass. In order to test the robustness of the rule-set, simulations carried out with pivotwise expansion of the X-grid are compared with those obtained using sufficiently large fixed X-grids for a number of composition-independent and composition-dependent aggregation kernels and initial conditions. The two techniques lead to identical predictions, with the former requiring only a fraction of the computational effort. The rule-set automatically reduces aggregation of particles of the same composition to a 1-d problem. A midway change in the direction of expansion of the domain, effected by the addition of particles of different mean composition, is captured correctly by the rule-set. The evolving shape of a computational domain carries with it the signature of the aggregation process, which can be insightful in complex and time-dependent aggregation conditions. (c) 2012 Elsevier Ltd. All rights reserved.
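
The rule-set itself is not reproduced in the abstract; the following is only a loose, hypothetical sketch of the general idea of activating pivots of an underlying X-grid on demand, so that the computational domain expands in the direction in which the size distribution advances. All names and grid levels are illustrative.

```python
import numpy as np

# Underlying X-grid levels from which new pivots may be drawn (hypothetical values):
# composition fractions crossed with geometrically spaced mass levels.
compositions = np.linspace(0.0, 1.0, 11)
masses = np.array([2.0**k for k in range(12)])

# Start with a small set of active pivots covering the initial size distribution.
active = {(i, j) for i in range(3) for j in range(len(compositions))}

def ensure_pivot(mass, comp, active):
    """Illustrative stand-in for the rule-set: if an aggregation event produces a
    particle outside the active pivots, activate the nearest pivot of the
    underlying X-grid so the computational domain expands in that direction."""
    i = int(np.searchsorted(masses, mass))
    j = int(np.argmin(np.abs(compositions - comp)))
    if (i, j) not in active:
        active.add((i, j))
    return i, j

# Example: an aggregate of total mass 9.5 and composition 0.35 triggers expansion.
print(ensure_pivot(9.5, 0.35, active), len(active))
```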

Relevance:

20.00%

Publisher:

Abstract:

The recent focus of flood frequency analysis (FFA) studies has been on the development of methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion process based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data driven, flexible, and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
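
The D-kernel construction is not detailed in the abstract; as a point of reference, the sketch below shows the conventional bivariate Gaussian kernel density estimate that the paper uses as a comparison baseline, applied to a synthetic stand-in for peak flow and volume data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical flood-event sample: correlated peak flow (m^3/s) and volume (10^6 m^3).
peak = rng.lognormal(mean=6.0, sigma=0.5, size=200)
volume = 0.05 * peak * rng.lognormal(mean=0.0, sigma=0.3, size=200)

# Conventional bivariate Gaussian KDE (the baseline the D-kernel is compared against).
kde = gaussian_kde(np.vstack([peak, volume]))

# Evaluate the joint density on a grid, e.g. for contour plotting.
p_grid, v_grid = np.meshgrid(np.linspace(peak.min(), peak.max(), 100),
                             np.linspace(volume.min(), volume.max(), 100))
density = kde(np.vstack([p_grid.ravel(), v_grid.ravel()])).reshape(p_grid.shape)
```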

Relevance:

10.00%

Publisher:

Abstract:

We compare two popular methods for estimating the power spectrum from short data windows, namely the adaptive multivariate autoregressive (AMVAR) method and the multitaper method. By analyzing a simulated signal (embedded in a background Ornstein-Uhlenbeck noise process) we demonstrate that the AMVAR method performs better at detecting short bursts of oscillations compared to the multitaper method. However, both methods are immune to jitter in the temporal location of the signal. We also show that coherence can still be detected in noisy bivariate time series data by the AMVAR method even if the individual power spectra fail to show any peaks. Finally, using data from two monkeys performing a visuomotor pattern discrimination task, we demonstrate that the AMVAR method is better able to determine the termination of the beta oscillations when compared to the multitaper method.
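
For context, the sketch below shows a bare-bones multitaper power spectrum estimate (averaging periodograms over DPSS tapers) applied to a hypothetical short oscillatory burst embedded in AR(1) noise, a discrete stand-in for the Ornstein-Uhlenbeck background mentioned above; the AMVAR fitting itself is not reproduced here.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.signal.windows import dpss

def multitaper_psd(x, fs=1.0, nw=4.0, k=7):
    """Average the periodograms obtained with k orthogonal DPSS (Slepian)
    tapers of time-bandwidth product nw."""
    n = len(x)
    tapers = dpss(n, nw, k)                                # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return np.fft.rfftfreq(n, d=1.0 / fs), spectra.mean(axis=0) / fs

# Hypothetical test signal: a 300 ms burst of 20 Hz oscillation in AR(1) noise.
fs, n = 200.0, 400
t = np.arange(n) / fs
rng = np.random.default_rng(1)
noise = lfilter([1.0], [1.0, -0.9], rng.standard_normal(n))   # AR(1) background
burst = np.where((t > 1.0) & (t < 1.3), np.sin(2 * np.pi * 20.0 * t), 0.0)
freqs, psd = multitaper_psd(noise + burst, fs=fs)
```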

Relevance:

10.00%

Publisher:

Abstract:

Gene expression in living systems is inherently stochastic, and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t -> infinity. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency (and the noise level) is high.
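
A minimal stochastic-simulation sketch of the two-stage kinetic model referred to above (transcription of mRNA, translation to protein, and first-order degradation of both species) is given below; the rate constants are hypothetical, and the translational efficiency corresponds to the ratio of the translation rate to the mRNA degradation rate.

```python
import numpy as np

def two_stage_ssa(k_r=1.0, k_p=5.0, g_r=1.0, g_p=0.1, t_end=200.0, seed=0):
    """Gillespie simulation of the two-stage gene-expression model:
    transcription, translation, and first-order degradation of mRNA (r)
    and protein (p). Rate constants here are hypothetical."""
    rng = np.random.default_rng(seed)
    t, r, p = 0.0, 0, 0
    while t < t_end:
        rates = np.array([k_r, k_p * r, g_r * r, g_p * p])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:   r += 1          # transcription: produce one mRNA
        elif event == 1: p += 1          # translation: produce one protein
        elif event == 2: r -= 1          # mRNA degradation
        else:            p -= 1          # protein degradation
    return r, p

# Sample the steady-state protein number over repeated runs.
samples = [two_stage_ssa(seed=s)[1] for s in range(100)]
```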

Relevance:

10.00%

Publisher:

Abstract:

Most of the signals recorded in experiments are inevitably contaminated by measurement noise. Hence, it is important to understand the effect of such noise on estimating causal relations between such signals. A primary tool for estimating causality is Granger causality. Granger causality can be computed by modeling the signals jointly using a bivariate autoregressive (AR) process. In this paper, we greatly extend the previous analysis of the effect of noise by considering a bivariate AR process of general order p. From this analysis, we analytically obtain the dependence of Granger causality on various noise-dependent system parameters. In particular, we show that measurement noise can lead to spurious Granger causality and can suppress true Granger causality. These results are verified numerically. Finally, we show how true causality can be recovered numerically using the Kalman expectation-maximization (EM) algorithm.
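
A small numerical sketch of the effect described above, using a hypothetical bivariate AR(1) system in which x drives y, is given below; Granger causality is computed as the log ratio of restricted to full residual variances from ordinary least-squares AR fits, and the Kalman EM recovery step is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical bivariate AR(1): x drives y, y does not drive x.
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t-1] + rng.standard_normal()
    y[t] = 0.5 * y[t-1] + 0.4 * x[t-1] + rng.standard_normal()

def granger_xy(x, y, p=1):
    """Granger causality from x to y with an order-p AR model:
    GC = ln(var(restricted residual) / var(full residual))."""
    past = lambda z: np.column_stack([z[p-k-1:len(z)-k-1] for k in range(p)])
    Y = y[p:]
    restricted = np.column_stack([np.ones(len(Y)), past(y)])
    full = np.column_stack([restricted, past(x)])
    res_r = Y - restricted @ np.linalg.lstsq(restricted, Y, rcond=None)[0]
    res_f = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

for sigma in (0.0, 1.0, 3.0):            # increasing measurement-noise level
    xn = x + sigma * rng.standard_normal(n)
    yn = y + sigma * rng.standard_normal(n)
    print(sigma, granger_xy(xn, yn), granger_xy(yn, xn))
```

With increasing noise level sigma, the estimated x-to-y causality is typically suppressed relative to its noise-free value, consistent with the effect the paper analyzes.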

Relevance:

10.00%

Publisher:

Abstract:

In contemporary orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE), LTE-Advanced, and WiMAX, a codeword is transmitted over a group of subcarriers. Since different subcarriers see different channel gains in frequency-selective channels, the modulation and coding scheme (MCS) of the codeword must be selected based on the vector of signal-to-noise-ratios (SNRs) of these subcarriers. Exponential effective SNR mapping (EESM) maps the vector of SNRs into an equivalent flat-fading SNR, and is widely used to simplify this problem. We develop a new analytical framework to characterize the throughput of EESM-based rate adaptation in such wideband channels in the presence of feedback delays. We derive a novel accurate approximation for the throughput as a function of feedback delay. We also propose a novel bivariate gamma distribution to model the time evolution of EESM between the times of estimation and data transmission, which facilitates the analysis. These are then generalized to a multi-cell, multi-user scenario with various frequency-domain schedulers. Unlike prior work, most of which is simulation-based, our framework encompasses both correlated and independent subcarriers and various multiple antenna diversity modes; it is accurate over a wide range of delays.
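
The EESM mapping itself has a simple closed form: the per-subcarrier SNRs (in linear scale) are averaged through a decaying exponential and mapped back, with a calibration factor beta that in practice is tuned per MCS. A short sketch with hypothetical inputs:

```python
import numpy as np

def eesm(snr_db, beta):
    """Exponential effective SNR mapping: collapse a vector of per-subcarrier
    SNRs into one equivalent flat-fading SNR.
    SNR_eff = -beta * ln( mean_i exp(-SNR_i / beta) ), with SNRs in linear scale."""
    snr_lin = 10.0 ** (np.asarray(snr_db) / 10.0)
    snr_eff = -beta * np.log(np.mean(np.exp(-snr_lin / beta)))
    return 10.0 * np.log10(snr_eff)

# Hypothetical per-subcarrier SNRs (dB) of a codeword in a frequency-selective
# channel, and a hypothetical MCS-specific calibration factor beta.
print(eesm([3.0, 7.5, 12.0, 1.0, 9.0], beta=4.0))
```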

Relevance:

10.00%

Publisher:

Abstract:

This work sets forth a `hybrid' discretization scheme utilizing bivariate simplex splines as kernels in a polynomial reproducing scheme constructed over a conventional Finite Element Method (FEM)-like domain discretization based on Delaunay triangulation. Careful construction of the simplex spline knotset ensures the success of the polynomial reproduction procedure at all points in the domain of interest, a significant advancement over its precursor, the DMS-FEM. The shape functions in the proposed method inherit the global continuity (C^(p-1)) and local supports of the simplex splines of degree p. In the proposed scheme, the triangles comprising the domain discretization also serve as background cells for numerical integration, which here are nearly aligned to the supports of the shape functions (and their intersections), thus considerably ameliorating an oft-cited source of inaccuracy in the numerical integration of mesh-free (MF) schemes. Numerical experiments show that the proposed method requires lower-order quadrature rules for accurate evaluation of integrals in the Galerkin weak form. Numerical demonstrations of optimal convergence rates for a few test cases are given, and the method is also implemented to compute crack-tip fields in a gradient-enhanced elasticity model.
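
Only the first ingredient of the scheme, the FEM-like domain discretization by Delaunay triangulation, lends itself to a short sketch; the simplex-spline knotset construction and polynomial reproduction steps are beyond the scope of a snippet. The points below are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical scattered nodes over a unit-square domain of interest.
rng = np.random.default_rng(4)
points = rng.random((60, 2))

# FEM-like domain discretization by Delaunay triangulation; in the proposed
# scheme these triangles also act as background cells for Galerkin integration.
tri = Delaunay(points)
print(tri.simplices.shape)   # (n_triangles, 3) vertex indices
```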

Relevance:

10.00%

Publisher:

Abstract:

Interannual variation of Indian summer monsoon rainfall (ISMR) is linked to the El Niño-Southern Oscillation (ENSO) as well as the Equatorial Indian Ocean Oscillation (EQUINOO), with the link with the seasonal value of the ENSO index being stronger than that with the EQUINOO index. We show that the variation of a composite index, determined through bivariate analysis, explains 54% of the ISMR variance, suggesting a strong dependence of the skill of monsoon prediction on the skill of prediction of ENSO and EQUINOO. We explored the possibility of prediction of the Indian rainfall during the summer monsoon season on the basis of prior values of the indices. We find that such predictions are possible for July-September rainfall on the basis of June indices and for August-September rainfall based on the July indices. This will be a useful input for second- and later-stage forecasts made after the commencement of the monsoon season.
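
As an illustration of the bivariate analysis mentioned above, the sketch below forms a composite index as the least-squares linear combination of ENSO and EQUINOO indices and reports the fraction of ISMR variance it explains; the data here are synthetic stand-ins, so the 54% figure from the observed record is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n_years = 50

# Synthetic stand-ins for the seasonal ENSO index, EQUINOO index, and ISMR anomaly.
enso = rng.standard_normal(n_years)
equinoo = rng.standard_normal(n_years)
ismr = 0.6 * enso + 0.3 * equinoo + 0.5 * rng.standard_normal(n_years)

# Bivariate linear regression: the fitted linear combination of the two indices
# serves as the composite index.
X = np.column_stack([np.ones(n_years), enso, equinoo])
coef, *_ = np.linalg.lstsq(X, ismr, rcond=None)
composite = X @ coef

# Fraction of ISMR variance explained by the composite index (R^2).
r2 = 1.0 - np.var(ismr - composite) / np.var(ismr)
print(coef, r2)
```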