989 results for Covariance matrix decomposition
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU decomposition and is suitable for simulations of any size. It can also facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach presented here demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The approach is termed conditional simulation by successive residuals because, at each step, a small set (group) of random variables is simulated with an LU decomposition of a matrix of updated conditional covariances of residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
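As a point of reference for the classical method whose size limits the paper addresses, here is a minimal numpy sketch of standard LU (Cholesky) simulation of a Gaussian random field, on an assumed 1-D grid with an assumed exponential covariance model; this is the baseline technique, not the paper's successive-residuals algorithm:

```python
import numpy as np

# Hypothetical 1-D grid and exponential covariance model (assumed for
# illustration; the paper's method targets much larger matrices).
n = 50
x = np.arange(n, dtype=float)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)  # covariance matrix

# Classical LU (Cholesky) simulation: y = L z reproduces covariance C.
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # small jitter for stability
rng = np.random.default_rng(0)
z = rng.standard_normal(n)
y = L @ z  # one unconditional realization on the grid

# Empirical check over many realizations: sample covariance approaches C.
Y = L @ rng.standard_normal((n, 20000))
emp = np.cov(Y)
print(float(np.abs(emp - C).max()))  # small for large sample sizes
```

The size limitation the paper addresses is visible here: the full n-by-n factorization is the bottleneck, which the successive-residuals partitioning avoids by factorizing only small conditional covariance blocks.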
Abstract:
Unraveling the effect of selection vs. drift on the evolution of quantitative traits is commonly achieved by one of two methods. Either one contrasts population differentiation estimates for genetic markers and quantitative traits (the Qst-Fst contrast), or multivariate methods are used to study the covariance between sets of traits. In particular, many studies have focused on the genetic variance-covariance matrix (the G matrix). However, both drift and selection can cause changes in G. To understand their joint effects, we recently combined the two methods into a single test (accompanying article by Martin et al.), which we apply here to a network of 16 natural populations of the freshwater snail Galba truncatula. Using this new neutrality test, extended to hierarchical population structures, we studied the multivariate equivalent of the Qst-Fst contrast for several life-history traits of G. truncatula. We found strong evidence of selection acting on multivariate phenotypes. Selection was homogeneous among populations within each habitat and heterogeneous between habitats. We found that the G matrices were relatively stable within each habitat, with proportionality between the among-populations (D) and the within-populations (G) covariance matrices. The effect of habitat heterogeneity is to break this proportionality because of selection for habitat-dependent optima. Individual-based simulations mimicking our empirical system confirmed that these patterns are expected under the inferred selective regime. We show that homogenizing selection can mimic some effects of drift on the G matrix (G and D almost proportional), but that incorporating information from molecular markers (multivariate Qst-Fst) allows the two effects to be disentangled.
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random effects covariance matrix and the true correlation pattern of residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Setting aside other criteria, such as the convenience of avoiding over-parameterized models, it appears worse to erroneously assume some structure than to assume no structure when the latter would be adequate.
Abstract:
This paper decomposes the term structure of interest rates of US and Colombian sovereign bonds. A four-factor affine model is used, in which the first factor is a return-forecasting factor and the remaining three are the first three principal components of the variance-covariance matrix of interest rates. For the decomposition of Colombian interest rates, the US forecasting factor is used to capture spillover effects. We conclude that US rates do not affect the level of Colombian rates, but they do influence the expected excess returns of bonds, and there are also effects on the local factors, although the determining factor in the dynamics of local rates is the "level". The decomposition yields the expectations of the short rate and the term premium. In this regard, we observe that the value of the term premium and its volatility increase with maturity, and that this value has been decreasing over time.
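The principal-components step described above can be sketched in a few lines on synthetic data (not the paper's series; maturities, factor scales, and sample size are all assumed for illustration). The dominant eigenvector of the rate covariance matrix plays the role of the "level" factor:

```python
import numpy as np

# Synthetic sketch: principal components of a variance-covariance matrix of
# interest rates. Maturities and factor dynamics are assumed, not the paper's.
rng = np.random.default_rng(1)
maturities = np.array([1, 2, 5, 10, 30], dtype=float)  # years (assumed)

# Simulate yields driven by a common random-walk "level" plus a smaller
# maturity-dependent "slope" and idiosyncratic noise.
T = 5000
level = np.cumsum(rng.standard_normal(T)) * 0.01
slope = np.cumsum(rng.standard_normal(T)) * 0.005
yields = (level[:, None]
          + slope[:, None] * (maturities / 30.0)[None, :]
          + 0.001 * rng.standard_normal((T, len(maturities))))

cov = np.cov(yields, rowvar=False)         # variance-covariance matrix of rates
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
explained = eigvals[::-1] / eigvals.sum()  # variance share per component
print(explained)  # the first ("level") component dominates
```

In practice the first three components of such a matrix are the familiar level, slope, and curvature factors, which is why the affine model above uses them as state variables.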
Abstract:
The study of the genetic variance/covariance matrix (G-matrix) is a recent and fruitful approach in evolutionary biology, providing a window for investigating the evolution of complex characters. Although G-matrix studies were originally conducted on microevolutionary timescales, they can be extrapolated to macroevolution as long as the G-matrix remains relatively constant, or proportional, over the period of interest. A promising approach to investigating the constancy of G-matrices is to compare their phenotypic counterparts (P-matrices) in a large group of related species; if significant similarity is found among several taxa, it is very likely that the underlying G-matrices are also equivalent. Here we study the similarity of covariance and correlation structure in a broad sample of Old World monkeys and apes (Catarrhini). We made phylogenetically structured comparisons of correlation and covariance matrices derived from 39 skull traits, ranging from between species to the superfamily level. We also compared the overall magnitude of integration between skull traits (r²) for all catarrhine genera. Our results show that P-matrices were not strictly constant among catarrhines, but the amount of divergence observed among taxa was generally low. There was a significant and positive correlation between the amount of divergence in correlation and covariance patterns among the 30 genera and their phylogenetic distances derived from a recently proposed phylogenetic hypothesis. Our data demonstrate that the P-matrices remained relatively similar along the evolutionary history of catarrhines, and comparisons with the G-matrix available for a New World monkey genus (Saguinus) suggest that the same holds for all anthropoids.
The magnitude of integration, in contrast, varied considerably among genera, indicating that evolution of the magnitude, rather than the pattern of inter-trait correlations, might have played an important role in the diversification of the catarrhine skull.
Abstract:
The purpose of this paper was to evaluate attributes derived from fully polarimetric PALSAR data to discriminate and map macrophyte species in the Amazon floodplain wetlands. Fieldwork was carried out almost simultaneously with the radar acquisition, and macrophyte biomass and morphological variables were measured in the field. Attributes were calculated from the covariance matrix [C] derived from the single-look complex data. Image attributes and macrophyte variables were compared and analyzed to investigate the sensitivity of the attributes for discriminating among species. Based on these analyses, a rule-based classification was applied to map macrophyte species. Other classification approaches were tested and compared to the rule-based method: a classification based on the Freeman-Durden and Cloude-Pottier decomposition models, a hybrid classification (Wishart classifier with the input classes based on the H/α plane), and a statistical classification (supervised classification using Wishart distance measures). The findings show that attributes derived from fully polarimetric L-band data have good potential for discriminating herbaceous plant species based on morphology and that estimation of plant biomass and productivity could be improved by using these polarimetric attributes.
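For readers unfamiliar with the covariance matrix [C] mentioned above, a minimal sketch of how it is formed in standard PolSAR processing: the single-look scattering vector k = [S_HH, √2·S_HV, S_VV]ᵀ is averaged over a multilook window as [C] = ⟨k kᴴ⟩. The data below are synthetic stand-ins, not PALSAR samples:

```python
import numpy as np

# Synthetic single-look complex scattering vectors (circular Gaussian
# stand-ins for the lexicographic vector [S_HH, sqrt(2) S_HV, S_VV]).
rng = np.random.default_rng(2)
n_looks = 64
k = rng.standard_normal((n_looks, 3)) + 1j * rng.standard_normal((n_looks, 3))

# Multilooked covariance matrix [C] = < k k^H >: 3x3 Hermitian PSD.
C = np.einsum('ni,nj->ij', k, k.conj()) / n_looks

# A simple derived attribute: the span (total backscattered power).
span = float(np.real(np.trace(C)))
print(span)
```

Classification attributes such as those used in the paper (e.g., channel powers and cross-channel correlations) are then functions of the entries of this matrix.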
Abstract:
We propose an alternative formalism to simulate cosmic microwave background (CMB) temperature maps in Lambda CDM universes with nontrivial spatial topologies. This formalism avoids the need to explicitly compute the eigenmodes of the Laplacian operator in the spatial sections. Instead, the covariance matrix of the coefficients of the spherical harmonic decomposition of the temperature anisotropies is expressed in terms of the elements of the covering group of the space. We obtain a decomposition of the correlation matrix that isolates the topological contribution to the CMB temperature anisotropies out of the simply connected contribution. A further decomposition of the topological signature of the correlation matrix for an arbitrary topology allows us to compute it in terms of correlation matrices corresponding to simpler topologies, for which closed quadrature formulas might be derived. We also use this decomposition to show that CMB temperature maps of (not too large) multiply connected universes must show patterns of alignment, and propose a method to look for these patterns, thus opening the door to the development of new methods for detecting the topology of our Universe even when the injectivity radius of space is slightly larger than the radius of the last scattering surface. We illustrate all these features with the simplest examples, those of flat homogeneous manifolds, i.e., tori, with special attention given to the cylinder, i.e., the T^1 topology.
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, i.e., the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through selection and improved processing of altimetric data sets. We focus on the preprocessing steps of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available.
The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
Abstract:
Patterns in sequences of amino acid hydrophobic free energies predict secondary structures in proteins. In protein folding, matches in hydrophobic free energy statistical wavelengths appear to contribute to selective aggregation of secondary structures in “hydrophobic zippers.” In a similar setting, the use of Fourier analysis to characterize the dominant statistical wavelengths of peptide ligands’ and receptor proteins’ hydrophobic modes to predict such matches has been limited by the aliasing and end effects of short peptide lengths, as well as the broad-band, mode multiplicity of many of their frequency (power) spectra. In addition, the sequence locations of the matching modes are lost in this transformation. We make new use of three techniques to address these difficulties: (i) eigenfunction construction from the linear decomposition of the lagged covariance matrices of the ligands and receptors as hydrophobic free energy sequences; (ii) maximum entropy, complex poles power spectra, which select the dominant modes of the hydrophobic free energy sequences or their eigenfunctions; and (iii) discrete, best bases, trigonometric wavelet transformations, which confirm the dominant spectral frequencies of the eigenfunctions and locate them as (absolute valued) moduli in the peptide or receptor sequence. The leading eigenfunction of the covariance matrix of a transmembrane receptor sequence locates the same transmembrane segments seen in n-block-averaged hydropathy plots while leaving the remaining hydrophobic modes unsmoothed and available for further analyses as secondary eigenfunctions. In these receptor eigenfunctions, we find a set of statistical wavelength matches between peptide ligands and their G-protein and tyrosine kinase coupled receptors, ranging across examples from 13.10 amino acids in acid fibroblast growth factor to 2.18 residues in corticotropin releasing factor. 
We find that the wavelet-located receptor modes in the extracellular loops are compatible with studies of receptor chimeric exchanges and point mutations. A nonbinding corticotropin-releasing factor receptor mutant is shown to have lost the signatory mode common to the normal receptor and its ligand. Hydrophobic free energy eigenfunctions and their transformations offer new quantitative physical homologies in database searches for peptide-receptor matches.
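Step (i) above, eigenfunction construction from the lagged covariance matrix of a scalar sequence, can be sketched on a synthetic stand-in (the sequence, window length, and mode wavelength are assumed for illustration; this is the generic technique, not the paper's hydrophobicity data):

```python
import numpy as np

# Synthetic stand-in for a hydrophobic free energy sequence: a dominant
# oscillatory mode of wavelength ~10 residues plus noise.
rng = np.random.default_rng(3)
n, lag_window = 200, 20
t = np.arange(n)
seq = np.sin(2 * np.pi * t / 10.0) + 0.3 * rng.standard_normal(n)

# Trajectory (Hankel) matrix of lagged windows and its lagged covariance.
X = np.array([seq[i:i + lag_window] for i in range(n - lag_window + 1)])
C = X.T @ X / X.shape[0]

# Eigenfunctions of the lagged covariance: the leading eigenvector captures
# the dominant statistical wavelength; later ones remain for further analysis.
eigvals, eigvecs = np.linalg.eigh(C)   # ascending order
leading = eigvecs[:, -1]
print(float(eigvals[-1] / eigvals.sum()))  # variance share of leading mode
```

A subsequent spectral estimate of `leading` (step ii in the paper) would then recover the ~10-residue wavelength while leaving weaker modes unsmoothed in the secondary eigenvectors.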
Abstract:
devcon transforms the coefficients of 0/1 dummy variables so that they reflect deviations from the "grand mean" rather than deviations from the reference category (the transformed coefficients are equivalent to those obtained by so-called "effects coding") and adds a coefficient for the reference category. The variance-covariance matrix of the estimates is transformed accordingly. The transformed estimates can be used with postestimation procedures. In particular, devcon can be used to solve the identification problem for dummy variable effects in the Blinder-Oaxaca decomposition (see the oaxaca package).
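The coefficient transformation itself is simple; here is a minimal sketch of the idea in Python (a generic illustration, not the Stata implementation, and omitting the accompanying variance-covariance transformation):

```python
import numpy as np

def to_effects_coding(dummy_coefs):
    """Re-express dummy coefficients as deviations from the grand mean.

    dummy_coefs are coefficients relative to the reference category, whose
    implicit coefficient is 0; the returned vector includes a coefficient
    for the reference category and sums to zero (effects coding).
    """
    b = np.concatenate(([0.0], np.asarray(dummy_coefs, dtype=float)))
    return b - b.mean()  # first entry is the reference-category coefficient

# Hypothetical dummy coefficients for a 4-category variable (reference first).
coefs = to_effects_coding([0.5, 1.0, 1.5])
print(coefs)             # [-0.75, -0.25, 0.25, 0.75]
print(float(coefs.sum()))  # 0 by construction
```

Because the transformed coefficients no longer depend on which category was chosen as reference, they resolve the identification problem for dummy variable effects in the Blinder-Oaxaca decomposition mentioned above.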
Abstract:
We examined the genetic basis of clinal adaptation by determining the evolutionary response of life-history traits to laboratory natural selection along a gradient of thermal stress in Drosophila serrata. A gradient of heat stress was created by exposing larvae to a heat stress of 36° for 4 hr on 0, 1, 2, 3, 4, or 5 days of larval development, with the remainder of development taking place at 25°. Replicated lines were exposed to each level of this stress every second generation for 30 generations. At the end of selection, we conducted a complete reciprocal transfer experiment in which all populations were raised in all environments, to estimate the realized additive genetic covariance matrix among clinal environments in three life-history traits. Visualization of the genetic covariance functions of the life-history traits revealed that the genetic correlation between environments generally declined as environments became more different and, in some cases, even became negative between the most different environments. One exception to this general pattern was a life-history trait representing the classic trade-off between development time and body size, which responded to selection in a similar genetic fashion across all environments. Adaptation to clinal environments may involve a number of distinct genetic effects along the length of the cline, the complexity of which may not be fully revealed by focusing primarily on populations at the ends of the cline.
Abstract:
The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix exponential transformation guarantees the positive definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann’s seminal work in terms of the estimation of highly non-linear model specifications (“Causality tests and observationally equivalent representations of econometric models”, Journal of Econometrics, 1988, 39(1-2), 69–104), especially for developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of RMESV-ALM, and the finite sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high frequency data for three US financial assets, the new model is estimated and evaluated. The forecasting performance of the new model is compared with a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility.
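The positive-definiteness guarantee mentioned above follows from a basic property of the matrix exponential: for any symmetric A, expm(A) is symmetric positive definite, so a covariance matrix parameterized this way is valid by construction. A minimal numpy sketch (a generic illustration, not the RMESV-ALM estimation):

```python
import numpy as np

# Any symmetric matrix A, with no constraints on its entries or eigenvalues.
rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
A = (A + A.T) / 2.0

# expm(A) for symmetric A via its eigendecomposition A = V diag(lam) V^T:
# expm(A) = V diag(exp(lam)) V^T, with strictly positive eigenvalues exp(lam).
lam, V = np.linalg.eigh(A)
Sigma = (V * np.exp(lam)) @ V.T     # a valid covariance matrix

print(np.linalg.eigvalsh(Sigma))    # all eigenvalues strictly positive
```

This is why the model can let A evolve freely over time without ever producing an invalid covariance matrix.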
Abstract:
Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though it was sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we will focus on developing novel versions of nested sampling for low rank Toeplitz covariance estimation, and phase retrieval, where the latter problem finds many applications in high resolution optical imaging, X-ray crystallography and molecular imaging. The problem of low rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler called the Generalized Nested Sampler (GNS), which can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value.
Nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that, with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
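The difference-set property that nested sampling exploits can be illustrated in a few lines. Below is a generic two-level nested array with assumed level sizes N1 = 2 and N2 = 3 (an illustration of the principle, not the GNS or PNFS of the thesis): 5 physical sample positions whose pairwise differences cover every lag from 0 to N2·(N1+1) − 1 = 8, so the autocorrelation of a WSS signal is recoverable at all of those lags.

```python
from itertools import product

# Two-level nested array: dense inner level {1, ..., N1} and sparse outer
# level {(N1+1)*m : m = 1, ..., N2} (assumed sizes, for illustration).
N1, N2 = 2, 3
positions = sorted(set(range(1, N1 + 1)) |
                   {(N1 + 1) * m for m in range(1, N2 + 1)})

# The difference set (co-array) generated by the quadratic measurement model.
diffs = {abs(a - b) for a, b in product(positions, positions)}

print(positions)       # [1, 2, 3, 6, 9]
print(sorted(diffs))   # every lag 0..8 appears
```

This quadratic growth of the co-array (O(N²) lags from O(N) samples) is what gives nested samplers their compression advantage for covariance estimation.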