947 results for Genetic Variance-covariance Matrix


Relevance:

100.00%

Publisher:

Abstract:

The objectives of this study were to quantify the components of genetic variance and the genetic effects, and to examine the genetic relationships among inbred lines extracted from various shrunken2 (sh2) breeding populations. Ten diverse inbred lines developed from different genetic backgrounds were crossed in a half diallel. Parents and their F1 hybrids were evaluated in three environments. The parents were genotyped with 20 polymorphic simple sequence repeat (SSR) markers. Agronomic and quality traits were analysed with a mixed linear model under an additive-dominance genetic model. Genetic effects were estimated using an adjusted unbiased prediction method. Additive variance was more important than dominance variance in the expression of traits related to ear aspects (husk ratio and percentage of ear filled) and eating quality (flavour and total soluble solids). For agronomic traits, however, dominance variance was more important than additive variance. The additive genetic correlation between flavour and tenderness was strong (r = 0.84, P < 0.01). Additive genetic effects for flavour, tenderness and kernel colour were not correlated with yield-related traits. Genetic distance (GD), estimated from SSR profiles on the basis of Jaccard's similarity coefficient, varied from 0.10 to 0.77 with an average of 0.56. Cluster analysis classified parents according to their pedigree relationships. For most of the studied traits, F1 performance was not associated with GD.
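As a rough illustration of the Jaccard-based genetic distances reported above, the sketch below computes pairwise distances from a toy presence/absence matrix of SSR bands; the data and the helper function are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: pairwise Jaccard-based genetic distance from binary marker profiles.
# The 0/1 matrix below is random toy data, not data from the study.
import numpy as np

def jaccard_distance(a, b):
    """Genetic distance = 1 - Jaccard similarity for two 0/1 band profiles."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    shared = np.sum(a & b)          # bands present in both lines
    union = np.sum(a | b)           # bands present in at least one line
    return 1.0 - shared / union if union else 0.0

# Ten inbred lines scored for presence/absence of SSR alleles (random toy data).
rng = np.random.default_rng(0)
profiles = rng.integers(0, 2, size=(10, 40))

gd = np.array([[jaccard_distance(profiles[i], profiles[j]) for j in range(10)]
               for i in range(10)])
print("mean genetic distance:", gd[np.triu_indices(10, k=1)].mean().round(2))
```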

Relevance:

100.00%

Publisher:

Abstract:

Migraine is a common neurovascular brain disorder manifested in recurrent episodes of disabling headache. The aim of the present study was to compare the prevalence and heritability of migraine across six of the countries participating in the GenomEUtwin project, comprising a total of 29,717 twin pairs. Migraine was assessed by questionnaires that differed between most countries. It was most prevalent in Danish and Dutch females (32% and 34%, respectively), whereas the lowest prevalence was found in the younger and older Finnish cohorts (13% and 10%, respectively). The estimated genetic variance (heritability) was significant and the same for both sexes in all countries. Heritability ranged from 34% to 57%, with the lowest estimates in Australia and the highest estimates in the older Finnish cohort, the Netherlands, and Denmark. There was some indication that part of the genetic variance was non-additive, but this was significant only in Sweden. In addition to genetic factors, environmental effects that are not shared between members of a twin pair contributed to the liability to migraine. Once migraine definitions are homogenized among the participating countries, the GenomEUtwin project will provide a powerful resource for identifying the genes involved in migraine.
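For readers unfamiliar with twin-based heritability, the snippet below shows the textbook Falconer-style ACE decomposition from MZ/DZ twin correlations. It is only a rough stand-in for the liability-threshold models actually used in twin analyses of this kind, and the correlations are made up.

```python
# Illustrative only: a classical Falconer-style ACE decomposition from monozygotic (MZ)
# and dizygotic (DZ) twin correlations; r_mz and r_dz below are invented numbers.
def ace_from_twin_correlations(r_mz, r_dz):
    a2 = 2.0 * (r_mz - r_dz)      # additive genetic variance (heritability)
    c2 = 2.0 * r_dz - r_mz        # shared (common) environment
    e2 = 1.0 - r_mz               # non-shared environment + measurement error
    return a2, c2, e2

a2, c2, e2 = ace_from_twin_correlations(r_mz=0.45, r_dz=0.20)
print(f"h^2={a2:.2f}, c^2={c2:.2f}, e^2={e2:.2f}")
```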

Relevance:

100.00%

Publisher:

Abstract:

Open-pollinated progeny of Corymbia citriodora established in replicated field trials were assessed for stem diameter, wood density, and pulp yield prior to genotyping single nucleotide polymorphisms (SNPs) and testing the significance of associations between markers and assessment traits. Multiple individuals within each family were genotyped and phenotyped, which facilitated a comparison of standard association testing methods with an alternative method developed to relate markers to additive genetic effects. Narrow-sense heritability estimates indicated there was significant additive genetic variance within this population for the assessment traits (ĥ² = 0.28 to 0.44), and genetic correlations between the three traits were negligible to moderate (rG = 0.08 to 0.50). The significance of the association tests (p values) was compared for four different analyses based on two approaches: (1) two software packages were used to fit standard univariate mixed models that include fixed SNP effects, and (2) bivariate and multivariate mixed models were fitted that include each SNP as an additional selection trait. Within either the univariate or the multivariate approach, correlations between the tests of significance approached +1; however, correspondence between the two approaches was less strong, although between-approach correlations remained significantly positive. Similar SNP markers would therefore be selected using multivariate analyses and standard marker-trait association methods; the former facilitates integration into the existing genetic analysis systems of applied breeding programs and may be used with either single markers or indices of markers created with genomic selection processes.
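The sketch below illustrates the simplest form of single-marker association testing, regressing a trait on SNP allele dosage and recording the p value of the slope. It ignores the pedigree and mixed-model machinery used in the study, and all dosages and trait values are simulated.

```python
# Minimal sketch of a naive single-marker association scan on simulated data.
# Real analyses (as in the study above) account for family/kinship structure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trees, n_snps = 300, 50
dosage = rng.integers(0, 3, size=(n_trees, n_snps)).astype(float)   # 0/1/2 allele counts
trait = 0.3 * dosage[:, 0] + rng.normal(size=n_trees)               # SNP 0 has a true effect

p_values = np.array([stats.linregress(dosage[:, j], trait).pvalue
                     for j in range(n_snps)])
print("smallest p-value at SNP", p_values.argmin(), "=", p_values.min())
```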

Relevance:

100.00%

Publisher:

Abstract:

The paper presents a geometry-free approach to assess the variation of the covariance matrices of undifferenced triple-frequency GNSS measurements and its impact on positioning solutions. Four independent geometry-free/ionosphere-free (GFIF) models formed from the original triple-frequency code and phase signals allow effective computation of variance-covariance matrices from real data. Variance component estimation (VCE) algorithms are implemented to obtain the covariance matrices for the three pseudorange and three carrier-phase signals epoch by epoch. Covariance results from triple-frequency BeiDou System (BDS) and GPS data sets demonstrate that the estimated standard deviation varies consistently with the amplitude of the actual GFIF error time series. Single point positioning (SPP) results from BDS ionosphere-free measurements at four MGEX stations show an improvement of up to about 50% in the Up direction relative to results based on mean-square statistics. Additionally, a more extensive SPP analysis at 95 global MGEX stations based on GPS ionosphere-free measurements shows an average improvement of about 10% relative to the traditional results. This finding provides preliminary confirmation that adequate consideration of the variation of the covariance leads to improved GNSS state solutions.
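As a much-simplified illustration of tracking time-varying noise in a geometry-free combination, the sketch below detrends a synthetic geometry-free phase series and follows the residual standard deviation in a sliding window. This is not the paper's four-combination VCE; all signals, window lengths, and parameters are invented.

```python
# Toy illustration: estimate a time-varying noise level from a geometry-free series.
# The "ionosphere" and noise levels below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
epochs = np.arange(600.0)
iono = 0.02 * np.sin(2 * np.pi * epochs / 600)           # slow ionospheric signal (m)
noise_sd = 0.003 + 0.002 * (epochs > 300)                 # noise level jumps mid-arc
gf = iono + rng.normal(scale=noise_sd)                    # geometry-free phase series (m)

# Remove the slowly varying trend, then track the residual scatter in a sliding window.
residual = gf - np.polyval(np.polyfit(epochs, gf, 4), epochs)
window = 50
sd_track = np.array([residual[max(0, i - window):i + 1].std()
                     for i in range(len(epochs))])
print("estimated sd, first/second half:", sd_track[:300].mean(), sd_track[300:].mean())
```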

Relevance:

100.00%

Publisher:

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random and independently and identically distributed according to a given distribution whose probability distribution function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time, so the number of states grows exponentially with the size of the network. The Markov chain has a unique stationary distribution in which all nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, this model implies that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network at that time. Convergence of the epidemic map to the origin implies extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin; as a result, the origin is the globally stable unique fixed point of the nonlinear model whenever the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point, and we carry out stability analysis of this second fixed point for both discrete-time and continuous-time models. Returning to the Markov chain model, we argue that the stability of the linear upper bound of the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction and that the probability of survival is bounded by the nonlinear epidemic map.
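The sketch below illustrates one common form of the nonlinear marginal-probability epidemic map and its linear upper bound discussed above: it iterates the map on a random network and compares the outcome with the spectral radius of the linearization at the origin. The network, rates, and notation are illustrative choices, not taken from the dissertation.

```python
# Sketch of a discrete-time SIS mean-field map and its linear upper bound.
# p[i] is the probability that node i is infected; beta is the per-contact infection
# probability and delta the recovery probability (illustrative values).
import numpy as np

def epidemic_step(p, A, beta, delta):
    """One step of the discrete-time SIS mean-field map."""
    infection = 1.0 - np.prod(1.0 - beta * A * p[None, :], axis=1)  # P(infected by a neighbour)
    return (1.0 - p) * infection + p * (1.0 - delta)

rng = np.random.default_rng(3)
A = (rng.random((50, 50)) < 0.1).astype(float)   # random directed contact network
np.fill_diagonal(A, 0)
beta, delta = 0.05, 0.4

# Linear upper bound: p(t+1) <= ((1-delta) I + beta A) p(t); stable iff spectral radius < 1.
M = (1 - delta) * np.eye(50) + beta * A
rho = np.max(np.abs(np.linalg.eigvals(M)))

p = np.full(50, 0.5)
for _ in range(200):
    p = epidemic_step(p, A, beta, delta)
print(f"spectral radius {rho:.3f}; mean infection level after 200 steps {p.mean():.2e}")
```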

Relevance:

100.00%

Publisher:

Abstract:

ISSR analysis was used to investigate genetic variation in 184 haploid and diploid samples from nine North Atlantic Chondrus crispus Stackhouse populations and one outgroup Yellow Sea Chondrus ocellatus Holmes population. Twenty-two of 50 primers were selected and 163 loci were scored for genetic diversity analysis. Genetic diversity varied among populations: the percentage of polymorphic bands (PPB) ranged from 27.0 to 55.8%, H (Nei's genetic diversity) ranged from 0.11 to 0.20, and I (Shannon's information index) ranged from 0.16 to 0.30. The estimators PPB, H and I gave similar pictures of intra-population genetic diversity, regardless of the calculation method. Analysis of molecular variance (AMOVA) apportioned the inter-population and intra-population variation for C. crispus, showing that most of the genetic variance (56.5%) occurred within populations, with 43.5% of the variation among the nine populations. The Mantel test suggested that genetic differentiation among the nine C. crispus populations was closely related to geographic distance (R = 0.78, P = 0.002). The results suggest that, on larger distance scales (ca. >1000 km), ISSR analysis is useful for determining genetic differentiation of C. crispus populations, including morphologically inseparable haploid and diploid individuals. (c) 2007 Elsevier B.V. All rights reserved.
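The snippet below shows, on invented data, how the dominant-marker diversity statistics quoted above (PPB, Nei's H, Shannon's I) can be computed from a 0/1 band matrix. It treats band frequencies directly as allele frequencies, a simplification relative to the estimators used by standard population-genetics software.

```python
# Toy sketch of dominant-marker diversity statistics from a 0/1 band-presence matrix
# (one row per individual, one column per locus); data are random, not from the study.
import numpy as np

def diversity_indices(bands):
    freq = bands.mean(axis=0)                       # band frequency per locus
    poly = (freq > 0) & (freq < 1)                  # polymorphic loci
    ppb = 100.0 * poly.mean()                       # percentage of polymorphic bands
    h_nei = np.mean(2 * freq * (1 - freq))          # Nei's gene diversity (2pq per locus)
    p = np.stack([freq, 1 - freq])
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, -p * np.log(p), 0.0)
    i_shannon = terms.sum(axis=0).mean()            # Shannon's information index
    return ppb, h_nei, i_shannon

rng = np.random.default_rng(4)
bands = (rng.random((30, 163)) < rng.uniform(0.1, 0.9, 163)).astype(int)
print(diversity_indices(bands))
```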

Relevance:

100.00%

Publisher:

Abstract:

We propose a new method for estimating the covariance matrix of a multivariate time series of financial returns. The method is based on estimating sample covariances from overlapping windows of observations, which are then appropriately weighted to obtain the final covariance estimate. We extend the idea of (model) covariance averaging offered in the covariance shrinkage approach by means of greater ease of use, flexibility and robustness in averaging information over different data segments. The suggested approach does not suffer from the curse of dimensionality and can be used without problems of either approximation or any demand for numerical optimization.
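A minimal sketch of the windowed covariance-averaging idea is given below; the window length, step, and decay weights are arbitrary illustrative choices rather than the weighting scheme proposed in the paper.

```python
# Sketch: estimate a covariance matrix in several overlapping windows and combine
# the window estimates with weights that favour more recent windows (toy weights).
import numpy as np

def windowed_covariance(returns, window=60, step=20, decay=0.9):
    n, _ = returns.shape
    starts = range(0, n - window + 1, step)
    covs = [np.cov(returns[s:s + window], rowvar=False) for s in starts]
    weights = np.array([decay ** (len(covs) - 1 - k) for k in range(len(covs))])
    weights /= weights.sum()                      # normalise the window weights
    return sum(w * c for w, c in zip(weights, covs))

rng = np.random.default_rng(5)
returns = rng.normal(scale=0.01, size=(500, 10))  # toy return panel: 500 days, 10 assets
sigma = windowed_covariance(returns)
print(sigma.shape, np.allclose(sigma, sigma.T))
```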

Relevance:

100.00%

Publisher:

Abstract:

Predicting progeny performance from parental genetic divergence can potentially enhance the efficiency of supportive breeding programmes and facilitate risk assessment. Yet experimental testing of the effects of breeding distance on offspring performance remains rare, especially in wild populations of vertebrates. Recent studies have demonstrated that embryos of salmonid fish are sensitive indicators of additive genetic variance for viability traits. We therefore used gametes of wild brown trout (Salmo trutta) from five genetically distinct populations of a river catchment in Switzerland, and used a full factorial design to produce over 2,000 embryos in 100 different crosses with varying genetic distances (FST range 0.005 to 0.035). Customized egg capsules allowed us to record the survival of individual embryos until hatching under natural field conditions. Our breeding design enabled us to evaluate the role of the environment, of genetic and non-genetic parental contributions, and of interactions between these factors, on embryo viability. We found that embryo survival was strongly affected by maternal environmental (i.e. non-genetic) effects and by the microenvironment, i.e. by the location within the gravel. However, embryo survival was not predicted by population divergence, parental allelic dissimilarity, or heterozygosity, neither in the field nor under laboratory conditions. Our findings suggest that the genetic effects of inter-population hybridization within a genetically differentiated meta-population can be minor in comparison to environmental effects.

Relevance:

100.00%

Publisher:

Abstract:

This doctoral thesis consists of three chapters dealing with large-scale portfolio choice and risk measurement. The first chapter addresses the estimation-error problem in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large-portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium, a measure of pessimism, and proposes a methodology for estimating these measures. The first chapter improves optimal portfolio choice within the mean-variance framework of Markowitz (1952). This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates, a problem that is amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. In this chapter we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e. so as to minimize the expected utility loss. Specifically, a cross-validation criterion that takes the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the plug-in rule based directly on the data and with the naive 1/N strategy, in terms of expected utility loss and Sharpe ratio. Performance is measured both in-sample and out-of-sample for different sample sizes and numbers of assets. The simulations and the empirical illustration show mainly that regularizing the covariance matrix significantly improves the data-based Markowitz rule and outperforms the naive portfolio, especially in cases where the estimation-error problem is severe. In the second chapter, we investigate the extent to which stable optimal portfolios of domestic assets can reduce or eliminate currency risk, using monthly returns on 48 US industries over the period 1976-2008. To address the instability problems inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of stable optimal portfolios that lets investors choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM) in which currency risk is decomposed into two factors representing the currencies of industrialized countries on the one hand and of emerging countries on the other. Our results indicate that currency risk is priced and varies over time for the stable minimum-risk portfolios. Moreover, these strategies lead to a significant reduction in currency-risk exposure, while the contribution of the currency risk premium remains on average unchanged. Optimal portfolio weights are an alternative to market-capitalization weights; this chapter therefore complements the literature showing that the currency risk premium is important at the industry and country levels in most countries. In the final chapter, we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk premium derived under expected utility theory, which is frequently violated both in experimental and in real-world settings. Within the broad family of preferences considered, particular attention is given to CVaR (conditional value at risk). This risk measure is increasingly used for portfolio construction and is recommended as a complement to the VaR (value at risk) used since 1996 by the Basel Committee. In addition, we provide the statistical framework needed for inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
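As a rough illustration of two of the regularizations examined in the first chapter, the sketch below builds ridge and spectral cut-off inverses of a sample covariance matrix and plugs them into global minimum-variance weights. Dimensions, data, and tuning values are invented, and the thesis's data-driven selection of the tuning parameter is not shown.

```python
# Toy illustration of ridge and spectral cut-off regularized covariance inverses,
# used in global minimum-variance portfolio weights (data and tunings are invented).
import numpy as np

def minvar_weights(cov_inv):
    ones = np.ones(cov_inv.shape[0])
    w = cov_inv @ ones
    return w / w.sum()

def ridge_inverse(S, tau):
    return np.linalg.inv(S + tau * np.eye(S.shape[0]))

def spectral_cutoff_inverse(S, k):
    vals, vecs = np.linalg.eigh(S)                 # eigenvalues in ascending order
    keep, U = vals[-k:], vecs[:, -k:]              # keep the k largest components
    return U @ np.diag(1.0 / keep) @ U.T

rng = np.random.default_rng(6)
R = rng.normal(scale=0.02, size=(120, 48))         # 120 months, 48 industries (toy)
S = np.cov(R, rowvar=False)
w_ridge = minvar_weights(ridge_inverse(S, tau=0.01))
w_cut = minvar_weights(spectral_cutoff_inverse(S, k=10))
print(w_ridge[:3], w_cut[:3])
```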

Relevance:

100.00%

Publisher:

Abstract:

Survival studies are concerned with the time elapsed from the start of the study (diagnosis of the disease, start of treatment, ...) until the event of interest occurs (death, cure, improvement, ...). Often, however, this event is observed more than once in the same individual during the follow-up period (multivariate survival data). In that case, a methodology different from standard survival analysis is required. The main problem posed by this kind of data is that the observations may not be independent. Until now, this problem has been addressed in two different ways depending on the dependent variable. If the variable follows a distribution from the exponential family, generalized linear mixed models (GLMMs) are used; if the variable is time, whose probability distribution does not belong to that family, multivariate survival analysis is used. The aim of this thesis is to unify these two approaches, that is, to use time as the dependent variable with groups of individuals or observations within a GLMM framework, in order to introduce new methods for handling this type of data.

Relevance:

100.00%

Publisher:

Abstract:

A new spectral-based approach is presented to find orthogonal patterns from gridded weather/climate data. The method is based on optimizing the interpolation error variance. The optimally interpolated patterns (OIP) are then given by the eigenvectors of the interpolation error covariance matrix, obtained using the cross-spectral matrix. The formulation of the approach is presented, and it is applied to low-dimensional stochastic toy models and to various reanalysis datasets. In particular, it is found that the lowest-frequency patterns correspond to the largest eigenvalues, that is, variances, of the interpolation error matrix. The approach has been applied to the Northern Hemispheric (NH) and tropical sea level pressure (SLP) and to the Indian Ocean sea surface temperature (SST). Two main OIP patterns are found for the NH SLP, representing respectively the North Atlantic Oscillation and the North Pacific pattern. The leading tropical SLP OIP represents the Southern Oscillation. For the Indian Ocean SST, the leading OIP pattern shows a tripole-like structure with one sign over the eastern and north- and southwestern parts and the opposite sign in the remaining parts of the basin. This pattern also has a high lagged correlation with the Niño-3 index at a 6-month lag.
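For orientation, the snippet below shows the eigen-decomposition step that OIP shares with ordinary EOF analysis, applied to a toy data covariance. OIP itself applies this step to the interpolation-error covariance built from the cross-spectral matrix, which is not reproduced here.

```python
# Toy eigen-decomposition of a gridded-field covariance matrix (EOF-style step).
# OIP would decompose the interpolation-error covariance instead of the raw data covariance.
import numpy as np

rng = np.random.default_rng(7)
n_time, n_grid = 500, 200
field = rng.normal(size=(n_time, n_grid))            # toy gridded anomaly field
cov = np.cov(field, rowvar=False)                    # (n_grid x n_grid) covariance

eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
patterns = eigvecs[:, ::-1][:, :2]                   # two leading spatial patterns
explained = eigvals[::-1][:2] / eigvals.sum()
print("variance fraction of two leading patterns:", explained.round(3))
```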

Relevance:

100.00%

Publisher:

Abstract:

Nonlinear system identification is considered using a generalized kernel regression model. Unlike the standard kernel model, which employs a fixed common variance for all the kernel regressors, each kernel regressor in the generalized kernel model has an individually tuned diagonal covariance matrix that is determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. An efficient construction algorithm based on orthogonal forward regression with leave-one-out (LOO) test statistic and local regularization (LR) is then used to select a parsimonious generalized kernel regression model from the resulting full regression matrix. The proposed modeling algorithm is fully automatic and the user is not required to specify any criterion to terminate the construction procedure. Experimental results involving two real data sets demonstrate the effectiveness of the proposed nonlinear system identification approach.

Relevance:

100.00%

Publisher:

Abstract:

A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting, with the Fisher ratio for class separability as the selection measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio for class separability. An efficient weighted optimization method based on boosting is developed to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained with this construction technique demonstrate that it offers a viable alternative to existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
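The snippet below computes the two-class Fisher ratio used as the separability measure above for a single candidate feature. The feature and labels are simulated, and the boosting-based tuning of kernel means and covariances is not shown.

```python
# Tiny sketch of the Fisher ratio of a candidate regressor column for a two-class problem.
import numpy as np

def fisher_ratio(feature, labels):
    """(difference of class means)^2 / (sum of class variances)."""
    x0, x1 = feature[labels == 0], feature[labels == 1]
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var())

rng = np.random.default_rng(10)
labels = rng.integers(0, 2, 200)
feature = labels + rng.normal(scale=0.8, size=200)    # informative toy feature
print(f"Fisher ratio: {fisher_ratio(feature, labels):.2f}")
```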

Relevance:

100.00%

Publisher:

Abstract:

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to enforce the nonnegativity and unity constraints, and this weight-updating process also has the desirable ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model, which restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact and accurate density estimates.
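The sketch below shows a simplified multiplicative nonnegative update for the mixing weights, in the spirit of the MNQP step mentioned above. The published algorithm enforces the unity constraint through a Lagrange multiplier; here it is approximated by renormalising after each step, and the matrix and vector are toy stand-ins for the quadratic-programme data built from the kernels.

```python
# Simplified multiplicative nonnegative update for kernel mixing weights (toy data).
# Approximately minimises 0.5*w'Bw - v'w subject to w >= 0 and sum(w) = 1, assuming
# B and v have nonnegative entries.
import numpy as np

def mnqp_weights(B, v, iters=200):
    w = np.full(len(v), 1.0 / len(v))
    for _ in range(iters):
        w = w * v / (B @ w)          # multiplicative update keeps w nonnegative
        w = w / w.sum()              # re-impose the unity constraint by renormalising
    return w

rng = np.random.default_rng(8)
G = rng.random((40, 12))
B = G.T @ G + 1e-6 * np.eye(12)      # nonnegative, positive-definite toy matrix
v = rng.random(12)
w = mnqp_weights(B, v)
print(w.round(3), w.sum())
```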

Relevance:

100.00%

Publisher:

Abstract:

In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error on both the observations and the model estimates, with more weight given to data that can be trusted more. Any DA method requires an estimate of the initial forecast error covariance matrix. For convective-scale data assimilation, however, the properties of these error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective-column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with the discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy and, more importantly, the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency and/or the accuracy of the observations, the ensemble is able to capture the states and their variability successfully across all regimes.
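As a bare-bones illustration of an ensemble square root filter update, the snippet below assimilates a single, directly observed state component in a Whitaker-and-Hamill-style square-root form. The convective-column model, cloud physics, and observation network of the study are absent, and all numbers are invented.

```python
# Serial EnSRF update for one scalar observation of a directly observed state component.
import numpy as np

def ensrf_update(ensemble, obs_value, obs_index, obs_var):
    """ensemble: (n_state, n_members); the observation measures state[obs_index]."""
    mean = ensemble.mean(axis=1)
    perts = ensemble - mean[:, None]
    h_perts = perts[obs_index]                         # HX' for a direct observation
    n_m = ensemble.shape[1]
    hph = h_perts @ h_perts / (n_m - 1)                # forecast variance in obs space
    ph = perts @ h_perts / (n_m - 1)                   # covariance of state with obs
    gain = ph / (hph + obs_var)                        # Kalman gain
    mean_a = mean + gain * (obs_value - mean[obs_index])
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (hph + obs_var)))  # square-root gain reduction
    perts_a = perts - alpha * np.outer(gain, h_perts)
    return mean_a[:, None] + perts_a

rng = np.random.default_rng(9)
ens = rng.normal(loc=2.0, scale=1.0, size=(20, 30))    # 20 levels, 30 members (toy)
analysis = ensrf_update(ens, obs_value=2.5, obs_index=5, obs_var=0.25)
print(analysis[5].mean(), analysis[5].std())
```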