821 results for covariance intersect algorithm
Abstract:
The objective of this study was to estimate (co)variance functions using random regression models on Legendre polynomials for the analysis of repeated measures of BW from birth to adult age. A total of 82,064 records from 8,145 females were analyzed. Different models were compared. The models included additive direct and maternal effects, and animal and maternal permanent environmental effects as random terms. Contemporary group and dam age at calving (linear and quadratic effects) were included as fixed effects, and orthogonal Legendre polynomials of animal age (cubic regression) were considered as random covariables. Eight models with polynomials of third to sixth order were used to describe additive direct and maternal effects, and animal and maternal permanent environmental effects. Residual effects were modeled using 1 (i.e., assuming homogeneity of variances across all ages) or 5 age classes. The model with 5 classes was the best at describing the trajectory of residuals along the growth curve. The model including fourth- and sixth-order polynomials for additive direct and animal permanent environmental effects, respectively, and third-order polynomials for maternal genetic and maternal permanent environmental effects was the best. Estimates of (co)variance obtained with the multi-trait and random regression models were similar. Direct heritability estimates obtained with the random regression models followed a trend similar to that obtained with the multi-trait model. The largest estimates of maternal heritability were those of BW taken close to 240 d of age. In general, estimates of correlation between BW from birth to 8 yr of age decreased with increasing distance between ages.
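As a rough sketch of how the random-regression covariables described above can be built, the standardized age is mapped to [-1, 1] and Legendre polynomials are evaluated at it. This is a minimal illustration assuming NumPy; the helper name and the example ages are hypothetical, not taken from the study.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariables(age, age_min, age_max, order):
    """Standardize age to [-1, 1], then evaluate Legendre polynomials
    P_0..P_order as the random-regression covariables (one row per record)."""
    x = 2.0 * (np.asarray(age, dtype=float) - age_min) / (age_max - age_min) - 1.0
    return legendre.legvander(x, order)

# Cubic regression (order 3) on illustrative ages from birth to 8 yr (2920 d)
ages = [0, 240, 365, 730, 2920]
Z = legendre_covariables(ages, 0, 2920, 3)
print(Z.shape)  # (5, 4): columns are P_0(x)..P_3(x)
```

The matrix `Z` plays the role of the design matrix for the random regression coefficients; a fourth- or sixth-order fit simply uses more columns.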
Abstract:
Using the classical twin design, this study investigates the influence of genetic factors on the large phenotypic variance in inspection time (IT), and whether the well-established IT-IQ association can be explained by a common genetic factor. Three hundred ninety pairs of twins (184 monozygotic, MZ; 206 dizygotic, DZ) with a mean age of 16 years participated, and 49 pairs returned approximately 3 months later for retesting. As in many IT studies, the pi figure stimulus was used and IT was estimated from the cumulative normal ogive. IT ranged from 39.4 to 774.1 ms (159 +/- 110.1 ms), with faster ITs (by an average of 26.9 ms) found in the retest session, from which a reliability of .69 was estimated. Full-scale IQ (FIQ) was assessed by the Multidimensional Aptitude Battery (MAB) and ranged from 79 to 145 (111 +/- 13). The phenotypic association between IT and FIQ was confirmed (-.35), and bivariate results showed that a common genetic factor accounted for 36% of the variance in IT and 32% of the variance in FIQ. The maximum likelihood estimate of the genetic correlation was -.63. When performance and verbal IQ (PIQ and VIQ) were analysed with IT, a stronger phenotypic and genetic relationship was found between PIQ and IT than between VIQ and IT. A large part of the IT variance (64%) was accounted for by a unique environmental factor. Further genetic factors were needed to explain the remaining variance in IQ, with a small component of unique environmental variance present. The separability of a shared genetic factor influencing IT and IQ from the total genetic variance in IQ suggests that IT affects a specific subcomponent of intelligence rather than a generalised efficiency. (C) 2001 Elsevier Science Inc. All rights reserved.
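The study fits its genetic models by maximum likelihood; as a back-of-the-envelope illustration of the classical twin logic it relies on, Falconer's formulas decompose phenotypic variance from MZ and DZ twin correlations. The correlations below are invented for illustration, not the study's estimates.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer decomposition from twin correlations (a textbook
    approximation, not the study's maximum-likelihood model):
    A = additive genetic, C = shared environment, E = unique environment."""
    a2 = 2.0 * (r_mz - r_dz)   # heritability estimate
    c2 = 2.0 * r_dz - r_mz     # shared-environment share
    e2 = 1.0 - r_mz            # unique environment (includes measurement error)
    return a2, c2, e2

# Hypothetical twin correlations for an IT-like trait (not values from the study)
a2, c2, e2 = falconer_ace(0.68, 0.34)
```

The three shares sum to 1 by construction, which is why a large unique-environment component (like the 64% reported for IT) directly caps the heritability estimate.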
Abstract:
The genetic relationship between lower (information processing speed), intermediate (working memory), and higher levels (complex cognitive processes as indexed by IQ) of mental ability was studied in a classical twin design comprising 166 monozygotic and 190 dizygotic twin pairs. Processing speed was measured by a choice reaction time (RT) task (2-, 4-, and 8-choice), working memory by a visual-spatial delayed response task, and IQ by the Multidimensional Aptitude Battery. Multivariate analysis, adjusted for test-retest reliability, showed the presence of a genetic factor influencing all variables and a genetic factor influencing 4- and 8-choice RTs, working memory, and IQ. There were also genetic factors specific to 8-choice RT, working memory, and IQ. The results confirmed a strong relationship between choice RT and IQ (phenotypic correlations: -0.31 to -0.53 in females, -0.32 to -0.56 in males; genotypic correlations: -0.45 to -0.70) and a weaker but significant association between working memory and IQ (phenotypic: 0.26 in females, 0.13 in males; genotypic: 0.34). A significant part of the genetic variance (43%) in IQ was not related to either choice RT or delayed response performance, and may represent higher order cognitive processes.
Abstract:
The phase estimation algorithm is so named because it allows an estimation of the eigenvalues associated with an operator. However, it has been proposed that the algorithm can also be used to generate eigenstates. Here we extend this proposal to small quantum systems, identifying the conditions under which the phase-estimation algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate two simple examples, one in which the algorithm effectively generates eigenstates, and one in which it does not.
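An idealized classical simulation of the eigenstate-generation idea: measuring the phase register projects the system register onto an eigenstate with probability given by its overlap with the input state. The `phase_estimation_collapse` helper and the single-qubit unitary are illustrative assumptions, not the paper's ion-trap scheme, and the simulation assumes non-degenerate eigenphases and an exact phase readout.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_estimation_collapse(U, psi):
    """Idealized phase estimation: sample an eigenstate of U with
    probability |<v_k|psi>|^2, returning its phase and the collapsed state."""
    w, V = np.linalg.eig(U)          # eigenvalues e^{i*phi_k}, eigenvectors
    amps = V.conj().T @ psi          # overlaps <v_k|psi>
    probs = np.abs(amps) ** 2
    k = rng.choice(len(w), p=probs / probs.sum())
    return np.angle(w[k]), V[:, k]   # estimated phase, prepared eigenstate

# Single-qubit example: U = diag(1, i), input state |+>
U = np.diag([1.0, 1j])
psi = np.array([1.0, 1.0]) / np.sqrt(2)
phase, state = phase_estimation_collapse(U, psi)
```

Whatever outcome is sampled, the returned state is an exact eigenstate of `U`, which is the property the paper exploits for state generation.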
Abstract:
A new algorithm, PfAGSS, for predicting 3' splice sites in Plasmodium falciparum genomic sequences is described. Application of this program to the published P. falciparum chromosome 2 and 3 data suggests that existing programs result in a high error rate in assigning 3' intron boundaries. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Motivation: A consensus sequence for a family of related sequences is, as the name suggests, a sequence that captures the features common to most members of the family. Consensus sequences are important in various DNA sequencing applications and are a convenient way to characterize a family of molecules. Results: This paper describes a new algorithm for finding a consensus sequence, using the popular optimization method known as simulated annealing. Unlike the conventional approach of finding a consensus sequence by first forming a multiple sequence alignment, this algorithm searches for a sequence that minimises the sum of pairwise distances to each of the input sequences. The resulting consensus sequence can then be used to induce a multiple sequence alignment. The time required by the algorithm scales linearly with the number of input sequences and quadratically with the length of the consensus sequence. We present results demonstrating the high quality of the consensus sequences and alignments produced by the new algorithm. For comparison, we also present similar results obtained using ClustalW. The new algorithm outperforms ClustalW in many cases.
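A minimal sketch of the approach: simulated annealing over candidate sequences, scoring each by its summed edit distance to the inputs. Substitution-only moves (fixed length, no indels) and a linear cooling schedule are simplifications for illustration; the paper's algorithm and parameters may differ.

```python
import math
import random

def edit_distance(a, b):
    """Levenshtein distance by dynamic programming (rolling row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anneal_consensus(seqs, steps=2000, t0=2.0, seed=1):
    """Anneal toward a sequence minimizing the summed edit distance to seqs."""
    rng = random.Random(seed)
    alphabet = sorted(set("".join(seqs)))
    cand = list(seqs[0])                      # start from the first input
    cost = sum(edit_distance("".join(cand), s) for s in seqs)
    best, best_cost = cand[:], cost
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        trial = cand[:]
        trial[rng.randrange(len(trial))] = rng.choice(alphabet)  # point mutation
        c = sum(edit_distance("".join(trial), s) for s in seqs)
        if c <= cost or rng.random() < math.exp(-(c - cost) / t):
            cand, cost = trial, c
        if cost < best_cost:
            best, best_cost = cand[:], cost
    return "".join(best), best_cost

cons, cost = anneal_consensus(["ACGTACGT", "ACGAACGT", "ACGTACGA"])
```

Each scoring pass is linear in the number of input sequences and quadratic in the candidate's length via the distance DP, matching the scaling stated in the abstract.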
Abstract:
A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
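A one-dimensional sketch of a moving least-squares fit with a polynomial basis: at each query point a local polynomial is fitted by weighted least squares and evaluated there. The Gaussian weight function and its parameters are assumed choices for illustration, not the paper's formulation; the sketch does show the polynomial-reproduction property that underlies the exact circle/sphere result for a quadratic basis.

```python
import numpy as np

def mls_eval(x_nodes, f_nodes, x, degree=2, h=0.5):
    """Moving least squares: fit a local polynomial at query x by
    weighted least squares (Gaussian weight, an assumed kernel),
    then return the fit's value at x (i.e., at local coordinate 0)."""
    d = np.asarray(x_nodes, dtype=float) - x
    w = np.exp(-((d / h) ** 2))
    P = np.vander(d, degree + 1, increasing=True)  # basis [1, d, d^2, ...]
    A = P.T @ (w[:, None] * P)
    b = P.T @ (w * np.asarray(f_nodes, dtype=float))
    return np.linalg.solve(A, b)[0]

# Data lying on a quadratic is reproduced exactly by a quadratic basis
xs = np.linspace(-1.0, 1.0, 9)
val = mls_eval(xs, xs ** 2, 0.3)
```

Because the weighted residual can be driven to zero whenever the data lie in the span of the basis, the fitted value matches the underlying quadratic to machine precision.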
Abstract:
Libraries of cyclic peptides are being synthesized using combinatorial chemistry for high throughput screening in the drug discovery process. This paper describes the min_syn_steps.cpp program (available at http://www.imb.uq.edu.au/groups/smythe/tran), which, after inputting a list of cyclic peptides to be synthesized, removes cyclically redundant sequences and calculates synthetic strategies which minimize the synthetic steps as well as the reagent requirements. The synthetic steps and reagent requirements could be minimized by finding common subsets within the sequences for block synthesis. Since the search space of a brute-force approach to finding optimum synthetic strategies is impractically large, a subset-orientated approach is utilized here to limit the size of the search. (C) 2002 Elsevier Science Ltd. All rights reserved.
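A minimal sketch of the cyclic-redundancy step: two cyclic peptides are duplicates if one is a rotation of the other, so each sequence can be reduced to a canonical rotation key before deduplication. The helper names are hypothetical, the sketch assumes a fixed reading direction (direction-reversed duplicates are not handled), and it is not the program's actual implementation.

```python
def canonical_rotation(seq):
    """Lexicographically least rotation: a canonical key for a cyclic
    sequence (rotation only; assumes a fixed N-to-C reading direction)."""
    return min(seq[i:] + seq[:i] for i in range(len(seq)))

def remove_cyclic_redundancy(peptides):
    """Keep the first representative of each rotation-equivalence class."""
    seen, unique = set(), []
    for p in peptides:
        key = canonical_rotation(p)
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique

# "GFLA" and "FLAG" are rotations of "AGFL", so only two cycles remain
kept = remove_cyclic_redundancy(["AGFL", "GFLA", "FLAG", "ALFG"])
```

Reducing the library this way shrinks the input before the subset-search for shared synthesis blocks begins.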
Abstract:
We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. By working in this reduced space, it allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We shall illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. (C) 2002 Elsevier Science B.V. All rights reserved.
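A small sketch of the factor-analytic covariance structure the abstract refers to, and of the parameter savings it buys when the dimension p is large relative to the sample size. The function names and the gene/factor numbers are illustrative assumptions.

```python
import numpy as np

def mfa_component_covariance(Lambda, psi_diag):
    """Covariance implied by a factor-analytic component:
    Sigma = Lambda Lambda^T + Psi, with Psi diagonal (uniquenesses)."""
    return Lambda @ Lambda.T + np.diag(psi_diag)

def covariance_params(p, q):
    """Free covariance parameters per component: p*q loadings + p
    uniquenesses (ignoring the q(q-1)/2 rotational indeterminacy),
    versus p(p+1)/2 for an unrestricted covariance matrix."""
    return p * q + p, p * (p + 1) // 2

# e.g. p = 1000 genes, q = 3 latent factors per mixture component
factored, full = covariance_params(1000, 3)
print(factored, full)  # 4000 vs 500500
```

The factored form sits between the isotropic model (one variance parameter) and the full covariance model, which is what makes the mixture fittable when p greatly exceeds n, as in the microarray example.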