962 results for Joint Segregation Analysis (jsa)
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). Joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when applied to the six basic generations: both parents (P1 and P2), F1, F2, and both backcross generations (B1 and B2) derived from crossing the F1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest power was observed for genetic models with simple inheritance; e.g., the power was greater than 90% for the one-major-gene model, regardless of population size and major-gene heritability. Lower power was observed for genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes, and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two-major-gene model with a heritability of 0.3 and population sizes of 100 individuals. The JSA methodology was then applied to a previously studied sorghum data set to investigate the genetic control of the putative drought-resistance trait osmotic adjustment in three crosses. The previous study concluded that two major genes were segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model: the presence of the two major genes was confirmed, with the addition of an unspecified number of polygenes.
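The core computation behind JSA is a finite-mixture likelihood: within each generation, phenotypes are modelled as a mixture of normal components, one per major-gene genotype, mixed in the Mendelian proportions for that generation. Below is a minimal sketch of that idea for the F2 generation only, assuming a single major gene with additive effect a and dominance effect d, and using simulated data; the full method maximizes the joint likelihood over all six generations and compares competing models with likelihood-ratio tests.

```python
# Minimal sketch of the mixture-model likelihood at the heart of joint
# segregation analysis (JSA). Assumptions (not from the paper): a single
# major gene in an F2, so phenotypes follow a 1:2:1 mixture of normals
# N(m - a), N(m + d), N(m + a) with a common residual variance that
# absorbs polygenes and environment. Data are simulated for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def f2_neg_loglik(params, y):
    m, a, d, log_s = params          # midpoint, additive, dominance, log SD
    s = np.exp(log_s)
    # Mendelian mixing proportions for genotypes AA : Aa : aa in an F2
    comp = (0.25 * norm.pdf(y, m + a, s)
            + 0.50 * norm.pdf(y, m + d, s)
            + 0.25 * norm.pdf(y, m - a, s))
    return -np.sum(np.log(comp + 1e-300))

rng = np.random.default_rng(1)
geno = rng.choice([-1, 0, 1], size=300, p=[0.25, 0.5, 0.25])
y = 10 + 2.0 * geno + rng.normal(0, 1.0, size=300)   # simulated F2 trait

fit = minimize(f2_neg_loglik, x0=[y.mean(), 1.0, 0.0, 0.0], args=(y,),
               method="Nelder-Mead")
print(np.round(fit.x, 3))            # estimates of m, a, d, log sigma
```

In the full procedure, the fitted likelihood of each candidate model (major gene only, polygenes only, mixed, and so on) is compared against nested alternatives to decide which genetic model the data support.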
Abstract:
We present the first comprehensive study, to our knowledge, of genomic chromosomal analysis in syndromic craniosynostosis. In total, 45 patients with craniosynostotic disorders were screened with a variety of methods, including conventional karyotyping, microsatellite segregation analysis, subtelomeric multiplex ligation-dependent probe amplification, and whole-genome array-based comparative genomic hybridisation. Causative abnormalities were present in 42.2% (19/45) of the samples, and 27.8% (10/36) of the patients with a normal conventional karyotype carried submicroscopic imbalances. Our results include a wide variety of imbalances and point to novel chromosomal regions associated with craniosynostosis. The high incidence of pure duplications or trisomies suggests that these are important mechanisms in craniosynostosis, particularly in cases involving the metopic suture.
Abstract:
This article compares laboratory versions of the Dutch clock open auction, a sealed-bid auction representing book building, and a two-stage sealed-bid auction serving as a proxy for the “competitive IPO”, a recent innovation used in a few European equity initial public offerings. We investigate pricing efficiency, seller allocative efficiency, and buyer-welfare allocative efficiency, and conclude that the book-building emulation is about as price efficient as the Dutch auction, even after investor learning, whereas the competitive IPO is not price efficient, regardless of learning. The competitive IPO is the most seller-allocative-efficient method because it maximizes offer proceeds. The Dutch auction emerges as the most buyer-welfare-allocative-efficient method. Underwriters are probably seeking pricing efficiency rather than seller or buyer-welfare allocative efficiency, and their discretionary pricing and allocation must be important, since book building is prominent worldwide.
Abstract:
Using restriction fragment length polymorphisms (RFLPs), we analyzed the segregation of alleles of the different vitellogenin genes of Xenopus laevis. The results demonstrate that the four genes whose expression is controlled by oestrogen form two linkage groups. The genes A1, A2, and B1 are genetically linked, whereas the fourth gene, B2, segregates independently. The possible origin of this unexpected arrangement is discussed.
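As background, a standard way to test whether two marker loci segregate independently (i.e., belong to different linkage groups) is a chi-square test on the cross-tabulated progeny classes. A minimal sketch with invented counts, not data from this study:

```python
# Hedged sketch: testing independent segregation of two RFLP marker loci,
# as one would to separate linkage groups like A1/A2/B1 versus B2.
# The counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: allele classes at one locus; columns: allele classes at the other,
# scored across the progeny of a mapping cross.
table = [[28, 24],
         [22, 26]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A non-significant p is consistent with independent segregation
# (separate linkage groups); a small p suggests genetic linkage.
```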
Abstract:
The generalization of simple (two-variable) correspondence analysis to more than two categorical variables, commonly referred to as multiple correspondence analysis, is neither obvious nor well defined. We present two alternative ways of generalizing correspondence analysis, one based on the quantification of the variables and their intercorrelations, and the other based on the geometric ideas of simple correspondence analysis. We propose a version of multiple correspondence analysis, with adjusted principal inertias, as the method of choice for the geometric definition, since it contains simple correspondence analysis as an exact special case, which is not true of the standard generalizations. We also clarify the issue of supplementary point representation and the properties of joint correspondence analysis, a method that visualizes all two-way relationships between the variables. The methodology is illustrated using data on attitudes to science from the International Social Survey Program on Environment in 1993.
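A minimal sketch of the adjustment step for the geometric definition: compute the principal inertias of the indicator-matrix correspondence analysis, keep those exceeding 1/Q, and rescale them as [Q/(Q-1) * (lambda - 1/Q)]^2. The data below are invented, and this bare-bones implementation is an illustration under those assumptions, not the authors' software:

```python
# Sketch of multiple correspondence analysis with adjusted principal
# inertias. Assumptions: a tiny invented data set with Q = 3 categorical
# variables; the adjustment lambda_adj = (Q/(Q-1))^2 * (lambda - 1/Q)^2
# is applied to indicator-matrix inertias lambda > 1/Q.
import numpy as np
import pandas as pd

df = pd.DataFrame({                          # invented categorical data
    "A": ["a1", "a2", "a1", "a2", "a1", "a2"],
    "B": ["b1", "b1", "b2", "b2", "b1", "b2"],
    "C": ["c1", "c2", "c2", "c1", "c1", "c2"],
})
Z = pd.get_dummies(df).to_numpy(float)       # indicator (dummy) matrix
Q = df.shape[1]

# Standard correspondence analysis of the indicator matrix
P = Z / Z.sum()                              # correspondence matrix
r, c = P.sum(1), P.sum(0)                    # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
lam = np.linalg.svd(S, compute_uv=False)**2  # principal inertias

adj = [(Q / (Q - 1) * (l - 1 / Q))**2 for l in lam if l > 1 / Q]
print("indicator inertias:", np.round(lam[:4], 4))
print("adjusted inertias: ", np.round(adj, 4))
```

With this rescaling, a two-variable data set reproduces simple correspondence analysis exactly, which is the property the abstract emphasizes.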
Abstract:
Joint sounds are a common sign of TMD; their diagnosis is important for establishing the treatment of the pathological alterations that occur in the TMJ. In this study, two groups were selected: (1) asymptomatic volunteers and (2) symptomatic patients diagnosed by clinical examination. After the initial examination, both groups were evaluated using electrovibratography (SonoPAK II, BioResearch Assoc., Inc., Milwaukee, Wisconsin). The analysis of the results indicated that the average vibratory energy in the symptomatic group was higher at all stages of mandibular movement than the average vibratory energy registered in the asymptomatic group.
Abstract:
The study of articular sounds using a computerized system (SonoPAK) in patients with temporomandibular disorders (TMD) of inflammatory origin revealed an increase in vibratory energy compared to asymptomatic individuals. The following conclusions were reached: (1) the amount of vibratory energy registered in these patients ranged from 8.50 to 57.61 Hz, with the major vibrations occurring in the middle of the mandibular opening cycle; (2) the mean vibratory energy measured below 300 Hz was between 5.70 and 48.64 Hz, and above 300 Hz between 3.70 and 8.99 Hz; (3) the peak amplitude in the patients with inflammation ranged from 0.35 to 3.96 pascals, and the peak frequency from 83.20 to 120.20 Hz.
Abstract:
The goal of this study was to analyze the mode of inheritance of an overweight body condition in an experimental cat population. The population consisted of 95 cats, of which 81 could be clearly classified as lean or overweight using the body condition scoring system of Laflamme. This lean/overweight classification was then used for segregation analyses. Complex segregation analyses were employed to test the significance of one environmental and four genetic models (general, mixed inheritance, major gene, and polygene). The general genetic model fit the data significantly better than the environmental model (P = 0.0013). Among all the other models employed, the major-gene model best explained the segregation of the overweight phenotype. This is the first study in which a genetic component has been shown to be responsible for the development of overweight in cats.
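The model comparisons in complex segregation analysis reduce to likelihood-ratio tests between nested models fitted by maximum likelihood. A minimal sketch with placeholder log-likelihoods; in the study, the actual values would come from fitting the environmental, general, mixed-inheritance, major-gene, and polygene models to the pedigree data:

```python
# Hedged sketch of the model-comparison step in complex segregation
# analysis. The log-likelihood values and degrees of freedom below are
# invented placeholders, not results from the cat study.
from scipy.stats import chi2

def lr_test(loglik_full, loglik_reduced, df):
    """Likelihood-ratio statistic and p-value for a reduced model nested
    in a full model (chi-square approximation)."""
    stat = 2 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df)

# e.g. general genetic model (full) vs purely environmental model (reduced)
stat, p = lr_test(loglik_full=-102.4, loglik_reduced=-110.1, df=3)
print(f"LR statistic = {stat:.2f}, p = {p:.4f}")
```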
Abstract:
In Operational Modal Analysis (OMA) of a structure, the data acquisition process may be repeated many times. In these cases, the analyst has several similar records for the modal analysis of the structure, obtained at different time instants (multiple records). The solution obtained varies from one record to another, sometimes considerably. The differences are due to several causes: statistical estimation errors, changes in the external (unmeasured) forces that modify the output spectra, the appearance of spurious modes, etc. Combining the results of the different individual analyses is not straightforward. To solve the problem, we propose to estimate the parameters jointly using all the records. This can be done in a very simple way using state-space models, computing the estimates by maximum likelihood. The method provides a single result for the modal parameters that optimally combines all the records.
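The key point is that, given a single set of state-space parameters, independent records contribute additively to the log-likelihood, so one joint maximization replaces several separate ones. A minimal sketch with a toy scalar state-space model standing in for a full modal model; all data are simulated for illustration:

```python
# Joint maximum-likelihood estimation over multiple records: the
# Kalman-filter log-likelihoods of the records are summed and maximized
# over one shared parameter. Toy model: x[t+1] = a*x[t] + w, y[t] = x[t] + v.
import numpy as np
from scipy.optimize import minimize_scalar

def kalman_loglik(a, y, q=1.0, r=1.0):
    """Log-likelihood of one record y under the scalar state-space model."""
    x, p, ll = 0.0, 1.0, 0.0
    for yt in y:
        s = p + r                                  # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (yt - x)**2 / s)
        k = p / s                                  # Kalman gain
        x, p = x + k * (yt - x), (1 - k) * p       # measurement update
        x, p = a * x, a * a * p + q                # time update
    return ll

rng = np.random.default_rng(0)
def simulate(a, n):
    x, ys = 0.0, []
    for _ in range(n):
        x = a * x + rng.normal()
        ys.append(x + rng.normal())
    return np.array(ys)

records = [simulate(0.8, 400) for _ in range(3)]   # three separate records

# One parameter, one optimization: the records' log-likelihoods add up.
res = minimize_scalar(lambda a: -sum(kalman_loglik(a, y) for y in records),
                      bounds=(-0.99, 0.99), method="bounded")
print("joint ML estimate of a:", round(res.x, 3))
```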
Abstract:
Computing the modal parameters of large structures in Operational Modal Analysis often requires to process data from multiple non simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors that are fixed for all the measurements, while the other sensors are moved from one setup to the next. One possibility is to process the setups separately what result in different modal parameter estimates for each setup. Then the reference sensors are used to merge or glue the different parts of the mode shapes to obtain global modes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a state space model that can be used to process all setups at once so the global mode shapes are obtained automatically and subsequently only a value for the natural frequency and damping ratio of each mode is computed. We also present how this model can be estimated using maximum likelihood and the Expectation Maximization algorithm. We apply this technique to real data measured at a footbridge.
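For contrast with the joint approach, the conventional merging step the abstract describes can be sketched as follows: each setup's partial mode shape is rescaled so that its reference-sensor components match a base setup in the least-squares sense, and the rescaled roving parts are glued into one global shape. Sensor layout and values below are invented for illustration:

```python
# Hedged sketch of setup-by-setup mode-shape merging via reference sensors.
# Each setup yields (phi_ref, phi_rov): the mode-shape components at the
# shared reference sensors and at that setup's roving sensors.
import numpy as np

def glue(phi_ref_base, setups):
    """Rescale each setup so its reference part matches phi_ref_base in a
    least-squares sense, then concatenate the roving parts."""
    glued = [phi_ref_base]
    for phi_ref, phi_rov in setups:
        scale = (phi_ref @ phi_ref_base) / (phi_ref @ phi_ref)  # LS scaling
        glued.append(scale * phi_rov)
    return np.concatenate(glued)

phi_ref_base = np.array([1.00, 0.62])                  # reference sensors
setups = [
    (np.array([0.51, 0.33]), np.array([0.21, 0.44])),  # setup 2 (ref, roving)
    (np.array([1.92, 1.21]), np.array([0.85, 0.10])),  # setup 3 (ref, roving)
]
print(glue(phi_ref_base, setups))    # one global mode shape
# Frequencies and damping ratios would then be averaged across setups,
# which is exactly the step the joint state-space model makes unnecessary.
```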
Abstract:
Longitudinal joint quality control/assurance is essential to the successful performance of asphalt pavements, and it has received considerable attention in recent years. The purpose of this study is to evaluate the level of compaction at the longitudinal joint and to determine the effect of segregation on longitudinal joint performance. Five paving projects using the traditional butt joint, infrared joint heater, edge restraint by milling, and modified butt joint with hot-pinch longitudinal-joint construction techniques were selected for this study. For each project, field density and permeability tests were performed, and cores were taken from the pavement for laboratory permeability, air-void, and indirect tensile strength testing. Asphalt content and gradations were also obtained to determine joint segregation. In general, this study finds that the minimum required joint density should be around 90.0% of the theoretical maximum density based on the AASHTO T166 method. The edge-restraint-by-milling and infrared-heated butt-joint construction methods both produce joint densities above this 90.0% limit, while the traditional butt joint exhibits lower density and higher permeability than the criterion. In addition, all of the projects appear to have segregation at the longitudinal joint except for the edge-restraint-by-milling method.
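The density criterion amounts to simple arithmetic: the core's bulk specific gravity is expressed as a percentage of the theoretical maximum density and compared with the proposed 90.0% floor. A tiny sketch with invented specific-gravity values:

```python
# Hedged sketch of the acceptance check implied by the abstract. The
# Gmb/Gmm values are invented for illustration, not project data.
def percent_of_tmd(gmb, gmm):
    """Bulk specific gravity (AASHTO T166) as a percentage of the
    theoretical maximum density (TMD)."""
    return 100.0 * gmb / gmm

for label, gmb in [("butt joint", 2.18), ("milled edge", 2.25)]:
    pct = percent_of_tmd(gmb, gmm=2.48)
    verdict = "pass" if pct >= 90.0 else "fail"
    print(f"{label}: {pct:.1f}% of TMD -> {verdict}")
```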
Abstract:
When continuous data are coded as categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed into a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories with the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resulting orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed, and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed for the case where the data set consists of a mixture of discrete and continuous variables.
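As an illustration of fuzzy coding into three categories, each continuous value can be mapped to membership degrees that sum to 1 via triangular membership functions. The hinge points below are invented, and the paper's actual coding scheme may differ:

```python
# Minimal sketch of fuzzy coding: a continuous value is mapped to degrees
# of membership in (low, medium, high) using triangular membership
# functions with hinges lo < mid < hi. Crisp coding would instead produce
# a single 1 and two 0s per observation.
import numpy as np

def fuzzy_code(x, lo, mid, hi):
    """Return (low, medium, high) memberships for scalar x; they sum to 1."""
    if x <= lo:
        return (1.0, 0.0, 0.0)
    if x <= mid:
        m = (x - lo) / (mid - lo)
        return (1.0 - m, m, 0.0)
    if x <= hi:
        m = (x - mid) / (hi - mid)
        return (0.0, 1.0 - m, m)
    return (0.0, 0.0, 1.0)

temps = [2.0, 11.5, 18.0, 27.0]                 # e.g. temperatures, deg C
codes = np.array([fuzzy_code(t, 5.0, 15.0, 25.0) for t in temps])
print(codes)                                     # each row sums to 1
```

Defuzzification then reverses the mapping, recovering an estimate of the original value from the membership degrees, which is what allows the percentage-of-variance measure of fit described above.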