10 results for Phenotypic correlation estimates

in CentAUR: Central Archive, University of Reading, UK


Relevance:

80.00%

Publisher:

Abstract:

A sequential study design generally makes more efficient use of available information than a fixed sample counterpart of equal power. This feature is gradually being exploited by researchers in genetic and epidemiological investigations that utilize banked biological resources and in studies where time, cost and ethics are prominent considerations. Recent work in this area has focussed on the sequential analysis of matched case-control studies with a dichotomous trait. In this paper, we extend the sequential approach to a comparison of the associations within two independent groups of paired continuous observations. Such a comparison is particularly relevant in familial studies of phenotypic correlation using twins. We develop a sequential twin method based on the intraclass correlation and show that use of sequential methodology can lead to a substantial reduction in the number of observations without compromising the study error rates. Additionally, our approach permits straightforward allowance for other explanatory factors in the analysis. We illustrate our method in a sequential heritability study of dysplasia that allows for the effect of body mass index and compares monozygotes with pairs of singleton sisters. Copyright (c) 2006 John Wiley & Sons, Ltd.
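The intraclass correlation at the heart of the twin method above can be computed from paired observations with the standard one-way ANOVA estimator. A minimal sketch in Python; this is the classical fixed-sample estimator, not the authors' sequential procedure, and the example data are invented:

```python
import numpy as np

def intraclass_correlation(pairs):
    """One-way ANOVA estimator of the intraclass correlation for
    n pairs of continuous observations (e.g. twin phenotypes)."""
    pairs = np.asarray(pairs, dtype=float)
    n = pairs.shape[0]
    grand_mean = pairs.mean()
    pair_means = pairs.mean(axis=1)
    # between-pair and within-pair mean squares (k = 2 members per pair)
    msb = 2.0 * np.sum((pair_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((pairs - pair_means[:, None]) ** 2) / n
    return (msb - msw) / (msb + msw)
```

In a sequential design, a statistic like this would be recomputed as pairs accrue and compared against pre-specified stopping boundaries, which is what allows the study to stop early without inflating the error rates.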

Relevance:

40.00%

Publisher:

Abstract:

Background. The anaerobic spirochaete Brachyspira pilosicoli causes enteric disease in avian, porcine and human hosts, amongst others. To date, the only available genome sequence of B. pilosicoli is that of strain 95/1000, a porcine isolate. In the first intra-species genome comparison within the Brachyspira genus, we report the whole genome sequence of B. pilosicoli B2904, an avian isolate, the incomplete genome sequence of B. pilosicoli WesB, a human isolate, and comparisons with B. pilosicoli 95/1000. We also draw on incomplete genome sequences from three other Brachyspira species. Finally, we report the first application of the high-throughput Biolog phenotype screening tool to the B. pilosicoli strains for detailed comparisons between genotype and phenotype. Results. Feature and sequence genome comparisons revealed a high degree of similarity between the three B. pilosicoli strains, although the genomes of B2904 and WesB were larger than that of 95/1000 (~2.765, 2.890 and 2.596 Mb, respectively). Genome rearrangements were observed which correlated largely with the positions of mobile genetic elements. Through comparison of the B2904 and WesB genomes with the 95/1000 genome, features that we propose are non-essential due to their absence from 95/1000 include a peptidase, glycine reductase complex components and transposases. Novel bacteriophages were detected in the newly sequenced genomes, which appear to be involved in intra- and inter-species horizontal gene transfer. Phenotypic differences predicted from genome analysis, such as the lack of genes for glucuronate catabolism in 95/1000, were confirmed by phenotyping. Conclusions. The availability of multiple B. pilosicoli genome sequences has allowed us to demonstrate the substantial genomic variation that exists between these strains, and provides an insight into the genetic events that are shaping the species. In addition, phenotype screening allowed determination of how genotypic differences translate to phenotype. Further application of such comparisons will improve understanding of the metabolic capabilities of Brachyspira species.

Relevance:

30.00%

Publisher:

Abstract:

Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, in which the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model, with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. Using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can then be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. When integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates, even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
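The thermohaline-threshold method described above reduces to masking the section velocities by temperature and salinity criteria, integrating, and rescaling with a regression fitted against the float-derived leakage. A hedged sketch, assuming gridded section fields; the threshold values and regression coefficients below are illustrative placeholders, not the calibrated values from the model study:

```python
import numpy as np

def thermohaline_leakage(v, T, S, area,
                         T_min=14.0, S_min=35.0,
                         slope=2.0, intercept=0.0):
    """Eulerian estimate of Agulhas leakage across a section:
    integrate velocity (m/s) over cell areas (m^2) only where water is
    warmer and saltier than the thresholds, then rescale with a linear
    regression fitted to float-based leakage. Thresholds and regression
    coefficients here are illustrative, not the study's fitted values."""
    mask = (T > T_min) & (S > S_min)
    direct = np.sum(v * area * mask) / 1e6   # directly measured flux, in Sv
    return slope * direct + intercept        # regression-corrected leakage
```

The slope of roughly two reflects the paper's finding that only about half of the leakage is captured directly by the threshold integration.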

Relevance:

30.00%

Publisher:

Abstract:

Time correlation functions yield profound information about the dynamics of a physical system and hence are frequently calculated in computer simulations. For systems whose dynamics span a wide range of time, currently used methods require significant computer time and memory. In this paper, we discuss the multiple-tau correlator method for the efficient calculation of accurate time correlation functions on the fly during computer simulations. The multiple-tau correlator is efficacious in terms of computational requirements and can be tuned to the desired level of accuracy. Further, we derive estimates for the error arising from the use of the multiple-tau correlator and extend it for use in the calculation of mean-square particle displacements and dynamic structure factors. The method described here, in hardware implementation, is routinely used in light scattering experiments but has not yet found widespread use in computer simulations.
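The idea behind the multiple-tau correlator is a hierarchy of buffers: each level correlates over a fixed number of lags at its own sampling rate, and coarse-grained (pair-averaged) samples feed the next level, so the covered lag times grow geometrically while storage stays proportional to levels × lags. A minimal sketch of this scheme; a simplified illustration, not the exact accumulator layout analyzed in the paper:

```python
import numpy as np

class MultipleTauCorrelator:
    """Minimal on-the-fly multiple-tau autocorrelator (illustrative
    sketch). Each level stores up to `p` recent samples; every second
    sample is pair-averaged and pushed to the next, coarser level."""

    def __init__(self, levels=8, p=16):
        self.p = p
        self.buf = [[] for _ in range(levels)]   # recent samples per level
        self.corr = np.zeros((levels, p))        # accumulated lag products
        self.count = np.zeros((levels, p))       # products accumulated per lag

    def add(self, x, level=0):
        if level >= len(self.buf):
            return
        b = self.buf[level]
        b.append(x)
        if len(b) > self.p:
            b.pop(0)
        for lag in range(len(b)):                # correlate x against history
            self.corr[level, lag] += x * b[-1 - lag]
            self.count[level, lag] += 1
        # after every second sample, coarse-grain the last pair upward
        if len(b) >= 2 and self.count[level, 0] % 2 == 0:
            self.add(0.5 * (b[-1] + b[-2]), level + 1)

    def result(self, level=0):
        """Average autocorrelation at each lag of the given level."""
        return self.corr[level] / np.maximum(self.count[level], 1)
```

In a simulation one would call `add` every timestep and read off `result(level)`, with the lag spacing at each level a factor of `2**level` coarser than the raw sampling interval.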

Relevance:

30.00%

Publisher:

Abstract:

Aims: We conducted a systematic review of studies examining relationships between measures of beverage alcohol tax or price levels and alcohol sales or self-reported drinking. A total of 112 studies of alcohol tax or price effects were found, containing 1003 estimates of the tax/price–consumption relationship. Design: Studies included analyses of alternative outcome measures, varying subgroups of the population, several statistical models and different units of analysis. Multiple estimates were coded from each study, along with numerous study characteristics. Using reported estimates, standard errors, t-ratios, sample sizes and other statistics, we calculated the partial correlation for the relationship between alcohol price or tax and sales or drinking measures for each major model or subgroup reported within each study. Random-effects models were used to combine studies for inverse-variance-weighted overall estimates of the magnitude and significance of the relationship between alcohol tax/price and drinking. Findings: Simple means of reported elasticities are -0.46 for beer, -0.69 for wine and -0.80 for spirits. Meta-analytical results document the highly significant relationships (P < 0.001) between alcohol tax or price measures and indices of sales or consumption of alcohol (aggregate-level r = -0.17 for beer, -0.30 for wine, -0.29 for spirits and -0.44 for total alcohol). Price/tax also affects heavy drinking significantly (mean reported elasticity = -0.28, individual-level r = -0.01, P < 0.01), but the magnitude of effect is smaller than effects on overall drinking. Conclusions: A large literature establishes that beverage alcohol prices and taxes are related inversely to drinking. Effects are large compared to other prevention policies and programs. Public policies that raise prices of alcohol are an effective means to reduce drinking.
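The coding step described above, recovering a partial correlation from each study's reported t-ratio and then pooling on Fisher's z scale with inverse-variance weights, can be sketched as follows. For simplicity this shows fixed-effect pooling; the paper fits random-effects models:

```python
import math

def partial_r(t, df):
    """Partial correlation recovered from a reported t-ratio and
    residual degrees of freedom, as coded from each study."""
    return t / math.sqrt(t * t + df)

def combined_r(rs, ns):
    """Inverse-variance pooling of correlations on Fisher's z scale
    (fixed-effect version, a simplified stand-in for the paper's
    random-effects model)."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z
    ws = [n - 3 for n in ns]                              # var(z) = 1/(n - 3)
    z = sum(w * zi for w, zi in zip(ws, zs)) / sum(ws)
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)  # back-transform
```

A random-effects version would add a between-study variance component to each weight; the back-transformed pooled z is what yields overall r values like the -0.44 reported for total alcohol.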

Relevance:

30.00%

Publisher:

Abstract:

The deployment of genetic markers is of interest in crop assessment and breeding programmes, due to the potential savings in cost and time they afford. As part of the internationally recognised framework for the awarding of Plant Breeders' Rights (PBR), new barley variety submissions are evaluated using a suite of morphological traits to ensure they are distinct, uniform and stable (DUS) in comparison to all previous submissions. Increasing knowledge of the genetic control of many of these traits provides the opportunity to assess the potential of deploying diagnostic/perfect genetic markers in place of phenotypic assessment. Here, we identify a suite of 25 genetic markers assaying for 14 DUS traits, and implement them using a single genotyping platform (KASPar). Using a panel of 169 UK barley varieties, we show that the phenotypic state of three of these traits can be perfectly predicted by genotype. Predictive values for an additional nine traits ranged from 81 to 99%. Finally, comparison of varietal discrimination based on phenotype with that based on genotype resulted in a correlation of 0.72, indicating that deployment of molecular markers for varietal discrimination could be feasible in the near future. Due to the flexibility of the genotyping platform used, the genetic markers described here can be used in any number or combination, in-house or by outsourcing, allowing flexible deployment by users. These markers are likely to find application where tracking of specific alleles is required in breeding programmes, or for potential use within national assessment programmes for the awarding of PBRs.
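A trait's predictive value in this setting is the concordance between the phenotype state predicted from the diagnostic marker genotype and the recorded DUS phenotype across the variety panel. A minimal sketch with invented calls; an illustrative concordance measure, not necessarily the paper's exact statistic:

```python
def predictive_value(predicted_states, recorded_states):
    """Percentage of varieties whose recorded DUS phenotype state
    matches the state predicted from the marker genotype."""
    matches = sum(g == p for g, p in zip(predicted_states, recorded_states))
    return 100.0 * matches / len(predicted_states)
```

A perfectly predictive marker scores 100%, corresponding to the three traits in the study whose phenotypic state was fully determined by genotype.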

Relevance:

30.00%

Publisher:

Abstract:

The global cycle of multicomponent aerosols including sulfate, black carbon (BC), organic matter (OM), mineral dust, and sea salt is simulated in the Laboratoire de Météorologie Dynamique general circulation model (LMDZT GCM). The seasonal open biomass burning emissions for simulation years 2000–2001 are scaled from climatological emissions in proportion to satellite-detected fire counts. The emissions of dust and sea salt are parameterized online in the model. Comparison of model-predicted monthly mean aerosol optical depth (AOD) at 500 nm with Aerosol Robotic Network (AERONET) data shows good agreement, with a correlation coefficient of 0.57 (N = 1324) and 76% of data points falling within a factor of 2 deviation. The correlation coefficient for daily mean values drops to 0.49 (N = 23,680). The absorption AOD (τa at 670 nm) estimated in the model is poorly correlated with measurements (r = 0.27, N = 349) and is biased low by 24% compared to AERONET. The model reproduces the prominent features in the monthly mean AOD retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS). The agreement between the model and MODIS is better over source and outflow regions (i.e., within a factor of 2). The model underestimates AOD by up to a factor of 3 to 5 over some remote oceans. The largest contribution to the global annual average AOD (0.12 at 550 nm) is from sulfate (0.043 or 35%), followed by sea salt (0.027 or 23%), dust (0.026 or 22%), OM (0.021 or 17%), and BC (0.004 or 3%). Atmospheric aerosol absorption is contributed predominantly by BC and is about 3% of the total AOD. The globally and annually averaged shortwave (SW) direct aerosol radiative perturbation (DARP) in clear-sky conditions is −2.17 W m⁻², about a factor of 2 larger than in all-sky conditions (−1.04 W m⁻²). The net DARP (SW + LW) by all aerosols is −1.46 and −0.59 W m⁻² in clear- and all-sky conditions, respectively. Use of realistic dust optical properties that are less absorbing in the SW results in negative forcing over dust-dominated regions.

Relevance:

30.00%

Publisher:

Abstract:

Results from all phases of the orbits of the Ulysses spacecraft have shown that the magnitude of the radial component of the heliospheric field is approximately independent of heliographic latitude. This result allows the use of near-Earth observations to compute the total open flux of the Sun. For example, using satellite observations of the interplanetary magnetic field, the average open solar flux was shown to have risen by 29% between 1963 and 1987, and using the aa geomagnetic index it was found to have doubled during the 20th century. It is therefore important to assess fully the accuracy of the result and to check that it applies to all phases of the solar cycle. The first perihelion pass of the Ulysses spacecraft was close to sunspot minimum, and recent data from the second perihelion pass show that the result also holds at solar maximum. The high level of correlation between the open flux derived from the various methods strongly supports the Ulysses discovery that the radial field component is independent of latitude. We show here that the errors introduced into open solar flux estimates by assuming that the heliospheric field's radial component is independent of latitude are similar for the two passes and are of order 25% for daily values, falling to 5% for averaging timescales of 27 days or greater. We compare the results of four methods for estimating the open solar flux with results from the first and second perihelion passes by Ulysses. We find that the errors are lowest (1–5% for averages over the entire perihelion passes, lasting near 320 days) for near-Earth methods, based on either interplanetary magnetic field observations or the aa geomagnetic activity index. The corresponding errors for the Solanki et al. (2000) model are of the order of 9–15%, and for the PFSS method, based on solar magnetograms, of the order of 13–47%. The model of Solanki et al. is based on the continuity equation of open flux, and uses the sunspot number to quantify the rate of open flux emergence. It predicts that the average open solar flux has been decreasing since 1987, as is observed in the variation of all the estimates of the open flux. This decline combines with the solar cycle variation to produce an open flux during the second (sunspot maximum) perihelion pass of Ulysses which is only slightly larger than that during the first (sunspot minimum) perihelion pass.
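The latitude independence of |Br| is what lets a single-point measurement yield a global quantity: the total unsigned open flux is simply |Br| times the area of the heliocentric sphere through the observation point. A minimal sketch of that conversion (note that some authors quote the signed open flux, which is half this value):

```python
import math

def open_solar_flux(br_nT, r_AU=1.0):
    """Total unsigned open solar flux (Wb) from the magnitude of the
    radial heliospheric field |Br| (nT) at heliocentric distance r,
    assuming, per the Ulysses result, that |Br| is independent of
    heliographic latitude: F = 4 * pi * r^2 * |Br|."""
    AU = 1.496e11                           # metres per astronomical unit
    r = r_AU * AU
    return 4.0 * math.pi * r**2 * br_nT * 1e-9
```

With a typical near-Earth |Br| of a few nT this gives an unsigned flux of order 10^15 Wb, the quantity whose long-term rise and post-1987 decline are discussed above.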

Relevance:

30.00%

Publisher:

Abstract:

Studies with a diverse array of 22 purified condensed tannin (CT) samples from nine plant species demonstrated that procyanidin/prodelphinidin (PC/PD) and cis/trans-flavan-3-ol ratios can be appraised by 1H-13C HSQC NMR spectroscopy. The method was developed from samples containing 44 to ~100% CT, PC/PD ratios ranging from 0/100 to 99/1, and cis/trans ratios from 58/42 to 95/5 as determined by thiolysis with benzyl mercaptan. Integration of cross-peak contours of H/C-6' signals from PC and of H/C-2',6' signals from PD yielded nuclei adjusted estimates that were highly correlated with PC/PD ratios obtained by thiolysis (R2 = 0.99). Cis/trans-flavan-3-ol ratios, obtained by integration of the respective H/C-4 cross-peak contours, were also related to determinations made by thiolysis (R2 = 0.89). Overall, 1H-13C HSQC NMR spectroscopy appears to be a viable alternative to thiolysis for estimating PC/PD and cis/trans ratios of CT, if precautions are taken to avoid integration of cross-peak contours of contaminants.
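The "nuclei adjusted" estimates mentioned above reflect that the PD H/C-2',6' cross-peak arises from two equivalent ring positions while the PC H/C-6' cross-peak arises from one, so the PD integral is halved before ratios are formed. A minimal sketch with invented integral values:

```python
def pc_pd_ratio(pc_integral, pd_integral):
    """PC/PD ratio (as percentages) from HSQC cross-peak volumes,
    adjusting for the number of contributing nuclei: the PD H/C-2',6'
    signal arises from two equivalent positions, the PC H/C-6' signal
    from one."""
    pd = pd_integral / 2.0            # nuclei adjustment for PD
    total = pc_integral + pd
    return 100.0 * pc_integral / total, 100.0 * pd / total
```

The same integration-and-ratio logic applies to the cis/trans determination from the respective H/C-4 cross-peak contours.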

Relevance:

30.00%

Publisher:

Abstract:

The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable: L = -log10(1 - ρhv) is defined, which does have Gaussian error statistics, and a standard deviation depending only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) in the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application, otherwise there could be biases in retrieved µ of up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, rain rate would be overestimated by up to 50% if a simple exponential DSD is assumed.
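Because L = -log10(1 - ρhv) has Gaussian error statistics, a symmetric confidence interval can be built in L space and mapped back to ρhv, where it becomes appropriately asymmetric. A hedged sketch; the error model sigma_L = 1/sqrt(n_pulses) is an illustrative placeholder for the pulse-count-dependent standard deviation, not the paper's fitted expression:

```python
import math

def rho_hv_confidence_interval(rho_hv, n_pulses, z=1.96):
    """Approximate 95% confidence interval for an estimated co-polar
    correlation coefficient, built on the transform
    L = -log10(1 - rho_hv), which has Gaussian errors.
    sigma_L = 1/sqrt(n_pulses) is an assumed error model."""
    L = -math.log10(1.0 - rho_hv)
    sigma_L = 1.0 / math.sqrt(n_pulses)      # assumed, for illustration
    lo_L, hi_L = L - z * sigma_L, L + z * sigma_L
    # map the symmetric interval in L back to (asymmetric) rho_hv space
    return 1.0 - 10.0 ** (-lo_L), 1.0 - 10.0 ** (-hi_L)
```

The interval is wider below the point estimate than above it, matching the negatively skewed sampling distribution of ρhv itself.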