112 results for Statistical methodologies
Abstract:
Current gas-based in vitro evaluation systems are extremely powerful research techniques. However, they have the potential to generate a great deal more than simple fermentation dynamics. Details from four experiments are presented in which adaptation, and novel application, of an in vitro system allowed widely differing objectives to be examined. In the first two studies, complement methodologies were utilised. In such assays, an activity or outcome is inferred through the occurrence of a secondary event rather than by direct observation. Using an N-deficient incubation medium, the increase in starch fermentation, when supplemented with individual amino acids (i.e., known level of N) relative to that of urea (i.e., known quantity and N availability), provided an estimate of their microbial utilisation. Due to the low level of response observed with some amino acids (notably methionine and lysine), it was concluded that they may not need to be offered in a rumen-inert form to escape rumen microbial degradation. In another experiment, the extent to which degradation of plant cell wall components was inhibited by lipid supplementation was evaluated using fermentation gas release profiles of washed hay. The different responses due to lipid source and level of inclusion suggested that the degree of rumen protection required to ameliorate this depression was supplement dependent. That in vitro inocula differ in their microbial composition is of little interest per se, as long as the outcome is the same (i.e., that similar substrates are degraded at comparable rates and end-product release is equivalent). However, where a microbial population is deficient in a particular activity, increasing the level of inoculation will have no benefit. Estimates of hydrolytic activity were obtained by examining fermentation kinetics of specific substrates.
A number of studies identified a fundamental difference between rumen fluid and faecal inocula, with the latter having a lower fibrolytic activity, which could not be completely attributed to microbial numbers. The majority of forage maize is offered as an ensiled feed; however, most of the information on which decisions such as choice of variety, crop management and harvesting date are made is based on fresh crop measurements. As such, an attempt was made to estimate ensiled maize quality from an in vitro analysis of the fresh crop. Fermentation profiles and chemical analysis confirmed changes in crop composition over the growing season, and loss of labile carbohydrates during ensiling. In addition, examination of degradation residues allowed metabolizable energy (ME) contents to be estimated. Due to difficulties associated with starch analysis, the observation that this parameter could be predicted by difference (together with an assumed degradability) allowed an estimate of ensiled maize ME to be developed from fresh material. In addition, the contribution of the main carbohydrates towards ME showed the importance of delaying harvest until maximum starch content has been achieved. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
This review considers microbial inocula used in in vitro systems from the perspective of their ability to degrade or ferment a particular substrate, rather than the microbial species they contain. By necessity, this required an examination of bacterial, protozoal and fungal populations of the rumen and hindgut with respect to factors influencing their activity. The potential to manipulate these populations through diet or sampling time is examined, as is inoculum preparation and level. The main alternatives to fresh rumen fluid (i.e., caecal digesta or faeces) are discussed with respect to end-point degradabilities and fermentation dynamics. Although the potential to use rumen contents obtained from donor animals at slaughter offers possibilities, the requirement to store it and its subsequent loss of activity are limitations. Statistical modelling of data, although still requiring a great deal of developmental work, may offer an alternative approach. Finally, with respect to the range of in vitro methodologies and equipment employed, it is suggested that a degree of uniformity could be obtained through generation of a set of guidelines relating to the host animal, sampling technique and inoculum preparation. It was considered unlikely that any particular system would be accepted as the 'standard' procedure. However, before any protocol can be adopted, additional data are required (e.g., a method to assess inoculum 'quality' with respect to its fermentative and/or degradative activity), preparation/inoculation techniques need to be refined and a methodology to store inocula without loss of efficacy developed. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
The conventional method for the assessment of acute dermal toxicity (OECD Test Guideline 402, 1987) uses death of animals as an endpoint to identify the median lethal dose (LD50). A new OECD Testing Guideline called the dermal fixed dose procedure (dermal FDP) is being prepared to provide an alternative to Test Guideline 402. In contrast to Test Guideline 402, the dermal FDP does not provide a point estimate of the LD50, but aims to identify the dose of the substance under investigation that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonised System of Classification and Labelling scheme (GHS). The dermal FDP has been validated using statistical modelling rather than by in vivo testing. The statistical modelling approach enables calculation of the probability of each GHS classification and the expected numbers of deaths and animals used in the test for imaginary substances with a range of LD50 values and dose-response curve slopes. This paper describes the dermal FDP and reports the results from the statistical evaluation. It is shown that the procedure will be completed with considerably less death and suffering than Test Guideline 402, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LD50 value.
Statistical evaluation of the fixed concentration procedure for acute inhalation toxicity assessment
Abstract:
The conventional method for the assessment of acute inhalation toxicity (OECD Test Guideline 403, 1981) uses death of animals as an endpoint to identify the median lethal concentration (LC50). A new OECD Testing Guideline called the Fixed Concentration Procedure (FCP) is being prepared to provide an alternative to Test Guideline 403. Unlike Test Guideline 403, the FCP does not provide a point estimate of the LC50, but aims to identify an airborne exposure level that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonized System of Classification and Labelling scheme (GHS). The FCP has been validated using statistical simulation rather than by in vivo testing. The statistical simulation approach predicts the GHS classification outcome and the numbers of deaths and animals used in the test for imaginary substances with a range of LC50 values and dose-response curve slopes. This paper describes the FCP and reports the results from the statistical simulation study assessing its properties. It is shown that the procedure will be completed with considerably less death and suffering than Test Guideline 403, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LC50 value.
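The core of the simulation approach described in this abstract is that the outcome for an "imaginary substance" is fully determined by its LC50 and dose-response slope. A minimal illustrative sketch (not the validated OECD simulation; the probit parameterisation, group size and numbers below are assumptions for illustration) of how expected deaths at a fixed exposure level can be computed from those two parameters:

```python
import math

def p_death(conc, lc50, slope):
    """Log-probit dose-response: P(death) = Phi(slope * log10(conc / lc50))."""
    z = slope * math.log10(conc / lc50)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_deaths(conc, lc50, slope, n_animals=5):
    """Expected number of deaths in a group of n_animals at a fixed concentration."""
    return n_animals * p_death(conc, lc50, slope)

# Testing at half the LC50: a steep curve (slope 4) versus a shallow one (slope 1).
steep = expected_deaths(conc=50.0, lc50=100.0, slope=4.0)
shallow = expected_deaths(conc=50.0, lc50=100.0, slope=1.0)
```

Repeating such a calculation over a grid of LC50 values and slopes is what allows the expected deaths, animal usage and classification outcome of a fixed-concentration design to be tabulated without in vivo testing; note that below the LC50 a steeper curve implies fewer expected deaths.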
Abstract:
The fixed-dose procedure (FDP) was introduced as OECD Test Guideline 420 in 1992, as an alternative to the conventional median lethal dose (LD50) test for the assessment of acute oral toxicity (OECD Test Guideline 401). The FDP uses fewer animals and causes less suffering than the conventional test, while providing information on the acute toxicity to allow substances to be ranked according to the EU hazard classification system. Recently the FDP has been revised, with the aim of providing further reductions and refinements, and classification according to the criteria of the Globally Harmonized Hazard Classification and Labelling scheme (GHS). This paper describes the revised FDP and analyses its properties, as determined by a statistical modelling approach. The analysis shows that the revised FDP classifies substances for acute oral toxicity generally in the same, or a more stringent, hazard class as that based on the LD50 value, according to either the GHS or the EU classification scheme. The likelihood of achieving the same classification is greatest for substances with a steep dose-response curve and a median toxic dose (TD50) close to the LD50. The revised FDP usually requires five or six animals, with two or fewer dying as a result of treatment.
Abstract:
Pharmacogenetic trials investigate the effect of genotype on treatment response. When there are two or more treatment groups and two or more genetic groups, investigation of gene-treatment interactions is of key interest. However, calculation of the power to detect such interactions is complicated because this depends not only on the treatment effect size within each genetic group, but also on the number of genetic groups, the size of each genetic group, and the type of genetic effect that is both present and tested for. The scale chosen to measure the magnitude of an interaction can also be problematic, especially for the binary case. Elston et al. proposed a test for detecting the presence of gene-treatment interactions for binary responses, and gave appropriate power calculations. This paper shows how the same approach can also be used for normally distributed responses. We also propose a method for analysing and performing sample size calculations based on a generalized linear model (GLM) approach. The power of the Elston et al. and GLM approaches is compared for the binary and normal cases using several illustrative examples. While more sensitive to errors in model specification than the Elston et al. approach, the GLM approach is much more flexible and in many cases more powerful. Copyright © 2005 John Wiley & Sons, Ltd.
Abstract:
The proportional odds model provides a powerful tool for analysing ordered categorical data and setting sample size, although for many clinical trials its validity is questionable. The purpose of this paper is to present a new class of constrained odds models which includes the proportional odds model. The efficient score and Fisher's information are derived from the profile likelihood for the constrained odds model. These results are new even for the special case of proportional odds where the resulting statistics define the Mann-Whitney test. A strategy is described involving selecting one of these models in advance, requiring assumptions as strong as those underlying proportional odds, but allowing a choice of such models. The accuracy of the new procedure and its power are evaluated.
Abstract:
BACKGROUND: The widespread occurrence of feminized male fish downstream of some wastewater treatment works has led to substantial interest from ecologists and public health professionals. This concern stems from the view that the effects observed have a parallel in humans, and that both phenomena are caused by exposure to mixtures of contaminants that interfere with reproductive development. The evidence for a "wildlife-human connection" is, however, weak: Testicular dysgenesis syndrome, seen in human males, is most easily reproduced in rodent models by exposure to mixtures of antiandrogenic chemicals. In contrast, the accepted explanation for feminization of wild male fish is that it results mainly from exposure to steroidal estrogens originating primarily from human excretion. OBJECTIVES: We sought to further explore the hypothesis that endocrine disruption in fish is multicausal, resulting from exposure to mixtures of chemicals with both estrogenic and antiandrogenic properties. METHODS: We used hierarchical generalized linear and generalized additive statistical modeling to explore the associations between modeled concentrations and activities of estrogenic and antiandrogenic chemicals in 30 U.K. rivers and feminized responses seen in wild fish living in these rivers. RESULTS: In addition to the estrogenic substances, antiandrogenic activity was prevalent in almost all treated sewage effluents tested. Further, the results of the modeling demonstrated that feminizing effects in wild fish could be best modeled as a function of their predicted exposure to both antiandrogens and estrogens or to antiandrogens alone. CONCLUSION: The results provide a strong argument for a multicausal etiology of widespread feminization of wild fish in U.K. rivers involving contributions from both steroidal estrogens and xeno-estrogens and from other (as yet unknown) contaminants with antiandrogenic properties. These results may add further credence to the hypothesis that endocrine-disrupting effects seen in wild fish and in humans are caused by similar combinations of endocrine-disrupting chemical cocktails.
Abstract:
In conventional phylogeographic studies, historical demographic processes are elucidated from the geographical distribution of individuals represented on an inferred gene tree. However, the interpretation of gene trees in this context can be difficult as the same demographic/geographical process can randomly lead to multiple different genealogies. Likewise, the same gene trees can arise under different demographic models. This problem has led to the emergence of many statistical methods for making phylogeographic inferences. A popular phylogeographic approach based on nested clade analysis is challenged by the fact that a certain amount of the interpretation of the data is left to the subjective choices of the user, and it has been argued that the method performs poorly in simulation studies. More rigorous statistical methods based on coalescence theory have been developed. However, these methods may also be challenged by computational problems or poor model choice. In this review, we will describe the development of statistical methods in phylogeographic analysis, and discuss some of the challenges facing these methods.
Abstract:
An appropriate model of recent human evolution is not only important to understand our own history, but it is necessary to disentangle the effects of demography and selection on genome diversity. Although most genetic data support the view that our species originated recently in Africa, it is still unclear if it completely replaced former members of the Homo genus, or if some interbreeding occurred during its range expansion. Several scenarios of modern human evolution have been proposed on the basis of molecular and paleontological data, but their likelihood has never been statistically assessed. Using DNA data from 50 nuclear loci sequenced in African, Asian and Native American samples, we show here by extensive simulations that a simple African replacement model with exponential growth has a higher probability (78%) as compared with alternative multiregional evolution or assimilation scenarios. A Bayesian analysis of the data under this best supported model points to an origin of our species approximately 141 thousand years ago (Kya), an exit out of Africa approximately 51 Kya, and a recent colonization of the Americas approximately 10.5 Kya. We also find that the African replacement model explains not only the shallow ancestry of mtDNA or Y-chromosomes but also the occurrence of deep lineages at some autosomal loci, which had formerly been interpreted as a sign of interbreeding with Homo erectus.
Abstract:
Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree, and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated evolution network we derive displays the familiar power law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections in the network being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure and evolutionary changes to protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. 
Given these rules of assembly, power law scaled networks naturally emerge from simple principles of selection, yielding protein interaction networks that retain a high degree of robustness on short time scales and evolvability on longer evolutionary time scales.
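The 'variable rate of attachment' idea contrasted with preferential attachment in this abstract can be made concrete with a toy growth process. The sketch below is an illustration under invented assumptions (exponentially distributed propensities, two links per joining node), not the model derived in the paper: each node draws its own attachment propensity when it joins, and new links attach in proportion to that propensity rather than to current degree.

```python
import random

def grow_network(n_nodes, links_per_node=2, seed=7):
    """Grow a network where attachment is weighted by each node's fixed
    propensity, independent of its degree. Returns the degree sequence."""
    rng = random.Random(seed)
    propensity = [rng.expovariate(1.0)]   # founding node's propensity
    degree = [0]
    for new in range(1, n_nodes):
        propensity.append(rng.expovariate(1.0))
        degree.append(0)
        total = sum(propensity[:new])
        for _ in range(min(links_per_node, new)):
            # choose an existing target weighted by propensity, NOT by degree
            r = rng.uniform(0.0, total)
            acc, target = 0.0, 0
            for i in range(new):
                acc += propensity[i]
                if acc >= r:
                    target = i
                    break
            degree[target] += 1
            degree[new] += 1
    return degree

degrees = grow_network(50)
```

Because heterogeneous propensities concentrate links on a few high-propensity nodes, the resulting degree distribution can look heavy-tailed even though no node is favoured for already being well connected, which is the point the abstract makes about apparent power-law scaling without preferential attachment.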
Abstract:
A physically motivated statistical model is used to diagnose variability and trends in wintertime (October - March) Global Precipitation Climatology Project (GPCP) pentad (5-day mean) precipitation. Quasi-geostrophic theory suggests that extratropical precipitation amounts should depend multiplicatively on the pressure gradient, saturation specific humidity, and the meridional temperature gradient. This physical insight has been used to guide the development of a suitable statistical model for precipitation using a mixture of generalized linear models: a logistic model for the binary occurrence of precipitation and a Gamma distribution model for the wet day precipitation amount. The statistical model allows for the investigation of the role of each factor in determining variations and long-term trends. Saturation specific humidity q(s) has a generally negative effect on global precipitation occurrence and on the tropical wet pentad precipitation amount, but has a positive relationship with the pentad precipitation amount at mid- and high latitudes. The North Atlantic Oscillation, a proxy for the meridional temperature gradient, is also found to have a statistically significant positive effect on precipitation over much of the Atlantic region. Residual time trends in wet pentad precipitation are extremely sensitive to the choice of the wet pentad threshold because of increasing trends in low-amplitude precipitation pentads; too low a choice of threshold can lead to a spurious decreasing trend in wet pentad precipitation amounts. However, for not too small thresholds, it is found that the meridional temperature gradient is an important factor for explaining part of the long-term trend in Atlantic precipitation.
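The two-part mixture described in this abstract (a logistic model for whether a pentad is wet, a Gamma GLM for how much falls when it is) can be sketched as follows. This is a hedged illustration of the model structure only: the covariates stand in for saturation specific humidity and the NAO index, and every coefficient value is an invented assumption, not an estimate from the paper.

```python
import math

def occurrence_prob(qs, nao, b=(-0.2, -0.5, 0.3)):
    """Logistic part: logit P(wet) = b0 + b1*qs + b2*nao."""
    eta = b[0] + b[1] * qs + b[2] * nao
    return 1.0 / (1.0 + math.exp(-eta))

def mean_wet_amount(qs, nao, b=(1.0, 0.4, 0.2)):
    """Gamma GLM part with log link: E[amount | wet] = exp(b0 + b1*qs + b2*nao)."""
    return math.exp(b[0] + b[1] * qs + b[2] * nao)

def expected_precip(qs, nao):
    """Unconditional expectation of the mixture: P(wet) * E[amount | wet]."""
    return occurrence_prob(qs, nao) * mean_wet_amount(qs, nao)
```

Separating occurrence from amount is what lets each covariate act differently on the two components, matching the abstract's finding that q(s) can suppress occurrence while raising wet-pentad amounts at mid- and high latitudes.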