Abstract:
A dataset of 1,846,990 completed lactation records was created using milk recording data from 8,967 commercial dairy farms in the United Kingdom over a five-year period. Herd-specific lactation curves describing levels of milk, fat and protein by lactation number and month of calving were generated for each farm. The actual yield of milk and protein proportion at the first milk recording of individual cow lactations were compared with the levels taken from the lactation curves. Logistic regression analysis showed that cows producing milk with a lower percentage of protein than average had a significantly lower probability of being in-calf at 100 days post calving and a significantly higher probability of being culled at the end of lactation. The culling rates derived from the studied database demonstrate the current high wastage rate of commercial dairy cows. Much of this wastage is due to involuntary culling as a result of reproductive failure.
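The logistic-regression step described above can be sketched in a few lines; everything below (the data, coefficients and fitting routine) is a synthetic illustration, not the study's model or data:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Fit a logistic regression by Newton-Raphson (IRLS).
    X: (n, p) design matrix with intercept column; y: (n,) 0/1 outcomes."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        w = p * (1.0 - p)                     # IRLS weights
        beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - p))
    return beta

# Synthetic data: a lower protein deviation lowers P(in-calf by day 100)
rng = np.random.default_rng(0)
n = 2000
protein_dev = rng.normal(0.0, 0.3, n)      # % protein relative to herd curve
true_logit = -0.5 + 2.0 * protein_dev      # hypothetical true coefficients
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
X = np.column_stack([np.ones(n), protein_dev])
beta = fit_logistic(X, y)                  # recovers roughly (-0.5, 2.0)
```

A positive fitted slope here means cows below the herd protein curve have lower predicted odds of being in-calf, mirroring the direction of the reported association.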
Abstract:
The conventional method for the assessment of acute dermal toxicity (OECD Test Guideline 402, 1987) uses death of animals as an endpoint to identify the median lethal dose (LD50). A new OECD Testing Guideline called the dermal fixed dose procedure (dermal FDP) is being prepared to provide an alternative to Test Guideline 402. In contrast to Test Guideline 402, the dermal FDP does not provide a point estimate of the LD50, but aims to identify that dose of the substance under investigation that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonised System of Classification and Labelling scheme (GHS). The dermal FDP has been validated using statistical modelling rather than by in vivo testing. The statistical modelling approach enables calculation of the probability of each GHS classification and the expected numbers of deaths and animals used in the test for imaginary substances with a range of LD50 values and dose-response curve slopes. This paper describes the dermal FDP and reports the results from the statistical evaluation. It is shown that the procedure will be completed with considerably less death and suffering than guideline 402, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LD50 value.
Abstract:
The release of genetically modified plants is governed by regulations that aim to provide an assessment of potential impact on the environment. One of the most important components of this risk assessment is an evaluation of the probability of gene flow. In this review, we provide an overview of the current literature on gene flow from transgenic plants, providing a framework of issues for those considering the release of a transgenic plant into the environment. For some plants gene flow from transgenic crops is well documented, and this information is discussed in detail in this review. Mechanisms of gene flow vary from plant species to plant species and range from asexual propagation, through short- or long-distance pollen dispersal mediated by insects or wind, to seed dispersal. Volunteer populations of transgenic plants may occur where seed is inadvertently spread during harvest or commercial distribution. If there are wild populations related to the transgenic crop then hybridization and eventually introgression in the wild may occur, as it has for herbicide-resistant transgenic oilseed rape (Brassica napus). Tools to measure the amount of gene flow, experimental data measuring the distance of pollen dispersal, and experiments measuring hybridization and seed survivability are discussed in this review. The various methods that have been proposed to prevent gene flow from genetically modified plants are also described. The current 'transgenic traits' in the major crops confer resistance to herbicides and certain insects. Such traits could confer a selective advantage (an increase in fitness) in wild plant populations in some circumstances, were gene flow to occur. However, there is ample evidence that gene flow from crops to related wild species occurred before the development of transgenic crops, and this should be taken into account in the risk assessment process.
Abstract:
Estimation of population size with missing zero-class is an important problem that is encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by the method of maximum likelihood and estimation of the population size based on this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) has proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable for count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and thereby using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In the search for a more robust estimator, we focused on three models that use all clusters with exactly one case, those clusters with exactly two cases and those with exactly three cases to estimate the probability of the zero-class and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. Loss in efficiency associated with gain in robustness was examined based on a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferred in general. In applications, we recommend obtaining estimates from all three models and making a choice considering the estimates from the three models, robustness and the loss in efficiency. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
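The zero-truncated Poisson fit and Horvitz-Thompson step described above can be sketched as follows; the simulation (true size, rate, seed) is an invented check rather than the paper's data, and the MLE is solved by simple fixed-point iteration:

```python
import numpy as np

def ztp_rate(counts, n_iter=50):
    """MLE of the Poisson rate from zero-truncated counts: solves
    mean(counts) = lam / (1 - exp(-lam)) by fixed-point iteration."""
    m = float(np.mean(counts))
    lam = m
    for _ in range(n_iter):
        lam = m * (1.0 - np.exp(-lam))
    return lam

def horvitz_thompson_size(counts):
    """Population size including the unobserved zero-class:
    N_hat = n_observed / P(count >= 1)."""
    lam = ztp_rate(counts)
    return len(counts) / (1.0 - np.exp(-lam))

# Simulated check: clusters with zero cases are never observed
rng = np.random.default_rng(1)
true_N = 5000
all_counts = rng.poisson(1.2, true_N)
observed = all_counts[all_counts > 0]
N_hat = horvitz_thompson_size(observed)    # close to true_N
```

When the true count distribution is over-dispersed relative to the Poisson, this estimator underestimates the population size, which is the failure mode that motivates the more robust variants in the abstract.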
Abstract:
Recombination is thought to occur only rarely in animal mitochondrial DNA (mtDNA). However, detection of mtDNA recombination requires that cells become heteroplasmic through mutation, intramolecular recombination or 'leakage' of paternal mtDNA. Interspecific hybridization increases the probability of detecting mtDNA recombinants due to higher levels of sequence divergence and potentially higher levels of paternal leakage. During a study of historical variation in Atlantic salmon (Salmo salar) mtDNA, an individual with a recombinant haplotype containing sequence from both Atlantic salmon and brown trout (Salmo trutta) was detected. The individual was not an F1 hybrid but it did have an unusual nuclear genotype which suggested that it was a later-generation backcross. No other similar recombinant haplotype was found in the same population or three neighbouring Atlantic salmon populations in 717 individuals collected during 1948-2002. Interspecific recombination may increase mtDNA variability within species and can have implications for phylogenetic studies.
Abstract:
Stephens and Donnelly have introduced a simple yet powerful importance sampling scheme for computing the likelihood in population genetic models. Fundamental to the method is an approximation to the conditional probability of the allelic type of an additional gene, given those currently in the sample. As noted by Li and Stephens, the product of these conditional probabilities for a sequence of draws that gives the frequency of allelic types in a sample is an approximation to the likelihood, and can be used directly in inference. The aim of this note is to demonstrate the high level of accuracy of the "product of approximate conditionals" (PAC) likelihood when used with microsatellite data. Results obtained on simulated microsatellite data show that this strategy leads to a negligible bias over a wide range of the scaled mutation parameter theta. Furthermore, the sampling variance of likelihood estimates as well as the computation time are lower than those obtained with importance sampling over the whole range of theta. It follows that this approach represents an efficient substitute for importance sampling (IS) algorithms in computer-intensive (e.g. MCMC) inference methods in population genetics. (c) 2006 Elsevier Inc. All rights reserved.
Abstract:
Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree, and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated evolution network we derive displays the familiar power law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections in the network being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure and evolutionary changes to protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. 
Given these rules of assembly, power law scaled networks naturally emerge from simple principles of selection, yielding protein interaction networks that retain a high-degree of robustness on short time scales and evolvability on longer evolutionary time scales.
Abstract:
In the present study we measured maternal plasma concentrations of two placental neurohormones, corticotropin-releasing factor (CRF) and CRF-binding protein (CRF-BP), in 58 at-risk pregnant women consecutively enrolled between 28 and 29 wk of pregnancy, to evaluate whether these measurements may predict third trimester-onset preeclampsia (PE). Statistical significance was assessed by t test. The cut-off points defining altered CRF and CRF-BP levels for prediction of PE were chosen by receiver operating characteristic (ROC) curve analysis, and the probability of developing PE was calculated for several combinations of hormone testing results. CRF and CRF-BP levels were significantly (both P < 0.0001) higher and lower, respectively, in the patients (n = 20) who later developed PE than in those who did not present PE at follow-up. CRF at the cut-off of 425.95 pmol/liter achieved a sensitivity of 94.8% and a specificity of 96.9%, whereas CRF-BP at the cut-off of 125.8 nmol/liter combined a sensitivity of 92.5% and a specificity of 82.5% as single markers for prediction of PE. The probability of PE was 34.5% in the whole study population, 93.75% when both CRF and CRF-BP levels were altered, and 0% if both hormone markers were unaltered. The measurement of CRF and CRF-BP levels may add significant prognostic information for predicting PE in at-risk pregnant women.
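Sensitivity and specificity of a single marker at a fixed cutoff, of the kind quoted above, are computed as below; the marker values and outcome labels are toy numbers for illustration, not study data:

```python
import numpy as np

def sens_spec(values, is_case, cutoff, higher_is_positive=True):
    """Sensitivity (true-positive rate among cases) and specificity
    (true-negative rate among non-cases) at a fixed test cutoff."""
    values = np.asarray(values, float)
    is_case = np.asarray(is_case, bool)
    pred = values >= cutoff if higher_is_positive else values <= cutoff
    sensitivity = float(np.mean(pred[is_case]))
    specificity = float(np.mean(~pred[~is_case]))
    return sensitivity, specificity

# Toy CRF-like values (pmol/liter) and PE outcomes -- invented, not study data
crf = [500, 480, 430, 410, 300, 310, 290, 420]
pe  = [1,   1,   1,   0,   0,   0,   0,   1]
sens, spec = sens_spec(crf, pe, cutoff=425.95)   # here 0.75 and 1.0
```

ROC analysis as described in the abstract amounts to sweeping the cutoff over the observed values and choosing the point that best balances these two quantities.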
Abstract:
Natural exposure to prion disease is likely to occur through successive challenges, yet most experiments focus on single large doses of infectious material. We analyze the results from an experiment in which rodents were exposed to multiple doses of feed contaminated with the scrapie agent. We formally define hypotheses for how the doses combine in terms of statistical models. The competing hypotheses are that only the total dose of infectivity is important (cumulative model), that doses act independently, or a general alternative that interaction between successive doses occurs (to raise or lower the risk of infection). We provide sample size calculations to distinguish these hypotheses. In the experiment, a fixed total dose has a significantly reduced probability of causing infection if the material is presented as multiple challenges, and as the time between challenges lengthens. Incubation periods are shorter and less variable if all material is consumed on one occasion. We show that the probability of infection is inconsistent with the hypothesis that each dose acts as a cumulative or independent challenge. The incubation periods are inconsistent with the independence hypothesis. Thus, although a trend exists for the risk of infection with prion disease to increase with repeated doses, it does so to a lesser degree than is expected if challenges combine independently or in a cumulative manner.
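The cumulative and independent-action hypotheses can be written as explicit dose-response models; the log-dose logistic curve and its parameters below are assumptions for illustration, not values fitted to the scrapie experiment:

```python
import numpy as np

def p_single(dose, a=-2.0, b=1.5):
    """Hypothetical log-dose logistic response to one challenge."""
    return 1.0 / (1.0 + np.exp(-(a + b * np.log10(dose))))

def p_cumulative(doses):
    """Cumulative model: only the total dose matters."""
    return p_single(sum(doses))

def p_independent(doses):
    """Independence model: each challenge is a separate chance of infection."""
    return 1.0 - np.prod([1.0 - p_single(d) for d in doses])

# One total dose split into three equal challenges
doses = [10.0, 10.0, 10.0]
pc = p_cumulative(doses)
pi = p_independent(doses)
```

With a logistic (rather than exponential) response the two models give different predictions for a split dose, which is what makes them statistically distinguishable; the experiment found the observed risk lower than either prediction.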
Abstract:
1. Demographic models are assuming an important role in management decisions for endangered species. Elasticity analysis and scope for management analysis are two such applications. Elasticity analysis determines the vital rates that have the greatest impact on population growth. Scope for management analysis examines the effects that feasible management might have on vital rates and population growth. Both methods target management in an attempt to maximize population growth. 2. The Seychelles magpie robin Copsychus sechellarum is a critically endangered island endemic, the population of which underwent significant growth in the early 1990s following the implementation of a recovery programme. We examined how the formal use of elasticity and scope for management analyses might have shaped management in the recovery programme, and assessed their effectiveness by comparison with the actual population growth achieved. 3. The magpie robin population doubled from about 25 birds in 1990 to more than 50 by 1995. A simple two-stage demographic model showed that this growth was driven primarily by a significant increase in the annual survival probability of first-year birds and an increase in the birth rate. Neither the annual survival probability of adults nor the probability of a female breeding at age 1 changed significantly over time. 4. Elasticity analysis showed that the annual survival probability of adults had the greatest impact on population growth. There was some scope to use management to increase survival, but because survival rates were already high (> 0.9) this had a negligible effect on population growth. Scope for management analysis showed that significant population growth could have been achieved by targeting management measures at the birth rate and survival probability of first-year birds, although predicted growth rates were lower than those achieved by the recovery programme when all management measures were in place (i.e. 1992-95).
5. Synthesis and applications. We argue that scope for management analysis can provide a useful basis for management but will inevitably be limited to some extent by a lack of data, as our study shows. This means that identifying perceived ecological problems and designing management to alleviate them must be an important component of endangered species management. The corollary of this is that it will not be possible or wise to consider only management options for which there is a demonstrable ecological benefit. Given these constraints, we see little role for elasticity analysis because, when data are available, a scope for management analysis will always be of greater practical value and, when data are lacking, precautionary management demands that as many perceived ecological problems as possible are tackled.
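The kind of two-stage matrix model and elasticity analysis discussed above can be sketched with a projection matrix; the vital rates here are hypothetical placeholders, not the magpie robin estimates:

```python
import numpy as np

# Hypothetical two-stage model (first-years, adults); rates are illustrative
F  = 0.6   # recruits entering the first-year class per adult per year
s1 = 0.5   # first-year survival probability
sa = 0.9   # adult survival probability
A = np.array([[0.0, F],
              [s1,  sa]])

evals, W = np.linalg.eig(A)
k = int(np.argmax(evals.real))
lam = float(evals[k].real)        # asymptotic population growth rate
w = W[:, k].real                  # stable stage distribution (right eigenvector)
evals_t, V = np.linalg.eig(A.T)
kt = int(np.argmax(evals_t.real))
v = V[:, kt].real                 # reproductive values (left eigenvector)

# Elasticity e_ij = (a_ij / lam) * v_i * w_j / <v, w>; entries sum to 1
E = A * np.outer(v, w) / (v @ w) / lam
```

In this toy matrix the adult-survival entry has the largest elasticity, matching the pattern reported above, even though management leverage on an already-high survival rate may be small.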
Communicating risk of medication side effects: an empirical evaluation of EU recommended terminology
Abstract:
Two experiments compared people's interpretation of verbal and numerical descriptions of the risk of medication side effects occurring. The verbal descriptors were selected from those recommended for use by the European Union (very common, common, uncommon, rare, very rare). Both experiments used a controlled empirical methodology, in which nearly 500 members of the general population were presented with a fictitious (but realistic) scenario about visiting the doctor and being prescribed medication, together with information about the medicine's side effects and their probability of occurrence. Experiment 1 found that, in all three age groups tested (18-40, 41-60 and over 60), participants given a verbal descriptor (very common) estimated side effect risk to be considerably higher than those given a comparable numerical description. Furthermore, the differences in interpretation were reflected in their judgements of side effect severity, risk to health, and intention to comply. Experiment 2 confirmed these findings using two different verbal descriptors (common and rare) and in scenarios which described either relatively severe or relatively mild side effects. Strikingly, only 7 out of 180 participants in this study gave a probability estimate which fell within the EU-assigned numerical range. Thus, large scale use of the descriptors could have serious negative consequences for individual and public health. We therefore recommend that the EU and National authorities suspend their recommendations regarding these descriptors until a more substantial evidence base is available to support their appropriate use.
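The EU-assigned numerical ranges mentioned above can be encoded as a simple lookup; the band boundaries below follow the published EU recommendation (very common > 10%, common 1-10%, uncommon 0.1-1%, rare 0.01-0.1%, very rare < 0.01%), but treat this as an illustrative sketch rather than a regulatory tool:

```python
# EU-recommended frequency bands for the five verbal descriptors,
# as proportions (lower bound inclusive, upper bound exclusive)
EU_BANDS = {
    "very common": (0.10,   1.00),
    "common":      (0.01,   0.10),
    "uncommon":    (0.001,  0.01),
    "rare":        (0.0001, 0.001),
    "very rare":   (0.0,    0.0001),
}

def in_assigned_band(descriptor, estimate):
    """Does a participant's probability estimate fall in the EU band?"""
    lo, hi = EU_BANDS[descriptor]
    return lo <= estimate < hi

# A participant who reads 'common' as a 45% chance is far outside 1-10%
```

Checking each participant's numerical estimate against the band for the descriptor they saw is exactly the comparison behind the "7 out of 180" figure reported above.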
Abstract:
Objectives: To examine doctors' (Experiment 1) and doctors' and lay people's (Experiment 2) interpretations of two sets of recommended verbal labels for conveying information about side effects incidence rates. Method: Both studies used a controlled empirical methodology in which participants were presented with a hypothetical, but realistic, scenario involving a prescribed medication that was said to be associated with either mild or severe side effects. The probability of each side effect was described using one of the five descriptors advocated by the European Union (Experiment 1) or one of the six descriptors advocated in Calman's risk scale (Experiment 2), and study participants were required to estimate (numerically) the probability of each side effect occurring. Key findings: Experiment 1 showed that the doctors significantly overestimated the risk of side effects occurring when interpreting the five EU descriptors, compared with the assigned probability ranges. Experiment 2 showed that both groups significantly overestimated risk when given the six Calman descriptors, although the degree of overestimation was not as great for the doctors as for the lay people. Conclusion: On the basis of our findings, we argue that we are still a long way from achieving a standardised language of risk for use by both professionals and the general public, although there might be more potential for use of standardised terms among professionals. In the meantime, the EU and other regulatory bodies and health professionals should be very cautious about advocating the use of particular verbal labels for describing medication side effects.
Abstract:
A study examined people's interpretation of European Commission (EC) recommended verbal descriptors for risk of medicine side effects, and actions to take if they do occur. Members of the general public were presented with a fictitious (but realistic) scenario about suffering from a stiff neck, visiting the local pharmacy and purchasing an over-the-counter (OTC) medicine (Ibuprofen). The medicine came with an information leaflet which included information about the medicine's side effects, their risk of occurrence, and recommended actions to take if adverse effects are experienced. Probability of occurrence was presented numerically (6%) or verbally, using the recommended EC descriptor (common). Results showed that, in line with findings of our earlier work with prescribed medicines, participants significantly overestimated side effect risk. Furthermore, the differences in interpretation were reflected in their judgements of satisfaction, side effect severity, risk to health, and intention to take the medicine. Finally, we observed no significant difference between people's interpretation of the recommended action descriptors ('immediately' and 'as soon as possible'). (C) 2003 Elsevier Science Ireland Ltd. All rights reserved.
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
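The latitude-dependent decorrelation scale described above follows directly from the quoted endpoints (2.9 km at the Equator, 0.4 km at the poles, varying linearly):

```python
def decorrelation_scale_km(lat_deg):
    """Overlap decorrelation scale, linear in absolute latitude:
    2.9 km at the Equator falling to 0.4 km at either pole."""
    return 2.9 - (2.9 - 0.4) * abs(lat_deg) / 90.0
```

This scalar feeds the exponential-random overlap assumption: cloud in adjacent model layers decorrelates over this vertical distance rather than being maximally overlapped.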
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2, this led to an error in the transport direction and hence an error in tracer distribution. High resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic scale features. 
Errors in the position of the cold front relative to the tracer release location of only 1 h resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.