46 results for sample dilution
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
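As a hedged illustration of the Poisson-based analysis recommended above, the sketch below computes the maximum likelihood estimate of the number of infectious units from serial-dilution counts. The dilution volumes, aliquot numbers and positive counts are hypothetical, and the likelihood-ratio confidence interval is only indicated in a comment; this is not the authors' code.

```python
# A minimal sketch of the Poisson-assumption analysis the abstract recommends.
# All data values below are illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import minimize_scalar

# Relative volume of the original sample contained in one aliquot at each
# dilution level (e.g. a tenfold serial dilution), with n aliquots tested
# per level and k observed to be infectious.
v = np.array([1e-1, 1e-2, 1e-3, 1e-4])   # relative volumes (hypothetical)
n = np.array([8, 8, 8, 8])               # aliquots tested per level
k = np.array([8, 6, 2, 0])               # infectious aliquots observed

def neg_log_lik(lam):
    """Negative log-likelihood: units per aliquot ~ Poisson(lam * v),
    so P(aliquot infectious) = 1 - exp(-lam * v)."""
    p = 1.0 - np.exp(-lam * v)
    p = np.clip(p, 1e-12, 1.0)           # guard against log(0)
    # log(1 - p) = -lam * v exactly, hence the second term below
    return -np.sum(k * np.log(p) - (n - k) * lam * v)

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1e6), method="bounded")
lam_hat = res.x                          # MLE of infectious units in the sample
# Likelihood-ratio 95% CI: {lam : 2 * (neg_log_lik(lam) - res.fun) <= 3.84}
print(f"Estimated infectious units: {lam_hat:.1f}")
```

Because P(aliquot infectious) = 1 - exp(-lam * v), the same model is a binomial GLM with a complementary log-log link and log(v) as an offset, which is why it can be fitted with standard statistical software, as the abstract notes.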
Abstract:
Bayesian inference has been used to determine rigorous estimates of hydroxyl radical concentrations ([OH]) and air mass dilution rates (K) averaged following air masses between linked observations of nonmethane hydrocarbons (NMHCs) spanning the North Atlantic during the Intercontinental Transport and Chemical Transformation (ITCT)-Lagrangian-2K4 experiment. The Bayesian technique obtains a refined (posterior) distribution of a parameter given data related to the parameter through a model and prior beliefs about the parameter distribution. Here, the model describes hydrocarbon loss through OH reaction and mixing with a background concentration at rate K. The Lagrangian experiment provides direct observations of hydrocarbons at two time points, removing assumptions regarding composition or sources upstream of a single observation. The estimates are sharpened by using many hydrocarbons with different reactivities and by accounting for their variability and measurement uncertainty. A novel technique is used to construct prior background distributions for many species, described by the variation of a single parameter; this exploits the high correlation of the species, which are related through the first principal component of many NMHC samples. The Bayesian method obtains posterior estimates of [OH], K and the background parameter following each air mass. Median [OH] values are typically between 0.5 and 2.0 × 10⁶ molecules cm⁻³, but are elevated to between 2.5 and 3.5 × 10⁶ molecules cm⁻³ in low-level pollution. A comparison of estimates from absolute NMHC concentrations with those from NMHC ratios assuming zero background (the "photochemical clock" method) shows similar distributions but reveals a systematic high bias in the estimates from ratios. Estimates of K are ~0.1 day⁻¹ but show more sensitivity to the assumed prior distribution.
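The sketch below, which is not the authors' code, shows the shape of such a Bayesian update: a grid posterior over ([OH], K) given paired observations of several hydrocarbons with different OH rate constants, each decaying by OH reaction while relaxing toward an assumed background. All rate constants, mixing ratios, uncertainties, the two-day travel time and the flat prior are illustrative assumptions.

```python
# A minimal sketch of a grid-based Bayesian update for ([OH], K).
# Every number below is hypothetical, chosen only for illustration.
import numpy as np

k_oh = np.array([2.4e-13, 1.1e-12, 2.3e-12])   # OH rate constants (cm^3 s^-1)
c0   = np.array([800.0, 120.0, 40.0])          # upstream mixing ratios (ppt)
c1   = np.array([650.0, 80.0, 20.0])           # downstream observations (ppt)
bg   = np.array([400.0, 20.0, 2.0])            # assumed background (ppt)
sig  = 0.1 * c1                                # measurement + variability sd
t    = 2.0 * 86400.0                           # 2 days between observations (s)

oh_grid = np.linspace(0.1e6, 5e6, 200)         # [OH], molecules cm^-3
K_grid  = np.linspace(0.01, 0.5, 200) / 86400.0  # dilution rate, s^-1

log_post = np.zeros((oh_grid.size, K_grid.size))
for i, oh in enumerate(oh_grid):
    for j, K in enumerate(K_grid):
        a = k_oh * oh                          # first-order OH loss rate (s^-1)
        # dC/dt = -a*C - K*(C - bg) has steady state K*bg/(a + K)
        css = K * bg / (a + K)
        pred = css + (c0 - css) * np.exp(-(a + K) * t)
        # Gaussian likelihood, flat prior over the grid
        log_post[i, j] = -0.5 * np.sum(((c1 - pred) / sig) ** 2)

post = np.exp(log_post - log_post.max())
i, j = np.unravel_index(post.argmax(), post.shape)
print(f"MAP [OH] = {oh_grid[i]:.2e} cm^-3, K = {K_grid[j]*86400:.2f} day^-1")
```

Marginalising the grid rows or columns gives the posterior distributions of [OH] and K separately, which is how median values and their spread would be read off in practice.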
Abstract:
SMPS and DMS500 analysers were used to measure particulate size distributions in the exhaust of a fully annular aero gas turbine engine at two operating conditions, in order to compare the instruments and analyse sources of discrepancy. A number of different dilution ratios were utilised for the comparative analysis, and a Dekati hot diluter operating at a temperature of 623 K was also utilised to remove volatile PM prior to measurements being made. Additional work focused on observing the effect of varying the sample line temperatures to ascertain their impact. Explanations are offered for most of the trends observed, although a new, repeatable event identified in the range from 417 K to 423 K, where there was a three-order-of-magnitude increase in the nucleation mode of the sample, requires further study.
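For concreteness, a trivial sketch of the dilution arithmetic underlying such measurements; the values are hypothetical, not from the study.

```python
# Measured number concentrations at the instrument are scaled back by the
# dilution ratio to estimate raw exhaust concentrations. Values are assumed.
dilution_ratio = 12.0      # diluted flow / raw exhaust flow (hypothetical)
measured = 4.2e6           # particles per cm^3 at the instrument (hypothetical)
exhaust = measured * dilution_ratio
print(f"estimated exhaust concentration: {exhaust:.2e} particles/cm^3")
```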
Abstract:
This article assesses the extent to which sampling variation affects findings about Malmquist productivity change derived using data envelopment analysis (DEA): in the first stage by calculating productivity indices, and in the second stage by investigating farm-specific changes in productivity. Confidence intervals for the Malmquist indices are constructed using Simar and Wilson's (1999) bootstrapping procedure. The main contribution of this article is to account, in the second stage, for the information provided by the first-stage bootstrap. The DEA SEs of the Malmquist indices given by bootstrapping are employed in an innovative heteroscedastic panel regression, estimated by maximum likelihood. The application is to a sample of 250 Polish farms over the period 1996 to 2000. The confidence interval results suggest that, for Polish farms, the second half of the 1990s was characterized not so much by productivity regress as by stagnation. As for the determinants of farm productivity change, we find that the integration of the DEA SEs in the second-stage regression is significant in explaining a proportion of the variance in the error term. Although our heteroscedastic regression results differ from those of standard OLS in terms of significance and sign, they are consistent with theory and previous research.
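A minimal sketch of the kind of second-stage estimator described, not the authors' exact specification: each farm's error variance is modelled as a common variance plus the square of its first-stage bootstrap standard error, and the parameters are found by maximum likelihood. The data, covariate and simulated SEs are hypothetical.

```python
# Second-stage heteroscedastic regression with known per-unit SEs folded
# into the error variance (a sketch; all data are simulated for illustration).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m = 250                                               # farms (hypothetical)
X = np.column_stack([np.ones(m), rng.normal(size=m)]) # intercept + covariate
se = rng.uniform(0.02, 0.10, m)                       # first-stage bootstrap SEs
beta_true = np.array([0.0, 0.05])
y = X @ beta_true + rng.normal(0.0, np.sqrt(0.03**2 + se**2))

def neg_log_lik(theta):
    """Gaussian ML with variance sigma^2 + se_i^2 for each farm i."""
    beta, log_sigma = theta[:2], theta[2]
    var = np.exp(log_sigma) ** 2 + se**2              # heteroscedastic variance
    r = y - X @ beta
    return 0.5 * np.sum(np.log(2 * np.pi * var) + r**2 / var)

fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0, np.log(0.05)]))
print("beta_hat:", fit.x[:2], " sigma_hat:", np.exp(fit.x[2]))
```

If sigma_hat is estimated well above zero alongside the se terms, the bootstrap SEs are explaining only part of the error variance, which is the kind of decomposition the abstract's significance finding refers to.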
Abstract:
At present, collective action regarding bio-security among UK cattle and sheep farmers is rare. Despite the occurrence of catastrophic livestock diseases such as bovine spongiform encephalopathy (BSE) and foot and mouth disease (FMD) within recent decades, there are few national or local farmer-led animal health schemes. To explore the reasons for this apparent lack of interest, we utilised a socio-psychological approach to disaggregate the cognitive, emotive and contextual factors driving bio-security behaviour among cattle and sheep farmers in the United Kingdom (UK). In total, we interviewed 121 farmers in South-West England and Wales. The main analytical tools included content, cluster and logistic regression analyses. The results of the content analysis illustrated an apparent 'dissonance' between bio-security attitudes and behaviour. Despite the heavy toll animal disease has taken on the agricultural economy, most study participants were dismissive of many of the measures associated with bio-security. Justification for this lack of interest was largely framed in relation to the collective attribution of blame for the disease threats themselves. Indeed, epidemic diseases were largely attributed to external actors and agents; reasons cited for outbreaks included inadequate border controls, in tandem with ineffective policies and regulations. Conversely, endemic livestock disease was viewed as a problem for 'bad' farmers and not an issue for those individuals who managed their stock well. As such, there was little utility in forming groups to address what was largely perceived as an individual problem. Further, we found that attitudes toward bio-security did not appear to be influenced by any particular source of information per se. While strong negative attitudes were found toward specific sources of bio-security information, e.g. government leaflets, these appear simply to reflect widely held beliefs. In relation to actual bio-security behaviours, the logistic regression analysis revealed no significant difference between in-scheme and out-of-scheme farmers. We concluded that, in order to support collective action with regard to bio-security, messages need to be reframed and delivered by a neutral source. Efforts to support group formation must also recognise and address the issues relating to perceptions of social connectedness among the communities involved.
Abstract:
This paper presents a case study illustrating the range of decisions involved in designing a sampling strategy for a complex, longitudinal research study. It is based on experience from the Young Lives project and identifies the approaches used to sample children for longitudinal follow-up in four less developed countries (LDCs). The rationale for the decisions made, and the resulting benefits and limitations of the approaches adopted, are discussed. Of particular importance is the choice of a sampling approach that yields useful analysis; specific examples are presented of how this consideration informed the design of the Young Lives sampling strategy.
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration-time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25) or, equivalently, that the 90% confidence interval for the difference in the means on the natural log scale should lie within the interval (-0.2231, 0.2231). We compare the gold-standard method for calculating the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desired level of power might be under- or overestimated compared with the gold-standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold-standard method.
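The gold-standard calculation referred to above can be sketched as a search over sample sizes using two non-central t CDFs (the usual approximate TOST power for a balanced 2 × 2 cross-over). The within-subject CV, true ratio and power target below are illustrative assumptions, not values from the paper.

```python
# Sample-size search for average bioequivalence via the non-central t
# distribution (balanced 2x2 cross-over; inputs are illustrative).
import numpy as np
from scipy import stats

theta_L, theta_U = np.log(0.80), np.log(1.25)   # BE limits on the log scale
alpha, target_power = 0.05, 0.80
cv = 0.25                                       # within-subject CV (assumed)
ratio = 0.95                                    # assumed true formulation ratio

sigma_w = np.sqrt(np.log(cv**2 + 1.0))          # within-subject SD, log scale
delta = np.log(ratio)

def power_tost(n_total):
    """Approximate TOST power from two non-central t CDFs."""
    df = n_total - 2
    se = sigma_w * np.sqrt(2.0 / n_total)       # SE of the log-scale difference
    t_crit = stats.t.ppf(1 - alpha, df)
    nc_L = (delta - theta_L) / se               # noncentrality, lower test
    nc_U = (delta - theta_U) / se               # noncentrality, upper test
    return stats.nct.cdf(-t_crit, df, nc_U) - stats.nct.cdf(t_crit, df, nc_L)

n = 4
while power_tost(n) < target_power:
    n += 2                                      # keep the two sequences balanced
print(f"total subjects: {n}, power: {power_tost(n):.3f}")
```

With these illustrative inputs the search gives 28 subjects in total, in line with standard bioequivalence sample-size tables for CV 25% and a true ratio of 0.95.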
Abstract:
This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon the control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that the experimental treatment is better than the control by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next, the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored.
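For the binary case, the requirement that the trial be large enough to guarantee convincing evidence one way or the other can be read as: every possible outcome must leave the posterior decisive. The sketch below implements that reading under assumptions of my own (a uniform Beta(1,1) prior and illustrative thresholds); it is one interpretation of the setup, not the paper's exact criterion.

```python
# Smallest n such that EVERY possible success count yields a convincing
# posterior conclusion, under a Beta(1,1) prior (illustrative assumptions).
import numpy as np
from scipy import stats

p0, delta = 0.30, 0.15   # reference rate and clinically relevant difference
eta = 0.90               # posterior probability counted as "convincing"
a, b = 1.0, 1.0          # Beta prior parameters (uniform)

def all_outcomes_decisive(n):
    s = np.arange(n + 1)                     # every possible success count
    post_a, post_b = a + s, b + (n - s)      # conjugate Beta posterior
    p_better = stats.beta.sf(p0, post_a, post_b)                 # P(p > p0)
    p_not_improved = stats.beta.cdf(p0 + delta, post_a, post_b)  # P(p < p0+delta)
    return np.all((p_better >= eta) | (p_not_improved >= eta))

n = 1
while not all_outcomes_decisive(n):
    n += 1
print("smallest decisive sample size:", n)
```

The normal-response and score-statistic cases described in the abstract admit the same style of argument, with the Beta posterior replaced by the corresponding conjugate or asymptotic posterior.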
Abstract:
Objective: Autism spectrum disorders (ASDs) are now recognized to occur in up to 1% of the population and to be a major public health concern because of their early onset, lifelong persistence, and high levels of associated impairment. Little is known about the associated psychiatric disorders that may contribute to impairment. We identify the rates and types of psychiatric comorbidity associated with ASDs and explore the associations with variables identified as risk factors for child psychiatric disorders. Method: A subgroup of 112 ten- to 14-year-old children from a population-derived cohort was assessed for other child psychiatric disorders (3 months' prevalence) through parent interview using the Child and Adolescent Psychiatric Assessment. DSM-IV diagnoses for childhood anxiety disorders, depressive disorders, oppositional defiant and conduct disorders, attention-deficit/hyperactivity disorder, tic disorders, trichotillomania, enuresis, and encopresis were identified. Results: Seventy percent of participants had at least one comorbid disorder and 41% had two or more. The most common diagnoses were social anxiety disorder (29.2%, 95% confidence interval [CI] 13.2-45.1), attention-deficit/hyperactivity disorder (28.2%, 95% CI 13.3-43.0), and oppositional defiant disorder (28.1%, 95% CI 13.9-42.2). Of those with attention-deficit/hyperactivity disorder, 84% received a second comorbid diagnosis. There were few associations between putative risk factors and psychiatric disorder. Conclusions: Psychiatric disorders are common and frequently multiple in children with autism spectrum disorders. They may provide targets for intervention and should be routinely evaluated in the clinical assessment of this group.