845 results for Petrol sample
Abstract:
Lime treatment of hydrocarbon-contaminated soils offers the potential to stabilize and solidify these materials, with a consequent reduction in the risks associated with the leachate emanating from them. This can aid the disposal of contaminated soils or enable their on-site treatment. In this study, the addition of hydrated lime and quicklime significantly reduced the leaching of total petroleum hydrocarbons (TPH) from soils polluted with a 50:50 petrol/diesel mixture. Treatment with quicklime was slightly more effective, but hydrated lime may be better in the field because of its ease of handling. It is proposed that this occurs as a consequence of pozzolanic reactions retaining the hydrocarbons within the soil matrix. There was some evidence that this may be a temporary effect, as leaching increased between seven and 21 days after treatment, but the TPH concentrations in the leachate of treated soils were still one order of magnitude below those of the control soil, offering significant protection to groundwater. The reduction in leaching following treatment was observed in both aliphatic and aromatic fractions, but the latter were more affected because of their higher solubility. The results are discussed in the context of risk assessment, and recommendations for future research are made.
Abstract:
This article assesses the extent to which sampling variation affects findings about Malmquist productivity change derived using data envelopment analysis (DEA), in the first stage by calculating productivity indices and in the second stage by investigating the farm-specific change in productivity. Confidence intervals for Malmquist indices are constructed using Simar and Wilson's (1999) bootstrapping procedure. The main contribution of this article is to account in the second stage for the information provided by the first-stage bootstrap. The DEA standard errors (SEs) of the Malmquist indices given by bootstrapping are employed in an innovative heteroscedastic panel regression, using a maximum likelihood procedure. The application is to a sample of 250 Polish farms over the period 1996 to 2000. The confidence-interval results suggest that the second half of the 1990s for Polish farms was characterized not so much by productivity regress but rather by stagnation. As for the determinants of farm productivity change, we find that the integration of the DEA SEs in the second-stage regression is significant in explaining a proportion of the variance in the error term. Although our heteroscedastic regression results differ from those of standard OLS, in terms of significance and sign, they are consistent with theory and previous research.
Abstract:
At present, collective action regarding bio-security among UK cattle and sheep farmers is rare. Despite the occurrence in recent decades of catastrophic livestock diseases such as bovine spongiform encephalopathy (BSE) and foot and mouth disease (FMD), there are few national or local farmer-led animal health schemes. To explore the reasons for this apparent lack of interest, we utilised a socio-psychological approach to disaggregate the cognitive, emotive and contextual factors driving bio-security behaviour among cattle and sheep farmers in the UK. In total, we interviewed 121 farmers in South-West England and Wales. The main analytical tools included content, cluster and logistic regression analyses. The results of the content analysis illustrated apparent 'dissonance' between bio-security attitudes and behaviour. Despite the heavy toll animal disease has taken on the agricultural economy, most study participants were dismissive of the many measures associated with bio-security. Justification for this lack of interest was largely framed in relation to the collective attribution of blame for the disease threats themselves. Indeed, epidemic diseases were largely attributed to external actors and agents. Reasons for outbreaks included inadequate border control, in tandem with ineffective policies and regulations. Conversely, endemic livestock disease was viewed as a problem for 'bad' farmers and not an issue for those individuals who managed their stock well. As such, there was little utility in forming groups to address what was largely perceived as an individual problem. Further, we found that attitudes toward bio-security did not appear to be influenced by any particular source of information per se. While strong negative attitudes were found toward specific sources of bio-security information, e.g. government leaflets, these appear simply to reflect widely held beliefs.
In relation to actual bio-security behaviours, the logistic regression analysis revealed no significant difference between in-scheme and out-of-scheme farmers. We concluded that in order to support collective action with regard to bio-security, messages need to be reframed and delivered from a neutral source. Efforts to support group formation must also recognise and address the issues relating to perceptions of social connectedness among the communities involved. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
This paper presents a case study to illustrate the range of decisions involved in designing a sampling strategy for a complex, longitudinal research study. It is based on experience from the Young Lives project and identifies the approaches used to sample children for longitudinal follow-up in four less developed countries (LDCs). The rationale for decisions made and the resulting benefits, and limitations, of the approaches adopted are discussed. Of particular importance is the choice of sampling approach to yield useful analysis; specific examples are presented of how this informed the design of the Young Lives sampling strategy.
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the differences in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size based on the non-central t distribution with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
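The comparison the abstract describes, between the "gold standard" non-central t calculation and the normal approximation for a 2 × 2 crossover bioequivalence study, can be sketched as follows. This is a minimal illustration only, not the paper's code: the function names, the parameterisation by within-subject coefficient of variation (CV) of a lognormal outcome, and the use of scipy are all my assumptions.

```python
import math
from scipy import stats

def n_total_normal(cv, alpha=0.05, power=0.80, theta=0.0,
                   lo=math.log(0.80), hi=math.log(1.25)):
    """Normal-approximation total sample size for average bioequivalence
    (TOST) in a 2x2 crossover, given a true log-scale mean difference theta."""
    # log-scale within-subject SD implied by the CV of a lognormal outcome
    sigma_w = math.sqrt(math.log(1.0 + cv ** 2))
    z_a = stats.norm.ppf(1 - alpha)
    # when theta == 0 both one-sided tests bind, so beta is split in half
    z_b = stats.norm.ppf(1 - (1 - power) / 2) if theta == 0 else stats.norm.ppf(power)
    margin = (hi - theta) if theta >= 0 else (theta - lo)
    n = 2 * sigma_w ** 2 * (z_a + z_b) ** 2 / margin ** 2
    return int(math.ceil(n / 2) * 2)  # round up to an even total (two sequences)

def tost_power_nct(n, cv, alpha=0.05, theta=0.0,
                   lo=math.log(0.80), hi=math.log(1.25)):
    """TOST power for total sample size n via the non-central t distribution."""
    sigma_w = math.sqrt(math.log(1.0 + cv ** 2))
    se = sigma_w * math.sqrt(2.0 / n)   # SE of the log-scale mean difference
    df = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    # probability that both one-sided tests reject (joint dependence ignored)
    power = (stats.nct.cdf(-tcrit, df, (theta - hi) / se)
             - stats.nct.cdf(tcrit, df, (theta - lo) / se))
    return max(0.0, power)
```

Checking the normal-approximation size against the non-central t power function shows the pattern the abstract warns about: the approximation can land slightly above or below the target power, so the sample size it suggests should be verified with the non-central t calculation.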
Abstract:
This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that an experimental treatment is better than a control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored. Copyright (C) 2007 John Wiley & Sons, Ltd.
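For the binary-stream setting above, the abstract notes that under 'non-informative' priors the Bayesian answer coincides with the standard frequentist one. A sketch of that frequentist counterpart, sizing a one-sided test of success rate p0 against a specified better rate p1, is below; the function name and scipy usage are illustrative assumptions, not the paper's Bayesian method.

```python
import math
from scipy import stats

def n_binary_onesided(p0, p1, alpha=0.05, power=0.90):
    """Normal-approximation sample size for a one-sided test of
    H0: p = p0 against the specific alternative H1: p = p1 (> p0)."""
    z_a = stats.norm.ppf(1 - alpha)   # critical value under H0
    z_b = stats.norm.ppf(power)       # quantile delivering the target power
    num = (z_a * math.sqrt(p0 * (1 - p0))
           + z_b * math.sqrt(p1 * (1 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)
```

As expected, the required n grows rapidly as the clinically relevant difference p1 - p0 shrinks, which is the trade-off any sample size formulation, Bayesian or frequentist, must negotiate.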
Abstract:
Objective: Autism spectrum disorders (ASDs) are now recognized to occur in up to 1% of the population and to be a major public health concern because of their early onset, lifelong persistence, and high levels of associated impairment. Little is known about the associated psychiatric disorders that may contribute to impairment. We identify the rates and type of psychiatric comorbidity associated with ASDs and explore the associations with variables identified as risk factors for child psychiatric disorders. Method: A subgroup of 112 ten- to 14-year-old children from a population-derived cohort was assessed for other child psychiatric disorders (3 months' prevalence) through parent interview using the Child and Adolescent Psychiatric Assessment. DSM-IV diagnoses for childhood anxiety disorders, depressive disorders, oppositional defiant and conduct disorders, attention-deficit/hyperactivity disorder, tic disorders, trichotillomania, enuresis, and encopresis were identified. Results: Seventy percent of participants had at least one comorbid disorder and 41% had two or more. The most common diagnoses were social anxiety disorder (29.2%, 95% confidence interval [CI] 13.2-45.1), attention-deficit/hyperactivity disorder (28.2%, 95% CI 13.3-43.0), and oppositional defiant disorder (28.1%, 95% CI 13.9-42.2). Of those with attention-deficit/hyperactivity disorder, 84% received a second comorbid diagnosis. There were few associations between putative risk factors and psychiatric disorder. Conclusions: Psychiatric disorders are common and frequently multiple in children with autism spectrum disorders. They may provide targets for intervention and should be routinely evaluated in the clinical assessment of this group.