50 results for HOMOGENEOUS SAMPLE
in CentAUR: Central Archive University of Reading - UK
Abstract:
Whereas the predominance of the El Niño Southern Oscillation (ENSO) mode in tropical Pacific sea surface temperature (SST) variability is well established, no such consensus has been reached by climate scientists regarding the Indian Ocean. While a number of researchers think that Indian Ocean SST variability is dominated by an active dipolar-type mode of variability, similar to ENSO, others suggest that the variability is mostly passive and behaves like autocorrelated noise. For example, it has recently been suggested that Indian Ocean SST variability is consistent with the null hypothesis of a homogeneous diffusion process. However, the existence of the basin-wide warming trend represents a deviation from a homogeneous diffusion process that needs to be taken into account. An efficient way of detrending, based on differencing, is introduced and applied to the Hadley Centre Sea Ice and Sea Surface Temperature (HadISST) data set. The filtered SST anomalies over the basin (23.5°N-29.5°S, 30.5°E-119.5°E) are then analysed and found to be inconsistent with the null hypothesis on intraseasonal and interannual timescales. The same differencing method is then applied to the smaller tropical Indian Ocean domain, which is also found to be inconsistent with the null hypothesis on intraseasonal and interannual timescales. In particular, it is found that the leading mode of variability is the Indian Ocean dipole, which departs significantly from the null hypothesis only in the autumn season.
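The differencing-based detrending described above can be sketched generically as follows. This is a minimal illustration on a synthetic series: the series length, trend slope, and noise level are assumptions for the example, not the HadISST data.

```python
import random
import statistics

random.seed(42)

# Synthetic monthly SST anomaly series: a linear warming trend plus noise,
# standing in for the basin-wide warming discussed above (illustrative only).
n_months = 600
trend_per_month = 0.001  # assumed illustrative warming rate (degC/month)
series = [trend_per_month * t + random.gauss(0.0, 0.1) for t in range(n_months)]

# First differencing, d_t = x_t - x_(t-1), removes a linear trend:
# the differenced series has a constant mean equal to the trend slope,
# superimposed on stationary noise.
diffed = [series[t] - series[t - 1] for t in range(1, n_months)]

print(len(diffed))                        # one fewer point than the input
print(round(statistics.mean(diffed), 4))  # close to trend_per_month
```

The differenced anomalies can then be tested against the null hypothesis without the trend biasing the result.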
Abstract:
A collection of 24 seawaters from various worldwide locations and differing depths was assembled to measure their chlorine isotopic composition (δ37Cl). These samples cover all the oceans and large seas: the Atlantic, Pacific, Indian and Antarctic oceans, and the Mediterranean and Red seas. The collection includes nine seawaters from three depth profiles down to 4560 mbsl. The standard deviation (2σ) of the δ37Cl of this collection is ±0.08‰, which is in fact as large as our precision of measurement (±0.10‰). Thus, within error, oceanic waters appear to be a homogeneous reservoir. According to our results, any seawater could be representative of Standard Mean Ocean Chloride (SMOC) and could be used as a reference standard. An extended international cross-calibration over a large range of δ37Cl has been completed. For this purpose, geological fluid samples of various chemical compositions and a manufactured CH3Cl gas sample, with δ37Cl from about -6‰ to +6‰, have been compared. Data were collected by gas-source isotope ratio mass spectrometry (IRMS) at the Paris, Reading and Utrecht laboratories and by thermal ionization mass spectrometry (TIMS) at the Leeds laboratory. IRMS values over the range -5.3‰ to +1.4‰ plot on the Y = X line, showing very good agreement between the three laboratories. On 11 samples, the trend line between the Paris and Reading laboratories is δ37Cl(Reading) = (1.007 ± 0.009) δ37Cl(Paris) - (0.040 ± 0.025), with a correlation coefficient R² = 0.999. TIMS values from Leeds have been compared with IRMS values from Paris over the range -3.0‰ to +6.0‰.
On six samples, the agreement between these two laboratories, using different techniques, is good: δ37Cl(Leeds) = (1.052 ± 0.038) δ37Cl(Paris) + (0.058 ± 0.099), with a correlation coefficient R² = 0.995. The present study completes a previous cross-calibration between the Leeds and Reading laboratories comparing TIMS and IRMS results (Anal. Chem. 72 (2000) 2261). Together, the two studies allow a comparison of the IRMS and TIMS techniques for δ37Cl values from -4.4‰ to +6.0‰ and show good agreement: δ37Cl(TIMS) = (1.039 ± 0.023) δ37Cl(IRMS) + (0.059 ± 0.056), with a correlation coefficient R² = 0.996. Our study shows that, for fluid samples with chlorine isotopic compositions near 0‰, measurement by either IRMS or TIMS gives comparable results to within ±0.10‰, while for δ37Cl values as far as 10‰ (positive or negative) from SMOC, the two techniques agree to within ±0.30‰. © 2004 Elsevier B.V. All rights reserved.
Abstract:
This article assesses the extent to which sampling variation affects findings about Malmquist productivity change derived using data envelopment analysis (DEA), in the first stage by calculating productivity indices and in the second stage by investigating the farm-specific change in productivity. Confidence intervals for the Malmquist indices are constructed using Simar and Wilson's (1999) bootstrapping procedure. The main contribution of this article is to account in the second stage for the information provided by the first-stage bootstrap. The standard errors (SEs) of the Malmquist indices given by the DEA bootstrap are employed in an innovative heteroscedastic panel regression, using a maximum likelihood procedure. The application is to a sample of 250 Polish farms over the period 1996 to 2000. The confidence interval results suggest that the second half of the 1990s for Polish farms was characterized not so much by productivity regress as by stagnation. As for the determinants of farm productivity change, we find that the integration of the DEA SEs in the second-stage regression is significant in explaining a proportion of the variance in the error term. Although our heteroscedastic regression results differ from those of standard OLS, in terms of significance and sign, they are consistent with theory and previous research.
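The general idea of attaching bootstrap standard errors to index estimates can be sketched as follows. This is a naive resampling bootstrap on synthetic farm-level indices, illustrative only; it is not Simar and Wilson's smoothed DEA bootstrap and does not use the Polish farm data.

```python
import random
import statistics

random.seed(7)

# Synthetic farm-level productivity indices for 250 farms (illustrative;
# lognormal around 1, roughly mimicking index-style data).
indices = [random.lognormvariate(0.0, 0.15) for _ in range(250)]

def bootstrap_se(data, n_boot=1000):
    """Standard error of the sample mean via naive resampling with replacement."""
    means = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in range(len(data))]
        means.append(statistics.mean(resample))
    return statistics.stdev(means)

se = bootstrap_se(indices)
print(round(se, 4))  # roughly sigma / sqrt(n) for this synthetic sample
```

In the article's setting, SEs of this kind are then carried into the second-stage regression so that farms whose indices are estimated less precisely are down-weighted appropriately.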
Abstract:
This article explores how data envelopment analysis (DEA), along with a smoothed bootstrap method, can be used in applied analysis to obtain more reliable efficiency rankings for farms. The main focus is the smoothed homogeneous bootstrap procedure introduced by Simar and Wilson (1998) to implement statistical inference for the original efficiency point estimates. Two main model specifications, constant and variable returns to scale, are investigated along with various choices regarding data aggregation. The coefficient of separation (CoS), a statistic that indicates the degree of statistical differentiation within the sample, is used to demonstrate the findings. The CoS suggests a substantive dependency of the results on the methodology and assumptions employed. Accordingly, some observations are made on how to conduct DEA in order to get more reliable efficiency rankings, depending on the purpose for which they are to be used. In addition, attention is drawn to the ability of the SLICE MODEL, implemented in GAMS, to enable researchers to overcome the computational burdens of conducting DEA (with bootstrapping).
Abstract:
At present, collective action regarding bio-security among UK cattle and sheep farmers is rare. Despite the occurrence in recent decades of catastrophic livestock diseases such as bovine spongiform encephalopathy (BSE) and foot-and-mouth disease (FMD), there are few national or local farmer-led animal health schemes. To explore the reasons for this apparent lack of interest, we utilised a socio-psychological approach to disaggregate the cognitive, emotive and contextual factors driving bio-security behaviour among cattle and sheep farmers in the United Kingdom (UK). In total, we interviewed 121 farmers in South-West England and Wales. The main analytical tools included content, cluster and logistic regression analyses. The results of the content analysis illustrated apparent 'dissonance' between bio-security attitudes and behaviour. Despite the heavy toll animal disease has taken on the agricultural economy, most study participants were dismissive of the many measures associated with bio-security. Justification for this lack of interest was largely framed in relation to the collective attribution of blame for the disease threats themselves. Indeed, epidemic diseases were largely attributed to external actors and agents. Reasons for outbreaks included inadequate border control, in tandem with ineffective policies and regulations. Conversely, endemic livestock disease was viewed as a problem for 'bad' farmers and not an issue for those individuals who managed their stock well. As such, there was little utility in forming groups to address what was largely perceived as an individual problem. Further, we found that attitudes toward bio-security did not appear to be influenced by any particular source of information per se. While strong negative attitudes were found toward specific sources of bio-security information, e.g. government leaflets, these appear simply to reflect widely held beliefs.
In relation to actual bio-security behaviours, the logistic regression analysis revealed no significant difference between in-scheme and out-of-scheme farmers. We conclude that in order to support collective action with regard to bio-security, messages need to be reframed and delivered from a neutral source. Efforts to support group formation must also recognise and address issues relating to perceptions of social connectedness among the communities involved. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
This paper presents a case study to illustrate the range of decisions involved in designing a sampling strategy for a complex, longitudinal research study. It is based on experience from the Young Lives project and identifies the approaches used to sample children for longitudinal follow-up in four less developed countries (LDCs). The rationale for decisions made and the resulting benefits, and limitations, of the approaches adopted are discussed. Of particular importance is the choice of sampling approach to yield useful analysis; specific examples are presented of how this informed the design of the Young Lives sampling strategy.
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the differences in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size based on the non-central t distribution with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
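As a rough illustration of the normal-approximation calculation contrasted above with the gold-standard non-central t method, the sketch below computes a total sample size for a 2 × 2 crossover design. The significance level, power, within-subject CV, and the assumption that the true ratio of means is exactly 1 are all inputs chosen for the example, not values from the paper.

```python
import math
from statistics import NormalDist

# Normal-approximation total sample size for average bioequivalence in a
# 2x2 crossover design, assuming the true ratio of means is exactly 1.
# alpha, power and cv below are illustrative assumptions.
alpha = 0.05   # one-sided significance level (per two-one-sided-tests bound)
power = 0.90
cv = 0.20      # within-subject coefficient of variation

delta = math.log(1.25)                     # equivalence margin on log scale (0.2231)
sigma_w = math.sqrt(math.log(1 + cv**2))   # within-subject SD on the log scale

z_a = NormalDist().inv_cdf(1 - alpha)
z_b = NormalDist().inv_cdf(1 - (1 - power) / 2)  # beta split over the two bounds

n_total = math.ceil(2 * (z_a + z_b) ** 2 * sigma_w**2 / delta**2)
print(n_total)
```

Because the normal approximation replaces the non-central t distribution, the resulting n can be slightly under- or overestimated relative to the gold-standard calculation, which is exactly the caution the abstract raises.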
Abstract:
This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon the control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that the experimental treatment is better than the control by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p(0). Next, the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored. Copyright © 2007 John Wiley & Sons, Ltd.
Abstract:
Time-resolved kinetic studies of the reaction of silylene, SiH2, with H2O and with D2O have been carried out in the gas phase at 297 K and at 345 K, using laser flash photolysis to generate and monitor SiH2. The reaction was studied independently as a function of H2O (or D2O) and SF6 (bath gas) pressures. At a fixed pressure of SF6 (5 Torr), [SiH2] decay constants, k(obs), showed a quadratic dependence on [H2O] or [D2O]. At a fixed pressure of H2O or D2O, k(obs) values were strongly dependent on [SF6]. The combined rate expression is consistent with a mechanism involving the reversible formation of a vibrationally excited zwitterionic donor-acceptor complex, H2Si...OH2 (or H2Si...OD2). This complex can then either be stabilized by SF6 or react with a further molecule of H2O (or D2O) in the rate-determining step. Isotope effects are in the range 1.0-1.5 and are broadly consistent with this mechanism. The mechanism is further supported by RRKM theory, which shows the association reaction to be close to its third-order region of pressure (SF6) dependence. Ab initio quantum calculations, carried out at the G3 level, support the existence of a hydrated zwitterion H2Si...(OH2)(2), which can rearrange to hydrated silanol with an energy barrier below the reaction energy threshold. This is the first example of a gas-phase-catalyzed silylene reaction.
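The quadratic dependence of k(obs) on [H2O] at fixed [SF6] follows from a standard steady-state treatment of a mechanism of this shape. As a sketch (the rate-constant labels k1, k-1, k2 and ks are introduced here for illustration, not taken from the paper):

SiH2 + H2O <-> H2Si...OH2*        (forward k1, reverse k-1)
H2Si...OH2* + H2O -> products     (k2, rate-determining)
H2Si...OH2* + SF6 -> stabilized   (ks)

Applying the steady-state approximation to the excited complex,

k1 [SiH2][H2O] = (k-1 + k2 [H2O] + ks [SF6]) [H2Si...OH2*]

so the observed first-order decay constant of SiH2 is

k(obs) = k1 [H2O] (k2 [H2O] + ks [SF6]) / (k-1 + k2 [H2O] + ks [SF6]).

When redissociation dominates (k-1 large), this reduces to

k(obs) ≈ (k1 k2 / k-1) [H2O]^2 + (k1 ks / k-1) [SF6][H2O],

which is quadratic in [H2O] at fixed [SF6] and strongly [SF6]-dependent at fixed [H2O], matching the observations described above.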
Abstract:
Sequential crystallization of poly(L-lactide) (PLLA) followed by poly(epsilon-caprolactone) (PCL) in double crystalline PLLA-b-PCL diblock copolymers is studied by differential scanning calorimetry (DSC), polarized optical microscopy (POM), wide-angle X-ray scattering (WAXS) and small-angle X-ray scattering (SAXS). Three samples with different compositions are studied. The sample with the shortest PLLA block (32 wt.-% PLLA) crystallizes from a homogeneous melt, the other two (with 44 and 60% PLLA) from microphase separated structures. The microphase structure of the melt is changed as PLLA crystallizes at 122 °C (a temperature at which the PCL block is molten) forming spherulites regardless of composition, even with 32% PLLA. SAXS indicates that a lamellar structure with a different periodicity than that obtained in the melt forms (for melt segregated samples). Where PCL is the majority block, PCL crystallization at 42 °C following PLLA crystallization leads to rearrangement of the lamellar structure, as observed by SAXS, possibly due to local melting at the interphases between domains. POM results showed that PCL crystallizes within previously formed PLLA spherulites. WAXS data indicate that the PLLA unit cell is modified by crystallization of PCL, at least for the two majority PCL samples. The PCL minority sample did not crystallize at 42 °C (well below the PCL homopolymer crystallization temperature), pointing to the influence of pre-crystallization of PLLA on PCL crystallization, although it did crystallize at lower temperature. Crystallization kinetics were examined by DSC and WAXS, with good agreement in general. The crystallization rate of PLLA decreased with increase in PCL content in the copolymers. The crystallization rate of PCL decreased with increasing PLLA content. The Avrami exponents were in general depressed for both components in the block copolymers compared to the parent homopolymers.
Figure: Polarized optical micrographs during isothermal crystallization of (a) homo-PLLA, (b) homo-PCL, and (c), (d) block copolymer after 30 min at 122 °C and after 15 min at 42 °C.