66 results for 1 sigma counting error
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
This contribution describes the optimisation of chlorine extraction from silicate samples by pyrohydrolysis prior to the precise determination of Cl stable-isotope compositions (δ37Cl) by gas-source, dual-inlet Isotope Ratio Mass Spectrometry (IRMS) on CH3Cl(g). The complete method was checked on three international reference materials for Cl content and on two laboratory glass standards. Whole-procedure blanks are lower than 0.5 μmol, corresponding to less than 10 wt.% of most of the sample chloride analysed. In the absence of an international chlorine isotope rock standard, we report here the Cl extracted, compared with accepted Cl contents, and the reproducibilities of Cl and δ37Cl measurements for the standard rocks. After extraction, the Cl contents of the three international references agreed within error with the accepted values (mean yield = 94 ± 10%), with reproducibilities better than 12% (1σ). The laboratory glass standards - andesite SO100DS92 and phonolite S92 - were used specifically to test the effect of chloride amount on the measurements. They gave Cl extraction yields of 100 ± 6% (1σ; n = 15) and 105 ± 8% (1σ; n = 7), respectively, with δ37Cl values of -0.51 ± 0.14‰ and -0.39 ± 0.17‰ (1σ). In summary, for silicate samples with Cl contents between 39 and 9042 ppm, the pyrohydrolysis/HPLC method leads to overall Cl extraction yields of 100 ± 8%, reproducibilities of Cl contents of 7%, and reproducibilities of δ37Cl measurements of 0.12‰ (all 1σ). The method was further applied to ten silicate rocks of various mineralogy and chemistry (a meteorite, fresh MORB glasses, altered basalts and serpentinized peridotites) chosen for their large range of Cl contents (70-2156 ppm) and their geological significance. δ37Cl values range between -2.33 and -0.50‰. These strictly negative values contrast with the large range of mainly positive values previously reported for comparable silicate samples, which are shown here to have been affected by analytical problems. We therefore propose a preliminary, revised terrestrial Cl cycle, dominated by negative and near-zero δ37Cl values. (C) 2007 Elsevier B.V. All rights reserved.
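As a quick illustration of the quoted statistics, here is a minimal sketch of how a mean extraction yield and its 1σ reproducibility are computed from replicate measurements; the replicate values below are hypothetical, not data from the paper.

```python
import statistics

# Hypothetical replicate Cl extraction yields (%) for one glass standard;
# illustrative values only, not the data from the paper.
yields = [98.2, 103.5, 96.7, 101.1, 99.4, 105.0, 97.8]

mean_yield = statistics.mean(yields)
sigma = statistics.stdev(yields)   # sample standard deviation = 1-sigma reproducibility

# Relative reproducibility, as quoted in the abstract (e.g. "better than 12% (1 sigma)")
relative_sigma_pct = 100 * sigma / mean_yield

print(f"mean yield = {mean_yield:.1f} +/- {sigma:.1f}% (1 sigma, n = {len(yields)})")
print(f"relative reproducibility = {relative_sigma_pct:.1f}% (1 sigma)")
```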
Abstract:
Simulations of ozone loss rates using a three-dimensional chemical transport model and a box model during recent Antarctic and Arctic winters are compared with experimentally derived loss rates. The study focuses on the Antarctic winter 2003, during which the first Antarctic Match campaign was organized, and on the Arctic winters 1999/2000 and 2002/2003. The maximum ozone loss rates retrieved by the Match technique for the winters and levels studied reached 6 ppbv/sunlit hour, and both types of simulation could generally reproduce the observations at the 2-sigma error-bar level. In some cases, for example for the Arctic winter 2002/2003 at the 475 K level, excellent agreement within the 1-sigma standard deviation level was obtained. An overestimation was also found with the box-model simulation at some isentropic levels for the Antarctic winter and the Arctic winter 1999/2000, indicating an overestimation of chlorine activation in the model. Loss rates in the Antarctic show signs of saturation in September, which has to be considered in the comparison. Sensitivity tests were performed with the box model in order to assess the impact of the kinetic parameters of the ClO-Cl2O2 catalytic cycle and of the total bromine content on the ozone loss rate. These tests resulted in a maximum change in ozone loss rates of 1.2 ppbv/sunlit hour, generally in high solar zenith angle conditions. In some cases, better agreement was achieved with faster photolysis of Cl2O2 and an additional source of total inorganic bromine, but at the expense of overestimating the smaller ozone loss rates derived later in the winter.
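For intuition about the kinetic sensitivity tests, here is a minimal sketch assuming the ClO-Cl2O2 (ClO dimer) cycle in its photolysis-limited regime, where each Cl2O2 photolysis destroys roughly two ozone molecules; the J value and Cl2O2 mixing ratio are placeholder assumptions, not values from the study.

```python
# Sketch of a box-model sensitivity test on the ClO-Cl2O2 cycle. In the
# photolysis-limited regime each Cl2O2 photolysis destroys about two O3
# molecules, so the loss rate scales with J(Cl2O2). All numbers below
# are placeholders, not values from the study.

def ozone_loss_rate(j_cl2o2_per_s, cl2o2_ppbv):
    """Ozone loss in ppbv per sunlit hour from the ClO dimer cycle."""
    seconds_per_hour = 3600.0
    return 2.0 * j_cl2o2_per_s * cl2o2_ppbv * seconds_per_hour

base = ozone_loss_rate(j_cl2o2_per_s=1.5e-3, cl2o2_ppbv=0.5)
fast = ozone_loss_rate(j_cl2o2_per_s=2.0e-3, cl2o2_ppbv=0.5)  # "faster photolysis" scenario
print(f"base loss: {base:.2f} ppbv/sunlit hour; faster J: {fast:.2f} ppbv/sunlit hour")
```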
Abstract:
Diffusive isotopic fractionation factors are important for understanding natural processes and have practical applications in radioactive waste storage and carbon dioxide sequestration. We determined the isotope fractionation factors and the effective diffusion coefficients of chloride and bromide ions during aqueous diffusion in polyacrylamide gel. Diffusion was measured as a function of temperature, time and concentration. The effect of temperature is relatively large on the diffusion coefficient (D) but only small on the isotope fractionation. For chlorine, the ratio D(35Cl)/D(37Cl) varied from 1.00128 ± 0.00017 (1σ) at 2 °C to 1.00192 ± 0.00015 at 80 °C. For bromine, D(79Br)/D(81Br) varied from 1.00098 ± 0.00009 at 2 °C to 1.00064 ± 0.00013 at 21 °C and 1.00078 ± 0.00018 (1σ) at 80 °C. There were no significant effects on the isotope fractionation due to concentration. The insensitivity of the diffusive isotope fractionation over the most common temperature range (0 to 30 °C) makes it particularly valuable for understanding processes in geological environments and an important natural tracer of fluid transport processes. (C) 2009 Elsevier Ltd. All rights reserved.
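These ratios translate directly into per-mil fractionations; a minimal sketch using the chlorine values quoted above:

```python
# Convert a diffusivity ratio into a per-mil fractionation,
# epsilon = (D_light / D_heavy - 1) * 1000, using the chlorine values
# quoted in the abstract.

def permil_fractionation(d_ratio):
    return (d_ratio - 1.0) * 1000.0

for temp_c, d35_over_d37 in [(2, 1.00128), (80, 1.00192)]:
    eps = permil_fractionation(d35_over_d37)
    print(f"{temp_c:>2} degC: D(35Cl)/D(37Cl) = {d35_over_d37} -> {eps:.2f} permil")
```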
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, in which metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or by signal-detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type I error rates when a random item effect is present. A mixed-effects model is proposed as a way to address the issue effectively, and our simulation studies examining Type I error rates indeed showed superior performance of the mixed-effects model analysis compared with the conventional by-participant analysis. We also present applications to real data that illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis, and we recommend the mixed-effects model analysis instead.
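To see how a shared item effect inflates Type I error in by-participant analysis, here is a minimal Monte-Carlo sketch; for brevity a Pearson correlation stands in for the gamma coefficient, and all sample sizes and variances are illustrative assumptions.

```python
# Monte-Carlo sketch of the Type I error inflation described above. Under
# the null (no participant-level judgment-memory relation), a random item
# effect shared by all participants pushes every per-participant
# correlation in the same direction, so the group-level t-test rejects
# far more often than 5%. Sizes and variances are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_subj, n_items = 1000, 30, 40
false_pos = 0

for _ in range(n_sims):
    item_effect = rng.normal(0.0, 1.0, n_items)   # random item effect
    judgment = item_effect + rng.normal(0.0, 1.0, (n_subj, n_items))
    memory = item_effect + rng.normal(0.0, 1.0, (n_subj, n_items))
    # By-participant analysis: one resolution estimate per participant
    # (Pearson r standing in for gamma), then a one-sample t-test.
    r = [stats.pearsonr(judgment[i], memory[i])[0] for i in range(n_subj)]
    _, p = stats.ttest_1samp(r, 0.0)
    false_pos += p < 0.05

print(f"Type I error rate of by-participant analysis: {false_pos / n_sims:.2f}"
      f" (nominal 0.05)")
```

A crossed random-effects model that includes the item effect absorbs this shared variance; fully crossed participant-by-item designs are typically fitted with packages such as lme4 in R or with variance-component formulations.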
Abstract:
In recent years an increasing number of papers have employed meta-analysis to integrate the effect sizes of a researcher's own series of studies within a single paper ("internal meta-analysis"). Although this approach has the obvious advantage of yielding narrower confidence intervals, we show that it can inadvertently inflate false-positive rates if researchers are motivated to use internal meta-analysis in order to obtain a significant overall effect. Specifically, if one decides whether to stop or to run a further replication experiment depending on the significance of an internal meta-analysis, false-positive rates increase beyond the nominal level. We conducted a set of Monte-Carlo simulations to demonstrate our argument, and provide a literature review to gauge awareness and prevalence of this issue. Furthermore, we make several recommendations for using internal meta-analysis when judging statistical significance.
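The inflation mechanism is easy to reproduce; below is a minimal Monte-Carlo sketch under the null hypothesis, pooling study-level statistics by Stouffer's method and stopping as soon as the pooled result is significant. Group sizes and the study cap are illustrative assumptions.

```python
# Monte-Carlo sketch of optional stopping via internal meta-analysis.
# Under the null, studies are added one at a time and pooled with
# Stouffer's method; sampling stops as soon as the pooled p-value is
# significant or the study cap is reached.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_per_group, max_studies = 5000, 20, 5
false_pos = 0

for _ in range(n_sims):
    z_scores = []
    for _ in range(max_studies):
        control = rng.normal(0.0, 1.0, n_per_group)   # true effect is zero
        treated = rng.normal(0.0, 1.0, n_per_group)
        t_stat, _ = stats.ttest_ind(treated, control)
        z_scores.append(t_stat)                       # t approximates z at these df
        pooled_z = sum(z_scores) / np.sqrt(len(z_scores))  # Stouffer pooling
        if 2.0 * stats.norm.sf(abs(pooled_z)) < 0.05:      # stop when "significant"
            false_pos += 1
            break

print(f"false-positive rate with stop-when-significant pooling: "
      f"{false_pos / n_sims:.3f} (nominal 0.05)")
```

Because the stopping rule conditions on significance, the realised rate lands above the nominal 5%, which is the optional-stopping effect the abstract describes.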
Abstract:
Two wavelet-based control variable transform schemes are described and used to model some important features of forecast-error statistics for use in variational data assimilation. The first is a conventional wavelet scheme and the other is an approximation to it. Their ability to capture the positional and scale-dependent aspects of covariance structures is tested in a two-dimensional latitude-height context. This is done by comparing the covariance structures implied by the wavelet schemes with those found from the explicit forecast-error covariance matrix, and with a non-wavelet-based covariance scheme currently used in an operational assimilation system. Qualitatively, the wavelet-based schemes show potential at modeling forecast-error statistics well without giving preference to either positional or scale-dependent aspects. The degree of spectral representation can be controlled by changing the number of spectral bands in the schemes, and the smallest number of bands that achieves adequate results is found for the model domain used. Evidence is found of a trade-off between the localization of features in positional and spectral space when the number of bands is changed. Judged by the implied covariance diagnostics, the wavelet-based schemes are found, on the whole, to give results closer to the diagnostics from the explicit matrix than the non-wavelet scheme does. Even though the covariances have the right qualitative character in spectral space, variances are too low at some wavenumbers and vertical correlation length scales are too long at most scales. The wavelet schemes are good at resolving variations in positional and scale-dependent horizontal length scales, although the length scales reproduced are usually too short. The second wavelet-based scheme is often better than the first in some important respects but, unlike the first, it has no exact inverse transform.
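A minimal 1-D sketch of the idea behind such schemes: specify one error variance per wavelet band and let the orthogonal synthesis imply the covariance, B = W D Wᵀ. The one-level Haar transform and the band variances below are toy assumptions; operational schemes use many bands in latitude-height space.

```python
# Minimal 1-D sketch of a wavelet control-variable transform: one error
# variance per band, with the implied covariance B = W D W^T recovered
# from the orthonormal synthesis matrix W.

import numpy as np

def haar_synthesis(n):
    """Orthonormal one-level Haar synthesis matrix W (n must be even)."""
    W = np.zeros((n, n))
    s = 1.0 / np.sqrt(2.0)
    for k in range(n // 2):
        W[2 * k, k] = s                    # coarse (large-scale) band
        W[2 * k + 1, k] = s
        W[2 * k, n // 2 + k] = s           # detail (small-scale) band
        W[2 * k + 1, n // 2 + k] = -s
    return W

n = 8
W = haar_synthesis(n)
band_variances = np.concatenate([np.full(n // 2, 4.0),   # large scales
                                 np.full(n // 2, 1.0)])  # small scales
B = W @ np.diag(band_variances) @ W.T    # implied forecast-error covariance
print(np.round(B, 2))  # neighbouring points correlate through the coarse band
```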
Abstract:
Background: Pharmacy aseptic units prepare and supply injectables in order to minimise risks. The UK National Aseptic Error Reporting Scheme has been collecting data on pharmacy compounding errors, including near-misses, since 2003. Objectives: The cumulative reports from January 2004 to December 2007, inclusive, were analysed. Methods: The variables of product type, error type, staff making and detecting errors, stage at which errors were detected, perceived contributory factors, and potential or actual outcomes were presented by cross-tabulation of the data. Results: A total of 4691 reports were submitted against an estimated 958 532 items made, giving an overall error rate of 0.49%. Most errors were detected before reaching patients; only 24 were detected during or after administration. The highest number of reports related to adult cytotoxic preparations (40%), and the most frequently recorded error was a labelling error (34.2%). Errors were mostly detected at the first check in the assembly area (46.6%). Individual staff error contributed most (78.1%) to overall errors, while errors with paediatric parenteral nutrition were attributed to low staffing levels more often than errors with other products. The majority of errors (68.6%) had no potential patient outcomes attached, while paediatric cytotoxic products and paediatric parenteral nutrition appeared to be associated with greater levels of perceived patient harm. Conclusions: The majority of reports related to near-misses, and this study highlights scope for examining current arrangements for checking and releasing products, particularly for paediatric cytotoxic and paediatric parenteral nutrition preparations within aseptic units, in the context of resource and capacity constraints.
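A minimal sketch of the kind of cross-tabulation described; the rows below are hypothetical stand-ins for the scheme's reports, and only the 4691 / 958 532 error-rate arithmetic comes from the abstract.

```python
# Sketch of a product-type by error-type cross-tabulation plus the
# overall error-rate arithmetic quoted in the abstract.

import pandas as pd

reports = pd.DataFrame({
    "product_type": ["adult cytotoxic", "adult cytotoxic",
                     "paediatric parenteral nutrition", "adult cytotoxic"],
    "error_type": ["labelling", "wrong diluent", "labelling", "labelling"],
})

print(pd.crosstab(reports["product_type"], reports["error_type"]))
print(f"overall error rate: {4691 / 958532:.2%}")   # -> 0.49%
```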
Abstract:
We study generalised prime systems P (1 < p_1 ≤ p_2 ≤ ..., with p_j ∈ ℝ tending to infinity) and the associated Beurling zeta function ζ_P(s) = ∏_{j=1}^∞ (1 − p_j^{−s})^{−1}. Under appropriate assumptions, we establish various analytic properties of ζ_P(s), including its analytic continuation, and we characterise the existence of a suitable generalised functional equation. In particular, we examine the relationship between a counterpart of the Prime Number Theorem (with error term) and the properties of the analytic continuation of ζ_P(s). Further, we study 'well-behaved' g-prime systems, namely systems for which both the prime and the integer counting function are asymptotically well behaved. Finally, we show that there exists a natural correspondence between generalised prime systems and suitable orders on ℕ². Some of the above results are relevant to the second author's theory of 'fractal membranes', whose spectral partition functions are given by Beurling-type zeta functions, as well as to joint work of that author and R. Nest on zeta functions attached to quasicrystals.
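For readability, the defining formula and the two counting functions mentioned above can be set out as follows; the notation π_P, N_P for the prime and integer counting functions is standard usage, reconstructed here rather than quoted from the paper.

```latex
% Beurling zeta function of a generalised prime system 1 < p_1 \le p_2 \le \dots
\zeta_{\mathcal P}(s) = \prod_{j=1}^{\infty} \bigl(1 - p_j^{-s}\bigr)^{-1},
  \qquad \operatorname{Re}(s) > 1 .

% Prime and integer counting functions whose asymptotic behaviour
% defines a 'well-behaved' g-prime system:
\pi_{\mathcal P}(x) = \#\{\, j \ge 1 : p_j \le x \,\}, \qquad
N_{\mathcal P}(x) = \#\{\, n \le x : n \text{ a generalised integer of } \mathcal P \,\}.
```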
Abstract:
Flow in the world's oceans occurs on a wide range of spatial scales, from a fraction of a metre up to many thousands of kilometres. In particular, regions of intense flow are often highly localised: for example, western boundary currents, equatorial jets, overflows and convective plumes. Conventional numerical ocean models generally use static meshes. The use of dynamically adaptive meshes has many potential advantages, but it needs to be guided by an error measure that reflects the underlying physics. A method of defining an error measure to guide an adaptive meshing algorithm for unstructured tetrahedral finite elements, utilizing an adjoint or goal-based method, is described here. The method is based upon a functional encompassing important features of the flow structure. The sensitivity of this functional with respect to the solution variables is used as the basis from which an error measure is derived. This error measure acts to predict those areas of the domain where resolution should be changed. A barotropic wind-driven gyre problem is used to demonstrate the capabilities of the method. The overall objective of this work is to develop robust error measures for use in an oceanographic context which ensure that areas of fine mesh resolution are used only where and when they are required. (c) 2006 Elsevier Ltd. All rights reserved.
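A minimal sketch of a goal-based (adjoint) error indicator, on a toy 1-D Poisson problem standing in for the paper's unstructured tetrahedral ocean meshes: the local residual is weighted by the adjoint sensitivity of a goal functional, flagging where refinement would most improve the goal. The operator, goal functional and iteration count are illustrative assumptions.

```python
# Toy goal-based (adjoint, dual-weighted residual) error indicator on a
# 1-D Poisson problem -u'' = f with homogeneous Dirichlet boundaries.

import numpy as np

n = 50                                   # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Discrete -u'' operator.
A = (np.diag(np.full(n, 2.0)) +
     np.diag(np.full(n - 1, -1.0), 1) +
     np.diag(np.full(n - 1, -1.0), -1)) / h**2
f = np.sin(np.pi * x)

# Inexact solution (a few Jacobi sweeps) standing in for a coarse-mesh
# solve, so the residual below is genuinely nonzero.
u = np.zeros(n)
diag = np.diag(A)
off = A - np.diag(diag)
for _ in range(30):
    u = (f - off @ u) / diag

# Goal functional J(u) = g . u (mean of u over a small sub-region); the
# adjoint solution z = A^{-T} g measures how local residuals pollute J.
g = np.zeros(n)
g[20:25] = 1.0 / 5.0
z = np.linalg.solve(A.T, g)

indicator = np.abs(z * (f - A @ u))      # goal-based error measure per cell
print("cells flagged for refinement:", np.argsort(indicator)[-5:])
```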
Abstract:
Measurements of anthropogenic tracers such as chlorofluorocarbons and tritium must be quantitatively combined with ocean general circulation models as a component of systematic model development. The authors have developed and tested an inverse method, based on a Green's function, to constrain general circulation models with transient tracer data. Using this method, chlorofluorocarbon-11 and -12 (CFC-11 and CFC-12) observations are combined with a North Atlantic configuration of the Miami Isopycnic Coordinate Ocean Model at 4/3° resolution. Systematic differences can be seen between the observed CFC concentrations and the prior CFC fields simulated by the model. These differences are reduced by the inversion, which determines the optimal gas transfer across the air-sea interface, accounting for uncertainties in the tracer observations. After including the effects of unresolved variability in the CFC fields, the model is found to be inconsistent with the observations because the model/data misfit slightly exceeds the error estimates. By excluding observations in waters ventilated north of the Greenland-Scotland ridge (σ0 < 27.82 kg m⁻³; shallower than about 2000 m), the fit is improved, indicating that the Nordic overflows are poorly represented in the model. Some systematic differences in the model/data residuals remain and are related, in part, to excessively deep model ventilation near Rockall and to deficient ventilation in the main thermocline of the eastern subtropical gyre. Nevertheless, there do not appear to be gross errors in the basin-scale model circulation. Analysis of the CFC inventory using the constrained model suggests that the North Atlantic Ocean shallower than about 2000 m was nearly 20% saturated in the mid-1990s. Overall, this basin is a sink for 22% of the total atmosphere-to-ocean CFC-11 flux, twice the global average value. The average water-mass formation rates over the CFC transient are 7.0 and 6.0 Sv (1 Sv = 10⁶ m³ s⁻¹) for subtropical mode water and subpolar mode water, respectively.
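A minimal sketch of the Green's-function inverse idea, with random placeholders for the model responses, data and covariances: each column of G is the modelled CFC response to perturbing one air-sea gas-transfer parameter, and a generalised-least-squares update yields optimal parameters plus a reduced chi-square consistency check like the one described above.

```python
# Sketch of a Green's-function inverse: linearise the model CFC response
# as d = G m + noise, then estimate m by generalised least squares with a
# prior. G, data and covariances are random placeholders, not model output.

import numpy as np

rng = np.random.default_rng(2)
n_obs, n_par = 100, 4

G = rng.normal(size=(n_obs, n_par))      # model Green's functions (placeholder)
m_prior = np.ones(n_par)                 # prior gas-transfer scaling factors
C_m = np.eye(n_par) * 0.25               # prior parameter covariance
C_d = np.eye(n_obs) * 0.1                # observation + representation error
d_obs = G @ (m_prior + rng.normal(0, 0.3, n_par)) + rng.normal(0, 0.3, n_obs)

# Gauss-Markov (generalised least squares) estimate.
lhs = G.T @ np.linalg.inv(C_d) @ G + np.linalg.inv(C_m)
rhs = G.T @ np.linalg.inv(C_d) @ (d_obs - G @ m_prior)
m_post = m_prior + np.linalg.solve(lhs, rhs)

misfit = d_obs - G @ m_post
chi2 = misfit @ np.linalg.inv(C_d) @ misfit / n_obs  # ~1 if model and data agree
print("posterior parameters:", np.round(m_post, 2), " reduced chi^2:", round(chi2, 2))
```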