947 results for "Measurement error"
Abstract:
Background Pharmacy aseptic units prepare and supply injectables to minimise risks. The UK National Aseptic Error Reporting Scheme has been collecting data on pharmacy compounding errors, including near-misses, since 2003. Objectives The cumulative reports from January 2004 to December 2007, inclusive, were analysed. Methods The variables of product type, error type, staff making and detecting errors, stage at which errors were detected, perceived contributory factors, and potential or actual outcomes were presented by cross-tabulation of the data. Results A total of 4691 reports were submitted against an estimated 958 532 items made, giving an overall error rate of 0.49%. Most errors were detected before reaching patients; only 24 were detected during or after administration. The highest number of reports related to adult cytotoxic preparations (40%), and the most frequently recorded error was a labelling error (34.2%). Errors were mostly detected at the first check in the assembly area (46.6%). Individual staff error contributed most (78.1%) to overall errors, while errors with paediatric parenteral nutrition were attributed to low staffing levels more often than errors with other products. The majority of errors (68.6%) had no potential patient outcomes attached, while paediatric cytotoxic products and paediatric parenteral nutrition appeared to be associated with greater levels of perceived patient harm. Conclusions The majority of reports related to near-misses, and this study highlights scope for examining current arrangements for checking and releasing products, particularly for paediatric cytotoxic and paediatric parenteral nutrition preparations within aseptic units, in the context of resource and capacity constraints.
Abstract:
Flow in the world's oceans occurs at a wide range of spatial scales, from a fraction of a metre up to many thousands of kilometres. Regions of intense flow are often highly localised: for example, western boundary currents, equatorial jets, overflows and convective plumes. Conventional numerical ocean models generally use static meshes. The use of dynamically adaptive meshes has many potential advantages, but needs to be guided by an error measure that reflects the underlying physics. A method of defining an error measure to guide an adaptive meshing algorithm for unstructured tetrahedral finite elements, using an adjoint or goal-based method, is described here. The method is based upon a functional encompassing important features of the flow structure. The sensitivity of this functional with respect to the solution variables is used as the basis from which an error measure is derived. This error measure predicts those areas of the domain where resolution should be changed. A barotropic wind-driven gyre problem is used to demonstrate the capabilities of the method. The overall objective of this work is to develop robust error measures for use in an oceanographic context, ensuring that fine mesh resolution is used only where and when it is required.
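The goal-based (adjoint) idea described above can be sketched for a generic linear system (not the paper's tetrahedral finite-element setting; all names and numbers here are invented for illustration). For A u = f and a goal functional J(u) = g^T u, the error in J committed by an approximate solution u_h equals psi^T r, where psi solves the adjoint problem A^T psi = g and r = f - A u_h is the residual:

```python
import numpy as np

# Goal-based error estimation sketch for a linear problem A u = f.
# The error in the goal functional J(u) = g @ u made by an approximate
# solution u_h is estimated as psi @ r, where psi solves the adjoint
# problem A.T psi = g and r = f - A u_h is the residual.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
g = rng.standard_normal(n)                 # weights defining the goal functional

u = np.linalg.solve(A, f)                  # "exact" solution
u_h = u + 1e-3 * rng.standard_normal(n)    # perturbed approximate solution

psi = np.linalg.solve(A.T, g)              # adjoint solution
residual = f - A @ u_h
estimate = psi @ residual                  # estimated error in J
true_error = g @ u - g @ u_h
```

For a linear problem the adjoint estimate recovers the goal error exactly; in the nonlinear finite-element setting it becomes an indicator of where mesh resolution matters most for the functional.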
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analyses of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon, it is suggested to use instead an overall estimate of the misclassification error based on the previously suggested Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel–Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which can then cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis of duplex Doppler ultrasound, with angiography as the reference standard, in stroke prevention.
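A minimal sketch of the quantities involved: Youden's index per study (J = sensitivity + specificity - 1) and a Mantel–Haenszel pooled odds ratio as one possible study-adjusted summary. The 2x2 counts are invented, and this is not the paper's exact estimator:

```python
# Per-study Youden index and a Mantel-Haenszel pooled odds ratio.
# Each study is a 2x2 table (tp, fn, fp, tn); the counts are made up.
studies = [(90, 10, 20, 80), (75, 25, 15, 85), (60, 40, 10, 90)]

for tp, fn, fp, tn in studies:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    youden = sens + spec - 1          # J = Se + Sp - 1
    print(f"Se={sens:.2f} Sp={spec:.2f} J={youden:.2f}")

# Mantel-Haenszel pooled odds ratio across the 2x2 tables, treating
# study as a stratification factor.
num = sum(tp * tn / (tp + fn + fp + tn) for tp, fn, fp, tn in studies)
den = sum(fn * fp / (tp + fn + fp + tn) for tp, fn, fp, tn in studies)
or_mh = num / den
```

The stratified structure is what lets the pooled summary adjust for a study effect while the per-study cut-off variation is absorbed into Youden's index.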
Abstract:
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that the long-run driver of Brazilian sugar prices is the oil price, that there are nonlinearities in the adjustment of sugar and ethanol prices to oil prices, but that adjustment between ethanol and sugar prices is linear.
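A minimal linear baseline of the error correction idea (the paper's models are nonlinear and estimated by Bayesian MCMC; this sketch uses simulated prices, assumed parameter values, and plain OLS). The change in one price responds to the lagged disequilibrium error from the long-run relationship:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
beta = 0.8      # assumed long-run coefficient linking sugar to oil
alpha = -0.2    # speed of adjustment back toward equilibrium

oil = np.cumsum(rng.standard_normal(T))          # random-walk driver
sugar = np.empty(T)
sugar[0] = 0.0
for t in range(1, T):
    ect = sugar[t - 1] - beta * oil[t - 1]       # disequilibrium error
    sugar[t] = sugar[t - 1] + alpha * ect + 0.1 * rng.standard_normal()

# OLS regression of d(sugar) on the lagged disequilibrium error
# recovers the adjustment speed alpha.
d_sugar = np.diff(sugar)
ect_lag = sugar[:-1] - beta * oil[:-1]
X = np.column_stack([np.ones(T - 1), ect_lag])
coef, *_ = np.linalg.lstsq(X, d_sugar, rcond=None)
alpha_hat = coef[1]
```

In the generalized models of the paper, the single coefficient alpha is replaced by a potentially nonlinear function of the disequilibrium error.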
Abstract:
The theta-logistic model is a widely used generalisation of the logistic model of regulated biological processes, used in particular to model population regulation; the parameter theta gives the shape of the relationship between per-capita population growth rate and population size. Estimation of theta from population counts is, however, subject to bias, particularly when there are measurement errors. Here we identify factors favouring accurate estimation of theta by simulating populations regulated according to the theta-logistic model. The factors investigated were measurement error, environmental perturbation and length of the time series. Large measurement errors bias estimates of theta towards zero. Where the estimated theta is close to zero, the estimated annual return rate may help resolve whether this is due to bias. Environmental perturbations help yield unbiased estimates of theta: where environmental perturbations are large, estimates of theta are likely to be reliable even when measurement errors are also large. By contrast, where the environment is relatively constant, unbiased estimates of theta can only be obtained if populations are counted precisely. Our results have practical implications for the design of long-term population surveys. Estimating the precision of population counts would be valuable, and could be achieved in practice by repeating counts in at least some years. Increasing the length of the time series beyond 10 or 20 years yields only small benefits. If populations are measured with appropriate accuracy, given the level of environmental perturbation, unbiased estimates can be obtained from relatively short censuses. These conclusions are optimistic for the estimation of theta.
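The simulation setup described can be sketched as follows; parameter values are illustrative, and environmental perturbation and measurement error are taken as lognormal by assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def theta_logistic(n0, r, K, theta, T, sigma_env, sigma_obs):
    """Simulate a theta-logistic population with environmental
    perturbation and lognormal measurement error on the counts."""
    n = np.empty(T)
    n[0] = n0
    for t in range(1, T):
        # per-capita growth shaped by theta, plus environmental noise
        growth = r * (1.0 - (n[t - 1] / K) ** theta)
        n[t] = n[t - 1] * np.exp(growth + sigma_env * rng.standard_normal())
    counts = n * np.exp(sigma_obs * rng.standard_normal(T))  # observed counts
    return n, counts

true_n, observed = theta_logistic(
    n0=20, r=0.5, K=100, theta=1.0, T=30, sigma_env=0.1, sigma_obs=0.2
)
```

Fitting theta to `observed` rather than `true_n`, across grids of `sigma_env` and `sigma_obs`, is the kind of experiment that exposes the bias towards zero described above.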
Abstract:
The convergence speed of the standard least mean square (LMS) adaptive array may be degraded in mobile communication environments. Various conventional variable step-size LMS algorithms have been proposed to enhance the convergence speed while maintaining a low steady-state error. In this paper, a new variable step-size LMS algorithm based on the accumulated instantaneous error is proposed: the accumulated instantaneous error is used to update the step-size parameter of the standard LMS algorithm. Simulation results show that the proposed algorithm is simpler than, and performs better than, conventional variable step-size LMS algorithms.
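An illustrative variant of the idea, not the paper's exact update rule: the step size is driven by a leaky accumulation of the squared instantaneous error, so it stays large while the filter is far from converged and shrinks near the steady state. The unknown system and all constants here are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# System identification with a variable step-size LMS filter.
h = np.array([0.7, -0.3, 0.2])       # unknown system (assumed)
N, M = 5000, 3
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(M)
mu_min, mu_max = 0.001, 0.1
acc = 0.0                            # accumulated error power
for k in range(M, N):
    u = x[k - M + 1:k + 1][::-1]     # most recent M samples, newest first
    e = d[k] - w @ u                 # instantaneous error
    acc = 0.97 * acc + 0.03 * e * e  # leaky accumulation of e^2
    mu = np.clip(mu_max * acc, mu_min, mu_max)
    w += mu * e * u                  # standard LMS update, variable step
```

The weights `w` converge toward `h`; the large early step speeds convergence, while the small final step keeps the steady-state error low.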
Abstract:
Exact error estimates for evaluating multi-dimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide; an algorithm with such an exact rate is called optimal, and has an unimprovable rate of convergence. The existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computation are considered: classes of functions with bounded derivatives and classes satisfying Hölder-type conditions. The aim of the paper is to analyse the performance of two optimal classes of algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
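The gain from exploiting smoothness can be illustrated by comparing plain Monte Carlo with a stratified (randomized midpoint) rule on a smooth two-dimensional integrand; the integrand and grid size are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x, y):
    return np.exp(-(x * x + y * y))      # smooth 2-D integrand on [0,1]^2

# Plain Monte Carlo: error decays like n^(-1/2) regardless of smoothness.
n = 10_000
xs, ys = rng.random(n), rng.random(n)
plain = f(xs, ys).mean()

# Stratified rule: one uniformly random point per cell of a 100x100 grid.
# For smooth integrands this exploits regularity and converges faster
# than plain Monte Carlo at the same number of samples.
m = 100
grid = (np.arange(m) + 0.5) / m          # cell centres
gx, gy = np.meshgrid(grid, grid)
jitter_x = (rng.random((m, m)) - 0.5) / m
jitter_y = (rng.random((m, m)) - 0.5) / m
strat = f(gx + jitter_x, gy + jitter_y).mean()
```

Both estimators use 10 000 evaluations, but the stratified estimate is typically orders of magnitude closer to the true value of about 0.5577, reflecting the improved randomized rate for smooth integrands.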
Using simulation to determine the sensibility of error sources for software effort estimation models