852 results for Refractive Error


Relevance: 20.00%

Publisher:

Abstract:

Background: Pharmacy aseptic units prepare and supply injectables to minimise risks. The UK National Aseptic Error Reporting Scheme has been collecting data on pharmacy compounding errors, including near-misses, since 2003. Objectives: The cumulative reports from January 2004 to December 2007, inclusive, were analysed. Methods: The variables of product type, error type, staff making and detecting errors, stage at which errors were detected, perceived contributory factors, and potential or actual outcomes were presented by cross-tabulation of the data. Results: A total of 4691 reports were submitted against an estimated 958 532 items made, giving an overall error rate of 0.49%. Most errors were detected before reaching patients, with only 24 detected during or after administration. The highest number of reports related to adult cytotoxic preparations (40%), and the most frequently recorded error was a labelling error (34.2%). Errors were mostly detected at the first check in the assembly area (46.6%). Individual staff error contributed most (78.1%) to overall errors, while errors with paediatric parenteral nutrition were attributed to low staffing levels more often than errors with other products. The majority of errors (68.6%) had no potential patient outcomes attached, while paediatric cytotoxic products and paediatric parenteral nutrition appeared to be associated with greater levels of perceived patient harm. Conclusions: The majority of reports related to near-misses, and this study highlights scope for examining current arrangements for checking and releasing products, particularly for paediatric cytotoxic and paediatric parenteral nutrition preparations within aseptic units, but in the context of resource and capacity constraints.
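The headline figures above follow from simple arithmetic on the report counts, and the analysis itself is a cross-tabulation of categorical report fields. A minimal sketch of both, in Python with pandas, using entirely hypothetical report records (the scheme's actual fields and data are not reproduced here):

import pandas as pd

# Hypothetical near-miss/error reports; the real scheme's fields and data differ.
reports = pd.DataFrame({
    "product_type": ["adult cytotoxic", "adult cytotoxic", "paediatric PN", "CIVAS"],
    "error_type":   ["labelling", "wrong diluent", "labelling", "wrong dose"],
    "detected_at":  ["first check", "first check", "final check", "administration"],
})

# Overall error rate: reports submitted relative to items made (figures from the abstract).
reports_submitted, items_made = 4691, 958_532
print(f"overall error rate: {reports_submitted / items_made:.2%}")  # ~0.49%

# Cross-tabulate error type by product type, as proportions within each product.
xtab = pd.crosstab(reports["product_type"], reports["error_type"], normalize="index")
print(xtab.round(2))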

Relevance: 20.00%

Publisher:

Abstract:

Flow in the world's oceans occurs at a wide range of spatial scales, from a fraction of a metre up to many thousands of kilometres. In particular, regions of intense flow are often highly localised, for example, western boundary currents, equatorial jets, overflows and convective plumes. Conventional numerical ocean models generally use static meshes. The use of dynamically adaptive meshes has many potential advantages but needs to be guided by an error measure reflecting the underlying physics. A method of defining an error measure to guide an adaptive meshing algorithm for unstructured tetrahedral finite elements, utilizing an adjoint or goal-based method, is described here. This method is based upon a functional encompassing important features of the flow structure. The sensitivity of this functional with respect to the solution variables is used as the basis from which an error measure is derived. This error measure acts to predict those areas of the domain where resolution should be changed. A barotropic wind-driven gyre problem is used to demonstrate the capabilities of the method. The overall objective of this work is to develop robust error measures for use in an oceanographic context which will ensure areas of fine mesh resolution are used only where and when they are required. (c) 2006 Elsevier Ltd. All rights reserved.
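A goal-based (adjoint) error measure of this kind weights local equation residuals by the sensitivity of a chosen functional to the solution, so that resolution is concentrated where errors actually affect the quantity of interest. A minimal sketch of that idea on a toy one-dimensional mesh, with made-up residuals and sensitivities rather than anything from the finite-element model described above:

import numpy as np

# Toy 1-D mesh: per-cell residuals of the forward problem and adjoint
# sensitivities dJ/du of a goal functional J (both invented for illustration).
n_cells = 20
rng = np.random.default_rng(0)
residual = rng.normal(scale=1e-3, size=n_cells)            # local equation residual
adjoint_sensitivity = np.exp(-np.linspace(0, 3, n_cells))  # |dJ/du|, largest near the feature of interest

# Goal-based error indicator: residual weighted by how much it affects J.
eta = np.abs(adjoint_sensitivity * residual)

# Adapt: refine the cells contributing most to the error in J, coarsen those contributing least.
refine = eta > 2.0 * eta.mean()
coarsen = eta < 0.2 * eta.mean()
print("refine cells:", np.flatnonzero(refine))
print("coarsen cells:", np.flatnonzero(coarsen))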

Relevance: 20.00%

Publisher:

Abstract:

Much uncertainty in the value of the imaginary part of the refractive index of mineral dust contributes to uncertainty in the radiative effect of mineral dust in the atmosphere. A synthesis of optical, chemical and physical in-situ aircraft measurements from the DODO experiments during February and August 2006 is used to calculate the refractive index of mineral dust encountered over West Africa. Radiative transfer modeling and measurements of broadband shortwave irradiance at a range of altitudes are used to test and validate these calculations for a specific dust event on 23 August 2006 over Mauritania. Two techniques are used to determine the refractive index: firstly, a method combining measurements of scattering, absorption and size distributions with Mie code simulations, and secondly, a method using composition measured on filter samples to apportion the content of internally mixed quartz, calcite and iron oxide-clay aggregates, where the iron oxide is represented by either hematite or goethite and the clay by either illite or kaolinite. The imaginary part of the refractive index at 550 nm (ni550) is found to range between 0.0001i and 0.0046i, and where filter samples are available, agreement between the methods is found, depending on the mineral combination assumed. The refractive indices also agree well with AERONET data where comparisons are possible. ni550 is found to vary with dust source, which is investigated with the NAME model for each case. The relationships of both size distribution and ni550 to the accumulation-mode single scattering albedo at 550 nm (ω0550) are examined: size distribution is found to have no correlation with ω0550, while ni550 shows a strong linear relationship with ω0550. Radiative transfer modeling was performed with refractive indices from the different methods (Mie-derived values, and filter-sample composition assuming both internal and external mixing). Our calculations indicate that Mie-derived values of ni550, and the externally mixed dust where the iron oxide-clay aggregate corresponds to the goethite-kaolinite combination, result in the best agreement with the irradiance measurements. The radiative effect of the dust is found to be very sensitive to the mineral combination (and hence refractive index) assumed, and to whether the dust is assumed to be internally or externally mixed.
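The first technique, combining measured scattering, absorption and size distributions with Mie calculations, amounts to searching for the imaginary refractive index whose Mie-computed single scattering albedo matches the measured one. A rough sketch of such a retrieval loop is given below; the size distribution, real refractive index and "measured" ω0550 are placeholders rather than DODO values, and it assumes the third-party miepython package for the Mie calculation:

import numpy as np
import miepython  # assumed available; provides miepython.mie(m, x) -> (qext, qsca, qback, g)

WAVELENGTH = 0.55e-6  # 550 nm, in metres

def accumulation_mode(n_bins=60):
    """Hypothetical lognormal accumulation-mode size distribution (not DODO data)."""
    r = np.logspace(-8, -6, n_bins)  # radii, 0.01-1 micron
    dndlogr = np.exp(-0.5 * (np.log(r / 0.15e-6) / np.log(1.8)) ** 2)
    return r, dndlogr

def single_scattering_albedo(n_real, n_imag, radii, weights):
    """Bulk omega0 at 550 nm for spheres with refractive index n_real - i*n_imag."""
    x = 2 * np.pi * radii / WAVELENGTH       # size parameters
    m = complex(n_real, -n_imag)             # miepython convention: negative imaginary part absorbs
    qext, qsca, _, _ = miepython.mie(m, x)
    geom = np.pi * radii ** 2 * weights      # weighted geometric cross-sections
    return np.sum(qsca * geom) / np.sum(qext * geom)

# Retrieve n_i(550) by matching a measured omega0 (value here is illustrative only).
radii, weights = accumulation_mode()
omega0_measured = 0.97
grid = np.linspace(1e-4, 1e-2, 200)
errors = [abs(single_scattering_albedo(1.53, ni, radii, weights) - omega0_measured) for ni in grid]
ni_550 = grid[int(np.argmin(errors))]
print(f"retrieved n_i(550) ~ {ni_550:.4f}")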

Relevance: 20.00%

Publisher:

Abstract:

The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analysis of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon it is suggested to use, instead, an overall estimate of the misclassification error previously suggested and known as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel-Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard, for stroke prevention.
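For a single 2x2 table, Youden's index is J = sensitivity + specificity - 1, and 1 - J is the corresponding overall misclassification measure (the sum of the false-negative and false-positive rates). A minimal sketch of computing it per study and pooling across studies, using made-up counts and simple sample-size weights rather than the paper's Mantel-Haenszel weighting:

import numpy as np

# Hypothetical per-study 2x2 counts: (true positives, false negatives, true negatives, false positives)
studies = np.array([
    [45,  5, 80, 20],
    [30, 10, 60, 15],
    [55,  5, 90, 30],
])

tp, fn, tn, fp = studies.T
sens = tp / (tp + fn)
spec = tn / (tn + fp)
youden = sens + spec - 1               # J for each study
misclassification = 1 - youden         # per-study overall misclassification measure

# Pool across studies with simple sample-size weights (illustrative only;
# the paper proposes a Mantel-Haenszel-type estimator with its own weighting).
weights = studies.sum(axis=1)
pooled = np.average(misclassification, weights=weights)
print("per-study J:", np.round(youden, 3))
print("pooled misclassification:", round(pooled, 3))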

Relevance: 20.00%

Publisher:

Abstract:

Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that the long-run driver of Brazilian sugar prices is the oil price, that there are nonlinearities in the adjustment of sugar and ethanol prices to oil prices, but that adjustment between ethanol and sugar prices is linear.
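An error correction model ties short-run price changes to the lagged deviation from the long-run cointegrating relationship, and a nonlinear adjustment simply makes that response a nonlinear function of the disequilibrium error. A minimal simulation sketch with invented parameters (the paper estimates such models by Bayesian MCMC rather than simulating them):

import numpy as np

rng = np.random.default_rng(1)
T = 500
beta = 0.8    # assumed long-run elasticity of sugar with respect to oil prices (log scale)
alpha = -0.3  # speed of adjustment toward equilibrium

oil = np.cumsum(rng.normal(scale=0.02, size=T))  # random-walk log oil price
sugar = np.empty(T)
sugar[0] = beta * oil[0]

for t in range(1, T):
    ect = sugar[t - 1] - beta * oil[t - 1]       # disequilibrium (error correction) term
    # Nonlinear adjustment: the response strengthens with the size of the disequilibrium
    # (a cubic term is just one convenient illustration of nonlinearity).
    d_sugar = alpha * ect + alpha * ect**3 + rng.normal(scale=0.01)
    sugar[t] = sugar[t - 1] + d_sugar

print("sample correlation of log price levels:", round(np.corrcoef(sugar, oil)[0, 1], 3))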

Relevance: 20.00%

Publisher:

Abstract:

The theta-logistic is a widely used generalisation of the logistic model of regulated biological processes, used in particular to model population regulation; the parameter theta gives the shape of the relationship between per-capita population growth rate and population size. Estimation of theta from population counts is, however, subject to bias, particularly when there are measurement errors. Here we identify factors disposing towards accurate estimation of theta by simulating populations regulated according to the theta-logistic model. Factors investigated were measurement error, environmental perturbation and length of time series. Large measurement errors bias estimates of theta towards zero. Where the estimated theta is close to zero, the estimated annual return rate may help resolve whether this is due to bias. Environmental perturbations help yield unbiased estimates of theta. Where environmental perturbations are large, estimates of theta are likely to be reliable even when measurement errors are also large. By contrast, where the environment is relatively constant, unbiased estimates of theta can only be obtained if populations are counted precisely. Our results have practical conclusions for the design of long-term population surveys. Estimation of the precision of population counts would be valuable, and could be achieved in practice by repeating counts in at least some years. Increasing the length of time series beyond ten or twenty years yields only small benefits. If populations are measured with appropriate accuracy, given the level of environmental perturbation, unbiased estimates can be obtained from relatively short censuses. These conclusions are optimistic for estimation of theta. (c) 2008 Elsevier B.V. All rights reserved.
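In the theta-logistic model the per-capita growth rate declines with population size as r_max[1 - (N/K)^theta]. A minimal simulation sketch of the setting studied above, with environmental noise and lognormal measurement error, using invented parameter values rather than those used in the paper:

import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(theta=1.0, r_max=0.5, K=1000, T=30, env_sd=0.1, obs_sd=0.2, n0=200):
    """Theta-logistic dynamics with environmental noise plus lognormal count (measurement) error."""
    n = np.empty(T)
    n[0] = n0
    for t in range(1, T):
        r = r_max * (1.0 - (n[t - 1] / K) ** theta)          # density-dependent growth rate
        n[t] = n[t - 1] * np.exp(r + rng.normal(scale=env_sd))
    observed = n * np.exp(rng.normal(scale=obs_sd, size=T))   # counts with measurement error
    return n, observed

true_n, counts = simulate_counts()
# Realised per-capita growth rates from the noisy counts, as an estimator of theta would see them.
growth = np.diff(np.log(counts))
print("mean observed per-capita growth rate:", round(growth.mean(), 3))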


Relevance: 20.00%

Publisher:

Abstract:

Purpose. Accommodation can mask hyperopia and reduce the accuracy of non-cycloplegic refraction. It is therefore important to minimize accommodation so as to obtain as accurate a measure of hyperopia as possible. To characterize the parameters required to measure the maximally hyperopic error using photorefraction, we used different target types and distances to determine which target was most likely to maximally relax accommodation and thus most accurately detect hyperopia in an individual. Methods. A PlusoptiX SO4 infra-red photorefractor was mounted in a remote haploscope which presented the targets. All participants were tested with targets at four fixation distances between 0.3 and 2 m containing all combinations of blur, disparity, and proximity/looming cues. Thirty-eight infants (6 to 44 weeks) were studied longitudinally, and 104 children [4 to 15 years (mean 6.4)] and 85 adults, with a range of refractive errors and binocular vision status, were tested once. Cycloplegic refraction data were available for a subset of 59 participants spread across the age range. Results. The maximally hyperopic refraction (MHR) found at any time in the session was most frequently obtained when fixating the most distant targets and those containing disparity and dynamic proximity/looming cues. Presence or absence of blur was less significant, and targets containing only a single depth cue were also less likely to produce the MHR. MHR correlated closely with cycloplegic refraction (r = 0.93, mean difference 0.07 D, p = n.s., 95% confidence interval +/-<0.25 D) after correction by a calibration factor. Conclusions. Maximum relaxation of accommodation occurred for binocular targets receding into the distance. Proximal and disparity cues aid relaxation of accommodation to a greater extent than blur, and thus non-cycloplegic refraction targets should incorporate these cues. This is especially important in screening contexts, where there is only a brief opportunity to test for significant hyperopia. MHR in our laboratory was found to be a reliable estimate of cycloplegic refraction. (Optom Vis Sci 2009;86:1276-1286)
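The quantity of interest is simply the most hyperopic (most positive) non-cycloplegic reading obtained across the target conditions, adjusted by a calibration factor. A minimal sketch with invented readings and an assumed linear calibration (the study's own calibration is not reproduced here):

# Hypothetical photorefraction readings (dioptres) for one participant across
# target conditions at different distances and cue combinations.
readings = {
    ("0.3 m", "blur+disparity+proximity"): +0.25,
    ("1.0 m", "disparity+proximity"):      +1.00,
    ("2.0 m", "disparity+proximity"):      +1.75,  # most distant, binocular, receding target
    ("2.0 m", "blur only"):                +0.50,
}

# Maximally hyperopic refraction: the most positive reading found in the session.
mhr_raw = max(readings.values())

# Assumed linear calibration against cycloplegic retinoscopy (slope and offset are illustrative).
slope, offset = 1.0, 0.07
mhr_calibrated = slope * mhr_raw + offset
print(f"MHR = {mhr_raw:+.2f} D, calibrated estimate = {mhr_calibrated:+.2f} D")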

Relevance: 20.00%

Publisher:

Abstract:

The convergence speed of the standard least mean square (LMS) adaptive array may be degraded in mobile communication environments. Various conventional variable step-size LMS algorithms have been proposed to enhance the convergence speed while maintaining a low steady-state error. In this paper, a new variable step-size LMS algorithm based on the accumulated instantaneous error is proposed. In the proposed algorithm, the accumulated instantaneous error is used to vary the step-size parameter of the standard LMS. Simulation results show that the proposed algorithm is simpler and yields better performance than conventional variable step-size LMS algorithms.
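The standard LMS update is w(n+1) = w(n) + mu e(n) x(n); a variable step-size variant adapts mu from the error signal. The sketch below drives mu with a leaky accumulated sum of instantaneous squared errors, which is an assumed illustrative rule rather than necessarily the exact update proposed in the paper:

import numpy as np

def variable_step_lms(x, d, n_taps=4, mu_min=1e-4, mu_max=0.1, leak=0.9, gamma=0.05):
    """LMS filter whose step size is driven by an accumulated instantaneous error."""
    w = np.zeros(n_taps)
    acc = 0.0
    y_hat = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]          # current tap vector [x[n], x[n-1], ...]
        y_hat[n] = w @ u
        e = d[n] - y_hat[n]                        # instantaneous error
        acc = leak * acc + e**2                    # accumulated (leaky) squared error
        mu = np.clip(gamma * acc, mu_min, mu_max)  # larger accumulated error -> larger step
        w += mu * e * u                            # standard LMS weight update
    return w, y_hat

# Usage: identify an unknown 4-tap FIR channel from noisy observations.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
h_true = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))
w, _ = variable_step_lms(x, d)
print("estimated taps:", np.round(w, 3))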