866 results for Medication Error
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score to classify study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology, focusing solely on separate analyses of sensitivity and specificity, might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon, it is suggested to use instead an overall estimate of the misclassification error, previously suggested and used as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel-Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which can then cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the reference standard, for stroke prevention.
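The summary index at the heart of this approach is simple to state: for a single study's 2x2 table, Youden's index is J = sensitivity + specificity - 1, which is invariant to the mix of diseased and healthy participants. A minimal sketch in Python (the function name and example counts are illustrative; this is not the paper's Mantel-Haenszel estimator):

```python
def youden_index(tp, fn, fp, tn):
    """Youden's index J = sensitivity + specificity - 1 for one 2x2 table.

    J = 0 means the test is no better than chance; J = 1 is a perfect test.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# Example: 90 true positives, 10 false negatives,
#          20 false positives, 80 true negatives.
j = youden_index(90, 10, 20, 80)  # sensitivity 0.9, specificity 0.8 -> J = 0.7
```

Because J combines both error rates, it shifts far less than sensitivity or specificity alone when a study moves its cut-off value.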
Abstract:
Background: Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led, information-technology-based complex intervention, compared with simple feedback, in reducing the proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods: Research subject group: "at-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel-group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either: (i) computer-generated feedback; or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention:
- with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs;
- with a computer-recorded diagnosis of asthma being prescribed beta-blockers;
- aged 75 years and older receiving long-term prescriptions for angiotensin-converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months.
Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation of the cost per error avoided will be conducted from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective or, conversely, ineffective.
Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm, compared with an 11% reduction in the simple feedback arm. Discussion: At the time of submission of this article, 72 general practices had been recruited (36 in each arm of the trial) and the interventions had been delivered. Analysis has not yet been undertaken.
Abstract:
Patients want and need comprehensive and accurate information about their medicines so that they can participate in decisions about their healthcare. In particular, they require information about the likely risks and benefits associated with the different treatment options. However, providing this information in a form that people can readily understand and use is a considerable challenge for healthcare professionals. One recent attempt to standardise the language of risk has been to produce sets of verbal descriptors that correspond to specific probability ranges, such as those outlined in the European Commission (EC) Pharmaceutical Committee guidelines of 1998 for describing the incidence of adverse effects. This paper provides an overview of a number of studies, involving members of the general public, patients, and hospital doctors, that evaluated the utility of the EC guideline descriptors (very common, common, uncommon, rare, very rare). In all studies it was found that people significantly over-estimated the likelihood of adverse effects occurring, given specific verbal descriptors. This in turn resulted in significantly higher ratings of the perceived risks to health and significantly lower ratings of the likelihood of taking the medicine. Such problems of interpretation are not restricted to the EC guideline descriptors. Similar levels of misinterpretation have also been demonstrated with two other recently advocated risk scales (Calman's verbal descriptor scale and Barclay, Costigan and Davies' lottery scale). In conclusion, the challenge for risk communicators and for future research will be to produce a language of risk that is sufficiently flexible to take into account different perspectives, as well as changing circumstances and contexts of illness and its treatments.
In the meantime, we urge the EC and other legislative bodies to stop recommending the use of specific verbal labels or phrases until there is a stronger evidence base to support their use.
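For reference, the descriptor sets discussed above tie each verbal label to a numerical frequency band; the thresholds below follow the commonly cited EU convention (very common >= 10%, common 1-10%, uncommon 0.1-1%, rare 0.01-0.1%, very rare < 0.01%). A minimal sketch of that mapping (the function name is ours, and the bands are quoted from the convention rather than from this paper):

```python
def eu_descriptor(p):
    """Map an incidence probability p (between 0 and 1) to the EU verbal
    descriptor band commonly cited for adverse-effect frequencies."""
    if p >= 0.10:
        return "very common"
    if p >= 0.01:
        return "common"
    if p >= 0.001:
        return "uncommon"
    if p >= 0.0001:
        return "rare"
    return "very rare"

label = eu_descriptor(0.05)  # a 5% incidence falls in the "common" band
```

The studies summarised above found that lay estimates attached to these labels routinely fall far outside the assigned bands, which is precisely the mismatch the authors warn about.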
Abstract:
Two experiments, using a controlled empirical methodology, investigated the effects of presenting information about medicines using a more personalised style of expression. In both studies, members of the general public were given a hypothetical scenario about visiting the doctor, being diagnosed with a particular illness, and being prescribed a medication. They were also given a written explanation about the medicine and were asked to provide ratings on a number of measures, including satisfaction, perceived risk to health, and intention to comply. In Experiment 1 the explanation focused only on possible side effects of the medicine, whereas in Experiment 2 a fuller explanation was provided, which included information about the illness, prescribed drug, its dosage and contraindications as well as its side effects. In both studies, use of a more personalised style resulted in significantly higher ratings of satisfaction and significantly lower ratings of likelihood of side effects occurring and of perceived risk to health. In Experiment 2 it also led to significantly improved recall for the written information.
Communicating risk of medication side effects: an empirical evaluation of EU recommended terminology
Abstract:
Two experiments compared people's interpretation of verbal and numerical descriptions of the risk of medication side effects occurring. The verbal descriptors were selected from those recommended for use by the European Union (very common, common, uncommon, rare, very rare). Both experiments used a controlled empirical methodology, in which nearly 500 members of the general population were presented with a fictitious (but realistic) scenario about visiting the doctor and being prescribed medication, together with information about the medicine's side effects and their probability of occurrence. Experiment 1 found that, in all three age groups tested (18-40, 41-60 and over 60), participants given a verbal descriptor (very common) estimated side effect risk to be considerably higher than those given a comparable numerical description. Furthermore, the differences in interpretation were reflected in their judgements of side effect severity, risk to health, and intention to comply. Experiment 2 confirmed these findings using two different verbal descriptors (common and rare) and scenarios describing either relatively severe or relatively mild side effects. Strikingly, only 7 of the 180 participants in this study gave a probability estimate that fell within the EU-assigned numerical range. Thus, large-scale use of the descriptors could have serious negative consequences for individual and public health. We therefore recommend that the EU and national authorities suspend their recommendations regarding these descriptors until a more substantial evidence base is available to support their appropriate use.
Abstract:
This study investigates whether, and how, people's perception of risk and intended health behaviours are affected by whether a medicine is prescribed for themselves or for a young child. The question is relevant to the issue of whether it is beneficial to produce medicines information that is tailored to particular subgroups of the population, such as parents of young children. In the experiment, participants read scenarios which referred either to themselves or their (imagined) 1-year-old child, and were required to make a number of risk judgements. The results showed that both parents and non-parents were less satisfied, perceived side effects to be more severe and more likely to occur, risk to health to be higher, and said that they would be less likely to take (or give) the medicine when the recipient was the child. On the basis of the findings, it is suggested that it may well be beneficial to tailor materials to broader classes of patient type.
Abstract:
The convergence speed of the standard least mean square (LMS) adaptive array may be degraded in mobile communication environments. Various conventional variable step-size LMS algorithms have been proposed to enhance convergence speed while maintaining a low steady-state error. In this paper, a new variable step-size LMS algorithm based on the accumulated instantaneous error concept is proposed. In the proposed algorithm, the accumulated instantaneous error is used to vary the step-size parameter of the standard LMS. Simulation results show that the proposed algorithm is simpler and yields better performance than conventional variable step-size LMS.
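The idea can be sketched as follows: keep a leaky running sum of the instantaneous squared error and let it drive the step size, so the filter takes large steps while the error is big and small steps near convergence. A minimal system-identification sketch (the smoothing factor, gain and clipping bounds are our illustrative choices, not the paper's exact update rule):

```python
import numpy as np

def vss_lms(x, d, n_taps, mu_min=1e-3, mu_max=0.05, alpha=0.95, gamma=0.5):
    """LMS filter whose step size is driven by an accumulated instantaneous error.

    acc is a leaky accumulation of e^2; the step size mu is proportional to acc,
    clipped to [mu_min, mu_max], so adaptation slows as the error shrinks.
    """
    w = np.zeros(n_taps)
    acc = 0.0
    errs = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ u                    # instantaneous error
        acc = alpha * acc + (1 - alpha) * e * e
        mu = min(max(gamma * acc, mu_min), mu_max)
        w = w + 2 * mu * e * u              # standard LMS weight update
        errs.append(e)
    return w, np.asarray(errs)

# Identify a short FIR channel from noise-free input/output data.
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h_true)[:len(x)]
w, errs = vss_lms(x, d, n_taps=4)
```

Early on the accumulated error is large, so `mu` sits at `mu_max` and convergence is fast; once the error collapses, `mu` falls toward `mu_min`, holding the steady-state error low.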
Abstract:
Exact error estimates for evaluating multi-dimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide. An algorithm with such an exact rate is called optimal; such an algorithm has an unimprovable rate of convergence. The existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computations are considered: classes of functions with bounded derivatives and Hölder-type conditions. The aim of the paper is to analyze the performance of two optimal classes of algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
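A concrete baseline on the randomized side is crude Monte Carlo, whose probable error decreases as N^(-1/2) regardless of dimension, with the sample standard deviation giving a computable error estimate. A minimal sketch (the integrand and sample size are illustrative):

```python
import numpy as np

def mc_integrate(f, dim, n, rng):
    """Crude Monte Carlo estimate of the integral of f over the unit cube
    [0,1]^dim, together with the standard-error estimate sigma / sqrt(n)."""
    pts = rng.random((n, dim))          # n uniform points in [0,1]^dim
    vals = f(pts)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

rng = np.random.default_rng(42)
# Integrand f(x) = x_1 + ... + x_d; its exact integral over [0,1]^d is d/2.
est, err = mc_integrate(lambda p: p.sum(axis=1), dim=5, n=100_000, rng=rng)
# est should be close to 2.5, with err estimating the probable deviation.
```

The optimal algorithms discussed in the paper improve on this dimension-independent N^(-1/2) rate by exploiting the smoothness of the integrand, e.g. via variance reduction on smooth function classes.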
Using simulation to determine the sensitivity of error sources for software effort estimation models
Abstract:
In this paper we present error analysis for a Monte Carlo algorithm for evaluating bilinear forms of matrix powers. An Almost Optimal Monte Carlo (MAO) algorithm for solving this problem is formulated. Results on the structure of the probability error are presented, and the construction of robust and interpolation Monte Carlo algorithms is discussed. Results are presented comparing the performance of the Monte Carlo algorithm with that of a corresponding deterministic algorithm. The two algorithms are tested on a well-balanced matrix, and then the effects of perturbing this matrix, by small and large amounts, are studied.
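The quantity being estimated is the bilinear form v^T A^k h. The standard random-walk estimator samples a start state with probability proportional to |v_i| and transitions proportional to |a_ij|, carrying a weight whose expectation recovers the bilinear form. A minimal sketch in that spirit (not the paper's full MAO algorithm; the robustness and interpolation refinements are omitted):

```python
import numpy as np

def mc_bilinear_form(A, v, h, k, n_walks, rng):
    """Monte Carlo estimate of v^T A^k h via random walks of length k.

    Initial state i0 is drawn proportional to |v|; transitions are drawn
    proportional to |a_ij| (row-normalised). Each walk carries a weight W
    such that E[W * h_end] = v^T A^k h."""
    n = len(v)
    p0 = np.abs(v) / np.abs(v).sum()
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)
    total = 0.0
    for _ in range(n_walks):
        i = rng.choice(n, p=p0)
        W = v[i] / p0[i]
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            W *= A[i, j] / P[i, j]
            i = j
        total += W * h[i]
    return total / n_walks

rng = np.random.default_rng(1)
A = np.array([[0.4, 0.2], [0.1, 0.3]])
v = np.array([1.0, 1.0])
h = np.array([1.0, 2.0])
exact = v @ np.linalg.matrix_power(A, 2) @ h   # = 0.75 for this data
est = mc_bilinear_form(A, v, h, k=2, n_walks=20_000, rng=rng)
```

Choosing probabilities proportional to the magnitudes of the entries keeps the walk weights nearly constant, which is the sense in which such a choice is "almost optimal" for the variance.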
OFDM joint data detection and phase noise cancellation based on minimum mean square prediction error
Abstract:
This paper proposes a new iterative algorithm for orthogonal frequency division multiplexing (OFDM) joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the relatively less studied problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Specifically, we apply a hard-decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the PHN, and a more robust and compact fast process based on Givens rotation is proposed to reduce the complexity to a practical level. Numerical simulations are given to verify the proposed algorithm.
Abstract:
In this paper, we generalise a previously described model of the error-prone polymerase chain reaction (PCR) to conditions of arbitrarily variable amplification efficiency and initial population size. Generalisation of the model to these conditions improves the correspondence to observed and expected behaviours of PCR, and restricts the extent to which the model may explore sequence space for a prescribed set of parameters. Error-prone PCR under realistic reaction conditions is predicted to be less effective at generating grossly divergent sequences than the original model suggested. The estimate of the mutation rate per cycle obtained by sampling sequences from an in vitro PCR experiment is correspondingly affected by the choice of model and parameters.
Abstract:
We analyze a fully discrete spectral method for the numerical solution of the initial- and periodic boundary-value problem for two nonlinear, nonlocal, dispersive wave equations, the Benjamin–Ono and the Intermediate Long Wave equations. The equations are discretized in space by the standard Fourier–Galerkin spectral method and in time by the explicit leap-frog scheme. For the resulting fully discrete, conditionally stable scheme we prove an L2-error bound of spectral accuracy in space and of second-order accuracy in time.
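The discretization pattern, Fourier differentiation in space combined with an explicit leap-frog step in time, can be illustrated on the simplest model problem, linear advection u_t + u_x = 0 on a periodic domain (a stand-in of our choosing; the paper itself treats the nonlinear, nonlocal Benjamin-Ono and ILW equations):

```python
import numpy as np

# Fourier spectral differentiation + leap-frog time stepping for u_t + u_x = 0
# on [0, 2*pi) with periodic boundary conditions.
N = 64
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers 0..N/2-1, -N/2..-1

def dx(u):
    """Spectral derivative: differentiate mode-by-mode in Fourier space."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

dt, steps = 1e-3, 500                     # dt * k_max < 1 keeps leap-frog stable
u_prev = np.sin(x)                        # initial condition
u = u_prev - dt * dx(u_prev)              # one Euler step to start the 2-level scheme
for _ in range(steps - 1):
    u_prev, u = u, u_prev - 2 * dt * dx(u)   # leap-frog update

t = steps * dt
err = np.max(np.abs(u - np.sin(x - t)))   # exact solution is the shifted profile
```

For this smooth solution the spatial error is spectrally small, so the measured error is dominated by the second-order leap-frog time discretization, mirroring the structure of the error bound proved in the paper.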