31 results for Medication Error
in Aston University Research Archive
Abstract:
Medication errors are associated with significant morbidity, and people with mental health problems may be particularly susceptible to medication errors due to various factors. Primary care has a key role in improving medication safety in this vulnerable population. The complexity of services, involving primary and secondary care and social services, and potential training issues may increase error rates, with physical medicines representing a particular risk. Service users may be cognitively impaired and fail to identify an error, placing additional responsibilities on clinicians. The potential role of carers in error prevention and medication safety requires further elaboration. A potential lack of trust between service users and clinicians may impair honest communication about medication issues, leading to errors. There is a need for detailed research within this field.
Abstract:
Background: It has been estimated that medication error harms 1–2% of patients admitted to general hospitals. There has been no previous systematic review of the incidence, cause or type of medication error in mental healthcare services. Methods: A systematic literature search was undertaken for studies that examined the incidence or cause of medication error in one or more stage(s) of the medication-management process in the setting of a community or hospital-based mental healthcare service. The results were examined in the context of the design of each study and the denominator used. Results: All studies examined medication management processes, as opposed to outcomes. The reported rate of error was highest in studies that retrospectively examined drug charts, intermediate in those that relied on reporting by pharmacists to identify error, and lowest in those that relied on organisational incident reporting systems. Only a few of the errors identified by the studies caused actual harm, mostly because they were detected and remedial action was taken before the patient received the drug. The focus of the research was on inpatients and prescriptions dispensed by mental health pharmacists. Conclusion: Research about medication error in mental healthcare is limited. In particular, very little is known about the incidence of error in non-hospital settings or about the harm caused by it. Evidence is available from other sources that a substantial number of adverse drug events are caused by psychotropic drugs. Some of these are preventable and may therefore be due to medication error. On the basis of this, and of features of the organisation of mental healthcare that might predispose to medication error, priorities for future research are suggested.
Abstract:
Objective - To review and summarise published data on medication errors in older people with mental health problems. Methods - A systematic review was conducted to identify studies that investigated medication errors in older people with mental health problems. MEDLINE, EMBASE, PHARMLINE, COCHRANE COLLABORATION and PsycINFO were searched electronically. Any studies identified were scrutinised for further references. The title, abstract or full text was systematically reviewed for relevance. Results - Data were extracted from eight studies. In total, information about 728 errors (459 administration, 248 prescribing, 7 dispensing, 12 transcribing, 2 unclassified) was available. The dataset related almost exclusively to inpatients, frequently involved non-psychotropics, and the majority of the errors were not serious. Conclusions - Due to methodological issues it was impossible to calculate overall error rates. Future research should concentrate on serious errors within community settings, and clarify potential risk factors.
Abstract:
Background - It is well recognised that errors are more likely to occur during transitions of care, especially medicines errors. Clinic letters are used as a communication tool during a transition from hospital (outpatient clinics) to primary care (general practitioners). Little is known about medicines errors in clinic letters, as previous studies in this area have focused on medicines errors in inpatient or outpatient prescriptions. Published studies concerning clinic letters largely focus on perceptions of patients or general practitioners with respect to overall quality. Purpose - To investigate medicines errors contained in outpatient clinic letters generated by prescribers within the Neurology Department of a specialist paediatric hospital in the UK. Materials and methods - Single site, retrospective, cross-sectional review of 100 clinic letters generated during March–July 2013 in response to an outpatient consultation. Clinic letters were selected by convenience sampling from the most recent visit of each patient. An evaluation tool with a 10-point scale, where 10 was no error and 0 was significant error, was developed and refined throughout the study to facilitate identification and characterisation of medicines errors. The tool was tested for a relationship between scores and number of medicines errors using a regression analysis. Results - Of 315 items related to neurology mentioned within the letters, 212 items were associated with 602 errors. Common missing information was allergy (97%, n = 97), formulation (60.3%, n = 190), strength/concentration (59%, n = 186) and weight (53%, n = 53). Ninety-nine letters were associated with at least one error. Scores were in the range 4–10, with 42% of letters scored as 7. Statistically significant relationships were observed between scores and number of medicines errors (R2 = 0.4168, p < 0.05) as well as between number of medicines and number of drug-related errors (R2 = 0.9719, p < 0.05).
Conclusions - Nearly all clinic letters were associated with medicines errors. The 10-point evaluation tool may be a useful device to categorise clinic letter errors.
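The regression analysis described above can be sketched in a few lines. This is an illustrative example with synthetic data, not the study's dataset: it fits a least-squares line relating number of medicines errors to the 10-point letter score and reports the coefficient of determination R².

```python
# Illustrative sketch (synthetic data, not the study's actual figures):
# a simple linear regression of clinic-letter score against number of
# medicines errors, reporting R^2 as in the analysis described above.
import numpy as np

def fit_line_r2(x, y):
    """Least-squares fit y = a*x + b; returns (slope, intercept, R^2)."""
    a, b = np.polyfit(x, y, 1)
    pred = a * x + b
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1 - ss_res / ss_tot

# Synthetic example: more errors tend to mean a lower 10-point score.
errors = np.array([0, 1, 2, 3, 5, 8, 10, 12])
scores = np.array([10, 9, 8, 7, 7, 5, 4, 4])
slope, intercept, r2 = fit_line_r2(errors, scores)
print(f"slope={slope:.2f}, R^2={r2:.2f}")
```

A negative slope with a moderate-to-high R² would correspond to the statistically significant inverse relationship the study reports between error counts and letter scores.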
Abstract:
In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
Abstract:
We investigate the dependence of Bayesian error bars on the distribution of data in input space. For generalized linear regression models we derive an upper bound on the error bars which shows that, in the neighbourhood of the data points, the error bars are substantially reduced from their prior values. For regions of high data density we also show that the contribution to the output variance due to the uncertainty in the weights can exhibit an approximate inverse proportionality to the probability density. Empirical results support these conclusions.
Abstract:
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite K case and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
Abstract:
Gallager-type error-correcting codes that nearly saturate Shannon's bound are constructed using insight gained from mapping the problem onto that of an Ising spin system. The performance of the suggested codes is evaluated for different code rates in both finite and infinite message length.
Abstract:
The efficacy of a specially constructed Gallager-type error-correcting code for communication in a Gaussian channel is examined. The construction is based on the introduction of complex matrices, used in both encoding and decoding, which comprise sub-matrices of cascading connection values. The finite-size effects are estimated for comparing the results with the bounds set by Shannon. The critical noise level achieved for certain code rates and infinitely large systems nearly saturates the bounds set by Shannon even when the connectivity used is low.
Abstract:
The performance of Gallager's error-correcting code is investigated via methods of statistical physics. In this method, the transmitted codeword comprises products of the original message bits selected by two randomly constructed sparse matrices; the number of non-zero row/column elements in these matrices constitutes a family of codes. We show that Shannon's channel capacity is saturated for many of the codes while slightly lower performance is obtained for others which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding.
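The sparse parity-check structure underlying Gallager codes can be illustrated with a toy example. Here a small binary matrix H (chosen for illustration, not one of Gallager's actual constructions) defines the code: a word t is a valid codeword iff H·t = 0 (mod 2), and channel noise reveals itself through a non-zero syndrome.

```python
# Toy sketch of a sparse parity-check code over GF(2). The matrix H is
# illustrative only: 3 checks on 6 bits, each check summing a few bits mod 2.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(H, word):
    """Parity-check syndrome: all-zero iff the word satisfies every check."""
    return H @ word % 2

codeword = np.array([1, 0, 1, 1, 1, 0])
print(syndrome(H, codeword))   # all-zero syndrome: valid codeword

noisy = codeword.copy()
noisy[0] ^= 1                  # flip one bit (channel noise)
print(syndrome(H, noisy))      # non-zero syndrome flags the error
```

Belief-propagation (equivalently, TAP) decoding works on exactly this structure, passing messages between bits and the checks they participate in until a word with zero syndrome is found.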
Abstract:
An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.
Abstract:
Background There is a paucity of data describing the prevalence of childhood refractive error in the United Kingdom. The Northern Ireland Childhood Errors of Refraction study, along with its sister study the Aston Eye Study, are the first population-based surveys of children using both random cluster sampling and cycloplegic autorefraction to quantify levels of refractive error in the United Kingdom. Methods Children aged 6–7 years and 12–13 years were recruited from a stratified random sample of primary and post-primary schools, representative of the population of Northern Ireland as a whole. Measurements included assessment of visual acuity, oculomotor balance, ocular biometry and cycloplegic binocular open-field autorefraction. Questionnaires were used to identify putative risk factors for refractive error. Results 399 (57%) of 6–7 year olds and 669 (60%) of 12–13 year olds participated. School participation rates did not vary statistically significantly with the size of the school, whether the school was urban or rural, or whether it was in a deprived or non-deprived area. The gender balance, ethnicity and type of schooling of participants are reflective of the Northern Ireland population. Conclusions The study design, sample size and methodology will ensure accurate measures of the prevalence of refractive errors in the target population and will facilitate comparisons with other population-based refractive data.
Abstract:
We employ the methods presented in the previous chapter for decoding corrupted codewords, encoded using sparse parity check error correcting codes. We show the similarity between the equations derived from the TAP approach and those obtained from belief propagation, and examine their performance as practical decoding methods.
Abstract:
A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix composed of C non-zero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamic transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependence in the cases of C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure to the improvement in performance obtained using non-binary alphabets.
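The move from binary to GF(q) alphabets can be illustrated with a toy parity check over a prime field, where each check is a weighted sum of symbols mod q. The matrix and codeword below are assumptions for illustration only (prime q keeps plain modular arithmetic a valid field; general GF(q) with q a prime power needs polynomial arithmetic).

```python
# Toy parity check over a non-binary alphabet GF(5): each check is a
# weighted sum of symbols mod q. Matrix and word are illustrative only.
import numpy as np

q = 5
H = np.array([[1, 2, 0, 3],
              [0, 1, 4, 1]])   # two checks on four GF(5) symbols

def syndrome_gfq(H, word, q):
    """All-zero iff the word satisfies every GF(q) parity check."""
    return H @ word % q

word = np.array([2, 4, 4, 0])
# check 1: 1*2 + 2*4 = 10 -> 0 mod 5;  check 2: 4 + 4*4 = 20 -> 0 mod 5
print(syndrome_gfq(H, word, q))   # [0 0]: valid codeword
```

With non-zero check weights drawn from the field, each check constrains symbols more tightly than a binary parity bit, which is one intuition for the performance gain from non-binary alphabets that the abstract quantifies.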