848 results for "Errors and omission"
Abstract:
One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.
Abstract:
BACKGROUND: Most healthcare in the US is delivered in the ambulatory care setting, but the epidemiology of errors and adverse events in ambulatory care is understudied. METHODS: Using population-based data from the Colorado and Utah Medical Practices Study, we identified adverse events that occurred in an ambulatory care setting and led to hospital admission. Proportions with 95% CIs are reported. RESULTS: We reviewed 14,700 hospital discharge records and found 587 adverse events, of which 70 were ambulatory care adverse events (AAEs) and 31 were ambulatory care preventable adverse events (APAEs). When weighted to the general population, there were 2608 AAEs and 1296 (44.3%) APAEs in Colorado and Utah, USA, in 1992. APAEs occurred most commonly in physicians' offices (43.1%, range 27.8-46.8), the emergency department (32.3%, 18.5-46.1) and at home (13.1%, 3.1-23.1). APAEs in day surgery were less common (7.1%, 0.6-13.6) but caused the greatest harm to patients. The types of APAEs were broadly distributed among missed or delayed diagnoses (36%, 21.8-50.2), surgery (24.1%, 11.5-36.7), non-surgical procedures (14.6%, 4.2-25.0), medication (13.1%, 3.1-23.1) and therapeutic events (12.3%, 2.6-22.0). Overall, 10% of the APAEs resulted in serious permanent injury or death. The proportion of APAEs that resulted in death was 31.8% for general internal medicine, 22.5% for family practice and 16.7% for emergency medicine. CONCLUSION: An estimated 75,000 hospitalisations per year are due to preventable adverse events that occur in outpatient settings in the US, resulting in 4839 serious permanent injuries and 2587 deaths.
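The 95% CIs reported above are interval estimates for binomial proportions. As an illustration (the study's exact interval method is not stated in the abstract), a Wilson score interval for the preventable fraction of ambulatory adverse events (31 of 70) could be computed like this:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 31 preventable events among 70 ambulatory adverse events (counts from the abstract)
lo, hi = wilson_ci(31, 70)
```

Unlike the simpler normal-approximation interval, the Wilson interval stays inside [0, 1] even for small counts.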
Abstract:
Histomorphometric evaluation of the buccal aspects of periodontal tissues in rodents requires reproducible alignment of maxillae and highly precise sections containing central sections of buccal roots; this is a cumbersome and technically sensitive process due to the small specimen size. The aim of the present report is to describe and analyze a method to transfer virtual sections of micro-computed tomography (micro-CT)-generated image stacks to the microtome for undecalcified histological processing, and to describe the anatomy of the periodontium in rat molars. A total of 84 undecalcified sections of all buccal roots of seven untreated rats were analyzed. The accuracy of section coordinate transfer from virtual micro-CT slice to histological slice, right-left side differences, and the measurement error for linear and angular measurements on micro-CT and on histological micrographs were calculated using the Bland-Altman method, the intraclass correlation coefficient and the method of moments estimator. Also, manual alignment of the micro-CT-scanned rat maxilla was compared with multiplanar computer-reconstructed alignment. The supra-alveolar rat anatomy is rather similar to human anatomy, whereas the alveolar bone is of compact type and the keratinized gingival epithelium bends apically to join the junctional epithelium. The high methodological standardization presented herein ensures retrieval of histological slices with excellent display of anatomical microstructures in a reproducible manner, minimizes random errors, and thereby may contribute to a reduction in the number of animals needed.
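The Bland-Altman method named above compares two measurement techniques via the mean (bias) and spread of their paired differences. A minimal sketch, using hypothetical micro-CT vs. histology measurements (the study's actual data are not given here):

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods, from the paired differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired linear measurements (mm): micro-CT vs. histology
ct   = [2.10, 1.95, 2.30, 2.05, 2.20]
hist = [2.08, 1.99, 2.25, 2.01, 2.26]
bias, (loa_low, loa_high) = bland_altman(ct, hist)
```

Pairs whose difference falls outside the limits of agreement disagree by more than random measurement variation would explain.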
In the aftermath of medical error: Caring for patients, family, and the healthcare workers involved
Abstract:
Medical errors, in particular those resulting in harm, pose a serious situation for patients ("first victims") and the healthcare workers involved ("second victims") and can have long-lasting and distressing consequences. To prevent a second traumatization, appropriate and empathic interaction with all persons involved is essential besides error analysis. Patients share a nearly universal, broad preference for a complete disclosure of incidents, regardless of age, gender, or education. This includes the personal, timely and unambiguous disclosure of the adverse event, information relating to the event, its causes and consequences, and an apology and sincere expression of regret. While the majority of healthcare professionals generally support an honest and open disclosure of adverse events, they also face various barriers which impede the disclosure (e.g., fear of legal consequences). Despite its essential importance, disclosure of adverse events in practice occurs in ways that are rarely acceptable to patients and their families. The staff involved often experiences acute distress and an intense emotional response to the event, which may become chronic and increase the risk of depression, burnout and post-traumatic stress disorders. Communication with peers is vital for people to be able to cope constructively and protectively with harmful errors. Survey studies among healthcare workers show, however, that they often do not receive sufficient individual and institutional support. Healthcare organizations should prepare for medical errors and harmful events and implement a communication plan and a support system that covers the requirements and different needs of patients and the staff involved.
Abstract:
BACKGROUND Clinicians involved in medical errors can experience significant distress. This study aims to examine (1) how medical errors impact anaesthesiologists in key work and life domains; (2) anaesthesiologists' attitudes regarding support after errors; and (3) which anaesthesiologists are most affected by errors. METHODS This study is a mailed cross-sectional survey completed by 281 of the 542 clinically active anaesthesiologists (52% response rate) working at Switzerland's five university hospitals between July 2012 and April 2013. RESULTS Respondents reported that errors had increased anxiety about future errors (51%) and had negatively affected confidence in their ability as a doctor (45%), ability to sleep (36%), job satisfaction (32%), and professional reputation (9%). Respondents' lives were more likely to be affected as error severity increased. Ninety per cent of respondents disagreed that hospitals adequately support them in coping with the stress associated with medical errors. Nearly all of the respondents (92%) reported being interested in psychological counselling after a serious error, but many identified barriers to seeking counselling. There were also significant differences between departments regarding error-related stress levels and attitudes about error-related support. Respondents were more likely to experience distress if they were female, older, had previously been involved in a serious error, or were dissatisfied with their last error disclosure. CONCLUSION Medical errors, even minor errors and near misses, can have a serious effect on clinicians. Health-care organisations need to do more to support clinicians in coping with the stress associated with medical errors.
Abstract:
The NASA mission GRAIL (Gravity Recovery and Interior Laboratory) inherited its concept from the GRACE (Gravity Recovery and Climate Experiment) mission to determine the gravity field of the Moon. We present lunar gravity fields based on the data of GRAIL's primary mission phase. Gravity field recovery is realized in the framework of the Celestial Mechanics Approach, using a development version of the Bernese GNSS Software along with Ka-band range-rate data series as observations and the GNI1B positions provided by NASA JPL as pseudo-observations. By comparing our results with the official level-2 GRAIL gravity field models we show that the lunar gravity field can be recovered with a high quality by adapting the Celestial Mechanics Approach, even when using pre-GRAIL gravity field models as a priori fields and when replacing sophisticated models of non-gravitational accelerations by appropriately spaced pseudo-stochastic pulses (i.e., instantaneous velocity changes). We present and evaluate two lunar gravity field solutions up to degree and order 200: AIUB-GRL200A and AIUB-GRL200B. While the first solution uses no gravity field information beyond degree 200, the second is obtained by using the official GRAIL field GRGM900C up to degree and order 660 as a priori information. This reduces the omission errors and demonstrates the potential quality of our solution had we resolved the gravity field to a higher degree.
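The omission error mentioned above is the signal neglected beyond the truncation degree of a spherical harmonic expansion. A toy illustration, assuming a Kaula-type power rule with a hypothetical constant k (not a GRAIL-derived value), just to show that raising the truncation degree shrinks the omitted signal:

```python
import math

def omission_error(l_trunc, l_max=900, k=2.5e-4):
    """RMS of the (assumed) signal above the truncation degree, using a
    Kaula-type rule sigma_l = k / l**2 for the per-coefficient magnitude
    (k is hypothetical).  Degree variance at degree l: (2l+1) * sigma_l**2."""
    total = sum((2 * l + 1) * (k / l**2) ** 2
                for l in range(l_trunc + 1, l_max + 1))
    return math.sqrt(total)

# truncating at a higher degree leaves less unmodelled signal
err_200 = omission_error(200)
err_660 = omission_error(660)
```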
Abstract:
Indoor positioning has attracted considerable attention for decades due to the increasing demands for location based services. In the past years, although numerous methods have been proposed for indoor positioning, it is still challenging to find a convincing solution that combines high positioning accuracy and ease of deployment. Radio-based indoor positioning has emerged as a dominant method due to its ubiquitousness, especially for WiFi. RSSI (Received Signal Strength Indicator) has been investigated in the area of indoor positioning for decades. However, it is prone to multipath propagation and hence fingerprinting has become the most commonly used method for indoor positioning using RSSI. The drawback of fingerprinting is that it requires intensive labour efforts to calibrate the radio map prior to experiments, which makes the deployment of the positioning system very time consuming. Using time information as another way for radio-based indoor positioning is challenged by time synchronization among anchor nodes and timestamp accuracy. Besides radio-based positioning methods, intensive research has been conducted to make use of inertial sensors for indoor tracking due to the fast developments of smartphones. However, these methods are normally prone to accumulative errors and might not be available for some applications, such as passive positioning. This thesis focuses on network-based indoor positioning and tracking systems, mainly for passive positioning, which does not require the participation of targets in the positioning process. To achieve high positioning accuracy, we work on some information of radio signals from physical-layer processing, such as timestamps and channel information. The contributions in this thesis can be divided into two parts: time-based positioning and channel information based positioning. 
First, for time-based indoor positioning (especially for narrow-band signals), we address the challenges of compensating for synchronization offsets among anchor nodes, designing timestamps with high resolution, and developing accurate positioning methods. Second, we work on range-based positioning methods with channel information to passively locate and track WiFi targets; range-based methods require much less calibration effort than fingerprinting and thus ease deployment. By designing novel enhanced methods for both ranging and positioning (including trilateration for stationary targets and a particle filter for mobile targets), we are able to locate WiFi targets with high accuracy relying solely on radio signals, and our proposed enhanced particle filter significantly outperforms other commonly used range-based positioning algorithms, e.g., a traditional particle filter, the extended Kalman filter and trilateration. In addition to using radio signals for passive positioning, we propose a second enhanced particle filter for active positioning that fuses inertial sensor and channel information to track indoor targets, achieving higher tracking accuracy than methods relying solely on either radio signals or inertial sensors.
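Trilateration, one of the positioning methods named above, can be sketched as a linearized least-squares problem: subtracting one range equation from the others cancels the quadratic terms, leaving a linear system in the target coordinates. This is a generic 2-D sketch with made-up anchors, not the thesis's enhanced method:

```python
import math

def trilaterate(anchors, ranges):
    """Linearized least-squares trilateration in 2-D.  Subtracting the first
    range equation from the others yields A x = b, solved here via the
    normal equations for the 2x2 system."""
    (x0, y0), r0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # normal equations A^T A x = A^T b
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(2)]
           for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x, y

# hypothetical anchors and noise-free ranges to a target at (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(a, true_pos) for a in anchors]
est = trilaterate(anchors, ranges)
```

With noisy ranges the same normal-equations solve gives the least-squares position rather than the exact one.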
Abstract:
Four different preparation and counting methods for biochemical varves were compared in order to assess counting errors and to standardize these techniques. The properties of two embedding methods, namely the shock-freeze/freeze-dry method and the water-acetone-epoxy-exchange method, are discussed. Varve counts were carried out on fresh sediment and on sediment thin-sections, on the latter both manually and by automated counting using image-analysis software. Counting on fresh sediment and using image analysis generally underestimated the number of varves, especially in sections with inconspicuous varves. A comparison between multiple varve counts carried out by a single analyst and by different analysts showed no significant differences in the mean varve counts.
Abstract:
Linkage disequilibrium (LD) is defined as the nonrandom association of alleles at two or more loci in a population and may be a useful tool in a diverse array of applications including disease gene mapping, elucidating the demographic history of populations, and testing hypotheses of human evolution. However, the successful application of LD-based approaches to pertinent genetic questions is hampered by a lack of understanding about the forces that mediate the genome-wide distribution of LD within and between human populations. Delineating the genomic patterns of LD is a complex task that will require interdisciplinary research that transcends traditional scientific boundaries. The research presented in this dissertation is predicated upon the need for interdisciplinary studies, and both theoretical and experimental projects were pursued. In the theoretical studies, I have investigated the effect of genotyping errors and SNP identification strategies on estimates of LD. The primary importance of these two chapters is that they provide important insights and guidance for the design of future empirical LD studies. Furthermore, I analyzed the allele frequency distribution of 26,530 single nucleotide polymorphisms (SNPs) in three populations and generated the first-generation natural selection map of the human genome, which will be an important resource for explaining and understanding genomic patterns of LD. Finally, in the experimental study, I describe a novel, simple, low-cost, and high-throughput SNP genotyping method. The theoretical analyses and experimental tools developed in this dissertation will facilitate a more complete understanding of patterns of LD in human populations.
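The LD estimates discussed above are typically quantified as D, D', and r² from haplotype and allele frequencies. A minimal sketch with hypothetical frequencies (not data from the dissertation):

```python
def ld_stats(pAB, pA, pB):
    """Pairwise LD from the AB haplotype frequency and the allele frequencies.
    D = pAB - pA*pB;  D' normalizes |D| by the maximum the allele
    frequencies allow;  r^2 = D^2 / (pA*(1-pA)*pB*(1-pB))."""
    D = pAB - pA * pB
    if D >= 0:
        dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        dmax = min(pA * pB, (1 - pA) * (1 - pB))
    d_prime = abs(D) / dmax if dmax > 0 else 0.0
    r2 = D**2 / (pA * (1 - pA) * pB * (1 - pB))
    return D, d_prime, r2

# hypothetical two-locus data: pAB = 0.35, pA = pB = 0.5
D, d_prime, r2 = ld_stats(0.35, 0.5, 0.5)
```

A genotyping error that misassigns haplotypes perturbs pAB directly, which is one route by which such errors bias LD estimates.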
Abstract:
Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples of 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, C_p and S_p, each combined with an 'all possible subsets' or 'forward selection' of variables. The estimators of performance utilized include parametric (MSEP_m) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures. The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of C_p and S_p. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample. Only the random split estimator is conditionally (on β) unbiased; however, MSEP_m is unbiased on average and PRESS is nearly so in unselected (fixed form) models. When subset selection techniques are used, MSEP_m and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples.
Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value within the context of stochastic regressor variables. To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development and that a leave-one-out statistic (e.g. PRESS) be used for assessment.
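The PRESS statistic recommended above is the sum of squared leave-one-out prediction errors. A minimal sketch for simple linear regression with hypothetical data, refitting the model with each observation held out (for OLS the same value can also be obtained from ordinary residuals and leverages without refitting):

```python
def press_simple(x, y):
    """PRESS for simple linear regression y = a + b*x: sum of squared
    prediction errors, each made by a model fit without that observation."""
    n = len(x)
    total = 0.0
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        sxx = sum((v - mx) ** 2 for v in xs)
        sxy = sum((xs[k] - mx) * (ys[k] - my) for k in range(len(xs)))
        b = sxy / sxx                      # slope of the leave-one-out fit
        a = my - b * mx                    # intercept of the leave-one-out fit
        total += (y[i] - (a + b * x[i])) ** 2
    return total

# hypothetical near-linear data
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1, 6.0]
press = press_simple(x, y)
```

PRESS is always at least as large as the in-sample residual sum of squares, which is why it gives a less optimistic estimate of prediction error.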
Abstract:
Medication reconciliation, with the aim to resolve medication discrepancy, is one of the Joint Commission patient safety goals. Medication errors and adverse drug events that could result from medication discrepancy affect a large population. At least 1.5 million adverse drug events and $3.5 billion in yearly costs associated with medication errors could be prevented by interventions such as medication reconciliation. This research was conducted to answer the following research questions: (1a) What are the frequency range and type of measures used to report outpatient medication discrepancy? (1b) Which effective and efficient strategies for medication reconciliation in the outpatient setting have been reported? (2) What are the costs associated with medication reconciliation practice in primary care clinics? (3) What is the quality of medication reconciliation practice in primary care clinics? (4) Is medication reconciliation practice in primary care clinics cost-effective from the clinic perspective? Study designs used to answer these questions included a systematic review, cost analysis, quality assessments, and cost-effectiveness analysis. Data sources were published articles in the medical literature and data from a prospective workflow study, which included 150 patients and 1,238 medications. The systematic review confirmed that the prevalence of medication discrepancy was high in ambulatory care and higher in primary care settings. Effective strategies for medication reconciliation included the use of pharmacists, letters, a standardized practice approach, and partnership between providers and patients. Our cost analysis showed that costs associated with medication reconciliation practice were not substantially different between primary care clinics using or not using electronic medical records (EMR) ($0.95 per patient per medication in EMR clinics vs. $0.96 per patient per medication in non-EMR clinics, p=0.78).
Even though medication reconciliation was frequently practiced (97-98%), the quality of such practice was poor (0-33% of process completeness measured by concordance of medication numbers and 29-33% of accuracy measured by concordance of medication names) and negatively (though not significantly) associated with medication regimen complexity. The incremental cost-effectiveness ratios for concordance of medication number per patient per medication and concordance of medication names per patient per medication were both 0.08, favoring EMR. Future studies including potential cost-savings from medication features of the EMR and potential benefits to minimize severity of harm to patients from medication discrepancy are warranted.
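An incremental cost-effectiveness ratio, as reported above, is simply the cost difference divided by the effect difference between two alternatives. A minimal sketch using the abstract's per-patient-per-medication costs with made-up effect values (the concordance proportions here are hypothetical, not the study's):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# EMR ($0.95) vs non-EMR ($0.96) cost per patient per medication (from the abstract);
# concordance proportions 0.33 vs 0.21 are hypothetical effect values
ratio = icer(0.95, 0.96, 0.33, 0.21)
```

A negative ratio means the new option costs less and performs better, i.e., it dominates the comparator.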
Abstract:
Medical errors and close calls are pervasive in health care. It is hypothesized that the causes of close calls are the same as for medical errors; therefore learning about close calls can help prevent errors and increase patient safety. Yet despite efforts to encourage close call reporting, close calls as well as medical errors are under-reported in health care. The purpose of this dissertation was to implement and evaluate a web-based anonymous close call reporting system in three units at an urban hospital. The study participants were physicians, nurses and medical technicians (N = 187) who care for patients in the Medical Intermediate Care Unit, the Surgical Intermediate Care Unit, and the Coronary Catheterization Laboratory in the hospital. We provided educational information to the participants on how to use the system and e-mailed and delivered paper reminders to report throughout the 19-month project. We surveyed the participants at the beginning and at the end of the study to assess their attitudes and beliefs regarding incident reporting. We found that the majority of the health care providers in our study are supportive of incident reporting in general, but in practice very few had actually reported an error or a close call. We also conducted semi-structured interviews 20 weeks after we made the close call reporting system available; the purpose of the interviews was to further assess the participants' attitudes regarding incident reporting and the reporting system. Our findings suggest that the health care providers are supportive of medical error reporting in general, but are not convinced of the benefit of reporting close calls. Barriers to close call reporting cited include lack of time, heavy workloads, preferring to take care of close calls "on the spot", and not seeing the benefits of close call reporting. Consequently, only two close calls were reported via the system, by two separate caregivers, during the project.
The findings suggest that future efforts to increase close call reporting must address barriers to reporting, especially the belief among caregivers that it is not worth taking time from their already busy schedules to report close calls.
Abstract:
Oceanographic data collected by ocean research organisations in Russia, the USA, the United Kingdom, Germany, Norway, and Poland for the Barents, Kara and White Seas region are presented in this atlas. Recently declassified naval data from Norway, the USA, and the UK are also included. More than 1,000,000 oceanographic stations containing temperature and/or sea-water salinity data were originally selected. After correcting errors and eliminating duplicates, data from 206,300 checked stations were placed on CD-ROM, together with many figures describing the characteristics of both the single-input and combined data set. In addition, temperature and salinity measurements were interpolated to the following standard horizons: 0, 25, 50, 100, 150, 200, 250, 300 m, and bottom. This atlas covers the 100-year period 1898 to 1998 and is, to date, the most complete oceanographic data collection for these Arctic shelf seas. This data set is complemented by more than 9,000 measurements of sea surface temperature, which were recently digitized from ships' logbooks. They cover the same geographical area within the time period 1867-1912.
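Interpolating temperature and salinity profiles onto the standard horizons listed above is a piecewise-linear resampling of each station's profile. A minimal sketch with a hypothetical temperature profile (the atlas's actual interpolation scheme is not specified in the abstract):

```python
def interp_profile(depths, values, horizons):
    """Piecewise-linear interpolation of a sorted profile onto standard
    depth horizons; horizons outside the sampled range yield None."""
    out = []
    for h in horizons:
        if h < depths[0] or h > depths[-1]:
            out.append(None)
            continue
        for i in range(len(depths) - 1):
            if depths[i] <= h <= depths[i + 1]:
                frac = (h - depths[i]) / (depths[i + 1] - depths[i])
                out.append(values[i] + frac * (values[i + 1] - values[i]))
                break
    return out

# hypothetical station profile: depth (m) and temperature (degC)
depths = [0, 10, 30, 60, 120, 260]
temps  = [8.0, 7.5, 6.0, 4.0, 2.0, 1.0]
std = interp_profile(depths, temps, [0, 25, 50, 100, 150, 200, 250, 300])
```

Horizons deeper than the station's deepest sample (here 300 m) are left empty rather than extrapolated.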
Abstract:
State-of-the-art process-based models have been shown to be applicable to the simulation and prediction of coastal morphodynamics. On annual to decadal temporal scales, these models may show limitations in reproducing complex natural morphological evolution patterns, such as the movement of bars and tidal channels, e.g. the observed decadal migration of the Medem Channel in the Elbe Estuary, German Bight. Here a morphodynamic model is shown to simulate the hydrodynamics and sediment budgets of the domain to some extent, but fails to adequately reproduce the pronounced channel migration, due to the insufficient implementation of bank erosion processes. In order to allow for long-term simulations of the domain, a nudging method has been introduced to update the model-predicted bathymetries with observations. The model-predicted bathymetry is nudged towards true states in annual time steps. Sensitivity analysis of a user-defined correlation length scale, for the definition of the background error covariance matrix during the nudging procedure, suggests that the optimal error correlation length is similar to the grid cell size, here 80-90 m. Additionally, spatially heterogeneous correlation lengths produce more realistic channel depths than do spatially homogeneous correlation lengths. Consecutive application of the nudging method compensates for the (stand-alone) model prediction errors and corrects the channel migration pattern, with a Brier skill score of 0.78. The proposed nudging method in this study serves as an analytical approach to update model predictions towards a predefined 'true' state for the spatiotemporal interpolation of incomplete morphological data in long-term simulations.
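The nudging idea described above corrects each model grid value using distance-weighted observation-minus-model innovations, with weights decaying over the background-error correlation length. A one-dimensional toy sketch (the ~80 m spacing echoes the study's grid size, but the Gaussian weight form and the data are assumptions, not the study's formulation):

```python
import math

def nudge(model, obs, positions, corr_len):
    """Nudge model values towards observations: each point receives a
    normalized, Gaussian-weighted sum of innovations (obs - model),
    with weights exp(-(d/corr_len)**2) from a correlation length."""
    out = []
    for i, xi in enumerate(positions):
        num = den = 0.0
        for j, xj in enumerate(positions):
            w = math.exp(-((xi - xj) / corr_len) ** 2)
            num += w * (obs[j] - model[j])
            den += w
        out.append(model[i] + num / den)
    return out

# toy 1-D bathymetry (m): model too shallow everywhere, observations deeper
model = [10.0, 10.0, 10.0, 10.0]
obs   = [12.0, 12.0, 12.0, 12.0]
positions = [0.0, 80.0, 160.0, 240.0]   # ~80 m spacing, as in the study's grid
nudged = nudge(model, obs, positions, corr_len=85.0)
```

With a uniform innovation the normalized weights reproduce the observations exactly; with spatially varying innovations the correlation length controls how far each observation's correction spreads.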
Abstract:
DNA extraction was carried out as described on the MICROBIS project pages (http://icomm.mbl.edu/microbis ) using a commercially available extraction kit. We amplified the hypervariable regions V4-V6 of archaeal and bacterial 16S rRNA genes using PCR and several sets of forward and reverse primers (http://vamps.mbl.edu/resources/primers.php). Massively parallel tag sequencing of the PCR products was carried out on a 454 Life Sciences GS FLX sequencer at Marine Biological Laboratory, Woods Hole, MA, following the same experimental conditions for all samples. Sequence reads were submitted to a rigorous quality control procedure based on mothur v30 (doi:10.1128/AEM.01541-09) including denoising of the flow grams using an algorithm based on PyroNoise (doi:10.1038/nmeth.1361), removal of PCR errors and a chimera check using uchime (doi:10.1093/bioinformatics/btr381). The reads were taxonomically assigned according to the SILVA taxonomy (SSURef v119, 07-2014; doi:10.1093/nar/gks1219) implemented in mothur and clustered at 98% ribosomal RNA gene V4-V6 sequence identity. V4-V6 amplicon sequence abundance tables were standardized to account for unequal sampling effort using 1000 (Archaea) and 2300 (Bacteria) randomly chosen sequences without replacement using mothur and then used to calculate inverse Simpson diversity indices and Chao1 richness (doi:10.2307/4615964). Bray-Curtis dissimilarities (doi:10.2307/1942268) between all samples were calculated and used for two-dimensional non-metric multidimensional scaling (NMDS) ordinations with 20 random starts (doi:10.1007/BF02289694). Stress values below 0.2 indicated that the multidimensional dataset was well represented by the 2D ordination. NMDS ordinations were compared and tested using Procrustes correlation analysis (doi:10.1007/BF02291478).
All analyses were carried out with the R statistical environment and the packages vegan (available at: http://cran.r-project.org/package=vegan), labdsv (available at: http://cran.r-project.org/package=labdsv), as well as with custom R scripts. Operational taxonomic units at 98% sequence identity (OTU0.03) that occurred only once in the whole dataset were termed absolute single sequence OTUs (SSOabs; doi:10.1038/ismej.2011.132). OTU0.03 sequences that occurred only once in at least one sample, but may occur more often in other samples, were termed relative single sequence OTUs (SSOrel). SSOrel are particularly interesting for community ecology, since they comprise rare organisms that might become abundant when conditions change. 16S rRNA amplicons and metagenomic reads have been stored in the sequence read archive under SRA project accession number SRP042162.
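The diversity and dissimilarity measures used above have simple closed forms: inverse Simpson diversity is 1/Σp_i², and Bray-Curtis dissimilarity is the normalized sum of absolute abundance differences. A minimal sketch with hypothetical OTU counts (the study computed these with mothur and vegan in R):

```python
def inverse_simpson(counts):
    """Inverse Simpson diversity: 1 / sum(p_i^2) over relative abundances."""
    n = sum(counts)
    return 1.0 / sum((c / n) ** 2 for c in counts if c > 0)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den

# hypothetical OTU count vectors for two samples
even   = [25, 25, 25, 25]      # perfectly even community
skewed = [97, 1, 1, 1]         # one dominant OTU
d_even = inverse_simpson(even)
d_skew = inverse_simpson(skewed)
bc = bray_curtis(even, skewed)
```

Inverse Simpson equals the number of OTUs for a perfectly even community and drops towards 1 as one OTU dominates, which is why rarefying to equal sequence counts before computing it matters.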