912 results for Medical lab data
Abstract:
Background: Abstractor training is a key element in creating valid and reliable data collection procedures. The choice between in-person vs. remote or simultaneous vs. sequential abstractor training has considerable consequences for time and resource utilization. We conducted a web-based (webinar) abstractor training session to standardize training across six individual Cancer Research Network (CRN) sites for a study of breast cancer treatment effects in older women (BOWII). The goals of this manuscript are to describe the training session, its participants and the participants' evaluation of webinar technology for abstraction training. Findings: A webinar was held for all six sites with the primary purpose of simultaneously training staff and ensuring consistent abstraction across sites. The training session involved sequential review of over 600 data elements outlined in the coding manual in conjunction with the display of data entry fields in the study's electronic data collection system. Post-training evaluation was conducted via Survey Monkey©. Inter-rater reliability measures for abstractors within each site were conducted three months after the commencement of data collection. Ten of the 16 people who participated in the training completed the online survey. Almost all (90%) of the 10 trainees had previous medical record abstraction experience and nearly two-thirds reported over 10 years of experience. Half of the respondents had previously participated in a webinar, of whom three had participated in a webinar for training purposes. All rated the knowledge and information delivered through the webinar as useful and reported that it adequately prepared them for data collection. Moreover, all participants would recommend this platform for multi-site abstraction training. Consistent with participant-reported training effectiveness, within-site inter-rater agreement ranged from 89% to 98%, with a weighted average of 95% agreement across sites. Conclusions: Conducting training via web-based technology was an acceptable and effective approach to standardizing medical record review across multiple sites for this group of experienced abstractors. Given the substantial time and cost savings achieved with the webinar, coupled with participants' positive evaluation of the training session, researchers should consider this instructional method as part of training efforts to ensure high quality data collection in multi-site studies.
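As a simple illustration of the within-site agreement measure reported above, the sketch below computes percent agreement between two abstractors over a set of re-abstracted data elements; the field values and element names are hypothetical, not the BOWII data.

```python
# Minimal sketch of within-site inter-rater percent agreement, assuming two
# abstractors coded the same record; the element values below are hypothetical.

def percent_agreement(abstractor_a, abstractor_b):
    """Share of data elements coded identically by two abstractors."""
    assert len(abstractor_a) == len(abstractor_b)
    matches = sum(a == b for a, b in zip(abstractor_a, abstractor_b))
    return 100.0 * matches / len(abstractor_a)

# Hypothetical re-abstracted values for one record (surgery type, stage, ...).
coder_1 = ["mastectomy", "stage II", "yes", "tamoxifen"]
coder_2 = ["mastectomy", "stage II", "no",  "tamoxifen"]
print(f"Agreement: {percent_agreement(coder_1, coder_2):.0f}%")  # 75%
```

Averaging such per-record figures within each site, weighted by the number of records reviewed, would yield site-level and overall agreement summaries of the kind quoted in the abstract.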
Abstract:
OBJECTIVES: To analyse the frequency of and identify risk factors for patient-reported medical errors in Switzerland. The joint effect of risk factors on the probability of reporting an error was modelled for hypothetical patients. METHODS: A representative population sample of Swiss citizens (n = 1306) was surveyed as part of the Commonwealth Fund's 2010 International Survey of the General Public's Views of their Health Care System's Performance in Eleven Countries. Data on personal background, utilisation of health care, coordination-of-care problems and reported errors were assessed. Logistic regression analysis was conducted to identify risk factors for patients' reports of medical mistakes and medication errors. RESULTS: 11.4% of participants reported at least one error in their care in the previous two years (8% medical errors, 5.3% medication errors). Poor coordination of care was frequent: 7.8% reported that test results or medical records were not available, 17.2% received conflicting information from care providers and 11.5% reported that tests were ordered although they had already been done. Age (OR = 0.98, p = 0.014), poor health (OR = 2.95, p = 0.007), utilisation of emergency care (OR = 2.45, p = 0.003), inpatient stay (OR = 2.31, p = 0.010) and poor care coordination (OR = 5.43, p < 0.001) were important predictors of error reporting. For high utilisers of care who combine multiple risk factors, the predicted probability of reporting an error rises to 0.8. CONCLUSIONS: Patient safety remains a major challenge for the Swiss health care system. Beyond the health-related and economic burden associated with it, the widespread experience of medical error in some subpopulations also has the potential to erode trust in the health care system as a whole.
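The abstract reports odds ratios but not the model intercept, so the sketch below assumes a baseline reporting probability and simply combines the reported odds ratios on the odds scale for a hypothetical patient who accumulates several risk factors; it is an illustration of the "hypothetical patient" calculation, not the authors' fitted model.

```python
# Sketch: joint effect of several risk factors on the odds scale for a
# hypothetical high utiliser of care. The odds ratios are those quoted in the
# abstract; the baseline probability is an assumed placeholder, since the model
# intercept is not reported there.
baseline_prob = 0.05                      # assumed baseline reporting probability
odds = baseline_prob / (1 - baseline_prob)

odds_ratios = {                           # from the abstract (illustrative use)
    "poor health": 2.95,
    "emergency care": 2.45,
    "inpatient stay": 2.31,
    "poor care coordination": 5.43,
}
for factor, or_ in odds_ratios.items():
    odds *= or_                           # multiply the odds by each odds ratio

predicted_prob = odds / (1 + odds)
print(f"Predicted probability of reporting an error: {predicted_prob:.2f}")
```

Under this assumed 5% baseline, the combined odds imply a predicted probability of roughly 0.8, in the neighbourhood of the figure quoted for high utilisers; a different baseline would shift the number accordingly.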
Abstract:
Optical coherence tomography (OCT) is a well-established imaging modality in ophthalmology and is used daily in the clinic. Automatic evaluation of such datasets requires an accurate segmentation of the retinal cell layers. However, due to the naturally low signal-to-noise ratio and the resulting poor image quality, this task remains challenging. We propose an automatic graph-based multi-surface segmentation algorithm that internally uses soft constraints to add prior information from a learned model. This improves the accuracy of the segmentation and increases the robustness to noise. Furthermore, we show that the graph size can be greatly reduced by applying a smart segmentation scheme. This allows the segmentation to be computed in seconds instead of minutes, without deteriorating the segmentation accuracy, making it ideal for a clinical setup. An extensive evaluation on 20 OCT datasets of healthy eyes showed a mean unsigned segmentation error of 3.05 ± 0.54 μm over all datasets when compared to the average observer, which is lower than the inter-observer variability. Similar performance was measured for the task of drusen segmentation, demonstrating the usefulness of soft constraints as a tool to deal with pathologies.
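The graph-based multi-surface algorithm itself is beyond a short sketch, but the role of a soft prior constraint can be illustrated with a toy single-surface, column-wise dynamic program: the boundary cost is augmented with a penalty for deviating from a learned prior position (the soft constraint) and a smoothness penalty between neighbouring columns. All weights and the synthetic image below are invented for illustration; this is not the paper's method.

```python
import numpy as np

# Toy single-surface segmentation with a soft shape prior, solved by dynamic
# programming over image columns. A 1-D stand-in for graph-based multi-surface
# segmentation; cost image, prior, and weights are invented for illustration.

def segment_surface(cost, prior_row, lam=0.5, smooth=1.0):
    """cost: (rows, cols) boundary cost; prior_row: per-column prior position."""
    rows, cols = cost.shape
    # Soft constraint: penalise deviation from the learned prior position.
    unary = cost + lam * (np.arange(rows)[:, None] - prior_row[None, :]) ** 2
    dp = np.zeros((rows, cols))
    back = np.zeros((rows, cols), dtype=int)
    dp[:, 0] = unary[:, 0]
    for c in range(1, cols):
        # Pairwise smoothness between adjacent columns.
        trans = dp[:, c - 1][None, :] + smooth * (
            np.arange(rows)[:, None] - np.arange(rows)[None, :]) ** 2
        back[:, c] = np.argmin(trans, axis=1)
        dp[:, c] = unary[:, c] + trans[np.arange(rows), back[:, c]]
    # Backtrack the minimum-cost surface.
    surface = np.empty(cols, dtype=int)
    surface[-1] = int(np.argmin(dp[:, -1]))
    for c in range(cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface

# Synthetic example: a dark boundary near row 12 with noise.
rng = np.random.default_rng(0)
img = rng.normal(1.0, 0.3, size=(32, 64))
img[12, :] -= 2.0
print(segment_surface(img, prior_row=np.full(64, 12.0)))
```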
Abstract:
In Germany, hospitals can submit data on patients with pelvic fractures to either or both of two different trauma registries, i.e. the German Pelvic Injury Register (PIR) and the TraumaRegister DGU(®) (TR). Both registries are anonymous and differ in composition and content. We describe a methodological approach for linking these registries and re-identifying patients documented in both. The aim of the approach is to create an intersection set that benefits from the complementary data of each registry. Furthermore, the concordance of data entry for selected clinical variables recorded in both registries was evaluated.
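The abstract does not specify the linkage variables, but a deterministic re-identification step of the kind described might look like the following sketch, matching on indirect identifiers assumed to be present in both registries; the keys, column names, and records are hypothetical.

```python
import pandas as pd

# Sketch of deterministic linkage between two anonymous registries using
# indirect identifiers assumed to be recorded in both. The keys and records are
# hypothetical; the actual linkage variables are not given in the abstract.
pir = pd.DataFrame({
    "hospital_id": [7, 7, 12],
    "admission_year": [2013, 2013, 2014],
    "age": [34, 61, 47],
    "sex": ["m", "f", "m"],
    "pelvic_fracture_type": ["B2", "C1", "A2"],
})
tr = pd.DataFrame({
    "hospital_id": [7, 12, 12],
    "admission_year": [2013, 2014, 2014],
    "age": [61, 47, 55],
    "sex": ["f", "m", "f"],
    "iss": [29, 17, 22],
})

keys = ["hospital_id", "admission_year", "age", "sex"]
linked = pir.merge(tr, on=keys, how="inner")  # candidate twofold-documented patients
print(linked)
```

The resulting intersection set carries the pelvis-specific detail of the PIR alongside the injury-severity detail of the TR, which is the complementary benefit the abstract describes.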
Abstract:
The CIAO Study is a multicenter observational study currently underway in 66 European medical institutions over the course of a six-month study period (January-June 2012). This preliminary report overviews the findings of the first half of the study, which includes all data from the first three months of the six-month study period. Patients with either community-acquired or healthcare-associated complicated intra-abdominal infections (IAIs) were included in the study. 912 patients with a mean age of 54.4 years (range 4-98) were enrolled in the study during the first three-month period. 47.7% of the patients were women and 52.3% were men. Among these patients, 83.3% were affected by community-acquired IAIs while the remaining 16.7% presented with healthcare-associated infections. Intraperitoneal specimens were collected from 64.2% of the enrolled patients, and from these samples, 825 microorganisms were collectively identified. The overall mortality rate was 6.4% (58/912). According to univariate statistical analysis of the data, critical clinical condition of the patient upon hospital admission (defined by severe sepsis and septic shock) as well as healthcare-associated infections, non-appendicular origin, generalized peritonitis, and serious comorbidities such as malignancy and severe cardiovascular disease were all significant risk factors for patient mortality. White blood cell (WBC) counts greater than 12,000 or less than 4,000 and core body temperatures exceeding 38°C or less than 36°C by the third post-operative day were statistically significant indicators of patient mortality.
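As an illustration of the univariate risk-factor analysis described, the sketch below tests one binary risk factor against mortality in a 2x2 table and reports an odds ratio; the counts are hypothetical placeholders, not the CIAO Study data.

```python
from scipy.stats import chi2_contingency

# Sketch of a univariate 2x2 analysis of one risk factor (e.g., septic shock on
# admission) against in-hospital mortality. The counts are hypothetical.
#                 died   survived
table = [[20,  80],    # risk factor present
         [38, 774]]    # risk factor absent

chi2, p_value, dof, expected = chi2_contingency(table)
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"OR = {odds_ratio:.2f}, chi-square p = {p_value:.4f}")
```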
Abstract:
MRI-based medical image analysis for brain tumor studies is gaining attention due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods to the analysis of brain tumor images date back almost two decades, current methods are becoming more mature and are coming closer to routine clinical application. This review aims to provide a comprehensive overview, first giving a brief introduction to brain tumors and to imaging of brain tumors. We then review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images, with a focus on gliomas. The objective of segmentation is to outline the tumor, including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied to standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.
Abstract:
Data from the Institutional Population Component of the National Medical Expenditure Survey were used to provide national estimates of annual mental health service provision and use in nursing homes. In addition, the relationship between service provision and setting characteristics such as ownership, size, Medicaid certification, and chain status was examined. Although more than three quarters of residents with a mental disorder resided at a nursing home that provided counseling services, fewer than one fifth actually received any mental health services within the year.
Abstract:
Data from 50 residents of a long-term care facility were used to examine the extent to which performance on a brief, objective inventory could predict a clinical psychologist's evaluation of competence to participate in decisions about medical care. Results indicate that, for two-thirds of the residents, competence to participate in medical decisions could be accurately assessed using scores on a mental status instrument and two vignette-based measures of medical decision-making. These procedures could enable nursing home staff to objectively assess residents' competence to participate in important decisions about their medical care.
Abstract:
BACKGROUND: Trauma care is expensive. However, reliable data on the exact lifelong costs incurred by a major trauma patient are lacking. Discussion usually focuses on direct medical costs, underestimating consequential costs resulting from absence from work and permanent disability. METHODS: Direct medical costs and consequential costs of 63 major trauma survivors (ISS > 13) at a Swiss trauma center from 1995 to 1996 were assessed 5 years posttrauma. The following cost evaluation methods were used: correction cost method (direct cost of restoring an original state), human capital method (indirect cost of lost productivity), contingent valuation method (human cost as the lost quality of life), and macroeconomic estimates. RESULTS: Mean ISS (Injury Severity Score) was 26.8 +/- 9.5 (mean +/- SD). In all, 22 patients (35%) were disabled, incurring discounted average lifelong total costs of USD 1,293,800, compared with USD 147,200 for the 41 patients (65%) who recovered without any disabilities (average across both groups: USD 547,800). Two-thirds of these costs were attributable to a loss of production, whereas only one-third resulted from the cost of correction. Primary hospital treatment (USD 27,800 +/- 37,800) was only a minor fraction of the total cost, less than the estimated cost of police and the judiciary. Loss of quality of life led to considerable intangible human costs, similar in magnitude to the real costs. CONCLUSIONS: Trauma costs are commonly underestimated. Direct medical costs make up only a small part of the total costs. Consequential costs, such as lost productivity, are well in excess of the usual medical costs. Mere cost averages give a false estimate of the costs incurred by patients with and without disabilities.
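As a sketch of the human capital method mentioned above, the following computes the discounted present value of lost productivity for a hypothetical permanently disabled survivor; the annual earnings, disability share, discount rate, and time horizon are all assumed values, not figures from the study.

```python
# Sketch of the human capital method: discounted present value of lost
# productivity for a permanently disabled survivor. Earnings, degree of
# disability, discount rate, and horizon are assumed placeholders.
annual_earnings = 60_000      # USD, assumed
disability_share = 0.5        # fraction of earning capacity lost, assumed
discount_rate = 0.03
years_to_retirement = 30

lost_productivity = sum(
    disability_share * annual_earnings / (1 + discount_rate) ** t
    for t in range(1, years_to_retirement + 1)
)
print(f"Discounted lost productivity: USD {lost_productivity:,.0f}")
```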
Abstract:
The examination of traffic accidents is daily routine in forensic medicine. An important question in the analysis of traffic accident victims, for example in collisions between motor vehicles and pedestrians or cyclists, is the impact situation. Apart from forensic medical examinations (external examination and autopsy), three-dimensional technologies and methods are gaining importance in forensic investigations. Besides post-mortem multi-slice computed tomography (MSCT) and magnetic resonance imaging (MRI) for the documentation and analysis of internal findings, highly precise 3D surface scanning is employed for the documentation of the external body findings and of injury-inflicting instruments. Correlating the injuries of the body with the injury-inflicting object and the accident mechanism is of great importance. The applied methods include documentation of the external and internal body findings, of the involved vehicles and of the inflicting tools, as well as analysis of the acquired data. The body surface and the accident vehicles with their damage were digitized by 3D surface scanning. For the internal findings of the body, post-mortem MSCT and MRI were used. The analysis included processing of the obtained data into 3D models, determination of the driving direction of the vehicle, correlation of injuries to the vehicle damage, geometric determination of the impact situation and evaluation of further findings of the accident. In the following article, the benefits of 3D documentation and computer-assisted, drawn-to-scale 3D comparisons of the relevant injuries with the damage to the vehicle in the analysis of the course of accidents, especially with regard to the impact situation, are demonstrated in two examined cases.
Abstract:
Multiple outcomes data are commonly used to characterize treatment effects in medical research, for instance, multiple symptoms to characterize potential remission of a psychiatric disorder. Often a global, i.e. symptom-invariant, treatment effect is evaluated. Such a treatment effect may overgeneralize the effect across the outcomes. On the other hand, individual treatment effects, varying across all outcomes, are complicated to interpret, and their estimation may lose precision relative to a global summary. An effective compromise may be to summarize the treatment effect through patterns of effects, i.e. "differentiated effects." In this paper we propose a two-category model to differentiate treatment effects into two groups. A model-fitting algorithm and a simulation study are presented, and several methods are developed to analyze the heterogeneity present in the treatment effects. The method is illustrated using an analysis of schizophrenia symptom data.
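The paper's model-fitting algorithm is not reproduced here, but the idea of differentiated effects can be illustrated crudely: estimate a treatment effect per outcome, split the outcomes into two groups at the largest gap in the sorted effects, and summarize each group with one pooled effect. The data are simulated and the split rule is a stand-in, not the authors' two-category model.

```python
import numpy as np

# Crude illustration of "differentiated effects": per-outcome effect estimates
# are split into two groups, each summarised by a pooled effect. Simulated data;
# not the paper's model-fitting algorithm.
rng = np.random.default_rng(1)
n, k = 200, 8                                   # patients, outcomes (e.g. symptoms)
treat = rng.integers(0, 2, size=n)
true_effects = np.array([0.2] * 4 + [0.8] * 4)  # two latent effect groups
y = rng.normal(0, 1, size=(n, k)) + treat[:, None] * true_effects

per_outcome = y[treat == 1].mean(axis=0) - y[treat == 0].mean(axis=0)

# Two-group split at the largest gap in the sorted per-outcome effects.
order = np.argsort(per_outcome)
cut = np.argmax(np.diff(per_outcome[order])) + 1
low, high = order[:cut], order[cut:]
print("low-effect outcomes:", sorted(low), "pooled effect:", per_outcome[low].mean().round(2))
print("high-effect outcomes:", sorted(high), "pooled effect:", per_outcome[high].mean().round(2))
```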
Abstract:
A recent article in this journal (Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2: e124) argued that more than half of published research findings in the medical literature are false. In this commentary, we examine the structure of that argument and show that it has three basic components: (1) an assumption that the prior probability of most hypotheses explored in medical research is below 50%; (2) dichotomization of P-values at the 0.05 level and introduction of a "bias" factor (produced by significance-seeking), the combination of which severely weakens the evidence provided by every design; and (3) use of Bayes theorem to show that, in the face of weak evidence, hypotheses with low prior probabilities cannot have posterior probabilities over 50%. Thus, the claim is based on a priori assumptions that most tested hypotheses are likely to be false, and the inferential model used then makes it impossible for evidence from any study to overcome this handicap. We focus largely on step (2), explaining how the combination of dichotomization and "bias" dilutes experimental evidence, and showing how this dilution leads inevitably to the stated conclusion. We also demonstrate a fallacy in another important component of the argument: that papers in "hot" fields are more likely to produce false findings. We agree with the paper's conclusions and recommendations that many medical research findings are less definitive than readers suspect, that P-values are widely misinterpreted, that bias of various forms is widespread, that multiple approaches are needed to prevent the literature from being systematically biased, and that more data are needed on the prevalence of false claims. But calculating the unreliability of the medical research literature, in whole or in part, requires more empirical evidence and different inferential models than were used. The claim that "most research findings are false for most research designs and for most fields" must be considered as yet unproven.
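The calculation at the heart of the argument can be made concrete. The sketch below follows the framework the commentary dissects: the post-study probability that a "positive" finding is true is computed from prior odds R that the hypothesis is true, a 0.05 significance threshold, statistical power, and a bias term u defined as the fraction of otherwise-negative analyses that end up reported as positive. The parameter values are illustrative choices, not figures from either paper.

```python
# Worked sketch of the Bayes-theorem calculation discussed above: post-study
# probability that a positive finding is true, given prior odds R, type I error
# alpha, power (1 - beta), and bias u = fraction of analyses that would
# otherwise be negative but are reported as positive. Values are illustrative.

def positive_predictive_value(R, alpha=0.05, power=0.8, u=0.0):
    beta = 1 - power
    true_pos = R * (power + u * beta)        # true relationships reported positive
    false_pos = alpha + u * (1 - alpha)      # null relationships reported positive
    return true_pos / (true_pos + false_pos)

for R in (1.0, 0.25, 0.05):                  # prior odds of 1:1, 1:4, 1:20
    print(f"R = {R:5.2f}  no bias: {positive_predictive_value(R):.2f}"
          f"  bias u = 0.2: {positive_predictive_value(R, u=0.2):.2f}")
```

Running this shows the dynamic the commentary analyzes: with low prior odds and a modest bias term, the post-study probability drops below one-half, which is precisely the combination of assumptions that drives the original claim.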
Abstract:
We propose a novel class of models for functional data exhibiting skewness or other shape characteristics that vary with spatial or temporal location. We use copulas so that the marginal distributions and the dependence structure can be modeled independently. Dependence is modeled with a Gaussian or t-copula, so that there is an underlying latent Gaussian process. We model the marginal distributions using the skew t family. The mean, variance, and shape parameters are modeled nonparametrically as functions of location. A computationally tractable inferential framework for estimating heterogeneous asymmetric or heavy-tailed marginal distributions is introduced. This framework provides a new set of tools for increasingly complex data collected in medical and public health studies. Our methods were motivated by and are illustrated with a state-of-the-art study of neuronal tracts in multiple sclerosis patients and healthy controls. Using the tools we have developed, we were able to find those locations along the tract most affected by the disease. However, our methods are general and highly relevant to many functional data sets. In addition to the application to one-dimensional tract profiles illustrated here, higher-dimensional extensions of the methodology could have direct applications to other biological data including functional and structural MRI.
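A minimal simulation sketch of this construction follows: a latent Gaussian process supplies the dependence (a Gaussian copula), while skewed marginals with location-varying mean, scale, and shape supply the distributional features. scipy's skew-normal is used as a stand-in for the paper's skew t family, and the squared-exponential covariance and parameter curves are assumed choices, not the authors' specification.

```python
import numpy as np
from scipy.stats import norm, skewnorm

# Minimal simulation sketch of the copula construction: a latent Gaussian
# process supplies the dependence, and location-varying skewed marginals supply
# the shape. The skew-normal stands in for the skew t family; the covariance
# kernel and parameter curves are assumptions made for illustration.
rng = np.random.default_rng(42)
locs = np.linspace(0, 1, 50)                        # positions along a tract profile

# Latent Gaussian process (Gaussian copula on the uniform scale).
cov = np.exp(-((locs[:, None] - locs[None, :]) ** 2) / (2 * 0.1 ** 2))
z = rng.multivariate_normal(np.zeros(len(locs)), cov + 1e-8 * np.eye(len(locs)))
u = norm.cdf(z)                                     # uniform margins, Gaussian dependence

# Marginal parameters vary smoothly with location.
mean = 0.5 + 0.2 * np.sin(2 * np.pi * locs)
scale = 0.05 + 0.03 * locs
shape = 4.0 * (locs - 0.5)                          # skewness changes sign mid-tract

profile = skewnorm.ppf(u, a=shape, loc=mean, scale=scale)
print(profile[:5].round(3))
```

Because the dependence and the marginals are specified separately, the same latent process can be reused while the marginal family or its location-varying parameters are changed, which is the modelling flexibility the abstract emphasizes.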
Abstract:
Functional neuroimaging techniques enable investigations into the neural basis of human cognition, emotions, and behaviors. In practice, applications of functional magnetic resonance imaging (fMRI) have provided novel insights into the neuropathophysiology of major psychiatric, neurological, and substance abuse disorders, as well as into the neural responses to their treatments. Modern activation studies often compare localized task-induced changes in brain activity between experimental groups. One may also extend voxel-level analyses by simultaneously considering the ensemble of voxels constituting an anatomically defined region of interest (ROI) or by considering means or quantiles of the ROI. In this work we present a Bayesian extension of voxel-level analyses that offers several notable benefits. First, it combines whole-brain voxel-by-voxel modeling and ROI analyses within a unified framework. Second, an unstructured variance/covariance for regional mean parameters allows for the study of inter-regional functional connectivity, provided enough subjects are available to allow for accurate estimation. Finally, an exchangeable correlation structure within regions allows for the consideration of intra-regional functional connectivity. We perform estimation for our model using Markov Chain Monte Carlo (MCMC) techniques implemented via Gibbs sampling which, despite the high-throughput nature of the data, can be executed quickly (less than 30 minutes). We apply our Bayesian hierarchical model to two novel fMRI data sets: one considering inhibitory control in cocaine-dependent men and the second considering verbal memory in subjects at high risk for Alzheimer's disease. The unifying hierarchical model presented in this manuscript is shown to enhance the interpretation of these data sets.
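As a toy illustration of the Gibbs-sampling machinery rather than the paper's model, the sketch below fits a two-level normal model with voxel measurements nested in regions, treating the variance components as known so that both conditional updates are conjugate and short; all data are simulated.

```python
import numpy as np

# Toy Gibbs sampler for a two-level normal model: voxel effects nested in
# regions with region means theta_r ~ N(mu, tau^2). A much simplified stand-in
# for the paper's hierarchical fMRI model; sigma^2 and tau^2 are treated as
# known to keep the conditional updates short.
rng = np.random.default_rng(7)
R, V = 4, 50                                  # regions, voxels per region
sigma2, tau2 = 1.0, 0.5
true_theta = rng.normal(0.3, np.sqrt(tau2), size=R)
y = true_theta[:, None] + rng.normal(0, np.sqrt(sigma2), size=(R, V))

theta = np.zeros(R)
mu = 0.0
draws = []
for it in range(2000):
    # theta_r | rest: normal-normal conjugate update given region means of y.
    prec = V / sigma2 + 1 / tau2
    mean = (V * y.mean(axis=1) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))
    # mu | theta: flat prior on the grand mean.
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / R))
    if it >= 500:                             # discard burn-in
        draws.append(theta.copy())

print("posterior means:", np.mean(draws, axis=0).round(2))
print("true region means:", true_theta.round(2))
```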