856 results for Emergency clinical assessment tools
Abstract:
Objective: To characterize the clinical findings in dogs and cats that sustained blunt trauma and to compare clinical respiratory examination results with post-traumatic thoracic radiography findings. Design: Retrospective clinical study. Setting: University small animal teaching hospital. Animals, interventions and measurements: Case records of 63 dogs and 96 cats presenting with a history of blunt trauma and thoracic radiographs between September 2001 and May 2003 were examined. Clinical signs of respiratory distress (respiratory rate (RR), pulmonary auscultation) and outcome were compared with radiographic signs of blunt trauma. Results: Forty-nine percent of dogs and 63.5% of cats had radiographic signs attributed to thoracic trauma. Twenty-two percent of dogs and 28% of cats had normal radiographs. Abnormal auscultation results were significantly associated with radiographic signs of thoracic trauma, radiography score and the presence and degree of contusions. Seventy-two percent of animals with no other injuries showed signs of thoracic trauma on chest radiographs. No correlation was found between the radiographic findings and outcome, whereas the trauma score at presentation was significantly associated with outcome and with signs of chest trauma but not with the radiography score. Conclusion: Thoracic trauma is encountered in many blunt trauma patients. The RR of animals with blunt trauma is not useful in predicting thoracic injury, whereas abnormal chest auscultation results are indicative of chest abnormalities. Thorough chest auscultation is, therefore, mandatory in all animal trauma patients and might help in assessing the necessity of chest radiographs.
Abstract:
BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome, validating two new trauma scores, the Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and the GCS, Age, and Pressure (GAP) score, using data from the UK Trauma Audit and Research Network. First, we assessed discrimination, using the area under the receiver operating characteristic (ROC) curve, and calibration, comparing mortality rates with those originally published. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for the GAP score (95% confidence interval, 86.7-87.7) and 86.8% for the MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low, medium, and high GAP risk categories, respectively. In the low- and medium-risk groups, these rates were almost double those previously published. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consonant with the results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity and specificity, as well as areas under the ROC curve.
CONCLUSION We found both scores to be valid triage tools for stratifying emergency department patients according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs better differentiate risk classification and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
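The cutoff analysis described above reduces to a 2x2 table of predicted risk class against observed 30-day mortality. The sketch below is a minimal illustration with invented data, not the TARN cohort; note that for GAP and MGAP a lower score means higher risk, so "high risk" here means a score at or below the cutoff.

```python
def cutoff_performance(scores, died, cutoff):
    """Treat score <= cutoff as a 'high risk' (test-positive) call and
    compare it against the observed 30-day mortality outcome."""
    tp = sum(1 for s, d in zip(scores, died) if s <= cutoff and d)
    fn = sum(1 for s, d in zip(scores, died) if s > cutoff and d)
    fp = sum(1 for s, d in zip(scores, died) if s <= cutoff and not d)
    tn = sum(1 for s, d in zip(scores, died) if s > cutoff and not d)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "lr_positive": sens / (1 - spec),  # how much a high-risk call raises the odds of death
        "lr_negative": (1 - sens) / spec,  # how much a low-risk call lowers them
    }

# Invented toy data: six patients, their scores, and whether they died.
metrics = cutoff_performance([5, 8, 20, 22, 12, 9],
                             [True, True, False, False, True, False],
                             cutoff=10)
```

Moving the cutoff trades sensitivity against specificity, which is exactly the search for better risk-category boundaries the authors describe.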
Abstract:
INTRODUCTION HIV care and treatment programmes worldwide are transforming as they push to deliver universal access to essential prevention, care and treatment services to persons living with HIV and their communities. The characteristics and capacity of these HIV programmes affect patient outcomes and quality of care. Despite the importance of ensuring optimal outcomes, few studies have addressed the capacity of HIV programmes to deliver comprehensive care. We sought to describe such capacity in HIV programmes in seven regions worldwide. METHODS Staff from 128 sites in 41 countries participating in the International epidemiologic Databases to Evaluate AIDS completed a site survey from 2009 to 2010, including sites in the Asia-Pacific region (n=20), Latin America and the Caribbean (n=7), North America (n=7), Central Africa (n=12), East Africa (n=51), Southern Africa (n=16) and West Africa (n=15). We computed a measure of the comprehensiveness of care based on seven World Health Organization-recommended essential HIV services. RESULTS Most sites reported serving urban populations (61%; region range (rr): 33-100%) and both adult and paediatric populations (77%; rr: 29-96%). Only 45% of HIV clinics that reported treating children had paediatricians on staff. As for the seven essential services, survey respondents reported that CD4+ cell count testing was available at all but one site, while tuberculosis (TB) screening and community outreach services were available at 80% and 72% of sites, respectively. The remaining four essential services were widely available: nutritional support (82%), combination antiretroviral therapy adherence support (88%), prevention of mother-to-child transmission (PMTCT) (94%) and other prevention and clinical management services (97%). Approximately half (46%) of the sites reported offering all seven services.
Newer sites and sites in settings with low rankings on the UN Human Development Index (HDI), especially those in the President's Emergency Plan for AIDS Relief focus countries, tended to offer a more comprehensive array of essential services. HIV care programme characteristics and comprehensiveness varied according to the number of years the site had been in operation and the HDI of the site setting, with more recently established clinics in low-HDI settings reporting a more comprehensive array of available services. Survey respondents frequently identified contact tracing of patients, patient outreach, nutritional counselling, onsite viral load testing, universal TB screening and the provision of isoniazid preventive therapy as unavailable services. CONCLUSIONS This study serves as a baseline for ongoing monitoring of the evolution of care delivery over time and lays the groundwork for evaluating HIV treatment outcomes in relation to site capacity for comprehensive care.
Abstract:
Life expectancy continuously increases, but our society faces age-related conditions. Among musculoskeletal diseases, osteoporosis, with its associated risk of vertebral fracture, and intervertebral disc (IVD) degeneration are painful pathologies responsible for tremendous healthcare costs. Hence, reliable diagnostic tools are necessary to plan a treatment or follow up its efficacy. Yet radiographic and MRI techniques, the respective clinical standards for evaluating bone strength and IVD degeneration, are unspecific and not objective. Increasingly used in biomedical engineering, CT-based finite element (FE) models constitute the state of the art for vertebral strength prediction. However, as non-invasive biomechanical evaluation and personalised FE models of the IVD are not available, rigid boundary conditions (BCs) are applied to the FE models to avoid uncertainties of disc degeneration that might bias the predictions. Moreover, considering the impact of low back pain, the biomechanical status of the IVD is needed as a criterion for early disc degeneration. Thus, the first FE study focuses on two rigid BCs applied to the vertebral bodies during compression tests of cadaver vertebral bodies: vertebral sectioning and PMMA embedding. The second FE study highlights the large influence of the intervertebral disc's compliance on vertebral strength and on the distribution and initiation of damage. The third study introduces a new protocol for normalisation of the IVD stiffness in compression, torsion and bending using MRI-based data to account for its morphology. In the last study, a new criterion (Otsu threshold) for disc degeneration based on quantitative MRI data (axial T2 map) is proposed. The results show that vertebral strength and damage distribution computed with the two rigid BCs are identical. Yet, large discrepancies in strength and damage localisation were observed when the vertebral bodies were loaded via IVDs.
The normalisation protocol attenuated the effect of geometry on the IVD stiffnesses without completely suppressing it. Finally, the Otsu threshold computed in the posterior part of the annulus fibrosus was related to the disc biomechanics and met the objectivity and simplicity required for a clinical application. In conclusion, the stiffness normalisation protocol necessary for consistent IVD comparisons, together with the relation found between degeneration, the mechanical response of the IVD and the Otsu threshold, leads the way towards non-invasive evaluation of the biomechanical status of the IVD. As the FE prediction of vertebral strength is largely influenced by the IVD conditions, these data could also improve future FE models of the osteoporotic vertebra.
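For reference, Otsu's method picks the threshold that maximises the between-class variance of a value histogram. The sketch below is a generic pure-Python version applied to a plain list of intensities; it is illustrative only, not the authors' exact pipeline, which operated on axial T2 maps.

```python
def otsu_threshold(values, bins=256):
    """Return the intensity cut that best separates the histogram of
    `values` into two classes (maximum between-class variance)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(bins):
        w0 += hist[t]          # weight of the lower class
        if w0 == 0:
            continue
        w1 = total - w0        # weight of the upper class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return lo + (best_t + 0.5) * width  # bin centre, back in original units

# Invented bimodal data standing in for T2 values of two tissue classes.
threshold = otsu_threshold([2.0] * 40 + [8.0] * 60)
```

Because the criterion is computed directly from the histogram, it needs no operator-chosen parameters, which is the objectivity argument made in the abstract.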
Abstract:
Introduction Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Exam (OSCE) has a major impact on the reliability and validity of the exam, quality control should be initiated. Literature about quality control of SPs' performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we did not find a published instrument meeting our needs for the assessment of patient portrayal, we developed such an instrument, inspired by others [5], and used it in our high-stakes exam. Project description SP trainers from five medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised twice, based on experiences during OSCEs. The final instrument contains 14 criteria for acting (i.e. adequate verbal and non-verbal expression) and standardization (i.e. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. SPs and trainers were introduced to the instrument. The tool was used in training (more than 100 observations) and during the exam (more than 250 observations). Outcome High quality of SPs' patient portrayal during the exam was documented. More than 90% of SP performances were rated as completely correct or sufficient. An increase in quality of performance between training and exam was noted. For example, the rate of completely correct reactions in medical tests increased from 88% to 95%; together with the 4% of sufficient performances, 99% of the reactions in medical tests met the standards of the exam. SP educators using the instrument reported an improvement in SPs' performance induced by the use of the instrument. Disadvantages mentioned were the high concentration needed to observe all criteria and the cumbersome handling of the paper-based forms. Discussion We were able to document a very high quality of SP performance in our exam.
The data also indicate that our training is effective. We believe that the high concentration needed to use the instrument is well invested, considering the observed enhancement of performance. The development of an iPad-based application is planned to address the cumbersome handling of the paper forms.
Abstract:
Background Tumor necrosis factor (TNF) inhibition is central to the therapy of inflammatory bowel diseases (IBD). However, loss of response (LOR) is frequent, and additional tests are needed to support decision making with costly anti-TNF therapy. Methods Consecutive IBD patients receiving anti-TNF therapy (infliximab (IFX), or adalimumab after IFX LOR) at Bern University Hospital were identified and followed prospectively. Patient whole blood was stimulated with dose titrations of two triggers: human TNF and LPS. The median fluorescence intensity of CD62L on the surface of granulocytes was quantified by surface staining with specific antibodies (CD33, CD62L) and flow cytometry; fitting logistic curves to these data permits calculation of the EC50, the half-maximal effective TNF concentration required to induce shedding [1]. A shift in the concentration at which CD62L shedding occurred was seen before versus after anti-TNF administration, which permits prediction of the response to the drug. This predicted response was correlated with the clinical evolution of the patients in order to analyse the ability of this test to identify LOR to IFX. Results We collected prospective clinical data and blood samples, before and after anti-TNF administration, from 33 IBD patients (25 Crohn's disease (CD) and 8 ulcerative colitis (UC) patients; 45% female) between June 2012 and November 2013. The assay showed functional blockade by IFX (PFR) for 22 patients (17 CD and 5 UC), whereas 11 (8 CD and 3 UC) had no functional response (NR) to IFX. Clinical characteristics (e.g. diagnosis, disease location, smoking status, BMI and number of infusions) were not significantly different between predicted PFR and NR. Among the 22 patients with PFR, only 1 was a clinical non-responder (LOR to IFX), based on prospective clinical evaluation by IBD gastroenterologists (PJ, AM), and among the 11 predicted NR, 3 had no clinical LOR.
Sensitivity of this test was 95% and specificity 73%, and the AUC adjusted for age and gender was 0.81 (Figure 1). During follow-up (median 10 months, range 3-15), 8 "hard" outcomes occurred (3 medical flares, 4 resections and 1 new fistula): 2 in the PFR and 6 in the NR group (25% vs. 75%; p < 0.01). Correlation with clinical response is presented in Figure 2. Figure 2. Correlation of clinical response with log EC50 changes: 1 no, 2 partial, 3 complete clinical response. Conclusion CD62L (L-selectin) shedding is the first validated test of functional blockade of TNF-alpha in anti-TNF-treated IBD patients and will be a useful tool to guide medical decisions on the use of anti-TNF agents. Comparative studies with ATI and IFX trough levels are ongoing. 1. Nicola Patuto, Emma Slack, Frank Seibold and Andrew J. Macpherson (2011), Quantitating Anti-TNF Functionality to Inform Dosing and Choice of Therapy, Gastroenterology, 140 (5, Suppl. 1), S689.
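The EC50 read-out can be illustrated with a simplified stand-in: the authors fitted full logistic curves, whereas the sketch below merely interpolates the concentration of half-maximal response on a log scale. All data below are invented.

```python
import math

def ec50(concentrations, responses):
    """Concentrations in ascending order; responses decrease as the TNF
    dose rises (e.g. CD62L median fluorescence intensity on granulocytes).
    Returns the concentration producing the half-maximal response."""
    top, bottom = max(responses), min(responses)
    half = (top + bottom) / 2
    for i in range(len(concentrations) - 1):
        c1, c2 = concentrations[i], concentrations[i + 1]
        r1, r2 = responses[i], responses[i + 1]
        if (r1 - half) * (r2 - half) <= 0:  # half-response crossed in this interval
            frac = (r1 - half) / (r1 - r2)
            log_c = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_c
    raise ValueError("half-maximal response not bracketed by the data")

# Invented dose-response curve: MFI falls as the TNF dose increases.
shift = ec50([0.1, 1.0, 10.0, 100.0], [100.0, 90.0, 20.0, 10.0])
```

A rightward shift of this value after drug administration (more TNF needed to trigger shedding) would indicate circulating functional anti-TNF, which is the basis of the prediction described above.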
Abstract:
PURPOSE Austrian out-of-hospital emergency physicians (OOHEP) undergo mandatory biannual emergency physician refresher courses to maintain their licence. The purpose of this study was to compare different reported emergency skills and knowledge, recommended by the European Resuscitation Council (ERC) guidelines, between OOHEP who work regularly in an out-of-hospital emergency service and those who are licenced but do not currently work as OOHEP. METHODS We obtained data from 854 participants in 19 refresher courses. Demographics, questions about their practice and multiple-choice questions about ALS knowledge were answered and analysed. We particularly explored the application of therapeutic hypothermia, intraosseous access, pocket guide use and knowledge about the defibrillator in use. A multivariate logistic regression analysed differences between the two groups of OOHEP. Age, gender, years of clinical experience, ERC-ALS provider course attendance and the self-reported number of resuscitations were control variables. RESULTS Licenced OOHEP who are currently employed in an emergency service are significantly more likely to initiate intraosseous access (OR = 4.013, p < 0.01), they more often initiate mild therapeutic hypothermia after successful resuscitation (OR = 2.550, p < 0.01), and their knowledge about the defibrillator in use was higher (OR = 2.292, p < 0.01). No difference was found for the use of pocket guides. OOHEP who had attended an ERC-ALS provider course since 2005 initiated mild therapeutic hypothermia after successful resuscitation more often (OR = 1.670, p < 0.05), as did participants who had performed a resuscitation within the last year (OR = 2.324, p < 0.01), while older OOHEP initiated mild therapeutic hypothermia less often, measured per year of age (OR = 0.913, p < 0.01).
CONCLUSION Licenced and employed OOHEP implement the ERC guidelines in clinical practice better, but more training on life-saving rescue techniques is needed to improve knowledge and to raise these rates of application.
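The odds ratios quoted above come from a multivariate logistic regression; the univariable analogue, an odds ratio with a Wald 95% confidence interval computed from a 2x2 table, can be sketched as follows (all counts are invented for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a/b = outcome yes/no in the exposed group,
    c/d = outcome yes/no in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical example: exposed = currently employed OOHEP,
# outcome = initiates intraosseous access (invented counts).
result = odds_ratio_ci(40, 10, 20, 30)
```

An interval excluding 1 corresponds to the p < 0.05 findings reported; the study's ORs additionally adjust for age, gender, experience, course attendance and resuscitation history.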
Abstract:
Polymorbid patients, diverse diagnostic and therapeutic options, more complex hospital structures, financial incentives, benchmarking, as well as perceptional and societal changes put pressure on medical doctors, specifically when medical errors surface. This is particularly true for the emergency department setting, where patients face delayed or erroneous initial diagnostic or therapeutic measures and costly hospital stays due to sub-optimal triage. A "biomarker" is any laboratory tool with the potential to better detect and characterise diseases, to simplify complex clinical algorithms and to improve clinical problem solving in routine care. Biomarkers must be embedded in clinical algorithms to complement, not replace, basic medical skills. Unselected ordering of laboratory tests and shortcomings in test performance and interpretation contribute to diagnostic errors. Test results may be ambiguous, with false positive or false negative results, and generate unnecessary harm and costs. Laboratory tests should only be ordered if the results have clinical consequences. In studies, we must move beyond the observational reporting and meta-analysis of diagnostic accuracies for biomarkers. Instead, specific cut-off ranges should be proposed and intervention studies conducted to prove outcome-relevant impacts on patient care. The focus of this review is to exemplify the appropriate use of selected laboratory tests in the emergency setting for which randomised controlled intervention studies have proven clinical benefit. Herein, we focus on initial patient triage and the allocation of treatment opportunities in patients with cardiorespiratory diseases in the emergency department.
The following six biomarkers will be discussed: proadrenomedullin for prognostic triage assessment and site-of-care decisions, cardiac troponin for acute myocardial infarction, natriuretic peptides for acute heart failure, D-dimers for venous thromboembolism, C-reactive protein as a marker of inflammation, and procalcitonin for antibiotic stewardship in infections of the respiratory tract and sepsis. For these markers we provide an overview of their pathophysiology, the historical evolution of the evidence, and their strengths and limitations for a rational implementation into clinical algorithms. We critically discuss results from key intervention trials that led to their use in clinical routine, as well as potential future indications. The rationale for the use of all these biomarkers is to tackle, first, diagnostic ambiguity and the defensive medicine that follows from it; second, delayed and sub-optimal therapeutic decisions; and third, prognostic uncertainty with misguided triage and site-of-care decisions, all of which contribute to the waste of our limited health care resources. A multifaceted approach to a more targeted management of medical patients from emergency admission to discharge, including biomarkers, will translate into better resource use, shorter length of hospital stay, reduced overall costs, and improved patient satisfaction and outcomes in terms of mortality and re-hospitalisation. Hopefully, the concepts outlined in this review will help readers to improve their diagnostic skills and become more parsimonious requesters of laboratory tests.
Abstract:
Acute-on-chronic liver failure (ACLF) is characterized by acute decompensation (AD) of cirrhosis, organ failure(s), and high 28-day mortality. We investigated whether assessments of patients at specific time points predicted their need for liver transplantation (LT) or the potential futility of their care. We assessed the clinical courses of 388 patients who had ACLF at enrollment, from February through September 2011, or during early (28-day) follow-up in the prospective multicenter European Chronic Liver Failure (CLIF) ACLF in Cirrhosis study. We assessed ACLF grades at different time points to define disease resolution, improvement, worsening, or a steady or fluctuating course. ACLF resolved or improved in 49.2% of patients, had a steady or fluctuating course in 30.4%, and worsened in 20.4%. The 28-day transplant-free mortality was low to moderate (6%-18%) in patients with a nonsevere early course (final no ACLF or ACLF-1) and high to very high (42%-92%) in those with a severe early course (final ACLF-2 or -3), independently of initial grades. Independent predictors of course severity were the CLIF Consortium ACLF score (CLIF-C ACLFs) and the presence of liver failure (total bilirubin ≥12 mg/dL) at ACLF diagnosis. Eighty-one percent of patients had reached their final ACLF grade by 1 week, resulting in accurate prediction of short-term (28-day) and mid-term (90-day) mortality by the ACLF grade at days 3-7. Among patients who underwent early LT, 75% survived for at least 1 year. Among patients who had ≥4 organ failures or a CLIF-C ACLF score >64 at days 3-7 and did not undergo LT, mortality was 100% by 28 days. CONCLUSIONS Assessment of ACLF patients at days 3-7 of the syndrome provides a tool to define the urgency of LT and a rational basis for discontinuing intensive care owing to futility.
Abstract:
CONTEXT Radiolabelled choline positron emission tomography has changed the management of prostate cancer patients. However, new emerging radiopharmaceutical agents, such as radiolabelled prostate-specific membrane antigen, and promising new hybrid imaging will open new challenges in the diagnostic field. OBJECTIVE The continuous evolution in nuclear medicine has led to improvement in the detection of recurrent prostate cancer (PCa), particularly distant metastases. New horizons have been opened for radiolabelled choline positron emission tomography (PET)/computed tomography (CT) as a guide for salvage therapy or for the assessment of systemic therapies. In addition, new tracers and imaging tools have recently been tested, providing important information for the management of PCa patients. Herein we discuss: (1) the available evidence in the literature on radiolabelled choline PET and its recent indications, (2) the role of alternative radiopharmaceutical agents, and (3) the advantages of a recent hybrid imaging device (PET/magnetic resonance imaging) in PCa. EVIDENCE ACQUISITION Data from recently published (2010-2015) original articles concerning the role of choline PET/CT, new emerging radiotracers, and a new imaging device are analysed. This review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. EVIDENCE SYNTHESIS In the restaging phase, the detection rate of choline PET varies between 4% and 97%, mainly depending on the site of recurrence and prostate-specific antigen (PSA) levels. Both 68gallium (68Ga)-prostate-specific membrane antigen (PSMA) and 18F-fluciclovine have been shown to be more accurate in the detection of recurrent disease than radiolabelled choline PET/CT. In particular, 68Ga-PSMA has detection rates of 50% and 68% for PSA levels <0.5 ng/ml and 0.5-2 ng/ml, respectively.
Moreover, 68Ga-PSMA PET/magnetic resonance imaging demonstrated notably higher accuracy in detecting PCa than PET/CT. New tracers, such as radiolabelled bombesin or urokinase-type plasminogen activator receptor ligands, are promising, but few clinical data are available to date. CONCLUSIONS Some limitations emerge from the published papers, both for radiolabelled choline PET/CT and for the new radiopharmaceutical agents. Efforts are still needed to enhance the impact of published data in the world of oncology, in particular when new radiopharmaceuticals are introduced into the clinical arena. PATIENT SUMMARY In the present review, the authors summarise the latest evidence in clinical practice for the assessment of prostate cancer using nuclear medicine modalities such as positron emission tomography/computed tomography and positron emission tomography/magnetic resonance imaging.
Abstract:
Background: It is as yet unclear whether there are differences between using electronic key feature problems (KFPs) or electronic case-based multiple choice questions (cbMCQ) for the assessment of clinical decision making. Summary of Work: Fifth-year medical students were exposed to clerkships which ended with a summative exam. Assessment of knowledge per exam was done by 6-9 KFPs, 9-20 cbMCQ and 9-28 MC questions. Each KFP consisted of a case vignette and three key features (KF) using "long menu" as the question format. We sought students' perceptions of the KFPs and cbMCQs in focus groups (n=39 students). Furthermore, statistical data from 11 exams (n=377 students) concerning the KFPs and (cb)MCQs were compared. Summary of Results: The analysis of the focus groups resulted in four themes reflecting students' perceptions of KFPs and their comparison with (cb)MCQ: KFPs were perceived as (i) more realistic, (ii) more difficult, and (iii) more motivating for the intense study of clinical reasoning than (cb)MCQ, and (iv) showed an overall good acceptance when some preconditions are taken into account. The statistical analysis revealed that there was no difference in difficulty; however, KFPs showed higher discrimination and reliability (G-coefficient), even when corrected for testing times. Correlation of the different exam parts was intermediate. Conclusions: Students perceived the KFPs as more motivating for the study of clinical reasoning. Statistically, KFPs showed higher discrimination and higher reliability than cbMCQs. Take-home messages: Including KFPs with long menu questions in summative clerkship exams seems to offer positive educational effects.
Abstract:
Background. Previous studies show emergency rooms to be overcrowded nationwide. With growing attention to this problem, the Houston-Galveston Area Council (H-GAC) initiated a study in 2005 to assess its region's emergency health care system, and continued this effort in 2007. The purpose of this study was to examine recent changes in volume, capacity and performance in the Houston-Galveston region's emergency health care system and to determine whether the system has been able to respond effectively to residents' demands. Methods. Data were collected by the Houston-Galveston Area Council and The Abaris Group using a self-administered survey covering 2002-2006, completed by administrators of the region's hospitals, EMS providers, and select fire departments that provide EMS services. Data from both studies were combined and matched to examine trends. Results. Volume increased among the reporting hospitals within the Houston-Galveston region from 2002 to 2006; however, capacity remained relatively unchanged. EMS providers reported higher average offload times in 2007 compared with 2005, but the increases were not statistically significant. Hospitals reported transferring a statistically significantly greater percentage of patients in 2006 than in 2004. There was no statistically significant change in any of the other measures. Conclusion. These findings indicate an increase in demand for the Houston-Galveston region's emergency healthcare services with no change in supply. Additional studies within the area are needed to fully assess and evaluate the impact of these changes on system performance.