861 results for MEDICAL PATIENTS
Abstract:
OBJECTIVES: To identify the prevalence of geriatric syndromes in the premorbid (preadmission for falls), admission, and discharge assessment periods, and the incidence of new syndromes and significant worsening of existing syndromes at admission and discharge. DESIGN: Prospective cohort study. SETTING: Three acute care hospitals in Brisbane, Australia. PARTICIPANTS: Five hundred seventy-seven general medical patients aged 70 and older admitted to the hospital. MEASUREMENTS: Prevalence of syndromes in the premorbid (or preadmission for falls), admission, and discharge periods; incidence of new syndromes at admission and discharge; and significant worsening of existing syndromes at admission and discharge. RESULTS: The most frequently reported premorbid syndromes were bladder incontinence (44%) and impairment in any activity of daily living (ADL) (42%). A high proportion (42%) experienced at least one fall in the 90 days before admission. Two-thirds of the participants experienced between one and five syndromes (cognitive impairment, dependence in any ADL item, bladder and bowel incontinence, pressure ulcer) before admission, at admission, and at discharge. A majority experienced one or two syndromes during the premorbid (49.4%), admission (57.0%), or discharge (49.0%) assessment period. The syndromes with the highest incidence of significant worsening at discharge (among patients with the syndrome present premorbidly) were ADL limitation (33%), cognitive impairment (9%), and bladder incontinence (8%). Of the syndromes examined at discharge, the new syndromes (absent premorbidly) experienced by the largest proportions of patients were ADL limitation (22%) and bladder incontinence (13%). CONCLUSION: Geriatric syndromes were highly prevalent. Many patients did not return to their premorbid function and acquired new syndromes.
Abstract:
Aim: Up to 60% of older medical patients are malnourished, with further decline during hospital stay. There is limited evidence for effective nutrition intervention. Staff focus groups were conducted to improve understanding of potential contextual and cultural barriers to feeding older adults in hospital. Methods: Three focus groups involved 22 staff working on the acute medical wards of a large tertiary teaching hospital. Staff disciplines were nursing, dietetics, speech pathology, occupational therapy, physiotherapy and pharmacy. A semistructured topic guide was used by the same facilitator to prompt discussions on hospital nutrition care, including barriers. Focus groups were tape-recorded, transcribed and analysed thematically. Results: All staff recognised malnutrition to be an important problem in older patients during hospital stay and identified patient-level barriers to nutrition care, such as non-compliance with feeding plans, and hospital-level barriers, including nursing staff shortages. Differences between disciplines revealed a lack of a coordinated approach to nutrition care, including poor knowledge of nutrition care processes, poor interdisciplinary communication, and little sense of shared responsibility. All staff talked about competing activities at meal times and felt disempowered to prioritise nutrition in the acute medical setting. Staff agreed that education and ‘extra hands’ would address most barriers but did not consider organisational change. Conclusions: Redesigning the model of care to reprioritise meal-time activities and redefine multidisciplinary roles and responsibilities would support coordinated nutrition care. However, effectiveness may also depend on hospital-wide leadership and support to empower staff and increase accountability within a team-led approach.
Abstract:
Background & Aims: Inadequate feeding assistance and mealtime interruptions during hospitalisation may contribute to malnutrition and poor nutritional intake in older people. This study aimed to implement and compare three interventions designed specifically to address mealtime barriers and improve the energy intakes of medical inpatients aged ≥65 years. Methods: A pre-post study compared three mealtime assistance interventions: PM: Protected Mealtimes with multidisciplinary education; AIN: additional assistant-in-nursing (AIN) with a dedicated meal role; PM+AIN: the combined intervention. Dietary intake of 254 patients (pre: n=115, post: n=141; mean age 80±8 years) was visually estimated on a single day in the first week of hospitalisation and compared with estimated energy requirements. Assistance activities were observed and recorded. Results: Mealtime assistance levels significantly increased in all interventions (p<0.01). Post-intervention participants were more likely to achieve adequate energy intake (OR=3.4, p=0.01), with no difference noted between interventions (p=0.29). Patients with cognitive impairment or feeding dependency appeared to gain substantial benefit from mealtime assistance interventions. Conclusions: Protected Mealtimes and additional AIN assistance (implemented alone or in combination) may produce modest improvements in nutritional intake. Targeted feeding assistance for certain patient groups holds promise; however, alternative strategies are required to address the complex problem of malnutrition in this population.
Abstract:
The risk of venous thromboembolism (VTE) in medical patients is high, but risk assessment is rarely performed because there is not yet a good method to identify candidates for prophylaxis. Purpose: To perform a systematic review of VTE risk factors (RFs) in hospitalized medical patients and generate recommendations (RECs) for prophylaxis that can be implemented into practice. Data sources: A multidisciplinary group of experts from 12 Brazilian Medical Societies searched MEDLINE, Cochrane, and LILACS. Study selection: Two experts independently classified the evidence for each RF by its scientific quality in a standardized manner. A risk-assessment algorithm was created based on the results of the review. Data synthesis: Several VTE RFs have enough evidence to support RECs for prophylaxis in hospitalized medical patients (e.g., increasing age, heart failure, and stroke). Other factors are considered adjuncts of risk (e.g., varices, obesity, and infections). According to the algorithm, hospitalized medical patients ≥40 years old with decreased mobility and ≥1 RF should receive chemoprophylaxis with heparin, provided they have no contraindications. High prophylactic doses of unfractionated heparin or low-molecular-weight heparin must be administered and maintained for 6-14 days. Conclusions: A multidisciplinary group generated evidence-based RECs and an easy-to-use algorithm to facilitate VTE prophylaxis in medical patients.
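As summarized in the abstract, the recommendation reduces to a simple decision rule. A minimal sketch of that rule is shown below, assuming boolean inputs for the criteria the abstract names; the function and variable names (including the contraindication flag) are illustrative, not taken from the published algorithm.

```python
def recommend_chemoprophylaxis(age: int,
                               decreased_mobility: bool,
                               num_risk_factors: int,
                               has_contraindication: bool) -> bool:
    """Sketch of the rule summarized in the abstract: hospitalized medical
    patients >=40 years old with decreased mobility and at least one risk
    factor should receive heparin prophylaxis, absent contraindications."""
    return (age >= 40
            and decreased_mobility
            and num_risk_factors >= 1
            and not has_contraindication)

# Example: a 72-year-old bedridden patient with heart failure and no contraindication
print(recommend_chemoprophylaxis(72, True, 1, False))  # True
```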
Abstract:
This article presents a systematic review of the utility of the Beck Depression Inventory for detecting depression in medical settings, focusing on the revised version of the scale (Beck Depression Inventory-II), which was reformulated according to the DSM-IV criteria for major depression. We examined relevant investigations with the Beck Depression Inventory-II for measuring depression in medical settings in order to provide guidelines for practicing clinicians. After applying the inclusion and exclusion criteria, seventy articles were retained. Validation studies of the Beck Depression Inventory-II, in both primary care and hospital settings, were found for clinics of cardiology, neurology, obstetrics, brain injury, nephrology, chronic pain, chronic fatigue, oncology, and infectious disease. The Beck Depression Inventory-II showed high reliability and good correlation with measures of depression and anxiety. Its threshold for detecting depression varied according to the type of patients, suggesting the need for adjusted cut-off points. The somatic and cognitive-affective dimensions described the latent structure of the instrument. The Beck Depression Inventory-II can be easily adapted to most clinical conditions for detecting major depression and recommending an appropriate intervention. Although this scale represents a sound path for detecting depression in patients with medical conditions, the clinician should seek evidence for how to interpret the score before using the Beck Depression Inventory-II to make clinical decisions.
Abstract:
BACKGROUND: The adequacy of thromboprophylaxis prescriptions in acutely ill hospitalized medical patients needs improvement. OBJECTIVE: To prospectively assess the efficacy of various clinical decision support systems (CDSS) in improving thromboprophylaxis adequacy, with the aim of increasing the use of explicit criteria for thromboprophylaxis prescription in nine Swiss medical services. METHODS: We randomly assigned medical services to a pocket digital assistant program (PDA), pocket cards (PC), or no CDSS (controls). In centers using an electronic chart, an e-alert system (eAlerts) was developed. After 4 months, we compared post-CDSS with baseline thromboprophylaxis adequacy for the various CDSS and control groups. RESULTS: Overall, 1085 patients were included (395 controls, 196 PC, 168 PDA, 326 eAlerts), 651 pre- and 434 post-CDSS implementation: 472 (43.5%) presented a risk of VTE justifying thromboprophylaxis (31.8% pre, 61.1% post) and 556 (51.2%) received thromboprophylaxis (54.2% pre, 46.8% post). The overall adequacy (% of patients with an adequate prescription) pre- and post-CDSS implementation was 56.2% and 50.7% for controls (P = 0.29), 67.3% and 45.3% for PC (P = 0.002), 66.0% and 64.9% for PDA (P = 0.99), and 50.5% and 56.2% for eAlerts (P = 0.37), respectively. eAlerts limited overprescription (56% pre, 31% post, P = 0.01). CONCLUSION: While pocket cards and handhelds did not improve thromboprophylaxis adequacy, eAlerts had a modest effect, particularly on the reduction of overprescription. This effect contributes only partially to the improvement of patient safety, and more work is needed towards institution-tailored tools.
Abstract:
IMPORTANCE Because effective interventions to reduce hospital readmissions are often expensive to implement, a score to predict potentially avoidable readmissions may help target the patients most likely to benefit. OBJECTIVE To derive and internally validate a prediction model for potentially avoidable 30-day hospital readmissions in medical patients using administrative and clinical data readily available prior to discharge. DESIGN Retrospective cohort study. SETTING Academic medical center in Boston, Massachusetts. PARTICIPANTS All patient discharges from any medical services between July 1, 2009, and June 30, 2010. MAIN OUTCOME MEASURES Potentially avoidable 30-day readmissions to 3 hospitals of the Partners HealthCare network were identified using a validated computerized algorithm based on administrative data (SQLape). A simple score was developed using multivariable logistic regression, with two-thirds of the sample randomly selected as the derivation cohort and one-third as the validation cohort. RESULTS Among 10 731 eligible discharges, 2398 discharges (22.3%) were followed by a 30-day readmission, of which 879 (8.5% of all discharges) were identified as potentially avoidable. The prediction score identified 7 independent factors, referred to as the HOSPITAL score: Hemoglobin at discharge, discharge from an Oncology service, Sodium level at discharge, Procedure during the index admission, Index Type of admission, number of Admissions during the last 12 months, and Length of stay. In the validation set, 26.7% of the patients were classified as high risk, with an estimated potentially avoidable readmission risk of 18.0% (observed, 18.2%). The HOSPITAL score had fair discriminatory power (C statistic, 0.71) and good calibration. CONCLUSIONS AND RELEVANCE This simple prediction model identifies before discharge the risk of potentially avoidable 30-day readmission in medical patients. The score has the potential to easily identify patients who may need more intensive transitional care interventions.
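The abstract names the seven HOSPITAL predictors but not their point values or cut-offs, so the sketch below only illustrates how such an additive score could be tallied; every weight and threshold is a hypothetical placeholder, not the published scoring.

```python
# Illustrative sketch of a HOSPITAL-style additive score.
# Weights below are hypothetical placeholders; the abstract does not report
# the published point values or the cut-offs defining each predictor.
HYPOTHETICAL_POINTS = {
    "low_hemoglobin_at_discharge": 1,
    "discharged_from_oncology_service": 1,
    "low_sodium_at_discharge": 1,
    "procedure_during_index_admission": 1,
    "nonelective_index_admission": 1,
    "multiple_admissions_last_12_months": 1,
    "long_length_of_stay": 1,
}

def hospital_style_score(flags: dict) -> int:
    """Sum the placeholder points for each predictor flagged True."""
    return sum(points for name, points in HYPOTHETICAL_POINTS.items()
               if flags.get(name))

patient = {"low_hemoglobin_at_discharge": True, "long_length_of_stay": True}
print(hospital_style_score(patient))  # 2 under the placeholder weights
```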
Abstract:
There is a need to validate risk assessment tools for hospitalised medical patients at risk of venous thromboembolism (VTE). We investigated whether a predefined cut-off of the Geneva Risk Score, as compared to the Padua Prediction Score, accurately distinguishes low-risk from high-risk patients regardless of the use of thromboprophylaxis. In the multicentre, prospective Explicit ASsessment of Thromboembolic RIsk and Prophylaxis for Medical PATients in SwitzErland (ESTIMATE) cohort study, 1,478 hospitalised medical patients were enrolled, of whom 637 (43%) did not receive thromboprophylaxis. The primary endpoint was symptomatic VTE or VTE-related death at 90 days. The study is registered at ClinicalTrials.gov, number NCT01277536. According to the Geneva Risk Score, the cumulative rate of the primary endpoint was 3.2% (95% confidence interval [CI] 2.2-4.6%) in 962 high-risk vs 0.6% (95% CI 0.2-1.9%) in 516 low-risk patients (p=0.002); among patients without prophylaxis, this rate was 3.5% vs 0.8% (p=0.029), respectively. In comparison, the Padua Prediction Score yielded a cumulative rate of the primary endpoint of 3.5% (95% CI 2.3-5.3%) in 714 high-risk vs 1.1% (95% CI 0.6-2.3%) in 764 low-risk patients (p=0.002); among patients without prophylaxis, this rate was 3.2% vs 1.5% (p=0.130), respectively. The negative likelihood ratio was 0.28 (95% CI 0.10-0.83) for the Geneva Risk Score and 0.51 (95% CI 0.28-0.93) for the Padua Prediction Score. In conclusion, among hospitalised medical patients, the Geneva Risk Score predicted VTE and VTE-related mortality and compared favourably with the Padua Prediction Score, particularly in its accuracy in identifying low-risk patients who do not require thromboprophylaxis.
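The negative likelihood ratios quoted above follow from the standard definition, NLR = (1 - sensitivity) / specificity. The abstract does not report the underlying sensitivity and specificity of either score, so the sketch below simply illustrates the computation with made-up inputs.

```python
def negative_likelihood_ratio(sensitivity: float, specificity: float) -> float:
    """Standard definition: probability of a low-risk classification among
    patients who develop the outcome, divided by the same probability among
    patients who do not."""
    return (1.0 - sensitivity) / specificity

# Illustrative values only; not the sensitivity/specificity of either score.
print(round(negative_likelihood_ratio(0.85, 0.40), 2))  # 0.38
```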
Abstract:
Both underuse and overuse of thromboprophylaxis in hospitalised medical patients are common. We aimed to explore clinical factors associated with the use of pharmacological or mechanical thromboprophylaxis in acutely ill medical patients at high (Geneva Risk Score ≥ 3 points) vs low (Geneva Risk Score < 3 points) risk of venous thromboembolism. Overall, 1,478 hospitalised medical patients from eight large Swiss hospitals were enrolled in the prospective Explicit ASsessment of Thromboembolic RIsk and Prophylaxis for Medical PATients in SwitzErland (ESTIMATE) cohort study. The study is registered on ClinicalTrials.gov, number NCT01277536. Thromboprophylaxis increased stepwise with increasing Geneva Risk Score (p<0.001). Among the 962 high-risk patients, 366 (38%) received no thromboprophylaxis; cancer-associated thrombocytopenia (OR 4.78, 95% CI 2.75-8.31, p<0.001), active bleeding on admission (OR 2.88, 95% CI 1.69-4.92, p<0.001), and thrombocytopenia without cancer (OR 2.54, 95% CI 1.31-4.95, p=0.006) were independently associated with the absence of prophylaxis. The use of thromboprophylaxis declined with increasing severity of thrombocytopenia (p=0.001). Among the 516 low-risk patients, 245 (48%) received thromboprophylaxis; none of the investigated clinical factors predicted its use. In conclusion, in acutely ill medical patients, bleeding and thrombocytopenia were the most important factors associated with the absence of thromboprophylaxis among high-risk patients. The use of thromboprophylaxis among low-risk patients was inconsistent, without clearly identifiable predictors, and should be addressed in further research.
Abstract:
Background: Older adults experience functional decline in hospital, leading to increased healthcare burden and morbidity. The benefits of augmented exercise in hospital remain uncertain. The aim of this trial is to measure the short- and longer-term effects of augmented exercise for older medical in-patients on their physical performance, quality of life and health care utilisation. Design and Methods: Two hundred and twenty older medical patients will be randomly allocated, with blinding, to the intervention or sham group. Both groups will receive usual care (including routine physiotherapy care) augmented by two daily exercise sessions. The sham group will receive stretching and relaxation exercises, while the intervention group will receive tailored strengthening and balance exercises. Differences between groups will be measured at baseline, discharge, and three months. The primary outcome measure will be length of stay. The secondary outcome measures will be healthcare utilisation, activity (accelerometry), physical performance (Short Physical Performance Battery), falls history in hospital and quality of life (EQ-5D-5L). Discussion: This simple intervention has the potential to transform the outcomes of the older patient in the acute setting.
Abstract:
Background: Patients presenting to the emergency department (ED) currently face unacceptable delays in initial treatment and long, costly hospital stays due to suboptimal initial triage and site-of-care decisions. Accurate ED triage should focus not only on initial treatment priority, but also on prediction of medical risk and nursing needs to improve site-of-care decisions and to simplify early discharge management. Different triage scores have been proposed, such as the Manchester triage system (MTS). Yet these scores focus only on treatment priority, have suboptimal performance and lack validation in the Swiss health care system. Because the MTS will be introduced into clinical routine at the Kantonsspital Aarau, we propose a large prospective cohort study to optimize initial patient triage. Specifically, the aim of this trial is to derive a three-part triage algorithm to better predict (a) treatment priority; (b) medical risk and thus the need for in-hospital treatment; and (c) post-acute care needs of patients at the most proximal time point of ED admission. Methods/design: Prospective, observational, multicenter, multi-national cohort study. We will include all consecutive medical patients seeking ED care in this observational registry. There will be no exclusions except for non-adult and non-medical patients. Vital signs will be recorded and left-over blood samples will be stored for later batch analysis of blood markers. Upon ED admission, the post-acute care discharge score (PACD) will be recorded. Attending ED physicians will adjudicate triage priority based on all available results at the time of ED discharge to the medical ward. Patients will be reassessed daily during the hospital course for medical stability and readiness for discharge from the nurses' and, if involved, social workers' perspectives. To assess outcomes, data from electronic medical records will be used and all patients will be contacted 30 days after hospital admission to assess vital and functional status, re-hospitalization, satisfaction with care and quality of life measures. We aim to include between 5000 and 7000 patients over one year of recruitment to derive the three-part triage algorithm. The respective main endpoints were defined as (a) initial triage priority (high vs. low priority) adjudicated by the attending ED physician at ED discharge; (b) adverse 30-day outcome (death or intensive care unit admission) within 30 days following ED admission, to assess patients' risk and thus the need for in-hospital treatment; and (c) post-acute care needs after hospital discharge, defined as transfer of patients to a post-acute care institution, for early recognition and planning of post-acute care needs. Other outcomes are time to first physician contact, time to initiation of adequate medical therapy, time to social worker involvement, length of hospital stay, reasons for discharge delays, patients' satisfaction with care, overall hospital costs and patients' care needs after returning home. Discussion: Using a reliable initial triage system for estimating initial treatment priority, need for in-hospital treatment and post-acute care needs is an innovative and persuasive approach for a more targeted and efficient management of medical patients in the ED. The proposed interdisciplinary, multi-national project has unprecedented potential to improve initial triage decisions and optimize resource allocation to the sickest patients from admission to discharge.
The algorithms derived in this study will be compared in a later randomized controlled trial against a usual care control group in terms of resource use, length of hospital stay, overall costs and patient outcomes in terms of mortality, re-hospitalization, quality of life and satisfaction with care.
Abstract:
BACKGROUND: Malnutrition and poor intake during hospitalisation are common in older medical patients. A better understanding of patient-specific factors associated with poor intake may inform nutritional interventions. AIMS: To measure the proportion of older medical patients with inadequate nutritional intake, and to identify patient-related factors associated with this outcome. METHODS: Prospective cohort study enrolling consecutive consenting medical inpatients aged 65 years or older. The primary outcome was energy intake less than resting energy expenditure estimated using weight-based equations. Energy intake was calculated for a single day using direct observation of plate waste. Explanatory variables included age, gender, number of co-morbidities, number of medications, diagnosis, usual residence, nutritional status, functional and cognitive impairment, depressive symptoms, poor appetite, poor dentition, and dysphagia. RESULTS: Of 134 participants (mean age 80 years, 51% female), only 41% met estimated resting energy requirements. Mean energy intake was 1220 kcal/day (SD 440), or 18.1 kcal/kg/day. Factors associated with inadequate energy intake in multivariate analysis were poor appetite, higher BMI, diagnosis of infection or cancer, delirium and need for assistance with feeding. CONCLUSIONS: Inadequate nutritional intake is common, and patient factors contributing to poor intake need to be considered in nutritional interventions.
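The primary outcome above compares observed intake with a weight-based estimate of resting energy expenditure. The abstract does not name the specific equation used, so the sketch below assumes a simple illustrative threshold of 20 kcal/kg/day; that figure is an assumption for demonstration only, not the study's method. The reported mean intake of 1220 kcal/day at 18.1 kcal/kg/day implies a mean weight of roughly 1220 / 18.1 ≈ 67 kg.

```python
def meets_resting_energy_estimate(intake_kcal: float, weight_kg: float,
                                  kcal_per_kg_threshold: float = 20.0) -> bool:
    """Compare observed daily intake with a simple weight-based resting energy
    estimate. The 20 kcal/kg/day threshold is an illustrative assumption; the
    abstract does not state which weight-based equation the study used."""
    return intake_kcal / weight_kg >= kcal_per_kg_threshold

# Mean intake vs. the assumed threshold, using the implied mean weight of ~67 kg.
print(meets_resting_energy_estimate(1220, 67))  # False under the assumed threshold
```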
Abstract:
Background: Decreased ability to perform Activities of Daily Living (ADLs) during hospitalisation has negative consequences for patients and health service delivery. Objective: To develop an index to stratify patients at lower and higher risk of a significant decline in ability to perform ADLs at discharge. Design: Prospective two-cohort study comprising a derivation cohort (n=389; mean age 82.3 years; SD 7.1) and a validation cohort (n=153; mean age 81.5 years; SD 6.1). Patients and setting: General medical patients aged ≥70 years admitted to three university-affiliated acute care hospitals in Brisbane, Australia. Measurement and main results: The short ADL Scale was used to identify a significant decline in ability to perform ADLs from premorbid to discharge. In the derivation cohort, 77 patients (19.8%) experienced a significant decline. Four significant factors were identified for patients independent at baseline: 'requiring moderate assistance to being totally dependent on others with bathing'; 'difficulty understanding others (frequently or all the time)'; 'requiring moderate assistance to being totally dependent on others with performing housework'; and a 'history of experiencing at least one fall in the 90 days prior to hospital admission'. In addition, being 'independent at baseline' was protective against decline at discharge. 'Difficulty understanding others (frequently or all the time)' and 'requiring moderate assistance to being totally dependent on others with performing housework' were also predictors for patients dependent in ADLs at baseline. Sensitivity, specificity, Positive Predictive Value (PPV), and Negative Predictive Value (NPV) of the DADLD dichotomised risk scores were 83.1% (95% CI 72.8; 90.7), 60.5% (95% CI 54.8; 65.9), 34.2% (95% CI 27.5; 41.5), and 93.5% (95% CI 89.2; 96.5), respectively. In the validation cohort, 47 patients (30.7%) experienced a significant decline. Sensitivity, specificity, PPV and NPV of the DADLD were 78.7% (95% CI 64.3; 89.3), 69.8% (95% CI 60.1; 78.3), 53.6% (95% CI 41.2; 65.7), and 88.1% (95% CI 79.2; 94.1), respectively. Conclusions: The DADLD Index is a useful tool for identifying patients at higher risk of decline in ability to perform ADLs at discharge.
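The accuracy statistics reported for the DADLD follow from the standard 2x2 confusion-matrix definitions. The sketch below computes them; the example counts are approximate reconstructions from the reported derivation-cohort proportions (77 of 389 patients declined, sensitivity 83.1%, specificity 60.5%), not figures taken directly from the paper.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Approximate derivation-cohort counts reconstructed from the reported
# proportions (389 patients, 77 with significant ADL decline):
# TP ~ 64, FN ~ 13, TN ~ 189, FP ~ 123.
print(diagnostic_metrics(tp=64, fp=123, fn=13, tn=189))
# -> roughly sensitivity 0.83, specificity 0.61, PPV 0.34, NPV 0.94
```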
Abstract:
Emergency departments are challenging research settings, where truly informed consent can be difficult to obtain. A deeper understanding of emergency medical patients' opinions about research is needed. We conducted a systematic review and meta-summary of quantitative and qualitative studies examining which values, attitudes, or beliefs of emergency medical research participants influence research participation. We included studies of adults that investigated opinions toward emergency medicine research participation. We excluded studies focused on the association between demographics or consent document features and participation, and those focused on non-emergency research. In August 2011, we searched the following databases: MEDLINE, EMBASE, Google Scholar, Scirus, PsycINFO, AgeLine and Global Health. Titles, abstracts and then full manuscripts were independently evaluated by two reviewers. Disagreements were resolved by consensus and adjudicated by a third author. Studies were evaluated for bias using standardised scores. We report themes associated with participation or refusal. Our initial search produced over 1800 articles. A total of 44 articles were extracted for full-manuscript analysis, and 14 were retained based on our eligibility criteria. Among factors favouring participation, altruism and personal health benefit had the highest frequency. Mistrust of researchers, feeling like a 'guinea pig' and risk were leading factors favouring refusal. Many studies noted limitations of informed consent processes in emergent conditions. We conclude that highlighting the benefits to the participant and society, mitigating risk and increasing public trust may increase research participation in emergency medical research. New methods for conducting informed consent in such studies are needed.