863 results for ILL PATIENTS
Abstract:
Objective: Diarrhoea in the enterally tube fed (ETF) intensive care unit (ICU) patient is a multifactorial problem. Diarrhoeal aetiologies in this patient cohort remain debatable; however, the consequences of diarrhoea are well established and include electrolyte imbalance, dehydration, bacterial translocation, perianal wound contamination and sleep deprivation. This study examined the incidence of diarrhoea and explored factors contributing to its development in ETF, critically ill, adult patients.

Method: After institutional ethical review and approval, a single-centre medical chart audit was undertaken to examine the incidence of diarrhoea in ETF, critically ill patients. Retrospective, non-probability sequential sampling was used for all emergency-admission adult ICU patients who met the inclusion/exclusion criteria.

Results: Fifty patients were audited. Faecal frequency, consistency and quantity were considered important criteria in defining ETF diarrhoea. The incidence of diarrhoea was 78%. Total patient diarrhoea days (r = 0.422; p = 0.02) and total diarrhoea frequency (r = 0.313; p = 0.027) increased the longer the patient was enterally tube fed. Increased severity of illness, peripheral oxygen saturation (SpO2), glucose control, albumin and white cell count were statistically significant factors for the development of diarrhoea.

Conclusion: Diarrhoea in ETF critically ill patients is multifactorial. The early identification of diarrhoea risk factors and the development of a diarrhoea risk management algorithm are recommended.
Abstract:
Objective: The aim of this literature review is to identify the role of probiotics in the management of enteral tube feeding (ETF) diarrhoea in critically ill patients.

Background: Diarrhoea is a common gastrointestinal problem in ETF patients, with a reported incidence ranging from 2% to 68% across all patient groups. Despite extensive investigation, the pathogenesis of ETF diarrhoea remains unclear, and evidence to support probiotics for managing ETF diarrhoea in critically ill patients remains sparse.

Method: Literature on ETF diarrhoea and probiotics in critically ill, adult patients published from 1980 to 2010 was reviewed. The Cochrane Library, PubMed, Science Direct, Medline and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) electronic databases were searched using specific inclusion/exclusion criteria. Key search terms were: enteral nutrition, diarrhoea, critical illness, probiotics, probiotic species and randomised controlled trial (RCT).

Results: Four RCT papers were identified: two reporting full studies, one reporting a pilot RCT and one conference abstract reporting a pilot RCT. A trend towards a reduction in diarrhoea incidence was observed in the probiotic groups. However, mortality associated with probiotic use in some severely and critically ill patients must caution the clinician against its use.

Conclusion: Evidence to support probiotic use in the management of ETF diarrhoea in critically ill patients remains unclear. This paper argues that probiotics should not be administered to critically ill patients until further research has examined the causal relationship between probiotics and mortality, irrespective of the patient's disease state or the projected prophylactic benefit of probiotic administration.
Abstract:
Diarrhoea is a common complication in critically ill patients. Relationships between diarrhoea, enteral nutrition and aerobic intestinal microflora have previously been examined only in isolation in this patient cohort. This research used a two-study, observational design to examine these associations. Higher diarrhoea incidence rates were observed when patients received enteral tube feeding, had abnormal serum blood results, received multiple medications and had aerobic microflora dysbiosis. Furthermore, significant changes in aerobic intestinal microflora were observed over time in patients who experienced diarrhoea. These results establish a platform for further work to improve the intestinal health of critically ill patients.
Abstract:
Purpose: This study tested the effectiveness of a pressure ulcer (PU) prevention bundle in reducing the incidence of PUs in critically ill patients in two Saudi intensive care units (ICUs).

Design: A two-arm cluster randomized controlled trial.

Methods: Participants in the intervention group received the PU prevention bundle, while the control group received standard skin care as per the local ICU policies. Data collected included demographic variables (age, diagnosis, comorbidities, admission trajectory, length of stay) and clinical variables (Braden Scale score, severity of organ function score, mechanical ventilation, PU presence, and staging). All patients were followed every two days from admission through to discharge, death, or up to a maximum of 28 days. Data were analyzed with descriptive and correlational statistics, Kaplan-Meier survival analysis, and Poisson regression.

Findings: A total of 140 participants were recruited: 70 control participants (728 days of observation) and 70 intervention participants (784 days of observation). PU cumulative incidence was significantly lower in the intervention group (7.14%) than in the control group (32.86%). Poisson regression revealed that the likelihood of PU development was 70% lower in the intervention group. The intervention group had significantly less Stage I (p = .002) and Stage II (p = .026) PU development.

Conclusions: Significant improvements were observed in PU-related outcomes with the implementation of the PU prevention bundle in the ICU; PU incidence, severity, and total number of PUs per patient were all reduced.

Clinical Relevance: Utilizing a bundle approach and standardized nursing language, through skin assessment and translation of the knowledge to practice, has the potential to impact positively on the quality of care and patient outcomes.
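The incidence figures above can be reproduced with simple arithmetic. A minimal Python sketch, assuming the reported percentages correspond to 5 of 70 intervention patients and 23 of 70 control patients developing a PU (counts inferred from the percentages; the abstract does not state them directly):

```python
# Cumulative incidence check for the PU prevention bundle trial.
# Assumed case counts (inferred): 5/70 intervention, 23/70 control.
intervention_cases, intervention_n = 5, 70
control_cases, control_n = 23, 70

ci_intervention = intervention_cases / intervention_n
ci_control = control_cases / control_n
print(f"Intervention cumulative incidence: {ci_intervention:.2%}")  # 7.14%
print(f"Control cumulative incidence: {ci_control:.2%}")            # 32.86%

# Crude incidence rates per patient-day, using the reported observation
# time (784 intervention days, 728 control days).
rate_ratio = (intervention_cases / 784) / (control_cases / 728)
print(f"Crude incidence rate ratio: {rate_ratio:.2f}")
```

Note that the crude rate ratio computed here (about 0.20) is not the adjusted Poisson regression estimate reported in the abstract (70% lower likelihood), which accounts for covariates.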
Abstract:
Purpose: To test an interventional patient skin integrity bundle, the InSPiRE protocol, on the impact of pressure injuries (PrIs) in critically ill patients in an Australian adult intensive care unit (ICU).

Methods: A before-and-after design was used in which the group of patients receiving the intervention (the InSPiRE protocol) was compared with a similar control group who received standard care. Data collected included demographic and clinical variables, skin assessment, PrI presence and stage, and a Sequential Organ Failure Assessment (SOFA) score.

Results: Overall, 207 patients were enrolled, 105 in the intervention group and 102 in the control group. Most patients were men, with a mean age of 55 years. The groups were similar on major demographic variables (age, SOFA scores, ICU length of stay). Pressure injury cumulative incidence was significantly lower in the intervention group (18%) than in the control group, for both skin injuries (30.4%) (χ2 = 4.271, df = 1, p = 0.039) and mucosal injuries (t = 3.27, p < 0.001). Significantly fewer PrIs developed over time in the intervention group (log-rank = 11.842, df = 1, p < 0.001), and intervention patients developed fewer multiple skin injuries (>3 PrIs/patient: 1/105) than the control group (>3 injuries/patient: 10/102) (p = 0.018).

Conclusion: The intervention group, receiving the InSPiRE protocol, had a lower PrI cumulative incidence and a reduced number and severity of PrIs developing over time. Systematic and ongoing assessment of the patient's skin and PrI risk, together with implementation of tailored prevention measures, is central to preventing PrIs.
Abstract:
Introduction: Recent reports have highlighted the prevalence of vitamin D deficiency and suggested an association with excess mortality in critically ill patients. Serum vitamin D concentrations in these studies were measured following resuscitation. It is unclear whether aggressive fluid resuscitation independently influences serum vitamin D.

Methods: Nineteen patients undergoing cardiopulmonary bypass (CPB) were studied. Serum 25(OH)D3, 1α,25(OH)2D3, parathyroid hormone, C-reactive protein (CRP), and ionised calcium were measured at five defined timepoints: T1, baseline; T2, 5 minutes after onset of CPB (time of maximal fluid effect); T3, on return to the intensive care unit; T4, 24 hours after surgery; and T5, 5 days after surgery. Linear mixed models were used to compare measures at T2-T5 with baseline measures.

Results: Acute fluid loading resulted in a 35% reduction in 25(OH)D3 (59 ± 16 to 38 ± 14 nmol/L, P < 0.0001), a 45% reduction in 1α,25(OH)2D3 (99 ± 40 to 54 ± 22 pmol/L, P < 0.0001), and a reduction in ionised calcium (P < 0.01), with elevation in parathyroid hormone (P < 0.0001). Serum 25(OH)D3 returned to baseline only at T5, while 1α,25(OH)2D3 demonstrated an overshoot above baseline at T5 (P < 0.0001). There was a delayed rise in CRP at T4 and T5; this was not associated with a reduction in vitamin D levels at these timepoints.

Conclusions: Haemodilution significantly lowers serum 25(OH)D3 and 1α,25(OH)2D3, an effect that may take more than 24 hours to resolve. Moreover, the delayed overshoot of 1α,25(OH)2D3 needs consideration. We urge caution in interpreting serum vitamin D in critically ill patients in the context of major resuscitation, and advocate repeating the measurement once the effects of the resuscitation have abated.
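As a quick consistency check, the percentage reductions reported above can be recomputed from the group means. A sketch; the abstract's 35%/45% figures are presumably rounded or model-adjusted values:

```python
# Recompute percent reductions from the reported group means.
def pct_reduction(baseline: float, value: float) -> float:
    """Percentage fall from baseline to value."""
    return 100.0 * (baseline - value) / baseline

# 25(OH)D3 fell from 59 to 38 nmol/L; 1α,25(OH)2D3 from 99 to 54 pmol/L.
fall_25ohd = pct_reduction(59, 38)
fall_125ohd = pct_reduction(99, 54)

print(f"25(OH)D3 reduction: {fall_25ohd:.1f}%")      # 35.6%, reported as 35%
print(f"1α,25(OH)2D3 reduction: {fall_125ohd:.1f}%")  # 45.5%, reported as 45%
```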
Abstract:
Assessment of the outcome of critical illness is complex. Severity scoring systems and organ dysfunction scores are the traditional tools for mortality and morbidity prediction in intensive care. Their ability to explain risk of death is impressive for large cohorts of patients, but insufficient for an individual patient. Although events before intensive care unit (ICU) admission are prognostically important, the prediction models utilize data collected at and just after ICU admission. In addition, several biomarkers have been evaluated to predict mortality, but none has proven entirely useful in clinical practice. Therefore, new prognostic markers of critical illness are vital when evaluating intensive care outcome. The aim of this dissertation was to investigate new measures and biological markers of critical illness and to evaluate their predictive value and association with mortality and disease severity. The impact of delay in the emergency department (ED) on intensive care outcome, measured as hospital mortality and health-related quality of life (HRQoL) at 6 months, was assessed in 1537 consecutive patients admitted to a medical ICU. Two new biological markers were investigated in two separate patient populations: 231 ICU patients and 255 patients with severe sepsis or septic shock. Cell-free plasma DNA is a surrogate marker of apoptosis; its association with disease severity and mortality rate was evaluated in ICU patients. Next, the predictive value of plasma DNA regarding mortality, and its association with the degree of organ dysfunction and disease severity, was evaluated in severe sepsis or septic shock. Heme oxygenase-1 (HO-1) is a potential regulator of apoptosis; HO-1 plasma concentrations, HO-1 gene polymorphisms and their association with outcome were evaluated in ICU patients. The length of ED stay was not associated with the outcome of intensive care. The hospital mortality rate was significantly lower in patients admitted to the medical ICU from the ED than in those admitted from other sources, and the HRQoL of the critically ill at 6 months was significantly lower than that of the age- and sex-matched general population. In the ICU patient population, the maximum plasma DNA concentration measured during the first 96 hours of intensive care correlated significantly with disease severity and degree of organ failure, and was independently associated with hospital mortality. In patients with severe sepsis or septic shock, cell-free plasma DNA concentrations were significantly higher in ICU and hospital nonsurvivors than in survivors, and showed moderate discriminative power regarding ICU mortality. Plasma DNA was an independent predictor of ICU mortality, but not of hospital mortality. The degree of organ dysfunction correlated independently with plasma DNA concentration in severe sepsis and with plasma HO-1 concentration in ICU patients. The HO-1 -413T/GT(L)/+99C haplotype was associated with HO-1 plasma levels and the frequency of multiple organ dysfunction. Plasma DNA and HO-1 concentrations may support the assessment of outcome or organ failure development in critically ill patients, although their value is limited and requires further evaluation.
Abstract:
Acute renal failure (ARF) is a clinical syndrome characterized by a rapidly decreasing glomerular filtration rate, which results in disturbances of electrolyte and acid-base homeostasis, derangement of extracellular fluid volume, and retention of nitrogenous waste products, and is often associated with decreased urine output. ARF affects about 5-25% of patients admitted to intensive care units (ICUs) and is linked to high mortality and morbidity rates. In this thesis, the outcome of critically ill patients with ARF and factors related to outcome were evaluated. A total of 1662 patients from two ICUs and one acute dialysis unit in Helsinki University Hospital were included. In Study I, the prevalence of ARF was calculated and classified according to two ARF-specific scoring methods: the RIFLE classification and the classification created by Bellomo et al. (2001). Study II evaluated monocyte human histocompatibility leukocyte antigen-DR (HLA-DR) expression and plasma levels of the cytokines interleukin (IL) 6, IL-8 and IL-10 in predicting survival of critically ill ARF patients. Study III investigated serum cystatin C as a marker of renal function in ARF and its power in predicting survival of critically ill ARF patients. Study IV evaluated the effect of intermittent hemodiafiltration (HDF) on myoglobin elimination from plasma in severe rhabdomyolysis. Study V assessed long-term survival and health-related quality of life (HRQoL) in ARF patients. Neither of the ARF-specific scoring methods showed good discriminative power regarding hospital mortality, although the maximum RIFLE score over the first three days in the ICU was an independent predictor of hospital mortality. As a marker of renal dysfunction, serum cystatin C failed to show benefit over plasma creatinine in detecting ARF or predicting patient survival. Neither cystatin C, nor plasma concentrations of IL-6, IL-8, and IL-10, nor monocyte HLA-DR expression was clinically useful in predicting mortality in ARF patients. HDF may be used to clear myoglobin from plasma in rhabdomyolysis, especially if alkalinisation of the urine does not succeed. The long-term survival of patients with ARF was found to be poor, and the HRQoL of those who survive is lower than that of the age- and gender-matched general population.
Abstract:
Patients with life-threatening conditions sometimes appear to make risky treatment decisions as their condition declines, contradicting the risk-averse behavior predicted by expected utility theory. Prospect theory accommodates such decisions by describing how individuals evaluate outcomes relative to a reference point and how they exhibit risk-seeking behavior over losses relative to that point. The authors show that a patient's reference point for his or her health is a key factor in determining which treatment option the patient selects, and they examine under what circumstances the more risky option is selected. The authors argue that patients' reference points may take time to adjust following a change in diagnosis, with implications for predicting under what circumstances a patient may select experimental or conventional therapies or select no treatment.
Abstract:
OBJECTIVE: It is not known how often physicians use metaphors and analogies, or whether they improve patients' perceptions of their physicians' ability to communicate effectively. Therefore, the objective of this study was to determine whether the use of metaphors and analogies in difficult conversations is associated with better patient ratings of their physicians' communication skills. DESIGN: Cross-sectional observational study of audio-recorded conversations between patients and physicians. SETTING: Three outpatient oncology practices. PATIENTS: Ninety-four patients with advanced cancer and 52 physicians. INTERVENTION: None. MAIN OUTCOME MEASURES: Conversations were reviewed and coded for the presence of metaphors and analogies. Patients also completed a 6-item rating of their physician's ability to communicate. RESULTS: In a sample of 101 conversations, coders identified 193 metaphors and 75 analogies. Metaphors appeared in approximately twice as many conversations as analogies (65/101, 64% versus 31/101, 31%; sign test p < 0.001). Conversations also contained more metaphors than analogies (mean 1.6, range 0-11 versus mean 0.6, range 0-5; signed-rank test p < 0.001). Physicians who used more metaphors elicited better patient ratings of communication (Spearman rho = 0.27; p = 0.006), as did physicians who used more analogies (Spearman rho = 0.34; p < 0.001). CONCLUSIONS: The use of metaphors and analogies may enhance physicians' ability to communicate effectively.
Abstract:
Critically ill patients are at heightened risk of nosocomial infections. The anaphylatoxin C5a impairs phagocytosis by neutrophils; however, the mechanisms by which this occurs and the relevance for acquisition of nosocomial infection remain undetermined. We aimed to characterize the mechanisms by which C5a inhibits phagocytosis in vitro and in critically ill patients, and to define the relationship between C5a-mediated dysfunction and acquisition of nosocomial infection. In healthy human neutrophils, C5a significantly inhibited RhoA activation, preventing actin polymerization and phagocytosis. RhoA inhibition was mediated by PI3Kδ. The effects on RhoA, actin, and phagocytosis were fully reversed by GM-CSF. Parallel observations were made in neutrophils from critically ill patients: impaired phagocytosis was associated with inhibition of RhoA and actin polymerization, and was reversed by GM-CSF. In a cohort of 60 critically ill patients, C5a-mediated neutrophil dysfunction (as determined by reduced CD88 expression) was a strong predictor of subsequent acquisition of nosocomial infection (relative risk, 5.8; 95% confidence interval, 1.5-22; P = .0007), and remained independent of time effects as assessed by survival analysis (hazard ratio, 5.0; 95% confidence interval, 1.3-8.3; P = .01). In conclusion, this study provides new insight into the mechanisms underlying immunocompromise in critical illness and suggests novel avenues for therapy and prevention of nosocomial infection.