917 results for Critically-ill obese patients
Abstract:
As a response to metabolic stress, obese critically-ill patients have the same risk of nutritional deficiency as the non-obese and can develop protein-energy malnutrition with accelerated loss of muscle mass. The primary aim of nutritional support in these patients should be to minimize the loss of lean mass, which requires an accurate evaluation of energy expenditure. However, routinely used formulae can overestimate calorie requirements if the patient's actual weight is used; consequently, adjusted or ideal weight is recommended with these formulae, although indirect calorimetry is the method of choice. Controversy surrounds the question of whether a strict nutritional support criterion, adjusted to the patient's requirements, should be applied or whether a certain degree of hyponutrition should be allowed. Current evidence suggests that hypocaloric nutrition can improve outcomes, partly due to a lower rate of infectious complications and better control of hyperglycemia. Therefore, hypocaloric, high-protein nutrition, whether enteral or parenteral, should be standard practice in the nutritional support of critically-ill obese patients when not contraindicated. Widely accepted recommendations are no more than 60-70% of requirements, or 11-14 kcal/kg current body weight/day or 22-25 kcal/kg ideal weight/day, with 2-2.5 g/kg ideal weight/day of protein. In a broad sense, hypocaloric, high-protein regimens can be considered specific to obese critically-ill patients, although the complications related to comorbidities in these patients may require other therapeutic possibilities to be considered, with specific nutrients for hyperglycemia, acute respiratory distress syndrome (ARDS) and sepsis. However, there are no prospective randomized trials of this type of nutrition in this specific population subgroup, and the available data are drawn from the general population of critically-ill patients. Consequently, caution should be exercised when interpreting these data.
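The dosing targets cited above lend themselves to a quick back-of-the-envelope calculation. The sketch below simply multiplies out the stated ranges; the example weights are hypothetical, and this is an illustration of the arithmetic, not clinical guidance.

```python
def hypocaloric_targets(actual_wt_kg, ideal_wt_kg):
    """Multiply out the hypocaloric, high-protein ranges cited in the
    abstract: 11-14 kcal/kg actual weight/day or 22-25 kcal/kg ideal
    weight/day, with 2-2.5 g protein/kg ideal weight/day.
    Illustrative arithmetic only, not clinical guidance."""
    return {
        "kcal_by_actual_wt": (11 * actual_wt_kg, 14 * actual_wt_kg),
        "kcal_by_ideal_wt": (22 * ideal_wt_kg, 25 * ideal_wt_kg),
        "protein_g_by_ideal_wt": (2.0 * ideal_wt_kg, 2.5 * ideal_wt_kg),
    }

# Hypothetical patient: 120 kg actual weight, 70 kg ideal weight
targets = hypocaloric_targets(120, 70)
print(targets["kcal_by_actual_wt"])  # (1320, 1680) kcal/day
```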
Abstract:
Background: Acute kidney injury (AKI) is a frequent complication in hospitalized patients, especially those in intensive care units (ICU). The RIFLE classification might be a valid prognostic factor for critically ill cancer patients. The present study aims to evaluate the discriminatory capacity of RIFLE versus other general prognostic scores in predicting hospital mortality in critically ill cancer patients. Methods: This is a single-center study conducted in a cancer-specialized ICU in Brazil. All 288 patients hospitalized from May 2006 to June 2008 were included. The RIFLE classification and the APACHE II, SOFA, and SAPS II scores were calculated, and areas under the receiver operating characteristic curve (AROC) and multiple logistic regression were used with hospital mortality as the outcome. Results: AKI, defined by RIFLE criteria, was observed in 156 (54.2%) patients. The distribution of patients with any degree of AKI was: risk, n = 96 (33.3%); injury, n = 30 (10.4%); and failure, n = 30 (10.4%). Mortality was 13.6% for non-AKI patients, 49% for RIFLE `R` patients, 62.3% for RIFLE `I` patients, and 86.8% for RIFLE `F` patients (p = 0.0006). Logistic regression analysis showed that the RIFLE criteria, APACHE II, SOFA, and SAPS II were independent factors for mortality in this population. The discrimination of RIFLE was good (AROC 0.801, 95% CI 0.748-0.854) but inferior to that of APACHE II (AROC 0.940, 95% CI 0.915-0.966), SOFA (AROC 0.910, 95% CI 0.876-0.943), and SAPS II (AROC 0.869, 95% CI 0.827-0.912). Conclusion: AKI is a frequent complication in ICU patients with cancer. RIFLE was inferior to commonly used prognostic scores for predicting mortality in this cohort of patients. Copyright (C) 2011 S. Karger AG, Basel
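The AROC values compared in this abstract have a standard probabilistic reading: the chance that a randomly chosen patient who died received a higher score than a randomly chosen survivor, ties counting half. A minimal sketch of that computation (illustrative only, not the study's software; the score lists are hypothetical):

```python
def aroc(scores_died, scores_survived):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen patient who died has a
    higher score than a randomly chosen survivor, ties counting half."""
    pairs = len(scores_died) * len(scores_survived)
    wins = sum((d > s) + 0.5 * (d == s)
               for d in scores_died for s in scores_survived)
    return wins / pairs

# Hypothetical APACHE-II-like scores -- NOT the study's data
print(aroc([30, 25, 28], [10, 12, 25]))  # ≈ 0.94
```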
Abstract:
BACKGROUND: Infections are a leading cause of death in patients with advanced cirrhosis, but there are relatively few data on the epidemiology of infection in intensive care unit (ICU) patients with cirrhosis. AIMS: We used data from the Extended Prevalence of Infection in Intensive Care (EPIC) II 1-day point-prevalence study to better define the characteristics of infection in these patients. METHODS: We compared characteristics, including occurrence and types of infections in non-cirrhotic and cirrhotic patients who had not undergone liver transplantation. RESULTS: The EPIC II database includes 13,796 adult patients from 1265 ICUs: 410 of the patients had cirrhosis. The prevalence of infection was higher in cirrhotic than in non-cirrhotic patients (59 vs. 51%, P < 0.01). The lungs were the most common site of infection in all patients, but abdominal infections were more common in cirrhotic than in non-cirrhotic patients (30 vs. 19%, P < 0.01). Infected cirrhotic patients more often had Gram-positive (56 vs. 47%, P < 0.05) isolates than did infected non-cirrhotic patients. Methicillin-resistant Staphylococcus aureus (MRSA) was more frequent in cirrhotic patients. The hospital mortality rate of cirrhotic patients was 42%, compared to 24% in the non-cirrhotic population (P < 0.001). Severe sepsis and septic shock were associated with higher in-hospital mortality rates in cirrhotic than in non-cirrhotic patients (41% and 71% vs. 30% and 49%, respectively, P < 0.05). CONCLUSIONS: Infection is more common in cirrhotic than in non-cirrhotic ICU patients and more commonly caused by Gram-positive organisms, including MRSA. Infection in patients with cirrhosis was associated with higher mortality rates than in non-cirrhotic patients.
Abstract:
INTRODUCTION: For decades, clinicians dealing with immunocompromised and critically ill patients have perceived a link between Candida colonization and subsequent infection. However, the pathophysiological progression from colonization to infection was clearly established only through the formal description of the colonization index (CI) in critically ill patients. Unfortunately, the literature reflects intense confusion about the pathophysiology of invasive candidiasis and specific associated risk factors. METHODS: We review the contribution of the CI in the field of Candida infection and its development in the 20 years following its original description in 1994. The development of the CI enabled an improved understanding of the pathogenesis of invasive candidiasis and the use of targeted empirical antifungal therapy in subgroups of patients at increased risk for infection. RESULTS: The recognition of specific characteristics among underlying conditions, such as neutropenia, solid organ transplantation, and surgical and nonsurgical critical illness, has enabled the description of distinct epidemiological patterns in the development of invasive candidiasis. CONCLUSIONS: Despite its limited bedside practicality and before confirmation of potentially more accurate predictors, such as specific biomarkers, the CI remains an important way to characterize the dynamics of colonization, which increases early in patients who develop invasive candidiasis.
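The colonization index itself is a simple ratio: in its 1994 description by Pittet et al., the number of distinct non-blood body sites growing Candida divided by the total number of sites cultured, with CI ≥ 0.5 the threshold classically associated with increased risk. A minimal sketch of that definition (illustrative only, not a clinical decision tool):

```python
def colonization_index(colonized_sites, sampled_sites):
    """Colonization index: number of distinct non-blood body sites
    growing Candida divided by the number of sites cultured.
    A CI >= 0.5 is the threshold classically associated with an
    increased risk of invasive candidiasis. Illustrative only."""
    if sampled_sites <= 0:
        raise ValueError("at least one site must be sampled")
    return colonized_sites / sampled_sites

# Hypothetical example: Candida recovered from 3 of 5 sampled sites
print(colonization_index(3, 5))  # 0.6 -- above the 0.5 threshold
```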
Abstract:
PURPOSE: Adequate empirical antibiotic dose selection for critically ill burn patients is difficult due to extreme variability in drug pharmacokinetics. Therapeutic drug monitoring (TDM) may aid antibiotic prescription and implementation of initial empirical antimicrobial dosage recommendations. This study evaluated how the gradual introduction of TDM altered empirical dosages of meropenem and imipenem/cilastatin in our burn ICU. METHODS: Imipenem/cilastatin and meropenem use and daily empirical dosage at a five-bed burn ICU were analyzed retrospectively. Data for all burn admissions between 2001 and 2011 were extracted from the hospital's computerized information system. For each patient receiving a carbapenem, episodes of infection were reviewed and scored according to predefined criteria. Carbapenem trough serum levels were characterized. Prior to May 2007, TDM was available only by special request. Real-time carbapenem TDM was introduced in June 2007; it was initially available weekly and has been available 4 days a week since 2010. RESULTS: Of 365 patients, 229 (63%) received antibiotics (109 received carbapenems). Of 23 TDM determinations for imipenem/cilastatin, none exceeded the predefined upper limit and 11 (47.8%) were insufficient; the number of TDM requests was correlated with daily dose (r=0.7). Inappropriate meropenem trough levels (30.4%) were distributed in similar numbers below and above the upper limit. Real-time TDM introduction increased the empirical dose of imipenem/cilastatin, but not of meropenem. CONCLUSIONS: Real-time carbapenem TDM availability significantly altered the empirical daily dosage of imipenem/cilastatin at our burn ICU. Further studies are needed to evaluate the individual impact of TDM-based antibiotic adjustment on infection outcomes in these patients.
Abstract:
Hypomagnesemia is the most common electrolyte disturbance seen upon admission to the intensive care unit (ICU). Reliable predictors of its occurrence have not been described. The objective of this prospective study was to determine factors predictive of hypomagnesemia upon admission to the ICU. In a single tertiary cancer center, 226 patients with different diagnoses upon admission were studied. Hypomagnesemia was defined by serum levels <1.5 mg/dl. Demographic data, type of cancer, cause of admission, previous history of arrhythmia, cardiovascular disease, renal failure, drug administration (particularly diuretics, antiarrhythmics, chemotherapy and platinum compounds), previous nutrition intake and presence of hypovolemia were recorded for each patient. Blood was collected for determination of serum magnesium, potassium, sodium, calcium, phosphorus, blood urea nitrogen and creatinine levels. Upon admission, 103 (45.6%) patients had hypomagnesemia and 123 (54.4%) had normomagnesemia. A normal dietary habit prior to ICU admission was associated with normal Mg levels (P = 0.007) and higher average serum Mg levels (P = 0.002). Postoperative patients (N = 182) had lower serum Mg levels (0.60 ± 0.14 mmol/l compared with 0.66 ± 0.17 mmol/l, P = 0.006). A stepwise multiple regression disclosed that only normal dietary habits (OR = 0.45; CI = 0.26-0.79) and being a postoperative patient (OR = 2.42; CI = 1.17-4.98) were significantly associated with serum Mg levels (overall model probability = 0.001). These findings should be used to identify patients at risk for this disturbance, even in other critically ill populations.
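The abstract defines hypomagnesemia in mg/dl but reports mean levels in mmol/l; the two units are related through magnesium's molar mass (about 24.305 g/mol). A minimal conversion sketch:

```python
MG_MOLAR_MASS = 24.305  # g/mol, magnesium

def mg_per_dl_to_mmol_per_l(mg_dl):
    """Convert serum magnesium from mg/dl to mmol/l:
    multiply by 10 (dl -> l), then divide by the molar mass
    (mg -> mmol)."""
    return mg_dl * 10 / MG_MOLAR_MASS

# The 1.5 mg/dl cutoff corresponds to roughly 0.62 mmol/l
print(round(mg_per_dl_to_mmol_per_l(1.5), 2))  # 0.62
```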
Abstract:
AIM: to identify and analyze, in the literature, the evidence from randomized controlled trials on care related to the suctioning of endotracheal secretions in intubated, critically ill adult patients undergoing mechanical ventilation. METHOD: the search was conducted in the PubMed, EMBASE, CENTRAL, CINAHL and LILACS databases. From the 631 citations found, 17 studies were selected. RESULTS: evidence was identified for six categories of intervention related to endotracheal suctioning, which were analyzed according to outcomes related to hemodynamic and blood gas alterations, microbial colonization, nosocomial infection, and others. CONCLUSIONS: although the evidence obtained is relevant to the practice of endotracheal suctioning, the risks of bias found in the selected studies compromise the evidence's reliability.
Abstract:
Critically ill and injured patients require pain relief and sedation to reduce the body's stress response and to facilitate painful diagnostic and therapeutic procedures. Presently, the level of sedation and analgesia is guided by clinical scores, which can be unreliable. There is, therefore, a need for an objective measure of sedation and analgesia. The Bispectral Index (BIS) and Patient State Index (PSI) were recently introduced into clinical practice as objective measures of the depth of analgesia and sedation. Aim. To compare the different measures of sedation and analgesia (BIS and PSI) to the standard and commonly used modified Ramsay Score (MRS) and determine whether the monitors can be used interchangeably. Methods. MRS, BIS and PSI values were obtained in 50 postoperative cardiac surgery patients requiring analgesia and sedation from June to December 2004. The MRS, BIS and PSI values were assessed hourly for up to 6 h by a single observer. The relationship between BIS and PSI values was explored using scatter plots, and correlation between MRS, BIS and PSI was determined using Spearman's correlation coefficient. Intra-class correlation (ICC) was used to determine the inter-rater reliability of MRS, BIS and PSI. The kappa statistic was used to further evaluate agreement at light, moderate and deep levels of sedation. Results. There was a positive correlation between BIS and PSI values (Rho = 0.731, p<0.001). The intra-class correlation between BIS and PSI was 0.58, between MRS and BIS 0.43, and between MRS and PSI 0.27. The kappa statistic for agreement between MRS and BIS was 0.35 (95% CI: 0.27–0.43) and for MRS and PSI was 0.21 (95% CI: 0.15–0.28); for BIS and PSI it was 0.45 (95% CI: 0.37–0.52). Receiver operating characteristic (ROC) curves constructed to detect undersedation indicated an area under the curve (AUC) of 0.91 (95% CI = 0.87 to 0.94) for the BIS and 0.84 (95% CI = 0.79 to 0.88) for the PSI. For detection of oversedation, the AUC was 0.89 (95% CI = 0.84 to 0.92) for the BIS and 0.80 (95% CI = 0.75 to 0.85) for the PSI. Conclusions. There is a statistically significant positive correlation between the BIS and PSI, but poor correlation and poor test agreement between the MRS and BIS as well as between the MRS and PSI. Both the BIS and PSI demonstrated a high level of prediction for undersedation and oversedation; however, the BIS and PSI cannot be considered interchangeable monitors of sedation.
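The agreement statistics reported above follow the standard definition of Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal sketch (the rating lists are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("rating lists must have equal length")
    n = len(ratings_a)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_chance = sum(count_a[k] * count_b[k]
                   for k in set(ratings_a) | set(ratings_b)) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical sedation-level ratings from two monitors
a = ["light", "light", "deep", "deep"]
b = ["light", "deep", "deep", "deep"]
print(cohens_kappa(a, b))  # 0.5
```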
Abstract:
Objective. The objective of this study is to determine the prevalence of MRSA colonization in adult patients admitted to intensive care units at an urban tertiary care hospital in Houston, Texas, and to evaluate the risk factors associated with colonization during a three-month active-screening pilot project. Design. This study used secondary data from a small cross-sectional pilot project. Methods. All patients admitted to the seven specialty ICUs were screened for MRSA by nasal culture. Results were obtained using the BD GeneOhm™ IDI-MRSA in vitro diagnostic test for rapid MRSA detection. Statistical analysis was performed using STATA 10, Epi Info, and JavaStat. Results. 1283/1531 (83.4%) adult ICU admissions were screened for nasal MRSA colonization. Of those screened, demographic and risk factor data were available for 1260/1283 (98.2%). Unresolved results were obtained for 73 patients. Therefore, a total of 1187/1531 (77.5%) of all ICU admissions during the three-month study period are described in this analysis. Risk factors associated with colonization included the following: hospitalization within the last six months (odds ratio 2.48 [95% CI, 1.70-3.63], p<0.001), hospitalization within the last 12 months (odds ratio 2.27 [95% CI, 1.57-3.80], p<0.001), and diabetes mellitus (odds ratio 1.63 [95% CI, 1.14-2.32], p=0.007). Conclusion. Based on the literature, the prevalence of MRSA in this population is typical of other prevalence studies conducted in the United States and coincides with the continuing upward trend of MRSA colonization. Significant risk factors were similar to those found in previous studies. Overall, the active surveillance screening pilot project has provided valuable information on a population not widely addressed. These findings can aid in future interventions for the education, control, prevention, and treatment of MRSA.
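The odds ratios above come from the study's own analysis; the unadjusted form of the statistic is just the cross-product of a 2×2 exposure table. A minimal sketch with hypothetical counts:

```python
def odds_ratio(exposed_colonized, exposed_clear,
               unexposed_colonized, unexposed_clear):
    """Unadjusted odds ratio from a 2x2 table:
    (a/b) / (c/d) = (a*d) / (b*c)."""
    return (exposed_colonized * unexposed_clear) / \
           (exposed_clear * unexposed_colonized)

# Hypothetical counts -- NOT the study's data
print(odds_ratio(40, 100, 60, 400))  # ≈ 2.67
```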
Abstract:
Objective: To determine whether antibiotic prophylaxis reduces respiratory tract infections and overall mortality in unselected critically ill adult patients.