935 results for Non-critically ill patients
Abstract:
Background: The aim of this study was to compare percutaneous drainage (PD) of the gallbladder with emergency cholecystectomy (EC) in a well-defined patient group with sepsis related to acute calculous/acalculous cholecystitis (ACC/AAC). Methods: Between 2001 and 2007, all consecutive patients of our ICU treated by either PD or EC were retrospectively analyzed. Cases were collected from a prospective database. Percutaneous drainage was performed by a transhepatic route and EC by an open or laparoscopic approach. Patients' general condition and organ dysfunction were assessed by two validated scoring systems (SAPS II and SOFA, respectively). Morbidity, mortality, and long-term outcome were systematically reviewed and analyzed in both groups. Results: Forty-two patients [median age = 65.5 years (range = 32-94)] were included; 45% underwent EC (ten laparoscopic, nine open) and 55% PD (n = 23). Both patient groups had similar preoperative characteristics. Percutaneous drainage and EC were successful in 91% and 100% of patients, respectively. Organ dysfunction had improved similarly by the third postoperative/postdrainage day. Despite undergoing PD, two patients required EC due to gangrenous cholecystitis. The conversion rate after laparoscopy was 20%. Overall morbidity was 8.7% after PD and 47% after EC (P = 0.011). Major morbidity was 0% after PD and 21% after EC (P = 0.034). The mortality rate was not different (13% after PD and 16% after EC, P = 1.0), and the deaths were all related to the patients' preexisting disease. Hospital and ICU stays were not different. Recurrent symptoms (17%) occurred only after ACC in the PD group. Conclusions: In high-risk patients, PD and EC are both efficient in resolving acute cholecystitis sepsis. However, EC is associated with higher procedure-related morbidity, and the laparoscopic approach is not always possible. 
Percutaneous drainage represents a valuable intervention, but secondary cholecystectomy is mandatory in cases of acute calculous cholecystitis.
Abstract:
OBJECTIVES: To determine the prevalence, predictors, and clinical significance of electrographic seizures (ESz) and other continuous electroencephalographic monitoring findings in critically ill patients with central nervous system infections. DESIGN: Retrospective cohort study. SETTING: Eighteen-bed neurocritical care unit. PATIENTS: We identified 42 consecutive patients with primary central nervous system infection (viral, 27 patients [64%]; bacterial, 8 patients [18%]; and fungal or parasitic, 7 patients [17%]) who underwent continuous electroencephalographic monitoring between January 1, 1996, and February 28, 2007. MAIN OUTCOME MEASURES: Presence of ESz or periodic epileptiform discharges (PEDs). RESULTS: Electrographic seizures were recorded in 14 patients (33%), and PEDs were recorded in 17 patients (40%). Twenty patients (48%) had either PEDs or ESz. Of the 14 patients with ESz, only 5 (36%) had a clinical correlate. Periodic epileptiform discharges (odds ratio=13.4; P=.001) and viral cause (odds ratio=13.0; P=.02) were independently associated with ESz. Both ESz (odds ratio=5.9; P=.02) and PEDs (odds ratio=6.1; P=.01) were independently associated with poor outcome at discharge (severe disability, vegetative state, or death). CONCLUSIONS: In patients with central nervous system infections undergoing continuous electroencephalographic monitoring, ESz and/or PEDs were frequent, occurring in 48% of our cohort. More than half of the ESz had no clinical correlate. Both ESz and PEDs were independently associated with poor outcome. Additional studies are needed to determine whether prevention or treatment of these electrographic findings improves outcome.
Abstract:
BACKGROUND: Critically ill patients have considerable oxidative stress. Glutamine and antioxidant supplementation may offer therapeutic benefit, although current data are conflicting. METHODS: In this blinded 2-by-2 factorial trial, we randomly assigned 1223 critically ill adults in 40 intensive care units (ICUs) in Canada, the United States, and Europe who had multiorgan failure and were receiving mechanical ventilation to receive supplements of glutamine, antioxidants, both, or placebo. Supplements were started within 24 hours after admission to the ICU and were provided both intravenously and enterally. The primary outcome was 28-day mortality. Because of the interim-analysis plan, a P value of less than 0.044 at the final analysis was considered to indicate statistical significance. RESULTS: There was a trend toward increased mortality at 28 days among patients who received glutamine as compared with those who did not receive glutamine (32.4% vs. 27.2%; adjusted odds ratio, 1.28; 95% confidence interval [CI], 1.00 to 1.64; P=0.05). In-hospital mortality and mortality at 6 months were significantly higher among those who received glutamine than among those who did not. Glutamine had no effect on rates of organ failure or infectious complications. Antioxidants had no effect on 28-day mortality (30.8%, vs. 28.8% with no antioxidants; adjusted odds ratio, 1.09; 95% CI, 0.86 to 1.40; P=0.48) or any other secondary end point. There were no differences among the groups with respect to serious adverse events (P=0.83). CONCLUSIONS: Early provision of glutamine or antioxidants did not improve clinical outcomes, and glutamine was associated with an increase in mortality among critically ill patients with multiorgan failure. (Funded by the Canadian Institutes of Health Research; ClinicalTrials.gov number, NCT00133978.).
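The reported 28-day mortality rates (32.4% with glutamine vs. 27.2% without) are consistent with the reported odds ratio of 1.28. A short sketch makes the arithmetic explicit; the `odds_ratio` helper below is illustrative only and computes an unadjusted ratio, whereas the trial reported an adjusted estimate:

```python
def odds_ratio(p_treated: float, p_control: float) -> float:
    """Unadjusted odds ratio between two event proportions:
    (p1 / (1 - p1)) divided by (p2 / (1 - p2))."""
    odds_treated = p_treated / (1.0 - p_treated)
    odds_control = p_control / (1.0 - p_control)
    return odds_treated / odds_control

# 28-day mortality: 32.4% with glutamine vs. 27.2% without
print(round(odds_ratio(0.324, 0.272), 2))  # 1.28, matching the reported adjusted OR
```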
Abstract:
BACKGROUND: Conversion of glucose into lipid (de novo lipogenesis; DNL) is a possible fate of carbohydrate administered during nutritional support. It cannot be detected by conventional methods such as indirect calorimetry if it does not exceed lipid oxidation. OBJECTIVE: The objective was to evaluate the effects of carbohydrate administered as part of continuous enteral nutrition in critically ill patients. DESIGN: This was a prospective, open study including 25 patients nonconsecutively admitted to a medicosurgical intensive care unit. Glucose metabolism and hepatic DNL were measured in the fasting state or after 3 d of continuous isoenergetic enteral feeding providing 28%, 53%, or 75% carbohydrate. RESULTS: DNL increased with increasing carbohydrate intake (mean +/- SEM: 7.5 +/- 1.2% with 28% carbohydrate, 9.2 +/- 1.5% with 53% carbohydrate, and 19.4 +/- 3.8% with 75% carbohydrate) and was nearly zero in a group of patients who had fasted for an average of 28 h (1.0 +/- 0.2%). In multiple regression analysis, DNL was correlated with carbohydrate intake, but not with body weight or plasma insulin concentrations. Endogenous glucose production, assessed with a dual-isotope technique, was not significantly different between the 3 groups of patients (13.7-15.3 micromol * kg(-1) * min(-1)), indicating impaired suppression by carbohydrate feeding. Gluconeogenesis was measured with [(13)C]bicarbonate, and increased as the carbohydrate intake increased (from 2.1 +/- 0.5 micromol * kg(-1) * min(-1) with 28% carbohydrate intake to 3.7 +/- 0.3 micromol * kg(-1) * min(-1) with 75% carbohydrate intake, P < 0.05). CONCLUSION: Carbohydrate feeding fails to suppress endogenous glucose production and gluconeogenesis, but stimulates DNL in critically ill patients.
Abstract:
Antibiotics are widely used in critical care, and adequate empirical treatment has a significant impact on the outcome of many patients. Most nosocomial infections may be caused by multidrug-resistant pathogens and require empirical broad-spectrum coverage before identification of the etiologic agent. This is associated with overuse of antibiotics, which contributes to the further increase in multidrug resistance. In this context, new strategies targeted at antibiotic control, such as guidelines and de-escalation, are needed to control this evolution.
Abstract:
BACKGROUND: Enteral nutrition (EN) is recommended for patients in the intensive-care unit (ICU), but it does not consistently achieve nutritional goals. We assessed whether delivery of 100% of the energy target from days 4 to 8 in the ICU with EN plus supplemental parenteral nutrition (SPN) could optimise clinical outcome. METHODS: This randomised controlled trial was undertaken in two centres in Switzerland. We enrolled patients on day 3 of admission to the ICU who had received less than 60% of their energy target from EN, were expected to stay for longer than 5 days, and to survive for longer than 7 days. We calculated energy targets with indirect calorimetry on day 3, or if not possible, set targets as 25 and 30 kcal per kg of ideal bodyweight a day for women and men, respectively. Patients were randomly assigned (1:1) by a computer-generated randomisation sequence to receive EN or SPN. The primary outcome was occurrence of nosocomial infection after cessation of intervention (day 8), measured until end of follow-up (day 28), analysed by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT00802503. FINDINGS: We randomly assigned 153 patients to SPN and 152 to EN. 30 patients discontinued before the study end. Mean energy delivery between days 4 and 8 was 28 kcal/kg per day (SD 5) for the SPN group (103% [SD 18%] of energy target), compared with 20 kcal/kg per day (7) for the EN group (77% [27%]). Between days 9 and 28, 41 (27%) of 153 patients in the SPN group had a nosocomial infection compared with 58 (38%) of 152 patients in the EN group (hazard ratio 0.65, 95% CI 0.43-0.97; p=0.0338), and the SPN group had a lower mean number of nosocomial infections per patient (-0.42 [-0.79 to -0.05]; p=0.0248). 
INTERPRETATION: Individually optimised energy supplementation with SPN starting 4 days after ICU admission could reduce nosocomial infections and should be considered as a strategy to improve clinical outcome in patients in the ICU for whom EN is insufficient. FUNDING: Foundation Nutrition 2000Plus, ICU Quality Funds, Baxter, and Fresenius Kabi.
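The trial's fallback rule for energy targets (used when indirect calorimetry was not possible) is a simple per-kilogram calculation: 25 kcal/kg of ideal bodyweight per day for women, 30 for men. The function below is an illustrative sketch of that rule only, not part of the trial protocol, and not clinical guidance:

```python
def energy_target_kcal_per_day(ideal_bodyweight_kg: float, sex: str) -> float:
    """Fallback daily energy target per the trial's rule:
    25 kcal/kg ideal bodyweight for women, 30 kcal/kg for men.
    Illustrative sketch only; not clinical guidance."""
    per_kg = 25.0 if sex == "female" else 30.0
    return per_kg * ideal_bodyweight_kg

# e.g. a man with 70 kg ideal bodyweight:
print(energy_target_kcal_per_day(70.0, "male"))  # 2100.0 kcal/day
```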
Abstract:
Background: Development of three classification trees (CT) based on the CART (Classification and Regression Trees), CHAID (Chi-Square Automatic Interaction Detection), and C4.5 methodologies for the calculation of probability of hospital mortality; comparison of the results with the APACHE II, SAPS II, and MPM II-24 scores, and with a model based on multiple logistic regression (LR). Methods: Retrospective study of 2864 patients. Random partition (70:30) into a development set (DS), n = 1808, and a validation set (VS), n = 808. Discrimination was compared with the ROC curve (AUC, 95% CI) and the percentage of correct classification (PCC, 95% CI); calibration with the calibration curve and the standardized mortality ratio (SMR, 95% CI). Results: The CTs were produced with different selections of variables and decision rules: CART (5 variables and 8 decision rules), CHAID (7 variables and 15 rules), and C4.5 (6 variables and 10 rules). The common variables were: inotropic therapy, Glasgow score, age, (A-a)O2 gradient, and antecedent of chronic illness. In the VS, all the models achieved acceptable discrimination with AUC above 0.7. CT: CART 0.75 (0.71-0.81), CHAID 0.76 (0.72-0.79), and C4.5 0.76 (0.73-0.80). PCC: CART 72 (69-75), CHAID 72 (69-75), and C4.5 76 (73-79). Calibration (SMR) was better in the CTs: CART 1.04 (0.95-1.31), CHAID 1.06 (0.97-1.15), and C4.5 1.08 (0.98-1.16). Conclusion: With different CT methodologies, trees are generated with different selections of variables and decision rules. The CTs are easy to interpret, and they stratify the risk of hospital mortality. The CTs should be taken into account for the classification of the prognosis of critically ill patients.
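The standardized mortality ratio used above to assess calibration is the ratio of observed deaths to expected deaths, where expected deaths are the sum of the model-predicted mortality probabilities; values near 1 indicate good calibration. A minimal sketch with hypothetical data:

```python
def standardized_mortality_ratio(observed_deaths: int,
                                 predicted_probs: list) -> float:
    """SMR = observed deaths / expected deaths, where expected deaths
    is the sum of model-predicted mortality probabilities.
    An SMR near 1 indicates good calibration."""
    expected_deaths = sum(predicted_probs)
    return observed_deaths / expected_deaths

# Hypothetical validation set: 3 observed deaths among 6 patients
probs = [0.1, 0.2, 0.4, 0.5, 0.8, 1.0]  # model-predicted probabilities
print(round(standardized_mortality_ratio(3, probs), 2))  # 1.0
```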
Abstract:
The antibiotic pipeline continues to diminish, and the majority of the public remains unaware of this critical situation. The causes of the decline in antibiotic development are multifactorial, and currently most ICUs are confronted with the challenge of multidrug-resistant organisms. Antimicrobial multidrug resistance is expanding all over the world, with extreme and pandrug resistance being increasingly encountered, especially in healthcare-associated infections in large, highly specialized hospitals. Antibiotic stewardship for critically ill patients has translated into the implementation of specific guidelines, largely promoted by the Surviving Sepsis Campaign, targeted at education to optimize the choice, dosage, and duration of antibiotics in order to improve outcomes and reduce the development of resistance. Inappropriate antimicrobial therapy, meaning the selection of an antibiotic to which the causative pathogen is resistant, is a consistent predictor of poor outcomes in septic patients. Therefore, pharmacokinetically/pharmacodynamically optimized dosing regimens should be given to all patients empirically and, once the pathogen and susceptibility are known, local stewardship practices may be employed on the basis of clinical response to redefine an appropriate regimen for the patient. This review will focus on the most severely ill patients, for whom substantial progress in organ support along with diagnostic and therapeutic strategies has markedly increased the risk of nosocomial infections.
Abstract:
BACKGROUND: The purpose of this study was to confirm the prognostic value of pancreatic stone protein (PSP) in patients with severe infections requiring ICU management and to develop and validate a model to enhance mortality prediction by combining severity scores with biomarkers. METHODS: We prospectively enrolled patients with severe sepsis or septic shock in mixed tertiary ICUs in Switzerland (derivation cohort) and Brazil (validation cohort). Severity scores (APACHE [Acute Physiology and Chronic Health Evaluation] II or Simplified Acute Physiology Score [SAPS] II) were combined with biomarkers obtained at the time of diagnosis of sepsis, including C-reactive protein, procalcitonin (PCT), and PSP. Logistic regression models with the lowest prediction errors were selected to predict in-hospital mortality. RESULTS: Mortality rates of patients with septic shock in the derivation cohort (103 out of 158 patients) and the validation cohort (53 out of 91 patients) were 37% and 57%, respectively. APACHE II and PSP were significantly higher in dying patients. In the derivation cohort, the models combining either APACHE II, PCT, and PSP (area under the receiver operating characteristic curve [AUC], 0.721; 95% CI, 0.632-0.812) or SAPS II, PCT, and PSP (AUC, 0.710; 95% CI, 0.617-0.802) performed better than each individual biomarker (AUC PCT, 0.534; 95% CI, 0.433-0.636; AUC PSP, 0.665; 95% CI, 0.572-0.758) or severity score (AUC APACHE II, 0.638; 95% CI, 0.543-0.733; AUC SAPS II, 0.598; 95% CI, 0.499-0.698). These models were externally confirmed in the independent validation cohort. CONCLUSIONS: We confirmed the prognostic value of PSP in patients with severe sepsis and septic shock requiring ICU management. A model combining severity scores with PCT and PSP improves mortality prediction in these patients.
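The combined models described above are standard logistic regressions over a severity score plus biomarkers. The sketch below shows how such a model maps APACHE II, PCT, and PSP values to an in-hospital mortality probability; the coefficients are hypothetical placeholders, not the fitted values from either cohort:

```python
import math

def mortality_probability(apache_ii: float, pct: float, psp: float) -> float:
    """In-hospital mortality probability from a logistic model combining
    APACHE II with PCT and PSP, as in the abstract's combined models.
    All coefficients below are hypothetical placeholders."""
    b0, b_apache, b_pct, b_psp = -4.0, 0.08, 0.02, 0.01  # hypothetical
    z = b0 + b_apache * apache_ii + b_pct * pct + b_psp * psp
    return 1.0 / (1.0 + math.exp(-z))

# Higher severity scores push the predicted probability upward:
p = mortality_probability(apache_ii=25.0, pct=5.0, psp=100.0)
print(0.0 < p < 1.0)  # True
```

In practice such coefficients would be fitted (e.g. by maximum likelihood) on the derivation cohort and then checked on the validation cohort, as the study reports.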