952 results for Ill Patients
Abstract:
BACKGROUND: Conversion of glucose into lipid (de novo lipogenesis; DNL) is a possible fate of carbohydrate administered during nutritional support. It cannot be detected by conventional methods such as indirect calorimetry if it does not exceed lipid oxidation. OBJECTIVE: The objective was to evaluate the effects of carbohydrate administered as part of continuous enteral nutrition in critically ill patients. DESIGN: This was a prospective, open study including 25 patients nonconsecutively admitted to a medicosurgical intensive care unit. Glucose metabolism and hepatic DNL were measured in the fasting state or after 3 d of continuous isoenergetic enteral feeding providing 28%, 53%, or 75% carbohydrate. RESULTS: DNL increased with increasing carbohydrate intake (mean +/- SEM: 7.5 +/- 1.2% with 28% carbohydrate, 9.2 +/- 1.5% with 53% carbohydrate, and 19.4 +/- 3.8% with 75% carbohydrate) and was nearly zero in a group of patients who had fasted for an average of 28 h (1.0 +/- 0.2%). In multiple regression analysis, DNL was correlated with carbohydrate intake, but not with body weight or plasma insulin concentrations. Endogenous glucose production, assessed with a dual-isotope technique, was not significantly different between the 3 groups of patients (13.7-15.3 micromol * kg(-1) * min(-1)), indicating impaired suppression by carbohydrate feeding. Gluconeogenesis, measured with [(13)C]bicarbonate, increased as the carbohydrate intake increased (from 2.1 +/- 0.5 micromol * kg(-1) * min(-1) with 28% carbohydrate intake to 3.7 +/- 0.3 micromol * kg(-1) * min(-1) with 75% carbohydrate intake, P < 0.05). CONCLUSION: Carbohydrate feeding fails to suppress endogenous glucose production and gluconeogenesis, but stimulates DNL in critically ill patients.
Abstract:
The main objective of this article is to assess the risk factors and the types of support surface associated with the development of pressure ulcers (PU) in critically ill patients in an Intensive Care Unit (ICU).
Abstract:
Antibiotics are widely used in critical care, and adequate empirical treatment has a significant impact on the outcome of many patients. Most nosocomial infections may be due to multidrug-resistant pathogens and require empirical broad-spectrum coverage before identification of the etiologic agent. This is associated with overuse of antibiotics, which contributes to a further increase in multidrug resistance. In this context, new strategies targeted at antibiotic control, such as guidelines and de-escalation, are needed to control this evolution.
Abstract:
RATIONALE, AIMS AND OBJECTIVES: There is little evidence regarding the benefit of stress ulcer prophylaxis (SUP) outside a critical care setting. Overprescription of SUP is not devoid of risks. This prospective study aimed to evaluate the use of proton pump inhibitors (PPIs) for SUP in a general surgery department. METHOD: Data collection was performed prospectively by pharmacists during an 8-week period on patients hospitalized in a general surgery department (58 beds). Patients with a PPI prescription for the treatment of ulcers, gastro-oesophageal reflux disease, oesophagitis or epigastric pain were excluded. Patients admitted twice during the study period were not reincluded. The American Society of Health-System Pharmacists guidelines on SUP were used to assess the appropriateness of de novo PPI prescriptions. RESULTS: Among 255 patients in the study, 138 (54%) received prophylaxis with a PPI, of which 86 (62%) were de novo PPI prescriptions. A total of 129 patients (94%) received esomeprazole (according to the hospital drug policy). The most frequent dosage was 40 mg once daily. Use of PPIs for SUP was evaluated in 67 patients. A total of 53 patients (79%) had no risk factors for SUP. Twelve and two patients had one or two risk factors, respectively. At discharge, PPI prophylaxis was continued in 33% of patients with a de novo PPI prescription. CONCLUSIONS: This study highlights the overuse of PPIs in non-intensive care unit patients and the inappropriate continuation of PPI prescriptions at discharge. Treatment recommendations for SUP are needed to restrict PPI use to justified indications.
Abstract:
BACKGROUND: Enteral nutrition (EN) is recommended for patients in the intensive-care unit (ICU), but it does not consistently achieve nutritional goals. We assessed whether delivery of 100% of the energy target from days 4 to 8 in the ICU with EN plus supplemental parenteral nutrition (SPN) could optimise clinical outcome. METHODS: This randomised controlled trial was undertaken in two centres in Switzerland. We enrolled patients on day 3 of admission to the ICU who had received less than 60% of their energy target from EN, were expected to stay for longer than 5 days, and to survive for longer than 7 days. We calculated energy targets with indirect calorimetry on day 3, or if not possible, set targets as 25 and 30 kcal per kg of ideal bodyweight a day for women and men, respectively. Patients were randomly assigned (1:1) by a computer-generated randomisation sequence to receive EN or SPN. The primary outcome was occurrence of nosocomial infection after cessation of intervention (day 8), measured until end of follow-up (day 28), analysed by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT00802503. FINDINGS: We randomly assigned 153 patients to SPN and 152 to EN. 30 patients discontinued before the study end. Mean energy delivery between day 4 and 8 was 28 kcal/kg per day (SD 5) for the SPN group (103% [SD 18%] of energy target), compared with 20 kcal/kg per day (7) for the EN group (77% [27%]). Between days 9 and 28, 41 (27%) of 153 patients in the SPN group had a nosocomial infection compared with 58 (38%) of 152 patients in the EN group (hazard ratio 0·65, 95% CI 0·43-0·97; p=0·0338), and the SPN group had a lower mean number of nosocomial infections per patient (-0·42 [-0·79 to -0·05]; p=0·0248). 
INTERPRETATION: Individually optimised energy supplementation with SPN starting 4 days after ICU admission could reduce nosocomial infections and should be considered as a strategy to improve clinical outcome in patients in the ICU for whom EN is insufficient. FUNDING: Foundation Nutrition 2000Plus, ICU Quality Funds, Baxter, and Fresenius Kabi.
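The trial's fallback rule for setting energy targets when indirect calorimetry is unavailable (25 kcal/kg of ideal bodyweight per day for women, 30 for men) can be sketched as follows; the function name is illustrative, not from the study protocol:

```python
def energy_target_kcal_per_day(ideal_body_weight_kg: float, sex: str) -> float:
    """Daily energy target per the trial's fallback rule:
    25 kcal/kg ideal bodyweight for women, 30 kcal/kg for men."""
    per_kg = 25 if sex == "female" else 30
    return per_kg * ideal_body_weight_kg
```

For example, a 60 kg ideal-bodyweight woman would receive a target of 1500 kcal/day under this rule.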
Abstract:
Background: Development of three classification trees (CT) based on the CART (Classification and Regression Trees), CHAID (Chi-Square Automatic Interaction Detection) and C4.5 methodologies for the calculation of probability of hospital mortality; the comparison of the results with the APACHE II, SAPS II and MPM II-24 scores, and with a model based on multiple logistic regression (LR). Methods: Retrospective study of 2864 patients. Random partition (70:30) into a Development Set (DS) n = 1808 and Validation Set (VS) n = 808. Their properties of discrimination are compared with the ROC curve (AUC CI 95%), Percent of correct classification (PCC CI 95%); and the calibration with the Calibration Curve and the Standardized Mortality Ratio (SMR CI 95%). Results: CTs are produced with a different selection of variables and decision rules: CART (5 variables and 8 decision rules), CHAID (7 variables and 15 rules) and C4.5 (6 variables and 10 rules). The common variables were: inotropic therapy, Glasgow, age, (A-a)O2 gradient and antecedent of chronic illness. In VS: all the models achieved acceptable discrimination with AUC above 0.7. CT: CART (0.75(0.71-0.81)), CHAID (0.76(0.72-0.79)) and C4.5 (0.76(0.73-0.80)). PCC: CART (72(69- 75)), CHAID (72(69-75)) and C4.5 (76(73-79)). Calibration (SMR) better in the CT: CART (1.04(0.95-1.31)), CHAID (1.06(0.97-1.15) and C4.5 (1.08(0.98-1.16)). Conclusion: With different methodologies of CTs, trees are generated with different selection of variables and decision rules. The CTs are easy to interpret, and they stratify the risk of hospital mortality. The CTs should be taken into account for the classification of the prognosis of critically ill patients.
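The two evaluation metrics used above can be computed directly: AUC equals the probability that a randomly chosen death receives a higher predicted risk than a randomly chosen survivor (the Mann-Whitney formulation), and SMR is observed deaths divided by the sum of predicted mortality probabilities. A minimal stdlib-only sketch (function names are illustrative):

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: probability a positive case (death) outranks
    a negative case (survivor); ties count as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def smr(observed_deaths, predicted_probs):
    """Standardized mortality ratio: observed / expected deaths,
    where expected = sum of per-patient predicted probabilities."""
    return observed_deaths / sum(predicted_probs)
```

An SMR near 1.0 (with a 95% CI including 1.0, as for all three trees in the validation set) indicates good calibration; AUC above 0.7 indicates acceptable discrimination.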
Abstract:
The antibiotic pipeline continues to diminish and the majority of the public remains unaware of this critical situation. The cause of the decline of antibiotic development is multifactorial, and currently most ICUs are confronted with the challenge of multidrug-resistant organisms. Antimicrobial multidrug resistance is expanding all over the world, with extreme and pandrug resistance being increasingly encountered, especially in healthcare-associated infections in large highly specialized hospitals. Antibiotic stewardship for critically ill patients has translated into the implementation of specific guidelines, largely promoted by the Surviving Sepsis Campaign, targeted at education to optimize the choice, dosage, and duration of antibiotics in order to improve outcomes and reduce the development of resistance. Inappropriate antimicrobial therapy, meaning the selection of an antibiotic to which the causative pathogen is resistant, is a consistent predictor of poor outcomes in septic patients. Therefore, pharmacokinetically/pharmacodynamically optimized dosing regimens should be given to all patients empirically and, once the pathogen and susceptibility are known, local stewardship practices may be employed on the basis of clinical response to redefine an appropriate regimen for the patient. This review will focus on the most severely ill patients, for whom substantial progress in organ support along with diagnostic and therapeutic strategies has markedly increased the risk of nosocomial infections.
Abstract:
BACKGROUND: The purpose of this study was to confirm the prognostic value of pancreatic stone protein (PSP) in patients with severe infections requiring ICU management and to develop and validate a model to enhance mortality prediction by combining severity scores with biomarkers. METHODS: We prospectively enrolled patients with severe sepsis or septic shock in mixed tertiary ICUs in Switzerland (derivation cohort) and Brazil (validation cohort). Severity scores (APACHE [Acute Physiology and Chronic Health Evaluation] II or Simplified Acute Physiology Score [SAPS] II) were combined with biomarkers obtained at the time of diagnosis of sepsis, including C-reactive protein, procalcitonin (PCT), and PSP. Logistic regression models with the lowest prediction errors were selected to predict in-hospital mortality. RESULTS: Mortality rates of patients with septic shock enrolled in the derivation cohort (103 out of 158) and the validation cohort (53 out of 91) were 37% and 57%, respectively. APACHE II and PSP were significantly higher in dying patients. In the derivation cohort, the models combining either APACHE II, PCT, and PSP (area under the receiver operating characteristic curve [AUC], 0.721; 95% CI, 0.632-0.812) or SAPS II, PCT, and PSP (AUC, 0.710; 95% CI, 0.617-0.802) performed better than each individual biomarker (AUC PCT, 0.534; 95% CI, 0.433-0.636; AUC PSP, 0.665; 95% CI, 0.572-0.758) or severity score (AUC APACHE II, 0.638; 95% CI, 0.543-0.733; AUC SAPS II, 0.598; 95% CI, 0.499-0.698). These models were externally confirmed in the independent validation cohort. CONCLUSIONS: We confirmed the prognostic value of PSP in patients with severe sepsis and septic shock requiring ICU management. A model combining severity scores with PCT and PSP improves mortality prediction in these patients.
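A logistic regression model of the kind described combines the severity score and biomarkers linearly and passes the result through a sigmoid to yield an in-hospital mortality probability. The coefficients below are purely hypothetical placeholders (the abstract does not report the fitted model); the sketch only illustrates the model's shape:

```python
import math

def mortality_probability(apache_ii, pct, psp,
                          coef=(-4.0, 0.08, 0.01, 0.005)):
    """Logistic model P(death) = sigmoid(b0 + b1*APACHE-II + b2*PCT + b3*PSP).
    Coefficients are HYPOTHETICAL, for illustration only."""
    b0, b_apache, b_pct, b_psp = coef
    z = b0 + b_apache * apache_ii + b_pct * pct + b_psp * psp
    return 1.0 / (1.0 + math.exp(-z))
```

With any positive coefficients, higher APACHE II, PCT, or PSP values monotonically increase the predicted mortality probability, which is the behavior the combined models exploit.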
Abstract:
PURPOSE: Unlike in the outpatient setting, delivery of aerosols to critically ill patients may be considered complex, particularly in ventilated patients, and benefits remain to be proven. Many factors influence aerosol delivery and recommendations exist, but little is known about knowledge translation into clinical practice. METHODS: Two-week cross-sectional study to assess the prevalence of aerosol therapy in 81 intensive and intermediate care units in 22 countries. All aerosols delivered to patients breathing spontaneously, ventilated invasively or noninvasively (NIV) were recorded, and drugs, devices, ventilator settings, circuit set-up, humidification and side effects were noted. RESULTS: A total of 9714 aerosols were administered to 678 of the 2808 admitted patients (24 %, CI95 22-26 %), whereas only 271 patients (10 %) were taking inhaled medication before admission. There were large variations among centers, from 0 to 57 %. Among intubated patients 22 % (n = 262) received aerosols, as did 50 % (n = 149) of patients undergoing NIV, predominantly (75 %) in between NIV sessions. Bronchodilators (n = 7960) and corticosteroids (n = 1233) were the most frequently delivered drugs (88 % overall), predominantly but not exclusively (49 %) administered to patients with chronic airway disease. An anti-infectious drug was aerosolized 509 times (5 % of all aerosols) for nosocomial infections. Jet nebulizers were the most frequently used device (56 %), followed by metered dose inhalers (23 %). Only 106 (<1 %) mild side effects were observed, despite frequent suboptimal set-ups such as an external gas supply of jet nebulizers for intubated patients. CONCLUSIONS: Aerosol therapy concerns every fourth critically ill patient and one-fifth of ventilated patients.
Abstract:
The results of recent large-scale clinical trials have led us to review our understanding of the metabolic response to stress and the most appropriate means of managing nutrition in critically ill patients. This review presents an update in this field, identifying and discussing a number of areas for which consensus has been reached and others where controversy remains and presenting areas for future research. We discuss optimal calorie and protein intake, the incidence and management of re-feeding syndrome, the role of gastric residual volume monitoring, the place of supplemental parenteral nutrition when enteral feeding is deemed insufficient, the role of indirect calorimetry, and potential indications for several pharmaconutrients.
Abstract:
The aims of this study were to determine whether standard base excess (SBE) is a useful diagnostic tool for metabolic acidosis, whether metabolic acidosis is clinically relevant in daily evaluation of critically ill patients, and to identify the most robust acid-base determinants of SBE. Thirty-one critically ill patients were enrolled. Arterial blood samples were drawn at admission and 24 h later. SBE, as calculated by Van Slyke's (SBE VS) or Wooten's (SBE W) equations, accurately diagnosed metabolic acidosis (AUC = 0.867, 95%CI = 0.690-1.043 and AUC = 0.817, 95%CI = 0.634-0.999, respectively). SBE VS was weakly correlated with total SOFA (r = -0.454, P < 0.001) and was similar to SBE W (r = -0.482, P < 0.001). All acid-base variables were categorized as SBE VS <-2 mEq/L or SBE VS <-5 mEq/L. SBE VS <-2 mEq/L was better able to identify strong ion gap acidosis than SBE VS <-5 mEq/L; there were no significant differences regarding other variables. To demonstrate unmeasured anions, anion gap (AG) corrected for albumin (AG A) was superior to AG corrected for albumin and phosphate (AG A+P) when strong ion gap was used as the standard method. Mathematical modeling showed that albumin level, apparent strong ion difference, AG A, and lactate concentration explained SBE VS variations with an R² = 0.954. SBE VS with a cut-off value of <-2 mEq/L was the best tool to diagnose clinically relevant metabolic acidosis. To analyze the components of SBE VS shifts at the bedside, AG A, apparent strong ion difference, albumin level, and lactate concentration are easily measurable variables that best represent the partitioning of acid-base derangements.
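The quantities analyzed above can be computed at the bedside. The abstract does not reprint the equations, so the constants below follow the commonly published forms: the Van Slyke equation for standard base excess at standardized hemoglobin, and the Figge albumin correction of the anion gap (+2.5 mEq/L per g/dL of albumin below 4.4). A sketch under those assumptions:

```python
def sbe_van_slyke(ph, hco3_mEq_L):
    """Standard base excess (mEq/L), Van Slyke equation in its commonly
    published standardized-hemoglobin form."""
    return 0.9287 * (hco3_mEq_L - 24.4 + 14.83 * (ph - 7.40))

def anion_gap_albumin_corrected(na, cl, hco3, albumin_g_dL):
    """Anion gap (mEq/L) corrected for hypoalbuminemia (Figge):
    add 2.5 mEq/L for each g/dL of albumin below 4.4."""
    ag = na - cl - hco3
    return ag + 2.5 * (4.4 - albumin_g_dL)
```

For instance, pH 7.30 with HCO3 20 mEq/L yields an SBE well below the study's -2 mEq/L cut-off, flagging clinically relevant metabolic acidosis under the criterion above.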
Abstract:
Our objective was to compare the pattern of organ dysfunctions and outcomes of critically ill patients with systemic lupus erythematosus (SLE) with patients with other systemic rheumatic diseases (SRD). We studied 116 critically ill SRD patients, 59 SLE and 57 other-SRD patients. The SLE group was younger and included more women. Respiratory failure (61%) and shock (39%) were the most common causes of ICU admission for other-SRD and SLE groups, respectively. ICU length-of-stay was similar for the two groups. The 60-day survival adjusted for the groups’ baseline imbalances was not different (P = 0.792). Total SOFA scores were equal for the two groups at admission and during ICU stay, although respiratory function was worse in the other-SRD group at admission and renal and hematological functions were worse in the SLE group at admission. The incidence of severe respiratory dysfunction (respiratory SOFA >2) at admission was higher in the other-SRD group, whereas severe hematological dysfunction (hematological SOFA >2) during ICU stay was higher in the SLE group. SLE patients were younger and displayed a decreased incidence of respiratory failure compared to patients with other-SRDs. However, the incidences of renal and hematological failure and the presence of shock at admission were higher in the SLE group. The 60-day survival rates were similar.