92 results for "Logistic regression model"
Abstract:
PURPOSE: To develop a score predicting the risk of adverse events (AEs) in pediatric patients with cancer who experience fever and neutropenia (FN) and to evaluate its performance. PATIENTS AND METHODS: Pediatric patients with cancer presenting with FN induced by nonmyeloablative chemotherapy were observed in a prospective multicenter study. A score predicting the risk of future AEs (ie, serious medical complication, microbiologically defined infection, radiologically confirmed pneumonia) was developed from a multivariate mixed logistic regression model. Its cross-validated predictive performance was compared with that of published risk prediction rules. RESULTS: An AE was reported in 122 (29%) of 423 FN episodes. In 57 episodes (13%), the first AE was known only after reassessment after 8 to 24 hours of inpatient management. Prediction of AEs at reassessment was better than prediction at presentation with FN. A differential leukocyte count did not increase the predictive performance. The score predicting future AEs in 358 episodes without known AE at reassessment used the following four variables: preceding chemotherapy more intensive than acute lymphoblastic leukemia maintenance (weight = 4), hemoglobin ≥ 90 g/L (weight = 5), leukocyte count less than 0.3 G/L (weight = 3), and platelet count less than 50 G/L (weight = 3). A score (sum of weights) ≥ 9 predicted future AEs. The cross-validated performance of this score exceeded the performance of published risk prediction rules. At an overall sensitivity of 92%, 35% of the episodes were classified as low risk, with a specificity of 45% and a negative predictive value of 93%. CONCLUSION: This score, based on four routinely accessible characteristics, accurately identifies pediatric patients with cancer and FN who are at risk for AEs after reassessment.
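As a reading aid, the weighted score described above can be written down as a small rule-based function. The sketch below is only an illustration: the column names and the two example episodes are invented, while the four predictors, their weights (4, 5, 3, 3) and the ≥9 cut-off are taken from the abstract.

```python
import pandas as pd

def fn_risk_score(row: pd.Series) -> int:
    """Sum of the four weights reported in the abstract."""
    score = 0
    if row["chemo_more_intensive_than_all_maintenance"]:
        score += 4
    if row["hemoglobin_g_per_l"] >= 90:
        score += 5
    if row["leukocytes_g_per_l"] < 0.3:
        score += 3
    if row["platelets_g_per_l"] < 50:
        score += 3
    return score

# Two invented episodes, not study data.
episodes = pd.DataFrame({
    "chemo_more_intensive_than_all_maintenance": [True, False],
    "hemoglobin_g_per_l": [95, 80],
    "leukocytes_g_per_l": [0.2, 0.8],
    "platelets_g_per_l": [40, 120],
})
episodes["score"] = episodes.apply(fn_risk_score, axis=1)
episodes["predicted_future_ae"] = episodes["score"] >= 9  # threshold from the abstract
print(episodes[["score", "predicted_future_ae"]])
```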
Abstract:
BACKGROUND: Chest pain raises concern for the possibility of coronary heart disease. Scoring methods have been developed to identify coronary heart disease in emergency settings, but not in primary care. METHODS: Data were collected from a multicenter Swiss clinical cohort study including 672 consecutive patients with chest pain who had visited one of 59 family practitioners' offices. Using delayed diagnosis as the reference standard, we derived a prediction rule to rule out coronary heart disease by means of a logistic regression model. Known cardiovascular risk factors, pain characteristics, and physical signs associated with coronary heart disease were explored to develop a clinical score. Patients diagnosed with angina or acute myocardial infarction within the year following their initial visit comprised the coronary heart disease group. RESULTS: The coronary heart disease score was derived from eight variables: age, gender, duration of chest pain from 1 to 60 minutes, substernal chest pain location, pain increasing with exertion, absence of tenderness point at palpation, cardiovascular risk factors, and personal history of cardiovascular disease. The area under the receiver operating characteristic curve was 0.95 (95% confidence interval, 0.92 to 0.97). From this score, 413 patients were classified as low risk, using the 5th percentile of the score among coronary heart disease patients as the cut-off. Internal validity was confirmed by bootstrapping. External validation using data from a German cohort (Marburg, n = 774) revealed an area under the receiver operating characteristic curve of 0.75 (95% confidence interval, 0.72 to 0.81), with a sensitivity of 85.6% and a specificity of 47.2%. CONCLUSIONS: This score, based only on history and physical examination, is a complementary tool for ruling out coronary heart disease in primary care patients complaining of chest pain.
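To illustrate this type of derivation and validation, the sketch below fits a logistic regression on synthetic stand-ins for the eight history and examination items and reports the apparent area under the ROC curve with a naive bootstrap interval. Every variable name, coefficient and observation is invented, and the bootstrap shown only resamples the apparent AUC rather than refitting the model, so it is a simplified stand-in for the optimism-corrected validation the study performed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 672  # same order of magnitude as the derivation cohort

# Synthetic stand-ins for the eight predictors named in the abstract.
X = pd.DataFrame({
    "age": rng.normal(55, 12, n),
    "male": rng.integers(0, 2, n),
    "pain_1_to_60_min": rng.integers(0, 2, n),
    "substernal_pain": rng.integers(0, 2, n),
    "pain_on_exertion": rng.integers(0, 2, n),
    "no_tenderness": rng.integers(0, 2, n),
    "cv_risk_factor": rng.integers(0, 2, n),
    "history_cvd": rng.integers(0, 2, n),
})
logit_true = -6 + 0.06 * X["age"] + 0.8 * X["pain_on_exertion"] + 0.9 * X["history_cvd"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
pred = np.asarray(model.predict(sm.add_constant(X)))
auc = roc_auc_score(y, pred)

# Naive bootstrap of the apparent AUC (no refitting).
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(roc_auc_score(y[idx], pred[idx]))
print(f"AUC {auc:.2f} (bootstrap 95% CI "
      f"{np.percentile(boot, 2.5):.2f}-{np.percentile(boot, 97.5):.2f})")
```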
Abstract:
Postoperative delirium after cardiac surgery is associated with increased morbidity and mortality as well as prolonged stay in both the intensive care unit and the hospital. The authors sought to identify modifiable risk factors associated with the development of postoperative delirium in elderly patients after elective cardiac surgery, in order to design follow-up studies aimed at preventing delirium by optimizing perioperative management. A post hoc analysis of data from patients enrolled in a randomized controlled trial was performed at a single university hospital. One hundred thirteen patients aged 65 or older undergoing elective cardiac surgery with cardiopulmonary bypass were included; there was no study intervention. MEASUREMENTS AND MAIN RESULTS: Screening for delirium was performed using the Confusion Assessment Method (CAM) on the first 6 postoperative days. A multivariable logistic regression model was developed to identify significant risk factors and to control for confounders. Delirium developed in 35 of 113 patients (30%). The multivariable model showed the maximum value of C-reactive protein measured postoperatively, the dose of fentanyl per kilogram of body weight administered intraoperatively, and the duration of mechanical ventilation to be independently associated with delirium. In this post hoc analysis, larger doses of fentanyl administered intraoperatively and longer duration of mechanical ventilation were associated with postoperative delirium in the elderly after cardiac surgery. Prospective randomized trials should be performed to test the hypotheses that a reduced dose of fentanyl administered intraoperatively, the use of a different opioid, or weaning protocols aimed at early extubation prevent delirium in these patients.
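A multivariable logistic regression of this kind is typically reported as adjusted odds ratios with confidence intervals. The sketch below shows the general pattern on simulated data; the predictor names mirror the risk factors discussed in the abstract, but the data, coefficients and the additional age term are assumptions, not the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 113
df = pd.DataFrame({
    "crp_max": rng.gamma(2.0, 50.0, n),              # postoperative maximum CRP, mg/L (simulated)
    "fentanyl_ug_per_kg": rng.normal(12.0, 4.0, n),  # intraoperative dose (simulated)
    "ventilation_hours": rng.gamma(2.0, 6.0, n),
    "age": rng.normal(73, 5, n),
})
lp = -4 + 0.01 * df["crp_max"] + 0.12 * df["fentanyl_ug_per_kg"] + 0.05 * df["ventilation_hours"]
df["delirium"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

fit = smf.logit("delirium ~ crp_max + fentanyl_ug_per_kg + ventilation_hours + age",
                data=df).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals.
ors = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors.round(2))
```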
Abstract:
BACKGROUND: The purpose of the optic nerve sheath diameter (ONSD) research group project is to establish an individual patient-level database from high-quality studies of ONSD ultrasonography for the detection of raised intracranial pressure (ICP), and to perform a systematic review and an individual patient data meta-analysis (IPDMA), which will provide a cutoff value to help physicians make decisions and encourage further research. Previous meta-analyses were able to assess the diagnostic accuracy of ONSD ultrasonography in detecting raised ICP but failed to determine a precise cutoff value. Thus, the ONSD research group was founded to synthesize data from several recent studies on the subject and to provide evidence on the diagnostic accuracy of ONSD ultrasonography in detecting raised ICP. METHODS: This IPDMA will be conducted in different phases. First, we will systematically search for eligible studies. To be eligible, studies must have compared ONSD ultrasonography to invasive intracranial devices, the current reference standard for diagnosing raised ICP. Subsequently, we will assess the quality of the included studies using the QUADAS-2 tool, and then collect and validate individual patient data. The objectives of the primary analyses will be to assess the diagnostic accuracy of ONSD ultrasonography and to determine a precise cutoff value for detecting raised ICP. Second, we will construct a logistic regression model to assess whether patient and study characteristics influence diagnostic accuracy. DISCUSSION: We believe that this IPDMA will provide the most reliable basis for assessing the diagnostic accuracy of ONSD ultrasonography for detecting raised ICP and for providing a cutoff value. We also hope that the creation of the ONSD research group will encourage further study. TRIAL REGISTRATION: PROSPERO registration number: CRD42012003072.
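Once individual patient data are pooled, a cutoff can be read off the ROC curve, for example by maximising the Youden index. The sketch below does this on simulated ONSD measurements; the distributions and the 35% prevalence of raised ICP are assumptions for illustration, not results of the IPDMA.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
n = 500  # pooled individual patient data (simulated)

raised_icp = rng.binomial(1, 0.35, n)        # invasive reference standard
onsd_mm = np.where(raised_icp == 1,
                   rng.normal(6.0, 0.6, n),  # assumed distribution when ICP is raised
                   rng.normal(5.0, 0.6, n))  # assumed distribution otherwise

fpr, tpr, thresholds = roc_curve(raised_icp, onsd_mm)
best = np.argmax(tpr - fpr)                  # Youden index J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(raised_icp, onsd_mm):.2f}")
print(f"Cutoff maximising J: {thresholds[best]:.2f} mm "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```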
Abstract:
We investigated the association between diet and head and neck cancer (HNC) risk using data from the International Head and Neck Cancer Epidemiology (INHANCE) consortium. The INHANCE pooled data included 22 case-control studies with 14,520 cases and 22,737 controls. Center-specific quartiles among the controls were used for food groups, and frequencies per week were used for single food items. A dietary pattern score combining high fruit and vegetable intake and low red meat intake was created. Odds ratios (OR) and 95% confidence intervals (CI) for the dietary items on the risk of HNC were estimated with a two-stage random-effects logistic regression model. An inverse association was observed for higher-frequency intake of fruit (4th vs. 1st quartile OR = 0.52, 95% CI = 0.43-0.62, p (trend) < 0.01) and vegetables (OR = 0.66, 95% CI = 0.49-0.90, p (trend) = 0.01). Intake of red meat (OR = 1.40, 95% CI = 1.13-1.74, p (trend) < 0.01) was positively associated with HNC risk. Higher dietary pattern scores, reflecting high fruit/vegetable and low red meat intake, were associated with reduced HNC risk (per score increment OR = 0.90, 95% CI = 0.84-0.97).
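A two-stage random-effects analysis of this kind first estimates a study-specific log odds ratio and then pools the estimates while allowing for between-study heterogeneity. The sketch below shows the idea with simulated case-control studies and a DerSimonian-Laird pooling step; it is a simplified illustration (one binary exposure, five studies), not the consortium's model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Stage 1: study-specific log odds ratios for a binary dietary exposure.
log_ors, variances = [], []
for _ in range(5):                       # the pooled analysis had 22 studies; 5 here for brevity
    n = 400
    exposure = rng.integers(0, 2, n)
    beta_study = rng.normal(-0.6, 0.15)  # simulated between-study heterogeneity
    p_case = 1 / (1 + np.exp(-(-0.5 + beta_study * exposure)))
    case = rng.binomial(1, p_case)
    fit = sm.Logit(case, sm.add_constant(exposure)).fit(disp=False)
    log_ors.append(np.asarray(fit.params)[1])
    variances.append(np.asarray(fit.cov_params())[1, 1])

log_ors, variances = np.array(log_ors), np.array(variances)

# Stage 2: DerSimonian-Laird random-effects pooling of the log odds ratios.
w = 1 / variances
fixed = np.sum(w * log_ors) / np.sum(w)
q = np.sum(w * (log_ors - fixed) ** 2)
tau2 = max(0.0, (q - (len(w) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_star = 1 / (variances + tau2)
pooled, se = np.sum(w_star * log_ors) / np.sum(w_star), np.sqrt(1 / np.sum(w_star))
print(f"Pooled OR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f}-{np.exp(pooled + 1.96 * se):.2f})")
```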
Abstract:
BACKGROUND: Some physicians are still concerned about the safety of treatment at home of patients with acute deep venous thrombosis (DVT). METHODS: We used data from the RIETE (Registro Informatizado de la Enfermedad TromboEmbólica) registry to compare the outcomes in consecutive outpatients with acute lower limb DVT according to initial treatment at home or in the hospital. A propensity score-matching analysis was carried out with a logistic regression model. RESULTS: As of December 2012, 13,493 patients had been enrolled. Of these, 4456 (31%) were treated at home. Patients treated at home were more likely to be male and younger and to weigh more; they were less likely than those treated in the hospital to have chronic heart failure, lung disease, renal insufficiency, anemia, recent bleeding, immobilization, or cancer. During the first week of anticoagulation, 27 patients (0.20%) suffered pulmonary embolism (PE), 12 (0.09%) recurrent DVT, and 51 (0.38%) major bleeding; 80 (0.59%) died. When only patients treated at home were considered, 12 (0.27%) had PE, 4 (0.09%) had recurrent DVT, 6 (0.13%) bled, and 4 (0.09%) died (no fatal PE, 3 fatal bleeds). After propensity analysis, patients treated at home had a similar rate of venous thromboembolism recurrences and a lower rate of major bleeding (odds ratio, 0.4; 95% confidence interval, 0.1-1.0) or death (odds ratio, 0.2; 95% confidence interval, 0.1-0.7) within the first week compared with those treated in the hospital. CONCLUSIONS: In outpatients with DVT, home treatment was associated with a better outcome than treatment in the hospital. These data may help safely treat more DVT patients at home.
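The propensity score-matching step mentioned above can be sketched as follows: a logistic regression estimates each patient's probability of home treatment, and home-treated patients are then matched to hospital-treated patients with the closest score. The covariates, data and 1:1 nearest-neighbour matching with replacement are illustrative assumptions, not the registry's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(62, 15, n),
    "male": rng.integers(0, 2, n),
    "cancer": rng.binomial(1, 0.15, n),
    "renal_insufficiency": rng.binomial(1, 0.10, n),
})
# Simulated treatment assignment: younger male patients without cancer go home more often.
p_home = 1 / (1 + np.exp(-(1.5 - 0.03 * df["age"] + 0.4 * df["male"] - 0.8 * df["cancer"])))
df["home"] = rng.binomial(1, p_home)

# Step 1: propensity score from a logistic regression model.
ps_model = smf.logit("home ~ age + male + cancer + renal_insufficiency", data=df).fit(disp=False)
df["ps"] = ps_model.predict(df)

# Step 2: 1:1 nearest-neighbour matching (with replacement) of home- to hospital-treated patients.
treated, control = df[df["home"] == 1], df[df["home"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])
print(matched.groupby("home")[["age", "cancer"]].mean())  # covariate balance after matching
```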
Abstract:
Ventilator-associated pneumonia (VAP) affects mortality, morbidity and the cost of critical care. Reliable risk estimation might improve end-of-life decisions, resource allocation and outcome. Several scoring systems for survival prediction have been established and optimised over the last decades. Recently, new biomarkers have gained interest in the prognostic field. We assessed whether midregional pro-atrial natriuretic peptide (MR-proANP) and procalcitonin (PCT) improve the predictive value of the Simplified Acute Physiology Score (SAPS) II and the Sequential Organ Failure Assessment (SOFA) in VAP. Specified end-points of a prospective multinational trial including 101 patients with VAP were analysed. Death <28 days after VAP onset was the primary end-point. MR-proANP and PCT were elevated at the onset of VAP in nonsurvivors compared with survivors (p = 0.003 and p = 0.017, respectively), and their slopes of decline differed significantly (p = 0.018 and p = 0.039, respectively). Patients in the highest MR-proANP quartile at VAP onset were at increased risk of death (log rank p = 0.013). In a logistic regression model, MR-proANP was identified as the best predictor of survival. Adding MR-proANP and PCT to SAPS II and SOFA improved their predictive properties (area under the curve 0.895 and 0.880). We conclude that the combination of the two biomarkers, MR-proANP and PCT, improves the survival prediction of clinical severity scores in VAP.
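The key comparison, whether the biomarkers add discrimination on top of an established severity score, can be illustrated by fitting a logistic model with and without the biomarker terms and comparing the areas under the ROC curve. The sketch below does this on simulated data; all distributions and coefficients are invented, and only SAPS II is used as the base score for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 101
df = pd.DataFrame({
    "saps2": rng.normal(45, 12, n),
    "mr_proanp": rng.lognormal(5.0, 0.8, n),  # pmol/L (simulated)
    "pct": rng.lognormal(0.0, 1.0, n),        # ng/mL (simulated)
})
lp = -4 + 0.06 * df["saps2"] + 0.003 * df["mr_proanp"]
df["death_28d"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

base = smf.logit("death_28d ~ saps2", data=df).fit(disp=False)
extended = smf.logit("death_28d ~ saps2 + np.log(mr_proanp) + np.log(pct)",
                     data=df).fit(disp=False)

print("AUC, SAPS II alone       :", round(roc_auc_score(df["death_28d"], base.predict(df)), 3))
print("AUC, SAPS II + biomarkers:", round(roc_auc_score(df["death_28d"], extended.predict(df)), 3))
```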
Abstract:
BACKGROUND: Artemisinin-based combination therapy (ACT) has been promoted as a means to reduce malaria transmission because of its ability to kill both the asexual blood stages of malaria parasites, which sustain infections over long periods, and the immature sexual stages derived from them, which are responsible for infecting mosquitoes and onward transmission. Early studies reported a temporal association between ACT introduction and reduced malaria transmission in a number of ecological settings. However, these reports have come from areas with low to moderate malaria transmission, have been confounded by the presence of other interventions or environmental changes that may have reduced malaria transmission, and have not included a comparison group without ACT. This report presents results from the first large-scale observational study to assess the impact of case management with ACT on population-level measures of malaria endemicity in an area with intense transmission, where the benefits of effective infection clearance might be compromised by frequent and repeated re-infection. METHODS: A pre-post observational study with a non-randomized comparison group was conducted at two sites in Tanzania. Both sites used sulphadoxine-pyrimethamine (SP) monotherapy as a first-line anti-malarial from mid-2001 through 2002. In 2003, the ACT artesunate (AS) co-administered with SP (AS + SP) was introduced in all fixed health facilities in the intervention site, including both public and registered non-governmental facilities. Population-level prevalence of Plasmodium falciparum asexual parasitaemia and gametocytaemia was assessed using light microscopy from samples collected during representative household surveys in 2001, 2002, 2004, 2005 and 2006. FINDINGS: Among 37,309 observations included in the analysis, annual asexual parasitaemia prevalence in persons of all ages ranged from 11% to 28% and gametocytaemia prevalence ranged from <1% to 2% between the two sites and across the five survey years. A multivariable logistic regression model was fitted to adjust for age, socioeconomic status, bed net use and rainfall. In the presence of consistently high coverage and efficacy of SP monotherapy and AS + SP in the comparison and intervention areas, the introduction of ACT in the intervention site was associated with a modest reduction in the adjusted asexual parasitaemia prevalence of 5 percentage points, or 23% (p < 0.0001), relative to the comparison site. Gametocytaemia prevalence did not differ significantly (p = 0.30). INTERPRETATION: The introduction of ACT at fixed health facilities only modestly reduced asexual parasitaemia prevalence. ACT is effective for the treatment of uncomplicated malaria and should have a substantial public health impact on morbidity and mortality, but is unlikely to reduce malaria transmission substantially in much of sub-Saharan Africa, where individuals are rapidly re-infected.
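An adjusted prevalence difference in percentage points can be obtained from a multivariable logistic model by marginal standardization: predict the outcome probability for every observation with the site indicator set to each value and average the difference. The sketch below shows this mechanic on simulated survey data; the covariates, coefficients and sample size are invented and much simpler than the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 8000
df = pd.DataFrame({
    "intervention_site": rng.integers(0, 2, n),
    "age": rng.uniform(0, 60, n),
    "bed_net": rng.binomial(1, 0.5, n),
    "ses": rng.integers(1, 6, n),  # socioeconomic quintile (simulated)
})
lp = -1.2 - 0.3 * df["intervention_site"] - 0.01 * df["age"] - 0.4 * df["bed_net"]
df["parasitaemia"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

fit = smf.logit("parasitaemia ~ intervention_site + age + bed_net + C(ses)",
                data=df).fit(disp=False)

# Marginal standardization: set everyone to each site status and average the predictions.
d1, d0 = df.copy(), df.copy()
d1["intervention_site"], d0["intervention_site"] = 1, 0
adj_diff = fit.predict(d1).mean() - fit.predict(d0).mean()
print(f"Adjusted prevalence difference: {100 * adj_diff:.1f} percentage points")
```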
Abstract:
BACKGROUND: Present combination antiretroviral therapy (cART) alone does not cure HIV infection and requires lifelong drug treatment. The potential role of HIV therapeutic vaccines as part of an HIV cure is under consideration. Our aim was to assess the efficacy, safety, and immunogenicity of Vacc-4x, a peptide-based HIV-1 therapeutic vaccine targeting conserved domains on p24 Gag, in adults infected with HIV-1. METHODS: Between July, 2008, and June, 2010, we did a multinational double-blind, randomised, phase 2 study comparing Vacc-4x with placebo. Participants were adults infected with HIV-1 who were aged 18-55 years and virologically suppressed on cART (viral load <50 copies per mL) with CD4 cell counts of 400 × 10⁶ cells per L or greater. The trial was done at 18 sites in Germany, Italy, Spain, the UK, and the USA. Participants were randomly assigned (2:1) to Vacc-4x or placebo. Group allocation was masked from participants and investigators. Four primary immunisations, weekly for 4 weeks, containing Vacc-4x (or placebo) were given intradermally after administration of adjuvant. Booster immunisations were given at weeks 16 and 18. At week 28, cART was interrupted for up to 24 weeks. The coprimary endpoints were cART resumption and changes in CD4 counts during treatment interruption. Analyses were by modified intention to treat: all participants who received one intervention. Furthermore, safety, viral load, and immunogenicity (as measured by ELISPOT and proliferation assays) were assessed. The 52 week follow-up period was completed in June, 2011. For the coprimary endpoints, the proportion of participants who met the criteria for cART resumption was analysed with a logistic regression model, with the treatment effect assessed in a model including country as a covariate. This study is registered with ClinicalTrials.gov, number NCT00659789. FINDINGS: 174 individuals were screened; because of slow recruitment, enrolment stopped with 136 of a planned 345 participants, and 93 were randomly assigned to receive Vacc-4x and 43 to receive placebo. There were no differences between the two groups for the primary efficacy endpoints in those participants who stopped cART at week 28. cART was resumed by 30 participants (34%) in the Vacc-4x group and 11 (29%) in the placebo group, and percentage changes in CD4 counts were not significant (mean treatment difference -5·71, 95% CI -13·01 to 1·59). However, a significant difference in viral load was noted for the Vacc-4x group both at week 48 (median 23 100 copies per mL Vacc-4x vs 71 800 copies per mL placebo; p=0·025) and week 52 (median 19 550 copies per mL vs 51 000 copies per mL; p=0·041). One serious adverse event, exacerbation of multiple sclerosis, was reported as possibly related to study treatment. Vacc-4x was immunogenic, inducing proliferative responses in both CD4 and CD8 T-cell populations. INTERPRETATION: The proportion of participants resuming cART before the end of study and the change in CD4 counts during the treatment interruption showed no benefit of vaccination. Vacc-4x was safe, well tolerated, immunogenic, seemed to contribute to a viral-load setpoint reduction after cART interruption, and might be worth consideration in future HIV-cure investigative strategies. FUNDING: Norwegian Research Council GLOBVAC Program and Bionor Pharma ASA.
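The coprimary analysis described above, a binary endpoint modelled on treatment with country as a covariate, follows a standard pattern. The sketch below reproduces that pattern on simulated data; the 2:1 allocation and the five country labels are taken from the abstract, while the effect sizes and resumption rates are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 136
df = pd.DataFrame({
    "vacc4x": rng.binomial(1, 2 / 3, n),  # 2:1 randomisation to Vacc-4x vs placebo
    "country": rng.choice(["DE", "IT", "ES", "UK", "US"], size=n),
})
df["resumed_cart"] = rng.binomial(1, 1 / (1 + np.exp(-(-0.8 + 0.2 * df["vacc4x"]))))

# Treatment effect on cART resumption, with country included as a covariate.
fit = smf.logit("resumed_cart ~ vacc4x + C(country)", data=df).fit(disp=False)
or_treat = np.exp(fit.params["vacc4x"])
ci = np.exp(fit.conf_int().loc["vacc4x"])
print(f"OR for cART resumption (Vacc-4x vs placebo): {or_treat:.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```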
Abstract:
Background: Modelling epidemiological knowledge in validated clinical scores is a practical means of integrating EBM into usual care. Existing scores for cardiovascular disease have largely been developed in emergency settings, and few in primary care. Such a tool is needed by general practitioners (GPs) to evaluate the probability of ischemic heart disease (IHD) in patients with non-traumatic chest pain. Objective: To develop a predictive model to use as a clinical score for detecting IHD in patients with non-traumatic chest pain in primary care. Methods: A post-hoc secondary analysis of data from an observational study including 672 patients with chest pain, of whom 85 had IHD diagnosed by their GP during the year following their inclusion. A best-subset method was used to select 8 predictive variables from univariate analysis, which were fitted in a multivariate logistic regression model to define the score. Reliability of the model was assessed using the split-group method. Results: Significant predictors were: age (0-3 points), gender (1 point), having at least one cardiovascular risk factor (hypertension, dyslipidemia, diabetes, smoking, family history of CVD; 3 points), personal history of cardiovascular disease (1 point), duration of chest pain from 1 to 60 minutes (2 points), substernal chest pain (1 point), pain increasing with exertion (1 point) and absence of tenderness at palpation (1 point). The area under the ROC curve for the score was 0.95 (95% CI 0.93; 0.97). Patients were categorised into three groups: low risk of IHD (score under 6; n = 360), moderate risk of IHD (score from 6 to 8; n = 187) and high risk of IHD (score from 9 to 13; n = 125). The prevalence of IHD in these groups was 0%, 6.7% and 58.5%, respectively. Reliability of the model appears satisfactory, as the model developed from the derivation set accurately predicted (p = 0.948) the number of patients in each group of the validation set. Conclusion: This clinical score, based only on history and physical examination, can be an important tool for the general practitioner in predicting ischemic heart disease in patients complaining of chest pain. A score below 6 points (seen in more than half of our population) can avoid the need for complementary examinations (ECG, laboratory tests) in selected patients because of the very low risk of IHD. A score of 6 points or more warrants investigation to detect or rule out IHD. Further external validation is required in ambulatory settings.
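The point-based score can be written out directly from the weights listed above. In the sketch below the mapping from age to its 0-3 points is not specified in the abstract, so the age points are taken as a precomputed input; the example patient is invented, while the remaining point values and the <6 / 6-8 / 9-13 risk bands come from the abstract.

```python
def ihd_score(p: dict) -> int:
    """Point-based IHD score (0-13 points) using the weights listed in the abstract."""
    score = p["age_points"]                       # 0-3 points; age cut-offs not given in the abstract
    score += 1 if p["male"] else 0
    score += 3 if p["any_cv_risk_factor"] else 0  # hypertension, dyslipidemia, diabetes, smoking, family history
    score += 1 if p["history_cvd"] else 0
    score += 2 if p["pain_1_to_60_min"] else 0
    score += 1 if p["substernal_pain"] else 0
    score += 1 if p["pain_on_exertion"] else 0
    score += 1 if p["no_tenderness"] else 0
    return score

def risk_band(score: int) -> str:
    if score < 6:
        return "low"       # 0% observed IHD prevalence in the derivation data
    if score <= 8:
        return "moderate"  # 6.7%
    return "high"          # 58.5%

patient = {"age_points": 2, "male": True, "any_cv_risk_factor": True, "history_cvd": False,
           "pain_1_to_60_min": True, "substernal_pain": True, "pain_on_exertion": False,
           "no_tenderness": True}
print(ihd_score(patient), risk_band(ihd_score(patient)))
```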
Abstract:
BACKGROUND: The clinical course of HIV-1 infection is highly variable among individuals, at least in part as a result of genetic polymorphisms in the host. Toll-like receptors (TLRs) have a key role in innate immunity, and mutations in the genes encoding these receptors have been associated with increased or decreased susceptibility to infections. OBJECTIVES: To determine whether single-nucleotide polymorphisms (SNPs) in TLR2-4 and TLR7-9 influenced the natural course of HIV-1 infection. METHODS: Twenty-eight SNPs in TLRs were analysed in HAART-naive HIV-positive patients from the Swiss HIV Cohort Study. The SNPs were detected using Sequenom technology. Haplotypes were inferred using an expectation-maximization algorithm. The CD4 T cell decline was calculated using a least-squares regression. Patients with a rapid CD4 cell decline, less than the 15th percentile, were defined as rapid progressors. The risk of rapid progression associated with SNPs was estimated using a logistic regression model. Other candidate risk factors included age, sex and risk groups (heterosexual, homosexual and intravenous drug use). RESULTS: Two SNPs in TLR9 (1635A/G and +1174G/A) in linkage disequilibrium were associated with the rapid progressor phenotype: for 1635A/G, odds ratio (OR), 3.9 [95% confidence interval (CI), 1.7-9.2] for GA versus AA and OR, 4.7 (95% CI, 1.9-12.0) for GG versus AA (P = 0.0008). CONCLUSION: Rapid progression of HIV-1 infection was associated with TLR9 polymorphisms. Because of its potential implications for intervention strategies and vaccine developments, additional epidemiological and experimental studies are needed to confirm this association.
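Genotype-specific odds ratios of the kind reported above (GA vs AA, GG vs AA) come from a logistic regression with the genotype coded as a categorical variable, AA as the reference level, and adjustment for the listed covariates. The sketch below shows this coding on simulated genotypes; the allele frequencies, effect sizes and covariate distributions are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 800
df = pd.DataFrame({
    "tlr9_1635": rng.choice(["AA", "GA", "GG"], size=n, p=[0.40, 0.45, 0.15]),
    "age": rng.normal(38, 10, n),
    "male": rng.integers(0, 2, n),
    "risk_group": rng.choice(["heterosexual", "homosexual", "idu"], size=n),
})
effect = df["tlr9_1635"].map({"AA": 0.0, "GA": 1.0, "GG": 1.3})
df["rapid_progressor"] = rng.binomial(1, 1 / (1 + np.exp(-(-2.5 + effect))))

# Genotype as a categorical predictor with AA as the reference level.
fit = smf.logit("rapid_progressor ~ C(tlr9_1635, Treatment('AA')) + age + male + C(risk_group)",
                data=df).fit(disp=False)

ors = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors.filter(like="tlr9", axis=0).round(2))  # GA vs AA and GG vs AA
```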
Abstract:
OBJECTIVES: For certain major operations, inpatient mortality risk is lower in high-volume hospitals than in low-volume hospitals. Extending the analysis to a broader range of interventions and outcomes is necessary before adopting policies based on minimum volume thresholds. METHODS: Using the United States 2004 Nationwide Inpatient Sample, we assessed the effect of intervention-specific and overall hospital volume on surgical complications, potentially avoidable reoperations, and deaths across 1.4 million interventions in 353 hospitals. Outcome variations across hospitals were analyzed with a 3-level hierarchical logistic regression model (patients, surgical interventions, and hospitals), which took into account interventions on multiple organs, 144 intervention categories, and structural hospital characteristics. Discriminative performance and calibration were good. RESULTS: Hospitals with more experience in a given intervention had similar reoperation rates but lower mortality and complication rates (odds ratios per volume decile, 0.93 and 0.97, respectively). However, the benefit was limited to heart surgery and a small number of other operations. Risks were higher in hospitals that performed more interventions overall: the odds ratio per 1000 interventions was approximately 1.02 for each event. Even after adjustment for specific volume, mortality varied substantially across both high- and low-volume hospitals. CONCLUSION: Although the link between specific volume and certain inpatient outcomes suggests that specialization might help improve surgical safety, the variable magnitude of this link and the heterogeneity of the hospital effect do not support the systematic use of volume-based referrals. It may be more efficient to monitor risk-adjusted postoperative outcomes and to investigate facilities with worse-than-expected outcomes.
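A hierarchical logistic regression lets the baseline risk vary across hospitals while estimating the volume effect. The sketch below fits a simplified two-level version (a random intercept per hospital) with statsmodels' Bayesian mixed GLM on simulated data; it is not the paper's three-level patient/intervention/hospital model, and all data and effect sizes are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(9)
n_hosp, n_per = 30, 150
hospital = np.repeat(np.arange(n_hosp), n_per)
hosp_effect = rng.normal(0, 0.4, n_hosp)[hospital]  # simulated between-hospital variation
volume_decile = rng.integers(1, 11, hospital.size)
lp = -3.0 - 0.07 * volume_decile + hosp_effect      # higher specific volume, lower mortality
df = pd.DataFrame({
    "death": rng.binomial(1, 1 / (1 + np.exp(-lp))),
    "volume_decile": volume_decile,
    "hospital": hospital,
})

# Two-level random-intercept logistic model: fixed volume effect, random hospital intercepts.
model = BinomialBayesMixedGLM.from_formula(
    "death ~ volume_decile",
    {"hospital": "0 + C(hospital)"},
    df,
)
result = model.fit_vb()
# Exponentiating the posterior mean of the volume_decile coefficient shown in the
# summary gives the odds ratio per volume decile.
print(result.summary())
```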
Abstract:
We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first method was a standard logistic regression model, which included separate terms for the clinical covariates and for each of the genetic markers. This approach ignores a substantial amount of external information concerning effect sizes for these Genome-Wide Association Study (GWAS)-replicated SNPs. The second and third methods investigated two possible approaches to incorporating meta-analysed external SNP effect estimates: one via a weighted PCa 'risk' score based solely on the meta-analysis estimates, and the other incorporating both the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest increase in the area under the receiver operating characteristic curve, from 0.61 to 0.64. The value of our methods comparison is likely to lie in observations of performance similarities, rather than differences, among three approaches with very different resource requirements. The two methods that included external information performed best, but only marginally so, despite substantial differences in complexity.
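The second method described, a weighted risk score built from externally meta-analysed per-allele log odds ratios and added to the clinical model, can be sketched as below. The SNP count (33) and cohort size (1,547) are taken from the abstract; the genotypes, external weights, clinical covariates and effect sizes are simulated, so the AUC values are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
n, n_snp = 1547, 33
genotypes = rng.binomial(2, 0.3, size=(n, n_snp))  # risk-allele counts (0/1/2), simulated
external_log_or = rng.normal(0.05, 0.05, n_snp)    # stand-in for meta-analysed per-allele log ORs

clinical = pd.DataFrame({
    "age": rng.normal(62, 6, n),
    "psa": rng.lognormal(1.0, 0.5, n),
    "family_history": rng.binomial(1, 0.15, n),
})
grs = genotypes @ external_log_or                  # weighted genetic risk score
lp = -4 + 0.03 * clinical["age"] + 0.3 * np.log(clinical["psa"]) + 0.8 * grs
y = rng.binomial(1, 1 / (1 + np.exp(-lp)))

X_clin = sm.add_constant(clinical)
X_full = sm.add_constant(clinical.assign(grs=grs))  # clinical covariates plus the risk score
auc_clin = roc_auc_score(y, sm.Logit(y, X_clin).fit(disp=False).predict(X_clin))
auc_full = roc_auc_score(y, sm.Logit(y, X_full).fit(disp=False).predict(X_full))
print(f"AUC clinical only: {auc_clin:.2f}   AUC clinical + risk score: {auc_full:.2f}")
```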
Abstract:
BACKGROUND: The aim of the current study was to assess whether widely used nutritional parameters are correlated with the nutritional risk score (NRS-2002) in identifying postoperative morbidity, and to evaluate the role of nutritionists in nutritional assessment. METHODS: A randomized trial on preoperative nutritional interventions (NCT00512213) provided the study cohort of 152 patients at nutritional risk (NRS-2002 ≥3), with comprehensive phenotyping including 17 nutritional parameters elaborated by nutritional specialists and 5 potential demographic and surgical confounders. Risk factors for overall, severe (Dindo-Clavien 3-5) and infectious complications were identified by univariate analysis; parameters with P<0.20 were then entered into a multiple logistic regression model. RESULTS: The final analysis included 140 patients with complete datasets. Of these, 61 patients (43.6%) were overweight, and 72 patients (51.4%) experienced at least one complication of any degree of severity. Univariate analysis identified a correlation between few (≤3) active co-morbidities (OR=4.94; 95% CI: 1.47-16.56, p=0.01) and overall complications. Patients screened as malnourished by nutritional specialists presented fewer overall complications than those not malnourished (OR=0.47; 95% CI: 0.22-0.97, p=0.043). Severe postoperative complications occurred more often in patients with low lean body mass (OR=1.06; 95% CI: 1-1.12, p=0.028). Few (≤3) active co-morbidities (OR=8.8; 95% CI: 1.12-68.99, p=0.008) were related to postoperative infections. Patients screened as malnourished by nutritional specialists presented fewer infectious complications (OR=0.28; 95% CI: 0.1-0.78, p=0.014) than those not malnourished. Multivariate analysis identified few co-morbidities (OR=6.33; 95% CI: 1.75-22.84, p=0.005), low weight loss (OR=1.08; 95% CI: 1.02-1.14, p=0.006) and low hemoglobin concentration (OR=2.84; 95% CI: 1.22-6.59, p=0.021) as independent risk factors for overall postoperative complications. Compliance with nutritional supplements (OR=0.37; 95% CI: 0.14-0.97, p=0.041) and supplementation of malnourished patients as assessed by nutritional specialists (OR=0.24; 95% CI: 0.08-0.69, p=0.009) were independently associated with decreased infectious complications. CONCLUSIONS: Nutritional support based upon NRS-2002 screening might result in overnutrition, with potentially deleterious clinical consequences. We emphasize the importance of a detailed assessment of nutritional status by a dedicated specialist before deciding on early nutritional intervention for patients with an initial NRS-2002 score of ≥3.
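The two-step strategy described in the methods, univariate screening with a liberal P < 0.20 threshold followed by a multiple logistic regression of the retained parameters, is sketched below on simulated data. The candidate variable names loosely echo the abstract, but the data, effect sizes and the final selection are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 140
df = pd.DataFrame({
    "few_comorbidities": rng.binomial(1, 0.4, n),
    "weight_loss_pct": rng.gamma(2.0, 2.0, n),
    "hemoglobin_low": rng.binomial(1, 0.3, n),
    "albumin": rng.normal(38, 5, n),
    "bmi": rng.normal(26, 5, n),
})
lp = -1.5 + 1.2 * df["few_comorbidities"] + 0.10 * df["weight_loss_pct"] + 0.8 * df["hemoglobin_low"]
df["complication"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

candidates = ["few_comorbidities", "weight_loss_pct", "hemoglobin_low", "albumin", "bmi"]

# Step 1: univariate logistic regressions; keep parameters with P < 0.20.
kept = []
for var in candidates:
    uni = sm.Logit(df["complication"], sm.add_constant(df[[var]])).fit(disp=False)
    if uni.pvalues[var] < 0.20:
        kept.append(var)

# Step 2: multiple logistic regression with the retained parameters.
multi = sm.Logit(df["complication"], sm.add_constant(df[kept])).fit(disp=False)
ors = np.exp(pd.concat([multi.params, multi.conf_int()], axis=1)).drop("const")
ors.columns = ["OR", "2.5%", "97.5%"]
print("Retained after univariate screening:", kept)
print(ors.round(2))
```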