942 results for Cox Proportional Hazards Model
Abstract:
BACKGROUND Waist circumference (WC) is a simple and reliable measure of fat distribution that may add to the prediction of type 2 diabetes (T2D), but previous studies have been too small to reliably quantify the relative and absolute risk of future diabetes by WC at different levels of body mass index (BMI). METHODS AND FINDINGS The prospective InterAct case-cohort study was conducted in 26 centres in eight European countries and consists of 12,403 incident T2D cases and a stratified subcohort of 16,154 individuals from a total cohort of 340,234 participants with 3.99 million person-years of follow-up. We used Prentice-weighted Cox regression and random effects meta-analysis methods to estimate hazard ratios for T2D. Kaplan-Meier estimates of the cumulative incidence of T2D were calculated. BMI and WC were each independently associated with T2D, with WC being a stronger risk factor in women than in men. Risk increased across groups defined by BMI and WC; compared to low normal weight individuals (BMI 18.5-22.4 kg/m²) with a low WC (<94/80 cm in men/women), the hazard ratio of T2D was 22.0 (95% confidence interval 14.3; 33.8) in men and 31.8 (25.2; 40.2) in women with grade 2 obesity (BMI ≥ 35 kg/m²) and a high WC (>102/88 cm). Among the large group of overweight individuals, WC measurement was highly informative and facilitated the identification of a subgroup of overweight people with high WC whose 10-y T2D cumulative incidence (men, 70 per 1,000 person-years; women, 44 per 1,000 person-years) was comparable to that of the obese group (50-103 per 1,000 person-years in men and 28-74 per 1,000 person-years in women). CONCLUSIONS WC is independently and strongly associated with T2D, particularly in women, and should be more widely measured for risk stratification. If targeted measurement is necessary for reasons of resource scarcity, measuring WC in overweight individuals may be an effective strategy, since it identifies a high-risk subgroup of individuals who could benefit from individualised preventive action.
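The weighted case-cohort Cox analysis described in this abstract can be sketched in Python with the lifelines package. This is a minimal illustration on simulated data, not the InterAct code: the column names are invented, and a simplified inverse-sampling-fraction weight stands in for the exact Prentice scheme.

```python
# A minimal sketch, not the InterAct code: a weighted Cox fit for case-cohort
# data using lifelines, on simulated data.  The 'prentice_weight' column uses a
# simplified inverse-sampling-fraction weight as a stand-in for the exact
# Prentice scheme (which treats cases outside the subcohort differently).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "bmi": rng.normal(27, 4, size=n),    # kg/m2 (toy values)
    "wc": rng.normal(90, 12, size=n),    # waist circumference in cm (toy values)
})
t_event = rng.exponential(60 / np.exp(0.05 * (df.bmi - 27) + 0.03 * (df.wc - 90)))
df["time"] = np.minimum(t_event, 10.0)   # administrative censoring at 10 years
df["event"] = (t_event <= 10.0).astype(int)
# Assumed 5% subcohort sampling fraction: non-cases up-weighted, cases weight 1.
df["prentice_weight"] = np.where(df.event == 1, 1.0, 1.0 / 0.05)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="prentice_weight", robust=True)  # robust variance is needed with weights
cph.print_summary()  # exp(coef) column: hazard ratios for BMI and WC with 95% CIs
```

The per-country random-effects pooling used by InterAct would then combine such centre-specific coefficients; a pooling sketch appears after the EPIC weight-gain abstract below.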
Abstract:
Between 1984 and 2006, 12 959 people with HIV/AIDS (PWHA) in the Swiss HIV Cohort Study contributed a total of 73 412 person-years (py) of follow-up, 35 551 of which derived from PWHA treated with highly active antiretroviral therapy (HAART). Five hundred and ninety-seven incident Kaposi sarcoma (KS) cases were identified, of whom 52 were among HAART users. Cox regression was used to estimate hazard ratios (HR) and corresponding 95% confidence intervals (CI). Kaposi sarcoma incidence fell abruptly in 1996-1998 to reach a plateau at 1.4 per 1000 py afterwards. Men having sex with men and birth in Africa or the Middle East were associated with KS in both non-users and users of HAART, but the risk pattern by CD4 cell count differed. Only a very low CD4 cell count (<50 cells/μl) at enrollment or at HAART initiation was significantly associated with KS among HAART users. The HR for KS declined steeply in the first months after HAART initiation and remained low 7-10 years afterwards (HR, 0.06; 95% CI, 0.02-0.17). Thirty-three out of 52 (63.5%) KS cases among HAART users arose among PWHA who had stopped treatment or had used HAART for less than 6 months.
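The incidence figures above are crude rates per 1,000 person-years. The short sketch below shows one common way such a rate and an exact Poisson confidence interval can be computed; the event and person-year counts plugged in are simply the HAART-era figures quoted in the abstract, used for illustration.

```python
# Sketch: crude incidence rate per 1,000 person-years with an exact Poisson CI.
from scipy.stats import chi2

def incidence_rate(events, person_years, per=1000.0, alpha=0.05):
    """Rate per `per` person-years with exact (chi-square based) Poisson limits."""
    rate = events / person_years * per
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return rate, lo / person_years * per, hi / person_years * per

# 52 KS cases over 35,551 person-years on HAART (figures quoted in the abstract).
rate, lo, hi = incidence_rate(52, 35551)
print(f"{rate:.2f} per 1,000 py (95% CI {lo:.2f}-{hi:.2f})")
```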
Abstract:
BACKGROUND Identifying individuals at high risk of excess weight gain may help target prevention efforts at those at risk of the various metabolic diseases associated with weight gain. Our aim was to develop a risk score to identify these individuals and validate it in an external population. METHODS We used lifestyle and nutritional data from 53,758 individuals followed for a median of 5.4 years from six centers of the European Prospective Investigation into Cancer and Nutrition (EPIC) to develop a risk score to predict substantial weight gain (SWG) for the next 5 years (derivation sample). Assuming linear weight gain, SWG was defined as gaining ≥ 10% of baseline weight during follow-up. Proportional hazards models were used to identify significant predictors of SWG separately by EPIC center. Regression coefficients of the predictors were pooled using random-effects meta-analysis. Pooled coefficients were used to assign weights to each predictor. The risk score was calculated as a linear combination of the predictors. External validity of the score was evaluated in nine other centers of the EPIC study (validation sample). RESULTS Our final model included age, sex, baseline weight, level of education, baseline smoking, sports activity, alcohol use, and intake of six food groups. The model's discriminatory ability, measured by the area under a receiver operating characteristic curve, was 0.64 (95% CI = 0.63-0.65) in the derivation sample and 0.57 (95% CI = 0.56-0.58) in the validation sample, with variation between centers. Positive and negative predictive values for the optimal cut-off value of ≥ 200 points were 9% and 96%, respectively. CONCLUSION The present risk score confidently excluded a large proportion of individuals from being at any appreciable risk of developing SWG within the next 5 years. Future studies, however, may attempt to further refine the positive prediction of the score.
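The score construction described above — per-center coefficients pooled by random-effects meta-analysis, then used as weights in a linear combination whose discrimination is summarized by an ROC AUC — can be sketched as follows. DerSimonian-Laird pooling is one common random-effects choice (the abstract does not name the estimator), and all data, coefficient values, and predictor names below are made up for illustration; this is not the EPIC code.

```python
# Sketch: pool per-center coefficients with DerSimonian-Laird random effects,
# use the pooled coefficients as weights in a linear risk score, and check
# discrimination with an ROC AUC.  All numbers are invented.
import numpy as np
from sklearn.metrics import roc_auc_score

def dersimonian_laird(beta, se):
    """Random-effects pooled estimate of one coefficient across centers."""
    beta, se = np.asarray(beta, float), np.asarray(se, float)
    w = 1.0 / se**2                               # inverse-variance (fixed-effect) weights
    beta_fe = np.sum(w * beta) / np.sum(w)
    q = np.sum(w * (beta - beta_fe) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(beta) - 1)) / c)    # between-center heterogeneity
    w_re = 1.0 / (se**2 + tau2)
    return np.sum(w_re * beta) / np.sum(w_re)

# Hypothetical per-center log-hazard-ratio estimates for two predictors.
pooled = np.array([
    dersimonian_laird([0.30, 0.25, 0.40], [0.10, 0.12, 0.15]),  # e.g. baseline smoking
    dersimonian_laird([0.05, 0.08, 0.02], [0.02, 0.03, 0.02]),  # e.g. age (per year, standardized)
])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                     # standardized predictor matrix (toy data)
score = X @ pooled                                # risk score = linear combination of predictors
y = rng.binomial(1, 1 / (1 + np.exp(-score)))     # toy substantial-weight-gain outcome
print("AUC:", round(roc_auc_score(y, score), 3))
```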
Abstract:
BACKGROUND The purpose of this study was to assess the incidence of neurological complications in patients with infective endocarditis, the risk factors for their development, their influence on the clinical outcome, and the impact of cardiac surgery. METHODS AND RESULTS This was a retrospective analysis of prospectively collected data on a multicenter cohort of 1345 consecutive episodes of left-sided infective endocarditis from 8 centers in Spain. Cox regression models were developed to analyze variables predictive of neurological complications and associated mortality. Three hundred forty patients (25%) experienced such complications: 192 patients (14%) had ischemic events, 86 (6%) had encephalopathy/meningitis, 60 (4%) had hemorrhages, and 2 (1%) had brain abscesses. Independent risk factors associated with all neurological complications were vegetation size ≥3 cm (hazard ratio [HR] 1.91), Staphylococcus aureus as a cause (HR 2.47), mitral valve involvement (HR 1.29), and anticoagulant therapy (HR 1.31). This last variable was particularly related to a greater incidence of hemorrhagic events (HR 2.71). Overall mortality was 30%, and neurological complications had a negative impact on outcome (45% of deaths versus 24% in patients without these complications; P<0.01), although only moderate to severe ischemic stroke (HR 1.63) and brain hemorrhage (HR 1.73) were significantly associated with a poorer prognosis. Antimicrobial treatment reduced (by 33% to 75%) the risk of neurological complications. In patients with hemorrhage, mortality was higher when surgery was performed within 4 weeks of the hemorrhagic event (75% versus 40% in later surgery). CONCLUSIONS Moderate to severe ischemic stroke and brain hemorrhage were found to have a significant negative impact on the outcome of infective endocarditis. Early appropriate antimicrobial treatment is critical, and transitory discontinuation of anticoagulant therapy should be considered.
Abstract:
BACKGROUND Understanding of the genetic basis of type 2 diabetes (T2D) has progressed rapidly, but the interactions between common genetic variants and lifestyle risk factors have not been systematically investigated in studies with adequate statistical power. Therefore, we aimed to quantify the combined effects of genetic and lifestyle factors on risk of T2D in order to inform strategies for prevention. METHODS AND FINDINGS The InterAct study includes 12,403 incident T2D cases and a representative sub-cohort of 16,154 individuals from a cohort of 340,234 European participants with 3.99 million person-years of follow-up. We studied the combined effects of an additive genetic T2D risk score and modifiable and non-modifiable risk factors using Prentice-weighted Cox regression and random effects meta-analysis methods. The effect of the genetic score was significantly greater in younger individuals (p for interaction = 1.20×10⁻⁴). Relative genetic risk (per standard deviation [4.4 risk alleles]) was also larger in participants who were leaner, both in terms of body mass index (p for interaction = 1.50×10⁻³) and waist circumference (p for interaction = 7.49×10⁻⁹). Examination of absolute risks by strata showed the importance of obesity for T2D risk. The 10-y cumulative incidence of T2D rose from 0.25% to 0.89% across extreme quartiles of the genetic score in normal weight individuals, compared to 4.22% to 7.99% in obese individuals. We detected no significant interactions between the genetic score and sex, diabetes family history, physical activity, or dietary habits assessed by a Mediterranean diet score. CONCLUSIONS The relative effect of a T2D genetic risk score is greater in younger and leaner participants. However, this sub-group is at low absolute risk and would not be a logical target for preventive interventions. The high absolute risk associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
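A simple way to test the kind of interaction reported above — a per-SD genetic risk score whose relative effect varies with BMI — is to include a product term in the Cox model. The sketch below does this on simulated data with lifelines; the Prentice weighting and per-country meta-analysis used in InterAct are omitted, and all column names and effect sizes are assumptions.

```python
# Sketch: gene score x BMI interaction in a Cox model (lifelines), on toy data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "grs": rng.normal(size=n),                      # genetic risk score, standardized (per SD)
    "bmi": rng.normal(27, 4, size=n),
})
# Simulate event times in which the relative effect of the score shrinks as BMI rises.
hazard = np.exp(0.3 * df.grs + 0.08 * (df.bmi - 27) - 0.02 * df.grs * (df.bmi - 27))
t_event = rng.exponential(50 / hazard)
df["time"] = np.minimum(t_event, 10.0)               # administrative censoring at 10 years
df["event"] = (t_event <= 10.0).astype(int)
df["grs_x_bmi"] = df.grs * (df.bmi - 27)             # centred product (interaction) term

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")  # covariates: grs, bmi, grs_x_bmi
cph.print_summary()                                  # the grs_x_bmi row tests the interaction
```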
Abstract:
INTRODUCTION Statins have pleiotropic effects that could influence the prevention and outcome of some infectious diseases. There is no information about their specific effect on Staphylococcus aureus bacteremia (SAB). METHODS A prospective cohort study was performed including all episodes of SAB diagnosed in patients aged ≥18 years admitted to a 950-bed tertiary hospital from March 2008 to January 2011. The main outcome variable was 14-day mortality, and the secondary outcome variables were 30-day mortality, persistent bacteremia (PB) and presence of severe sepsis or septic shock at diagnosis of SAB. The effect of statin therapy at the onset of SAB was studied by multivariate logistic regression and Cox regression analysis, including a propensity score for statin therapy. RESULTS We included 160 episodes. Thirty-three patients (21.3%) were receiving statins at the onset of SAB; 14-day mortality was 21.3%. After adjustment for age, Charlson index, Pitt score, adequate management, and high-risk source, statin therapy had a protective effect on 14-day mortality (adjusted OR = 0.08; 95% CI: 0.01-0.66; p = 0.02) and PB (OR = 0.89; 95% CI: 0.27-1.00; p = 0.05), although the effect was not significant on 30-day mortality (OR = 0.35; 95% CI: 0.10-1.23; p = 0.10) or presentation with severe sepsis or septic shock (adjusted OR = 0.89; 95% CI: 0.27-2.94; p = 0.8). Nor could an effect on 30-day mortality be demonstrated in the Cox analysis (adjusted HR = 0.5; 95% CI: 0.19-1.29; p = 0.15). CONCLUSIONS Statin treatment in patients with SAB was associated with lower early mortality and less persistent bacteremia. Randomized studies are necessary to identify the role of statins in the treatment of patients with SAB.
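The propensity-score adjustment mentioned in the methods — modeling the probability of receiving statins from baseline covariates, then including that probability in the outcome model — can be sketched as below. The toy data, covariates, and effect sizes are invented, and including the score as a covariate is only one of several ways to use it (matching and weighting are common alternatives).

```python
# Sketch: propensity score for statin therapy included as a covariate in a
# logistic model for 14-day mortality.  Toy data; the three covariates are
# placeholders for age, Charlson index, Pitt score, etc.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 160
X = rng.normal(size=(n, 3))                            # baseline covariates
statin = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment depends on covariates
death14 = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 1] - 1.0 * statin - 1.0))))

# Step 1: propensity score = P(statin | covariates).
ps_model = sm.Logit(statin, sm.add_constant(X)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(X))

# Step 2: outcome model adjusted for treatment and the propensity score.
design = sm.add_constant(np.column_stack([statin, ps]))
out_model = sm.Logit(death14, design).fit(disp=0)
print("adjusted OR for statin therapy:", round(float(np.exp(out_model.params[1])), 2))
```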
Abstract:
OBJECTIVES: The aim of this study was to evaluate the risk factors associated with Contegra graft (Medtronic, Minneapolis, MN, USA) infection after reconstruction of the right ventricular outflow tract. METHODS: One hundred and six Contegra grafts were implanted between April 1999 and April 2010 for the Ross procedure (n = 46), isolated pulmonary valve replacement (n = 32), tetralogy of Fallot (n = 24), double-outlet right ventricle (n = 7), truncus arteriosus (n = 4), switch operation (n = 1) and redo pulmonary valve replacement (n = 2). The median age of the patients was 13 years (range 0-54 years). Follow-up was complete in all cases, with a median duration of 7.6 years (range 1.7-12.7 years). RESULTS: There were 3 cases of in-hospital mortality. The 7-year survival rate was 95.7%. Despite lifelong endocarditis prophylaxis, Contegra graft infection was diagnosed in 12 (11.3%) patients at a median time of 4.4 years (range 0.4-8.7 years). Univariate analysis of preoperative, perioperative and postoperative variables identified the following risk factors for time to infection: female gender with a hazard ratio (HR) of 0.19 (P = 0.042), systemic-to-pulmonary shunt (HR 6.46, P < 0.01), hypothermia (HR 0.79, P = 0.014), postoperative renal insufficiency (HR 11.97, P = 0.015) and implantation of a permanent pacemaker during hospitalization (HR 5.29, P = 0.075). In 2 cases, conservative therapy was successful and, in 10 patients, the infected valve was replaced. The Contegra graft was replaced by a homograft in 2 cases and by a new Contegra graft in 8 cases. A Cox proportional hazards model indicated that time to graft infection was significantly associated with tetralogy of Fallot (HR 0.06, P = 0.01), systemic-to-pulmonary shunt (HR 64.71, P < 0.01) and hypothermia (HR 0.77, P < 0.01). CONCLUSION: Contegra graft infection affected 11.3% of cases in our cohort and thus may be considered a frequent entity that can be predicted by both intraoperative and early postoperative factors. Once the diagnosis of infection associated with the Contegra graft was confirmed, surgical treatment was the therapy of choice.
Abstract:
BACKGROUND: Toll-like receptors (TLRs) are essential components of the immune response to fungal pathogens. We examined the role of TLR polymorphisms in conferring a risk of invasive aspergillosis among recipients of allogeneic hematopoietic-cell transplants. METHODS: We analyzed 20 single-nucleotide polymorphisms (SNPs) in the toll-like receptor 2 gene (TLR2), the toll-like receptor 3 gene (TLR3), the toll-like receptor 4 gene (TLR4), and the toll-like receptor 9 gene (TLR9) in a cohort of 336 recipients of hematopoietic-cell transplants and their unrelated donors. The risk of invasive aspergillosis was assessed with the use of multivariate Cox regression analysis. The analysis was replicated in a validation study involving 103 case patients and 263 matched controls who received hematopoietic-cell transplants from related and unrelated donors. RESULTS: In the discovery study, two donor TLR4 haplotypes (S3 and S4) increased the risk of invasive aspergillosis (adjusted hazard ratio for S3, 2.20; 95% confidence interval [CI], 1.14 to 4.25; P=0.02; adjusted hazard ratio for S4, 6.16; 95% CI, 1.97 to 19.26; P=0.002). The haplotype S4 was present in carriers of two SNPs in strong linkage disequilibrium (1063 A/G [D299G] and 1363 C/T [T399I]) that influence TLR4 function. In the validation study, donor haplotype S4 also increased the risk of invasive aspergillosis (adjusted odds ratio, 2.49; 95% CI, 1.15 to 5.41; P=0.02); the association was present in unrelated recipients of hematopoietic-cell transplants (odds ratio, 5.00; 95% CI, 1.04 to 24.01; P=0.04) but not in related recipients (odds ratio, 2.29; 95% CI, 0.93 to 5.68; P=0.07). In the discovery study, seropositivity for cytomegalovirus (CMV) in donors or recipients, donor positivity for S4, or both, as compared with negative results for CMV and S4, were associated with an increase in the 3-year probability of invasive aspergillosis (12% vs. 1%, P=0.02) and death that was not related to relapse (35% vs. 22%, P=0.02). CONCLUSIONS: This study suggests an association between the donor TLR4 haplotype S4 and the risk of invasive aspergillosis among recipients of hematopoietic-cell transplants from unrelated donors.
Abstract:
BACKGROUND: In contrast with established evidence linking high doses of ionizing radiation with childhood cancer, research on low-dose ionizing radiation and childhood cancer has produced inconsistent results. OBJECTIVE: We investigated the association between domestic radon exposure and childhood cancers, particularly leukemia and central nervous system (CNS) tumors. METHODS: We conducted a nationwide census-based cohort study including all children < 16 years of age living in Switzerland on 5 December 2000, the date of the 2000 census. Follow-up lasted until the date of diagnosis, death, emigration, a child's 16th birthday, or 31 December 2008. Domestic radon levels were estimated for each individual home address using a model developed and validated based on approximately 45,000 measurements taken throughout Switzerland. Data were analyzed with Cox proportional hazard models adjusted for child age, child sex, birth order, parents' socioeconomic status, environmental gamma radiation, and period effects. RESULTS: In total, 997 childhood cancer cases were included in the study. Compared with children exposed to a radon concentration below the median (< 77.7 Bq/m3), adjusted hazard ratios for children with exposure ≥ the 90th percentile (≥ 139.9 Bq/m3) were 0.93 (95% CI: 0.74, 1.16) for all cancers, 0.95 (95% CI: 0.63, 1.43) for all leukemias, 0.90 (95% CI: 0.56, 1.43) for acute lymphoblastic leukemia, and 1.05 (95% CI: 0.68, 1.61) for CNS tumors. CONCLUSIONS: We did not find evidence that domestic radon exposure is associated with childhood cancer, despite relatively high radon levels in Switzerland.
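The exposure handling described above — a continuous radon estimate split at the median and the 90th percentile, with adjusted hazard ratios against the lowest category — can be sketched as follows on simulated data. The cut points echo the percentiles quoted in the abstract, but the data, covariates, and column names are invented.

```python
# Sketch: categorize a continuous exposure at its median and 90th percentile and
# estimate adjusted hazard ratios against the below-median reference category.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "radon": rng.lognormal(mean=4.3, sigma=0.5, size=n),  # estimated Bq/m3 (toy values)
    "age": rng.uniform(0, 16, size=n),                    # adjustment covariate (toy)
})
t_event = rng.exponential(400, size=n)                    # latent time to diagnosis (years, toy)
df["time"] = np.minimum(t_event, 8.0)                     # follow-up ends after 8 years
df["event"] = (t_event <= 8.0).astype(int)

cuts = df.radon.quantile([0.5, 0.9]).to_numpy()
df["radon_mid"] = ((df.radon >= cuts[0]) & (df.radon < cuts[1])).astype(int)
df["radon_high"] = (df.radon >= cuts[1]).astype(int)      # reference: below-median exposure

cph = CoxPHFitter()
cph.fit(df[["time", "event", "age", "radon_mid", "radon_high"]],
        duration_col="time", event_col="event")
cph.print_summary()   # HRs for mid / high radon categories vs. below the median
```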
Abstract:
BACKGROUND: Prognosis prediction for resected primary colon cancer is based on the Tumor Node Metastasis (TNM) staging system. We investigated whether four well-documented gene expression risk scores can improve patient stratification. METHODS: Microarray-based versions of the risk scores were applied to a large independent cohort of 688 stage II/III tumors from the PETACC-3 trial. Prognostic value for relapse-free survival (RFS), survival after relapse (SAR), and overall survival (OS) was assessed by regression analysis. Improvement over a reference prognostic model was assessed with the area under the curve (AUC) of receiver operating characteristic (ROC) curves. All statistical tests were two-sided, except for the AUC increase. RESULTS: All four risk scores (RSs) showed a statistically significant association (single-test, P < .0167) with OS or RFS in univariate models, but with HRs below 1.38 per interquartile range. Three scores were predictors of shorter RFS, one of shorter SAR. Each RS could only marginally improve an RFS or OS model based on the known factors T-stage, N-stage, and microsatellite instability (MSI) status (AUC gains < 0.025 units). The pairwise interscore agreement was never high (maximal Spearman correlation = 0.563). A combined score showed a trend to higher prognostic value and a higher AUC increase for OS (HR = 1.74, 95% confidence interval [CI] = 1.44 to 2.10, P < .001, AUC from 0.6918 to 0.7321) and RFS (HR = 1.56, 95% CI = 1.33 to 1.84, P < .001, AUC from 0.6723 to 0.6945) than any single score. CONCLUSIONS: The four tested gene expression-based risk scores provide prognostic information but contribute only marginally to improving models based on established risk factors. A combination of the risk scores might provide more robust information. Predictors of RFS and SAR might need to be different.
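The incremental-value assessment above — comparing the AUC of a model with established factors (T-stage, N-stage, MSI) to the same model plus a gene-expression score — is sketched below on simulated data, with a binary relapse endpoint standing in for the trial's time-to-event analysis. All variable names, coefficients, and the in-sample AUC comparison are illustrative assumptions, not the PETACC-3 analysis.

```python
# Sketch: incremental discrimination of a gene-expression risk score over a
# reference model of established factors, measured as an AUC gain (toy data,
# in-sample AUCs for illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 688
t_stage = rng.integers(2, 5, size=n)          # T2-T4
n_stage = rng.integers(0, 3, size=n)          # N0-N2
msi = rng.binomial(1, 0.15, size=n)
risk_score = rng.normal(size=n)               # hypothetical gene-expression score
logit = -2 + 0.4 * t_stage + 0.6 * n_stage - 0.8 * msi + 0.3 * risk_score
relapse = rng.binomial(1, 1 / (1 + np.exp(-logit)))

base = np.column_stack([t_stage, n_stage, msi])          # reference model
full = np.column_stack([base, risk_score])               # reference model + score

auc_base = roc_auc_score(relapse, LogisticRegression().fit(base, relapse).predict_proba(base)[:, 1])
auc_full = roc_auc_score(relapse, LogisticRegression().fit(full, relapse).predict_proba(full)[:, 1])
print(f"AUC {auc_base:.3f} -> {auc_full:.3f}, gain {auc_full - auc_base:.3f}")
```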
Abstract:
BACKGROUND: The chemokine RANTES (regulated on activation, normal T-cell expressed and secreted)/CCL5 is involved in the pathogenesis of cardiovascular disease in mice, whereas less is known in humans. We hypothesised that its relevance for atherosclerosis should be reflected by associations between CCL5 gene variants, RANTES serum concentrations and protein levels in atherosclerotic plaques and risk for coronary events. METHODS AND FINDINGS: We conducted a case-cohort study within the population-based MONICA/KORA Augsburg studies. Baseline RANTES serum levels were measured in 363 individuals with incident coronary events and 1,908 non-cases (mean follow-up: 10.2±4.8 years). Cox proportional hazard models adjusting for age, sex, body mass index, metabolic factors and lifestyle factors revealed no significant association between RANTES and incident coronary events (HR [95% CI] for increasing RANTES tertiles 1.0, 1.03 [0.75-1.42] and 1.11 [0.81-1.54]). None of six CCL5 single nucleotide polymorphisms and no common haplotype showed significant associations with coronary events. Also in the CARDIoGRAM study (>22,000 cases, >60,000 controls), none of these CCL5 SNPs was significantly associated with coronary artery disease. In the prospective Athero-Express biobank study, RANTES plaque levels were measured in 606 atherosclerotic lesions from patients who underwent carotid endarterectomy. RANTES content in atherosclerotic plaques was positively associated with macrophage infiltration and inversely associated with plaque calcification. However, there was no significant association between RANTES content in plaques and risk for coronary events (mean follow-up 2.8±0.8 years). CONCLUSIONS: High RANTES plaque levels were associated with an unstable plaque phenotype. However, the absence of associations between (i) RANTES serum levels, (ii) CCL5 genotypes and (iii) RANTES content in carotid plaques and either coronary artery disease or incident coronary events in our cohorts suggests that RANTES may not be a novel coronary risk biomarker. However, the potential relevance of RANTES levels in platelet-poor plasma needs to be investigated in further studies.
Abstract:
BACKGROUND: Postmenopausal women with hormone receptor-positive early breast cancer have persistent, long-term risk of breast-cancer recurrence and death. Therefore, trials assessing endocrine therapies for this patient population need extended follow-up. We present an update of efficacy outcomes in the Breast International Group (BIG) 1-98 study at 8·1 years median follow-up. METHODS: BIG 1-98 is a randomised, phase 3, double-blind trial of postmenopausal women with hormone receptor-positive early breast cancer that compares 5 years of tamoxifen or letrozole monotherapy, or sequential treatment with 2 years of one of these drugs followed by 3 years of the other. Randomisation was done with permuted blocks, and stratified according to the two-arm or four-arm randomisation option, participating institution, and chemotherapy use. Patients, investigators, data managers, and medical reviewers were masked. The primary efficacy endpoint was disease-free survival (events were invasive breast cancer relapse, second primaries [contralateral breast and non-breast], or death without previous cancer event). Secondary endpoints were overall survival, distant recurrence-free interval (DRFI), and breast cancer-free interval (BCFI). The monotherapy comparison included patients randomly assigned to tamoxifen or letrozole for 5 years. In 2005, after a significant disease-free survival benefit was reported for letrozole as compared with tamoxifen, a protocol amendment facilitated the crossover to letrozole of patients who were still receiving tamoxifen alone; Cox models and Kaplan-Meier estimates with inverse probability of censoring weighting (IPCW) are used to account for selective crossover to letrozole of patients (n=619) in the tamoxifen arm. Comparison of sequential treatments to letrozole monotherapy included patients enrolled and randomly assigned to letrozole for 5 years, letrozole for 2 years followed by tamoxifen for 3 years, or tamoxifen for 2 years followed by letrozole for 3 years. Treatment has ended for all patients and detailed safety results for adverse events that occurred during the 5 years of treatment have been reported elsewhere. Follow-up is continuing for those enrolled in the four-arm option. BIG 1-98 is registered at ClinicalTrials.gov, number NCT00004205. FINDINGS: 8010 patients were included in the trial, with a median follow-up of 8·1 years (range 0-12·4). 2459 were randomly assigned to monotherapy with tamoxifen for 5 years and 2463 to monotherapy with letrozole for 5 years. In the four-arm option of the trial, 1546 were randomly assigned to letrozole for 5 years, 1548 to tamoxifen for 5 years, 1540 to letrozole for 2 years followed by tamoxifen for 3 years, and 1548 to tamoxifen for 2 years followed by letrozole for 3 years. At a median follow-up of 8·7 years from randomisation (range 0-12·4), letrozole monotherapy was significantly better than tamoxifen, whether by IPCW or intention-to-treat analysis (IPCW disease-free survival HR 0·82 [95% CI 0·74-0·92], overall survival HR 0·79 [0·69-0·90], DRFI HR 0·79 [0·68-0·92], BCFI HR 0·80 [0·70-0·92]; intention-to-treat disease-free survival HR 0·86 [0·78-0·96], overall survival HR 0·87 [0·77-0·999], DRFI HR 0·86 [0·74-0·998], BCFI HR 0·86 [0·76-0·98]). At a median follow-up of 8·0 years from randomisation (range 0-11·2) for the comparison of the sequential groups with letrozole monotherapy, there were no statistically significant differences in any of the four endpoints for either sequence.
8-year intention-to-treat estimates (each with SE ≤1·1%) for letrozole monotherapy, letrozole followed by tamoxifen, and tamoxifen followed by letrozole were 78·6%, 77·8%, 77·3% for disease-free survival; 87·5%, 87·7%, 85·9% for overall survival; 89·9%, 88·7%, 88·1% for DRFI; and 86·1%, 85·3%, 84·3% for BCFI. INTERPRETATION: For postmenopausal women with endocrine-responsive early breast cancer, letrozole monotherapy reduces breast cancer recurrence and mortality compared with tamoxifen monotherapy. Sequential treatments involving tamoxifen and letrozole do not improve outcome compared with letrozole monotherapy, but might be useful strategies when considering an individual patient's risk of recurrence and treatment tolerability. FUNDING: Novartis, United States National Cancer Institute, International Breast Cancer Study Group.
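The IPCW analysis described in this abstract handles selective crossover by censoring tamoxifen-arm patients at crossover and re-weighting those who remain by the inverse of their modeled probability of not having crossed over. The sketch below is a deliberately simplified, non-time-varying version of that idea on simulated data; the BIG 1-98 analysis itself used time-dependent censoring weights, and every column name and coefficient here is an assumption.

```python
# Simplified IPCW sketch for selective crossover (static weights; toy data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "letrozole": rng.binomial(1, 0.5, size=n),        # randomized arm
    "node_pos": rng.binomial(1, 0.4, size=n),          # prognostic covariate (toy)
})
# Tamoxifen-arm patients with worse prognosis are more likely to cross over.
p_cross = np.where(df.letrozole == 0, 1 / (1 + np.exp(-(-1.5 + 1.0 * df.node_pos))), 0.0)
crossed = rng.binomial(1, p_cross)

t_event = rng.exponential(15 / np.exp(0.5 * df.node_pos - 0.3 * df.letrozole))
df["time"] = np.minimum(t_event, 8.0)                  # administrative censoring at 8 years
df["event"] = (t_event <= 8.0).astype(int)
df.loc[crossed == 1, ["time", "event"]] = [2.0, 0]     # censor at a (toy) crossover time

# Model P(not crossing over | covariates) in the tamoxifen arm; weight by its inverse.
tam = df.letrozole == 0
m = LogisticRegression().fit(df.loc[tam, ["node_pos"]], 1 - crossed[tam])
df["w"] = 1.0
df.loc[tam, "w"] = 1.0 / m.predict_proba(df.loc[tam, ["node_pos"]])[:, 1]

cph = CoxPHFitter()
cph.fit(df[["time", "event", "letrozole", "w"]], duration_col="time",
        event_col="event", weights_col="w", robust=True)
cph.print_summary()                                    # IPCW-adjusted HR for letrozole vs tamoxifen
```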
Abstract:
PURPOSE: In the setting of a prospective clinical trial, we determined the predictive value of the methylation status of the O-6-methylguanine-DNA methyltransferase (MGMT) promoter for outcome in glioblastoma patients treated with the alkylating agent temozolomide. Expression of this excision repair enzyme has been associated with resistance to alkylating chemotherapy. EXPERIMENTAL DESIGN: The methylation status of MGMT in the tumor biopsies was evaluated in 38 patients undergoing resection for newly diagnosed glioblastoma and enrolled in a Phase II trial testing concomitant and adjuvant temozolomide and radiation. The epigenetic silencing of the MGMT gene was determined using methylation-specific PCR. RESULTS: Inactivation of the MGMT gene by promoter methylation was associated with longer survival (P = 0.0051; Log-rank test). At 18 months, survival was 62% (16 of 26) for patients testing positive for a methylated MGMT promoter but reached only 8% (1 of 12) in absence of methylation (P = 0.002; Fisher's exact test). In the presence of other clinically relevant factors, methylation of the MGMT promoter remains the only significant predictor (P = 0.017; Cox regression). CONCLUSIONS: This prospective clinical trial identifies MGMT-methylation status as an independent predictor for glioblastoma patients treated with a methylating agent. The association of the epigenetic inactivation of the DNA repair gene MGMT with better outcome in this homogenous cohort may have important implications for the design of future trials and supports efforts to deplete MGMT by O-6-benzylguanine, a noncytotoxic substrate of this enzyme.
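The survival comparison above rests on Kaplan-Meier estimates stratified by MGMT promoter methylation and a log-rank test. The following sketch reproduces that pattern on simulated data with lifelines; the group sizes loosely mirror the abstract (26 of 38 methylated), but all survival times and effect sizes are invented.

```python
# Sketch: Kaplan-Meier curves by MGMT methylation status plus a log-rank test.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
n = 38
methylated = rng.binomial(1, 26 / 38, size=n)
t_event = rng.exponential(np.where(methylated == 1, 22, 11))   # months (toy effect)
time = np.minimum(t_event, 30.0)                               # censor at 30 months
event = (t_event <= 30.0).astype(int)

kmf = KaplanMeierFitter()
for grp, label in [(1, "MGMT methylated"), (0, "unmethylated")]:
    kmf.fit(time[methylated == grp], event[methylated == grp], label=label)
    print(label, "18-month survival:", round(float(kmf.predict(18)), 2))

res = logrank_test(time[methylated == 1], time[methylated == 0],
                   event_observed_A=event[methylated == 1],
                   event_observed_B=event[methylated == 0])
print("log-rank p-value:", round(res.p_value, 4))
```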
Fatigue and weight loss predict survival on circadian chemotherapy for metastatic colorectal cancer.
Abstract:
BACKGROUND: Chemotherapy-induced neutropenia has been associated with prolonged survival selectively in patients on a conventional schedule (combined 5-fluorouracil, leucovorin, and oxaliplatin [FOLFOX2]) but not on a chronomodulated schedule of the same drugs administered at specific circadian times (chronoFLO4). The authors hypothesized that the early occurrence of chemotherapy-induced symptoms correlated with circadian disruption would selectively hinder the efficacy of chronotherapy. METHODS: Fatigue and weight loss (FWL) were considered to be associated with circadian disruption based on previous data. Patients with metastatic colorectal cancer (n = 543) from an international phase 3 trial comparing FOLFOX2 with chronoFLO4 were categorized into 4 subgroups according to the occurrence of FWL or other clinically relevant toxicities during the initial 2 courses of chemotherapy. Multivariate Cox models were used to assess the role of toxicity on the time to progression (TTP) and overall survival (OS). RESULTS: The proportions of patients in the 4 subgroups were comparable in both treatment arms (P = .77). No toxicity was associated with TTP or OS on FOLFOX2. The median OS on FOLFOX2 ranged from 16.4 months (95% confidence limits [CL], 7.2-25.6 months) to 19.8 months (95% CL, 17.7-22.0 months) according to toxicity subgroup (P = .45). Conversely, FWL, but no other toxicity, independently predicted significantly shorter TTP (P < .0001) and OS (P = .001) on chronoFLO4. The median OS on chronoFLO4 was 13.8 months (95% CL, 10.4-17.2 months) or 21.1 months (95% CL, 19.0-23.1 months) according to the presence or absence of chemotherapy-induced FWL, respectively. CONCLUSIONS: Early-onset chemotherapy-induced FWL was an independent predictor of poor TTP and OS only on chronotherapy. Dynamic monitoring to detect early chemotherapy-induced circadian disruption could allow the optimization of rapid chronotherapy and concomitant improvements in safety and efficacy.
Abstract:
BACKGROUND: Chronic kidney disease (CKD) is associated with a higher stroke risk. Anemia is a common consequence of CKD and is also a possible risk factor for cerebrovascular diseases. The purpose of this study was to examine whether anemia and CKD are independent risk factors for mortality after stroke. METHODS: This historic cohort study was based on a stroke registry and included patients treated for a first clinical stroke in the stroke unit of one academic hospital over a three-year period. Mortality predictors comprised demographic characteristics, CKD, glomerular filtration rate (GFR), anemia and other stroke risk factors. GFR was estimated by means of the simplified Modification of Diet in Renal Disease (MDRD) formula. Renal function was classified into five groups according to the Kidney Disease Outcomes Quality Initiative (K/DOQI) CKD classification. Anemia was defined as a hemoglobin value on admission of < 120 g/L in women or < 130 g/L in men. Kaplan-Meier survival curves and Cox models were used to describe and analyze one-year survival. RESULTS: Among 890 adult stroke patients, the mean (standard deviation) calculated GFR was 64.3 (17.8) ml/min/1.73 m² and 17% had anemia. Eighty-two patients (10%) died during the first year after discharge. Among those, 50 (61%) had K/DOQI CKD stages 3 to 5 and 32 (39%) stages 1 or 2 (p < 0.001). Anemia was associated with an increased risk of death one year after discharge (p < 0.001). After adjustment for other factors, a higher hemoglobin level was independently associated with decreased mortality one year after discharge [hazard ratio (95% CI) 0.98 (0.97-1.00)]. CONCLUSIONS: Both CKD and anemia are frequent among stroke patients and are potential risk factors for decreased one-year survival. The inclusion only of patients with a first-ever clinical stroke and the determination of anemia from a single measurement on admission limit the external validity. Whether early detection and management of both CKD and anemia could improve survival in stroke patients should be investigated.
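The renal-function and anemia definitions above lend themselves to a short worked example: the simplified (4-variable) MDRD equation, a GFR-based staging helper, and the abstract's sex-specific hemoglobin cutoffs. The MDRD constant of 186 below is the one commonly used for conventional creatinine assays (175 is used for IDMS-traceable assays), and K/DOQI stages 1-2 additionally require evidence of kidney damage, which this sketch ignores.

```python
# Sketch: simplified MDRD GFR, GFR-range staging, and the abstract's anemia cutoffs.
def mdrd_gfr(creatinine_mg_dl, age, female, black=False):
    """Simplified (4-variable) MDRD estimate in ml/min/1.73 m2 (constant 186)."""
    gfr = 186.0 * creatinine_mg_dl**-1.154 * age**-0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.210
    return gfr

def gfr_stage(gfr):
    """K/DOQI-style stage by GFR range only (stages 1-2 also need kidney damage)."""
    if gfr >= 90: return 1
    if gfr >= 60: return 2
    if gfr >= 30: return 3
    if gfr >= 15: return 4
    return 5

def anemic(hemoglobin_g_l, female):
    return hemoglobin_g_l < (120 if female else 130)   # cutoffs from the abstract

gfr = mdrd_gfr(1.4, 72, female=True)                   # ~39 ml/min/1.73 m2
print(round(gfr, 1), "-> stage", gfr_stage(gfr), "| anemic:", anemic(118, female=True))
```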