950 results for "Increasing hazard ratio"
Abstract:
Background: Adrenocortical tumors are heterogeneous neoplasms with incompletely understood pathogenesis. IGF-II overexpression has been consistently demonstrated in adult adrenocortical carcinomas. Objectives: The objective of the study was to analyze the expression of IGF-II and its receptor (IGF-IR) in pediatric and adult adrenocortical tumors and the effects of a selective IGF-IR kinase inhibitor (NVP-AEW541) on adrenocortical tumor cells. Patients: Fifty-seven adrenocortical tumors (37 adenomas and 20 carcinomas) from 23 children and 34 adults were studied. Methods: Gene expression was determined by quantitative real-time PCR. Cell proliferation and apoptosis were analyzed in NCI H295 cells and in a new cell line established from a pediatric adrenocortical adenoma. Results: IGF-II transcripts were overexpressed in both pediatric adrenocortical carcinomas and adenomas. In contrast, IGF-II was mainly overexpressed in adult adrenocortical carcinomas (270.5 +/- 130.2 vs. 16.1 +/- 13.3; P = 0.0001). IGF-IR expression was significantly higher in pediatric adrenocortical carcinomas than in adenomas (9.1 +/- 3.1 vs. 2.6 +/- 0.3; P = 0.0001), whereas its expression was similar in adult adrenocortical carcinomas and adenomas. IGF-IR expression was a predictor of metastases in pediatric adrenocortical tumors in univariate analysis (hazard ratio 1.84; 95% confidence interval 1.28-2.66; P = 0.01). Furthermore, NVP-AEW541 blocked cell proliferation in a dose- and time-dependent manner in both cell lines through a significant increase in apoptosis. Conclusion: IGF-IR overexpression was a biomarker of pediatric adrenocortical carcinomas. Additionally, a selective IGF-IR kinase inhibitor had antitumor effects in adult and pediatric adrenocortical tumor cell lines, suggesting that IGF-IR inhibitors represent a promising therapy for human adrenocortical carcinoma.
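For reference: the hazard ratios quoted in this and the following abstracts come from Cox proportional hazards models. A minimal sketch of the quantity being reported (notation ours, not the authors'):

    h(t \mid x) = h_0(t)\,\exp(\beta_1 x_1 + \dots + \beta_p x_p), \qquad \mathrm{HR}_j = \frac{h(t \mid x_j + 1)}{h(t \mid x_j)} = e^{\beta_j}

Under the proportional-hazards assumption, the HR of 1.84 above means that each one-unit increase in IGF-IR expression multiplies the instantaneous risk of metastasis by 1.84.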
Abstract:
Background and objectives Fibroblast growth factor 23 (FGF-23) has emerged as a new factor in mineral metabolism in chronic kidney disease (CKD). An important regulator of phosphorus homeostasis, FGF-23 has been shown to independently predict CKD progression in nondiabetic renal disease. We analyzed the relation between FGF-23 and renal outcome in diabetic nephropathy (DN). Design, setting, participants, & measurements DN patients participating in a clinical trial (enalapril+placebo versus enalapril+losartan) had baseline data collected and were followed until June 2009 or until the primary outcome was reached. Four patients were lost to follow-up. The composite primary outcome was defined as death, doubling of serum creatinine, and/or need for dialysis. Results At baseline, serum FGF-23 showed a significant association with serum creatinine, intact parathyroid hormone, proteinuria, urinary fractional excretion of phosphate, male sex, and race. Interestingly, FGF-23 was not related to calcium, phosphorus, 25OH-vitamin D, or 24-hour urinary phosphorus. Mean follow-up time was 30.7 +/- 10 months. Cox regression showed that FGF-23 was an independent predictor of the primary outcome, even after adjustment for creatinine clearance and intact parathyroid hormone (hazard ratio per 10 pg/ml FGF-23 increase, 1.09; 95% CI, 1.01 to 1.16; P = 0.02). Finally, Kaplan-Meier analysis showed a significantly higher risk of the primary outcome in patients with FGF-23 values of >70 pg/ml. Conclusions FGF-23 is a significant independent predictor of renal outcome in patients with macroalbuminuric DN. Further studies should clarify whether this relation is causal and whether FGF-23 should be a new therapeutic target for CKD prevention. Clin J Am Soc Nephrol 6: 241-247, 2011. doi: 10.2215/CJN.04250510
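To make the per-unit scaling of that hazard ratio concrete (our arithmetic, not the authors'): an HR of 1.09 per 10 pg/ml of FGF-23 compounds multiplicatively, so a difference of k × 10 pg/ml scales the hazard by 1.09^k. For example:

    \mathrm{HR}(\Delta = 70\ \text{pg/ml}) = 1.09^{7} \approx 1.83

i.e., roughly an 83% higher instantaneous risk of the composite outcome, all else held equal.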
Abstract:
Objective. To evaluate the beneficial effect of antimalarial treatment on lupus survival in a large, multiethnic, international longitudinal inception cohort. Methods. Socioeconomic and demographic characteristics, clinical manifestations, classification criteria, laboratory findings, and treatment variables were examined in patients with systemic lupus erythematosus (SLE) from the Grupo Latino Americano de Estudio del Lupus Eritematoso (GLADEL) cohort. The diagnosis of SLE, according to the American College of Rheumatology criteria, was assessed within 2 years of cohort entry. Cause of death was classified as active disease, infection, cardiovascular complications, thrombosis, malignancy, or other cause. Patients were grouped by antimalarial use: those who had received antimalarial drugs for at least 6 consecutive months (users) and those who had received antimalarial drugs for <6 consecutive months or had never received them (nonusers). Results. Of the 1,480 patients included in the GLADEL cohort, 1,141 (77%) were considered antimalarial users, with a mean duration of drug exposure of 48.5 months (range 6-98 months). Death occurred in 89 patients (6.0%). A lower mortality rate was observed in antimalarial users compared with nonusers (4.4% versus 11.5%; P < 0.001). Seventy patients (6.1%) had received antimalarial drugs for 6-11 months, 146 (12.8%) for 1-2 years, and 925 (81.1%) for >2 years. Mortality rates among users by duration of antimalarial treatment (per 1,000 person-months of follow-up) were 3.85 (95% confidence interval [95% CI] 1.41-8.37), 2.7 (95% CI 1.41-4.76), and 0.54 (95% CI 0.37-0.77), respectively, while for nonusers the mortality rate was 3.07 (95% CI 2.18-4.20) (P for trend < 0.001). After adjustment for potential confounders in a Cox regression model, antimalarial use was associated with a 38% reduction in the mortality rate (hazard ratio 0.62, 95% CI 0.39-0.99). Conclusion. Antimalarial drugs were shown to have a protective effect, possibly in a time-dependent manner, on SLE survival. These results suggest that antimalarial treatment should be recommended for patients with lupus.
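The per-1,000 person-month mortality rates above are standard person-time incidence rates; as a sketch (the example numbers below are hypothetical, not the study's):

    \text{rate per 1{,}000 person-months} = \frac{\text{deaths}}{\text{person-months at risk}} \times 1000

For example, 10 deaths accrued over 2,000 person-months of follow-up would give a rate of 5.0 per 1,000 person-months.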
Abstract:
Background-Coronary artery bypass graft surgery with cardiopulmonary bypass is a safe, routine procedure. Nevertheless, significant morbidity remains, mostly because of the body's response to the nonphysiological nature of cardiopulmonary bypass. Few data are available on the effects of off-pump coronary artery bypass graft surgery (OPCAB) on cardiac events and long-term clinical outcomes. Methods and Results-In a single-center randomized trial, 308 patients undergoing coronary artery bypass graft surgery were randomly assigned: 155 to OPCAB and 153 to on-pump CAB (ONCAB). The primary composite end point was death, myocardial infarction, further revascularization (surgery or angioplasty), or stroke. After 5-year follow-up, the primary composite end point did not differ between groups (hazard ratio 0.71, 95% CI 0.41 to 1.22; P=0.21). Statistically significant differences were found between the OPCAB and ONCAB groups, respectively, in duration of surgery (240 +/- 65 versus 300 +/- 87.5 minutes; P<0.001), length of ICU stay (19.5 +/- 17.8 versus 43 +/- 17.0 hours; P<0.001), time to extubation (4.6 +/- 6.8 versus 9.3 +/- 5.7 hours; P<0.001), hospital stay (6 +/- 2 versus 9 +/- 2 days; P<0.001), incidence of atrial fibrillation (35% versus 4% of patients; P<0.001), and blood transfusion requirements (31% versus 61% of patients; P<0.001). The number of grafts per patient was higher in the ONCAB group than in the OPCAB group (2.97 versus 2.49 grafts/patient; P<0.001). Conclusions-No difference was found between groups in the primary composite end point at 5-year follow-up. Although OPCAB surgery was associated with a lower number of grafts and a higher incidence of atrial fibrillation, it had no significant implications for long-term outcomes.
Abstract:
Background-This study compared the 10-year follow-up of percutaneous coronary intervention (PCI), coronary artery bypass graft surgery (CABG), and medical treatment (MT) in patients with multivessel coronary artery disease, stable angina, and preserved ventricular function. Methods and Results-The primary end points were overall mortality, Q-wave myocardial infarction, or refractory angina that required revascularization. All data were analyzed according to the intention-to-treat principle. At a single institution, 611 patients were randomly assigned to CABG (n = 203), PCI (n = 205), or MT (n = 203). The 10-year survival rates were 74.9% with CABG, 75.1% with PCI, and 69% with MT (P = 0.089). The 10-year rates of myocardial infarction were 10.3% with CABG, 13.3% with PCI, and 20.7% with MT (P < 0.010). The 10-year rates of additional revascularization were 7.4% with CABG, 41.9% with PCI, and 39.4% with MT (P < 0.001). For the composite end point, Cox regression analysis showed a higher incidence of primary events with MT than with CABG (hazard ratio 2.35, 95% confidence interval 1.78 to 3.11) and with PCI than with CABG (hazard ratio 1.85, 95% confidence interval 1.39 to 2.47). Furthermore, 10-year rates of freedom from angina were 64% with CABG, 59% with PCI, and 43% with MT (P < 0.001). Conclusions-Compared with CABG, MT was associated with a significantly higher incidence of subsequent myocardial infarction, a higher rate of additional revascularization, a higher incidence of cardiac death, and consequently a 2.29-fold increased risk of combined events. PCI was associated with an increased need for further revascularization, a higher incidence of myocardial infarction, and a 1.46-fold increased risk of combined events compared with CABG. Additionally, CABG was better than MT at eliminating anginal symptoms.
Abstract:
Background We validated a strategy for diagnosis of coronary artery disease (CAD) and prediction of cardiac events in high-risk renal transplant candidates (at least one of the following: age >= 50 years, diabetes, cardiovascular disease). Methods A diagnosis and risk assessment strategy was used in 228 renal transplant candidates to validate an algorithm. Patients underwent dipyridamole myocardial stress testing and coronary angiography and were followed up until death, renal transplantation, or cardiac events. Results The prevalence of CAD was 47%. Stress testing did not detect significant CAD in one-third of patients. The sensitivity, specificity, and positive and negative predictive values of the stress test for detecting CAD were 70, 74, 69, and 71%, respectively. CAD, defined by angiography, was associated with an increased probability of cardiac events [log-rank P = 0.001; hazard ratio: 1.90, 95% confidence interval (CI): 1.29-2.92]. Diabetes (P=0.03; hazard ratio: 1.58, 95% CI: 1.06-2.45) and angiographically defined CAD (P=0.03; hazard ratio: 1.69, 95% CI: 1.08-2.78) were the independent predictors of events. Conclusion The results validate our observations in a smaller number of high-risk transplant candidates and indicate that stress testing is not appropriate for the diagnosis of CAD or prediction of cardiac events in this group of patients. Coronary angiography was correlated with events but, because less than 50% of patients had significant disease, it seems premature to recommend the test for all high-risk renal transplant candidates. The results suggest that angiography is necessary in many high-risk renal transplant candidates and that better noninvasive methods are still lacking to identify with precision the patients who will benefit from invasive procedures. Coron Artery Dis 21: 164-167 (C) 2010 Wolters Kluwer Health | Lippincott Williams & Wilkins.
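For reference, the four test metrics quoted above derive from the standard 2x2 table of stress-test result against angiographic CAD. A minimal Python sketch; the counts are hypothetical, chosen only to resemble a 228-patient cohort with roughly 47% prevalence, not the study's actual table:

    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        """Standard 2x2 diagnostic-test metrics against a reference standard
        (here, coronary angiography)."""
        return {
            "sensitivity": tp / (tp + fn),  # CAD cases the stress test detected
            "specificity": tn / (tn + fp),  # non-CAD cases correctly negative
            "ppv": tp / (tp + fp),          # positive tests that were true CAD
            "npv": tn / (tn + fn),          # negative tests that were truly disease-free
        }

    # Hypothetical counts: 75 + 31 + 32 + 90 = 228 patients; prevalence (75 + 32)/228 ~ 47%
    print(diagnostic_metrics(tp=75, fp=31, fn=32, tn=90))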
Abstract:
Background. We assessed the results of a noninvasive therapeutic strategy on the long-term occurrence of cardiac events and death in a registry of patients with chronic kidney disease (CKD) and coronary artery disease (CAD). Methods. We analyzed 519 patients with CKD (56+/-9 years, 67% men, 67% whites) on maintenance hemodialysis with clinical or scintigraphic evidence of CAD by using coronary angiography. Results. In 230 (44%) patients, coronary angiography revealed significant CAD (lumen reduction >= 70%). Subjects with significant CAD were kept on medical treatment (MT; n=184) or referred for myocardial revascularization (percutaneous transluminal coronary angioplasty/coronary artery bypass graft intervention; n=30) according to American College of Cardiology/American Heart Association guidelines. In addition, 16 subjects refused intervention and were also followed up. Event-free survival for patients on MT at 12, 36, and 60 months was 86%, 71%, and 57%, whereas overall survival was 89%, 71%, and 50% over the same period, respectively. Patients who refused intervention had a significantly worse prognosis compared with those who actually underwent intervention (events: hazard ratio=4.50; 95% confidence interval=1.48-15.10; death: hazard ratio=3.39; 95% confidence interval=1.41-8.45). Conclusions. In patients with CKD and significant CAD, MT promotes adequate long-term event-free survival. However, failure to perform a coronary intervention when necessary results in a markedly increased risk of events and death.
Abstract:
Background-Peculiar aspects of Chagas cardiomyopathy raise concerns about the efficacy and safety of sympathetic blockade. We studied the influence of beta-blockers in patients with Chagas cardiomyopathy. Methods and Results-We examined the REMADHE trial population and grouped patients according to etiology (Chagas versus non-Chagas) and beta-blocker therapy. The primary end point was all-cause mortality or heart transplantation. Altogether, 456 patients were studied; 27 (5.9%) underwent heart transplantation and 202 (44.3%) died. Chagas etiology was present in 68 (14.9%) patients; they had lower body mass index (24.1+/-4.1 versus 26.3+/-5.1, P=0.001), smaller end-diastolic left ventricle diameter (6.7+/-1.0 cm versus 7.0+/-0.9 cm, P=0.001), a smaller proportion on beta-blocker therapy (35.8% versus 68%, P<0.001), and a higher proportion on spironolactone therapy (74.6% versus 57.8%, P=0.003). Twenty-four (35.8%) patients with Chagas disease were under beta-blocker therapy and had lower serum sodium (136.6+/-3.1 versus 138.4+/-3.1 mEq/L, P=0.05) and lower body mass index (22.5+/-3.3 versus 24.9+/-4.3, P=0.03) compared with those not receiving beta-blockers. Survival was lower in patients with Chagas heart disease than in those with other etiologies. When only patients under beta-blockers were considered, the survival of patients with Chagas disease was similar to that of other etiologies. The survival of patients with beta-blockers was higher than that of patients without beta-blockers. In the Cox regression model, left ventricle end-diastolic diameter (hazard ratio, 1.78; CI, 1.15 to 2.76; P=0.009) and beta-blockers (hazard ratio, 0.37; CI, 0.14 to 0.97; P=0.044) were independently associated with survival. Conclusions-Our study suggests that beta-blockers may have beneficial effects on the survival of patients with heart failure and Chagas heart disease and warrants further investigation in a prospective, randomized trial.
Abstract:
Background Heart failure and diabetes often occur simultaneously in patients, but the prognostic value of glycemia in chronic heart failure is debatable. We evaluated the role of glycemia in the prognosis of heart failure. Methods Outpatients with chronic heart failure from the Long-term Prospective Randomized Controlled Study Using Repetitive Education at Six-Month Intervals and Monitoring for Adherence in Heart Failure Outpatients (REMADHE) trial were grouped according to the presence of diabetes and level of glycemia. All-cause mortality/heart transplantation and unplanned hospital admission were evaluated. Results Four hundred fifty-six patients were included (135 [29.5%] female, 124 [27.2%] with diabetes mellitus, age 50.2 +/- 11.4 years, and left ventricle ejection fraction of 34.7% +/- 10.5%). During follow-up (3.6 +/- 2.2 years), 27 (5.9%) patients underwent heart transplantation and 202 (44.2%) died; survival was similar in patients with and without diabetes mellitus. When patients with and without diabetes were categorized according to glucose range (glycemia <= 100 mg/dL [5.5 mmol/L]), as well as when distributed into quintiles of glucose, survival was significantly worse among patients with lower levels of glycemia. This finding persisted in a Cox proportional hazards regression model that included gender, etiology, left ventricle ejection fraction, left ventricle diastolic diameter, creatinine level, beta-blocker therapy, and functional status (hazard ratio 1.45, 95% CI 1.09-1.69, P = .039). No difference regarding unplanned hospital admission was found. Conclusion We report an inverse association between glycemia and mortality in outpatients with chronic heart failure. These results point to a new pathophysiologic understanding of the interactions between diabetes mellitus, hyperglycemia, and heart disease. (Am Heart J 2010; 159: 90-7.)
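Several abstracts in this list adjust hazard ratios for clinical covariates in a Cox model; a minimal sketch of such an analysis in Python with the lifelines package (all column names and data below are ours and simulated, not the REMADHE dataset):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 456
    # Simulated stand-in for a heart-failure cohort; variable names are illustrative.
    df = pd.DataFrame({
        "followup_years": rng.exponential(3.6, n),   # time on study
        "died": rng.integers(0, 2, n),               # 1 = death observed, 0 = censored
        "glycemia_mg_dl": rng.normal(110, 25, n),
        "lvef_pct": rng.normal(34.7, 10.5, n),
        "beta_blocker": rng.integers(0, 2, n),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_years", event_col="died")
    cph.print_summary()  # exp(coef) column gives the hazard ratio per unit of each covariate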
Abstract:
Background-Prasugrel is a novel thienopyridine that reduces new or recurrent myocardial infarctions (MIs) compared with clopidogrel in patients with acute coronary syndrome undergoing percutaneous coronary intervention. This effect must be balanced against an increased bleeding risk. We aimed to characterize the effect of prasugrel with respect to the type, size, and timing of MI using the universal classification of MI. Methods and Results-We studied 13,608 patients with acute coronary syndrome undergoing percutaneous coronary intervention randomized to prasugrel or clopidogrel and treated for 6 to 15 months in the Trial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet Inhibition With Prasugrel-Thrombolysis in Myocardial Infarction (TRITON-TIMI 38). Each MI underwent supplemental classification as spontaneous, secondary, or sudden cardiac death (types 1, 2, and 3) or procedure related (types 4 and 5), and we examined events occurring early and after 30 days. Prasugrel significantly reduced the overall risk of MI (7.4% versus 9.7%; hazard ratio [HR], 0.76; 95% confidence interval [CI], 0.67 to 0.85; P < 0.0001). This benefit was present for procedure-related MIs (4.9% versus 6.4%; HR, 0.76; 95% CI, 0.66 to 0.88; P = 0.0002) and nonprocedural (type 1, 2, or 3) MIs (2.8% versus 3.7%; HR, 0.72; 95% CI, 0.59 to 0.88; P = 0.0013), and consistently across MI size, including MIs with a biomarker peak >= 5 times the reference limit (HR, 0.74; 95% CI, 0.64 to 0.86; P = 0.0001). In landmark analyses starting at 30 days, patients treated with prasugrel had a lower risk of any MI (2.9% versus 3.7%; HR, 0.77; P = 0.014), including nonprocedural MI (2.3% versus 3.1%; HR, 0.74; 95% CI, 0.60 to 0.92; P = 0.0069). Conclusion-Treatment with prasugrel compared with clopidogrel for up to 15 months in patients with acute coronary syndrome undergoing percutaneous coronary intervention significantly reduces the risk of MI, whether procedure related or spontaneous and whether small or large, including new MIs occurring during maintenance therapy. (Circulation. 2009; 119: 2758-2764.)
Abstract:
BACKGROUND The assessment of myocardial viability has been used to identify patients with coronary artery disease and left ventricular dysfunction in whom coronary-artery bypass grafting (CABG) will provide a survival benefit. However, the efficacy of this approach is uncertain. METHODS In a substudy of patients with coronary artery disease and left ventricular dysfunction who were enrolled in a randomized trial of medical therapy with or without CABG, we used single-photon-emission computed tomography (SPECT), dobutamine echocardiography, or both to assess myocardial viability on the basis of pre-specified thresholds. RESULTS Among the 1212 patients enrolled in the randomized trial, 601 underwent assessment of myocardial viability. Of these patients, we randomly assigned 298 to receive medical therapy plus CABG and 303 to receive medical therapy alone. A total of 178 of 487 patients with viable myocardium (37%) and 58 of 114 patients without viable myocardium (51%) died (hazard ratio for death among patients with viable myocardium, 0.64; 95% confidence interval [CI], 0.48 to 0.86; P = 0.003). However, after adjustment for other baseline variables, this association with mortality was not significant (P = 0.21). There was no significant interaction between viability status and treatment assignment with respect to mortality (P = 0.53). CONCLUSIONS The presence of viable myocardium was associated with a greater likelihood of survival in patients with coronary artery disease and left ventricular dysfunction, but this relationship was not significant after adjustment for other baseline variables. The assessment of myocardial viability did not identify patients with a differential survival benefit from CABG, as compared with medical therapy alone.
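Survival contrasts like the viable-versus-nonviable comparison above are typically drawn with Kaplan-Meier estimates and a log-rank test; a minimal lifelines sketch on simulated data (group sizes borrowed from the abstract, all times and events invented):

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(1)
    # Simulated follow-up times (years) and death indicators for the two groups.
    t_viable, d_viable = rng.exponential(8.0, 487), rng.integers(0, 2, 487)
    t_nonviable, d_nonviable = rng.exponential(5.0, 114), rng.integers(0, 2, 114)

    kmf = KaplanMeierFitter()
    kmf.fit(t_viable, event_observed=d_viable, label="viable myocardium")
    print(kmf.survival_function_.tail())  # estimated S(t) at the latest event times

    result = logrank_test(t_viable, t_nonviable,
                          event_observed_A=d_viable,
                          event_observed_B=d_nonviable)
    print(result.p_value)  # unadjusted two-group comparison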
Abstract:
Background: Chagas' disease is the illness caused by the protozoan Trypanosoma cruzi, and it is still endemic in Latin America. Heart transplantation is a therapeutic option for patients with end-stage Chagas' cardiomyopathy. Nevertheless, reactivation may occur after transplantation, leading to higher morbidity and graft dysfunction. This study aimed to identify risk factors for Chagas' disease reactivation episodes. Methods: This investigation is a retrospective cohort study of all Chagas' disease heart transplant recipients from September 1985 through September 2004. Clinical, microbiologic, and histopathologic data were reviewed. Statistical analysis was performed with SPSS (version 13) software. Results: Sixty-four (21.9%) patients with chronic Chagas' disease underwent heart transplantation during the study period. Seventeen patients (26.5%) had at least one episode of Chagas' disease reactivation, and univariate analysis identified the number of rejection episodes (p = 0.013) and development of neoplasms (p = 0.040) as factors associated with Chagas' disease reactivation episodes. Multivariate analysis showed that the number of rejection episodes (hazard ratio = 1.31; 95% confidence interval [CI]: 1.06 to 1.62; p = 0.011), neoplasms (hazard ratio = 5.07; 95% CI: 1.49 to 17.20; p = 0.009), and use of mycophenolate mofetil (hazard ratio = 3.14; 95% CI: 1.00 to 9.84; p = 0.049) were independent determinants of reactivation after transplantation. Age (p = 0.88), male gender (p = 0.15), presence of rejection (p = 0.17), cytomegalovirus infection (p = 0.79), and mortality after hospital discharge (p = 0.15) showed no statistically significant difference. Conclusions: Our data suggest that events resulting in greater immunosuppression contribute to Chagas' disease reactivation episodes after heart transplantation and should alert physicians to make an early diagnosis and perform pre-emptive therapy. Although reactivation led to a high rate of morbidity, a low mortality risk was observed.
Abstract:
Purpose The third-generation nonsteroidal aromatase inhibitors (AIs) are increasingly used as adjuvant and first-line advanced therapy for postmenopausal, hormone receptor-positive (HR+) breast cancer. Because many patients subsequently experience progression or relapse, it is important to identify agents with efficacy after AI failure. Materials and Methods The Evaluation of Faslodex versus Exemestane Clinical Trial (EFECT) is a randomized, double-blind, placebo-controlled, multicenter phase III trial of fulvestrant versus exemestane in postmenopausal women with HR+ advanced breast cancer (ABC) progressing or recurring after a nonsteroidal AI. The primary end point was time to progression (TTP). A fulvestrant loading-dose (LD) regimen was used: 500 mg intramuscularly on day 0, 250 mg on days 14 and 28, and 250 mg every 28 days thereafter. Exemestane 25 mg orally was administered once daily. Results A total of 693 women were randomly assigned to fulvestrant (n = 351) or exemestane (n = 342). Approximately 60% of patients had received at least two prior endocrine therapies. Median TTP was 3.7 months in both groups (hazard ratio = 0.963; 95% CI, 0.819 to 1.133; P = .6531). The overall response rate (7.4% v 6.7%; P = .736) and clinical benefit rate (32.2% v 31.5%; P = .853) were similar between fulvestrant and exemestane, respectively. Median duration of clinical benefit was 9.3 and 8.3 months, respectively. Both treatments were well tolerated, with no significant differences in the incidence of adverse events or quality of life. Pharmacokinetic data confirm that steady state was reached within 1 month with the LD schedule of fulvestrant. Conclusion Fulvestrant LD and exemestane are equally active and well tolerated in a meaningful proportion of postmenopausal women with ABC who have experienced progression or recurrence during treatment with a nonsteroidal AI.
Abstract:
Background Treatment with adjuvant trastuzumab for 1 year improves disease-free survival and overall survival in patients with human epidermal growth factor receptor 2 (HER2)-positive early breast cancer. We aimed to assess disease-free survival and overall survival after a median follow-up of 4 years for patients enrolled in the Herceptin Adjuvant (HERA) trial. Methods The HERA trial is an international, multicentre, randomised, open-label, phase 3 trial comparing treatment with trastuzumab for 1 and 2 years with observation after standard neoadjuvant chemotherapy, adjuvant chemotherapy, or both in patients with HER2-positive early breast cancer. The primary endpoint was disease-free survival. After a positive first interim analysis at a median follow-up of 1 year for the comparison of treatment with trastuzumab for 1 year with observation, event-free patients in the observation group were allowed to cross over to receive trastuzumab. We report trial outcomes for the 1-year trastuzumab and observation groups at a median follow-up of 48.4 months (IQR 42.0-56.5) and assess the effect of the extensive crossover to trastuzumab. Our analysis was by intention to treat. The HERA trial is registered with the European Clinical Trials Database, number 2005-002385-11. Findings The HERA trial population comprised 1698 patients randomly assigned to the observation group and 1703 to the 1-year trastuzumab group. Intention-to-treat analysis of disease-free survival showed a significant benefit in favour of patients in the 1-year trastuzumab group (4-year disease-free survival 78.6%) compared with the observation group (4-year disease-free survival 72.2%; hazard ratio [HR] 0.76; 95% CI 0.66-0.87; p<0.0001). Intention-to-treat analysis of overall survival showed no significant difference in the risk of death (4-year overall survival 89.3% vs 87.7%, respectively; HR 0.85; 95% CI 0.70-1.04; p=0.11). Overall, 885 (52%) of the 1698 patients in the observation group crossed over to receive trastuzumab, beginning treatment at a median of 22.8 months (range 4.5-52.7) from randomisation. In a non-randomised comparison, patients in the selective-crossover cohort had fewer disease-free survival events than patients remaining in the observation group (adjusted HR 0.68; 95% CI 0.51-0.90; p=0.0077). Higher incidences of grade 3-4 and fatal adverse events were noted with 1-year trastuzumab than in the observation group. The most common grade 3 or 4 adverse events, each in less than 1% of patients, were congestive cardiac failure, hypertension, arthralgia, back pain, central-line infection, hot flush, headache, and diarrhoea. Interpretation Treatment with adjuvant trastuzumab for 1 year after chemotherapy is associated with significant clinical benefit at 4-year median follow-up. The substantial selective crossover of patients in the observation group to trastuzumab was associated with improved outcomes for this cohort.
Abstract:
Methods. We studied participants with acute and/or early HIV infection and transmitted drug resistance (TDR) in 2 cohorts (San Francisco, California, and Sao Paulo, Brazil). We followed baseline mutations longitudinally and compared replacement rates between mutation classes with use of a parametric proportional hazards model. Results. Among 75 individuals with 195 TDR mutations, M184V/I became undetectable markedly faster than did nonnucleoside reverse-transcriptase inhibitor (NNRTI) mutations (hazard ratio, 77.5; 95% confidence interval [CI], 14.7-408.2; P < .0001), while protease inhibitor and NNRTI replacement rates were similar. A higher plasma HIV-1 RNA level predicted faster mutation replacement, but this was not statistically significant (hazard ratio per log10 copies/mL, 1.71; 95% CI, 0.90-3.25; P = .11). We found substantial person-to-person variability in mutation replacement rates not accounted for by viral load or mutation class (P < .0001). Conclusions. The rapid replacement of M184V/I mutations is consistent with known fitness costs. The long-term persistence of NNRTI and protease inhibitor mutations suggests a risk for person-to-person propagation. Host and/or viral factors not accounted for by viral load or mutation class are likely influencing mutation replacement and warrant further study.