916 results for Proportional Hazards Model
Abstract:
Background and aim of the study: The natriuretic peptides, brain natriuretic peptide (BNP) and its N-terminal prohormone (NT-proBNP), can be used as diagnostic and prognostic markers for aortic stenosis (AS). However, the association between BNP, NT-proBNP, and long-term clinical outcomes in patients with severe AS remains uncertain. Methods: A total of 64 patients with severe AS were prospectively enrolled in the study and underwent clinical and echocardiographic assessments at baseline. Blood samples were drawn for plasma BNP and NT-proBNP analyses. The primary outcome was death from any cause over a six-year follow-up period. Cox proportional hazards modeling was used to examine the association between natriuretic peptides and long-term mortality, adjusting for important clinical factors. Results: During a mean follow-up of 1,520 ± 681 days, 51 patients (80%) underwent aortic valve replacement, and 13 patients (20%) were managed medically without surgical intervention. Mortality rates were 13.7% in the surgical group and 62% in the medically managed group (p <0.001). Patients with higher plasma BNP (>135 pg/ml) and NT-proBNP (>1,150 pg/ml) levels at baseline had a greater risk of long-term mortality (hazard ratio [HR] 3.2, 95% confidence interval [CI] 1.1-9.1; HR 4.3, 95% CI 1.4-13.5, respectively). After adjusting for important covariates, both BNP and NT-proBNP remained independently associated with long-term mortality (HR 2.9, 95% CI 1.5-5.7; HR 1.8, 95% CI 1.1-3.1, respectively). Conclusion: In patients with severe AS, plasma BNP and NT-proBNP levels were associated with long-term mortality. The use of these biomarkers to guide treatment might represent an interesting approach that deserves further evaluation. The Journal of Heart Valve Disease 2012;21:331-336
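Most of the studies in this listing rely on the same core machinery: the Cox model estimates log hazard ratios by maximizing a partial likelihood over the risk set at each event time. The following is a rough illustration only, with synthetic data, a single binary covariate, Breslow-style risk sets, and a golden-section search in place of Newton-Raphson; it does not reflect any of these studies' actual analyses.

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Breslow partial log-likelihood for a single covariate."""
    ll = 0.0
    for i, (t_i, d_i) in enumerate(zip(times, events)):
        if not d_i:
            continue  # censored observations contribute only through risk sets
        # risk set: everyone still under observation at the event time t_i
        risk = [math.exp(beta * x[j]) for j in range(len(times)) if times[j] >= t_i]
        ll += beta * x[i] - math.log(sum(risk))
    return ll

def fit_beta(times, events, x, lo=-5.0, hi=5.0, tol=1e-6):
    """Golden-section search for the beta maximizing the partial likelihood
    (valid because the partial log-likelihood is concave in beta)."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if cox_partial_loglik(c, times, events, x) < cox_partial_loglik(d, times, events, x):
            a = c
        else:
            b = d
    return (a + b) / 2

# synthetic data: exposed subjects (x = 1) tend to fail earlier
times  = [1, 2, 3, 4, 5, 6, 7, 8]
events = [1, 0, 1, 1, 0, 1, 1, 1]
x      = [1, 0, 1, 0, 1, 0, 1, 0]
beta = fit_beta(times, events, x)
hr = math.exp(beta)  # estimated hazard ratio for x = 1 vs x = 0 (here > 1)
```

A real analysis would use an established implementation (e.g. R's `coxph` or Python's `lifelines`), which also handles ties, multiple covariates, and standard errors.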
Abstract:
Objectives: We assessed mortality associated with immunologic and virologic patterns of response at 6 months of highly active antiretroviral therapy (HAART) in HIV-infected individuals from resource-limited countries in Africa and South America. Methods: Patients who initiated HAART between 1996 and 2007, were aged 16 years or older, and had at least 1 measurement (HIV-1 RNA plasma viral load or CD4 cell count) at 6 months of therapy (3-9 month window) were included. Therapy response was categorized as complete, discordant (virologic only or immunologic only), or absent. Associations between 6-month response to therapy and all-cause mortality were assessed by Cox proportional hazards regression. Robust standard errors were calculated to account for intrasite correlation. Results: A total of 7160 patients, corresponding to 15,107 person-years, were analyzed. In multivariable analysis adjusted for age at HAART initiation, baseline clinical stage and CD4 cell count, year of HAART initiation, clinic, and occurrence of an AIDS-defining condition within the first 6 months of treatment, discordant and absent responses were associated with increased risk of death. Conclusions: Similar to reports from high-income countries, discordant immunologic and virologic responses were associated with an intermediate risk of death compared with complete and absent responses in this large cohort of HIV-1-infected patients from resource-limited countries. Our results support a recommendation for wider availability of plasma viral load testing to monitor antiretroviral therapy in these settings.
Abstract:
The aim of this analysis was to assess the effect of body mass index (BMI) on 1-year outcomes in patients enrolled in a contemporary percutaneous coronary intervention trial comparing a sirolimus-eluting stent with a durable polymer to a biolimus-eluting stent with a biodegradable polymer. A total of 1,707 patients who underwent percutaneous coronary intervention were randomized to treatment with either biolimus-eluting stents (n = 857) or sirolimus-eluting stents (n = 850). Patients were assigned to 1 of 3 groups according to BMI: normal (<25 kg/m²), overweight (25 to 30 kg/m²), or obese (>30 kg/m²). At 1 year, the incidence of the composite of cardiac death, myocardial infarction, and clinically justified target vessel revascularization was assessed. In addition, rates of clinically justified target lesion revascularization and stent thrombosis were assessed. Cox proportional-hazards analysis, adjusted for clinical differences, was used to develop models for 1-year mortality. Forty-five percent of the patients (n = 770) were overweight, 26% (n = 434) were obese, and 29% (n = 497) had normal BMIs. At 1-year follow-up, the cumulative rate of cardiac death, myocardial infarction, and clinically justified target vessel revascularization was significantly higher in the obese group (8.7% in normal-weight, 11.3% in overweight, and 14.5% in obese patients, p = 0.01). BMI (hazard ratio 1.47, 95% confidence interval 1.02 to 2.14, p = 0.04) was an independent predictor of stent thrombosis. Stent type had no impact on the composite of cardiac death, myocardial infarction, and clinically justified target vessel revascularization at 1 year in the 3 BMI groups (hazard ratio 1.08, 95% confidence interval 0.63 to 1.83, p = 0.73). In conclusion, BMI was an independent predictor of major adverse cardiac events at 1-year clinical follow-up. The higher incidence of stent thrombosis in the obese group may suggest the need for a weight-adjusted dose of clopidogrel.
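The BMI stratification used in this and the following study is a simple threshold rule. A sketch is below; note that the abstract's intervals overlap at the cut-points, so the boundary handling at exactly 25 and 30 kg/m² is an assumption, not something the trial specifies.

```python
def bmi_group(bmi):
    """Categorize BMI (kg/m^2) using the trial's cut-offs:
    normal <25, overweight 25-30, obese >30.
    Assigning the exact boundaries 25 and 30 is illustrative only."""
    if bmi < 25:
        return "normal"
    if bmi <= 30:
        return "overweight"
    return "obese"

group = bmi_group(27.5)  # "overweight"
```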
Abstract:
The purpose of this study was to assess the impact of body mass index (BMI) on the clinical outcome of patients treated by percutaneous coronary intervention (PCI) using drug-eluting stents. Patients were stratified according to BMI as normal (<25 kg/m²), overweight (25 to 30 kg/m²), or obese (>30 kg/m²). At 5-year follow-up, all-cause death, myocardial infarction, clinically justified target vessel revascularization (TVR), and definite stent thrombosis were assessed. A complete dataset was available for 7,427 patients, of whom 45%, 22%, and 33% were classified according to BMI as overweight, obese, and normal, respectively. Patients with a normal BMI were significantly older on average (p <0.05). The incidence of diabetes mellitus, hypertension, and dyslipidemia increased as BMI increased (p <0.05). Significantly higher rates of TVR (15.3% vs 12.8%, p = 0.02) and early stent thrombosis (1.5% vs 0.9%, p = 0.04) were observed in the obese compared to the normal BMI group. No significant difference among the 3 BMI groups was observed for the composite of death/myocardial infarction/TVR or for definite stent thrombosis at 5 years, whereas the normal BMI group was at higher risk for all-cause death at 5 years (obese vs normal BMI, hazard ratio 0.74, confidence interval 0.53 to 0.99, p = 0.05; overweight vs normal BMI, hazard ratio 0.73, confidence interval 0.59 to 0.94, p = 0.01) in the multivariate Cox proportional hazards model. Age was collinear with BMI in the all-cause 5-year mortality multivariate model (p = 0.001). In conclusion, the "obesity paradox" observed in 5-year all-cause mortality could be explained by the higher proportion of elderly patients in the normal BMI group and the collinearity between BMI and age. However, obese patients had higher rates of TVR and early stent thrombosis and a higher prevalence of other risk factors such as diabetes mellitus, hypertension, and hypercholesterolemia.
Abstract:
Background: With expanding pediatric antiretroviral therapy (ART) access, children will begin to experience treatment failure and require second-line therapy. We evaluated the probability and determinants of virologic failure and switching in children in South Africa. Methods: Pooled analysis of routine individual data from children who initiated ART in 7 South African treatment programs with 6-monthly viral load and CD4 monitoring produced Kaplan-Meier estimates of the probability of virologic failure (2 consecutive unsuppressed viral loads with the second being >1000 copies/mL, after ≥24 weeks of therapy) and of switch to second-line therapy. Cox proportional hazards models stratified by program were used to determine predictors of these outcomes. Results: The 3-year probability of virologic failure among 5485 children was 19.3% (95% confidence interval: 17.6 to 21.1). Use of nevirapine or ritonavir alone in the initial regimen (compared with efavirenz) and exposure to prevention of mother to child transmission regimens were independently associated with failure [adjusted hazard ratios (95% confidence interval): 1.77 (1.11 to 2.83), 2.39 (1.57 to 3.64) and 1.40 (1.02 to 1.92), respectively]. Among 252 children with ≥1 year of follow-up after failure, 38% were switched to second-line therapy. The median (interquartile range) time from failure to switch was 5.7 (2.9-11.0) months. Conclusions: Triple ART based on nevirapine or ritonavir as a single protease inhibitor seems to be associated with a higher risk of virologic failure. A low proportion of virologically failing children were switched.
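The Kaplan-Meier probabilities reported above come from the product-limit estimator: at each distinct event time, the survival estimate is multiplied by (1 − d/n), where d is the number of events at that time and n the number still at risk. A minimal sketch on toy data (illustrative only, with right-censoring handled implicitly through the shrinking risk sets):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Product-limit survival estimates S(t) = prod_{t_i <= t} (1 - d_i / n_i).
    `events` is 1 for an observed event, 0 for a censored observation.
    Returns (event_time, S(event_time)) pairs in time order."""
    deaths = Counter(t for t, e in zip(times, events) if e)
    s, out = 1.0, []
    for t in sorted(deaths):
        n_at_risk = sum(1 for tt in times if tt >= t)  # still under observation
        s *= 1 - deaths[t] / n_at_risk
        out.append((t, s))
    return out

# the cumulative failure probability by time t is then 1 - S(t)
km = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 0])
```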
Abstract:
Objective: To investigate the predictive value of the Strauss and Carpenter Prognostic Scale (SCPS) for transition to a first psychotic episode in subjects clinically at high risk (CHR) of psychosis. Method: Two hundred and forty-four CHR subjects participating in the European Prediction of Psychosis Study were assessed with the SCPS, an instrument that has been shown to reliably predict outcome in patients with schizophrenia. Results: At 18-month follow-up, 37 participants had made the transition to psychosis. The SCPS total score was predictive of a first psychotic episode (P < 0.0001). SCPS items that remained independent predictors in the Cox proportional hazards model were as follows: most usual quality of useful work in the past year (P = 0.006), quality of social relations (P = 0.006), presence of thought disorder, delusions or hallucinations in the past year (P = 0.001) and reported severity of subjective distress in the past month (P = 0.003). Conclusion: The SCPS could make a valuable contribution to a more accurate prediction of psychosis in CHR subjects as a second-step tool. SCPS items assessing quality of useful work and social relations, positive symptoms and subjective distress have predictive value for transition. Further research should focus on investigating whether targeted early interventions directed at the predictive domains may improve outcomes.
Abstract:
BACKGROUND: Tumor levels of steroid hormone receptors, a factor used to select adjuvant treatment for early-stage breast cancer, are currently determined with immunohistochemical assays. These assays have a discordance of 10%-30% with previously used extraction assays. We assessed the concordance and predictive value of hormone receptor status as determined by immunohistochemical and extraction assays on specimens from International Breast Cancer Study Group Trials VIII and IX. These trials predominantly used extraction assays and compared adjuvant chemoendocrine therapy with endocrine therapy alone among pre- and postmenopausal patients with lymph node-negative breast cancer. Trial conclusions were that combination therapy provided a benefit to pre- and postmenopausal patients with estrogen receptor (ER)-negative tumors but not to ER-positive postmenopausal patients. ER-positive premenopausal patients required further study. METHODS: Tumor specimens from 571 premenopausal and 976 postmenopausal patients on which extraction assays had determined ER and progesterone receptor (PgR) levels before randomization from October 1, 1988, through October 1, 1999, were re-evaluated with an immunohistochemical assay in a central pathology laboratory. The endpoint was disease-free survival. Hazard ratios of recurrence or death for treatment comparisons were estimated with Cox proportional hazards regression models, and discriminatory ability was evaluated with the c index. All statistical tests were two-sided. RESULTS: Concordance of hormone receptor status determined by both assays ranged from 74% (kappa = 0.48) for PgR among postmenopausal patients to 88% (kappa = 0.66) for ER in postmenopausal patients. Hazard ratio estimates were similar for the association between disease-free survival and ER status (among all patients) or PgR status (among postmenopausal patients) as determined by the two methods. 
However, among premenopausal patients treated with endocrine therapy alone, the discriminatory ability of PgR status as determined by immunohistochemical assay was statistically significantly better (c index = 0.60 versus 0.51; P = .003) than that determined by extraction assay, and so immunohistochemically determined PgR status could predict disease-free survival. CONCLUSIONS: Trial conclusions in which ER status (for all patients) or PgR status (for postmenopausal patients) was determined by immunohistochemical assay supported those determined by extraction assays. However, among premenopausal patients, trial conclusions drawn from PgR status differed: immunohistochemically determined PgR status could predict response to endocrine therapy, unlike that determined by the extraction assay.
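The c index used above to compare discriminatory ability is Harrell's concordance: among usable pairs of subjects, the fraction in which the higher risk score belongs to the subject who failed earlier (0.5 is chance, 1.0 is perfect discrimination). A naive O(n²) sketch follows; it treats a pair as usable only when the subject with the strictly shorter observed time had an event, which is a simplification of the full censoring-aware definition.

```python
def c_index(times, events, scores):
    """Harrell's concordance index (simplified).
    A pair (i, j) is usable when subject i had an event before subject j's
    observed time; it is concordant when i also has the higher risk score.
    Ties in score count as half-concordant."""
    conc, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if events[i] and times[i] < times[j]:
                usable += 1
                if scores[i] > scores[j]:
                    conc += 1.0
                elif scores[i] == scores[j]:
                    conc += 0.5
    return conc / usable

# perfectly ranked toy data: highest score fails first
c = c_index([1, 2, 3], [1, 1, 1], [3, 2, 1])  # 1.0
```

Production analyses would use a censoring-aware implementation such as `concordance_index_censored` in scikit-survival.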
Abstract:
BACKGROUND: We sought to characterize the impact that hepatitis C virus (HCV) infection has on CD4 cells during the first 48 weeks of antiretroviral therapy (ART) in previously ART-naive human immunodeficiency virus (HIV)-infected patients. METHODS: The HIV/AIDS Drug Treatment Programme at the British Columbia Centre for Excellence in HIV/AIDS distributes all ART in this Canadian province. Eligible individuals were those whose first-ever ART included 2 nucleoside reverse transcriptase inhibitors and either a protease inhibitor or a nonnucleoside reverse transcriptase inhibitor and who had a documented positive result for HCV antibody testing. Outcomes were binary events (time to an increase of ≥75 CD4 cells/mm³ or an increase of ≥10% in the percentage of CD4 cells in the total T cell population [CD4 cell fraction]) and continuous repeated measures. Statistical analyses used parametric and nonparametric methods, including multivariate mixed-effects linear regression analysis and Cox proportional hazards analysis. RESULTS: Of 1186 eligible patients, 606 (51%) were positive and 580 (49%) were negative for HCV antibodies. HCV antibody-positive patients were slower to have an absolute (P<.001) and a fraction (P = .02) CD4 cell event. In adjusted Cox proportional hazards analysis (controlling for age, sex, baseline absolute CD4 cell count, baseline plasma viral load (pVL), type of ART initiated, AIDS diagnosis at baseline, adherence to ART regimen, and number of CD4 cell measurements), HCV antibody-positive patients were less likely to have an absolute CD4 cell event (adjusted hazard ratio [AHR], 0.84 [95% confidence interval [CI], 0.72-0.98]) and somewhat less likely to have a CD4 cell fraction event (AHR, 0.89 [95% CI, 0.70-1.14]) than HCV antibody-negative patients.
In multivariate mixed-effects linear regression analysis, HCV antibody-negative patients had increases of an average of 75 cells in the absolute CD4 cell count and 4.4% in the CD4 cell fraction, compared with 20 cells and 1.1% in HCV antibody-positive patients, during the first 48 weeks of ART, after adjustment for time-updated pVL, number of CD4 cell measurements, and other factors. CONCLUSION: HCV antibody-positive HIV-infected patients may have an altered immunologic response to ART.
Abstract:
To compare the prediction of hip fracture risk by several quantitative bone ultrasound (QUS) devices, 7062 Swiss women ≥70 years of age were measured with three QUSs (two of the heel, one of the phalanges). Both heel QUSs were predictive of hip fracture risk, whereas the phalangeal QUS was not. INTRODUCTION: As the number of hip fractures is expected to increase over the coming decades, it is important to develop strategies to detect subjects at risk. Quantitative bone ultrasound (QUS), an ionizing radiation-free method using transportable devices, could be useful for this purpose. MATERIALS AND METHODS: The Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk (SEMOF) study is a multicenter cohort study that compared three QUSs for the assessment of hip fracture risk in a sample of 7609 elderly ambulatory women ≥70 years of age. Two QUSs measured the heel (Achilles+; GE-Lunar and Sahara; Hologic), and one measured the phalanges (DBM Sonic 1200; IGEA). Cox proportional hazards regression was used to estimate the hazard of a first hip fracture, adjusted for age, BMI, and center, and areas under the ROC curves were calculated to compare the devices and their parameters. RESULTS: Of the 7609 women included in the study, 7062 women 75.2 ± 3.1 (SD) years of age were prospectively followed for 2.9 ± 0.8 years. Eighty women reported a hip fracture. A decrease of 1 SD in the QUS variables corresponded to an increase in hip fracture risk from 2.3 (95% CI, 1.7, 3.1) to 2.6 (95% CI, 1.9, 3.4) for the three variables of Achilles+ and from 2.2 (95% CI, 1.7, 3.0) to 2.4 (95% CI, 1.8, 3.2) for the three variables of Sahara. Risk gradients did not differ significantly among the variables of the two heel QUS devices.
On the other hand, the phalangeal QUS (DBM Sonic 1200) was not predictive of hip fracture risk, with an adjusted hazard ratio of 1.2 (95% CI, 0.9, 1.5), even after reanalysis of the digitized data using different cut-off levels (1700 or 1570 m/s). CONCLUSIONS: In this population of elderly women, the heel QUS devices were both predictive of hip fracture risk, whereas the phalangeal QUS device was not.
Abstract:
There is an emerging interest in modeling spatially correlated survival data in biomedical and epidemiological studies. In this paper, we propose a new class of semiparametric normal transformation models for right-censored, spatially correlated survival data. This class of models assumes that survival outcomes marginally follow a Cox proportional hazards model with an unspecified baseline hazard, and that their joint distribution is obtained by transforming the survival outcomes to normal random variables whose joint distribution is assumed to be multivariate normal with a spatial correlation structure. A key feature of the class of semiparametric normal transformation models is that it provides a rich class of spatial survival models in which regression coefficients have a population-average interpretation and the spatial dependence of survival times is conveniently modeled, via the transformed variables, by flexible normal random fields. We study the relationship between the spatial correlation structure of the transformed normal variables and the dependence measures of the original survival times. Direct nonparametric maximum likelihood estimation in such models is practically infeasible owing to the high-dimensional intractable integration of the likelihood function and the infinite-dimensional nuisance baseline hazard parameter. We hence develop a class of spatial semiparametric estimating equations, which conveniently estimate the population-level regression coefficients and the dependence parameters simultaneously. We study the asymptotic properties of the proposed estimators and show that they are consistent and asymptotically normal. The proposed method is illustrated with an analysis of data from the East Boston Asthma Study, and its performance is evaluated using simulations.
Abstract:
BACKGROUND: We evaluated the ability of CA15-3 and alkaline phosphatase (ALP) to predict breast cancer recurrence. PATIENTS AND METHODS: Data from seven International Breast Cancer Study Group trials were combined. The primary end point was relapse-free survival (RFS) (time from randomization to first breast cancer recurrence), and analyses included 3953 patients with one or more CA15-3 and ALP measurement during their RFS period. CA15-3 was considered abnormal if >30 U/ml or >50% higher than the first value recorded; ALP was recorded as normal, abnormal, or equivocal. Cox proportional hazards models with a time-varying indicator for abnormal CA15-3 and/or ALP were utilized. RESULTS: Overall, 784 patients (20%) had a recurrence, before which 274 (35%) had one or more abnormal CA15-3 and 35 (4%) had one or more abnormal ALP. Risk of recurrence increased by 30% for patients with abnormal CA15-3 [hazard ratio (HR) = 1.30; P = 0.0005], and by 4% for those with abnormal ALP (HR = 1.04; P = 0.82). Recurrence risk was greatest for patients with either (HR = 2.40; P < 0.0001) and with both (HR = 4.69; P < 0.0001) biomarkers abnormal. ALP better predicted liver recurrence. CONCLUSIONS: CA15-3 was better able to predict breast cancer recurrence than ALP, but use of both biomarkers together provided a better early indicator of recurrence. Whether routine use of these biomarkers improves overall survival remains an open question.
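A time-varying indicator such as the abnormal-CA15-3 flag above is typically handled by splitting each subject's follow-up into counting-process (start, stop] intervals at the moment the marker first turns abnormal, so the indicator changes value mid-follow-up. A schematic sketch for one subject; the field names and row layout are illustrative, not the trial's actual data format.

```python
def split_at_abnormal(follow_up, event, t_abnormal=None):
    """Return counting-process rows (start, stop, abnormal, event) for one
    subject, splitting follow-up at the time the biomarker first turns
    abnormal. Times are in months; `event` is 1 if follow-up ends in
    recurrence. Layout is a sketch of start-stop format, not the trial's."""
    if t_abnormal is None or t_abnormal >= follow_up:
        # marker never abnormal during follow-up: one row, indicator 0
        return [(0.0, follow_up, 0, event)]
    # before the flip the indicator is 0 and no event can be attributed;
    # after the flip the indicator is 1 and the row carries the outcome
    return [(0.0, t_abnormal, 0, 0),
            (t_abnormal, follow_up, 1, event)]

rows = split_at_abnormal(follow_up=24.0, event=1, t_abnormal=10.0)
```

Rows in this start-stop format can be fed to any Cox implementation that accepts counting-process input (e.g. `Surv(start, stop, event)` in R's survival package).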
Abstract:
BACKGROUND: Aromatase inhibitors are considered standard adjuvant endocrine treatment of postmenopausal women with hormone receptor-positive breast cancer, but it remains uncertain whether aromatase inhibitors should be given upfront or sequentially with tamoxifen. Awaiting results from ongoing randomized trials, we examined prognostic factors of an early relapse among patients in the BIG 1-98 trial to aid in treatment choices. PATIENTS AND METHODS: Analyses included all 7707 eligible patients treated on BIG 1-98. The median follow-up was 2 years, and the primary end point was breast cancer relapse. Cox proportional hazards regression was used to identify prognostic factors. RESULTS: Two hundred and eighty-five patients (3.7%) had an early relapse (3.1% on letrozole, 4.4% on tamoxifen). Predictive factors for early relapse were node positivity (P < 0.001), absence of both receptors being positive (P < 0.001), high tumor grade (P < 0.001), HER-2 overexpression/amplification (P < 0.001), large tumor size (P = 0.001), treatment with tamoxifen (P = 0.002), and vascular invasion (P = 0.02). There were no significant interactions between treatment and the covariates, though letrozole appeared to provide a greater than average reduction in the risk of early relapse in patients with many involved lymph nodes, large tumors, and vascular invasion present. CONCLUSION: Upfront letrozole resulted in significantly fewer early relapses than tamoxifen, even after adjusting for significant prognostic factors.
Abstract:
BACKGROUND: The prognostic relevance of the collateral circulation is still controversial. The goal of this study was to assess the impact on survival of quantitatively obtained, recruitable coronary collateral flow in patients with stable coronary artery disease during 10 years of follow-up. METHODS AND RESULTS: Eight hundred forty-five individuals (age 62 ± 11 years), 106 patients without coronary artery disease and 739 patients with chronic stable coronary artery disease, underwent a total of 1053 quantitative, coronary pressure-derived collateral measurements between March 1996 and April 2006. All patients were prospectively included in a collateral flow index (CFI) database containing information on recruitable collateral flow parameters obtained during a 1-minute coronary balloon occlusion. CFI was calculated as CFI = (P(occl) - CVP)/(P(ao) - CVP), where P(occl) is mean coronary occlusive pressure, P(ao) is mean aortic pressure, and CVP is central venous pressure. Patients were divided into groups with poorly developed (CFI < 0.25) or well-grown collateral vessels (CFI ≥ 0.25). Follow-up information on the occurrence of all-cause mortality and major adverse cardiac events after study inclusion was collected. Cumulative 10-year survival rates in relation to all-cause deaths and cardiac deaths were 71% and 88%, respectively, in patients with low CFI and 89% and 97% in the group with high CFI (P=0.0395, P=0.0109). In Cox proportional hazards analysis, the following variables independently predicted elevated cardiac mortality: age, low CFI (as a continuous variable), and current smoking. CONCLUSIONS: A well-functioning coronary collateral circulation saves lives in patients with chronic stable coronary artery disease. Depending on the exact amount of collateral flow recruitable during a brief coronary occlusion, long-term cardiac mortality is reduced to one fourth compared with the situation without collateral supply.
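The CFI formula above can be computed directly from the three pressures. A small sketch with illustrative pressure values (the specific numbers below are made up for the example, not taken from the study):

```python
def collateral_flow_index(p_occl, p_ao, cvp):
    """Pressure-derived collateral flow index:
    CFI = (P_occl - CVP) / (P_ao - CVP), all pressures in mmHg,
    measured during a 1-minute balloon occlusion."""
    return (p_occl - cvp) / (p_ao - cvp)

# illustrative values: occlusive pressure 30, aortic 90, central venous 5 mmHg
cfi = collateral_flow_index(p_occl=30.0, p_ao=90.0, cvp=5.0)
# this study classified CFI >= 0.25 as well-grown collateral vessels
well_grown = cfi >= 0.25
```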
Abstract:
PURPOSE: To compare clinical outcomes of endovascular and open aortic repair of abdominal aortic aneurysms (AAAs) in young patients at low risk. It was hypothesized that endovascular aneurysm repair (EVAR) compares favorably with open aneurysm repair (OAR) in these patients. MATERIALS AND METHODS: Twenty-five patients aged 65 years or younger with a low perioperative surgical risk profile underwent EVAR at a single institution between April 1994 and May 2007 (23 men; mean age 62 ± 2.8 years). A sex- and risk-matched control group of 25 consecutive patients aged 65 years or younger who underwent OAR was used for comparison (23 men; mean age 59 ± 3.9 years). Patient outcomes and complications were classified according to Society of Vascular Surgery/International Society for Cardiovascular Surgery reporting standards. RESULTS: Mean follow-up times were 7.1 ± 3.2 years after EVAR and 5.9 ± 1.8 years after OAR (P=.1020). Total complication rates were 20% after EVAR and 52% after OAR (P=.0378), and all complications were mild or moderate. Mean intensive care unit stays were 0.2 ± 0.4 days after EVAR and 1.1 ± 0.4 days after OAR (P<.0001), and mean lengths of hospital stay were 2.3 ± 1.0 days after EVAR and 5.0 ± 2.1 days after OAR (P<.0001). Cumulative rates of long-term patient survival did not differ between EVAR and OAR (P=.144). No AAA-related deaths or aortoiliac ruptures occurred during follow-up for EVAR and OAR. In addition, no surgical conversions were necessary in EVAR recipients. Cumulative rates of freedom from secondary procedures were not significantly different between the EVAR and OAR groups (P=.418). In a multivariable Cox proportional hazards analysis adjusted for patient age, maximum AAA diameter, and cardiac risk score, all-cause mortality rates (odds ratio [OR], 0.125; 95% CI, 0.010-1.493; P=.100) and the need for secondary procedures (OR, 5.014; 95% CI, 0.325-77.410; P=.537) did not differ between EVAR and OAR.
CONCLUSIONS: Results from this observational study indicate that EVAR offers a favorable alternative to OAR in young patients at low risk.