516 results for Proportional Hazards Model

in the Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Purpose: To investigate the expression pattern of hypoxia-induced proteins identified as being involved in malignant progression of head-and-neck squamous cell carcinoma (HNSCC) and to determine their relationship to tumor pO2 and prognosis. Methods and Materials: We performed immunohistochemical staining of hypoxia-induced proteins (carbonic anhydrase IX [CA IX], BNIP3L, connective tissue growth factor, osteopontin, ephrin A1, hypoxia inducible gene-2, dihydrofolate reductase, galectin-1, IκB kinase β, and lysyl oxidase) on tumor tissue arrays of 101 HNSCC patients with pretreatment pO2 measurements. Analysis of variance and Fisher's exact tests were used to evaluate the relationship between marker expression, tumor pO2, and CA IX staining. A Cox proportional hazards model and log-rank tests were used to determine the relationship between markers and prognosis. Results: Osteopontin expression correlated with tumor pO2 (Eppendorf measurements) (p = 0.04). However, there was a strong correlation between lysyl oxidase, ephrin A1, and galectin-1 and CA IX staining. These markers also predicted for cancer-specific survival and overall survival on univariate analysis. A hypoxia score of 0-5 was assigned to each patient on the basis of the presence of strong staining for these markers, whereby a higher score signifies increased marker expression. On multivariate analysis, increasing hypoxia score was an independent prognostic factor for cancer-specific survival (p = 0.015) and was borderline significant for overall survival (p = 0.057) when adjusted for other independent predictors of outcomes (hemoglobin and age). Conclusions: We identified a panel of hypoxia-related tissue markers that correlates with treatment outcomes in HNSCC. Validation of these markers will be needed to determine their utility in identifying patients for hypoxia-targeted therapy. © 2007 Elsevier Inc. All rights reserved.

Relevance: 100.00%

Abstract:

Purpose: To identify a 15-kDa novel hypoxia-induced secreted protein in head and neck squamous cell carcinomas (HNSCC) and to determine its role in malignant progression. Methods: We used surface-enhanced laser desorption ionization time-of-flight mass spectrometry (SELDI-TOF-MS) and tandem MS to identify a novel hypoxia-induced secreted protein in FaDu cells. We used immunoblots, real-time polymerase chain reaction (PCR), and enzyme-linked immunosorbent assay to confirm the hypoxic induction of this secreted protein as galectin-1 in cell lines and xenografts. We stained tumor tissues from 101 HNSCC patients for galectin-1, CA IX (carbonic anhydrase IX, a hypoxia marker) and CD3 (a T-cell marker). Expression of these markers was correlated with each other and with treatment outcomes. Results: SELDI-TOF studies yielded a hypoxia-induced peak at 15 kDa that proved to be galectin-1 by MS analysis. Immunoblots and PCR studies confirmed increased galectin-1 expression under hypoxia in several cancer cell lines. Plasma levels of galectin-1 were higher in tumor-bearing severe combined immunodeficiency (SCID) mice breathing 10% O2 compared with mice breathing room air. In HNSCC patients, there was a significant correlation between galectin-1 and CA IX staining (P = .01) and a strong inverse correlation between galectin-1 and CD3 staining (P = .01). Expression of galectin-1 and CD3 were significant predictors for overall survival on multivariate analysis. Conclusion: Galectin-1 is a novel hypoxia-regulated protein and a prognostic marker in HNSCC. This study presents a new mechanism by which hypoxia can affect the malignant progression and therapeutic response of solid tumors: by regulating the secretion of proteins that modulate immune privilege. © 2005 by American Society of Clinical Oncology.

Relevance: 100.00%

Abstract:

INTRODUCTION In retrospective analyses of patients with nonsquamous non-small-cell lung cancer treated with pemetrexed, low thymidylate synthase (TS) expression is associated with better clinical outcomes. This phase II study explored this association prospectively at the protein and mRNA-expression level. METHODS Treatment-naive patients with nonsquamous non-small-cell lung cancer (stage IIIB/IV) received four cycles of first-line chemotherapy with pemetrexed/cisplatin. Nonprogressing patients continued on pemetrexed maintenance until progression or maximum tolerability. TS expression (nucleus/cytoplasm/total) was assessed in diagnostic tissue samples by immunohistochemistry (IHC; H-scores) and quantitative reverse-transcriptase polymerase chain reaction. Cox regression was used to assess the association between H-scores and progression-free/overall survival (PFS/OS), with distributions estimated by the Kaplan-Meier method. Maximal χ² analysis identified optimal cutpoints between low TS- and high TS-expression groups, yielding maximal associations with PFS/OS. RESULTS The study enrolled 70 patients; of these, 43 (61.4%) started maintenance treatment. In 60 patients with valid H-scores, median PFS (mPFS) was 5.5 (95% confidence interval [CI], 3.9-6.9) months and median OS (mOS) was 9.6 (95% CI, 7.3-15.7) months. Higher nuclear TS expression was significantly associated with shorter PFS and OS (primary analysis IHC, PFS: p < 0.0001; hazard ratio per 1-unit increase: 1.015; 95% CI, 1.008-1.021). At the optimal cutpoint of nuclear H-score (70), mPFS in the low TS- versus high TS-expression groups was 7.1 (5.7-8.3) versus 2.6 (1.3-4.1) months (p = 0.0015; hazard ratio = 0.28; 95% CI, 0.16-0.52; n = 40/20). Trends were similar for cytoplasm H-scores, quantitative reverse-transcriptase polymerase chain reaction, and other clinical endpoints (OS, response, and disease control). CONCLUSIONS The primary endpoint was met; low TS expression was associated with longer PFS.
Further randomized studies are needed to explore nuclear TS IHC expression as a potential biomarker of clinical outcomes for pemetrexed treatment in larger patient cohorts. © 2013 by the International Association for the Study of Lung Cancer.

Relevance: 100.00%

Abstract:

Purpose The role played by the innate immune system in determining survival from non-small-cell lung cancer (NSCLC) is unclear. The aim of this study was to investigate the prognostic significance of macrophage and mast-cell infiltration in NSCLC. Methods We used immunohistochemistry to identify tryptase+ mast cells and CD68+ macrophages in the tumor stroma and tumor islets in 175 patients with surgically resected NSCLC. Results Macrophages were detected in both the tumor stroma and islets in all patients. Mast cells were detected in the stroma and islets in 99.4% and 68.5% of patients, respectively. Using multivariate Cox proportional hazards analysis, increasing tumor islet macrophage density (P < .001) and tumor islet/stromal macrophage ratio (P < .001) emerged as favorable independent prognostic indicators. In contrast, increasing stromal macrophage density was an independent predictor of reduced survival (P = .001). The presence of tumor islet mast cells (P = .018) and increasing islet/stromal mast-cell ratio (P = .032) were also favorable independent prognostic indicators. Macrophage islet density showed the strongest effect: 5-year survival was 52.9% in patients with an islet macrophage density greater than the median versus 7.7% when less than the median (P < .0001). In the same groups, respectively, median survival was 2,244 versus 334 days (P < .0001). Patients with a high islet macrophage density but incomplete resection survived markedly longer than patients with a low islet macrophage density but complete resection. Conclusion The tumor islet CD68+ macrophage density is a powerful independent predictor of survival from surgically resected NSCLC. The biologic explanation for this and its implications for the use of adjunctive treatment require further study. © 2005 by American Society of Clinical Oncology.

Relevance: 100.00%

Abstract:

OBJECTIVES: Four randomized phase II/III trials investigated the addition of cetuximab to platinum-based, first-line chemotherapy in patients with advanced non-small cell lung cancer (NSCLC). A meta-analysis was performed to examine the benefit/risk ratio for the addition of cetuximab to chemotherapy. MATERIALS AND METHODS: The meta-analysis included individual patient efficacy data from 2018 patients and individual patient safety data from 1970 patients comprising respectively the combined intention-to-treat and safety populations of the four trials. The effect of adding cetuximab to chemotherapy was measured by hazard ratios (HRs) obtained using a Cox proportional hazards model and odds ratios calculated by logistic regression. Survival rates at 1 year were calculated. All applied models were stratified by trial. Tests on heterogeneity of treatment effects across the trials and sensitivity analyses were performed for all endpoints. RESULTS: The meta-analysis demonstrated that the addition of cetuximab to chemotherapy significantly improved overall survival (HR 0.88, p=0.009, median 10.3 vs 9.4 months), progression-free survival (HR 0.90, p=0.045, median 4.7 vs 4.5 months) and response (odds ratio 1.46, p<0.001, overall response rate 32.2% vs 24.4%) compared with chemotherapy alone. The safety profile of chemotherapy plus cetuximab in the meta-analysis population was confirmed as manageable. Neither trials nor patient subgroups defined by key baseline characteristics showed significant heterogeneity for any endpoint. CONCLUSION: The addition of cetuximab to platinum-based, first-line chemotherapy for advanced NSCLC significantly improved outcome for all efficacy endpoints with an acceptable safety profile, indicating a favorable benefit/risk ratio.
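The hazard ratios above come from a Cox model fitted to individual patient data and stratified by trial. As a simpler, purely illustrative sketch of the pooling idea, trial-level log hazard ratios can be combined by fixed-effect inverse-variance weighting (the function name and inputs are hypothetical, not the method used in this meta-analysis):

```python
import math

def pooled_hazard_ratio(hrs, ses):
    """Fixed-effect (inverse-variance) pooling of hazard ratios.

    hrs : per-trial hazard ratios
    ses : standard errors of the corresponding log hazard ratios

    Pooling is done on the log scale, where the estimates are
    approximately normal; each trial is weighted by 1/SE^2.
    """
    log_hrs = [math.log(hr) for hr in hrs]
    weights = [1.0 / se ** 2 for se in ses]
    pooled_log = sum(w * l for w, l in zip(weights, log_hrs)) / sum(weights)
    return math.exp(pooled_log)

# Two hypothetical trials with equal precision: the pooled HR is the
# geometric mean of the individual HRs.
print(pooled_hazard_ratio([0.8, 1.0], [0.1, 0.1]))
```

Stratifying an individual-patient-data Cox model by trial, as in the abstract, is the stronger approach; trial-level pooling is shown here only because it makes the weighting explicit.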

Relevance: 100.00%

Abstract:

This article provides a review of techniques for the analysis of survival data arising from respiratory health studies. Popular techniques such as the Kaplan–Meier survival plot and the Cox proportional hazards model are presented and illustrated using data from a lung cancer study. Advanced issues are also discussed, including parametric proportional hazards models, accelerated failure time models, time-varying explanatory variables, simultaneous analysis of multiple types of outcome events and the restricted mean survival time, a novel measure of the effect of treatment.
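The Kaplan-Meier estimator reviewed above multiplies, at each observed event time, the conditional probability of surviving past that time given survival up to it. A minimal sketch in plain Python (illustrative only, not taken from the article):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : observed times (event or censoring)
    events : 1 if the event (e.g. death) occurred at that time, 0 if censored

    Returns a list of (time, S(t)) pairs at each distinct event time, where
    S(t) is the product over event times t_i <= t of (1 - d_i / n_i),
    with d_i events and n_i subjects at risk just before t_i.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = 0  # events at time t
        m = 0  # total subjects leaving the risk set at time t
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            m += 1
            i += 1
        if d > 0:  # censoring times do not create a step
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= m
    return curve

# Five subjects: events at t=1, 2, 3; censoring at t=2 and t=4.
print(kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]))
```

Note how the censored subject at t=2 still counts in the risk set for the event at t=2 but drops out afterwards; this handling of censoring is what distinguishes the estimator from a naive empirical survival fraction.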

Relevance: 100.00%

Abstract:

Objective The objective of this study was to investigate the effect of chronic kidney disease (CKD) stage 4-5 and dialysis treatment on the incidence of foot ulceration and major lower extremity amputation in comparison to CKD stage 3. Methods In this retrospective study, all individuals who visited our hospital between 2006 and 2012 because of CKD stages 3 to 5 or dialysis treatment were included. Medical records were reviewed for incidence of foot ulceration and major amputation. The time from CKD 3, CKD 4-5, and dialysis treatment until first foot ulceration and first major lower extremity amputation was calculated and analyzed by Kaplan-Meier curves and a multivariate Cox proportional hazards model. Diabetes mellitus, peripheral arterial disease, peripheral neuropathy, and foot deformities were included as potential confounders. Results A total of 669 individuals were included: 539 in CKD 3, 540 in CKD 4-5, and 259 in dialysis treatment (individuals could progress from one group to the next). Unadjusted foot ulcer incidence rates per 1000 patients per year were 12 for CKD 3, 47 for CKD 4-5, and 104 for dialysis (P < .001). In multivariate analyses, the hazard ratio for incidence of foot ulceration was 4.0 (95% confidence interval [CI], 2.6-6.3) in CKD 4-5 and 7.6 (95% CI, 4.8-12.1) in dialysis treatment compared with CKD 3. Hazard ratios for incidence of major amputation were 9.5 (95% CI, 2.1-43.0) and 15 (95% CI, 3.3-71.0), respectively. Conclusions CKD 4-5 and dialysis treatment are independent risk factors for foot ulceration and major amputation compared with CKD 3. Maximum effort is needed in daily clinical practice to prevent foot ulcers and their devastating consequences in all individuals with CKD 4-5 or dialysis treatment.

Relevance: 100.00%

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models are based on the theory of the Proportional Hazards Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, owing to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate all three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nought in EHM, condition indicators always appear, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications failure event data for assets are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM in these two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
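EHM itself is not reproduced here, but the proportional-hazards building block it generalises can be sketched: a Weibull baseline hazard scaled by an exponential link of covariates. This is the classical Weibull PHM, not EHM, and all parameter names are hypothetical:

```python
import math

def weibull_ph_hazard(t, shape, scale, covariates, coefs):
    """Hazard under a Weibull proportional hazards model:
    h(t | z) = (shape/scale) * (t/scale)**(shape-1) * exp(b . z)

    covariates (z) could be condition or operating environment readings;
    coefs (b) are their regression coefficients.
    """
    baseline = (shape / scale) * (t / scale) ** (shape - 1)
    link = math.exp(sum(b * z for b, z in zip(coefs, covariates)))
    return baseline * link

def weibull_ph_reliability(t, shape, scale, covariates, coefs):
    """Reliability R(t | z) = exp(-(t/scale)**shape * exp(b . z)),
    obtained by integrating the hazard from 0 to t and exponentiating."""
    link = math.exp(sum(b * z for b, z in zip(coefs, covariates)))
    return math.exp(-((t / scale) ** shape) * link)

# With a zero covariate the link is 1 and the baseline Weibull is recovered;
# a positive covariate multiplies the hazard by exp(coef) at every time t.
print(weibull_ph_hazard(5, 2, 10, [0], [0.5]))
print(weibull_ph_reliability(5, 2, 10, [0], [0.5]))
```

The "proportionality" criticised in the abstract is visible in the code: the covariate link multiplies the baseline hazard by the same factor at every t, which is exactly the assumption EHM is designed to avoid.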

Relevance: 90.00%

Abstract:

Survival probability prediction using a covariate-based hazard approach is a well-known statistical methodology in engineering asset health management. We have previously reported the semi-parametric Explicit Hazard Model (EHM), which incorporates three types of information for hazard prediction: population characteristics, condition indicators, and operating environment indicators. This model assumes the baseline hazard has the form of the Weibull distribution. To avoid this assumption, this paper presents the non-parametric EHM, a distribution-free covariate-based hazard model. An application of the non-parametric EHM is demonstrated via a case study, in which survival probabilities of a set of resistance elements estimated using the non-parametric EHM are compared with those from the Weibull proportional hazards model and the traditional Weibull model. The results show that the non-parametric EHM can effectively predict asset life using the condition indicator, operating environment indicator, and failure history.
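A standard distribution-free counterpart to a Weibull baseline is the Nelson-Aalen estimator of the cumulative hazard, which makes no parametric assumption about the lifetime distribution. A minimal sketch (illustrative of the distribution-free idea only, not of the non-parametric EHM itself):

```python
def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard H(t):
    H(t) = sum over event times t_i <= t of d_i / n_i,
    where d_i = events at t_i and n_i = number at risk just before t_i.

    times  : observed times (event or suspension/censoring)
    events : 1 for an observed failure, 0 for a suspension
    Returns a list of (time, H(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    cum_hazard = 0.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = m = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]  # failures at time t
            m += 1           # total items leaving the risk set at time t
            i += 1
        if d > 0:
            cum_hazard += d / n_at_risk
            curve.append((t, cum_hazard))
        n_at_risk -= m
    return curve

# Five items: failures at t=1, 2, 3; suspensions at t=2 and t=4.
print(nelson_aalen([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]))
```

A survival estimate then follows as exp(-H(t)), with no Weibull (or any other) shape imposed on the failure-time data.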

Relevance: 90.00%

Abstract:

Background and Significance Venous leg ulcers are a significant cause of chronic ill-health for 1–3% of those aged over 60 years, increasing in incidence with age. The condition is difficult and costly to heal, consuming 1–2.5% of total health budgets in developed countries and up to 50% of community nursing time. Unfortunately, after healing there is a recurrence rate of 60 to 70%, frequently within the first 12 months. Although some risk factors associated with higher recurrence rates have been identified (e.g. prolonged ulcer duration, deep vein thrombosis), in general there is limited evidence on treatments to effectively prevent recurrence. Patients are generally advised to undertake activities which aim to improve the impaired venous return (e.g. compression therapy, leg elevation, exercise). However, only compression therapy has some evidence to support its effectiveness in prevention, and problems with adherence to this strategy are well documented. Aim The aim of this research was to identify factors associated with recurrence by determining relationships between recurrence and demographic factors, health, physical activity, psychosocial factors and self-care activities to prevent recurrence. Methods Two studies were undertaken: a retrospective study of participants diagnosed with a venous leg ulcer which healed 12 to 36 months prior to the study (n=122); and a prospective longitudinal study of participants recruited as their ulcer healed, with data collected for 12 months following healing (n=80). Data were collected from medical records on demographics, medical history and ulcer history and treatments; and from self-report questionnaires on physical activity, nutrition, psychosocial measures, ulcer history, compression and other self-care activities. Follow-up data for the prospective study were collected every three months for 12 months after healing.
For the retrospective study, a logistic regression model determined the independent influences of variables on recurrence. For the prospective study, median time to recurrence was calculated using the Kaplan-Meier method and a Cox proportional-hazards regression model was used to adjust for potential confounders and determine effects of preventive strategies and psychosocial factors on recurrence. Results In total, 68% of participants in the retrospective study and 44% of participants in the prospective study suffered a recurrence. After mutual adjustment for all variables in multivariable regression models, leg elevation, compression therapy, self-efficacy and physical activity were found to be consistently related to recurrence in both studies. In the retrospective study, leg elevation, wearing Class 2 or 3 compression hosiery, the level of physical activity, cardiac disease and self-efficacy scores remained significantly associated (p<0.05) with recurrence. The model was significant (p<0.001), with an R² equivalent of 0.62. Examination of relationships between psychosocial factors and adherence to wearing compression hosiery found wearing compression hosiery was significantly positively associated with participants' knowledge of the cause of their condition (p=0.002), higher self-efficacy scores (p=0.026) and lower depression scores (p=0.009). Analysis of data from the prospective study found there were 35 recurrences (44%) in the 12 months following healing, and median time to recurrence was 27 weeks. After adjustment for potential confounders, a Cox proportional hazards regression model found that at least an hour/day of leg elevation, six or more days/week in Class 2 (20–25mmHg) or 3 (30–40mmHg) compression hosiery, higher social support scale scores and higher General Self-Efficacy scores remained significantly associated (p<0.05) with a lower risk of recurrence, while male gender and a history of DVT remained significant risk factors for recurrence.
Overall the model was significant (p<0.001), with an R² equivalent of 0.72. Conclusions The high rates of recurrence found in the studies highlight the urgent need for further information in this area to support development of effective strategies for prevention. Overall, results indicate leg elevation, physical activity, compression hosiery and strategies to improve self-efficacy are likely to prevent recurrence. In addition, optimal management of depression and strategies to improve patient knowledge and self-efficacy may positively influence adherence to compression therapy. This research provides important information for development of strategies to prevent recurrence of venous leg ulcers, with the potential to improve health and decrease health care costs in this population.

Relevance: 90.00%

Abstract:

Aim To identify relationships between preventive activities, psychosocial factors and leg ulcer recurrence in patients with chronic venous leg ulcers. Background Chronic venous leg ulcers are slow to heal and frequently recur, resulting in years of suffering and intensive use of health care resources. Methods A prospective longitudinal study was undertaken with a sample of 80 patients with a venous leg ulcer recruited when their ulcer healed. Data were collected from 2006–2009 from medical records on demographics, medical history and ulcer history; and from self-report questionnaires on physical activity, nutrition, preventive activities and psychosocial measures. Follow-up data were collected via questionnaires every three months for 12 months after healing. Median time to recurrence was calculated using the Kaplan-Meier method. A Cox proportional-hazards regression model was used to adjust for potential confounders and determine effects of preventive strategies and psychosocial factors on recurrence. Results: There were 35 recurrences in a sample of 80 participants. Median time to recurrence was 27 weeks. After adjustment for potential confounders, a Cox proportional hazards regression model found that at least an hour/day of leg elevation, six or more days/week in Class 2 (20–25mmHg) or 3 (30–40mmHg) compression hosiery, higher social support scale scores and higher General Self-Efficacy scores remained significantly associated (p<0.05) with a lower risk of recurrence, while male gender and a history of DVT remained significant risk factors for recurrence. Conclusion Results indicate that leg elevation, compression hosiery, high levels of self-efficacy and strong social support will help prevent recurrence.

Relevance: 90.00%

Abstract:

Background: There is currently no early predictive marker of survival for patients receiving chemotherapy for malignant pleural mesothelioma (MPM). Tumour response may be predictive for overall survival (OS), though this has not been explored. We have thus undertaken a combined analysis of OS, from a 42-day landmark, of 526 patients receiving systemic therapy for MPM. We also validate published progression-free survival rates (PFSRs) and a progression-free survival (PFS) prognostic-index model. Methods: Analyses included nine MPM clinical trials incorporating six European Organisation for Research and Treatment of Cancer (EORTC) studies. Analysis of OS from the landmark (day 42 post-treatment) was considered with regard to tumour response. PFSR analysis data included six non-EORTC MPM clinical trials. Prognostic index validation was performed on one non-EORTC data-set with available survival data. Results: Median OS from the landmark was 12.8 months for patients with partial response (PR), 9.4 months for stable disease (SD) and 3.4 months for progressive disease (PD). Both PR and SD were associated with longer OS from the landmark compared with disease progression (both p < 0.0001). PFSRs for platinum-based combination therapies were consistent with published significant clinical activity ranges. Effective separation between PFS and OS curves provided a validation of the EORTC prognostic model, based on histology, stage and performance status. Conclusion: Response to chemotherapy is associated with significantly longer OS from the landmark in patients with MPM. © 2012 Elsevier Ltd. All rights reserved.

Relevance: 90.00%

Abstract:

Background The incidence of malignant mesothelioma is increasing. There is the perception that survival is worse in the UK than in other countries. However, it is important to compare survival in different series based on accurate prognostic data. The European Organisation for Research and Treatment of Cancer (EORTC) and the Cancer and Leukaemia Group B (CALGB) have recently published prognostic scoring systems. We have assessed the prognostic variables, validated the EORTC and CALGB prognostic groups, and evaluated survival in a series of 142 patients. Methods Case notes of 142 consecutive patients presenting in Leicester since 1988 were reviewed. Univariate analysis of prognostic variables was performed using a Cox proportional hazards regression model. Statistically significant variables were analysed further in a forward, stepwise multivariate model. EORTC and CALGB prognostic groups were derived, Kaplan-Meier survival curves plotted, and survival rates were calculated from life tables. Results Significant poor prognostic factors in univariate analysis included male sex, older age, weight loss, chest pain, poor performance status, low haemoglobin, leukocytosis, thrombocytosis, and non-epithelial cell type (p<0.05). The prognostic significance of cell type, haemoglobin, white cell count, performance status, and sex were retained in the multivariate model. Overall median survival was 5.9 (range 0-34.3) months. One and two year survival rates were 21.3% (95% CI 13.9 to 28.7) and 3.5% (0 to 8.5), respectively. Median, one, and two year survival data within prognostic groups in Leicester were equivalent to the EORTC and CALGB series. Survival curves were successfully stratified by the prognostic groups. Conclusions This study validates the EORTC and CALGB prognostic scoring systems which should be used both in the assessment of survival data of series in different countries and in the stratification of patients into randomised clinical studies.