4 results for Tilted-time window model
in DigitalCommons@The Texas Medical Center
Abstract:
Brain tumors are among the most aggressive cancers in humans, with an estimated median survival time of 12 months; only 4% of patients survive more than 5 years after diagnosis. Until recently, brain tumor prognosis has been based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information may improve predictions of patient survival time and, consequently, better guide future treatment decisions. To evaluate this hypothesis, the general goal of this research is to build models for survival prediction of glioma patients using DNA molecular profiles (Affymetrix U133 gene expression microarrays) along with clinical information. First, a predictive Random Forest model is built for binary outcomes (i.e., short- vs. long-term survival) and a small subset of genes whose expression values can be used to predict survival time is selected. Next, a new statistical methodology is developed for predicting time-to-death outcomes using Bayesian ensemble trees. Because of the large heterogeneity observed within the prognostic classes obtained by the Random Forest model, prediction can be improved by relating time-to-death to the gene expression profile directly. We propose a Bayesian ensemble model for survival prediction that is appropriate for high-dimensional data such as gene expression data. Our approach is based on the ensemble "sum-of-trees" model, which is flexible enough to incorporate both additive and interaction effects between genes. We specify a fully Bayesian hierarchical approach and illustrate our methodology for the Cox proportional hazards (CPH), Weibull, and accelerated failure time (AFT) survival models. We overcome the lack of conjugacy with a latent variable formulation for the covariate effects, which decreases the computation time for model fitting.
Our proposed models also provide a model-free way to select important predictive prognostic markers by controlling false discovery rates. We compare the performance of our methods with baseline reference survival methods and apply our methodology to an unpublished data set of brain tumor survival times and gene expression data, selecting genes potentially related to the development of the disease under study. A closing discussion compares the results obtained by the Random Forest and Bayesian ensemble methods from biological/clinical perspectives and highlights the statistical advantages and disadvantages of the new methodology in the context of DNA microarray data analysis.
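To make the "sum-of-trees" idea concrete, here is a minimal sketch, not the dissertation's Bayesian implementation: the regression function is an additive combination of many shallow trees, here two hard-coded depth-1 stumps. The feature names (gene_a, gene_b), thresholds, and leaf values are all hypothetical, chosen only to show how additive and interaction structure can be built from simple trees.

```python
def stump(x, feature, threshold, left, right):
    """A depth-1 tree: return `left` if x[feature] <= threshold, else `right`."""
    return left if x[feature] <= threshold else right

def sum_of_trees(x, trees):
    """f(x) = sum of the outputs of all trees in the ensemble."""
    return sum(stump(x, *t) for t in trees)

# Two illustrative stumps: one on each hypothetical gene.
trees = [
    ("gene_a", 0.5, -1.0, 1.0),
    ("gene_b", 1.2, 0.3, -0.3),
]

# A hypothetical patient's expression profile.
patient = {"gene_a": 0.8, "gene_b": 1.0}
print(sum_of_trees(patient, trees))  # 1.0 + 0.3 = 1.3
```

In a Bayesian ensemble such as BART, the tree structures and leaf values would be sampled from a posterior rather than fixed, but each draw has exactly this additive form.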
Abstract:
Radiotherapy has been a method of choice in cancer treatment for a number of years. Mathematical modeling is an important tool for studying the survival behavior of any cell as well as its radiosensitivity. One particular cell under investigation is the normal T-cell, whose radiosensitivity may be indicative of the patient's tolerance to radiation doses.

The model derived is a compound branching process with a random initial population of T-cells that is assumed to follow a compound distribution. T-cells in any generation are assumed to double or die at random lengths of time. This population is assumed to undergo a random number of generations within a period of time. The model is then used to obtain an estimate of the survival probability of T-cells for the data under investigation. This estimate is derived iteratively by applying the likelihood principle. The validity of the model is further assessed by simulating a number of subjects under the model.

This study shows that there is a great deal of variation in T-cell survival from one individual to another. These variations can be observed under normal conditions as well as under radiotherapy. The findings are in agreement with a recent study and show that genetic diversity plays a role in determining the survival of T-cells.
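A toy sketch of the double-or-die dynamics described above, not the dissertation's compound model or its likelihood-based estimator: in a Galton-Watson branching process where each cell doubles with probability p or dies with probability 1 - p, the extinction probability q is the smallest fixed point of the offspring generating function, q = (1 - p) + p q², which can be found by fixed-point iteration.

```python
def extinction_probability(p, tol=1e-12, max_iter=10_000):
    """Smallest root of q = (1 - p) + p * q**2 via fixed-point iteration.

    Each cell independently doubles (prob. p) or dies (prob. 1 - p).
    Iterating q <- g(q) from q = 0 converges to the extinction probability.
    """
    q = 0.0
    for _ in range(max_iter):
        q_next = (1 - p) + p * q * q
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# For p > 1/2 the closed form is q = (1 - p) / p, e.g. p = 0.6 gives 2/3.
print(extinction_probability(0.6))
```

One minus this quantity is the long-run survival probability of a lineage, the population-level analogue of the per-individual survival probabilities the abstract's estimator targets.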
Abstract:
Although several detailed models of molecular processes essential for circadian oscillations have been developed, their complexity makes intuitive understanding of the oscillation mechanism difficult. The goal of the present study was to reduce a previously developed, detailed model to a minimal representation of the transcriptional regulation essential for circadian rhythmicity in Drosophila. The reduced model contains only two differential equations, each with time delays. A negative feedback loop is included, in which PER protein represses per transcription by binding the dCLOCK transcription factor. A positive feedback loop is also included, in which dCLOCK indirectly enhances its own formation. The model simulated circadian oscillations, light entrainment, and a phase-response curve with qualitative similarities to experiment. Time delays were found to be essential for simulation of circadian oscillations with this model. To examine the robustness of the simplified model to fluctuations in molecule numbers, a stochastic variant was constructed. Robust circadian oscillations and entrainment to light pulses were simulated with fewer than 80 molecules of each gene product present on average. Circadian oscillations persisted when the positive feedback loop was removed. Moreover, elimination of positive feedback did not decrease the robustness of oscillations to stochastic fluctuations or to variations in parameter values. Such reduced models can aid understanding of the oscillation mechanisms in Drosophila and in other organisms in which feedback regulation of transcription may play an important role.
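The essential role of time delays can be illustrated with a minimal sketch. The following is NOT the dissertation's two-equation model; it is a single Mackey-Glass-style delay equation, dP/dt = 1 / (1 + P(t - tau)^n) - d·P, integrated with forward Euler, showing that a delay in a repressive feedback term is by itself enough to produce sustained oscillations. All parameter values are illustrative.

```python
def simulate(tau=5.0, n=10, d=0.5, dt=0.01, t_end=200.0, history=0.5):
    """Forward-Euler integration of a delayed negative-feedback equation.

    Production 1 / (1 + P(t - tau)**n) is repressed by the delayed value
    of P itself; d is the degradation rate; `history` is the constant
    value of P on [-tau, 0].
    """
    lag = int(round(tau / dt))       # delay expressed in time steps
    traj = [history] * (lag + 1)     # constant history for t <= 0
    steps = int(round(t_end / dt))
    for _ in range(steps):
        p = traj[-1]
        p_delayed = traj[-1 - lag]   # P(t - tau)
        traj.append(p + dt * (1.0 / (1.0 + p_delayed ** n) - d * p))
    return traj

traj = simulate()
```

With these illustrative parameters the fixed point P* = 1 is unstable once the delay is large enough, and the trajectory settles into a sustained oscillation around it; removing the delay (tau near 0) lets P relax monotonically to the fixed point instead.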
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate effect while varying the failure rate, sample size, true parameter values, and the number of intervals. We also evaluated how often a time-dependent covariate needs to be collected and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect.

A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so that the failure times were exponentially distributed within the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariate coefficients (1/4, 1/2, 3/4).

The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and the number of intervals increased, whereas it increased as the failure rate and the true coefficient values increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20).
The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
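Including every updated measurement in a Cox fit relies on the standard counting-process data layout, in which each subject is expanded into one (start, stop] row per interval on which the covariate is constant. A minimal sketch of that expansion, with a hypothetical helper and illustrative values not taken from the study:

```python
def to_counting_process(event_time, event, cov_times, cov_values):
    """Expand one subject into (start, stop, status, covariate) rows.

    cov_times[i] is the time the covariate changed to cov_values[i];
    cov_times[0] must be 0 (the baseline measurement). `event` is 1 for
    an observed failure at event_time, 0 for censoring.
    """
    rows = []
    for i, start in enumerate(cov_times):
        if start >= event_time:
            break  # covariate updates after follow-up ended are dropped
        stop = cov_times[i + 1] if i + 1 < len(cov_times) else event_time
        stop = min(stop, event_time)
        status = int(event and stop == event_time)  # failure only on last row
        rows.append((start, stop, status, cov_values[i]))
    return rows

# Covariate measured at t = 0, 2, 5; subject fails at t = 6.
rows = to_counting_process(6.0, 1, [0.0, 2.0, 5.0], [0, 1, 1])
# -> [(0.0, 2.0, 0, 0), (2.0, 5.0, 0, 1), (5.0, 6.0, 1, 1)]
```

Dropping some rows from this expansion corresponds to the reduced-interval models the abstract compares; the full expansion is the model found to have the smallest bias and highest power.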