994 results for Unimodal hazard function


Relevance: 80.00%

Abstract:

In many clinical trials to evaluate treatment efficacy, it is believed that there may exist latent treatment effectiveness lag times, after which the medical procedure or chemical compound would be in full effect. In this article, semiparametric regression models are proposed and studied to estimate the treatment effect while accounting for such latent lag times. The new models take advantage of the invariance property of the additive hazards model when marginalizing over random effects, so the parameters in the models are easy to estimate and interpret, while the flexibility of leaving the baseline hazard function unspecified is retained. Monte Carlo simulation studies demonstrate the appropriateness of the proposed semiparametric estimation procedure. Data collected in an actual randomized clinical trial, which evaluated the effectiveness of biodegradable carmustine polymers for the treatment of recurrent brain tumors, are analyzed.
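The invariance property invoked above can be made concrete. In one common formulation (notation mine, not quoted from the article), the additive hazards model with a subject-level random effect b is

```latex
\lambda(t \mid x, b) = \lambda_0(t) + b + x^{\prime}\beta .
```

Marginalizing the survival function over b gives S(t | x) = E[e^{-bt}] \exp\{-\Lambda_0(t) - x^{\prime}\beta\, t\}, so the marginal hazard is \tilde{\lambda}_0(t) + x^{\prime}\beta with \tilde{\lambda}_0(t) = \lambda_0(t) - \tfrac{d}{dt}\log E[e^{-bt}]: the random effect is absorbed into a new baseline while the regression coefficients \beta keep their interpretation, which is the property the article exploits.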

Relevance: 80.00%

Abstract:

Mathematical models of disease progression predict disease outcomes and are useful epidemiological tools for planners and evaluators of health interventions. The R package gems is a tool that simulates disease progression in patients and predicts the effect of different interventions on patient outcome. Disease progression is represented by a series of events (e.g., diagnosis, treatment and death), displayed in a directed acyclic graph. The vertices correspond to disease states and the directed edges represent events. The package gems allows simulations based on a generalized multistate model that can be described by a directed acyclic graph with continuous transition-specific hazard functions. The user can specify an arbitrary hazard function and its parameters. The model includes parameter uncertainty, does not need to be a Markov model, and may take the history of previous events into account. Applications are not limited to the medical field and extend to other areas where multistate simulation is of interest. We provide a technical explanation of the multistate models used by gems, explain the functions of gems and their arguments, and show a sample application.
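In the spirit of the package description, a multistate DAG with transition-specific hazards can be sketched in a few lines. The three states, the constant (exponential) hazards, and all rate values below are invented for illustration; gems itself is an R package that accepts arbitrary hazard functions, so this Python sketch only mirrors the idea:

```python
import numpy as np

# Hypothetical 3-state DAG: 1 = diagnosed, 2 = treated, 3 = dead.
# Each directed edge carries its own transition-specific hazard; constant
# (exponential) hazards are assumed here purely for simplicity.
RATES = {(1, 2): 0.50,   # diagnosis -> treatment
         (1, 3): 0.05,   # diagnosis -> death without treatment
         (2, 3): 0.10}   # treatment -> death

def simulate_patient(rng):
    """Walk one patient through the DAG, competing-risks style."""
    state, t, path = 1, 0.0, [1]
    while True:
        edges = [(s2, r) for (s1, s2), r in RATES.items() if s1 == state]
        if not edges:                      # absorbing state reached
            return t, path
        # Draw a candidate time for every outgoing edge; the earliest wins.
        times = [(rng.exponential(1.0 / r), s2) for s2, r in edges]
        dt, nxt = min(times)
        t += dt
        state = nxt
        path.append(state)

rng = np.random.default_rng(42)
sims = [simulate_patient(rng) for _ in range(5000)]
mean_survival = np.mean([t for t, _ in sims])
```

Because each transition draws from its own hazard, the event history (the path taken) is retained, which is what allows non-Markov extensions of the kind gems supports.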

Relevance: 80.00%

Abstract:

Analysis of recurrent events has been widely discussed in the medical, health services, insurance, and engineering fields in recent years. This research proposes to use a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function of recurrent events data and the associated risk factors. This method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t) · exp(x′β). One advantage of using a nonhomogeneous Yule process for recurrent events is that it assumes the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to estimate the parameters of the model, and a generalized scoring iterative procedure is applied in the numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example concerning recurrent myocardial infarction events, compared between two distinct populations (Mexican-Americans and Non-Hispanic Whites) in the Corpus Christi Heart Project, is examined.
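A minimal simulation of the process just described, under two simplifying assumptions of mine: a constant baseline intensity λ_0 and a single scalar covariate. The event rate grows with the running event count, which is the defining Yule feature:

```python
import numpy as np

def simulate_yule(lam0, beta, x, t_max, rng):
    """Simulate event times of a Yule-type process whose intensity is
    proportional to the current event count plus one:
        lambda(t) = lam0 * exp(x * beta) * (N(t) + 1).
    A constant baseline lam0 is assumed here for illustration."""
    base = lam0 * np.exp(x * beta)
    t, n, events = 0.0, 0, []
    while True:
        # With n events so far, the waiting time is exponential
        # with rate base * (n + 1).
        t += rng.exponential(1.0 / (base * (n + 1)))
        if t > t_max:
            return events
        events.append(t)
        n += 1

rng = np.random.default_rng(0)
counts = [len(simulate_yule(0.2, 0.7, 1.0, 5.0, rng)) for _ in range(2000)]
```

For a pure birth process with these rates, N(t) + 1 is geometric with mean exp(base · t), so the simulated mean count can be checked against that closed form.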

Relevance: 80.00%

Abstract:

My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses into early stopping decisions.

Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern of the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships.

Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To handle more efficiently the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to select simultaneously among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment, allocates substantially more patients to efficacious treatments, and provides higher power to identify the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further.

Phase II trials are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to advance to a phase III trial. Interim monitoring is employed to stop a trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses with a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate under different true response rates.
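As a sketch of what continuous Bayesian monitoring of a response rate can look like, here is a generic beta-binomial futility rule. The prior, the lower bound P0, and the stopping cutoff are all invented for illustration; the dissertation's actual design additionally models the time to response, which this sketch omits:

```python
from scipy.stats import beta

# Hypothetical futility rule for a single-arm phase II trial with a
# Beta(1, 1) prior on the response rate p: stop early if the posterior
# probability that p exceeds the physician-specified lower bound P0
# falls below a cutoff. All numbers here are illustrative.
P0, STOP_CUTOFF = 0.30, 0.05

def keep_going(responses, enrolled):
    """Continue iff posterior Pr(p > P0 | data) >= STOP_CUTOFF,
    under a Beta(1, 1) prior."""
    a = 1 + responses
    b = 1 + enrolled - responses
    prob_promising = 1.0 - beta.cdf(P0, a, b)
    return bool(prob_promising >= STOP_CUTOFF)
```

With this rule, 2 responses in 20 patients leaves almost no posterior mass above 0.30 and triggers a stop, while 8 responses in 20 keeps the trial open; evaluating the rule after every patient gives the continuous-monitoring behaviour described above.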

Relevance: 80.00%

Abstract:

The determination of the size as well as the power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome, and investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns, and a Weibull distribution is employed to simulate survival times according to the different hazard structures. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms, i.e., a hazard ratio of one. The power of the log-rank test at specific values of the hazard ratio (≠ 1) is estimated by simulating survival times with different, but proportional, hazard rates for the two arms. Different shapes (constant, decreasing, or increasing) of the Weibull hazard function are also considered to assess the effect of the hazard structure on the size and power of the log-rank test.
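The simulation recipe above can be sketched as follows. The shape, scale, arm sizes, and replicate count are illustrative, accrual and censoring are omitted, and the log-rank statistic is computed directly for the uncensored, untied case:

```python
import numpy as np

def weibull_times(n, shape, scale, rng):
    """Survival times whose hazard is (shape/scale) * (t/scale)**(shape-1)."""
    return scale * rng.weibull(shape, size=n)

def logrank_chi2(t1, t2):
    """Two-sample log-rank chi-square statistic (no censoring, no ties)."""
    times = np.concatenate([t1, t2])
    in_g1 = np.concatenate([np.ones(len(t1)), np.zeros(len(t2))])
    in_g1 = in_g1[np.argsort(times)]
    obs = exp = var = 0.0
    at_risk1, at_risk = float(len(t1)), float(len(times))
    for g in in_g1:                       # one event per time point
        p = at_risk1 / at_risk            # expected share of the event
        obs, exp, var = obs + g, exp + p, var + p * (1.0 - p)
        at_risk1 -= g
        at_risk -= 1.0
    return (obs - exp) ** 2 / var

# Power at hazard ratio 2: under proportional Weibull hazards with shape k,
# scaling the scale parameter by hr**(-1/k) multiplies the hazard by hr.
rng = np.random.default_rng(1)
k, scale, hr, reps = 1.5, 10.0, 2.0, 400
rejections = 0
for _ in range(reps):
    control = weibull_times(50, k, scale, rng)
    treated = weibull_times(50, k, scale * hr ** (-1.0 / k), rng)
    rejections += logrank_chi2(control, treated) > 3.84   # 5% level, 1 df
power = rejections / reps
```

Setting hr = 1.0 in the same loop estimates the size of the test; varying k across values below, at, and above 1 reproduces the decreasing, constant, and increasing hazard shapes the study considers.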

Relevance: 80.00%

Abstract:

Background: The follow-up care for women with breast cancer requires an understanding of disease recurrence patterns; the follow-up visit schedule should be determined according to the times when recurrences are most likely to occur, so that preventive measures can be taken to avoid or minimize recurrence. Objective: To model breast cancer recurrence through a stochastic process, with the aim of generating a hazard function for determining a follow-up schedule. Methods: We modeled the process of disease progression as a time-transformed Wiener process, and the first hitting time was used as an approximation of the true failure time. A woman's recurrence-free survival time is modeled by the time it takes the Wiener process to cross a threshold value, which represents the woman experiencing a breast cancer recurrence event. Following development of the first-hitting-time model, we explored a threshold regression model that takes account of covariates contributing to the prognosis of breast cancer. Using real data from SEER-Medicare, we proposed models of the follow-up visit schedule on the basis of a constant probability of disease recurrence between consecutive visits. Results: We demonstrated that threshold regression based on the first-hitting-time modeling approach can provide useful predictive information about breast cancer recurrence. Our results suggest that the surveillance and follow-up schedule can be determined for women based on prognostic factors such as tumor stage: women with early-stage disease may be seen less frequently for follow-up visits than women with locally advanced stages. Our results from the SEER-Medicare data support the idea of risk-controlled follow-up strategies for groups of women. Conclusion: The methodology proposed in this study allows one to determine individual follow-up scheduling based on a parametric hazard function that incorporates known prognostic factors.
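The first-hitting-time construction has a well-known closed form: for a Wiener process with drift, the hitting time follows an inverse Gaussian distribution, whose hazard function is unimodal — exactly the kind of hazard a recurrence-driven visit schedule would track. A sketch with invented parameters:

```python
import numpy as np
from scipy.stats import invgauss

# First-hitting-time sketch: the health process starts at distance a from
# the recurrence threshold and drifts toward it at rate nu (sigma = 1).
# The hitting time is then inverse Gaussian with mean a/nu and shape a**2.
# The parameter values are illustrative only.
a, nu = 4.0, 0.5
mean, lam = a / nu, a ** 2                 # IG(mean, shape) parameterization
fht = invgauss(mu=mean / lam, scale=lam)   # scipy's (mu, scale) mapping

t = np.linspace(0.1, 30.0, 300)
hazard = fht.pdf(t) / fht.sf(t)   # h(t) = f(t) / S(t); unimodal in t
peak_time = t[np.argmax(hazard)]  # when recurrence risk is highest
```

Scheduling visits so that equal probability of recurrence falls between consecutive visits, as the abstract describes, amounts to spacing visits densely around `peak_time` and more sparsely in the tails.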

Relevance: 80.00%

Abstract:

It is well known that an identification problem exists in the analysis of age-period-cohort data because of the relationship among the three factors (date of birth + age at death = date of death). There have been numerous suggestions about how to analyze such data, but no solution has been satisfactory. The purpose of this study is to provide another analytic method by extending Cox's life-table regression model with time-dependent covariates. The new approach has the following features: (1) It is based on the conditional maximum likelihood procedure using a proportional hazard function described by Cox (1972), treating the age factor as the underlying hazard in order to estimate the parameters for the cohort and period factors. (2) The model is flexible, so that both the cohort and period factors can be treated as dummy or continuous variables, and parameter estimates can be obtained for numerous combinations of variables, as in a regression analysis. (3) The model is applicable even when the time periods are unequally spaced. Two specific models are considered to illustrate the new approach and applied to the U.S. prostate cancer data. We find that there are significant differences between all cohorts and that there is a significant period effect for both whites and nonwhites. The underlying hazard increases exponentially with age, indicating that old people have a much higher risk than young people. A log transformation of relative risk shows that prostate cancer risk declined in recent cohorts under both models. However, prostate cancer risk declined 5 cohorts (25 years) earlier for whites than for nonwhites under the period factor model (0 0 0 1 1 1 1). These latter results are similar to the previous study by Holford (1983). The new approach offers a general method to analyze age-period-cohort data without using any arbitrary constraint in the model.
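In symbols (my notation, not quoted from the dissertation), feature (1) amounts to a Cox model with age a as the time scale and cohort c and period p(a) as covariates, the period covariate being time-dependent because calendar period advances with age:

```latex
\lambda(a \mid c, p) = \lambda_0(a)\,\exp\{\beta_c\, c + \beta_p\, p(a)\},
```

where \lambda_0(a) is the unspecified underlying age hazard. Because age is absorbed into the baseline rather than given its own regression coefficient, only the cohort and period parameters are estimated, which is how the approach sidesteps an explicit constraint among the three factors.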

Relevance: 80.00%

Abstract:

There is growing concern about catastrophes of natural origin yet to come, and studies are being carried out from almost every branch of science. Even though it is not the only motive, fear that upcoming events might jeopardize human activities is the main one. As a consequence, there is wide dispersion even in the most elementary concepts, such as what is to be considered or how one element or another should be named and catalogued; methods for understanding natural risks therefore also differ greatly, and a truly multidisciplinary approach has seldom been followed. Some efforts have been made to create a common understanding of the matter; the "Floods Directive" and, more recently, the Inspire Directive are two examples. Insurance and reinsurance entities are an important actor among the many involved in risk studies. Their interest lies in the fact that, eventually, they pay most of the bill, if not all of it. But how much that bill could amount to is not an easy question to answer, even in very specific cases, and yet it is the question constantly posed by decision makers at all levels. This document summarizes the research activities carried out to lay down a frame of reference, implementing numerical approaches capable of coping with some of the most relevant issues found in almost all natural risk studies and testing concepts pragmatically. To do so, an experimental site was selected according to several criteria, such as population density, the ease of providing clear geographical boundaries, the presence of three of the most important geological processes (floods, earthquakes, and volcanism), and data availability.

The model proposed here takes advantage of very different data sources to assess natural hazards, pointing out the need for a multidisciplinary approach, and uses a single unified, independent (not objective-driven), consistent, and homogeneous data catalogue to estimate property value. The data are exploited differently for each hazard type, while the underlying concepts remain unchanged. During this research, a deep detachment was found between actual losses and hazard probabilities, contrary to what has been thought to be the most likely behaviour of natural hazards, showing that risk studies have a very limited lifespan. Partly because of this finding, the model in this study works with scenarios of fixed probability of occurrence, as opposed to the classical approach of evaluating continuous hazard functions. Another reason for addressing the question through scenarios is to force the model to provide credible maximum-damage figures once parameters such as the spatial location of an event and its probability are fixed, offering a new view of the "worst case" scenario of a known return period.

Relevance: 80.00%

Abstract:

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique, and model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those previously published. Results indicated that a novel hazard function definition, which included both ambient pressure scaling and individually fitted compartment exponent scaling terms, performed best.

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve correlation between the model predictions and the observed data. Model selection techniques identified two models as having the best overall performance, but comparison with the best-performing model without delays, and model selection using our best no-delay pharmacokinetic model, both indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.

Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure will not occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identified regions where model failure will not occur and located the boundaries of each region using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.

Relevance: 30.00%

Abstract:

We examined differences in response latencies obtained during a validated video-based hazard perception driving test between three healthy, community-dwelling groups: 22 mid-aged (35-55 years), 34 young-old (65-74 years), and 23 old-old (75-84 years) current drivers, matched for gender, education level, and vocabulary. We found no significant difference in performance between mid-aged and young-old groups, but the old-old group was significantly slower than the other two groups. The differences between the old-old group and the other groups combined were independently mediated by useful field of view (UFOV), contrast sensitivity, and simple reaction time measures. Given that hazard perception latency has been linked with increased crash risk, these results are consistent with the idea that increased crash risk in older adults could be a function of poorer hazard perception, though this decline does not appear to manifest until age 75+ in healthy drivers.

Relevance: 30.00%

Abstract:

PURPOSE: To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. METHODS: Thirty-six visually normal participants (aged 19-80 years) completed a battery of standard vision tests, including visual acuity, contrast sensitivity, and automated visual fields, and two tests of motion perception: sensitivity to movement of a drifting Gabor stimulus, and sensitivity to displacement in a random-dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured response times to hazards embedded in video recordings of real-world driving and which has been shown to be linked to crash risk. RESULTS: Dmin for the random-dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship between the HPT and motion sensitivity for the random-dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. CONCLUSION: These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards, independent of other areas of visual function, and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception in order to develop better interventions to improve road safety.

Relevance: 30.00%

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset: condition indicators reflect the level of degradation of an asset, while operating environment indicators accelerate or decelerate its lifetime. When such data are available, an alternative to traditional reliability analysis is to model the condition indicators, the operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model.

The literature review indicates that a number of covariate-based hazard models have been developed, all of them based on the principle of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of the PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to the PHM have been suggested. The existing covariate-based hazard models do not fully utilise the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics: they are non-homogeneous covariate data. Condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). Nevertheless, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model.

This work presents a new approach to these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of asset health information into the modelling of hazard and reliability predictions, and also describes the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and the condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators; they are included in the covariate function of EHM and may increase or decrease the hazard relative to the baseline. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators can be nought in EHM, the condition indicators are always present, because they are observed and measured as long as an asset is operational.

EHM has several advantages over the existing covariate-based hazard models. First, it utilises three different sources of asset health data (population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Second, it explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM takes two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (the Weibull distribution) for the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive distributional assumption of the semi-parametric EHM, the non-parametric EHM, a distribution-free model, has been developed. The availability of both forms is another merit of the model.

A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and the non-parametric EHM. The performance of the newly developed models was appraised by comparing their estimates with those of the other existing covariate-based hazard models; the comparison demonstrated that both forms of EHM outperform the existing models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
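To fix ideas, the classical covariate-based (PHM-style) hazard that EHM builds on can be sketched as follows. The Weibull baseline parameters, the coefficients, and the covariate values are all invented, and this is the standard proportional model, not the EHM itself:

```python
import numpy as np

# Generic covariate-based hazard sketch with a Weibull baseline:
#     h(t | z) = h0(t) * exp(gamma' z)
# All numbers below are illustrative.
BETA, ETA = 2.0, 1000.0        # Weibull shape / scale (hours)
GAMMA = np.array([0.8, 0.3])   # covariate coefficients (assumed)

def hazard(t, z):
    """h(t | z) = h0(t) * exp(gamma' z)."""
    h0 = (BETA / ETA) * (t / ETA) ** (BETA - 1.0)
    return h0 * np.exp(GAMMA @ z)

def reliability(t, z, steps=2000):
    """R(t | z) = exp(-integral_0^t h(u | z) du), trapezoid rule."""
    grid = np.linspace(0.0, t, steps)
    h = hazard(grid, z)
    integral = np.sum((h[1:] + h[:-1]) * np.diff(grid)) / 2.0
    return float(np.exp(-integral))

z = np.array([1.2, 0.5])       # e.g. a vibration level and a load indicator
r500 = reliability(500.0, z)
```

In this proportional form the covariates only rescale a fixed baseline; EHM's departure, as described above, is to let the condition indicators reshape the baseline itself while the operating environment indicators act through the covariate function.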

Relevance: 30.00%

Abstract:

Soil wind erosion is the primary process and the main driving force for land desertification and sand-dust storms in arid and semi-arid areas of Northern China. While many researchers have studied this issue, this study quantified the various indicators of soil wind erosion, using GIS technology to extract the spatial data and to construct an RBFN (Radial Basis Function Network) model for Inner Mongolia. By calibrating sample data for the different levels of wind erosion hazard, the model parameters were established, and the wind erosion hazard was then assessed. Results show that wind erosion hazards are very severe in the southern parts of Inner Mongolia, vary from moderate to severe in the counties of the middle regions, and are slight in the east. Comparison of the results with other research shows conformity with actual conditions, supporting the reasonableness and applicability of the RBFN model. Copyright (C) 2007 John Wiley & Sons, Ltd.
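A minimal RBFN of the kind described can be sketched as follows; the three "indicators", the centres, the kernel width, and the training targets are all synthetic stand-ins (the paper's erosion indicators and calibration data are not reproduced here):

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian basis: phi[i, j] = exp(-||x_i - c_j||^2 / (2 * width**2))."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(7)
X = rng.uniform(0.0, 1.0, size=(200, 3))   # 3 scaled erosion indicators
y = X.sum(axis=1)                          # stand-in hazard score

# Hidden layer: 20 centres picked from the calibration sample.
centres = X[rng.choice(len(X), size=20, replace=False)]
WIDTH = 0.4
Phi = rbf_design(X, centres, WIDTH)

# Output layer: linear least squares on the basis expansion.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = rbf_design(X, centres, WIDTH) @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Mapping the fitted scores back onto the GIS grid and thresholding them into hazard classes (slight / moderate / severe) mirrors the per-county assessment the study reports.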