473 results for failure time model
Abstract:
For clustered survival data, the traditional Gehan-type estimator is asymptotically equivalent to using only the between-cluster ranks, and the within-cluster ranks are ignored. The contribution of this paper is twofold: (i) incorporating within-cluster ranks in censored data analysis; and (ii) applying the induced smoothing of Brown and Wang (2005, Biometrika) for computational convenience. Asymptotic properties of the resulting estimating functions are given. We also carry out numerical studies to assess the performance of the proposed approach and conclude that it can lead to much improved estimators when strong clustering effects exist. A dataset from a litter-matched tumorigenesis experiment is used for illustration.
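To make the induced smoothing idea concrete, the following is a minimal sketch (not the authors' implementation) of a smoothed Gehan-type estimating function for a censored accelerated failure time model, solved with a standard root finder. The data, bandwidth choice, and normalisation are illustrative assumptions, and the within-cluster rank component of the paper is omitted.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import root

rng = np.random.default_rng(0)

# Synthetic AFT data: log T = X beta + error, with random right censoring.
n, p = 120, 2
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, -0.3])
log_t = X @ beta_true + 0.5 * rng.gumbel(size=n)
log_c = rng.normal(loc=1.0, size=n)           # log censoring times
y = np.minimum(log_t, log_c)                  # observed log times
delta = (log_t <= log_c).astype(float)        # 1 = event observed

def smoothed_gehan(beta):
    """Induced-smoothed Gehan estimating function: the indicator
    I(e_i <= e_j) is replaced by a normal CDF whose bandwidth is tied
    to the pairwise covariate distance (one common choice)."""
    e = y - X @ beta                           # residuals
    de = e[None, :] - e[:, None]               # e_j - e_i
    dx = X[:, None, :] - X[None, :, :]         # x_i - x_j, shape (n, n, p)
    r = np.sqrt(np.einsum('ijk,ijk->ij', dx, dx) / n + 1e-12)
    w = delta[:, None] * norm.cdf(de / r)      # smoothed comparisons
    return np.einsum('ij,ijk->k', w, dx) / n ** 1.5

fit = root(smoothed_gehan, x0=np.zeros(p), method='hybr')
print("smoothed Gehan estimate:", fit.x, "(truth:", beta_true, ")")
```

Because the smoothed function is differentiable in beta, a standard numerical solver replaces the linear-programming-style computation that the unsmoothed step function would require.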
Abstract:
We consider a continuous time model for election timing in a Majoritarian Parliamentary System where the government maintains a constitutional right to call an early election. Our model is based on the two-party-preferred data that measure the popularity of the government and the opposition over time. We describe the poll process by a Stochastic Differential Equation (SDE) and use a martingale approach to derive a Partial Differential Equation (PDE) for the government's expected remaining life in office. A comparison is made between a three-year and a four-year maximum term, and we also provide the exercise boundary for calling an election. The impacts of changes in the SDE parameters, the probability of winning the election, and the maximum term on the call exercise boundaries are discussed and analysed. An application of our model to the Australian Federal Election for the House of Representatives is also given.
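As a rough illustration of the ingredients described above, the sketch below simulates a hypothetical polling process and estimates the government's expected remaining life in office by Monte Carlo under a naive full-term policy. The Ornstein-Uhlenbeck dynamics, parameter values, and the 50% winning rule are assumptions for illustration only; the paper instead derives the remaining life from a PDE and computes an optimal exercise boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed polling dynamics (the paper's SDE may differ): mean-reverting
#   dX_t = kappa * (mu - X_t) dt + sigma dW_t,
# with X_t the government's two-party-preferred share.
kappa, mu, sigma = 0.5, 0.5, 0.04
dt, term = 1.0 / 52, 3.0                  # weekly steps, three-year maximum term
steps = int(term / dt)

def expected_remaining_life(x0, n_paths=5_000, max_terms=10):
    """Monte Carlo expected remaining life in office (years) under a naive
    policy: serve every term in full and win re-election if X > 0.5."""
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    life = np.zeros(n_paths)
    for _ in range(max_terms):
        for _ in range(steps):            # Euler-Maruyama within one term
            x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
        life[alive] += term               # governments in office serve the term
        alive &= x > 0.5                  # survive only by winning the election
    return life.mean()

print("expected remaining life from 52% support:", expected_remaining_life(0.52))
```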
Abstract:
This article provides a review of techniques for the analysis of survival data arising from respiratory health studies. Popular techniques such as the Kaplan–Meier survival plot and the Cox proportional hazards model are presented and illustrated using data from a lung cancer study. Advanced issues are also discussed, including parametric proportional hazards models, accelerated failure time models, time-varying explanatory variables, simultaneous analysis of multiple types of outcome events and the restricted mean survival time, a novel measure of the effect of treatment.
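The two popular techniques named above can be illustrated in a few lines with the lifelines Python library; the data below are synthetic stand-ins for the lung cancer study, and the covariate names and effect sizes are invented for the example.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(2)

# Synthetic stand-in for the lung cancer study (names and effects invented).
n = 200
df = pd.DataFrame({
    "age": rng.normal(62, 8, n).round(),
    "treatment": rng.integers(0, 2, n),
})
lin = 0.02 * (df["age"].to_numpy() - 62) - 0.4 * df["treatment"].to_numpy()
true_time = rng.exponential(scale=24.0 * np.exp(-lin))   # months
censor_time = rng.uniform(6, 36, n)                      # administrative censoring
df["time"] = np.minimum(true_time, censor_time)
df["event"] = (true_time <= censor_time).astype(int)

# Kaplan-Meier survival curve.
km = KaplanMeierFitter().fit(df["time"], event_observed=df["event"])
print("median survival:", km.median_survival_time_)

# Cox proportional hazards regression.
cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cox.print_summary()
```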
Abstract:
A smoothed rank-based procedure is developed for the accelerated failure time model to overcome computational issues. The proposed estimator is based on an EM-type procedure coupled with induced smoothing. The proposed iterative approach converges provided the initial value is based on a consistent estimator, and the limiting covariance matrix can be obtained from a sandwich-type formula. The consistency and asymptotic normality of the proposed estimator are also established. Extensive simulations show that the new estimator is not only computationally less demanding but also more reliable than the other existing estimators.
Abstract:
Environmental data usually include measurements, such as water quality data, which fall below detection limits, because of limitations of the instruments or of certain analytical methods used. The fact that some responses are not detected needs to be properly taken into account in statistical analysis of such data. However, it is well known that it is challenging to analyze a data set with detection limits, and we often have to rely on traditional parametric methods or simple imputation methods. Distributional assumptions can lead to biased inference, and justification of distributions is often not possible when the data are correlated and there is a large proportion of data below detection limits. The extent of bias is usually unknown. To draw valid conclusions and hence provide useful advice for environmental management authorities, it is essential to develop and apply an appropriate statistical methodology. This paper proposes rank-based procedures for analyzing non-normally distributed data collected at different sites over a period of time in the presence of multiple detection limits. To take account of temporal correlations within each site, we propose an optimal linear combination of estimating functions and apply the induced smoothing method to reduce the computational burden. Finally, we apply the proposed method to water quality data collected in the Susquehanna River Basin in the United States of America, which clearly demonstrates the advantages of the rank regression models.
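A small sketch of the core idea behind rank procedures with detection limits: a pairwise comparison is still well defined whenever the intervals implied by left-censoring do not overlap. This helper is illustrative only and is not the paper's estimating function, which additionally combines sites and handles temporal correlation.

```python
def pairwise_sign(v1, below1, dl1, v2, below2, dl2):
    """Sign of (obs1 - obs2) when either value may be left-censored at its
    detection limit: a censored value is only known to lie in (0, dl).
    Returns +1, -1, or 0 when the ordering cannot be determined."""
    lo1, hi1 = (0.0, dl1) if below1 else (v1, v1)
    lo2, hi2 = (0.0, dl2) if below2 else (v2, v2)
    if lo1 > hi2:
        return 1.0
    if hi1 < lo2:
        return -1.0
    return 0.0          # intervals overlap: order ambiguous

# 0.8 (detected) vs "<0.5" is determinately larger; "<0.5" vs "<0.3" is not.
print(pairwise_sign(0.8, False, None, None, True, 0.5))   # 1.0
print(pairwise_sign(None, True, 0.5, None, True, 0.3))    # 0.0
```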
Abstract:
We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for between- and within-cluster estimators and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply the proposed methodology to a dataset from a randomized clinical trial.
Abstract:
Adaptations of weighted rank regression to the accelerated failure time model for censored survival data have been successful in yielding asymptotically normal estimates and flexible weighting schemes to increase statistical efficiency. However, for only one simple weighting scheme, the Gehan or Wilcoxon weights, are the estimating equations guaranteed to be monotone in the parameter components, and even in this case they are step functions, requiring the equivalent of linear programming for computation. The lack of smoothness makes standard error or covariance matrix estimation even more difficult. An induced smoothing technique has overcome these difficulties in various problems involving monotone but pure jump estimating equations, including conventional rank regression. The present paper applies induced smoothing to Gehan-Wilcoxon weighted rank regression for the accelerated failure time model, for the more difficult case of survival time data subject to censoring, where the inapplicability of permutation arguments necessitates a new method of estimating the null variance of the estimating functions. Smooth monotone parameter estimation and rapid, reliable standard error or covariance matrix estimation are obtained.
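Once the estimating function has been smoothed, a generic sandwich covariance estimate can be assembled from a numerical Jacobian and the per-subject contributions, along the following lines. This sketch assumes independent subjects whose contributions have mean near zero at the estimate; it is a standard recipe, not the paper's specific null-variance construction.

```python
import numpy as np

def sandwich_cov(U, contribs, beta_hat, eps=1e-5):
    """Sandwich covariance for an estimating-equation estimator,
        Cov(beta_hat) ~ A^{-1} V A^{-T},
    where A is a numerical Jacobian of the smoothed estimating function U
    at beta_hat and V is built from per-subject contributions."""
    p = len(beta_hat)
    A = np.zeros((p, p))
    for k in range(p):                        # central-difference Jacobian
        step = np.zeros(p)
        step[k] = eps
        A[:, k] = (U(beta_hat + step) - U(beta_hat - step)) / (2 * eps)
    u_i = contribs(beta_hat)                  # shape (n, p); rows sum to U(beta_hat)
    V = u_i.T @ u_i                           # outer-product variance estimate
    A_inv = np.linalg.inv(A)
    return A_inv @ V @ A_inv.T
```

With `smoothed_gehan` from the earlier sketch as `U` and a `contribs` function returning each subject's row of the double sum, the square roots of the diagonal of `sandwich_cov(...)` give standard errors.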
Abstract:
Precise identification of the time when a change in a hospital outcome has occurred enables clinical experts to search for a potential special cause more effectively. In this paper, we develop change point estimation methods for the survival time of a clinical procedure in the presence of patient mix in a Bayesian framework. We apply Bayesian hierarchical models to formulate the change point where there exists a step change in the mean survival time of patients who underwent cardiac surgery. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to the surgery using a Weibull accelerated failure time regression model. Markov Chain Monte Carlo is used to obtain posterior distributions of the change point parameters, including the location and magnitude of changes, and also the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time CUSUM control charts for different magnitude scenarios. The proposed estimator performs better when a longer follow-up period (censoring time) is applied. In comparison with the alternative built-in CUSUM estimator, more accurate and precise estimates are obtained by the Bayesian estimator. These superiorities are enhanced when the probability quantification, flexibility, and generalizability of the Bayesian change point detection model are also considered.
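The following self-contained sketch shows a Bayesian step-change analysis of right-censored Weibull survival times using a simple random-walk Metropolis sampler. For brevity it fixes the Weibull shape, uses flat priors, and omits the risk-adjustment (patient mix) covariates that the paper handles through the accelerated failure time regression; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated step change in mean survival after patient index tau_true.
n, tau_true, k = 300, 180, 1.5            # k = Weibull shape (assumed known)
mu, delta_true = 3.0, -0.6                # log scale before change, step size
lam = np.exp(mu + delta_true * (np.arange(n) >= tau_true))
t = lam * rng.weibull(k, size=n)
follow_up = 40.0                          # limited follow-up => right censoring
y, event = np.minimum(t, follow_up), t <= follow_up

def loglik(tau, delta):
    """Right-censored Weibull log-likelihood with a step change at tau."""
    lam = np.exp(mu + delta * (np.arange(n) >= tau))
    z = (y / lam) ** k
    return np.sum(event * (np.log(k / lam) + (k - 1) * np.log(y / lam)) - z)

# Random-walk Metropolis over (tau, delta) with flat priors.
tau, delta, draws = n // 2, 0.0, []
ll = loglik(tau, delta)
for it in range(20_000):
    tau_p = int(np.clip(tau + rng.integers(-10, 11), 1, n - 1))
    delta_p = delta + 0.05 * rng.normal()
    ll_p = loglik(tau_p, delta_p)
    if np.log(rng.uniform()) < ll_p - ll:  # Metropolis accept/reject
        tau, delta, ll = tau_p, delta_p, ll_p
    if it >= 5_000:                        # discard burn-in
        draws.append((tau, delta))

draws = np.array(draws)
print("posterior mean change point:", draws[:, 0].mean())
print("posterior mean step size:", draws[:, 1].mean())
```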
Abstract:
The use of mobile phones while driving is more prevalent among young drivers, a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q Advanced Driving Simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free, and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21 to 26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Two different model specifications were also tested to account for the structured heterogeneity arising from the repeated measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best-fitting model and identified four significant variables influencing reaction times: phone condition, driver's age, licence type (Provisional licence holder or not), and self-reported frequency of handheld phone usage while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open licence holders. A reduction in the ability to detect traffic events in the periphery whilst distracted presents a significant and measurable safety concern that will undoubtedly persist unless mitigated.
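A Weibull AFT model of this general kind can be fitted with the lifelines library, as in the hedged sketch below. The data and column names are synthetic stand-ins for the simulator study, and the gamma (frailty) heterogeneity term used in the paper is beyond what this simple fitter estimates.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(4)

# Synthetic stand-in for the simulator data; column names are illustrative.
n = 96
cond = rng.choice(["baseline", "handsfree", "handheld"], size=n)
df = pd.DataFrame({
    "handsfree": (cond == "handsfree").astype(int),
    "handheld": (cond == "handheld").astype(int),
    "provisional": rng.integers(0, 2, n),
})
accel = np.exp(0.25 * df["handsfree"] + 0.35 * df["handheld"]
               + 0.15 * df["provisional"]).to_numpy()
df["reaction_time"] = 1.2 * rng.weibull(2.0, n) * accel   # seconds
df["observed"] = 1                       # every reaction was observed

aft = WeibullAFTFitter()
aft.fit(df, duration_col="reaction_time", event_col="observed")
aft.print_summary()                      # positive scale coefficients = slower reactions
```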
Abstract:
Traffic incidents are key contributors to non-recurrent congestion, potentially generating significant delay. Factors that influence the duration of incidents are important to understand so that effective mitigation strategies can be implemented. To identify and quantify the effects of influential factors, a methodology for studying total incident duration based on historical data from an ‘integrated database’ is proposed. Incident duration models are developed using a selected freeway segment in the Southeast Queensland, Australia, network. The models include incident detection and recovery time as components of incident duration. A hazard-based duration modelling approach is applied to model incident duration as a function of a variety of factors that influence traffic incident duration. Parametric accelerated failure time survival models are developed to capture heterogeneity as a function of explanatory variables, with both fixed- and random-parameter specifications. The analysis reveals that factors affecting incident duration include incident characteristics (severity, type, injury, medical requirements, etc.), infrastructure characteristics (roadway shoulder availability), time of day, and traffic characteristics. The results indicate that event type durations are distinctly different, thus requiring different responses to clear them effectively. Furthermore, the results highlight the presence of unobserved incident duration heterogeneity as captured by the random parameter models, suggesting that additional factors need to be considered in future modelling efforts.
Abstract:
Braking is a crucial driving task with a direct relationship to crash risk, as both excess and inadequate braking can lead to collisions. The objective of this study was to compare the braking profiles of young drivers distracted by mobile phone conversations to non-distracted braking. In particular, the braking behaviour of drivers in response to a pedestrian entering a zebra crossing was examined using the CARRS-Q Advanced Driving Simulator. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free, and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The drivers were 18–26 years old and split evenly by gender. A linear mixed model analysis of braking profiles along the roadway before the pedestrian crossing revealed comparatively increased decelerations among distracted drivers, particularly during the initial 20 kph of deceleration. Drivers' initial 20 kph deceleration time was modelled using a parametric accelerated failure time (AFT) hazard-based duration model with a Weibull distribution with clustered heterogeneity to account for the repeated measures experimental design. Factors found to significantly influence the braking task included vehicle dynamics variables such as initial speed and maximum deceleration, phone condition, and driver-specific variables such as licence type, crash involvement history, and self-reported experience of using a mobile phone whilst driving. Distracted drivers on average appear to reduce the speed of their vehicle faster and more abruptly than non-distracted drivers, exhibiting comparatively excessive braking and perhaps revealing risk compensation. The braking appears to be more aggressive for distracted drivers with provisional licences than for drivers with open licences. Abrupt or excessive braking by distracted drivers might pose significant safety concerns to following vehicles in a traffic stream.
Abstract:
Change point estimation is recognized as an essential tool of root cause analysis within quality control programs, as it enables clinical experts to search for potential causes of a change in hospital outcomes more effectively. In this paper, we consider estimation of the time when a linear trend disturbance has occurred in survival time following an in-control clinical intervention in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to the surgery using a Weibull accelerated failure time regression model. We use Markov Chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and the slope size of the trend, and also the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, more accurate and precise estimates are obtained by the proposed Bayesian estimator over linear trends. These superiorities are enhanced when the probability quantification, flexibility, and generalizability of the Bayesian change point detection model are also considered.
Abstract:
With increasingly complex engineering assets and tight economic requirements, asset reliability becomes more crucial in Engineering Asset Management (EAM). Improving the reliability of systems has always been a major aim of EAM. Reliability assessment using degradation data has become a significant approach for evaluating the reliability and safety of critical systems. Degradation data often provide more information than failure time data for assessing reliability and predicting the remnant life of systems. In general, degradation is the reduction in performance, reliability, and life span of assets. Many failure mechanisms can be traced to an underlying degradation process. Degradation is a stochastic process and can therefore be modelled using several approaches. Degradation modelling techniques have generated a great amount of research in the reliability field, yet there are few review papers on the topic. This paper presents a review of the existing literature on commonly used degradation models in reliability analysis. The current research and developments in degradation models are reviewed and summarised, and the models are synthesised and classified into groups. Additionally, the paper attempts to identify the merits, limitations, and applications of each model, and it outlines potential applications of these degradation models in asset health and reliability prediction.
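As a concrete instance of one commonly used degradation model, the sketch below simulates a linear-drift Wiener degradation process and estimates the failure time as the first passage of a fixed threshold; the parameters and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-drift Wiener degradation: X_t = drift * t + sigma * W_t, with
# failure declared when X_t first crosses a fixed threshold.
drift, sigma, threshold = 0.8, 0.5, 20.0
dt, horizon = 0.1, 100.0

def first_passage_time():
    """Simulate one degradation path; return its first crossing time."""
    x, t = 0.0, 0.0
    while x < threshold and t < horizon:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t

samples = np.array([first_passage_time() for _ in range(5_000)])
# For this model the failure time is inverse Gaussian with mean
# threshold / drift, a convenient sanity check on the simulation.
print("simulated mean life:", samples.mean(), "theory:", threshold / drift)
```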
Abstract:
Modern Engineering Asset Management (EAM) requires accurate assessment of current asset health condition and prediction of future asset health condition. Suitable mathematical models that are capable of predicting Time-to-Failure (TTF) and the probability of failure in future time are essential. In traditional reliability models, the lifetime of assets is estimated using failure time data. However, in most real-life situations and industry applications, the lifetime of assets is influenced by different risk factors, which are called covariates. The fundamental notions in reliability theory are thus the failure time of a system and its covariates, which change stochastically and may influence and/or indicate the failure time. Many statistical models have been developed to estimate the hazard of assets or individuals with covariates, and an extensive literature on hazard models with covariates (also termed covariate models), including theory and practical applications, has emerged. This paper is a state-of-the-art review of the existing literature on these covariate models in both the reliability and biomedical fields. One major purpose of this expository paper is to synthesise these models from both the industrial reliability and biomedical fields and then contextually group them into non-parametric and semi-parametric models. Comments on their merits and limitations are also presented. Another main purpose is to comprehensively review and summarise the current research on the development of covariate models so as to facilitate the application of more covariate modelling techniques in prognostics and asset health management.
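For reference, the two standard covariate-model families around which such reviews are organised can be written as follows: in the (semi-parametric) Cox proportional hazards model the covariates scale the baseline hazard, while in the accelerated failure time model they rescale the time axis. These are textbook definitions, stated here only to fix notation.

```latex
% Hazard scaling (Cox) versus time scaling (AFT):
\begin{align*}
  \text{proportional hazards:} \quad & h(t \mid x) = h_0(t)\,\exp(\beta^{\top} x), \\
  \text{accelerated failure time:} \quad & \log T = \beta^{\top} x + \varepsilon .
\end{align*}
```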
Abstract:
The common approach to estimating bus dwell time at a BRT station is to apply the traditional dwell time methodology derived for suburban bus stops. In spite of being sensitive to boarding and alighting passenger numbers and, to some extent, to the fare collection media, these traditional dwell time models do not account for platform crowding. Moreover, they fall short in accounting for the effects of passengers walking along a relatively longer BRT platform. Using the experience from Brisbane busway (BRT) stations, a new variable, Bus Lost Time (LT), is introduced into the traditional dwell time model. The bus lost time variable captures the impact of passenger walking and platform crowding on bus dwell time, two characteristics which differentiate a BRT station from a bus stop. This paper reports the development of a methodology to estimate the bus lost time experienced by buses at a BRT platform. Results were compared with the Transit Capacity and Quality of Service Manual (TCQSM) approach to dwell time and station capacity estimation. When the bus lost time was used in dwell time calculations, it was found that the BRT station platform capacity was reduced by 10.1%.
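To fix ideas, a toy dwell-time calculation in the spirit of the TCQSM, with the paper's lost-time term added, is sketched below; the service-time coefficients are placeholders rather than TCQSM values, and the lost-time input would come from the paper's estimation methodology.

```python
def dwell_time(boarding, alighting, t_board=3.0, t_alight=2.0,
               t_doors=4.0, lost_time=0.0):
    """Toy dwell-time calculation in the spirit of the TCQSM: passenger
    service time at the busiest door plus door open/close time, extended
    by the Bus Lost Time (LT) term. Coefficients are placeholders, not
    TCQSM values; lost_time would come from the paper's methodology."""
    service = max(boarding * t_board, alighting * t_alight)
    return service + t_doors + lost_time

# Ten boardings, four alightings, and 8 s lost to platform walking:
print(dwell_time(boarding=10, alighting=4, lost_time=8.0))  # 42.0 seconds
```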