927 results for Stopping rules
Abstract:
OBJECTIVES: To investigate the frequency of interim analyses, stopping rules, and data safety and monitoring boards (DSMBs) in protocols of randomized controlled trials (RCTs); to examine these features across different reasons for trial discontinuation; and to identify discrepancies in reporting between protocols and publications. STUDY DESIGN AND SETTING: We used data from a cohort of RCT protocols approved between 2000 and 2003 by six research ethics committees in Switzerland, Germany, and Canada. RESULTS: Of 894 RCT protocols, 289 (32.3%) prespecified interim analyses, 153 (17.1%) stopping rules, and 257 (28.7%) DSMBs. Overall, 249 of 894 RCTs (27.9%) were prematurely discontinued, mostly for poor recruitment, administrative reasons, or unexpected harm. Forty-six of 249 RCTs (18.4%) were discontinued for early benefit or futility; of those, 37 (80.4%) were stopped outside a formal interim analysis or stopping rule. Of 515 published RCTs, there were discrepancies between protocols and publications for interim analyses (21.1%), stopping rules (14.4%), and DSMBs (19.6%). CONCLUSION: Two-thirds of RCT protocols did not consider interim analyses, stopping rules, or DSMBs. Most RCTs discontinued for early benefit or futility were stopped without a prespecified mechanism. When assessing trial manuscripts, journals should require access to the protocol.
Abstract:
This paper, the second in a series of three on the statistical aspects of interim analyses in clinical trials, is concerned with stopping rules in phase II clinical trials. Phase II trials are generally small-scale studies and may include one or more experimental treatments, with or without a control. A common feature is that the results primarily determine the course of further clinical evaluation of a treatment rather than providing definitive evidence of treatment efficacy. This means that there is more flexibility available in the design and analysis of such studies than in phase III trials, which has led to a range of different approaches to the statistical design of stopping rules for such trials. This paper briefly describes and compares the different approaches. In most cases the stopping rules can be described and implemented easily, without knowledge of the detailed statistical and computational methods used to obtain them.
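To make this concrete, the sketch below (in Python) implements one classic phase II stopping rule of the kind surveyed here: the stage-1 futility boundary of a Simon two-stage design. The design parameters (n1 = 13, r1 = 3) are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of a classic phase II stopping rule: the stage-1 futility
# boundary of a Simon two-stage design. The parameters n1 and r1 are
# illustrative assumptions, not values from the paper.
from scipy.stats import binom

def stage1_decision(responses, n1=13, r1=3):
    """Stop for futility if, among the first n1 patients, responses <= r1."""
    return "stop for futility" if responses <= r1 else "continue to stage 2"

def early_stop_prob(p, n1=13, r1=3):
    """Probability of stopping at stage 1 when the true response rate is p."""
    return binom.cdf(r1, n1, p)

print(stage1_decision(2))                # -> stop for futility
print(round(early_stop_prob(0.20), 3))   # chance of early termination if p = 0.20
```

The appeal, as the paper notes, is that such a rule is trivial to apply during the trial, even though choosing n1 and r1 to control the error rates requires the underlying binomial computations.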
Abstract:
Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. Indices for use in sequential procedures were: (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e. lower average number of trials to completion), in both yes–no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and its similarity with the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes–no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are widespread) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
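For illustration, here is a minimal sketch (not the authors' code) of a Bayesian sequential procedure for a yes–no task that uses index (2), the posterior standard deviation, as its dynamic stopping rule. The logistic psychometric function, grid, stimulus placement rule, and stopping criterion are all assumptions.

```python
# Bayesian sequential threshold estimation with a dynamic stopping rule on the
# posterior standard deviation (index 2 in the abstract). Everything numeric
# here (grid, slope, criterion) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-3, 3, 301)               # candidate threshold values
posterior = np.ones_like(grid) / grid.size   # flat prior over thresholds
true_threshold, slope = 0.5, 2.0

def psychometric(x, thr):
    """Assumed logistic psychometric function: P('yes') at stimulus level x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - thr)))

for trial in range(200):
    x = np.sum(grid * posterior)             # place stimulus at posterior mean
    response = rng.random() < psychometric(x, true_threshold)
    likelihood = psychometric(x, grid) if response else 1.0 - psychometric(x, grid)
    posterior *= likelihood
    posterior /= posterior.sum()
    mean = np.sum(grid * posterior)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * posterior))
    if sd < 0.1:                             # dynamic stopping rule: posterior SD
        break

print(f"stopped after {trial + 1} trials, threshold estimate = {mean:.2f}")
```

The study's point is precisely that the trial count at which such a rule fires is not a dependable signal that the estimate is accurate.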
Abstract:
Group sequential methods and response adaptive randomization (RAR) procedures have been applied in clinical trials for economic and ethical reasons. Group sequential methods reduce the average sample size by allowing early stopping, but patients are allocated with equal probability to the inferior arm. RAR procedures tend to allocate more patients to the better arm, but require a larger sample size to attain a given power. This study combined these two procedures. We applied the Bayesian decision theory approach to define our group sequential stopping rules and evaluated the operating characteristics under the RAR setting. The results showed that the Bayesian decision theory method was able to preserve the type I error rate as well as achieve a favorable power; furthermore, by comparing with the error spending function method, we concluded that the Bayesian decision theory approach was more effective at reducing the average sample size.
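The sketch below illustrates the combination being studied, with posterior-probability stopping thresholds standing in for the study's full Bayesian decision-theoretic rule; the priors, look schedule, and thresholds are assumptions, not the study's values.

```python
# A minimal sketch of group sequential stopping combined with response-adaptive
# randomization (RAR). Posterior-probability thresholds are used here as a
# simple stand-in for a Bayesian decision-theoretic rule; all numbers (priors,
# look schedule, thresholds) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
p_true = {"control": 0.30, "treatment": 0.45}   # unknown in a real trial
succ = {"control": 1, "treatment": 1}           # Beta(1, 1) priors
fail = {"control": 1, "treatment": 1}

def prob_treatment_better(n_draws=20000):
    """Posterior P(p_treatment > p_control) via Monte Carlo from Beta posteriors."""
    pc = rng.beta(succ["control"], fail["control"], n_draws)
    pt = rng.beta(succ["treatment"], fail["treatment"], n_draws)
    return np.mean(pt > pc)

for look in range(10):                          # interim look every 20 patients
    for _ in range(20):
        p_better = prob_treatment_better(2000)
        arm = "treatment" if rng.random() < p_better else "control"   # RAR
        outcome = rng.random() < p_true[arm]
        (succ if outcome else fail)[arm] += 1
    p_better = prob_treatment_better()
    if p_better > 0.99:                         # stop early for efficacy
        print(f"stop for efficacy at look {look + 1}"); break
    if p_better < 0.05:                         # stop early for futility
        print(f"stop for futility at look {look + 1}"); break
else:
    print("reached maximum sample size")
```

Early stopping trims the average sample size while the adaptive allocation steers most patients toward the better-performing arm, which is the trade-off the study evaluates.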
Abstract:
BACKGROUND: Safety and economic issues have increasingly raised concerns about the long-term use of immunomodulators or biologics as maintenance therapies for Crohn's disease (CD). Despite emerging evidence suggesting that stopping therapy might be an option for low-risk patients, criteria identifying target groups for this strategy are missing, and there is a lack of recommendations regarding this question. METHODS: A multidisciplinary European expert panel (EPACT-II Update) rated the appropriateness of stopping therapy in CD patients in remission. We used the RAND/UCLA Appropriateness Method and included the following variables: presence of clinical and/or endoscopic remission, CRP level, fecal calprotectin level, prior surgery for CD, and duration of remission (1, 2, or 4 years). RESULTS: Before considering withdrawing therapy, the prerequisites of a C-reactive protein (CRP) and fecal calprotectin measurement were rated as "appropriate" by the panellists, whereas a radiological evaluation was considered of "uncertain" appropriateness. Ileo-colonoscopy was considered appropriate 1 year after surgery or after 4 years in the absence of prior surgery. Stopping azathioprine, 6-mercaptopurine, or methotrexate monotherapy was judged appropriate after 4 years of clinical remission. Withdrawing anti-TNF monotherapy was judged appropriate after 2 years in the case of clinical and endoscopic remission, and after 4 years of clinical remission alone. In the case of combined therapy, anti-TNF withdrawal, while continuing the immunomodulator, was considered appropriate after 2 years of clinical remission. CONCLUSION: A multidisciplinary European expert panel proposed for the first time treatment stopping rules for patients in clinical and/or endoscopic remission, with normal CRP and fecal calprotectin levels.
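The panel's criteria, as stated above, amount to a small decision table; the sketch below encodes them schematically. It is a simplification for illustration only, not clinical guidance, and the CRP/fecal calprotectin prerequisites are flagged in a comment rather than modeled.

```python
# Schematic encoding of the EPACT-II panel's stopping rules as stated in the
# abstract. Illustration only, not clinical guidance. Prerequisite (not
# modeled here): normal CRP and fecal calprotectin before any withdrawal.
def stopping_appropriate(therapy, years_remission, endoscopic_remission=False):
    """Return True if withdrawal was rated 'appropriate' by the panel."""
    if therapy == "immunomodulator_mono":    # azathioprine, 6-MP, or methotrexate
        return years_remission >= 4
    if therapy == "anti_tnf_mono":           # 2 y with endoscopic remission, else 4 y
        return (years_remission >= 2 and endoscopic_remission) or years_remission >= 4
    if therapy == "combined_anti_tnf":       # withdraw anti-TNF, keep immunomodulator
        return years_remission >= 2
    raise ValueError(f"unknown therapy: {therapy}")

print(stopping_appropriate("anti_tnf_mono", 2, endoscopic_remission=True))   # True
print(stopping_appropriate("immunomodulator_mono", 2))                       # False
```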
Abstract:
BACKGROUND AND AIM: Recurrent hepatitis C is a major cause of morbidity and mortality after liver transplantation (LT), and optimal treatment algorithms have yet to be defined. Here, we present our experience of the first 21 patients with recurrent hepatitis C treated in Lausanne. PATIENTS AND METHODS: Twenty-one patients with histology-proven recurrent hepatitis C after LT have been treated since 2003. Treatment was initiated with pegylated interferon-α2a 135 μg per week and ribavirin 400 mg per day in the majority of patients, and subsequent doses were adapted individually based on on-treatment virological responses and clinical and/or biochemical side effects. RESULTS: On an intention-to-treat basis, sustained virological response (SVR) was achieved in 12/21 (57%) patients (5/11 [45%], 2/3 [67%], 4/5 [80%] and 1/2 [50%] of patients infected with genotypes 1, 2, 3 and 4, respectively). Two patients experienced relapse and 6 did not respond to treatment (NR). Treatment duration ranged from 24 to 90 weeks. It was stopped prematurely due to adverse events in 5/21 (24%) patients (with SVR achieved in 2 patients, NR in 2 patients, and death of one patient awaiting re-transplantation). Of note, SVR was achieved in a patient with combined liver and kidney transplantation. Importantly, SVR was achieved in some patients despite the lack of an early virological response or HCV RNA negativity at week 24. Darbepoetin α and filgrastim were used in 33% and 14% of patients, respectively. CONCLUSION: Individually adapted treatment of recurrent hepatitis C can achieve SVR in a substantial proportion of LT patients. Conventional stopping rules do not apply in this setting, so prolonged therapy may be useful in selected patients.
Abstract:
When preparing an article on image restoration in astronomy, it is obvious that some topics have to be dropped to keep the work to a reasonable length. We have decided to concentrate on image and noise models and on the algorithms used to find the restoration. Topics like parameter estimation and stopping rules are also commented on. We start by describing the Bayesian paradigm and then proceed to study the noise and blur models used by the astronomical community. Then the prior models used to restore astronomical images are examined. We describe the algorithms used to find the restoration for the most common combinations of degradation and image models. We then comment on important issues such as acceleration of algorithms, stopping rules, and parameter estimation, and on the huge amount of information available to, and made available by, the astronomical community.
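As a concrete example of the algorithm-plus-stopping-rule pairing this review discusses, here is a minimal sketch of Richardson-Lucy deconvolution, a restoration algorithm widely used in astronomy, stopped when the relative change between iterates falls below a tolerance. The stopping rule, tolerance, and synthetic data are illustrative assumptions, not the review's recommendations.

```python
# Richardson-Lucy deconvolution with a simple stopping rule: quit when the
# relative change between successive iterates falls below tol. The rule and
# all numbers here are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, max_iters=200, tol=1e-4):
    estimate = np.full_like(image, image.mean())   # flat, positive initial guess
    psf_flip = psf[::-1, ::-1]                     # adjoint of the blur operator
    for it in range(max_iters):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        new = estimate * fftconvolve(ratio, psf_flip, mode="same")
        change = np.linalg.norm(new - estimate) / np.linalg.norm(estimate)
        estimate = new
        if change < tol:                           # stopping rule: iterates settled
            break
    return estimate, it + 1

# Tiny demo: a synthetic star field blurred by a Gaussian PSF plus noise.
rng = np.random.default_rng(4)
truth = np.zeros((64, 64)); truth[20, 30] = truth[40, 15] = 100.0
xg = np.arange(-7, 8)
psf = np.exp(-(xg[:, None]**2 + xg[None, :]**2) / 4.0); psf /= psf.sum()
blurred = fftconvolve(truth, psf, mode="same") + 0.01 * rng.standard_normal((64, 64))
restored, iters = richardson_lucy(np.clip(blurred, 1e-6, None), psf)
print(f"stopped after {iters} iterations")
```

Stopping too late is a real concern here: for ill-posed restoration, running the iteration past the point where the data are fit to within the noise amplifies noise rather than adding detail.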
Abstract:
Background and aim: Recurrent hepatitis C is a major cause of morbidity and mortality after liver transplantation (LT), and optimal treatment algorithms have yet to be defined. Here, we present our experience of 22 patients with recurrent hepatitis C treated in our institution. Patients and methods: Twenty-two patients with histology-proven recurrent hepatitis C after LT have been treated since 2003. Treatment was initiated with pegylated interferon-α2a 135 μg per week and ribavirin 400 mg per day in the majority of patients, and subsequent doses were adapted individually based on on-treatment virological responses and clinical and/or biochemical side effects. Results: On an intention-to-treat basis, sustained virological response (SVR) was achieved in 12/22 (54.5%) patients (5/12 [41.6%], 2/3 [67%], 4/5 [80%] and 1/2 [50%] of patients infected with genotypes 1, 2, 3 and 4, respectively). Two patients experienced relapse and 6 did not respond to treatment (NR). Treatment duration ranged from 24 to 90 weeks. It was stopped prematurely due to adverse events in 6/22 (27.2%) patients (with SVR achieved in 2 patients, NR in 2 patients, and death of 2 patients: one patient awaiting re-transplantation and a second patient with HCV-HIV co-infection and fibrosing cholestatic hepatitis, nine months after transplantation). Of note, SVR was achieved in a patient with combined liver and kidney transplantation. Importantly, SVR was achieved in some patients despite the lack of an early virological response or HCV RNA negativity at week 24. Darbepoetin α and filgrastim were used in 36% and 18% of patients, respectively. Conclusion: Individually adapted treatment of recurrent hepatitis C can achieve SVR in a substantial proportion of LT patients. Conventional stopping rules do not apply in this setting, so prolonged therapy may be useful in selected patients.
Abstract:
Recently, various approaches have been suggested for dose escalation studies based on observations of both undesirable events and evidence of therapeutic benefit. This article concerns a Bayesian approach to dose escalation that requires the user to make numerous design decisions relating to the number of doses to make available, the choice of the prior distribution, the imposition of safety constraints and stopping rules, and the criteria by which the design is to be optimized. Results are presented of a substantial simulation study conducted to investigate the influence of some of these factors on the safety and the accuracy of the procedure with a view toward providing general guidance for investigators conducting such studies. The Bayesian procedures evaluated use logistic regression to model the two responses, which are both assumed to be binary. The simulation study is based on features of a recently completed study of a compound with potential benefit to patients suffering from inflammatory diseases of the lung.
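To give a flavor of the design decisions involved, here is a compact sketch of Bayesian model-based dose escalation. The article's procedure uses logistic regression for two binary responses (harm and benefit); for brevity this sketch swaps in a one-parameter CRM-style toxicity-only model with a no-dose-skipping safety constraint. The skeleton, target rate, and prior are illustrative assumptions.

```python
# A compact sketch of Bayesian dose escalation using a one-parameter CRM-style
# power model for toxicity only (a simplification of the bivariate logistic
# setting in the article). Skeleton, target, and prior are assumptions.
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])   # prior toxicity guesses
target = 0.25                                          # target toxicity rate
a_grid = np.linspace(-3, 3, 601)
prior = np.exp(-a_grid**2 / 2); prior /= prior.sum()   # N(0, 1) prior on a

def posterior(tox, n):
    """Grid posterior over a, given tox/n outcomes observed at each dose."""
    post = prior.copy()
    for i in range(len(skeleton)):
        p = skeleton[i] ** np.exp(a_grid)              # CRM power model
        post *= p ** tox[i] * (1 - p) ** (n[i] - tox[i])
    return post / post.sum()

def next_dose(tox, n, current):
    post = posterior(tox, n)
    est = np.array([np.sum(skeleton[i] ** np.exp(a_grid) * post)
                    for i in range(len(skeleton))])    # posterior mean toxicity
    best = int(np.argmin(np.abs(est - target)))
    return min(best, current + 1)                      # safety: never skip a dose

tox, n = np.zeros(5), np.zeros(5)
n[0] = 3; tox[0] = 0                                   # first cohort: no toxicities
print("next dose index:", next_dose(tox, n, current=0))
```

In a real design, as the article discusses, the number of doses, the prior, the safety constraints, and the stopping rule would all be tuned by simulation before the trial starts.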
Abstract:
This Ph.D. thesis focuses on iterative regularization methods for linear and nonlinear ill-posed problems. For linear problems, three new stopping rules for the conjugate gradient method applied to the normal equations are proposed and tested in many numerical simulations, including some tomographic image reconstruction problems. For nonlinear problems, convergence and convergence rate results are provided for a Newton-type method with a modified version of the Landweber iteration as an inner iteration, in a Banach space setting.
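For context, here is a minimal sketch of conjugate gradients on the normal equations (CGLS) with the classical discrepancy-principle stopping rule, the standard baseline that new stopping rules in this setting are compared against. This is not one of the thesis's three proposed rules, and the test problem and tolerance are assumptions.

```python
# CG on the normal equations (CGLS) stopped by the discrepancy principle:
# iterate until ||b - A x|| <= tau * delta, where delta is the noise level.
# This baseline rule, the test problem, and tau are illustrative assumptions.
import numpy as np

def cgne(A, b, delta, tau=1.1, max_iters=500):
    x = np.zeros(A.shape[1])
    r = b.copy()                  # residual b - A x
    s = A.T @ r                   # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for k in range(max_iters):
        if np.linalg.norm(r) <= tau * delta:   # discrepancy-principle stop
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, k

# Tiny ill-conditioned example with additive noise of known norm delta.
rng = np.random.default_rng(2)
A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)
x_true = rng.standard_normal(12)
noise = 1e-3 * rng.standard_normal(40)
b = A @ x_true + noise
x_rec, iters = cgne(A, b, delta=np.linalg.norm(noise))
print(f"stopped after {iters} CG iterations")
```

The stopping index acts as the regularization parameter here: iterating past the discrepancy level lets CG fit the noise, which is why the choice of rule is the central question.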
Abstract:
Standard methods for testing safety data are needed to ensure the safe conduct of clinical trials. In particular, objective rules for reliably identifying unsafe treatments need to be put into place to help protect patients from unnecessary harm. Data monitoring committees (DMCs) are uniquely qualified to evaluate accumulating unblinded data and make recommendations about the continuing safe conduct of a trial. However, it is the trial leadership who must make the tough ethical decision about stopping a trial, and they could benefit from objective statistical rules that help them judge the strength of evidence contained in the blinded data. We design early stopping rules for harm that act as continuous safety screens for randomized controlled clinical trials with blinded treatment information, which could be used by anyone, including trial investigators and trial leadership. A Bayesian framework, with emphasis on the likelihood function, is used to allow continuous monitoring without adjusting for multiple comparisons. Close collaboration between the statistician and the clinical investigators will be needed to design safety screens with good operating characteristics. Though the math underlying this procedure may be computationally intensive, implementation of the statistical rules will be easy, and the continuous screening will give suitably early warning should real problems emerge. Trial investigators and trial leadership need these safety screens to help them effectively monitor the ongoing safe conduct of clinical trials with blinded data.
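In the same spirit (though not the authors' actual rule), here is a minimal sketch of a blinded Bayesian safety screen. With 1:1 randomization, a known background adverse-event rate p0, and treatment relative risk theta, the pooled blinded event rate is p0(1 + theta)/2, so a posterior on theta can be updated from blinded counts alone. The value of p0, the prior, and the flag threshold are illustrative assumptions.

```python
# A blinded continuous safety screen: monitor pooled adverse-event counts and
# flag the trial when the posterior probability of harm (theta > 1) is large.
# Not the authors' rule; p0, the prior, and the 0.90 threshold are assumptions.
import numpy as np
from scipy.stats import binom

p0 = 0.05                                  # assumed known background AE rate
theta_grid = np.linspace(0.2, 5.0, 481)    # relative risk of the new treatment
prior = np.exp(-(np.log(theta_grid))**2 / 2)
prior /= prior.sum()                       # lognormal-shaped prior centered at 1

def prob_harm(events, n):
    """Posterior P(theta > 1) after observing pooled blinded counts."""
    pooled = p0 * (1 + theta_grid) / 2     # blinded event probability under theta
    post = prior * binom.pmf(events, n, pooled)
    post /= post.sum()
    return post[theta_grid > 1].sum()

# Continuous monitoring: re-evaluate the screen as blinded data accumulate.
for n, events in [(100, 7), (200, 16), (300, 27)]:
    flag = prob_harm(events, n) > 0.90     # stopping-for-harm screen
    print(n, events, round(prob_harm(events, n), 3), "FLAG" if flag else "")
```

Because the screen is framed through the likelihood, it can be applied after every new patient without the multiplicity adjustments a frequentist monitoring boundary would require, which is the abstract's central point.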
Abstract:
Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples of 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, $C_p$ and $S_p$, each combined with an 'all possible subsets' or 'forward selection' of variables. The estimators of performance utilized include parametric (MSEP$_m$) and non-parametric (PRESS) assessments in the entire sample, and two data splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures. The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches but no differences are detected between the performances of $C_p$ and $S_p$. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample. Only the random split estimator is conditionally (on $\beta$) unbiased, however MSEP$_m$ is unbiased on average and PRESS is nearly so in unselected (fixed form) models. When subset selection techniques are used, MSEP$_m$ and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value within the context of stochastic regressor variables. To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development, and a leave-one-out statistic (e.g. PRESS) be used for assessment.
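Since the recommendation is to assess a full-sample model with a leave-one-out statistic, here is a minimal sketch of PRESS for ordinary least squares, computed in closed form from the hat matrix rather than by refitting n times; the data are simulated purely for illustration.

```python
# PRESS (predicted residual sum of squares) for OLS: the sum of squared
# leave-one-out prediction errors, obtained in closed form via the hat matrix.
import numpy as np

def press_statistic(X, y):
    """PRESS = sum_i (e_i / (1 - h_ii))^2, the leave-one-out prediction error."""
    Xd = np.column_stack([np.ones(len(y)), X])     # add an intercept column
    H = Xd @ np.linalg.solve(Xd.T @ Xd, Xd.T)      # hat matrix H = X (X'X)^-1 X'
    residuals = y - H @ y                          # ordinary OLS residuals
    loo_errors = residuals / (1 - np.diag(H))      # closed-form LOO residuals
    return np.sum(loo_errors ** 2)

# Illustrative simulated data (two informative predictors, two noise predictors).
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 4))
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.standard_normal(50)
print(round(press_statistic(X, y), 2))
```

The closed-form identity is what makes the recommended full-sample assessment cheap: no data splitting and no n separate refits are needed.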
Abstract:
Aim: To evaluate the reported use of Data Monitoring Committees (DMCs), the frequency of interim analysis, pre-specified stopping rules, and early trial termination in neonatal randomised controlled trials (RCTs). Methods: We reviewed neonatal RCTs published in four high-impact general medical journals, specifically looking at safety issues including documented involvement of a DMC, stated interim analysis, stopping rules, and early trial termination. We searched all journal issues over an 11-year period (2003-2013) and recorded predefined parameters on each item for RCTs meeting inclusion criteria. Results: Seventy neonatal trials were identified in the four general medical journals: Lancet, New England Journal of Medicine (NEJM), British Medical Journal, and Journal of the American Medical Association (JAMA). Forty-three (61.4%) studies reported the presence of a DMC, 36 (51.4%) explicitly mentioned interim analysis, stopping rules were reported in 15 (21.4%) RCTs, and 7 (10%) trials were terminated early. The NEJM most frequently reported these parameters compared with the other three journals reviewed. Conclusion: While the majority of neonatal RCTs report on DMC involvement and interim analysis, there is still scope for improvement. Clear documentation of safety-related issues should be a central component of reporting in trials involving newborn infants.
Abstract:
Bound in old sprinkled calf.