962 results for Bayesian adaptive design
Abstract:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It covers three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses when making early stopping decisions.

Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy increase monotonically with dose, biological agents may exhibit non-monotonic dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model reflects the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model accommodates a possibly non-monotonic dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm that encourages sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics for identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships.

Trials of combination therapies for the treatment of cancer play an increasingly important role in the battle against this disease. To handle more efficiently the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design that simultaneously selects among possible treatment combinations involving multiple agents. The design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination corresponds to a single hypothesis. During the trial, the current posterior probabilities of all hypotheses are used to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced (complete) factorial design: it allocates substantially more patients to efficacious treatments while providing a significantly higher probability, and higher power, of identifying the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further.

Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to be sent to a phase III trial. Interim monitoring is employed to stop the trial early for futility, so as to avoid assigning an unacceptable number of patients to inferior treatments.
We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses using a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate and different true response rates.
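As a minimal illustration of the continuous futility monitoring described above (omitting the piecewise exponential model and multiple imputation for late-onset responses), the following Python sketch computes a Beta-Binomial posterior after each patient and stops when the posterior probability that the response rate exceeds the physician-specified lower bound p0 falls below a cutoff. The prior, cutoff, minimum sample size, and simulated response rate are illustrative assumptions, not values from the dissertation.

```python
import numpy as np
from scipy import stats

def futility_monitor(responses, p0=0.2, prior=(0.5, 0.5), cutoff=0.05):
    """Stop for futility when Pr(p > p0 | data) falls below `cutoff`.

    responses : sequence of 0/1 outcomes observed so far
    p0        : physician-specified lower bound of the response rate
    prior     : Beta(a, b) prior on the response rate p
    """
    a, b = prior
    a_post = a + sum(responses)
    b_post = b + len(responses) - sum(responses)
    prob_promising = 1.0 - stats.beta.cdf(p0, a_post, b_post)
    return prob_promising < cutoff, prob_promising

# Example: monitor after every new patient in a simulated trial.
rng = np.random.default_rng(1)
true_rate = 0.15
outcomes = []
for i in range(40):
    outcomes.append(rng.binomial(1, true_rate))
    stop, prob = futility_monitor(outcomes, p0=0.2)
    if stop and len(outcomes) >= 10:   # require a minimal sample before stopping
        print(f"stop for futility after {len(outcomes)} patients "
              f"(Pr(p > p0 | data) = {prob:.3f})")
        break
else:
    print(f"trial completed; Pr(p > p0 | data) = {prob:.3f}")
```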
Abstract:
Bayesian adaptive randomization (BAR) is an attractive approach that allocates more patients to the putatively superior arm based on the interim data while maintaining the good statistical properties attributed to randomization. Under this approach, patients are adaptively assigned to a treatment group based on the probability that the treatment is better. The basic randomization scheme can be modified by introducing a tuning parameter, by replacing the posterior probability that the treatment is better with the posterior estimated response rates, and by setting bounds on the randomization probabilities. For randomization settings composed of the above modifications, operating characteristics, including type I error, power, sample size, imbalance in sample size, interim success rate, and overall success rate, were evaluated through simulation. All randomization settings have low and comparable type I errors. Increasing the tuning parameter decreases power but increases the imbalance in sample size and the interim success rate. Compared with settings using the posterior probability, settings using the estimated response rates have higher power and a higher overall success rate, but a smaller imbalance in sample size and a lower interim success rate. Bounded settings have higher power but a smaller imbalance in sample size than unbounded settings. All settings perform better in the Bayesian design than in the frequentist design. This simulation study provides practical guidance on how to implement the adaptive design.
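To make the modifications above concrete, here is a hedged sketch of a two-arm Bayesian adaptive randomization rule with Beta posteriors: the probability that arm B is better is raised to a tuning parameter and then clipped to a bound before being used as the randomization probability. The prior, tuning value, bounds, and true response rates are illustrative assumptions rather than the settings evaluated in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_b_better(succ, fail, prior=(1.0, 1.0), draws=10_000):
    """Monte Carlo estimate of Pr(p_B > p_A | data) under Beta posteriors."""
    a0, b0 = prior
    p_a = rng.beta(a0 + succ[0], b0 + fail[0], draws)
    p_b = rng.beta(a0 + succ[1], b0 + fail[1], draws)
    return float(np.mean(p_b > p_a))

def allocation_prob(succ, fail, tuning=0.5, bound=(0.1, 0.9)):
    """Randomization probability for arm B with a tuning parameter and bounds."""
    pi = prob_b_better(succ, fail)
    # Tuning parameter c: pi^c / (pi^c + (1 - pi)^c); c = 0 is balanced, c = 1 is the raw probability.
    c = tuning
    pi_tuned = pi**c / (pi**c + (1.0 - pi)**c)
    return float(np.clip(pi_tuned, *bound))

# Simulate one adaptively randomized trial with true response rates 0.2 (A) vs 0.4 (B).
true_rates = (0.2, 0.4)
succ, fail = [0, 0], [0, 0]
for _ in range(100):
    p_assign_b = allocation_prob(succ, fail)
    arm = int(rng.random() < p_assign_b)
    outcome = rng.binomial(1, true_rates[arm])
    succ[arm] += outcome
    fail[arm] += 1 - outcome
print("patients per arm:", fail[0] + succ[0], fail[1] + succ[1])
print("final Pr(p_B > p_A | data):", round(prob_b_better(succ, fail), 3))
```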
Abstract:
Treating patients with combined agents is a growing trend in cancer clinical trials. Evaluating the synergism of multiple drugs is often the primary motivation for such drug-combination studies. Focusing on drug-combination studies in early-phase clinical trials, our research is composed of three parts: (1) we conduct a comprehensive comparison of four dose-finding designs in the two-dimensional toxicity probability space and propose using Bayesian model averaging to overcome the arbitrariness of the model specification and enhance the robustness of the design; (2) motivated by a recent drug-combination trial at MD Anderson Cancer Center with a continuous-dose standard-of-care agent and a discrete-dose investigational agent, we propose a two-stage Bayesian adaptive dose-finding design based on an extended continual reassessment method; (3) by combining phase I and phase II clinical trials, we propose an extension of a single-agent dose-finding design, modeling time-to-event toxicity and efficacy to direct dose finding in two-dimensional drug-combination studies. We conduct extensive simulation studies to examine the operating characteristics of these designs and demonstrate their good performance in various practical scenarios.
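As a simplified, one-dimensional illustration of the Bayesian model averaging idea in part (1) (the actual work concerns the two-dimensional toxicity probability space), the sketch below averages the posterior toxicity estimates of several CRM-style power models, weighting each by its marginal likelihood. The skeletons, prior, and observed data are hypothetical.

```python
import numpy as np
from scipy import integrate, stats

# Candidate CRM-style skeletons (prior guesses of toxicity at each dose);
# these values are illustrative assumptions, not the ones from the designs compared.
skeletons = [
    np.array([0.05, 0.10, 0.20, 0.30, 0.45]),
    np.array([0.10, 0.20, 0.30, 0.45, 0.60]),
    np.array([0.02, 0.05, 0.10, 0.20, 0.30]),
]

def likelihood(alpha, skeleton, n_tox, n_pat):
    """Binomial likelihood under the power model p_d = skeleton_d ** exp(alpha)."""
    p = skeleton ** np.exp(alpha)
    return np.prod(p**n_tox * (1 - p) ** (n_pat - n_tox))

def posterior_tox(skeleton, n_tox, n_pat, prior_sd=2.0):
    """Posterior mean toxicity per dose and the marginal likelihood (evidence)."""
    prior = lambda a: stats.norm.pdf(a, 0.0, prior_sd)
    evidence, _ = integrate.quad(
        lambda a: likelihood(a, skeleton, n_tox, n_pat) * prior(a), -10, 10)
    means = []
    for d in range(len(skeleton)):
        num, _ = integrate.quad(
            lambda a: (skeleton[d] ** np.exp(a)) * likelihood(a, skeleton, n_tox, n_pat) * prior(a),
            -10, 10)
        means.append(num / evidence)
    return np.array(means), evidence

# Hypothetical observed data: toxicities / patients at each of 5 dose levels.
n_tox = np.array([0, 0, 1, 2, 0])
n_pat = np.array([3, 3, 3, 3, 0])

post_means, evidences = zip(*(posterior_tox(s, n_tox, n_pat) for s in skeletons))
weights = np.array(evidences) / np.sum(evidences)        # equal prior model probabilities
bma_tox = np.sum(weights[:, None] * np.array(post_means), axis=0)
print("model weights:", np.round(weights, 3))
print("BMA toxicity estimates:", np.round(bma_tox, 3))
```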
Abstract:
There are two practical challenges in the conduct of phase I clinical trials: a lack of transparency to physicians and late-onset toxicity. In my dissertation, Bayesian approaches are used to address these two problems in clinical trial designs. The proposed simple optimal designs cast the dose-finding problem as a decision-making process for dose escalation and de-escalation, and they minimize the incorrect-decision error rate in finding the maximum tolerated dose (MTD). For the late-onset toxicity problem, a Bayesian adaptive dose-finding design for drug combinations is proposed. The dose-toxicity relationship is modeled using the Finney model. The unobserved delayed toxicity outcomes are treated as missing data, and Bayesian data augmentation is employed to handle them. Extensive simulation studies have been conducted to examine the operating characteristics of the proposed designs and demonstrate their good performance in various practical scenarios.
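The escalation/de-escalation decision process can be sketched as follows; the boundary values below are illustrative interval-design constants, whereas the dissertation's optimal designs derive the boundaries by minimizing the incorrect-decision error rate for a given target toxicity probability.

```python
import numpy as np

def decide(n_tox, n_pat, lower=0.236, upper=0.358):
    """Escalate / stay / de-escalate based on the observed toxicity rate.

    The boundaries here are illustrative; in optimal designs of this kind they
    would be chosen to minimize the incorrect-decision error rate for a given
    target toxicity probability.
    """
    if n_pat == 0:
        return "escalate"
    rate = n_tox / n_pat
    if rate <= lower:
        return "escalate"
    if rate >= upper:
        return "de-escalate"
    return "stay"

# Simulate one trial over 5 dose levels with cohorts of 3 patients.
rng = np.random.default_rng(3)
true_tox = [0.05, 0.12, 0.30, 0.45, 0.60]   # hypothetical true toxicity probabilities
n_tox = np.zeros(5, int)
n_pat = np.zeros(5, int)
dose = 0
for _ in range(12):                          # 12 cohorts of 3 = 36 patients
    tox = rng.binomial(3, true_tox[dose])
    n_tox[dose] += tox
    n_pat[dose] += 3
    action = decide(n_tox[dose], n_pat[dose])
    if action == "escalate":
        dose = min(dose + 1, 4)
    elif action == "de-escalate":
        dose = max(dose - 1, 0)
print("patients per dose:  ", n_pat)
print("toxicities per dose:", n_tox)
```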
Abstract:
This paper describes the use of model-based geostatistics for choosing the optimal set of sampling locations, collectively called the design, for a geostatistical analysis. Two types of design situations are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing optimal positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of inter-point distances should be included in the design, and the widely used regular design is therefore not the optimal choice.
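To illustrate the mechanics of evaluating candidate designs for spatial prediction (with covariance parameters treated as known, whereas the paper's Bayesian criterion also averages over their uncertainty, which is what favours a range of inter-point distances), the following sketch computes the mean kriging prediction variance over a grid for a regular lattice and for a lattice augmented with a few close pairs. The exponential covariance, its parameters, and the candidate designs are illustrative assumptions.

```python
import numpy as np

def mean_kriging_variance(design, grid, sill=1.0, range_par=0.3, nugget=0.05):
    """Average simple-kriging prediction variance over `grid` for a given design.

    Covariance parameters are treated as known here; the paper's Bayesian
    criterion additionally averages over their posterior uncertainty.
    """
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / range_par)
    K = cov(design, design) + nugget * np.eye(len(design))
    k = cov(design, grid)
    var = sill - np.sum(k * np.linalg.solve(K, k), axis=0)
    return float(np.mean(var))

grid = np.array([[x, y] for x in np.linspace(0, 1, 15) for y in np.linspace(0, 1, 15)])

# Regular 5x5 lattice versus the same lattice with a few extra close-by points.
regular = np.array([[x, y] for x in np.linspace(0.1, 0.9, 5) for y in np.linspace(0.1, 0.9, 5)])
augmented = np.vstack([regular[:-4], regular[:4] + 0.03])   # swap 4 points for close pairs
print("regular design, mean prediction variance:   %.4f" % mean_kriging_variance(regular, grid))
print("augmented design, mean prediction variance: %.4f" % mean_kriging_variance(augmented, grid))
```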
Abstract:
Bayesian adaptive methods have been extensively used in psychophysics to estimate the point at which performance on a task attains arbitrary percentage levels, although the statistical properties of these estimators have never been assessed. We used simulation techniques to determine the small-sample properties of Bayesian estimators of arbitrary performance points, specifically addressing the issues of bias and precision as a function of the target percentage level. The study covered three major types of psychophysical task (yes-no detection, 2AFC discrimination, and 2AFC detection) and explored the entire range of target performance levels allowed for by each task. Other factors included in the study were the form and parameters of the actual psychometric function Psi, the form and parameters of the model function M assumed in the Bayesian method, and the location of Psi within the parameter space. Our results indicate that Bayesian adaptive methods render unbiased estimators of any arbitrary point on Psi only when M = Psi; otherwise they yield bias whose magnitude can be considerable as the target level moves away from the midpoint of the range of Psi. The standard error of the estimator also increases as the target level approaches extreme values, whether or not M = Psi. Contrary to widespread belief, neither the performance level at which bias is null nor that at which standard error is minimal can be predicted by the sweat factor. A closed-form expression nevertheless gives a reasonable fit to data describing the dependence of standard error on the number of trials and the target level, which allows determination of the number of trials that must be administered to obtain estimates with prescribed precision.
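A stripped-down sketch of the kind of simulation described: a Bayesian adaptive run with a threshold-only grid posterior and a logistic psychometric function, targeting the 75% point of a yes-no task, with the bias of the estimator compared when the assumed slope of M matches or mismatches that of Psi. The function form, slopes, grid, prior, and trial counts are illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def psi_logistic(x, thr, slope, gamma=0.0, lam=0.0):
    """Logistic psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-slope * (x - thr)))

def bayes_adaptive_run(true_thr=0.0, true_slope=2.0, model_slope=2.0,
                       n_trials=50, target=0.75):
    """One adaptive run: grid posterior over the threshold only (slope fixed in M)."""
    grid = np.linspace(-3, 3, 301)              # candidate thresholds
    log_post = np.zeros_like(grid)              # flat prior
    for _ in range(n_trials):
        # Place the next stimulus at the current estimate of the target level.
        thr_hat = grid[np.argmax(log_post)] if log_post.any() else 0.0
        x = thr_hat + np.log(target / (1 - target)) / model_slope
        resp = rng.random() < psi_logistic(x, true_thr, true_slope)
        p = psi_logistic(x, grid, model_slope)
        log_post += np.log(p if resp else 1 - p)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    thr_est = float(np.sum(grid * post))        # posterior-mean threshold
    # Return the estimated stimulus level at which performance equals `target`.
    return thr_est + np.log(target / (1 - target)) / model_slope

# Bias of the target-level estimator when the assumed slope matches vs mismatches.
true_x_target = 0.0 + np.log(0.75 / 0.25) / 2.0
for model_slope in (2.0, 1.0):
    est = [bayes_adaptive_run(model_slope=model_slope) for _ in range(200)]
    print(f"assumed slope {model_slope}: bias = {np.mean(est) - true_x_target:+.3f}, "
          f"SE = {np.std(est) / np.sqrt(len(est)):.3f}")
```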
Abstract:
Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. The indices for use in sequential procedures were: (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e., a lower average number of trials to completion), in both yes–no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and of its similarity with the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes–no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates, and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are widespread) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
Abstract:
The development of targeted therapy involves many challenges. Our study addresses some of the key issues involved in biomarker identification and clinical trial design. In our study, we propose two biomarker selection methods and then apply them in two different clinical trial designs for targeted therapy development. In particular, we propose a Bayesian two-step lasso procedure for biomarker selection in the proportional hazards model in Chapter 2. In the first step of this strategy, we use the Bayesian group lasso to identify the important marker groups, wherein each group contains the main effect of a single marker and its interactions with treatments. In the second step, we zoom in to select each individual marker and the interactions between markers and treatments, in order to identify prognostic or predictive markers using the Bayesian adaptive lasso. In Chapter 3, we propose a Bayesian two-stage adaptive design for targeted therapy development that implements the variable selection method given in Chapter 2. In Chapter 4, we propose an alternative frequentist adaptive randomization strategy for situations where a large number of biomarkers need to be incorporated in the study design. We also propose a new adaptive randomization rule that takes into account the variation associated with the point estimates of survival times. In all of our designs, we seek to identify the key markers that are either prognostic or predictive with respect to treatment, and we use extensive simulations to evaluate the operating characteristics of our methods.
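The two-step logic (screen marker groups first, then select individual prognostic and predictive terms) can be sketched with a frequentist penalized Cox model as a stand-in for the Bayesian group lasso and Bayesian adaptive lasso described above. The simulated data, penalty values, retention threshold, and the use of the lifelines package are illustrative assumptions, not the dissertation's method.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n, n_markers = 400, 10

# Simulated data: treatment, markers, and survival driven by marker 0 (prognostic)
# and the marker 1 x treatment interaction (predictive).
trt = rng.binomial(1, 0.5, n)
X = rng.normal(size=(n, n_markers))
lin = 0.8 * X[:, 0] + 0.8 * X[:, 1] * trt - 0.5 * trt
T = rng.exponential(np.exp(-lin))
C = rng.exponential(2.0, n)                      # censoring times
df = pd.DataFrame(X, columns=[f"m{j}" for j in range(n_markers)])
df["trt"] = trt
for j in range(n_markers):                       # marker-by-treatment interactions
    df[f"m{j}_x_trt"] = df[f"m{j}"] * trt
df["T"] = np.minimum(T, C)
df["E"] = (T <= C).astype(int)

# Step 1: group screening -- keep a marker group if either its main effect or its
# interaction has a non-negligible penalized coefficient.
cph1 = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph1.fit(df, duration_col="T", event_col="E")
kept = [j for j in range(n_markers)
        if max(abs(cph1.params_[f"m{j}"]), abs(cph1.params_[f"m{j}_x_trt"])) > 0.05]
print("marker groups kept after step 1:", kept)

# Step 2: refit with only the kept groups and a lighter penalty to pick
# individual prognostic (main-effect) and predictive (interaction) terms.
cols = ["trt"] + [f"m{j}" for j in kept] + [f"m{j}_x_trt" for j in kept] + ["T", "E"]
cph2 = CoxPHFitter(penalizer=0.02, l1_ratio=1.0)
cph2.fit(df[cols], duration_col="T", event_col="E")
print(cph2.params_.round(3))
```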
Abstract:
Adventure! The Paladin Order was an ambitious project that began as a full video game. Its goal was to implement a different kind of tool that would make the game completely adaptive to the player's decisions, both in interactions and dialogue with other characters and in combat against the game's various enemies. Due to the author's inexperience, a large part of the time was spent studying and researching possible solutions that would allow the creation of an environment that was adaptive in a simple and interesting way, not only for the programmers but also for anyone responsible for editing the game's dialogue and story. The results were quite interesting, revealing a system that depends simultaneously on the files from which the dialogue is drawn and on a personality system that defines the behaviour of any game object or, at least, how the other characters will react to it. The final product is a tool with solid foundations that allows a relatively simple implementation of a comprehensive, adaptive system, with few flaws and only a few code-simplicity issues.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form, so approximation methods are needed.

A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of the importance distribution can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer.

A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, in which case the covariance matrix must be well tuned; adaptive MCMC methods can be used for this purpose. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
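For the linear-Gaussian special case of the state estimation problem (the thesis also treats extended Kalman, Gauss–Hermite, and particle filters for non-linear, non-Gaussian models), a minimal Kalman filter for a scalar random-walk state observed in noise looks as follows; the model and noise variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Gaussian state space model:
#   x_k = x_{k-1} + q_k,  q_k ~ N(0, Q)     (random-walk state)
#   y_k = x_k + r_k,      r_k ~ N(0, R)     (noisy measurement)
Q, R = 0.01, 0.25
n = 200
x = np.cumsum(rng.normal(0, np.sqrt(Q), n))        # latent states
y = x + rng.normal(0, np.sqrt(R), n)               # measurements

# Kalman filter: predict, then update with each measurement.
m, P = 0.0, 1.0                                    # prior mean and variance
means = np.empty(n)
for k in range(n):
    # Predict step
    m_pred, P_pred = m, P + Q
    # Update step
    S = P_pred + R                                 # innovation variance
    K = P_pred / S                                 # Kalman gain
    m = m_pred + K * (y[k] - m_pred)
    P = (1 - K) * P_pred
    means[k] = m

print("RMSE of raw measurements:   %.3f" % np.sqrt(np.mean((y - x) ** 2)))
print("RMSE of filtered estimates: %.3f" % np.sqrt(np.mean((means - x) ** 2)))
```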
Abstract:
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed to be as similar as possible. We conclude that all methods acceptably control the type I error rate and power when the sample size is modified on the basis of a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that the asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
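As a small illustration of how such operating characteristics are checked by simulation (restricted here to a plain two-look group-sequential z-test rather than the adaptive designs compared in the paper), the sketch below estimates type I error and power for a common critical value at both looks; the boundary is an illustrative Pocock-style constant and the effect size is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_trial(delta, n_per_stage=50, boundary=2.18, sigma=1.0):
    """Two-look group-sequential z-test with the same critical value at each look.

    Returns True if the null hypothesis (no treatment difference) is rejected.
    The common boundary of 2.18 is an illustrative Pocock-style constant.
    """
    diff_sum, n = 0.0, 0
    for _ in range(2):
        x = rng.normal(delta, sigma, n_per_stage)      # treatment-minus-control differences
        diff_sum += x.sum()
        n += n_per_stage
        z = diff_sum / (sigma * np.sqrt(n))            # cumulative z statistic
        if abs(z) > boundary:
            return True
    return False

reps = 20_000
type1 = np.mean([two_stage_trial(delta=0.0) for _ in range(reps)])
power = np.mean([two_stage_trial(delta=0.3) for _ in range(reps)])
print(f"estimated type I error: {type1:.3f}")
print(f"estimated power at delta = 0.3: {power:.3f}")
```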
Abstract:
Detection of a tactile stimulus on one finger is impaired when a concurrent stimulus (masker) is presented on an additional finger of the same or the opposite hand. This phenomenon is known to be finger-specific at the within-hand level; however, whether this specificity is also maintained at the between-hand level is not known. In four experiments, we addressed this issue by combining a Bayesian adaptive staircase procedure (QUEST) with a two-interval forced choice (2IFC) design in order to establish the threshold for detecting 200 ms, 100 Hz sinusoidal vibrations applied to the index or little fingertip of either hand (targets). We systematically varied the masker finger (index, middle, ring, or little finger of either hand) while controlling the spatial location of the target and masker stimuli. Detection thresholds varied consistently as a function of the masker finger when the latter was on the same hand (Experiments 1 and 2), but not when it was on the other hand (Experiments 3 and 4). Within the hand, detection thresholds increased for masker fingers closest to the target finger (i.e., middle > ring when the target was the index finger). Between the hands, detection thresholds were higher only when the masker was present on any finger, compared to when the target was presented in isolation. The within-hand effect of masker finger is consistent with the segregation of different fingers at the early stages of somatosensory processing, from the periphery to the primary somatosensory cortex (SI). We propose that detection is finger-specific in a way that reflects the organisation of somatosensory receptive fields in SI within, but not between, the hands.
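A minimal sketch of a QUEST-like Bayesian adaptive staircase combined with a 2IFC task (guess rate 0.5): a grid posterior over the threshold is updated after each trial and the next stimulus is placed at the posterior mode. The Weibull shape, lapse rate, prior, and trial count are illustrative assumptions, not the settings used in these experiments.

```python
import numpy as np

rng = np.random.default_rng(5)

def weibull_2ifc(x, thr, beta=3.5, lapse=0.02):
    """Proportion correct in 2IFC: guess rate 0.5, Weibull shape `beta`."""
    return 0.5 + (0.5 - lapse) * (1 - np.exp(-(x / thr) ** beta))

def quest_like_run(true_thr=1.0, n_trials=40):
    grid = np.geomspace(0.2, 5.0, 200)                 # candidate thresholds
    log_post = -0.5 * np.log(grid / 1.0) ** 2          # log-normal-style prior around 1.0
    for _ in range(n_trials):
        thr_hat = grid[np.argmax(log_post)]            # place stimulus at posterior mode
        correct = rng.random() < weibull_2ifc(thr_hat, true_thr)
        p = weibull_2ifc(thr_hat, grid)
        log_post += np.log(p if correct else 1 - p)
    post = np.exp(log_post - log_post.max())
    return float(np.sum(grid * post) / post.sum())     # posterior-mean threshold

estimates = [quest_like_run(true_thr=1.3) for _ in range(100)]
print("mean estimated threshold: %.3f (true 1.3)" % np.mean(estimates))
```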
Abstract:
This paper presents a greedy Bayesian experimental design criterion for heteroscedastic Gaussian process models. The criterion is based on the Fisher information and is optimal in the sense of minimizing parameter uncertainty for likelihood-based estimators. We demonstrate the validity of the criterion under different noise regimes and present experimental results from a rabies simulator to demonstrate the effectiveness of the resulting approximately optimal designs.
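The Fisher-information criterion for heteroscedastic Gaussian processes is not reproduced here; as a simplified stand-in, the sketch below greedily adds the candidate point with the largest current GP posterior predictive variance, using an illustrative squared-exponential kernel with fixed hyperparameters and homoscedastic noise.

```python
import numpy as np

def sq_exp_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between 1D input vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def greedy_design(candidates, n_points, noise=1e-3):
    """Greedily add the candidate with the largest GP posterior predictive variance."""
    design = [candidates[len(candidates) // 2]]          # start from the middle point
    for _ in range(n_points - 1):
        X = np.array(design)
        K = sq_exp_kernel(X, X) + noise * np.eye(len(X))
        k = sq_exp_kernel(X, candidates)
        var = sq_exp_kernel(candidates, candidates).diagonal() - np.sum(
            k * np.linalg.solve(K, k), axis=0)
        design.append(candidates[int(np.argmax(var))])
    return np.sort(np.array(design))

candidates = np.linspace(0.0, 1.0, 201)
print("greedy 8-point design:", np.round(greedy_design(candidates, 8), 3))
```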