3 results for Biometrics
in DigitalCommons@The Texas Medical Center
Abstract:
Treatment for cancer often involves combination therapies, used both in medical practice and in clinical trials. Korn and Simon listed three reasons for the utility of combinations: 1) biochemical synergism, 2) differential susceptibility of tumor cells to different agents, and 3) higher achievable dose intensity by exploiting non-overlapping toxicities to the host. Even if the toxicity profile of each agent in a given combination is known, the toxicity profile of the agents used in combination must still be established. Thus, caution is required when designing and evaluating trials of combination therapies. Traditional clinical trial design is based on the consideration of a single drug; a trial of drugs in combination, however, requires a dose-selection procedure that is vastly different from that needed for a single-drug trial. When two drugs are combined in a phase I trial, an important trial objective is to determine the maximum tolerated dose (MTD). The MTD is defined as the dose level below the dose at which two of six patients experience drug-related dose-limiting toxicity (DLT). In phase I trials that combine two agents, more than one MTD generally exists, although all are rarely determined. For example, there may be an MTD that pairs high doses of drug A with lower doses of drug B, another that pairs high doses of drug B with lower doses of drug A, and yet another for intermediate doses of both drugs administered together. With classic phase I trial designs, only one MTD is identified. Our new trial design allows efficient identification of more than one MTD within the context of a single protocol. The two drugs combined in our phase I trial are temsirolimus and bevacizumab. Bevacizumab is a monoclonal antibody targeting the vascular endothelial growth factor (VEGF) pathway, which is fundamental for tumor growth and metastasis.
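The MTD definition above ("the dose level below the dose at which two of six patients experience DLT") corresponds to the classic 3+3 escalation rule. As a rough illustration only, not the authors' actual design, the per-dose-level decision logic can be sketched as follows; the function name and return labels are hypothetical:

```python
def dose_decision(n_treated, n_dlt):
    """Simplified 3+3 decision at a single dose level (illustrative sketch).

    Returns 'escalate', 'expand', or 'mtd_exceeded'.
    """
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate"      # 0/3 DLT: move to the next dose level
        elif n_dlt == 1:
            return "expand"        # 1/3 DLT: enroll 3 more patients here
        else:
            return "mtd_exceeded"  # >=2/3 DLT: this dose exceeds the MTD
    elif n_treated == 6:
        if n_dlt <= 1:
            return "escalate"      # <=1/6 DLT: dose is tolerable
        else:
            return "mtd_exceeded"  # >=2/6 DLT: the MTD is the level below
    raise ValueError("3+3 cohorts are evaluated at 3 or 6 patients")
```

When a dose level returns "mtd_exceeded", the level below it is declared the MTD, matching the definition in the abstract.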
One mechanism of tumor resistance to antiangiogenic therapy is upregulation of hypoxia-inducible factor 1α (HIF-1α), which mediates responses to hypoxic conditions. Temsirolimus has been shown to reduce levels of HIF-1α, making this an ideal combination therapy. Dr. Donald Berry developed a trial design schema for evaluating low, intermediate, and high dose levels of two drugs given in combination, as illustrated in a recently published paper in Biometrics entitled “A Parallel Phase I/II Clinical Trial Design for Combination Therapies.” His trial design utilized cytotoxic chemotherapy. We adapted this design schema by incorporating a greater number of dose levels for each drug. Additional dose levels are being examined because experience in phase I trials has shown that targeted agents, when given in combination, are often effective at doses below the FDA-approved dose of each drug. A total of thirteen dose levels, combining representative high, intermediate, and low doses of temsirolimus with representative high, intermediate, and low doses of bevacizumab, will be evaluated. We hypothesize that our new trial design will facilitate identification of more than one MTD, if more than one exists, efficiently and within the context of a single protocol. Doses gleaned from this approach could allow more personalized dose selection from among the identified MTDs, based on a patient’s specific co-morbid conditions or anticipated toxicities.
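To illustrate how a two-drug dose grid can admit several MTDs, here is a toy sketch on a 3×3 grid with invented DLT probabilities (not the trial's actual thirteen dose levels or data); a candidate MTD is taken to be a tolerable dose pair not dominated by any higher tolerable pair:

```python
# Hypothetical illustration: assumed P(DLT) for each (temsirolimus level,
# bevacizumab level) pair. All values below are invented for illustration.
tox = {
    (1, 1): 0.05, (1, 2): 0.10, (1, 3): 0.25,
    (2, 1): 0.10, (2, 2): 0.20, (2, 3): 0.45,
    (3, 1): 0.30, (3, 2): 0.50, (3, 3): 0.70,
}

TARGET = 0.33  # maximum acceptable DLT rate (assumed)

tolerable = {d for d, p in tox.items() if p <= TARGET}

def dominated(d):
    """A pair is dominated if a tolerable pair is at least as high in both drugs."""
    a, b = d
    return any(x >= a and y >= b and (x, y) != d for (x, y) in tolerable)

mtds = sorted(d for d in tolerable if not dominated(d))
print(mtds)  # → [(1, 3), (2, 2), (3, 1)]
```

The three undominated pairs mirror the abstract's example: high drug A with low drug B, high drug B with low drug A, and intermediate doses of both.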
Abstract:
Quantitative imaging with 18F-FDG PET/CT has the potential to provide an in vivo assessment of response to radiotherapy (RT). However, comparing tissue tracer uptake in longitudinal studies is often confounded by variations in patient setup and by potential treatment-induced gross anatomic changes. These variations make true response monitoring for the same anatomic volume a challenge, not only for tumors but also for normal organs at risk (OAR). The central hypothesis of this study is that more accurate image registration will lead to improved quantitation of tissue response to RT with 18F-FDG PET/CT. Employing an in-house developed “demons”-based deformable image registration algorithm, pre-RT tumor and parotid gland volumes can be mapped more accurately to serial functional images. To test the hypothesis, specific aim 1 analyzed whether deformably mapping tumor volumes, rather than aligning to bony structures, leads to superior tumor response assessment. We found that deformable mapping of the most metabolically avid regions improved response prediction (P < 0.05); the positive predictive value for residual disease was 63%, compared with 50% for contrast-enhanced post-RT CT. Specific aim 2 used the parotid gland standardized uptake value (SUV) as an objective imaging biomarker for salivary toxicity. We found that the relative change in parotid gland SUV correlated strongly with salivary toxicity as defined by the RTOG/EORTC late effects analytic scale (Spearman’s ρ = -0.96, P < 0.01). Finally, the goal of specific aim 3 was to create a phenomenological dose-SUV response model for the human parotid glands. Using only baseline metabolic function and the planned dose distribution, this model makes it possible to predict the change in parotid SUV, and hence salivary toxicity via the relationship established in specific aim 2. We found that the predicted and observed relative changes in parotid SUV were significantly correlated (Spearman’s ρ = 0.94, P < 0.01).
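As a sketch of the specific aim 2 analysis, the following computes Spearman's ρ between relative parotid SUV change and a late-toxicity grade. The rank-correlation implementation is generic pure Python (not the study's software), and the patient data are invented for illustration:

```python
def rank(xs):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the tied group
        avg = (i + j) / 2 + 1           # average rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Invented per-patient data: relative SUV change (post-RT vs. baseline)
# and a hypothetical toxicity grade. Larger SUV drops pair with worse grades,
# giving a strong negative correlation as in the abstract.
suv_change = [-0.40, -0.25, -0.10, -0.35, -0.05]
tox_grade = [3, 2, 1, 3, 0]
rho = spearman(suv_change, tox_grade)
```

With these invented values, ρ comes out strongly negative, mirroring the direction (though of course not the evidence) of the reported ρ = -0.96.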
The application of deformable image registration to quantitative treatment response monitoring with 18F-FDG PET/CT could have a profound impact on patient management. Accurate and early identification of residual disease may allow for more timely intervention, while the ability to quantify and predict toxicity of normal OAR might permit individualized refinement of radiation treatment plan designs.
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment at the unit level may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared with methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993). Multilevel models, also known as random effects or random components models, can account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates the effect of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution.

Results of the study suggest that second-order penalized quasi-likelihood (PQL) estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order marginal quasi-likelihood (MQL). The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If the PQL estimation algorithm fails to converge and the higher-level error variance is large, the estimates may be significantly biased. In this case, bias-correction techniques such as bootstrapping should be considered as an alternative procedure.
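A 3-level Poisson model of the kind studied here can be sketched as a simulation. The model form below (a log-linear intercept plus normal random effects at levels 3 and 2), the parameter values, and the cluster sizes are all illustrative assumptions, not the dissertation's actual settings:

```python
# Sketch: simulate counts from log(mu_jk) = beta0 + u_k + v_jk, where u_k is a
# level-3 (e.g. school) effect and v_jk a level-2 (e.g. classroom) effect.
import numpy as np

rng = np.random.default_rng(0)
beta0 = 0.5                  # fixed intercept on the log scale (assumed)
sigma3, sigma2 = 0.3, 0.3    # level-3 and level-2 SDs (illustrative values)
n3, n2, n1 = 20, 10, 5       # clusters, subclusters, observations each

y = np.empty((n3, n2, n1))
for k in range(n3):
    u = rng.normal(0.0, sigma3)        # shared level-3 effect
    for j in range(n2):
        v = rng.normal(0.0, sigma2)    # shared level-2 effect
        mu = np.exp(beta0 + u + v)     # Poisson mean for this subcluster
        y[k, j] = rng.poisson(mu, n1)  # level-1 Poisson counts
```

Because all observations in a subcluster share u and v, their counts are correlated, which is exactly the within-cluster dependence that PQL/MQL estimation must account for and that naive independent-observation regression ignores.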
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.

Neuhaus J (1993). “Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data.” Biometrics, 49, 989–996.