9 results for Factorial Treatment Designs

in CentAUR: Central Archive University of Reading - UK


Relevance: 40.00%

Abstract:

Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is considered, enabling determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
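
A minimal Monte Carlo sketch of the kind of design described above, assuming normally distributed responses with known unit variance, selection of the best of k experimental arms at a single interim look, and an illustrative final critical value c (the paper constructs exact stopping boundaries; here c would be tuned by simulation to hit the target error rate). All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_then_test(n_sims=200_000, k=3, n1=50, n2=50, c=2.35):
    """Simulate, under H0, a two-stage trial that selects the most
    promising of k experimental arms at the interim analysis and then
    compares only that arm with control at the final analysis."""
    # Stage-1 sample means per arm (variance 1/n1 for a unit-variance response)
    x1 = rng.normal(0.0, 1 / np.sqrt(n1), size=(n_sims, k))
    ctl1 = rng.normal(0.0, 1 / np.sqrt(n1), size=n_sims)
    z1 = (x1 - ctl1[:, None]) / np.sqrt(2 / n1)
    best = np.argmax(z1, axis=1)              # select the most promising arm
    x1_best = x1[np.arange(n_sims), best]
    # Stage 2: only the selected arm and control continue
    x2 = rng.normal(0.0, 1 / np.sqrt(n2), size=n_sims)
    ctl2 = rng.normal(0.0, 1 / np.sqrt(n2), size=n_sims)
    n = n1 + n2
    z_final = (((n1 * x1_best + n2 * x2) - (n1 * ctl1 + n2 * ctl2)) / n
               / np.sqrt(2 / n))
    return np.mean(z_final > c)               # familywise type I error rate

print(f"simulated type I error: {select_then_test():.4f}")
```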

Relevance: 40.00%

Abstract:

Nonregular two-level fractional factorial designs are designs which cannot be specified in terms of a set of defining contrasts. The aliasing properties of nonregular designs can be compared by using a generalisation of the minimum aberration criterion called minimum G2-aberration. Until now, the only nontrivial designs that are known to have minimum G2-aberration are designs for n runs and m ≥ n − 5 factors. In this paper, a number of construction results are presented which allow minimum G2-aberration designs to be found for many of the cases with n = 16, 24, 32, 48, 64 and 96 runs and m ≥ n/2 − 2 factors.
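
As a sketch of what the criterion measures, the generalised wordlength pattern that minimum G2-aberration sequentially minimises can be computed from the J-characteristics of Tang and Deng, on which the criterion is based. The brute-force enumeration below is only practical for small numbers of factors, and the 12-run Plackett-Burman design is an illustrative choice, not an example from the paper.

```python
import itertools
import numpy as np

def gwlp(design):
    """Generalised wordlength pattern (B_1, ..., B_m) of a two-level
    design coded in {-1, +1}: B_k = sum over all k-subsets s of columns
    of (J(s)/n)^2, where J(s) = |sum over runs of the product of the
    columns in s| is the J-characteristic of s."""
    n, m = design.shape
    pattern = []
    for k in range(1, m + 1):
        bk = sum((abs(design[:, list(s)].prod(axis=1).sum()) / n) ** 2
                 for s in itertools.combinations(range(m), k))
        pattern.append(bk)
    return pattern

# Example: five columns of the 12-run Plackett-Burman design (nonregular)
row = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
pb12 = np.vstack([np.roll(row, i) for i in range(11)] + [-np.ones(11, int)])
print(gwlp(pb12[:, :5]))  # B_1 = B_2 = 0 (orthogonality); B_3 > 0
```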

Relevance: 40.00%

Abstract:

From a statistician's standpoint, the interesting kind of isomorphism for fractional factorial designs depends on the statistical application. Combinatorially isomorphic fractional factorial designs may have different statistical properties when factors are quantitative. This idea is illustrated by using Latin squares of order 3 to obtain fractions of the 3^3 factorial design in 18 runs.
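
To sketch the construction (a hypothetical illustration, not the paper's specific example): each Latin square L of order 3 contributes the nine runs (i, j, L(i, j)) of the 3^3 factorial, so two squares with disjoint run sets give an 18-run fraction. Treating the factors as quantitative, the information matrices of two such fractions for a full second-order model can then be compared directly; the particular squares below are assumptions chosen for illustration.

```python
import numpy as np
from itertools import product

def fraction_from_squares(squares):
    """18-run fraction of the 3^3 factorial: each Latin square L of
    order 3 contributes the nine runs (i, j, L(i, j))."""
    return np.array([(i, j, L(i, j))
                     for L in squares for i, j in product(range(3), repeat=2)])

def second_order_model_matrix(design):
    """Model matrix for intercept, linear, quadratic and bilinear terms,
    with the three levels coded as -1, 0, 1 (quantitative factors)."""
    x = design - 1.0
    cols = [np.ones(len(x))]
    cols += [x[:, k] for k in range(3)]           # linear
    cols += [x[:, k] ** 2 for k in range(3)]      # quadratic
    cols += [x[:, a] * x[:, b] for a, b in [(0, 1), (0, 2), (1, 2)]]
    return np.column_stack(cols)

# Fraction A: a pair of mutually orthogonal Latin squares of order 3
A = fraction_from_squares([lambda i, j: (i + j) % 3, lambda i, j: (i + 2 * j) % 3])
# Fraction B: a Latin square and its cyclic shift (disjoint run sets)
B = fraction_from_squares([lambda i, j: (i + j) % 3, lambda i, j: (i + j + 1) % 3])

for name, d in [("A", A), ("B", B)]:
    X = second_order_model_matrix(d)
    print(name, f"det(X'X) = {np.linalg.det(X.T @ X):.6g}")
```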

Relevance: 40.00%

Abstract:

Minimum aberration is the most established criterion for selecting a regular fractional factorial design of maximum resolution. Minimum aberration designs for n runs and n/2 ≤ m < n factors have previously been constructed using the novel idea of complementary designs. In this paper, an alternative method of construction is developed by relating the wordlength pattern of designs to the so-called 'confounding between experimental runs'. This allows minimum aberration designs to be constructed for n runs and 5n/16 ≤ m ≤ n/2 factors as well as for n/2 ≤ m < n.
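
For a regular design the criterion is evaluated from the wordlength pattern, which is determined by the defining contrast subgroup. The sketch below computes it by brute force (this is not the paper's construction method, which works through complementary designs and the confounding between experimental runs); the 2^(7-2) generators are a standard textbook example.

```python
from itertools import combinations

def wordlength_pattern(m, generators):
    """Wordlength pattern (A_3, ..., A_m) of a regular two-level design.

    Each generator is the set of factor indices in one defining word,
    e.g. {0, 1, 2, 3, 5} for I = ABCDF. Words are held as bitmasks and
    the defining contrast subgroup is generated by XOR (the symmetric
    difference of words)."""
    gens = [sum(1 << f for f in g) for g in generators]
    words = set()
    for r in range(1, len(gens) + 1):
        for combo in combinations(gens, r):
            w = 0
            for g in combo:
                w ^= g
            words.add(w)
    counts = [0] * (m + 1)
    for w in words:
        counts[bin(w).count("1")] += 1
    return counts[3:]

# The commonly tabulated minimum aberration 2^(7-2) design:
# I = ABCDF = ABDEG (and hence = CEFG)
print(wordlength_pattern(7, [{0, 1, 2, 3, 5}, {0, 1, 3, 4, 6}]))
# -> [0, 1, 2, 0, 0]: resolution IV with one word of length 4
```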

Relevance: 30.00%

Abstract:

Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control the type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
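
As a sketch of the group-sequential ingredient, the common critical value of a two-stage Pocock-type boundary can be computed from the joint normal distribution of the interim and final test statistics. The one-sided significance level of 0.025 and the interim analysis at half the total information are illustrative assumptions, not the specific designs compared in the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

def pocock_boundary(alpha=0.025, t=0.5):
    """Common critical value c with P(Z1 >= c or Z2 >= c) = alpha under
    H0, where Z1 (interim) and Z2 (final) are standardised statistics
    with correlation sqrt(t), t being the interim information fraction."""
    cov = [[1.0, np.sqrt(t)], [np.sqrt(t), 1.0]]
    def excess(c):
        p_no_reject = multivariate_normal.cdf([c, c], mean=[0.0, 0.0], cov=cov)
        return (1 - p_no_reject) - alpha
    return brentq(excess, 1.0, 4.0)

# For these settings c is about 2.18, above the fixed-sample value of 1.96:
# the price of being allowed to look at the data twice.
print(f"two-stage Pocock boundary: {pocock_boundary():.3f}")
```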

Relevance: 30.00%

Abstract:

In recent years, there has been a drive to save development costs and shorten the time-to-market of new therapies. Research into novel trial designs to facilitate this goal has led to, amongst other approaches, the development of methodology for seamless phase II/III designs. Such designs allow treatment or dose selection at an interim analysis and comparative evaluation of efficacy against control within the same study. These methods have gained much attention because of their potential advantages over conventional drug development programmes with separate trials for the individual phases. In this article, we review the various approaches to seamless phase II/III designs based upon the group-sequential approach, the combination test approach and the adaptive Dunnett method. The objective is to describe the approaches in a unified framework and to highlight their similarities and differences, allowing a trialist considering such a trial to choose an appropriate methodology.
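
A minimal sketch of the combination test approach mentioned above: stage-wise p-values are combined with the inverse normal rule using pre-specified weights, and the interim selection is accounted for here by a simple Bonferroni adjustment (the closed testing procedure and the adaptive Dunnett method refine this step). The numerical values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=np.sqrt(0.5), w2=np.sqrt(0.5)):
    """Inverse normal combination of stage-wise p-values:
    C(p1, p2) = 1 - Phi(w1 * Phi^-1(1 - p1) + w2 * Phi^-1(1 - p2)),
    with pre-specified weights satisfying w1^2 + w2^2 = 1."""
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return 1 - norm.cdf(z)

# Stage 1 compares k doses with control; the selected dose's stage-1
# p-value is adjusted for selection before being combined with the
# independent stage-2 p-value.
k, p1_raw, p2 = 3, 0.02, 0.015
p1_adj = min(1.0, k * p1_raw)   # Bonferroni adjustment over the k doses
p_comb = inverse_normal_combination(p1_adj, p2)
print(f"combined p-value: {p_comb:.4f}")  # reject if below, say, 0.025
```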

Relevance: 30.00%

Abstract:

Uncertainty regarding changes in dissolved organic carbon (DOC) quantity and quality has created interest in managing peatlands for their ecosystem services, such as drinking water provision. The evidence base for such interventions is, however, sometimes contradictory. We performed a laboratory climate manipulation using a factorial design on two dominant peatland vegetation types (Calluna vulgaris and Sphagnum spp.) and a peat soil collected from a drinking water catchment in Exmoor National Park, UK. Temperature and rainfall were set to represent baseline and future conditions under the UKCP09 2080s high emissions scenario for July and August. DOC leachate then underwent standard water treatment of coagulation/flocculation before chlorination. C. vulgaris leached more DOC than Sphagnum spp. (7.17 versus 3.00 mg g⁻¹), with higher specific ultraviolet absorbance (SUVA) values and a greater sensitivity to climate, leaching more DOC under simulated future conditions. The peat soil leached less DOC (0.37 mg g⁻¹) than the vegetation and was less sensitive to climate. Differences in coagulation removal efficiency between the DOC sources appear to be driven by the relative solubilisation of protein-like DOC, observed through the fluorescence peak C/T ratio. After coagulation, only differences between vegetation types were detected for the regulated disinfection by-products (DBPs), suggesting that climate change influence at this scale can be removed via coagulation. Our results suggest that current biodiversity restoration programmes to encourage Sphagnum spp. will result in lower DOC concentrations and SUVA values, particularly with warmer and drier summers.
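
The analysis implied by such a factorial manipulation can be sketched as a two-way ANOVA of DOC yield on vegetation type and climate scenario; the data below are simulated placeholders that merely echo the pattern reported above, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Hypothetical DOC yields (mg per g dry mass) for a 2 x 2 factorial:
# vegetation (Calluna vs Sphagnum) crossed with climate (baseline vs future)
veg = np.repeat(["Calluna", "Sphagnum"], 20)
climate = np.tile(np.repeat(["baseline", "future"], 10), 2)
mean = {("Calluna", "baseline"): 6.5, ("Calluna", "future"): 7.8,
        ("Sphagnum", "baseline"): 3.0, ("Sphagnum", "future"): 3.1}
doc = [rng.normal(mean[v, c], 0.8) for v, c in zip(veg, climate)]
df = pd.DataFrame({"veg": veg, "climate": climate, "doc": doc})

# Two-way ANOVA with interaction: is the climate effect vegetation-specific?
model = ols("doc ~ C(veg) * C(climate)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```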

Relevance: 30.00%

Abstract:

Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group-sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group-sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group-sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease. Copyright © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
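
A minimal sketch of the conditional error idea, assuming a simple two-stage group-sequential test with no short-term endpoint modelling: under H0 the final statistic satisfies Z2 = sqrt(t) * Z1 + sqrt(1 - t) * Z with Z independent of Z1, so the conditional error of a final boundary c2 is available in closed form, and the selection that maximises it over the candidate treatments gives the quantity that must be controlled. The interim values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def conditional_error(z1, c2, t):
    """P(Z2 >= c2 | Z1 = z1) under H0 for a group-sequential statistic
    observed at information fraction t:
    A(z1) = 1 - Phi((c2 - sqrt(t) * z1) / sqrt(1 - t))."""
    return 1 - norm.cdf((c2 - np.sqrt(t) * z1) / np.sqrt(1 - t))

# Interim Z-statistics for three candidate treatments at t = 0.5; a rule
# that is not pre-specified could select any of them, so the type I error
# spent must cover the worst (largest) conditional error.
z1 = np.array([0.8, 1.4, 1.1])
ce = conditional_error(z1, c2=1.96, t=0.5)
print("conditional errors:", np.round(ce, 4), "max:", round(float(ce.max()), 4))
```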