91 results for "Filter designs"
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest, and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ, between the test statistics monitored as part of the sequential test. It can be difficult to quantify ρ, and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error rate and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of the correlation from data collected as part of the trial. An adaptive approach that makes use of these formulas is proposed and evaluated, and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
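The paper's own formulas for estimating the correlation from accumulating data are not reproduced in the abstract; as a minimal sketch of the adaptive idea, the Python fragment below (all names hypothetical) plugs the sample correlation between paired efficacy and safety responses observed at an interim analysis into the design in place of a pre-specified ρ.

```python
import numpy as np

def estimate_rho(efficacy, safety):
    """Plug-in estimate of the correlation between the two endpoints from
    the paired responses available at an interim analysis. Illustrative
    only: the paper derives formulas for the correlation between the
    monitored test statistics themselves."""
    return np.corrcoef(efficacy, safety)[0, 1]

# Hypothetical interim data: one efficacy and one safety response per patient.
rng = np.random.default_rng(1)
interim = rng.multivariate_normal([0.3, 0.1], [[1.0, 0.4], [0.4, 1.0]], size=50)
rho_hat = estimate_rho(interim[:, 0], interim[:, 1])
# rho_hat would replace the "lowest plausible value" when the remaining
# stopping boundaries are recomputed, rather than rho being fixed in advance.
```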
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs that combine the definitive analysis associated with phase III with the treatment-selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336); the computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
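The efficient score formulation is the standard one from sequential analysis (cf. Whitehead's framework); a brief sketch of why it unifies the data types mentioned, with notation added here rather than taken from the paper:

```latex
% Z: efficient score for the treatment-effect parameter \theta, evaluated
% under \theta = 0; V: the corresponding Fisher information. Asymptotically,
\[ Z \sim N(\theta V,\; V), \]
% so monitoring the pair (Z, V) proceeds identically whether \theta is a
% log-odds ratio (binary data), a scaled mean difference (normal data) or a
% log-hazard ratio (failure-time data), and covariate adjustment or
% stratification enters only through the computation of Z and V.
```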
Abstract:
This article describes an approach to the optimal design of phase II clinical trials using Bayesian decision theory. The method proposed extends that suggested by Stallard (1998, Biometrics 54, 279–294), in which designs were obtained to maximize a gain function including the cost of drug development and the benefit from a successful therapy. Here, the approach is extended by consideration of other potential therapies whose development is competing for the same limited resources. The resulting optimal designs are shown to have frequentist properties much more similar to those traditionally used in phase II trials.
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration versus time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
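As an illustration of the comparison being made, the sketch below (Python; the function and parameter names are mine, and the exact calculation would use Owen's Q function rather than this standard difference-of-cdfs approximation) computes TOST power from the non-central t distribution and searches for the required sample size; swapping stats.nct for a shifted central t or a normal distribution reproduces the simpler methods the paper compares.

```python
import numpy as np
from scipy import stats

def power_tost(n_total, cv, gmr=0.95, alpha=0.05, limits=(0.80, 1.25)):
    """Approximate power of the TOST procedure for average bioequivalence
    in a 2x2 cross-over with equal sequence sizes (a sketch, not the
    paper's exact formulas).
    n_total: total number of subjects; cv: within-subject coefficient of
    variation of the PK measure; gmr: assumed true ratio of geometric means."""
    sigma_w = np.sqrt(np.log(1.0 + cv**2))   # log-scale within-subject SD
    se = sigma_w * np.sqrt(2.0 / n_total)    # SE of the estimated log-difference
    df = n_total - 2
    tcrit = stats.t.ppf(1.0 - alpha, df)
    nc_lo = (np.log(gmr) - np.log(limits[0])) / se
    nc_hi = (np.log(gmr) - np.log(limits[1])) / se
    # P(reject both one-sided nulls), via the usual non-central t approximation.
    power = stats.nct.cdf(-tcrit, df, nc_hi) - stats.nct.cdf(tcrit, df, nc_lo)
    return max(0.0, power)

def sample_size(cv, target=0.90, **kw):
    """Smallest even total sample size achieving the target power."""
    n = 4
    while power_tost(n, cv, **kw) < target:
        n += 2
    return n

# e.g. CV = 25%, true ratio 0.95: sample_size(0.25, target=0.80) ~ 28 subjects
# under the "gold standard" non-central t calculation.
```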
Abstract:
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the way in which evidence from different parts of the trial is combined means that the final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control the type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that the asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
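The specific adaptive designs compared are not identified in the abstract; the weighted inverse-normal rule below is one widely used combination rule of this type, sketched here (Python, illustrative names) to show why the combined statistic need not be a function of the sufficient statistic when realised stage sizes differ from plan.

```python
import numpy as np
from scipy import stats

def inverse_normal_combination(p_values, weights):
    """Weighted inverse-normal combination of stage-wise p-values
    (a representative adaptive rule; the weights are fixed at the design
    stage, which is what allows the stage-2 sample size to change at the
    interim without inflating the type I error rate)."""
    w = np.asarray(weights, dtype=float)
    z = stats.norm.isf(np.asarray(p_values, dtype=float))  # stage-wise z-scores
    return float(w @ z) / float(np.sqrt(w @ w))            # combined z-score

# Two stages planned with equal weights. If the realised stage sizes differ
# from plan, the pre-fixed weights no longer match the information fractions,
# so the combined statistic weights the data unequally per observation and is
# not based on a sufficient statistic for the treatment difference.
z_comb = inverse_normal_combination([0.04, 0.10], [1.0, 1.0])
reject = z_comb > stats.norm.isf(0.025)  # one-sided 2.5% test, no early stopping
```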
Abstract:
Nonregular two-level fractional factorial designs are designs which cannot be specified in terms of a set of defining contrasts. The aliasing properties of nonregular designs can be compared by using a generalisation of the minimum aberration criterion called minimum G2-aberration. Until now, the only nontrivial designs that are known to have minimum G2-aberration are designs for n runs and m ≥ n−5 factors. In this paper, a number of construction results are presented which allow minimum G2-aberration designs to be found for many of the cases with n = 16, 24, 32, 48, 64 and 96 runs and m ≥ n/2−2 factors.
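For reference, the generalised wordlength pattern behind minimum G2-aberration (Tang and Deng, 1999) can be written as follows; the notation here is added and not taken from the paper.

```latex
% J-characteristics and the generalised wordlength pattern. D is an n-run
% design with m two-level factors coded \pm 1; x_{ij} is the level of
% factor j in run i.
\[ J_k(s) \;=\; \Bigl|\sum_{i=1}^{n} \prod_{j \in s} x_{ij}\Bigr|,
   \qquad s \subseteq \{1,\dots,m\},\ |s| = k, \]
\[ B_k(D) \;=\; \sum_{|s| = k} \bigl(J_k(s)/n\bigr)^{2}. \]
% Minimum G2-aberration sequentially minimises B_1(D), B_2(D), B_3(D), \dots;
% for regular designs B_k reduces to the usual wordlength pattern A_k.
```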
Abstract:
It is common practice to design a survey with a large number of strata. However, in this case the usual techniques for variance estimation can be inaccurate. This paper proposes a variance estimator for estimators of totals. The method proposed can be implemented with standard statistical packages without any specific programming, as it involves simple techniques of estimation, such as regression fitting.
Abstract:
To explore the projection efficiency of a design, Tsai et al. [2000. Projective three-level main effects designs robust to model uncertainty. Biometrika 87, 467-475] introduced the Q criterion to compare three-level main-effects designs for quantitative factors that allow the consideration of interactions in addition to main effects. In this paper, we extend their method and focus on the case in which experimenters have some prior knowledge, in advance of running the experiment, about the probabilities of effects being non-negligible. A criterion which incorporates experimenters' prior beliefs about the importance of each effect is introduced to compare orthogonal, or nearly orthogonal, main effects designs with robustness to interactions as a secondary consideration. We show that this criterion, by exploiting prior information about model uncertainty, can lead to more appropriate designs reflecting experimenters' prior beliefs. © 2006 Elsevier B.V. All rights reserved.
Abstract:
From a statistician's standpoint, the interesting kind of isomorphism for fractional factorial designs depends on the statistical application. Combinatorially isomorphic fractional factorial designs may have different statistical properties when factors are quantitative. This idea is illustrated by using Latin squares of order 3 to obtain fractions of the 3^3 factorial design in 18 runs.
Abstract:
A supersaturated design (SSD) is an experimental plan, useful for evaluating the main effects of m factors with n experimental units when m > n - 1, each factor has two levels and the first-order effects of only a few factors are expected to have dominant effects on the response. Use of these plans can be extremely cost-effective when it is necessary to screen hundreds or thousands of factors with a limited amount of resources. In this article we describe how to use cyclic balanced incomplete block designs and regular graph designs to construct E(s^2)-optimal and near-optimal SSDs when m is a multiple of n - 1, and we provide a table that can be used to construct these designs for screening thousands of factors. We also explain how to obtain SSDs when m is not a multiple of n - 1. Using the table and the approaches given in this paper, SSDs can be developed for designs with up to 24 runs and up to 12,190 factors.
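A short sketch of the optimality criterion itself (Python; names are mine): E(s^2) averages the squared inner products between distinct factor columns, and a design is E(s^2)-optimal when it attains the commonly quoted lower bound of Nguyen (1996) and Tang and Wu (1997).

```python
import numpy as np
from itertools import combinations

def e_s2(X):
    """E(s^2) value of a two-level supersaturated design.
    X: (n x m) array with entries +/-1, one column per factor.
    E(s^2) is the average of s_ij^2 over all pairs of columns, where
    s_ij is the inner product of columns i and j."""
    n, m = X.shape
    S = X.T @ X  # m x m matrix of column inner products
    off = [S[i, j] ** 2 for i, j in combinations(range(m), 2)]
    return sum(off) / len(off)

def e_s2_lower_bound(n, m):
    """Commonly quoted lower bound; a design attaining it is E(s^2)-optimal.
    (Stated here from the literature, not derived in this abstract.)"""
    return n**2 * (m - n + 1) / ((m - 1) * (n - 1))
```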
Abstract:
Minimum aberration is the most established criterion for selecting a regular fractional factorial design of maximum resolution. Minimum aberration designs for n runs and n/2 ≤ m < n factors have previously been constructed using the novel idea of complementary designs. In this paper, an alternative method of construction is developed by relating the wordlength pattern of designs to the so-called 'confounding between experimental runs'. This allows minimum aberration designs to be constructed for n runs and 5n/16 ≤ m ≤ n/2 factors as well as for n/2 ≤ m < n.
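For reference, the standard wordlength pattern that the construction works with (notation added here):

```latex
% A_k(D): number of words of length k in the defining contrast subgroup of a
% regular two-level fractional factorial design D with m factors.
\[ W(D) \;=\; \bigl(A_3(D),\, A_4(D),\, \dots,\, A_m(D)\bigr). \]
% The resolution of D is the smallest k with A_k(D) > 0. Design D_1 has less
% aberration than D_2 if A_k(D_1) < A_k(D_2) at the first k where they differ,
% and a minimum aberration design minimises W(D) in this sequential sense.
```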
Abstract:
An adaptive tuned vibration absorber (ATVA) with a smart variable stiffness element is capable of retuning itself in response to a time-varying excitation frequency, enabling effective vibration control over a range of frequencies. This paper discusses novel methods of achieving variable stiffness in an ATVA by changing shape, as inspired by biological paradigms. It is shown that considerable variation in the tuned frequency can be achieved by actuating a shape change, provided that this is within the limits of the actuator. A feasible design for such an ATVA is one in which the device offers low resistance to the required shape-change actuation while not being restricted to low values of the effective stiffness of the vibration absorber. Three such original designs are identified: (i) a pinned-pinned arch beam with a fixed profile of slight curvature and variable preload through an adjustable natural curvature; (ii) a vibration absorber with a stiffness element formed from parallel curved beams of adjustable curvature vibrating longitudinally; (iii) a vibration absorber with a variable-geometry linkage as stiffness element. The experimental results from demonstrators based on two of these designs show good correlation with the theory.
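The principle behind all three designs is the textbook tuning relation (notation added here, not taken from the paper):

```latex
% k_eff: effective stiffness of the variable stiffness element;
% m_a: moving mass of the absorber. The tuned (natural) frequency is
\[ \omega_t \;=\; \sqrt{k_{\mathrm{eff}} / m_a}, \]
% so any actuated shape change that alters k_eff shifts \omega_t, allowing
% the absorber to track a time-varying excitation frequency.
```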
Abstract:
The VISIR instrument for the European Southern Observatory (ESO) Very Large Telescope (VLT) is a thermal-infrared imager and spectrometer currently being developed by the French Service d'Astrophysique of CEA Saclay and the Dutch NFRA ASTRON Dwingeloo consortium. This cryogenic instrument will employ precision infrared bandpass filters in the N-band (λ = 7.5-14 µm) and Q-band (λ = 16-28 µm) mid-IR atmospheric windows to study interstellar and circumstellar environments crucial for star and planetary formation theories. As filters in these mid-IR wavelength ranges are of interest to many astronomical cryogenic instruments, a worldwide astronomical filter consortium was set up with participation from 12 different institutes, each requiring instrument-specific filter operating environments and optical metrology. This paper describes the design and fabrication methods used to manufacture these astronomical consortium filters, including the rationale for the selection of multilayer coating designs, the temperature-dependent optical properties of the filter materials and FTIR spectral measurements showing the changes in passband and blocking performance on cooling to < 50 K. We also describe the development of a 7-14 µm broadband antireflection coating deposited on Ge lenses and KRS-5 grisms for cryogenic operation at 40 K.