986 results for sample size


Relevance: 100.00%

Abstract:

We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration-time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should lie within the interval (-0.2231, 0.2231). We compare the gold standard method for calculating the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desired level of power might be under- or overestimated relative to the gold standard method. However, in some situations the approximate methods produce sample sizes very similar to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
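For concreteness, the gold standard calculation can be sketched in a few lines of Python. This is our illustration, not the paper's code: the CV-to-sigma conversion, the even-n search, and the default arguments are assumptions.

```python
import numpy as np
from scipy import stats

def power_tost_crossover(n, cv, theta=0.0, alpha=0.05, delta=np.log(1.25)):
    """Approximate TOST power for a 2x2 cross-over via the non-central t.
    n: total subjects; cv: within-subject CV of the PK measure;
    theta: true difference of means on the natural-log scale."""
    sigma_w = np.sqrt(np.log(1.0 + cv**2))   # lognormal CV -> log-scale SD
    se = sigma_w * np.sqrt(2.0 / n)          # SE of the estimated log-difference
    df = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    ncp_lo = (theta + delta) / se            # lower one-sided test
    ncp_hi = (theta - delta) / se            # upper one-sided test
    return max(stats.nct.cdf(-tcrit, df, ncp_hi)
               - stats.nct.cdf(tcrit, df, ncp_lo), 0.0)

def sample_size(cv, theta=0.0, target=0.80):
    """Smallest even total n reaching the target power."""
    n = 4
    while power_tost_crossover(n, cv, theta) < target:
        n += 2
    return n

print(sample_size(cv=0.25, theta=np.log(0.95)))  # CV 25%, true ratio 0.95
```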

Relevance: 100.00%

Abstract:

This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon the control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that the experimental treatment is better than the control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next, the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored. Copyright © 2007 John Wiley & Sons, Ltd.
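As a point of reference for the two-stream normal case with known common variance, where the Bayesian answer coincides with the frequentist one under a non-informative prior, the familiar frequentist sample size formula is sketched below (an illustration only; the function name and default error rates are our choices).

```python
from scipy import stats

def n_per_group(delta, sigma, alpha=0.025, beta=0.10):
    """Per-group sample size to detect a difference delta between two
    normal means with known common SD sigma, at one-sided level alpha
    with power 1 - beta: n = 2 * (sigma * (z_a + z_b) / delta)^2."""
    z_a = stats.norm.ppf(1 - alpha)
    z_b = stats.norm.ppf(1 - beta)
    return 2 * (sigma * (z_a + z_b) / delta) ** 2

print(n_per_group(delta=0.5, sigma=1.0))  # about 84 per group
```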

Relevance: 100.00%

Abstract:

This paper presents practical approaches to the problem of sample size re-estimation in clinical trials with survival data when proportional hazards can be assumed. When data on a full range of survival experiences across the recruited patients are readily available at the time of the review, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experiences are available at the time of the sample size review are then presented and compared; in this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study, are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates. Copyright © 2012 John Wiley & Sons, Ltd.
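The paper's re-estimation procedures are not reproduced here, but the proportional-hazards machinery they rest on can be sketched: Schoenfeld's formula gives the required number of events, and a blinded review updates the pooled event probability to convert events back into patients. All inputs below are hypothetical.

```python
import numpy as np
from scipy import stats

def required_events(hr, alpha=0.05, power=0.90):
    """Schoenfeld's formula: events needed to detect hazard ratio hr with
    a two-sided level-alpha log-rank test under 1:1 allocation."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return 4 * z**2 / np.log(hr) ** 2

def reestimated_patients(hr, pooled_event_prob):
    """Blinded re-estimation step: divide the required events by the
    pooled (treatment-blind) event probability seen at the review."""
    return int(np.ceil(required_events(hr) / pooled_event_prob))

print(required_events(hr=0.75))                    # about 508 events
print(reestimated_patients(0.75, pooled_event_prob=0.6))
```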

Relevance: 100.00%

Abstract:

This paper presents an approximate closed-form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed-form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed-form formula is reasonably accurate when non-inferiority margins are based on odds ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed-form formula increasingly overestimates the sample size, irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed-form formula is also reasonably accurate in the cases where the score test closed-form formula works well. Outside these scenarios, the Wald test closed-form formula can either underestimate or overestimate the sample size, depending on the magnitudes of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate in all cases, both approaches lead to satisfactory sample size calculations for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
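The score-test formula itself is not given in the abstract, but the well-known Wald-based counterpart it is compared against can be sketched as follows; the parameterization and defaults are our assumptions.

```python
import numpy as np
from scipy import stats

def n_per_arm_wald(p_control, or_margin, or_alt=1.0, alpha=0.05, power=0.90):
    """Approximate per-arm sample size for a non-inferiority test of the
    odds ratio, based on the Wald test for the log odds ratio
    (one-sided level alpha)."""
    odds_c = p_control / (1 - p_control)
    p_exp = or_alt * odds_c / (1 + or_alt * odds_c)  # experimental rate under H1
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    var_terms = 1 / (p_control * (1 - p_control)) + 1 / (p_exp * (1 - p_exp))
    n = z**2 * var_terms / (np.log(or_alt) - np.log(or_margin)) ** 2
    return int(np.ceil(n))

print(n_per_arm_wald(p_control=0.7, or_margin=0.5))  # margin OR 0.5, true OR 1
```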

Relevance: 100.00%

Abstract:

Fixed sample-size plans for monitoring Plutella xylostella (L.) (Lepidoptera: Plutellidae) on broccoli and other Brassica vegetable crops are popular in Australia for their simplicity and ease of application. But the sample sizes used are often small, ≈10–25 plants per crop, and they may fail to provide sufficient information upon which to base pest control decisions. We tested the performance of seven fixed sample-size plans (10, 15, 20, 30, 35, 40, and 45 plants) by resampling a large data set on P. xylostella in commercial broccoli crops. For each sample size, enumerative and presence-absence plans were assessed. The precision of the plans was assessed in terms of the ratio of the standard error to the mean: at least 45 and 35 samples were necessary for the enumerative and presence-absence plans, respectively, to attain the generally accepted benchmark of ≤0.3. Sample sizes of 10–20 were highly imprecise. We also assessed the consequences of classifications based on action thresholds (ATs) of 0.2 and 0.8 larvae per plant for the enumerative case, and proportions of plants infested of 0.15 and 0.45 for the presence-absence case. Operating characteristic curves and investigations of the frequency of correct decisions suggest that plan performance improves with increased sample size. In both the enumerative and presence-absence cases, the proportion of incorrect decisions was much higher for the lower of the two ATs assessed, and type II errors (i.e., failure to recommend pest control when the AT is exceeded) generally accounted for the majority of this error. Type II errors are the most significant from a producer's standpoint. Further consideration is necessary to determine what constitutes an acceptable type II error rate.
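The resampling assessment can be sketched in a few lines: draw repeated samples of n plants and summarize the SE-to-mean ratio. The counts below are simulated negative binomial values standing in for the real data set, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the commercial crop data: aggregated larval counts per plant.
crop_counts = rng.negative_binomial(n=1.5, p=0.75, size=500)

def precision(counts, n, reps=2000):
    """Median SE/mean ratio of the sample mean under a fixed plan of n plants."""
    ratios = []
    for _ in range(reps):
        s = rng.choice(counts, size=n, replace=True)
        if s.mean() > 0:
            ratios.append(s.std(ddof=1) / np.sqrt(n) / s.mean())
    return np.median(ratios)

for n in (10, 20, 35, 45):
    print(n, round(precision(crop_counts, n), 2))  # smaller plans are less precise
```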

Relevance: 100.00%

Abstract:

We consider a random design model based on independent and identically distributed pairs of observations (Xi, Yi), where the regression function m(x) is given by m(x) = E(Yi | Xi = x) with one independent variable. In a nonparametric setting, the aim is to produce a reasonable approximation to the unknown function m(x) when we have no precise information about the form of the true density f(x) of X. We describe a procedure for estimating the nonparametric regression function at a given point by an appropriately constructed fixed-width (2d) confidence interval with confidence coefficient at least 1 − α, where d (> 0) and α ∈ (0, 1) are two preassigned values. Fixed-width confidence intervals are developed using both the Nadaraya-Watson and local linear kernel estimators of nonparametric regression with data-driven bandwidths. The sample size was optimized using purely sequential and two-stage procedures, together with asymptotic properties of the Nadaraya-Watson and local linear estimators. A large-scale simulation study was performed to compare their coverage accuracy. The numerical results indicate that the confidence bands based on the local linear estimator perform better than those constructed using the Nadaraya-Watson estimator; however, both estimators are shown to have asymptotically correct coverage properties.
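For reference, the Nadaraya-Watson estimate of m(x0) is simply a kernel-weighted average of the responses; the sequential procedures then keep sampling until the estimated interval half-width at x0 falls below d. A minimal sketch of the estimator with a Gaussian kernel (the bandwidth and test function are arbitrary illustrative choices):

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of m(x0) = E(Y | X = x0) with a
    Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 400)
y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=400)
print(nadaraya_watson(0.5, x, y, h=0.2))  # close to sin(0.5 * pi) = 1
```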

Relevance: 100.00%

Abstract:

The reliability of an induced classifier can be affected by several factors, including data-oriented factors and algorithm-oriented factors [3]. In some cases, the reliability can also be affected by knowledge-oriented factors. In this chapter, we analyze three special cases to examine the reliability of the discovered knowledge. Our case study results show that (1) when mining from low-quality data, the rough classification approach is more reliable than the exact approach, since it is in general more tolerant of low-quality data; (2) without a sufficiently large data set, the reliability of the discovered knowledge decreases accordingly; and (3) the reliability of the point learning approach is easily misled by noisy data, which in most cases generates an unreliable interval and thus affects the reliability of the discovered knowledge. The results also reveal that inexact field learning is a good strategy that can model these effects and improve the reliability of discovery.
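Case (2), reliability falling as the data set shrinks, is easy to reproduce with a generic learning-curve experiment. The sketch below uses synthetic data and scikit-learn, not the chapter's case studies, so it illustrates the trend rather than the reported results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic two-class task: the label depends noisily on two of five features.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n in (30, 100, 300, len(X_tr)):  # growing training sets
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:n], y_tr[:n])
    print(n, round(clf.score(X_te, y_te), 3))  # test accuracy rises with n
```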

Relevance: 100.00%

Abstract:

This paper uses a multivariate response surface methodology to analyze the size distortion of the BDS test when applied to standardized residuals of first-order GARCH processes. The results show that the asymptotic standard normal distribution is an unreliable approximation, even in large samples. On the other hand, a simple log-transformation of the squared standardized residuals seems to correct most of the size problems. Nonetheless, the estimated response surfaces can provide not only a measure of the size distortion, but also more adequate critical values for the BDS test in small samples.
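The log-transformation fix can be sketched end to end: simulate a GARCH(1,1) series, fit it, and run the BDS test on both the standardized residuals and the log of their squares. This assumes the arch and statsmodels packages; the parameter values are illustrative.

```python
import numpy as np
from arch import arch_model
from statsmodels.tsa.stattools import bds

# Simulate a GARCH(1,1) process: omega=0.1, alpha=0.1, beta=0.8.
rng = np.random.default_rng(42)
n, omega, a, b = 2000, 0.1, 0.1, 0.8
eps = np.zeros(n)
sig2 = np.full(n, omega / (1 - a - b))
for t in range(1, n):
    sig2[t] = omega + a * eps[t - 1] ** 2 + b * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# Fit GARCH(1,1) and extract the standardized residuals.
res = arch_model(eps, p=1, q=1).fit(disp="off")
z = np.asarray(res.std_resid)

# BDS p-values on the raw standardized residuals versus the
# log-transformed squared standardized residuals.
print(bds(z, max_dim=3)[1])
print(bds(np.log(z**2), max_dim=3)[1])
```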

Relevance: 100.00%

Abstract:

Using adequate sample sizes in experimental units improves research efficiency. An experiment was carried out in the 2004/2005 growing season in Santa Maria, Rio Grande do Sul, Brazil, with the objective of estimating the sample size for ear length, ear and cob diameter, weight of the ear, of the grains per ear, of the cob, and of 100 grains, number of grain rows per ear, number of grains per ear, and grain length of two single-cross (P30F33 and P Flex), two three-way-cross (AG8021 and DG501), and two double-cross (AG2060 and DKB701) maize hybrids. For a precision of 5% (D5), weight traits (weight of the husked ear, of the grains, of the cob, and of 100 grains) can be sampled with 21 ears, size traits (ear and grain length, ear and cob diameter) with eight ears, and count data (number of grains and of rows) with 13 ears. The sample size varies according to the ear trait and the type of hybrid: single, three-way, or double cross. The genetic variability among the maize hybrids, which increases in the order single, three-way, double cross, is not reflected in the same order in the sample sizes for ear traits.
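The abstract does not state its formula, but sample sizes of this kind are commonly obtained from a pilot coefficient of variation via n = (t × CV / D)², iterated because the t quantile depends on n. A minimal sketch under that assumption (the pilot CV below is hypothetical):

```python
import math
from scipy import stats

def sample_size(cv, d=5.0, n0=30, alpha=0.05):
    """Ears needed so the CI half-width is d% of the mean, given a pilot
    CV (%): n = (t * cv / d)^2, iterated to a fixed point."""
    n = n0
    for _ in range(50):
        t = stats.t.ppf(1 - alpha / 2, max(round(n) - 1, 1))
        n = (t * cv / d) ** 2
    return math.ceil(n)

print(sample_size(cv=10.8))  # hypothetical pilot CV of 10.8% -> 21 ears
```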

Relevance: 100.00%

Abstract:

The usual practice in using a control chart to monitor a process is to take samples of size n from the process every h hours. This article considers the properties of the X̄ chart when the size of each sample depends on what is observed in the preceding sample. The idea is that the sample should be large if the sample point of the preceding sample is close to, but not actually outside, the control limits, and small if the sample point is close to the target. The properties of the variable sample size (VSS) X̄ chart are obtained using Markov chains. The VSS X̄ chart is substantially quicker than the traditional X̄ chart in detecting moderate shifts in the process.
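The VSS rule can be simulated directly: after each sample, the next sample size is chosen according to whether the standardized sample mean falls in the warning zone. This is a simplified sketch; the limits, sample sizes, and shift below are illustrative choices, not the article's design.

```python
import numpy as np

rng = np.random.default_rng(7)

def vss_xbar(mu0=0.0, sigma=1.0, shift=1.0, n_small=3, n_large=12,
             w=1.0, k=3.0, change_at=50):
    """Run a VSS X-bar chart until it signals. The next sample is large
    when the current standardized mean lies in the warning zone
    (between w and the control limit k); otherwise it is small."""
    n, t = n_small, 0
    while True:
        t += 1
        mean = mu0 + (shift if t > change_at else 0.0)  # shift after sample 50
        xbar = rng.normal(mean, sigma / np.sqrt(n))
        z = (xbar - mu0) / (sigma / np.sqrt(n))
        if abs(z) > k:
            return t                                # out-of-control signal
        n = n_large if abs(z) > w else n_small      # VSS rule

print(vss_xbar())  # sample number at which the chart signals
```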