877 results for Variable sample size
Abstract:
Adaptive least mean square (LMS) filters with or without training sequences, known as training-based and blind detectors respectively, have been formulated to counter interference in CDMA systems. The convergence characteristics of these two LMS detectors are analyzed and compared in this paper. We show that the blind detector is superior to the training-based detector with respect to convergence rate. On the other hand, the training-based detector performs better in the steady state, giving a lower excess mean-square error (MSE) for a given adaptation step size. A novel decision-directed LMS detector is proposed that achieves the low excess MSE of the training-based detector and the superior convergence performance of the blind detector.
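To make the two update rules concrete, here is a minimal Python toy sketch of both detectors in a two-user CDMA model; the signature vectors, interference level, and step size are illustrative assumptions, not the paper's simulation setup. The blind rule follows the usual minimum-output-energy canonical form w = s + x, with the adaptive component x kept orthogonal to the desired signature s.

```python
import numpy as np

# Toy sketch (not the paper's exact setup): one desired user plus one
# interferer, comparing the two LMS update rules discussed above.
rng = np.random.default_rng(0)
N = 16                                          # processing gain / filter length
s = rng.choice([-1.0, 1.0], N) / np.sqrt(N)     # desired user's signature (unit norm)
s_i = rng.choice([-1.0, 1.0], N) / np.sqrt(N)   # interferer's signature
mu = 0.01                                       # adaptation step size

w_tr = s.copy()        # training-based detector weights
x_bl = np.zeros(N)     # blind (MOE) detector: w = s + x, with x orthogonal to s

for _ in range(5000):
    b, b_i = rng.choice([-1.0, 1.0], 2)         # data bits
    r = b * s + 3.0 * b_i * s_i + 0.1 * rng.standard_normal(N)  # received vector

    # Training-based LMS: error against the known training bit b.
    e = b - w_tr @ r
    w_tr += mu * e * r

    # Blind LMS (minimum output energy): project the gradient away from s,
    # which preserves the constraint w @ s = 1 since s has unit norm.
    y = (s + x_bl) @ r
    x_bl -= mu * y * (r - (s @ r) * s)
```

In a decision-directed scheme of the kind the paper proposes, the known bit b would typically be replaced by the sign of the detector output once the filter has roughly converged.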
Abstract:
This paper presents practical approaches to the problem of sample size re-estimation in clinical trials with survival data when proportional hazards can be assumed. When data on a full range of survival experiences across the recruited patients are readily available at the time of the review, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experiences are available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study, are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates. Copyright © 2012 John Wiley & Sons, Ltd.
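The paper's re-estimation and extrapolation procedures are not reproduced here, but the quantity being re-estimated in such proportional-hazards designs is typically the required number of events. A minimal sketch of the standard (Schoenfeld-type) event-count formula, with illustrative parameter values, follows.

```python
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.8, p_alloc=0.5):
    """Schoenfeld's event count for a two-arm proportional-hazards trial.

    This is the standard formula on which blinded re-estimation is usually
    built; the paper's specific extrapolation methods are not reproduced.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (p_alloc * (1 - p_alloc) * np.log(hr) ** 2)

# e.g. detecting a hazard ratio of 0.75 with 90% power
print(round(required_events(0.75, power=0.9)))   # about 508 events
```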
Abstract:
This paper presents an approximate closed-form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test, and compare it with a second, well-known formula based on the Wald test. Both closed-form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score-test closed-form formula is reasonably accurate when non-inferiority margins are based on odds ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score-test closed-form formula increasingly overestimates the sample size, irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald-test closed-form formula is also reasonably accurate in the cases where the score-test closed-form formula works well. Outside these scenarios, the Wald-test closed-form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculations for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
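The score-test formula itself is not reproduced here; as a point of reference, the following is a sketch of the generic Wald-test-style calculation on the log odds-ratio scale, with an illustrative control response rate, alternative odds ratio, and non-inferiority margin.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm_wald(p_ctrl, or_alt, or_margin, alpha=0.025, power=0.8):
    """Wald-test-style sample size per arm for a non-inferiority test on the
    log odds-ratio scale.  A generic textbook sketch, not the paper's exact
    score-test formula."""
    # treatment response rate implied by the alternative odds ratio
    odds_t = or_alt * p_ctrl / (1 - p_ctrl)
    p_trt = odds_t / (1 + odds_t)
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    var = 1 / (p_ctrl * (1 - p_ctrl)) + 1 / (p_trt * (1 - p_trt))
    return z**2 * var / (np.log(or_alt) - np.log(or_margin)) ** 2

# e.g. control rate 0.6, true odds ratio 1.0, margin odds ratio 0.5
print(int(np.ceil(n_per_arm_wald(0.6, 1.0, 0.5))))   # 137 per arm
```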
Abstract:
This paper uses a multivariate response surface methodology to analyze the size distortion of the BDS test when applied to standardized residuals of first-order GARCH processes. The results show that the asymptotic standard normal distribution is an unreliable approximation, even in large samples. On the other hand, a simple log-transformation of the squared standardized residuals seems to correct most of the size problems. Nonetheless, the estimated response surfaces can provide not only a measure of the size distortion, but also more adequate critical values for the BDS test in small samples.
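A minimal sketch of the workflow the abstract describes, assuming the third-party Python packages arch (for GARCH fitting) and statsmodels (for the BDS test) and a simulated series in place of real data: the BDS test is applied both to the raw standardized residuals and to the log of their squares, the transformation the paper suggests.

```python
import numpy as np
from arch import arch_model                      # third-party 'arch' package
from statsmodels.tsa.stattools import bds

# Simulate a GARCH(1,1) series as a stand-in for real returns.
rng = np.random.default_rng(1)
n, omega, a1, b1 = 2000, 0.1, 0.1, 0.85
eps, sig2 = np.zeros(n), np.zeros(n)
sig2[0] = omega / (1 - a1 - b1)
for t in range(1, n):
    sig2[t] = omega + a1 * eps[t - 1] ** 2 + b1 * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

res = arch_model(eps, vol="GARCH", p=1, q=1).fit(disp="off")
z = res.std_resid[1:]                            # standardized residuals

# BDS on the raw standardized residuals (unreliable per the paper) versus
# on the log of the squared standardized residuals (the correction above).
print(bds(z, max_dim=3))
print(bds(np.log(z**2 + 1e-12), max_dim=3))      # small offset guards log(0)
```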
Abstract:
Using adequate sample sizes in experimental units improves research efficiency. An experiment was conducted in the 2004/2005 growing season in Santa Maria, Rio Grande do Sul, with the objective of estimating the sample size for ear length, ear and cob diameter, ear weight, grain weight per ear, cob weight, 100-grain weight, number of grain rows per ear, number of grains per ear, and grain length of two single-cross (P30F33 and P Flex), two three-way (AG8021 and DG501), and two double-cross (AG2060 and DKB701) maize hybrids. For a precision of 5% (D5), weight traits (husked ear weight, grain weight, cob weight, and 100-grain weight) can be sampled with 21 ears, size traits (ear and grain length, ear and cob diameter) with eight ears, and count data (number of grains and number of rows) with 13 ears. The sample size varies with the ear trait and the hybrid type: single-, three-way, or double-cross. The genetic variability among maize hybrids, which increases from single- to three-way to double-cross, is not reflected in the same order in the sample size for ear traits.
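The abstract does not state the estimation formula used; a common approach in agronomic sample-size studies of this kind solves n = (t · CV / D)² iteratively, where CV is the coefficient of variation and D the desired precision, both as percentages of the mean. A sketch under that assumption, with an invented CV:

```python
import numpy as np
from scipy.stats import t

def sample_size_for_precision(cv, d, alpha=0.05, n0=30, tol=1e-6):
    """Iteratively solve n = (t_{alpha/2, n-1} * CV / D)^2, a common rule in
    agronomic sample-size studies (an illustrative reconstruction, since the
    abstract does not state its exact formula)."""
    n = n0
    while True:
        n_new = (t.ppf(1 - alpha / 2, max(n - 1, 1)) * cv / d) ** 2
        if abs(n_new - n) < tol:
            return int(np.ceil(n_new))
        n = n_new

# e.g. a weight trait with CV = 11% at 5% precision (illustrative CV)
print(sample_size_for_precision(cv=11, d=5))
```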
Abstract:
Recent studies have shown that the X̄ chart with variable sampling intervals (VSI) and/or variable sample sizes (VSS) detects process shifts faster than the traditional X̄ chart. This article extends these studies to processes that are monitored by both the X̄ and R charts. A Markov chain model is used to determine the properties of the joint X̄ and R charts with variable sample sizes and sampling intervals (VSSI). The VSSI scheme improves the performance of the joint X̄ and R control charts in terms of the speed with which shifts in the process mean and/or variance are detected.
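As an illustration of the adaptive scheme (not of the paper's Markov chain model, which computes average-time-to-signal properties analytically), the following Python sketch simulates the VSSI decision rule for the X̄ component alone; all limits and sample-size/interval pairs are arbitrary illustrative choices.

```python
import numpy as np

# Toy sketch of the VSSI decision rule for an X-bar chart; the R chart and
# the Markov-chain ATS computation from the paper are omitted.
rng = np.random.default_rng(2)
n_small, n_large = 3, 8        # variable sample sizes
h_long, h_short = 2.0, 0.5     # variable sampling intervals (hours)
k, w = 3.0, 1.0                # control and warning limits (standardized)

shift = 0.5                    # process mean shift to detect (sigma units)
time, n, h = 0.0, n_small, h_long
while True:
    time += h
    # standardized sample mean: under an in-control process this is N(0, 1)
    z = np.mean(rng.normal(shift, 1.0, n)) * np.sqrt(n)
    if abs(z) > k:             # outside control limits: signal
        print(f"signal after {time:.1f} hours")
        break
    # warning region -> tighten (large sample, short interval); else relax
    n, h = (n_large, h_short) if abs(z) > w else (n_small, h_long)
```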
Abstract:
Purpose: To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. Methods: One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab (US patent). A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Results: Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. Conclusions: A very high proportion of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample size per examination needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for the reliability and reproducibility of CSM examinations.
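The patented Cells Analyzer method is not publicly specified, so the following is only a generic reconstruction of the two quantities the cut-offs suggest: the relative error of the mean cell measurement, RE = t · SE / mean, and the smallest sample size whose expected RE falls below the 0.05 target. The data and function names are illustrative.

```python
import numpy as np
from scipy.stats import t

def relative_error(cell_areas, conf=0.95):
    """Generic relative error of the mean for a sample of cell measurements
    (the textbook quantity RE = t * SE / mean; not the patented method)."""
    n = len(cell_areas)
    se = np.std(cell_areas, ddof=1) / np.sqrt(n)
    return t.ppf((1 + conf) / 2, n - 1) * se / np.mean(cell_areas)

def customized_sample_size(cell_areas, target_re=0.05, conf=0.95):
    """Smallest n with expected RE below the target, given the observed CV."""
    cv = np.std(cell_areas, ddof=1) / np.mean(cell_areas)
    n = 2
    while t.ppf((1 + conf) / 2, n - 1) * cv / np.sqrt(n) > target_re:
        n += 1
    return n

areas = np.random.default_rng(3).normal(360, 90, 90)  # illustrative data
print(relative_error(areas), customized_sample_size(areas))
```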
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables, and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of these priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items, and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that in [3] in terms of parameter recovery, mainly when using the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though that impact is relatively small. A real-data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with those obtained by Azevedo et al. The results indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates diagnosis of convergence, and can be very useful for fitting more complex skew IRT models.
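The stochastic representation the proposed algorithm exploits is Henze's (1986) result: if U0 and U1 are independent standard normals, then δ|U0| + √(1−δ²)·U1 is skew-normal with shape α = δ/√(1−δ²). Treating |U0| as a latent variable makes the model conditionally normal, which is what simplifies the sampler. A minimal Python sketch of the representation only (not of the full IRT sampler):

```python
import numpy as np

# Henze's (1986) stochastic representation: for independent U0, U1 ~ N(0, 1),
#     X = delta * |U0| + sqrt(1 - delta**2) * U1,
# with delta = alpha / sqrt(1 + alpha**2), is skew-normal with shape alpha.
rng = np.random.default_rng(4)

def rskewnorm(alpha, size):
    delta = alpha / np.sqrt(1 + alpha**2)
    u0 = np.abs(rng.standard_normal(size))   # latent half-normal variable
    u1 = rng.standard_normal(size)
    return delta * u0 + np.sqrt(1 - delta**2) * u1

x = rskewnorm(alpha=3.0, size=100_000)
# theoretical mean is delta * sqrt(2 / pi), about 0.757 for alpha = 3
print(x.mean(), x.std())
```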
Abstract:
Proper sample size estimation is an important part of clinical trial methodology and is closely related to the precision and power of a trial's results. Trials with sufficient sample sizes are scientifically and ethically justified and more credible than trials with insufficient sizes. Planning clinical trials with inadequate sample sizes might be considered a waste of time and resources, as well as unethical, since patients might be enrolled in a study whose expected results cannot be trusted and are unlikely to have an impact on clinical practice. Because of the low emphasis on sample size calculation in clinical trials in orthodontics, the objective of this article is to introduce the orthodontic clinician to the importance and general principles of sample size calculations for randomized controlled trials, to serve as guidance for study designs and as a tool for quality assessment when reviewing published clinical trials in our specialty. Examples of calculations are shown for 2-arm parallel trials applicable to orthodontics. The working examples are analyzed, and the implications of design or inherent complexities in each category are discussed.
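For a continuous outcome, the standard 2-arm parallel-trial formula such articles illustrate is n per arm = 2(z_{1−α/2} + z_{1−β})²σ²/Δ². A minimal Python sketch with an invented orthodontic-style effect size (the specific numbers are not from the paper):

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.8):
    """Standard sample size per arm for a 2-arm parallel trial comparing
    means; the numbers below are illustrative, not taken from the article."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (z * sd / delta) ** 2))

# e.g. detecting a 1.0 mm difference with SD 2.0 mm at 80% power
print(n_per_arm(delta=1.0, sd=2.0))   # 63 per arm
```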