893 results for variable sample size
Abstract:
Statistical software is now commonly available to calculate power (P') and sample size (N) for most experimental designs. In many circumstances, however, sample size is constrained by lack of time, cost, and, in research involving human subjects, the problems of recruiting suitable individuals. In addition, the calculation of N is often based on erroneous assumptions about variability, and such estimates are therefore often inaccurate. At best, we would suggest that such calculations provide only a very rough guide as to how to proceed in an experiment. Nevertheless, calculation of P' is very useful, especially in experiments that have failed to detect a difference which the experimenter thought was present. We would recommend that P' always be calculated in these circumstances, to determine whether the experiment was actually too small to test the null hypothesis adequately.
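As a concrete illustration of the kind of calculation this abstract discusses, the sketch below uses Python's statsmodels package to compute the power achieved by a small experiment and the sample size needed for a conventional 80% target. The effect size and group size are hypothetical, not taken from the paper.

```python
# Post-hoc power check and sample size calculation for a two-sample t-test.
# Effect size (Cohen's d) and group size below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved by n = 15 per group for a medium effect (d = 0.5)
power = analysis.power(effect_size=0.5, nobs1=15, ratio=1.0, alpha=0.05)
print(f"Achieved power: {power:.2f}")  # well below the usual 0.8 target

# Sample size per group needed to reach 80% power for the same effect
n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05, ratio=1.0)
print(f"Required n per group: {n:.0f}")
```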
Abstract:
The concepts of sample size and statistical power estimation are now something that optometrists who want to perform research, whether in practice or in an academic institution, can no longer avoid. Ethics committees, journal editors, and grant-awarding bodies increasingly request that all research be backed up with sample size and statistical power estimates in order to justify any study and its findings. This article presents a step-by-step guide to determining sample size and statistical power. It builds on statistical concepts presented in earlier articles in Optometry Today by Richard Armstrong and Frank Eperjesi.
Abstract:
2000 Mathematics Subject Classification: 62E16, 65C05, 65C20.
Abstract:
Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which can strongly affect the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. QR can therefore be widely used to solve problems in econometrics, environmental sciences, and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined by either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as that of C. Jennen-Steinmetz and S. Wellek (2005). A method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under a hypothesis test of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user can choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
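The simulation side of this idea can be sketched in a few lines: pick an assumed error distribution, covariate structure, and effect size, then estimate power empirically by repeated fitting. The version below is a minimal Python approximation (the paper's own implementation is in R); the uniform covariate, normal errors, and all settings are illustrative assumptions.

```python
# Empirical power of the slope test in median regression, by simulation.
# Error distribution, covariate structure, effect size and n are assumptions.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def empirical_power(n, beta1=1.0, tau=0.5, alpha=0.05, reps=200, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.uniform(0, 1, n)                    # assumed covariate structure
        y = 1.0 + beta1 * x + rng.normal(0, 1, n)   # assumed error distribution
        fit = QuantReg(y, sm.add_constant(x)).fit(q=tau)
        if fit.pvalues[1] < alpha:                  # H0: no covariate effect
            rejections += 1
    return rejections / reps

# Increase n until the empirical power reaches the nominal 80% level
for n in (50, 100, 200):
    print(n, empirical_power(n))
```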
Abstract:
Background: Many acute stroke trials have given neutral results. Sub-optimal statistical analyses may be failing to detect efficacy. Methods which take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials. Methods: Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome – Rankin Scale and Barthel Index – were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes obtained from each method were compared using Friedman two-way ANOVA. Results: Fifty-five comparisons (54,173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P < 0.00001). The ordering of the methods showed that the ordinal method of Whitehead and the comparison of means produced significantly lower sample sizes than the other methods. The ordinal data method on average reduced sample size by 28% (inter-quartile range 14–53%) compared with the comparison of proportions; however, a 22% increase in sample size was seen with the ordinal method for trials assessing thrombolysis. The comparison of medians method of Payne gave the largest sample sizes. Conclusions: Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome and increase the risk of death, such as thrombolysis.
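For reference, the two calculations being contrasted can be sketched as follows, using the per-group sample size formula usually attributed to Whitehead (1993) for ordinal outcomes under a proportional-odds assumption, next to the standard two-proportion formula. The four-category control-arm distribution and the odds ratio of 1.6 are invented for illustration.

```python
# Per-group sample size: binary comparison of proportions vs. Whitehead's
# ordinal formula. Outcome distribution and odds ratio are hypothetical.
import numpy as np
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

p_ctrl = np.array([0.25, 0.25, 0.25, 0.25])  # control arm, 4 ordered categories
OR = 1.6                                     # assumed common odds ratio

# Treatment-arm probabilities under the proportional-odds model
Q = np.cumsum(p_ctrl)[:-1]                   # cumulative probabilities, control
Qt = OR * Q / (1 + (OR - 1) * Q)             # shifted cumulative probabilities
p_trt = np.diff(np.concatenate(([0.0], Qt, [1.0])))

# Whitehead's ordinal formula (per group, equal allocation)
p_bar = (p_ctrl + p_trt) / 2
n_ordinal = 6 * z**2 / (np.log(OR)**2 * (1 - np.sum(p_bar**3)))

# Binary formula after dichotomising the scale at its midpoint
p1, p2 = Q[1], Qt[1]
n_binary = z**2 * (p1*(1-p1) + p2*(1-p2)) / (p1 - p2)**2

print(f"per group: ordinal {n_ordinal:.0f} vs binary {n_binary:.0f}")
```

With these invented inputs the ordinal calculation needs roughly 20% fewer patients per group, in the same direction as the reduction the trial data show.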
Abstract:
Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce the seizure burden in a neonate. Current AEDs exhibit sub-optimal efficacy, and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td), and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with median differences of 30.7-fold (IQR: 13.7–40.0) across a range of AED protocols, Td, and trial AED efficacy (p < 0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p < 0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1-fold (IQR: 1.7–2.9; p < 0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p < 0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, an appropriate outcome measure, and that control for differences in Td between groups in the analysis will be valid and will minimise sample size.
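One of the effects reported here, the sample size penalty for comparing against an active rather than placebo control, follows directly from the standard two-proportion sample size formula; the sketch below illustrates it with invented response rates, not the study's data.

```python
# How the comparator choice inflates required sample size.
# All response rates are hypothetical, not taken from the study.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Two-proportion sample size (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 * (p1*(1-p1) + p2*(1-p2)) / (p1 - p2)**2

# Trial drug 60% responders; placebo 30%, active control 45% (all invented)
print(f"vs placebo:        n = {n_per_group(0.30, 0.60):.0f} per group")
print(f"vs active control: n = {n_per_group(0.45, 0.60):.0f} per group")
```

The smaller effect difference against the active control drives roughly a four-fold increase in this invented example, the same mechanism behind the fold increases reported above.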
Abstract:
The woodwasp, Sirex noctilio Fabricius (Hymenoptera: Siricidae), was introduced into Brazil in 1988 and has become the main pest of pine plantations. It is distributed over approximately 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo, and Minas Gerais. The woodwasp population is controlled mainly with the nematode Deladenus siricidicola Bedding (Nematoda: Neotylenchidae). Evaluating the efficiency of these natural enemies is difficult because no appropriate sampling system is available. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies; the system proved to be adequate.
Abstract:
In this article we consider a control chart based on the sample variances of two quality characteristics. The points plotted on the chart correspond to the maximum of these two statistics. The main reason to consider the proposed chart instead of the generalized variance |S| chart is its better diagnostic feature: with the new chart it is easier to relate an out-of-control signal to the variable whose parameters have moved away from their in-control values. We study the efficiency of the control chart for different shifts in the covariance matrix. In this way, we obtain the average run length (ARL), which measures the effectiveness of a control chart in detecting process shifts. The proposed chart always detects process disturbances faster than the generalized variance |S| chart. The same is observed when the sample size is variable, except in a few cases in which the sample size switches between a small size and a very large size.
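The chart statistic itself is simple to compute. The sketch below standardises each sample variance by its in-control value, plots the larger of the two, and estimates an out-of-control ARL by simulation. The control limit assumes independent characteristics and an in-control ARL of 200; all of these settings are illustrative, whereas the article evaluates ARLs for shifts in the covariance matrix directly.

```python
# "Max of sample variances" chart: plot the larger of the two standardised
# sample variances, signal when it exceeds a control limit CL.
import numpy as np
from scipy.stats import chi2

n = 5                      # subgroup size (illustrative)
alpha = 0.005              # in-control false-alarm rate -> ARL0 = 200
# With independent characteristics, each (n-1)S^2/sigma0^2 is chi-square(n-1),
# so CL comes from the distribution of the maximum of two such statistics:
CL = chi2.ppf(np.sqrt(1 - alpha), df=n - 1)

def chart_statistic(x, y, sigma0x=1.0, sigma0y=1.0):
    """Larger of the two standardised sample variances."""
    sx2 = (n - 1) * np.var(x, ddof=1) / sigma0x**2
    sy2 = (n - 1) * np.var(y, ddof=1) / sigma0y**2
    return max(sx2, sy2)

# Simulated ARL when the standard deviation of x inflates by 50%
rng = np.random.default_rng(0)
run_lengths = []
for _ in range(2000):
    t = 0
    while True:
        t += 1
        x = rng.normal(0, 1.5, n)   # shifted variance
        y = rng.normal(0, 1.0, n)
        if chart_statistic(x, y) > CL:
            break
    run_lengths.append(t)
print(f"CL = {CL:.2f}, simulated ARL under shift = {np.mean(run_lengths):.1f}")
```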
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In this article, we consider the T² chart with double sampling to control bivariate processes (BDS chart). During the first stage of the sampling, n₁ items of the sample are inspected and two quality characteristics (x, y) are measured. If the Hotelling statistic T₁² for the mean vector of (x, y) is less than w, the sampling is interrupted. If T₁² is greater than CL₁, where CL₁ > w, the control chart signals an out-of-control condition. If w < T₁² ≤ CL₁, the sampling goes on to the second stage, where the remaining n₂ items of the sample are inspected and T₂² for the mean vector of the whole sample is computed. During the second stage of the sampling, the control chart signals an out-of-control condition when the statistic T₂² is larger than CL₂. A comparative study shows that the BDS chart detects process disturbances faster than the standard bivariate T² chart and the adaptive bivariate T² charts with variable sample size and/or variable sampling interval.
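The decision rule described in this abstract translates almost line by line into code. The sketch below is a minimal rendering of that logic; the threshold values w, CL1, and CL2 and the stage sizes are placeholders rather than the optimised values from the paper.

```python
# Double-sampling T^2 (BDS) decision rule. Thresholds are placeholders.
import numpy as np

def t2(sample_mean, mu0, sigma_inv, m):
    """Hotelling statistic for a mean vector based on m observations."""
    d = sample_mean - mu0
    return m * d @ sigma_inv @ d

def bds_signal(obs1, obs2, mu0, sigma_inv, w, CL1, CL2):
    """obs1: first-stage (n1 x 2) data; obs2: second-stage (n2 x 2) data."""
    T1 = t2(obs1.mean(axis=0), mu0, sigma_inv, len(obs1))
    if T1 < w:
        return False                    # in control, sampling interrupted
    if T1 > CL1:
        return True                     # signal on the first stage alone
    full = np.vstack([obs1, obs2])      # w <= T1 <= CL1: take second stage
    T2 = t2(full.mean(axis=0), mu0, sigma_inv, len(full))
    return T2 > CL2                     # signal on the whole sample
```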
Abstract:
When joint X̄ and R charts are in use, samples of fixed size are regularly taken from the process, and their means and ranges are plotted on the X̄ and R charts, respectively. In this article, joint X̄ and R charts are used for monitoring continuous production processes. The sampling is performed in two stages. During the first stage, one item of the sample is inspected and, depending on the result, the sampling is interrupted if the process is found to be in control; otherwise, it goes on to the second stage, where the remaining sample items are inspected. The two-stage sampling procedure speeds up the detection of process disturbances. The proposed joint X̄ and R charts are easier to administer and more efficient than the joint X̄ and R charts with variable sample size; the quality characteristic of interest can be evaluated either by attributes or by variables.
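A minimal sketch of this two-stage rule is given below. The warning and control limits are placeholders, and the first-stage decision is shown for a measured (variable) first item, though the abstract notes it could equally be judged by attributes.

```python
# Two-stage sampling for joint X-bar and R charts. Limits are placeholders,
# not the values the article derives for a desired in-control performance.
def two_stage_xbar_r(first_item, rest, mu0, warning, xbar_limits, r_limit):
    """Return the chart decision for one sampling epoch."""
    if abs(first_item - mu0) < warning:
        return "in control"              # sampling interrupted after one item
    sample = [first_item] + list(rest)   # second stage: inspect remaining items
    xbar = sum(sample) / len(sample)
    r = max(sample) - min(sample)        # sample range
    lcl, ucl = xbar_limits
    if not (lcl < xbar < ucl) or r > r_limit:
        return "out-of-control signal"
    return "in control"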
Abstract:
A standard X̄ chart for controlling the process mean takes samples of size n0 at specified, equally spaced, fixed-time points. This article proposes a modification of the standard X̄ chart that allows one to take additional samples, larger than n0, between these fixed times. The additional samples are taken from the process when there is evidence that the process mean has moved from target. Following the notation proposed by Reynolds (1996a) and Costa (1997), we call the proposed chart the VSSIFT X̄ chart for short, where VSSIFT means variable sample size and sampling intervals with fixed times. The X̄ chart with the VSSIFT feature is easier to administer than a standard VSSI X̄ chart, which is not constrained to sample at the specified fixed times. The performances of the charts in detecting process mean shifts are comparable.
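The VSSIFT scheme's sampling logic can be summarised in a few lines: routine samples of size n0 at the fixed times, plus a larger sample inserted between fixed times when a point falls in a warning region. The sketch below is only an illustration of that scheduling rule; the warning limit and sample sizes are placeholders.

```python
# VSSIFT scheduling sketch: decide the size and timing of the next sample from
# the standardised mean of the sample just taken. Values are placeholders.
def next_sample_plan(z, warning=1.5, n0=5, n_extra=20):
    """z: standardised subgroup mean; warning: warning-limit coefficient."""
    if abs(z) > warning:
        # evidence the mean moved: insert a bigger sample before the next
        # fixed-time sample
        return {"when": "between fixed times", "size": n_extra}
    return {"when": "next fixed time", "size": n0}

print(next_sample_plan(0.4))   # routine sampling continues
print(next_sample_plan(1.9))   # warning region: extra, larger sample
```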
Abstract:
In this article, we consider the synthetic control chart with two-stage sampling (SyTS chart) to control bivariate processes. During the first stage, one item of the sample is inspected and two correlated quality characteristics (x, y) are measured. If the Hotelling statistic T₁² for this individual observation of (x, y) is lower than a specified value UCL₁, the sampling is interrupted. Otherwise, the sampling goes on to the second stage, where the remaining items are inspected and the Hotelling statistic T₂² for the sample means of (x, y) is computed. When the statistic T₂² is larger than a specified value UCL₂, the sample is classified as nonconforming. According to the synthetic control chart procedure, the signal is based on the number of conforming samples between two neighboring nonconforming samples. The proposed chart detects process disturbances faster than the bivariate charts with variable sample size, and from a practical viewpoint it is more convenient to administer.
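The synthetic part of the procedure, signalling on the number of samples between two nonconforming ones, can be sketched as below. L, the largest gap that still triggers a signal, is a design parameter chosen together with UCL1 and UCL2; the value here is a placeholder, and this sketch follows the abstract's "between two nonconforming samples" description (synthetic-chart variants differ in how they treat the very first nonconforming sample).

```python
# Synthetic-chart signalling: a sample is nonconforming when its second-stage
# statistic exceeds UCL2; the chart signals when two nonconforming samples
# are at most L samples apart. L is a placeholder design parameter.
def synthetic_signals(nonconforming_flags, L=10):
    """Yield True at each sample where the synthetic chart signals."""
    last_nc = None                  # index of previous nonconforming sample
    for t, nc in enumerate(nonconforming_flags, start=1):
        signal = False
        if nc:
            if last_nc is not None:
                # conforming run length between two nonconforming samples
                signal = (t - last_nc) <= L
            last_nc = t
        yield signal

# Example: samples 4 and 7 are nonconforming and only 3 apart -> signal at 7
flags = [False, False, False, True, False, False, True]
print(list(synthetic_signals(flags, L=5)))
```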
Abstract:
In this paper we propose the Double Sampling X̄ control chart for monitoring processes in which the observations follow a first-order autoregressive model. We consider sampling intervals that are sufficiently long to meet the rational subgroup concept. The Double Sampling X̄ chart is substantially more efficient than the Shewhart chart and the Variable Sample Size chart. To study the properties of these charts, we derived closed-form expressions for the average run length (ARL), taking into account the within-subgroup correlation. Numerical results show that this correlation has a significant impact on the chart properties.
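The core of the correlation effect is that, under a within-subgroup AR(1) model, the variance of the subgroup mean is no longer σ²/n. The sketch below uses the standard expression for the variance of the mean of n consecutive AR(1) observations to show how badly an X̄ chart designed under independence can misstate its in-control ARL; the subgroup size and autocorrelation are illustrative, and this is not the paper's Double Sampling derivation.

```python
# Variance of a subgroup mean under within-subgroup AR(1) correlation, and the
# resulting in-control ARL of a 3-sigma X-bar chart designed as if the
# observations were independent. n and phi are illustrative values.
import numpy as np
from scipy.stats import norm

def var_xbar(sigma2, n, phi):
    """Var of the mean of n consecutive observations of a stationary AR(1)."""
    k = np.arange(1, n)
    return sigma2 / n * (1 + 2 / n * np.sum((n - k) * phi**k))

n, sigma2, phi = 5, 1.0, 0.5
naive_sd = np.sqrt(sigma2 / n)                 # assumes independence
true_sd = np.sqrt(var_xbar(sigma2, n, phi))    # accounts for correlation

# Actual false-alarm probability of limits placed at +/- 3 * naive_sd
p = 2 * norm.sf(3 * naive_sd / true_sd)
print(f"in-control ARL: {1 / p:.0f} instead of the nominal 370")
```

With phi = 0.5 the true standard deviation of the subgroup mean is about 50% larger than the naive value, and the in-control ARL collapses from roughly 370 to a few dozen, which is why the paper's ARL expressions account for the within-subgroup correlation explicitly.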