840 results for Variable sample size X̄ control chart
Abstract:
BACKGROUND The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT), using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). METHODS We used a compartmental model to mimic the structure of a published RCT. We considered three different processes for the timing of PID development in relation to the initial C. trachomatis infection: immediate, constant throughout the infectious period, or at its end. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. RESULTS The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. CONCLUSIONS Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning an RCT.
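To make the dependence on assumed event rates concrete, a standard two-proportion sample size calculation (normal approximation) can be sketched as below; the control-group PID incidence and RR are placeholder values, not figures taken from the trial discussed above.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p_control, relative_risk, alpha=0.05, power=0.80):
    """Standard two-proportion sample size (normal approximation)."""
    p_treat = p_control * relative_risk
    p_bar = (p_control + p_treat) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_control * (1 - p_control)
                        + p_treat * (1 - p_treat))) ** 2
    return ceil(num / (p_control - p_treat) ** 2)

# Placeholder assumptions: 3% PID incidence without screening, RR = 0.5.
print(n_per_group(p_control=0.03, relative_risk=0.5))
```

Rerunning the function with slightly different incidence or RR values shows directly how sensitive the required group size is to these assumptions, which is the sensitivity the modelling study highlights.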
Abstract:
Sample preparation procedures for AMS measurements of 129I and 127I in environmental materials and some methodological aspects of quality assurance are discussed. Measurements from analyses of some pre-nuclear soil and thyroid gland samples and from a systematic investigation of natural waters in Lower Saxony, Germany, are described. Although the lowest 129I/127I ratios observed to date in soils and thyroid glands were measured, they are still suspected of contamination, since they are significantly higher than the pre-nuclear equilibrium ratio in the marine hydrosphere. A survey of all available 129I/127I isotopic ratios in precipitation shows a dramatic increase until the middle of the 1980s and a stabilization since 1987 at high isotopic ratios of about (3.6–8.3)×10⁻⁷. In surface waters, ratios of (57–380)×10⁻¹⁰ are measured, while shallow ground waters, with ratios of (1.3–200)×10⁻¹⁰, show significantly lower values and a much larger spread. The data for 129I in soils and in precipitation are used to estimate pre-nuclear and modern 129I deposition densities.
Abstract:
We conducted a nested case-control study to determine the significant risk factors for developing encephalitis from West Nile virus (WNV) infection. The purpose of this research project was to expand the previously published Houston study of 2002–2004 patients to include data on Houston patients from four additional years (2005–2008) and to determine, with this larger sample size, whether there were any differences in the risk factors associated with the more severe outcomes of WNV infection, encephalitis and death. A re-analysis of the risk factors for encephalitis and death was conducted on all of the patients from 2002–2008 and was the focus of this research. This analysis made it possible to determine that, with an increased sample size, the risk factors identified for encephalitis and death do differ. Retrospective medical chart reviews were completed for the 265 confirmed WNV hospitalized patients; 153 patients had encephalitis (WNE), 112 had either viral syndrome with fever (WNF) or meningitis (WNM); a total of 22 patients died. Univariate logistic regression analyses of demographic, comorbidity, and social risk factors were conducted in a similar manner to the previous study to determine the risk factors for developing encephalitis from WNV. A multivariate model was developed using model-building strategies for the multivariate logistic regression analysis. The hypothesis of this study was that additional risk factors would be shown to be significant with the increase in sample size of the dataset. This analysis, with a greater sample size and increased power, supports the hypothesis: additional risk factors were shown to be statistically associated with the more severe outcomes of WNV infection (WNE or death). Based on univariate logistic regression results, these data showed that although age 20–44 years was a statistically significant protective factor against developing WNE in the original study, it was not significant in the expanded sample. This study identified chronic alcohol abuse as a significant WNE risk factor, although it was not significant in the original analysis. Other WNE risk factors that were significant in this analysis but not in the original analysis were cancer not in remission for more than 5 years, history of stroke, and chronic renal disease. When comparing the two analyses with death as an outcome, two risk factors that were significant in the original analysis but not in the expanded dataset analysis were diabetes mellitus and immunosuppression. Three risk factors significant in this expanded analysis but not in the original study were illicit drug use, heroin or opiate use, and injection drug use. However, in the multiple logistic regression models, the same independent risk factors for developing encephalitis, age and history of hypertension (including drug-induced hypertension), were consistent in both studies.
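As a rough sketch of the univariate screening and multivariable modelling described above (the file name, column names, and risk factors below are hypothetical placeholders, not the study's actual variables):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-level data: one row per patient, a binary outcome
# column and binary risk-factor columns (all names are illustrative).
df = pd.read_csv("wnv_patients.csv")    # hypothetical file
outcome = df["encephalitis"]            # 1 = WNE, 0 = WNF/WNM
risk_factors = ["age_20_44", "hypertension", "chronic_alcohol_abuse",
                "history_of_stroke", "chronic_renal_disease"]

# Univariate screening: one logistic model per candidate risk factor,
# reporting the odds ratio with its 95% confidence interval.
for rf in risk_factors:
    X = sm.add_constant(df[[rf]])
    fit = sm.Logit(outcome, X).fit(disp=0)
    odds_ratio = np.exp(fit.params[rf])
    ci_low, ci_high = np.exp(fit.conf_int().loc[rf])
    print(f"{rf}: OR={odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), "
          f"p={fit.pvalues[rf]:.3f}")

# Multivariable model on the factors retained after screening.
X_multi = sm.add_constant(df[risk_factors])
print(sm.Logit(outcome, X_multi).fit(disp=0).summary())
```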
Abstract:
Most empirical disciplines promote the reuse and sharing of datasets, as it leads to greater possibility of replication. While this is increasingly the case in Empirical Software Engineering, some of the most popular bug-fix datasets are now known to be biased. This raises two significant concerns: first, that sample bias may lead to underperforming prediction models, and second, that the external validity of studies based on biased datasets may be suspect. This issue has raised considerable consternation in the ESE literature in recent years. However, there is a confounding factor of these datasets that has not been examined carefully: size. Biased datasets sample only some of the data that could be sampled, and do so in a biased fashion; but biased samples can also be smaller or larger. Smaller datasets in general provide less reliable bases for estimating models, and thus could lead to inferior model performance. In this setting, we ask the question: what affects performance more, bias or size? We conduct a detailed, large-scale meta-analysis, using simulated datasets sampled with bias from a high-quality dataset which is relatively free of bias. Our results suggest that size always matters just as much as bias direction, and in fact much more than bias direction when considering information-retrieval measures such as AUC and F-score. This indicates that, at least for prediction models, even when dealing with sampling bias, simply finding larger samples can sometimes be sufficient. Our analysis also exposes the complexity of the bias issue and raises further issues to be explored in the future.
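A toy version of the size-versus-bias comparison might look like the following sketch; the synthetic dataset, the biasing rule, and the classifier are illustrative stand-ins, not the setup used in the meta-analysis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a large, relatively bias-free defect dataset.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

def evaluate(idx):
    """Train on the selected rows; score on the common held-out test set."""
    clf = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
    prob = clf.predict_proba(X_test)[:, 1]
    return roc_auc_score(y_test, prob), f1_score(y_test, prob > 0.5)

n_small, n_large = 500, 5_000

# Unbiased samples of two sizes.
small = rng.choice(len(y_pool), n_small, replace=False)
large = rng.choice(len(y_pool), n_large, replace=False)

# A biased sample: defective instances are over-represented, loosely
# mimicking the kind of sampling bias reported for bug-fix datasets.
weights = np.where(y_pool == 1, 4.0, 1.0)
biased = rng.choice(len(y_pool), n_large, replace=False,
                    p=weights / weights.sum())

for name, idx in [("small unbiased", small), ("large unbiased", large),
                  ("large biased", biased)]:
    auc, f1 = evaluate(idx)
    print(f"{name:15s} AUC={auc:.3f} F1={f1:.3f}")
```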
Abstract:
The interest in LED lighting has been growing recently due to the high efficacy, lifetime and ruggedness that this technology offers. However, the key to guaranteeing those parameters with these new electronic devices is to keep the working temperature of the semiconductor crystal under control. This paper proposes an LED lamp design that fulfils the requirements of PV lighting systems, whose main quality criterion is reliability. It uses a non-stabilized constant voltage source, such as batteries, directly as the power supply. An electronic control architecture is used to regulate the current applied to the LED matrix according to its temperature and the output voltage of the batteries, using two pulse-width modulation (PWM) signals. The first connects and disconnects the LEDs from the power supply, and the second connects and disconnects several emitters from the electric circuit, changing its overall impedance. A prototype of the LED lamp has been implemented and tested at different temperatures and battery voltages.
Abstract:
Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; then the beads are recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in analysis of categorical data, the Pearson statistic is shown to still follow a χ² distribution. This result allows us to derive the required number of beads such that, with 99% confidence, the overall relative error is controlled to be less than a pregiven tolerable limit L1. In this paper, we also discuss another criterion, which determines the required number of beads so that, with 99% confidence, all individual relative errors are controlled to be less than a pregiven tolerable limit L2 (0 < L2 < 1).
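The second criterion lends itself to a quick numerical check; the sketch below simulates equal random splitting of N beads into k pools and estimates how often every pool's relative error stays below a limit L2. The error definition used here is the obvious one, and the values of k and L2 are arbitrary examples, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(1)

def pass_rate(n_beads, k_pools, limit, n_sim=2000):
    """Fraction of simulated splits in which every pool's relative error
    |n_i - N/k| / (N/k) stays below the limit."""
    expected = n_beads / k_pools
    counts = rng.multinomial(n_beads, [1.0 / k_pools] * k_pools, size=n_sim)
    max_rel_err = np.abs(counts - expected).max(axis=1) / expected
    return np.mean(max_rel_err < limit)

k, L2 = 20, 0.10   # hypothetical: 20 pools, 10% individual tolerance
for n in [5_000, 10_000, 20_000, 40_000, 80_000]:
    print(f"N={n:>6}: P(all pools within {L2:.0%}) ≈ {pass_rate(n, k, L2):.3f}")
```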
Abstract:
Thesis--University of Illinois.
Abstract:
Project No. 711151.01.
Abstract:
Bibliography: leaf 33.
Abstract:
Mode of access: Internet.
Abstract:
Statistical software is now commonly available to calculate power (P') and sample size (N) for most experimental designs. In many circumstances, however, sample size is constrained by lack of time, cost, and, in research involving human subjects, the difficulty of recruiting suitable individuals. In addition, the calculation of N is often based on erroneous assumptions about variability, and therefore such estimates are often inaccurate. At best, we would suggest that such calculations provide only a very rough guide of how to proceed in an experiment. Nevertheless, calculation of P' is very useful, especially in experiments that have failed to detect a difference which the experimenter thought was present. We would recommend that P' always be calculated in these circumstances to determine whether the experiment was actually too small to test null hypotheses adequately.
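For example, a post-hoc power check for a two-group comparison can be done in a single call; the effect size and group size below are illustrative, not drawn from any particular experiment.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical post-hoc check: a two-group experiment with n = 12 per group
# failed to detect a difference the experimenter expected to be about
# d = 0.5 standard deviations (Cohen's d).
power = TTestIndPower().power(effect_size=0.5, nobs1=12, alpha=0.05,
                              ratio=1.0, alternative="two-sided")
print(f"Achieved power P' ≈ {power:.2f}")   # well below the usual 0.8 target
```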
Abstract:
The concept of sample size and statistical power estimation is now something that optometrists who want to perform research, whether in practice or in an academic institution, cannot simply avoid. Ethics committees, journal editors and grant-awarding bodies increasingly request that all research be backed up with sample size and statistical power estimation in order to justify any study and its findings. This article presents a step-by-step guide to the process of determining sample size and statistical power. It builds on statistical concepts presented in earlier articles in Optometry Today by Richard Armstrong and Frank Eperjesi.
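The end point of such a step-by-step calculation is typically a single solve for N; a minimal sketch, with made-up planning values, might look like this:

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative planning calculation (numbers are made up): how many subjects
# per group are needed to detect a medium effect (Cohen's d = 0.5) with 80%
# power at a two-sided alpha of 0.05?
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.8,
                                          alpha=0.05, ratio=1.0,
                                          alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")  # ≈ 64
```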
Abstract:
2000 Mathematics Subject Classification: 62E16, 65C05, 65C20.