892 results for sample size
Abstract:
Purpose Arbitrary numbers of corneal confocal microscopy images have been used for analysis of corneal subbasal nerve parameters under the implicit assumption that these are a representative sample of the central corneal nerve plexus. The purpose of this study is to present a technique for quantifying the number of random central corneal images required to achieve an acceptable level of accuracy in the measurement of corneal nerve fiber length and branch density. Methods Every possible combination of 2 to 16 images (the mean of all 16 images was deemed the true mean) of the central corneal subbasal nerve plexus, not overlapping by more than 20%, was assessed for nerve fiber length and branch density in 20 subjects with type 2 diabetes and varying degrees of functional nerve deficit. Mean ratios were calculated to allow comparisons between and within subjects. Results In assessing nerve branch density, eight randomly chosen images not overlapping by more than 20% produced an average that was within 30% of the true mean 95% of the time. A similar sampling strategy of five images was within 13% of the true mean 80% of the time for corneal nerve fiber length. Conclusions The “sample combination analysis” presented here can be used to determine the sample size required for a desired level of accuracy of quantification of corneal subbasal nerve parameters. This technique may have applications in other biological sampling studies.
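The core of such a sample-combination analysis can be sketched in a few lines, assuming per-image parameter values have already been extracted; the function name, tolerance/coverage thresholds, and density values below are illustrative, and the ≤20% overlap constraint is not modeled:

```python
from itertools import combinations
from statistics import mean

def min_images(values, tolerance, coverage):
    """Smallest subset size whose combinations estimate the all-image
    'true mean' to within `tolerance` of it, `coverage` of the time."""
    true_mean = mean(values)  # mean of all 16 images, per the abstract
    for n in range(2, len(values)):
        subsets = list(combinations(values, n))
        hits = sum(abs(mean(s) - true_mean) <= tolerance * true_mean
                   for s in subsets)
        if hits / len(subsets) >= coverage:
            return n
    return len(values)

# hypothetical per-image nerve branch densities for one subject
densities = [14.2, 11.8, 13.5, 12.9, 15.1, 10.7, 13.0, 12.2,
             14.8, 11.3, 12.6, 13.9, 12.4, 14.0, 11.9, 13.3]
print(min_images(densities, tolerance=0.30, coverage=0.95))
```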
Abstract:
Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph relating the error of prediction to the sample size is provided, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set so as to maximize the determinant of the variance-covariance matrix of the predictions.
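The one-point-at-a-time augmentation rule lends itself to a short sketch with a Gaussian-process emulator; the toy function, kernel, and candidate grid are assumptions for illustration, not the fire model from the paper:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(6 * x)              # stand-in for the computer model
X = rng.uniform(0, 1, (5, 1))            # initial design
y = f(X).ravel()
candidates = np.linspace(0, 1, 201).reshape(-1, 1)

for _ in range(10):                      # augment one run at a time
    gp = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X, y)
    _, sd = gp.predict(candidates, return_std=True)
    x_new = candidates[[np.argmax(sd)]]  # point of maximum prediction variance
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new).ravel())
```

The determinant-based alternative would replace the argmax step with a search over candidate subsets maximizing the determinant of the prediction covariance, which the emulator also provides (predict with return_cov=True).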
Abstract:
Brain asymmetry has been a topic of interest for neuroscientists for many years. The advent of diffusion tensor imaging (DTI) allows researchers to extend the study of asymmetry to a microscopic scale by examining fiber integrity differences across hemispheres rather than macroscopic differences in shape or structure volumes. Even so, the power to detect these microarchitectural differences depends on the sample size and on how the brain images are registered. We fluidly registered 4 Tesla DTI scans from 180 healthy adult twins (45 identical and 45 fraternal pairs) to a geometrically centered population mean template. We computed voxelwise maps of significant asymmetries (left/right hemisphere differences) for common fiber anisotropy indices (FA, GA). Quantitative genetic models revealed that 47-62% of the variance in asymmetry was due to genetic differences in the population. We studied how these heritability estimates varied with the type of registration target (T1- or T2-weighted) and with sample size. All methods consistently found that genetic factors strongly determined the lateralization of fiber anisotropy, facilitating the quest for specific genes that might influence brain asymmetry and fiber integrity.
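The 47-62% heritability figures come from quantitative genetic (ACE) models fitted to the twin data; the classical-twin logic underlying such models can be illustrated with Falconer's approximation, using hypothetical twin-pair correlations for an asymmetry index:

```python
# Falconer's approximation from monozygotic (MZ) and dizygotic (DZ)
# twin-pair correlations -- illustrative values, not the study's estimates
r_mz, r_dz = 0.55, 0.28
h2 = 2 * (r_mz - r_dz)  # additive genetic variance (heritability)
c2 = 2 * r_dz - r_mz    # shared-environment variance
e2 = 1 - r_mz           # unique environment + measurement error
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")  # h2 = 0.54
```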
Abstract:
Objective The Nintendo Wii Fit integrates virtual gaming with body movement, and may be suitable as an adjunct to conventional physiotherapy following lower limb fractures. This study examined the feasibility and safety of using the Wii Fit as an adjunct to outpatient physiotherapy following lower limb fractures, and reported sample size considerations for an appropriately powered randomised trial. Methodology Ambulatory patients receiving physiotherapy following a lower limb fracture participated in this study (n = 18). All participants received usual care (individual physiotherapy). The first nine participants also used the Wii Fit under the supervision of their treating clinician as an adjunct to usual care. Adverse events, fracture malunion or exacerbation of symptoms were recorded. Pain, balance and patient-reported function were assessed at baseline and discharge from physiotherapy. Results No adverse events were attributed to either the usual care physiotherapy or Wii Fit intervention for any patient. Overall, 15 (83%) participants completed both assessments and interventions as scheduled. For 80% power in a clinical trial, the number of complete datasets required in each group to detect a small, medium or large effect of the Wii Fit at a post-intervention assessment was calculated at 175, 63 and 25, respectively. Conclusions The Nintendo Wii Fit was safe and feasible as an adjunct to ambulatory physiotherapy in this sample. When considering a likely small effect size and the 17% dropout rate observed in this study, 211 participants would be required in each clinical trial group. A larger effect size or multiple repeated measures design would require fewer participants.
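The reported per-group numbers are consistent with a standard two-sample power calculation at Cohen's effect sizes of about 0.3, 0.5, and 0.8 (the abstract does not state these, so they are inferred); a sketch with statsmodels lands within one participant of each reported figure, the difference being a matter of rounding convention:

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.3, 0.5, 0.8):  # assumed small/medium/large effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: {ceil(n)} complete datasets per group")

# inflate the small-effect figure for the observed 17% dropout
print(ceil(175 / (1 - 0.17)))  # -> 211 participants per group
```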
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when a substantial proportion of the response values is censored below the detection limit, because the traditional procedures then require strong distributional assumptions about the censored observations. In this paper, we propose a quantile methodology that is robust to outliers and can handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is in fact smaller than that required by the traditional t-test, illustrating the merit of our method.
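The robustness to censoring can be seen in a toy simulation: a sign/binomial test of whether the median exceeds a compliance threshold uses only the count of observations above that threshold, so values below the detection limit never need to be imputed. The distribution, limits, and sample sizes below are illustrative assumptions, not values from the Perth program:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(1)
THRESHOLD = 1.0        # compliance limit for the monitored nutrient
DETECTION_LIMIT = 0.4  # observations below this are censored

def power(n, true_median=1.3, sims=2000, alpha=0.05):
    """Simulated power of a one-sided median (sign) test."""
    hits = 0
    for _ in range(sims):
        x = rng.lognormal(np.log(true_median), 0.6, size=n)
        x[x < DETECTION_LIMIT] = DETECTION_LIMIT  # censoring is harmless here:
        k = int((x > THRESHOLD).sum())            # counts above 1.0 are unchanged
        if binomtest(k, n, 0.5, alternative="greater").pvalue < alpha:
            hits += 1
    return hits / sims

for n in (20, 40, 60, 80):  # smallest n with power >= 0.80 is the sample size
    print(n, power(n))
```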
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gain in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain, or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain; the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
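One way to make the rate-of-gain criterion explicit (my formalization, not notation from the paper): treating repeated phase II trials as a renewal process, the long-run rate of gain is

```latex
R \;=\; \frac{E[G]}{E[D]} \;=\; \frac{E[G]}{d_{\mathrm{II}} + p\, d_{\mathrm{III}}}
```

where G is the gain generated by one phase II trial, d_II is its duration, p is the probability of forwarding the treatment to phase III, and d_III is the phase III duration. A design that maximizes E[G] alone ignores the denominator: forwarding many, largely ineffective, treatments inflates E[D] and depresses R.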
Abstract:
Aim: To assess the sample sizes used in studies of diagnostic accuracy in ophthalmology. Design and sources: A survey of literature published in 2005. Methods: The frequency of reported sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of the five clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. Results: A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Conclusion: Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may yield misleading estimates of test accuracy. An improvement over the current standards for the design and reporting of diagnostic studies is warranted.
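For context, a common way to size a diagnostic accuracy study (Buderer's formula) targets a confidence-interval half-width for sensitivity, with diseased cases diluted by prevalence; the inputs below are illustrative, not values from the surveyed studies:

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(sens, half_width, prevalence, alpha=0.05):
    """Total sample size so the (1 - alpha) CI for sensitivity
    has the requested half-width."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z**2 * sens * (1 - sens) / half_width**2
    return ceil(n_diseased / prevalence)

# e.g., sensitivity 0.90 estimated to within +/- 0.05 at 50.5% prevalence
print(n_for_sensitivity(0.90, 0.05, 0.505))  # -> 274
```

With these illustrative inputs the required total (about 274) already exceeds the surveyed mean sample size of 172.6, underscoring the abstract's concern.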
Abstract:
Purpose The selection of suitable outcomes and sample size calculation are critical factors in the design of a randomised controlled trial (RCT). The goal of this study was to identify the range of outcomes and information on sample size calculation in RCTs on geographic atrophy (GA). Methods We carried out a systematic review of age-related macular degeneration (AMD) RCTs. We searched MEDLINE, EMBASE, Scopus, Cochrane Library, www.controlled-trials.com, and www.ClinicalTrials.gov. Two independent reviewers screened records. One reviewer collected data and the second reviewer appraised 10% of the collected data. We scanned the reference lists of selected papers to include other relevant RCTs. Results The literature and registry searches identified 3816 abstracts of journal articles and 493 records from trial registries. From a total of 177 RCTs on all types of AMD, 23 RCTs on GA were included. Eighty-one clinical outcomes were identified. Visual acuity (VA) was the most frequently used outcome, presented in 18 of the 23 RCTs, followed by measures of lesion area. For the sample size analysis, 8 GA RCTs were included. None of them provided sufficient information on sample size calculations. Conclusions This systematic review illustrates a lack of standardisation in outcome reporting in GA trials and issues regarding sample size calculation. These limitations significantly hamper attempts to compare outcomes across studies and to perform meta-analyses.
Abstract:
Abstract taken from the publication
Abstract:
Abstract taken from the publication