772 results for Random Sample Size


Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62E16, 65C05, 65C20.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: Arbitrary numbers of corneal confocal microscopy images have been used for analysis of corneal subbasal nerve parameters under the implicit assumption that these are a representative sample of the central corneal nerve plexus. The purpose of this study is to present a technique for quantifying the number of random central corneal images required to achieve an acceptable level of accuracy in the measurement of corneal nerve fiber length and branch density. Methods: Every possible combination of 2 to 16 images (with the mean of all 16 taken as the true mean) of the central corneal subbasal nerve plexus, not overlapping by more than 20%, was assessed for nerve fiber length and branch density in 20 subjects with type 2 diabetes and varying degrees of functional nerve deficit. Mean ratios were calculated to allow comparisons between and within subjects. Results: In assessing nerve branch density, eight randomly chosen images not overlapping by more than 20% produced an average that was within 30% of the true mean 95% of the time. For corneal nerve fiber length, a similar sampling strategy of five images was within 13% of the true mean 80% of the time. Conclusions: The “sample combination analysis” presented here can be used to determine the sample size required for a desired level of accuracy in quantifying corneal subbasal nerve parameters. This technique may have applications in other biological sampling studies.
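
A minimal sketch of a sample combination analysis of this kind, assuming made-up per-image readings for a single subject: for each subset size k, it computes how often the mean of k images falls within a chosen tolerance of the mean of all 16 images. The 20% overlap restriction used in the study is not modelled here.

```python
# Illustrative sketch of a "sample combination analysis": for each subset size
# k, check how often the mean of k randomly chosen images falls within a
# tolerance of the "true" mean (the mean of all images). The per-image values
# below are hypothetical.
from itertools import combinations
import numpy as np

def sample_combination_analysis(values, tolerance=0.30):
    """For each subset size k, return the proportion of k-image subsets whose
    mean lies within `tolerance` (as a fraction) of the mean of all images."""
    values = np.asarray(values, dtype=float)
    true_mean = values.mean()
    coverage = {}
    for k in range(2, len(values)):
        subset_means = [np.mean(c) for c in combinations(values, k)]
        within = np.abs(np.array(subset_means) / true_mean - 1.0) <= tolerance
        coverage[k] = within.mean()
    return coverage

# Hypothetical nerve branch density readings from 16 images of one subject
rng = np.random.default_rng(0)
images = rng.normal(loc=25.0, scale=8.0, size=16)
for k, prop in sample_combination_analysis(images).items():
    print(f"{k:2d} images: {prop:5.1%} of subsets within 30% of the true mean")
```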

Relevance:

100.00%

Publisher:

Abstract:

The usual practice in using a control chart to monitor a process is to take samples of size n from the process every h hours. This article considers the properties of the X̄ chart when the size of each sample depends on what is observed in the preceding sample. The idea is that the sample should be large if the sample point of the preceding sample is close to but not actually outside the control limits and small if the sample point is close to the target. The properties of the variable sample size (VSS) X̄ chart are obtained using Markov chains. The VSS X̄ chart is substantially quicker than the traditional X̄ chart in detecting moderate shifts in the process.
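
As a rough sketch of the adaptive rule described above (not the article's exact chart), the following simulation switches between a small and a large sample size depending on where the previous standardized sample mean fell relative to an assumed warning limit, and records how many samples are needed to signal after a shift. The limits and sample sizes are illustrative choices, not those of the article.

```python
# Minimal sketch of a variable sample size (VSS) X-bar chart rule with assumed
# two-zone limits: the next sample is small when the current point is inside a
# warning limit w, large when it lies between w and the control limit L, and
# the chart signals when the point falls outside L.
import numpy as np

def vss_xbar_run(process_mean, sigma, target, n_small=3, n_large=9,
                 w=1.0, L=3.0, max_samples=1000, seed=0):
    """Return the number of samples taken until the chart signals."""
    rng = np.random.default_rng(seed)
    n = n_small                                      # start with the small size
    for t in range(1, max_samples + 1):
        xbar = rng.normal(process_mean, sigma / np.sqrt(n))
        z = (xbar - target) / (sigma / np.sqrt(n))   # standardized sample mean
        if abs(z) > L:
            return t                                 # out-of-control signal
        n = n_small if abs(z) <= w else n_large      # choose the next sample size
    return max_samples

# Estimated average run length (in samples) after a moderate 0.5-sigma shift
runs = [vss_xbar_run(10.5, 1.0, 10.0, seed=s) for s in range(500)]
print("estimated ARL after a 0.5-sigma shift:", np.mean(runs))
```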

Relevance:

100.00%

Publisher:

Abstract:

Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R. B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R. J. Patz and B. W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that in [3] in terms of parameter recovery, particularly when using the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is considered jointly with the development of model fitting assessment tools. The results are compared with those obtained by Azevedo et al., and indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
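
As a small illustration of the stochastic representation mentioned above (Henze, 1986), the sketch below draws skew-normal latent traits under the centred parameterization from delta*|U0| + sqrt(1 - delta^2)*U1 with U0, U1 independent standard normals. It only shows the hierarchical construction, not the authors' MHWGS sampler, and the parameter values are arbitrary.

```python
# Generating skew-normal variates from the hierarchical (stochastic)
# representation, specified by centred parameters (mean, sd, skewness).
import numpy as np

def sample_sn_centred(mean, sd, skewness, size, rng):
    """Draw skew-normal latent traits specified by their centred parameters."""
    b = np.sqrt(2.0 / np.pi)
    r = np.cbrt(2.0 * skewness / (4.0 - np.pi))    # r = b*delta / sqrt(1 - (b*delta)^2)
    s = r / np.sqrt(1.0 + r * r)                   # s = b*delta
    delta = s / b                                  # delta = alpha / sqrt(1 + alpha^2)
    omega = sd / np.sqrt(1.0 - s * s)              # scale of the direct parameterization
    xi = mean - omega * s                          # location of the direct parameterization
    u0 = np.abs(rng.standard_normal(size))         # half-normal mixing variable
    u1 = rng.standard_normal(size)
    return xi + omega * (delta * u0 + np.sqrt(1.0 - delta * delta) * u1)

rng = np.random.default_rng(1)
theta = sample_sn_centred(mean=0.0, sd=1.0, skewness=0.6, size=100_000, rng=rng)
print(theta.mean(), theta.std())                   # close to 0 and 1 by construction
```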

Relevance:

100.00%

Publisher:

Abstract:

gsample draws a random sample from the data in memory. Simple random sampling (SRS) is supported, as well as unequal probability sampling (UPS), of which sampling with probabilities proportional to size (PPS) is a special case. Both methods, SRS and UPS/PPS, provide sampling with replacement and sampling without replacement. Furthermore, stratified sampling and cluster sampling are supported.
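
gsample itself is a Stata command; purely as an illustration of one of the schemes it supports, the short Python sketch below draws a probability-proportional-to-size sample with replacement from hypothetical units.

```python
# Not gsample itself: a minimal illustration of probability-proportional-to-size
# (PPS) sampling with replacement. Units and size measures are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
units = np.array(["A", "B", "C", "D", "E"])
size = np.array([10.0, 40.0, 5.0, 25.0, 20.0])   # size measure per unit

p = size / size.sum()                            # selection probabilities
sample = rng.choice(units, size=6, replace=True, p=p)
print(sample)                                    # larger units appear more often
```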

Relevance:

100.00%

Publisher:

Abstract:

Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with the maximum determinant of the variance-covariance matrix of predictions.
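
A sketch of the simplest augmentation rule mentioned above, under assumed ingredients: a toy one-dimensional function stands in for the computer model, and a Gaussian-process surrogate (not necessarily the emulator used in the article) supplies the prediction variance used to pick each new run.

```python
# Sequentially augment a small design by adding, at each step, the candidate
# point with the largest prediction variance under a GP surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def computer_model(x):                                # stand-in for an expensive simulator
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0.0, 2.0, 6).reshape(-1, 1)           # initial design
y = computer_model(X).ravel()

candidates = np.linspace(0.0, 2.0, 201).reshape(-1, 1)
for _ in range(4):                                    # add four extra runs, one at a time
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)].reshape(1, 1)  # largest prediction variance
    X = np.vstack([X, x_new])
    y = np.append(y, computer_model(x_new).ravel())
    print("added run at x =", float(x_new[0, 0]))
```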

Relevance:

100.00%

Publisher:

Abstract:

Brain asymmetry has been a topic of interest for neuroscientists for many years. The advent of diffusion tensor imaging (DTI) allows researchers to extend the study of asymmetry to a microscopic scale by examining fiber integrity differences across hemispheres rather than macroscopic differences in shape or structure volumes. Even so, the power to detect these microarchitectural differences depends on the sample size (how many subjects are studied) and on how the brain images are registered. We fluidly registered 4 Tesla DTI scans from 180 healthy adult twins (45 identical and 45 fraternal pairs) to a geometrically-centered population mean template. We computed voxelwise maps of significant asymmetries (left/right hemisphere differences) for common fiber anisotropy indices (FA, GA). Quantitative genetic models revealed that 47-62% of the variance in asymmetry was due to genetic differences in the population. We studied how these heritability estimates varied with the type of registration target (T1- or T2-weighted) and with sample size. All methods consistently found that genetic factors strongly determined the lateralization of fiber anisotropy, facilitating the quest for specific genes that might influence brain asymmetry and fiber integrity.
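
The abstract does not say which quantitative genetic model was fitted; purely as a back-of-the-envelope illustration of how twin correlations translate into a heritability estimate, the classical Falconer formula with hypothetical intrapair correlations gives a value in the reported range.

```python
# Falconer's estimate h^2 = 2*(r_MZ - r_DZ). The correlations below are
# hypothetical, chosen only so the estimate lands inside the 47-62% range
# reported above; this is not the model used in the study.
r_mz, r_dz = 0.75, 0.48     # hypothetical intrapair correlations for FA asymmetry
h2 = 2.0 * (r_mz - r_dz)
print(f"Falconer heritability estimate: {h2:.0%}")   # 54%
```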

Relevance:

100.00%

Publisher:

Abstract:

Objective: The Nintendo Wii Fit integrates virtual gaming with body movement, and may be suitable as an adjunct to conventional physiotherapy following lower limb fractures. This study examined the feasibility and safety of using the Wii Fit as an adjunct to outpatient physiotherapy following lower limb fractures, and reports sample size considerations for an appropriately powered randomised trial. Methodology: Ambulatory patients receiving physiotherapy following a lower limb fracture participated in this study (n = 18). All participants received usual care (individual physiotherapy). The first nine participants also used the Wii Fit under the supervision of their treating clinician as an adjunct to usual care. Adverse events, fracture malunion or exacerbation of symptoms were recorded. Pain, balance and patient-reported function were assessed at baseline and discharge from physiotherapy. Results: No adverse events were attributed to either the usual care physiotherapy or the Wii Fit intervention for any patient. Overall, 15 (83%) participants completed both assessments and interventions as scheduled. For 80% power in a clinical trial, the number of complete datasets required in each group to detect a small, medium or large effect of the Wii Fit at a post-intervention assessment was calculated at 175, 63 and 25, respectively. Conclusions: The Nintendo Wii Fit was safe and feasible as an adjunct to ambulatory physiotherapy in this sample. When considering a likely small effect size and the 17% dropout rate observed in this study, 211 participants would be required in each clinical trial group. A larger effect size or a multiple repeated measures design would require fewer participants.
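
The abstract does not state the formula behind these figures, but the standard two-sample normal-approximation sample-size calculation reproduces them if one assumes Cohen's d values of 0.3, 0.5 and 0.8 for the small, medium and large effects and a two-sided alpha of 0.05 (both assumptions here, not statements from the study).

```python
# Two-sample sample-size calculation (normal approximation) for 80% power,
# with assumed effect sizes d = 0.3, 0.5, 0.8 and two-sided alpha = 0.05.
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z / effect_size) ** 2)

for label, d in [("small", 0.3), ("medium", 0.5), ("large", 0.8)]:
    print(label, n_per_group(d))                      # 175, 63, 25

# Inflating the small-effect figure for the observed 17% dropout
print(math.ceil(n_per_group(0.3) / (1 - 0.17)))       # 211
```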

Relevance:

100.00%

Publisher:

Abstract:

Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions have to be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is a part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
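
This is not the authors' procedure, but a small sketch of why quantile-based summaries sidestep the imputation problem: coding non-detects at the detection limit or at half the limit changes the mean of the hypothetical data below, yet leaves an upper quantile untouched as long as that quantile lies above the limit.

```python
# Upper quantiles of censored data do not depend on how below-detection-limit
# values are represented; the mean does. Data are hypothetical concentrations.
import numpy as np

rng = np.random.default_rng(3)
true = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # true concentrations
limit = 0.8                                           # detection limit
censored = np.where(true < limit, limit, true)        # code non-detects at the limit
half_sub = np.where(true < limit, limit / 2, true)    # common ad-hoc substitution

print("share below detection limit:", np.mean(true < limit))
print("80th percentile:", np.percentile(censored, 80), np.percentile(half_sub, 80))
print("mean:           ", censored.mean(), half_sub.mean())
```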

Relevance:

100.00%

Publisher:

Abstract:

Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or total gain for a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain, and the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
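
Purely as a toy illustration (with invented numbers, not Stallard's formulation) of the distinction drawn above: a design can yield the larger expected gain per phase II trial and yet the lower rate of gain, because the expected time per treatment screened grows with the proportion forwarded to phase III.

```python
# Rate of gain divides the expected gain per phase II trial by the expected
# time per treatment, which includes phase III time only for the forwarded
# fraction. All numbers below are invented.

def rate_of_gain(gain_per_trial, p_forward, phase2_years=1.0, phase3_years=4.0):
    expected_time = phase2_years + p_forward * phase3_years   # years per treatment screened
    return gain_per_trial / expected_time                      # gain per year

designs = {
    "higher gain per trial, forwards 60%": (6.5, 0.60),
    "lower gain per trial, forwards 15%":  (5.0, 0.15),
}
for name, (gain, p) in designs.items():
    print(f"{name}: gain/trial = {gain:.1f}, rate = {rate_of_gain(gain, p):.2f}/year")
```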

Relevance:

100.00%

Publisher:

Relevance:

100.00%

Publisher:

Abstract:

This study summarizes the results of a survey designed to provide economic information about the financial status of commercial reef fish boats with homeports in the Florida Keys. A survey questionnaire was administered in the summer and fall of 1994 by interviewers in face-to-face meetings with owners or operators of randomly selected boats. Fishermen were asked for background information about themselves and their boats, their capital investments in boats and equipment, and about their average catches, revenues, and costs per trip for their two most important kinds of fishing trips during 1993 for species in the reef fish fishery. Respondents were characterized with regard to their dependence on the reef fish fishery as a source of household income. Boats were described in terms of their physical and financial characteristics. Different kinds of fishing trips were identified by the species that generated the greatest revenue. Trips were grouped into the following categories: yellowtail snapper (Ocyurus chrysurus); mutton snapper (Lutjanus analis), black grouper (Mycteroperca bonaci), or red grouper (Epinephelus morio); gray snapper (Lutjanus griseus); deeper water groupers and tilefishes; greater amberjack (Seriola dumerili); spiny lobster (Panulirus argus); king mackerel (Scomberomorus cavalla); and dolphin (Coryphaena hippurus). Average catches, revenues, routine trip costs, and net operating revenues per boat per trip and per boat per year were estimated for each category of fishing trips. In addition to its descriptive value, data collected during this study will aid in future examinations of the economic effects of various regulations on commercial reef fish fishermen. (PDF file contains 48 pages.)