893 results for Variable sample size
Abstract:
The VSS X̄ chart is known to outperform the traditional X̄ control chart in detecting small to moderate mean shifts in the process. Many researchers have used this chart to detect a process mean shift under the assumption of known parameters. In practice, however, the process parameters are rarely known and are usually estimated from an in-control Phase I data set. In this paper, we evaluate the run-length performance of the VSS X̄ control chart when the process parameters are estimated and compare it with the case where the process parameters are assumed known. We conclude that the performances differ considerably when the shift and the number of samples used during Phase I are small. ©2010 IEEE.
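A minimal Monte Carlo sketch of the estimated-parameters effect, using a fixed-sample-size X̄ chart rather than the full VSS scheme, purely to illustrate how a short Phase I distorts the in-control run length (all settings below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

def run_length(mu0, sigma0, n=5, k=3.0, shift=0.0, max_rl=5_000):
    """Number of subgroups until the chart signals, given assumed mu0/sigma0."""
    limit = k * sigma0 / np.sqrt(n)
    for t in range(1, max_rl + 1):
        xbar = rng.normal(shift, 1.0, n).mean()  # true process is N(shift, 1)
        if abs(xbar - mu0) > limit:
            return t
    return max_rl

# Known parameters: the chart uses the true mu0 = 0, sigma0 = 1.
arl_known = np.mean([run_length(0.0, 1.0) for _ in range(200)])

# Estimated parameters: mu0 and sigma0 come from a short Phase I
# of m subgroups of size n, as in the situation the paper studies.
def phase1(m=25, n=5):
    data = rng.normal(0.0, 1.0, (m, n))
    return data.mean(), data.std(ddof=1)

arl_est = np.mean([run_length(*phase1()) for _ in range(200)])
print(f"in-control ARL, known parameters: ~{arl_known:.0f}; estimated: ~{arl_est:.0f}")
```

With few Phase I subgroups the estimated limits vary from chart to chart, so the run-length distribution spreads out relative to the known-parameters case.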
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Variable-leaf milfoil, Myriophyllum heterophyllum, has been present in Maine since 1970. We created an analysis area including seventeen infestation sites and all bodies of water within a forty-mile buffer. We also eliminated all water locations smaller than 7,101 km2, the size of the smallest infestation site, Shagg Pond. Within those specifications we randomly selected seventeen un-infested bodies of water and used them as our uncontaminated sample. We looked for relationships between presence and number of boat launches, and proximity to a populated area. Using the Mann-Whitney test, we compared the non-infested lakes to the infested lakes. We found no significant difference between the groups for any of the three variables in relation to infestation by variable-leaf milfoil.
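A group comparison of this kind can be sketched with SciPy's Mann-Whitney U test; the boat-launch counts below are hypothetical illustrations, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical boat-launch counts per lake (illustrative only)
infested = [3, 5, 2, 4, 6, 1, 3]
non_infested = [2, 1, 3, 2, 4, 1, 2]

stat, p = mannwhitneyu(infested, non_infested, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```

The test is rank-based, so it makes no normality assumption, which suits small samples of count data like these.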
Abstract:
A Fortran computer program is given for the computation of the adjusted average time to signal, or AATS, for adaptive X̄ charts with one, two, or all three design parameters variable: the sample size, n; the sampling interval, h; and the factor k used in determining the width of the action limits. The program calculates the threshold limit to switch the adaptive design parameters and also provides the in-control average time to signal, or ATS.
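For orientation, the fixed-parameter special case that the adaptive program generalizes admits a closed form: with k-sigma action limits the per-sample false-alarm probability is α = 2Φ(−k), so ARL0 = 1/α and ATS0 = h·ARL0. A minimal sketch of that standard X̄-chart result (not the Fortran program itself):

```python
from math import erf, sqrt

def ats0(k: float, h: float) -> float:
    """In-control average time to signal for a fixed-parameter X-bar chart."""
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
    alpha = 2 * phi(-k)   # false-alarm probability per sample
    arl0 = 1 / alpha      # in-control average run length
    return h * arl0       # average time to signal

print(round(ats0(k=3.0, h=1.0), 1))  # classic 3-sigma limits give ~370.4
```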
Abstract:
This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.
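The Markov-chain step of such a cost model can be sketched generically: the long-run cost rate is the stationary distribution of the chart-state chain weighted by per-state cost rates. The transition matrix and cost figures below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Illustrative 3-state chain: relaxed control, tightened control, out of control.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.70, 0.25, 0.05],
    [0.60, 0.00, 0.40],  # detection and repair return the process to control
])
cost = np.array([1.0, 2.5, 20.0])  # cost rate incurred while in each state

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

hourly_cost = pi @ cost
print("stationary distribution:", pi.round(3), "expected cost rate:", round(hourly_cost, 2))
```

The economic design then searches over n, h, and k for the parameter combination minimizing this expected cost rate.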
Abstract:
The purpose of this study is to investigate the effects of predictor variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing predictor data are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Subsequent to the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average mean, variance, and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, which is due in part to the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply-imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data.
With all data types, a fully-observed variable included alongside the variables subject to missingness in the multiple imputation process and subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
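The pooling step that follows any such multiple-imputation analysis combines the per-imputation estimates by Rubin's rules; a minimal sketch, with illustrative numbers rather than the study's results:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m within-imputation estimates and variances via Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()        # pooled point estimate
    w = variances.mean()           # average within-imputation variance
    b = estimates.var(ddof=1)      # between-imputation variance
    t = w + (1 + 1 / m) * b        # total variance of the pooled estimate
    return qbar, t

# Illustrative: one regression coefficient estimated on m = 5 imputed data sets
qbar, t = pool_rubin([0.52, 0.48, 0.55, 0.50, 0.45],
                     [0.010, 0.012, 0.011, 0.009, 0.013])
print(f"pooled estimate = {qbar:.3f}, total variance = {t:.4f}")
```

The Type I error rates reported above come from Wald-type tests built on exactly this pooled estimate and total variance.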
Abstract:
This thesis project is motivated by the potential problem of using observational data to draw inferences about a causal relationship in observational epidemiology research when controlled randomization is not applicable. The instrumental variable (IV) method is one statistical tool to overcome this problem. A Mendelian randomization study uses genetic variants as IVs in a genetic association study. In this thesis, the IV method, as well as standard logistic and linear regression models, is used to investigate the causal association between risk of pancreatic cancer and circulating levels of soluble receptor for advanced glycation end-products (sRAGE). Higher levels of serum sRAGE were found to be associated with a lower risk of pancreatic cancer in a previous observational study (255 cases and 485 controls). However, such a novel association may be biased by unknown confounding factors. In a case-control study, we aimed to use the IV approach to confirm or refute this observation in a subset of study subjects for whom genotyping data were available (178 cases and 177 controls). A two-stage IV method using generalized method of moments-structural mean models (GMM-SMM) was conducted and the relative risk (RR) was calculated. In the first-stage analysis, we found that the single nucleotide polymorphism (SNP) rs2070600 of the receptor for advanced glycation end-products (AGER) gene meets all three general assumptions for a genetic IV in examining the causal association between sRAGE and risk of pancreatic cancer. The variant allele of SNP rs2070600 of the AGER gene was associated with lower levels of sRAGE, and it was associated neither with risk of pancreatic cancer nor with the confounding factors. It was a potentially strong IV (F statistic = 29.2). However, in the second-stage analysis, the GMM-SMM model failed to converge due to non-concaveness, probably because of the small sample size.
Therefore, the IV analysis could not support the causality of the association between serum sRAGE levels and risk of pancreatic cancer. Nevertheless, these analyses suggest that rs2070600 is a potentially good genetic IV for testing the causality between risk of pancreatic cancer and sRAGE levels. A larger sample size is required to conduct a credible IV analysis.
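The logic of a genetic IV can be sketched with the simple Wald (ratio) estimator, a simplified stand-in for the GMM-SMM approach used in the thesis, on simulated data (all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
true_beta = -0.5                        # causal effect of exposure on outcome

# Simulated structural model: G (instrument) -> X (exposure) -> Y (outcome),
# with an unmeasured confounder U affecting both X and Y.
g = rng.binomial(2, 0.3, n)             # SNP genotype: 0, 1, or 2 variant alleles
u = rng.normal(0, 1, n)                 # unmeasured confounder
x = 0.4 * g + u + rng.normal(0, 1, n)   # exposure (an sRAGE-like level)
y = true_beta * x + u + rng.normal(0, 1, n)

# Naive regression of Y on X is confounded by U; the Wald estimator,
# cov(G, Y) / cov(G, X), uses only the instrument-driven variation in X.
beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
beta_iv = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]
print(f"naive OLS slope: {beta_ols:.2f}; IV (Wald) estimate: {beta_iv:.2f}")
```

Because the confounder partly cancels the true negative effect here, the naive slope is biased toward zero while the IV estimate recovers the causal coefficient; this is the benefit the thesis sought but could not realize at its sample size.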
Abstract:
The quality of environmental studies depends on the use of an adequate sampling protocol and analytical method for obtaining reliable results and minimizing analytical uncertainties. In order to demonstrate the applicability of INAA for determining the chemical element composition of invertebrates, this work evaluated sample representativeness in terms of subsampling and sample size. Br, Co, Fe, K, Na, Sc and Zn could be determined in very small samples, despite increased analytical uncertainties. Special attention should be directed to invertebrate species with small structures because of the high chemical variation observed among the different sample sizes tested.
Abstract:
Conducting dielectric samples are often used in high-resolution experiments at high field. It is shown that significant amplitude and phase distortions of the RF magnetic field may result from perturbations caused by such samples. Theoretical analyses demonstrate the spatial variation of the RF field amplitude and phase across the sample, and comparisons of the effect are made for a variety of sample properties and operating field strengths. Although the effect is highly nonlinear, it tends to increase with increasing field strength, permittivity, conductivity, and sample size. There are cases, however, in which increasing the conductivity of the sample improves the homogeneity of the amplitude of the RF field across the sample at the expense of distorted RF phase. It is important that the perturbation effects be calculated for the experimental conditions used, as they have the potential to reduce the signal-to-noise ratio of NMR experiments and may increase the generation of spurious coherences. The effect of RF-coil geometry on the coherences is also modeled, with the use of homogeneous resonators such as the birdcage design being preferred. Recommendations are made concerning methods of reducing sample-induced perturbations. Experimental high-field imaging and high-resolution studies demonstrate the effect. (C) 1997 Academic Press.
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step-size control cannot rely on the usual integration-formula error estimates. A step-size control scheme for use with the table-driven velocity and position calculation uses the difference between the result of one big step and that of two small steps. This variable-time-step method automatically chooses a suitable step size for each particle at each step according to the conditions. Simulation using a fixed time step is compared with that using a variable time step. The difference in computation time for the same accuracy using a variable step size (compared to a fixed step) depends on the particular problem. For a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
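The scheme described, comparing one big step against two small steps, can be sketched for a simple explicit Euler integrator (a minimal illustration, not the paper's table-driven implementation):

```python
from math import exp

def adaptive_euler(f, y, t, t_end, h=0.1, tol=1e-4):
    """Explicit Euler with step-doubling error control.

    The local error is estimated as the difference between the result of one
    step of size h and that of two steps of size h/2, as described above.
    """
    while t < t_end:
        h = min(h, t_end - t)
        y_big = y + h * f(t, y)                            # one big step
        y_half = y + (h / 2) * f(t, y)                     # first small step
        y_small = y_half + (h / 2) * f(t + h / 2, y_half)  # second small step
        err = abs(y_small - y_big)
        if err <= tol:
            t, y = t + h, y_small   # accept the step
            if err < tol / 4:
                h *= 2              # grow the step when comfortably accurate
        else:
            h /= 2                  # reject and retry with a smaller step
    return y

# Test problem: dy/dt = -y, y(0) = 1, whose exact solution is y(1) = exp(-1)
y1 = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0)
print(abs(y1 - exp(-1)))
```

As the abstract notes, the payoff is that the tolerance is met on the first run: the step size shrinks where the dynamics demand it and grows where they do not.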
Abstract:
Master's dissertation, Economics and Business Sciences, 10 December 2015, Universidade dos Açores.
Abstract:
This manuscript analyses the data generated by a zero-length column (ZLC) diffusion experimental set-up for 1,3-di-isopropylbenzene in a 100% alumina matrix with variable particle size. The time evolution of the phenomena resembles that of fractional-order systems, namely a fast initial transient followed by long, slow tails. The experimental measurements are best fitted by the Harris model, revealing power-law behavior.
Abstract:
ABSTRACT OBJECTIVE To identify the factors associated with severity of malocclusion in a population of adolescents. METHODS In this cross-sectional population-based study, the sample size (n = 761) was calculated considering a prevalence of malocclusion of 50.0%, with a 95% confidence level and a 5.0% precision level. The study adopted a correction for the design effect (deff = 2) and a 20.0% increase to offset losses and refusals. Multistage probability cluster sampling was adopted. Trained and calibrated professionals performed the intraoral examinations and interviews in households. The dependent variable (severity of malocclusion) was assessed using the Dental Aesthetic Index (DAI). The independent variables were grouped into five blocks: demographic characteristics, socioeconomic condition, use of dental services, health-related behavior, and subjective oral health conditions. The ordinal logistic regression model was used to identify the factors associated with severity of malocclusion. RESULTS We interviewed and examined 736 adolescents (91.5% response rate), 69.9% of whom showed no abnormalities or slight malocclusion. Defined malocclusion was observed in 17.8% of the adolescents, and was severe or very severe in 12.6%, indicating a pressing or essential need for orthodontic treatment. The probabilities of greater severity of malocclusion were higher among adolescents who self-reported as black, indigenous, pardo or yellow, with lower per capita income, harmful oral habits, a negative perception of their appearance, and a perception that their social relationships were affected by oral health. CONCLUSIONS Severe or very severe malocclusion was more prevalent among socially disadvantaged adolescents with reported harmful habits and a perception of compromised esthetics and social relationships.
Given that malocclusion can interfere with adolescents' self-esteem, it is essential to improve public policy to include orthodontic treatment in the health care provided to this segment of the population, particularly those of lower socioeconomic status.
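The sample-size calculation described in the methods follows the standard formula for estimating a proportion, n = deff · z²p(1−p)/d², inflated for anticipated losses. A sketch with the reported inputs; note that the result differs from the reported n = 761, which presumably reflects an additional adjustment, such as a finite-population correction, not stated in the abstract:

```python
from math import ceil

def sample_size(p=0.5, z=1.96, d=0.05, deff=2.0, loss=0.20):
    """Sample size for estimating a proportion, with design effect and loss inflation."""
    n0 = (z ** 2) * p * (1 - p) / d ** 2  # simple random sampling size
    return ceil(n0 * deff * (1 + loss))

# p = 0.5 maximizes p(1-p), giving the most conservative size
print(sample_size())  # the unadjusted formula yields 922
```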