975 results for variable sample size


Relevance: 100.00%

Abstract:

A standard X̄ chart for controlling the process mean takes samples of size n0 at specified, equally spaced, fixed times. This article proposes a modification of the standard X̄ chart that allows one to take additional samples, larger than n0, between these fixed times. The additional samples are taken when there is evidence that the process mean has moved away from target. Following the notation proposed by Reynolds (1996a) and Costa (1997), we call the proposed chart the VSSIFT X̄ chart, where VSSIFT stands for variable sample size and sampling intervals with fixed times. The X̄ chart with the VSSIFT feature is easier to administer than a standard VSSI X̄ chart, which is not constrained to sample at the specified fixed times. The performances of the two charts in detecting process mean shifts are comparable. Copyright © 1998 by Marcel Dekker, Inc.
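The sampling rule described above can be sketched as a small decision function. This is a minimal illustration, not the paper's design: the function name, the warning threshold, and the sample sizes are hypothetical, and the real VSSIFT scheme also fixes when the extra sample is scheduled between the fixed times.

```python
def next_sample(xbar_z, warn=1.0, n0=5, n_big=20):
    """Decide the next sampling action for a VSSIFT-style scheme (illustrative).

    xbar_z: absolute standardized sample mean |(xbar - mu0) / (sigma / sqrt(n))|.
    Returns (size, when): the size of the next sample and whether it is taken
    at the next fixed time ("fixed") or inserted between fixed times ("extra").
    All names and thresholds here are assumptions for illustration.
    """
    if xbar_z > warn:           # evidence the mean may have moved from target
        return n_big, "extra"   # larger sample, taken before the next fixed time
    return n0, "fixed"          # routine sample of size n0 at the fixed time
```

The key property the abstract emphasizes is that the routine samples always stay on the fixed-time grid; only the extra, larger samples float between them.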

Relevance: 100.00%

Abstract:

In this article, we consider the synthetic control chart with two-stage sampling (SyTS chart) for controlling bivariate processes. During the first stage, one item of the sample is inspected and two correlated quality characteristics (x, y) are measured. If the Hotelling statistic T₁² for this individual observation of (x, y) is lower than a specified value UCL₁, the sampling is interrupted. Otherwise, sampling goes on to the second stage, where the remaining items are inspected and the Hotelling statistic T₂² for the sample means of (x, y) is computed. When T₂² is larger than a specified value UCL₂, the sample is classified as nonconforming. Following the synthetic control chart procedure, the signal is based on the number of conforming samples between two successive nonconforming samples. The proposed chart detects process disturbances faster than bivariate charts with variable sample size and, from a practical viewpoint, is more convenient to administer.
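The two-stage decision rule lends itself to a short sketch. This is an illustrative implementation of the generic Hotelling T² statistic and the stage-1/stage-2 logic described above, assuming a known mean vector and covariance matrix; function names and arguments are my own, not the paper's.

```python
def t2(vec, mu, sigma, n=1):
    """Hotelling T^2 for a bivariate point (n=1) or a vector of sample
    means of n items (illustrative helper, 2x2 case only)."""
    dx, dy = vec[0] - mu[0], vec[1] - mu[1]
    a, b, c, d = sigma[0][0], sigma[0][1], sigma[1][0], sigma[1][1]
    det = a * d - b * c
    # quadratic form (dx, dy) * Sigma^{-1} * (dx, dy)', scaled by n for a mean
    return n * (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det

def syts_classify(first_item, rest_means, mu, sigma, ucl1, ucl2, n):
    """Two-stage decision sketch: stop after the first item if T1^2 <= UCL1;
    otherwise inspect the remaining items and flag the sample as
    nonconforming when T2^2 of the sample means exceeds UCL2."""
    if t2(first_item, mu, sigma) <= ucl1:
        return "conforming"      # sampling interrupted at stage 1
    if t2(rest_means, mu, sigma, n) > ucl2:
        return "nonconforming"
    return "conforming"
```

The synthetic-chart signal (counting conforming samples between successive nonconforming ones) would then be layered on top of this classification.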

Relevance: 100.00%

Abstract:

In this paper we propose the Double Sampling X̄ control chart for monitoring processes in which the observations follow a first-order autoregressive model. We consider sampling intervals that are sufficiently long to satisfy the rational subgroup concept. The Double Sampling X̄ chart is substantially more efficient than the Shewhart chart and the Variable Sample Size chart. To study the properties of these charts, we derive closed-form expressions for the average run length (ARL) that take the within-subgroup correlation into account. Numerical results show that this correlation has a significant impact on the charts' properties.

Relevance: 100.00%

Abstract:

The VSS X̄ chart is known to perform better than the traditional X̄ control chart in detecting small to moderate shifts in the process mean. Many researchers have used this chart to detect a process mean shift under the assumption of known parameters. In practice, however, the process parameters are rarely known and are usually estimated from an in-control Phase I data set. In this paper, we evaluate the run-length performance of the VSS X̄ control chart when the process parameters are estimated and compare it with the case where the parameters are assumed known. We conclude that these performances differ considerably when the shift and the number of samples used during Phase I are small. ©2010 IEEE.
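The estimated-parameters effect is easy to see in a Monte Carlo sketch: estimate the limits from a finite Phase I data set, then measure the run length in Phase II. This is a simplified fixed-sample-size illustration of the phenomenon, not the paper's VSS chart; all parameter names and defaults are assumptions.

```python
import random
import statistics

def run_length(phase1_m=25, n=5, shift=0.0, k=3.0, seed=1):
    """Run length of an X-bar chart whose limits are estimated from
    m Phase I samples of size n (standard normal process, illustrative).
    A small phase1_m makes mu_hat and sd_hat noisy, which distorts
    the run-length distribution relative to the known-parameters case."""
    rng = random.Random(seed)
    # Phase I: estimate the in-control mean and standard deviation
    data = [rng.gauss(0, 1) for _ in range(phase1_m * n)]
    mu_hat = statistics.fmean(data)
    sd_hat = statistics.stdev(data)
    lim = k * sd_hat / n ** 0.5
    # Phase II: sample until the estimated limits signal
    t = 0
    while True:
        t += 1
        xbar = statistics.fmean(rng.gauss(shift, 1) for _ in range(n))
        if abs(xbar - mu_hat) > lim:
            return t
```

Averaging `run_length` over many seeds, for small `phase1_m`, would show the gap between estimated- and known-parameter performance that the abstract reports.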

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Variable-leaf milfoil, Myriophyllum heterophyllum, has been present in Maine since 1970. We created an analysis area including seventeen infestation sites and all bodies of water within a forty-mile buffer. We also eliminated all water bodies smaller than 7,101 km2, the size of the smallest infestation site, Shagg Pond. Within those specifications we randomly selected seventeen un-infested bodies of water and used them as our uncontaminated sample. We looked for relationships between infestation presence and the number of boat launches, and proximity to a populated area. Using the Mann-Whitney test, we compared the sample of non-infested lakes to the infested lakes. We found no significant difference between the groups on any of the three variables for the infestation of variable-leaf milfoil.
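The Mann-Whitney test used above is based on a simple rank statistic. As an illustrative helper (not the study's code; in practice `scipy.stats.mannwhitneyu` would be used), the U statistic counts, over all cross-pairs, how often an observation from one group exceeds one from the other, with ties counted as half:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples,
    counting ties as half wins (illustrative O(len(a)*len(b)) version)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

A U near 0 or near `len(a) * len(b)` indicates the groups are well separated; values near the middle, as the study found, suggest no significant difference.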

Relevance: 100.00%

Abstract:

A Fortran computer program is given for the computation of the adjusted average time to signal, or AATS, for adaptive X̄ charts with one, two, or all three design parameters variable: the sample size, n, the sampling interval, h, and the factor k used in determining the width of the action limits. The program calculates the threshold limit to switch the adaptive design parameters and also provides the in-control average time to signal, or ATS.
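For the fixed-design baseline, the in-control ATS has a closed form that any such program must reproduce: with k-sigma action limits, a false alarm occurs with probability alpha = 2(1 - Phi(k)) per sample, so ATS = h / alpha. The sketch below shows that baseline only; the adaptive AATS of the paper requires a Markov-chain model that is not reproduced here.

```python
import math

def in_control_ats(k=3.0, h=1.0):
    """In-control average time to signal for a fixed-parameter X-bar chart:
    ATS = h / alpha, with alpha = 2 * (1 - Phi(k)) the per-sample
    false-alarm probability under normality (baseline sketch only)."""
    phi = 0.5 * (1.0 + math.erf(k / math.sqrt(2.0)))  # standard normal CDF
    alpha = 2.0 * (1.0 - phi)
    return h / alpha
```

For the classical k = 3 and h = 1 this gives the familiar in-control ARL of about 370 samples.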

Relevance: 100.00%

Abstract:

This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.

Relevance: 100.00%

Abstract:

The purpose of this study is to investigate the effects of predictor-variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing predictor data are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Following the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average mean, variance, and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, which in part is due to the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data.
With all data types, including a fully observed variable alongside the variables subject to missingness in the multiple imputation process and subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.

Relevance: 100.00%

Abstract:

This thesis project is motivated by the potential problem of using observational data to draw inferences about a causal relationship in observational epidemiology research when controlled randomization is not applicable. The instrumental variable (IV) method is one of the statistical tools to overcome this problem. A Mendelian randomization study uses genetic variants as IVs in genetic association studies. In this thesis, the IV method, as well as standard logistic and linear regression models, is used to investigate the causal association between risk of pancreatic cancer and the circulating levels of soluble receptor for advanced glycation end-products (sRAGE). Higher levels of serum sRAGE were found to be associated with a lower risk of pancreatic cancer in a previous observational study (255 cases and 485 controls). However, such a novel association may be biased by unknown confounding factors. In a case-control study, we aimed to use the IV approach to confirm or refute this observation in a subset of study subjects for whom genotyping data were available (178 cases and 177 controls). A two-stage IV method using generalized method of moments-structural mean models (GMM-SMM) was conducted and the relative risk (RR) was calculated. In the first-stage analysis, we found that the single nucleotide polymorphism (SNP) rs2070600 of the receptor for advanced glycation end-products (AGER) gene meets all three general assumptions for a genetic IV in examining the causal association between sRAGE and risk of pancreatic cancer. The variant allele of SNP rs2070600 of the AGER gene was associated with lower levels of sRAGE, and it was neither associated with risk of pancreatic cancer nor with the confounding factors. It was a potentially strong IV (F statistic = 29.2). However, in the second-stage analysis, the GMM-SMM model failed to converge due to non-concavity, probably because of the small sample size.
Therefore, the IV analysis could not support the causality of the association between serum sRAGE levels and risk of pancreatic cancer. Nevertheless, these analyses suggest that rs2070600 is a potentially good genetic IV for testing the causality between the risk of pancreatic cancer and sRAGE levels. A larger sample size is required to conduct a credible IV analysis.

Relevance: 100.00%

Abstract:

To generate realistic predictions, species distribution models require the accurate coregistration of occurrence data with environmental variables. There is a common assumption that species occurrence data are accurately georeferenced; however, this is often not the case. This study investigates whether locational uncertainty and sample size affect the performance and interpretation of fine-scale species distribution models. This study evaluated the effects of locational uncertainty across multiple sample sizes by subsampling and spatially degrading occurrence data. Distribution models were constructed for kelp (Ecklonia radiata) across a large study site (680 km²) off the coast of southeastern Australia. Generalized additive models were used to predict distributions based on fine-resolution (2.5 m cell size) seafloor variables, generated from multibeam echosounder data sets, and occurrence data from underwater towed video. The effects of different levels of locational uncertainty in combination with sample size were evaluated by comparing model performance and predicted distributions. While locational uncertainty was observed to influence some measures of model performance, in general this was small and varied based on the accuracy metric used. However, simulated locational uncertainty caused changes in variable importance and predicted distributions at fine scales, potentially influencing model interpretation. This was most evident with small sample sizes. Results suggested that seemingly high-performing, fine-scale models can be generated from data containing locational uncertainty, although interpreting their predictions can be misleading if the predictions are interpreted at scales similar to the spatial errors. This study demonstrated the need to consider predictions across geographic space rather than performance alone.
The findings are important for conservation managers as they highlight the inherent variation in predictions between equally performing distribution models, and the subsequent restrictions on ecological interpretations.

Relevance: 90.00%

Abstract:

Enhancing children's self-concepts is widely accepted as a critical educational outcome of schooling and is postulated as a mediating variable that facilitates the attainment of other desired outcomes such as improved academic achievement. Despite considerable advances in self-concept research, there has been limited progress in devising teacher-administered enhancement interventions. This is unfortunate as teachers are crucial change agents during important developmental periods when self-concept is formed. The primary aim of the present investigation is to build on the promising features of previous self-concept enhancement studies by: (a) combining two exciting research directions developed by Burnett and Craven to develop a potentially powerful cognitive-based intervention; (b) incorporating recent developments in theory and measurement to ensure that the multidimensionality of self-concept is accounted for in the research design; (c) fully investigating the effects of a potentially strong cognitive intervention on reading, mathematics, school and learning self-concepts by using a large sample size and a sophisticated research design; (d) evaluating the effects of the intervention on affective and cognitive subcomponents of reading, mathematics, school and learning self-concepts over time to test for differential effects of the intervention; (e) modifying and extending current procedures to maximise the successful implementation of a teacher-mediated intervention in a naturalistic setting by incorporating sophisticated teacher training as suggested by Hattie (1992) and including an assessment of the efficacy of implementation; and (f) examining the durability of effects associated with the intervention.