981 results for variable sampling interval
Abstract:
Recent theoretical studies have shown that the X̄ chart with variable sampling intervals (VSI) and the X̄ chart with variable sample size (VSS) are quicker than the traditional X̄ chart in detecting shifts in the process. This article considers the X̄ chart with variable sample size and sampling intervals (VSSI). It is assumed that the amount of time the process remains in control has an exponential distribution. The properties of the VSSI X̄ chart are obtained using Markov chains. The VSSI X̄ chart is even quicker than the VSI or VSS X̄ charts in detecting moderate shifts in the process.
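The article's Markov-chain analysis covers the full VSSI scheme; as a rough, self-contained illustration of the ingredients, the average time to signal (ATS) of a two-interval VSI X̄ chart with samples of size one can be computed from normal probabilities alone. All limits and intervals below (k, w, h_long, h_short) are illustrative choices, not values from the article:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def vsi_ats(delta, k=3.0, w=0.67, h_long=1.9, h_short=0.1):
    """ATS of a two-interval VSI chart for a mean shift of delta sigmas (n = 1)."""
    p_in = phi(k - delta) - phi(-k - delta)       # point inside the control limits
    p_center = phi(w - delta) - phi(-w - delta)   # central region -> long interval
    p_warning = p_in - p_center                   # warning region -> short interval
    arl = 1.0 / (1.0 - p_in)                      # average run length, in samples
    e_h = (h_long * p_center + h_short * p_warning) / p_in  # mean interval, given no signal
    return arl * e_h

def fixed_ats(delta, k=3.0, h=1.0):
    """ATS of the traditional chart that samples every h time units."""
    p_in = phi(k - delta) - phi(-k - delta)
    return h / (1.0 - p_in)

# The VSI scheme signals a one-sigma shift sooner than the fixed-interval chart
print(vsi_ats(1.0), fixed_ats(1.0))
```

The interval lengths are chosen so that the in-control sampling rate roughly matches the fixed chart, which is the usual basis for comparing ATS values between schemes.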
Abstract:
Estimates of larval supply can provide information on year-class strength that is useful for fisheries management. However, larval supply is difficult to monitor because long-term, high-frequency sampling is needed. The purpose of this study was to subsample an 11-year record of daily larval supply of blue crab (Callinectes sapidus) to determine the effect of sampling interval on variability in estimates of supply. The coefficient of variation in estimates of supply varied by 0.39 among years at a 2-day sampling interval and 0.84 at a 7-day sampling interval. For 8 of the 11 years, there was a significant correlation between mean daily larval supply and lagged fishery catch per trip (coefficient of correlation [r]=0.88). When these 8 years were subsampled, a 2-day sampling interval yielded a significant correlation with fishery data only 64.5% of the time and a 3-day sampling interval never yielded a significant correlation. Therefore, high-frequency sampling (daily or every other day) may be needed to characterize interannual variability in larval supply.
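The subsampling idea in the study can be sketched in a few lines: thin a daily series to every d-th observation at each possible starting offset, and look at how the spread of the resulting estimates of mean supply grows with the sampling interval. The lognormal series below is synthetic, purely for illustration:

```python
import random
import statistics

random.seed(42)
# synthetic stand-in for a one-year record of daily larval supply
daily_supply = [random.lognormvariate(0.0, 1.5) for _ in range(365)]

def cv_of_subsampled_means(series, interval):
    """CV of the mean-supply estimates over all starting offsets for one sampling interval."""
    means = [statistics.fmean(series[offset::interval]) for offset in range(interval)]
    return statistics.stdev(means) / statistics.fmean(means)

for d in (2, 3, 7):
    print(d, round(cv_of_subsampled_means(daily_supply, d), 3))
```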
Abstract:
We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta" and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
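A rough simulation of the mechanism (not the paper's model or estimator) can be built by giving the market a random-walk permanent component plus an AR(1) transitory component, loading a stock on each with different betas, and estimating beta from non-overlapping returns at two sampling intervals. All names and parameter values here are illustrative:

```python
import random
import statistics

random.seed(7)
N = 60000
BETA_PERM, BETA_TRANS = 1.5, 0.5   # "risky" asset: permanent beta > transitory beta
PHI = 0.95                          # persistence of the transitory (AR(1)) component

perm = [0.0]                        # market permanent component: random walk in logs
trans = [0.0]                       # market transitory component: stationary AR(1)
for _ in range(N):
    perm.append(perm[-1] + random.gauss(0.0, 0.01))
    trans.append(PHI * trans[-1] + random.gauss(0.0, 0.03))

market = [p + u for p, u in zip(perm, trans)]
stock = [BETA_PERM * p + BETA_TRANS * u + random.gauss(0.0, 0.005)
         for p, u in zip(perm, trans)]   # idiosyncratic noise on top

def estimated_beta(h):
    """OLS beta from non-overlapping h-period log returns."""
    rs = [stock[i + h] - stock[i] for i in range(0, N - h, h)]
    rm = [market[i + h] - market[i] for i in range(0, N - h, h)]
    ms, mm = statistics.fmean(rs), statistics.fmean(rm)
    cov = sum((a - ms) * (b - mm) for a, b in zip(rs, rm))
    return cov / sum((b - mm) ** 2 for b in rm)

# Beta estimated at a long sampling interval exceeds the short-interval beta for this asset
print(estimated_beta(1), estimated_beta(100))
```

At short intervals the bounded transitory variance dominates, pulling the estimate toward the transitory beta; at long intervals the random-walk variance grows linearly and the estimate drifts toward the permanent beta.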
Abstract:
In this article, we consider the T² chart with double sampling to control bivariate processes (BDS chart). During the first stage of the sampling, n₁ items of the sample are inspected and two quality characteristics (x, y) are measured. If the Hotelling statistic T₁² for the mean vector of (x, y) is less than w, the sampling is interrupted. If the Hotelling statistic T₁² is greater than CL₁, where CL₁ > w, the control chart signals an out-of-control condition. If w < T₁² ≤ CL₁, the sampling goes on to the second stage, where the remaining n₂ items of the sample are inspected and T₂² for the mean vector of the whole sample is computed. During the second stage of the sampling, the control chart signals an out-of-control condition when the statistic T₂² is larger than CL₂. A comparative study shows that the BDS chart detects process disturbances faster than the standard bivariate T² chart and the adaptive bivariate T² charts with variable sample size and/or variable sampling interval.
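The two-stage decision rule reads almost directly as code. A minimal sketch for a known in-control mean vector and inverse covariance matrix follows; the limits w, CL₁, and CL₂ are illustrative placeholders, not the article's designed values:

```python
import statistics

def hotelling_t2(sample, mu, cov_inv):
    """T² = n (x̄ - μ)' Σ⁻¹ (x̄ - μ) for a bivariate sample of (x, y) pairs."""
    n = len(sample)
    dx = statistics.fmean(p[0] for p in sample) - mu[0]
    dy = statistics.fmean(p[1] for p in sample) - mu[1]
    return n * (dx * (cov_inv[0][0] * dx + cov_inv[0][1] * dy)
                + dy * (cov_inv[1][0] * dx + cov_inv[1][1] * dy))

def bds_decision(stage1, stage2, mu, cov_inv, w=2.0, cl1=12.0, cl2=10.0):
    """Double-sampling T² rule: stage 2 runs only if T₁² lands between w and CL₁."""
    t1 = hotelling_t2(stage1, mu, cov_inv)
    if t1 <= w:
        return "in control (stage 1)"
    if t1 > cl1:
        return "signal (stage 1)"
    t2 = hotelling_t2(stage1 + stage2, mu, cov_inv)   # T² of the whole sample
    return "signal (stage 2)" if t2 > cl2 else "in control (stage 2)"

mu = (0.0, 0.0)
identity = [[1.0, 0.0], [0.0, 1.0]]
print(bds_decision([(0.1, -0.1), (-0.1, 0.1)], [(0.0, 0.0)], mu, identity))
```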
Abstract:
The usual practice in using a control chart to monitor a process is to take samples of size n from the process every h hours. This article considers the properties of the X̄ chart when the size of each sample depends on what is observed in the preceding sample. The idea is that the sample should be large if the sample point of the preceding sample is close to but not actually outside the control limits, and small if the sample point is close to the target. The properties of the variable sample size (VSS) X̄ chart are obtained using Markov chains. The VSS X̄ chart is substantially quicker than the traditional X̄ chart in detecting moderate shifts in the process.
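The VSS switching rule is simple to state in code: use the small sample when the current standardized sample mean is near target, and the large sample when it falls in the warning region. The limits and sizes below are illustrative, not the article's optimized values:

```python
def next_sample_size(z, w=1.0, k=3.0, n_small=3, n_large=12):
    """Choose the size of the next sample from the current standardized sample mean z."""
    if abs(z) >= k:
        return None            # out-of-control signal: stop and search for the cause
    return n_small if abs(z) < w else n_large

print(next_sample_size(0.4))   # near target -> small sample
print(next_sample_size(1.8))   # warning region -> large sample
```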
Abstract:
A standard X̄ chart for controlling the process mean takes samples of size n0 at specified, equally-spaced, fixed-time points. This article proposes a modification of the standard X̄ chart that allows one to take additional samples, bigger than n0, between these fixed times. The additional samples are taken from the process when there is evidence that the process mean has moved from target. Following the notation proposed by Reynolds (1996a) and Costa (1997), we call the proposed chart the VSSIFT X̄ chart, where VSSIFT means variable sample size and sampling intervals with fixed times. The X̄ chart with the VSSIFT feature is easier to administer than a standard VSSI X̄ chart, which is not constrained to sample at the specified fixed times. The performances of the charts in detecting process mean shifts are comparable. Copyright © 1998 by Marcel Dekker, Inc.
Abstract:
Production processes need to be evaluated continuously so that they operate as effectively and efficiently as possible. The set of tools used for this purpose is called statistical process control (SPC). With SPC tools, monitoring can be carried out periodically. The most important SPC tool is the control chart. This thesis focuses on monitoring a response variable through the parameters, or coefficients, of a simple linear regression model. Adaptive χ2 control charts are proposed for monitoring the coefficients of the simple linear regression model. More specifically, seven adaptive χ2 control charts for monitoring linear profiles are developed: a chart with variable sample size; with variable sampling interval; with variable control and warning limits; with variable sample size and sampling interval; with variable sample size and limits; with variable sampling interval and limits; and, finally, with all design parameters variable. Performance measures for the proposed charts were obtained through Markov chain properties, for both the zero-state and the steady-state situations, showing a reduction in the mean time to signal for small to moderate shifts in the coefficients of the regression model of the production process. The proposed charts were applied to an example from a semiconductor manufacturing process. In addition, a sensitivity analysis was conducted as a function of shifts of different magnitudes in the process parameters, namely the intercept and the slope, comparing the performance of the developed charts with each other and with the fixed-parameter χ2 chart. The charts proposed in this thesis are suitable for many types of applications.
This work also considers quality characteristics that are represented by a nonlinear regression model. For the nonlinear regression model considered, the proposal is to use a method that divides the nonlinear profile into linear parts; more specifically, an algorithm proposed in the literature for this purpose was used. In this way, it was possible to validate the proposed technique, showing that it is robust in the sense that it accommodates different types of nonlinear profiles. A nonlinear profile is thus approximated by piecewise linear profiles, which allows each linear profile to be monitored by control charts such as those developed in this thesis. Moreover, the methodology for decomposing a nonlinear profile into linear parts is presented in a detailed and complete way, opening the way for its wide use.
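For a simple linear profile y = β0 + β1 x + ε with known σ and fixed design points, the monitoring statistic is a χ2 distance between the least-squares coefficient estimates and the in-control coefficients. A minimal fixed-parameter version can be sketched as follows (the thesis's adaptive versions additionally vary the sample size, interval, and limits; all values here are illustrative):

```python
from math import fsum

def chi2_stat(x, y, beta0, beta1, sigma):
    """χ² distance between the LS fit of a sampled profile and the in-control line."""
    n = len(x)
    xbar = fsum(x) / n
    sxx = fsum((xi - xbar) ** 2 for xi in x)
    b1 = fsum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx   # LS slope
    b0 = fsum(y) / n - b1 * xbar                                # LS intercept
    d0, d1 = b0 - beta0, b1 - beta1
    # (b - beta)' X'X (b - beta) / sigma², chi-square with 2 df in control
    return (n * d0 * d0 + 2 * fsum(x) * d0 * d1
            + fsum(xi * xi for xi in x) * d1 * d1) / sigma ** 2

x = [1, 2, 3, 4]
y = [2 + 3 * xi for xi in x]             # profile exactly on the in-control line
print(chi2_stat(x, y, beta0=2.0, beta1=3.0, sigma=0.1))  # 0.0: no deviation
```

A signal would be raised when the statistic exceeds a χ2 control limit; the adaptive charts in the thesis move that limit, the sample size, and the sampling interval according to where the previous statistic fell.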
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Existing studies of on-line process control are concerned with economic aspects, and the parameters of the processes are optimized with respect to the average cost per item produced. However, an equally important dimension is the adoption of an efficient maintenance policy. In most cases, only the frequency of the corrective adjustment is evaluated, because it is assumed that the equipment becomes "as good as new" after corrective maintenance. For this condition to be met, a sophisticated and detailed corrective adjustment system needs to be employed. The aim of this paper is to propose an integrated economic model incorporating the following two dimensions: on-line process control and a corrective maintenance program. Both performances are objects of an average cost per item minimization. Adjustments are based on the location of the measurement of a quality characteristic of interest within three decision zones. Numerical examples illustrate the proposal. © 2012 Elsevier B.V. All rights reserved.
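The three-zone rule can be sketched as a single decision function; the zone limits and action labels below are illustrative placeholders, not the paper's optimized values:

```python
def adjustment_decision(x, target=0.0, warning=1.0, action=2.0):
    """Classify a measured quality characteristic into one of three decision zones."""
    dev = abs(x - target)
    if dev < warning:
        return "no action"         # central zone: leave the process alone
    if dev < action:
        return "adjust"            # intermediate zone: corrective adjustment
    return "stop and repair"       # outer zone: corrective maintenance

print(adjustment_decision(0.3), adjustment_decision(1.4), adjustment_decision(2.7))
```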
Abstract:
The study of variable stars is an important topic in modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data demands automated methods as well as human experts. This thesis is devoted to the analysis of astronomical time series data on variable stars and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic), or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables, and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry, and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude, and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and classify it is to have an expert visually inspect the phased light curve. For the last several years, automated algorithms running on computers have been used to classify groups of variable stars. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling, and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries), and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition, and evolution. Classification requires the determination of basic parameters such as period, amplitude, and phase, along with other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify variation, understand the nature of time-varying phenomena, gain physical understanding of the system, and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions, and the possibility of large gaps. For ground-based observations this is due to daily varying daylight and weather conditions, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST, and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can arise from several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval, and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states: "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state: "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude, and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
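As an illustration of the non-parametric approach mentioned above, a bare-bones Phase Dispersion Minimisation can be written in a few lines: fold the data on each trial period, bin the phases, and pick the period that minimises the pooled within-bin variance. The light curve below is a synthetic, noiseless, unevenly sampled sinusoid, not survey data:

```python
import math
import statistics

TRUE_PERIOD = 1.23
# synthetic, unevenly sampled, noiseless "light curve"
times = [0.077 * i + 0.013 * math.sin(float(i)) for i in range(300)]
mags = [math.sin(2.0 * math.pi * t / TRUE_PERIOD) for t in times]

def pdm_dispersion(period, times, mags, n_bins=10):
    """Pooled within-bin variance of the light curve folded on a trial period."""
    bins = [[] for _ in range(n_bins)]
    for t, m in zip(times, mags):
        phase = (t % period) / period
        bins[min(int(phase * n_bins), n_bins - 1)].append(m)
    filled = [b for b in bins if len(b) > 1]
    dof = sum(len(b) - 1 for b in filled)
    return sum((len(b) - 1) * statistics.variance(b) for b in filled) / dof

trial_periods = [0.50 + 0.01 * j for j in range(151)]   # grid from 0.50 to 2.00
best = min(trial_periods, key=lambda p: pdm_dispersion(p, times, mags))
print(best)   # grid point closest to the true period
```

Real light curves add noise, gaps, and aliases, which is exactly why the thesis evaluates several methods and refines the search around candidate periods.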
Abstract:
Recent studies have shown that the X̄ chart with variable sampling intervals (VSI) and/or with variable sample sizes (VSS) detects process shifts faster than the traditional X̄ chart. This article extends these studies to processes that are monitored by both the X̄ and R charts. A Markov chain model is used to determine the properties of the joint X̄ and R charts with variable sample sizes and sampling intervals (VSSI). The VSSI scheme improves the joint X̄ and R control chart performance in terms of the speed with which shifts in the process mean and/or variance are detected.
Abstract:
A Fortran computer program is given for the computation of the adjusted average time to signal, or AATS, for adaptive X̄ charts with one, two, or all three design parameters variable: the sample size, n, the sampling interval, h, and the factor k used in determining the width of the action limits. The program calculates the threshold limit to switch the adaptive design parameters and also provides the in-control average time to signal, or ATS.
Abstract:
This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.
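The structure of such a cost function can be illustrated with a fixed-parameter, Duncan-style expected cost per hour over a renewal cycle; this is a simplified sketch, and every cost and process value below is a placeholder, not a figure from the article:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cost_per_hour(n, h, k, delta=1.0, lam=0.01,
                  c_false=500.0, c_repair=250.0, c_out=100.0,
                  c_fixed=1.0, c_var=0.1):
    """Expected cost per hour for a fixed-parameter X̄ chart over one in/out-of-control cycle."""
    alpha = 2.0 * (1.0 - phi(k))                           # false-alarm probability per sample
    power = 1.0 - (phi(k - delta * sqrt(n)) - phi(-k - delta * sqrt(n)))
    arl1 = 1.0 / power                                     # samples needed to catch the shift
    in_control, out_control = 1.0 / lam, arl1 * h          # expected phase lengths (hours)
    samples = (in_control + out_control) / h               # samples taken per cycle
    cycle_cost = (c_false * alpha * in_control / h         # false alarms while in control
                  + c_repair                               # finding/removing the assignable cause
                  + c_out * out_control                    # off-target production
                  + (c_fixed + c_var * n) * samples)       # sampling and testing
    return cycle_cost / (in_control + out_control)

print(round(cost_per_hour(n=5, h=1.0, k=3.0), 2))
```

The adaptive designs in the paper replace the fixed n, h, and k with state-dependent values and evaluate the same cost components through a Markov chain rather than a single-cycle formula.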