996 results for Double Sampling
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The general assumption under which the X̄ chart is designed is that the process mean has a constant in-control value. However, there are situations in which the process mean wanders. When it wanders according to a first-order autoregressive (AR(1)) model, a complex approach involving Markov chains and integral equation methods is used to evaluate the properties of the X̄ chart. In this paper, we propose the use of a pure Markov chain approach to study the performance of the X̄ chart. The performance of the X̄ chart with variable parameters and the X̄ chart with double sampling are compared.
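A minimal sketch of a pure Markov chain calculation of this kind, under illustrative assumptions (this is not the paper's exact construction): the wandering AR(1) mean is discretized onto a grid of states, a signal of the X̄ chart plays the role of absorption, and the zero-state average run length (ARL) follows from the fundamental matrix (I − Q)⁻¹. The parameter values phi, sigma_mu, n, L, and m are hypothetical.

```python
import numpy as np
from scipy.stats import norm

phi, sigma_mu = 0.5, 1.0   # AR(1) coefficient and innovation std. dev. (assumed)
n, L = 5, 3.0              # subgroup size and limit width in sigma units (assumed)
m, span = 101, 4.0         # number of grid cells and grid half-width (assumed)

# Discretize the stationary AR(1) mean onto a symmetric grid.
s_stat = sigma_mu / np.sqrt(1.0 - phi**2)
grid = np.linspace(-span * s_stat, span * s_stat, m)
half = (grid[1] - grid[0]) / 2.0

def p_no_signal(mu):
    """P(no signal | process mean = mu): X-bar ~ N(mu, 1/n), limits at +/- L/sqrt(n)."""
    se = 1.0 / np.sqrt(n)
    return norm.cdf(L * se, loc=mu, scale=se) - norm.cdf(-L * se, loc=mu, scale=se)

# Q[i, j] = P(mean moves from cell i to cell j) * P(no signal at cell j).
Q = np.empty((m, m))
for i, mu_i in enumerate(grid):
    center = phi * mu_i  # conditional mean of the next mean value
    cell = norm.cdf(grid + half, center, sigma_mu) - norm.cdf(grid - half, center, sigma_mu)
    Q[i, :] = cell * p_no_signal(grid)

# Zero-state ARL starting from an in-control mean of zero (middle cell).
arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
print(f"ARL starting at mu = 0: {arl[m // 2]:.1f}")
```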
Abstract:
In this article, we consider the T² chart with double sampling to control bivariate processes (the BDS chart). During the first stage of sampling, n₁ items of the sample are inspected and two quality characteristics (x, y) are measured. If the Hotelling statistic T₁² for the mean vector of (x, y) is less than w, the sampling is interrupted. If T₁² is greater than CL₁, where CL₁ > w, the control chart signals an out-of-control condition. If w < T₁² ≤ CL₁, the sampling goes on to the second stage, where the remaining n₂ items of the sample are inspected and T₂² for the mean vector of the whole sample is computed. During the second stage, the control chart signals an out-of-control condition when T₂² is larger than CL₂. A comparative study shows that the BDS chart detects process disturbances faster than the standard bivariate T² chart and the adaptive bivariate T² charts with variable sample size and/or variable sampling interval.
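The two-stage decision rule maps directly to a short procedure. The following is a minimal sketch in Python, not the authors' implementation; the limits w, CL₁, and CL₂ and the in-control parameters mu0 and Sigma are assumed to be supplied by the chart design.

```python
import numpy as np

def hotelling_t2(sample, mu0, Sigma):
    """Hotelling T^2 statistic for the mean vector of `sample` (k x 2 array)."""
    k = len(sample)
    diff = sample.mean(axis=0) - mu0
    return k * diff @ np.linalg.inv(Sigma) @ diff

def bds_signal(first, second, mu0, Sigma, w, cl1, cl2):
    """One pass of the BDS rule; returns (signals, items_inspected)."""
    t1 = hotelling_t2(first, mu0, Sigma)
    if t1 < w:                 # process looks in control: interrupt sampling
        return False, len(first)
    if t1 > cl1:               # first stage alone signals out of control
        return True, len(first)
    # w < T1^2 <= CL1: inspect the remaining n2 items and pool both stages.
    pooled = np.vstack([first, second])
    t2 = hotelling_t2(pooled, mu0, Sigma)
    return t2 > cl2, len(pooled)
```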
Abstract:
We propose a new statistic to control the covariance matrix of bivariate processes. This new statistic is based on the sample variances of the two quality characteristics, in short the VMAX statistic. The points plotted on the chart correspond to the maximum of the values of these two variances. The reasons to consider the VMAX statistic instead of the generalized variance |S| are its faster detection of process changes and its better diagnostic feature; that is, with the VMAX statistic it is easier to identify the out-of-control variable. We study the double sampling (DS) and the exponentially weighted moving average (EWMA) charts based on the VMAX statistic.
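As a minimal sketch (the subgroup data are hypothetical and the control limit CL is an assumed design parameter), the VMAX point for a subgroup is simply the larger of the two sample variances, which also makes the diagnosis direct: the signaling variable is the one that attained the maximum.

```python
import numpy as np

def vmax(x, y):
    """VMAX statistic for one subgroup: the larger of the two sample variances."""
    return max(np.var(x, ddof=1), np.var(y, ddof=1))

# Hypothetical subgroup of size 5; a chart would compare vmax(x, y) to a
# control limit CL chosen for the desired false-alarm rate.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=5)
y = rng.normal(0.0, 1.5, size=5)  # inflated variance in y
print(f"VMAX = {vmax(x, y):.2f}")
```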
Abstract:
In this paper we propose the Double Sampling X̄ control chart for monitoring processes in which the observations follow a first-order autoregressive model. We consider sampling intervals that are sufficiently long to meet the rational subgroup concept. The Double Sampling X̄ chart is substantially more efficient than the Shewhart chart and the Variable Sample Size chart. To study the properties of these charts, we derived closed-form expressions for the average run length (ARL), taking into account the within-subgroup correlation. Numerical results show that this correlation has a significant impact on the chart properties.
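To see why within-subgroup correlation matters, consider the variance of the mean of n consecutive AR(1) observations with lag-k autocorrelation φᵏ: Var(X̄) = (σ²/n)[1 + (2/n) Σₖ₌₁ⁿ⁻¹ (n − k) φᵏ]. A minimal sketch of this inflation factor follows (the values of n and φ are illustrative, and this is not the paper's closed-form ARL expression).

```python
import numpy as np

def var_xbar_ar1(n, phi, sigma2=1.0):
    """Var(X-bar) of n consecutive AR(1) observations with autocorrelation phi**k."""
    k = np.arange(1, n)
    return sigma2 / n * (1.0 + 2.0 / n * np.sum((n - k) * phi**k))

# Inflation of Var(X-bar) relative to the i.i.d. value sigma^2/n; control
# limits set under an independence assumption are off by this factor's sqrt.
for phi in (0.0, 0.25, 0.5, 0.75):
    print(f"phi = {phi:4.2f}: inflation = {var_xbar_ar1(5, phi) / (1.0 / 5):.2f}")
```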
Abstract:
The steady-state average run length is used to measure the performance of the recently proposed synthetic double sampling X̄ chart (synthetic DS chart). The overall performance of the DS X̄ chart in signaling process mean shifts of different magnitudes does not improve when it is integrated with the conforming run length chart, except when the integrated charts are designed to offer very high protection against false alarms and the use of large samples is prohibitive. The synthetic chart signals when a second point falls beyond the control limits, no matter whether one point falls above the centerline and the other falls below it; with the side-sensitive feature, the synthetic chart does not signal when they fall on opposite sides of the centerline. We also investigated the steady-state average run length of the side-sensitive synthetic DS X̄ chart. With the side-sensitive feature, the overall performance of the synthetic DS X̄ chart improves, but not enough to outperform the non-synthetic DS X̄ chart.
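A minimal sketch of the synthetic signaling rule as described above, with and without side sensitivity; the conforming-run-length limit L is an assumed design parameter, and conventions such as the head-start treatment of the first nonconforming point are omitted.

```python
def synthetic_signal(nonconforming, L, side_sensitive=False):
    """`nonconforming` is a time-ordered list of (sample_index, side) pairs for
    samples whose statistic fell beyond the control limits.  Signal when two
    nonconforming samples occur within L samples of each other (and, if
    side_sensitive, on the same side of the centerline)."""
    for (i1, s1), (i2, s2) in zip(nonconforming, nonconforming[1:]):
        if i2 - i1 <= L and (not side_sensitive or s1 == s2):
            return True
    return False

# Nonconforming samples at t=4 (above) and t=7 (below), with L=5: the basic
# synthetic rule signals, the side-sensitive rule does not.
events = [(4, "above"), (7, "below")]
print(synthetic_signal(events, L=5))                       # True
print(synthetic_signal(events, L=5, side_sensitive=True))  # False
```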
Abstract:
This thesis reports on the results of the analyses of certain aspects of sampling inspection plans. The investigation has been confined to attributes (as distinct from variables) plans, and in this respect the analyses have been concerned with two main aspects of single and double plans. These are: (i) the Average Outgoing Quality Limit (AOQL) of the plan; (ii) the Average Sample Number (ASN) of the plan. In the former connection, the investigation has been concerned with the analytical evaluation of the AOQL and the determination of the fraction defective of the incoming material that gives the AOQL; these analyses have been applied to both single and double sampling plans. In the latter connection, the investigation has been concerned with the analytical evaluation of the maximum ASN and the determination of the fraction defective of the incoming material that gives the maximum value of the ASN. These analyses have been confined to double sampling plans because in the case of single sampling the ASN is constant and equal to n, the sample size.
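For a double sampling attributes plan (n₁, c₁, r₁, n₂, c₂), the quantities studied have simple binomial expressions: ASN(p) = n₁ + n₂·P(c₁ < d₁ < r₁) and AOQ(p) = p·Pₐ(p), with the AOQL and the maximum ASN being the maxima of these curves over the incoming fraction defective p. A minimal sketch under an assumed plan follows (the parameter values are hypothetical, and the maxima are located numerically rather than analytically as in the thesis).

```python
from scipy.stats import binom

# Plan parameters (illustrative assumptions).
n1, c1, r1, n2, c2 = 50, 1, 4, 100, 4

def p_second(p):
    """Probability that a second sample is needed: c1 < d1 < r1."""
    return sum(binom.pmf(d, n1, p) for d in range(c1 + 1, r1))

def p_accept(p):
    """Probability of acceptance for the double sampling plan."""
    pa = binom.cdf(c1, n1, p)                     # accepted on the first sample
    for d1 in range(c1 + 1, r1):                  # go to the second sample
        pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
    return pa

def asn(p):
    """Average sample number at incoming fraction defective p."""
    return n1 + n2 * p_second(p)

def aoq(p):
    """Average outgoing quality under rectifying inspection (large lots)."""
    return p * p_accept(p)

grid = [i / 1000 for i in range(1, 200)]
print("AOQL    ~", max(aoq(p) for p in grid))
print("max ASN ~", max(asn(p) for p in grid))
```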
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing, so there is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the points of view of area, power, and false error detection rate. We find that the double sampling checker (used in Razor) is the simplest and the most area- and power-efficient, but suffers from a very high false detection rate of 1.15 times the actual error rate. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power, and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys.

The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, both published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive.

In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea, still prevailing, was that the sample would be a miniature of the population. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed a theory of statistical inference for finite populations, based on Laplace's inference model.

R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science. He revolutionized the theory of statistics and, in addition, introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw samples repeatedly from the same population and to treat population parameters as constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling was a theory for double sampling, which gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, in addition to sufficient accuracy in estimation.
Abstract:
Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance; each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication, then discuss board-to-board links, which demand a longer communication range, and finally on-chip links with communication ranges of a few millimeters.
Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. IO data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth problem of the electrical channels. A linear and low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which require high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with maximum 21dB loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and cross-talk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area saving.
As the technology scales, the number of transistors on the chip grows, necessitating a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and a dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/µm) with better than 136fJ/b of power efficiency.
Abstract:
IntraCavity Laser Absorption Spectroscopy (ICLAS) is a high-resolution, high-sensitivity spectroscopic method capable of measuring line positions, linewidths, lineshapes, and absolute line intensities with a sensitivity that far exceeds that of a traditional multiple-pass absorption cell or Fourier transform spectrometer. From the fundamental knowledge obtained through these measurements, information about the underlying spectroscopy, dynamics, and kinetics of the species interrogated can be derived. The construction of an ICLA spectrometer will be detailed, and the measurements utilizing ICLAS will be discussed, as well as the theory of operation and modifications of the experimental apparatus. Results include: i) line intensities and collision-broadening coefficients of the A band of oxygen, along with previously unobserved high-J rotational transitions of the A band, hot-band transitions, and transitions of isotopically substituted species; ii) high-resolution (0.013 cm⁻¹) spectra of the second overtone of the OH stretch of trans-nitrous acid recorded between 10,230 and 10,350 cm⁻¹. The spectra were analyzed to yield a complete set of rotational parameters and an absolute band intensity, and two groups of anharmonic perturbations were observed and analyzed. These findings are discussed in the context of the contribution of overtone-mediated processes to OH radical production in the lower atmosphere.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)