14 results for Super threshold random variable
in Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho"
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Throughout this article, it is assumed that the non-central chi-square chart with two-stage sampling (TSS chi-square chart) is employed to monitor a process where the observations of the quality characteristic of interest, X, are independent and identically normally distributed with mean μ and variance σ². The process starts with the mean and the variance on target (μ = μ0; σ² = σ0²), but at some random time in the future an assignable cause shifts the mean from μ0 to μ1 = μ0 ± δσ0, δ > 0, and/or increases the variance from σ0² to σ1² = γ²σ0², γ > 1. Before the assignable cause occurs, the process is considered to be in a state of statistical control (the in-control state). As with the Shewhart charts, samples of size n0 + 1 are taken from the process at regular time intervals. The sampling is performed in two stages. At the first stage, the first item of the i-th sample is inspected. If its X value, say Xi1, is close to the target value (|Xi1 − μ0| < w0σ0, w0 > 0), the sampling is interrupted. Otherwise, at the second stage, the remaining n0 items are inspected and the following statistic is computed:

Wi = Σ_{j=2}^{n0+1} (Xij − μ0 + ξiσ0)², i = 1, 2, ...

Let d be a positive constant; then ξi = d if Xi1 > μ0, and ξi = −d otherwise. A signal is given at sample i if |Xi1 − μ0| > w0σ0 and Wi > kChiσ0², where kChi is the factor used in determining the upper control limit of the non-central chi-square chart. If devices such as go and no-go gauges can be used, then measurements are required only when the sampling goes to the second stage. Let P be the probability of deciding that the process is in control, and Pi, i = 1, 2, the probability of deciding that the process is in control at stage i of the sampling procedure. Thus

P = P1 + P2 − P1P2, where P1 = Pr[μ0 − w0σ0 ≤ X ≤ μ0 + w0σ0] and P2 = Pr[W ≤ kChiσ0²].

During the in-control period, W/σ0² follows a non-central chi-square distribution with n0 degrees of freedom and non-centrality parameter λ0 = n0d², that is, W/σ0² ~ χ²_{n0}(λ0). During the out-of-control period, W/σ1² follows a non-central chi-square distribution with n0 degrees of freedom and non-centrality parameter λ1 = n0(δ + ξ)²/γ².

The effectiveness of a control chart in detecting a process change can be measured by the average run length (ARL), that is, the speed with which the chart detects process shifts. The ARL of the proposed chart is easily determined because the number of samples taken before a signal is a geometrically distributed random variable with parameter 1 − P, so that ARL = 1/(1 − P). It is shown that the performance of the proposed chart is better than that of the joint X̄ and R charts. Furthermore, if the TSS chi-square chart is used for monitoring diameters, volumes, weights, etc., then appropriate devices, such as go/no-go gauges, can be used to decide whether the sampling should go to the second stage. When the process is stable and the joint X̄ and R charts are in use, the monitoring becomes monotonous, because an X̄ or R value rarely falls outside the control limits. The natural consequence is that the user pays less and less attention to the steps required to obtain the X̄ and R values; in some cases, this lack of attention can result in serious mistakes. The TSS chi-square chart has the advantage that most samplings are interrupted; consequently, most of the time the user will be working with attributes.
Our experience shows that the inspection of one item by attribute is much less monotonous than measuring four or five items at each sampling.
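As an illustration of the run-length computation above, the sketch below evaluates P1, P2, P = P1 + P2 − P1P2, and ARL = 1/(1 − P) with scipy. The parameter values are arbitrary, and fixing ξ = +d in the shifted case (so λ1 = n0(δ + d)²/γ²) is a simplifying assumption about the sign rule, not the paper's exact procedure.

```python
# Hedged sketch of the ARL of the TSS chi-square chart described above.
# Parameter names (n0, w0, d, k_chi, delta, gamma) follow the abstract;
# xi = +d is assumed for the shifted case.
from scipy.stats import norm, ncx2

def arl_tss_chisquare(n0, w0, d, k_chi, delta=0.0, gamma=1.0):
    # Stage 1: sampling stops if the first item falls inside the region
    # |X - mu0| < w0*sigma0.  Under the shifted model
    # X ~ N(mu0 + delta*sigma0, gamma^2 * sigma0^2):
    p1 = norm.cdf((w0 - delta) / gamma) - norm.cdf((-w0 - delta) / gamma)

    # Stage 2: W / sigma1^2 follows a non-central chi-square with n0
    # degrees of freedom; lambda1 = n0*(delta + d)^2 / gamma^2 assumes
    # xi = +d (first deviation in the same direction as the shift).
    lam = n0 * (delta + d) ** 2 / gamma ** 2
    p2 = ncx2.cdf(k_chi / gamma ** 2, df=n0, nc=lam)

    # No-signal probability and geometric run length.
    p = p1 + p2 - p1 * p2
    return 1.0 / (1.0 - p)

# In control (delta = 0, gamma = 1) versus a one-sigma mean shift:
print(arl_tss_chisquare(n0=4, w0=1.0, d=1.0, k_chi=20.0))             # ARL0
print(arl_tss_chisquare(n0=4, w0=1.0, d=1.0, k_chi=20.0, delta=1.0))
```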
Abstract:
This paper presents a region-based methodology for segmentation of Digital Elevation Models obtained from laser scanning data. The methodology is based on two sequential techniques: a recursive splitting technique using the quadtree structure, followed by a region merging technique using the Markov Random Field model. The recursive splitting technique starts by dividing the Digital Elevation Model into homogeneous regions. However, due to slight height differences in the Digital Elevation Model, region fragmentation can be relatively high. In order to minimize the fragmentation, a region merging technique based on the Markov Random Field model is applied to the previously segmented data. The resulting regions are first structured using the so-called Region Adjacency Graph. Each node of the Region Adjacency Graph represents a region of the segmented Digital Elevation Model, and two nodes are connected if the corresponding regions share a common boundary. Next, the random variable associated with each node is assumed to follow the Markov Random Field model. This hypothesis allows the derivation of the posterior probability distribution function, whose solution is obtained by Maximum a Posteriori estimation. Regions presenting a high probability of similarity are merged. Experiments carried out with laser scanning data showed that the methodology separates the objects in the Digital Elevation Model with a low amount of fragmentation.
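A minimal sketch of the Region Adjacency Graph bookkeeping described above follows. The quadtree split is assumed to have already produced a label image (not shown), and the MRF/MAP similarity criterion is reduced to a simple threshold on mean-height difference, so this illustrates the merge mechanics rather than the paper's estimator; region_adjacency_graph, merge_similar_regions, and height_tol are hypothetical names.

```python
# Sketch: build a Region Adjacency Graph over a labeled DEM and merge
# adjacent regions with similar mean height (a stand-in for the MAP
# "high probability of similarity" criterion).
import numpy as np

def region_adjacency_graph(labels):
    # Nodes are region labels; an edge links two regions that share a
    # boundary (4-connectivity between neighboring DEM cells).
    edges = set()
    right = labels[:, :-1] != labels[:, 1:]
    down = labels[:-1, :] != labels[1:, :]
    for a, b in zip(labels[:, :-1][right], labels[:, 1:][right]):
        edges.add((min(a, b), max(a, b)))
    for a, b in zip(labels[:-1, :][down], labels[1:, :][down]):
        edges.add((min(a, b), max(a, b)))
    return edges

def merge_similar_regions(dem, labels, height_tol=0.5):
    # Greedily merge adjacent regions whose mean heights differ by less
    # than height_tol; each merge reduces the region count by one, so
    # the loop terminates.
    changed = True
    while changed:
        changed = False
        means = {r: dem[labels == r].mean() for r in np.unique(labels)}
        for a, b in region_adjacency_graph(labels):
            if abs(means[a] - means[b]) < height_tol:
                labels[labels == b] = a   # absorb region b into region a
                changed = True
                break                     # RAG is stale; rebuild it
    return labels
```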
Abstract:
In this paper, a framework based on the decomposition of the first-order optimality conditions is described and applied to solve the Probabilistic Power Flow (PPF) problem in a coordinated but decentralized way in the context of multi-area power systems. The purpose of the decomposition framework is to solve the problem through an iterative process of solving smaller subproblems, each associated with one area of the power system. This strategy allows the probabilistic analysis of the variables of interest in a particular area without explicit knowledge of the network data of the other interconnected areas; it is only necessary to exchange border information related to the tie-lines between areas. An efficient method for probabilistic analysis, considering uncertainty in n system loads, is applied. The proposal is to use a particular case of the point estimate method, known as the Two-Point Estimate Method (TPM), rather than the traditional approach based on Monte Carlo simulation. The main feature of the TPM is that it only requires solving 2n power flows to obtain the behavior of any random variable. An iterative coordination algorithm between areas is also presented. This algorithm solves the multi-area PPF problem in a decentralized way, ensures the independent operation of each area, and integrates the decomposition framework and the TPM appropriately. The IEEE RTS-96 system is used to show the operation and effectiveness of the proposed approach, and Monte Carlo simulations are used to validate the results. © 2011 IEEE.
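The sketch below illustrates the Two-Point Estimate Method in its simplest form, assuming zero skewness of the load distributions (e.g., Gaussian loads), for which the 2n evaluation points are μk ± √n·σk with equal weights 1/(2n); power_flow is a hypothetical stand-in for one deterministic power-flow solve, not an API from the paper.

```python
# Hedged sketch of the Two-Point Estimate Method (TPM): 2n deterministic
# evaluations yield approximate moments of an output random variable.
import numpy as np

def two_point_estimate(power_flow, mu, sigma):
    # mu, sigma: mean and standard deviation of the n uncertain loads.
    n = len(mu)
    xi = np.sqrt(n)            # standard locations for zero skewness
    w = 1.0 / (2 * n)          # weight of each of the 2n evaluations
    m1 = m2 = 0.0
    for k in range(n):
        for s in (+xi, -xi):
            x = mu.copy()
            x[k] = mu[k] + s * sigma[k]   # perturb one load at a time
            y = power_flow(x)             # one deterministic solve
            m1 += w * y                   # first raw moment
            m2 += w * y * y               # second raw moment
    return m1, np.sqrt(max(m2 - m1 ** 2, 0.0))  # mean, std of output

# Toy stand-in for the power flow: a quadratic function of the loads.
f = lambda x: 1.0 + x.sum() + 0.1 * (x ** 2).sum()
mean, std = two_point_estimate(f, mu=np.array([1.0, 2.0, 0.5]),
                               sigma=np.array([0.1, 0.2, 0.05]))
print(mean, std)
```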
Abstract:
Graduate Program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
A Fortran computer program is given for the computation of the adjusted average time to signal, or AATS, for adaptive X̄ charts with one, two, or all three design parameters variable: the sample size, n, the sampling interval, h, and the factor k used in determining the width of the action limits. The program calculates the threshold limit to switch the adaptive design parameters and also provides the in-control average time to signal, or ATS.
Abstract:
The usual practice in using a control chart to monitor a process is to take samples of size n from the process every h hours. This article considers the properties of the X̄ chart when the size of each sample depends on what is observed in the preceding sample. The idea is that the sample should be large if the sample point of the preceding sample is close to, but not actually outside, the control limits, and small if the sample point is close to the target. The properties of the variable sample size (VSS) X̄ chart are obtained using Markov chains. The VSS X̄ chart is substantially quicker than the traditional X̄ chart in detecting moderate shifts in the process.
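The Markov chain itself is not spelled out in the abstract; the sketch below shows one standard way to set it up, with two transient states (next sample small or large, determined by where the previous sample point fell) and absorption at a signal. The warning limit w, action limit k, and sample sizes used here are illustrative assumptions.

```python
# Sketch of a Markov-chain ARL computation for a VSS X-bar chart.
# w and k are warning and action limits in standard-error units;
# delta is the mean shift in units of the process standard deviation.
import numpy as np
from scipy.stats import norm

def vss_arl(n1, n2, w, k, delta=0.0, start=0):
    def row(n):
        z = delta * np.sqrt(n)  # mean of the standardized sample mean
        p_central = norm.cdf(w - z) - norm.cdf(-w - z)   # -> small sample next
        p_in = norm.cdf(k - z) - norm.cdf(-k - z)        # any no-signal point
        return [p_central, p_in - p_central]             # -> large sample next
    Q = np.array([row(n1), row(n2)])       # transient transition matrix
    # Expected number of samples to absorption (a signal): (I - Q)^-1 * 1.
    arl = np.linalg.solve(np.eye(2) - Q, np.ones(2))
    return arl[start]                      # start: 0 = small, 1 = large

print(vss_arl(n1=2, n2=8, w=1.0, k=3.0))             # in-control ARL
print(vss_arl(n1=2, n2=8, w=1.0, k=3.0, delta=0.5))  # 0.5-sigma mean shift
```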
Abstract:
A Fortran computer program is given for the computation of the adjusted average time to signal, or AATS, for adaptive X̄ charts with one, two, or all three design parameters variable: the sample size, n, the sampling interval, h, and the factor k used in determining the width of the action limits. The program calculates the threshold limit to switch the adaptive design parameters and also provides the in-control average time to signal, or ATS.
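The Fortran source itself is not reproduced in the abstract. As a minimal illustration of the simplest quantity the program reports, the in-control ATS of a fixed-parameter X̄ chart with k-sigma action limits and sampling interval h is ATS0 = h/α, where α = 2Φ(−k) is the false-alarm rate:

```python
# Minimal sketch (not the Fortran program): in-control average time to
# signal of a fixed-parameter X-bar chart.
from scipy.stats import norm

def in_control_ats(h, k):
    alpha = 2 * norm.cdf(-k)     # two-sided false-alarm probability
    return h / alpha             # expected hours until a false alarm

print(in_control_ats(h=1.0, k=3.0))   # ~370.4 hours for 3-sigma limits
```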
Abstract:
An upconversion random laser (RL) operating in the ultraviolet is reported for Nd³⁺ doped fluoroindate glass powder pumped at 575 nm. The RL is obtained by resonant excitation of the Nd³⁺ state ²G7/2, followed by energy transfer between two excited ions such that one ion in the pair decays to a lower energy state and the other is promoted to the state ⁴D7/2, from where it decays emitting light at 381 nm. The RL threshold of 30 kW/cm² was determined by monitoring the photoluminescence intensity as a function of the pump laser intensity. The RL pulses have a duration of 29 ns, which is 50 times shorter than the decay time of the upconversion signal when the sample is pumped with intensities below the RL threshold. © 2011 Optical Society of America.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The photon statistics of the random laser emission of a Rhodamine B doped di-ureasil hybrid powder are investigated to evaluate its degree of coherence above threshold. Although the random laser emission is a weighted average of spatially uncorrelated radiation emitted at different positions in the sample, spatial coherence control was achieved through an improved detection configuration based on spatial filtering. Using this experimental approach, which also allows fine mode discrimination and time-resolved analysis of modes uncoupled from mode competition, an area not larger than the expected coherence size of the random laser is probed. Once the spectral and temporal behavior of non-overlapping modes is characterized, the photon-number probability distribution and the resulting second-order correlation coefficient are assessed as a function of time delay and wavelength. The outcome of our single-photon counting measurements revealed a high degree of temporal coherence at the time of maximum pump intensity and at wavelengths around the Rhodamine B gain maximum.