988 results for Random Variable


Relevance:

60.00%

Publisher:

Abstract:

This paper provides a theoretical review of the Poisson probability distribution as a function that assigns, to each event defined on a discrete random variable, its probability of occurrence in a disjoint interval of time or region of space. It also reviews the negative exponential distribution, used to model the time between consecutive Poisson events that occur independently; that is, events for which the probability of occurrence in one time interval does not depend on what happened in other intervals, which is why the distribution is said to be memoryless. The Poisson process links the Poisson function, which represents a set of independent events occurring in a time interval or region of space, to the times between occurrences of those events, which follow the negative exponential distribution. These concepts are used in queueing theory, the branch of operations research that describes and offers solutions for situations in which a set of individuals or items form queues while waiting for a service; application examples from the medical field are presented.
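
As a minimal illustration of the relationship the abstract describes, the sketch below (Python; the rate `lam` and the unit horizon are arbitrary choices, not taken from the paper) simulates a Poisson process by summing independent exponential inter-arrival times and checks that the number of events in a unit interval is approximately Poisson distributed:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 4.0        # arrival rate (events per unit time); arbitrary choice
n_runs = 100_000

# Simulate the process on [0, 1]: draw exponential inter-arrival times
# (mean 1/lam) until the running sum exceeds 1, and count the arrivals.
counts = np.empty(n_runs, dtype=int)
for r in range(n_runs):
    t, k = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam)
        if t > 1.0:
            break
        k += 1
    counts[r] = k

# The count in a unit interval should be Poisson(lam):
# mean and variance both close to lam.
print(f"empirical mean     = {counts.mean():.3f}  (theory {lam})")
print(f"empirical variance = {counts.var():.3f}  (theory {lam})")
```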

Relevance:

60.00%

Publisher:

Abstract:

Pardo, Patie, and Savov derived, under mild conditions, a Wiener-Hopf type factorization for the exponential functional of proper Lévy processes. In this paper, we extend this factorization by relaxing a finite moment assumption and by considering the exponential functional of killed Lévy processes. As a by-product, we derive some interesting fine distributional properties enjoyed by a large class of such random variables, such as the absolute continuity of the distribution and the smoothness, boundedness, or complete monotonicity of the density. These results are then used to derive similar properties for the law of the maximum and of the first passage time of some stable Lévy processes. Thus, for example, we show that for any stable process with $\rho\in(0,\frac{1}{\alpha}-1]$, where $\rho\in[0,1]$ is the positivity parameter and $\alpha$ is the stable index, the first passage time has a bounded and non-increasing density on $\mathbb{R}_+$. We also generate many instances of integral or power series representations for the law of the exponential functional of Lévy processes with one- or two-sided jumps. The proof of our main results requires devices different from those developed by Pardo, Patie, and Savov. It relies in particular on a generalization of a transform recently introduced by Chazal et al. together with some extensions of Wiener-Hopf techniques to killed Lévy processes. The factorizations developed here also allow for further applications, which we only indicate here.
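
The exponential functional studied here is, for a Lévy process $\xi$, the random variable $I = \int_0^\infty e^{-\xi_t}\,dt$. The sketch below (Python; a Monte Carlo approximation for the simplest choice, a Brownian motion with positive drift, with step size, horizon, and parameters invented) is only meant to give a feel for this object; it does not implement the paper's factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
b, sigma = 1.0, 1.0      # drift and volatility of xi_t = b*t + sigma*B_t (arbitrary)
dt, T, n_paths = 1e-2, 30.0, 10_000
n_steps = int(T / dt)

# I = int_0^inf exp(-xi_t) dt, truncated at T (the tail is negligible for
# b > 0 and T large) and discretized by a left-point Riemann sum.
I = np.zeros(n_paths)
xi = np.zeros(n_paths)
for _ in range(n_steps):
    I += np.exp(-xi) * dt
    xi += b * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Inspect basic distributional features; the paper proves, for large classes
# of Lévy processes, absolute continuity and smoothness of this law.
print(f"mean {I.mean():.3f}, std {I.std():.3f}, min {I.min():.3f}, max {I.max():.3f}")
```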

Relevance:

60.00%

Publisher:

Abstract:

This paper addresses the one-dimensional cutting stock problem when demand is a random variable. The problem is formulated as a two-stage stochastic nonlinear program with recourse. The first-stage decision variables are the numbers of objects to be cut according to each cutting pattern; the second-stage decision variables are the numbers of items held in inventory or backordered as a consequence of the first-stage decisions. The objective is to minimize the total expected cost incurred in both stages due to waste and to holding or backordering penalties. A simplex-based method with column generation is proposed for solving a linear relaxation of the resulting optimization problem. The proposed method is evaluated using two well-known measures of uncertainty effects in stochastic programming: the value of the stochastic solution (VSS) and the expected value of perfect information (EVPI). The optimal two-stage solution is shown to be more effective than the alternative wait-and-see and expected-value approaches, even under small variations in the parameters of the problem.
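
A toy numeric sketch of the two-stage structure and of the VSS/EVPI measures (Python; the stock length, item sizes, scenarios, and costs are invented for illustration, and the first stage is solved by brute force rather than by the paper's simplex/column-generation method):

```python
import itertools

# Toy instance: stock length 10; item lengths 4 and 3.
# Cutting patterns (a, b) = (# items of length 4, # items of length 3).
patterns = [(2, 0), (1, 2), (0, 3)]
waste    = [10 - 4*a - 3*b for a, b in patterns]   # wastes: 2, 0, 1
c_waste, c_hold, c_back = 1.0, 0.5, 3.0            # unit penalties (invented)

# Demand scenarios for (D1, D2) with probabilities.
scenarios = [((6, 4), 0.3), ((8, 6), 0.4), ((10, 8), 0.3)]

def total_cost(x, d):
    """First-stage waste cost + second-stage holding/backorder cost."""
    p1 = 2*x[0] + 1*x[1]                           # items of length 4 produced
    p2 = 2*x[1] + 3*x[2]                           # items of length 3 produced
    w  = sum(xi * wi for xi, wi in zip(x, waste))
    hold = max(p1 - d[0], 0) + max(p2 - d[1], 0)
    back = max(d[0] - p1, 0) + max(d[1] - p2, 0)
    return c_waste*w + c_hold*hold + c_back*back

def best(cost_fn):
    return min((cost_fn(x), x) for x in itertools.product(range(9), repeat=3))

# RP: minimize expected cost over scenarios (here-and-now solution).
rp, x_rp = best(lambda x: sum(p * total_cost(x, d) for d, p in scenarios))

# WS: perfect information -- optimize separately for each scenario.
ws = sum(p * best(lambda x, d=d: total_cost(x, d))[0] for d, p in scenarios)

# EEV: solve the mean-value problem, then evaluate its solution under risk.
d_bar = tuple(sum(p * di[k] for di, p in scenarios) for k in range(2))
_, x_ev = best(lambda x: total_cost(x, d_bar))
eev = sum(p * total_cost(x_ev, d) for d, p in scenarios)

print(f"x*={x_rp}  RP={rp:.2f}  WS={ws:.2f}  EVPI={rp - ws:.2f}  VSS={eev - rp:.2f}")
```

Both measures are nonnegative by construction: perfect information can only help (EVPI = RP - WS), and ignoring randomness can only hurt (VSS = EEV - RP).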

Relevance:

60.00%

Publisher:

Abstract:

We study the asymptotic properties of the number of open paths of length $n$ in an oriented $\rho$-percolation model. We show that this number is $e^{n\alpha(\rho)(1+o(1))}$ as $n \to \infty$. The exponent $\alpha(\rho)$ is deterministic, can be expressed in terms of the free energy of a polymer model, and can be explicitly computed in some range of the parameters. Moreover, in a restricted range of the parameters, we even show that the number of such paths is $n^{-1/2}\,W\,e^{n\alpha(\rho)}(1+o(1))$ for some nondegenerate random variable $W$. We build on connections with the model of directed polymers in random environment, and we use techniques and results developed in this context.
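
A small dynamic-programming sketch of the quantity being counted (Python; it follows one standard formulation of oriented $\rho$-percolation, in which a path is $\rho$-open when at least a fraction $\rho$ of its sites are open, and the values of $n$, $p$, $\rho$ are picked arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, rho = 100, 0.6, 0.6    # path length, site density, rho (arbitrary)

# Oriented lattice in 1+1 dimensions: level k has sites j = 0..k, and a
# path moves from (k, j) to (k+1, j) or (k+1, j+1). Each site is open
# independently with probability p.
# dp[j, m] = number of paths reaching site j at the current level that
# have visited exactly m open sites so far.
dp = np.zeros((n + 1, n + 2))
dp[0, int(rng.random() < p)] = 1.0           # the origin itself may be open

for k in range(1, n + 1):
    omega = rng.random(k + 1) < p            # openness of level-k sites
    arrivals = dp.copy()
    arrivals[1:k + 1] += dp[0:k]             # arrive from site j or j-1
    nxt = np.zeros_like(dp)
    for j in range(k + 1):
        if omega[j]:
            nxt[j, 1:] = arrivals[j, :-1]    # new site open: m -> m + 1
        else:
            nxt[j, :] = arrivals[j, :]
    dp = nxt

m_min = int(np.ceil(rho * (n + 1)))          # need >= rho * (#sites) open
N = dp[:, m_min:].sum()
print(f"N_n = {N:.3e},  (1/n) log N_n = {np.log(N) / n:.4f}  (estimates alpha(rho))")
```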

Relevance:

60.00%

Publisher:

Abstract:

Probabilistic reasoning with belief (Bayesian) networks is based on conditional probability matrices, so its implementations suffer from NP-hardness. In particular, the amount of probabilistic information necessary for the computations is often overwhelming, which makes compressing the conditional probability table one of the most important issues faced by the probabilistic reasoning community. Santos suggested an approach, called linear potential functions, for compressing the information from a combinatorial amount to roughly linear in the number of random variable assignments. However, much of the information in Bayesian networks that admit no linear potential functions is better fitted by approximating polynomials than by a forced linear function. For this reason, this paper constructs a polynomial method for compressing the conditional probability table. We evaluated the proposed technique, and our experimental results demonstrate that the approach is efficient and promising.
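
A minimal sketch of the idea (Python; the CPT, the indexing of parent assignments, and the polynomial degree are all invented for illustration, not the construction from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# A conditional probability table P(X=1 | parent assignment): one entry per
# joint assignment of, say, 8 binary parents -> 256 numbers to store.
n_assign = 256
idx = np.arange(n_assign)
cpt = 1.0 / (1.0 + np.exp(-(idx - 128) / 40.0))   # smooth synthetic CPT
cpt = (cpt + rng.normal(0.0, 0.01, n_assign)).clip(0.0, 1.0)  # mild noise

# Compress: fit a low-degree polynomial in the assignment index.
# Storage drops from 256 table entries to (degree + 1) coefficients.
degree = 3
coeffs = np.polyfit(idx, cpt, degree)
approx = np.clip(np.polyval(coeffs, idx), 0.0, 1.0)

err = np.abs(approx - cpt)
print(f"stored values: {degree + 1} vs {n_assign}")
print(f"max abs error {err.max():.4f}, mean abs error {err.mean():.4f}")
```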

Relevance:

60.00%

Publisher:

Abstract:

Recently, a patchwork-based audio watermarking scheme was proposed in [1], which embeds watermarks by modifying the means of absolute-valued discrete cosine transform (DCT) coefficients of suitable fragments. This audio watermarking scheme is more robust to common attacks than existing counterparts. In this paper, we present a detailed analysis of the scheme. We first derive the probability density function (pdf) of the random variable corresponding to the mean of an absolute-valued DCT fragment. Then, based on the obtained pdf, we show how the watermarking parameters affect the performance of the scheme. The analysis provides a guideline for the selection of watermarking parameters, and its effectiveness is verified by simulations on a large number of real-world audio segments.
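
A simplified sketch of the quantity analyzed (Python with scipy; the fragment positions, embedding strength, and the embedding rule are schematic stand-ins, not the exact scheme of [1]):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(4)
x = rng.standard_normal(2048)       # stand-in for a host audio segment

# Take the DCT and pick two fragments; embed one bit by pushing the means
# of the absolute-valued coefficients apart (the patchwork idea).
X = dct(x, norm="ortho")
f1, f2 = X[200:400], X[400:600]     # two "suitable fragments" (schematic)
alpha = 0.1                         # embedding strength (schematic)

m1, m2 = np.abs(f1).mean(), np.abs(f2).mean()
print(f"means before embedding: {m1:.4f}, {m2:.4f}")

# Bit '1': enlarge fragment 1, shrink fragment 2 (vice versa for '0').
f1 *= (1 + alpha)                   # views into X, so X is modified in place
f2 *= (1 - alpha)
y = idct(X, norm="ortho")           # watermarked segment

Y = dct(y, norm="ortho")
m1w, m2w = np.abs(Y[200:400]).mean(), np.abs(Y[400:600]).mean()
print(f"means after embedding:  {m1w:.4f}, {m2w:.4f}")
print("decoded bit:", int(m1w > m2w))
```

The paper's contribution is the pdf of exactly such fragment means, which tells how far apart `alpha` must push them to survive attacks at a given false-detection rate.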

Relevance:

60.00%

Publisher:

Abstract:

We apply the concept of exchangeable random variables to the case of non-additive probability distributions exhibiting uncertainty aversion, in the class generated by a convex core (convex non-additive probabilities with a convex core). We are able to prove two versions of the law of large numbers (de Finetti's theorems). By making use of two definitions of independence, we prove two versions of the strong law of large numbers. It turns out that we cannot assure the convergence of the sample averages to a constant. We then model the case in which there is a "true" probability distribution behind the successive realizations of the uncertain random variable; in this case convergence occurs. This result is important because it confirms the intuition that it is possible to "learn" the "true" additive distribution behind an uncertain event if one observes it repeatedly (a sufficiently large number of times). We also provide a conjecture regarding the "learning" (or updating) process above, and prove a partial result for the case of the Dempster-Shafer updating rule and binomial trials.

Relevance:

60.00%

Publisher:

Abstract:

The concept of the stochastic discount factor pervades the modern theory of asset pricing. At first, this object allows otherwise unrelated pricing models to be discussed in the same terms. Hansen and Jagannathan have shown, however, that there is valuable information to be extracted from this powerful concept, which underlies all asset pricing models. From security market data sets, one can explore the behavior of this random variable and determine a useful variance bound. Furthermore, through that instrument, they expose a pitfall of modern asset pricing: model misspecification. These major contributions, along with some of their extensions, are thoroughly investigated in this exposition.
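
A compact sketch of the Hansen-Jagannathan variance bound (Python; the return moments are synthetic, invented for illustration). For a candidate SDF mean $v$, any SDF $m$ pricing the gross returns $R$ (so $E[mR] = \mathbf{1}$) must satisfy $\sigma(m)^2 \ge (\mathbf{1} - v\,E[R])^\top \Sigma^{-1} (\mathbf{1} - v\,E[R])$, where $\Sigma$ is the return covariance matrix:

```python
import numpy as np

# Synthetic moments for two assets' gross returns (invented numbers).
mu = np.array([1.06, 1.02])                  # E[R]
Sigma = np.array([[0.040, 0.005],
                  [0.005, 0.010]])           # Cov(R)
Sigma_inv = np.linalg.inv(Sigma)
ones = np.ones(2)

def hj_bound(v):
    """Minimum SDF std. dev. consistent with E[m] = v and E[m R] = 1."""
    e = ones - v * mu                        # pricing errors of the constant SDF m = v
    return np.sqrt(e @ Sigma_inv @ e)

# Trace the bound over candidate SDF means: a frontier in (E[m], sigma(m))
# space that any correctly specified pricing model must lie above.
for v in np.linspace(0.90, 1.00, 6):
    print(f"E[m] = {v:.3f}  ->  sigma(m) >= {hj_bound(v):.4f}")
```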

Relevance:

60.00%

Publisher:

Abstract:

The most widely used stochastic sequential simulation algorithm is sequential Gaussian simulation (ssG). In theory, stochastic methods reproduce the uncertainty space of the random variable Z(u) better the larger the number L of realizations executed. Sometimes, however, L must be so large that the technique becomes prohibitive. This thesis presents a more efficient strategy: the sequential Gaussian simulation algorithm was modified to increase its efficiency. Replacing the Monte Carlo method with Latin Hypercube Sampling (LHS) makes the characterization of the uncertainty space of the random variable Z(u), for a given precision, achievable more quickly. The proposed technique also guarantees that the entire theoretical uncertainty model is sampled, especially in its extreme portions.
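
A minimal sketch contrasting plain Monte Carlo draws with Latin Hypercube draws of a standard Gaussian (Python with scipy; a one-dimensional stand-in for the thesis's change inside the sequential simulation loop):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
L = 50               # number of realizations (kept small on purpose)
n_trials = 2000

def mc_draw():
    return norm.ppf(rng.random(L))

def lhs_draw():
    # One uniform sample per stratum [i/L, (i+1)/L), randomly permuted,
    # then mapped through the Gaussian quantile function: every part of
    # the distribution, including the tails, is sampled.
    u = (np.arange(L) + rng.random(L)) / L
    return norm.ppf(rng.permutation(u))

# Compare how stably each scheme reproduces a tail statistic, here the
# 95th percentile of a standard normal (true value about 1.645).
mc_q = [np.percentile(mc_draw(), 95) for _ in range(n_trials)]
lh_q = [np.percentile(lhs_draw(), 95) for _ in range(n_trials)]
print(f"MC : mean {np.mean(mc_q):.3f}, std {np.std(mc_q):.3f}")
print(f"LHS: mean {np.mean(lh_q):.3f}, std {np.std(lh_q):.3f}")
```

The smaller spread of the LHS estimates across trials is exactly the effect the thesis exploits: a given precision is reached with fewer realizations L.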

Relevance:

60.00%

Publisher:

Abstract:

This thesis consists of three chapters in the area of Applied Microeconomics: two in applied political economy and one in the economics of education. The first chapter investigates whether electing a woman as mayor affects the entry of other women into the political market, thereby reducing a pre-established preference of voters against voting for women. The exercise uses a regression discontinuity design, exploiting elections in which a woman lost or won against a male candidate by a margin of votes small enough that the gender of the winner is as good as random. The results show that electing a woman has an impact only in environments more favorable to electing women (measured here by the share of elected female city councillors) or in places where the candidates were of higher quality (measured by schooling). The second chapter estimates the impact of disclosing school quality on student migration between schools. The idea is that once the quality signal becomes public, schools and students have incentives to adapt according to their demand for quality. The chapter exploits a fuzzy regression discontinuity design, since one of the disclosure criteria of IDEB is that the school have at least 20 students enrolled in the assessed grade. The results show that schools whose IDEB was disclosed experienced greater student migration, particularly among students in vulnerable conditions. The third chapter assesses the exogeneity hypothesis of Brazil's trade liberalization of the late 1980s and early 1990s. A vast literature explores the effects of trade liberalization on the labor market, income inequality, poverty, and economic growth. These studies treat the Brazilian liberalization process as uncorrelated with the demands of any specific sector of economic activity, which would justify using the liberalization period as an instrument to address endogeneity in the estimations. We present evidence that, although uncorrelated with any particular sector, the liberalization was correlated with the distribution of the governments' political capital in that period, and may have worked as a clear strategy of political strengthening or, at least, had the political context as a facilitator of the process.
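
A schematic sketch of the first chapter's identification idea (Python; the data-generating process, bandwidth, and effect size are all invented): a local linear regression on each side of a zero margin-of-victory cutoff.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000

# Running variable: the woman's margin of victory (negative = she lost).
margin = rng.uniform(-0.5, 0.5, n)
woman_elected = (margin > 0).astype(float)

# Outcome: e.g. share of female candidates in the next election (invented
# DGP with a 0.05 jump at the cutoff plus a smooth trend and noise).
y = 0.20 + 0.10 * margin + 0.05 * woman_elected + rng.normal(0, 0.05, n)

h = 0.10                                     # bandwidth (invented)
def fit_at_cutoff(side):
    m = (np.abs(margin) < h) & side
    X = np.column_stack([np.ones(m.sum()), margin[m]])
    beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
    return beta[0]                           # intercept = fitted value at 0

tau = fit_at_cutoff(margin > 0) - fit_at_cutoff(margin < 0)
print(f"estimated RD effect at the cutoff: {tau:.4f} (true 0.05)")
```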

Relevance:

60.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

60.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

60.00%

Publisher:

Abstract:

Throughout this article, it is assumed that the non-central chi-square chart with two-stage sampling (TSS chi-square chart) is employed to monitor a process where the observations of the quality characteristic of interest $X$ are independent and identically normally distributed with mean $\mu$ and variance $\sigma^2$. The process starts with the mean and the variance on target ($\mu = \mu_0$; $\sigma^2 = \sigma_0^2$), but at some random time in the future an assignable cause shifts the mean from $\mu_0$ to $\mu_1 = \mu_0 \pm \delta\sigma_0$, $\delta > 0$, and/or increases the variance from $\sigma_0^2$ to $\sigma_1^2 = \gamma^2\sigma_0^2$, $\gamma > 1$. Before the assignable cause occurs, the process is considered to be in a state of statistical control (the in-control state). As with the Shewhart charts, samples of size $n_0 + 1$ are taken from the process at regular time intervals. The sampling is performed in two stages. At the first stage, the first item of the $i$-th sample is inspected. If its $X$ value, say $X_{i1}$, is close to the target value ($|X_{i1} - \mu_0| < w_0\sigma_0$, $w_0 > 0$), the sampling is interrupted. Otherwise, at the second stage, the remaining $n_0$ items are inspected and the following statistic is computed:

$$W_i = \sum_{j=2}^{n_0+1}(X_{ij} - \mu_0 + \xi_i\sigma_0)^2, \quad i = 1, 2, \ldots$$

Let $d$ be a positive constant; then $\xi_i = d$ if $X_{i1} > \mu_0$, and $\xi_i = -d$ otherwise. A signal is given at sample $i$ if $|X_{i1} - \mu_0| > w_0\sigma_0$ and $W_i > k_{Chi}\sigma_0^2$, where $k_{Chi}$ is the factor used in determining the upper control limit of the non-central chi-square chart. If devices such as go and no-go gauges can be considered, then measurements are not required except when the sampling goes to the second stage. Let $P$ be the probability of deciding that the process is in control and $P_i$, $i = 1, 2$, be the probability of deciding that the process is in control at stage $i$ of the sampling procedure. Thus

$$P = P_1 + P_2 - P_1P_2, \qquad P_1 = \Pr[\mu_0 - w_0\sigma_0 \le X \le \mu_0 + w_0\sigma_0], \qquad P_2 = \Pr[W \le k_{Chi}\sigma_0^2].$$

During the in-control period, $W/\sigma_0^2$ is distributed as a non-central chi-square with $n_0$ degrees of freedom and non-centrality parameter $\lambda_0 = n_0 d^2$, i.e. $W/\sigma_0^2 \sim \chi^2_{n_0}(\lambda_0)$. During the out-of-control period, $W/\sigma_1^2$ is distributed as a non-central chi-square with $n_0$ degrees of freedom and non-centrality parameter $\lambda_1 = n_0(\delta + \xi)^2/\gamma^2$. The effectiveness of a control chart in detecting a process change can be measured by the average run length (ARL), the speed with which the chart detects process shifts. The ARL of the proposed chart is easily determined because the number of samples before a signal is a geometrically distributed random variable with parameter $1 - P$, that is, $ARL = 1/(1 - P)$. The performance of the proposed chart is shown to be better than that of the joint $\bar{X}$ and $R$ charts. Furthermore, if the TSS chi-square chart is used for monitoring diameters, volumes, weights, etc., appropriate devices such as go/no-go gauges can be used to decide whether the sampling should go to the second stage. When the process is stable and the joint $\bar{X}$ and $R$ charts are in use, monitoring becomes monotonous because an $\bar{X}$ or $R$ value rarely falls outside the control limits. The natural consequence is that the user pays less and less attention to the steps required to obtain the $\bar{X}$ and $R$ values; in some cases, this lack of attention can result in serious mistakes. The TSS chi-square chart has the advantage that most samplings are interrupted, so most of the time the user works with attributes. Our experience shows that inspecting one item by attribute is much less monotonous than measuring four or five items at each sampling.
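
A numeric sketch of the ARL computation described above (Python with scipy; the chart parameters $n_0$, $w_0$, $d$, $k_{Chi}$ are invented for illustration, and the out-of-control $P_2$ is simplified by taking $\xi = d$, i.e. conditioning on the first item exiting on the shift side):

```python
from scipy.stats import norm, ncx2

n0, w0, d, k = 4, 1.0, 1.0, 25.0     # chart parameters (invented)

def arl(delta, gamma):
    """ARL of the TSS chi-square chart for a mean shift delta (in sigma0
    units) and variance inflation gamma (delta = 0, gamma = 1: in control)."""
    # Stage 1: first item within the warning limits -> sampling interrupted.
    p1 = norm.cdf((w0 - delta) / gamma) - norm.cdf((-w0 - delta) / gamma)
    # Stage 2: W <= k * sigma0^2, where W / sigma1^2 ~ ncx2(n0, lambda).
    # Simplification: xi = d (exact when delta = 0, since (0 +/- d)^2 = d^2).
    lam = n0 * (delta + d) ** 2 / gamma ** 2
    p2 = ncx2.cdf(k / gamma ** 2, n0, lam)
    p = p1 + p2 - p1 * p2                # no-signal probability per sample
    return 1.0 / (1.0 - p)               # geometric run length

print(f"in-control ARL              : {arl(0.0, 1.0):9.1f}")
print(f"mean shift delta = 1        : {arl(1.0, 1.0):9.1f}")
print(f"variance gamma = 1.5        : {arl(0.0, 1.5):9.1f}")
print(f"both (delta = 1, gamma = 1.5): {arl(1.0, 1.5):9.1f}")
```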

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a region-based methodology for segmenting Digital Elevation Models (DEMs) obtained from laser scanning data. The methodology is based on two sequential techniques: a recursive splitting technique using the quadtree structure, followed by a region-merging technique using the Markov Random Field model. The recursive splitting technique starts by dividing the DEM into homogeneous regions. However, due to slight height differences in the DEM, region fragmentation can be relatively high. To minimize the fragmentation, a region-merging technique based on the Markov Random Field model is applied to the previously segmented data. The resulting regions are first structured using the so-called Region Adjacency Graph: each node represents a region of the segmented DEM, and two nodes are connected if the corresponding regions share a common boundary. It is then assumed that the random variable related to each node follows the Markov Random Field model. This hypothesis allows the derivation of the posterior probability distribution function, whose solution is obtained by Maximum a Posteriori estimation. Regions presenting a high probability of similarity are merged. Experiments carried out with laser scanning data showed that the methodology separates the objects in the DEM with little fragmentation.
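
A schematic sketch of the Region Adjacency Graph construction and a greedy merge step (Python; the label image, the height statistics, and the similarity test are invented stand-ins, with a simple mean-difference threshold in place of the paper's MAP/MRF criterion):

```python
import numpy as np

# A tiny labeled "DEM segmentation": region labels plus per-region mean heights.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 2],
                   [3, 3, 2, 2],
                   [3, 3, 2, 2]])
mean_h = {0: 10.0, 1: 10.4, 2: 18.0, 3: 10.2}   # invented heights

# Region Adjacency Graph: nodes are regions; an edge joins two regions that
# share a boundary (4-connected pixel neighbors with differing labels).
edges = set()
for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
    if a != b:
        edges.add((min(a, b), max(a, b)))
for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
    if a != b:
        edges.add((min(a, b), max(a, b)))
print("RAG edges:", sorted(edges))

# Greedy merging: join adjacent regions with similar mean heights (a crude
# stand-in for "high posterior probability of similarity").
threshold = 0.5
parent = {r: r for r in mean_h}
def find(r):
    while parent[r] != r:
        r = parent[r]
    return r

for a, b in sorted(edges):
    ra, rb = find(a), find(b)
    if ra != rb and abs(mean_h[ra] - mean_h[rb]) < threshold:
        parent[rb] = ra                          # merge rb into ra
        mean_h[ra] = (mean_h[ra] + mean_h[rb]) / 2

print("labels after merging:\n", np.vectorize(find)(labels))
```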

Relevance:

60.00%

Publisher:

Abstract:

In this paper, a framework based on the decomposition of the first-order optimality conditions is described and applied to solve the Probabilistic Power Flow (PPF) problem in a coordinated but decentralized way in the context of multi-area power systems. The purpose of the decomposition framework is to solve the problem by iteratively solving smaller subproblems associated with each area of the power system. This strategy allows the probabilistic analysis of the variables of interest in a particular area without explicit knowledge of the network data of the other interconnected areas; only border information related to the tie-lines between areas needs to be exchanged. An efficient method for probabilistic analysis, considering uncertainty in the n system loads, is applied. The proposal is to use a particular case of the point estimate method, known as the Two-Point Estimate Method (TPM), rather than the traditional approach based on Monte Carlo simulation. The main feature of the TPM is that it requires solving only 2n power flows to obtain the behavior of any random variable. An iterative coordination algorithm between areas is also presented. This algorithm solves the multi-area PPF problem in a decentralized way, ensures the independent operation of each area, and integrates the decomposition framework and the TPM appropriately. The IEEE RTS-96 system is used to demonstrate the operation and effectiveness of the proposed approach, and Monte Carlo simulations are used to validate the results. © 2011 IEEE.
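
A minimal sketch of the 2n-point TPM for symmetric (zero-skewness) inputs (Python; the "power flow" is a toy nonlinear function standing in for a real solver, and all numbers are invented). Each uncertain load is perturbed in turn to $\mu_i \pm \sqrt{n}\,\sigma_i$ with weight $1/(2n)$, the deterministic model is evaluated $2n$ times, and the output moments follow:

```python
import numpy as np

def power_flow(loads):
    """Toy stand-in for a deterministic power-flow solve: returns one
    output quantity (say, a tie-line flow) as a nonlinear load response."""
    return np.sqrt(loads @ loads) + 0.1 * loads.prod()

mu = np.array([1.0, 0.8, 1.2])       # mean loads (invented)
sig = np.array([0.10, 0.05, 0.15])   # load standard deviations (invented)
n = len(mu)

# Two-point scheme, zero-skewness case: locations mu_i +/- sqrt(n)*sig_i,
# each with weight 1/(2n); 2n deterministic evaluations in total.
m1 = m2 = 0.0
for i in range(n):
    for s in (+1.0, -1.0):
        x = mu.copy()
        x[i] += s * np.sqrt(n) * sig[i]
        y = power_flow(x)
        m1 += y / (2 * n)            # first raw moment of the output
        m2 += y**2 / (2 * n)         # second raw moment

print(f"TPM: E[Y] ~= {m1:.4f}, Std[Y] ~= {np.sqrt(m2 - m1**2):.4f}")

# Monte Carlo check with independent Gaussian loads (what the TPM avoids).
rng = np.random.default_rng(7)
Y = np.array([power_flow(mu + sig * rng.standard_normal(n))
              for _ in range(200_000)])
print(f"MC : E[Y] ~= {Y.mean():.4f}, Std[Y] ~= {Y.std():.4f}")
```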