944 results for Rayleigh Random Variables
Abstract:
Exercises and solutions in LaTeX
Abstract:
Lecture notes in PDF
Abstract:
Exercises and solutions in PDF
Abstract:
A random variable is a mathematical function that assigns a numerical value to each of the possible outcomes of a random event. If the number of these outcomes is countable, the set is discrete; conversely, when the number of outcomes is infinite and uncountable, the set is continuous. The purpose of a random variable is to enable probabilistic and statistical analysis by establishing a numerical assignment that identifies each of the outcomes that may be obtained in a given event. The expected value and the variance are the parameters through which the behaviour of the data gathered in an experimental setting can be characterized: the expected value locates the centre of the probability distribution, while the variance describes how the data spread around it. In addition, probability distributions are numerical functions associated with a random variable that describe the probability assigned to each element of the sample space; they are characterized by a set of parameters that determine their functional behaviour, that is, each parameter of a distribution supplies information about the random experiment with which it is associated. The document closes by connecting random variables to decision-making processes under risk and uncertainty.
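As a concrete illustration of these two parameters, here is a minimal sketch (a hypothetical fair-die example, not taken from the document) computing the expected value and variance of a discrete random variable:

import numpy as np

# Hypothetical discrete random variable: a fair six-sided die.
values = np.arange(1, 7)
probs = np.full(6, 1 / 6)

# Expected value: the point around which the distribution is centred.
mean = np.sum(values * probs)               # E[X] = sum x_i * p_i = 3.5

# Variance: how the outcomes spread around the expected value.
var = np.sum((values - mean) ** 2 * probs)  # Var[X] = E[(X - E[X])^2] = 35/12

print(f"E[X] = {mean}, Var[X] = {var:.4f}")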
Abstract:
This paper studies wage differentials by comparing the wage distributions of the seven main Colombian cities over the period 2001-2005, using data from the Encuesta Continua de Hogares. Significant differences are detected and explained in light of human capital theory and labour market segmentation theory, by estimating wage equations from workers' socioeconomic characteristics together with region- and industry-specific effects; these effects turn out to be significant, providing evidence of labour market segmentation in Colombia. The component of wages that is specific to region and economic activity is explained by macroeconomic variables such as economic growth, sectoral factor endowments, the cost of living, and regional unemployment.
Abstract:
Diebold and Lamb (1997) argue that since the long-run elasticity of supply derived from the Nerlovian model entails a ratio of random variables, it is without moments. They propose minimum expected loss estimation to correct this problem but, in so doing, ignore the fact that a non-white-noise error is implicit in the model. We show that, as a consequence, the estimator is biased, and we demonstrate that Bayesian estimation that fully accounts for the error structure is preferable.
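The moment problem is easy to see numerically: the ratio of two independent standard normal variables is standard Cauchy, which has no finite moments, so its running sample mean never settles. A minimal illustrative sketch (not the authors' estimator):

import numpy as np

rng = np.random.default_rng(0)

# The ratio of two independent standard normals is standard Cauchy,
# a distribution with no finite moments: the sample mean fails to
# converge as n grows.
for n in (10**3, 10**5, 10**7):
    ratio = rng.standard_normal(n) / rng.standard_normal(n)
    print(n, ratio.mean())   # wanders instead of settling near a limit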
Abstract:
Higher-order cumulant analysis is applied to the blind equalization of linear time-invariant (LTI) nonminimum-phase channels. The channel model is moving-average based. To identify the moving-average parameters of the channels, a higher-order cumulant fitting approach is adopted, in which a novel relay algorithm is proposed to obtain the global solution. In addition, the technique incorporates model order determination. The transmitted data are considered to be independent and identically distributed random variables over some discrete finite set (e.g., the set {±1, ±3}). A transformation scheme is suggested so that third-order cumulant analysis can be applied to this type of data. Simulation examples verify the feasibility and potential of the algorithm. Performance is compared with that of the noncumulant-based Sato scheme in terms of steady-state MSE and convergence rate.
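As background for the cumulant-fitting idea, the sketch below estimates third-order cumulants of a toy MA channel output and compares them with the closed form C3(t1, t2) = gamma3 * sum_k b_k b_{k+t1} b_{k+t2}. The channel coefficients and the skewed driving noise are illustrative assumptions; the paper's {±1, ±3} symmetric data require the transformation scheme mentioned above, which is not reproduced here.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MA(2) nonminimum-phase channel (coefficients chosen for
# illustration; not taken from the paper).
b = np.array([1.0, -2.5, 1.0])

# i.i.d. skewed driving noise so that third-order cumulants are nonzero.
e = rng.exponential(1.0, 200_000) - 1.0     # zero mean, E[e^3] = 2
x = np.convolve(e, b, mode="valid")         # x(t) = sum_k b_k e(t - k)

def third_order_cumulant(x, t1, t2):
    # Sample estimate of C3(t1, t2) = E[x(t) x(t+t1) x(t+t2)], zero-mean x.
    x = x - x.mean()
    m = len(x) - max(t1, t2, 0)
    return np.mean(x[:m] * x[t1:t1 + m] * x[t2:t2 + m])

gamma3 = 2.0
for t1, t2 in [(0, 0), (1, 1), (0, 1)]:
    theory = gamma3 * sum(b[k] * b[k + t1] * b[k + t2]
                          for k in range(len(b)) if k + max(t1, t2) < len(b))
    print((t1, t2), third_order_cumulant(x, t1, t2), theory)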
Abstract:
A set of random variables is exchangeable if its joint distribution function is invariant under permutation of the arguments. The concept of exchangeability is discussed, with a view towards potential application in evaluating ensemble forecasts. It is argued that the paradigm of ensembles being an independent draw from an underlying distribution function is probably too narrow; allowing ensemble members to be merely exchangeable might be a more versatile model. It is discussed whether established methods of ensemble evaluation need alteration under this model, with particular attention given to reliability. It turns out that the standard methodology of rank histograms can still be applied. As a first application of the exchangeability concept, it is shown that the method of minimum spanning trees to evaluate the reliability of high-dimensional ensembles is mathematically sound.
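A minimal sketch of the rank histogram check mentioned above, under the idealized assumption that the observation and the ensemble members are exchangeable draws from the same distribution (hypothetical Gaussian toy data):

import numpy as np

rng = np.random.default_rng(2)

# m exchangeable ensemble members and one verifying observation per
# forecast case, all drawn from the same distribution.
n_cases, m = 10_000, 9
ensemble = rng.normal(size=(n_cases, m))
obs = rng.normal(size=n_cases)

# Rank of the observation within each ensemble (0..m); under
# exchangeability every rank is equally likely, so the histogram is flat.
ranks = np.sum(ensemble < obs[:, None], axis=1)
counts = np.bincount(ranks, minlength=m + 1)
print(counts / n_cases)   # each entry close to 1/(m+1) = 0.1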
Abstract:
For a Lévy process ξ=(ξt)t≥0 drifting to −∞, we define the so-called exponential functional Iξ := ∫₀^∞ exp(ξt) dt. Under mild conditions on ξ, we show that the following factorization of exponential functionals holds: Iξ =d IH− × IY, where =d denotes identity in law, × stands for the product of independent random variables, H− is the descending ladder height process of ξ, and Y is a spectrally positive Lévy process with a negative mean constructed from its ascending ladder height process. As a by-product, we obtain an integral or power series representation for the law of Iξ for a large class of Lévy processes with two-sided jumps and also derive some new distributional properties. The proof of our main result relies on a fine Markovian study of a class of generalized Ornstein–Uhlenbeck processes, which is itself of independent interest. We use and refine an alternative approach to studying the stationary measure of a Markov process, which avoids some technicalities and difficulties that appear in the classical method of employing the generator of the dual Markov process.
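The general factorization is beyond a short example, but a classical special case can be checked by simulation: for ξt = 2Bt − 2μt, a Brownian motion with negative drift, Dufresne's identity gives Iξ =d 1/(2Γμ) with Γμ a Gamma(μ, 1) variable. A Monte Carlo sketch with hypothetical discretization parameters:

import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check of Dufresne's identity for xi_t = 2*B_t - 2*mu*t
# (Brownian motion with negative drift, so xi drifts to -infinity):
#   I_xi = integral_0^inf exp(xi_t) dt  =d  1 / (2 * Gamma(mu, 1)).
mu, T, dt, n_paths = 3.0, 6.0, 5e-3, 4000
n_steps = int(T / dt)

t = dt * np.arange(1, n_steps + 1)
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
xi = 2.0 * B - 2.0 * mu * t                  # drift broadcast over paths
I = np.sum(np.exp(xi), axis=1) * dt          # Riemann-sum approximation

print("MC mean of I_xi:       ", I.mean())   # theory: 1/(2*(mu-1)) = 0.25
print("mean of 1/(2*Gamma_mu):",
      (1.0 / (2.0 * rng.gamma(mu, 1.0, 200_000))).mean())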
Abstract:
Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent, since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions about the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables.
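A toy one-dimensional analogue of such a transform step, coupling a weighted prior ensemble to uniform weights by solving the discrete optimal transport problem with a generic LP solver; the observation model, cost, and solver choice are illustrative assumptions, not the paper's implementation:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

M = 10
x = rng.normal(0.0, 1.0, M)            # prior ensemble (scalar state)
y, R = 1.0, 0.5**2                     # hypothetical observation and noise

w = np.exp(-0.5 * (y - x) ** 2 / R)    # importance (likelihood) weights
w /= w.sum()

# Cost: squared distance between ensemble members.
C = (x[:, None] - x[None, :]) ** 2

# Equality constraints: row sums of T equal w, column sums equal 1/M.
A_eq = np.zeros((2 * M, M * M))
for i in range(M):
    A_eq[i, i * M:(i + 1) * M] = 1.0   # sum_j T[i, j] = w[i]
    A_eq[M + i, i::M] = 1.0            # sum_i T[i, j] = 1/M
b_eq = np.concatenate([w, np.full(M, 1.0 / M)])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
T = res.x.reshape(M, M)

# Transformed (analysis) ensemble: equally weighted again, same mean.
x_analysis = M * T.T @ x
print("posterior mean (weighted):   ", w @ x)
print("posterior mean (transformed):", x_analysis.mean())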
Abstract:
This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that is then used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
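The variance argument can be reproduced in a few lines for a directly observed scalar state; hypothetical numbers, with the perturbed and unperturbed analysis steps side by side:

import numpy as np

rng = np.random.default_rng(5)

N = 500                                   # ensemble size
x_f = rng.normal(0.0, 2.0, N)             # forecast ensemble
y, R = 1.0, 1.0                           # observation and its error variance

# Treat the observation as a random variable: one perturbed copy per
# member, with the correct statistics N(y, R).
y_pert = y + rng.normal(0.0, np.sqrt(R), N)

P_f = np.var(x_f, ddof=1)                 # ensemble (forecast) variance
K = P_f / (P_f + R)                       # Kalman gain for H = 1
x_a = x_f + K * (y_pert - x_f)

# Without perturbations the analysis spread is too small; the perturbed
# scheme matches the Kalman posterior variance P_a = (1 - K) * P_f.
x_a_noperturb = x_f + K * (y - x_f)
print("target P_a:                ", (1 - K) * P_f)
print("perturbed-obs ensemble var:", np.var(x_a, ddof=1))
print("unperturbed ensemble var:  ", np.var(x_a_noperturb, ddof=1))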
Abstract:
Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables: one stemming from an exchangeable distribution of latent values of study subjects and the other from the study subjects' response error distributions. Positive probabilities are assigned both to potentially realizable responses and to artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are assigned only to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
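A minimal sketch of the shrinkage behind the BLUP in a simple mixed model with known variance components (hypothetical data; the finite population version is not reproduced):

import numpy as np

rng = np.random.default_rng(6)

# Latent subject values b_i ~ N(mu, sigma_b^2), responses
# y_ij = b_i + e_ij with e_ij ~ N(0, sigma_e^2).
n_subjects, n_rep = 200, 4
mu, sigma_b, sigma_e = 10.0, 2.0, 3.0

b = rng.normal(mu, sigma_b, n_subjects)
y = b[:, None] + rng.normal(0.0, sigma_e, (n_subjects, n_rep))

subject_means = y.mean(axis=1)
grand_mean = y.mean()

# Shrinkage factor lambda = sigma_b^2 / (sigma_b^2 + sigma_e^2 / n_rep):
# the BLUP pulls each subject mean toward the overall mean.
lam = sigma_b**2 / (sigma_b**2 + sigma_e**2 / n_rep)
blup = grand_mean + lam * (subject_means - grand_mean)

mse_raw = np.mean((subject_means - b) ** 2)
mse_blup = np.mean((blup - b) ** 2)
print(f"lambda = {lam:.3f}, MSE raw = {mse_raw:.3f}, MSE BLUP = {mse_blup:.3f}")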
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in item response models (IRMs) and are usually called latent traits. A usual assumption for parameter estimation of an IRM, considering one group of examinees, is that the latent traits are random variables following a standard normal distribution. However, many works suggest that this assumption does not hold in many cases, and when it fails the parameter estimates tend to be biased and misleading inferences can be obtained. It is therefore important to model the distribution of the latent traits properly. In this paper we present an alternative latent-trait model based on the so-called skew-normal distribution; see Genton (2004). We use the centred parameterization proposed by Azzalini (1985), which ensures model identifiability, as pointed out by Azevedo et al. (2009b). A Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm is built for parameter estimation using an augmented data approach. A simulation study was performed to assess parameter recovery under the proposed model and estimation method, and to measure the effect of the asymmetry level of the latent trait distribution on parameter estimation. We also compare our approach with estimation methods that assume a symmetric normal distribution for the latent traits. The results indicate that our algorithm properly recovers all parameters; specifically, the greater the asymmetry level, the better the performance of our approach compared with the alternatives, mainly for small sample sizes (numbers of examinees). Furthermore, we analyze a real data set that shows indications of asymmetry in the latent trait distribution. The results obtained with our approach confirm a strong negative asymmetry of the latent trait distribution.
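A small sketch of the latent-trait distribution in question: Azzalini's skew-normal with density 2*phi(x)*Phi(alpha*x), shifted and scaled so that, as in the centred parameterization, the traits have mean 0 and variance 1 for any asymmetry parameter alpha (the value of alpha below is illustrative):

import numpy as np
from scipy.stats import skewnorm

alpha = -5.0                              # strong negative asymmetry
delta = alpha / np.sqrt(1 + alpha**2)
mu_z = delta * np.sqrt(2 / np.pi)         # mean of the standard skew-normal

xi = -mu_z / np.sqrt(1 - mu_z**2)         # location
omega = 1 / np.sqrt(1 - mu_z**2)          # scale

# Latent traits with mean 0 and variance 1 but negative skewness.
theta = skewnorm.rvs(alpha, loc=xi, scale=omega, size=100_000,
                     random_state=0)
print("mean:", theta.mean(), "var:", theta.var())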
Abstract:
We study four discrete-time stochastic systems on N that model processes of rumor spreading. The individuals involved can have either an active or a passive role, speaking up or asking for the rumor. The appetite for spreading or hearing the rumor is represented by a set of random variables whose distributions may depend on the individuals. Our goal is to understand, based on the distribution of these random variables, whether the probability of having an infinite set of individuals knowing the rumor is positive or not.
Abstract:
We consider the issue of performing residual and local influence analyses in beta regression models with varying dispersion, which are useful for modelling random variables that assume values in the standard unit interval. In such models, both the mean and the dispersion depend upon independent variables. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. An application using real data is presented and discussed.
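A fitting sketch for this class of models under the usual mean/precision parameterization, with a logit link for the mean and a log link for the dispersion; simulated toy data and a generic optimizer, not the paper's influence diagnostics:

import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulate y ~ Beta(mu*phi, (1-mu)*phi) with covariate-dependent mean
# mu and precision phi (hypothetical coefficients).
n = 500
x = rng.uniform(-1, 1, n)
z = rng.uniform(-1, 1, n)
mu = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))   # logit link for the mean
phi = np.exp(2.0 + 1.5 * z)               # log link for the precision
y = rng.beta(mu * phi, (1 - mu) * phi)

def negloglik(par):
    b0, b1, g0, g1 = par
    m = 1 / (1 + np.exp(-(b0 + b1 * x)))
    p = np.exp(g0 + g1 * z)
    a, b = m * p, (1 - m) * p
    # Negative log-likelihood of the beta density in (a, b) form.
    return -np.sum(gammaln(a + b) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
print("estimates:", fit.x)   # close to (0.5, 1.0, 2.0, 1.5)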