912 results for Stable distributions


Relevance: 100.00%

Abstract:

We present a method of generating exact and explicit forms of one-sided, heavy-tailed Lévy stable probability distributions g_α(x), 0 ≤ x < ∞, 0 < α < 1. We demonstrate that knowledge of one such distribution g_α(x) suffices to obtain exactly g_{α^p}(x), p = 2, 3, .... Similarly, from known g_α(x) and g_β(x), 0 < α, β < 1, we obtain g_{αβ}(x). The method is based on the construction of an integral operator, called the Lévy transform, which implements the above operations. For rational α = l/k with l < k, we reproduce in this manner many of the recently obtained exact results for g_{l/k}(x). This approach can also be recast as an application of the Efros theorem for generalized Laplace convolutions. It relies solely on efficient definite integration. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4709443]
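For the simplest rational case, α = 1/2, the one-sided stable density has a classical closed form (the Lévy–Smirnov law). As a quick numerical sanity check — using the standard textbook formula, not the paper's own derivation — one can verify its normalization and its Laplace transform exp(-√s):

```python
import numpy as np
from scipy.integrate import quad

# Closed form for the one-sided stable density with alpha = 1/2
# (Levy-Smirnov law), in the parametrization whose Laplace transform
# is exp(-s**alpha). This is a standard formula, used here only for
# illustration; it is not taken from the paper itself.
def g_half(x):
    return x**-1.5 * np.exp(-1.0 / (4.0 * x)) / (2.0 * np.sqrt(np.pi))

# The density integrates to one over [0, inf)...
total, _ = quad(g_half, 0, np.inf)

# ...and its Laplace transform at s reproduces exp(-sqrt(s)).
def laplace(s):
    val, _ = quad(lambda x: np.exp(-s * x) * g_half(x), 0, np.inf)
    return val

print(total)                                   # ~1.0
print(laplace(2.0), np.exp(-np.sqrt(2.0)))     # the two agree
```

The same kind of definite integration is what the Lévy-transform construction reduces to for general rational α.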

Relevance: 70.00%

Abstract:

Let (X_i) be a sequence of i.i.d. random variables, and let N be a geometric random variable independent of (X_i). Geometric stable distributions are weak limits of (normalized) geometric compounds, S_N = X_1 + ... + X_N, when the mean of N converges to infinity. By an appropriate representation of the individual summands in S_N we obtain a series representation of the limiting geometric stable distribution. In addition, we study the asymptotic behavior of the partial sum process S_N(t) = Σ_{i=1}^{[Nt]} X_i, and derive series representations of the limiting geometric stable process and the corresponding stochastic integral. We also obtain strong invariance principles for stable and geometric stable laws.
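For finite-variance summands (the α = 2 case), the geometric stable limit is the Laplace distribution — a standard fact about geometric compounds, not a result of this paper. A minimal simulation, with illustrative choices of standard normal X_i and N ~ Geometric(p), shows the normalized compound √p·S_N approaching a unit-variance limit:

```python
import numpy as np

# For i.i.d. mean-zero, unit-variance X_i and N ~ Geometric(p) independent
# of (X_i), the normalized compound sqrt(p) * S_N converges weakly
# (as p -> 0) to a Laplace law with variance p * E[N] * Var(X_1) = 1.
rng = np.random.default_rng(0)
p, n_samples = 0.01, 20000

N = rng.geometric(p, size=n_samples)                   # number of summands
S = np.array([rng.standard_normal(n).sum() for n in N])
Z = np.sqrt(p) * S                                     # normalized compound

print(Z.mean())   # close to 0
print(Z.var())    # close to 1 (the Laplace limit has unit variance)
```

The paper's series representations describe the limit (and the limiting process) directly, without simulation.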

Relevance: 60.00%

Abstract:

The Central Limit Theorem and the Law of Large Numbers are among the most important results in probability theory. The former seeks conditions under which [formula] converges in distribution to the normal distribution with parameters 0 and 1 as n tends to infinity, where S_n is the sum of n independent random variables. The latter establishes conditions under which [formula] converges to zero, or equivalently, under which [formula] converges to the expectation of the random variables when they are identically distributed. In both cases the sequences considered are of the form [formula], where [formula] and [formula] are real constants. Characterizing the possible limits of such sequences is one of the goals of this dissertation, since they do not converge exclusively to a degenerate random variable or to one with a normal distribution, as in the Law of Large Numbers and the Central Limit Theorem, respectively. We are thus led naturally to the study of infinitely divisible and stable distributions and their limit theorems, which is the main objective of this dissertation. The proofs of the theorems rely chiefly on Lyapunov's method, which consists in analyzing the convergence of the sequence of characteristic functions associated with the random variables. Accordingly, this work also presents a detailed treatment of characteristic functions.
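For reference, the characteristic-function machinery behind these limit theorems rests on two classical representations (standard statements, not results of the dissertation itself): the Lévy–Khintchine formula for infinitely divisible laws, and the stable characteristic function for α ≠ 1:

```latex
% Levy-Khintchine: X is infinitely divisible iff its characteristic
% function has the form
\varphi(t) = \exp\!\left( i b t - \frac{\sigma^{2} t^{2}}{2}
  + \int_{\mathbb{R}\setminus\{0\}}
    \left( e^{itx} - 1 - itx\,\mathbf{1}_{\{|x| \le 1\}} \right) \nu(dx) \right),
% with b real, sigma >= 0, and a Levy measure nu satisfying
% int min(1, x^2) nu(dx) < infinity.

% Stable laws (alpha != 1) form the subclass with characteristic function
\varphi(t) = \exp\!\left( i b t
  - c\,|t|^{\alpha} \left( 1 - i \beta\,\mathrm{sgn}(t) \tan\tfrac{\pi\alpha}{2} \right) \right),
% where 0 < alpha <= 2, -1 <= beta <= 1, c >= 0.
```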

Relevance: 60.00%

Abstract:

A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problem, in particular the problems raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameters under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
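The Monte Carlo test idea at the heart of the paper can be sketched in a few lines. The toy version below uses a hypothetical Goldfeld-Quandt-type variance-ratio statistic (my own simplified variant) with i.i.d. Gaussian errors; because the statistic is invariant to the regression coefficients and the error scale, re-simulating it under the null yields an exact p-value even with a small number of replications B:

```python
import numpy as np

rng = np.random.default_rng(1)

def gq_stat(y, X):
    """Toy Goldfeld-Quandt-type ratio of residual variances across two
    halves of the sample (the paper's procedures are more elaborate)."""
    n = len(y)
    def ssr(idx):
        b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        r = y[idx] - X[idx] @ b
        return r @ r
    lo, hi = np.arange(n // 2), np.arange(n // 2, n)
    return ssr(hi) / ssr(lo)

def mc_pvalue(y, X, stat, B=199):
    """Monte Carlo p-value: the statistic is pivotal under i.i.d. Gaussian
    errors, so we re-simulate it B times under the null (beta = 0, sigma = 1)
    and rank the observed value among the replications."""
    t_obs = stat(y, X)
    t_null = np.array([stat(rng.standard_normal(len(y)), X) for _ in range(B)])
    return (1 + np.sum(t_null >= t_obs)) / (B + 1)

n = 60
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)   # homoskedastic null
pval = mc_pvalue(y, X, gq_stat)
print(pval)   # a valid p-value in (0, 1]
```

The resulting test has exactly level α for any B with (B + 1)·α an integer, which is the "provably exact" property the abstract refers to.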

Relevance: 60.00%

Abstract:

After pointing out the difference between normal and anomalous diffusion, we consider a hadron resonance cascade (HRC) model simulation of particle emission at RHIC and point out that rescattering in an expanding hadron resonance gas leads to a heavy tail in the source distribution. The results are compared to recent PHENIX measurements of the tail of the particle-emitting source in Au+Au collisions at RHIC. In this context, we show how one can experimentally distinguish the anomalous diffusion of hadrons from a second-order QCD phase transition.
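The distinction between normal and anomalous diffusion can be illustrated with a toy random walk, entirely separate from the HRC simulation described above: Gaussian steps spread like t^(1/2), while Cauchy (α = 1 stable) steps produce a Lévy flight whose width grows like t.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy contrast, only to illustrate the scaling difference behind the
# heavy-tail analysis: compare the interquartile range of walker
# positions at two times t1 < t2.
walkers, t1, t2 = 2000, 10, 1000

def iqr(a):
    q75, q25 = np.percentile(a, [75, 25])
    return q75 - q25

gauss = rng.standard_normal((walkers, t2)).cumsum(axis=1)   # normal diffusion
levy = rng.standard_cauchy((walkers, t2)).cumsum(axis=1)    # Levy flight

print(iqr(gauss[:, -1]) / iqr(gauss[:, t1 - 1]))   # ~ (t2/t1)**0.5 = 10
print(iqr(levy[:, -1]) / iqr(levy[:, t1 - 1]))     # ~ t2/t1 = 100
```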

Relevance: 60.00%

Abstract:

Many seemingly disparate approaches to marginal modeling have been developed in recent years. We demonstrate that many current approaches to marginal modeling of correlated binary outcomes produce likelihoods equivalent to those of the copula-based models proposed herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed-effects estimation and interpretation in the analysis of correlated binary data. Moreover, we propose a nomenclature and a set of model relationships that substantially elucidate the complex area of marginalized models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate the concepts.
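The latent-threshold construction can be sketched directly. The snippet below, with illustrative parameter values of my own choosing (Gaussian copula, exchangeable correlation), generates correlated binary outcomes while preserving the marginal probabilities:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Latent-threshold construction: Z ~ N(0, R) with exchangeable correlation
# rho, and Y_j = 1{Z_j <= c}, so each margin is Bernoulli(Phi(c))
# regardless of rho, while the Y_j remain positively correlated.
rho, pi_target = 0.5, 0.3
c = norm.ppf(pi_target)                      # threshold giving P(Y_j = 1) = 0.3
R = np.full((4, 4), rho)
np.fill_diagonal(R, 1.0)

Z = rng.multivariate_normal(np.zeros(4), R, size=50000)
Y = (Z <= c).astype(int)

print(Y.mean(axis=0))                        # each margin close to 0.3
print(np.corrcoef(Y, rowvar=False)[0, 1])    # positive, but below rho
```

Note that the induced binary correlation is smaller than the latent rho; this attenuation is one reason the latent and marginal parameterizations must be kept distinct, as the paper's nomenclature does.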

Relevance: 60.00%

Abstract:

* Research supported by NATO GRANT CRG 900 798 and by Humboldt Award for U.S. Scientists.

Relevance: 60.00%

Abstract:

2000 Mathematics Subject Classification: 60G70, 60F05.

Relevance: 40.00%

Abstract:

In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and from these we derive exact confidence sets for the unknown tail-area and asymmetry parameters of the stable error distribution. We also derive tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness of fit and lead to fewer rejections of the efficiency hypothesis.
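Stable variates such as those underlying this error model can be simulated with the classical Chambers-Mantel-Stuck algorithm. The sketch below (an illustration of the heavy tails that motivate the approach, not the paper's inference procedure) draws standard symmetric α-stable returns and checks that their tail mass far exceeds the Gaussian value:

```python
import numpy as np

rng = np.random.default_rng(3)

def symmetric_stable(alpha, size, rng):
    """Chambers-Mantel-Stuck sampler for standard symmetric alpha-stable
    variates (beta = 0, alpha != 1); a standard algorithm, shown here only
    to illustrate the heavy tails of stable return models."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))

x = symmetric_stable(1.7, 20000, rng)

# Tail mass beyond 4 "standard" units: on the order of a few percent,
# versus ~6e-5 (two-sided) for a standard normal -- the excess kurtosis
# that drives the rejection of normality in stock returns.
print((np.abs(x) > 4).mean())
```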