990 results for Stable Distribution


Relevance: 70.00%

Abstract:

In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies because of excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and exact confidence sets for the unknown tail and asymmetry parameters of the stable error distribution are derived from them. We also derive tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of unknown nuisance parameters in the stable error distribution. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
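
As a hedged illustration of the Monte Carlo test technique this abstract refers to (not the authors' actual procedure), the Python sketch below computes an exact p-value by simulating a pivotal statistic under the null with stable errors; the statistic, sample size, and fixed stable parameters (alpha, beta) are placeholder assumptions.

```python
import numpy as np
from scipy import stats

def mc_pvalue(observed_stat, simulate_stat, n_rep=99, rng=None):
    """Exact Monte Carlo p-value: with n_rep statistics simulated under
    the null, (R + 1) / (n_rep + 1) has exact level, where R counts
    simulated values >= the observed one."""
    rng = np.random.default_rng(rng)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (np.sum(sims >= observed_stat) + 1) / (n_rep + 1)

# Toy example: test a zero intercept under alpha-stable errors
# (alpha, beta fixed here; the paper instead builds confidence sets
# for these nuisance parameters).
alpha, beta, n = 1.8, 0.0, 120
rng = np.random.default_rng(0)
returns = 0.05 + stats.levy_stable.rvs(alpha, beta, size=n, random_state=rng)
obs = abs(np.mean(returns)) / stats.iqr(returns)  # scale-free placeholder statistic

def sim(rng):
    e = stats.levy_stable.rvs(alpha, beta, size=n, random_state=rng)
    return abs(np.mean(e)) / stats.iqr(e)

print("MC p-value:", mc_pvalue(obs, sim))
```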

Relevance: 70.00%

Abstract:

Let (X_i) be a sequence of i.i.d. random variables, and let N be a geometric random variable independent of (X_i). Geometric stable distributions are weak limits of (normalized) geometric compounds, S_N = X_1 + · · · + X_N, as the mean of N converges to infinity. By an appropriate representation of the individual summands in S_N we obtain a series representation of the limiting geometric stable distribution. In addition, we study the asymptotic behavior of the partial sum process S_N(t) = Σ_{i=1}^{[Nt]} X_i, and derive series representations of the limiting geometric stable process and the corresponding stochastic integral. We also obtain strong invariance principles for stable and geometric stable laws.
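
For intuition (my illustration, not from the paper): with finite-variance summands, the normalized geometric compound converges to the Laplace law, which is the geometric stable distribution with index α = 2. A minimal simulation, assuming standard normal X_i:

```python
import numpy as np

def geometric_compound(p, n_samples, rng):
    """Normalized geometric compound sqrt(p) * (X_1 + ... + X_N) with
    X_i ~ N(0, 1) i.i.d. and N ~ Geometric(p), so E[N] = 1/p."""
    N = rng.geometric(p, size=n_samples)          # E[N] -> infinity as p -> 0
    sums = np.array([rng.standard_normal(n).sum() for n in N])
    return np.sqrt(p) * sums                      # variance-stabilizing normalization

rng = np.random.default_rng(1)
samples = geometric_compound(p=1e-3, n_samples=20_000, rng=rng)

# The unit-variance Laplace limit f(x) = exp(-sqrt(2)|x|)/sqrt(2) has
# excess kurtosis 3, which the empirical value should approach.
x = samples / samples.std()
print(f"empirical excess kurtosis: {np.mean(x**4) - 3:.2f} (Laplace: 3)")
```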

Relevance: 60.00%

Abstract:

Intuitively, any 'bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. The term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation in case the queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate) the default query model was replaced by the stable distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
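
The following is a minimal sketch of the central construction, assuming a toy windowed co-occurrence matrix rather than the paper's exact chain: build an ergodic term-transition matrix and take its stationary distribution as the query/document model.

```python
import numpy as np

def cooccurrence_chain(tokens, window=2):
    """Build a row-stochastic term-to-term transition matrix from
    co-occurrence counts within a sliding window (toy construction)."""
    vocab = sorted(set(tokens))
    idx = {t: i for i, t in enumerate(vocab)}
    C = np.ones((len(vocab), len(vocab)))   # add-one smoothing keeps the chain ergodic
    for i, t in enumerate(tokens):
        for u in tokens[max(0, i - window): i + window + 1]:
            if u != t:
                C[idx[t], idx[u]] += 1
    return vocab, C / C.sum(axis=1, keepdims=True)

def stationary(P, tol=1e-12):
    """Power iteration: for an ergodic chain the limit is unique and
    independent of the initial distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt

tokens = "the stable distribution of the query models the query".split()
vocab, P = cooccurrence_chain(tokens)
for term, prob in zip(vocab, stationary(P)):
    print(f"{term:12s} {prob:.3f}")
```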

Relevance: 60.00%

Abstract:

Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. The presence of long memory, on the other hand, contradicts the efficient market hypothesis and remains a subject of debate. These difficulties pose challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. The method is based on identifying the scaling of the q-th-order moments and generalises standard detrended fluctuation analysis (DFA), which uses only the second moment, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory, and heavy tails are pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure; imposing appropriate conditions on this measure yields short memory or long memory in the dynamics of the solution. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameters of these models are estimated via least squares, and the models are applied to the AMEX stock prices established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely a continuous-time version of the Gauss-Whittle method, and are applied to the exchange rates and electricity prices of Part I with the aim of confirming the long-range dependence established by MF-DFA.

The third part of the thesis applies the results of Parts I and II to characterise and classify financial markets, paying attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in two- and three-dimensional data spaces and use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of the probability density. A method using empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia, with comparisons against the results obtained from R/S analysis, the periodogram method and MF-DFA. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
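
For a flavour of the detrending idea behind (MF-)DFA, here is a minimal standard DFA, the q = 2 special case; this is an illustrative sketch, not the thesis's MF-DFA implementation.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Standard DFA: integrate the series, split into windows of size s,
    detrend each window with a degree-`order` polynomial, and return the
    RMS fluctuation F(s); the slope of log F vs log s estimates the
    Hurst exponent H (H > 0.5 suggests long memory)."""
    y = np.cumsum(x - np.mean(x))           # profile (integrated series)
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        resid = [seg - np.polyval(np.polyfit(t, seg, order), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.array(F)

rng = np.random.default_rng(2)
x = rng.standard_normal(10_000)             # i.i.d. noise: expect H near 0.5
scales = np.array([16, 32, 64, 128, 256, 512])
H, _ = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)
print(f"estimated H: {H:.2f}")
```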

Relevance: 60.00%

Abstract:

Accurately characterizing the time-varying interference caused to the primary users is essential in ensuring a successful deployment of cognitive radios (CR). We show that the aggregate interference at the primary receiver (PU-Rx) from multiple, randomly located cognitive users (CUs) is well modeled as a shifted lognormal random process, which is more accurate than the lognormal and Gaussian process models considered in the literature, even for a relatively dense deployment of CUs. It also compares favorably with the asymptotically exact stable and symmetric truncated stable distribution models, except at high CU densities. Our model accounts for the effect of imperfect spectrum sensing, which depends on the path-loss, shadowing, and small-scale fading of the link from the primary transmitter to the CU; the interweave and underlay modes of CR operation, which determine the transmit powers of the CUs; and time-correlated shadowing and fading of the links from the CUs to the PU-Rx. It leads to expressions for the probability distribution function, level crossing rate, and average exceedance duration. The impact of cooperative spectrum sensing is also characterized. We validate the model by applying it to redesign the primary exclusive zone to account for the time-varying nature of interference.
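
A rough Monte Carlo sketch of the modelling idea (aggregate interference from randomly placed CUs, moment-matched to a shifted lognormal); the propagation constants, keep-out radius, and crude shift choice are illustrative assumptions, not the paper's fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def aggregate_interference(density, radius, path_loss_exp, shadow_db, n_drops):
    """Sum received powers from Poisson-placed interferers in an annulus
    (keep-out of 0.1 * radius), with lognormal shadowing (toy model)."""
    agg = np.empty(n_drops)
    for k in range(n_drops):
        n = rng.poisson(density * np.pi * radius**2)
        r = radius * np.sqrt(rng.uniform(0.01, 1.0, n))       # uniform in area
        shadow = 10 ** (shadow_db * rng.standard_normal(n) / 10)
        agg[k] = np.sum(shadow * r ** (-path_loss_exp))
    return agg

I = aggregate_interference(density=1e-3, radius=500.0,
                           path_loss_exp=3.5, shadow_db=6.0, n_drops=5000)

# Moment-match a shifted lognormal: I ~ shift + LogNormal(mu, sigma).
# The shift is set crudely from the sample minimum here; the paper's
# fitting is more careful.
shift = 0.9 * I.min()
logI = np.log(I - shift)
print(f"shift={shift:.3g}, mu={logI.mean():.2f}, sigma={logI.std():.2f}")
```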

Relevance: 60.00%

Abstract:

The stochastic nature of oil price fluctuations is investigated over a twelve-year period, using data from an existing database (the USA Energy Information Administration database, available online). We evaluate the scaling exponents of the fluctuations by employing different statistical analysis methods, namely rescaled range analysis (R/S), scaled windowed variance analysis (SWV) and the generalized Hurst exponent (GH) method. Relying on the scaling exponents obtained, we apply a rescaling procedure to investigate the complex characteristics of the probability density functions (PDFs) dominating oil price fluctuations. It is found that the PDFs exhibit scale invariance and in fact collapse onto a single curve when increments are measured over microscales (typically less than 30 days). The time evolution of the distributions is well fitted by a Lévy-type stable distribution. The relevance of a Lévy distribution is made plausible by a simple model of nonlinear transfer. Our results also exhibit a degree of multifractality as the PDFs change and converge toward a Gaussian distribution at the macroscales.
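
A minimal sketch of the generalized Hurst exponent (GH) estimator mentioned above, under the usual scaling ansatz E|x(t+τ) − x(t)|^q ∝ τ^(qH(q)); illustrative only, not the authors' code.

```python
import numpy as np

def generalized_hurst(x, q=1, taus=range(1, 20)):
    """Estimate H(q) from the scaling of the q-th absolute moment of
    increments: E|x(t+tau) - x(t)|^q ~ tau^(q * H(q))."""
    taus = np.asarray(list(taus))
    moments = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])
    slope, _ = np.polyfit(np.log(taus), np.log(moments), 1)
    return slope / q

rng = np.random.default_rng(4)
prices = np.cumsum(rng.standard_normal(5000))    # random walk: H(q) near 0.5
print(f"H(1) = {generalized_hurst(prices, q=1):.2f}")
print(f"H(2) = {generalized_hurst(prices, q=2):.2f}")
# For a multifractal series H(q) varies with q; a monofractal has a single H.
```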

Relevance: 60.00%

Abstract:

A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (where relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable, or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
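
To make the MC test idea concrete, here is a hedged sketch applying it to the Goldfeld-Quandt statistic under Gaussian errors, a simplified stand-in for the paper's battery of tests.

```python
import numpy as np

def goldfeld_quandt(resid, split):
    """GQ statistic: ratio of residual variances in the two subsamples."""
    return np.var(resid[split:]) / np.var(resid[:split])

def mc_test(y, X, n_rep=99, seed=0):
    """Monte Carlo p-value for the GQ statistic: the statistic built from
    least-squares residuals is scale-invariant and pivotal under i.i.d.
    Gaussian errors, so simulating it under the null gives an exact test."""
    rng = np.random.default_rng(seed)
    n, split = len(y), len(y) // 2
    M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # residual maker
    obs = goldfeld_quandt(M @ y, split)
    sims = np.array([goldfeld_quandt(M @ rng.standard_normal(n), split)
                     for _ in range(n_rep)])
    return (np.sum(sims >= obs) + 1) / (n_rep + 1)

rng = np.random.default_rng(5)
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)   # homoskedastic null
print("MC p-value:", mc_test(y, X))
```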

Relevance: 60.00%

Abstract:

We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected values under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
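
For concreteness, a small sketch of the empirical multivariate skewness and kurtosis criteria (Mardia's statistics); the plain centring and covariance standardization here is an assumption, not the paper's residual standardization.

```python
import numpy as np

def mardia(U):
    """Mardia's multivariate skewness and kurtosis for an n x p sample U."""
    n, p = U.shape
    Z = U - U.mean(axis=0)
    S_inv = np.linalg.inv(Z.T @ Z / n)
    D = Z @ S_inv @ Z.T                    # Mahalanobis cross-products d_ij
    skew = np.mean(D ** 3)                 # b_{1,p}: average of d_ij^3
    kurt = np.mean(np.diag(D) ** 2)        # b_{2,p}: average of d_ii^2
    return skew, kurt

rng = np.random.default_rng(6)
n, p = 300, 4
gauss = rng.standard_normal((n, p))
heavy = rng.standard_t(df=3, size=(n, p))  # heavy-tailed alternative
for name, U in [("Gaussian", gauss), ("Student t(3)", heavy)]:
    s, k = mardia(U)
    # Under normality b_{2,p} is near p(p+2) = 24 here; heavy tails inflate it.
    print(f"{name:12s} skewness={s:.2f} kurtosis={k:.2f}")
```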

Relevance: 60.00%

Abstract:

The attached file was created with Scientific WorkPlace LaTeX.

Relevance: 60.00%

Abstract:

We test for departures from normally, independently and identically distributed (NIID) log returns against alternative hypotheses under which log returns are self-affine and either long-range dependent or drawn randomly from an L-stable distribution with infinite higher-order moments. The finite-sample performance of estimators of the two forms of self-affinity is explored in a simulation study. In contrast to rescaled range analysis and other conventional estimation methods, the variant of fluctuation analysis that considers finite sample moments only is able to identify both forms of self-affinity. When log returns are self-affine and long-range dependent under the alternative hypothesis, however, rescaled range analysis has higher power than fluctuation analysis. The techniques are illustrated by means of an analysis of the daily log returns for the indices of 11 stock markets of developed countries. Several of the smaller stock markets by capitalization exhibit evidence of long-range dependence in log returns.
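
A compact sketch of classical rescaled range (R/S) analysis as referenced above (textbook construction, not the authors' implementation):

```python
import numpy as np

def rescaled_range(x, sizes):
    """Classical R/S: for each window size n, take the range of the
    cumulative mean-adjusted sums divided by the window's std dev, and
    average over windows; log R/S vs log n has slope H."""
    rs = []
    for n in sizes:
        windows = x[: (len(x) // n) * n].reshape(-1, n)
        vals = []
        for w in windows:
            z = np.cumsum(w - w.mean())
            s = w.std()
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        rs.append(np.mean(vals))
    return np.array(rs)

rng = np.random.default_rng(7)
log_returns = rng.standard_normal(8192)   # NIID null: H near 0.5
                                          # (small-sample bias pushes it up)
sizes = np.array([32, 64, 128, 256, 512, 1024])
H, _ = np.polyfit(np.log(sizes), np.log(rescaled_range(log_returns, sizes)), 1)
print(f"estimated H: {H:.2f}")
```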

Relevance: 40.00%

Abstract:

This paper presents a novel control design for a D-STATCOM to ensure grid-code-compatible performance of distributed wind generators. The approach considered in this paper is to find the smallest upper bound on the H∞ norm of the uncertain system and to design an optimal linear quadratic Gaussian (LQG) controller based on this bound. The change in the model due to variations of induction motor (IM) load compositions in the composite load is treated as an uncertainty term in the design algorithm. The performance of the designed controller is demonstrated on a distribution test system representative of the Kumamoto area in Japan. It is found that the proposed controller enhances the voltage stability of the distribution system under varying operating conditions.
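
As a generic illustration of LQG design via two Riccati equations (this sketch omits the H∞-norm bound on the uncertainty that the paper's design incorporates), assuming a toy two-state plant rather than the D-STATCOM model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state plant x' = Ax + Bu + w, y = Cx + v (not the D-STATCOM model).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)                 # LQR state/input weights
W, V = 0.1 * np.eye(2), 0.01 * np.eye(1)    # process/measurement noise covariances

# LQR gain: u = -K x_hat, from the control algebraic Riccati equation.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Kalman gain: x_hat' = A x_hat + B u + L (y - C x_hat), from the dual ARE.
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

# Separation principle: the closed loop combines the regulator and the
# estimator-error dynamics, each stable by construction.
cl = np.block([[A - B @ K, B @ K], [np.zeros_like(A), A - L @ C]])
print("closed-loop eigenvalues:", np.linalg.eigvals(cl))
```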

Relevance: 40.00%

Abstract:

Stable isotope ratios (δ15N and δ13C) were effectively used to determine the geographical dispersion of human-derived sewage from Davis Station, East Antarctica, using Antarctic rock cod (Trematomus bernacchii). Fish within 0-4 km downstream of the outfall exhibited higher δ15N and δ13C values relative to reference sites. Nitrogen in particular showed a stepped decrease in δ15N of 1-2‰ with increasing distance from the discharge point. Stable isotopes were better able to detect the extent of wastewater contamination than other techniques, including faecal coliform and sterol measures. Uptake and assimilation of δ15N and δ13C up to 4 km from the outfall add to growing evidence that the current level of wastewater treatment at Davis Station is not sufficient to avoid impact on the surrounding environment. Isotopic assimilation in T. bernacchii is a viable biomarker for investigating initial sewage exposure and for longer-term monitoring in the future.