931 results for Heavy tail distributions
Abstract:
A transformed kernel estimator suitable for heavy-tailed distributions is presented. Using a transformation based on the Beta probability distribution, the choice of the bandwidth parameter becomes very straightforward. An application to insurance data is presented, and it is shown how to compute the Value at Risk.
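A minimal sketch of the general transformed-kernel idea follows. It uses scipy and a simple monotone map T(x) = x/(x + c) to the unit interval rather than the paper's Beta-based transformation; the Pareto sample, the scale c (the sample median), and the grid inversion of the estimated CDF are all illustrative assumptions.

```python
# Illustrative sketch only: a generic transformed kernel density estimator for
# heavy-tailed data, not the exact Beta-transformation estimator of the paper.
import numpy as np
from scipy.stats import gaussian_kde, pareto

rng = np.random.default_rng(0)
losses = pareto.rvs(b=2.5, size=2000, random_state=rng)  # heavy-tailed "claims"

c = np.median(losses)                     # hypothetical scale for the map T
t = losses / (losses + c)                 # transformed data on (0, 1)
kde = gaussian_kde(t)                     # standard kernel estimator on the bounded scale

def density(x):
    """Back-transformed density estimate on the original scale."""
    x = np.asarray(x, dtype=float)
    u = x / (x + c)
    jacobian = c / (x + c) ** 2           # dT/dx
    return kde(u) * jacobian

# Value-at-Risk at level alpha: invert the estimated CDF on the transformed
# scale (T is monotone, so quantiles map back through T^{-1}(u) = c*u/(1-u)).
alpha = 0.99
grid = np.linspace(1e-4, 1 - 1e-4, 2000)
cdf_t = np.array([kde.integrate_box_1d(0.0, g) for g in grid])
u_q = float(np.interp(alpha, cdf_t, grid))
var_alpha = c * u_q / (1.0 - u_q)
print(f"VaR at level {alpha:.2f} is approximately {var_alpha:.2f}")
```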
Abstract:
We propose a new kernel estimator of the cumulative distribution function based on transformation and on bias-reducing techniques. We derive the optimal bandwidth that minimises the asymptotic integrated mean squared error. The simulation results show that our proposed kernel estimator improves on alternative approaches when the variable has an extreme-value distribution with a heavy tail and the sample size is small.
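For orientation, here is a hedged sketch of a plain Gaussian-kernel CDF estimator with a rule-of-thumb bandwidth of order n^(-1/3); the transformation and bias-reduction steps that the paper actually proposes are not reproduced, and the GEV sample is synthetic.

```python
# Minimal sketch of a plain kernel CDF estimator with a rule-of-thumb bandwidth,
# not the transformed, bias-reduced estimator proposed in the paper.
import numpy as np
from scipy.stats import norm, genextreme

rng = np.random.default_rng(1)
x = genextreme.rvs(c=-0.3, size=100, random_state=rng)  # heavy-tailed GEV sample

def kernel_cdf(t, data, h):
    """F_hat(t) = mean of Phi((t - X_i) / h), a Gaussian-kernel CDF estimate."""
    t = np.atleast_1d(t)[:, None]
    return norm.cdf((t - data[None, :]) / h).mean(axis=1)

# Rule-of-thumb bandwidth (illustrative; the paper derives the AMISE-optimal h,
# which for CDF estimation shrinks like n^(-1/3) rather than n^(-1/5)).
n = len(x)
h = 1.06 * x.std(ddof=1) * n ** (-1.0 / 3.0)

grid = np.linspace(x.min(), x.max(), 5)
print(np.column_stack([grid, kernel_cdf(grid, x, h)]))
```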
Abstract:
Multi-factor approaches to the analysis of real estate returns have, since the pioneering work of Chan, Hendershott and Sanders (1990), emphasised a macro-variables approach in preference to the latent factor approach that formed the original basis of the arbitrage pricing theory. With increasing use of high-frequency data and trading strategies, and with a growing emphasis on the risks of extreme events, the macro-variable procedure has some deficiencies. This paper explores a third way, using an alternative to the standard principal components approach: independent components analysis (ICA). ICA seeks higher-moment independence and maximises with respect to a chosen risk parameter. We apply a kurtosis-maximising ICA algorithm to weekly US REIT data. The results show that ICA is successful in capturing the kurtosis characteristics of REIT returns, offering possibilities for the development of risk management strategies that are sensitive to extreme events and tail distributions.
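As an illustration of the general approach (not the authors' algorithm or data), the sketch below runs scikit-learn's FastICA with its kurtosis-based "cube" contrast on synthetic heavy-tailed returns and ranks the recovered components by excess kurtosis; the mixing setup and sample sizes are assumptions.

```python
# Hedged sketch: independent components via FastICA with the kurtosis-based
# ("cube") contrast, applied to synthetic return-like data, not the REIT series.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_obs, n_assets = 1000, 5
latent = rng.standard_t(df=4, size=(n_obs, n_assets))   # heavy-tailed latent factors
mixing = rng.normal(size=(n_assets, n_assets))
returns = latent @ mixing.T                              # observed "asset returns"

ica = FastICA(n_components=n_assets, fun="cube", whiten="unit-variance", random_state=0)
sources = ica.fit_transform(returns)

# Rank the recovered components by excess kurtosis: the most leptokurtic
# components are candidates for driving tail risk.
excess_kurt = kurtosis(sources, axis=0, fisher=True)
order = np.argsort(excess_kurt)[::-1]
for i in order:
    print(f"component {i}: excess kurtosis = {excess_kurt[i]:.2f}")
```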
Abstract:
This paper constructs a unit root test based on partially adaptive estimation, which is shown to be robust against non-Gaussian innovations. We show that the limiting distribution of the t-statistic is a convex combination of the standard normal and DF distributions. Convergence to the DF distribution is obtained when the innovations are Gaussian, implying that the traditional ADF test is a special case of the proposed test. Monte Carlo experiments indicate that, if the innovations have a heavy-tailed distribution or are contaminated by outliers, then the proposed test is more powerful than the traditional ADF test. Nominal interest rates (of different maturities) are shown to be stationary according to the robust test but not stationary according to the nonrobust ADF test. This result suggests that the failure to reject the null of a unit root in nominal interest rates may be due to the use of estimation and hypothesis testing procedures that ignore the absence of Gaussianity in the data. Our results validate practical restrictions on the behavior of the nominal interest rate imposed by CCAPM, optimal monetary policy and option pricing models.
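The robust test itself is not available in standard libraries; the following sketch only runs the conventional ADF benchmark (statsmodels' adfuller) on a simulated series with heavy-tailed shocks, the setting in which the paper argues the standard test loses power. The AR(1) coefficient and shock distribution are assumptions.

```python
# Sketch of the standard ADF unit-root test with statsmodels. The paper's
# partially adaptive, outlier-robust test is not part of standard libraries;
# this only shows the non-robust benchmark that it nests as a special case.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
n = 500
# Simulated "interest rate": stationary AR(1) driven by heavy-tailed (t_3) shocks.
shocks = rng.standard_t(df=3, size=n)
rate = np.empty(n)
rate[0] = shocks[0]
for t in range(1, n):
    rate[t] = 0.95 * rate[t - 1] + shocks[t]

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(rate, autolag="AIC")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
print("5% critical value:", crit["5%"])
# With heavy-tailed shocks the ADF test can lose power, which is the gap the
# paper's robust test is designed to close.
```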
Abstract:
The issue of assessing variance components is essential in deciding on the inclusion of random effects in the context of mixed models. In this work we discuss this problem by assuming nonlinear elliptical models for correlated data and using the score-type test proposed in Silvapulle and Silvapulle (1995). Being asymptotically equivalent to the likelihood ratio test and requiring estimation only under the null hypothesis, this test provides an easily computable alternative for assessing one-sided hypotheses in the context of the marginal model. To accommodate possible non-normality, we assume that the joint distribution of the response variable and the random effects lies in the elliptical class, which includes light-tailed and heavy-tailed distributions such as the Student-t, power exponential, logistic, generalized Student-t, generalized logistic, contaminated normal, and the normal itself, among others. We compare the sensitivity of the score-type test under normal, Student-t and power exponential models for the kinetics data set discussed in Vonesh and Carter (1992) and fitted using the model presented in Russo et al. (2009). A simulation study is also performed to analyze the consequences of kurtosis misspecification.
Abstract:
In this thesis we develop a new generative model of social networks belonging to the family of Time-Varying Networks. Correctly modelling the mechanisms that shape the growth of a network and the dynamics of edge activation and deactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks, it is possible to forecast the outcome of an epidemic process, optimize immunization campaigns, or optimally spread information among individuals. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality and time-resolved datasets. This wealth of digital data has made it possible to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks has prompted a paradigm shift from a static representation of graphs to a time-varying one. In this work we exploit the Activity-Driven paradigm (a modeling tool belonging to the family of Time-Varying Networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping the topology and temporal structure of social networks: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We then empirically measure the properties of these two mechanisms in seven real-world datasets and develop a data-driven model, which we solve analytically. We check the results against numerical simulations and test our predictions on real-world datasets, finding good agreement between the two. Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of Time-Varying Networks. This model is inspired by Kauffman's theory of the adjacent possible and is based on a generalized version of the Pólya urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
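A toy sketch of the two mechanisms in an activity-driven setting is given below; the heavy-tailed activities, the fixed reinvestment probability, and the other parameters are illustrative assumptions, not the thesis's calibrated, analytically solved model.

```python
# Minimal illustrative sketch of an activity-driven temporal network with a
# simple memory rule: heavy-tailed node activities plus a preference for
# reinvesting in already-contacted nodes ("social capital allocation").
import numpy as np

rng = np.random.default_rng(4)
N, T, m = 1000, 200, 2                 # nodes, time steps, links per activation
activity = rng.pareto(2.5, size=N) + 1e-3
activity = 0.1 * activity / activity.max()   # heavy-tailed activities in (0, 0.1]

contacts = [set() for _ in range(N)]   # each node's set of known ties
edges_per_step = []

for t in range(T):
    step_edges = []
    active = np.where(rng.random(N) < activity)[0]
    for i in active:
        for _ in range(m):
            # With probability 0.8, reinvest in an already-known contact;
            # otherwise explore a new random node.
            if contacts[i] and rng.random() < 0.8:
                j = rng.choice(list(contacts[i]))
            else:
                j = rng.integers(N)
                if j == i:
                    continue
            contacts[i].add(int(j))
            contacts[int(j)].add(int(i))
            step_edges.append((int(i), int(j)))
    edges_per_step.append(step_edges)

degrees = np.array([len(c) for c in contacts])
print("mean cumulative degree:", degrees.mean(), "max:", degrees.max())
```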
Abstract:
Analysis of risk measures associated with price series movements and their prediction is of strategic importance in the financial markets as well as to policy makers, in particular for short- and long-term planning when setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly used instrument to measure risk and is evaluated by analysing the negative/positive tail of the probability distribution of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modeling and analyzing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following a normal or approximately normal distribution, freedom from large outliers, and satisfaction of the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian settings, especially when the errors follow fat-tailed distributions, even when the error terms possess a finite variance. This is the situation in risk analysis, which involves analyzing tail distributions. Thus, the appropriateness of LSE-based regression models may be questioned, and they may have limited applicability. We carry out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and note that the LSE-based models do not always perform best. We discuss results from the L1, L2 and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
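A small sketch of the three fitting criteria on synthetic fat-tailed data follows; the Iranian oil-price series is not used, and the L∞ fit is obtained by a standard linear-programming reformulation rather than any method specific to the paper.

```python
# Illustrative sketch: fit the same linear model by L2 (least squares),
# L1 (least absolute deviations), and L-infinity (Chebyshev) criteria on
# synthetic data with heavy-tailed errors.
import numpy as np
import statsmodels.api as sm
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n = 300
x = rng.uniform(0, 10, size=n)
errors = rng.standard_t(df=2, size=n)          # fat-tailed errors
y = 1.0 + 0.5 * x + errors
X = sm.add_constant(x)                         # design matrix [1, x]

# L2: ordinary least squares.
beta_l2, *_ = np.linalg.lstsq(X, y, rcond=None)

# L1: least absolute deviations = median (quantile) regression at q = 0.5.
beta_l1 = sm.QuantReg(y, X).fit(q=0.5).params

# L-infinity: minimise the largest absolute residual via a linear program
# in the variables (beta_0, beta_1, t) with |y_i - x_i'beta| <= t for all i.
p = X.shape[1]
c = np.r_[np.zeros(p), 1.0]
A_ub = np.block([[-X, -np.ones((n, 1))],
                 [ X, -np.ones((n, 1))]])
b_ub = np.r_[-y, y]
bounds = [(None, None)] * p + [(0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
beta_linf = res.x[:p]

print("L2  :", beta_l2)
print("L1  :", beta_l1)
print("Linf:", beta_linf)
```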
Abstract:
Recent experiments have shown that the multimode approach to describing the fission process is compatible with the observed results. A systematic analysis of the parameters obtained by fitting the fission-fragment mass distribution to spontaneous and low-energy fission data has shown that the values of those parameters depend smoothly on the nuclear mass number. In this work, a new methodology is introduced for studying fragment mass distributions through the multimode approach. It is shown that, for fission induced by energetic probes (E > 30 MeV), the mass distribution of the fissioning nuclei produced during the intranuclear cascade and evaporation processes must be taken into account in order to obtain a realistic description of the fission process. The method is applied to study ²⁰⁸Pb, ²³⁸U, ²³⁹Np and ²⁴¹Am fission induced by protons or photons.
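The core of the multimode picture, representing the mass yield as a superposition of Gaussian modes and fitting their parameters, can be sketched as follows; the yield curve, mode values and starting guesses are synthetic and purely illustrative, and the paper's intranuclear-cascade/evaporation treatment is not reproduced.

```python
# Hedged sketch of the multimode idea: a fission-fragment mass yield is modelled
# as one symmetric Gaussian mode plus an asymmetric Gaussian pair, and the mode
# parameters are fitted by least squares.
import numpy as np
from scipy.optimize import curve_fit

def multimode_yield(A, ys, As, ss, ya, da, sa):
    """Symmetric Gaussian centred at As plus an asymmetric pair at As +/- da."""
    sym = ys * np.exp(-0.5 * ((A - As) / ss) ** 2)
    asym = ya * (np.exp(-0.5 * ((A - (As + da)) / sa) ** 2)
                 + np.exp(-0.5 * ((A - (As - da)) / sa) ** 2))
    return sym + asym

# Synthetic "measured" yield for a hypothetical fissioning system.
rng = np.random.default_rng(6)
A = np.arange(60, 181)
true = multimode_yield(A, 1.0, 119.0, 14.0, 3.0, 21.0, 6.0)
data = true + rng.normal(scale=0.05, size=A.size)

p0 = [1.0, 120.0, 12.0, 2.0, 20.0, 5.0]      # initial parameter guesses
popt, pcov = curve_fit(multimode_yield, A, data, p0=p0)
print("fitted mode parameters:", np.round(popt, 2))
```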
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We report the first measurements of the moments (mean M, variance σ², skewness S, and kurtosis κ) of the net-charge multiplicity distributions at midrapidity in Au+Au collisions at seven energies, ranging from √s_NN = 7.7 to 200 GeV, as a part of the Beam Energy Scan program at RHIC. The moments are related to the thermodynamic susceptibilities of net charge, and are sensitive to the location of the QCD critical point. We compare the products of the moments, σ²/M, Sσ, and κσ², with the expectations from Poisson and negative binomial distributions (NBDs). The Sσ values deviate from the Poisson baseline and are close to the NBD baseline, while the κσ² values tend to lie between the two. Within the present uncertainties, our data do not show nonmonotonic behavior as a function of collision energy. These measurements provide a valuable tool to extract the freeze-out parameters in heavy-ion collisions by comparing with theoretical models.
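For readers who want to see how the moment products are formed, the sketch below computes σ²/M, Sσ and κσ² from simulated event-by-event net-charge values and compares them with the independent-Poisson (Skellam) baseline; the multiplicities are synthetic, and the NBD baseline and the experimental error analysis are omitted.

```python
# Sketch: moment products of a net-charge distribution and the Skellam
# (independent Poisson) baseline, on synthetic multiplicities.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)
n_plus = rng.poisson(lam=12.0, size=100_000)     # positive-charge multiplicity
n_minus = rng.poisson(lam=10.0, size=100_000)    # negative-charge multiplicity
net_q = n_plus - n_minus

M = net_q.mean()
var = net_q.var()
S = skew(net_q)
K = kurtosis(net_q, fisher=True)                 # excess kurtosis

print(f"sigma^2 / M     = {var / M:.3f}")
print(f"S * sigma       = {S * np.sqrt(var):.3f}")
print(f"kappa * sigma^2 = {K * var:.3f}")

# Skellam baseline: S*sigma = M / sigma^2 and kappa*sigma^2 = 1.
print(f"baseline S*sigma = {M / var:.3f}, baseline kappa*sigma^2 = 1")
```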
Abstract:
Measurements of polar organic marker compounds were performed on aerosols that were collected at a pasture site in the Amazon basin (Rondônia, Brazil) using a high-volume dichotomous sampler (HVDS) and a Micro-Orifice Uniform Deposit Impactor (MOUDI) within the framework of the 2002 LBA-SMOCC (Large-Scale Biosphere Atmosphere Experiment in Amazonia - Smoke Aerosols, Clouds, Rainfall, and Climate: Aerosols From Biomass Burning Perturb Global and Regional Climate) campaign. The campaign spanned the late dry season (biomass burning), a transition period, and the onset of the wet season (clean conditions). In the present study a more detailed discussion is presented compared to previous reports on the behavior of selected polar marker compounds, including levoglucosan, malic acid, isoprene secondary organic aerosol (SOA) tracers and tracers for fungal spores. The tracer data are discussed taking into account new insights that recently became available into their stability and/or aerosol formation processes. During all three periods, levoglucosan was the most dominant identified organic species in the PM2.5 size fraction of the HVDS samples. In the dry period levoglucosan reached concentrations of up to 7.5 μg m⁻³ and exhibited diel variations with a nighttime prevalence. It was closely associated with the PM mass in the size-segregated samples and was mainly present in the fine mode, except during the wet period where it peaked in the coarse mode. Isoprene SOA tracers showed an average concentration of 250 ng m⁻³ during the dry period versus 157 ng m⁻³ during the transition period and 52 ng m⁻³ during the wet period. Malic acid and the 2-methyltetrols exhibited a different size distribution pattern, which is consistent with different aerosol formation processes (i.e., gas-to-particle partitioning in the case of malic acid and heterogeneous formation from gas-phase precursors in the case of the 2-methyltetrols). The 2-methyltetrols were mainly associated with the fine mode during all periods, while malic acid was prevalent in the fine mode only during the dry and transition periods, and dominant in the coarse mode during the wet period. The sum of the fungal spore tracers arabitol, mannitol, and erythritol in the PM2.5 fraction of the HVDS samples during the dry, transition, and wet periods was, on average, 54 ng m⁻³, 34 ng m⁻³, and 27 ng m⁻³, respectively, and revealed minor day/night variation. The mass size distributions of arabitol and mannitol during all periods showed similar patterns and an association with the coarse mode, consistent with their primary origin. The results show that even under the heavy smoke conditions of the dry period a natural background with contributions from bioaerosols and isoprene SOA can be revealed. The enhancement in isoprene SOA in the dry season is mainly attributed to an increased acidity of the aerosols, increased NOx concentrations and a decreased wet deposition.
Abstract:
Heavy quark production has been very well studied in recent years, both theoretically and experimentally. Theory has been used to study heavy quark production in ep collisions at HERA, in pp collisions at the Tevatron and RHIC, in pA and dA collisions at RHIC, and in AA collisions at CERN-SPS and RHIC. However, to the best of our knowledge, heavy quark production in eA collisions has received almost no attention. With the possible construction of a high-energy electron-ion collider, updated estimates of heavy quark production are needed. We address the subject from the perspective of saturation physics and compute the heavy quark production cross section within the dipole model. We isolate shadowing and nonlinear effects, showing their impact on the charm structure function and on the transverse momentum spectrum.
Abstract:
In this paper we study the possible microscopic origin of heavy-tailed probability density distributions for the price variation of financial instruments. We extend the standard log-normal process to include another random component in the so-called stochastic volatility models. We study these models under an assumption, akin to the Born-Oppenheimer approximation, in which the volatility has already relaxed to its equilibrium distribution and acts as a background to the evolution of the price process. In this approximation, we show that all models of stochastic volatility should exhibit a scaling relation in the time lag of zero-drift modified log-returns. We verify that the Dow Jones Industrial Average index indeed follows this scaling. We then focus on two popular stochastic volatility models, the Heston and Hull-White models. In particular, we show that in the Hull-White model the resulting probability distribution of log-returns in this approximation corresponds to the Tsallis (Student-t) distribution. The Tsallis parameters are given in terms of the microscopic stochastic volatility model. Finally, we show that the log-returns of 30 years of Dow Jones index data are well fitted by a Tsallis distribution, and we obtain the relevant parameters.
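A hedged sketch of the final fitting step, fitting a Student-t (equivalently, a Tsallis q-Gaussian under the usual parameter mapping) to log-returns, is given below; synthetic heavy-tailed returns stand in for the 30-year Dow Jones series, and the connection to the Hull-White parameters is not reproduced.

```python
# Sketch: fit a Student-t distribution to log-returns and translate the fitted
# degrees of freedom into a Tsallis entropic index q. Synthetic data only.
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(8)
log_returns = 0.01 * rng.standard_t(df=3.5, size=5000)   # stand-in for daily log-returns

df, loc, scale = student_t.fit(log_returns)
print(f"fitted tail parameter (degrees of freedom) = {df:.2f}")
print(f"location = {loc:.5f}, scale = {scale:.5f}")

# A Student-t with nu degrees of freedom is a q-Gaussian with q = (nu + 3) / (nu + 1);
# heavier tails (smaller nu) correspond to q farther above 1.
q = (df + 3.0) / (df + 1.0)
print(f"implied Tsallis q = {q:.3f}")
```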
Abstract:
In this work project we study the tail properties of currency returns and analyze whether changes in the tail indices of these series have occurred over time as a consequence of turbulent periods. Our analysis is based on the methods introduced by Quintos, Fan and Phillips (2001), Candelon and Straetmans (2006, 2013), and their extensions. Specifically, considering a sample of daily data from December 31, 1993 to February 13, 2015, we apply the recursive test in calendar time (forward test) and in reverse calendar time (backward test), and indeed detect falls and rises in the tail indices, signifying increases and decreases in the probability of extreme events.
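The building block of such forward and backward tests is a tail-index estimate computed on expanding windows; the sketch below uses the Hill estimator on synthetic Student-t returns with an illustrative 5% tail fraction, and does not implement the actual Quintos-Fan-Phillips break-test statistics.

```python
# Sketch: Hill estimator of the tail index on expanding (recursive) windows,
# the basic ingredient behind forward/backward tail-index break tests.
import numpy as np

def hill_tail_index(x, k):
    """Hill estimate of the tail index alpha from the k largest |observations|."""
    x = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]  # descending order
    logs = np.log(x[:k]) - np.log(x[k])
    return 1.0 / logs.mean()

rng = np.random.default_rng(9)
returns = rng.standard_t(df=3, size=5000)        # true tail index around 3

k = 250                                          # number of tail order statistics
print(f"full-sample Hill estimate: {hill_tail_index(returns, k):.2f}")

# Recursive (expanding-window) estimates in calendar time, as in a forward test:
for end in (1000, 2000, 3000, 4000, 5000):
    window = returns[:end]
    k_t = int(0.05 * end)                        # 5% of observations in the tail
    print(f"t = {end}: alpha_hat = {hill_tail_index(window, k_t):.2f}")
```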