973 results for testing against heavy tails
Abstract:
2010 Mathematics Subject Classification: 62F10, 62F12.
Abstract:
A test for time-varying correlation is developed within the framework of a dynamic conditional score (DCS) model for both Gaussian and Student t-distributions. The test may be interpreted as a Lagrange multiplier test and modified to allow for the estimation of models for time-varying volatility in the individual series. Unlike standard moment-based tests, the score-based test statistic includes information on the level of correlation under the null hypothesis and local power arguments indicate the benefits of doing so. A simulation study shows that the performance of the score-based test is strong relative to existing tests across a range of data generating processes. An application to the Hong Kong and South Korean equity markets shows that the new test reveals changes in correlation that are not detected by the standard moment-based test.
Abstract:
The objective of this paper is to improve option risk monitoring by examining the information content of implied volatility and by introducing the calculation of a single-sum expected risk exposure similar to Value-at-Risk. The figure is calculated in two steps: first, the value of a portfolio of options is estimated for a number of different market scenarios; second, the information content of the estimated scenarios is summarized into a single-sum risk measure. This involves the use of probability theory and return distributions, which confronts the user with the problem of non-normality in the return distribution of the underlying asset. Here the hyperbolic distribution is used as one alternative for dealing with heavy tails. Results indicate that the information content of implied volatility is useful when predicting future large returns in the underlying asset. Further, the hyperbolic distribution provides a good fit to historical returns, enabling a more accurate definition of statistical intervals and extreme events.
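As a rough illustration of the two-step calculation, the sketch below revalues a single European call under simulated scenarios for the underlying and condenses the scenario P&L into a VaR-like figure. A Student t distribution stands in for the heavy-tailed scenario model (the paper fits a hyperbolic distribution, which is not reproduced here), and all contract and market parameters are invented for illustration.

```python
# Hedged sketch of a two-step single-sum risk figure: (1) revalue an option under
# simulated scenarios for the underlying, (2) condense the scenario P&L into a
# VaR-like number. Student t is an illustrative stand-in for the hyperbolic model.
import numpy as np
from scipy.stats import norm, t


def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)


rng = np.random.default_rng(0)
S0, K, r, T = 100.0, 105.0, 0.02, 0.25      # illustrative contract terms
sigma_iv = 0.30                             # implied volatility used for revaluation
horizon = 1.0 / 252                         # one trading day

# Step 1: scenario generation for the underlying (heavy-tailed stand-in) and revaluation.
daily_returns = t.rvs(df=4, scale=0.01, size=10_000, random_state=rng)
S_scen = S0 * np.exp(daily_returns)
pnl = bs_call(S_scen, K, r, sigma_iv, T - horizon) - bs_call(S0, K, r, sigma_iv, T)

# Step 2: condense the scenario P&L distribution into a single-sum risk figure (99% VaR).
var_99 = -np.quantile(pnl, 0.01)
print(f"1-day 99% VaR of the long-call position: {var_99:.3f}")
```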
Abstract:
Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. The satellite-derived inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluations of grid-cell-averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1, and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern-latitude (>50° N) systems contributed 12 Tg CH4 yr−1. We expect this latter number may be an underestimate due to the low high-latitude inundated area captured by satellites and the unrealistically low high-latitude productivity and soil carbon predicted by CLM4. Sensitivity analysis showed a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions, driven mainly by (1) the amount of methane transported through aerenchyma, (2) soil pH (±100 Tg CH4 yr−1), and (3) redox inhibition (±45 Tg CH4 yr−1).
Abstract:
This chapter describes methods for testing biocides against microbes. The first part describes a method using flow cytometry to test biocides against multispecies communities of planktonic microbial assemblages, and the second part describes methods to test biocides against both single- and multispecies biofilms.
Abstract:
The case is made for a more careful analysis of the large-time asymptotics of infinite particle systems in the thermodynamic limit beyond zero density. The insufficiency of current analysis, even in the model case of free particles, is demonstrated. Recent advances based on more sophisticated analytical tools, such as functions of mean variation and Hardy spaces, are sketched.
Abstract:
OBJECTIVES To assess the presence of within-group comparisons with baseline in a subset of leading dental journals and to explore possible associations with a range of study characteristics, including journal and study design. STUDY DESIGN AND SETTING Thirty consecutive issues of five leading dental journals were electronically searched. The conduct and reporting of statistical analyses, with respect to whether comparisons were made against baseline or otherwise, along with the manner of interpretation of the results, were assessed. Descriptive statistics were obtained, and chi-square and Fisher's exact tests were undertaken to test the association between trial characteristics and overall study interpretation. RESULTS A total of 184 studies were included, with the highest proportion published in the Journal of Endodontics (n = 84, 46%) and most involving a single center (n = 157, 85%). Overall, 43 studies (23%) presented interpretation of their outcomes based solely on comparisons against baseline. Inappropriate use of baseline testing was found to be less likely in interventional studies (P < 0.001). CONCLUSION Use of comparisons with baseline appears to be common among both observational and interventional research studies in dentistry. Enhanced conduct and reporting of statistical tests are required to ensure that inferences from research studies are appropriate and informative.
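For illustration only, the snippet below runs the two association tests mentioned (chi-square and Fisher's exact) on a hypothetical 2×2 table of study type versus whether interpretation rested solely on baseline comparisons; the counts are invented and do not come from the study.

```python
# Hedged illustration of the association tests named above on a hypothetical 2x2 table.
# The counts are invented for illustration and are not data from the study.
from scipy.stats import chi2_contingency, fisher_exact

#            other interpretation   interpretation based solely on baseline
table = [[95, 14],      # interventional studies (hypothetical counts)
         [46, 29]]      # observational studies (hypothetical counts)

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")
```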
Abstract:
We present a novel method, called the transform likelihood ratio (TLR) method, for estimation of rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
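A minimal sketch of the transformation idea on a toy problem: for i.i.d. Pareto summands, the change of variables Z = log Y turns each heavy-tailed Y into a light-tailed exponential variable, after which an exponential change of measure is applied and re-weighted by the likelihood ratio. The tilting parameter is fixed by hand here, whereas the paper selects it via the cross-entropy method; the threshold, tail index, and sample sizes are illustrative assumptions.

```python
# Hedged sketch of the TLR idea on a toy problem: estimate P(Y_1 + ... + Y_n > x) for
# i.i.d. Pareto(alpha) variables on [1, inf). The change of variables Z = log(Y) turns
# each Pareto into an Exp(alpha), i.e. a light-tailed problem, after which an exponential
# change of measure (sampling Z from Exp(lam), lam < alpha) is applied. The tilting
# parameter lam is fixed by hand; the paper chooses it via cross-entropy.
import numpy as np

rng = np.random.default_rng(1)
alpha, n, x = 1.5, 5, 1_000.0      # illustrative tail index, sum length, threshold
lam = 0.5                          # hand-picked tilted rate for the transformed variables
n_sim = 200_000

# Sample the transformed variables Z from Exp(lam) instead of the nominal Exp(alpha).
Z = rng.exponential(scale=1.0 / lam, size=(n_sim, n))
Y = np.exp(Z)                                        # back-transform to the Pareto scale

# Likelihood ratio of the nominal Exp(alpha) density against the sampling Exp(lam) density.
log_w = n * np.log(alpha / lam) - (alpha - lam) * Z.sum(axis=1)
est = np.mean(np.exp(log_w) * (Y.sum(axis=1) > x))
print(f"TLR-style IS estimate of P(S_n > x): {est:.3e}")
```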
Abstract:
We consider the problem of estimating P(Y_1 + ... + Y_n > x) by importance sampling when the Y_i are i.i.d. and heavy-tailed. The idea is to exploit the cross-entropy method as a tool for choosing good parameters in the importance sampling distribution; in doing so, we use the asymptotic description that, given Y_1 + ... + Y_n > x, n − 1 of the Y_i have distribution F and one has the conditional distribution of Y given Y > x. We show in some specific parametric examples (Pareto and Weibull) how this leads to precise answers which, as demonstrated numerically, are close to being variance minimal within the parametric class under consideration. Related problems for M/G/1 and GI/G/1 queues are also discussed.
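The sketch below illustrates, under simplifying assumptions, how a cross-entropy update can pick an importance-sampling parameter from the same Pareto family as the target: a pilot run at the nominal shape yields weighted samples, a weighted maximum-likelihood step gives the tilted shape, and a final run estimates the rare-event probability. The single-update scheme and all numerical values are illustrative; the paper's full adaptive procedure is not reproduced.

```python
# Hedged sketch of a cross-entropy (CE) update for an importance-sampling parameter in
# the Pareto example: one CE step from a pilot run, then a final IS run. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
alpha, n, x = 1.5, 5, 1_000.0       # nominal Pareto shape, sum length, rare-event threshold
rho, n_pilot, n_final = 0.05, 50_000, 200_000


def sample_pareto(shape, size):
    """Pareto(shape) on [1, inf): Y = U**(-1/shape) with U uniform on (0, 1)."""
    return rng.uniform(size=size) ** (-1.0 / shape)


def log_lr(y, shape_from, shape_to):
    """Log likelihood ratio of Pareto(shape_from) against Pareto(shape_to), per row."""
    return (y.shape[1] * np.log(shape_from / shape_to)
            - (shape_from - shape_to) * np.log(y).sum(axis=1))


# Pilot run at the nominal parameter; use an intermediate level so the event is visible.
y = sample_pareto(alpha, (n_pilot, n))
s = y.sum(axis=1)
level = min(x, np.quantile(s, 1 - rho))
w = (s > level).astype(float)                  # likelihood ratio is 1 under the nominal law
theta = n * w.sum() / (w * np.log(y).sum(axis=1)).sum()   # weighted Pareto MLE (CE update)

# Final importance-sampling run from Pareto(theta).
y = sample_pareto(theta, (n_final, n))
s = y.sum(axis=1)
est = np.mean(np.exp(log_lr(y, alpha, theta)) * (s > x))
print(f"CE-tuned IS estimate of P(S_n > x): {est:.3e} (theta = {theta:.3f})")
```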
Abstract:
The estimation of P(S_n > u) by simulation, where S_n is the sum of independent, identically distributed random variables Y_1, ..., Y_n, is of importance in many applications. We propose two simulation estimators based upon the identity P(S_n > u) = n P(S_n > u, M_n = Y_n), where M_n = max(Y_1, ..., Y_n). One estimator uses importance sampling (for Y_n only), and the other uses conditional Monte Carlo, conditioning upon Y_1, ..., Y_{n−1}. Properties of the relative error of the estimators are derived, and a numerical study is given in terms of the M/G/1 queue, in which n is replaced by an independent geometric random variable N. The conclusion is that the new estimators compare extremely favorably with previous ones. In particular, the conditional Monte Carlo estimator is the first heavy-tailed example of an estimator with bounded relative error. Further improvements are obtained in the random-N case by incorporating control variates and stratification techniques into the new estimation procedures.
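A compact sketch of the second estimator: conditioning on Y_1, ..., Y_{n−1} in the identity P(S_n > u) = n P(S_n > u, M_n = Y_n) and integrating out Y_n gives the per-sample estimate n * Fbar(max(u − S_{n−1}, M_{n−1})), where Fbar is the tail of F. The code uses a Pareto F and a fixed n (not the geometric N of the queueing application); parameter values are illustrative.

```python
# Hedged sketch of the conditional Monte Carlo estimator built on the identity
# P(S_n > u) = n * P(S_n > u, M_n = Y_n): condition on Y_1, ..., Y_{n-1} and integrate
# out Y_n, giving Z = n * Fbar(max(u - S_{n-1}, M_{n-1})). Pareto(alpha) on [1, inf)
# serves as the illustrative heavy-tailed F, with fixed n rather than geometric N.
import numpy as np

rng = np.random.default_rng(3)
alpha, n, u = 1.5, 5, 1_000.0
n_sim = 100_000


def tail(y):
    """Tail Fbar(y) = P(Y > y) of a Pareto(alpha) distribution on [1, inf)."""
    return np.where(y <= 1.0, 1.0, y ** -alpha)


y = rng.uniform(size=(n_sim, n - 1)) ** (-1.0 / alpha)    # Y_1, ..., Y_{n-1}
s_rest = y.sum(axis=1)
m_rest = y.max(axis=1)
z = n * tail(np.maximum(u - s_rest, m_rest))               # per-sample conditional MC estimates

rel_se = z.std() / (z.mean() * np.sqrt(n_sim))
print(f"Conditional MC estimate of P(S_n > u): {z.mean():.3e} (relative std. error {rel_se:.2%})")
```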
Abstract:
In this work project we study the tail properties of currency returns and analyze whether changes in the tail indices of these series have occurred over time as a consequence of turbulent periods. Our analysis is based on the methods introduced by Quintos, Fan and Phillips (2001), Candelon and Straetmans (2006, 2013), and their extensions. Specifically, considering a sample of daily data from December 31, 1993 to February 13, 2015, we apply the recursive test in calendar time (forward test) and in reverse calendar time (backward test) and indeed detect falls and rises in the tail indices, signifying increases and decreases in the probability of extreme events.
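To make the recursive idea concrete, the sketch below recomputes a Hill tail-index estimate on expanding subsamples of a simulated return series, which is the ingredient behind the forward test. It is not the Quintos, Fan and Phillips or Candelon and Straetmans test statistic itself (no critical values or break detection), and the data and the choice of the number of tail observations are illustrative.

```python
# Hedged sketch of the recursive (forward) ingredient: a Hill tail-index estimate
# recomputed over expanding subsamples of a return series. Test statistics and critical
# values from the cited papers are not reproduced; data and tuning are illustrative.
import numpy as np

rng = np.random.default_rng(4)
returns = rng.standard_t(df=3, size=2_000) * 0.01     # stand-in for daily currency returns


def hill_tail_index(x, k):
    """Hill estimator of the tail index from the k largest losses in x."""
    losses = np.sort(-x)[::-1]                         # left-tail magnitudes, descending
    top = losses[:k]
    return 1.0 / np.mean(np.log(top / losses[k]))


# Forward recursion: re-estimate the tail index on expanding windows [0, t).
start, k_frac = 500, 0.05
path = [hill_tail_index(returns[:t], max(10, int(k_frac * t)))
        for t in range(start, len(returns) + 1, 50)]
print("Recursive tail-index path:", np.round(path, 2))
```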
Abstract:
A low-cost test bed was made from a modified heavy vehicle (HV) brake tester. By rotating a test HV's wheel on an eccentric roller, a known vibration was imparted to the wheel under test. A control case with dampers in good condition was compared with two test cases of ineffective shock absorbers. Measurement of the forces at the bearings of the roller provided an indication of the HV wheel forces. Where the level of serviceability of the shock absorbers varied, differences in wheel load provided a quality indicator corresponding to the change of damper characteristic. Conclusions are presented regarding the levels of damper maintenance beyond which HV suspensions cause road damage, and the dynamic wheel forces at the threshold of tyre wear at which HV shock absorbers are normally replaced.