963 results for momentum distributions
Abstract:
Let P be a probability distribution on q-dimensional space. The so-called Diaconis-Freedman effect means that for a fixed dimension d ≪ q, most d-dimensional projections of P look like scale mixtures of spherically symmetric Gaussian distributions. The present paper provides necessary and sufficient conditions for this phenomenon in a suitable asymptotic framework with increasing dimension q. It turns out that the conditions formulated by Diaconis and Freedman (1984) are not only sufficient but necessary as well. Moreover, letting P̂ be the empirical distribution of n independent random vectors with distribution P, we investigate the behavior of the empirical process √n(P̂ − P) under random projections, conditional on P̂.
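The effect described above can be illustrated numerically. The sketch below is a hypothetical setup (not from the paper): it projects a sample with many independent, unit-variance coordinates onto one random direction; since all coordinates have the same variance, the scale mixture degenerates and the projection should be close to standard normal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dimensional sample: n points whose q coordinates are
# i.i.d. uniform, scaled to zero mean and unit variance.
n, q = 5000, 500
X = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n, q))

# A uniformly random direction defining a one-dimensional projection.
u = rng.normal(size=q)
u /= np.linalg.norm(u)

proj = X @ u  # the projected sample

# The Diaconis-Freedman effect predicts this projection is close to N(0, 1),
# even though each coordinate is uniform rather than Gaussian.
mean, var = proj.mean(), proj.var()
```

For distributions whose coordinate variances fluctuate, the same experiment produces a visible scale mixture of Gaussians rather than a single normal.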
Abstract:
Several multiasset derivatives, such as basket options or options on the weighted maximum of assets, have the property that their prices uniquely determine the underlying asset distribution. Related to this, we discuss how to retrieve these distributions from the corresponding derivative quotes. In contrast, the prices of exchange options do not uniquely determine the underlying distributions of asset prices, and the extent of this non-uniqueness can be characterised. The discussion builds on a geometric interpretation of multiasset derivatives as support functions of convex sets. Following this, various symmetry properties of basket, maximum and exchange options are discussed, along with their geometric interpretations and some decomposition results for more general payoff functions.
Abstract:
We study electroweak Sudakov effects in single W, Z and γ production at large transverse momentum using soft collinear effective theory. We present a factorized form of the cross section near the partonic threshold with both QCD and electroweak effects included and compute the electroweak corrections arising at different scales. We analyze their size relative to the QCD corrections as well as the impact of strong-electroweak mixing terms. Numerical results for the vector-boson cross sections at the Large Hadron Collider are presented.
Abstract:
The vector channel spectral function and the dilepton production rate from a QCD plasma at a temperature above a few hundred MeV are evaluated up to next-to-leading order (NLO), including their dependence on a non-zero momentum with respect to the heat bath. The invariant mass of the virtual photon is taken to be in the range K² ~ (πT)² ~ (1 GeV)², generalizing previous NLO results valid for K² ≫ (πT)². In the opposite regime 0 < K² ≪ (πT)² the loop expansion breaks down, but nevertheless agrees in order of magnitude with a previous result obtained through resummations. Ways to test the vector spectral function through comparisons with imaginary-time correlators measured on the lattice are discussed.
Abstract:
We obtain the next-to-next-to-leading order corrections to transverse-momentum spectra of W, Z and Higgs bosons near the partonic threshold. In the threshold limit, the electroweak boson recoils against a low-mass jet and all radiation is either soft, or collinear to the jet or the beam directions. We extract the virtual corrections from known results for the relevant two-loop four-point amplitudes and combine them with the soft and collinear two-loop functions as defined in Soft-Collinear Effective Theory. We have implemented these results in a public code PeTeR and present numerical results for the threshold resummed cross section of W and Z bosons at next-to-next-to-next-to-leading logarithmic accuracy, matched to next-to-leading fixed-order perturbation theory. The two-loop corrections lead to a moderate increase in the cross section and reduce the scale uncertainty by about a factor of two. The corrections are significantly larger for Higgs production.
Abstract:
Using methods from effective field theory, we have recently developed a novel, systematic framework for the calculation of the cross sections for electroweak gauge-boson production at small and very small transverse momentum q_T, in which large logarithms of the scale ratio m_V/q_T are resummed to all orders. This formalism is applied to the production of Higgs bosons in gluon fusion at the LHC. The production cross section receives logarithmically enhanced corrections from two sources: the running of the hard matching coefficient and the collinear factorization anomaly. The anomaly leads to the dynamical generation of a non-perturbative scale q_* ~ m_H e^(−const/α_s(m_H)) ≈ 8 GeV, which protects the process from receiving large long-distance hadronic contributions. We present numerical predictions for the transverse-momentum spectrum of Higgs bosons produced at the LHC, finding that it is quite insensitive to hadronic effects.
Abstract:
When considering NLO corrections to thermal particle production in the “relativistic” regime, in which the invariant mass squared of the produced particle is K² ~ (πT)², the production rate can be expressed as a sum of a few universal “master” spectral functions. Taking the most complicated 2-loop master as an example, a general strategy for obtaining a convergent 2-dimensional integral representation is suggested. The analysis applies to both bosonic and fermionic statistics, and shows that for this master the non-relativistic approximation is only accurate for K² ≳ (8πT)², whereas the zero-momentum approximation works surprisingly well. Once the simpler masters have been similarly resolved, NLO results for quantities such as the right-handed neutrino production rate from a Standard Model plasma or the dilepton production rate from a QCD plasma can be assembled for K² ~ (πT)².
Abstract:
We calculate the momentum diffusion coefficient for heavy quarks in SU(3) gluon plasma at temperatures 1-2 times the deconfinement temperature. The momentum diffusion coefficient is extracted from a Monte Carlo calculation of the correlation function of color electric fields, at leading order in the heavy-quark mass expansion. Systematics of the calculation are examined and compared with perturbation theory and other estimates.
Abstract:
Groundwater age is a key aspect of production well vulnerability. Public drinking water supply wells typically have long screens and are expected to produce a mixture of groundwater ages. The groundwater age distributions of seven production wells of the Holten well field (Netherlands) were estimated from tritium-helium (³H/³He), krypton-85 (⁸⁵Kr), and argon-39 (³⁹Ar), using a new application of a discrete age distribution model and existing mathematical models, by minimizing the uncertainty-weighted squared differences of modeled and measured tracer concentrations. The observed tracer concentrations were well fitted by a 4-bin discrete age distribution model or by a dispersion model with a fraction of old groundwater. Our results show that more than 75% of the water pumped by four shallow production wells has a groundwater age of less than 20 years, making these wells very vulnerable to recent surface contamination. More than 50% of the water pumped by three deep production wells is older than 60 years. ³H/³He samples from short-screened monitoring wells surrounding the well field constrained the age stratification in the aquifer. The discrepancy between the age stratification with depth and the groundwater age distributions of the production wells showed that the well field preferentially pumps from the shallow part of the aquifer. The discrete groundwater age distribution model appears to be a suitable approach in settings where the shape of the age distribution cannot be assumed to follow a simple mathematical model, such as a production well field where wells compete for capture area.
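The fitting idea described in the abstract can be sketched as a constrained least-squares problem. The tracer-response matrix and measured concentrations below are purely illustrative placeholders, not the Holten data; only the structure of the objective (uncertainty-weighted squared differences, with age-bin fractions that are non-negative and sum to one) follows the description above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical normalized tracer responses: rows = tracers (3H/3He proxy,
# 85Kr, 39Ar), columns = age bins (<20 yr, 20-40 yr, 40-60 yr, >60 yr).
# All numbers are illustrative, not real tracer input functions.
A = np.array([
    [1.0, 0.6, 0.2, 0.0],   # tritium-based tracer, strong decay with age
    [1.0, 0.5, 0.1, 0.0],   # 85Kr, similar half-life scale
    [1.0, 0.9, 0.8, 0.5],   # 39Ar, much longer half-life
])
c_meas = np.array([0.55, 0.45, 0.85])   # measured concentrations (hypothetical)
sigma = np.array([0.05, 0.05, 0.05])    # measurement uncertainties

def objective(w):
    # Uncertainty-weighted squared differences of modeled vs measured tracers.
    return np.sum(((A @ w - c_meas) / sigma) ** 2)

res = minimize(
    objective,
    x0=np.full(4, 0.25),                 # start from equal bin fractions
    bounds=[(0.0, 1.0)] * 4,             # fractions lie in [0, 1]
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
w_fit = res.x  # estimated fraction of pumped water in each age bin
```

With real data one would repeat this fit per production well and propagate the tracer uncertainties; the discrete-bin structure is what lets the age distribution deviate from any single analytical model.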
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of scaled concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance against currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate the parameters, and the relative accuracies of all three methods were compared.
For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, the performance of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
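A minimal sketch of parameter estimation under a beta assumption with left-censoring (synthetic data; not the dissertation's exact procedure, which used regression-on-order-statistics comparisons): observed values contribute the beta density to the likelihood, while values below the detection limit contribute only the probability mass below that limit.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic scaled concentrations on [0, 1], drawn from Beta(2, 5), then
# left-censored at a hypothetical detection limit.
true_a, true_b = 2.0, 5.0
x = rng.beta(true_a, true_b, size=500)
dl = 0.08                      # detection limit (illustrative)
observed = x[x >= dl]
n_cens = int(np.sum(x < dl))   # censored values: only their count is known

def neg_log_lik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # Observed values contribute the log-density; each censored value
    # contributes the log-probability of falling below the detection limit.
    ll = stats.beta.logpdf(observed, a, b).sum()
    ll += n_cens * stats.beta.logcdf(dl, a, b)
    return -ll

res = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = res.x
mean_hat = a_hat / (a_hat + b_hat)   # estimated population mean
```

Because the beta density is zero outside [0, 1], no probability mass is assigned to impossible negative concentrations, which is the source of the parameter-estimation errors the abstract attributes to the normal assumption.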
Abstract:
Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, which are classified as bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit non-normal distributions. Measurements are made on several hundred cells per patient, so summary measures reflecting the underlying distribution are needed.

Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells.

Data transformations such as log, square root and 1/x did not yield normality as measured by the Shapiro-Wilk test. A modulus transformation, used for distributions having abnormal kurtosis values, also did not produce normality.

Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and 3 variables (DNA concentration, shape and elongation) showed the strongest evidence of bimodality and were studied further.

Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component Gaussian model with equal variances. The mixture component parameters were used to bootstrap the log-likelihood ratio to determine the significant number of components (1 or 2). These summary measures were used as predictors of disease severity in several proportional odds logistic regression models.
The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+) and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results of changes in the mean levels and proportions of the components at the lower severity levels.
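One widely used bimodality coefficient is the SAS-style statistic built from the sample skewness and kurtosis; the abstract does not specify which coefficient was used, so the sketch below is an assumption. Values above the uniform-distribution benchmark 5/9 are conventionally taken to suggest bimodality.

```python
import numpy as np

def bimodality_coefficient(x):
    """SAS-style bimodality coefficient; values above 5/9 suggest bimodality."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = (x - x.mean()) / x.std(ddof=1)
    # Sample skewness g1 and excess kurtosis g2 with the usual bias corrections.
    g1 = n / ((n - 1) * (n - 2)) * np.sum(z ** 3)
    g2 = (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * np.sum(z ** 4)
          - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
    return (g1 ** 2 + 1) / (g2 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

rng = np.random.default_rng(2)
unimodal = rng.normal(0.0, 1.0, 400)
bimodal = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])
```

A symmetric two-peak sample is platykurtic (low kurtosis), which drives the coefficient above 5/9, whereas a normal sample sits near 1/3.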
Abstract:
Serial correlation of extreme midlatitude cyclones observed at the storm track exits is explained by deviations from a Poisson process. To model these deviations, we apply fractional Poisson processes (FPPs) to extreme midlatitude cyclones, which are defined by the 850 hPa relative vorticity of the ERA-Interim reanalysis during boreal winter (DJF) and summer (JJA) seasons. Extremes are defined by a 99% quantile threshold in the grid-point time series. In general, FPPs are based on long-term memory and lead to non-exponential return time distributions. The return times are described by a Weibull distribution to approximate the Mittag–Leffler function in the FPPs. The Weibull shape parameter yields a dispersion parameter that agrees with results found for midlatitude cyclones. The memory of the FPP, which is determined by detrended fluctuation analysis, provides an independent estimate of the shape parameter. Thus, the analysis provides a concise description of the deviation from Poisson statistics (via a dispersion parameter), non-exponential return times and memory (correlation) on the basis of a single parameter. The results have potential implications for the predictability of extreme cyclones.
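The role of the Weibull shape parameter can be sketched on synthetic return times (illustrative only, not the ERA-Interim analysis): for a pure Poisson process the return times are exponential and the fitted shape is close to 1, while serial clustering of extremes shows up as a shape below 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Return times between extreme events.  A pure Poisson process gives
# exponential return times, i.e. a Weibull distribution with shape c = 1.
returns_poisson = rng.exponential(scale=10.0, size=2000)

# A clustered (overdispersed) process, drawn here directly from a Weibull
# with shape 0.7 for illustration: many short gaps, a heavy tail of long ones.
returns_clustered = 10.0 * rng.weibull(0.7, size=2000)

# Fit Weibull distributions with the location fixed at zero.
c_poisson, _, _ = stats.weibull_min.fit(returns_poisson, floc=0)
c_clustered, _, _ = stats.weibull_min.fit(returns_clustered, floc=0)
```

The deviation of the fitted shape from 1 plays the role of the dispersion parameter in the abstract: it quantifies, in a single number, how far the return-time statistics are from the memoryless Poisson case.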
Abstract:
A measurement of angular correlations in Drell-Yan lepton pairs via the φ*_η observable is presented. This variable probes the same physics as the Z/γ* boson transverse momentum, with better experimental resolution. The Z/γ* → e⁺e⁻ and Z/γ* → μ⁺μ⁻ decays produced in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV are used. The data were collected with the ATLAS detector at the LHC and correspond to an integrated luminosity of 4.6 fb⁻¹. Normalised differential cross sections as a function of φ*_η are measured separately for the electron and muon decay channels, and the channels are then combined for improved accuracy. The cross section is also measured double differentially as a function of φ*_η for three independent bins of the Z boson rapidity. The results are compared to QCD calculations and to predictions from different Monte Carlo event generators. The data are reasonably well described, in all measured Z boson rapidity regions, by resummed QCD predictions combined with fixed-order perturbative QCD calculations, or by some Monte Carlo event generators. The measurement precision is typically better by one order of magnitude than present theoretical uncertainties.
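For reference, φ*_η is built purely from the lepton directions, which is why it resolves better than the transverse momentum itself. The sketch below follows the standard definition (Banfi et al.): φ*_η = tan(φ_acop/2) · sin θ*_η, with the acoplanarity angle φ_acop = π − Δφ and cos θ*_η = tanh((η⁻ − η⁺)/2); the function name and arguments are mine.

```python
import numpy as np

def phi_star_eta(phi_minus, eta_minus, phi_plus, eta_plus):
    """phi*_eta for a lepton pair, from azimuthal angles and pseudorapidities."""
    dphi = np.abs(phi_minus - phi_plus)
    dphi = np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)  # wrap to [0, pi]
    phi_acop = np.pi - dphi                                  # acoplanarity angle
    # cos(theta*_eta) = tanh((eta^- - eta^+)/2), hence:
    sin_theta_star = 1.0 / np.cosh(0.5 * (eta_minus - eta_plus))
    return np.tan(0.5 * phi_acop) * sin_theta_star

# Perfectly back-to-back leptons at equal pseudorapidity give phi*_eta = 0,
# mirroring zero boson transverse momentum.
example = phi_star_eta(0.0, 0.0, np.pi, 0.0)
```

Only angles enter, so φ*_η inherits the excellent angular resolution of the tracker rather than the momentum-scale resolution, which is the point made in the abstract.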