965 results for LOG-S DISTRIBUTIONS
Abstract:
The importance of competition between similar species in driving community assembly is much debated. Recently, phylogenetic patterns in species composition have been investigated to help resolve this question: phylogenetic clustering is taken to imply environmental filtering, and phylogenetic overdispersion to indicate limiting similarity between species. We used experimental plant communities with random species compositions and initially even abundance distributions to examine the development of phylogenetic pattern in species abundance distributions. Where composition was held constant by weeding, abundance distributions became overdispersed through time, but only in communities that contained distantly related clades, some with several species (i.e., a mix of closely and distantly related species). Phylogenetic pattern in composition therefore constrained the development of overdispersed abundance distributions, and this might indicate limiting similarity between close relatives and facilitation/complementarity between distant relatives. Comparing the phylogenetic patterns in these communities with those expected from the monoculture abundances of the constituent species revealed that interspecific competition caused the phylogenetic patterns. Opening experimental communities to colonization by all species in the species pool led to convergence in phylogenetic diversity. At convergence, communities were composed of several distantly related but species-rich clades and had overdispersed abundance distributions. This suggests that limiting similarity processes determine which species dominate a community but not which species occur in a community. Crucially, as our study was carried out in experimental communities, we could rule out local evolutionary or dispersal explanations for the patterns and identify ecological processes as the driving force, underlining the advantages of studying these processes in experimental communities. Our results show that phylogenetic relations between species provide a good guide to understanding community structure and add a new perspective to the evidence that niche complementarity is critical in driving community assembly.
Abstract:
Let P be a probability distribution on q-dimensional space. The so-called Diaconis-Freedman effect means that for a fixed dimension d ≪ q, most d-dimensional projections of P look like a scale mixture of spherically symmetric Gaussian distributions. The present paper provides necessary and sufficient conditions for this phenomenon in a suitable asymptotic framework with increasing dimension q. It turns out that the conditions formulated by Diaconis and Freedman (1984) are not only sufficient but necessary as well. Moreover, letting P̂ be the empirical distribution of n independent random vectors with distribution P, we investigate the behavior of the empirical process √n(P̂ − P) under random projections, conditional on P̂.
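For intuition, the following minimal numerical sketch (not from the paper; the dimension, sample size, and exponential choice of P are arbitrary assumptions of ours) illustrates the Diaconis-Freedman effect in Python: a one-dimensional random projection of a high-dimensional, clearly non-Gaussian sample already looks approximately Gaussian.

# Minimal sketch of the Diaconis-Freedman effect (illustrative only, not the
# estimator studied in the paper): low-dimensional random projections of a
# high-dimensional, non-Gaussian sample tend to look approximately Gaussian.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
q, n = 500, 2000                                # dimension and sample size (arbitrary)
X = rng.exponential(scale=1.0, size=(n, q))     # a clearly non-Gaussian distribution P

u = rng.normal(size=q)
u /= np.linalg.norm(u)                          # random unit vector defining the projection
proj = X @ u                                    # d = 1 projection of the sample

# Test for normality of the standardized projection; a large p-value is consistent
# with the approximately Gaussian shape predicted for most random projections.
print(stats.normaltest((proj - proj.mean()) / proj.std()))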
Abstract:
This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models in the spirit of Beran and Millar [Ann. Statist. 15(3) (1987) 1131–1154]. The idea is to generate a random finite subset of the parameter space that automatically contains points very close to an unknown true parameter. The motivation for this procedure comes from recent work of Dümbgen et al. [Ann. Statist. 39(2) (2011) 702–730] on regression models with log-concave error distributions.
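As a rough illustration of the stochastic-search idea, the sketch below (our own assumptions, not the exact procedure of the paper: candidates are generated here by solving the model on small random subsamples) builds a random finite candidate set of parameter vectors and keeps the candidate that fits the full data best.

# Hedged sketch of a stochastic search for linear-regression parameters: generate a
# random finite set of candidate coefficient vectors from small random subsamples,
# then keep the candidate with the smallest residual sum of squares on the full data.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
beta_true = np.array([1.0, -2.0, 0.5])          # hypothetical "true" parameter
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=0.5, size=n)

candidates = []
for _ in range(500):                            # number of random candidates (arbitrary)
    idx = rng.choice(n, size=p, replace=False)  # minimal subsample defining one candidate
    beta_hat, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    candidates.append(beta_hat)

# The selected candidate is typically close to beta_true when many candidates are drawn.
best = min(candidates, key=lambda b: np.sum((y - X @ b) ** 2))
print(best)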
Abstract:
Several multiasset derivatives, such as basket options or options on the weighted maximum of assets, exhibit the property that their prices uniquely determine the underlying asset distribution. Related to this, we discuss how to retrieve these distributions from the corresponding derivative quotes. In contrast, the prices of exchange options do not uniquely determine the underlying distributions of asset prices, and the extent of this non-uniqueness can be characterised. The discussion is related to a geometric interpretation of multiasset derivatives as support functions of convex sets. Following this, various symmetry properties for basket, maximum and exchange options are discussed alongside their geometric interpretations, together with some decomposition results for more general payoff functions.
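For reference, the payoffs discussed in the abstract and the support function of a convex set C can be written, in standard notation (ours, not necessarily the paper's), as

\[
\text{basket: } \Big(\sum_{i=1}^{m} w_i S_i - K\Big)^{+}, \qquad
\text{maximum: } \Big(\max_{1 \le i \le m} w_i S_i - K\Big)^{+}, \qquad
\text{exchange: } (S_1 - S_2)^{+},
\qquad
h_C(w) = \sup_{x \in C} \langle w, x \rangle,
\]

which is the sense in which prices, viewed as functions of the weights w, can be read as support functions of convex sets.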
Abstract:
OBJECTIVE: To characterize PubMed usage over a typical day and compare it to previous studies of user behavior on Web search engines. DESIGN: We performed a lexical and semantic analysis of 2,689,166 queries issued on PubMed over 24 consecutive hours on a typical day. MEASUREMENTS: We measured the number of queries, number of distinct users, queries per user, terms per query, common terms, Boolean operator use, common phrases, result set size, and MeSH categories; we also used semantic measures to group queries into sessions and studied the addition and removal of terms across consecutive queries to gauge search strategies. RESULTS: The size of the result sets from a sample of queries showed a bimodal distribution, with peaks at approximately 3 and 100 results, suggesting that a large group of queries was tightly focused and another was broad. Like Web search engine sessions, most PubMed sessions consisted of a single query. However, PubMed queries contained more terms. CONCLUSION: PubMed's usage profile should be considered when educating users, building user interfaces, and developing future biomedical information retrieval systems.
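A minimal sketch of the kind of lexical statistics reported above, assuming a hypothetical plain-text log with one query per line (the semantic session grouping and MeSH mapping used in the study are not reproduced here):

# Count queries, mean terms per query, Boolean operator use and the most common terms
# from a toy list of queries; the example queries are invented placeholders.
from collections import Counter

def query_stats(queries):
    terms_per_query = [len(q.split()) for q in queries]
    boolean_queries = sum(any(op in q.split() for op in ("AND", "OR", "NOT")) for q in queries)
    common_terms = Counter(t.lower() for q in queries for t in q.split()).most_common(10)
    return {
        "queries": len(queries),
        "mean_terms_per_query": sum(terms_per_query) / len(queries),
        "queries_with_boolean": boolean_queries,
        "common_terms": common_terms,
    }

print(query_stats(["breast cancer treatment", "p53 AND apoptosis", "influenza vaccine"]))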
Abstract:
Groundwater age is a key aspect of production well vulnerability. Public drinking water supply wells typically have long screens and are expected to produce a mixture of groundwater ages. The groundwater age distributions of seven production wells of the Holten well field (Netherlands) were estimated from tritium-helium (3H/3He), krypton-85 (85Kr), and argon-39 (39Ar), using a new application of a discrete age distribution model and existing mathematical models, by minimizing the uncertainty-weighted squared differences of modeled and measured tracer concentrations. The observed tracer concentrations fitted well to a 4-bin discrete age distribution model or a dispersion model with a fraction of old groundwater. Our results show that more than 75% of the water pumped by four shallow production wells has a groundwater age of less than 20 years, and these wells are very vulnerable to recent surface contamination. More than 50% of the water pumped by three deep production wells is older than 60 years. 3H/3He samples from short screened monitoring wells surrounding the well field constrained the age stratification in the aquifer. The discrepancy between the age stratification with depth and the groundwater age distribution of the production wells showed that the well field preferentially pumps from the shallow part of the aquifer. The discrete groundwater age distribution model appears to be a suitable approach in settings where the shape of the age distribution cannot be assumed to follow a simple mathematical model, such as a production well field where wells compete for capture area.
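A hedged sketch of the fitting idea described above: choose the fractions of a discrete (binned) age distribution that minimize the uncertainty-weighted squared differences between modeled and measured tracer concentrations. The bin responses, measured values and uncertainties below are placeholders, not the Holten well-field data.

# Fit a 4-bin discrete age distribution by uncertainty-weighted least squares.
import numpy as np
from scipy.optimize import minimize

# Hypothetical modeled concentration of each tracer for water in each age bin
# (rows: tracers such as 3H/3He, 85Kr, 39Ar; columns: age bins).
C_bin = np.array([[2.0, 1.0, 0.2, 0.0],
                  [1.5, 0.8, 0.3, 0.1],
                  [1.0, 0.9, 0.7, 0.3]])
C_meas = np.array([1.2, 0.9, 0.8])          # measured concentrations (placeholders)
sigma = np.array([0.1, 0.1, 0.1])           # measurement uncertainties (placeholders)

def objective(f):
    f = np.abs(f) / np.sum(np.abs(f))       # non-negative fractions that sum to one
    return np.sum(((C_bin @ f - C_meas) / sigma) ** 2)

res = minimize(objective, x0=np.full(4, 0.25), method="Nelder-Mead")
fractions = np.abs(res.x) / np.sum(np.abs(res.x))
print(fractions)                            # fraction of pumped water in each age bin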
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal order statistics and regression on lognormal order statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
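One way to exploit the bounded support of the beta distribution with left-censored data is a censored maximum-likelihood fit, in which each value below the detection limit contributes the beta CDF at that limit. The sketch below illustrates the idea under our own assumptions; it is not necessarily the estimation procedure developed in this work.

# Censored maximum-likelihood fit of a beta distribution to concentrations scaled to [0, 1].
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
x = rng.beta(2.0, 5.0, size=200)            # synthetic scaled concentrations
dl = 0.1                                    # detection limit (placeholder value)
observed = x[x >= dl]
n_censored = int(np.sum(x < dl))

def neg_log_lik(params):
    a, b = np.exp(params)                   # keep both shape parameters positive
    ll = np.sum(stats.beta.logpdf(observed, a, b))
    ll += n_censored * stats.beta.logcdf(dl, a, b)   # censored values enter via the CDF
    return -ll

res = optimize.minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(a_hat, b_hat, stats.beta.mean(a_hat, b_hat), stats.beta.std(a_hat, b_hat))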
Abstract:
Serial correlation of extreme midlatitude cyclones observed at the storm track exits is explained by deviations from a Poisson process. To model these deviations, we apply fractional Poisson processes (FPPs) to extreme midlatitude cyclones, which are defined by the 850 hPa relative vorticity of the ERA-Interim reanalysis during boreal winter (DJF) and summer (JJA) seasons. Extremes are defined by a 99% quantile threshold in the grid-point time series. In general, FPPs are based on long-term memory and lead to non-exponential return time distributions. The return times are described by a Weibull distribution to approximate the Mittag–Leffler function in the FPPs. The Weibull shape parameter yields a dispersion parameter that agrees with results found for midlatitude cyclones. The memory of the FPP, which is determined by detrended fluctuation analysis, provides an independent estimate for the shape parameter. Thus, the analysis provides a concise framework linking the deviation from Poisson statistics (via a dispersion parameter), non-exponential return times, and memory (correlation) on the basis of a single parameter. The results have potential implications for the predictability of extreme cyclones.
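In the standard formulation (notation ours), the return-time survival function of a fractional Poisson process is a Mittag-Leffler function, which the abstract approximates by a Weibull form:

\[
P(T > t) \;=\; E_\beta\!\big(-(t/\tau)^{\beta}\big), \qquad
E_\beta(z) \;=\; \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\beta k + 1)}, \qquad
P(T > t) \;\approx\; \exp\!\big(-(t/\tau)^{\beta}\big),
\]

with 0 < β ≤ 1; β = 1 recovers the ordinary Poisson process with exponential return times.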
Abstract:
The distributions of event-by-event harmonic flow coefficients v_n for n = 2-4 are measured in √s_NN = 2.76 TeV Pb+Pb collisions using the ATLAS detector at the LHC. The measurements are performed using charged particles with transverse momentum pT > 0.5 GeV and in the pseudorapidity range |η| < 2.5 in a dataset of approximately 7 μb^-1 recorded in 2010. The shapes of the v_n distributions are described by a two-dimensional Gaussian function for the underlying flow vector in central collisions for v_2 and over most of the measured centrality range for v_3 and v_4. Significant deviations from this function are observed for v_2 in mid-central and peripheral collisions, and a small deviation is observed for v_3 in mid-central collisions. It is shown that the commonly used multi-particle cumulants are insensitive to the deviations for v_2. The v_n distributions are also measured independently for charged particles with 0.5 < pT < 1 GeV and with pT > 1 GeV.
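For reference, a two-dimensional Gaussian for the underlying flow vector with mean \bar v_n and width δ implies, in the notation commonly used for such analyses (not taken from the paper itself), a Bessel-Gaussian distribution for the magnitude v_n:

\[
p(v_n) \;=\; \frac{v_n}{\delta^{2}}\,
\exp\!\left(-\frac{v_n^{2} + \bar v_n^{2}}{2\delta^{2}}\right)
I_0\!\left(\frac{v_n \bar v_n}{\delta^{2}}\right),
\]

where I_0 is the modified Bessel function of the first kind; the deviations reported above are deviations from this shape.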
Abstract:
Mass and angular distributions of dijets produced in LHC proton-proton collisions at a centre-of-mass energy √s = 7 TeV have been studied with the ATLAS detector using the full 2011 data set with an integrated luminosity of 4.8 fb^-1. Dijet masses up to ~4.0 TeV have been probed. No resonance-like features have been observed in the dijet mass spectrum, and all angular distributions are consistent with the predictions of QCD. Exclusion limits on six hypotheses of new phenomena have been set at 95% CL in terms of mass or energy scale, as appropriate. These hypotheses include excited quarks below 2.83 TeV, colour octet scalars below 1.86 TeV, heavy W bosons below 1.68 TeV, string resonances below 3.61 TeV, quantum black holes with six extra space-time dimensions for quantum gravity scales below 4.11 TeV, and quark contact interactions below a compositeness scale of 7.6 TeV in a destructive interference scenario.
Abstract:
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially depending on the approach to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which translates into inconsistent results on the potential higher risk of one technique compared to another. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated, even though the magnitude of this reduction varied substantially across the different approaches. Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy), as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
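A minimal sketch of the linear organ-specific approach mentioned above: multiply each organ's mean dose by an organ-specific risk coefficient and sum over organs. The organ list, doses and coefficients below are illustrative placeholders, not ICRP or BEIR VII values and not the study's Monte Carlo doses.

# Linear organ-specific secondary-cancer risk estimate (placeholder numbers).
organ_mean_dose_gy = {"lung": 1.2, "contralateral_breast": 0.8, "thyroid": 0.05}
risk_per_gy = {"lung": 0.010, "contralateral_breast": 0.008, "thyroid": 0.002}

total_risk = sum(organ_mean_dose_gy[o] * risk_per_gy[o] for o in organ_mean_dose_gy)
print(f"estimated absolute secondary-cancer risk: {total_risk:.3%}")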
Abstract:
Source of the digitized copy: composite volume, No. 1