962 results for Homogeneous Distributions



Abstract:

Let P be a probability distribution on q-dimensional space. The so-called Diaconis-Freedman effect means that for a fixed dimension d < q, most d-dimensional projections of P look like a scale mixture of spherically symmetric Gaussian distributions. The present paper provides necessary and sufficient conditions for this phenomenon in a suitable asymptotic framework with increasing dimension q. It turns out that the conditions formulated by Diaconis and Freedman (1984) are not only sufficient but necessary as well. Moreover, letting P̂ be the empirical distribution of n independent random vectors with distribution P, we investigate the behavior of the empirical process n^{1/2}(P̂ − P) under random projections, conditional on P̂.
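
A minimal numerical illustration of the effect (my own Python sketch, not from the paper; the dimension, sample size, and exponential coordinates are assumed for demonstration): a one-dimensional random projection of a markedly non-Gaussian high-dimensional sample is already close to Gaussian for "most" directions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
q, n = 500, 2000                        # ambient dimension and sample size (assumed)
X = rng.exponential(size=(n, q)) - 1.0  # markedly non-Gaussian coordinates

u = rng.normal(size=q)
u /= np.linalg.norm(u)                  # uniform random direction on the sphere
proj = X @ u                            # a d = 1 projection of the sample

# The standardized projection is close to N(0, 1) for "most" directions.
z = (proj - proj.mean()) / proj.std()
print(stats.kstest(z, "norm"))
```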


Abstract:

Several multiasset derivatives, such as basket options or options on the weighted maximum of assets, exhibit the property that their prices uniquely determine the underlying asset distribution. Related to that, we discuss how to retrieve these distributions from the corresponding derivative quotes. In contrast, the prices of exchange options do not uniquely determine the underlying distributions of asset prices, and the extent of this non-uniqueness can be characterised. The discussion is related to a geometric interpretation of multiasset derivatives as support functions of convex sets. Following this, various symmetry properties for basket, maximum, and exchange options are discussed along with their geometric interpretations, as well as some decomposition results for more general payoff functions.
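
A hedged Monte Carlo sketch of the non-uniqueness claim (illustrative Python; the lognormal marginals and the common shift are my assumptions, not the paper's construction): the exchange-option payoff max(S1 − S2, 0) depends on the joint law only through the difference S1 − S2, so adding a common random shift to both assets changes the joint distribution without changing the price.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
s1 = rng.lognormal(0.0, 0.2, n)       # asset 1 prices (assumed marginal)
s2 = rng.lognormal(0.0, 0.3, n)       # asset 2 prices (assumed marginal)

c = rng.exponential(1.0, n)           # common random shift: changes the joint law
t1, t2 = s1 + c, s2 + c               # ...but leaves the difference S1 - S2 intact

price_a = np.maximum(s1 - s2, 0).mean()
price_b = np.maximum(t1 - t2, 0).mean()
print(price_a, price_b)               # identical by construction
```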


Abstract:

The goal of this paper is to establish exponential convergence of $hp$-version interior penalty (IP) discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems with homogeneous Dirichlet boundary conditions and piecewise analytic data in three-dimensional polyhedral domains. More precisely, we shall analyze the convergence of the $hp$-IP dG methods considered in [D. Schötzau, C. Schwab, T. P. Wihler, SIAM J. Numer. Anal., 51 (2013), pp. 1610--1633] based on axiparallel $\sigma$-geometric anisotropic meshes and $\bm{s}$-linear anisotropic polynomial degree distributions.
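
A toy 1-D sketch of the mesh-and-degree combination named in the abstract (my illustration, not the paper's 3-D axiparallel construction; the grading factor sigma = 0.5, six layers, and slope s = 1 are assumed): elements refine geometrically toward a singularity at x = 0 while the polynomial degree grows linearly away from it.

```python
sigma, n_layers, s = 0.5, 6, 1

# Element edges x_k = sigma**k, graded geometrically toward the singularity at 0.
edges = [0.0] + [sigma**k for k in range(n_layers - 1, -1, -1)]

# Degree grows linearly with distance from the singularity: p_j = max(1, s*j).
degrees = [max(1, s * j) for j in range(1, n_layers + 1)]

for (a, b), p in zip(zip(edges, edges[1:]), degrees):
    print(f"element [{a:.4f}, {b:.4f}]  degree {p}")
```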


Abstract:

Divalent metal-ion transporter 1 (DMT1) is a proton-coupled Fe(2+) transporter that is essential for iron uptake in enterocytes and for transferrin-associated endosomal iron transport in many other cell types. DMT1 dysfunction is associated with several diseases, such as iron-overload disorders and neurodegenerative diseases. The main objective of the present work was to develop and validate a fluorescence-based screening assay for DMT1 modulators. We found that Fe(2+) or Cd(2+) influx could be reliably monitored in Calcium 5-loaded DMT1-expressing HEK293 cells using the FLIPR Tetra fluorescence microplate reader. DMT1-mediated metal transport shows saturation kinetics as a function of the extracellular substrate concentration, with K0.5 values of 1.4 µM and 3.5 µM for Fe(2+) and Cd(2+), respectively. In addition, using Cd(2+) as a substrate for DMT1, we found a Ki value of 2.1 µM for 2-(3-carbamimidoylsulfanylmethyl-benzyl)-isothiourea, a compound of the benzylisothiourea family that has been identified as a DMT1 inhibitor. The optimized screening method, using this compound as a reference, demonstrated a Z' factor of 0.51. In summary, we developed and validated a sensitive and reproducible cell-based fluorescence assay suitable for the identification of compounds that specifically modulate DMT1 transport activity.
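
A hedged sketch of the two quantitative ingredients mentioned above, on synthetic numbers rather than the study's data: a saturation-kinetics fit for K0.5, and the standard screening-quality factor Z' = 1 − 3(σ_pos + σ_neg)/|μ_pos − μ_neg|, for which Z' > 0.5 is a commonly used robustness threshold.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(s, vmax, k05):
    # Michaelis-Menten-type saturation: v = Vmax * [S] / (K0.5 + [S])
    return vmax * s / (k05 + s)

conc = np.array([0.25, 0.5, 1, 2, 4, 8, 16])   # substrate concentrations in µM (assumed)
rate = saturation(conc, 100.0, 1.4) + np.random.default_rng(2).normal(0, 2, conc.size)
(vmax, k05), _ = curve_fit(saturation, conc, rate, p0=(80, 1))
print(f"fitted K0.5 = {k05:.2f} µM")

# Z' factor from hypothetical positive- and negative-control wells.
pos = np.array([95, 98, 102, 97.0])
neg = np.array([10, 12, 9, 11.0])
z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
print(f"Z' = {z_prime:.2f}")
```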


Abstract:

Claystones are considered worldwide as barrier materials for nuclear waste repositories. In the Mont Terri underground research laboratory (URL), a nearly 4-year diffusion and retention (DR) experiment has been performed in Opalinus Clay. It aimed at (1) obtaining data at larger space and time scales than in laboratory experiments, (2) doing so under relevant in situ conditions with respect to pore water chemistry and mechanical stress, (3) quantifying the anisotropy of in situ diffusion, and (4) exploring possible effects of a borehole-disturbed zone. The experiment included two tracer injection intervals in a borehole perpendicular to bedding, through which traced artificial pore water (APW) was circulated, and a pressure monitoring interval. The APW was spiked with neutral tracers (HTO, HDO, H2O-18), anions (Br, I, SeO4), and cations (Na-22, Ba-133, Sr-85, Cs-137, Co-60, Eu-152, stable Cs, and stable Eu). Most tracers were added at the beginning; some were added at a later stage. The hydraulic pressure in the injection intervals was adjusted according to the measured value in the pressure monitoring interval to ensure transport by diffusion only. Concentration time series in the APW within the borehole intervals were obtained, as well as 2D concentration distributions in the rock at the end of the experiment after overcoring and subsampling, which resulted in ~250 samples and ~1300 analyses. As expected, HTO diffused the furthest into the rock, followed by the anions (Br, I, SeO4) and by the cationic sorbing tracers (Na-22, Ba-133, Cs, Cs-137, Co-60, Eu-152). The diffusion of SeO4 was slower than that of Br or I, approximately proportional to the ratio of their diffusion coefficients in water. Ba-133 diffused only ~0.1 m during the ~4 a. Stable Cs, added at a higher concentration than Cs-137, diffused further into the rock than Cs-137, consistent with non-linear sorption behavior. The rock properties (e.g., water contents) were rather homogeneous at the centimeter scale, with no evidence of a borehole-disturbed zone. In situ anisotropy ratios for diffusion, derived for the first time directly from field data, are larger for HTO and Na-22 (~5) than for anions (~3–4 for Br and I). The lower ionic strength of the pore water at this location (~0.22 M) as compared to locations of earlier experiments in the Mont Terri URL (~0.39 M) had no notable effect on the anion-accessible pore fraction for Cl, Br, and I: the value of 0.55 is within the range of earlier data. Detailed transport simulations involving different codes will be presented in a companion paper.
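
For intuition about the reported penetration depths, a back-of-the-envelope sketch (assumed diffusivity, porosity, and retardation values, not the experiment's parameters) using the constant-source solution C/C0 = erfc(x / (2·sqrt(Da·t))) with apparent diffusivity Da = De/(φ·R): a strongly sorbing tracer (large R) penetrates far less than HTO over ~4 years.

```python
import numpy as np
from scipy.special import erfc

t = 4 * 365.25 * 86400.0              # ~4 a in seconds
de, phi = 1e-11, 0.15                 # effective diffusivity (m^2/s), porosity (assumed)

for name, R in [("HTO-like", 1.0), ("sorbing, Ba-133-like", 50.0)]:
    da = de / (phi * R)               # apparent diffusion coefficient
    x = np.linspace(0, 0.5, 1000)     # distance into the rock (m)
    profile = erfc(x / (2 * np.sqrt(da * t)))
    depth = x[profile > 0.01][-1]     # depth where C/C0 drops below 1%
    print(f"{name}: penetration ~{depth:.2f} m")
```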


Abstract:

Groundwater age is a key aspect of production well vulnerability. Public drinking water supply wells typically have long screens and are expected to produce a mixture of groundwater ages. The groundwater age distributions of seven production wells of the Holten well field (Netherlands) were estimated from tritium-helium (3H/3He), krypton-85 (85Kr), and argon-39 (39Ar), using a new application of a discrete age distribution model and existing mathematical models, by minimizing the uncertainty-weighted squared differences between modeled and measured tracer concentrations. The observed tracer concentrations fitted well to a 4-bin discrete age distribution model or a dispersion model with a fraction of old groundwater. Our results show that more than 75% of the water pumped by four shallow production wells has a groundwater age of less than 20 years, and these wells are very vulnerable to recent surface contamination. More than 50% of the water pumped by three deep production wells is older than 60 years. 3H/3He samples from short-screened monitoring wells surrounding the well field constrained the age stratification in the aquifer. The discrepancy between the age stratification with depth and the groundwater age distributions of the production wells showed that the well field preferentially pumps from the shallow part of the aquifer. The discrete groundwater age distribution model appears to be a suitable approach in settings where the shape of the age distribution cannot be assumed to follow a simple mathematical model, such as a production well field where wells compete for capture area.
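
A hedged sketch of the fitting idea (synthetic observations and a deliberately simplified tracer model; the bin ages, tracer responses, and uncertainties are all assumptions): choose discrete age-bin fractions that minimize the uncertainty-weighted squared misfit between modeled and measured tracer concentrations.

```python
import numpy as np
from scipy.optimize import minimize

ages = np.array([5.0, 25.0, 45.0, 80.0])        # 4 age bins in years (assumed)

def model_conc(fractions):
    # Toy tracer responses per age bin: tritium decays (T1/2 ~ 12.32 a);
    # an "old water" indicator rises with age. Purely illustrative.
    h3 = 10.0 * 0.5 ** (ages / 12.32)           # TU surviving after decay
    old = (ages > 60).astype(float)             # fraction of old water
    return np.array([fractions @ h3, fractions @ old])

measured = np.array([4.0, 0.2])                 # hypothetical observations
sigma = np.array([0.5, 0.05])                   # measurement uncertainties

def misfit(f):
    f = np.abs(f) / np.abs(f).sum()             # enforce a valid distribution
    return (((model_conc(f) - measured) / sigma) ** 2).sum()

res = minimize(misfit, x0=np.full(4, 0.25), method="Nelder-Mead")
f = np.abs(res.x) / np.abs(res.x).sum()
print("fitted age-bin fractions:", f.round(2))
```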


Abstract:

The MDAH pencil-beam algorithm developed by Hogstrom et al. (1981) has been widely used in clinics for electron-beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The source of the latter inaccuracy is believed to lie primarily in assumptions made in the pencil beam's modeling of the complex phantom or patient geometry.

A pencil-beam redefinition model was developed for the calculation of electron-beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport and redefines the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made. From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site-specific treatment planning problems.
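
A minimal cartoon of the pencil-beam concept underlying both algorithms (illustrative only; the Gaussian spread parametrization σ(z) = σ0 + k·z is my assumption, not the MDAH or redefinition model): a broad beam is a superposition of pencil beams whose Gaussian lateral profiles widen with depth, so the field penumbra broadens downstream.

```python
import numpy as np

def pencil_dose(x, x0, z, sigma0=0.2, k=0.05):
    # Lateral Gaussian spread growing with depth z (assumed parametrization).
    sigma = sigma0 + k * z
    return np.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-3, 3, 601)            # lateral positions (cm)
beamlets = np.linspace(-1, 1, 41)      # pencil positions across a 2 cm field

for z in (0.5, 2.0, 4.0):              # depths (cm)
    dose = sum(pencil_dose(x, x0, z) for x0 in beamlets)
    fwhm = np.ptp(x[dose >= 0.5 * dose.max()])
    print(f"z = {z} cm: profile FWHM ~ {fwhm:.2f} cm")
```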


Abstract:

Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracies of all three methods were compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
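
A hedged sketch of the core idea (generic censored maximum likelihood, not necessarily the exact estimator developed in the study; the synthetic data and detection limit are assumed): fit beta parameters with the censored likelihood L = Π f(x_i) · F(DL)^{n_cens}, so each below-detection value contributes the probability mass between zero and the detection limit.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(3)
data = rng.beta(2.0, 8.0, 200)        # synthetic concentrations scaled to [0, 1]
dl = 0.08                             # detection limit (assumed)
observed = data[data >= dl]
n_cens = (data < dl).sum()

def neg_log_lik(params):
    a, b = np.exp(params)             # keep both shape parameters positive
    return -(stats.beta.logpdf(observed, a, b).sum()
             + n_cens * stats.beta.logcdf(dl, a, b))

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a, b = np.exp(res.x)
print(f"fitted beta({a:.2f}, {b:.2f}); mean = {a / (a + b):.3f}")
```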


Abstract:

Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, which are classified as bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit non-normal distributions. Measurements are made on several hundred cells per patient, so summary measures reflecting the underlying distribution are needed.

Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells.

Data transformations such as log, square root, and 1/x did not yield normality as measured by the Shapiro-Wilk test. A modulus transformation, used for distributions having abnormal kurtosis values, also did not produce normality.

Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and 3 variables (DNA concentration, shape, and elongation) showed the strongest evidence of bimodality and were studied further.

Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component Gaussian model with equal variances. The mixture component parameters were used to bootstrap the log-likelihood ratio to determine the significant number of components (1 or 2). These summary measures were used as predictors of disease severity in several proportional-odds logistic regression models. The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+), and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results of changes in mean levels and proportions of the components at the lower severity levels.
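
For reference, the commonly used sample bimodality coefficient is BC = (g1² + 1) / (g2 + 3(n−1)²/((n−2)(n−3))), with skewness g1 and excess kurtosis g2; values above 5/9 (the uniform distribution's BC) suggest bimodality. A short sketch on synthetic data (the paper's exact variant may differ):

```python
import numpy as np
from scipy import stats

def bimodality_coefficient(x):
    n = len(x)
    g1 = stats.skew(x, bias=False)
    g2 = stats.kurtosis(x, bias=False)   # excess kurtosis
    return (g1**2 + 1) / (g2 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

rng = np.random.default_rng(4)
unimodal = rng.normal(0, 1, 300)
bimodal = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(2, 0.5, 150)])
print(bimodality_coefficient(unimodal))  # well below 5/9
print(bimodality_coefficient(bimodal))   # well above 5/9
```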


Abstract:

Serial correlation of extreme midlatitude cyclones observed at the storm track exits is explained by deviations from a Poisson process. To model these deviations, we apply fractional Poisson processes (FPPs) to extreme midlatitude cyclones, which are defined by the 850 hPa relative vorticity of the ERA-Interim reanalysis during boreal winter (DJF) and summer (JJA) seasons. Extremes are defined by a 99% quantile threshold in the grid-point time series. In general, FPPs are based on long-term memory and lead to non-exponential return time distributions. The return times are described by a Weibull distribution to approximate the Mittag–Leffler function in the FPPs. The Weibull shape parameter yields a dispersion parameter that agrees with results found for midlatitude cyclones. The memory of the FPP, which is determined by detrended fluctuation analysis, provides an independent estimate for the shape parameter. Thus, the analysis yields a concise framework linking the deviation from Poisson statistics (via a dispersion parameter), non-exponential return times, and memory (correlation) on the basis of a single parameter. The results have potential implications for the predictability of extreme cyclones.
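
A hedged sketch of the return-time analysis on a synthetic long-memory surrogate (not ERA-Interim data; the smoothing window and cluster filter are my assumptions): a fitted Weibull shape parameter c < 1 signals clustered, non-exponential return times, whereas a Poisson process would give c = 1 (exponential).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Long-memory surrogate: moving-average-smoothed noise gives clustered extremes.
x = np.convolve(rng.normal(size=200_000), np.ones(50) / 50, mode="valid")
exceed = np.flatnonzero(x > np.quantile(x, 0.99))   # 99% quantile threshold
returns = np.diff(exceed).astype(float)
returns = returns[returns > 1]                      # drop within-cluster repeats

c, loc, scale = stats.weibull_min.fit(returns, floc=0)
print(f"Weibull shape c = {c:.2f}  (c < 1 => clustered, non-exponential)")
```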


Abstract:

The International GNSS Service (IGS) provides operational products for the GPS and GLONASS constellations. Homogeneously processed time series of parameters from the IGS are only available for GPS. Reprocessed GLONASS series are provided only by individual Analysis Centers (i.e., CODE and ESA), making it difficult to fully include the GLONASS system in a rigorous GNSS analysis. In view of the increasing number of active GLONASS satellites and a steadily growing number of GPS+GLONASS-tracking stations available over the past few years, Technische Universität Dresden, Technische Universität München, Universität Bern, and Eidgenössische Technische Hochschule Zürich performed a combined reprocessing of GPS and GLONASS observations. SLR observations to GPS and GLONASS are also included in this reprocessing effort; here, we show only SLR results from a GNSS orbit validation. In total, 18 years of data (1994–2011) have been processed from altogether 340 GNSS and 70 SLR stations. The use of GLONASS observations in addition to GPS has no impact on the estimated linear terrestrial reference frame parameters. However, daily station positions show an RMS reduction of 0.3 mm on average for the height component when additional GLONASS observations can be used for the time series determination. Analyzing satellite orbit overlaps, the rigorous combination of GPS and GLONASS neither improves nor degrades the GPS orbit precision. For GLONASS, however, the quality of the microwave-derived GLONASS orbits improves due to the combination. These findings are confirmed using independent SLR observations for a GNSS orbit validation. In comparison to previous studies, mean SLR biases for satellites GPS-35 and GPS-36 could be reduced in magnitude from −35 and −38 mm to −12 and −13 mm, respectively. Our results show that the remaining SLR biases depend on the satellite type and the use of coated or uncoated retro-reflectors. For Earth rotation parameters, the increasing number of GLONASS satellites and tracking stations over the past few years leads to differences between GPS-only and GPS+GLONASS combined solutions that are most pronounced in the pole rate estimates, with a maximum magnitude of 0.2 mas/day. At the same time, the difference between GLONASS-only and combined solutions decreases. The derived GNSS orbits are used to estimate combined GPS+GLONASS satellite clocks, with first results presented in this paper. Phase observation residuals from a precise point positioning are at the level of 2 mm and particularly reveal poorly modeled yaw-maneuver periods.


Abstract:

The distributions of event-by-event harmonic flow coefficients v_n for n = 2-4 are measured in sqrt(s_NN) = 2.76 TeV Pb+Pb collisions using the ATLAS detector at the LHC. The measurements are performed using charged particles with transverse momentum pT > 0.5 GeV and in the pseudorapidity range |eta| < 2.5 in a dataset of approximately 7 µb^-1 recorded in 2010. The shapes of the v_n distributions are described by a two-dimensional Gaussian function for the underlying flow vector in central collisions for v_2 and over most of the measured centrality range for v_3 and v_4. Significant deviations from this function are observed for v_2 in mid-central and peripheral collisions, and a small deviation is observed for v_3 in mid-central collisions. It is shown that the commonly used multi-particle cumulants are insensitive to the deviations for v_2. The v_n distributions are also measured independently for charged particles with 0.5 < pT < 1 GeV and with pT > 1 GeV. When these distributions are rescaled to the same mean values, the adjusted shapes are found to be nearly the same for these two pT ranges. The v_n distributions are compared with the eccentricity distributions from two models for the initial collision geometry: a Glauber model and a model that includes corrections to the initial geometry due to gluon saturation effects. Both models fail to describe the experimental data consistently over most of the measured centrality range.
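
The two-dimensional Gaussian ansatz for the flow vector implies the Bessel-Gaussian (Rice) form for the magnitude, p(v) = (v/δ²) exp(−(v² + v0²)/(2δ²)) I0(v·v0/δ²). A short numerical sketch with assumed parameter values (not the measured ones):

```python
import numpy as np
from scipy.special import i0
from scipy.integrate import trapezoid

v0, delta = 0.06, 0.03        # assumed mean flow and Gaussian fluctuation width
v = np.linspace(0, 0.3, 3000)
p = (v / delta**2) * np.exp(-(v**2 + v0**2) / (2 * delta**2)) * i0(v * v0 / delta**2)

print("normalization:", trapezoid(p, v))   # ~1, as required of a density
print("<v_n> =", trapezoid(v * p, v))      # mean of the event-by-event distribution
```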


Abstract:

Mass and angular distributions of dijets produced in LHC proton-proton collisions at a centre-of-mass energy sqrt(s) = 7 TeV have been studied with the ATLAS detector using the full 2011 data set, with an integrated luminosity of 4.8 fb^-1. Dijet masses up to ~4.0 TeV have been probed. No resonance-like features have been observed in the dijet mass spectrum, and all angular distributions are consistent with the predictions of QCD. Exclusion limits on six hypotheses of new phenomena have been set at 95% CL in terms of mass or energy scale, as appropriate. These hypotheses include excited quarks below 2.83 TeV, colour octet scalars below 1.86 TeV, heavy W bosons below 1.68 TeV, string resonances below 3.61 TeV, quantum black holes with six extra space-time dimensions for quantum gravity scales below 4.11 TeV, and quark contact interactions below a compositeness scale of 7.6 TeV in a destructive interference scenario.