979 results for symmetric distribution functions


Relevance: 80.00%

Abstract:

Dissertation for the degree of Master in Informatics Engineering

Relevance: 80.00%

Abstract:

Doctoral thesis in Polymer and Composite Science and Engineering

Relevance: 80.00%

Abstract:

Simultaneous measurements of the tt̄, W⁺W⁻, and Z/γ* → ττ production cross-sections using an integrated luminosity of 4.6 fb⁻¹ of pp collisions at √s = 7 TeV collected by the ATLAS detector at the LHC are presented. Events are selected with two high transverse momentum leptons consisting of an oppositely charged electron and muon pair. The three processes are separated using the distributions of the missing transverse momentum of events with zero and greater than zero jet multiplicities. Measurements of the fiducial cross-section are presented along with results that quantify for the first time the underlying correlations in the predicted and measured cross-sections due to proton parton distribution functions. These results indicate that the correlated NLO predictions for tt̄ and Z/γ* → ττ significantly underestimate the data, while those at NNLO generally describe the data well. The full cross-sections are measured to be σ(tt̄) = 181.2 ± 2.8 +9.7/−9.5 ± 3.3 ± 3.3 pb, σ(W⁺W⁻) = 53.3 ± 2.7 +7.3/−8.0 ± 1.0 ± 0.5 pb, and σ(Z/γ* → ττ) = 1174 ± 24 +72/−87 ± 21 ± 9 pb, where the quoted uncertainties are due to statistics, systematic effects, luminosity, and the LHC beam energy measurement, respectively.

Relevance: 80.00%

Abstract:

The inclusive jet cross-section is measured in proton–proton collisions at a centre-of-mass energy of 7 TeV using a data set corresponding to an integrated luminosity of 4.5 fb⁻¹ collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-kt algorithm with radius parameter values of 0.4 and 0.6. The double-differential cross-sections are presented as a function of the jet transverse momentum and the jet rapidity, covering jet transverse momenta from 100 GeV to 2 TeV. Next-to-leading-order QCD calculations corrected for non-perturbative effects and electroweak effects, as well as Monte Carlo simulations with next-to-leading-order matrix elements interfaced to parton showering, are compared to the measured cross-sections. A quantitative comparison of the measured cross-sections to the QCD calculations using several sets of parton distribution functions is performed.

Relevance: 80.00%

Abstract:

A measurement of W boson production in lead-lead collisions at √sNN = 2.76 TeV is presented. It is based on the analysis of data collected with the ATLAS detector at the LHC in 2011, corresponding to an integrated luminosity of 0.14 nb⁻¹ and 0.15 nb⁻¹ in the muon and electron decay channels, respectively. The differential production cross-sections and lepton charge asymmetry are each measured as a function of the average number of participating nucleons ⟨Npart⟩ and absolute pseudorapidity of the charged lepton. The results are compared to predictions based on next-to-leading-order QCD calculations. These measurements are, in principle, sensitive to possible nuclear modifications to the parton distribution functions and also provide information on scaling of W boson production in multi-nucleon systems.

Relevance: 80.00%

Abstract:

Double-differential three-jet production cross-sections are measured in proton–proton collisions at a centre-of-mass energy of √s = 7 TeV using the ATLAS detector at the Large Hadron Collider. The measurements are presented as a function of the three-jet mass (mjjj), in bins of the sum of the absolute rapidity separations between the three leading jets (|Y∗|). Invariant masses extending up to 5 TeV are reached for 8 < |Y∗| < 10. These measurements use a sample of data recorded using the ATLAS detector in 2011, which corresponds to an integrated luminosity of 4.51 fb⁻¹. Jets are identified using the anti-kt algorithm with two different jet radius parameters, R = 0.4 and R = 0.6. The dominant uncertainty in these measurements comes from the jet energy scale. Next-to-leading-order QCD calculations corrected to account for non-perturbative effects are compared to the measurements. Good agreement is found between the data and the theoretical predictions based on most of the available sets of parton distribution functions, over the full kinematic range, covering almost seven orders of magnitude in the measured cross-section values.

Relevance: 80.00%

Abstract:

The classical central limit theorem states the uniform convergence of the distribution functions of the standardized sums of independent and identically distributed square integrable real-valued random variables to the standard normal distribution function. While first versions of the central limit theorem are already due to de Moivre (1730) and Laplace (1812), a systematic study of this topic started at the beginning of the last century with the fundamental work of Lyapunov (1900, 1901). By now, extensions of the central limit theorem are available for a multitude of settings. This includes, e.g., Banach space valued random variables as well as substantial relaxations of the assumptions of independence and identical distributions. Furthermore, explicit error bounds are established and asymptotic expansions are employed to obtain better approximations. Classical error estimates like the famous bound of Berry and Esseen are stated in terms of absolute moments of the random summands and therefore do not reflect a potential closeness of the distributions of the single random summands to a normal distribution. Non-classical approaches take this issue into account by providing error estimates based on, e.g., pseudomoments. The latter field of investigation was initiated by work of Zolotarev in the 1960s and is still in its infancy compared to the development of the classical theory. For example, non-classical error bounds for asymptotic expansions seem not to be available up to now ...
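
The uniform convergence described in this abstract can be illustrated numerically; the sketch below (entirely my own, with hypothetical helper names, not taken from the cited literature) measures the Kolmogorov distance between the empirical distribution of standardized uniform sums and the standard normal, which shrinks at the Berry–Esseen rate O(1/√n) as the number of summands grows.

```python
import math
import random

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kolmogorov_distance(n, trials=20000, seed=0):
    """Sup-distance between the empirical CDF of the standardized sum of
    n i.i.d. Uniform(0,1) summands and the standard normal CDF."""
    rng = random.Random(seed)
    mu, sigma = 0.5, math.sqrt(1.0 / 12.0)
    sums = sorted(
        (sum(rng.random() for _ in range(n)) - n * mu) / (sigma * math.sqrt(n))
        for _ in range(trials)
    )
    # The empirical CDF jumps at each order statistic; check both sides.
    return max(
        max(abs((i + 1) / trials - std_normal_cdf(x)),
            abs(i / trials - std_normal_cdf(x)))
        for i, x in enumerate(sums)
    )

d1, d32 = kolmogorov_distance(1), kolmogorov_distance(32)
print(d1, d32)  # the distance shrinks as n grows
```

With one summand the distance is essentially that of the uniform itself (about 0.05); with 32 summands it is dominated by Monte Carlo noise, consistent with the classical convergence statement.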

Relevance: 80.00%

Abstract:

The paper proposes and applies statistical tests for poverty dominance that check whether poverty comparisons can be made robustly over ranges of poverty lines and classes of poverty indices. This helps provide both normative and statistical confidence in establishing poverty rankings across distributions. The tests, which can take into account the complex sampling procedures that are typically used by statistical agencies to generate household-level surveys, are implemented using the Canadian Survey of Labour and Income Dynamics (SLID) for 1996, 1999 and 2002. Although the yearly cumulative distribution functions cross at the lower tails of the distributions, the more recent years tend to dominate earlier years for a relatively wide range of poverty lines. Failing to take into account SLID's sampling variability (as is sometimes done) can significantly inflate one's confidence in ranking poverty. Taking into account SLID's complex sampling design (as has not been done before) can also substantially decrease the range of poverty lines over which a poverty ranking can be inferred.
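
The paper's tests are more sophisticated (they handle complex survey design and statistical inference), but the underlying first-order comparison over a range of poverty lines can be sketched in a few lines. Everything below, including the toy income samples and function names, is my own illustrative assumption:

```python
# Toy sketch of first-order poverty dominance: distribution B dominates A
# over a range of poverty lines if B's headcount ratio (empirical CDF)
# never exceeds A's at any line in that range.

def headcount(incomes, poverty_line):
    """Headcount poverty ratio: share of incomes at or below the line."""
    return sum(y <= poverty_line for y in incomes) / len(incomes)

def dominates(sample_a, sample_b, lines):
    """True if sample_b shows no more poverty than sample_a at every
    poverty line considered (first-order dominance over that range)."""
    return all(headcount(sample_b, z) <= headcount(sample_a, z) for z in lines)

year_a = [8, 10, 12, 14, 18, 22, 25, 30]  # toy incomes, earlier year
year_b = [9, 12, 15, 16, 20, 24, 28, 33]  # toy incomes, later year
poverty_lines = [10, 15, 20, 25]
print(dominates(year_a, year_b, poverty_lines))  # → True
```

The paper's contribution is precisely that such pointwise comparisons must be backed by statistical tests that respect the survey's sampling design before a ranking can be inferred.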

Relevance: 80.00%

Abstract:

To describe the collective behavior of large ensembles of neurons in a neuronal network, a kinetic theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was directly derived from the microscopic dynamics of individual neurons, which are modeled by conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of probability distribution functions, and thus the evolution of all possible macroscopic quantities that are given by suitable moments of the probability density function. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where a bifurcation scenario, of asynchronous convergence towards stationary states, periodic synchronous solutions or damped oscillatory convergence towards stationary states, can be uncovered by increasing the strength of the excitatory coupling. Finally, the computation of moments of the probability distribution allows us to validate the applicability of a moment closure assumption used in [13] to further simplify the kinetic theory.

Relevance: 80.00%

Abstract:

The dynamics of homogeneously heated granular gases which fragment due to particle collisions is analyzed. We introduce a kinetic model which accounts for correlations induced at the grain collisions and analyze both the kinetics and relevant distribution functions these systems develop. The work combines analytical and numerical studies based on direct simulation Monte Carlo calculations. A broad family of fragmentation probabilities is considered, and its implications for the system kinetics are discussed. We show that generically these driven materials evolve asymptotically into a dynamical scaling regime. If the fragmentation probability tends to a constant, the grain number diverges at a finite time, leading to a shattering singularity. If the fragmentation probability vanishes, then the number of grains grows monotonically as a power law. We consider different homogeneous thermostats and show that the kinetics of these systems depends weakly on both the grain inelasticity and driving. We observe that fragmentation plays a relevant role in the shape of the velocity distribution of the particles. When the fragmentation is driven by local stochastic events, the high-velocity tail is essentially exponential, independently of the heating frequency and the breaking rule. However, for a Lowe-Andersen thermostat, numerical evidence strongly supports the conjecture that the scaled velocity distribution follows a generalized exponential behavior f(c) ~ exp(−cⁿ), with n ≈ 1.2, regardless of the fragmentation mechanism.

Relevance: 80.00%

Abstract:

Introduction: Increased respiratory pattern variability is associated with improved oxygenation. Pressure support (PS) is a widely used partial-assist mechanical ventilation (MV) mode, in which each breathing cycle is initiated by flow or pressure variation at the airway due to patient inspiratory effort. Neurally adjusted ventilatory assist (NAVA) is relatively new and uses the electrical activity of the diaphragm (Eadi) to deliver ventilatory support proportional to the patient's inspiratory demand. We hypothesize that respiratory variability should be greater with NAVA than with PS.

Methods: Twenty-two patients underwent 20 minutes of PS followed by 20 minutes of NAVA. Flow and Eadi curves were used to obtain tidal volume (Vt) and ∫Eadi for 300 to 400 breaths in each patient. Patient-specific cumulative distribution functions (CDFs) show the percentage of Vt and ∫Eadi within a clinically defined (±10%) variability band for each patient. Values are normalized to patient-specific medians for direct comparison. Variability in Vt (outcome) is thus expressed in terms of variability in ∫Eadi (demand) on the same plot.

Results: Variability in Vt relative to variability in ∫Eadi is significantly greater for NAVA than for PS (P = 0.00012). Hence, greater variability in outcome Vt is obtained for a given demand in ∫Eadi under NAVA, as illustrated in Figure 1 for a typical patient. A Fisher 2 × 2 contingency analysis showed that 45% of patients under NAVA had Vt variability in equal proportion to ∫Eadi variability, versus 0% for PS (P < 0.05).

Conclusions: NAVA yields greater variability in tidal volume relative to ∫Eadi demand, and a better match between Vt and ∫Eadi. These results indicate that NAVA could achieve improved oxygenation compared with PS when sufficient underlying variability in ∫Eadi is present, due to its ability to achieve higher tidal volume variability from a given variability in ∫Eadi.
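
The ±10% variability band used in the Methods can be sketched as follows; the toy tidal volumes (in mL) and helper names below are my own illustration, not the study's data or analysis code:

```python
# Fraction of breaths whose tidal volume falls within a +/-10% band
# around the patient-specific median, after normalizing by that median.

def median(values):
    """Median without external libraries."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def fraction_within_band(values, band=0.10):
    """Share of values lying within +/-band of the series median."""
    m = median(values)
    return sum(abs(v / m - 1.0) <= band for v in values) / len(values)

vt_breaths = [480, 500, 510, 450, 560, 495, 505, 620, 470, 500]  # toy mL
print(fraction_within_band(vt_breaths))  # → 0.8
```

Computing the same statistic for ∫Eadi and plotting both against each other gives the demand-versus-outcome comparison the abstract describes.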

Relevance: 80.00%

Abstract:

The well-known Minkowski ?(x) function is presented as the asymptotic distribution function of an enumeration of the rationals in (0,1] based on their continued fraction representation. Moreover, the singularity of ?(x) is proved in two ways: by exhibiting a set of measure one on which ?′(x) = 0, and again by finding a set of measure one which is mapped onto a set of measure zero, and vice versa. These sets are described by means of metrical properties of different systems of real number representation.
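
On rational arguments, ?(x) has a closed form in the continued fraction digits: for x = [0; a1, a2, ..., an], ?(x) = Σ_k (−1)^(k+1) · 2^(1 − (a1 + ... + ak)). A minimal sketch (helper names are mine; illustrative, not from the paper):

```python
from fractions import Fraction

def continued_fraction(x: Fraction):
    """Continued fraction digits a1, a2, ... of a rational x in (0, 1]."""
    digits = []
    p, q = x.numerator, x.denominator
    while p:
        a, r = divmod(q, p)
        digits.append(a)
        p, q = r, p
    return digits

def minkowski_question_mark(x: Fraction) -> Fraction:
    """?(x) = sum_k (-1)^(k+1) * 2^(1 - (a1 + ... + ak)) for x in (0, 1]."""
    total, exponent, sign = Fraction(0), 0, 1
    for a in continued_fraction(x):
        exponent += a
        total += sign * Fraction(1, 2 ** (exponent - 1))
        sign = -sign
    return total

print(minkowski_question_mark(Fraction(2, 3)))  # → 3/4
```

The dyadic values on the right are exactly why ?(x) maps the continued-fraction (Farey) structure onto binary expansions, the two "systems of real number representation" the abstract refers to.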

Relevance: 80.00%

Abstract:

When the behaviour of a specific hypothesis test statistic is studied by a Monte Carlo experiment, the usual way to describe its quality is by giving the empirical level of the test. As an alternative to this procedure, we use the empirical distribution of the obtained p-values and exploit its information both graphically and numerically.

Relevance: 80.00%

Abstract:

The energy and structure of dilute hard- and soft-sphere Bose gases are systematically studied in the framework of several many-body approaches, such as the variational correlated theory, the Bogoliubov model, and the uniform limit approximation, valid in the weak-interaction regime. When possible, the results are compared with the exact diffusion Monte Carlo ones. Jastrow-type correlation provides a good description of the systems, both hard and soft spheres, if the hypernetted chain energy functional is freely minimized and the resulting Euler equation is solved. The study of the soft-sphere potentials confirms the appearance of a dependence of the energy on the shape of the potential at gas parameter values of x ≈ 0.001. For quantities other than the energy, such as the radial distribution functions and the momentum distributions, the dependence appears at any value of x. The occurrence of a maximum in the radial distribution function, in the momentum distribution, and in the excitation spectrum is a natural effect of the correlations when x increases. The asymptotic behaviors of the functions characterizing the structure of the systems are also investigated. The uniform limit approach is very easy to implement and provides a good description of the soft-sphere gas. Its reliability improves when the interaction weakens.

Relevance: 80.00%

Abstract:

The soil CO2 emission has high spatial variability because it depends strongly on soil properties. The purpose of this study was to (i) characterize the spatial variability of soil respiration and related properties, (ii) evaluate the accuracy of results of the ordinary kriging method and sequential Gaussian simulation, and (iii) evaluate the uncertainty in predicting the spatial variability of soil CO2 emission and other properties using sequential Gaussian simulations. The study was conducted in a sugarcane area, using a regular sampling grid with 141 points, where soil CO2 emission, soil temperature, air-filled pore space, soil organic matter and soil bulk density were evaluated. All variables showed spatial dependence structure. The soil CO2 emission was positively correlated with organic matter (r = 0.25, p < 0.05) and air-filled pore space (r = 0.27, p < 0.01) and negatively with soil bulk density (r = −0.41, p < 0.01). However, when the estimated spatial values were considered, the air-filled pore space was the variable mainly responsible for the spatial characteristics of soil respiration, with a correlation of 0.26 (p < 0.01). For all variables, individual simulations represented the cumulative distribution functions and variograms better than ordinary kriging and E-type estimates. The greatest uncertainties in predicting soil CO2 emission were associated with areas with the highest estimated values, which produced estimates from 0.18 to 1.85 t CO2 ha⁻¹, according to the different scenarios considered. The knowledge of the uncertainties generated by the different scenarios can be used in inventories of greenhouse gases, to provide conservative estimates of the potential emission of these gases.
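
The "spatial dependence structure" the abstract mentions is what an experimental semivariogram quantifies, and it is the shared building block of both ordinary kriging and sequential Gaussian simulation. A minimal 1-D sketch on toy transect values (the data and names are mine, not the study's):

```python
# Experimental semivariogram for a regular 1-D transect: half the mean
# squared difference between points `lag` grid steps apart.

def semivariogram(values, lag):
    """Semivariance gamma(lag) for equally spaced observations."""
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

co2_transect = [1.2, 1.3, 1.1, 1.5, 1.8, 1.7, 2.0, 2.2, 2.1, 1.9]  # toy values
gammas = [semivariogram(co2_transect, h) for h in (1, 2, 3)]
print(gammas)  # semivariance grows with lag when data are spatially dependent
```

Fitting a model variogram to such points is what then drives the kriging weights or the conditional distributions drawn in each sequential simulation.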