923 results for radial distribution function


Relevance: 80.00%
Publisher:
Abstract:

Inspired by the need for a representation of the biomass burning emissions injection height in the ECHAM/MESSy Atmospheric Chemistry model (EMAC)

Relevance: 80.00%
Publisher:
Abstract:

The work presented in this thesis is based on the computation of dynamical models for dwarf spheroidal galaxies, approaching the problem through the use of distribution functions. One class of distribution functions was considered, "action-based distribution functions", which are functions of the action variables only. Fornax was described with an appropriate distribution function, and the problem of constructing dynamical models was addressed assuming both a dark matter halo with a constant-density inner region and a cuspy halo. For simplicity, spherical symmetry was assumed and the gravitational potential of the stellar component was not computed explicitly (the stars are tracers in a fixed gravitational potential). Through a direct comparison with observables such as the projected stellar density profile and the line-of-sight velocity dispersion profile, several models representative of the dynamics of Fornax were found. Models computed with action-based distribution functions allow anisotropy profiles to be determined self-consistently, and all the computed models are characterized by strongly tangential anisotropy profiles. The dark matter estimates of these models were then compared with the mass estimators most commonly used in the literature. The ratio of the total mass of the system (stellar component plus dark matter) to the stellar component of Fornax was also estimated, within 1600 pc and within 3 kpc. As a preliminary exploration, this work also presents some examples of spherical two-component models in which the gravitational field is determined by the self-gravity of the stars and by an external potential representing the dark matter halo.
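For readers skimming past the abstract, a hedged reminder of what an action-based distribution function is (general background, not the specific functional form adopted in the thesis): by Jeans' theorem, any non-negative function of integrals of motion is a steady-state solution of the collisionless Boltzmann equation, so the model assumes

\[
f(\mathbf{x},\mathbf{v}) \;=\; f(\mathbf{J}), \qquad \mathbf{J} = (J_r, J_\phi, J_z),
\]

and observables such as the tracer density follow by integrating over velocities,

\[
\rho(\mathbf{x}) \;=\; \int f\bigl(\mathbf{J}(\mathbf{x},\mathbf{v})\bigr)\,\mathrm{d}^3 v ,
\]

which is what is compared with the projected stellar density and line-of-sight velocity dispersion profiles above.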

Relevance: 80.00%
Publisher:
Abstract:

Background: The recent development of semi-automated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high throughput flow cytometry methods for data analysis and display. In particular, data quality control and quality assessment are crucial steps in processing and analyzing high throughput flow cytometry data. Methods: We propose a variety of graphical exploratory data analytic tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high throughput flow cytometry data. Results: We found that graphical representations can reveal substantial non-biological differences in samples. Empirical cumulative distribution function plots and summary scatterplots were especially useful in the rapid identification of problems not identified by manual review. Conclusions: Graphical exploratory data analytic tools are quick and useful means of assessing data quality. We propose that the described visualizations should be used as quality assessment tools and, where possible, for quality control.
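As an illustration of the ECDF-based comparison described above, a minimal Python sketch (this is not the R/Bioconductor rflowcyt implementation; the channel name and the dictionary layout of the samples are assumptions made for the example):

```python
import numpy as np
import matplotlib.pyplot as plt

def ecdf(values):
    """Return sorted values and the empirical cumulative distribution function."""
    x = np.sort(values)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

def plot_ecdf_overlay(samples, channel="FSC-H"):
    """Overlay per-sample ECDFs of one ungated channel to spot outlying samples."""
    for name, values in samples.items():
        x, y = ecdf(values)
        plt.step(x, y, where="post", label=name)
    plt.xlabel(channel)
    plt.ylabel("ECDF")
    plt.legend()
    plt.show()
```

Samples whose ECDF curves separate markedly from the rest flag non-biological differences (e.g. staining or acquisition problems) before any gating is done.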

Relevance: 80.00%
Publisher:
Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
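A minimal numerical sketch of the rotation step described above, assuming the marginal residual vector r and the estimated marginal variance matrix V have already been extracted from a fitted model (NumPy/SciPy; the helper names are illustrative, not from the paper):

```python
import numpy as np
from scipy import stats

def rotated_residuals(r, V):
    """Rotate marginal residuals with a Cholesky factor of the inverse variance matrix."""
    Vinv = np.linalg.inv(V)
    L = np.linalg.cholesky(Vinv)        # lower triangular, Vinv = L @ L.T
    # Var(L.T @ r) = L.T @ V @ L = I, so the rotated residuals are decorrelated
    # and, under a correctly specified Gaussian model, approximately N(0, 1).
    return L.T @ r

def ecdf_vs_normal(z):
    """ECDF of the rotated residuals alongside the standard normal CDF."""
    z = np.sort(z)
    ecdf = np.arange(1, len(z) + 1) / len(z)
    return z, ecdf, stats.norm.cdf(z)
```

Plotting the ECDF against the N(0, 1) CDF, with pointwise bands, gives the graphical display of model fit that the abstract refers to.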

Relevance: 80.00%
Publisher:
Abstract:

Marshall's (1970) lemma is an analytical result which implies root-n-consistency of the distribution function corresponding to the Grenander (1956) estimator of a non-decreasing probability density. The present paper derives analogous results for the setting of convex densities on [0, ∞).
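For context, the classical lemma (stated here from memory, in its concave form) and the reason it yields the root-n rate:

\[
\text{If } F \text{ is concave and } \widehat F_n \text{ is the least concave majorant of the empirical distribution function } \mathbb{F}_n, \text{ then}\quad
\bigl\|\widehat F_n - F\bigr\|_\infty \;\le\; \bigl\|\mathbb{F}_n - F\bigr\|_\infty .
\]

Since \(\|\mathbb{F}_n - F\|_\infty = O_p(n^{-1/2})\), the estimator's distribution function inherits the same rate; the paper extends this type of comparison to the convex-density setting.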

Relevance: 80.00%
Publisher:
Abstract:

The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
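A minimal one-dimensional sketch of the conservation idea underlying such gridding (re-binning via the cumulative integral); it uses plain linear interpolation of the cumulative sum, whereas the algorithm described above replaces that step with a parametrized Hermitian interpolation to control overshoot and undershoot:

```python
import numpy as np

def conservative_rebin(edges_src, values, edges_dst):
    """Re-bin histogrammed data onto new bin edges while conserving the integral.

    edges_src : increasing array of source bin edges, length len(values) + 1
    values    : histogrammed quantity (e.g. mass or energy) per source bin
    edges_dst : increasing array of destination bin edges
    """
    # cumulative integral of the quantity at the source bin edges
    cum = np.concatenate(([0.0], np.cumsum(values)))
    # interpolate the cumulative integral onto the destination edges
    cum_dst = np.interp(edges_dst, edges_src, cum)
    # destination bin contents are differences of the cumulative integral
    return np.diff(cum_dst)
```

Because only the interpolated cumulative integral is differenced, the total is conserved by construction; the quality of the result then hinges entirely on how the cumulative curve is interpolated, which is exactly where the Hermitian scheme and its tuning parameter enter.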

Relevance: 80.00%
Publisher:
Abstract:

A new approach, the four-window technique, was developed to measure optical phase-space-time-frequency tomography (OPSTFT). The four-window technique is based on balanced heterodyne detection with two local oscillator (LO) fields. This technique can provide independent control of position, momentum, time and frequency resolution. The OPSTFT is a Wigner distribution function of two independent Fourier transform pairs, phase-space and time-frequency. The OPSTFT can be applied for early disease detection.
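For reference, the time-frequency member of the two Fourier-transform pairs mentioned above is the standard Wigner-Ville distribution of a field E(t) (a textbook definition quoted from memory; the position-momentum pair is defined analogously):

\[
W(t,\omega) \;=\; \int_{-\infty}^{\infty} E\!\left(t+\tfrac{\tau}{2}\right)\, E^{*}\!\left(t-\tfrac{\tau}{2}\right)\, e^{-i\omega\tau}\,\mathrm{d}\tau .
\]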

Relevance: 80.00%
Publisher:
Abstract:

Hall-effect thrusters (HETs) are compact electric propulsion devices with high specific impulse used for a variety of space propulsion applications. HET technology is well developed, but the electron properties in the discharge are not completely understood, mainly due to the difficulty involved in performing accurate measurements in the discharge. Measurements of electron temperature and density have been performed using electrostatic probes, but the presence of the probes can significantly disrupt thruster operation and thus alter the electron temperature and density. While fast-probe studies have expanded understanding of HET discharges, a non-invasive method of measuring the electron temperature and density in the plasma is highly desirable. An alternative to electrostatic probes is a non-perturbing laser diagnostic technique that measures Thomson scattering from the plasma. Thomson scattering is the process by which photons are elastically scattered from the free electrons in a plasma. Since the electrons have thermal energy, their motion causes a Doppler shift in the scattered photons that is proportional to their velocity. Like electrostatic probes, laser Thomson scattering (LTS) can be used to determine the temperature and density of free electrons in the plasma. Since Thomson scattering measures the electron velocity distribution function directly, no assumptions about the plasma conditions are required, allowing accurate measurements in anisotropic and non-Maxwellian plasmas. LTS requires a complicated measurement apparatus, but has the potential to provide accurate, non-perturbing measurements of electron temperature and density in HET discharges. In order to assess the feasibility of LTS diagnostics on HETs, non-invasive measurements of electron temperature and density in the near-field plume of a Hall thruster were performed using a custom-built laser Thomson scattering diagnostic. Laser measurements were processed using a maximum likelihood estimation method, and results were compared to conventional electrostatic double probe measurements performed at the same thruster conditions. The electron temperature was found to range from approximately 1 to 40 eV, and the density ranged from approximately 1.0 × 10¹⁷ m⁻³ to 1.3 × 10¹⁸ m⁻³, over discharge voltages from 250 to 450 V and mass flow rates of 40 to 80 SCCM using xenon propellant.
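A sketch of the kind of maximum-likelihood spectral fit mentioned above, assuming the incoherent (non-collective) scattering regime in which the Thomson spectrum is approximately Gaussian and the photon counts are Poisson-distributed; the width-to-temperature conversion uses the standard Doppler relation and is an assumption of this example, not a description of the custom diagnostic:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.constants import c, m_e, e

def neg_log_likelihood(params, wavelengths, counts):
    """Negative Poisson log-likelihood for a Gaussian line on a flat background."""
    amplitude, center, sigma, background = params
    model = amplitude * np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2) + background
    model = np.clip(model, 1e-12, None)          # keep the Poisson rate positive
    return np.sum(model - counts * np.log(model))

def fit_electron_temperature(wavelengths, counts, lam0, theta):
    """Fit the scattered spectrum and convert the Gaussian width to T_e in eV."""
    guess = [counts.max(), lam0, 0.5e-9, np.median(counts)]
    result = minimize(neg_log_likelihood, guess, args=(wavelengths, counts),
                      method="Nelder-Mead")
    sigma_lambda = abs(result.x[2])
    # Assumed incoherent-Thomson Doppler relation:
    #   sigma_lambda ~ (2 * lam0 * sin(theta/2) / c) * sqrt(k_B * T_e / m_e)
    v_th = sigma_lambda * c / (2.0 * lam0 * np.sin(theta / 2.0))
    return m_e * v_th ** 2 / e                   # k_B * T_e expressed in eV
```

The electron density would additionally require an absolute intensity calibration (typically a separate Rayleigh or Raman calibration), which is not shown here.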

Relevance: 80.00%
Publisher:
Abstract:

Fossil pollen data from stratigraphic cores are irregularly spaced in time due to non-linear age-depth relations. Moreover, their marginal distributions may vary over time. We address these features in a nonparametric regression model with errors that are monotone transformations of a latent continuous-time Gaussian process Z(T). Although Z(T) is unobserved, under suitable regularity conditions it can, due to monotonicity, be recovered, facilitating further computations such as estimation of the long-memory parameter and the Hermite coefficients. The estimation of Z(T) itself involves estimation of the marginal distribution function of the regression errors. These issues are considered in proposing a plug-in algorithm for optimal bandwidth selection and the construction of confidence bands for the trend function. Some high-resolution time series of pollen records from Lago di Origlio in Switzerland, which go back ca. 20,000 years, are used to illustrate the methods.
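A small sketch of the recovery step implied above: if the regression errors are a monotone transformation of a latent standard Gaussian process, the latent values can be estimated by passing residuals through their estimated marginal distribution function and then the standard normal quantile function. This is only the generic transformation; the paper's bandwidth selection, long-memory estimation and confidence bands are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def recover_latent_gaussian(residuals):
    """Estimate Z(t_i) = Phi^{-1}(F(e_i)) via a rescaled empirical CDF of the residuals."""
    n = len(residuals)
    ranks = np.argsort(np.argsort(residuals)) + 1   # ranks 1..n
    F_hat = ranks / (n + 1.0)                       # rescaled to stay strictly in (0, 1)
    return norm.ppf(F_hat)
```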

Relevance: 80.00%
Publisher:
Abstract:

We report the first observation of protons in the near-lunar (100–200 km from the surface) and deeper (near the anti-subsolar point) plasma wake when the interplanetary magnetic field (IMF) and the solar wind velocity (v_sw) are parallel (aligned flow; angle between IMF and v_sw ≤ 10°). More than 98% of the observations during aligned-flow conditions showed the presence of protons in the wake. These observations were obtained by the Solar Wind Monitor sensor of the Sub-keV Atom Reflecting Analyser experiment on Chandrayaan-1. The observation cannot be explained by the conventional fluid models for aligned flow. Back-tracing of the observed protons suggests that their source is the solar wind. The larger gyroradii of the wake protons compared to those of the solar wind suggest that they were part of the tail of the solar wind velocity distribution function. Such protons could enter the wake due to their large gyroradii even when the flow is aligned with the IMF. However, the wake boundary electric field may also play a role in the entry of the protons into the wake.

Relevance: 80.00%
Publisher:
Abstract:

We present observations of energetic neutral atoms (ENAs) produced at the lunar surface in the Earth's magnetotail. When the Moon was located in the terrestrial plasma sheet, the Chandrayaan-1 Energetic Neutrals Analyzer (CENA) detected hydrogen ENAs from the Moon. Analysis of the data from CENA together with the Solar Wind Monitor (SWIM) onboard Chandrayaan-1 reveals a characteristic energy of the observed ENA energy spectrum (the e-folding energy of the distribution function) of ∼100 eV and an ENA backscattering ratio (defined as the ratio of upward ENA flux to downward proton flux) of ≲0.1. These characteristics are similar to those of the backscattered ENAs in the solar wind, suggesting that CENA detected plasma sheet particles backscattered as ENAs from the lunar surface. The observed ENA backscattering ratio in the plasma sheet exhibits no significant difference in the Southern Hemisphere, where a large and strong magnetized region exists, compared with that in the Northern Hemisphere. This is contrary to the CENA observations in the solar wind, where the backscattering ratio drops by ∼50% in the Southern Hemisphere. Our analysis and test particle simulations suggest that magnetic shielding of the lunar surface in the plasma sheet is less effective than in the solar wind due to the broad velocity distributions of the plasma sheet protons.

Relevance: 80.00%
Publisher:
Abstract:

The production of a W boson in association with a single charm quark is studied using 4.6 fb⁻¹ of pp collision data at √s = 7 TeV collected with the ATLAS detector at the Large Hadron Collider. In events in which a W boson decays to an electron or muon, the charm quark is tagged either by its semileptonic decay to a muon or by the presence of a charmed meson. The integrated and differential cross sections as a function of the pseudorapidity of the lepton from the W-boson decay are measured. Results are compared to the predictions of next-to-leading-order QCD calculations obtained from various parton distribution function parameterisations. The ratio of the strange-to-down sea-quark distributions is determined to be 0.96 +0.26/−0.30 at Q² = 1.9 GeV², which supports the hypothesis of an SU(3)-symmetric composition of the light-quark sea. Additionally, the cross-section ratio σ(W⁺+c)/σ(W⁻+c) is compared to the predictions obtained using parton distribution function parameterisations with different assumptions about the s–s̄ quark asymmetry.

Relevance: 80.00%
Publisher:
Abstract:

We review various inequalities for Mills' ratio (1 − Φ)/φ, where φ and Φ denote the standard Gaussian density and distribution function, respectively. Elementary considerations involving finite continued fractions lead to a general approximation scheme which implies and refines several known bounds.
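As a concrete instance of the kind of bound being refined, one classical elementary inequality for x > 0 (quoted from memory) is

\[
\frac{x}{x^{2}+1} \;\le\; \frac{1-\Phi(x)}{\varphi(x)} \;\le\; \frac{1}{x},
\]

which already brackets Mills' ratio between two rational functions; finite continued-fraction expansions of the type reviewed above tighten both sides.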

Relevance: 80.00%
Publisher:
Abstract:

We study the existence of random elements with partially specified distributions. The technique relies on the existence of a positive extension of linear functionals, accompanied by additional conditions that ensure the regularity of the extension needed for interpreting it as a probability measure. It is shown in which cases the extension can be chosen to possess some invariance properties. The results are applied to the existence of point processes with given correlation measure and random closed sets with given two-point covering function or contact distribution function. It is shown that the regularity condition can be efficiently checked in many cases in order to ensure that the obtained point processes are indeed locally finite and the random sets have closed realisations.

Relevance: 80.00%
Publisher:
Abstract:

We consider the problem of nonparametric estimation of a concave regression function F. We show that the supremum distance between the least squares estimator and F on a compact interval is typically of order (log(n)/n)^{2/5}. This entails rates of convergence for the estimator's derivative. Moreover, we discuss the impact of additional constraints on F such as monotonicity and pointwise bounds. Then we apply these results to the analysis of current status data, where the distribution function of the event times is assumed to be concave.
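A minimal computational sketch of the constrained least-squares fit discussed above (assuming the cvxpy package is available; this only shows how such an estimator can be computed and is unrelated to the paper's rate analysis):

```python
import numpy as np
import cvxpy as cp

def concave_least_squares(x, y):
    """Least-squares fit of y on x subject to the fitted values being concave in x."""
    n = len(x)
    f = cp.Variable(n)
    dx = np.diff(x)                                   # assumes x is strictly increasing
    slopes = cp.multiply(f[1:] - f[:-1], 1.0 / dx)    # slope on each interval
    constraints = [slopes[1:] <= slopes[:-1]]         # non-increasing slopes <=> concavity
    problem = cp.Problem(cp.Minimize(cp.sum_squares(y - f)), constraints)
    problem.solve()
    return f.value                                    # fitted values at the design points
```

Adding monotonicity or pointwise bounds, as discussed in the abstract, amounts to appending the corresponding linear constraints (e.g. `f[1:] >= f[:-1]`).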