987 results for Probability generating function


Relevance:

80.00%

Publisher:

Abstract:

ATLAS measurements of the azimuthal anisotropy in lead–lead collisions at √sNN = 2.76 TeV are shown using a dataset of approximately 7 μb−1 collected at the LHC in 2010. The measurements are performed for charged particles with transverse momenta 0.5 < pT < 20 GeV and in the pseudorapidity range |η| < 2.5. The anisotropy is characterized by the Fourier coefficients, vn, of the charged-particle azimuthal angle distribution for n = 2–4. The Fourier coefficients are evaluated using multi-particle cumulants calculated with the generating function method. Results on the transverse momentum, pseudorapidity and centrality dependence of the vn coefficients are presented. The elliptic flow, v2, is obtained from the two-, four-, six- and eight-particle cumulants, while the higher-order coefficients, v3 and v4, are determined with two- and four-particle cumulants. Flow harmonics vn measured with four-particle cumulants are significantly reduced compared to the measurement involving two-particle cumulants. A comparison to vn measurements obtained using different analysis methods and previously reported by the LHC experiments is also shown. Results of measurements of flow fluctuations evaluated with multi-particle cumulants are shown as a function of transverse momentum and the collision centrality. Models of the initial spatial geometry and its fluctuations fail to describe the flow-fluctuation measurements.
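Since the abstract is specific about the cumulant machinery, a compact numerical illustration may help. The sketch below estimates v2{2} and v2{4} on synthetic events with a known elliptic flow, using the direct Q-cumulant expressions for the single-event 2- and 4-particle correlations (a formulation equivalent in spirit to the generating-function method named in the paper, not the paper's own code); the multiplicity, event count, and v2 = 0.2 are illustrative values only.

```python
import cmath
import math
import random

def sample_event(m, v2, rng):
    """Draw m azimuthal angles from dN/dphi ∝ 1 + 2*v2*cos(2*phi) (accept-reject)."""
    phis = []
    fmax = 1.0 + 2.0 * v2
    while len(phis) < m:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, fmax) <= 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            phis.append(phi)
    return phis

def flow_from_cumulants(events, n=2):
    """Estimate v_n{2} and v_n{4} from per-event Q-vectors (direct cumulants)."""
    twos, fours = [], []
    for phis in events:
        m = len(phis)
        qn = sum(cmath.exp(1j * n * p) for p in phis)
        q2n = sum(cmath.exp(2j * n * p) for p in phis)
        # single-event 2-particle correlation <2>
        twos.append((abs(qn) ** 2 - m) / (m * (m - 1)))
        # single-event 4-particle correlation <4>
        num = (abs(qn) ** 4 + abs(q2n) ** 2
               - 2.0 * (q2n * qn.conjugate() ** 2).real
               - 4.0 * (m - 2) * abs(qn) ** 2 + 2.0 * m * (m - 3))
        fours.append(num / (m * (m - 1) * (m - 2) * (m - 3)))
    c2 = sum(twos) / len(twos)                    # c_n{2} = <<2>>
    c4 = sum(fours) / len(fours) - 2.0 * c2 ** 2  # c_n{4} = <<4>> - 2<<2>>^2
    vn4 = (-c4) ** 0.25 if c4 < 0.0 else float("nan")
    return math.sqrt(c2), vn4

rng = random.Random(7)
events = [sample_event(100, 0.2, rng) for _ in range(200)]
v2_2, v2_4 = flow_from_cumulants(events)
```

With no flow fluctuations in the toy sample, both estimators recover the input v2; in the real data of the abstract, v2{4} sits below v2{2} precisely because fluctuations enter the two estimators with opposite signs.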

Relevance:

80.00%

Publisher:

Abstract:

Based on the map of landscape and permafrost conditions in Yakutia (Merzlotno-landshaftnaya karta Yakutskoi ASSR, Gosgeodeziya SSSR, 1991), rasterized maps of permafrost temperature and active-layer thickness of Yakutia, East Siberia, were derived. The mean and standard deviation at a 0.5-degree grid-cell size are estimated by assigning a probability density function at 0.001-degree spatial resolution. The spatial patterns of both variables are dominated by a north–south climatic gradient, by mountains, and by the soil-type distribution. Uncertainties are highest in mountains and in the sporadic permafrost zone in the south. The maps are best suited as a benchmark for land surface models that include a permafrost module.
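As a rough sketch of the aggregation step described above (a fine-resolution field summarized per coarse grid cell by a mean and a standard deviation), assuming a synthetic temperature field and toy resolutions rather than the actual 0.001°/0.5° grids:

```python
import random
import statistics

def aggregate(fine, block):
    """Coarse-grain a 2-D grid into block x block cells: per-cell (mean, stdev)."""
    out = []
    for bi in range(0, len(fine), block):
        row = []
        for bj in range(0, len(fine[0]), block):
            vals = [fine[i][j]
                    for i in range(bi, bi + block)
                    for j in range(bj, bj + block)]
            row.append((statistics.fmean(vals), statistics.pstdev(vals)))
        out.append(row)
    return out

# Synthetic permafrost-temperature field (deg C): north-south gradient plus noise.
rng = random.Random(0)
N = 100  # toy grid: 2 x 2 coarse cells of 50 x 50 fine pixels each
fine = [[-10.0 + 8.0 * i / N + rng.gauss(0.0, 0.5) for _ in range(N)]
        for i in range(N)]
coarse = aggregate(fine, 50)
```

The per-cell standard deviation plays the role of the mapped uncertainty; in the real product it also absorbs the within-cell landscape variability encoded in the assigned PDFs.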

Relevance:

80.00%

Publisher:

Abstract:

The objective of this thesis is the development of cooperative localization and tracking algorithms using nonparametric message passing techniques. In contrast to the most well-known techniques, the goal is to estimate the posterior probability density function (PDF) of the position of each sensor. This problem can be solved using a Bayesian approach, but it is intractable in the general case. Nevertheless, a particle-based approximation (via a nonparametric representation) and an appropriate factorization of the joint PDFs (using message passing methods) make the Bayesian approach tractable for inference in sensor networks. The well-known method for this problem, nonparametric belief propagation (NBP), can lead to inaccurate beliefs and possible non-convergence in loopy networks. Therefore, we propose four novel algorithms which alleviate these problems: nonparametric generalized belief propagation (NGBP) based on junction trees (NGBP-JT), NGBP based on pseudo-junction trees (NGBP-PJT), NBP based on spanning trees (NBP-ST), and uniformly-reweighted NBP (URW-NBP). We also extend NBP for cooperative localization in mobile networks. In contrast to the previous methods, we use optional smoothing, provide a novel communication protocol, and increase the efficiency of the sampling techniques. Moreover, we propose novel algorithms for distributed tracking, in which the goal is to track a passive object that cannot locate itself. In particular, we develop distributed particle filtering (DPF) based on three asynchronous belief consensus (BC) algorithms: standard belief consensus (SBC), broadcast gossip (BG), and belief propagation (BP). Finally, the last part of this thesis includes an experimental analysis of some of the proposed algorithms, in which we found that the results based on real measurements are very similar to the results based on theoretical models.
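A minimal, hypothetical illustration of the particle-based (nonparametric) posterior representation on which the thesis builds, reduced to a single node ranging against three anchors (none of the NBP/NGBP machinery, just importance-weighted particles approximating the position PDF); all coordinates and noise levels are made up.

```python
import math
import random

def range_likelihood(p, anchors, ranges, sigma):
    """Gaussian likelihood of the measured ranges given a candidate position p."""
    w = 1.0
    for (ax, ay), r in zip(anchors, ranges):
        d = math.hypot(p[0] - ax, p[1] - ay)
        w *= math.exp(-0.5 * ((d - r) / sigma) ** 2)
    return w

rng = random.Random(42)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (4.0, 3.0)
sigma = 0.3
ranges = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) + rng.gauss(0.0, sigma)
          for ax, ay in anchors]

# Nonparametric (particle) representation of the posterior over the position:
# uniform prior particles, reweighted by the range likelihood.
particles = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(5000)]
weights = [range_likelihood(p, anchors, ranges, sigma) for p in particles]
wsum = sum(weights)
est_x = sum(w * p[0] for w, p in zip(weights, particles)) / wsum
est_y = sum(w * p[1] for w, p in zip(weights, particles)) / wsum
```

In the cooperative setting of the thesis, the anchors would themselves be uncertain neighbors, and messages (particle sets) would be exchanged iteratively instead of this single reweighting step.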

Relevance:

80.00%

Publisher:

Abstract:

This paper presents some of the results of a method to determine the main reliability functions of concentrator solar cells. High-concentrator GaAs single-junction solar cells have been tested in an accelerated life test. The method can be directly applied to multi-junction solar cells. The main conclusions of this test show that these solar cells are robust devices with a very low probability of failure caused by degradation during their operational life (more than 30 years). The probability of operation (i.e., the reliability function R(t)) is evaluated for two nominal operating conditions of these cells, namely simulated concentration ratios of 700 and 1050 suns. A preliminary determination of the mean time to failure indicates a value much higher than the intended operational lifetime of the concentrator cells.
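A hedged sketch of how a reliability function and MTTF are evaluated under a Weibull lifetime model, a common choice in accelerated life testing; the shape and scale values below are placeholders, not the paper's fitted parameters.

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta): probability the cell still operates at time t."""
    return math.exp(-((t / eta) ** beta))

def weibull_mttf(beta, eta):
    """Mean time to failure of a Weibull model: MTTF = eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# Illustrative parameters only (NOT the paper's fitted values).
beta, eta = 2.0, 120.0          # shape; scale in years
r30 = weibull_reliability(30.0, beta, eta)   # reliability at 30 years
mttf = weibull_mttf(beta, eta)
```

In an accelerated life test, eta would first be rescaled from test conditions to nominal operating conditions (e.g., via an Arrhenius acceleration factor) before R(t) is read off at the intended lifetime.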

Relevance:

80.00%

Publisher:

Abstract:

This paper deals with the detection and tracking of an unknown number of targets using a Bayesian hierarchical model with target labels. To approximate the posterior probability density function, we develop a two-layer particle filter. One deals with track initiation, and the other with track maintenance. In addition, the parallel partition method is proposed to sample the states of the surviving targets.

Relevance:

80.00%

Publisher:

Abstract:

This paper describes a two-part methodology for managing the risk posed by water supply variability to irrigated agriculture. First, an econometric model is used to explain the variation in the production value of irrigated agriculture. The explanatory variables include an index of irrigation water availability (surface storage levels), a price index representative of the crops grown in each geographical unit, and a time variable. The model corrects for autocorrelation and is applied to 16 Spanish provinces that are representative in terms of irrigated agriculture. In the second part, the fitted models are used for the economic evaluation of drought risk. Inflow variability in the hydrological system servicing each province is used to perform ex-ante evaluations of economic output for the upcoming irrigation season. The model's error and the probability distribution functions (PDFs) of the reservoirs' storage variations are used to generate Monte Carlo (Latin Hypercube) simulations of agricultural output 7 and 3 months prior to the irrigation season. The results of these simulations illustrate the different risk profiles of each management unit, which depend on farm productivity and on the probability distribution function of water inflow to reservoirs. The potential for ex-ante drought impact assessments is demonstrated. By complementing hydrological models, this method can assist water managers and decision-makers in managing reservoirs.
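A minimal sketch of the risk-evaluation step, assuming a hypothetical fitted linear model and normally distributed inputs: Latin Hypercube samples of reservoir storage and of the model error are propagated to a distribution of agricultural output, from which a drought-risk quantile is read off. All coefficients and distribution parameters are invented for illustration.

```python
import random
from statistics import NormalDist, fmean

def latin_hypercube_normal(n, mu, sigma, rng):
    """One LHS draw of size n from N(mu, sigma): one sample per probability stratum."""
    nd = NormalDist(mu, sigma)
    u = [(i + rng.random()) / n for i in range(n)]  # stratified uniforms
    rng.shuffle(u)                                   # random pairing across inputs
    return [nd.inv_cdf(p) for p in u]

# Hypothetical fitted model: production value (million EUR) vs. storage index.
def production_value(storage, err):
    return 250.0 + 1.8 * storage + err

rng = random.Random(1)
n = 1000
storage = latin_hypercube_normal(n, 60.0, 15.0, rng)  # storage index (% of capacity)
errors = latin_hypercube_normal(n, 0.0, 20.0, rng)    # econometric-model error
outputs = [production_value(s, e) for s, e in zip(storage, errors)]
mean_out = fmean(outputs)
p05 = sorted(outputs)[int(0.05 * n)]                  # 5% drought-risk quantile
```

The stratification is what distinguishes Latin Hypercube sampling from plain Monte Carlo: each input marginal is covered evenly, so tail quantiles stabilize with far fewer simulations.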

Relevance:

80.00%

Publisher:

Abstract:

Tropospheric scintillation can become a significant impairment in satellite communication systems, especially in those with low fade margins. Moreover, fast amplitude fluctuations due to scintillation are even larger when rain is present on the propagation path. Few studies of scintillation during rain have been reported, and the statistical characterization is still not totally clear. This paper presents experimental results on the relationship between scintillation and rain attenuation obtained from slant-path attenuation measurements at 50 GHz. The study is focused on the probability density function (PDF) of various scintillation parameters. It is shown that scintillation intensity, measured as the standard deviation of the amplitude fluctuations, increases with rain attenuation; in the range 1–10 dB this relationship can be expressed by power-law or linear equations. The PDFs of scintillation intensity conditioned on a given rain attenuation level are lognormal, while the overall long-term PDF is well fitted by a generalized extreme value (GEV) distribution. The short-term PDFs of amplitude conditioned on a given intensity are normal, although skewness effects are observed for the strongest intensities. A procedure is given to derive numerically the overall PDF of scintillation amplitude using a combination of conditional PDFs and local statistics of rain attenuation.
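The final procedure (an overall PDF derived from conditional lognormal PDFs combined with rain-attenuation statistics) can be sketched numerically. Everything below is illustrative: the power-law link between median intensity and attenuation, the discretized attenuation histogram, and the lognormal spread are made-up stand-ins for the paper's fitted values.

```python
import math

def lognormal_pdf(x, mu, s):
    """PDF of a lognormal variable with log-mean mu and log-std s."""
    return math.exp(-0.5 * ((math.log(x) - mu) / s) ** 2) / (x * s * math.sqrt(2 * math.pi))

# Hypothetical power-law link: median scintillation intensity vs. rain attenuation.
def sigma_median(atten_db):
    return 0.15 * atten_db ** 0.5

# Discretized long-term statistics of rain attenuation (illustrative weights).
atten_levels = [1.0, 2.0, 4.0, 8.0]   # dB
atten_probs = [0.55, 0.25, 0.15, 0.05]

def overall_pdf(x, log_spread=0.4):
    """Mixture of conditional lognormal PDFs, weighted by attenuation probability."""
    return sum(p * lognormal_pdf(x, math.log(sigma_median(a)), log_spread)
               for a, p in zip(atten_levels, atten_probs))

# Check normalization numerically on a fine grid (trapezoidal rule).
xs = [0.001 + i * 0.001 for i in range(3000)]
area = sum(0.001 * 0.5 * (overall_pdf(xs[i]) + overall_pdf(xs[i + 1]))
           for i in range(len(xs) - 1))
```

The same mixture construction, applied one level deeper (amplitude conditioned on intensity, intensity conditioned on attenuation), yields the overall amplitude PDF the paper describes.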

Relevance:

80.00%

Publisher:

Abstract:

Many existing engineering works model the statistical characteristics of the entities under study as normal distributions. These models are eventually used for decision making, which in practice requires defining the classification region corresponding to the desired confidence level. Surprisingly, however, a great number of computer vision works using multidimensional normal models leave unspecified or fail to establish correct confidence regions, due to misconceptions about the features of Gaussian functions or to wrong analogies with the unidimensional case. The resulting regions incur deviations that can be unacceptable in high-dimensional models. Here we provide a comprehensive derivation of the optimal confidence regions for multivariate normal distributions of arbitrary dimensionality. To this end, we first derive the condition for region optimality of general continuous multidimensional distributions, and then apply it to the widespread case of the normal probability density function. The obtained results are used to analyze the confidence error incurred by previous works related to vision research, showing that the deviations caused by wrong regions may become unacceptable as dimensionality increases. To support the theoretical analysis, a quantitative example is given in the context of moving-object detection by means of background modeling.
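A small Monte Carlo check of the central point: the per-axis 1-sigma box, a typical wrong analogy with the unidimensional case, covers far less than 68% of the probability mass as dimensionality grows, while the correct region is bounded by a Mahalanobis-distance threshold (a chi-square quantile, estimated empirically here to keep the sketch dependency-free).

```python
import random

def coverage(dim, n, rng):
    """Empirical coverage of the naive per-axis |z_i| <= 1 box for N(0, I_dim)."""
    inside = 0
    sq_norms = []
    for _ in range(n):
        z = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        sq_norms.append(sum(v * v for v in z))     # squared Mahalanobis distance
        if all(abs(v) <= 1.0 for v in z):
            inside += 1
    return inside / n, sq_norms

rng = random.Random(3)
box_cov, sq_norms = coverage(5, 50000, rng)
# Box coverage decays as 0.6827**dim: about 0.15 in 5-D, nowhere near 68%.

# The correct 68% region is {x : x^T x <= r2}, with r2 the 0.68 quantile of a
# chi-square with 5 degrees of freedom, estimated here from the sampled norms.
r2 = sorted(sq_norms)[int(0.68 * len(sq_norms))]
```

For a general covariance, the same threshold applies to (x − mu)^T Sigma^{-1} (x − mu), which is exactly the point at which the unidimensional intuition breaks.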

Relevance:

80.00%

Publisher:

Abstract:

Several authors have analysed the changes in the probability density function of solar radiation at different time resolutions. Others have studied the significance of these changes for produced-energy calculations. We have applied different transformations to four Spanish databases in order to clarify the interrelationship between radiation models and produced-energy estimates. Our contribution is straightforward: the complexity of the solar radiation model needed for yearly energy calculations is very low. Twelve monthly means of solar radiation are enough to estimate energy with errors below 3%. Time resolutions finer than hourly samples do not significantly improve the energy estimates.

Relevance:

80.00%

Publisher:

Abstract:

Purpose: A fully three-dimensional (3D) massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of a system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work has focused on the development of efficient region-search techniques to sample the system response probabilities, which are suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function in correspondence with a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account the noncollinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal's thickness and the angle of incidence of the given LOR.
The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: This new technique provides superior image quality in terms of signal-to-noise ratio compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing for direct control of the trade-off between speed and quality during the reconstruction.
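A toy sketch of the thresholded region-of-response search: voxels are kept when an elliptical Gaussian kernel around the ideal LOR exceeds a fixed cut-off. The geometry, kernel widths, and threshold are hypothetical, and the axial direction is left unweighted for brevity.

```python
import math

def ror_voxels(p0, p1, shape, sigma_u, sigma_v, threshold):
    """Collect voxels whose elliptical-Gaussian kernel value around the LOR
    p0 -> p1 exceeds 'threshold' (only transaxial offsets are weighted here)."""
    dx = [b - a for a, b in zip(p0, p1)]
    norm = math.sqrt(sum(d * d for d in dx))
    axis = [d / norm for d in dx]
    # Two unit vectors orthogonal to the LOR (valid while the LOR is not
    # parallel to the z axis, which this toy example avoids).
    u = [-axis[1], axis[0], 0.0]
    un = math.sqrt(sum(c * c for c in u)) or 1.0
    u = [c / un for c in u]
    v = [axis[1] * u[2] - axis[2] * u[1],
         axis[2] * u[0] - axis[0] * u[2],
         axis[0] * u[1] - axis[1] * u[0]]
    out = {}
    for i in range(shape[0]):
        for j in range(shape[1]):
            for k in range(shape[2]):
                r = [i - p0[0], j - p0[1], k - p0[2]]
                du = sum(a * b for a, b in zip(r, u))   # transaxial offset 1
                dv = sum(a * b for a, b in zip(r, v))   # transaxial offset 2
                w = math.exp(-0.5 * ((du / sigma_u) ** 2 + (dv / sigma_v) ** 2))
                if w > threshold:
                    out[(i, j, k)] = w
    return out

# Toy 16^3 volume, LOR along x through the volume centre.
ror = ror_voxels((0.0, 8.0, 8.0), (15.0, 8.0, 8.0), (16, 16, 16), 1.0, 2.0, 0.05)
```

Raising the threshold shrinks the ROR and speeds up each LOR update at the cost of accuracy, which is the speed/quality trade-off the conclusions mention; the production algorithm searches outward from the LOR instead of scanning the whole volume.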

Relevance:

80.00%

Publisher:

Abstract:

We can define society as a complex system that emerges from the cooperation and coordination of billions of individuals and hundreds of countries. In this sense, we do not live on an island; rather, we are embedded in social networks that influence our behavior. In this doctoral thesis, we present an analytical model and a series of empirical studies in which we analyze different dynamical social processes from the perspective of complex network theory. First, we introduce a model to explore the impact that the social networks in which we live have on the economic activity that takes place on them, and more specifically the extent to which the structure of these networks can limit the meritocracy of a society. As a concept opposed to meritocracy, in this thesis we introduce the term topocracy. We define a system as topocratic when the influence or power and the income of individuals are determined primarily by the position they occupy in the network. Our model is perfectly meritocratic for fully connected networks (every node is linked to all the other nodes). However, our model predicts a transition towards topocracy as the density of the network decreases, with sparse networks, like those of society, being topocratic. In this model, individuals produce and sell content, but they also distribute the content produced by other individuals, mediating between buyer and seller. The production and distribution of content define two channels through which individuals receive income. The first is meritocratic, since individuals earn according to what they produce. The second, by contrast, is topocratic, since individuals are compensated according to the number of shortest paths of the network that pass through them. In this thesis we solve the model both computationally and analytically.
The results indicate that a system is meritocratic only if the average connectivity of the individuals is greater than a root of the number of individuals in the system. Therefore, in light of our results, the structure of the social network can represent a limitation to the meritocracy of a society. The second part of this thesis presents a series of empirical studies in which data extracted from the social network Twitter are analyzed to characterize and model human behavior. In particular, we focus on analyzing political conversations, such as those that take place during electoral campaigns. Our results indicate that collective attention is distributed in a very heterogeneous way, with a minority of extremely influential accounts. Moreover, the capacity of individuals to disseminate information on Twitter is limited by the structure of the follower network and by the position they occupy in it. Therefore, according to our observations, online social networks do not make it possible for the majority to be heard by the majority. In fact, our results imply that Twitter is topocratic, since only a minority of accounts located in privileged positions in the follower network manage to have their messages spread across the whole social network. In political conversations, this minority of influential accounts is composed mainly of politicians and media outlets. Politicians are the most mentioned, since people address and refer to them in their tweets, while the media are the sources from which people propagate information. In a world in which personal data are recorded and become more abundant and precise every day, the results of the model presented in this thesis can be used to promote measures that foster meritocracy.
Furthermore, the results of the empirical studies on Twitter presented in the second part of this thesis are of vital importance for understanding the new "digital society" that is emerging. In particular, we have presented relevant results that characterize human behavior on the Internet and that can be used to build future models. Abstract: Society can be defined as a complex system that emerges from the cooperation and coordination of billions of individuals and hundreds of countries. Thus, we do not live in a social vacuum, and the social networks in which we are embedded inevitably shape our behavior. Here, we present an analytical model and several empirical studies in which we analyze dynamical social systems through a network science perspective. First, we introduce a model to explore how the structure of the social networks underlying society can limit the meritocracy of its economies. Conversely to meritocracy, in this work we introduce the term topocracy. We say that a system is topocratic if the compensation and power available to an individual are determined primarily by her position in a network. Our model is perfectly meritocratic for fully connected networks but becomes topocratic for sparse networks, like the ones in society. In the model, individuals produce and sell content, but also distribute the content produced by others when they belong to the shortest path connecting a buyer and a seller. The production and distribution of content define two channels of compensation: a meritocratic channel, where individuals are compensated for the content they produce, and a topocratic channel, where individual compensation is based on the number of shortest paths that go through them in the network. We solve the model analytically and show that the distribution of payoffs is meritocratic only if the average degree of the nodes is larger than a root of the total number of nodes.
Hence, in the light of our model, the sparsity and structure of networks represent a fundamental constraint on the meritocracy of societies. Next, we present several empirical studies that use data gathered from Twitter to analyze online human behavioral patterns. In particular, we focus on political conversations such as electoral campaigns. We found that collective attention is highly heterogeneously distributed, as there is a minority of extremely influential accounts. In fact, the ability of individuals to propagate messages or ideas through the platform is constrained by the structure of the follower network underlying the social medium and the position they occupy in it. Hence, although people have argued that social media can allow more voices to be heard, our results suggest that Twitter is highly topocratic, as only a minority of well-positioned users are widely heard. This minority of influential accounts belongs mostly to politicians and traditional media. Politicians tend to be the most mentioned, while media are the sources of information from which people propagate messages. We also propose a methodology to study and measure the emergence of political polarization from social interactions. To this end, we first propose a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we illustrate our methodology by applying it to Twitter data. In a world where personal data is increasingly available, the results of the analytical model introduced in this work can be used to enhance meritocracy and promote policies that help to build more meritocratic societies. Moreover, the results obtained in the latter part, where we analyzed Twitter, are key to understanding the new data-driven society that is emerging.
In particular, we have presented relevant information that can be used to benchmark future models for online communication systems or can be used as empirical rules characterizing our online behavior.
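The meritocracy/topocracy contrast can be illustrated with a toy count, under the simplifying assumption that topocratic compensation is proportional to the number of ordered source-target pairs whose shortest paths pass through a node (a reduction of the thesis model, not its exact payoff rule): in a star network every such pair is routed through the hub, while in a complete network nobody mediates at all.

```python
from collections import deque
from itertools import permutations

def dists_from(g, s):
    """BFS shortest-path distances from s in an undirected graph (adjacency dict)."""
    d = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in g[x]:
            if y not in d:
                d[y] = d[x] + 1
                q.append(y)
    return d

def topocratic_counts(g):
    """For each node v, count ordered pairs (s, t) with v strictly inside some
    shortest s-t path, i.e. d(s, v) + d(v, t) == d(s, t)."""
    d = {s: dists_from(g, s) for s in g}
    counts = {v: 0 for v in g}
    for s, t in permutations(g, 2):
        for v in g:
            if v not in (s, t) and d[s][v] + d[v][t] == d[s][t]:
                counts[v] += 1
    return counts

leaves = ["a", "b", "c", "d", "e"]
star = {"hub": leaves, **{x: ["hub"] for x in leaves}}           # sparse network
complete = {x: [y for y in "pqrst" if y != x] for x in "pqrst"}  # dense network
star_counts = topocratic_counts(star)
complete_counts = topocratic_counts(complete)
```

The hub of the 5-leaf star mediates all 20 ordered leaf pairs while the leaves mediate none, whereas in the complete graph every count is zero: exactly the sparse-topocratic versus dense-meritocratic transition the model predicts.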

Relevance:

80.00%

Publisher:

Abstract:

A relation between the cost of energy, COE, the maximum allowed tip speed, and the rated wind speed is obtained for wind turbines with a given goal rated power. The wind regime is characterised by the corresponding parameters of the probability density function of wind speed. The non-dimensional characteristics of the rotor (number of blades and the blade radial distributions of local solidity, twist angle, and airfoil type) play the role of parameters in the mentioned relation. The COE is estimated using a cost model commonly used by designers. This cost model requires basic design data such as the rotor radius and the ratio between the hub height and the rotor radius. Certain design options, DO, related to the technology of the power plant, tower and blades are also required as inputs. The function obtained for the COE can be explored to find those values of rotor radius that give rise to the minimum cost of energy for a given wind regime as the tip speed limitation changes. The analysis reveals that iso-COE lines evolve parallel to iso-radius lines for large values of the tip speed limit, but that this is not the case for small values of the tip speed limit. It is concluded that, as the tip speed limit decreases, the optimum decision for keeping minimum COE values can be: a) reducing the rotor radius for places with a high Weibull scale parameter, or b) increasing the rotor radius for places with a low Weibull scale parameter.
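The role of the Weibull wind-regime parameters in the energy side of the COE trade-off can be sketched as follows, assuming a simplified normalized power curve (the paper's cost model is not reproduced); all parameter values are illustrative.

```python
import math

def weibull_pdf(v, k, c):
    """Weibull wind-speed PDF with shape k and scale c (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=1.0):
    """Simplified normalized power curve: cubic ramp, then rated, then cut-out."""
    if v < v_in or v >= v_out:
        return 0.0
    if v < v_rated:
        return p_rated * (v ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3)
    return p_rated

def capacity_factor(k, c, dv=0.01):
    """Mean normalized power: integral of P(v) f(v) dv (trapezoidal rule)."""
    vs = [i * dv for i in range(int(30.0 / dv) + 1)]
    ys = [power_curve(v) * weibull_pdf(v, k, c) for v in vs]
    return sum(dv * 0.5 * (ys[i] + ys[i + 1]) for i in range(len(ys) - 1))

cf_low = capacity_factor(2.0, 6.0)   # site with a low Weibull scale parameter
cf_high = capacity_factor(2.0, 9.0)  # site with a high Weibull scale parameter
```

Energy yield grows markedly with the Weibull scale parameter while the cost side grows with the rotor radius, which is why the optimum radius under a tightening tip-speed limit moves in opposite directions for high- and low-scale-parameter sites.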

Relevance:

80.00%

Publisher:

Abstract:

Over four hundred years ago, Sir Walter Raleigh asked his mathematical assistant to find formulas for the number of cannonballs in regularly stacked piles. These investigations aroused the curiosity of the astronomer Johannes Kepler and led to a problem that went centuries without a solution: why is the familiar cannonball stack the most efficient arrangement possible? Here we discuss the solution that Hales found in 1998. Almost every part of the 282-page proof relies on long computer verifications. Random matrix theory was developed by physicists to describe the spectra of complex nuclei. In particular, the statistical fluctuations of the eigenvalues ("the energy levels") follow certain universal laws based on symmetry types. We describe these and then discuss the remarkable appearance of these laws for the zeros of the Riemann zeta function (which is the generating function for prime numbers, and is the last special function from the last century that is not understood today). Explaining this phenomenon is a central problem. These topics are distinct, so we present them separately with their own introductory remarks.

Relevance:

80.00%

Publisher:

Abstract:

Symplectic maps have been widely used to model chaotic transport in plasmas and fluids. In this work, we propose three types of symplectic maps that describe electric drift motion in magnetized plasmas. Finite Larmor radius effects are included in each of the maps. In the limit of vanishing Larmor radius, the map with monotonic frequency reduces to the Chirikov–Taylor map, and, in the cases with non-monotonic frequency, the maps reduce to the standard nontwist map. We show how a finite Larmor radius can lead to the suppression of chaos, modify the phase-space topology, and alter the robustness of transport barriers. A method based on counting recurrence times is proposed to analyze the influence of the Larmor radius on the critical parameters that define the breakup of transport barriers. We also study a model for a system of particles in which the electric drift is described by the monotonic-frequency map and the Larmor radius is a random variable that takes a specific value for each particle of the system. The probability density function for the Larmor radius is obtained from the Maxwell–Boltzmann distribution, which characterizes plasmas in thermal equilibrium. An important parameter in this model is the random variable gamma, defined as the value of the zeroth-order Bessel function evaluated at the particle's Larmor radius. Analytical and numerical results describing the main statistical properties of the gamma parameter are presented. These results are then applied to the study of two transport measures: the escape rate and the rate of trapping by period-one islands.
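A hedged numerical companion to the gamma statistics above: Larmor radii are drawn from a Rayleigh distribution (the perpendicular-speed marginal of a 2-D Maxwellian, in normalized units), gamma = J0(rho) is evaluated with the Bessel power series, and the sample mean can be checked against the closed form E[J0(rho)] = exp(-sigma^2/2) for rho ~ Rayleigh(sigma).

```python
import math
import random

def bessel_j0(x):
    """J0 via its power series: sum_k (-1)^k (x/2)^(2k) / (k!)^2 (fine for |x| < ~10)."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x) / (4.0 * k * k)
        total += term
    return total

rng = random.Random(5)
sigma = 1.0   # thermal Larmor-radius scale (normalized, illustrative)
n = 20000
# Rayleigh-distributed Larmor radii via inverse-CDF sampling.
rhos = [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random())) for _ in range(n)]

gammas = [bessel_j0(r) for r in rhos]
mean_gamma = sum(gammas) / n
# Analytic check: E[J0(rho)] = exp(-sigma**2 / 2) for rho ~ Rayleigh(sigma).
```

Because gamma multiplies the effective perturbation felt by each particle, its distribution (concentrated below 1, with a tail through zero and negative values) is what drives the chaos suppression reported above.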

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we propose a novel filter for feature selection. This filter relies on the estimation of the mutual information between features and classes. We bypass the estimation of the probability density function with the aid of the entropic-graph approximation of the Rényi entropy, and the subsequent approximation of the Shannon entropy. The complexity of this bypassing process does not depend on the number of dimensions but on the number of patterns/samples, and thus the curse of dimensionality is circumvented. We show that it is then possible to outperform a greedy algorithm based on the maximal-relevance, minimal-redundancy criterion. We successfully test our method in the contexts of both image classification and microarray data classification.
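A sketch of the entropic-graph idea, in which the Rényi entropy is estimated from the total length of a Euclidean minimal spanning tree over the samples rather than from a density estimate. The normalizing constant beta below is a placeholder (set to 1), so only entropy differences are meaningful in this toy version.

```python
import math
import random

def mst_length(points):
    """Total edge length of the Euclidean MST (Prim's algorithm, O(n^2))."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return total

def renyi_entropy_mst(points, alpha=0.5, beta=1.0):
    """Entropic-graph estimator for d = 2 samples:
    H_alpha ~ (1/(1-alpha)) * [ln(L / n**alpha) - ln(beta)],
    with beta = 1.0 a placeholder for the dimension-dependent constant."""
    n = len(points)
    return (math.log(mst_length(points) / n ** alpha) - math.log(beta)) / (1.0 - alpha)

rng = random.Random(2)
narrow = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(300)]
wide = [(rng.gauss(0, 3), rng.gauss(0, 3)) for _ in range(300)]
h_narrow = renyi_entropy_mst(narrow)
h_wide = renyi_entropy_mst(wide)
```

The cost of the estimate depends on the number of samples (the MST), not on the dimensionality of the feature vectors, which is exactly the property the filter exploits to sidestep the curse of dimensionality.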