6 results for Large-scale experiments

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

This dissertation examines the challenges and limitations that graph analysis algorithms face on distributed architectures built from personal computers. In particular, it analyses the behaviour of the PageRank algorithm as implemented in a popular C++ library for distributed graph analysis, the Parallel Boost Graph Library (Parallel BGL). The results presented here show that the Bulk Synchronous Parallel programming model is unsuitable for an efficient implementation of PageRank on clusters of personal computers. The implementation under study in fact exhibited negative scalability: the execution time of the algorithm grows linearly with the number of processors. These results were obtained by running the Parallel BGL PageRank algorithm on a cluster of 43 dual-core PCs with 2 GB of RAM each, using several graphs chosen so as to ease the identification of the variables that influence scalability. Graphs generated from different models produced different results, showing a relationship between the clustering coefficient and the slope of the line representing execution time as a function of the number of processors. For example, Erdős–Rényi graphs, which have a low clustering coefficient, represented the worst case in the PageRank tests, while Small-World graphs, which have a high clustering coefficient, represented the best case. The size of the graph also showed a particularly interesting influence on the execution time: it was shown that the relationship between the number of nodes and the number of edges determines the total running time.
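
For context, the following is a minimal sequential sketch of the PageRank power iteration, i.e. the per-vertex computation that the Bulk Synchronous Parallel model splits into supersteps. It is only an illustration on a toy graph, not the distributed Parallel BGL implementation analysed in the thesis; the damping factor, tolerance and example graph are assumptions.

    # Minimal sequential PageRank power iteration (illustrative sketch only;
    # the thesis analyses the distributed Parallel BGL implementation instead).
    def pagerank(adj, damping=0.85, tol=1e-9, max_iter=100):
        """adj: dict mapping each node to the list of nodes it links to."""
        nodes = list(adj)
        n = len(nodes)
        rank = {v: 1.0 / n for v in nodes}
        for _ in range(max_iter):
            new_rank = {v: (1.0 - damping) / n for v in nodes}
            for v, out in adj.items():
                if out:                      # distribute rank along out-edges
                    share = damping * rank[v] / len(out)
                    for w in out:
                        new_rank[w] += share
                else:                        # dangling node: spread uniformly
                    for w in nodes:
                        new_rank[w] += damping * rank[v] / n
            if sum(abs(new_rank[v] - rank[v]) for v in nodes) < tol:
                return new_rank
            rank = new_rank
        return rank

    # toy graph, purely for demonstration
    example = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(pagerank(example))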

Relevance:

100.00%

Publisher:

Abstract:

The mass estimation of galaxy clusters is a crucial point for modern cosmology and can be obtained through several different techniques. In this work we discuss a new method to measure the mass of galaxy clusters by connecting the gravitational potential of the cluster with the kinematical properties of its surroundings. We explore the dynamics of the structures located in the region outside the virialized cluster: we identify groups of galaxies, such as sheets or filaments, in the cluster outer region and model how the cluster gravitational potential perturbs the motion of these structures away from the Hubble flow. This identification is done in redshift space, where we look for overdensities with a filamentary shape. We then use a radial mean velocity profile that has been found to be a rather universal trend in simulations, and we fit the radial infall velocity profile of the overdensities found. The method has been tested on several cluster-size haloes from cosmological N-body simulations, giving results in very good agreement with the true virial masses of the haloes and the orientations of the sheets. We then applied the method to the Coma cluster, and in this case too we found good agreement with previous estimates. A mass discrepancy can be noticed between sheets with different alignments with respect to the center of the cluster. This difference can be used to reconstruct the shape of the cluster and to show that spherical symmetry is not always a valid assumption. In fact, if the cluster is not spherical, sheets oriented along different axes should feel a slightly different gravitational potential and therefore yield different masses as a result of the analysis described above. This estimate, too, has been tested on cosmological simulations and then applied to Coma, showing the actual non-sphericity of this cluster.
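
To make the fitting step concrete, the sketch below fits a parametrised radial infall velocity profile to the mean radial velocities of galaxies in one identified sheet. The functional form, the Hubble constant value, the toy data and the initial guesses are all placeholders, not the universal profile calibrated on simulations that the method actually relies on.

    # Illustrative fit of an assumed infall velocity profile to sheet galaxies.
    # Profile form and all numbers are placeholders, not the thesis' calibration.
    import numpy as np
    from scipy.optimize import curve_fit

    H0 = 70.0  # Hubble constant, km/s/Mpc (assumed value)

    def infall_profile(r, amplitude, slope):
        """Hubble flow minus a cluster-induced infall term (placeholder form)."""
        return H0 * r - amplitude * r ** (-slope)

    # toy radii (Mpc) and mean radial velocities (km/s) for one identified sheet
    r_obs = np.array([3.0, 4.0, 5.0, 6.0, 8.0, 10.0])
    v_obs = np.array([-35.0, 68.0, 160.0, 247.0, 410.0, 566.0])

    popt, pcov = curve_fit(infall_profile, r_obs, v_obs, p0=[400.0, 0.5])
    amplitude_fit, slope_fit = popt
    print(f"fitted infall amplitude: {amplitude_fit:.1f} km/s, slope: {slope_fit:.2f}")
    # In the actual method the fitted amplitude is tied to the virial mass through
    # the mass dependence of the profile calibrated on N-body simulations.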

Relevance:

100.00%

Publisher:

Abstract:

Computing the weighted geometric mean of large sparse matrices is an operation that rapidly becomes intractable as the size of the matrices involved grows. However, if we are interested not in the matrix function itself but only in its product with a vector, the problem becomes simpler and may be solvable even when computing the matrix mean itself would be infeasible. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by certain characteristics of the input matrices. Our results suggest that a few of these characteristics have a real bearing on performance and that, although there is no single best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
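
As a rough illustration of the Krylov subspace family mentioned above, the sketch below approximates f(A\B) v for f(x) = x^t, the building block of the weighted geometric mean A (A^{-1}B)^t, by running Arnoldi on the operator w ↦ A^{-1}(Bw) and evaluating f on the small projected matrix. It is a generic textbook-style sketch under assumed test matrices, Krylov dimension and weight t, not the refined quadrature and Krylov schemes developed in the thesis.

    # Generic Krylov sketch for f(A\B) v with f(x) = x**t (weighted geometric
    # mean building block).  Illustrative only; not the thesis' algorithms.
    import numpy as np
    from scipy.sparse import identity, random as sprandom
    from scipy.sparse.linalg import splu
    from scipy.linalg import fractional_matrix_power

    def arnoldi_pencil_power(A, B, v, t=0.5, m=30):
        """Approximate (A^{-1} B)^t v with an m-dimensional Arnoldi projection."""
        solve_A = splu(A.tocsc()).solve          # reuse one sparse factorisation
        n = len(v)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(v)
        V[:, 0] = v / beta
        for j in range(m):
            w = solve_A(B @ V[:, j])             # apply the operator A^{-1} B
            for i in range(j + 1):               # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:              # happy breakdown
                m = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        Hm = H[:m, :m]
        fHm = fractional_matrix_power(Hm, t)     # f on the small projected matrix
        return beta * (V[:, :m] @ fHm[:, 0])

    # toy symmetric positive definite test matrices (assumed data)
    n = 200
    rng = np.random.default_rng(0)
    M = sprandom(n, n, density=0.02, random_state=0)
    A = (M @ M.T + 10 * identity(n)).tocsc()
    B = identity(n, format="csc") * 2.0
    v = rng.standard_normal(n)
    print(arnoldi_pencil_power(A, B, v)[:5])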

Relevance:

90.00%

Publisher:

Abstract:

Wireless sensor networks can transform our buildings into smart environments, improving comfort, energy efficiency and safety. Today, however, wireless sensor networks are not considered reliable enough to be deployed on a large scale. In this thesis, we study the main failure causes of wireless sensor networks and the existing solutions for improving reliability, and we investigate the possibility of implementing self-diagnosis through power consumption measurements on the sensor nodes. In particular, we focus on faults that generate in-range errors: readings that are wrong but still fall within the valid range of the sensor, and that can therefore be missed by external observers. Using a wireless sensor network deployed in the R&D building of NXP at the High Tech Campus of Eindhoven, we performed a power consumption characterization of the Wireless Autonomous Sensor (WAS) and studied, through a set of experiments, the effect that faults have on the power consumption of the sensor.
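
As a simple illustration of the self-diagnosis idea, the sketch below compares a node's measured average power draw against a previously characterised baseline and flags deviations that might indicate a fault producing in-range errors. The node names, baseline values and threshold are hypothetical and this is not the WAS characterisation performed in the thesis.

    # Toy self-diagnosis check: flag nodes whose measured power consumption
    # deviates from a characterised baseline.  All values are hypothetical and
    # only illustrate fault detection through power measurements.
    BASELINE_MW = {"node-01": 1.8, "node-02": 1.8, "node-03": 2.1}  # assumed profile

    def diagnose(measured_mw, baseline_mw=BASELINE_MW, tolerance=0.25):
        """Return nodes whose mean power draw deviates more than `tolerance`
        (relative) from their baseline, as candidates for in-range faults."""
        suspects = {}
        for node, power in measured_mw.items():
            expected = baseline_mw.get(node)
            if expected is None:
                continue
            deviation = abs(power - expected) / expected
            if deviation > tolerance:
                suspects[node] = deviation
        return suspects

    # hypothetical field measurements (mW)
    print(diagnose({"node-01": 1.9, "node-02": 3.0, "node-03": 2.0}))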

Relevance:

90.00%

Publisher:

Abstract:

The BLEVE, an acronym for Boiling Liquid Expanding Vapour Explosion, is one of the most dangerous accidents that can occur in pressure vessels. It can be defined as an explosion resulting from the failure of a vessel containing a pressure-liquefied gas stored at a temperature significantly above its boiling point at atmospheric pressure. This phenomenon frequently occurs when a vessel is engulfed by a fire: the heat causes the internal pressure to rise and the mechanical properties of the wall to degrade, with the consequent rupture of the tank and the instantaneous release of its whole content. After the breakage, the vapour flows out and expands, and the liquid phase starts boiling because of the pressure drop. The formation and propagation of a destructive shock wave may occur, together with the ejection of fragments, the generation of a fireball if the stored fluid is flammable and immediately ignited, or the atmospheric dispersion of a toxic cloud if the fluid contained inside the vessel is toxic. Despite the many studies on the BLEVE mechanism, the exact causes and conditions of its occurrence are still elusive. In order to better understand this phenomenon, the present study first investigates the concept and definition of BLEVE. A historical analysis of the major events that have occurred over the past 60 years is described, together with a survey of the principal causes of this type of event, including an analysis of the substances most frequently involved. A description of the main effects of BLEVEs is then reported, focusing especially on the overpressure. The major aim of the present thesis, however, is to contribute, with a comparative analysis, to the validation of the main models available in the literature for the calculation and prediction of the overpressure caused by BLEVEs. In line with this purpose, after a short overview of the available approaches, their ability to reproduce the trend of the overpressure is investigated. The overpressure calculated with the different models is compared with values derived from past events and from ad-hoc experiments, focusing especially on medium- and large-scale phenomena. The ability of the models to account for different filling levels of the reservoir and for different substances is analyzed as well. The results of these calculations are extensively discussed. Finally, some concluding remarks are reported.
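
To make the overpressure-model comparison more tangible, the sketch below follows one of the simplest approaches found in the literature: estimate the expansion energy of the compressed vapour with Brode's formula, convert it to a TNT-equivalent mass, and compute the Hopkinson-Cranz scaled distance from which the peak overpressure is read off standard blast charts. This is only one of the model families such a comparison covers; the vessel data, yield factor and TNT specific energy used here are assumed example values.

    # Simplest-case BLEVE overpressure estimate via TNT equivalence (illustrative;
    # the thesis compares several, more refined literature models).  Vessel data,
    # yield factor and the TNT specific energy below are assumed example values.
    GAMMA = 1.4              # heat capacity ratio of the vapour (assumed)
    E_TNT = 4.68e6           # TNT specific energy, J/kg (commonly used value)

    def brode_expansion_energy(p_burst, p_atm, vapour_volume, gamma=GAMMA):
        """Brode's formula: energy released by expanding the compressed vapour."""
        return (p_burst - p_atm) * vapour_volume / (gamma - 1.0)

    def tnt_equivalent_mass(energy, yield_factor=0.5):
        """Fraction of the expansion energy assumed to feed the blast wave."""
        return yield_factor * energy / E_TNT

    def scaled_distance(distance, tnt_mass):
        """Hopkinson-Cranz scaled distance Z = R / W**(1/3), m/kg**(1/3)."""
        return distance / tnt_mass ** (1.0 / 3.0)

    # hypothetical example: 25 m3 of vapour bursting at 15 bar, observer at 100 m
    energy = brode_expansion_energy(p_burst=15e5, p_atm=1.013e5, vapour_volume=25.0)
    w_tnt = tnt_equivalent_mass(energy)
    z = scaled_distance(100.0, w_tnt)
    print(f"energy: {energy:.3e} J, TNT equivalent: {w_tnt:.1f} kg, Z = {z:.1f}")
    # The peak side-on overpressure at Z is then read from standard blast charts
    # (e.g. the Kingery-Bulmash curves), which are not reproduced here.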