801 results for Expectation-maximization (EM) Algorithm
Abstract:
Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)
Abstract:
Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow using satellite-based passive microwave radiometers, is currently an unsolved problem. The challenge stems from the large variability of microwave emissivity spectra for snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work investigates a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the MODIS infrared spectroradiometer (on Aqua) have been co-located for 365 days, from 1 October 2006 to 30 September 2007. CloudSat products have been used as truth to calibrate and validate all the proposed algorithms. The methodological approach can be summarised in two steps. In the first step, an empirical search for a threshold aimed at discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce satisfactory results, a more statistically sound approach was attempted. Two techniques, which compute the probability of snowfall above and below a Brightness Temperature (BT) threshold, have been applied to the available data. The first is based on a logistic distribution to represent the probability of snow given the predictors. The second, termed Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that makes no assumption about the shape of the probabilistic model (such as the logistic) and only requires estimation of the BT thresholds. The results show that both proposed methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
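A minimal sketch of the first (logistic) approach, assuming hypothetical brightness-temperature predictors and synthetic labels standing in for the CloudSat truth; the channel selection, calibration and BMBP technique of the thesis are not reproduced here.

```python
# Hedged sketch: logistic snow/no-snow classifier on brightness temperatures (BTs).
# Channel values, labels and the 0.5 threshold are illustrative, not those of the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for co-located passive microwave BTs (K); columns = hypothetical channels.
X = rng.normal(loc=[240.0, 250.0, 230.0], scale=10.0, size=(1000, 3))
# Synthetic "truth" labels standing in for CloudSat snowfall flags.
y = (X[:, 0] - 0.5 * X[:, 2] < 125.0).astype(int)

model = LogisticRegression().fit(X, y)
p_snow = model.predict_proba(X)[:, 1]           # P(snow | BT predictors)
detected = p_snow > 0.5                          # detection threshold
pod = (detected & (y == 1)).sum() / max((y == 1).sum(), 1)
print(f"probability of detection on the synthetic sample: {pod:.2f}")
```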
Abstract:
[EN]A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
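A minimal Python sketch of the coloring idea the abstract relies on: vertices that share no edge receive the same color and can then be repositioned concurrently without write conflicts. The greedy coloring and Laplacian-style update below are generic illustrations, not the thesis' simultaneous untangling/smoothing objective or its shared-memory implementation.

```python
# Hedged sketch: greedy vertex coloring so same-colored mesh nodes can be smoothed in parallel.
import numpy as np

def greedy_coloring(adjacency):
    """adjacency: dict node -> set of neighbour nodes. Returns dict node -> color id."""
    colors = {}
    for v in adjacency:
        used = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Tiny illustrative mesh connectivity and coordinates (not one of the benchmark meshes).
adjacency = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
coords = np.array([[0.0, 0.0], [1.0, 0.1], [0.4, 1.0], [1.2, 1.1]])

colors = greedy_coloring(adjacency)
for c in sorted(set(colors.values())):
    batch = [v for v, col in colors.items() if col == c]
    # Nodes in `batch` are mutually non-adjacent: their updates do not read each other's
    # new positions, so they could be assigned to different threads on a many-core machine.
    for v in batch:
        nbrs = list(adjacency[v])
        coords[v] = coords[nbrs].mean(axis=0)   # Laplacian-style reposition
print(coords)
```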
Abstract:
[EN]We present a new method, based on the idea of the meccano method and a novel T-mesh optimization procedure, to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. As a result, the algorithm obtains a high-quality parametric transformation between 2D objects and the parametric domain, the unit square. First, we define a parametric mapping between the input boundary of the object and the boundary of the parametric domain. Then, we build a T-mesh adapted to the geometric singularities of the domain in order to preserve the features of the object boundary with a desired tolerance…
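For context only, a much simpler classical way to map the unit square onto a 2D region is a bilinear Coons patch built from four boundary curves; the sketch below shows that generic construction, not the meccano method or the T-mesh optimization described in the abstract.

```python
# Hedged sketch: bilinear Coons patch mapping the unit square to a 2D region
# bounded by four parametric curves (a generic stand-in, not the meccano method).
import numpy as np

def coons(u, v, bottom, top, left, right):
    """u, v in [0, 1]; each boundary function maps a scalar to an (x, y) point."""
    ruled_uv = (1 - v) * bottom(u) + v * top(u)
    ruled_vu = (1 - u) * left(v) + u * right(v)
    corners = ((1 - u) * (1 - v) * bottom(0) + u * (1 - v) * bottom(1)
               + (1 - u) * v * top(0) + u * v * top(1))
    return ruled_uv + ruled_vu - corners

# Example boundary: a unit square whose top edge is a gentle wave.
bottom = lambda u: np.array([u, 0.0])
top    = lambda u: np.array([u, 1.0 + 0.2 * np.sin(np.pi * u)])
left   = lambda v: np.array([0.0, v])
right  = lambda v: np.array([1.0, v * (1.0 + 0.2 * np.sin(np.pi))])  # matches top(1)

print(coons(0.5, 0.5, bottom, top, left, right))   # image of the square's centre
```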
Abstract:
This thesis presents and discusses TEDA, an algorithm for the automatic real-time detection of tsunamis and large-amplitude waves in sea-level records. TEDA has been developed within the Tsunami Research Team of the University of Bologna for coastal tide gauges, and it has been calibrated and tested for the tide gauge station of Adak Island, Alaska. A preliminary study on applying TEDA to offshore buoys in the Pacific Ocean is also presented.
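As an illustration of the kind of real-time trigger involved, the sketch below flags large-amplitude waves when a short-term measure of the detided signal exceeds a multiple of a longer-term background level; TEDA's actual detection functions, windows and thresholds are those defined in the thesis, not this generic STA/LTA-style stand-in.

```python
# Hedged sketch: a generic short-term/long-term amplitude trigger on a sea-level record
# (illustrative only; not TEDA's detection functions).
import numpy as np

def detect_events(level, short_win=60, long_win=3600, ratio=3.0):
    """level: 1-D array of detided sea-level samples. Returns indices that trigger."""
    triggers = []
    for i in range(long_win, len(level)):
        sta = np.mean(np.abs(level[i - short_win:i]))   # short-term average amplitude
        lta = np.mean(np.abs(level[i - long_win:i]))    # long-term background amplitude
        if lta > 0 and sta / lta > ratio:
            triggers.append(i)
    return triggers

# Synthetic record: background noise plus an abrupt long wave after sample 5000.
rng = np.random.default_rng(1)
sig = 0.02 * rng.standard_normal(8000)
sig[5000:] += 0.5 * np.sin(2 * np.pi * np.arange(3000) / 900.0)
print(detect_events(sig)[:5])   # first few samples at which the trigger fires
```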
Abstract:
The AMANDA-II detector is primarily designed for the direction-resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, as expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective increase in the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae as well as experimental data from supernova SN1987A were studied. In addition, the sensitivities of the optical modules were re-determined. For this purpose, the energy losses of charged particles in the South Polar ice had to be investigated and a simulation of photon propagation had to be developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. Within this work, an algorithm for the real-time search for supernova signals was implemented as a sub-module of the data acquisition. It contains various improvements over the version previously used by the AMANDA collaboration. Thanks to an optimization for computing speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. The disqualification of optical modules showing unsuitable behaviour is performed in real time. However, the module behaviour must be judged for this purpose from buffered data, so the analysis of the data from the qualified modules cannot proceed without a delay of about 5 minutes. If a supernova candidate is detected, the data are archived in 10-millisecond intervals for a period of several minutes for later evaluation. Since the noise-rate data of the optical modules are otherwise available in intervals of 500 ms, the time base of the analysis can be chosen freely in units of 500 ms. Within this work, three analyses of this kind were activated at the South Pole: one with the data-acquisition time base of 500 ms, one with a time base of 4 s and one with a time base of 10 s. This maximizes the sensitivity to signals with a characteristic exponential decay time of 3 s while maintaining good sensitivity over a wide range of exponential decay times. These analyses were studied in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and are correspondingly well understood. Based on the measured data, the expected signals from supernovae were simulated. From a comparison between this simulation, the measured data from 2000 to 2003 and the simulation of the expected statistical background, it can be concluded with a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. At this level, the number of expected events from the statistical background is less than one millionth. Nevertheless, one such event was measured.
With the chosen significance threshold, 74% of all possible supernova progenitor stars in the Galaxy are monitored. In combination with the most recent result published by the AMANDA collaboration, an upper limit of only 2.6 supernovae per year is obtained. Within the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate increase before a notification of the detection of a supernova candidate is sent. This raises the monitored fraction of stars in the Galaxy to 81%, but the false-alarm rate also increases to about 2 events per week. The alarm messages are transmitted to the northern hemisphere via an Iridium modem and are expected to contribute soon to SNEWS, the worldwide network for the early detection of supernovae.
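A minimal sketch of the kind of collective rate-excess significance underlying the trigger described above, assuming Poisson-like module noise counts and a simple pooled estimator; the actual AMANDA analysis (module qualification, weighting, multiple time bases) is considerably more involved.

```python
# Hedged sketch: significance of a collective noise-rate increase across optical modules,
# using a simple pooled Poisson estimate (illustrative; not the AMANDA implementation).
import numpy as np

def excess_significance(counts, baseline_mean):
    """counts: per-module counts in one analysis time bin (e.g. 500 ms, 4 s or 10 s).
    baseline_mean: per-module mean counts estimated from preceding background data."""
    excess = counts.sum() - baseline_mean.sum()
    sigma = np.sqrt(baseline_mean.sum())       # Poisson fluctuation of the summed background
    return excess / sigma

rng = np.random.default_rng(2)
n_modules = 500
baseline = rng.uniform(300, 800, size=n_modules)     # expected counts per module per bin
quiet = rng.poisson(baseline)                        # background-only bin
burst = rng.poisson(baseline * 1.01)                 # 1 % collective rate increase

print(f"quiet bin: {excess_significance(quiet, baseline):+.1f} sigma")
print(f"burst bin: {excess_significance(burst, baseline):+.1f} sigma")
```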
Abstract:
Complex network analysis is a very popular topic in computer science. Unfortunately, these networks, extracted from different contexts, are usually very large, and their analysis may be very complicated: computing metrics on these structures can be very expensive. Among all metrics, we analyse the extraction of subnetworks called communities: groups of nodes that probably play the same role within the whole structure. Community extraction is an interesting operation in many different fields (biology, economics, ...). In this work we present a parallel community detection algorithm that can operate on networks with a huge number of nodes and edges. After an introduction to graph theory and high-performance computing, we will explain our design strategies and our implementation. Then, we will show a performance evaluation carried out on a distributed-memory architecture, namely the IBM BlueGene/Q supercomputer "Fermi" at the CINECA supercomputing center, Italy, and we will comment on our results.
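As a point of reference for what "community extraction" computes, the sketch below runs plain serial label propagation on a toy graph; the thesis' contribution is a parallel, distributed-memory algorithm for huge graphs, which this generic baseline does not reproduce.

```python
# Hedged sketch: serial label propagation as a generic community-detection baseline
# (the thesis targets a parallel, distributed-memory algorithm; this is not it).
import random
from collections import Counter

def label_propagation(adjacency, iterations=20, seed=0):
    """adjacency: dict node -> list of neighbours. Returns dict node -> community label."""
    rng = random.Random(seed)
    labels = {v: v for v in adjacency}            # start with one community per node
    nodes = list(adjacency)
    for _ in range(iterations):
        rng.shuffle(nodes)
        for v in nodes:
            if not adjacency[v]:
                continue
            counts = Counter(labels[u] for u in adjacency[v])
            best = max(counts.values())
            labels[v] = rng.choice([l for l, c in counts.items() if c == best])
    return labels

# Two loosely connected cliques.
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adjacency))
```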
Abstract:
This thesis presents several techniques designed to drive a swarm of robots in an a priori unknown environment, in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS). The first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
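A minimal sketch of the two ingredients named above, combined in the simplest possible way: a PSO-style velocity update drives the swarm toward a goal, while a consensus (agreement-protocol) term pulls each unit toward the average position of its communication neighbours so the group stays compact. The gains, ring topology and goal below are illustrative; the obstacle handling and formation control of the thesis are not reproduced.

```python
# Hedged sketch: PSO-style goal seeking plus a consensus term over a fixed
# communication graph (illustrative gains/topology, not those of the thesis).
import numpy as np

rng = np.random.default_rng(3)
n, goal = 6, np.array([10.0, 10.0])
pos = rng.uniform(0, 1, size=(n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring topology

def cost(p):                        # distance to the goal acts as the fitness
    return np.linalg.norm(p - goal)

for step in range(200):
    gbest = min(pbest, key=cost).copy()
    for i in range(n):
        r1, r2 = rng.random(2)
        # PSO velocity update (inertia + cognitive + social terms).
        vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (gbest - pos[i])
        # Consensus term: move toward the average position of communication neighbours.
        vel[i] += 0.3 * (pos[neighbours[i]].mean(axis=0) - pos[i])
        pos[i] = pos[i] + vel[i]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i].copy()

print(np.round(pos, 2))             # units should end up clustered near the goal
```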
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing, in cooperation with another algorithm called Umbrella Sampling. Umbrella Sampling adds a bias to the potential energy of the system in order to force the system to sample a specific region of configurational space. N independent simulations are performed in order to sample the whole region of interest. Subsequently, the WHAM algorithm is used to estimate the original system energy starting from the N atomic trajectories. The parallelization of WHAM has been carried out with CUDA, a language that allows code to run on the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can considerably speed up WHAM execution compared to previous serial CPU implementations; the WHAM CPU code, in particular, shows execution-time bottlenecks for very large numbers of iterations. The algorithm has been written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, with a performance increase when the model was executed on graphics cards with higher compute capability. Nonetheless, the GPUs used to test the algorithm are quite old and not designed for scientific computing. It is likely that a further performance increase would be obtained if the algorithm were executed on GPU clusters with a high level of computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm, with their applications to the study of ionic channels and to Molecular Docking (Chapter 1); then I present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
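For reference, the WHAM self-consistent equations that both the serial and the CUDA code iterate can be written compactly; the sketch below is a plain NumPy version on synthetic biased histograms (bin layout, spring constant, units and convergence criterion are illustrative, and the GPU parallelization is not reproduced).

```python
# Hedged sketch: the WHAM self-consistent iteration on synthetic umbrella-sampling histograms
# (plain NumPy; illustrative parameters, not the thesis' CUDA/C++ implementation).
import numpy as np

kT = 2.5                                    # kJ/mol, illustrative
x = np.linspace(-2.0, 2.0, 100)             # reaction-coordinate bins
centers = np.linspace(-1.5, 1.5, 8)         # umbrella window centres
k_spring = 50.0
U_bias = 0.5 * k_spring * (x[None, :] - centers[:, None]) ** 2   # bias of window j at bin x

rng = np.random.default_rng(4)
N_j = np.full(len(centers), 5000)           # samples per window
# Synthetic per-window histograms standing in for the N atomic trajectories.
H = np.array([np.histogram(rng.normal(c, 0.25, n), bins=100, range=(-2, 2))[0]
              for c, n in zip(centers, N_j)])

f = np.zeros(len(centers))                  # free-energy shifts of the windows
for _ in range(2000):                       # WHAM self-consistent iterations
    denom = np.sum(N_j[:, None] * np.exp((f[:, None] - U_bias) / kT), axis=0)
    p = H.sum(axis=0) / denom               # unbiased (unnormalised) probability per bin
    f_new = -kT * np.log(np.sum(p[None, :] * np.exp(-U_bias / kT), axis=1))
    f_new -= f_new[0]                       # fix the arbitrary additive constant
    if np.max(np.abs(f_new - f)) < 1e-7:
        break
    f = f_new

F = -kT * np.log(p / p.max())               # relative free-energy profile
print(np.round(F[::10], 2))
```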
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation concerns sizing the pipes in the water distribution network (WDN) and/or optimising specific parts of the network, such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), trying to solve and optimise a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision-support system generator for multi-objective optimisation was used: GANetXL, developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which provided the Pareto fronts of each configuration. The first experiment carried out concerned the Anytown network. It is a large network with a pump station of four fixed-speed parallel pumps that boost the water supply. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs), by installing inverters able to vary their speed during the day. In this way, large energy and cost savings were achieved, together with a reduction in the number of pump switches. The results of this research are thoroughly illustrated in Chapter 7, with comments and a variety of graphs and different configurations. The second experiment concerned the Cabrera network. This smaller WDN has a single fixed-speed (FS) pump. The optimisation problem was the same, namely the minimisation of energy consumption and, in parallel, the minimisation of TNps, and the same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments over a wide variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. All these different setups produced a large number of results, which are compared in Chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result convinces him/her that a good optimisation point has been reached.
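To make the two-objective setting concrete, the sketch below extracts the Pareto front from a set of hypothetical pump schedules evaluated on the two objectives used in the thesis (energy cost and number of pump switches); it is only the dominance filter, not NSGA-II, GANetXL or an EPANET simulation, and the candidate values are invented.

```python
# Hedged sketch: Pareto-front extraction for (energy cost, pump switches) pairs
# (dominance filter only; NSGA-II / GANetXL / EPANET are not reproduced here).
import numpy as np

def pareto_front(objectives):
    """objectives: (n, 2) array to be minimised. Returns indices of non-dominated rows."""
    front = []
    for i, fi in enumerate(objectives):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(objectives) if j != i)
        if not dominated:
            front.append(i)
    return front

# Hypothetical 24-hour pump schedules: [energy cost in euro, number of pump switches].
candidates = np.array([[420.0, 8], [395.0, 12], [450.0, 4], [395.0, 10], [500.0, 2]])
idx = pareto_front(candidates)
print(candidates[idx])     # the trade-off curve the optimiser would refine
```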
Abstract:
The aim of this thesis work is the characterisation of an optical sensor for haematocrit reading and the development of the device's calibration algorithm. In other words, using data obtained from a suitably planned calibration session, the developed algorithm returns the data-interpolation curve that characterises the transducer. The main steps of the thesis work are summarised in the following points: 1) Planning of the calibration session needed for data collection and subsequent construction of a black-box model. Output: the reading from the optical sensor (expressed in mV). Input: the haematocrit value expressed in percentage points (this quantity represents the true blood-volume value and was obtained with a blood centrifugation device). 2) Development of the algorithm. The algorithm, developed and used offline, returns the regression curve of the data. Macroscopically, the code can be divided into two main parts: 1- acquisition of the sensor data and of the operating state of the two-phase pump; 2- normalisation of the data with respect to the sensor reference value and implementation of the regression algorithm. The data-normalisation step is a fundamental statistical tool for comparing quantities that are not uniform with one another. Existing studies also show a morphological change of the red blood cell in response to mechanical stress. A further aspect addressed in this work concerns the blood flow velocity determined by the pump and how this quantity can influence the haematocrit reading.
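A minimal offline sketch of the two steps described above, assuming hypothetical calibration points: sensor readings are normalised against a reference value and a regression curve is then fitted. The data, reference value and polynomial order are illustrative, not those of the actual device.

```python
# Hedged sketch: offline calibration-curve fit for an optical haematocrit sensor
# (hypothetical data, reference value and polynomial order).
import numpy as np

# Hypothetical calibration points: sensor output (mV) vs. true haematocrit (%)
# obtained from the centrifugation device.
sensor_mV = np.array([1180.0, 1150.0, 1115.0, 1075.0, 1030.0, 980.0])
hct_true = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])

reference_mV = 1200.0                       # illustrative sensor reference value
x = sensor_mV / reference_mV                # normalisation step

coeffs = np.polyfit(x, hct_true, deg=2)     # regression curve (order chosen for illustration)
calibration = np.poly1d(coeffs)

reading = 1100.0                            # a new raw reading in mV
print(f"estimated haematocrit: {calibration(reading / reference_mV):.1f} %")
```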
Abstract:
In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all the units of the panel and provides, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited when the number of individuals in the panel is small relative to the number of time-series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyse its properties. The small-sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test with a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreement (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application to the term structure of interest rates in the US.
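As a generic illustration of the bootstrap testing logic used in the second and third chapters (not the bootrank command or the Cavaliere et al. (2012) rank-test algorithm), a bootstrap p-value compares the observed statistic with its distribution over resampled data; the statistic, null imposition and data below are all invented for the sketch.

```python
# Hedged sketch: generic bootstrap p-value for a test statistic
# (illustrative only; not the bootrank / Cavaliere et al. (2012) algorithm).
import numpy as np

def bootstrap_pvalue(data, statistic, n_boot=999, seed=0):
    """Resample under an approximate null by recentring the data, then compare
    the observed statistic with its bootstrap distribution."""
    rng = np.random.default_rng(seed)
    observed = statistic(data)
    centred = data - data.mean()            # crude way to impose the null mean of zero
    boot = np.array([statistic(rng.choice(centred, size=len(data), replace=True))
                     for _ in range(n_boot)])
    return (1 + np.sum(boot >= observed)) / (n_boot + 1)

rng = np.random.default_rng(5)
sample = rng.normal(0.3, 1.0, size=100)     # true mean 0.3, so H0: mean = 0 should be rejected
t_stat = lambda x: np.sqrt(len(x)) * x.mean() / x.std(ddof=1)
print(f"bootstrap p-value: {bootstrap_pvalue(sample, t_stat):.3f}")
```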
Abstract:
This thesis work was carried out at the Medical Physics service of the Policlinico Sant'Orsola-Malpighi in Bologna. The study focuses on the comparison between standard reconstruction techniques (Filtered Back Projection, FBP) and iterative ones in Computed Tomography. The work was divided into two parts: in the first, the quality of images acquired with a multislice CT scanner (iCT 128, Philips system) was analysed using both the FBP algorithm and the iterative one (in our case iDose4). To assess image quality, the following parameters were analysed: the Noise Power Spectrum (NPS), the Modulation Transfer Function (MTF) and the contrast-to-noise ratio (CNR). The first two quantities were studied by performing measurements on a phantom provided by the manufacturer, which simulated the body and head regions with two cylinders of 32 and 20 cm respectively. The measurements confirm the noise reduction, but to a different extent for the different convolution filters used. The MTF study, instead, revealed that using standard or iterative techniques does not change the spatial resolution; indeed, the curves obtained are perfectly identical (apart from the intrinsic differences of the convolution filters), contrary to what is declared by the manufacturer. For the CNR analysis two phantoms were used; the first, called Catphan 600, is the phantom used to characterise CT systems. The second, called CIRS 061, contains inserts that simulate the presence of lesions with densities typical of the abdominal district. The study showed that, for both phantoms, the contrast-to-noise ratio increases when the iterative reconstruction technique is used. The second part of the thesis work consisted in evaluating the dose reduction, considering different protocols used in clinical practice; a large number of examinations were analysed, and the mean CTDI and DLP values were computed on a sample of examinations with FBP and with iDose4. The results show that the values obtained with the iterative algorithm are below the national diagnostic reference levels and below those obtained without iterative systems.
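For reference, the contrast-to-noise ratio compared between the FBP and iDose4 reconstructions can be computed from two regions of interest as in the hedged sketch below; the pixel values are synthetic and the ROI convention is only one of several in use.

```python
# Hedged sketch: contrast-to-noise ratio (CNR) from lesion and background ROIs
# (synthetic pixel values; the ROI placement convention is illustrative).
import numpy as np

def cnr(roi_lesion, roi_background):
    """CNR = |mean_lesion - mean_background| / std_background, in Hounsfield units."""
    return abs(roi_lesion.mean() - roi_background.mean()) / roi_background.std(ddof=1)

rng = np.random.default_rng(6)
# Synthetic ROIs mimicking an abdominal insert: same contrast, less noise after iteration.
lesion_fbp = rng.normal(60.0, 12.0, size=500)      # HU
background_fbp = rng.normal(40.0, 12.0, size=500)
lesion_idose = rng.normal(60.0, 7.0, size=500)
background_idose = rng.normal(40.0, 7.0, size=500)

print(f"CNR (FBP):    {cnr(lesion_fbp, background_fbp):.1f}")
print(f"CNR (iDose4): {cnr(lesion_idose, background_idose):.1f}")
```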