814 results for PC-algorithm
Abstract:
Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)
Abstract:
Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow, using satellite-based passive microwave spectrometers is currently an unsolved problem. The challenge stems from the large variability of microwave emissivity spectra for snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work investigates a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectrometer MODIS (on AQUA) have been co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products have been used as the truth to calibrate and validate all the proposed algorithms. The methodological approach can be summarised in two steps. In the first step, an empirical search for a threshold aimed at discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce adequate results, a more statistically sound approach was attempted. Two different techniques, which allow the probability above and below a Brightness Temperature (BT) threshold to be computed, have been applied to the available data. The first technique is based on a Logistic Distribution to represent the probability of snow given the predictors. The second, termed the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that does not require any hypothesis on the shape of the probabilistic model (such as, for instance, the Logistic) and only requires the estimation of the BT thresholds. The results show that both proposed methods are able to discriminate snowing and non-snowing conditions over the Polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
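The logistic technique mentioned above can be illustrated with a short sketch: a logistic model maps a vector of brightness temperatures to a snowfall probability, which is then thresholded at 0.5. The channels, coefficients and intercept below are hypothetical placeholders, not the values estimated in the thesis.

```python
import numpy as np

def snow_probability(bt, weights, bias):
    """Logistic model for P(snow | brightness temperatures).

    bt      : brightness temperatures (K) for the selected channels
    weights : regression coefficients (hypothetical placeholders)
    bias    : intercept term (hypothetical placeholder)
    """
    z = bias + np.dot(weights, bt)
    return 1.0 / (1.0 + np.exp(-z))      # logistic (sigmoid) link

# Hypothetical example with three microwave channels (values in kelvin)
bt_sample = np.array([245.0, 238.5, 230.1])
weights = np.array([-0.08, 0.05, -0.03])
bias = 10.0

p_snow = snow_probability(bt_sample, weights, bias)
print(f"P(snow) = {p_snow:.3f} -> {'snow' if p_snow > 0.5 else 'no snow'}")
```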
Abstract:
[EN]A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
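The role of the graph colouring procedures compared above can be sketched as follows: adjacent mesh vertices receive different colours, so all vertices of one colour form an independent set and can be relocated concurrently without write conflicts. The greedy colouring and the placeholder smoothing step below only illustrate this pattern; they are not the authors' parallel algorithm or any of the three published colouring procedures.

```python
from collections import defaultdict

def greedy_coloring(adjacency):
    """Assign each vertex the smallest colour not used by its neighbours."""
    color = {}
    for v in adjacency:
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Toy mesh graph: vertex -> neighbouring vertices
adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
color = greedy_coloring(adjacency)

# Group vertices by colour: each group is an independent set, so its
# vertices could be untangled/smoothed in parallel (e.g. one thread each).
groups = defaultdict(list)
for v, c in color.items():
    groups[c].append(v)

for c, verts in sorted(groups.items()):
    # placeholder for the real untangling/smoothing kernel
    print(f"colour {c}: smooth vertices {verts} concurrently")
```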
Abstract:
[EN]We present a new method, based on the idea of the meccano method and a novel T-mesh optimization procedure, to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. As a result, the algorithm obtains a high-quality parametric transformation between 2D objects and the parametric domain, the unit square. First, we define a parametric mapping between the input boundary of the object and the boundary of the parametric domain. Then, we build a T-mesh adapted to the geometric singularities of the domain in order to preserve the features of the object boundary with a desired tolerance…
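As a minimal sketch of the first step, the mapping of the input boundary onto the boundary of the parametric domain, one can place the boundary points on the unit-square perimeter proportionally to their arc length. This is a generic illustration only; the actual parametric mapping, T-mesh construction and optimization procedure of the thesis are more elaborate.

```python
import numpy as np

def map_boundary_to_unit_square(points):
    """Map a closed 2D polygon (N x 2 array) onto the unit-square boundary,
    preserving relative arc length. Returns an N x 2 array."""
    closed = np.vstack([points, points[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])[:-1] / seg.sum()  # in [0, 1)
    t = 4.0 * s                                # perimeter coordinate in [0, 4)
    out = np.empty_like(points, dtype=float)
    for i, ti in enumerate(t):
        side, u = int(ti), ti - int(ti)
        # bottom, right, top, left edge of the unit square
        out[i] = [(u, 0.0), (1.0, u), (1.0 - u, 1.0), (0.0, 1.0 - u)][side]
    return out

# Hypothetical input boundary: an ellipse sampled at 16 points
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
boundary = np.column_stack([2.0 * np.cos(theta), np.sin(theta)])
print(map_boundary_to_unit_square(boundary))
```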
Abstract:
This thesis presents and discusses TEDA, an algorithm for the automatic real-time detection of tsunamis and large-amplitude waves in sea-level records. TEDA has been developed within the Tsunami Research Team of the University of Bologna for coastal tide gauges, and it has been calibrated and tested for the tide-gauge station of Adak Island, Alaska. A preliminary study on applying TEDA to offshore buoys in the Pacific Ocean is also presented.
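The abstract does not describe TEDA's detection criteria, so the sketch below shows only the general kind of real-time check such an algorithm performs: the sea-level signal is split into a slowly varying background and a residual, and a sample is flagged when the residual exceeds a multiple of the recent residual variability. Window lengths and the threshold factor are hypothetical and are not the TEDA settings.

```python
import numpy as np

def detect_events(sea_level, bg_win=60, var_win=30, k=5.0):
    """Flag samples whose detided residual exceeds k times the recent
    residual variability. Purely illustrative, not the TEDA criteria."""
    x = np.asarray(sea_level, dtype=float)
    flags = np.zeros_like(x, dtype=bool)
    for i in range(bg_win + var_win, len(x)):
        background = x[i - bg_win:i].mean()      # crude tide/background estimate
        residual = x[i] - background
        past_res = x[i - var_win:i] - background
        sigma = past_res.std() + 1e-9            # recent variability
        flags[i] = abs(residual) > k * sigma
    return flags

# Hypothetical record: a slow tide plus a sudden 0.8 m wave at sample 400
t = np.arange(600)
record = 0.5 * np.sin(2 * np.pi * t / 300) + 0.01 * np.random.randn(600)
record[400:420] += 0.8
print(np.where(detect_events(record))[0][:5], "...")
```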
Abstract:
This dissertation deals with the synthesis, the physicochemical and polymer-specific characterization and, in particular, the impedance-spectroscopic investigation of novel, solvent-free lithium-ion- and proton-conducting polymer materials for potential applications in secondary lithium-ion batteries and in high-temperature proton exchange membrane fuel cells (PEMFC; also: polymer electrolyte membrane fuel cell), respectively. Both types of ion-conducting membranes are based on the common principle of chemically attaching a side group responsible for ion transport to a suitable polymer backbone ("decoupling", also immobilization); with respect to glass transition temperature (Tg) and electrochemical and thermal stability (Td), the backbone plays a dynamically decoupled but no less important role. In both cases, transport is activated thermally. For the proton conductors there is the additional intention of demonstrating an alternative in which the polymer backbone is directly involved in the proton transport mechanism, i.e. translational diffusive ion transport takes place along the backbone rather than between adjacent side chains. For both the lithium-ion- and the proton-conducting polymer membranes, a main focus of the investigations lies on the temperature-dependent dynamic processes of the respective ionic species in the polymer matrix, comprising the ionic conductivity itself, relaxation phenomena, translational ion diffusion and, in the case of the proton conductors, possible transitions between mesomeric resonance structures. Lithium-ion conductors: Poly(meth)acrylates with (2-oxo-1,3-dioxolane) (cyclic carbonate) side groups of different spacer lengths were synthesized and characterized. The conductivity of poly(2-oxo-[1,3]dioxolan-4-yl)methyl acrylate (PDOA) : lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) (10:3) reaches about 10^-3.5 S cm^-1 at 150 °C. Plasticizing (doping) with equimolar amounts of propylene carbonate (PC) leads in all cases to an enormous increase in conductivity. The highest conductivities of blends of these polymers with LiTFSI (and LiBOB) are not found for the system with the lowest Tg. Moreover, Tg does not serve as the Williams-Landel-Ferry (WLF) reference temperature (Tref), so that a WLF fit of the conductivity data only succeeds with a modified WLF algorithm. The Tref values obtained lie clearly below Tg, at temperatures characteristic of side-chain relaxation ("freezing-in"). This suggests that side-chain relaxation plays a decisive role in the Li+ conduction mechanism. The Li+ transference numbers tLi+ in these systems vary between 0.13 (40 °C) and 0.55 (160 °C). Proton conductors: Polymers with barbituric acid or hypoxanthine side groups and polyalkylene biguanides of different spacer lengths were synthesized and characterized. The conductivity of poly(2,4,6(1H,3H,5H)-trioxopyrimidin-5-yl) methacrylate (PTPMA) reaches at most about 10^-4.4 S cm^-1 at 140 °C. Higher conductivities can only be reached by blending with aprotic solvents. Among the polyalkylene biguanides, the highest conductivity is achieved with polyethylene biguanide (PEB); it reaches 10^-2.4 S cm^-1 at 190 °C. The activation energies EA of the polyalkylene biguanides lie (in each case below Tg) between about 3 and 6 kJ mol^-1. In all observed cases Tg serves as Tref, so that a conventional WLF treatment is possible and the conductivity can be assumed to correlate with the free volume Vf.
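For reference, the Williams-Landel-Ferry relation used in these fits links the conductivity at temperature T to its value at a reference temperature Tref through two empirical constants C1 and C2; in the modified treatment described above, Tref is a fit parameter that falls below Tg, whereas the conventional treatment sets Tref = Tg.

\[
\log_{10}\frac{\sigma(T)}{\sigma(T_{\mathrm{ref}})}
  = \frac{C_1\,(T - T_{\mathrm{ref}})}{C_2 + (T - T_{\mathrm{ref}})}
\]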
Abstract:
Complex network analysis is a very popular topic in computer science. Unfortunately, these networks, extracted from different contexts, are usually very large and their analysis may be very complicated: the computation of metrics on these structures can be very expensive. Among all such analyses, we focus on the extraction of subnetworks called communities: groups of nodes that probably play the same role within the whole structure. Community extraction is an interesting operation in many different fields (biology, economics, ...). In this work we present a parallel community detection algorithm that can operate on networks with a huge number of nodes and edges. After an introduction to graph theory and high-performance computing, we will explain our design strategies and our implementation. Then, we will show a performance evaluation carried out on a distributed-memory architecture, namely the IBM BlueGene/Q supercomputer "Fermi" at the CINECA supercomputing centre, Italy, and we will comment on our results.
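The abstract does not name the community detection method that was parallelized, so the serial label-propagation sketch below is only meant to make "community extraction" concrete: each node repeatedly adopts the most frequent label among its neighbours, and nodes that end up with the same label form a community. The distributed-memory implementation developed in the thesis is, of course, far more involved.

```python
import random
from collections import Counter

def label_propagation(adjacency, max_iter=100, seed=0):
    """Minimal synchronous label propagation on an undirected graph
    given as {node: [neighbours]}. Returns {node: community_label}."""
    rng = random.Random(seed)
    labels = {v: v for v in adjacency}       # every node starts in its own community
    for _ in range(max_iter):
        changed = False
        nodes = list(adjacency)
        rng.shuffle(nodes)
        for v in nodes:
            if not adjacency[v]:
                continue
            counts = Counter(labels[u] for u in adjacency[v])
            best = max(counts.values())
            candidates = [l for l, c in counts.items() if c == best]
            new = rng.choice(candidates)
            if new != labels[v]:
                labels[v], changed = new, True
        if not changed:
            break
    return labels

# Toy graph: two triangles joined by a single edge (2-3)
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(graph))
```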
Informative value of quantitative, PC-assisted PCNA determination in leiomyosarcomas and leiomyomas
Abstract:
The aim of the study was to investigate to what extent PCNA, as an immunohistochemical tool, improves the differentiation of uterine LM from uterine LMS and helps in the prognostic assessment of uterine and gastrointestinal LMS. Further conventional prognostic parameters, examined in a number of previous studies, were also taken into account. The stainings were evaluated quantitatively by means of PC-controlled image processing, which calculates not only the positively stained cell area and cell count but also the nuclear area. The advantage of the quantitative measurement is its reproducibility independent of the user, in contrast to the more commonly used semiquantitative analyses. It is notable that both the nuclear size in general and the nuclear size of the positively stained cells are significantly larger in LMS than in LM (p < 0.0001). No significant differences were found between the nuclear sizes of uterine and gastrointestinal LMS. Regarding the differentiation between uterine LM and LMS, significantly higher PCNA values were found for LMS (mean positive nuclear count 15.08%, mean positive nuclear count 16.67%) than for LM (mean positive nuclear count 1.95%, mean positive nuclear count 2.02%). There was no significant difference from the PCNA values of the gastrointestinal LMS (mean positive nuclear area 19.07%, mean positive nuclear count 16.43%). No significant relationship was found between the level of the PCNA values and the prognosis of either the uterine or the gastrointestinal LMS. In this study, only age proved to be significantly relevant to prognosis for both uterine and gastrointestinal LMS; for the uterine LMS, poorer grading and the presence of necrosis were also significantly correlated with prognosis. Tumour size and the level of the PCNA values were not significantly relevant to prognosis for either the uterine or the gastrointestinal LMS, nor was the presence of necrosis for the gastrointestinal LMS. A significant relationship was found between the PCNA values and the uterine size and myoma size in the uterine leiomyomas, and analogously the tumour size of the uterine and gastrointestinal LMS. Within the group of uterine myomas, cell-rich myomas showed higher values, which, however, were not significant. In contrast, no relationship was found between the level of the PCNA values and the grading of the uterine or gastrointestinal leiomyosarcomas, the age of the patients with uterine LMS, the presence of necrosis in the uterine or gastrointestinal LMS, or sex in the gastrointestinal LMS. With regard to prognosis, the results are partly in contradiction and partly in agreement with previous publications. A major limitation is the small number of cases, owing above all to the rarity of these tumours, as well as the insufficient follow-up of the patients, which also results from the retrospective nature of the study. Larger studies, possibly multicentre studies with careful follow-up, are needed to further investigate these rare, yet all the more interesting tumours, which place particular demands on the examining pathologist, in the German-speaking area as well.
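As a minimal sketch of the kind of quantity the PC-controlled image processing computes, the snippet below derives the percentage of PCNA-positive nuclei and the mean nuclear area from a labelled nucleus mask and a positivity mask. The positivity rule, the pixel size and the arrays are hypothetical; the actual image-analysis pipeline of the study is not described in the abstract.

```python
import numpy as np

def pcna_statistics(nucleus_labels, positive_mask, um2_per_pixel=0.25):
    """nucleus_labels : int array, 0 = background, 1..N = nucleus IDs
    positive_mask     : bool array, True where PCNA staining is positive
    Returns (% positive nuclei, mean nuclear area in µm²)."""
    ids = np.unique(nucleus_labels)
    ids = ids[ids != 0]
    positive = 0
    areas = []
    for i in ids:
        pixels = nucleus_labels == i
        areas.append(pixels.sum() * um2_per_pixel)
        # hypothetical rule: a nucleus counts as positive if more than half
        # of its pixels are stained
        if positive_mask[pixels].mean() > 0.5:
            positive += 1
    return 100.0 * positive / len(ids), float(np.mean(areas))

# Tiny synthetic example: two nuclei, one of which is PCNA-positive
labels = np.array([[1, 1, 0, 2, 2],
                   [1, 1, 0, 2, 2]])
stain = np.array([[True, True, False, False, False],
                  [True, True, False, False, True]])
print(pcna_statistics(labels, stain))
```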
Abstract:
This thesis presents several techniques designed to drive a swarm of robots in an a-priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. Each theory, from its own point of view, exploits the emergent behaviour that comes from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol with the aim of keeping the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
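The PSO navigation layer relies on the standard particle update, in which each robot (particle) blends its current velocity with attractions toward its own best-known position and the swarm's best-known position. The sketch below shows this update on a toy goal-seeking cost function; the coefficients are generic textbook values, and the obstacle avoidance and Consensus-based formation control of the thesis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(p):                        # toy objective: distance to the goal area
    goal = np.array([8.0, 8.0])
    return np.linalg.norm(p - goal)

n, dim, w, c1, c2 = 10, 2, 0.7, 1.5, 1.5   # textbook PSO coefficients
x = rng.uniform(0, 2, (n, dim))            # robot positions
v = np.zeros((n, dim))
pbest = x.copy()                           # personal best positions
gbest = min(x, key=cost).copy()            # swarm best position

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    for i in range(n):                     # update personal/global bests
        if cost(x[i]) < cost(pbest[i]):
            pbest[i] = x[i]
        if cost(x[i]) < cost(gbest):
            gbest = x[i].copy()

print("swarm best position:", np.round(gbest, 2))
```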
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing in cooperation with another technique called Umbrella Sampling. Umbrella Sampling adds a bias to the potential energy of the system in order to force it to sample a specific region of configurational space; N independent simulations are performed in order to sample the whole region of interest. Subsequently, the WHAM algorithm is used to estimate the original (unbiased) energy of the system starting from the N atomic trajectories. The parallelization of WHAM has been carried out with CUDA, a framework that allows working on the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can significantly speed up the WHAM execution compared to previous serial CPU implementations; the WHAM CPU code, in fact, shows critical execution times when the number of interactions becomes very high. The algorithm has been written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, with an increase in performance when the model was executed on graphics cards of higher compute capability. Nonetheless, the GPU used to test the algorithm is quite old and not designed for scientific computation, and it is likely that a further performance increase would be obtained if the algorithm were executed on GPU clusters with high computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm, with their applications to the study of ion channels and to Molecular Docking (Chapter 1); then I present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
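The computation that the CUDA kernels accelerate is the self-consistent WHAM iteration, which alternates between the unbiased probability P(ξ) and the window free energies f_i. The serial NumPy sketch below solves these two standard equations on a 1D reaction coordinate with harmonic umbrella biases; the window centres, force constant and histograms are synthetic, so the sketch only illustrates the underlying equations, not the thesis code.

```python
import numpy as np

kT = 2.5                                   # kJ/mol, roughly room temperature
centers = np.linspace(0.0, 10.0, 11)       # umbrella window centres (synthetic)
k_spring = 10.0                            # harmonic bias force constant
xi = np.linspace(-1.0, 11.0, 240)          # reaction-coordinate bins

# Bias potential of each window at each bin: shape (windows, bins)
V = 0.5 * k_spring * (xi[None, :] - centers[:, None]) ** 2

# Synthetic biased histograms n_i(xi) and per-window sample counts N_i
true_pmf = 0.05 * (xi - 5.0) ** 2          # pretend underlying free-energy profile
n = np.exp(-(true_pmf[None, :] + V) / kT)
n *= 1000.0 / n.sum(axis=1, keepdims=True) # 1000 samples per window
N = n.sum(axis=1)

# WHAM self-consistent iteration for P(xi) and the window free energies f_i
f = np.zeros(len(centers))
for _ in range(2000):
    denom = np.sum(N[:, None] * np.exp((f[:, None] - V) / kT), axis=0)
    P = n.sum(axis=0) / denom              # unbiased probability per bin
    f_new = -kT * np.log(np.sum(P[None, :] * np.exp(-V / kT), axis=1))
    if np.max(np.abs(f_new - f)) < 1e-7:
        break
    f = f_new

pmf = -kT * np.log(P / P.max())            # free-energy profile up to a constant
print("PMF minimum at xi =", xi[np.argmin(pmf)])
```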
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation is governed by sizing the pipes in the water distribution network (WDN) and/or optimising specific parts of the network, such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), trying to solve and optimise a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), along with the total number of pump switches (TNps) during a day. For this purpose, a decision-support system generator for multi-objective optimisation was used: GANetXL, developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is needed. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which gave us the Pareto fronts of each configuration. The first experiment carried out concerned the Anytown network. It is a large network with a pumping station of four fixed-speed parallel pumps that boost the flow. The main intervention was to replace these pumps with new variable-speed driven pumps (VSDPs), by installing inverters capable of varying their speed during the day. In this way, large energy and cost savings were achieved, along with a reduction in the number of pump switches. The results of this part of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs and configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump. The optimisation problem was the same: minimising energy consumption and, in parallel, minimising TNps, using the same optimisation tool (GANetXL). The main scope was to carry out several different experiments over a wide variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. All these configurations produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
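Two of the ingredients mentioned above are easy to make concrete: counting the total number of pump switches (TNps) in a 24-hour on/off schedule, and the Pareto-dominance test that NSGA-II uses to compare solutions on the two objectives (energy, TNps). The schedules and energy figures below are invented for illustration and are unrelated to the Anytown or Cabrera results.

```python
def pump_switches(schedule):
    """Count on/off transitions in a 24-hour binary pump schedule."""
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)

def dominates(obj_a, obj_b):
    """True if solution A is no worse than B in every objective
    and strictly better in at least one (minimisation)."""
    return all(a <= b for a, b in zip(obj_a, obj_b)) and any(
        a < b for a, b in zip(obj_a, obj_b))

# Hypothetical 24-hour schedules (1 = pump on) and daily energy use in kWh
sched_a = [1] * 6 + [0] * 6 + [1] * 6 + [0] * 6      # 3 switches
sched_b = [1, 0] * 12                                # 23 switches
energy_a, energy_b = 950.0, 900.0                    # invented figures

obj_a = (energy_a, pump_switches(sched_a))
obj_b = (energy_b, pump_switches(sched_b))
print("A dominates B:", dominates(obj_a, obj_b))     # False: A uses more energy
print("B dominates A:", dominates(obj_b, obj_a))     # False: B switches more
# Neither dominates the other, so both would sit on the same Pareto front.
```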
Abstract:
The aim of this thesis is the characterization of an optical sensor for haematocrit reading and the development of the device's calibration algorithm. In other words, using data obtained from a suitably planned calibration session, the developed algorithm returns the interpolation curve of the data that characterizes the transducer. The main steps of the thesis work are summarized in the following points: 1) Planning of the calibration session needed for data collection and the consequent construction of a black-box model. Output: the signal coming from the optical sensor (reading expressed in mV). Input: the haematocrit value expressed in percentage points (this quantity represents the true value of the blood volume and was obtained with a blood centrifugation device). 2) Development of the algorithm. The algorithm, developed and used offline, returns the regression curve of the data. At a high level, the code can be divided into two main parts: 1- acquisition of the data coming from the sensor and of the operating state of the two-phase pump; 2- normalization of the acquired data with respect to the sensor's reference value and implementation of the regression algorithm. The data normalization step is a fundamental statistical tool for comparing quantities that are not uniform with one another. Moreover, existing studies demonstrate a morphological change of the red blood cell in response to mechanical stress. A further aspect addressed in this work concerns the blood flow velocity set by the pump and how this quantity can influence the haematocrit reading.
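A minimal sketch of the offline processing described above, normalization of the sensor readings against a reference value followed by a least-squares regression of haematocrit on the normalized signal, is given below. The readings, the reference value and the choice of a second-order polynomial are hypothetical; the actual calibration model is not specified in the abstract.

```python
import numpy as np

# Hypothetical calibration data: sensor output (mV) vs. reference haematocrit (%)
sensor_mv = np.array([512.0, 543.0, 577.0, 615.0, 658.0, 702.0])
hematocrit = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])   # from centrifugation
reference_mv = 500.0                                          # sensor reference value

# 1) Normalise the optical readings against the sensor reference
x = sensor_mv / reference_mv

# 2) Fit the calibration curve (here a 2nd-order polynomial, least squares)
coeffs = np.polyfit(x, hematocrit, deg=2)
calibration = np.poly1d(coeffs)

# Use the curve to convert a new normalised reading into haematocrit
new_reading = 590.0 / reference_mv
print(f"Estimated haematocrit: {calibration(new_reading):.1f} %")
```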