922 results for 2D Convolutional Codes
Abstract:
The aim of this thesis is to analyze the various technologies made available by mobile devices, in particular smartphones and tablets. The main innovations these devices have brought to the field of video games are analyzed, and a case study on the development of a cross-platform 2D shooter game is presented.
Abstract:
Breast cancer ranks first in mortality among the tumor pathologies affecting the female population worldwide. Several clinical studies have shown that the radiologist's diagnosis can be aided and improved by Computer Aided Detection (CAD) systems. Because of the great variability in the shape and size of tumor masses, and their similarity to the tissue that hosts them, their automated detection is an extremely difficult problem. A CAD system generally consists of two classification stages: detection, responsible for locating the suspicious regions of the mammogram (ROIs) and thus for the preliminary elimination of non-risk areas; and the actual classification of the ROIs into masses and healthy tissue. The main purpose of this thesis is the study of new detection methodologies that can improve on the performance obtained with traditional techniques. Detection is treated as a supervised learning problem and is tackled with Convolutional Neural Networks (CNNs), an algorithm belonging to deep learning, a new branch of machine learning. CNNs are inspired by the discoveries of Hubel and Wiesel concerning two basic types of cells identified in the cat visual cortex: simple cells (S), which respond to edge-like stimuli, and complex cells (C), which are locally invariant to the exact position of the stimulus. By analogy with the visual cortex, CNNs use a deep architecture whose layers alternately perform convolution and subsampling operations on the images. CNNs, which take two-dimensional input, are commonly used for classification and automatic recognition of images such as objects, faces and logos, or for document analysis.
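As an illustration of the alternating convolution/subsampling architecture described above, the following is a minimal sketch in PyTorch of a small CNN that classifies mammogram ROI patches into mass vs. healthy tissue; the patch size (64x64), channel counts and layer depths are illustrative assumptions, not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

# Minimal CNN sketch: layers alternate convolution (feature extraction,
# analogous to simple S cells) and subsampling/pooling (local translation
# invariance, analogous to complex C cells). Sizes are assumptions.
class RoiClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # subsampling: 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # subsampling: 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)      # mass vs. healthy tissue

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example: a batch of four 64x64 grayscale ROI patches.
logits = RoiClassifier()(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```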
Abstract:
This thesis describes the development of new models and toolkits for orbit determination codes to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed observables and computed observables. Disturbances in either the observed or the computed observables degrade the orbit determination process. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. Numerical noise proved to be, in general, an important source of noise in the orbit determination process and, under some conditions, it may become the dominant noise source. Three different approaches to reduce the numerical noise were proposed. Chapter 3 describes the development of the multiarc library, which makes it possible to perform a multi-arc orbit determination with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments at Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the full unconstrained quadrupole gravity field was estimated, yielding a solution that is statistically not compatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable. The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
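A rough back-of-the-envelope sketch (not the noise model developed in the thesis) of why finite-precision arithmetic matters for Doppler observables: a two-way Doppler observable is essentially a differenced range over a count interval, so float64 round-off on ranges of the order of 10 AU translates directly into a velocity error. The range (~10 AU) and count time (60 s) below are assumptions chosen only to set the orders of magnitude.

```python
import numpy as np

# Order of magnitude of the numerical noise introduced by float64 round-off
# when a Doppler observable is formed as a differenced range over a count
# interval. Assumed values: spacecraft at ~10 AU, 60 s Doppler count time.
rho = 10 * 1.496e11          # one-way range [m], ~10 AU (assumption)
count_time = 60.0            # Doppler count interval [s] (assumption)

eps = np.finfo(np.float64).eps          # relative round-off, ~2.2e-16
range_error = eps * rho                 # absolute round-off on each range [m]
# Differencing two ranges roughly doubles the worst-case round-off,
# and dividing by the count time converts it to a velocity error.
velocity_error = 2 * range_error / count_time

print(f"range round-off   ~ {range_error:.1e} m")
print(f"Doppler round-off ~ {velocity_error:.1e} m/s")
# ~1e-5 m/s, i.e. of the same order as the micrometers-per-second accuracy
# of deep-space radio tracking, so numerical noise cannot be neglected.
```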
Abstract:
The space environment has always been one of the most challenging for communications, both at the physical and at the network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays and relatively frequent losses. Because of these problems, the standard TCP/IP suite protocols are hardly applicable. Moreover, in space scenarios reliability is fundamental: it is usually not tolerable to lose important information, or to receive it with a very large delay, because of a challenging transmission channel. In terrestrial protocols, such as TCP, reliability is obtained by means of an ARQ (Automatic Repeat reQuest) mechanism, which, however, does not perform well when there are long delays on the transmission channel. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to ensure reliability. On binary channels, where single bits are flipped by channel noise, redundancy bits can be exploited to recover the original information. On binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information. FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. A protocol created to add reliability to DTN networks is the Licklider Transmission Protocol (LTP), designed to achieve better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
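As a minimal illustration of packet-level erasure coding (not the specific codes studied in the thesis), the sketch below shows the simplest possible EC: one XOR parity packet over k data packets, which lets the receiver recover any single lost packet without a retransmission round trip.

```python
from functools import reduce

def xor_packets(packets):
    """Byte-wise XOR of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

# Sender: k data packets plus one parity packet (the simplest erasure code).
data = [b"PKT0....", b"PKT1....", b"PKT2....", b"PKT3...."]
parity = xor_packets(data)

# Channel: a packet-level erasure channel; packet 2 is lost.
received = {0: data[0], 1: data[1], 3: data[3], "parity": parity}

# Receiver: XOR of the parity with the surviving data packets restores the
# missing one, with no retransmission (and hence no extra round-trip delay).
recovered = xor_packets([received["parity"], received[0], received[1], received[3]])
assert recovered == data[2]
print("recovered:", recovered)
```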
Abstract:
The origin and evolution of the genetic code, which translates the nucleotide sequence of mRNA into the amino acid sequence of proteins, are among the greatest riddles of biology. The first organisms, which appeared on Earth about 3.8 billion years ago, used a primordial genetic code that presumably comprised only abiotically available amino acids of terrestrial or extraterrestrial origin. New amino acids were successively biosynthesized and selectively incorporated into the code, which in its modern form consists of up to 22 amino acids. The reasons for their selection and the chronology of their incorporation are still unknown and were to be investigated in the present work. Based on quantum-chemical calculations, this work first demonstrated a relationship between the HOMO-LUMO energy gap (H-L gap), an inverse quantum-chemical correlate of general chemical reactivity, and the chronological incorporation of amino acids into the genetic code. Accordingly, early amino acids are characterized by large H-L gaps and late amino acids by small H-L gaps. An analysis of the metabolism of tyrosine and tryptophan, the two youngest standard amino acids, revealed their importance as precursors of structures that exhibit high redox activity and whose synthesis at the same time requires molecular oxygen. For this reason, the redox activities of the 20 standard amino acids towards peroxyl radicals and other radicals were tested. The investigations revealed a correlation between the evolutionary appearance and the chemical reactivity of each amino acid, reflected in particular in the efficient reaction of tryptophan and tyrosine with peroxyl radicals. This indicated a potential role of reactive oxygen species (ROS) in the constitution of the genetic code. Significant amounts of ROS were only formed at the onset of the oxygenation of the geobiosphere, known as the Great Oxidation Event (GOE), which began about 2.3 billion years ago, and must have led to oxidative damage of vulnerable cellular structures. For this reason, the antioxidative potential of amino acids in the process of lipid peroxidation was investigated. It was shown that lipophilic derivatives of tryptophan and tyrosine are able to prevent the peroxidation of rat brain membranes and to protect human fibroblasts from oxidative cell death. This gave rise to the postulate, put forward in this work, of a selective advantage during the GOE for primordial organisms that could incorporate tryptophan and tyrosine as redox-active amino acids into membrane proteins and were thus protected from oxidation processes. Accordingly, biochemical reactivity was identified as a selection parameter, and oxidative stress as a shaping factor, in the evolution of the genetic code.
Abstract:
Many applications rely on relaxometry and nuclear magnetic resonance (NMR) techniques. These applications give rise to problems of inversion of the discrete Laplace transform, a notoriously ill-posed problem. UPEN (Uniform Penalty) is a numerical regularization method for solving problems of this kind. UPEN reformulates the inversion of the Laplace transform as a constrained minimization problem in which the objective function contains the data fit and a local penalty term that varies depending on the solution itself. Modern NMR spectroscopy studies the multidimensional correlations of the longitudinal and transverse relaxation parameters. To address the problems arising from the analysis of multi-component samples, it has become necessary to extend the algorithms that implement the inverse Laplace transform in one dimension to the two-dimensional case. This thesis proposes a possible extension of the UPEN algorithm from the one-dimensional to the two-dimensional case and provides a numerical analysis of this extension on simulated and real data.
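To make the ill-posedness concrete, the following is a small sketch (with a plain Tikhonov penalty, not UPEN's locally adaptive one) of inverting a one-dimensional discrete Laplace transform: the relaxation signal is modeled as s_i = sum_j exp(-t_i / T_j) f_j, and a naive unregularized least-squares solution is compared with a regularized one. All sizes and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete Laplace kernel: s_i = sum_j exp(-t_i / T_j) f_j  (relaxation data).
t = np.linspace(1e-3, 3.0, 200)            # acquisition times [s] (assumption)
T = np.logspace(-3, 1, 100)                # relaxation-time grid [s] (assumption)
K = np.exp(-t[:, None] / T[None, :])

# Synthetic distribution with two relaxation components, plus noise.
f_true = np.exp(-0.5 * ((np.log10(T) + 1.0) / 0.1) ** 2) \
       + 0.5 * np.exp(-0.5 * ((np.log10(T) - 0.3) / 0.1) ** 2)
s = K @ f_true + 1e-3 * rng.standard_normal(t.size)

# Naive least squares: the tiny singular values of K amplify the noise.
f_naive = np.linalg.lstsq(K, s, rcond=None)[0]

# Tikhonov regularization (uniform global penalty, unlike UPEN's local one):
# minimize ||K f - s||^2 + lam * ||f||^2 via an augmented least-squares system.
lam = 1e-2
A = np.vstack([K, np.sqrt(lam) * np.eye(T.size)])
b = np.concatenate([s, np.zeros(T.size)])
f_reg = np.linalg.lstsq(A, b, rcond=None)[0]

print("naive solution range:      ", f_naive.min(), f_naive.max())
print("regularized solution range:", f_reg.min(), f_reg.max())
```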
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm: a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
Abstract:
Polar codes are the first class of error-correcting codes proved to achieve capacity for every symmetric, discrete, memoryless channel, thanks to a recently introduced method called "channel polarization". This thesis describes the main encoding and decoding algorithms in detail. In particular, the performance of the simulators developed for the Successive Cancellation Decoder and for the Successive Cancellation List Decoder is compared with the results reported in the literature. To improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as the inner code and a CRC as the outer code. We also propose a new technique to analyze channel polarization in the case of transmission over the AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
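Channel polarization is easiest to see on the binary erasure channel (the thesis works on the AWGN channel, so the BEC recursion below is only an illustrative sketch): starting from copies of a BEC with erasure probability eps, one polarization step produces a worse channel with erasure probability 2*eps - eps^2 and a better one with eps^2; iterating drives the synthesized channels toward either nearly perfect or nearly useless, and the K most reliable positions are chosen as the information set.

```python
import numpy as np

def bec_polarization(eps, n):
    """Erasure probabilities of the 2**n channels synthesized from a BEC(eps)."""
    z = np.array([eps])
    for _ in range(n):
        # One polarization step: each channel splits into a worse and a better one.
        # (Index ordering here is not the standard bit-reversed one; only the
        # multiset of values matters for this illustration.)
        z = np.concatenate([2 * z - z**2, z**2])
    return z

# Example: N = 256 synthesized channels from a BEC with erasure probability 0.5.
z = bec_polarization(0.5, 8)
print("fraction nearly perfect (Z < 1e-3):  ", np.mean(z < 1e-3))
print("fraction nearly useless (Z > 1-1e-3):", np.mean(z > 1 - 1e-3))

# Polar code construction: the K most reliable positions carry information,
# the remaining positions are frozen.
K = 128
info_set = np.argsort(z)[:K]
print("information set size:", info_set.size)
```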
Abstract:
The purpose of this study was to evaluate whether measurements on conventional cephalometric radiographs are comparable with 3D measurements on 3D models of human skulls derived from cone beam CT (CBCT) data. A CBCT scan and a conventional cephalometric radiograph were made of 40 dry skulls. Standard cephalometric software was used to identify landmarks on both the 2D images and the 3D models. The same operator identified 17 landmarks on the cephalometric radiographs and on the 3D models. All images and 3D models were traced five times with a time interval of 1 week, and the mean value of the repeated measurements was used for further statistical analysis. Distances and angles were calculated. Intra-observer reliability was good for all measurements. The reproducibility of the measurements on the conventional cephalometric radiographs was higher than that of the measurements on the 3D models. For a few measurements, a clinically relevant difference between measurements on conventional cephalometric radiographs and on 3D models was found. Measurements on conventional cephalometric radiographs can differ significantly from measurements on 3D models of the same skull. The authors recommend that 3D tracings not be used for longitudinal research in cases where only 2D records from the past are available.
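As a small illustration of how the linear and angular measurements can be derived once landmarks are available as coordinates, the sketch below computes one distance and one angle from three hypothetical 3D landmark positions; the landmark names and coordinate values are invented for the example and are not taken from the study.

```python
import numpy as np

# Hypothetical 3D landmark coordinates in mm (invented for illustration).
nasion = np.array([0.0, 85.0, 30.0])
sella = np.array([0.0, 60.0, 25.0])
a_point = np.array([0.0, 88.0, -15.0])

# Linear measurement: Euclidean distance between two landmarks.
sn_distance = np.linalg.norm(nasion - sella)

# Angular measurement: angle at nasion formed by sella and A-point
# (analogous to a cephalometric SNA angle, here purely illustrative).
v1 = sella - nasion
v2 = a_point - nasion
cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(f"S-N distance: {sn_distance:.1f} mm")
print(f"angle at nasion: {angle_deg:.1f} degrees")
```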