957 results for Deep Learning
Abstract:
Quantitative Susceptibility Mapping (QSM) is an advanced magnetic resonance technique that can quantify in vivo biomarkers of pathology, such as alterations in iron and myelin concentration. It allows for the comparison of magnetic susceptibility properties within and between different subject groups. In this thesis, the QSM acquisition and processing pipeline is discussed, together with clinical and methodological applications of QSM to neurodegeneration. In designing the studies, significant emphasis was placed on the reproducibility and interpretability of results. The first project focuses on the investigation of cortical regions in amyotrophic lateral sclerosis. By examining various histogram susceptibility properties, a pattern of increased iron content was revealed in patients with amyotrophic lateral sclerosis compared to controls and other neurodegenerative disorders. Moreover, susceptibility correlated with upper motor neuron impairment, particularly in patients experiencing rapid disease progression. Similarly, in the second application, QSM was used to examine cortical and sub-cortical areas in individuals with myotonic dystrophy type 1. The thalamus and brainstem were identified as structures of interest, with relevant correlations with clinical and laboratory data such as neurological evaluation and sleep records. In the third project, a robust pipeline for assessing the reliability of radiomic susceptibility-based features was implemented within a cohort of patients with multiple sclerosis and healthy controls. Lastly, a deep learning super-resolution model was applied to QSM images of healthy controls. The employed model demonstrated excellent generalization abilities and outperformed traditional up-sampling methods, without requiring customized re-training. Across the three disorders investigated, QSM proved capable of distinguishing between patient groups and healthy controls while establishing correlations between imaging measurements and clinical data. These studies lay the foundation for future research, with the ultimate goal of achieving earlier and less invasive diagnoses of neurodegenerative disorders within the context of personalized medicine.
Abstract:
Embedded systems are increasingly integral to daily life, improving the efficiency of modern Cyber-Physical Systems, which provide access to sensor data and actuators. As modern architectures become increasingly complex and heterogeneous, their optimization becomes a challenging task. Additionally, ensuring platform security is important to avoid harm to individuals and assets. This study primarily addresses challenges in contemporary Embedded Systems, focusing on platform optimization and security enforcement. The initial section of this study delves into the application of machine learning methods to efficiently determine the optimal number of cores for a parallel RISC-V cluster so as to minimize energy consumption, using static source code analysis. Results demonstrate that automated platform configuration is viable, albeit with a moderate performance trade-off when relying solely on static features. The second part addresses the problem of heterogeneous device mapping, which involves assigning tasks to the most suitable computational device in a heterogeneous platform for optimal runtime. The contribution of this section lies in the introduction of novel pre-processing techniques, along with a Siamese-network training framework, that enhance the classification performance of DeepLLVM, an advanced approach for task mapping. Importantly, the proposed approaches are independent of the specific deep-learning model used. Finally, this research work addresses issues concerning the binary exploitation of software running in modern Embedded Systems. It proposes an architecture to implement Control-Flow Integrity in embedded platforms with a Root-of-Trust, aiming to enhance security guarantees with limited hardware modifications. The approach involves extending the architecture of a modern RISC-V platform for autonomous vehicles with a side-channel communication mechanism that relays control-flow changes executed by the process running on the host core to the Root-of-Trust. This approach has limited impact on performance and is effective in enhancing the security of embedded platforms.
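As an illustrative sketch of how the first problem can be framed (not the thesis' actual pipeline), the snippet below treats core-count selection as supervised learning on static source-code features; the feature names, the tiny dataset and the classifier choice are hypothetical placeholders.

```python
# Illustrative sketch: predicting the energy-optimal core count of a parallel
# RISC-V cluster from static source-code features. Feature names and data are
# hypothetical placeholders, not the thesis' dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row: [instruction_count, loop_nest_depth, branch_density, memory_ops_ratio]
X = np.array([
    [1200, 2, 0.05, 0.30],
    [8000, 4, 0.12, 0.55],
    [450,  1, 0.02, 0.10],
    [9600, 5, 0.20, 0.60],
])
# Label: core count (e.g. 1 or 8) that minimized energy in profiling runs.
y = np.array([1, 8, 1, 8])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=2))  # toy split, for illustration only
```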
Abstract:
In highly urbanized coastal lowlands, effective site characterization is crucial for assessing seismic risk. It requires a comprehensive stratigraphic analysis of the shallow subsurface, coupled with a precise assessment of the geophysical properties of buried deposits. In this context, late Quaternary paleovalley systems, shallowly buried fluvial incisions formed during the Late Pleistocene sea-level fall and filled during the Holocene sea-level rise, are key to understanding seismic amplification because of their soft sediment infill and sharp lithologic contrasts. In this research, we conducted high-resolution stratigraphic analyses of two regions, the Pescara and Manfredonia areas along the Adriatic coastline of Italy, to delineate the geometries and facies architecture of two paleovalley systems. Furthermore, we carried out geophysical investigations to characterize the study areas and perform seismic response analyses. We tested the microtremor-based horizontal-to-vertical spectral ratio as a mapping tool to reconstruct the buried paleovalley geometries. We evaluated the relationship between geological and geophysical data and identified the stratigraphic surfaces responsible for the observed resonances. To perform the seismic response analysis of the Pescara paleovalley system, we integrated the stratigraphic framework with microtremor and shear wave velocity measurements. The seismic response analysis highlights strong seismic amplification in frequency ranges that can interact with a wide variety of building types. Additionally, we explored the applicability of artificial intelligence to facies analysis from borehole images. We used a robust dataset of high-resolution digital images from continuous sediment cores of Holocene age to outline a novel deep-learning-based approach for performing automatic semantic segmentation directly on core images, leveraging the power of convolutional neural networks. We propose an automated model to rapidly characterize sediment cores, reproducing the sedimentologist's interpretation and providing guidance for stratigraphic correlation and subsurface reconstructions.
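A minimal sketch of the microtremor horizontal-to-vertical spectral ratio (HVSR) computation referenced above; the windowing and smoothing choices here are simplified assumptions, not the processing chain used in the thesis.

```python
# Minimal HVSR sketch: ratio of the combined horizontal spectra to the vertical
# spectrum of a three-component microtremor recording. Windowing/smoothing are
# simplified assumptions, not the thesis' processing chain.
import numpy as np
from scipy.signal import welch

def hvsr(north, east, vertical, fs):
    """Return frequencies and H/V ratio from three-component ambient noise."""
    f, p_n = welch(north, fs=fs, nperseg=4096)
    _, p_e = welch(east, fs=fs, nperseg=4096)
    _, p_v = welch(vertical, fs=fs, nperseg=4096)
    h = np.sqrt((p_n + p_e) / 2.0)   # combined horizontal amplitude spectrum
    v = np.sqrt(p_v)                 # vertical amplitude spectrum
    return f, h / v

# The frequency of the main H/V peak is then compared with the borehole
# stratigraphy to identify the resonant buried surface.
```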
Abstract:
In recent years, natural language processing has undergone a rapid evolution, driven mainly by parallel advances in deep learning. With architectures growing exponentially in size and ever more comprehensive training corpora, neural models are now able to generate text that is indistinguishable from human-written text. However, accurate predictions on complex tasks are matched by evaluation metrics that frequently lag behind, unable to capture the semantic nuances or the evaluation dimensions required. This gap still motivates the adoption of human evaluation as the standard methodology, but the pervasiveness of text on the Web makes evident the need for automatic, scalable systems that are efficient in terms of both time and cost. This thesis proposes an analysis of the main state-of-the-art metrics for the evaluation of pre-trained models, starting from the most popular ones such as ROUGE and moving on to those that in turn exploit models to evaluate text. In addition, a new library, named Blanche, is introduced, aimed at collecting in a single environment the implementations of the main contributions available today, easing their use by developers and researchers. Finally, Blanche is applied to a broad-spectrum evaluation of the generative results obtained within a real case study, focused on the verbalization of biomedical events reported in the scientific literature. Particular attention is devoted to the handling of abstractiveness, an increasingly crucial and challenging aspect from the evaluation standpoint.
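As a hedged illustration of the kind of metric the thesis starts from, the snippet below computes ROUGE with the publicly available rouge-score package; it is not the Blanche API, whose interface is not described in the abstract, and the sentences are invented examples.

```python
# Illustrative ROUGE computation with the public `rouge-score` package.
# This is NOT the Blanche API; it only shows the style of n-gram overlap metric
# the thesis reviews before moving to model-based evaluators.
from rouge_score import rouge_scorer

reference = "the protein inhibits tumor growth in mice"
candidate = "tumor growth in mice is inhibited by the protein"

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)
for name, s in scores.items():
    print(name, round(s.fmeasure, 3))
```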
Abstract:
La malattia COVID-19 associata alla sindrome respiratoria acuta grave da coronavirus 2 (SARS-CoV-2) ha rappresentato una grave minaccia per la salute pubblica e l’economia globale sin dalla sua scoperta in Cina, nel dicembre del 2019. Gli studiosi hanno effettuato numerosi studi ed in particolar modo l’applicazione di modelli epidemiologici costruiti a partire dai dati raccolti, ha permesso la previsione di diversi scenari sullo sviluppo della malattia, nel breve-medio termine. Gli obiettivi di questa tesi ruotano attorno a tre aspetti: i dati disponibili sulla malattia COVID-19, i modelli matematici compartimentali, con particolare riguardo al modello SEIJDHR che include le vaccinazioni, e l’utilizzo di reti neurali ”physics-informed” (PINNs), un nuovo approccio basato sul deep learning che mette insieme i primi due aspetti. I tre aspetti sono stati dapprima approfonditi singolarmente nei primi tre capitoli di questo lavoro e si sono poi applicate le PINNs al modello SEIJDHR. Infine, nel quarto capitolo vengono riportati frammenti rilevanti dei codici Python utilizzati e i risultati numerici ottenuti. In particolare vengono mostrati i grafici sulle previsioni nel breve-medio termine, ottenuti dando in input dati sul numero di positivi, ospedalizzati e deceduti giornalieri prima riguardanti la città di New York e poi l’Italia. Inoltre, nell’indagine della parte predittiva riguardante i dati italiani, si è individuato un punto critico legato alla funzione che modella la percentuale di ricoveri; sono stati quindi eseguiti numerosi esperimenti per il controllo di tali previsioni.
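A minimal PyTorch sketch of the physics-informed loss idea described above, written for a simplified SIR-style system because the full SEIJDHR equations are not given in the abstract; the network size, the parameters beta and gamma, and the data tensors are placeholders.

```python
# Minimal PINN sketch: a network maps time t to compartment values, and the loss
# combines a data-fit term with the residual of the model ODEs. A simplified SIR
# system stands in for the full SEIJDHR model, which is not reported in the
# abstract; beta, gamma and the data are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 3))  # outputs S, I, R

def pinn_loss(t_data, y_data, t_colloc, beta=0.3, gamma=0.1):
    # Data-fit term on observed time points.
    data_loss = ((net(t_data) - y_data) ** 2).mean()

    # ODE residual term on collocation points.
    t = t_colloc.clone().requires_grad_(True)
    y = net(t)
    S, I, R = y[:, 0:1], y[:, 1:2], y[:, 2:3]
    dS = torch.autograd.grad(S.sum(), t, create_graph=True)[0]
    dI = torch.autograd.grad(I.sum(), t, create_graph=True)[0]
    dR = torch.autograd.grad(R.sum(), t, create_graph=True)[0]
    residual = ((dS + beta * S * I) ** 2
                + (dI - beta * S * I + gamma * I) ** 2
                + (dR - gamma * I) ** 2).mean()
    return data_loss + residual
```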
Abstract:
In recent years, we have witnessed great changes in the industrial environment as a result of the innovations introduced by Industry 4.0, especially in the integration of the Internet of Things, Automation and Robotics in the manufacturing field. The project presented in this thesis lies within this innovation context and describes the implementation of an Image Recognition application focused on the automotive field. The project aims at helping the supply chain operator to perform an effective and efficient check of the homologation tags present on vehicles. The user's contribution consists in taking a picture of the tag; the application then, exploiting Amazon Web Services, automatically returns the result of the check, covering the correctness of the tag, its correct positioning within the vehicle, and the presence of faults or defects on the tag. To implement this application we combined two IoT platforms widely used in the industrial field: Amazon Web Services (AWS) and ThingWorx. AWS exploits Convolutional Neural Networks to perform Text Detection and Image Recognition, while PTC ThingWorx manages the user interface and the data manipulation.
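A hedged sketch of the AWS text-detection call this kind of application relies on, using the standard boto3 Rekognition client; the ThingWorx integration and the business rules for tag correctness and positioning are omitted, and the image path is a placeholder.

```python
# Sketch of the AWS side of the tag check: text detection on a tag photo via
# the Rekognition API (boto3). ThingWorx integration and the tag-correctness
# rules are omitted; the file path is a placeholder.
import boto3

def detect_tag_text(image_path):
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        response = client.detect_text(Image={"Bytes": f.read()})
    # Keep only whole lines of text detected on the homologation tag.
    return [d["DetectedText"] for d in response["TextDetections"]
            if d["Type"] == "LINE"]

# The detected lines would then be compared against the expected tag content.
```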
Abstract:
Deep Learning has radically transformed the world of Machine Learning, improving the state of the art in fields ranging from computer vision to natural language processing. Not stopping at classification problems, in recent years generative applications have led to the creation of realistic images and literary documents. The world of music has not been exempt from a multitude of experiments in the same field, with results that are still immature but nonetheless potentially interesting. This thesis discusses the application of a model belonging to the Deep Learning family to the generation of symbolic music.
Abstract:
Correctness of information gathered in production environments is an essential part of quality assurance processes in many industries; this task is often performed by human operators who visually take annotations at various steps of the production flow. Depending on the task performed, the correlation between where exactly the information is gathered and what it represents is more often than not lost in the process. The lack of labeled data places a major limit on the application of deep neural networks to object detection tasks; moreover, supervised training of deep models requires a large amount of data to be available. Reaching an adequately large collection of labeled images through classic data annotation techniques is an exhausting and costly task, not always suitable for every scenario. A possible solution is to generate synthetic data that replicates the real data and use it to fine-tune a deep neural network trained on one or more source domains to a different target domain. The purpose of this thesis is to show a real case scenario where the provided data were both scarce and missing the required annotations. Subsequently, a possible approach is presented in which synthetic data has been generated to address those issues while serving as a training base for deep neural networks for object detection, capable of working on images taken in production-like environments. Lastly, it compares performance on different types of synthetic data and on the convolutional neural networks used as backbones for the model.
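A minimal sketch of the fine-tuning step described above, using a torchvision Faster R-CNN as a stand-in backbone-plus-detector; the class count and the synthetic data loader are assumptions, not the thesis' actual setup.

```python
# Sketch of fine-tuning a pre-trained detector on synthetic images before moving
# to the production-like target domain. The torchvision Faster R-CNN is a
# stand-in model and `synthetic_loader` is a hypothetical data loader.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 3  # assumed: background + two object categories
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
# for images, targets in synthetic_loader:   # loader over generated data
#     losses = model(images, targets)        # dict of detection losses
#     loss = sum(losses.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```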
Abstract:
State-of-the-art NLP systems are generally based on the assumption that the underlying models are provided with vast datasets to train on. However, especially when working in multi-lingual contexts, datasets are often scarce, so more research should be carried out in this field. This thesis investigates the benefits of introducing an additional training step when fine-tuning NLP models, named Intermediate Training, which could be exploited to augment the data used for the training phase. The Intermediate Training step is applied by training models on NLP tasks that are not strictly related to the target task, aiming to verify whether the models are able to leverage the knowledge learned from such tasks. Furthermore, in order to better analyze the synergies between different categories of NLP tasks, the experiments have also been extended to Multi-Task Training, in which the model is trained on multiple tasks at the same time.
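A schematic sketch of the Intermediate Training idea with Hugging Face transformers: fine-tune first on an auxiliary task, then continue from that checkpoint on the target task. The model name, the hyperparameters and the dataset objects are placeholders, not those used in the thesis.

```python
# Schematic Intermediate Training sketch: stage 1 fine-tunes on an auxiliary
# (intermediate) task, stage 2 continues from that checkpoint on the target
# task. Both tasks are assumed to be binary classification for simplicity.
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

def fine_tune(model, train_dataset, output_dir):
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=train_dataset).train()
    return model

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

# The tokenized dataset objects below are placeholders to be supplied by the user:
# model = fine_tune(model, intermediate_task_dataset, "out/intermediate")  # stage 1
# model = fine_tune(model, target_task_dataset, "out/target")              # stage 2
```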
Abstract:
Activation functions within neural networks play a crucial role in Deep Learning since they make it possible to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks on a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint of quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts to realize quantum activation functions have been made in the literature. Recently, the idea of QSplines has been proposed to approximate a non-linear activation function by implementing the quantum version of spline functions. Yet, QSplines suffer from various drawbacks. Firstly, the final function estimation requires a post-processing step; thus, the value of the activation function is not available directly as a quantum state. Secondly, QSplines need many error-corrected qubits and very long quantum circuits to be executed. These constraints prevent the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, several methods for Variational Quantum Splines are proposed and implemented, to pave the way for the development of complete quantum activation functions and unlock the full potential of quantum neural networks in the field of quantum machine learning.
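A toy sketch of the hybrid quantum-classical idea: a variational circuit whose parameters are optimized by a classical loop to approximate a non-linear activation shape. It is written in PennyLane purely as an illustration and is not the thesis' Variational Quantum Spline construction; the circuit ansatz and the target function are arbitrary choices.

```python
# Toy hybrid sketch: a single-qubit variational circuit trained classically to
# approximate a non-linear activation shape. Illustration only; NOT the thesis'
# Variational Quantum Spline construction.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(weights, x):
    qml.RY(x, wires=0)           # angle-encode the input
    qml.RY(weights[0], wires=0)  # trainable rotations
    qml.RZ(weights[1], wires=0)
    qml.RY(weights[2], wires=0)
    return qml.expval(qml.PauliZ(0))

xs = np.linspace(-np.pi, np.pi, 25)
ys = np.tanh(xs)                 # target non-linear shape (toy choice)

def cost(weights):
    loss = 0.0
    for x, y in zip(xs, ys):
        loss = loss + (circuit(weights, x) - y) ** 2
    return loss / len(xs)

opt = qml.GradientDescentOptimizer(stepsize=0.2)
weights = np.array([0.1, 0.1, 0.1], requires_grad=True)
for _ in range(200):
    weights = opt.step(cost, weights)
```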
Abstract:
Artificial intelligence is undoubtedly one of the most popular topics in computer science today, constantly evolving and expanding into new fields. This project work combines the aforementioned topic with the world of social networks, which by now are an integral part of everyone's daily life. The current state of the art of neural networks is analyzed, in particular generative adversarial networks, and the main types of social networks are examined. On this basis, a complete social-network system is built in which a GAN plays the leading role, exploiting the most interesting technologies currently available. The system will be available both as a mobile application and as a website, and will introduce gamification elements to increase user interaction.
Abstract:
Photoplethysmography (PPG) sensors allow for noninvasive and comfortable heart-rate (HR) monitoring, suitable for compact wearable devices. However, PPG signals collected from such devices often suffer from corruption caused by motion artifacts. This is typically addressed by combining the PPG signal with acceleration measurements from an inertial sensor. Recently, different energy-efficient deep learning approaches for heart rate estimation have been proposed. To test these new solutions, in this work we developed a highly wearable platform (42 mm x 48 mm x 1.2 mm) for PPG signal acquisition and processing, based on GAP9, a parallel ultra-low-power system-on-chip featuring a nine-core RISC-V compute cluster with a neural network accelerator and a single-core RISC-V controller. The hardware platform also integrates a complete commercial Optical Biosensing Module and an ARM Cortex-M4 microcontroller unit (MCU) with Bluetooth Low Energy connectivity. To demonstrate the capabilities of the system, a deep learning-based approach for PPG-based HR estimation has been deployed. Thanks to the reduced power consumption of the digital computational platform, the total power budget is just 2.67 mW, providing up to 5 days of operation on a 105 mAh battery.
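A back-of-the-envelope check of the quoted battery-life figure; the 2.67 mW budget and 105 mAh capacity come from the abstract, while the cell voltage and conversion efficiency are assumptions.

```python
# Back-of-the-envelope check of the quoted battery life. Cell voltage (3.7 V)
# and converter efficiency (~85%) are assumptions not stated in the abstract.
battery_mAh = 105
cell_voltage = 3.7      # assumed nominal Li-Po voltage
efficiency = 0.85       # assumed power-conversion efficiency
power_mW = 2.67

energy_mWh = battery_mAh * cell_voltage * efficiency   # ~330 mWh usable
hours = energy_mWh / power_mW                           # ~124 h
print(f"{hours:.0f} h ≈ {hours / 24:.1f} days")         # ≈ 5 days, matching the abstract
```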
Abstract:
The application of Artificial Intelligence (AI) algorithms to medical imaging could bring numerous improvements to the quality of patient care. However, before it can be exploited, some limitations must still be overcome, related to the need for large quantities of images acquired on real patients, which are required to train the algorithms themselves. The main limitation is posed by the regulations protecting the privacy of sensitive data, which include medical images. The generation of large datasets of synthetic images, obtained with Deep Learning (DL) algorithms, appears to be the solution to these problems.
Abstract:
The amplitude of motor evoked potentials (MEPs) elicited by transcranial magnetic stimulation (TMS) of the primary motor cortex (M1) shows a large variability from trial to trial, although MEPs are evoked by the same repeated stimulus. A multitude of factors is believed to influence MEP amplitudes, such as the cortical, spinal and motor excitability state. The goal of this work is to explore to what degree the variation in MEP amplitudes can be explained by the cortical state right before the stimulation. Specifically, we analyzed a dataset acquired on eleven healthy subjects comprising, for each subject, 840 single TMS pulses applied to the left M1 during acquisition of electroencephalography (EEG) and electromyography (EMG). An interpretable convolutional neural network, named SincEEGNet, was used to discriminate between low- and high-corticospinal-excitability trials, defined according to the MEP amplitude, using the pre-TMS EEG as input. This data-driven approach made it possible to consider multiple brain locations and frequency bands without any a priori selection. Post-hoc interpretation techniques were adopted to identify the EEG features most relevant to the classification. Results show that individualized classifiers successfully discriminated between low and high M1 excitability states in all participants. Outcomes of the interpretation methods suggest the importance of the electrodes situated over the TMS stimulation site, as well as the relevance of the temporal samples of the input EEG closest to the stimulation time. This novel decoding method allows causal investigation of the cortical excitability state, which may be relevant for personalizing and increasing the efficacy of therapeutic brain-state-dependent brain stimulation (for example in patients affected by Parkinson's disease).
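A small sketch of one common way to derive the binary excitability labels mentioned above, a subject-wise median split of MEP amplitudes; the abstract only says the labels are defined according to the MEP amplitude, so this exact criterion is an assumption.

```python
# Sketch of a subject-wise median split of MEP amplitudes into low/high
# corticospinal-excitability labels. The exact criterion used in the thesis is
# not stated in the abstract; the median split here is an assumption.
import numpy as np

def excitability_labels(mep_amplitudes):
    """1 = high-excitability trial, 0 = low, split at the subject's median MEP."""
    mep_amplitudes = np.asarray(mep_amplitudes, dtype=float)
    return (mep_amplitudes > np.median(mep_amplitudes)).astype(int)

# Example on a handful of hypothetical peak-to-peak amplitudes (mV).
print(excitability_labels([0.2, 1.5, 0.4, 2.1, 0.9]))  # -> [0 1 0 1 0]
```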
Abstract:
Cancer is a self-sufficient and adaptive process that interacts dynamically with its microenvironment; its diagnosis, which is complex and costly in terms of time and of the number of specialists involved, is usually carried out by evaluating radiographic imaging or by performing a histological examination. The interpretation of such images is generally very complex; for this purpose it would be very useful to train a computer to understand these images, so that it could effectively assist the specialist, without replacing them, at the time of diagnosis. To this end, machine learning techniques, the foundation of artificial intelligence (AI), can be used, which make it possible to automatically learn feature representations from sample images. These artificial intelligence techniques, however, need large quantities of data in which the desired output signal is known in order to be trained, which in turn increases training times. Moreover, in the healthcare domain, data are distributed across multiple archives located throughout the national territory, making centralized solutions impossible to use. The goal of this work is to find a solution to these two problems by resorting to parallelization techniques. After an introduction to the biological scenario and the associated diagnostic techniques, the process of creating the neural network is presented. After training it on the GPU of a single machine, obtaining an accuracy of 83.94% in 5 hours, 48 minutes and 43 seconds, parallelization and one of its implementations were introduced. In conclusion, by exploiting the implemented system, the training phase was distributed first over two machines and then over three, obtaining a reduction in training time of 31.4% and 50% respectively.
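A quick check of the wall-clock times implied by the reported reductions; the single-machine baseline and the percentages come from the abstract, while the derived times are simple arithmetic.

```python
# Wall-clock times implied by the reported reductions: baseline 5 h 48 min 43 s,
# then -31.4% on two machines and -50% on three (figures from the abstract).
baseline_s = 5 * 3600 + 48 * 60 + 43          # 20923 s on a single GPU

for machines, reduction in [(2, 0.314), (3, 0.50)]:
    t = baseline_s * (1 - reduction)
    h, rem = divmod(int(t), 3600)
    m, s = divmod(rem, 60)
    print(f"{machines} machines: ~{h} h {m:02d} min {s:02d} s")
```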