797 results for Machine Learning Algorithms


Relevance:

90.00%

Publisher:

Abstract:

Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state-of-the-art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and their architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features mostly remain unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on more complex, slower-to-train and less interpretable networks than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses the previous limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal and frequency domains, and proved better than canonical EEG analyses at highlighting and enhancing relevant neural features related to P300 and motor states. Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
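
To make the "light and compact CNN" idea concrete, here is a minimal sketch of a temporal-plus-spatial convolution decoder in PyTorch; the channel count, sample length and layer sizes are illustrative assumptions, not the architectures designed in the thesis.

```python
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    """Minimal temporal + spatial convolution block in the spirit of light EEG decoders."""
    def __init__(self, n_channels=64, n_samples=512, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 65), padding=(0, 32), bias=False),   # temporal filters
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False), # spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(16 * (n_samples // 8), n_classes)

    def forward(self, x):                      # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = CompactEEGNet()
logits = model(torch.randn(4, 1, 64, 512))     # -> (4, 2) class scores
```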

Relevance:

90.00%

Publisher:

Abstract:

One of the most visionary goals of Artificial Intelligence is to create a system able to mimic and eventually surpass the intelligence observed in biological systems, including, ambitiously, that observed in humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing from their experiences. This ability, found to varying degrees in all intelligent biological beings, allows them to adapt and properly react to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone towards the creation of intelligent artificial agents. Modern Deep Learning approaches have allowed researchers and industry to make great advances towards solving many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while this current age of renewed interest in AI has allowed for the creation of extremely useful applications, a concerningly limited effort is being directed towards the design of systems able to learn continuously. The biggest problem that hinders an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, discovered in the 90s, naturally occurs when Deep Learning architectures trained with classic learning paradigms learn incrementally from a stream of experiences. This dissertation revolves around the Continual Learning field, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. This work focuses on a comprehensive view of continual learning by considering algorithmic, benchmarking, and applicative aspects of the field. The dissertation also touches on community aspects, such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects of public competitions in this field.
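
A minimal sketch of one classic countermeasure to catastrophic forgetting, experience replay (rehearsal), is shown below; the reservoir buffer and mixing scheme are illustrative assumptions written in Python/PyTorch, not a method proposed in the dissertation.

```python
import random
import torch

class ReplayBuffer:
    """Reservoir-style memory of past examples, replayed alongside new data."""
    def __init__(self, capacity=200):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:                                   # reservoir sampling keeps a uniform sample
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.data[i] = (x, y)

    def sample(self, k):
        xs, ys = zip(*random.sample(self.data, min(k, len(self.data))))
        return torch.stack(xs), torch.stack(ys)

# During training on a new experience, mix the current mini-batch with replayed memories:
# loss = criterion(model(torch.cat([x_new, x_mem])), torch.cat([y_new, y_mem]))
```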

Relevance:

90.00%

Publisher:

Abstract:

Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited by the existing challenges of applying these methods to neuroimaging data. In this study, first, a type of data leakage caused by a slice-level data split, introduced during the training and validation of a 2D CNN, is surveyed and a quantitative assessment of the resulting overestimation of the model's performance is presented. Second, an interpretable, leakage-free deep learning software package, written in Python with a wide range of options, has been developed to conduct both classification and regression analyses. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data, where the cognitive performance of 58 patients, measured by five neuropsychological tests, is predicted using a multi-input CNN model taking brain images and demographic data as input. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to be the effect of white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system is developed, aimed at 1) classifying Alzheimer's disease patients and healthy subjects, 2) examining the neural correlates of the disease underlying the cognitive decline in AD patients using CNN visualization tools, and 3) highlighting the potential of interpretability techniques to capture a biased deep learning model. Structural magnetic resonance imaging (MRI) data of 200 subjects were used by the proposed CNN model, which was trained using a transfer learning-based approach and produced a balanced accuracy of 71.6%. Brain regions in the frontal and parietal lobes showing cerebral cortex atrophy were highlighted by the visualization tools.
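
The slice-level leakage mentioned above arises when 2D slices from the same subject land in both the training and the validation set. Below is a minimal sketch of a subject-level (group-aware) split that avoids it, assuming scikit-learn and hypothetical arrays of slices, labels and subject IDs.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# X: one row per 2D slice, y: labels, subject_ids: which patient each slice comes from
X = np.random.rand(1000, 4096)
y = np.random.randint(0, 2, size=1000)
subject_ids = np.repeat(np.arange(50), 20)      # 50 subjects, 20 slices each (illustrative)

# Group-aware split: all slices of a subject fall on the same side of the split, so
# validation performance is not inflated by having seen other slices of the same brain.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=subject_ids))
assert set(subject_ids[train_idx]).isdisjoint(subject_ids[val_idx])
```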

Relevance:

90.00%

Publisher:

Abstract:

The fourth industrial revolution is paving the way for Industrial Internet of Things applications where industrial assets (e.g., robotic arms, valves, pistons) are equipped with a large number of wireless devices (i.e., microcontroller boards that embed sensors and actuators) to enable a plethora of new applications, such as analytics, diagnostics, monitoring, as well as supervisory and safety control use cases. Nevertheless, current wireless technologies, such as Wi-Fi, Bluetooth, and even private 5G networks, cannot fulfill all the requirements set by the Industry 4.0 paradigm, thus opening up new 6G-oriented research trends, such as the use of THz frequencies. In light of the above, this thesis (i) provides a broad overview of the main use cases, requirements, and key enabling wireless technologies foreseen by the fourth industrial revolution, and (ii) proposes innovative contributions, both theoretical and empirical, to enhance the performance of current and future wireless technologies at different levels of the protocol stack. In particular, at the physical layer, signal processing techniques are exploited to analyze two multiplexing schemes, namely Affine Frequency Division Multiplexing and Orthogonal Chirp Division Multiplexing, which seem promising for high-frequency wireless communications. At the medium access layer, three protocols for intra-machine communications are proposed, where one is based on LoRa at 2.4 GHz and the others work in the THz band. Different scheduling algorithms for private industrial 5G networks are compared, and two main proposals are described, i.e., a decentralized scheme that leverages machine learning techniques to better address aperiodic traffic patterns, and a centralized contention-based design that serves a federated learning industrial application. Results are provided in terms of numerical evaluations, simulation results, and real-world experiments. Several improvements over the state-of-the-art were obtained, and the description of up-and-running testbeds demonstrates the feasibility of some of the theoretical concepts in a real industrial plant.

Relevance:

90.00%

Publisher:

Abstract:

Deep Neural Networks (DNNs) have revolutionized a wide range of applications beyond traditional machine learning and artificial intelligence fields, e.g., computer vision, healthcare, natural language processing and others. At the same time, edge devices have become central in our society, generating an unprecedented amount of data which could be used to train data-hungry models such as DNNs. However, the potentially sensitive or confidential nature of gathered data poses privacy concerns when storing and processing them in centralized locations. To this purpose, decentralized learning decouples model training from the need to directly access raw data, by alternating on-device training and periodic communications. The ability to distill knowledge from decentralized data, however, comes at the cost of facing more challenging learning settings, such as coping with heterogeneous hardware and network connectivity, the statistical diversity of data, and ensuring verifiable privacy guarantees. This Thesis proposes an extensive overview of the decentralized learning literature, including a novel taxonomy and a detailed description of the most relevant system-level contributions in the related literature for privacy, communication efficiency, data and system heterogeneity, and poisoning defense. Next, this Thesis presents the design of an original solution to tackle communication efficiency and system heterogeneity, and empirically evaluates it in federated settings. For communication efficiency, an original method, specifically designed for Convolutional Neural Networks, is also described and evaluated against the state-of-the-art. Furthermore, this Thesis provides an in-depth review of recently proposed methods to tackle the performance degradation introduced by data heterogeneity, followed by empirical evaluations on challenging data distributions, highlighting strengths and possible weaknesses of the considered solutions. Finally, this Thesis presents a novel perspective on the usage of Knowledge Distillation as a means of optimizing decentralized learning systems in settings characterized by data or system heterogeneity. Our vision of relevant future research directions closes the manuscript.
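
For readers unfamiliar with the setting, a minimal sketch of the federated averaging (FedAvg) aggregation step that underlies such decentralized systems is given below, assuming PyTorch state_dicts; the size-weighted rule is the standard FedAvg baseline, not a contribution of this Thesis.

```python
import copy

def federated_average(client_states, client_sizes):
    """Weighted average of client model parameters (FedAvg aggregation)."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(state[key] * (n / total)
                       for state, n in zip(client_states, client_sizes))
    return avg

# Each round: clients train locally on private data, send their state_dicts, and the
# server aggregates them: global_model.load_state_dict(federated_average(states, sizes))
```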

Relevance:

90.00%

Publisher:

Abstract:

Machine Learning makes computers capable of performing tasks that typically require human intelligence. A domain where it is having a considerable impact is the life sciences, where it allows devising new biological analysis protocols, developing patient treatments faster and more efficiently, and reducing healthcare costs. This Thesis presents new Machine Learning methods and pipelines for the life sciences, focusing on the unsupervised field. At the methodological level, two methods are presented. The first is an “Ab Initio Local Principal Path”, a revised and improved version of a pre-existing algorithm in the manifold learning realm. The second contribution is an improvement of the Import Vector Domain Description (one-class learning) through the Kullback-Leibler divergence. It hybridizes kernel methods with Deep Learning, obtaining a scalable solution, an improved probabilistic model, and state-of-the-art performance. Both methods are tested through several experiments, with a central focus on their relevance to the life sciences. Results show that they improve on the performance achieved by their previous versions. At the applicative level, two pipelines are presented. The first is for the analysis of RNA-Seq datasets, both transcriptomic and single-cell data, and is aimed at identifying genes that may be involved in biological processes (e.g., the transition of tissues from normal to cancer). In this project, an R package is released on CRAN to make the pipeline accessible to the bioinformatics community through high-level APIs. The second pipeline is in the drug discovery domain and is useful for identifying druggable pockets, namely regions of a protein with a high probability of accepting a small molecule (a drug). Both pipelines achieve remarkable results. Lastly, a side application is developed to identify the strengths and limitations of the “Principal Path” algorithm by analyzing vector spaces induced by Convolutional Neural Networks. This application is conducted in the music and visual arts domains.

Relevance:

90.00%

Publisher:

Abstract:

Long-term monitoring of acoustical environments is gaining popularity thanks to the wealth of scientific and engineering insights it provides. The increasing interest is due to the constant growth of the storage capacity and computational power needed to process large amounts of data. From this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level Leq is indeed the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles are based on temporal assumptions, which are not always reliable. A statistical approach, based on the study of the occurrences of sound pressure levels, brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than through portions of energy, provides more specific information about the activity carried out during the measurements. The statistical mode of the occurrences can capture the typical behavior of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The presented method is based on clustering analysis: two algorithms, Gaussian Mixture Model and K-means clustering, form the core of a process to investigate different active spaces monitored through sound level meters. The procedure has been applied in two different contexts: university lecture halls and offices. The proposed method shows robust and reliable results in describing the acoustic scenario and could represent an important analytical tool for acousticians.
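
A minimal sketch of the clustering idea, assuming scikit-learn and a hypothetical vector of sound pressure levels logged by a sound level meter; the two-component choice (background vs. activity) and the synthetic values are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# spl: sound pressure levels in dB(A), e.g. one value per second of long-term monitoring
spl = np.concatenate([np.random.normal(38, 2, 5000),    # quiet background (illustrative)
                      np.random.normal(62, 4, 2000)])   # speech/activity (illustrative)

gmm = GaussianMixture(n_components=2, random_state=0).fit(spl.reshape(-1, 1))

# Each component's mean acts as the "most probable level" of one coexisting source,
# a mode-like descriptor rather than an energetic average such as Leq.
for mean, weight in zip(gmm.means_.ravel(), gmm.weights_):
    print(f"component at {mean:.1f} dB, active {weight:.0%} of the time")
```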

Relevance:

90.00%

Publisher:

Abstract:

In pursuit of aligning with the European Union's ambitious target of achieving a carbon-neutral economy by 2050, researchers, vehicle manufacturers, and original equipment manufacturers have been at the forefront of exploring cutting-edge technologies for internal combustion engines. The introduction of these technologies has significantly increased the effort required to calibrate the models implemented in the engine control units. Consequently, the development of tools that reduce the costs and time required during the experimental phases has become imperative. Additionally, to comply with ever-stricter limits on CO2 emissions, it is crucial to develop advanced control systems that enhance traditional engine management systems in order to reduce fuel consumption. Furthermore, the introduction of new homologation cycles, such as the real driving emissions cycle, compels manufacturers to bridge the gap between engine operation in laboratory tests and real-world conditions. Within this context, this thesis showcases the performance and cost benefits achievable through the implementation of an auto-adaptive closed-loop control system, leveraging in-cylinder pressure sensors, in a heavy-duty diesel engine designed for mining applications. Additionally, the thesis explores the promising prospect of real-time self-adaptive machine learning models, particularly neural networks, to develop an automatic system that uses in-cylinder pressure sensors for the precise calibration of the target combustion phase and optimal spark advance in spark-ignition engines. To facilitate the application of these combustion-feedback-based algorithms in production, the thesis discusses the results obtained from the development of a cost-effective sensor for indirect cylinder pressure measurement. Finally, to ensure the quality control of the proposed affordable sensor, the thesis provides a comprehensive account of the design and validation process for a piezoelectric washer test system.
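
As a toy illustration of the closed-loop idea only (not the controller developed in the thesis), the sketch below nudges the spark advance until the measured combustion phase, e.g. the CA50 derived from in-cylinder pressure, matches its target; the gain, limits and values are illustrative assumptions.

```python
def update_spark_advance(spark_advance, ca50_measured, ca50_target,
                         gain=0.1, sa_min=-10.0, sa_max=40.0):
    """One correction step: advance the spark if combustion is later than the target
    CA50 (crank angle of 50% heat release), retard it if combustion is too early."""
    error = ca50_measured - ca50_target        # positive -> combustion too late
    spark_advance += gain * error              # illustrative integral-style update
    return min(max(spark_advance, sa_min), sa_max)

# Example: target CA50 at 8 deg aTDC, measured at 12 deg -> advance the spark slightly
sa = update_spark_advance(spark_advance=18.0, ca50_measured=12.0, ca50_target=8.0)
```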

Relevance:

90.00%

Publisher:

Abstract:

Riding the wave of recent groundbreaking achievements, artificial intelligence (AI) is currently on everybody's lips, and Machine Learning (ML), which allows algorithms to learn from historical data, has emerged as its pinnacle. The multitude of algorithms, each with unique strengths and weaknesses, highlights the absence of a universal solution and poses a challenging optimization problem. In response, automated machine learning (AutoML) navigates vast search spaces under tight time constraints. By lowering entry barriers, AutoML has emerged as a promising path towards the democratization of AI, yet it still faces several challenges. In data-centric AI, the discipline of systematically engineering the data used to build an AI system, a key challenge is the configuration of data pipelines. We devise a methodology for building effective data pre-processing pipelines in supervised learning as well as a data-centric AutoML solution for unsupervised learning. In human-centric AI, many current AutoML tools were not built around the user but rather around algorithmic ideas, raising ethical and social bias concerns. We contribute by deploying AutoML tools that aim at complementing, instead of replacing, human intelligence. In particular, we provide solutions for single-objective and multi-objective optimization and showcase the challenges and potential of novel interfaces featuring large language models. Finally, some application areas rely on numerical simulators, often related to Earth observation; these tend to be particularly high-impact and address important challenges such as climate change and crop life cycles. We commit to coupling these physical simulators with (Auto)ML solutions towards a physics-aware AI. Specifically, in precision farming, we design a smart irrigation platform that allows real-time monitoring of soil moisture, predicts future moisture values, and estimates water demand to schedule irrigation.
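
A minimal sketch of what searching over data pre-processing pipelines can look like in practice, assuming scikit-learn; the candidate operators and toy dataset are illustrative assumptions, not the methodology devised in the thesis.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([("scaler", StandardScaler()),
                     ("reducer", PCA()),
                     ("model", LogisticRegression(max_iter=1000))])

# The search space: alternative pre-processing operators (or no operator at all)
search_space = {
    "scaler": [StandardScaler(), MinMaxScaler(), "passthrough"],
    "reducer": [PCA(n_components=10), "passthrough"],
}
search = GridSearchCV(pipeline, search_space, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```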

Relevance:

90.00%

Publisher:

Abstract:

Trying to explain to a robot what to do is a difficult undertaking, and so far only specific groups of people, such as programmers or operators who have learned how to use controllers to communicate with a robot, have been able to do so. The goal of my internship was to create and develop a framework that makes this easier. The system uses deep learning techniques to recognize a set of hand gestures, both static and dynamic, and then, based on the gesture, sends a command to a robot. To be as generic as feasible, the communication is implemented using the Robot Operating System (ROS). Furthermore, users can add new recognizable gestures and link them to new robot actions; a finite-state automaton enforces the verification of user input and the correct action sequence. Finally, users can create and use macros to describe sequences of actions that a robot can perform.
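
A minimal sketch of the gesture-to-command bridge, assuming ROS 1 (rospy); the topic name, message type and gesture-to-action mapping are illustrative assumptions, and the classifier is replaced by a stub.

```python
import rospy
from std_msgs.msg import String

# Hypothetical mapping from recognized gesture labels to robot commands
GESTURE_TO_COMMAND = {"open_palm": "stop", "thumbs_up": "start", "swipe_left": "turn_left"}

def recognize_gesture():
    """Stub standing in for the deep learning gesture classifier."""
    return "open_palm"

def main():
    rospy.init_node("gesture_commander")
    pub = rospy.Publisher("/robot/command", String, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        command = GESTURE_TO_COMMAND.get(recognize_gesture())
        if command:
            pub.publish(String(data=command))
        rate.sleep()

if __name__ == "__main__":
    main()
```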

Relevance:

90.00%

Publisher:

Abstract:

The role of computer science has become key to the functioning of the modern world, which is progressively digitalizing every single aspect of individuals' lives. As the complexity and size of programs increase, error detection becomes an ever more difficult activity that requires considerable time and resources. Traditional source code analysis mechanisms have existed since the birth of computer science itself, and their role within the production chain of a team of programmers has never been as fundamental as it is today. These analysis mechanisms, however, are not free of problems: the execution time on large projects and the percentage of false positives can become significant issues. For these reasons, mechanisms based on Machine Learning, and more specifically Deep Learning, have been developed in recent years. This thesis aims to explore and develop a Deep Learning model for detecting errors in any source file written in C or C++.
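
To give an idea of the direction, a minimal sketch of a token-level classifier over a C/C++ source file is shown below, assuming PyTorch; the tokenization, vocabulary and layer sizes are illustrative assumptions, and the model is far simpler than those explored in the thesis.

```python
import torch
import torch.nn as nn

class SourceBugClassifier(nn.Module):
    """Embed source-code tokens, pool them, and predict buggy vs. clean."""
    def __init__(self, vocab_size=5000, embed_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.head = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, token_ids):                   # token_ids: (batch, sequence_length)
        pooled = self.embed(token_ids).mean(dim=1)  # average over the token sequence
        return self.head(pooled)

model = SourceBugClassifier()
fake_tokens = torch.randint(1, 5000, (8, 256))      # 8 files, 256 tokens each (illustrative)
logits = model(fake_tokens)                         # -> (8, 2): clean vs. buggy scores
```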

Relevance:

90.00%

Publisher:

Abstract:

This thesis focuses on the design of a flexible, dynamic and innovative telecommunications system for future 6G applications in vehicular communications. The system is based on drones acting as mobile base stations in an urban scenario to cope with the increasing traffic demand and avoid network congestion. In particular, Reinforcement Learning algorithms are exploited to let the drone learn autonomously how to behave in a scenario full of obstacles, with the goal of tracking and serving the maximum number of moving vehicles while, at the same time, minimizing the energy consumed to perform its tasks. This project is an extraordinary opportunity to open the door to a new way of applying and developing telecommunications in urban scenarios by combining them with the rising world of Artificial Intelligence.
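
A minimal sketch of the tabular Q-learning update on which this family of Reinforcement Learning algorithms builds; the state/action spaces, reward and hyperparameters are illustrative assumptions, not the thesis setup.

```python
import numpy as np

n_states, n_actions = 100, 5              # e.g. discretized drone positions and movements
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """One-step Q-learning; the reward could combine vehicles served minus energy spent."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```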

Relevance:

90.00%

Publisher:

Abstract:

The fashion world is in continuous and constant evolution, not only from a social point of view but also from a technological one. This work studies the possibility of recognizing and segmenting garments in an image using deep neural networks and modern approaches. Networks such as FasterRCNN, MaskRCNN, YOLOv5, FashionPedia and Match-RCNN were therefore analyzed. The training of deep neural networks in highly parallelized scenarios and on machines equipped with multiple GPUs was then investigated in order to reduce training times. In addition, the possibility of creating a network to predict whether a given garment will be successful in the future, simply by analyzing past data and an image of the garment in question, was explored. These tasks also required an in-depth analysis of the existing datasets in the fashion world and of the methods for using them for training. This work was carried out within the FA.RE.TRA. project, for which the Università di Bologna acts as a consultant for the feasibility study on neural networks able to perform the aforementioned tasks.
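
A minimal sketch of garment detection and segmentation inference with an off-the-shelf Mask R-CNN, assuming torchvision; the COCO-pretrained weights are a stand-in for a model fine-tuned on a fashion dataset such as those analyzed in the work.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()   # COCO weights as a stand-in

image = torch.rand(3, 600, 400)             # a normalized RGB image tensor (illustrative)
with torch.no_grad():
    prediction = model([image])[0]          # boxes, labels, scores and per-instance masks

keep = prediction["scores"] > 0.5
garment_boxes = prediction["boxes"][keep]
garment_masks = prediction["masks"][keep]   # (N, 1, H, W) soft masks, threshold at 0.5
```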

Relevance:

90.00%

Publisher:

Abstract:

The Neural Networks customized and tested in this thesis (WaldoNet, FlowNet and PatchNet) are a first exploration of and approach to the Template Matching task; the possibilities for extension are therefore many, and some are proposed in the thesis. During my thesis, I analyzed the functioning of the classical algorithms and adapted them using deep learning techniques. The features extracted from both the template and the query images resemble the keypoints of the SIFT algorithm. Then, instead of a similarity function or keypoint matching, WaldoNet and PatchNet use a convolutional layer to compare the features, while FlowNet uses a correlational layer. In addition, I identified the major challenges of the Template Matching task (affine/non-affine transformations, intensity changes, ...) and addressed them with a careful design of the dataset.
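
A minimal sketch of the idea of comparing features with a convolutional layer: the template's feature map is used as the kernel of a cross-correlation over the query's feature map (assuming PyTorch; the random features are placeholders for the outputs of a feature extractor, not WaldoNet or PatchNet).

```python
import torch
import torch.nn.functional as F

def match_score_map(query_feat, template_feat):
    """Slide the template features over the query features; peaks mark likely matches.
    query_feat: (1, C, Hq, Wq), template_feat: (1, C, Ht, Wt)."""
    scores = F.conv2d(query_feat, template_feat)     # (1, 1, Hq-Ht+1, Wq-Wt+1) correlation
    return scores / (query_feat.norm() * template_feat.norm() + 1e-8)  # rough normalization

query_feat = torch.randn(1, 16, 64, 64)      # placeholder features, e.g. from a small CNN
template_feat = torch.randn(1, 16, 8, 8)
scores = match_score_map(query_feat, template_feat)
best = torch.nonzero(scores[0, 0] == scores.max())   # location of the strongest response
```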

Relevance:

90.00%

Publisher:

Abstract:

The aim of this thesis is to research, examine and implement a Machine Learning system, more precisely a Recommendation System, that optimally recommends legal documents which have already been appropriately analyzed and categorized. Its purpose is to complement an already implemented Information Retrieval system, instantiated on top of a web application, which allows users to search the aforementioned legal documents.
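
A minimal sketch of a content-based recommender over already-categorized legal documents, assuming scikit-learn; the toy corpus and the TF-IDF/cosine-similarity choice are illustrative assumptions, not the implemented system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Contract dispute over delivery terms and penalty clauses",
    "Employment termination and severance pay litigation",
    "Commercial lease agreement breach and damages",
]   # stand-ins for the categorized legal documents in the repository

tfidf = TfidfVectorizer(stop_words="english")
doc_vectors = tfidf.fit_transform(documents)

def recommend(query_doc_index, top_k=2):
    """Rank the other documents by cosine similarity to the one the user is reading."""
    similarities = cosine_similarity(doc_vectors[query_doc_index], doc_vectors).ravel()
    ranked = similarities.argsort()[::-1]
    return [i for i in ranked if i != query_doc_index][:top_k]

print(recommend(0))
```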