778 results for Machine Learning. Semi-supervised learning. Multi-label classification. Reliability Parameter
Abstract:
Histology, the study of tissues, is one of the fundamental areas of Biology and has enabled enormous scientific advances. Because histological analysis is a demanding, meticulous, and time-consuming task, it is important to take advantage of existing computational tools and algorithms to assist it, making the process faster and enabling the discovery of information that may not be visible at first sight. The main goal of this dissertation is to ascertain whether or not an animal was subjected to the ingestion of a xenobiotic. With this objective in mind, image processing and segmentation techniques were applied to images of kidney tissue from healthy rats and rats that ingested the xenobiotic. From these images, numerous features of the renal corpuscle were extracted; after being analyzed with several classification algorithms, they showed that it is possible to determine, with a low degree of uncertainty, whether or not the animal ingested the xenobiotic.
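A minimal sketch of the kind of pipeline this abstract describes: segment a tissue image, extract shape features of the renal corpuscle, and feed them to a classifier. The Otsu segmentation, the specific shape features, the random-forest classifier, and the synthetic images are all illustrative assumptions, not the dissertation's actual method.

```python
# Hypothetical pipeline: segment -> measure corpuscle shape -> classify.
import numpy as np
from skimage import filters, measure
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def synthetic_micrograph(blob_size):
    """Placeholder for a kidney-tissue image: noise plus one bright blob."""
    img = rng.random((64, 64))
    img[20:20 + blob_size, 20:20 + blob_size] += 1.5
    return img / img.max()

def corpuscle_features(img):
    """Otsu segmentation, then shape features of the largest region."""
    mask = img > filters.threshold_otsu(img)
    r = max(measure.regionprops(measure.label(mask)), key=lambda reg: reg.area)
    return [r.area, r.perimeter, r.eccentricity, r.solidity]

# Label 1 = xenobiotic-exposed; here faked by slightly larger corpuscles.
y = np.array([0, 1] * 10)
X = np.array([corpuscle_features(synthetic_micrograph(10 + 6 * lab)) for lab in y])
clf = RandomForestClassifier(random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```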
Abstract:
Nowadays, the development of new internal combustion engines is mainly driven by the need to reduce tailpipe emissions of pollutants and greenhouse gases and to avoid wasting fossil fuels. The design of the combustion chamber's dimensions and shape, together with the implementation of different injection strategies (e.g., injection timing, spray targeting, higher injection pressure), plays a key role in meeting these targets. As far as the match between fuel injection and evaporation and the combustion chamber shape is concerned, assessing the interaction between the liquid fuel spray and the engine walls in gasoline direct injection engines is crucial. Numerical simulation is an acknowledged technique to support the study of new technological solutions, such as the design of new gasoline blends and of tailored injection strategies to pursue the target mixture formation. The current simulation framework, however, lacks a well-defined best practice for simulating the liquid fuel spray interaction, which is a complex multi-physics problem. This thesis deals with the development of robust methodologies for the numerical simulation of the liquid fuel spray's interaction with walls and lubricants. The work was divided into three parts: i) setup and validation of three-dimensional CFD simulations of spray-wall impingement; ii) development of a one-dimensional model describing the liquid fuel-lubricant oil interaction; iii) development of a machine learning-based algorithm aimed at defining which mixture of known pure components mimics the physical behaviour of the real gasoline for the simulation of the liquid fuel spray interaction.
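The third part, the surrogate-blend search, can be illustrated with a toy optimizer: given property vectors for a few pure components, find the mass fractions whose blend best matches a target gasoline. The component property values, the choice of properties, and the linear-mixing assumption are placeholders for illustration; the thesis's actual machine learning-based algorithm is not reproduced here.

```python
# Toy surrogate-blend search under an assumed linear property-mixing rule.
import numpy as np
from scipy.optimize import minimize

# Rows: pure components; columns: properties (e.g., density, RVP, T50).
components = np.array([
    [690.0, 55.0, 372.0],   # iso-pentane (illustrative values only)
    [750.0, 10.0, 410.0],   # toluene
    [720.0, 20.0, 390.0],   # iso-octane
])
target = np.array([735.0, 22.0, 398.0])  # measured gasoline properties

def mismatch(x):
    # Relative error between the blended and the target property vectors.
    return np.sum(((x @ components - target) / target) ** 2)

cons = ({"type": "eq", "fun": lambda x: x.sum() - 1.0},)  # fractions sum to 1
res = minimize(mismatch, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3, constraints=cons)
print("surrogate fractions:", res.x.round(3))
```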
Abstract:
The main objective of this thesis is to compare solutions based on different technologies and to identify the best one for determining whether the people framed in an image are correctly wearing a protective face mask, as required by anti-COVID regulations. To reach this goal, several architectures built for the same purpose and based on Machine Learning and Deep Learning principles are compared and run on a set of selected datasets created for similar purposes.
Abstract:
The work described in this Master's Degree thesis was born from a collaboration with Maserati S.p.A., an Italian luxury car maker headquartered in Modena, in the heart of the Italian Motor Valley, where I worked as a stagiaire in the Virtual Engineering team between September 2021 and February 2022. This work proposes the validation, using real-world ECUs, of a Driver Drowsiness Detection (DDD) system prototype based on different detection methods, with the goal of overcoming input signal losses and system failures. Detection methods of different categories were chosen from the literature and merged to exploit the benefits of each, overcome their individual limitations, and limit their degree of intrusiveness as much as possible so as to prevent any kind of driving distraction: an image processing-based technique for detecting human physical signals is combined with methods based on driver-vehicle interaction. A Driver-In-the-Loop simulator is used to gather real data on which a Machine Learning-based algorithm is trained and validated. These data come from the tests that the company conducts in its daily activities, so confidential information about the simulator and the drivers is omitted. Although the impact of the proposed system is not yet remarkable and work remains to be done on all of its elements, the results indicate the system's main advantages in terms of robustness against subsystem failures and signal losses.
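A hedged sketch of the fusion idea this abstract describes: a camera-derived drowsiness indicator (e.g., PERCLOS) and a driver-vehicle-interaction feature are combined, with an explicit availability flag so the classifier can cope with camera-signal loss. The feature names, the synthetic data, and the gradient-boosting model are assumptions, not the validated prototype.

```python
# Toy multi-source drowsiness classifier robust to camera-signal loss.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 500
perclos = rng.random(n)                 # camera-based eyelid-closure ratio
steer_var = rng.random(n)               # steering-angle variance (CAN bus)
drowsy = ((perclos + steer_var) > 1.1).astype(int)  # synthetic label

cam_ok = rng.random(n) > 0.1            # simulate 10% camera-signal loss
perclos_in = np.where(cam_ok, perclos, 0.0)
X = np.column_stack([perclos_in, cam_ok.astype(float), steer_var])

clf = GradientBoostingClassifier().fit(X[:400], drowsy[:400])
print("held-out accuracy:", clf.score(X[400:], drowsy[400:]))
```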
Abstract:
Many real-world datasets, including textual data, can be represented using graph structures. Using graphs to represent textual data has many advantages, mainly related to preserving a greater amount of information, such as the relationships between words and their types. In recent years, many neural network architectures have been proposed to deal with tasks on graphs. Many of them consider only node features, ignoring the relationships between nodes or not giving them the proper relevance; however, in many node classification tasks these relationships play a fundamental role. This thesis aims to analyze the main GNNs, evaluate their advantages and disadvantages, propose an innovative solution designed as an extension of GAT, and apply them to a case study in the biomedical field. We implement the reference GNNs with the methodologies analyzed later and then apply them to a question answering system in the biomedical field as a replacement for its pre-existing GNN. We attempt to obtain better results by using models that accept both node and edge features as input. As shown later, our proposed models beat the original solution and define the state of the art for the task under analysis.
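A minimal PyTorch sketch of a GAT-style layer whose attention scores and messages also depend on edge features, in the spirit of the GAT extension this abstract mentions; it is an illustrative re-implementation under assumed design choices, not the thesis's model.

```python
# GAT-like attention that incorporates edge features into score and message.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeGATLayer(nn.Module):
    def __init__(self, in_dim, edge_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)     # node projection
        self.We = nn.Linear(edge_dim, out_dim, bias=False)  # edge projection
        self.a = nn.Linear(3 * out_dim, 1, bias=False)      # attention scorer

    def forward(self, x, edge_index, edge_attr):
        # edge_index: (2, E) source/target indices; edge_attr: (E, edge_dim)
        src, dst = edge_index
        h, e = self.W(x), self.We(edge_attr)
        scores = F.leaky_relu(self.a(torch.cat([h[src], h[dst], e], dim=-1))).squeeze(-1)
        # Softmax over the incoming edges of each target node.
        alpha = torch.zeros_like(scores)
        for node in dst.unique():
            m = dst == node
            alpha[m] = F.softmax(scores[m], dim=0)
        # Aggregate edge-aware messages into target nodes.
        out = torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * (h[src] + e))
        return F.elu(out)

x = torch.randn(4, 8)                                 # 4 nodes, 8 features each
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
edge_attr = torch.randn(4, 5)                         # 5 edge features per edge
print(EdgeGATLayer(8, 5, 16)(x, edge_index, edge_attr).shape)  # (4, 16)
```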
Abstract:
This thesis is composed of a collection of works written in the period 2019-2022, whose aim is to find Artificial Intelligence (AI) and Machine Learning methodologies to detect and classify patterns and rules in argumentative and legal texts. We describe our approach as "hybrid", since we aimed at designing hybrid combinations of symbolic and sub-symbolic AI, involving both "top-down" structured knowledge and "bottom-up" data-driven knowledge. A first group of works is dedicated to the classification of argumentative patterns. Following the Waltonian model of argument and the related theory of Argumentation Schemes, these works focused on the detection of argumentative support and opposition, showing that argumentative evidence can be classified at fine-grained levels without resorting to highly engineered features. To show this, our methods involved not only traditional approaches such as TF-IDF, but also novel methods based on Tree Kernel algorithms. After the encouraging results of this first phase, we explored some emerging methodologies promoted by actors like Google, which have deeply changed NLP since 2018-19, namely Transfer Learning and language models. These new methodologies markedly improved our previous results, providing us with best-performing NLP tools. Using Transfer Learning, we also performed a Sequence Labelling task to recognize the exact span of argumentative components (i.e., claims and premises), thus connecting portions of natural language to portions of arguments (i.e., to the logical-inferential dimension). The last part of our work was dedicated to employing Transfer Learning methods for the detection of rules and deontic modalities. In this case, we explored a hybrid approach that combines structured knowledge coming from two LegalXML formats (i.e., Akoma Ntoso and LegalRuleML) with sub-symbolic knowledge coming from pre-trained (and then fine-tuned) neural architectures.
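The Sequence Labelling step can be sketched with the Hugging Face transformers API: a pre-trained encoder with a token-classification head tags each token with a BIO label for claims and premises. The model name, the label set, and the example sentence are assumptions; the head below is untrained, so the printed tags are meaningless until fine-tuning.

```python
# Transfer-Learning sketch: token classification for claim/premise spans.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-CLAIM", "I-CLAIM", "B-PREMISE", "I-PREMISE"]
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))

enc = tok("Smoking should be banned because it harms bystanders.",
          return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(-1)[0]   # one label id per token
tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
print([(t, labels[i]) for t, i in zip(tokens, pred)])
```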
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated like natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension, and it is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automating this task is desirable. To this end, the problem is analyzed from different points of view (text-based and image-based learning approaches), and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers; now, this task can be automated with learning approaches that speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method level). The data and model architectures used are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with other related works are discussed.
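The text-based side of PLI can be sketched as a character n-gram TF-IDF representation with a linear classifier, one plausible baseline for the task; the six-snippet corpus is a toy placeholder, not the thesis's dataset or model.

```python
# Text-based Programming Language Identification baseline sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "def main():\n    print('hi')",            # Python
    "#include <stdio.h>\nint main(){}",        # C
    "function main() { console.log('hi'); }",  # JavaScript
    "import numpy as np\nx = np.ones(3)",      # Python
    "printf(\"%d\", x);",                      # C
    "const x = [1, 2].map(n => n * 2);",       # JavaScript
]
langs = ["Python", "C", "JavaScript", "Python", "C", "JavaScript"]

pli = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # char n-grams
    LogisticRegression(max_iter=1000),
).fit(snippets, langs)
print(pli.predict(["for i in range(10):\n    pass"]))  # expected: Python
```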
Abstract:
Intelligent systems are now inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and poor software functioning may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis for human-centered applications, with interpretability being one of the most important features of computational systems. Software maintenance is a critical discipline for supporting automatic, life-long system operation. As most software registers its inner events by means of logs, log analysis is an approach to keeping systems operational. Logs are Big Data assembled in large-flow streams, and are unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods to provide maintenance solutions applied to anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analyses; LP provides a deeper semantic interpretation of the anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to automate the parsing of system logs. All the methods use recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. In terms of AD accuracy, FBeM achieved (85.64 ± 3.69)%, eGNN reached (96.17 ± 0.78)%, eGFC obtained (92.48 ± 1.21)%, and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP in particular generates a log grammar and presents a higher level of model interpretability.
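A didactic simplification of the evolving-granular principle shared by the compared models: granules are created and expanded online as the stream arrives, and a sample that fits no granule is flagged as a novelty/anomaly. Only creation and expansion are shown; the actual methods also merge and delete granules. The hyperbox form, the rho threshold, and the synthetic stream are assumptions, and none of FBeM, eGNN, eGFC, or eLP is reproduced here.

```python
# Toy evolving-granular anomaly detector over a data stream.
import numpy as np

class EvolvingGranules:
    def __init__(self, rho=0.3):
        self.rho = rho          # maximum granule half-width per feature
        self.boxes = []         # each granule: [lower, upper] bound arrays

    def update(self, x):
        """Return True if x is a novelty/anomaly, then absorb it."""
        for box in self.boxes:
            lo, hi = np.minimum(box[0], x), np.maximum(box[1], x)
            if np.all(hi - lo <= 2 * self.rho):   # x fits: expand granule
                box[0], box[1] = lo, hi
                return False
        self.boxes.append([x.copy(), x.copy()])   # create a new granule
        return True

rng = np.random.default_rng(1)
stream = np.vstack([rng.normal(0, 0.05, (200, 2)), [[3.0, 3.0]]])  # one outlier
model = EvolvingGranules()
flags = [model.update(x) for x in stream]
print("anomalies flagged:", int(np.sum(flags)), "| granules:", len(model.boxes))
```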
Abstract:
Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperform traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state of the art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and their architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features remain largely unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting the neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, the SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train, and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses these limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which can reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal, and frequency domains, and proved to highlight and enhance relevant neural features related to P300 and motor states better than canonical EEG analyses. Remarkably, these analyses could in perspective be used to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
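A sketch of a light, compact EEG-decoding CNN of the kind described above, with a temporal convolution, a depthwise spatial convolution, pooling, and a linear readout; the layer sizes are illustrative assumptions rather than the architectures designed in the thesis.

```python
# Compact EEG-decoding CNN sketch (temporal + depthwise spatial convs).
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    def __init__(self, n_channels=32, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, (1, 33), padding=(0, 16), bias=False),  # temporal filters
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, (n_channels, 1), groups=8, bias=False),  # spatial, depthwise
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AvgPool2d((1, 8)), nn.Dropout(0.5),
        )
        self.classify = nn.Linear(16 * (n_samples // 8), n_classes)

    def forward(self, x):            # x: (batch, 1, channels, time)
        return self.classify(self.features(x).flatten(1))

net = CompactEEGNet()
print(sum(p.numel() for p in net.parameters()), "parameters")  # stays small
print(net(torch.randn(4, 1, 32, 256)).shape)                   # (4, 2)
```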
Abstract:
Artificial Intelligence (AI) and Machine Learning (ML) are novel data analysis techniques that provide very accurate prediction results. They are widely adopted in a variety of industries to improve efficiency and decision-making, and they are also being used to develop intelligent systems. Their success rests on complex mathematical models whose decisions and rationale are usually difficult for human users to comprehend, to the point of being dubbed black boxes. This is particularly relevant in sensitive and highly regulated domains. To mitigate and possibly solve this issue, the Explainable AI (XAI) field has become prominent in recent years. XAI consists of models and techniques to enable understanding of the intricate patterns discovered by black-box models. In this thesis, we consider model-agnostic XAI techniques that can be applied to tabular data, with a particular focus on the credit scoring domain. Special attention is dedicated to the LIME framework, for which we propose several modifications to the vanilla algorithm, in particular: a pair of complementary Stability Indices that accurately measure LIME's stability, and the OptiLIME policy, which helps the practitioner find the proper balance between the stability and reliability of explanations. We subsequently put forward GLEAMS, a model-agnostic, interpretable surrogate model that needs to be trained only once while providing both local and global explanations of the black-box model. GLEAMS produces feature attributions and what-if scenarios from both the dataset and the model perspective. Finally, we argue that synthetic data are an emerging trend in AI, being used more and more to train complex models instead of the original data. To be able to explain the outcomes of such models, we must guarantee that the synthetic data are reliable enough for their explanations to translate to real-world individuals. To this end, we propose DAISYnt, a suite of tests to measure the quality and privacy of synthetic tabular data.
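The stability issue that motivates the proposed Stability Indices can be demonstrated by explaining the same instance twice with LIME and comparing the top features; the Jaccard overlap below is a simplified proxy for illustration, not the thesis's indices or the OptiLIME policy. The example assumes the lime and scikit-learn packages.

```python
# Two LIME runs on one instance; overlap of top features gauges stability.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = LimeTabularExplainer(X, mode="classification")

def top_features(k=4):
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=k)
    return {name for name, _ in exp.as_list()}

a, b = top_features(), top_features()   # two runs, same instance
print("Jaccard stability:", len(a & b) / len(a | b))  # 1.0 = perfectly stable
```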
Abstract:
The COVID-19 pandemic, sparked by the SARS-CoV-2 virus, stirred global comparisons to historical pandemics. Initially presenting a high mortality rate, it later stabilized globally at around 0.5-3%. Patients manifest a spectrum of symptoms, necessitating efficient triaging for appropriate treatment strategies, ranging from symptomatic relief to antivirals or monoclonal antibodies. Beyond traditional approaches, emerging research suggests a potential link between COVID-19 severity and alterations in gut microbiota composition, impacting inflammatory responses. However, most studies focus on severe hospitalized cases without standardized criteria for severity. Addressing this gap, the first study in this thesis spans diverse COVID-19 severity levels, using 16S rRNA amplicon sequencing on fecal samples from 315 subjects. The findings highlight significant microbiota differences correlated with severity. Machine learning classifiers, including a multi-layer convolutional neural network, demonstrated the potential of microbiota compositional data to predict patient severity, achieving an 84.2% mean balanced accuracy starting one week post-symptom onset. These preliminary results underscore the gut microbiota's potential as a biomarker in clinical decision-making for COVID-19. The second study delves into mild COVID-19 cases, exploring their implications for "long COVID" or Post-Acute COVID-19 Syndrome (PACS). Employing longitudinal analysis, the study unveils dynamic shifts in microbial composition during the acute phase, akin to severe cases. Innovative techniques, including network approaches and spline-based longitudinal analysis, were deployed to assess microbiota dynamics and potential associations with PACS. The research suggests that, even in mild cases, mechanisms similar to those in hospitalized patients are established regarding changes in the intestinal microbiota during the acute phase of the infection. These findings lay the foundation for potential microbiota-targeted therapies to mitigate inflammation, potentially preventing long COVID symptoms in the broader population. In essence, these studies offer valuable insights into the intricate relationships between COVID-19 severity, gut microbiota, and the potential for innovative clinical applications.
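An illustrative sketch of severity prediction from microbiota composition: relative abundances are centered log-ratio (CLR) transformed, a standard move for compositional data, and fed to a classifier scored by balanced accuracy, the metric the abstract reports. The data below are synthetic, not the 315-subject cohort, and the random forest stands in for the study's actual classifiers.

```python
# Compositional-data classification sketch: CLR transform + balanced accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
counts = rng.gamma(2.0, 1.0, (120, 50))            # fake 16S abundance table
severity = (counts[:, :5].sum(1) > counts[:, :5].sum(1).mean()).astype(int)

rel = counts / counts.sum(1, keepdims=True)        # close to compositions
log_rel = np.log(rel + 1e-9)
clr = log_rel - log_rel.mean(1, keepdims=True)     # centered log-ratio

clf = RandomForestClassifier(random_state=0)
scores = cross_val_score(clf, clr, severity, cv=5, scoring="balanced_accuracy")
print("mean balanced accuracy:", round(scores.mean(), 3))
```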
Abstract:
Embedded systems are increasingly integral to daily life, improving the efficiency of the modern Cyber-Physical Systems that provide access to sensor data and actuators. As modern architectures become increasingly complex and heterogeneous, their optimization becomes a challenging task; additionally, ensuring platform security is important to avoid harm to individuals and assets. This study addresses challenges in contemporary embedded systems, focusing on platform optimization and security enforcement. The initial section delves into the application of machine learning methods to efficiently determine, via static source code analysis, the optimal number of cores of a parallel RISC-V cluster so as to minimize energy consumption. Results demonstrate that automated platform configuration is not only viable but also that the performance trade-off is moderate when relying solely on static features. The second part addresses the problem of heterogeneous device mapping, which involves assigning tasks to the most suitable computational device in a heterogeneous platform for optimal runtime. The contribution of this section lies in the introduction of novel pre-processing techniques, along with a training framework based on Siamese Networks, that enhances the classification performance of DeepLLVM, an advanced approach for task mapping; importantly, the proposed approaches are independent of the specific deep-learning model used. Finally, this research addresses issues concerning the binary exploitation of software running on modern embedded systems. It proposes an architecture to implement Control-Flow Integrity in embedded platforms with a Root-of-Trust, aiming to enhance security guarantees with limited hardware modifications. The approach enhances the architecture of a modern RISC-V platform for autonomous vehicles by implementing a side-channel communication mechanism that relays control-flow changes executed by the process running on the host core to the Root-of-Trust. This approach has limited impact on performance and is effective in enhancing the security of embedded platforms.
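The first contribution's idea, predicting the energy-optimal core count from static source-code features, can be sketched as a small classification problem; the feature names, the synthetic labelling rule, and the random forest are illustrative assumptions, not the study's actual features or model.

```python
# Static-feature -> optimal-core-count prediction sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 300
# Hypothetical static features: parallel-loop share, memory-op ratio,
# branch density (all normalized to [0, 1]).
X = rng.random((n, 3))
# Synthetic ground truth: more parallel-friendly code -> more cores pay off.
optimal_cores = np.select([X[:, 0] > 0.7, X[:, 0] > 0.4], [8, 4], default=1)

clf = RandomForestClassifier(random_state=0).fit(X[:250], optimal_cores[:250])
print("held-out accuracy:", clf.score(X[250:], optimal_cores[250:]))
print("suggested cores:", clf.predict([[0.8, 0.2, 0.1]]))  # new program's features
```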
Abstract:
Riding the wave of recent groundbreaking achievements, artificial intelligence (AI) is currently the buzzword on everybody's lips, and Machine Learning (ML), which allows algorithms to learn from historical data, has emerged as its pinnacle. The multitude of algorithms, each with unique strengths and weaknesses, highlights the absence of a universal solution and poses a challenging optimization problem. In response, automated machine learning (AutoML) navigates vast search spaces within minimal time constraints. By lowering entry barriers, AutoML promises the democratization of AI, yet it still faces several challenges. In data-centric AI, the discipline of systematically engineering the data used to build an AI system, configuring data pipelines is a central challenge: we devise a methodology for building effective data pre-processing pipelines in supervised learning, as well as a data-centric AutoML solution for unsupervised learning. In human-centric AI, many current AutoML tools were built not around the user but around algorithmic ideas, raising ethical and social-bias concerns; we contribute by deploying AutoML tools that aim at complementing, instead of replacing, human intelligence. In particular, we provide solutions for single-objective and multi-objective optimization and showcase the challenges and potential of novel interfaces featuring large language models. Finally, some application areas rely on numerical simulators, often related to Earth observation; these tend to be particularly high-impact, addressing important challenges such as climate change and crop life cycles. We commit to coupling these physical simulators with (Auto)ML solutions towards a physics-aware AI. Specifically, in precision farming, we design a smart irrigation platform that allows real-time monitoring of soil moisture, predicts future moisture values, and estimates water demand to schedule irrigation.
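A minimal sketch of data-centric AutoML for pre-processing: randomly sample pipeline configurations (scaling, feature selection, model hyperparameters) and keep the best by cross-validation. The search space, budget, and dataset are illustrative assumptions, not the thesis's methodology.

```python
# Tiny random search over data pre-processing pipeline configurations.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_breast_cancer(return_X_y=True)
random.seed(0)
best = (0.0, None)
for _ in range(20):                                  # small search budget
    pipe = make_pipeline(
        random.choice([StandardScaler(), MinMaxScaler()]),
        SelectKBest(k=random.choice([5, 10, 20])),
        LogisticRegression(max_iter=2000, C=random.choice([0.1, 1.0, 10.0])),
    )
    score = cross_val_score(pipe, X, y, cv=3).mean()
    best = max(best, (score, pipe), key=lambda t: t[0])
print("best CV accuracy:", round(best[0], 3))
```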
Abstract:
State-of-the-art NLP systems are generally based on the assumption that the underlying models are provided with vast datasets to train on. However, especially in multi-lingual contexts, datasets are often scarce, so more research should be carried out in this field. This thesis investigates the benefits of introducing an additional training step when fine-tuning NLP models, named Intermediate Training, which can be exploited to augment the data used for the training phase. The Intermediate Training step trains models on NLP tasks that are not strictly related to the target task, aiming to verify whether the models are able to leverage the knowledge learned from such tasks. Furthermore, in order to better analyze the synergies between different categories of NLP tasks, the experiments were also extended to Multi-Task Training, in which the model is trained on multiple tasks at the same time.
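A conceptual PyTorch sketch of Intermediate Training: the same encoder is first trained on an intermediate task, then its weights are reused with a fresh head on the target task. The dimensions, data, and two-layer encoder are toy assumptions standing in for a full NLP model.

```python
# Intermediate Training sketch: shared encoder, sequential task heads.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # shared "backbone"

def train(head, X, y, steps=100):
    model = nn.Sequential(encoder, head)                 # encoder is reused
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

X_mid, y_mid = torch.randn(256, 64), torch.randint(0, 3, (256,))
X_tgt, y_tgt = torch.randn(64, 64), torch.randint(0, 2, (64,))

train(nn.Linear(32, 3), X_mid, y_mid)           # 1) intermediate task (3 classes)
final = train(nn.Linear(32, 2), X_tgt, y_tgt)   # 2) target task reuses encoder
print("target-task loss after intermediate training:", round(final, 4))
```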
Abstract:
With the growing use of wireless networks, security and reliability of service are becoming fundamental requirements to guarantee. This study aims at detecting a jammer attack and classifying the attack type (reactive, random, or periodic) in a wireless network in which users communicate with an access point via the random access slotted Aloha protocol. Classifying the attacks is in fact essential to take the appropriate countermeasures and avoid performance drops in the network. The extracted metrics, including the packet delivery ratio (PDR) and its spectral analysis, the mean signal-to-noise ratio, and the variance of the signal-to-noise ratio, proved effective for classifying the jammers. In this work, a machine learning-based jammer detection and classification system was implemented, achieving an overall classification accuracy of 92.5% and a detection probability above 95% for PDR values of 70% or lower.
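A toy sketch of the classification stage: a classifier over the link metrics the abstract names (PDR, mean SNR, SNR variance) separating no-jammer traffic from the three jammer types. The per-class feature distributions are synthetic assumptions, not measurements from the actual slotted Aloha network.

```python
# Feature-based jammer detection/classification sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

def make(pdr, snr_mu, snr_var, label, n=200):
    feats = np.column_stack([
        rng.normal(pdr, 0.05, n),     # packet delivery ratio
        rng.normal(snr_mu, 1.0, n),   # mean SNR (dB)
        rng.normal(snr_var, 0.5, n),  # SNR variance
    ])
    return feats, np.full(n, label)

parts = [make(0.95, 20, 1, "none"), make(0.40, 12, 6, "reactive"),
         make(0.60, 15, 4, "random"), make(0.55, 14, 2, "periodic")]
X = np.vstack([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print("classification accuracy:", round(clf.score(Xte, yte), 3))
```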