7 results for CHD Prediction, Blood Serum Data Chemometrics Methods
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The increase in resolution of numerical weather prediction models has allowed more and more realistic forecasts of atmospheric parameters. Due to the growing variability of the predicted fields, the traditional verification methods are not always able to describe the model's skill, because they are based on a grid-point-by-grid-point matching between observation and prediction. Recently, new spatial verification methods have been developed with the aim of showing the benefit associated with high-resolution forecasts. Within the framework of the MesoVICT international project, the initial aim of this work is to compare the new techniques, highlighting their advantages and disadvantages. First of all, the MesoVICT basic examples, represented by synthetic precipitation fields, have been examined. Since it provides an error evaluation in terms of structure, amplitude and location of the precipitation fields, the SAL method has been studied more thoroughly than the other approaches, and it has been implemented on the core cases of the project. The verification procedure concerned precipitation fields over central Europe: comparisons between the forecasts performed by the 00z COSMO-2 model and the VERA (Vienna Enhanced Resolution Analysis) have been carried out. The study of these cases has revealed some weaknesses of the methodology examined; in particular, a correlation between the optimal domain size and the extent of the precipitation systems has been highlighted. In order to increase the ability of SAL, the original domain has been subdivided into three subdomains and the method has been applied again. Some limits have been found in cases in which at least one of the two domains does not show precipitation. The overall results for the subdomains have been summarized in scatter plots. With the aim of identifying systematic errors of the model, the variability of the three parameters has been studied for each subdomain.
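As a minimal illustration of the kind of error evaluation SAL performs, the Python sketch below computes only the amplitude component (normalised difference of domain-mean precipitation) and the first part of the location component (normalised distance between centres of mass); the structure component, which requires object identification by thresholding, is omitted, and the synthetic fields and their values are purely illustrative assumptions, not data from the thesis.

```python
import numpy as np

def amplitude_component(forecast, observation):
    """A component of SAL: normalised difference of domain-mean precipitation.
    Ranges from -2 to +2; 0 means the domain-averaged amounts agree."""
    d_fc, d_obs = forecast.mean(), observation.mean()
    return (d_fc - d_obs) / (0.5 * (d_fc + d_obs))

def location_l1_component(forecast, observation):
    """First part of the L component: normalised distance between the
    centres of mass of the two precipitation fields."""
    def centre_of_mass(field):
        yy, xx = np.indices(field.shape)
        return np.array([(yy * field).sum(), (xx * field).sum()]) / field.sum()

    d_max = np.hypot(*forecast.shape)  # largest distance across the domain
    return np.linalg.norm(centre_of_mass(forecast)
                          - centre_of_mass(observation)) / d_max

# toy example: a forecast precipitation object shifted w.r.t. the observation
rng = np.random.default_rng(0)
obs = np.zeros((100, 100)); obs[30:50, 30:50] = rng.gamma(2.0, 1.0, (20, 20))
fc  = np.zeros((100, 100)); fc[40:60, 45:65] = rng.gamma(2.0, 1.2, (20, 20))
print(amplitude_component(fc, obs), location_l1_component(fc, obs))
```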
Abstract:
Recent developments in the field of artificial intelligence have enabled a more effective classification of the EEG signal. In recent years it has been shown that excellent classification performance can be obtained using Machine Learning (ML) and Deep Learning (DL) techniques, the latter relying on convolutional neural networks (CNNs). Deep Learning, in particular, requires large amounts of training data, whereas EEG datasets are often limited, which makes it difficult to reach high performance. Data Augmentation methods can alleviate this problem. Starting from real data, this technique allows the creation of artificial data that are essential for increasing the size of the original dataset. The most common application is to use Data Augmentation to enlarge the training set, so that the model/neural network is trained on a larger number of samples, reducing classification errors. Starting from this idea, Data Augmentation has been applied in many fields, and in particular to EEG signal classification. This thesis first describes Data Augmentation methods developed over the years that can also be used in EEG applications. It then presents some specific studies that apply Data Augmentation methods to improve the performance of EEG-based classifiers for sleep/wake state identification, emotion recognition, and motor imagery classification.
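As a small sketch of what Data Augmentation on EEG epochs can look like in practice, the snippet below implements two commonly used transformations (additive Gaussian noise and random time shifting). The array shapes, noise level and shift range are illustrative assumptions, not parameters taken from the studies reviewed in the thesis.

```python
import numpy as np

def augment_gaussian_noise(epochs, sigma=0.05, rng=None):
    """Create new EEG epochs by adding zero-mean Gaussian noise.
    epochs: array of shape (n_epochs, n_channels, n_samples)."""
    rng = np.random.default_rng() if rng is None else rng
    return epochs + rng.normal(0.0, sigma, size=epochs.shape)

def augment_time_shift(epochs, max_shift=25, rng=None):
    """Create new epochs by circularly shifting each one along the time axis."""
    rng = np.random.default_rng() if rng is None else rng
    shifts = rng.integers(-max_shift, max_shift + 1, size=len(epochs))
    return np.stack([np.roll(e, s, axis=-1) for e, s in zip(epochs, shifts)])

# toy usage: double a small training set of 32 epochs, 8 channels, 256 samples
X_train = np.random.randn(32, 8, 256)
X_aug = np.concatenate([X_train, augment_gaussian_noise(X_train)])
```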
Abstract:
This thesis work falls within the field of high-dimensional data classification, developing an algorithm based on the Discriminant Analysis method. The algorithm classifies the samples using the variables taken in pairs, building a network from the pairs whose performance is sufficiently high. It then exploits topological properties of networks (in particular the search for subnetworks and centrality measures of individual nodes) to obtain various signatures (subsets of the initial variables) with optimal classification performance and low dimensionality (of the order of 10^1, at least a factor of 10^3 lower than the number of starting variables in the problems considered). To do so, the algorithm comprises a network-definition part and a signature selection and reduction part, recomputing at each step the classification ability through cross-validation tests (k-fold or leave-one-out). Given the large number of variables involved in the problems considered, of the order of 10^4, the algorithm was necessarily implemented on a High-Performance Computer, parallelizing the most expensive parts of the C++ code, namely the actual computation of the discriminant and the final sorting of the results. The application studied here concerns high-throughput genetic data on gene expression at the cellular level, a field in which databases frequently consist of a very large number of variables (10^4-10^5) against a small number of samples (10^1-10^2). In the medical-clinical field, determining low-dimensional signatures for the discrimination and classification of samples (e.g. healthy/diseased, responder/not-responder, etc.) is a problem of fundamental importance, for example for developing personalized therapeutic strategies for specific subgroups of patients through diagnostic kits for expression-profile analysis applicable on a large scale. The analysis carried out in this thesis on various types of real data shows that the proposed method, also in comparison with other existing methods, whether network-based or not, provides excellent performance, producing signatures with high classification performance while keeping the number of variables used for this purpose very small.
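A minimal Python sketch of the network-definition idea follows: pairs of variables are scored with a cross-validated two-feature discriminant, pairs above a performance threshold become edges, and node centrality is used to extract a candidate low-dimensional signature. The threshold, the synthetic data and the use of degree centrality are illustrative assumptions; the actual thesis implementation is a parallel C++ code on HPC hardware.

```python
import itertools
import numpy as np
import networkx as nx
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def pairwise_lda_network(X, y, threshold=0.8, cv=5):
    """Graph whose nodes are variables and whose edges link pairs of
    variables that classify the samples well with a 2-feature LDA."""
    graph = nx.Graph()
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        score = cross_val_score(LinearDiscriminantAnalysis(),
                                X[:, [i, j]], y, cv=cv).mean()
        if score >= threshold:
            graph.add_edge(i, j, weight=score)
    return graph

# toy usage on a small synthetic problem (real problems have ~1e4 variables,
# hence the HPC/C++ implementation described in the abstract)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30)); y = rng.integers(0, 2, 60)
X[y == 1, :3] += 1.5                       # make three variables informative
g = pairwise_lda_network(X, y, threshold=0.75)
centrality = nx.degree_centrality(g)
signature = sorted(centrality, key=centrality.get, reverse=True)[:5]
```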
Abstract:
Vision systems are powerful tools that play an increasingly important role in modern industry, where they are used to detect errors and maintain product standards. With the wider availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, which is a common scenario in pharmaceutical active ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect any anomalies within execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to obtain the final performance. Transfer learning, which alleviates the need for a large amount of training data, combined with data augmentation methods consisting in the generation of synthetic images, was used to effectively increase the performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, designed respectively for vial counting and discrepancy detection. The first one was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second one.
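As a rough sketch of the transfer-learning-plus-augmentation recipe described above, the snippet below freezes an ImageNet-pretrained backbone and trains only a new classification head, with simple geometric and photometric augmentations standing in for synthetic-image generation. The framework (PyTorch/torchvision), the backbone (ResNet-18), the two-class head and all hyperparameters are assumptions for illustration; the thesis does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# assumed setup: ResNet-18 pretrained on ImageNet, re-headed for a binary
# "pack conforming / discrepancy" decision, with only the new head trained
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                            # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # new trainable head

# simple augmentation pipeline, a stand-in for synthetic image generation
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomAffine(degrees=5, translate=(0.05, 0.05)),
    transforms.ToTensor(),
])

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```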
Abstract:
Worldwide, biodiversity is decreasing due to climate change, habitat fragmentation and agricultural intensification. Bees are essential crop pollinators, but their abundance and diversity are decreasing as well. For their conservation it is necessary to assess the status of bee populations. Field data collection methods are expensive and time consuming; therefore, new methods based on remote sensing have recently come into use. In this study we tested the possibility of using flower cover diversity estimated from UAV images (FCD-UAV) to assess bee diversity and abundance in 10 agricultural meadows in the Netherlands. To do so, field data on flower and bee diversity and abundance were collected during a campaign in May 2021. Furthermore, RGB images of the areas were collected using an Unmanned Aerial Vehicle (UAV) and post-processed into orthomosaics. Lastly, the Random Forest machine learning algorithm was applied to estimate the FCD of the species detected in each field. The resulting FCD was expressed with the Shannon and Simpson diversity indices, which were subsequently correlated with bee Shannon and Simpson diversity indices, abundance and species richness. The results showed a positive relationship between FCD-UAV and the in-situ collected data on bee diversity (evaluated with the Shannon index), abundance and species richness. The strongest relationship was found between FCD (Shannon index) and bee abundance, with R2=0.52. Good correlations were also found with bee species richness (R2=0.39) and bee diversity (R2=0.37). The R2 values of the relationships between FCD (Simpson index) and bee abundance, species richness and diversity were slightly lower (0.45, 0.37 and 0.35, respectively). Our results suggest that the proposed method, based on the coupling of UAV imagery and machine learning for the assessment of flower species diversity, could be developed into a valuable tool for large-scale, standardized and cost-effective monitoring of flower cover and of habitat quality for bees.
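To make the diversity metrics concrete, the sketch below computes the Shannon and Simpson indices from per-field flower-cover fractions and the R2 of a simple linear fit against bee abundance. The cover fractions and abundance numbers are invented placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def shannon_index(p):
    """Shannon diversity H' = -sum(p_i * ln p_i) over cover fractions p_i > 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p[p > 0].sum()
    return -np.sum(p * np.log(p))

def simpson_index(p):
    """Simpson diversity 1 - sum(p_i^2) over cover fractions p_i."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

# hypothetical per-meadow flower-cover fractions (rows = meadows,
# columns = flower species mapped from the UAV orthomosaic)
cover = np.array([[0.6, 0.3, 0.1], [0.4, 0.4, 0.2], [0.8, 0.1, 0.1],
                  [0.3, 0.3, 0.4], [0.5, 0.25, 0.25]])
fcd_shannon = np.array([shannon_index(row) for row in cover]).reshape(-1, 1)

# hypothetical bee abundance per meadow; R2 of the simple linear fit
bee_abundance = np.array([14, 22, 9, 25, 18])
r2 = LinearRegression().fit(fcd_shannon, bee_abundance).score(fcd_shannon,
                                                              bee_abundance)
```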
Abstract:
Artificial Intelligence (AI) has substantially influenced numerous disciplines in recent years. Biology, chemistry, and bioinformatics are among them, with significant advances in protein structure prediction, paratope prediction, protein-protein interactions (PPIs), and antibody-antigen interactions. Understanding PPIs is critical since they are responsible for practically every process in living organisms and have several applications in vaccines, cancer, immunology, and inflammatory diseases. Machine Learning (ML) offers enormous potential for effectively simulating antibody-antigen interactions and improving the in-silico optimization of therapeutic antibodies for desired features, including binding activity, stability, and low immunogenicity. This research looks at the use of AI algorithms to better understand antibody-antigen interactions, and it further examines and explains several difficulties encountered in the field. Furthermore, we contribute by presenting a method that outperforms existing state-of-the-art strategies in paratope prediction from sequence data.
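To illustrate how sequence-based paratope prediction can be framed as a per-residue classification task, the toy sketch below one-hot encodes a sliding window of amino acids and fits a logistic regression that outputs a binding probability for each residue. The window size, the example sequence and its labels are invented for illustration; the thesis' actual model and its architecture are not described in the abstract and are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def window_features(sequence, window=5):
    """One-hot encode a symmetric window around each residue (zero-padded)."""
    pad = window // 2
    feats = []
    for i in range(len(sequence)):
        vec = np.zeros((window, len(AMINO_ACIDS)))
        for w in range(window):
            j = i - pad + w
            if 0 <= j < len(sequence):
                vec[w, AA_INDEX[sequence[j]]] = 1.0
        feats.append(vec.ravel())
    return np.array(feats)

# toy training pair: an antibody sequence fragment and per-residue binding
# labels (1 = paratope residue), both invented purely for illustration
seq = "EVQLVESGGGLVQPGGSLRLSCAAS"
labels = np.array([0] * 18 + [1] * 7)
clf = LogisticRegression(max_iter=1000).fit(window_features(seq), labels)
per_residue_prob = clf.predict_proba(window_features(seq))[:, 1]
```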
Abstract:
The aim of this thesis project is to automatically localize HCC tumors in the human liver and subsequently predict whether the tumor will undergo microvascular invasion (MVI), the initial stage of metastasis development. The input data for the work have been partially supplied by Sant'Orsola Hospital and partially downloaded from online medical databases. Two Unet models have been implemented for the automatic segmentation of the liver and of the HCC malignancies within it. The segmentation models have been evaluated with the Intersection-over-Union and the Dice Coefficient metrics. The outcomes obtained for the automatic liver segmentation are quite good (IOU = 0.82; DC = 0.35); the outcomes obtained for the automatic tumor segmentation (IOU = 0.35; DC = 0.46) are, instead, affected by some limitations: it can be stated that the algorithm is almost always able to detect the location of the tumor, but it tends to underestimate its dimensions. The purpose of this step is to obtain the CT images of the HCC tumors, which are necessary for feature extraction. The 14 Haralick features calculated from the 3D-GLCM, the 120 Radiomic features and the patients' clinical information are collected to build a dataset of 153 features. The goal is then to build a model able to discriminate, based on the given features, the tumors that will undergo MVI from those that will not. This task can be seen as a classification problem: each tumor needs to be classified either as "MVI positive" or "MVI negative". Techniques for feature selection are implemented to identify the most descriptive features for the problem at hand, and then a set of classification models are trained and compared. Among all, the models with the best performance (around 80-84% ± 8-15%) turn out to be the XGBoost Classifier, the SGD Classifier and the Logistic Regression models (without penalization and with Lasso, Ridge or Elastic Net penalization).
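For reference, the two segmentation metrics used above can be computed from binary masks as in the short sketch below; the toy masks, which place a smaller predicted region inside the true one, only mimic the size-underestimation behaviour reported for the tumor model and are not data from the thesis.

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

def dice(pred, target):
    """Dice coefficient: 2*|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2 * np.logical_and(pred, target).sum() / denom if denom else 1.0

# toy masks: an under-sized prediction inside the true tumor region
truth = np.zeros((64, 64), dtype=int); truth[20:40, 20:40] = 1
pred  = np.zeros((64, 64), dtype=int); pred[24:36, 24:36] = 1
print(iou(pred, truth), dice(pred, truth))
```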