816 results for type inference, machine learning, tipizzazione dinamica, dataset, python
Abstract:
This thesis discusses the main machine learning techniques for type inference in dynamically typed languages such as Python. In addition, a dataset of Python projects was created for training models capable of analyzing code.
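A minimal sketch of how such a training dataset could be assembled, using only Python's standard `ast` module to harvest type annotations from source code; the function name and output format here are illustrative, not the thesis's actual pipeline:

```python
import ast

def extract_annotations(source: str):
    """Collect (function, parameter, annotation) triples from Python source."""
    tree = ast.parse(source)
    rows = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for arg in node.args.args:
                if arg.annotation is not None:
                    rows.append((node.name, arg.arg, ast.unparse(arg.annotation)))
            if node.returns is not None:
                rows.append((node.name, "return", ast.unparse(node.returns)))
    return rows

# Tiny usage example: annotations become supervision for a type-inference model.
example = "def scale(x: float, factor: int = 2) -> float:\n    return x * factor\n"
for func, name, annotation in extract_annotations(example):
    print(func, name, annotation)
```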
Abstract:
Day by day, machine learning is changing our lives in ways we could not have imagined just 5 years ago. ML expertise is more and more requested and needed, yet only a limited number of ML engineers are available on the job market, and their knowledge is always bounded by an inherent characteristic of theirs: they are human. This thesis explores the possibilities offered by meta-learning, a new field in ML that takes learning one level higher: models are trained on metadata about other models' training, from the features of the datasets they were trained on to inference times and obtained performances, to try to understand the relationship between a good model and the way it was obtained. The so-called metamodel was trained on data collected from OpenML, the largest publicly available ML metadata platform today. Datasets were analyzed to obtain meta-features that describe them, which were then tied to model performances in a regression task. The obtained metamodel predicts the expected performance of a given model type (e.g., a random forest) on a given ML task (e.g., classification on the UCI census dataset). This research was then integrated into a custom-made AutoML framework, to show how meta-learning is not an end in itself but can be used to further our ML research. Encoding ML engineering expertise in a model allows better, faster, and more impactful ML applications across the whole world, while reducing the cost that is inevitably tied to human engineers.
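As a rough illustration of the regression step described above, the sketch below fits a "metamodel" on an invented meta-dataset; the meta-feature columns and target merely stand in for the OpenML-derived data and are not the thesis's actual schema:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic meta-dataset: one row per (dataset, model type) pair.
# Columns (all invented): n_rows, n_features, class_entropy, model_type_id.
X = rng.random((500, 4))
# Invented target: the accuracy that model achieved on that dataset.
y = 0.6 + 0.3 * X[:, 2] + 0.05 * rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "metamodel": predicts expected performance from meta-features.
metamodel = RandomForestRegressor(n_estimators=200, random_state=0)
metamodel.fit(X_train, y_train)
print("R^2 on held-out pairs:", metamodel.score(X_test, y_test))
```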
Abstract:
Within CMS, a Data Analytics project has been launched and, within it, a specific pilot activity that aims to exploit Machine Learning techniques to predict the popularity of CMS datasets. This is a very delicate observable: being able to predict it would allow CMS to build smarter data placement models, enable broad optimizations of storage usage at all Tier levels, and form the basis for introducing a solid, dynamic, and adaptive data management system. This thesis describes the work done with a new pilot prototype called DCAFPilot, written entirely in Python, to tackle this challenge.
Abstract:
In emergency situations, where time for blood transfusion is reduced, the O negative blood type (the universal donor) is administered. However, sometimes even the universal donor can cause transfusion reactions that can be fatal to the patient. As commercial systems do not provide fast results and are not suitable for emergency situations, this paper presents the steps taken to develop and validate a prototype able to determine blood type compatibilities even in emergency situations. The developed system thus makes it possible to administer a compatible blood type from the first transfused blood unit onward. To increase the system's reliability, the prototype uses two different approaches to classify blood types, the first based on Decision Trees and the second on Support Vector Machines. The features used to evaluate these classifiers are standard deviation values, the histogram, the Histogram of Oriented Gradients, and the fast Fourier transform, computed on different regions of interest. The main characteristics of the presented prototype are small size, light weight, easy transportation, ease of use, fast results, high reliability, and low cost. These features are perfectly suited to the emergency scenarios in which the prototype is expected to be used.
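Purely as an illustration of the described feature set and classifiers, the following sketch computes standard deviation, histogram, HOG, and FFT features on synthetic region-of-interest images and cross-validates a Decision Tree and an SVM; the data and parameter choices are invented, not the paper's:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def roi_features(image: np.ndarray) -> np.ndarray:
    """Std dev, histogram, HOG, and FFT magnitude features for one ROI."""
    histogram, _ = np.histogram(image, bins=16, range=(0.0, 1.0))
    fft_mag = np.abs(np.fft.fft2(image))
    hog_vec = hog(image, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(1, 1))
    return np.concatenate([[image.std()], histogram,
                           fft_mag.ravel()[:32], hog_vec])

# Synthetic stand-ins for ROI images of agglutination test wells.
images = rng.random((60, 32, 32))
labels = rng.integers(0, 2, size=60)  # e.g. agglutination present or not
X = np.array([roi_features(img) for img in images])

for clf in (DecisionTreeClassifier(random_state=0), SVC()):
    scores = cross_val_score(clf, X, labels, cv=5)
    print(type(clf).__name__, scores.mean().round(3))
```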
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Recognizing road surface conditions solely from data collected by the smartphone of a cyclist riding their bicycle is a research area that has so far been little explored. For this thesis, a dedicated application was developed which, combined with Python scripts, makes it possible to recognize different types of asphalt. The application collects data from the motion sensors built into the smartphone, which records movements while the cyclist is riding. The smartphone is mounted in a holder fixed to the bicycle's handlebars and records data from the gyroscope, accelerometer, and magnetometer. The data are stored in CSV files, which are processed into a single dataset containing all the collected data together with features extracted by dedicated Python scripts. Each record is assigned a cluster determined by the results of K-means; these results are then used to train supervised algorithms whose goal is to recognize the road surface type from the data. For training, the dataset was split into two parts: a training set from which the algorithms learn to classify the data, and a test set on which the algorithms apply what they have learned and output the classification they consider appropriate. Comparing the algorithms' predictions with what the data actually represent yields a measure of each algorithm's accuracy.
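A minimal sketch of the cluster-then-classify scheme described above, assuming synthetic stand-ins for the extracted sensor features (the real features come from the CSV recordings):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-window features extracted from
# gyroscope/accelerometer/magnetometer recordings.
X = rng.standard_normal((1000, 12))

# Unsupervised step: K-means assigns each record a cluster, used as label.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
y = kmeans.fit_predict(X)

# Supervised step: learn to reproduce the cluster labels on unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```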
Abstract:
The Three-Dimensional Single-Bin-Size Bin Packing Problem is one of the most studied problems in the Cutting & Packing category. From a strictly mathematical point of view, it consists of packing a finite set of strongly heterogeneous "small" boxes, called items, into a finite set of identical "large" rectangles, called bins, minimizing the unused volume and requiring that the items are packed without overlapping. The great interest is mainly due to the number of real-world applications in which it arises, such as pallet and container loading, cutting objects out of a piece of material, and packaging design. Depending on the application, additional objective functions and practical constraints may be needed. After a brief discussion of the real-world applications of the problem and an exhaustive literature review, the design of a two-stage algorithm to solve the aforementioned problem is presented. The algorithm must be able to provide the spatial coordinates of the placed boxes' vertices as well as the optimal box input sequence, while guaranteeing geometric, stability, and fragility constraints and a reduced computational time. Due to the NP-hard complexity of this type of combinatorial problem, a fusion of metaheuristic and machine learning techniques is adopted; in particular, a hybrid genetic algorithm coupled with a feedforward neural network is used. In the first stage, a rich dataset is created starting from a set of real input instances provided by an industrial company, and the feedforward neural network is trained on it. After training, given a new input instance, the hybrid genetic algorithm runs using the neural network output as its input parameter vector and provides the optimal solution as output. The effectiveness of the proposed approach is confirmed by several experimental tests.
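A heavily simplified sketch of the two-stage idea, not the thesis's algorithm: a small feedforward network maps invented instance features to a GA parameter vector, and a toy GA then searches over item orderings with first-fit placement on volumes alone (a 1-D simplification of the real 3-D packing); every name and parameter here is illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def first_fit_bins(volumes, bin_capacity=1.0):
    """Toy placement: count bins used by first-fit on volumes alone."""
    bins = []
    for v in volumes:
        for i, load in enumerate(bins):
            if load + v <= bin_capacity:
                bins[i] += v
                break
        else:
            bins.append(v)
    return len(bins)

def toy_ga(volumes, pop_size, mutation_rate, generations=50):
    """Minimal GA over item orderings, minimizing the number of bins."""
    n = len(volumes)
    pop = [rng.permutation(n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda perm: first_fit_bins(volumes[perm]))
        survivors = pop[: pop_size // 2]
        children = []
        for perm in survivors:
            child = perm.copy()
            if rng.random() < mutation_rate:  # swap mutation
                i, j = rng.integers(0, n, size=2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return first_fit_bins(volumes[pop[0]])

# Stage 1: train a feedforward net mapping instance features to GA
# parameters (here: population size and mutation rate; data is synthetic).
features = rng.random((200, 3))  # e.g. item count, mean/var of item volume
good_params = np.column_stack([20 + 30 * features[:, 0],     # population size
                               0.1 + 0.5 * features[:, 1]])  # mutation rate
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(features, good_params)

# Stage 2: for a new instance, let the net suggest the GA parameter vector.
volumes = rng.uniform(0.1, 0.6, size=30)
pop_size, mut = net.predict([[30 / 100, volumes.mean(), volumes.var()]])[0]
print("bins used:", toy_ga(volumes, int(pop_size), float(mut)))
```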
Abstract:
Machine Learning is proving to be a technology with incredible potential across the most diverse sectors. Its techniques and algorithms enable far more effective data analysis than in the past. The insurance industry is also experimenting with Machine Learning solutions, and several directions of innovation are emerging as a result, from making internal processes more efficient to offering products that adapt to customer needs. This thesis work was carried out during an internship at Unisalute S.p.A., the leading health insurance company in Italy. The critical issue identified was the overestimation of the capital to be set aside as a reserve against obligations to policyholders: this immobilized capital takes resources away from more profitable medium- and long-term investments, so estimating it appropriately is valuable. Within Unisalute's IT department, I worked on the design and implementation of a Machine Learning model that predicts whether a newly opened claim will be settled or not. Providing the offices responsible for determining the reserve with this additional data-driven estimate would be of considerable support. The design of the Machine Learning model was organized as a data pipeline incorporating the most effective methodologies for data preprocessing and modeling. The implementation used Python as the programming language; the dataset, obtained through extractions and integrations from several Oracle databases, has a cardinality of over 4 million instances characterized by 32 variables. After hyperparameter tuning and various training runs, an accuracy of 86% was reached, which in this domain is considered more than satisfactory, and previously unknown factors contributing to claim settlement emerged.
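As an illustrative sketch only (the real pipeline, features, and model are Unisalute's and not public), a settlement-prediction pipeline with preprocessing and hyperparameter tuning might be wired together like this in scikit-learn; all column names and the target are invented:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the claims extraction (the real data has
# over 4 million rows and 32 variables from several Oracle databases).
df = pd.DataFrame({
    "claim_amount": rng.gamma(2.0, 500.0, size=2000),
    "insured_age": rng.integers(18, 90, size=2000),
    "care_type": rng.choice(["dental", "hospital", "checkup"], size=2000),
})
settled = (df["claim_amount"] < 1200).astype(int)  # invented binary target

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["claim_amount", "insured_age"]),
    ("cat", OneHotEncoder(), ["care_type"]),
])
pipeline = Pipeline([("prep", preprocess),
                     ("clf", GradientBoostingClassifier(random_state=0))])

# Hyperparameter tuning over a small illustrative grid.
search = GridSearchCV(pipeline, {"clf__n_estimators": [50, 100],
                                 "clf__max_depth": [2, 3]}, cv=3)
search.fit(df, settled)
print("best CV accuracy:", round(search.best_score_, 3))
```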
Abstract:
The goal of my thesis project is to build a model capable of predicting the rating of the applications available on the Play Store, one of the largest Android digital distribution services. For this purpose I used the Python language, which thanks to its libraries, simplicity, and versatility is certainly one of the most widely used languages in the field of artificial intelligence. The starting point of my study was the "Google Play Store Apps" dataset (a set of data structured in relational form), available on Kaggle at https://www.kaggle.com/datasets/lava18/google-play-store-apps and containing 10,841 observations and 13 attributes. After an initial phase of loading, visualizing, and preparing the data, I applied four different Machine Learning techniques to estimate application ratings: Ridge, Linear Regression, Random Forest, and SVR. These algorithms were applied using two different transformations (Label Encoding and One-Hot Encoding) of the 'Category' variable, in order to analyze how these transformations affect the quality of the model. I then compared the mean squared error (MSE), the mean absolute error (MAE), and the median absolute error (MdAE) in order to understand which algorithm is the most effective.
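A compact sketch of the described comparison, with a synthetic stand-in for the Kaggle data; it contrasts Label Encoding and One-Hot Encoding of a 'Category' column across the four regressors and reports MSE, MAE, and MdAE:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             median_absolute_error)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the apps data: a categorical 'Category' column
# plus one numeric feature, with a rating target.
categories = rng.choice(["GAME", "TOOLS", "FAMILY"], size=800)
reviews = rng.random(800)
rating = (3.5 + 0.5 * reviews + 0.2 * (categories == "GAME")
          + 0.1 * rng.standard_normal(800))

def encode(cats, scheme):
    if scheme == "label":                        # Label Encoding: one integer column
        return pd.factorize(cats)[0].reshape(-1, 1)
    return pd.get_dummies(cats).to_numpy()       # One-Hot Encoding

models = {"Ridge": Ridge(), "Linear": LinearRegression(),
          "RandomForest": RandomForestRegressor(random_state=0),
          "SVR": SVR()}

for scheme in ("label", "onehot"):
    X = np.column_stack([encode(categories, scheme), reviews])
    X_tr, X_te, y_tr, y_te = train_test_split(X, rating, random_state=0)
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(scheme, name,
              round(mean_squared_error(y_te, pred), 4),    # MSE
              round(mean_absolute_error(y_te, pred), 4),   # MAE
              round(median_absolute_error(y_te, pred), 4)) # MdAE
```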
Abstract:
Introduction: A major focus of the data mining process, especially in machine learning research, is to automatically learn to recognize complex patterns and help make adequate decisions based strictly on the acquired data. Since imaging techniques like MPI (Myocardial Perfusion Imaging in Nuclear Cardiology) can occupy a large part of the daily workflow and generate gigabytes of data, computerized analysis of the data could offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology for MPI stress studies and for the decision process concerning whether or not to continue the evaluation of each patient. One objective was to automatically classify a patient's test into one of three groups: "Positive", "Negative", and "Indeterminate". "Positive" tests would proceed directly to the rest part of the exam, "Negative" tests would be directly exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thereby saving clinicians' effort, increasing workflow fluidity at the technologists' level, and probably saving patients' time. Methods: The WEKA v3.6.2 open-source software was used to perform a comparative analysis of three WEKA algorithms ("OneR", "J48", and "Naïve Bayes") in a retrospective study on the "SPECT Heart Dataset", available at the University of California, Irvine Machine Learning Repository, using the corresponding clinical results, signed by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances", and "Receiver Operating Characteristic (ROC) Areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained from MPI, namely after stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both technologists and nuclear cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve system accuracy.
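The study used WEKA's Java implementations; purely as an analogy, a similar three-classifier comparison can be sketched with scikit-learn, approximating OneR with a depth-1 decision tree and J48 with a generic decision tree, on synthetic data rather than the actual SPECT Heart set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic binary features standing in for the SPECT partial scores.
X = rng.integers(0, 2, size=(267, 22)).astype(float)
y = (X[:, :4].sum(axis=1) + rng.random(267) > 2.5).astype(int)

classifiers = {
    "OneR-like": DecisionTreeClassifier(max_depth=1),  # single-rule proxy
    "J48-like": DecisionTreeClassifier(),              # C4.5-style tree proxy
    "NaiveBayes": GaussianNB(),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(name, "mean ROC AUC:", auc.mean().round(3))
```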
Abstract:
Propolis is a chemically complex biomass produced by honeybees (Apis mellifera) from plant resins with the addition of salivary enzymes, beeswax, and pollen. The biological activities described for propolis have also been identified in the resin of the donor plants, but a major challenge for standardizing the chemical composition and biological effects of propolis remains a better understanding of the influence of seasonality on the chemical constituents of this raw material. Since propolis quality depends, among other variables, on the local flora, which is strongly influenced by (a)biotic factors over the seasons, unraveling the effect of the harvest season on the propolis chemical profile is an issue of recognized importance. Fast, cheap, and robust analytical techniques seem to be the best choice for large-scale quality control in the most demanding markets, e.g., human health applications. To that end, UV-Visible (UV-Vis) scanning spectrophotometry of hydroalcoholic extracts (HE) of seventy-three propolis samples, collected over the seasons in 2014 (summer, spring, autumn, and winter) and 2015 (summer and autumn) in Southern Brazil, was adopted. Machine learning and chemometric techniques were then applied to the UV-Vis dataset to gain insight into the effect of seasonality on the claimed chemical heterogeneity of propolis samples, determined by changes in the flora of the geographic region under study. Descriptive and classification models were built following a chemometric approach, i.e., principal component analysis (PCA) and hierarchical clustering analysis (HCA), supported by scripts written in the R language. The UV-Vis profiles, combined with chemometric analysis, made it possible to identify a typical pattern in propolis samples collected in the summer. Importantly, the PCA-based discrimination could be improved by using the dataset restricted to the fingerprint region of phenolic compounds (λ = 280-400 nm), suggesting that besides their biological activities, those secondary metabolites also play a relevant role in the discrimination and classification of this complex matrix through bioinformatics tools. Finally, a series of machine learning approaches, e.g., partial least squares discriminant analysis (PLS-DA), k-Nearest Neighbors (kNN), and Decision Trees, proved complementary to PCA and HCA, yielding relevant information for sample discrimination.
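The analysis itself was done with R scripts; as a language-agnostic illustration of the band-restricted PCA and HCA steps, a Python sketch on synthetic spectra might look like this (the wavelength grid and cluster count are invented):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic stand-in for 73 UV-Vis spectra sampled every 1 nm from
# 200 to 600 nm; real spectra would come from the hydroalcoholic extracts.
wavelengths = np.arange(200, 601)
spectra = rng.random((73, wavelengths.size))

# Restrict to the phenolic fingerprint region (280-400 nm) before PCA.
band = (wavelengths >= 280) & (wavelengths <= 400)
scores = PCA(n_components=2).fit_transform(spectra[:, band])
print("PC1/PC2 score matrix shape:", scores.shape)

# HCA on the same band, cutting the dendrogram into four season clusters.
clusters = fcluster(linkage(spectra[:, band], method="ward"),
                    t=4, criterion="maxclust")
print("cluster sizes:", np.bincount(clusters)[1:])
```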
Abstract:
The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
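As a small illustration of the ordinary kriging step (the library choice and all data here are assumptions, not the paper's setup), the pykrige package can interpolate gauge observations onto a grid:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)

# Synthetic rain gauges: coordinates in km, accumulation in mm.
x = rng.uniform(0.0, 100.0, size=80)
y = rng.uniform(0.0, 100.0, size=80)
z = 20.0 + 0.1 * x + 5.0 * rng.standard_normal(80)

# Ordinary kriging onto a regular grid, as in the first case study.
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx = np.linspace(0.0, 100.0, 50)
gridy = np.linspace(0.0, 100.0, 50)
z_pred, z_var = ok.execute("grid", gridx, gridy)
print("predicted field shape:", z_pred.shape)
```

Kriging with external drift, as used for the radar Doppler imagery, would add the drift field as an auxiliary variable; the sketch above covers only the plain ordinary kriging case.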
Abstract:
Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest neighbour methods (NN), which are known to have limitations when dealing with high dimensional data. We apply Support Vector Machines to a dataset from Lochaber, Scotland to assess their applicability in avalanche forecasting. Support Vector Machines (SVMs) belong to a family of theoretically based techniques from machine learning and are designed to deal with high dimensional data. Initial experiments showed that SVMs gave results which were comparable with NN for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to deal with high dimensionality in producing a spatial forecast show promise, but require further work.
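A minimal sketch of how an SVM can produce both the categorical and the probabilistic forecasts mentioned above, on invented data rather than the Lochaber set; `probability=True` adds Platt scaling on top of the SVM decision function:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for daily snowpack/weather observations.
X = rng.standard_normal((400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# probability=True enables Platt scaling, giving probabilistic forecasts
# alongside the categorical avalanche / no-avalanche decision.
svm = SVC(probability=True).fit(X_tr, y_tr)
print("categorical:", svm.predict(X_te[:5]))
print("probabilistic:", svm.predict_proba(X_te[:5])[:, 1].round(2))
```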
Abstract:
Mobile malware is increasing with the growing number of mobile users. Mobile malware can perform several operations that lead to cybersecurity threats, such as stealing financial or personal information, installing malicious applications, sending premium SMS, creating backdoors, keylogging, and crypto-ransomware attacks. Although many illegitimate applications are available on the app stores, most mobile users remain careless about the security of their mobile devices and become potential victims of these threats. Previous studies have shown that not every antivirus is capable of detecting all threats, because mobile malware uses advanced techniques to avoid detection. A network-based IDS on the operator side would bring an extra layer of security to subscribers and can detect many advanced threats by analyzing their traffic patterns. Machine Learning (ML) gives these systems the ability to detect unknown threats for which signatures are not yet known. This research focuses on the evaluation of machine learning classifiers in network-based intrusion detection systems for mobile networks. In this study, different techniques of network-based intrusion detection are discussed, along with their advantages, disadvantages, and the state of the art in hybrid solutions. Finally, an ML-based NIDS is proposed that works as a subsystem of the network-based IDS deployed by mobile operators and can help detect unknown threats and reduce false positives. In this research, several ML classifiers were implemented and evaluated. The study focuses on Android-based malware, as Android is the most popular OS among users and hence the most targeted by cybercriminals. Supervised ML classifiers were built using a dataset containing labeled instances of relevant features, extracted from traffic generated by samples of several malware families and benign applications. These classifiers were able to detect malicious traffic patterns with a TPR of up to 99.6% in cross-validation tests. Several experiments were also conducted to detect unknown malware traffic and to measure false positives; the classifiers detected unknown threats with an accuracy of 97.5%. These classifiers could be integrated with current NIDSs, which use signature-based, statistical, or knowledge-based techniques to detect malicious traffic. A technique for integrating the output of the ML classifier with a traditional NIDS is discussed and proposed for future work.
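As a sketch of the evaluation described above (features, dataset, and model choice are invented stand-ins for the malware traffic data), the cross-validated TPR of a supervised classifier can be measured like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer, recall_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-flow traffic features (packet sizes,
# inter-arrival times, byte counts); 1 = malicious, 0 = benign.
X = rng.random((2000, 15))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(2000) > 1.0).astype(int)

# TPR (recall on the malicious class) under cross-validation, mirroring
# the evaluation described above.
tpr = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                      cv=5, scoring=make_scorer(recall_score))
print("mean cross-validated TPR:", tpr.mean().round(3))
```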
Abstract:
Developing successful navigation and mapping strategies is an essential part of autonomous robot research. However, hardware limitations often make for inaccurate systems. This project investigates efficient alternatives for mapping an environment by first creating a mobile robot and then applying machine learning to the robot and its controlling systems to increase the robustness of the robot system. My mapping system consists of a semi-autonomous robot drone in communication with a stationary Linux computer system, with learning systems running on both the robot and the more powerful Linux system. The first stage of this project was devoted to designing and building an inexpensive robot. Drawing on my prior experience from independent studies in robotics, I designed a small mobile robot well suited to simple navigation and mapping research. Once the major components of the robot base were designed, I began to implement the design, physically constructing the base of the robot as well as researching and acquiring components such as sensors. Implementing the more complex sensors became a time-consuming task, involving much research and assistance from a variety of sources. A concurrent stage of the project involved researching and experimenting with different types of machine learning systems; I finally settled on neural networks as the machine learning system to incorporate into the project. Neural nets can be thought of as a structure of interconnected nodes through which information filters. The type of neural net I chose requires a known set of training data that teaches the net to produce the desired output. Neural nets are particularly well suited for use with robotic systems, as they can handle cases that lie at the extreme edges of the training set, such as those produced by "noisy" sensor data. Through experimenting with available neural net code, I became familiar with the code and its function, and modified it to be more generic and reusable across multiple applications of neural nets.
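A minimal sketch of the kind of supervised neural net described above, trained on an invented "noisy sensor" dataset; the original project used custom neural net code rather than scikit-learn:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "noisy sensor" readings: three range sensors, and a label
# saying whether the path ahead is clear (invented training set).
X = rng.uniform(0.0, 1.0, size=(500, 3)) + 0.05 * rng.standard_normal((500, 3))
y = (X.min(axis=1) > 0.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small feedforward net trained on the known examples, as described.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("held-out accuracy:", net.score(X_te, y_te))
```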