818 results for Machine learning, Keras, Tensorflow, Data parallelism, Model parallelism, Container, Docker
Abstract:
The purpose of this work-in-progress study was to test the concept of recognising plants using images acquired by image sensors in a controlled, noise-free environment. The presence of vegetation on railway trackbeds and embankments presents potential problems. Woody plants (e.g. Scots pine, Norway spruce and birch) often establish themselves on railway trackbeds. This may cause problems because legal herbicides are not effective in controlling them; this is particularly the case for conifers. Thus, if maintenance administrators knew the spatial position of plants along the railway system, it might be feasible to harvest them mechanically. Primary data comprising around 700 leaves and conifer seedlings from 11 species were collected outdoors and then photographed in a laboratory environment. To classify the species in the acquired image set, a machine learning approach known as Bag-of-Features (BoF) was chosen. Irrespective of the chosen type of feature extraction and classifier, the ability to classify a previously unseen plant correctly was greater than 85%. The maintenance planning of vegetation control could be improved if plants were recognised and localised: woody plants in particular could then be harvested mechanically, and listed endangered species growing on the trackbeds could be avoided. Both cases are likely to reduce the amount of herbicides used, which is often in the public interest. Bearing in mind that natural objects like plants are often more heterogeneous within their own class than outside it, the results do present a stable classification performance, which is a sound prerequisite for the next step of including a natural background. Where relevant, species can also be listed under the Endangered Species Act.
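As a rough illustration of the Bag-of-Features pipeline named in this abstract (not the code used in the study; the patch-based descriptors, vocabulary size and synthetic data below are placeholders), a minimal scikit-learn sketch could look like this: local descriptors are extracted per image, quantised against a k-means "visual vocabulary", and the resulting histograms are fed to an SVM.

```python
# Minimal Bag-of-Features sketch: descriptors -> k-means vocabulary -> histograms -> SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def dense_patches(image, size=8, stride=8):
    """Extract flattened grayscale patches as simple local descriptors."""
    h, w = image.shape
    return np.array([image[y:y + size, x:x + size].ravel()
                     for y in range(0, h - size + 1, stride)
                     for x in range(0, w - size + 1, stride)])

def bof_histogram(descriptors, vocabulary):
    """Quantise descriptors against the vocabulary and return a normalised histogram."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()

# Synthetic stand-in for the photographed leaf/seedling images (11 species).
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(60)]
labels = rng.integers(0, 11, size=60)

all_desc = np.vstack([dense_patches(img) for img in images])
vocab = KMeans(n_clusters=50, n_init=10, random_state=0).fit(all_desc)

X = np.array([bof_histogram(dense_patches(img), vocab) for img in images])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
```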
Abstract:
Interactions with mobile devices normally happen in an explicit manner, meaning that they are initiated by the users. Yet users are typically unaware that they also interact implicitly with their devices. For instance, our hand pose changes naturally when we type text messages. While the touchscreen captures finger touches, the hand movements during this interaction go unused. If this implicit hand movement is observed, it can be used as additional information to support or enhance the user's text entry experience. This thesis investigates how implicit sensing can be used to improve the quality of existing, standard interaction techniques. In particular, it looks into enhancing front-of-device interaction through implicit sensing of back-of-device and hand movement, investigated through machine learning techniques. We look into how sensor data obtained via implicit sensing can be used to predict certain aspects of an interaction. For instance, one of the questions this thesis attempts to answer is whether hand movement during a touch targeting task correlates with the touch position. This is a complex relationship to understand, but it can be explained well through machine learning. Using machine learning as a tool, such correlation can be measured, quantified, understood and used to make predictions about future touch positions. Furthermore, this thesis also evaluates the predictive power of the sensor data. We show this through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that data from implicit sensing during general mobile interactions is user-specific and can be used to identify users implicitly. In Chapter 6, we also show that touch interaction errors can be detected from sensor data: in our experiment there are sufficiently distinguishable patterns between normal interaction signals and signals strongly correlated with interaction errors. In all studies, we show that a performance gain can be achieved by combining sensor inputs.
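As a purely illustrative sketch of the Chapter 5 idea (not the thesis code; the feature layout, the synthetic data and the choice of a Gaussian process are assumptions), a probabilistic mapping from implicit hand-movement features to the 2D touch location could be prototyped as follows; the predictive mean and standard deviation describe the "general area" of a future touch.

```python
# Probabilistic regression from hypothetical motion features to touch coordinates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
sensor_features = rng.normal(size=(200, 6))   # hypothetical per-touch accelerometer/gyro summaries
touch_xy = sensor_features[:, :2] @ rng.normal(size=(2, 2)) + 0.1 * rng.normal(size=(200, 2))

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(sensor_features[:150], touch_xy[:150])

# Predictive mean and uncertainty for unseen interactions.
mean, std = gp.predict(sensor_features[150:], return_std=True)
```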
Abstract:
Reinforcement learning is a particular paradigm of machine learning that has recently proved, time and time again, to be a very effective and powerful approach. Cryptography, on the other hand, usually takes the opposite direction: while machine learning aims at analyzing data, cryptography aims at maintaining its privacy by hiding such data. However, the two techniques can be used jointly to create privacy-preserving models, able to make inferences on the data without leaking sensitive information. Despite the numerous studies performed on machine learning and cryptography, reinforcement learning in particular had never been applied to such cases before. Being able to successfully use reinforcement learning in an encrypted scenario would allow us to create an agent that efficiently controls a system without providing it with full knowledge of the environment it is operating in, leading the way to many possible use cases. Therefore, we decided to apply the reinforcement learning paradigm to encrypted data. In this project we applied one of the most well-known reinforcement learning algorithms, Deep Q-Learning, to simple simulated environments and studied how the encryption affects the training performance of the agent, in order to see whether it is still able to learn how to behave even when the input data is no longer readable by humans. The results of this work highlight that the agent is still able to learn without issues in small state spaces with non-secure encryption, like AES in ECB mode. For fixed environments, it is also able to reach a suboptimal solution even in the presence of secure modes, like AES in CBC mode, showing a significant improvement with respect to a random agent; however, its ability to generalize in stochastic environments or large state spaces suffers greatly.
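A hedged sketch of the core idea (not the thesis code; it assumes the pycryptodome library, and the state layout, key and padding are illustrative): the agent never sees the plain state, because each observation is AES-encrypted before being turned into the Q-network input. ECB is deterministic, so identical states map to identical ciphertexts, while CBC with a random IV does not, which is consistent with the difference in learnability reported above.

```python
# Encrypt a small discrete state and expose it as a numeric observation vector.
import numpy as np
from Crypto.Cipher import AES  # pycryptodome

KEY = b"0123456789abcdef"      # 16-byte key, fixed for the whole experiment

def encrypt_state(state, mode="ECB"):
    """Serialise the state, encrypt it, and return a float vector for the Q-network."""
    raw = np.asarray(state, dtype=np.uint8).tobytes().ljust(16, b"\0")
    if mode == "ECB":                      # same state -> same ciphertext
        ct = AES.new(KEY, AES.MODE_ECB).encrypt(raw)
    else:                                  # CBC: random IV, same state -> different ciphertexts
        cipher = AES.new(KEY, AES.MODE_CBC)
        ct = cipher.iv + cipher.encrypt(raw)
    return np.frombuffer(ct, dtype=np.uint8).astype(np.float32) / 255.0

obs = encrypt_state([3, 1], mode="ECB")    # this vector is what the DQN would consume
```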
Abstract:
The final goal of the thesis should be a real-world application in the production test data environment. This includes pre-processing the data, building models and visualizing the results. To this end, different machine learning models oriented towards outlier prediction should be investigated using a real dataset. Finally, the different outlier prediction algorithms should be compared and their performance discussed.
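As an illustrative starting point only (the thesis dataset and the models it ultimately investigates are not described here), two common outlier-prediction approaches can be compared on tabular, production-test-style data with scikit-learn; the data below is synthetic.

```python
# Compare IsolationForest and Local Outlier Factor on synthetic tabular data.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, size=(980, 8)),     # nominal measurements
               rng.normal(6, 1, size=(20, 8))])     # injected outliers

iso = IsolationForest(contamination=0.02, random_state=0).fit(X)
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.02)

iso_flags = iso.predict(X) == -1                    # -1 marks predicted outliers
lof_flags = lof.fit_predict(X) == -1
print("IsolationForest outliers:", iso_flags.sum(), "| LOF outliers:", lof_flags.sum())
```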
Abstract:
Besides increasing the share of electric and hybrid vehicles, in order to comply with more stringent environmental protection limits, in the mid-term the auto industry must improve the efficiency of the internal combustion engine and the well-to-wheel efficiency of the employed fuel. To achieve this target, a deeper knowledge of the phenomena that influence mixture formation and the chemical reactions involving new synthetic fuel components is mandatory, but complex and time-intensive to obtain purely by experimentation. Therefore, numerical simulations play an important role in this development process, but their use can be effective only if they are accurate enough to capture these variations. The most relevant models necessary for simulating reacting mixture formation and the subsequent chemical reactions have been investigated in the present work with a critical approach, in order to provide instruments for defining the most suitable approaches also in the industrial context, which is limited by time constraints and budget considerations. To overcome these limitations, new methodologies have been developed to combine detailed and simplified modelling techniques for the phenomena involving chemical reactions and mixture formation in non-traditional conditions (e.g. water injection, biofuels, etc.). Thanks to the extensive use of machine learning and deep learning algorithms, several applications have been revised or implemented, with the target of reducing the computing time of some traditional tasks by orders of magnitude. Finally, a complete workflow leveraging these new models has been defined and used to evaluate the effects of different surrogate formulations of the same experimental fuel on a proof-of-concept GDI engine model.
Abstract:
In the framework of industrial problems, Constrained Optimization is known to have overall very good modeling capability and performance, and stands as one of the most powerful, explored, and exploited tools to address prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, scheduling, and much more. The main reason behind this success lies in the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods to solve the largest variety of constrained or combinatorial optimization problems, together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a wealth of data never seen before and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like Image Recognition, Natural Language Processing and game playing, as well as the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to build systems able to leverage the strengths of both methods: this would open the way to exploiting decades of research on resolution techniques for COPs while constructing models able to adapt to and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects, carried out in collaboration with Optit, are presented.
Abstract:
Machine (and deep) learning technologies are more and more present in several fields. It is undeniable that many aspects of our society are empowered by such technologies: web searches, content filtering on social networks, recommendations on e-commerce websites, mobile applications, etc., in addition to academic research. Moreover, mobile devices and internet sites, e.g. social networks, support the collection and sharing of information in real time. The pervasive deployment of these technological instruments, both hardware and software, has led to the production of huge amounts of data. Such data has become more and more unmanageable, posing challenges to conventional computing platforms and paving the way for the development and widespread use of machine and deep learning. Nevertheless, machine learning is not only a technology. Given a task, machine learning is a way of proceeding (a way of thinking), and as such it can be approached from different perspectives (points of view). This, in particular, is the focus of this research. The entire work concentrates on machine learning, starting from different sources of data, e.g. signals and images, applied to different domains, e.g. Sport Science and Social History, and analyzed from different perspectives: from a non-data-scientist point of view, through tools and platforms; setting up a problem from scratch; implementing an effective application for classification tasks; and improving the user interface experience through Data Visualization and eXtended Reality. In essence, not only in a quantitative task, not only in a scientific environment, and not only from a data scientist's perspective, machine (and deep) learning can make the difference.
Abstract:
The rapid progression of biomedical research, coupled with the explosion of scientific literature, has generated an exigent need for efficient and reliable systems of knowledge extraction. This dissertation contends with this challenge through a concentrated investigation of digital health and Artificial Intelligence, and specifically of the potential of Machine Learning and Natural Language Processing (NLP) to expedite systematic literature reviews and refine the knowledge extraction process. The surge of COVID-19 complicated the efforts of scientists, policymakers, and medical professionals in identifying pertinent articles and assessing their scientific validity. This thesis presents a substantial solution in the form of the COKE Project, an initiative that interlaces machine reading with the rigorous protocols of Evidence-Based Medicine to streamline knowledge extraction. In the framework of the COKE ("COVID-19 Knowledge Extraction framework for next-generation discovery science") Project, this thesis aims to underscore the capacity of machine reading to create knowledge graphs from scientific texts. The project is remarkable for its innovative use of NLP techniques such as a BERT + bi-LSTM language model. This combination is employed to detect and categorize elements within medical abstracts, thereby enhancing the systematic literature review process. The COKE Project's outcomes show that NLP, when used in a judiciously structured manner, can significantly reduce the time and effort required to produce medical guidelines. These findings are particularly salient during times of medical emergency, like the COVID-19 pandemic, when quick and accurate research results are critical.
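As a hedged sketch of a BERT + bi-LSTM token-tagging architecture in the spirit of the one described (not the COKE code; the checkpoint, sequence length and number of label classes are assumptions, and it presumes the HuggingFace Transformers and TensorFlow/Keras libraries), the combination can be expressed as a frozen-or-finetuned BERT encoder followed by a bidirectional LSTM and a per-token classification head.

```python
# BERT encoder + bidirectional LSTM + per-token softmax head, Keras functional API.
import tensorflow as tf
from transformers import TFBertModel

NUM_LABELS = 5                                  # e.g. evidence-element categories (assumed)
MAX_LEN = 128                                   # assumed maximum abstract-sentence length

bert = TFBertModel.from_pretrained("bert-base-uncased")

input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

hidden = bert(input_ids, attention_mask=attention_mask).last_hidden_state
lstm = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))(hidden)
logits = tf.keras.layers.Dense(NUM_LABELS, activation="softmax")(lstm)

model = tf.keras.Model([input_ids, attention_mask], logits)
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```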
Abstract:
In recent years, due to the growing trend towards Big Data, machine learning has become a fundamental prediction approach because it can predict house prices accurately based on the attributes of the dwellings. In this work, several machine learning techniques are put into practice with the aim of making predictions on housing prices. Consider, for example, the purchase of a new house: there are many factors to take into account, such as the location, the square metres, air pollution, the number of rooms, the number of bathrooms and so on. All of these factors can have a greater or lesser influence on the price of that dwelling. It is precisely in cases like these that artificial intelligence, and specifically machine learning, can be applied to find a model that best approximates a price given a set of characteristics. This thesis shows how machine learning can be used to estimate house prices as precisely as possible. The thesis is divided into 5 chapters. The first chapter introduces the basic concepts on which the work is based and gives some explanations of the individual models. The second chapter covers the working environment, the programming language and the related libraries used. The third chapter contains an exploratory analysis of the dataset and describes the operations performed to prepare the data for the algorithms applied later. In Chapter 4 the different models are built and price predictions are made, while in Chapter 5 the results obtained are analysed and the conclusions are reported.
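A minimal sketch of the kind of experiment described in Chapters 4 and 5 (illustrative only: the thesis dataset and exact models are not reproduced here, and a public housing dataset is used as a stand-in) could compare a linear baseline against a tree ensemble on held-out data.

```python
# Train/test comparison of two regressors on a stand-in housing dataset.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

X, y = fetch_california_housing(return_X_y=True)     # stand-in for the thesis dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```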
Abstract:
Machine Learning is proving to be a technology with incredible potential in the most diverse sectors. The various techniques and algorithms it encompasses enable data analyses that are far more effective than in the past. The insurance industry is also experimenting with the adoption of Machine Learning solutions, and several directions of innovation are emerging as a result, from making internal processes more efficient to offering products that respond adaptively to customers' needs. This thesis work was carried out during an internship at Unisalute S.p.A., the leading health insurance company in Italy. The critical issue identified was the overestimation of the capital to be set aside as a reserve against commitments towards the insured: this immobilised capital takes resources away from more profitable investments in the medium and long term, so estimating it appropriately is valuable. Within Unisalute's IT department, I worked on the design and implementation of a Machine Learning model able to predict whether a claim that has just been taken into management will be settled or not. Providing the offices responsible for determining the reserves with this additional data-driven estimate would be of considerable support. The design of the Machine Learning model was organised as a Data Pipeline containing the most efficient methodologies for data preprocessing and modelling. The implementation used Python as the programming language; the dataset, obtained through extractions and integrations from several Oracle databases, has a cardinality of over 4 million instances characterised by 32 variables. Following hyperparameter tuning and the various training runs, an accuracy of 86% was reached, which in this domain is considered more than satisfactory, and previously unknown contributions to the settlement of claims emerged.
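A hedged sketch of such a claim-settlement pipeline (the Unisalute data, its 32 variables and the chosen model are not public; the column names, classifier and grid values below are placeholders) could combine preprocessing, a classifier and hyperparameter tuning in a single scikit-learn object.

```python
# Preprocessing + classifier pipeline with a small, placeholder hyperparameter grid.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

# Hypothetical column layout standing in for the 32 extracted variables.
categorical = ["claim_type", "region", "provider"]
numeric = ["claim_amount", "insured_age", "days_since_opening"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])

pipeline = Pipeline([("prep", preprocess),
                     ("clf", RandomForestClassifier(random_state=0))])

search = GridSearchCV(pipeline,
                      {"clf__n_estimators": [200, 400], "clf__max_depth": [None, 12]},
                      cv=3, scoring="accuracy")
# search.fit(X_train, y_train)   # X_train: DataFrame with the columns above; y: settled / not settled
```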
Abstract:
The goal of my thesis project is to create a model capable of predicting the rating of the applications available on the Play Store, one of the largest Android digital distribution services. For this purpose I used the Python language, which, thanks to its libraries, simplicity and versatility, is certainly one of the most widely used languages in the field of artificial intelligence. The starting point of my study was the "Google Play Store Apps" dataset (a set of data structured in relational form), available on Kaggle at the following address: https://www.kaggle.com/datasets/lava18/google-play-store-apps, containing 10841 observations and 13 attributes. After a first part devoted to loading, visualising and preparing the data to work on, I applied four different Machine Learning techniques to estimate application ratings: Ridge, Linear Regression, Random Forest and SVR. These algorithms were applied using two different transformations (Label Encoding and One Hot Encoding) of the 'Category' variable, in order to analyse how these transformations affect the quality of the model. I then compared the mean squared error (MSE), the mean absolute error (MAE) and the median absolute error (MdAE) in order to understand which algorithm is the most efficient.
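A hedged sketch of the comparison described above (illustrative only: it assumes the CSV from the Kaggle link has been downloaded locally, and preprocessing is reduced to the 'Category' column and the two encodings, whereas the thesis prepares the data more thoroughly):

```python
# Compare the four regressors under Label Encoding vs One Hot Encoding of 'Category'.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error, median_absolute_error

df = pd.read_csv("googleplaystore.csv").dropna(subset=["Rating"])
y = df["Rating"]

X_label = df[["Category"]].apply(LabelEncoder().fit_transform)   # Label Encoding
X_onehot = pd.get_dummies(df[["Category"]])                      # One Hot Encoding

models = {"Ridge": Ridge(), "Linear": LinearRegression(),
          "RandomForest": RandomForestRegressor(n_estimators=100, random_state=0),
          "SVR": SVR()}

for enc_name, X in [("LabelEncoding", X_label), ("OneHotEncoding", X_onehot)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    for model_name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(enc_name, model_name,
              "MSE", mean_squared_error(y_te, pred),
              "MAE", mean_absolute_error(y_te, pred),
              "MdAE", median_absolute_error(y_te, pred))
```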
Abstract:
The comfort level of the seat has a major effect on the usage of a vehicle; thus, car manufacturers have been working on elevating car seat comfort as much as possible. However, the testing and evaluation of comfort are still done through exhaustive trial-and-error testing and evaluation of data. In this thesis, we resort to machine learning and Artificial Neural Networks (ANN) to develop a fully automated approach. Even though such an approach has the advantages of minimizing time and using a large set of data, it takes away the engineer's freedom to make decisions. The focus of this study is on filling the gap in a two-step comfort level evaluation, which used pressure mapping with body regions to evaluate the average pressure supported by specific body parts, together with Self-Assessment Exam (SAE) questions to evaluate the person's interest. This study has created a machine learning approach that gives the engineer a degree of freedom in making decisions when mapping pressure values to body regions using an ANN. The mapping is done with 92% accuracy and with the help of a Graphical User Interface (GUI) that facilitates the process during the comfort level evaluation test of the car seat, decreasing the duration of the test analysis from days to hours.
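As an illustrative Keras sketch only (the pressure-mat resolution, the number of body regions and the network architecture are assumptions, and the data below is synthetic), the pressure-to-body-region mapping could be prototyped as a small feed-forward classifier.

```python
# MLP that assigns a flattened pressure map sample to one of several body regions.
import numpy as np
import tensorflow as tf

N_CELLS = 64 * 32          # hypothetical pressure-mat resolution, flattened
N_REGIONS = 8              # hypothetical number of body regions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_CELLS,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_REGIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in for labelled pressure maps collected during seat tests.
rng = np.random.default_rng(0)
X = rng.random((500, N_CELLS)).astype("float32")
y = rng.integers(0, N_REGIONS, size=500)
model.fit(X, y, epochs=3, validation_split=0.2, verbose=0)
```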
Abstract:
There is no specific test to diagnose Alzheimer's disease (AD). Its diagnosis should be based upon clinical history, neuropsychological and laboratory tests, neuroimaging and electroencephalography (EEG). Therefore, new approaches are necessary to enable earlier and more accurate diagnosis and to follow treatment results. In this study we used a Machine Learning (ML) technique, named Support Vector Machine (SVM), to search for patterns in EEG epochs to differentiate AD patients from controls. As a result, we developed a quantitative EEG (qEEG) processing method for the automatic differentiation of patients with AD from normal individuals, as a complement to the diagnosis of probable dementia. We studied EEGs from 19 normal subjects (14 females/5 males, mean age 71.6 years) and 16 patients with probable mild to moderate AD (14 females/2 males, mean age 73.4 years). The analysis of individual EEG epochs yielded 79.9% accuracy and 83.2% sensitivity. The analysis considering the diagnosis of each individual patient reached 87.0% accuracy and 91.7% sensitivity.
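A hedged sketch of SVM classification of EEG epochs (illustrative only: the study's actual feature extraction and recordings are not reproduced; the sampling rate, band definitions and synthetic epochs below are assumptions) could use per-channel band powers as features and cross-validated accuracy as the metric.

```python
# Band-power features per channel, then an RBF SVM with cross-validation.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

FS = 256                                   # assumed sampling rate (Hz)

def band_powers(epoch, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Average spectral power per channel in delta/theta/alpha/beta bands."""
    freqs, psd = welch(epoch, fs=FS, axis=-1)
    return np.concatenate([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                           for lo, hi in bands])

# Synthetic stand-in: 100 epochs, 19 channels, 2 s each, labelled AD (1) vs control (0).
rng = np.random.default_rng(0)
epochs = rng.normal(size=(100, 19, 2 * FS))
labels = rng.integers(0, 2, size=100)

X = np.array([band_powers(e) for e in epochs])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```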