45 results for Reinforcement Learning, Deep Neural Networks, Python, Stable Baseline, Gym


Relevance:

100.00%

Publisher:

Abstract:

The growing availability of 3D scanners has made it easier to acquire 3D models from the environment. Because of the inevitable imperfections and errors that can occur during the scanning phase, the acquired models may sometimes be unusable and affected by noise. Denoising techniques aim to remove from the surface of the scanned 3D mesh the disturbances caused by noise, restoring the original characteristics of the surface without introducing false information. An innovative approach to this problem is to use Geometric Deep Learning to train a neural network so that it can effectively denoise meshes. The goal of this thesis is to describe Geometric Deep Learning in the context of this problem.
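
As a purely illustrative sketch of the approach described above, the snippet below moves each vertex of a noisy mesh according to a small learned network that looks at the vertex and the average of its neighbours; the layer name, sizes and random toy mesh are assumptions, not the architecture studied in the thesis.

```python
# Hypothetical sketch: one message-passing step that predicts per-vertex
# displacements to denoise a mesh. Names and sizes are illustrative.
import torch
import torch.nn as nn

class MeshDenoiseLayer(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # predicted displacement per vertex
        )

    def forward(self, verts, adj):
        # verts: (V, 3) noisy vertex positions; adj: (V, V) 0/1 adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = adj @ verts / deg     # average neighbour position
        feats = torch.cat([verts, neigh_mean - verts], dim=1)
        return verts + self.mlp(feats)     # denoised positions

verts = torch.randn(100, 3)                        # toy noisy mesh
adj = (torch.rand(100, 100) > 0.95).float()
adj = ((adj + adj.t()) > 0).float()                # make it symmetric
denoised = MeshDenoiseLayer()(verts, adj)
```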

Relevance:

100.00%

Publisher:

Abstract:

Application of dataset fusion techniques to an object detection task, using deep learning in the form of convolutional neural networks, with the goal of creating a single R-CNN architecture able to perform inference with good accuracy on two distinct datasets from different domains.
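
A minimal sketch of the fusion idea, assuming two PyTorch-style detection datasets whose class ids are remapped into a shared label space so a single R-CNN can be trained on both; the wrapper name and offset scheme are illustrative.

```python
# Merge two detection datasets with different label spaces into one,
# remapping class ids so a single R-CNN can train on both domains.
from torch.utils.data import Dataset, ConcatDataset

class RemappedDetectionDataset(Dataset):
    def __init__(self, base, label_offset):
        self.base = base
        self.label_offset = label_offset   # shift ids into the shared space

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        image, target = self.base[idx]     # target: {"boxes": ..., "labels": ...}
        target = dict(target)
        target["labels"] = target["labels"] + self.label_offset
        return image, target

# fused = ConcatDataset([
#     RemappedDetectionDataset(dataset_a, label_offset=0),
#     RemappedDetectionDataset(dataset_b, label_offset=num_classes_a),
# ])
```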

Relevance:

100.00%

Publisher:

Abstract:

This thesis offers an introduction to geometric deep learning. The first part presents the main concepts of graph theory and introduces a diffusion dynamic on a graph, in analogy with the heat equation. Next, starting from the linear classifier, the architectures that led to the design of graph convolutional networks are introduced. Finally, examples of some algorithms used in geometric deep learning are analysed and an implementation of them is shown on the Cora dataset, a collection of data with a graph structure.
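
The graph diffusion dynamic mentioned above can be illustrated with a few lines of NumPy: the heat-equation analogue x(t+1) = x(t) - τ·L·x(t) run on a small path graph (the graph and step size are toy choices, not taken from the thesis).

```python
# Toy graph diffusion in analogy with the heat equation,
# with L = D - A the combinatorial graph Laplacian.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # path graph on 4 nodes
D = np.diag(A.sum(axis=1))
L = D - A

x = np.array([1.0, 0.0, 0.0, 0.0])          # initial "heat" on node 0
tau = 0.1
for _ in range(50):
    x = x - tau * (L @ x)                   # explicit Euler diffusion step

print(x)  # heat spreads out and converges toward the uniform average 0.25
```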

Relevance:

100.00%

Publisher:

Abstract:

In the steelmaking industry, galvanizing is a treatment applied to protect steel from corrosion. The air knife effect (AKE) occurs when nozzles blow a stream of air onto the surfaces of a steel strip to remove excess zinc from it. In our work we formalized the problem of controlling the AKE and, with the R&D department of Marcegaglia SpA, implemented a deep learning model able to drive the AKE, which we call the controller. It takes as input a tuple of the physical conditions of the process line (t, h, s) together with the target value of the zinc coating (c), and generates the expected tuple (pres, dist) that drives the mechanical nozzles towards (c). We designed the structure of the network according to the requirements, and we collected and explored the dataset of historical data from the smart factory. Finally, we designed the loss function as the sum of three components: the minimization of the difference between the coating obtained by the network and the target value we want to reach, and two weighted minimization components for pressure and distance. In our solution we built a second module, named coating net, to predict the zinc coating resulting from the AKE when the conditions are applied to the production line. Its structure consists of a linear component and a deep non-linear "residual" component learned from empirical observations. The predictions made by the coating net are used as ground truth in the loss function of the controller. By tuning the weights of the different components of the loss function, it is possible to train models with slightly different optimization goals. In our tests we compared the differently regularized strategies with the standard one under conditions of optimal estimation for both; the overall accuracy is ±3 g/m² from the target for all of them. Lastly, we analyzed how the controller remodels the current solutions with the new logic: the sub-optimal values of pres and dist can be improved by 50% and 20% respectively.
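
A hedged sketch of the three-component loss described above, assuming the coating net is a frozen callable that maps process conditions plus (pres, dist) to a predicted coating; the weights and the exact form of the regularization terms are illustrative.

```python
# Sketch of a controller loss with three weighted components: coating error
# plus pressure and distance regularization. All names are illustrative.
import torch

def controller_loss(pred_pres, pred_dist, coating_net, conditions,
                    target_coating, ref_pres, ref_dist, w_p=0.1, w_d=0.1):
    # coating_net acts as ground-truth generator: conditions + (pres, dist) -> coating
    coating = coating_net(torch.cat([conditions, pred_pres, pred_dist], dim=1))
    loss_coating = torch.mean((coating - target_coating) ** 2)
    loss_pres = torch.mean((pred_pres - ref_pres) ** 2)
    loss_dist = torch.mean((pred_dist - ref_dist) ** 2)
    return loss_coating + w_p * loss_pres + w_d * loss_dist
```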

Relevance:

100.00%

Publisher:

Abstract:

This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then focused on the analysis of what constitutes the state of the art for image classification, namely Deep Learning. To experiment with the advantages of this set of image classification methodologies, we used Torch7, an open-source numerical computing framework scriptable in the Lua language, with broad support for state-of-the-art Deep Learning methods. The actual image classification was implemented with Torch7 because this framework, thanks also to the analysis work previously carried out by some of my colleagues, proved to be very effective at categorizing objects in images. The images used in the experimental tests belong to a dataset created ad hoc for the 3D vision system, with the purpose of testing the system for visually impaired and blind people; it contains some of the main obstacles that a visually impaired person may encounter in everyday life. In particular, the dataset consists of potential obstacles in a hypothetical outdoor scenario. Having established that Torch7 was the tool to use for classification, attention turned to the possibility of exploiting stereo vision to increase the accuracy of the classification itself. Indeed, the images in the above-mentioned dataset were acquired with an FPGA-based stereo camera developed by the research group where this work was carried out. This made it possible to use 3D information, such as the depth of each object in the image, to segment the objects of interest through an algorithm implemented in C++, excluding the rest of the scene. The last phase of the work was to test Torch7 on the image dataset, previously segmented with the segmentation algorithm just outlined, in order to recognise the type of obstacle detected by the system.
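
The depth-based segmentation step described above (implemented in C++ in the thesis) can be sketched as a simple depth-band mask; the Python version below is only an illustration of the idea, with an assumed depth range of interest.

```python
# Keep only pixels whose depth lies in a band of interest, blacking out the
# rest of the scene before classification. Depth range is an assumption.
import numpy as np

def segment_by_depth(image, depth, near=0.5, far=3.0):
    # image: (H, W, 3) uint8, depth: (H, W) metres from the stereo camera
    mask = (depth >= near) & (depth <= far)   # keep only the obstacle band
    segmented = image.copy()
    segmented[~mask] = 0                      # black out the background
    return segmented, mask

image = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
depth = np.random.uniform(0.1, 10.0, size=(120, 160))
segmented, mask = segment_by_depth(image, depth)
```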

Relevance:

100.00%

Publisher:

Abstract:

The fashion world is in continuous and constant evolution, not only from a social point of view but also from a technological one. This thesis studies the possibility of recognising and segmenting garments in an image using deep neural networks and modern approaches. Networks such as Faster R-CNN, Mask R-CNN, YOLOv5, FashionPedia and Match-RCNN were therefore analysed. The training of deep neural networks in highly parallel scenarios and on machines equipped with multiple GPUs was then investigated in order to reduce training times. In addition, the possibility of creating a network to predict whether a given garment will be successful in the future was explored, simply by analysing past data and an image of the garment in question. These tasks also required an in-depth analysis of the existing fashion datasets and of the methods for using them for training. This work was carried out as part of the FA.RE.TRA. project, for which the University of Bologna acts as a consultant in a feasibility study on neural networks capable of performing the tasks mentioned above.
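
As a hedged sketch of the multi-GPU training direction mentioned above, the snippet below wraps a torchvision Mask R-CNN in DistributedDataParallel, the usual way to spread detection training over several GPUs; the number of classes and the launch method (e.g. torchrun) are assumptions.

```python
# One process per GPU (launched e.g. via torchrun); each process builds the
# model on its own device and wraps it in DistributedDataParallel.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torchvision.models.detection import maskrcnn_resnet50_fpn

def setup_model(local_rank: int):
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)
    model = maskrcnn_resnet50_fpn(num_classes=47).to(local_rank)  # label count assumed
    return DDP(model, device_ids=[local_rank])
```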

Relevance:

100.00%

Publisher:

Abstract:

The neural networks customized and tested in this thesis (WaldoNet, FlowNet and PatchNet) are a first exploration of and approach to the template matching task, so the possibilities for extension are many and some are proposed. During my thesis I analyzed how the classical algorithms work and adapted them with deep learning techniques. The features extracted from both the template and the query images resemble the keypoints of the SIFT algorithm. Then, instead of a similarity function or keypoint matching, WaldoNet and PatchNet use a convolutional layer to compare the features, while FlowNet uses a correlation layer. In addition, I identified the major challenges of the template matching task (affine/non-affine transformations, intensity changes, ...) and addressed them with a careful design of the dataset.
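
A small sketch of the comparison step described above: the template's feature map is used as a convolution kernel and slid over the query's feature map, so the peak of the resulting similarity map locates the best match (the feature shapes are assumed).

```python
# Dense cross-correlation of template features over query features via conv2d.
import torch
import torch.nn.functional as F

query_feats = torch.randn(1, 64, 60, 80)   # (N, C, H, W) features of the query image
templ_feats = torch.randn(1, 64, 8, 8)     # features of the template crop

similarity = F.conv2d(query_feats, templ_feats)     # (1, 1, 53, 73) similarity map
peak = similarity.flatten().argmax()
y, x = divmod(peak.item(), similarity.shape[-1])    # top-left of the best match
```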

Relevance:

100.00%

Publisher:

Abstract:

Depth estimation from images has long been regarded as a preferable alternative to expensive and intrusive active sensors such as LiDAR and ToF. The topic has attracted the attention of an increasingly wide audience thanks to the large number of application domains, such as autonomous driving, robotic navigation and 3D reconstruction. Among the various techniques employed for depth estimation, stereo matching is one of the most widespread, owing to its robustness, speed and simplicity of setup. Recent developments have been aided by the abundance of annotated stereo images, which gave deep learning the opportunity to thrive in a research area where deep networks can reach state-of-the-art sub-pixel precision in most cases. Despite these recent findings, stereo matching still presents many open challenges, two of them being finding pixel correspondences in the presence of objects that exhibit non-Lambertian behaviour and processing high-resolution images. Recently, a novel dataset named Booster, which contains high-resolution stereo pairs featuring a large collection of labelled non-Lambertian objects, has been released. That work showed that training state-of-the-art deep neural networks on such data improves their generalization capabilities also in the presence of non-Lambertian surfaces. Despite being a further step towards tackling the aforementioned challenge, Booster includes a rather small number of annotated images and thus cannot satisfy the intensive training requirements of deep learning. This thesis investigates novel view synthesis techniques to augment the Booster dataset, with the ultimate goal of improving stereo matching reliability on high-resolution images that display non-Lambertian surfaces.
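
One simple form of the view synthesis idea investigated here is disparity-based warping: given an image and its disparity map, pixels are shifted horizontally to synthesize the other view. The sketch below is a naive forward warp without occlusion handling, purely for illustration.

```python
# Naive forward warping of a left image into a synthetic right view using
# its disparity map (no occlusion or hole handling; illustrative only).
import numpy as np

def forward_warp(left, disparity):
    # left: (H, W, 3), disparity: (H, W) in pixels
    H, W = disparity.shape
    right = np.zeros_like(left)
    xs = np.arange(W)
    for y in range(H):
        target_x = np.clip((xs - disparity[y]).round().astype(int), 0, W - 1)
        right[y, target_x] = left[y, xs]
    return right
```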

Relevance:

100.00%

Publisher:

Abstract:

Artificial Intelligence is reshaping the fashion industry in different ways. E-commerce retailers exploit their data through AI to enhance their search engines, make outfit suggestions and forecast the success of a specific fashion product. However, this is a challenging endeavour, as the data they possess is huge, complex and multi-modal. The most common way to search for fashion products online is by matching keywords with phrases in the product description, which is often cluttered, inadequate and differs across collections and sellers. A customer may also browse an online store's taxonomy, although this is time-consuming and doesn't guarantee relevant items. With the advent of Deep Learning architectures, particularly Vision-Language models, ad-hoc solutions have been proposed to model both the product image and the description to solve these problems. However, the suggested solutions do not effectively exploit the semantic or syntactic information of these modalities, nor the unique qualities and relations of clothing items. In this thesis, a novel approach is proposed to address these issues: it models and processes images and text descriptions as graphs in order to exploit the relations within and between the modalities, and employs specific techniques to extract syntactic and semantic information. The results obtained show promising performance on different tasks when compared to current state-of-the-art deep learning architectures.
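
A purely illustrative sketch of the graph construction idea: description tokens and image regions become nodes, with intra-modal edges on each side and cross-modal edges linking the two; the feature sizes and the edge pattern are assumptions, not the model proposed in the thesis.

```python
# Build a toy multi-modal graph: text-token nodes, image-region nodes,
# intra-modal edges and cross-modal edges, as an edge_index for a GNN.
import torch

text_nodes = torch.randn(6, 128)     # embeddings of 6 description tokens (assumed)
image_nodes = torch.randn(4, 128)    # embeddings of 4 image regions (assumed)
nodes = torch.cat([text_nodes, image_nodes], dim=0)

edges = []
edges += [(i, i + 1) for i in range(5)]                                  # token chain
edges += [(6 + i, 6 + j) for i in range(4) for j in range(4) if i != j]  # region-region
edges += [(i, 6 + j) for i in range(6) for j in range(4)]                # cross-modal links
edge_index = torch.tensor(edges, dtype=torch.long).t()                   # (2, E)
```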

Relevance:

100.00%

Publisher:

Abstract:

In recent years, we have witnessed great changes in the industrial environment as a result of the innovations introduced by Industry 4.0, especially in the integration of the Internet of Things, Automation and Robotics in the manufacturing field. The project presented in this thesis lies within this innovation context and describes the implementation of an Image Recognition application focused on the automotive field. The project aims at helping the supply chain operator perform an effective and efficient check of the homologation tags present on vehicles. The user's contribution consists in taking a picture of the tag; the application, exploiting Amazon Web Services, then automatically returns the result of the check on the correctness of the tag, its correct positioning within the vehicle, and the presence of faults or defects on the tag. To implement this application we combined two IoT platforms widely used in the industrial field: Amazon Web Services (AWS) and ThingWorx. AWS exploits Convolutional Neural Networks to perform Text Detection and Image Recognition, while PTC ThingWorx manages the user interface and the data manipulation.
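
On the AWS side, text on the photographed tag can be read with Amazon Rekognition's detect_text call; the sketch below shows that call, while the comparison against the expected tag content is an assumed helper step, not the application's actual logic.

```python
# Read the text lines on a photographed homologation tag with Amazon Rekognition.
import boto3

def read_tag_text(image_bytes: bytes) -> list[str]:
    client = boto3.client("rekognition")
    response = client.detect_text(Image={"Bytes": image_bytes})
    return [d["DetectedText"] for d in response["TextDetections"]
            if d["Type"] == "LINE"]

# lines = read_tag_text(open("tag_photo.jpg", "rb").read())
# ok = expected_tag_code in " ".join(lines)   # simplistic correctness check (assumption)
```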

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, the problem of controlling a quadrotor UAV is considered. An original control system is presented, designed as a combination of Neural Networks and a Disturbance Observer and using a composite learning approach for a second-order system, which is a novel methodology in the literature. After a brief introduction to quadrotors, the concepts needed to understand the controller are presented, such as the main notions of advanced control, the basic structure and design of a Neural Network, and the modeling of a quadrotor and its dynamics. The full simulator, developed in the MATLAB Simulink environment and used throughout the whole thesis, is also shown. For guidance and control purposes, a Sliding Mode Controller used as a reference is first introduced, and its theory and implementation in the simulator are illustrated. Finally, the original controller is introduced through its novel formulation and its implementation in the model. The effectiveness and robustness of the two controllers are then proven by extensive simulations under different conditions of external disturbance and faults.
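
As a textbook-style sketch of the sliding mode controller used as the reference, the snippet below applies the classic law u = -k·sign(s) with s = ė + λe (and a smoothed sign) to a toy double integrator; the gains and dynamics are illustrative, while the actual controller runs in MATLAB/Simulink on the full quadrotor model.

```python
# Scalar sliding-mode control of a double integrator with a constant disturbance.
import numpy as np

def sliding_mode_control(e, e_dot, lam=2.0, k=5.0, phi=0.1):
    s = e_dot + lam * e              # sliding surface
    return -k * np.tanh(s / phi)     # smoothed sign(s) to reduce chattering

x, x_dot, dt = 1.0, 0.0, 0.01        # toy closed loop
for _ in range(1000):
    u = sliding_mode_control(e=x, e_dot=x_dot)
    x_ddot = u + 0.5                 # plant: x_ddot = u + disturbance
    x_dot += x_ddot * dt
    x += x_dot * dt
print(round(x, 3), round(x_dot, 3))  # both driven close to zero despite the disturbance
```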

Relevance:

100.00%

Publisher:

Abstract:

The role of computer science has become key to the functioning of the modern world, which is progressively digitalizing every single aspect of an individual's life. As the complexity and size of programs grow, error detection becomes an increasingly difficult activity that requires time and resources. Traditional source code analysis mechanisms have existed since the birth of computer science itself, and their role within a programming team's production chain has never been as fundamental as it is today. These analysis mechanisms, however, are not free of problems: the execution time on large projects and the percentage of false positives can indeed become significant issues. For these reasons, mechanisms based on Machine Learning, and in particular Deep Learning, have been developed in recent years. This thesis aims to explore and develop a Deep Learning model for the detection of errors in any source file written in C or C++.
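
A minimal sketch of the kind of model the thesis explores: source files tokenised into integer ids and fed to a small recurrent classifier that predicts whether the file is likely to contain a defect; the vocabulary size, architecture and random toy inputs are assumptions.

```python
# Toy buggy/clean classifier over tokenised source files.
import torch
import torch.nn as nn

class BugClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, token_ids):                 # (batch, seq_len) of token ids
        _, (h, _) = self.lstm(self.embed(token_ids))
        return torch.sigmoid(self.head(h[-1]))    # probability the file is buggy

tokens = torch.randint(0, 5000, (2, 200))         # two toy tokenised C files
print(BugClassifier()(tokens).shape)              # torch.Size([2, 1])
```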

Relevance:

100.00%

Publisher:

Abstract:

Activation functions within neural networks play a crucial role in Deep Learning since they make it possible to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks on a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint on quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts to realize a quantum activation function have been made in the literature. Recently, the idea of QSplines has been proposed to approximate a non-linear activation function by implementing a quantum version of spline functions. Yet, QSplines suffer from various drawbacks. Firstly, the final function estimation requires a post-processing step; thus, the value of the activation function is not available directly as a quantum state. Secondly, QSplines need many error-corrected qubits and very long quantum circuits to be executed. These constraints prevent the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, a few different methods for Variational Quantum Splines are proposed and implemented, to pave the way for the development of complete quantum activation functions and unlock the full potential of quantum neural networks in the field of quantum machine learning.
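
For reference, the classical object that QSplines approximate is simply a spline fit of a non-linear activation function; the sketch below shows that classical fit with SciPy, not the quantum or variational circuits developed in the thesis.

```python
# Classical cubic-spline approximation of the sigmoid activation,
# the target that spline-based quantum activation functions reproduce.
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(-6, 6, 200)
sigmoid = 1.0 / (1.0 + np.exp(-x))

spline = UnivariateSpline(x, sigmoid, k=3, s=1e-4)   # cubic smoothing spline
max_err = np.max(np.abs(spline(x) - sigmoid))
print(f"max spline error: {max_err:.4f}")
```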

Relevance:

100.00%

Publisher:

Abstract:

Neural scene representation and neural rendering are new computer vision techniques that enable the reconstruction and implicit representation of real 3D scenes from a set of 2D captured images, by fitting a deep neural network. The trained network can then be used to render novel views of the scene. A recent work in this field, Neural Radiance Fields (NeRF), presented a state-of-the-art approach, which uses a simple Multilayer Perceptron (MLP) to generate photo-realistic RGB images of a scene from arbitrary viewpoints. However, NeRF does not model any light interaction with the fitted scene; therefore, despite producing compelling results for the view synthesis task, it does not provide a solution for relighting. In this work, we propose a new architecture to enable relighting capabilities in NeRF-based representations and we introduce a new real-world dataset to train and evaluate such a model. Our method demonstrates the ability to perform realistic rendering of novel views under arbitrary lighting conditions.
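
A compact sketch of two NeRF building blocks referenced above: the positional encoding of a 3D sample and the volume-rendering weights that composite per-sample colours along a ray; the number of frequencies and the step size are assumptions.

```python
# Positional encoding and volume-rendering weights, the core pieces of a NeRF-style renderer.
import torch

def positional_encoding(x, num_freqs=6):
    # x: (..., 3) -> (..., 3 * 2 * num_freqs)
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi
    angles = x[..., None] * freqs                 # (..., 3, num_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

def render_weights(sigma, delta=0.01):
    # sigma: (num_samples,) densities along one ray; delta: step size (assumed constant)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    return alpha * trans                          # weights for the colour integral

w = render_weights(torch.rand(64))
print(w.sum())  # <= 1: fraction of the ray's radiance explained by the samples
```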

Relevance:

100.00%

Publisher:

Abstract:

The comfort level of the seat has a major effect on the usage of a vehicle; thus, car manufacturers have been working to raise car seat comfort as much as possible. However, the testing and evaluation of comfort are still done through exhaustive trial-and-error testing and data evaluation. In this thesis, we resort to machine learning and Artificial Neural Networks (ANN) to develop a fully automated approach. Even though such an approach has the advantage of minimizing time and using a large set of data, it takes away the engineer's freedom in making decisions. The focus of this study is on filling the gap in a two-step comfort evaluation that used pressure mapping over body regions to evaluate the average pressure supported by specific body parts, together with Self-Assessment Exam (SAE) questions evaluating the person's interest. This study has created a machine learning algorithm that gives the engineer a degree of freedom in decision-making when mapping pressure values to body regions using an ANN. The mapping is done with 92% accuracy and with the help of a Graphical User Interface (GUI) that facilitates the process during the comfort-level testing of the car seat, decreasing the duration of the test analysis from days to hours.
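
A hedged sketch of the mapping step described above: pressure-mat readings classified into body regions by a small neural network; the input size, the region labels and the synthetic data are assumptions, not the thesis dataset.

```python
# Toy ANN mapping pressure-cell readings to body-region labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

REGIONS = ["shoulders", "lumbar", "buttocks", "thighs"]   # assumed label set

rng = np.random.default_rng(0)
X = rng.random((400, 64))               # 400 samples of 64 pressure cells each
y = rng.integers(0, len(REGIONS), 400)  # placeholder region labels

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
clf.fit(X, y)
print(REGIONS[clf.predict(X[:1])[0]])   # region predicted for one reading
```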