7 results for Line and edge detection

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

During the last semester of the Master’s Degree in Artificial Intelligence, I carried out my internship at TXT e-Solution, working on the ADMITTED project. This thesis describes the work done in those months and is divided into two parts, reflecting the two tasks I was assigned during the experience. The first part introduces the project and the work done on the admittedly library, maintaining the code base and writing the test suites; this work is closer to a software engineering role, developing features, fixing bugs, and testing. The second part describes the experiments on the anomaly detection task using a deep learning technique called an autoencoder, a task closer to a data science role. The two tasks were carried out one after the other rather than simultaneously, which is why I preferred to present them in two separate parts of this thesis.
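A minimal sketch of the autoencoder idea used for anomaly detection: fit a model that reconstructs normal data well, then flag inputs whose reconstruction error is large. The sketch below uses the optimal *linear* autoencoder (obtained in closed form via SVD) rather than a trained deep network, and all data and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data lies near a 2-D subspace of R^5 (a toy stand-in
# for the normal operating regime the autoencoder is trained on).
basis = rng.normal(size=(2, 5))
train = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 5))

# Optimal linear autoencoder: encoder rows are the top-k right singular
# vectors of the centered data; the decoder is the transpose.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    """Encode to 2-D, decode back, return the squared reconstruction error."""
    code = (x - mean) @ components.T
    recon = code @ components + mean
    return float(np.sum((x - recon) ** 2))

normal_sample = rng.normal(size=2) @ basis    # lies in the normal subspace
anomaly = 5.0 * rng.normal(size=5)            # generic off-subspace point
```

An input is flagged as anomalous when its reconstruction error exceeds a threshold calibrated on held-out normal data; a deep autoencoder replaces the linear projection with learned nonlinear encoder and decoder networks.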

Relevance: 100.00%

Abstract:

This thesis project aims at developing an algorithm for obstacle detection and for the interaction between the safety areas of an Automated Guided Vehicle (AGV) and a map derived from a point cloud, within the context of a CAD software. The first part of the project focuses on the implementation of a clipping algorithm for general polygons, which made it possible to construct the safety-area polygons, derive the sweep of these areas along the navigation path by performing a union, and detect intersections with the lines or polygons representing obstacles. The second part concerns the construction of a map in terms of geometric entities (lines and polygons) starting from the point cloud produced by a 3D scan of the environment. The point cloud is processed using filters, clustering algorithms, and concave/convex-hull-derived algorithms in order to extract the line and polygon entities representing obstacles. Finally, the last part uses the a priori knowledge of possible obstacle detections on a given segment to predict the behavior of the AGV, and uses this prediction to optimize the velocity assigned to the vehicle on that segment, minimizing travel time.
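The hull-extraction step of the map-construction stage can be illustrated with the classic monotone-chain convex hull; this is a generic sketch of the technique, not the thesis's implementation, and the sample points are invented:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull.

    Takes an iterable of (x, y) pairs and returns the hull vertices in
    counter-clockwise order, collinear points removed.
    """
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build the lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Last point of each half is the first point of the other; drop it.
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
```

Each cluster of obstacle points can be reduced to such a hull polygon, which then feeds the polygon-clipping stage (union with the swept safety areas, intersection tests along the path).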

Relevance: 100.00%

Abstract:

Many real-world datasets, including textual data, can be represented using graph structures. Representing text as a graph has many advantages, mainly related to preserving more information, such as the relationships between words and their types. In recent years, many neural network architectures have been proposed to deal with tasks on graphs. Many of them consider only node features, ignoring or under-weighting the relationships between nodes; in many node classification tasks, however, these relationships play a fundamental role. This thesis aims to analyze the main Graph Neural Networks (GNNs), evaluate their advantages and disadvantages, propose an innovative solution conceived as an extension of GAT, and apply them to a case study in the biomedical field. We implement the reference GNNs with the methodologies analyzed later, and then apply them to a question answering system in the biomedical field as a replacement for its pre-existing GNN. We attempt to obtain better results by using models that can accept both node and edge features as input. As shown later, our proposed models beat the original solution and define the state of the art for the task under analysis.
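The idea of letting edge features enter the attention score, as in the proposed GAT extension, can be sketched for a single attention head; the weights, dimensions, and the exact scoring function below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_node, d_edge, d_out = 4, 3, 5

# Hypothetical learned parameters (random here for illustration).
W = rng.normal(size=(d_out, d_node))   # node-feature projection
U = rng.normal(size=(d_out, d_edge))   # edge-feature projection (the extension)
a = rng.normal(size=3 * d_out)         # attention vector over [Wh_i || Wh_j || Ue_ij]

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def attend(h_i, neighbor_feats, edge_feats):
    """One GAT-style attention head where edge features enter the score."""
    zi = W @ h_i
    logits, messages = [], []
    for h_j, e_ij in zip(neighbor_feats, edge_feats):
        zj, ze = W @ h_j, U @ e_ij
        # Score depends on source node, target node, AND the connecting edge.
        logits.append(leaky_relu(a @ np.concatenate([zi, zj, ze])))
        messages.append(zj)
    alpha = np.exp(np.array(logits) - np.max(logits))
    alpha /= alpha.sum()               # softmax over the neighborhood
    return sum(w * m for w, m in zip(alpha, messages))

h_i = rng.normal(size=d_node)
neighbors = [rng.normal(size=d_node) for _ in range(3)]
edges = [rng.normal(size=d_edge) for _ in range(3)]
out = attend(h_i, neighbors, edges)
```

Plain GAT computes the score from node pairs only; adding the `Ue_ij` term is one simple way to make relationship types influence how much each neighbor contributes.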

Relevance: 100.00%

Abstract:

Vision systems are powerful tools playing an increasingly important role in modern industry, detecting errors and maintaining product standards. With the growing availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect possible anomalies with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed and on all the trials conducted to reach the final performance. Transfer learning, which alleviates the need for a large amount of training data, combined with data augmentation methods based on the generation of synthetic images, was used to effectively increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, designed respectively for vial counting and discrepancy detection. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
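The synthetic-image augmentation mentioned above can be sketched generically; the actual generation method used in this work is not detailed here, so the example shows only standard flips, rotations, and noise applied to a made-up image array:

```python
import numpy as np

def augment(image, rng):
    """Generate simple synthetic variants of one grayscale image:
    mirror flips, 90-degree rotations, and mild additive noise."""
    variants = [image]
    variants.append(np.fliplr(image))        # horizontal mirror
    variants.append(np.flipud(image))        # vertical mirror
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))  # rotations by 90/180/270 degrees
    noisy = image + rng.normal(scale=5.0, size=image.shape)
    variants.append(np.clip(noisy, 0, 255))  # mild sensor-like noise
    return variants

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in image
batch = augment(img, rng)                                # 7 variants per image
```

Combined with a backbone pretrained on a large generic dataset (transfer learning), such cheap variants let a counting or anomaly model reach usable accuracy from a few hundred annotated images.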

Relevance: 100.00%

Abstract:

In this thesis, we discuss a simple mathematical model for the edge detection mechanism and the boundary completion problem in the human brain, in a differential geometry framework. We describe the columnar structure of the primary visual cortex as the fiber bundle R2 × S1, the orientation bundle, and, by introducing a first vector field on it, explain the edge detection process. Edges are detected through a lift from the domain in R2 into the manifold R2 × S1, and the lifted curves are horizontal with respect to a completely non-integrable distribution. We can therefore construct a subriemannian structure on the manifold R2 × S1, through which we retrieve perceived smooth contours as subriemannian geodesics, solutions to Hamilton’s equations. To do so, in the first chapter we illustrate the functioning of the most fundamental structures of the early visual system in the brain, from the retina to the primary visual cortex. We proceed by introducing the necessary concepts of differential and subriemannian geometry in chapters two and three. We finally implement our model in chapter four, where we conclude by comparing our results with the experimental findings of Field, Hayes, and Hess on the existence of an association field.
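In the standard formulation of this kind of cortical model (the Citti–Sarti model of V1), the horizontal frame and the subriemannian Hamiltonian take the following form; this is the usual convention in the literature, and the thesis's exact normalization may differ:

```latex
% Horizontal frame on R^2 x S^1: lifted contours are tangent to span(X_1, X_2)
X_1 = \cos\theta\,\partial_x + \sin\theta\,\partial_y,
\qquad
X_2 = \partial_\theta .

% Complete non-integrability (the bracket-generating condition):
[X_2, X_1] = -\sin\theta\,\partial_x + \cos\theta\,\partial_y
\notin \operatorname{span}(X_1, X_2).

% Subriemannian Hamiltonian; perceived contours arise as geodesics solving
% Hamilton's equations for H(x, y, \theta, p_x, p_y, p_\theta):
H = \tfrac{1}{2}\Bigl( (p_x \cos\theta + p_y \sin\theta)^2 + p_\theta^2 \Bigr),
\qquad
\dot q = \frac{\partial H}{\partial p}, \quad
\dot p = -\frac{\partial H}{\partial q}.
```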

Relevance: 100.00%

Abstract:

The LHCb experiment has been designed to perform precision measurements in the flavour physics sector at the Large Hadron Collider (LHC) located at CERN. After the recent observation of CP violation in the decay of the Bs0 meson to a charged pion-kaon pair at LHCb, it is interesting to see whether the same quark-level transition in Λ0b baryon decays gives rise to large CP-violating effects. Such decay processes involve both tree and penguin Feynman diagrams and could be sensitive probes for physics beyond the Standard Model. The measurement of the CP-violating observable defined as ∆ACP = ACP(Λ0b → pK−) − ACP(Λ0b → pπ−), where ACP(Λ0b → pK−) and ACP(Λ0b → pπ−) are the direct CP asymmetries in Λ0b → pK− and Λ0b → pπ− decays, is presented for the first time using LHCb data. The procedure followed to optimize the event selection, calibrate particle identification, parametrise the various components of the invariant mass spectra, and compute corrections due to the production asymmetry of the initial state and the detection asymmetries of the final states is discussed in detail. Using the full 2011 and 2012 data sets of pp collisions collected with the LHCb detector, corresponding to an integrated luminosity of about 3 fb−1, the value ∆ACP = (0.8 ± 2.1 ± 0.2)% is obtained, where the first uncertainty is statistical and the second corresponds to one of the dominant systematic effects. As the result is compatible with zero, no evidence of CP violation is found. This is the most precise measurement of CP violation in the decays of baryons containing the b quark to date. Once the analysis is completed with an exhaustive study of systematic uncertainties, the results will be published by the LHCb Collaboration.
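The production- and detection-asymmetry corrections mentioned above follow the usual decomposition of raw charge asymmetries; the sketch below shows the standard first-order relation, though the thesis's exact conventions may differ:

```latex
% Raw yield asymmetry for a decay Lb -> f, to first order in small asymmetries:
A_{\mathrm{raw}}(f) =
  \frac{N(\Lambda^0_b \to f) - N(\bar{\Lambda}^0_b \to \bar{f})}
       {N(\Lambda^0_b \to f) + N(\bar{\Lambda}^0_b \to \bar{f})}
\;\approx\; A_{CP}(f) + A_P(\Lambda^0_b) + A_D(f).

% In the difference the production asymmetry A_P cancels (same initial state),
% leaving only the difference of final-state detection asymmetries to correct:
\Delta A_{CP} \;\approx\;
  A_{\mathrm{raw}}(pK^-) - A_{\mathrm{raw}}(p\pi^-)
  - \bigl( A_D(pK^-) - A_D(p\pi^-) \bigr).
```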

Relevance: 100.00%

Abstract:

In cardiovascular disease, the definition and detection of the ECG parameters related to repolarization dynamics in post-MI patients is still a crucial unmet need. In addition, the use of a 3D sensor in implantable medical devices would be a crucial means for the assessment or prediction of Heart Failure status, but the inclusion of such a feature is limited by hardware and firmware constraints. The aim of this thesis is the definition of a reliable surrogate of the 500 Hz ECG signal to reach the aforementioned objective. To evaluate the loss of reliability in delineation performance due to the reduced sampling frequency, the signals have been consecutively downsampled by factors of 2, 4, and 8, thus obtaining ECG signals sampled at 250, 125, and 62.5 Hz, respectively. The final goal is the feasibility assessment of the detection of the fiducial points, in order to translate those parameters into meaningful clinical parameters for Heart Failure prediction, such as T-wave interval heterogeneity and the variability of the areas under T waves. An experimental setting for data collection on healthy volunteers has been set up at the Bakken Research Center in Maastricht. A 16-channel ambulatory system, provided by TMSI, recorded the standard 12-lead ECG, two 3D accelerometers, and a respiration sensor. The collection platform was set up with TMSI's proprietary software Polybench; the data analysis of these signals was performed with Matlab. The main results of this study show that the 125 Hz sampling rate is a good candidate for a reliable detection of the fiducial points. T-wave intervals proved to be consistently stable, even at 62.5 Hz. Further studies would be needed to provide a better comparison between sampling at 250 Hz and 125 Hz for the areas under the T waves.
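The consecutive downsampling by factors of 2, 4, and 8 can be sketched as follows; note that a real delineation pipeline would low-pass filter before decimating to avoid aliasing, and the toy sinusoid below merely stands in for an actual ECG record:

```python
import numpy as np

fs = 500.0                               # original sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)          # 2 s of signal -> 1000 samples
ecg_like = np.sin(2 * np.pi * 1.2 * t)   # toy 1.2 Hz waveform, not a real ECG

def downsample(x, factor):
    """Keep every `factor`-th sample. A real pipeline would apply an
    anti-aliasing low-pass filter first; this only illustrates the
    rate reduction itself."""
    return x[::factor]

lengths = {}
for factor in (2, 4, 8):
    y = downsample(ecg_like, factor)
    lengths[fs / factor] = len(y)        # 250, 125, 62.5 Hz versions
```

Fiducial-point detection is then re-run on each reduced-rate signal and compared against the 500 Hz reference to quantify how much temporal resolution the delineation can afford to lose.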