917 results for Opencv, Zbar, Computer Vision
Abstract:
The aim of this thesis is to build an FPGA architecture capable of extracting 3D information from a pair of stereo sensors. The pipeline was implemented on the Zynq System-on-Chip, which allows tight interaction between the hardware part realised in the FPGA and the CPU. After a preliminary study of the hardware and software tools, the base architecture for writing and reading images in the Zynq's DDR memory was built. Attention then shifted to the implementation of stereo algorithms (rectification and stereo matching) on the FPGA and to the construction of a pipeline capable of producing accurate disparity maps in real time while acquiring images from a stereo camera.
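As an illustration of the rectification-plus-matching stage mentioned above, here is a minimal software sketch, assuming an already rectified pair and OpenCV's block matcher; the image paths and the camera parameters (focal length, baseline) are hypothetical placeholders, not values from the thesis.

```python
import cv2
import numpy as np

# Load an already rectified stereo pair (placeholder paths).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo; numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Convert disparity to depth given focal length f (pixels) and baseline b (metres).
f, b = 700.0, 0.12  # assumed camera parameters
depth = np.where(disparity > 0, f * b / disparity, 0.0)
```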
Abstract:
In this paper, a basic conceptual architecture for the design of Computer Vision Systems is qualitatively described. The proposed architecture addresses the design of vision systems in a modular fashion, using modules with three distinct units or components: a processing network or diagnostics unit, a control unit, and a communications unit. Control of the system at the module level is based on a Discrete Events Model. This methodology has been used to design a real-time active vision system for the detection, tracking and recognition of people. It is made up of three functional modules devoted to the detection, tracking and recognition of moving individuals, plus a supervision module.
Abstract:
This paper describes an Active Vision System whose design assumes a distinction between fast (reactive) and slow (background) processes. Fast processes need to operate in cycles with critical timeouts that may affect system stability, while slow processes, though necessary, do not compromise system stability if their execution is delayed. Based on this simple taxonomy, a control architecture has been proposed and a prototype implemented that is able to track people in real time with a robotic head while trying to identify the target. In this system, the tracking module is considered the reactive part of the system, while person identification is considered a background task.
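A minimal sketch of the fast/slow split described above, assuming a reactive tracking loop with a fixed cycle time and a slower identification task in a background thread; the module names, timings, and queue hand-off are placeholders, not the system's actual design.

```python
import queue
import threading
import time

frame_queue = queue.Queue(maxsize=1)

def identify_person():
    # Slow background process: delays here do not break the control loop.
    while True:
        frame = frame_queue.get()
        time.sleep(0.5)  # stand-in for heavy recognition work
        print("identity hypothesis updated")

threading.Thread(target=identify_person, daemon=True).start()

for _ in range(300):  # reactive loop, ~30 Hz cycle
    start = time.monotonic()
    frame = "captured frame"  # stand-in for image acquisition
    # Fast reactive work: track the target and command the robotic head here.
    if not frame_queue.full():
        frame_queue.put(frame)  # hand the frame to the slow process
    time.sleep(max(0.0, 1 / 30 - (time.monotonic() - start)))
```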
Abstract:
Active Vision Systems can be considered dynamical systems that close the loop around artificial visual perception, controlling camera parameters and motion, and also controlling processing in order to simplify, accelerate, and make visual perception more robust. Research and development in Active Vision Systems [Aloi87], [Bajc88] is a major area of interest in Computer Vision, mainly because of its potential application in scenarios where real-time performance is needed, such as robot navigation, surveillance, and visual inspection, among many others. Several systems have been developed in recent years using robotic heads for this purpose...
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Visual inputs to artificial and biological visual systems are often quantized: cameras accumulate photons from the visual world, and the brain receives action potentials from visual sensory neurons. Collecting more information quanta leads to a longer acquisition time and better performance. In many visual tasks, collecting a small number of quanta is sufficient to solve the task well. The ability to determine the right number of quanta is pivotal in situations where visual information is costly to obtain, such as photon-starved or time-critical environments. In these situations, conventional vision systems that always collect a fixed and large amount of information are infeasible. I develop a framework that judiciously determines the number of information quanta to observe based on the cost of observation and the requirement for accuracy. The framework implements the optimal speed versus accuracy tradeoff when two assumptions are met, namely that the task is fully specified probabilistically and constant over time. I also extend the framework to address scenarios that violate the assumptions. I deploy the framework to three recognition tasks: visual search (where both assumptions are satisfied), scotopic visual recognition (where the model is not specified), and visual discrimination with unknown stimulus onset (where the model is dynamic over time). Scotopic classification experiments suggest that the framework leads to dramatic improvement in photon-efficiency compared to conventional computer vision algorithms. Human psychophysics experiments confirmed that the framework provides a parsimonious and versatile explanation for human behavior under time pressure in both static and dynamic environments.
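The stopping rule described above is close in spirit to a sequential probability ratio test; the sketch below illustrates the idea under an assumed two-hypothesis Poisson model in which each quantum is a photon count. The rates, threshold, and simulated data are illustrative, not the thesis's model.

```python
import numpy as np

def sequential_decision(observations, rate0=0.5, rate1=2.0, alpha=0.01):
    """Return (decision, number of quanta used) via an SPRT-like stopping rule."""
    upper = np.log((1 - alpha) / alpha)  # evidence needed to accept H1
    lower = -upper                       # evidence needed to accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # Log-likelihood ratio contribution of one Poisson count.
        llr += x * np.log(rate1 / rate0) - (rate1 - rate0)
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    return int(llr > 0), len(observations)  # forced choice if evidence runs out

rng = np.random.default_rng(0)
decision, n_used = sequential_decision(rng.poisson(2.0, size=100))
print(decision, n_used)
```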
Abstract:
The first mechanical Automaton concept was found in a Chinese text written in the 3rd century BC, while Computer Vision was born in the late 1960s. Visual perception applied to machines (i.e. Machine Vision) is therefore a young and exciting alliance. When robots came in, the new field of Robotic Vision was born, and these terms began to be erroneously interchanged. In short, we can say that Machine Vision is an engineering domain concerned with the industrial use of vision. Robotic Vision, instead, is a research field that tries to incorporate robotics aspects into computer vision algorithms. Visual Servoing, for example, is one of the problems that cannot be solved by computer vision alone. Accordingly, a large part of this work deals with boosting popular Computer Vision techniques by exploiting robotics, e.g. the use of kinematics to localise a vision sensor mounted as the robot end-effector. The remainder of this work is dedicated to the counterpart, i.e. the use of computer vision to solve real robotic problems such as grasping objects or navigating while avoiding obstacles. A brief survey of the mapping data structures most widely used in robotics is presented, along with SkiMap, a novel sparse data structure created both for robotic mapping and as a general-purpose 3D spatial index. Several approaches to implementing Object Detection and Manipulation, exploiting the aforementioned mapping strategies, are then proposed, along with a completely new Machine Teaching facility intended to simplify the training procedure of modern Deep Learning networks.
Abstract:
Correctness of the information gathered in production environments is an essential part of quality assurance processes in many industries; this task is often performed by human workers who take visual annotations at various steps of the production flow. Depending on the task performed, the correlation between where exactly the information is gathered and what it represents is more often than not lost in the process. The lack of labeled data places a severe limit on the application of deep neural networks to object detection tasks; moreover, supervised training of deep models requires a large amount of data to be available. Building an adequately large collection of labeled images through classic data-annotation techniques is an exhausting and costly task, and not suitable for every scenario. A possible solution is to generate synthetic data that replicates the real data and use it to fine-tune a deep neural network trained on one or more source domains to a different target domain. The purpose of this thesis is to show a real case scenario where the provided data were both scarce and missing the required annotations. A possible approach is then presented in which synthetic data were generated to address those issues while serving as a training base for deep neural networks for object detection capable of working on images taken in production-like environments. Lastly, performance is compared across different types of synthetic data and across the convolutional neural networks used as backbones for the model.
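As a rough illustration of the synthetic-data idea, the sketch below pastes a cropped object onto a background image and records the bounding box as a training label; the file paths and the naive compositing are assumptions for illustration, not the thesis's generation pipeline.

```python
import random
import cv2

object_img = cv2.imread("object_crop.png")            # placeholder object crop
background = cv2.imread("production_background.png")  # placeholder scene

h, w = object_img.shape[:2]
H, W = background.shape[:2]
x, y = random.randint(0, W - w), random.randint(0, H - h)

synthetic = background.copy()
synthetic[y:y + h, x:x + w] = object_img  # naive paste; blending omitted
label = (x, y, x + w, y + h)              # bounding box for detector training
cv2.imwrite("synthetic_000.png", synthetic)
```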
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Today several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named Enhanced Independent Component Analysis Mixture Model (EICAMM), built by proposing some modifications to the Independent Component Analysis Mixture Model (ICAMM). These improvements were proposed by considering some of the model's limitations and by analyzing how it could be made more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, the EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposals presented herein.
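The edge-oriented part of the pre-processing stage can be sketched as follows; Sparse Code Shrinkage is not reproduced here, so a Gaussian blur stands in for the denoising step, and the input path is a placeholder.

```python
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
denoised = cv2.GaussianBlur(img, (5, 5), 1.0)         # stand-in for SCS denoising

# Sobel gradients in x and y, combined into an edge-magnitude image.
gx = cv2.Sobel(denoised, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(denoised, cv2.CV_32F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
```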
Abstract:
This paper proposes a novel computer vision approach that processes video sequences of people walking and then recognises those people by their gait. Human motion carries different information that can be analysed in various ways. The skeleton carries motion information about human joints, and the silhouette carries information about boundary motion of the human body. Moreover, binary and gray-level images contain different information about human movements. This work proposes to recover these different kinds of information to interpret the global motion of the human body based on four different segmented image models, using a fusion model to improve classification. Our proposed method considers the set of the segmented frames of each individual as a distinct class and each frame as an object of this class. The methodology applies background extraction using the Gaussian Mixture Model (GMM), a scale reduction based on the Wavelet Transform (WT) and feature extraction by Principal Component Analysis (PCA). We propose four new schemas for motion information capture: the Silhouette-Gray-Wavelet model (SGW) captures motion based on gray-level variations; the Silhouette-Binary-Wavelet model (SBW) captures motion based on binary information; the Silhouette-Edge-Binary model (SEW) captures motion based on edge information and the Silhouette-Skeleton-Wavelet model (SSW) captures motion based on skeleton movement. The classification rates obtained separately from these four different models are then merged using a new proposed fusion technique. The results suggest excellent performance in terms of recognising people by their gait.
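A minimal sketch of the front end of such a pipeline, assuming OpenCV's GMM background subtractor for silhouette extraction, image-pyramid downsampling as a stand-in for the wavelet scale reduction, and PCA over per-frame silhouette vectors; the video path and parameter values are illustrative only.

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
features = []

cap = cv2.VideoCapture("walking.avi")  # placeholder video path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    silhouette = backsub.apply(frame)              # GMM foreground mask
    small = cv2.pyrDown(cv2.pyrDown(silhouette))   # coarse scale reduction
    features.append(small.flatten().astype(np.float32))
cap.release()

# PCA over per-frame silhouette vectors (each frame is one object of the class).
data = np.vstack(features)
mean, eigvecs = cv2.PCACompute(data, mean=None, maxComponents=16)
projected = cv2.PCAProject(data, mean, eigvecs)
```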
Abstract:
Extracting human postural information from video sequences has proved to be a difficult research problem. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, in both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach that is commonly used in image segmentation problems but is currently not widely used in tracking applications.
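For reference, a generic particle-filter step of the kind such trackers build on might look like the sketch below; the two-dimensional state and Gaussian observation likelihood are illustrative stand-ins for a full body-pose model and the cost-path likelihood discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, observation, motion_std=2.0, obs_std=5.0):
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by a Gaussian observation likelihood.
    dist2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * dist2 / obs_std ** 2)
    weights /= weights.sum()
    # Resample proportionally to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(0, 100, size=(500, 2))   # initial particle cloud
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights, np.array([50.0, 60.0]))
```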
Abstract:
The reconstruction of a complex scene from multiple images is a fundamental problem in the field of computer vision. Volumetric methods have proven to be a strong alternative to traditional correspondence-based methods due to their flexible visibility models. In this paper we analyse existing methods for volumetric reconstruction and identify three key properties of voxel colouring algorithms: a water-tight surface model, a monotonic carving order, and causality. We present a new Voxel Colouring algorithm which embeds all reconstructions of a scene into a single output. While modelling exact visibility for arbitrary camera locations, Embedded Voxel Colouring removes the need for the a priori threshold selection present in previous work. An efficient implementation is given, along with results demonstrating the advantages of a posteriori threshold selection.
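The a posteriori threshold selection can be illustrated with a toy sketch: if a photo-consistency score is stored for every voxel, any threshold yields a reconstruction without re-running the carving. The per-voxel colour samples below are synthetic, and visibility handling is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-voxel colour samples: samples[v, c, :] is the RGB colour of
# voxel v seen from camera c. Surface voxels are consistent across cameras,
# non-surface voxels are not (a real system would project voxels into images).
consistent = rng.uniform(0, 255, (500, 1, 3)) + rng.normal(0, 5, (500, 8, 3))
inconsistent = rng.uniform(0, 255, (500, 8, 3))
samples = np.concatenate([consistent, inconsistent]).astype(np.float32)

# Photo-consistency score: colour variance across cameras, stored per voxel.
scores = samples.var(axis=1).mean(axis=1)

# A posteriori threshold selection: sweep thresholds over the stored scores
# without redoing any carving.
for thresh in (50.0, 500.0, 5000.0):
    print(f"threshold {thresh}: {int((scores < thresh).sum())} voxels kept")
```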
Abstract:
This work discusses the use of optical flow to generate the sensory information a mobile robot needs to react to the presence of obstacles when navigating in a non-structured environment. A sensing system based on optical flow and time-to-collision calculation is proposed and tested here, and it satisfies two important requirements. The first is that all computations are performed onboard the robot, in spite of the limited computational capability available. The second is that the algorithms for optical flow and time-to-collision calculation are fast enough to give the mobile robot the capability of reacting to any environmental change in real time. Results of real experiments are presented in which the proposed sensing system is used as the only source of sensory data to guide a mobile robot in avoiding obstacles while wandering around, and the analysis of these results validates the proposed sensing system.
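A minimal sketch of the sensing idea described above, assuming dense Farnebäck optical flow between two consecutive frames and a crude time-to-collision estimate from the divergence (expansion) of the flow field; the frame paths, frame rate, and the TTC approximation are illustrative and are not the paper's onboard implementation.

```python
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow (Farnebäck); parameters are typical defaults.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Divergence of the flow field approximates the rate of image expansion;
# for a frame interval dt, TTC ~ 2 * dt / divergence (looming approximation).
fx, fy = flow[..., 0], flow[..., 1]
divergence = np.gradient(fx, axis=1) + np.gradient(fy, axis=0)
expansion = float(np.mean(divergence))   # mean expansion per frame
dt = 1.0 / 30.0                          # assumed frame interval in seconds
ttc = 2.0 * dt / expansion if expansion > 1e-6 else float("inf")
```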