956 results for Robot vision systems
Abstract:
Automatic detection systems do not perform as well as human observers, even on simple detection tasks. A potential solution to this problem is training vision systems on appropriate regions of interest (ROIs), rather than on predefined and arbitrarily selected regions. Here we focus on detecting pedestrians in static scenes. Our aim is to answer the following question: Can automatic vision systems for pedestrian detection be improved by training them on perceptually defined ROIs?
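As an illustration of the ROI idea only (not the study's training protocol), the Python sketch below runs a stock OpenCV HOG+SVM pedestrian detector inside a chosen region of interest instead of over the whole frame; the roi argument and all parameter values are illustrative assumptions.

    # Illustrative sketch only: restrict a stock HOG+SVM pedestrian detector
    # to a region of interest; coordinates and parameters are assumptions.
    import cv2

    def detect_pedestrians_in_roi(frame, roi):
        """frame: BGR image; roi: (x, y, w, h) region to search."""
        x, y, w, h = roi
        crop = frame[y:y + h, x:x + w]

        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        # Detections come back in crop coordinates; shift them to frame coordinates.
        rects, _weights = hog.detectMultiScale(crop, winStride=(8, 8),
                                               padding=(8, 8), scale=1.05)
        return [(rx + x, ry + y, rw, rh) for (rx, ry, rw, rh) in rects]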
Abstract:
This paper describes a face detection system that goes beyond traditional approaches designed for still images. The detector is applied in a video-stream context and therefore exploits a key property of video streams, namely temporal coherence. The system builds a feature-based model for each detected face and uses that model to search for the face in the next frame. For video stream processing, the results outperform the Rowley-Kanade and Viola-Jones solutions, providing eye and face data in reduced time with a notable correct detection rate.
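The paper's feature-based face model is not reproduced here; as a hedged sketch of the temporal-coherence idea, the snippet below detects a face with a standard Viola-Jones cascade and, in the following frame, searches only a window around the previous detection. The margin value and window logic are assumptions.

    # Hedged sketch of temporal coherence with a stock Viola-Jones cascade,
    # not the paper's feature-based face model.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_with_coherence(gray, prev_box=None, margin=40):
        """gray: grayscale frame; prev_box: (x, y, w, h) from the previous frame."""
        if prev_box is None:
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            return list(faces)
        x, y, w, h = prev_box
        x0, y0 = max(0, x - margin), max(0, y - margin)
        window = gray[y0:y0 + h + 2 * margin, x0:x0 + w + 2 * margin]
        faces = cascade.detectMultiScale(window, scaleFactor=1.1, minNeighbors=5)
        # Map window-relative detections back to full-frame coordinates.
        return [(fx + x0, fy + y0, fw, fh) for (fx, fy, fw, fh) in faces]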
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection Using Affine Homography. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG: Anais... do CBA 2008.
Abstract:
The surface color of food is the first quality parameter assessed by consumers and is critical for product acceptance, so proper color measurement is an important tool. This research evaluated the color variation of whitemouth croaker (Micropogonias furnieri) stored on ice for 16 days; the lightness (L*), chromatic value a*, chromatic value b*, total color change (ΔE), and chroma (C*) parameters were obtained with a computer vision system and with a Konica Minolta CR-400 colorimeter. The freshness of the croaker, based on changes in gill color, was evaluated using a computer vision system. Myoglobin oxidation in black drum (Pogonias cromis) fillets was also modeled, using the red parameters (a* value and R). A computer vision system was used to record the color changes over 57.6 h, and the chemical analysis was carried out by determining the metmyoglobin concentration (%). In the color evaluation of croaker stored on ice, the computer vision system showed significant differences for L*, a*, ΔE, and C*, while the colorimeter showed significant differences for L* and ΔE; the only parameter that showed no difference between instruments was ΔE. The correlation coefficient between the color parameters (L*, a*, and b*) of the gills of croaker stored on ice and the storage time was 0.9747. The computer vision system recorded the color changes in black drum fillets, and the changes were modeled with an exponential model. The computer vision system proved to be more sensitive to color changes during the color evaluation of croaker stored on ice. It is possible to predict the storage time of croaker on ice from the change in gill color. It was likewise possible to model the myoglobin variation in black drum fillets by using computer vision systems to record those changes. Computer vision systems have a great capacity to record color changes and can be used to evaluate foods on the basis of color.
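For reference, the reported ΔE and C* values presumably follow the standard CIE 1976 definitions computed from the L*, a*, b* coordinates:

    \Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}},
    \qquad
    C^{*}_{ab} = \sqrt{(a^{*})^{2} + (b^{*})^{2}}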
Abstract:
In: RoboCup 2007: Robot Soccer World Cup XI
Abstract:
This paper presents a vision-based localization approach for an underwater robot in a structured environment. The system is based on a coded pattern placed on the bottom of a water tank and an onboard down-looking camera. Its main features are absolute, map-based localization, landmark detection and tracking, and real-time computation (12.5 Hz). The proposed system provides the three-dimensional position and orientation of the vehicle along with its velocity. The accuracy of the drift-free estimates is very high, allowing them to be used as feedback measurements for a velocity-based low-level controller. The paper details the localization algorithm and illustrates the accuracy of the system with graphical results.
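The paper's exact localization algorithm is not reproduced here; as a minimal sketch under the stated setup (known landmark positions on the coded pattern, a calibrated down-looking camera), the pose can be recovered with a PnP solver. Function and variable names below are illustrative.

    # Minimal sketch, not the paper's algorithm: recover camera (vehicle) pose
    # from known 3-D landmark positions on the coded pattern and their detected
    # pixel coordinates, using OpenCV's PnP solver.
    import cv2
    import numpy as np

    def estimate_pose(object_points, image_points, camera_matrix, dist_coeffs):
        """object_points: Nx3 landmark positions (m); image_points: Nx2 pixels."""
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(object_points, dtype=np.float32),
            np.asarray(image_points, dtype=np.float32),
            camera_matrix, dist_coeffs)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)      # world-to-camera rotation
        position = -R.T @ tvec          # camera position in the pattern frame
        return position.ravel(), R.T    # position and camera-to-world orientation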
Abstract:
This project applies computer vision and image processing techniques, built around an omnidirectional vision system, to agricultural mobile robots (AMR) for trajectory navigation and localization problems. To carry out this task, computational methods based on the JSEG algorithm were used to classify and characterize such problems, together with Artificial Neural Networks (ANN) for pattern recognition. This made it possible to run simulations and analyze the performance of the JSEG image segmentation technique on Matlab/Octave platforms, along with a customized back-propagation algorithm and statistical methods in a Simulink environment. With these procedures in place, it was possible to classify and characterize the HSV color space segments and to recognize patterns, yielding reasonably accurate results.
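JSEG itself is not available in common libraries, so the sketch below substitutes a simple HSV threshold segmentation and a small back-propagation network (scikit-learn's MLP) to illustrate the segment-then-classify pipeline; the HSV range, features, and labels are assumptions, not values from the project.

    # Illustrative stand-in for the JSEG + ANN pipeline: HSV threshold
    # segmentation plus a small back-propagation classifier. All ranges,
    # features, and labels are assumptions.
    import cv2
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def mean_hsv_of_segment(bgr_image, lower=(35, 40, 40), upper=(85, 255, 255)):
        """Mean HSV of pixels inside an assumed vegetation-like range."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
        if cv2.countNonZero(mask) == 0:
            return np.zeros(3)
        return hsv[mask > 0].mean(axis=0)

    # Back-propagation classifier over per-segment mean-HSV features.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    # clf.fit(features, labels)  # features: Nx3 array of mean-HSV rows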
Abstract:
This paper presents a completely autonomous solution for participating in the Indoor Challenge of the 2013 International Micro Air Vehicle Competition (IMAV 2013). Our proposal is a multi-robot system with no centralized coordination whose robotic agents share their position estimates. The capability of each agent to navigate while avoiding collisions is a consequence of the resulting emergent behavior. Each agent consists of a ground station, running an instance of the proposed architecture, that communicates over WiFi with an AR Drone 2.0 quadrotor. Visual markers are employed to sense and map obstacles and to improve the pose estimation based on Inertial Measurement Unit (IMU) and ground optical flow data. Based on our architecture, each robotic agent can navigate while avoiding obstacles and the other members of the multi-robot system. The solution is demonstrated and the achieved navigation performance is evaluated through experimental flights. This work also analyzes the capabilities of the presented solution in simulated flights of the IMAV 2013 Indoor Challenge. The CVG UPM team's performance was awarded the First Prize in the Indoor Autonomy Challenge of the IMAV 2013 competition.
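The architecture itself is not reproduced here; the snippet below is only a minimal sketch of the fusion idea (dead reckoning from IMU/optical-flow velocity, corrected toward the absolute pose implied by a detected marker). The filter structure and gain are assumptions, not the IMAV system's estimator.

    # Minimal sketch of marker-corrected dead reckoning; gain and structure
    # are assumptions.
    import numpy as np

    class PoseFilter:
        def __init__(self, alpha=0.2):
            self.alpha = alpha             # weight given to marker corrections
            self.position = np.zeros(3)

        def predict(self, velocity, dt):
            """Integrate IMU / optical-flow velocity (m/s) over dt seconds."""
            self.position += np.asarray(velocity, dtype=float) * dt

        def correct(self, marker_position):
            """Blend toward the absolute position implied by a visual marker."""
            marker_position = np.asarray(marker_position, dtype=float)
            self.position += self.alpha * (marker_position - self.position)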
Abstract:
Computer vision stands as the most comprehensive way of perceiving the surrounding environment. Accordingly, this study presents a method for obtaining, from a common webcam, the environment information needed to guide a differential-drive mobile robot along a roadway-like path.
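The abstract does not detail the method; as a hedged sketch of the general idea, the snippet below segments a road-like region in a webcam frame, measures the lateral offset of its centroid, and converts that offset into differential-drive wheel speeds. The thresholding choice and gains are assumptions.

    # Hedged sketch: segment a road-like region, then steer a differential-drive
    # robot proportionally to the centroid's lateral offset.
    import cv2

    def road_steering(bgr_frame, base_speed=0.3, k_turn=0.002):
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        # Assume the road is the darker, roughly uniform region (Otsu threshold).
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return 0.0, 0.0                    # no road region found: stop
        cx = m["m10"] / m["m00"]               # centroid column of the region
        error = cx - bgr_frame.shape[1] / 2.0  # lateral offset in pixels

        left = base_speed + k_turn * error     # differential wheel speeds
        right = base_speed - k_turn * error
        return left, right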
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the Master's degree in Electrical and Computer Engineering.
Abstract:
13th International Conference on Autonomous Robot Systems (Robotica), 2013