33 results for Visual robot control
Abstract:
In this article we describe a semantic localization dataset for indoor environments named ViDRILO. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames in the dataset are annotated not only with the semantic category of the scene, but also with the presence or absence of a list of predefined objects appearing in the scene. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames, in conjunction with the annotation scheme, makes this dataset different from existing ones. The ViDRILO dataset is released for use as a benchmark for problems such as multimodal place classification and object recognition, 3D reconstruction, or point cloud data compression.
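As an illustration of the annotation scheme described above, one frame could be represented as follows in Python; the file layout and the object list are hypothetical, since ViDRILO ships with its own tools and format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ViDRILOFrame:
    """One dataset frame: a point cloud plus a perspective image,
    annotated with the scene category and per-object presence flags."""
    cloud_path: str              # path to the point cloud file (e.g. a PCD file)
    image_path: str              # path to the perspective image
    scene_category: str          # semantic category of the scene, e.g. "office"
    objects_present: List[bool]  # one flag per predefined object class

# Illustrative subset only; the real dataset defines its own object list.
OBJECT_CLASSES = ["bench", "computer", "extinguisher", "printer"]

def has_object(frame: ViDRILOFrame, name: str) -> bool:
    """Check whether a predefined object was annotated as present."""
    return frame.objects_present[OBJECT_CLASSES.index(name)]
```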
Abstract:
Virtual Worlds Generator is a grammatical model proposed to define virtual worlds. It integrates the diversity of sensors and interaction devices, multimodality and a virtual simulation system. Its grammar allows the scenes of the virtual world to be defined and abstracted as strings of symbols, independently of the hardware used to represent the world or to interact with it. A case study is presented to explain how to use the proposed model to formalize a robot navigation system with multimodal perception and a hybrid control scheme for the robot.
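To make the idea of abstracting scenes into symbol strings concrete, here is a toy context-free grammar in Python; the symbols and productions are invented for illustration and are not the actual VWG formalism:

```python
import random

# Toy grammar: a scene derives into a string of abstract terminal symbols,
# whose device-specific meaning is bound only at interpretation time.
GRAMMAR = {
    "SCENE": [["ROBOT", "WORLD"]],
    "ROBOT": [["move"], ["turn"], ["sense", "ROBOT"]],
    "WORLD": [["obstacle"], ["free"], ["obstacle", "WORLD"]],
}

def derive(symbol: str) -> list:
    """Expand a non-terminal into a string (list) of terminal symbols."""
    if symbol not in GRAMMAR:        # terminal: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    out = []
    for s in production:
        out.extend(derive(s))
    return out

print(derive("SCENE"))  # e.g. ['sense', 'move', 'obstacle', 'free']
```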
Abstract:
Paper presented at the X Workshop of Physical Agents, Cáceres, 10-11 September 2009.
Abstract:
This paper analyzes the learning experiences and opinions of a group of undergraduate students in a Robotics course. The contents of this course were taught as a set of seminars. In each seminar, the students acquired interdisciplinary knowledge of computer science, control engineering, electronics and other fields related to Robotics. The aim of the course is for the students to design and implement their own custom robotic solution for a series of tests planned by the teachers. These tests measure the behavior and mechatronic features of the students' robots. Finally, the students' robots face each other in several competitions. In this paper, the low-cost robotic architecture used by the students, the contents of the course, the tests used to compare the students' solutions, and the students' opinions are discussed at length.
Abstract:
In the search for optimal modelling and quantification of the instrumental and visual correlation of goniochromatic materials (mainly paints and plastics) in the quality-control processes typical of the automotive sector, this paper presents a comparison of the performance of several current types of colour-measurement instruments and lighting booths. The multi-angle gonio-spectrophotometers analysed were the Datacolor FX10®, X-Rite MA98® and BYK-mac®, and the directional lighting booths were the gonio-vision-box® and byko-spectra effect®. The optical properties evaluated, essentially photometric and colorimetric, were: measurement geometries, light source, observation distance and viewing direction to the sample. The results showed that this instrumental and visual correlation is currently not optimal, owing to misadjustments in the booths.
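As a minimal illustration of the kind of instrumental colour comparison that underlies such quality control, the sketch below computes the classic CIE76 colour difference between two CIELAB readings; the sample values are invented, and industrial practice often uses more advanced formulas such as CIEDE2000:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB triples (L*, a*, b*)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Instrument reading vs. reference value for a hypothetical metallic paint sample
reference = (52.0, 1.3, -4.2)
measured  = (51.4, 1.9, -3.8)
print(f"dE*ab = {delta_e_ab(reference, measured):.2f}")  # 0.94
```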
Abstract:
For many years, humans and machines have shared the same physical space. To facilitate their interaction with humans and their social integration, and to obtain more rational behavior, it has been sought that robots demonstrate human-like behavior. This requires understanding how human behavior is generated, examining which tasks are performed and how they relate to one another, before implementing them in robots. In this paper, we propose a model of competencies based on the human neuroregulator system for the analysis and decomposition of behavior into functional modules. Using this model allows the tasks to be implemented in a robot displaying human-like behavior to be separated and located. As an example, we show the application of the model to autonomous movement in unfamiliar environments and its implementation in various simulated and real robots with different physical configurations and devices of different natures. The main result of this work is a model of competencies that is being used to build robotic systems capable of displaying human-like behaviors while taking into account the specific characteristics of robots.
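A rough sketch of what decomposing behavior into functional modules can look like in code; the module names and the priority-based composition rule below are hypothetical illustrations, not the paper's competency model:

```python
# Each competency is a module mapping a percept to a (partial) command.
class Module:
    def act(self, percept: dict) -> dict:
        raise NotImplementedError

class GoalSeeking(Module):
    def act(self, percept):
        return {"forward": 0.2}          # higher-level competency

class ObstacleAvoidance(Module):
    def act(self, percept):
        if percept.get("obstacle_distance", float("inf")) < 0.5:
            return {"turn": 90}          # reflex-like low-level competency
        return {}

def behave(modules, percept):
    """Compose competencies: later (lower-level) modules override earlier ones."""
    command = {}
    for m in modules:
        command.update(m.act(percept))
    return command

print(behave([GoalSeeking(), ObstacleAvoidance()], {"obstacle_distance": 0.3}))
# {'forward': 0.2, 'turn': 90}
```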
Abstract:
Paper submitted to the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan, August 29-31, 2012.
Abstract:
Virtual Worlds Generator is a grammatical model proposed to define virtual worlds. It integrates the diversity of sensors and interaction devices, multimodality and a virtual simulation system. Its grammar allows the scenes of the virtual world to be defined and abstracted as strings of symbols, independently of the hardware used to represent the world or to interact with it. A case study is presented to explain how to use the proposed model to formalize a robot navigation system with multimodal perception and a hybrid control scheme for the robot. The result is an instance of the model grammar that implements the robotic system and is independent of the sensing devices used for perception and interaction. In conclusion, the Virtual Worlds Generator adds value to the simulation of virtual worlds, since the definition can be done formally and independently of the peculiarities of the supporting devices.
Abstract:
Parkinson disease is mainly characterized by the degeneration of dopaminergic neurons in the central nervous system, including the retina. Different interrelated molecular mechanisms underlying Parkinson disease-associated neuronal death have been put forward in the brain, including oxidative stress and mitochondrial dysfunction. Systemic injection of the proneurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) into monkeys elicits the appearance of a parkinsonian syndrome, including morphological and functional impairments in the retina. However, the intracellular events leading to the derangement of dopaminergic and other retinal neurons in MPTP-treated animal models have not been investigated so far. Here we have used a comparative proteomics approach to identify proteins differentially expressed in the retina of MPTP-treated monkeys. Proteins were solubilized from the neural retinas of control and MPTP-treated animals, labelled separately with two different cyanine fluorophores and run pairwise on 2D DIGE gels. Out of >700 protein spots resolved and quantified, 36 were found to exhibit statistically significant differences in their expression levels, of at least ±1.4-fold, in the parkinsonian monkey retina compared with controls. Most of these spots were excised from preparative 2D gels, trypsinized and subjected to MALDI-TOF MS and LC-MS/MS analyses. The data obtained were used for protein sequence database interrogation, and 15 different proteins were successfully identified, of which 13 were underexpressed and 2 overexpressed. These proteins are involved in key cellular functional pathways such as glycolysis and mitochondrial electron transport, neuronal protection against stress and survival, and phototransduction processes. These functional categories underscore that alterations in energy metabolism, neuroprotective mechanisms and signal transduction are involved in MPTP-induced neuronal degeneration in the retina, similar to the mechanisms thought to underlie neuronal death in the Parkinson’s diseased brain and in neurodegenerative diseases of the retina proper.
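The selection criterion described above (spots changing by at least ±1.4-fold with statistical significance) can be sketched as a simple filter; the spot table and the p < 0.05 cut-off are assumptions for illustration:

```python
# Hypothetical spot quantification table; real 2D DIGE workflows produce
# this kind of output from dedicated analysis software.
spots = [
    {"id": 101, "fold_change": -1.8, "p_value": 0.01},
    {"id": 102, "fold_change":  1.2, "p_value": 0.04},
    {"id": 103, "fold_change":  1.6, "p_value": 0.03},
    {"id": 104, "fold_change": -1.5, "p_value": 0.20},
]

# Keep spots with |fold change| >= 1.4 (from the abstract) that also pass
# an assumed significance threshold of p < 0.05.
significant = [
    s for s in spots
    if abs(s["fold_change"]) >= 1.4 and s["p_value"] < 0.05
]
print([s["id"] for s in significant])  # [101, 103]
```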
Abstract:
Humans and machines have shared the same physical space for many years. To share that space, we want robots to behave like human beings: this will facilitate their social integration and their interaction with humans, and produce intelligent behavior. To achieve this goal, we need to understand how human behavior is generated and analyze the tasks carried out by our nervous system and how they relate to one another. Then, and only then, can we implement these mechanisms in robots. In this study, we propose a model of competencies based on the human neuroregulator system for the analysis and decomposition of behavior into functional modules. Using this model allows the tasks to be implemented in a robot displaying human-like behavior to be separated and located. As an example, we show the application of the model to autonomous movement in unfamiliar environments and its implementation in various simulated and real robots with different physical configurations and devices of different natures. The main result of this study is a model of competencies that is being used to build robotic systems capable of displaying human-like behaviors while taking into account the specific characteristics of robots.
Abstract:
Grady distinguishes two main types of metaphor in order to provide a solution to the controversies stemming from the conceptual theory of metaphor: correlation-based metaphors and resemblance metaphors. In correlation-based metaphors, the source domain is sensory-motor, while the target domain is not. By contrast, resemblance metaphors originate in a physical or conceptual perception common to both domains, through the association of concepts with shared features. Primary metaphors are the minimal units of correlation-based metaphors; they are inherent in human nature and result from the nature of our brain, our body and the world we inhabit. We acquire them automatically and cannot avoid them. Furthermore, as corporal experiences are universal, so are primary metaphors. In this paper, I argue that primary metaphors manifest themselves visually through scene-setting techniques such as composition, framing, camera movement or lighting. Film-makers can use the different aspects of mise-en-scène metaphorically to express abstract notions such as evil, importance, control, relationship or confusion. Such visual manifestations, as also occurs with their verbal equivalents, frequently go unnoticed or have been used so often that they have become clichés. The important thing to bear in mind, however, is that their origin lies in a primary metaphor, and it is this origin that has made these kinds of film-making strategies so expressively successful.
Abstract:
Depth perception is essential in many robot manipulation, visual control and navigation tasks. Time-of-Flight (ToF) cameras generate range images that provide depth measurements in real time. However, the distance computed by these cameras depends strongly on the integration time configured in the sensor and on the modulation frequency used by their built-in illumination system. This article presents a methodology for the adaptive adjustment of the integration time, together with an experimental analysis of the behaviour of a ToF camera when the modulation frequency is modified. The method has been successfully tested in visual control algorithms with an 'eye-in-hand' architecture in which the sensor system consists of a ToF camera. Moreover, the same methodology can be applied in other working scenarios.
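A minimal sketch of what closed-loop adjustment of the integration time might look like, assuming a hypothetical camera API with a normalized mean-amplitude reading; real ToF SDKs expose their own, device-specific calls, and this is not the paper's exact method:

```python
def adapt_integration_time(camera, target_amp=0.6, gain=0.5,
                           t_min=50, t_max=2000):
    """Steer the mean signal amplitude toward a target by scaling the
    integration time (microseconds): too low an amplitude gives noisy
    depth, while saturation gives invalid depth.

    `camera.mean_amplitude()` and `camera.integration_time` are assumed
    attributes of a hypothetical driver, not a real SDK.
    """
    amp = camera.mean_amplitude()            # normalized 0..1 (assumed)
    t = camera.integration_time
    t_new = t * (1.0 + gain * (target_amp - amp) / target_amp)
    camera.integration_time = max(t_min, min(t_max, t_new))
    return camera.integration_time
```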
Abstract:
This project aims to design an embedded system capable of performing image processing and guidance of a hexacopter. The hexacopter will carry on board a camera, as well as the batteries and all the hardware needed to process the visual information obtained and to implement the controller required for its guidance.
OpenCV is a library of image-processing primitives that allows state-of-the-art Computer Vision algorithms to be built. OpenCV was originally developed by Intel in 1999 to showcase the processing capabilities of Intel processors, so most of the library is optimized to run on them, including the MMX and SSE extensions. http://en.wikipedia.org/wiki/OpenCV It is currently widely used by both the scientific community and industry to develop new algorithms for desktop computers and, above all, for embedded systems (mobile robots, smart cameras, inspection systems, surveillance systems, etc.). Owing to its great popularity, the library has been compiled for several traditional operating systems (Windows, Linux, Mac), for mobile devices (Android, iOS) and for embedded systems based on different processor families (mainly ARM).
- iPhone port: http://www.eosgarden.com/en/opensource/opencv-ios/overview/
- Android port: http://opencv.willowgarage.com/wiki/AndroidExperimental
An example of an embedded platform is the Zedboard (http://www.zedboard.org/), which represents the state of the art in embedded devices based on the ARM Cortex architecture. The board includes a dual-core Cortex-A9 processor together with a large number of peripherals and options for connecting third-party expansion boards, which makes it possible to develop applications in very different fields of Computer Vision.
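As a quick smoke test for an OpenCV build on such an embedded board, one might grab a frame and run a standard edge-detection pass (this example uses the Python bindings; camera index 0 is an assumption):

```python
import cv2

# Grab one frame from the first camera and run a Canny edge pass.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("no frame from camera 0")

gray  = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray  = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise first
edges = cv2.Canny(gray, 50, 150)            # hysteresis thresholds
cv2.imwrite("edges.png", edges)
```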
Abstract:
New low-cost sensors and new open, free libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, and human detection and/or gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and track the fingers is proposed. This new method is based on point clouds from range (RGBD) images. It does not require visual markers, camera calibration, knowledge of the environment, or complex, expensive acquisition systems. Furthermore, the method has been implemented to create a human interface for moving a robot hand: the human hand is recognized and the movement of the fingers is analyzed, after which it is imitated by a Barret hand using communication events programmed with ROS.
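A crude first step toward such marker-less hand detection is to keep only the points within a depth band in front of the RGBD sensor; the band limits below are assumptions, and the paper's actual method goes much further (hand recognition and per-finger tracking):

```python
import numpy as np

def segment_hand(points: np.ndarray, z_near=0.3, z_far=0.8) -> np.ndarray:
    """Keep candidate hand points from an (N, 3) XYZ cloud (depths in
    metres) by retaining only those inside an assumed depth band where
    the user's hand is expected to appear."""
    z = points[:, 2]
    mask = (z > z_near) & (z < z_far)
    return points[mask]

# Stand-in for a real RGBD frame: random points up to 2 m from the sensor.
cloud = np.random.uniform(0.0, 2.0, size=(10000, 3))
hand_candidates = segment_hand(cloud)
print(hand_candidates.shape)
```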
Abstract:
This paper presents a method for the fast calculation of a robot’s egomotion using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). The method uses optical flow to determine whether the robot has undergone a movement. If so, visual features that do not satisfy several criteria are deleted, and egomotion is then calculated. The proposed method thus improves the efficiency of the whole process, because not all the data is processed. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. Additionally, a study of different visual detectors and descriptors has been conducted to identify which of them are more suitable for the SLAM problem. Finally, a navigation method using the map obtained from the SLAM solution is described.
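The movement test described above can be sketched with dense optical flow in OpenCV: threshold the mean flow magnitude between consecutive frames. The Farneback parameters and the threshold are assumptions to be tuned per camera and frame rate; the paper does not specify its exact criterion:

```python
import cv2
import numpy as np

def robot_moved(prev_gray, cur_gray, threshold=0.5):
    """Decide whether the robot moved between two grayscale frames by
    thresholding the mean dense optical-flow magnitude (pixels/frame)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel flow length
    return float(magnitude.mean()) > threshold
```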