44 results for Robot vision systems
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, evaluating them on more than 0.5 million synthetic data samples. Results on real data are also given.
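As a purely illustrative complement to the discrete epipolar formulation mentioned above (not taken from any of the surveyed methods), the sketch below estimates the relative rotation and translation direction between two views with OpenCV's essential-matrix routine; the matched points, intrinsics and noise levels are invented placeholders.

```python
import numpy as np
import cv2

# Placeholder matches; in practice these would come from feature tracking
# between two consecutive frames of the moving camera.
pts1 = np.random.rand(50, 2) * 640                         # matches in the first image
pts2 = pts1 + np.array([5.0, 0.0]) + np.random.randn(50, 2) * 0.3  # matches in the second image
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                             # assumed camera intrinsics

# Essential matrix from the discrete epipolar constraint x2^T E x1 = 0,
# estimated robustly with RANSAC.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)

# Decompose E into the rotation R and translation direction t (up to scale),
# keeping the solution that places the points in front of both cameras.
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("estimated rotation:\n", R, "\ntranslation direction:", t.ravel())
```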
Abstract:
We present a computer vision system that combines omnidirectional vision with structured light with the aim of obtaining depth information over a 360-degree field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is assessed by means of experimental results. The proposed sensor provides useful information for robot navigation applications, pipe inspection, 3D scene modelling, etc.
Abstract:
This paper focuses on the mobile robot platform PRIM (platform robot information multimedia). The robot was built to cover two main needs of our group: on one hand, the need for a fully open mobile robotic platform useful for the teaching and research activity of our school community; on the other hand, the idea of introducing an ethical product that could serve as a mobile multimedia information point. The paper describes how the system is built and explains the philosophy behind this work. The navigation strategies and the sensor fusion, in which the machine vision system plays the most important role, are oriented towards goal achievement and are the key to the behaviour of the robot.
Abstract:
Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is therefore object-dependent, since it relies on the object's appearance. As a result, the positioning task cannot be performed in the presence of non-textured objects or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation inherent to current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object's appearance.
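As a purely illustrative complement (the paper's own control law is not reproduced here), a classical image-based visual servoing loop maps the error between current and desired features to a camera velocity through the pseudo-inverse of the interaction matrix; the feature vectors, interaction matrix and gain below are hypothetical.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classical image-based visual servoing law v = -lambda * L^+ (s - s*).

    s      : current visual features (e.g. imaged pattern points), shape (k,)
    s_star : desired features at the goal pose, shape (k,)
    L      : interaction (image Jacobian) matrix, shape (k, 6)
    Returns the 6-DOF camera velocity screw (vx, vy, vz, wx, wy, wz).
    """
    error = s - s_star
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical example with 4 point features (8 image coordinates).
s = np.random.rand(8)
s_star = np.random.rand(8)
L = np.random.rand(8, 6)
print("camera velocity command:", ibvs_velocity(s, s_star, L))
```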
Abstract:
This research extends previous work on the use of local model predictive control in mobile robots. Experimental results are presented as a way to improve the methodology by considering aspects such as trajectory accuracy and time performance; in this respect, the cost function and the prediction horizon are important aspects to be considered. The platform used is a differential-drive robot with a free rotating wheel. The aim of the present work is to test the control method by measuring trajectory-tracking accuracy and time performance. Moreover, strategies for integration with the perception system and path planning are also introduced: monocular image data provide an occupancy grid in which safe trajectories are computed using goal-attraction potential fields.
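A minimal sketch of a local predictive controller of the kind discussed above, assuming a unicycle model for the differential-drive platform, a quadratic tracking cost and SciPy's generic optimiser; the horizon length, weights and reference points are invented for illustration and do not reproduce the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 5  # sampling time [s] and prediction horizon (assumed values)

def rollout(state, controls):
    """Propagate a differential-drive (unicycle) model over the horizon."""
    x, y, theta = state
    traj = []
    for v, w in controls.reshape(HORIZON, 2):
        x += v * np.cos(theta) * DT
        y += v * np.sin(theta) * DT
        theta += w * DT
        traj.append((x, y))
    return np.array(traj)

def cost(controls, state, reference):
    """Quadratic cost: tracking error over the horizon plus control effort."""
    traj = rollout(state, controls)
    return np.sum((traj - reference) ** 2) + 0.01 * np.sum(controls ** 2)

state = np.array([0.0, 0.0, 0.0])                  # x, y, heading
reference = np.tile([1.0, 0.5], (HORIZON, 1))      # hypothetical goal points
u0 = np.zeros(2 * HORIZON)                         # initial guess: (v, w) per step
res = minimize(cost, u0, args=(state, reference))
v_cmd, w_cmd = res.x[:2]                           # apply only the first control
print("linear velocity:", v_cmd, "angular velocity:", w_cmd)
```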
Abstract:
Reinforcement learning (RL) is a very suitable technique for robot learning, as it can learn in unknown environments and with real-time computation. The main difficulties in adapting classic RL algorithms to robotic systems are the generalization problem and the correct observation of the Markovian state. This paper attempts to solve the generalization problem by proposing the semi-online neural-Q_learning algorithm (SONQL). The algorithm uses the classic Q_learning technique with two modifications. First, a neural network (NN) approximates the Q_function, allowing the use of continuous states and actions. Second, a database of the most representative learning samples accelerates and stabilizes the convergence. The term semi-online refers to the fact that the algorithm uses not only the current but also past learning samples. Nevertheless, the algorithm is able to learn in real time while the robot is interacting with the environment. The paper shows simulated results with the "mountain-car" benchmark and, also, real results with an underwater robot in a target-following behavior.
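The two modifications described in the abstract, a neural approximation of the Q-function and a database of representative samples, can be illustrated with the toy sketch below. It is not the authors' SONQL implementation: for brevity it uses discrete actions and invented hyperparameters, whereas SONQL also handles continuous actions.

```python
import random
from collections import deque

import numpy as np

GAMMA, LR, N_ACTIONS, STATE_DIM = 0.95, 1e-3, 3, 2    # assumed hyperparameters

# Tiny one-hidden-layer network approximating Q(s, .) over a discrete action set.
W1 = np.random.randn(STATE_DIM, 32) * 0.1
W2 = np.random.randn(32, N_ACTIONS) * 0.1
samples = deque(maxlen=1000)          # database of representative learning samples

def q_values(s):
    h = np.tanh(s @ W1)
    return h, h @ W2

def train_step(batch_size=16):
    """One Q-learning gradient step on samples drawn from the database."""
    global W1, W2
    for s, a, r, s_next in random.sample(samples, min(batch_size, len(samples))):
        h, q = q_values(s)
        _, q_next = q_values(s_next)
        target = r + GAMMA * np.max(q_next)            # Q-learning target
        grad_q = np.zeros(N_ACTIONS)
        grad_q[a] = -(target - q[a])                   # d(0.5 * TD^2) / dQ(s, a)
        grad_W2 = np.outer(h, grad_q)
        grad_W1 = np.outer(s, (W2 @ grad_q) * (1.0 - h ** 2))
        W2 -= LR * grad_W2
        W1 -= LR * grad_W1

# Usage (hypothetical transition): store a sample, then train between interactions.
samples.append((np.array([0.1, -0.2]), 1, 0.5, np.array([0.15, -0.1])))
train_step()
```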
Abstract:
This paper proposes a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot. Although the dominant approach when using RL has been to apply value-function-based algorithms, the system detailed here is characterized by the use of direct policy search methods. Rather than approximating a value function, these methods approximate a policy using an independent function approximator with its own parameters, trying to maximize the future expected reward. The policy-based algorithm presented in this paper is used for learning the internal state/action mapping of a behavior. In this preliminary work, we demonstrate its feasibility with simulated experiments using the underwater robot GARBI in a target reaching task.
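For contrast with value-function methods, one common direct policy search scheme is a REINFORCE-style policy-gradient update. The linear-Gaussian policy, state size and learning rate in this sketch are assumptions, and it is not claimed to be the algorithm used with GARBI.

```python
import numpy as np

LR, SIGMA, STATE_DIM = 0.01, 0.2, 4        # assumed learning rate, noise and state size
theta = np.zeros(STATE_DIM)                # parameters of a linear-Gaussian policy

def act(state):
    """Sample an action from a Gaussian policy centred on theta . state."""
    return np.random.normal(theta @ state, SIGMA)

def reinforce_update(episode):
    """REINFORCE: step theta along grad log pi(a|s), weighted by the episode return."""
    global theta
    episode_return = sum(r for _, _, r in episode)
    for state, action, _ in episode:
        grad_log_pi = (action - theta @ state) * state / SIGMA ** 2
        theta += LR * episode_return * grad_log_pi

# Hypothetical one-step episode made of (state, action, reward) tuples.
s = np.array([0.2, -0.1, 0.0, 0.5])
reinforce_update([(s, act(s), 1.0)])
print("updated policy parameters:", theta)
```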
Abstract:
When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain quite accurate motion estimates. The most crucial step in the vision-based estimation of the vehicle motion consists of detecting matches between image pairs. Here we propose the extensive use of texture analysis as a tool to ameliorate the correspondence problem in underwater images. Once a robust set of correspondences has been found, the three-dimensional motion of the vehicle can be computed with respect to the seabed. Finally, the motion estimates allow the construction of a map that could aid the navigation of the robot.
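One simple way to exploit texture when matching consecutive seabed frames, shown here only as a generic illustration with invented inputs, is to correlate a textured patch from one image against the next using normalized cross-correlation.

```python
import numpy as np
import cv2

# Hypothetical consecutive seabed frames (grayscale); in practice these would
# come from the vehicle's down-looking camera.
frame1 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
frame2 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Take a textured patch around an interest point in the first frame ...
patch = frame1[200:232, 300:332]

# ... and search for it in the second frame with normalized cross-correlation,
# which tolerates the uniform illumination changes common under water.
scores = cv2.matchTemplate(frame2, patch, cv2.TM_CCOEFF_NORMED)
_, best, _, (x, y) = cv2.minMaxLoc(scores)
print(f"best match at ({x}, {y}) with score {best:.2f}")
```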
Abstract:
This paper deals with the problem of navigation for an unmanned underwater vehicle (UUV) through image mosaicking. It represents a first step towards a real-time vision-based navigation system for a small-class, low-cost UUV. We propose a navigation system composed of: (i) an image mosaicking module which provides velocity estimates; and (ii) an extended Kalman filter based on the hydrodynamic equation of motion, previously identified for this particular UUV. The resulting system is able to estimate the position and velocity of the robot. Moreover, it is able to deal with the visual occlusions that usually appear when the sea bottom does not have enough visual features to solve the correspondence problem in a certain area of the trajectory.
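A stripped-down sketch of fusing mosaic-derived velocity measurements in a Kalman filter; the constant-velocity model, noise levels and measurement below are assumptions standing in for the identified hydrodynamic model, and the filter is kept linear here only for brevity.

```python
import numpy as np

DT = 0.2                                    # assumed image/mosaicking rate [s]

# State: [x, y, vx, vy]; a constant-velocity model stands in for the
# identified hydrodynamic model mentioned in the abstract.
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # the mosaic module measures velocity only
Q = np.eye(4) * 1e-3                        # process noise (assumed)
R = np.eye(2) * 1e-2                        # measurement noise (assumed)

def kalman_step(x, P, z=None):
    """One predict step, plus an update when a mosaic velocity estimate arrives."""
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    if z is not None:                                  # update (skipped during occlusion)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kalman_step(x, P, z=np.array([0.1, 0.0]))       # hypothetical velocity measurement
print("estimated position and velocity:", x)
```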
Abstract:
Omnidirectional cameras offer a much wider field of view than perspective cameras and alleviate the problems caused by occlusions. However, both types of camera suffer from the lack of depth perception. A practical method for obtaining depth in computer vision is to project a known structured light pattern onto the scene, avoiding the problems and costs involved in stereo vision. This paper is focused on the idea of combining omnidirectional vision and structured light with the aim of providing 3D information about the scene. The resulting sensor is formed by a single catadioptric camera and an omnidirectional light projector. The paper also discusses how this sensor can be used in robot navigation applications.
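Whatever the mirror geometry, depth recovery in a camera-plus-projector arrangement ultimately reduces to intersecting the camera ray of an imaged laser point with the known light plane. A generic sketch follows, with the plane and ray values invented for illustration and no attempt to model the catadioptric optics of the paper.

```python
import numpy as np

def ray_plane_intersection(ray_dir, plane_normal, plane_d):
    """3D point where a camera ray (through the origin) meets the light plane.

    The plane is n . P = d in the camera frame; the ray is P = t * ray_dir.
    Solving n . (t * ray_dir) = d gives t = d / (n . ray_dir).
    """
    t = plane_d / (plane_normal @ ray_dir)
    return t * ray_dir

# Hypothetical calibration: light plane and the back-projected ray of one
# detected laser pixel (both would come from the sensor model in practice).
plane_normal = np.array([0.0, -1.0, 0.2])
plane_d = 0.15
ray_dir = np.array([0.1, -0.3, 1.0])
print("reconstructed 3D point:", ray_plane_intersection(ray_dir, plane_normal, plane_d))
```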
Abstract:
This research work deals with the modeling and design of a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational tool. On one hand, the interest in using the open mobile platform PRIM lies in integrating, in an educational context, several subjects closely related to automatic control theory, such as communications, signal processing, sensor fusion and hardware design. On the other hand, the idea is to implement useful navigation strategies such that the robot can serve as a mobile multimedia information point. It is in this context, with navigation strategies oriented towards goal achievement, that a local model predictive control scheme is adopted. Such studies are presented as a promising control strategy for developing the future capabilities of the system.
Abstract:
In the search for new sensor systems and new methods for underwater vehicle positioning based on visual observation, this paper presents a computer vision system based on coded light projection. 3D information is extracted from an underwater scene and used to test an obstacle avoidance behaviour. In addition, the main ideas for achieving stabilisation of the vehicle in front of an object are presented.
Abstract:
The need to obtain 3D information about structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. The position of the mobile robot with respect to the scene must be known at every moment, and this information must be obtained in the shortest computing time. Stereo vision is an attractive and widely used method, but it is rather limited for building fast 3D surface maps because of the correspondence problem. The spatial and temporal correspondence between images can be alleviated using a method based on structured light: by codifying the projected light, each imaged region of the projected pattern carries the information needed to solve the correspondence problem directly. We present the most significant coded structured light techniques used in recent years.
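As a concrete example of one family of techniques such a survey covers, temporally coded Gray-code patterns give every projector column a unique bit sequence that a camera pixel can decode. The pattern width and bit depth below are arbitrary, and the sketch is not taken from the paper.

```python
import numpy as np

N_BITS, WIDTH = 8, 256          # 8 Gray-code patterns encode 256 projector columns

def gray_code_patterns():
    """Stack of binary stripe patterns; pattern k is bit k of each column's Gray code."""
    cols = np.arange(WIDTH)
    gray = cols ^ (cols >> 1)                        # binary-reflected Gray code
    return np.array([(gray >> k) & 1 for k in range(N_BITS)])

def decode(observed_bits):
    """Recover the projector column seen by a camera pixel from its bit sequence."""
    gray = 0
    for k in reversed(range(N_BITS)):
        gray = (gray << 1) | int(observed_bits[k])
    # Convert the Gray code back to binary to obtain the column index.
    col, shift = gray, 1
    while (gray >> shift) > 0:
        col ^= gray >> shift
        shift += 1
    return col

patterns = gray_code_patterns()
pixel_bits = patterns[:, 137]                        # bits a camera pixel would observe
print("decoded projector column:", decode(pixel_bits))  # -> 137
```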
Abstract:
Microsoft Robotics Studio (MRS) is an environment for creating robot applications on a wide variety of hardware platforms. It contains a simulation environment in which the motion of the robot can be modelled and simulated. It also allows the robot to be programmed and run either in the simulated environment or in the real one. MRS handles the communication between the different asynchronous processes that are usually present in robot control software: processes that deal with sensors, actuators, control systems, communications with the outside world, and so on. MRS can be used to model new robots from components already available in its libraries, and it also allows new components to be created. In order to get to know this tool in detail, it would be interesting to use it to program the e-puck robots, small autonomous mobile robots equipped with two motors and a complete set of sensors. The goal is to simulate them, write a control program, implement the interface with the robot and verify the operation on the real robot.
Abstract:
Within the Department of Electronics, Computer Engineering and Automatic Control of the Universitat de Girona, two biped platforms have been designed and built for teaching purposes. The more advanced of the two, completed in 1999, consists of two aluminium legs with three linear actuators each, reproducing the function of the ankle, the knee and the hip. The objectives of this project are very specific, and all of them are aimed at improving the operation of the biped robot. These objectives are: (1) to design two linear degrees of freedom arranged as an XY plane to move a counterweight as needed to keep the biped platform balanced while it moves; (2) to design a board with an FPGA that generates PWM signals for the eight available motors, reads the two encoders of the XY-plane motors and communicates with a PC equipped with a specific data acquisition card; (3) to design a power board suitable for driving the motors; and (4) finally, to write a program to verify the correct operation of the boards, actuators and sensors used in the biped platform.