109 results for Visione Robotica Calibrazione Camera Robot Hand Eye
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
AMADEUS is a dexterous subsea robot hand incorporating force and slip contact sensing, using fluid-filled tentacles for fingers. Hydraulic pressure variations in each of three flexible tubes (bellows) in each finger create a bending moment, and consequently motion or an increase in contact force during grasping. Such fingers have inherent passive compliance, no moving parts, and are naturally depth pressure-compensated, making them ideal for reliable use in the deep ocean. In addition to the mechanical design, development of the hand has also considered closed-loop finger position and force control, coordinated finger motion for grasping, force and slip sensor development and signal processing, and reactive world modeling/planning for supervisory 'blind grasping'. Initially, the application focus is on marine science tasks, but broader roles in offshore oil and gas, salvage, and military use are foreseen. Phase I of the project is complete, with the construction of a first prototype. Phase II is now underway, to deploy the hand from an underwater robot arm and carry out wet trials with users.
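The abstract mentions closed-loop finger position and force control by modulating bellows pressure, but gives no implementation details. Below is a minimal sketch of what such a force loop could look like; the PI structure, the gains, and the I/O functions `read_contact_force` and `set_bellows_pressure` are hypothetical placeholders, not taken from the AMADEUS project.

```python
# Minimal sketch of a closed-loop contact-force controller for one bellows
# finger. The gains, sample rate, and I/O functions are hypothetical; the
# AMADEUS publications do not provide this code.

import time

KP, KI = 0.8, 0.2          # PI gains (illustrative values)
DT = 0.02                  # 50 Hz control period
P_MIN, P_MAX = 0.0, 5.0    # allowable bellows pressure range [bar]

def read_contact_force():
    """Placeholder for the finger's force/slip sensor reading [N]."""
    raise NotImplementedError

def set_bellows_pressure(p_bar):
    """Placeholder for the hydraulic pressure command [bar]."""
    raise NotImplementedError

def force_control_loop(force_setpoint, duration_s=5.0):
    """Drive bellows pressure so the contact force tracks the setpoint."""
    integral = 0.0
    t_end = time.time() + duration_s
    while time.time() < t_end:
        error = force_setpoint - read_contact_force()
        integral += error * DT
        pressure = KP * error + KI * integral
        # Saturate to the physical pressure range of the bellows.
        pressure = max(P_MIN, min(P_MAX, pressure))
        set_bellows_pressure(pressure)
        time.sleep(DT)
```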
Abstract:
Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is therefore object-dependent, since it relies on the object's appearance. Consequently, performing the positioning task is not possible in the presence of non-textured objects or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation inherent to current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem: a coded light pattern is projected, providing robust visual features independently of the object's appearance.
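For context, the positioning step described here typically reduces to the classical image-based visual servoing law v = -lambda * L+ (s - s*), regardless of whether the features s come from object texture or from a projected coded-light pattern. The sketch below illustrates that law for point features; the interaction-matrix form is standard, while the point coordinates, depths, and gain are illustrative only.

```python
# Minimal sketch of image-based visual servoing (IBVS) with point features:
# camera velocity v = -lambda * L^+ (s - s*). Numbers are illustrative.

import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix of a normalized image point."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) reducing feature error."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error

# Example: four observed normalized image points vs. their desired positions.
s = [(0.10, 0.12), (-0.08, 0.11), (-0.09, -0.10), (0.11, -0.09)]
s_star = [(0.05, 0.05), (-0.05, 0.05), (-0.05, -0.05), (0.05, -0.05)]
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
print(v)
```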
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, covering more than 0.5 million samples of synthetic data. Results from real data are also given.
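As a rough illustration of the discrete case mentioned above (the epipolar constraint p2' E p1 = 0 between two calibrated views), the following sketch estimates egomotion with OpenCV; the feature detector, the camera intrinsics K, and the thresholds are assumed values, not those used in the surveyed methods.

```python
# Minimal sketch of discrete epipolar-constraint egomotion estimation between
# two views of a calibrated camera, using OpenCV. Intrinsics and detector
# settings are illustrative placeholders.

import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],    # hypothetical intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def estimate_egomotion(img1, img2):
    """Return rotation R and unit-norm translation t between two frames."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Discrete epipolar constraint p2^T E p1 = 0, solved robustly with RANSAC.
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t
```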
Abstract:
This paper presents a vision-based localization approach for an underwater robot in a structured environment. The system is based on a coded pattern placed on the bottom of a water tank and an onboard down-looking camera. Its main features are absolute, map-based localization, landmark detection and tracking, and real-time computation (12.5 Hz). The proposed system provides the three-dimensional position and orientation of the vehicle along with its velocity. The accuracy of the drift-free estimates is very high, allowing them to be used as feedback measures for a velocity-based low-level controller. The paper details the localization algorithm and illustrates the accuracy of the system with graphical results.
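A map-based localization step of this kind usually amounts to solving a perspective-n-point problem from detected landmarks of the coded pattern. The sketch below shows one possible formulation with OpenCV's solvePnP; the intrinsics, distortion model, and landmark detection are placeholders rather than the system actually described in the paper.

```python
# Minimal sketch of map-based pose estimation from detected coded landmarks,
# as seen by a down-looking camera. Calibration values are hypothetical.

import cv2
import numpy as np

K = np.array([[800.0, 0.0, 384.0],
              [0.0, 800.0, 288.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)  # assume negligible distortion for the sketch

def localize(object_points, image_points):
    """Vehicle pose (R, t) from >=4 known pattern points and their detections.

    object_points: Nx3 landmark coordinates in the tank/pattern frame [m].
    image_points:  Nx2 corresponding pixel detections.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        K, DIST, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    # Camera pose in the pattern frame is the inverse of the estimated transform.
    return R.T, -R.T @ tvec
```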
Abstract:
This research work deals with the problem of modeling and designing a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational tool. On the one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects closely related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies so that the robot can serve as a mobile multimedia information point. It is in this context, with navigation strategies oriented towards goal achievement, that a local model predictive control is obtained. Hence, these studies present a very interesting control strategy for developing the future capabilities of the system.
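The low-level speed controller itself is not detailed in the abstract; a minimal sketch of a discrete PI wheel-speed loop of the kind commonly used on such platforms is given below, with gains, sample time, and saturation limits chosen purely for illustration.

```python
# Minimal sketch of a discrete PI wheel-speed controller. Gains, sample time,
# and the command range are hypothetical, not taken from the PRIM papers.

class SpeedPI:
    def __init__(self, kp=1.2, ki=0.6, dt=0.01, u_max=1.0):
        self.kp, self.ki, self.dt, self.u_max = kp, ki, dt, u_max
        self.integral = 0.0

    def step(self, speed_ref, speed_meas):
        """One control period: returns a saturated motor command in [-1, 1]."""
        error = speed_ref - speed_meas
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        if abs(u) > self.u_max:              # clamp and simple anti-windup
            self.integral -= error * self.dt
            u = max(-self.u_max, min(self.u_max, u))
        return u
```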
Abstract:
This paper focuses on the mobile robot platform PRIM (Platform Robot Information Multimedia). The robot was built to cover two main needs of our group: on the one hand, the need for a fully open mobile robotic platform that is very useful for the teaching and research activity of our school community; on the other hand, the idea of introducing an ethical product useful as a mobile multimedia information point and service tool. The paper describes how the system is built and explains the philosophy behind this work. The navigation strategies and sensor fusion, in which the machine vision system is the most important component, are oriented towards goal achievement and are the key to the robot's behaviour.
Abstract:
In the future, robots will enter our everyday lives to help us with various tasks. For complete integration and cooperation with humans, these robots need to be able to acquire new skills. Sensor capabilities for navigation in real human environments and intelligent interaction with humans are some of the key challenges. Learning-by-demonstration systems focus on the problem of human-robot interaction and let the human teach the robot by demonstrating the task using his own hands. In this thesis, we present a solution to a subproblem within the learning-by-demonstration field, namely human-robot grasp mapping. Robot grasping of objects in a home or office environment is a challenging problem. Programming-by-demonstration systems can provide important skills for aiding the robot in the grasping task. The thesis presents two techniques for human-robot grasp mapping: direct robot imitation from a human demonstrator, and intelligent grasp imitation. In intelligent grasp mapping, the robot takes the size and shape of the object into consideration, while for direct mapping only the pose of the human hand is available. Both are evaluated in a simulated environment on several robot platforms. The results show that knowing the object's shape and size for a grasping task improves the robot's precision and performance.
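The distinction between direct mapping and intelligent grasp mapping can be made concrete with a small sketch: direct mapping copies the demonstrated hand pose, while intelligent mapping also consults object size and shape. The grasp names, thresholds, and data structures below are illustrative assumptions, not the taxonomy used in the thesis.

```python
# Minimal sketch contrasting direct pose imitation with object-aware grasp
# selection. Grasp names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class HandPose:
    position: tuple      # (x, y, z) of the demonstrator's hand [m]
    orientation: tuple   # quaternion (x, y, z, w)

def direct_mapping(hand_pose: HandPose) -> dict:
    """Copy the demonstrated hand pose onto the robot end-effector."""
    return {"grasp": "mirror-pose",
            "position": hand_pose.position,
            "orientation": hand_pose.orientation}

def intelligent_mapping(hand_pose: HandPose, obj_width_m: float,
                        obj_shape: str) -> dict:
    """Pick a grasp preshape from object geometry, then align with the pose."""
    if obj_shape == "cylinder" and obj_width_m > 0.06:
        grasp = "power-wrap"
    elif obj_width_m < 0.03:
        grasp = "precision-pinch"
    else:
        grasp = "tripod"
    return {"grasp": grasp,
            "position": hand_pose.position,
            "orientation": hand_pose.orientation}
```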
Abstract:
This research work deals with the problem of modeling and designing a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational and research tool. On the one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects closely related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies so that the robot can serve as a mobile multimedia information point. It is in this context, with navigation strategies oriented towards goal achievement, that a local model predictive control is obtained. Hence, these studies present a very interesting control strategy for developing the future capabilities of the system. In this context, the research also includes visual information as a meaningful source for detecting obstacle position coordinates and for planning the obstacle-free trajectory the robot should follow.
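The local model predictive control mentioned here, combined with obstacle coordinates obtained from vision, can be sketched as a short-horizon search over candidate velocity commands for a unicycle model: each candidate is simulated forward, colliding trajectories are discarded, and the command whose predicted endpoint is closest to the goal is applied. The horizon, command grids, and cost weights below are illustrative, not the controller described in the paper.

```python
# Minimal sketch of a local MPC step for a unicycle-type robot using a
# sampled set of (v, w) commands. All parameters are illustrative.

import math
import numpy as np

DT, HORIZON = 0.1, 10                      # 1 s prediction horizon
V_GRID = np.linspace(0.0, 0.4, 5)          # candidate linear speeds [m/s]
W_GRID = np.linspace(-1.0, 1.0, 9)         # candidate turn rates [rad/s]

def rollout(state, v, w):
    """Predict the (x, y) sequence for a constant (v, w) command."""
    x, y, th = state
    poses = []
    for _ in range(HORIZON):
        x += v * math.cos(th) * DT
        y += v * math.sin(th) * DT
        th += w * DT
        poses.append((x, y))
    return poses

def mpc_step(state, goal, obstacles, clearance=0.35):
    """Return the (v, w) command with the best predicted cost."""
    best, best_cost = (0.0, 0.0), float("inf")
    for v in V_GRID:
        for w in W_GRID:
            poses = rollout(state, v, w)
            if any(math.dist(p, ob) < clearance
                   for p in poses for ob in obstacles):
                continue                    # discard colliding trajectories
            cost = math.dist(poses[-1], goal) - 0.1 * v
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```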
Abstract:
Study carried out during a stay at the University of Rochester, United States, from October 2006 to January 2007. The stay at the University of Rochester was oriented towards in-depth learning of the scanning laser ophthalmoscope. The scanning laser ophthalmoscope uses a confocal technique to visualize different retinal structures in living subjects. The instrument designed and developed at the Center for Visual Science incorporates adaptive optics and fluorescence. The adaptive optics applied in this ophthalmoscope aims to correct the aberrations present in the eye, thereby allowing details of the retina to be observed that would otherwise appear blurred. In this way, resolution values very close to the diffraction limit are achieved. The use of fluorescence, in turn, allows the visualization of cells and structures that, unless stained, are transparent to visible light. This technique has mainly been used in primates and rats, although measurements of retinal pigment epithelium cells are currently also being carried out in humans, since the pigment contained in these cells allows fluorescence to be applied without the need for staining.
Abstract:
This project consists of the study, comparison, and hardware implementation of character recognition algorithms to be integrated into an intelligent image capture system. This system, composed of a camera with a specific format and characteristics and attached to a traditional water meter, will capture images of the meter and send them by RF to the company's reception point. The main objective is to achieve a design that minimizes the amount of information to be transmitted, taking into account the constraints of the environment.
Abstract:
The Aibo robot provides the Aibo Remote Framework library for controlling it remotely from a PC over a wireless network, as well as for accessing the robot's status information and viewing the images the Aibo captures. Combining Remote Framework and php, a web application has been created that allows different Aibos to be controlled remotely over the Internet, with access to the subjective images of each Aibo. In addition, existing streaming technology allows the web application to embed a video feed so the Aibos can be watched live through a webcam pointed at them.
Abstract:
In this project, an autonomous robot has been designed, built, and programmed, equipped with a locomotion system and sensors that allow it to navigate a controlled environment without collisions. To achieve these objectives, a control unit has been designed and programmed that manages the low-data-volume hardware with different operating modes, abstracting it behind a single interface. This system has then been integrated into the Pyro robotics environment, which allows already-developed artificial intelligence tools to be used and adapted as needed.
Abstract:
This project studies the design of a mobile robotic platform for PBL (Problem-Based Learning) in computer engineering. The main objective is to introduce this model into university teaching as a complement to several first-year courses. To achieve these objectives, a robotic platform has been designed and built, driven by a microcontroller and equipped with several sensors for interacting with the environment. The robot supports different types of programming and is specifically designed to be a good educational experience.
Abstract:
Research project carried out by a secondary school student and awarded a CIRIT Prize in 2009 to foster the scientific spirit among young people. The NXT is a robot created by Lego that includes a controller, several servo motors, and sensors (touch, light, ultrasound, sound, etc.). It is programmed with a dedicated environment, aimed at fourteen-year-olds, called Lego Mindstorms. The project studies how this environment works and the parts of the robot's control system. The study covers the controller, four sensors, and the servo motors.