979 results for Visione Robotica Calibrazione Camera Robot Hand Eye


Relevance: 30.00%

Abstract:

This thesis focuses on developing technologies for human-robot interaction in nuclear fusion environments. The main challenge of the nuclear fusion sector lies in the extreme environmental conditions inside the reactor, which impose very restrictive requirements on any equipment that must withstand such levels of radiation, magnetism, ultra-high vacuum and temperature. Since humans cannot carry out tasks directly, remote handling devices must be used for operation and maintenance processes. ITER facilities require a controlled environment of extreme safety built on validated standards, so the definition and use of protocols is essential to govern their operation. For remote handling with a high degree of scaling, protocols must be defined for open systems that allow interaction among equipment and devices of many kinds. In this context, a Teleoperation Protocol has been defined that interconnects master and slave devices of different typologies, letting them communicate bilaterally and use different control algorithms depending on the task to be performed. The protocol and its interconnectivity have been tested on the Teleoperation Open Platform (T.O.P.), developed and integrated at the ETSII UPM as a tool for testing, validating and conducting telerobotics experiments. The protocol has been proposed through AENOR to the ISO Telerobotics group as a valid solution to the existing problem and is currently under review.
The protocol links master and slave, but ITER's radiation levels are so high that the controller electronics cannot enter the tokamak. It is therefore proposed that a minimal, suitably shielded electronic stage multiplex the control signals carried by the umbilical cable from the controller to the robot base. This theoretical exercise demonstrates the utility and feasibility of such a solution, reducing the volume and weight of the umbilical cabling by roughly 90%, although it requires developing specific, RadHard-certified electronics able to withstand ITER's enormous radiation levels.
Generic manipulators do not admit regular force-feedback sensors under ITER conditions. With the help of the Teleoperation Open Platform, an algorithm has therefore been developed that uses a force/torque sensor and an IMU, both mounted on the robot wrist and suitably shielded against radiation, to calculate the forces and inertias produced by the load. This allows scaled forces to be transmitted to the operator, who then feels the manipulated load rather than other undesirable forces acting on the remote slave, as happens with other force-estimation techniques. Because the sensor shielding must be neither large nor heavy, this technology is intended for maintenance tasks during ITER's programmed shutdowns, when radiation levels are at their minimum.
So that the operator feels the load force as faithfully as possible, current-control electronics were developed that perform force control of the master's joint motors based on a characterization of those motors. In addition, experiments were conducted showing that applying multimodal stimuli (visual, auditory and haptic) increases the operator's immersion and task performance, since such stimuli directly affect response capability. Finally, regarding the operator's visual feedback, ITER works with 2D cameras at strategic locations, whereas a human manipulating objects uses binocular vision, constantly changing the viewpoint to suit the visual needs of each moment of the task. A three-dimensional reconstruction of the task space has therefore been developed from an RGB-D camera-sensor, providing a mobile virtual binocular viewpoint from a camera at a fixed point, which can be projected on a 3D display device so the operator can vary the stereoscopic viewpoint at will. The successful integration of these technologies for human-robot interaction in the T.O.P., validated through tests and experiments, verifies their usefulness for the practical application of remote handling with a high degree of scaling in nuclear fusion environments.
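As a rough illustration of the wrist force/torque plus IMU scheme described in the abstract, the sketch below subtracts the gravitational and inertial component of the load from the sensor reading so that only the external load wrench remains, then scales it for the master. Sign conventions, frames, function names and the scaling factor are assumptions for illustration, not the thesis's implementation.

import numpy as np

def estimate_load_wrench(f_meas, t_meas, s_imu, m_load, r_com):
    """One way to isolate the external wrench on the load, assuming:
    f_meas, t_meas -- wrist sensor reading (load on sensor, sensor frame);
    s_imu          -- IMU specific-force reading a - g in the same frame;
    m_load, r_com  -- load mass and centre of mass in the sensor frame."""
    f_grav_inertial = -m_load * s_imu             # m * (g - a): weight plus inertia
    f_ext = f_meas - f_grav_inertial              # what the operator should feel
    t_ext = t_meas - np.cross(r_com, f_grav_inertial)
    return f_ext, t_ext

# Static check: a 2 kg load at rest yields a zero external wrench.
f_ext, t_ext = estimate_load_wrench(
    f_meas=np.array([0.0, 0.0, -19.62]),          # sensor reads only the load weight
    t_meas=np.zeros(3),
    s_imu=np.array([0.0, 0.0, 9.81]),             # accelerometer at rest reads -g
    m_load=2.0,
    r_com=np.array([0.0, 0.0, 0.05]),
)
print(f_ext, 0.1 * f_ext)                         # external force, and scaled force for the master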

Relevance: 30.00%

Abstract:

The dynamic characteristics of reflex eye movements were measured in two strains of chronically prepared mice by using an infrared television camera system. The horizontal vestibulo-ocular reflex (HVOR) and horizontal optokinetic response (HOKR) were induced by sinusoidal oscillations of a turntable, in darkness, by 10° (peak to peak) at 0.11–0.50 Hz and of a checked-pattern screen, in light, by 5–20° at 0.11–0.17 Hz, respectively. The gains and phases of the HVOR and HOKR of the C57BL/6 mice were nearly equivalent to those of rabbits and rats, whereas the 129/Sv mice exhibited very low gains in the HVOR and moderate phase lags in the HOKR, suggesting an inherent sensory-motor anomaly. Adaptability of the HOKR was examined in C57BL/6 mice by sustained screen oscillation. When the screen was oscillated by 10° at 0.17 Hz, which induced sufficient retinal slips, the gain of the HOKR increased by 0.08 in 1 h on average, whereas stimuli that induced relatively small or no retinal slips affected the gain very little. Lesions of the flocculi induced by local applications of 0.1% ibotenic acid and lesions of the inferior olivary nuclei induced by i.p. injection of 3-acetylpyridine in C57BL/6 mice had little effect on the dynamic characteristics of the HVOR and HOKR, but abolished the adaptation of the HOKR. These results indicate that the olivo-floccular system plays an essential role in the adaptive control of the ocular reflex in mice, as suggested in other animal species. The data presented provide the basis for analyzing the reflex eye movements of genetically engineered mice.
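The reported gains and phases can be extracted from such sinusoidal records by fitting a sinusoid at the stimulus frequency to both traces; a minimal sketch follows, assuming evenly sampled turntable and eye-position signals (the demo numbers are illustrative, not the paper's data).

import numpy as np

def gain_phase(stim, resp, freq, t):
    """Fit a*sin(wt) + b*cos(wt) + c to each trace by least squares; gain is
    the response/stimulus amplitude ratio, phase the angular difference (deg)."""
    w = 2.0 * np.pi * freq
    X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    cs = np.linalg.lstsq(X, stim, rcond=None)[0]
    cr = np.linalg.lstsq(X, resp, rcond=None)[0]
    gain = np.hypot(cr[0], cr[1]) / np.hypot(cs[0], cs[1])
    phase = np.degrees(np.arctan2(cr[1], cr[0]) - np.arctan2(cs[1], cs[0]))
    return gain, phase

t = np.arange(0.0, 30.0, 0.01)                    # 30 s record sampled at 100 Hz
screen = 5.0 * np.sin(2 * np.pi * 0.17 * t)       # 10 deg peak-to-peak at 0.17 Hz
eye = 4.0 * np.sin(2 * np.pi * 0.17 * t - 0.1)    # compensatory response with a small lag
print(gain_phase(screen, eye, 0.17, t))           # ~ (0.8, -5.7 deg)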

Relevance: 30.00%

Abstract:

Traditional visual servoing systems do not address tracking of moving objects. When such a system is used to track a moving object, the visual features can leave the image, depending on the object's velocity, causing the tracking task to fail. This happens especially when the object and the robot are both at rest and the object then starts to move. In this work we employ a retina camera based on Address Event Representation (AER) so that events can be used as the input to the visual servoing system. Each event fired by the camera indicates movement at a pixel. Event-based visual information is processed only at the moment it occurs, reducing the response time of visual servoing systems when they are used to track moving objects.
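As a concrete sketch of using AER events as servo input: recent events are reduced to an activity centroid and a proportional command drives that centroid toward a target pixel. The event tuple layout, target and gain are assumptions for illustration, not the authors' implementation.

import numpy as np

def centroid_of_events(events):
    """events: iterable of (x, y, polarity, timestamp) tuples from the AER
    retina (layout assumed for illustration); returns the mean activity location."""
    xy = np.array([(e[0], e[1]) for e in events], float)
    return xy.mean(axis=0)

def ibvs_step(events, target=(64.0, 64.0), gain=0.5):
    """One proportional visual-servoing step: drive the centroid of recent
    event activity toward the target pixel; returns an image-space velocity
    command for the robot controller."""
    err = np.asarray(target) - centroid_of_events(events)
    return gain * err

events = [(70, 60, 1, 1001), (75, 66, 1, 1002), (72, 63, -1, 1003)]
print(ibvs_step(events))   # velocity command pushing the centroid toward (64, 64)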

Relevance: 30.00%

Abstract:

This paper presents a method for fast calculation of the egomotion performed by a robot using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). It uses optical flow to determine whether the robot has moved. If so, visual features that do not satisfy several criteria (such as intersection, uniqueness, etc.) are discarded, and the egomotion is then calculated. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. The proposed method is more efficient than other current methods.
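A minimal sketch of the movement test, assuming OpenCV's sparse Lucas-Kanade tracker as the optical-flow backend (the paper does not specify one) and an illustrative displacement threshold:

import cv2
import numpy as np

def robot_moved(prev_gray, cur_gray, thresh=1.0):
    """Decide from sparse optical flow whether the robot moved between two
    uint8 grayscale frames: track corner features and test whether the
    median displacement exceeds `thresh` pixels (threshold is illustrative)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return False
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    if not ok.any():
        return False
    disp = np.linalg.norm((nxt - pts).reshape(-1, 2)[ok], axis=1)
    return float(np.median(disp)) > thresh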

Relevance: 30.00%

Abstract:

This paper describes the development of a low-cost mini-robot controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces; such a device can serve as the operator's eyes, obviating the need for the operator to move about. The robot is equipped with a motorised webcam, also controlled by visual gestures, which is used to monitor tasks in the home while the operator remains still. The prototype was evaluated through several experiments testing the mini-robot's kinematics and communication systems by making it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures, enabling the operator to perform movements and monitor tasks from a distance.
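A toy sketch of the gesture-to-command mapping such a system might use, assuming a 3-D hand position from a depth sensor; the dead zone and gain are illustrative, not the prototype's values:

import numpy as np

def gesture_to_command(hand_pos, rest_pos, dead_zone=0.03, gain=1.5):
    """Map the operator's 3-D hand position (metres, relative to a rest pose)
    to a velocity command for the mini-robot. Inside the dead zone the robot
    holds still, so the operator can remain quiet and motionless."""
    d = np.asarray(hand_pos, float) - np.asarray(rest_pos, float)
    if np.linalg.norm(d) < dead_zone:
        return np.zeros(3)
    return gain * d   # forward/lateral/vertical velocity command

print(gesture_to_command([0.10, 0.02, 0.0], [0.0, 0.0, 0.0]))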

Relevance: 30.00%

Abstract:

Nowadays almost everyone owns a motorized vehicle that they use to get around. Can this operation, which is simple for a person, be carried out autonomously by a robot or a motor vehicle? The answer is yes; but while a person needs only a little practice to drive, the action is not as immediate for motorized vehicles. Computer Vision comes to their aid: a branch of computer science that, in a certain sense, enables a computer to perceive the surrounding environment the way a person does with their own eyes. Here we concentrate on two fields of computer vision: SLAM, or Simultaneous Localization and Mapping, which enables a robot to map, through a camera, the world it is in and at the same time to localize, instant by instant, its own position within it; and Plane Detection, which extracts the planes present in a given image.
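As an illustration of the Plane Detection half, here is a minimal RANSAC plane fit over a point cloud; thresholds and iteration count are illustrative, not taken from the thesis:

import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=None):
    """Fit the dominant plane to an N x 3 point cloud with RANSAC.
    Returns ((unit normal n, offset d) for the plane n.p + d = 0, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers

rng0 = np.random.default_rng(0)
pts = np.c_[rng0.uniform(-1, 1, (300, 2)), np.zeros(300)]   # noisy z = 0 plane
pts += rng0.normal(0, 0.002, pts.shape)
model, inliers = ransac_plane(pts, seed=1)
print(model[0], inliers.sum())    # normal ~ (0, 0, +/-1), most points inliers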

Relevance: 30.00%

Abstract:

This study aimed to determine the accuracy (and usability) of the Retinomax, a hand-held autorefractor, compared to measurements taken from hand-held retinoscopy (HHR) in a sample of normal 1-year-old children. The study was a method comparison set at four Community Child Health Clinics. Infants (n = 2079) of approximately 1 year of age were identified from birth/immunization records and their caregivers were contacted by mail. A total of 327 infants ranging in age from 46 weeks to 81 weeks (mean 61 weeks) participated in the study. The children underwent a full ophthalmic examination. Under cycloplegia, refraction was measured in each eye by streak retinoscopy (HHR) and then re-measured using the Retinomax autorefractor. Sphere, cylinder, axis of cylinder and spherical equivalent measurements were recorded for the HHR and Retinomax instruments, and compared. Across the range of refractive errors measured, there was generally close agreement between the two examination methods, although the Retinomax consistently read around 0.3 D less hyperopic than HHR. Significantly more girls (72 infants, 47.7%) struggled during examination with the Retinomax than boys (52 infants, 29.5%) (P < 0.001). Agreement deteriorated between the two instruments if the patient struggled during the examination (P < 0.001). In general, the Retinomax would appear to be a useful screening instrument in early childhood. However, patient cooperation affects the accuracy of results and is an important consideration in determining whether this screening instrument should be adopted for measuring refractive errors in early infancy.
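The agreement analysis can be reproduced in outline with Bland-Altman statistics (mean bias and 95% limits of agreement); the sketch below uses made-up values chosen only to echo the reported ~0.3 D bias, not the study's data:

import numpy as np

def bland_altman(a, b):
    """Method-comparison statistics for paired measurements: mean bias of
    a - b and the 95% limits of agreement (bias +/- 1.96 SD)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative spherical equivalents (D): Retinomax vs hand-held retinoscopy
retinomax = [1.00, 1.25, 0.75, 1.50, 2.00]
hhr = [1.25, 1.50, 1.25, 1.75, 2.25]
print(bland_altman(retinomax, hhr))   # bias near -0.3 D, as in the abstract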

Relevance: 30.00%

Abstract:

This paper describes the real-time global vision system for the robot soccer team the RoboRoos. It has a highly optimised pipeline that includes thresholding, segmenting, colour normalising, object recognition, and perspective and lens correction. It has a fast ‘paint’ colour calibration system that can calibrate in any face of the YUV or HSI cube. It also autonomously selects an appropriate camera gain and colour gains, using robot regions across the field, to achieve colour uniformity. Camera geometry calibration is performed automatically from a selection of keypoints on the field. The system achieves a position accuracy of better than 15 mm over a 4 m × 5.5 m field, and orientation accuracy to within 1°. It processes 614 × 480 pixels at 60 Hz on a 2.0 GHz Pentium 4 microprocessor.
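The keypoint-based camera-geometry calibration amounts to estimating a planar homography from image points to field coordinates; a minimal sketch follows, assuming OpenCV, with illustrative coordinates rather than the RoboRoos values:

import cv2
import numpy as np

# Pixel locations of known field keypoints (e.g. the four field corners) and
# their metric coordinates on the 4 m x 5.5 m field -- values are illustrative.
img_pts = np.array([[12, 20], [605, 18], [608, 470], [10, 472]], np.float32)
field_pts = np.array([[0.0, 0.0], [5.5, 0.0], [5.5, 4.0], [0.0, 4.0]], np.float32)

H, _ = cv2.findHomography(img_pts, field_pts)

def pixel_to_field(u, v):
    """Map an image pixel to metric field coordinates via the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

print(pixel_to_field(300, 240))   # roughly the field centre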

Relevance: 30.00%

Abstract:

This paper presents the implementation of a modified particle filter for vision-based simultaneous localization and mapping of an autonomous robot in a structured indoor environment. Through this method, artificial landmarks such as multi-coloured cylinders can be tracked with a camera mounted on the robot, and the position of the robot can be estimated at the same time. Experimental results in simulation and in real environments show that this approach has advantages over the extended Kalman filter under ambiguous data association and various levels of odometric noise.
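A minimal sketch of one predict/update/resample cycle of such a landmark-based particle filter, with a range measurement to a known landmark; noise levels and the room size are illustrative, not the paper's configuration:

import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, odom, landmark, z, odo_sigma=0.05, z_sigma=0.1):
    """One cycle of a localization particle filter: diffuse (x, y) particles
    with noisy odometry, weight them by the likelihood of the measured range
    to a known coloured-cylinder landmark, and resample."""
    # predict: apply odometry with additive noise
    particles = particles + odom + rng.normal(0, odo_sigma, particles.shape)
    # update: Gaussian likelihood of the range measurement
    expected = np.linalg.norm(particles - landmark, axis=1)
    w = np.exp(-0.5 * ((z - expected) / z_sigma) ** 2) + 1e-12
    w /= w.sum()
    # resample in proportion to the weights
    idx = rng.choice(len(particles), len(particles), p=w)
    return particles[idx]

particles = rng.uniform(0, 4, (500, 2))          # unknown start in a 4 m room
particles = pf_step(particles, odom=np.array([0.1, 0.0]),
                    landmark=np.array([2.0, 3.0]), z=1.5)
print(particles.mean(axis=0))                    # current pose estimate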

Relevance: 30.00%

Abstract:

This paper describes experiments conducted to simultaneously tune 15 joints of a humanoid robot. Two Genetic Algorithm (GA) based tuning methods were developed and compared against a hand-tuned solution. The system was tuned to minimise tracking error while at the same time achieving smooth joint motion. Joint smoothness is crucial for accurate online ZMP estimation, a prerequisite for a closed-loop, dynamically stable humanoid walking gait. Results in both simulation and on a real robot are presented, demonstrating the superior smoothness performance of the GA-based methods.
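A minimal sketch of the kind of GA loop and cost function this implies, combining tracking error with a smoothness penalty on joint accelerations; population size, weights and the stand-in trajectories are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)

def cost(q_ref, q):
    """Cost of one candidate gain set: mean-squared tracking error plus a
    smoothness penalty on joint accelerations (the 0.1 weight is illustrative)."""
    track = np.mean((q_ref - q) ** 2)
    smooth = np.mean(np.diff(q, n=2, axis=0) ** 2)
    return track + 0.1 * smooth

def ga_step(pop, costs, sigma=0.05):
    """One elitist GA generation over 15-gain chromosomes: keep the best half,
    refill with mutated copies (crossover omitted for brevity)."""
    order = np.argsort(costs)
    elite = pop[order[: len(pop) // 2]]
    children = elite + rng.normal(0.0, sigma, elite.shape)
    return np.vstack([elite, children])

pop = rng.uniform(0.0, 1.0, (20, 15))            # 20 candidates x 15 joint gains
q_ref = np.sin(np.linspace(0, 2 * np.pi, 100))[:, None] * np.ones(15)
costs = []
for g in pop:
    # a real evaluation would run the walking simulation with gains g;
    # a noisy copy of the reference trajectory stands in for it here
    q = q_ref + rng.normal(0, 0.01, q_ref.shape)
    costs.append(cost(q_ref, q))
pop = ga_step(pop, np.array(costs))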

Relevance: 30.00%

Abstract:

Aim: The aim of this study was to assess the impact of hand-washing regimes on lipid transference to contact lenses. The presence of lipids on contact lenses can affect visual acuity and enhance spoilation. Additionally, they may even mediate and foster microbial transfer and serve as a marker of potential dermal contamination. Methods and materials: A social hand wash and the Royal College of Nursing (RCN) hand wash were investigated. A 'no-wash regime' was used as a control. The transfer of lipids from the hand was assessed by Thin Layer Chromatography (TLC). Lipid transference to the contact lenses was studied through fluorescence spectroscopy (FS). Results: Iodine staining for the presence of lipids on TLC plates indicated that, on a scale of 1-4, the 'no-wash regime' averaged 3.4 ± 0.8, the social wash averaged 2.2 ± 0.9 and the RCN wash averaged 1.2 ± 0.3. The FS of lipids on contact lenses for 'no washing' showed an average of 28.47 ± 10.54 fluorescence units (FU), the social wash an average of 13.52 ± 11.12 FU, and the RCN wash a much lower average of 6.47 ± 4.26 FU. Conclusions: This work demonstrates how the method used for washing the hands can affect the concentration of lipids and the transfer of these lipids onto contact lenses. A regime of hand washing for contact lens users should be standardised to help reduce potentially transferable species present on the hands. © 2011 British Contact Lens Association.

Relevance: 30.00%

Abstract:

In this paper we propose an approach based on self-interested autonomous cameras, which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to grow the vision graph during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, which permits the addition and removal of cameras in the network during runtime, and, on the other hand, relying only on local information, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. © 2011 IEEE.
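A toy sketch of an ant-colony-style vision graph of the kind described: successful handovers deposit pheromone on directed camera-to-camera edges and evaporation removes stale links, so cameras can join or leave at runtime. Class and parameter choices are illustrative, not the paper's implementation:

from collections import defaultdict

class VisionGraph:
    """Online vision graph: edge strength acts as pheromone. Handovers
    reinforce an edge; per-step evaporation lets unused links fade, which is
    how removed cameras disappear from the graph."""

    def __init__(self, deposit=1.0, evaporation=0.05):
        self.strength = defaultdict(float)
        self.deposit = deposit
        self.evaporation = evaporation

    def handover(self, src, dst):
        """Record a successful tracking handover from camera src to dst."""
        self.strength[(src, dst)] += self.deposit

    def step(self):
        """Evaporate all edges; drop those that have faded away."""
        for edge in list(self.strength):
            self.strength[edge] *= 1.0 - self.evaporation
            if self.strength[edge] < 1e-3:
                del self.strength[edge]

    def neighbours(self, src, k=3):
        """Strongest links from src: where to advertise a track first."""
        cand = [(d, s) for (a, d), s in self.strength.items() if a == src]
        return [d for d, _ in sorted(cand, key=lambda t: -t[1])[:k]]

g = VisionGraph()
g.handover("cam1", "cam2"); g.handover("cam1", "cam2"); g.handover("cam1", "cam3")
g.step()
print(g.neighbours("cam1"))   # ['cam2', 'cam3']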

Relevance: 30.00%

Abstract:

Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single chip cameras and the JVC:KYF58 (767 × 569) three chip camera. The images were stored in TIFF format and further copies created with reduced resolution or compressed. The images were then ranked for clarity on a 15 inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. Theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: Theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25× magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p<0.005) or the image was compressed as a BMP or <50% quality JPEG (p<0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when using colour extraction techniques. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
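The theoretical minimum-resolution figure follows from a Nyquist-style sampling argument; the sketch below shows the form of the calculation with assumed field-of-view and feature sizes chosen only to land near the reported 579 pixels:

def min_horizontal_pixels(fov_mm, smallest_feature_mm, samples_per_feature=2):
    """Nyquist-style estimate: pixels needed across the field of view so the
    smallest clinically relevant feature spans at least `samples_per_feature`
    pixels. The inputs here are illustrative assumptions, not the paper's
    exact figures."""
    return samples_per_feature * fov_mm / smallest_feature_mm

# e.g. a ~14.5 mm field of view at 25x magnification and 0.05 mm features
print(min_horizontal_pixels(14.5, 0.05))   # -> 580.0, close to the 579 reported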

Relevance: 30.00%

Abstract:

In this article we present an approach to object-tracking handover in a network of smart cameras, based on self-interested autonomous agents, which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to learn the vision graph, that is, the camera neighbourhood relations, during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, enabling efficient deployment in unknown scenarios and camera network topologies, and, on the other hand, relying only on local information, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multicamera calibration can be avoided. We have evaluated our approach both in a simulation study and in a network of real distributed smart cameras.
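A toy sketch of the market mechanism itself: each camera bids its expected utility for a track and the current owner sells when a bid exceeds its own utility (see the vision-graph sketch earlier for the ant-colony part). All names and values are illustrative:

def bid(confidence, visibility, cost):
    """A camera's bid for taking over a track: expected tracking utility
    (how well it currently sees the object) minus its processing cost."""
    return confidence * visibility - cost

def handover(owner_utility, bids):
    """Sell the track to the highest bidder only if that raises utility;
    otherwise the current owner keeps tracking."""
    best_cam, best_bid = max(bids.items(), key=lambda kv: kv[1])
    return best_cam if best_bid > owner_utility else None

offers = {"cam2": bid(0.9, 0.8, 0.02), "cam3": bid(0.5, 0.4, 0.02)}
print(handover(0.4, offers))   # -> 'cam2'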