966 resultados para human robot cooperation


Relevância:

90.00%

Publicador:

Resumo:

This thesis focuses on the development of technologies for human-robot interaction in nuclear fusion environments. The main challenge in the nuclear fusion sector lies in the extreme environmental conditions inside the reactor and in the need for equipment to meet very restrictive requirements in order to withstand those levels of radiation, magnetism, ultra-high vacuum, temperature and so on. Since it is not feasible for humans to carry out tasks directly, remote handling devices must be used for operation and maintenance processes. The ITER facilities require a controlled environment of extreme safety, governed by validated standards, and the definition and use of protocols is essential for its correct functioning. Focusing on telemanipulation with a high degree of scaling, protocols must be defined for open systems that allow interaction between equipment and devices of very different kinds. In this context, a Teleoperation Protocol is defined that enables the interconnection of master and slave devices of different types, allowing them to communicate bilaterally with each other and to use different control algorithms depending on the task to be performed. This protocol and its interconnectivity have been tested on the Open Teleoperation Platform (P.A.T.), developed and integrated at the ETSII UPM as a tool for testing, validating and running telerobotics experiments. The Teleoperation Protocol has been submitted through AENOR to the ISO Telerobotics group as a valid solution to the existing problem and is currently under review. The design of this protocol links master and slave; however, with the very high radiation levels present in ITER, the controller electronics cannot enter the tokamak. It is therefore proposed that, by means of minimal, suitably protected electronics, the control signals travelling through the umbilical cabling from the controller to the robot base be multiplexed. This theoretical exercise demonstrates the utility and feasibility of such a solution for reducing the volume and weight of the umbilical cabling by roughly 90%; to do so, specific electronics with RadHard certification must be developed to withstand the enormous radiation levels of ITER. For this generic manipulator, and with the help of the Open Teleoperation Platform, an algorithm has been developed that, using a force/torque sensor and an IMU mounted on the robot wrist and suitably protected against radiation, calculates the forces and inertias produced by the load. This is necessary in order to transmit scaled forces to the operator, so that they feel the load being manipulated and not other forces that may act on the remote slave, as happens with other force estimation techniques. Since the shielding of the sensors must not be large or heavy, this type of technology should be reserved for maintenance tasks during the scheduled shutdowns of ITER, when radiation levels are at their minimum.
In addition, so that the operator feels the load force as faithfully as possible, electronics have been developed that, through current control of the motors, allow force control based on the characterisation of the master's motors. Furthermore, to increase the operator's perception, experiments were carried out showing that applying multimodal stimuli (visual, auditory and haptic) increases immersion and task performance, since these stimuli directly influence the operator's response capability. Finally, regarding the operator's visual feedback, ITER works with cameras placed at strategic locations, whereas humans manipulating objects use binocular vision, constantly changing their point of view to suit the visual needs of each moment of the task. For this reason, a three-dimensional reconstruction of the task space has been built from an RGB-D camera-sensor, which provides a mobile virtual binocular viewpoint from a camera located at a fixed point; this can be projected on a 3D display device so that the operator can vary the stereoscopic point of view according to their preferences. The successful integration of these human-robot interaction technologies into the P.A.T. has allowed them to be validated through tests and experiments that verify their usefulness for the practical application of telemanipulation with a high degree of scaling in nuclear fusion environments.

Abstract: This thesis focuses on developing technologies for human-robot interaction in nuclear fusion environments. The main problem in the nuclear fusion sector lies in the extreme environmental conditions inside the reactor, which lead to very restrictive requirements for equipment that must withstand high levels of radiation, magnetism, ultra-high vacuum, temperature and so on. Since it is not feasible for humans to carry out tasks directly, remote handling devices must be used to accomplish operation and maintenance processes. ITER facilities require a controlled environment of extreme safety and security with validated standards, and the definition and use of protocols is essential to govern their operation. Focusing on remote handling with a high degree of scaling, protocols must be defined for open systems to allow interaction among equipment and multifunctional devices of different kinds. In this context, a Teleoperation Protocol has been defined that enables interconnection between master and slave devices of different types, allowing them to communicate bilaterally with each other and to use different control algorithms depending on the task to be performed. This protocol and its interconnectivity have been tested on the Teleoperation Open Platform (T.O.P.), developed and integrated at the ETSII UPM as a tool to test, validate and conduct telerobotics experiments. The protocol has been proposed through AENOR to the ISO Telerobotics group as a valid solution to the existing problem and is currently under review. Master-slave connection has been achieved with this protocol design; however, with such high radiation levels in ITER, the controller electronics cannot enter the tokamak.

Therefore, a multiplexing electronic board with suitable RadHard protection is proposed to transmit the control signals through the umbilical cable from the controller to the robot base. This theoretical exercise demonstrates the utility and feasibility of such a solution, reducing the volume and weight of the umbilical wiring by approximately 90%, although specific electronic hardware must be developed and qualified as RadHard in order to handle the huge radiation levels of ITER. Generic manipulators do not allow regular force-feedback sensors to be used under ITER conditions. In this line of research, an algorithm has been developed to calculate the forces and inertia produced by the load, using a force/torque sensor and an IMU, both conveniently protected against radiation and mounted on the robot wrist. Scaled forces are transmitted to the operator, who feels the load forces but not other undesirable forces acting on the slave system, as happens with other force estimation techniques. Since the shielding of the sensors should not be large or heavy, this type of technology should be reserved for the scheduled maintenance periods of ITER, when radiation levels are at their lowest. Moreover, so that the operator perceives the load forces as accurately as possible, current-control electronics were developed to perform force control of the master joint motors, based on a proper characterisation of those motors. In addition, to increase operator perception, experiments were conducted demonstrating that applying multimodal stimuli (visual, auditory and haptic) increases immersion and task performance, since these stimuli directly influence response time. Finally, regarding visual feedback to the operator, ITER usually works with 2D cameras in strategic locations, whereas humans use binocular vision in direct object manipulation, constantly changing their point of view to meet the visual needs of the task. In this line, a three-dimensional reconstruction of non-structured scenarios has been developed using an RGB-D sensor instead of cameras in the remote environment. A mobile virtual binocular viewpoint can thus be generated from a camera at a fixed point, projecting stereoscopic images on a 3D display device according to operator preferences. The successful integration of these human-robot interaction technologies into the T.O.P., and their validation through tests and experiments, verifies their usefulness for the practical application of highly scaled remote handling in nuclear fusion environments.
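
The wrist-sensor compensation described above can be illustrated with a short sketch. The fragment below shows one plausible way to combine the force/torque reading with IMU data to isolate the wrench due to the load before scaling it for the operator; the function name, the assumption of a known tool mass and centre of mass, and the scaling factor are illustrative, not the thesis implementation.

```python
import numpy as np

def payload_wrench(f_meas, tau_meas, a_imu, omega, alpha,
                   m_tool, r_tool, g_world, R_wrist, scale=0.1):
    """Isolate and scale the wrench due to the payload at the wrist.

    f_meas, tau_meas    : force/torque sensor reading, wrist frame
    a_imu, omega, alpha : IMU linear acceleration, angular velocity/acceleration
    m_tool, r_tool      : mass and centre of mass of everything rigidly attached
                          beyond the sensor that is NOT the payload (assumed known)
    g_world, R_wrist    : gravity vector in the world frame, wrist orientation
    scale               : force scaling factor reflected to the master device
    """
    g_wrist = R_wrist.T @ g_world                      # gravity in the wrist frame
    # Acceleration of the tool centre of mass (rigid-body kinematics)
    a_tool = a_imu + np.cross(alpha, r_tool) + np.cross(omega, np.cross(omega, r_tool))
    # Wrench the tool itself imposes on the sensor (gravity + inertia)
    f_tool = m_tool * (a_tool - g_wrist)
    tau_tool = np.cross(r_tool, f_tool)
    # What remains is attributable to the payload; scale it for the operator
    return scale * (f_meas - f_tool), scale * (tau_meas - tau_tool)
```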

Relevância:

90.00%

Publicador:

Resumo:

New low-cost sensors and open, free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, robot navigation and localization, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGB-D sensor. It works in real time and does not require visual markers, camera calibration, or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient lighting changes. Furthermore, the method was designed as the basis of a human interface for remotely controlling domestic or industrial devices. In this paper, the method was tested by operating a robotic hand: first, the human hand was recognized and its fingers detected; second, the movement of the fingers was analysed and mapped so that it could be imitated by the robotic hand.
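
As a rough illustration of marker-less finger detection, the sketch below uses a simplified 2-D depth-image analogue of the point-cloud approach: the hand is segmented as the nearest depth band and fingertip candidates are taken from the convexity defects of its contour. It relies on OpenCV rather than the authors' pipeline, and the depth band and defect threshold are illustrative values.

```python
import cv2
import numpy as np

def fingertip_candidates(depth_mm, near=400, far=700, min_defect_px=20):
    """Rough fingertip detection on one depth frame (values in millimetres).

    The hand is assumed to be the closest object inside the [near, far] band;
    fingertip candidates are the hull points bordering deep convexity defects
    (the valleys between extended fingers).
    """
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)          # largest blob = hand
    hull_idx = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull_idx)
    tips = []
    if defects is not None:
        for start, end, _farthest, depth in defects[:, 0]:
            if depth / 256.0 > min_defect_px:          # deep valley between two fingers
                tips.append(tuple(hand[start][0]))
                tips.append(tuple(hand[end][0]))
    # merge near-duplicate candidates on a coarse grid
    return list({(x // 10, y // 10): (int(x), int(y)) for x, y in tips}.values())
```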

Relevância:

90.00%

Publicador:

Resumo:

NOGUEIRA, Marcelo B. ; MEDEIROS, Adelardo A. D. ; ALSINA, Pablo J. Pose Estimation of a Humanoid Robot Using Images from an Mobile Extern Camera. In: IFAC WORKSHOP ON MULTIVEHICLE SYSTEMS, 2006, Salvador, BA. Anais... Salvador: MVS 2006, 2006.

Relevância:

90.00%

Publicador:

Resumo:

This presentation describes the general architecture and current state of development of the CLARK system. The goal of the system is to deploy a robotic assistant that helps a physician carry out CGA (Comprehensive Geriatric Assessment) procedures, so that certain tasks, such as administering questionnaires or movement tests, can be performed by the robot in parallel with the rest of the CGA procedure, thereby increasing its efficiency.

Relevância:

80.00%

Publicador:

Resumo:

This paper describes technologies we have developed to perform autonomous large-scale off-world excavation. A scale dragline excavator of size similar to that required for lunar excavation was made capable of autonomous control. Systems have been put in place to allow remote operation of the machine from anywhere in the world. Algorithms have been developed for complete autonomous digging and dumping of material taking into account machine and terrain constraints and regolith variability. Experimental results are presented showing the ability to autonomously excavate and move large amounts of regolith and accurately place it at a specified location.
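
As a toy illustration of planning a single dig-and-dump cycle under a reach constraint, the sketch below greedily picks the reachable grid cell holding the most material to dig and the reachable cell closest to the requested spoil location to dump. It is only a hedged sketch of the kind of machine and terrain constraint handling described, not the authors' algorithm; all names and parameters are illustrative.

```python
import numpy as np

def plan_dig_dump(height_map, cell, reach_min, reach_max, dump_target,
                  machine_xy=(0.0, 0.0)):
    """Pick a dig cell and a dump cell for one dragline cycle.

    height_map           : 2-D array of terrain heights (m), one value per grid cell
    cell                 : grid cell size (m)
    reach_min, reach_max : radial operating envelope of the bucket around machine_xy
    dump_target          : desired spoil-pile centre (x, y) in world coordinates
    """
    ny, nx = height_map.shape
    xs = (np.arange(nx) + 0.5) * cell
    ys = (np.arange(ny) + 0.5) * cell
    r = np.hypot(xs[None, :] - machine_xy[0], ys[:, None] - machine_xy[1])
    reachable = (r >= reach_min) & (r <= reach_max)    # machine reach constraint

    # Dig where the most material sits inside the reachable annulus
    dig_score = np.where(reachable, height_map, -np.inf)
    dig_idx = np.unravel_index(np.argmax(dig_score), height_map.shape)

    # Dump at the reachable cell closest to the requested spoil location
    dump_dist = np.hypot(xs[None, :] - dump_target[0], ys[:, None] - dump_target[1])
    dump_dist = np.where(reachable, dump_dist, np.inf)
    dump_idx = np.unravel_index(np.argmin(dump_dist), height_map.shape)
    return dig_idx, dump_idx
```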

Relevância:

80.00%

Publicador:

Resumo:

The research reported in this paper explores autonomous technologies for agricultural farming applications and focuses on the development of multiple cooperative agricultural robots (AgBots). These are highly autonomous, small, lightweight, unmanned machines that operate cooperatively (as opposed to a traditional single heavy machine) and are suited to work on broadacre land (large-scale crop operations on land parcels greater than 4,000 m²). Since this is a new and potentially disruptive technology, little is yet known about farmer attitudes towards robots, how robots might be incorporated into current farming practice, and how best to marry the capability of the robot with the work of the farmer. This paper reports preliminary insights (with a focus on farmer-robot control) gathered from field visits and contextual interviews with farmers, and contributes knowledge that will enable further work toward the design and application of agricultural robotics.

Relevância:

80.00%

Publicador:

Resumo:

Research publication is one of the final steps in the research process, which begins with the development of a research idea. The process moves through bringing together collaborators, designing the study protocol, securing grant or study funding, obtaining ethics approval, implementing the research, and analysing the data and drawing conclusions, and it leads finally to publication of the study results. Although dissemination of the results is the final step of the research process, many studies go unreported or are improperly reported. Indeed, reviewers have suggested that many randomized controlled trials, observational studies, and qualitative studies lack crucial methodological features or details that lend credibility to study results (Simera et al., 2010).

Relevância:

80.00%

Publicador:

Resumo:

This paper describes ongoing work on a system using spatial descriptions to construct abstract maps that can be used for goal-directed exploration in an unfamiliar office environment. Abstract maps contain membership, connectivity, and spatial layout information extracted from symbolic spatial information. In goal-directed exploration, the robot would then link this information with observed symbolic information and its grounded world representation. We demonstrate the ability of the system to extract and represent membership, connectivity, and spatial layout information from spatial descriptions of an office environment. In the planned study, the robot will navigate to the goal location using the abstract map to inform the best direction to explore in.
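
A minimal data-structure sketch of such an abstract map is given below: membership, connectivity and coarse layout relations extracted from spatial descriptions are stored symbolically and later queried to rank already-observed places as directions to explore. The class, method and relation names are illustrative assumptions, not the system's actual representation.

```python
from collections import defaultdict

class AbstractMap:
    """Toy abstract map built from symbolic spatial descriptions."""

    def __init__(self):
        self.members = defaultdict(set)   # container -> {places}, e.g. wing -> rooms
        self.edges = defaultdict(set)     # place -> {directly connected places}
        self.layout = {}                  # (a, b) -> relation, e.g. "past", "left of"

    def add_membership(self, place, container):
        self.members[container].add(place)

    def add_connection(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def add_layout(self, a, relation, b):
        self.layout[(a, b)] = relation

    def candidate_directions(self, goal, observed):
        """Observed places that the descriptions link to the goal."""
        same_container = {p for places in self.members.values()
                          if goal in places for p in places}
        related = set(self.edges[goal]) | same_container
        related |= {a for (a, b) in self.layout if b == goal}
        related |= {b for (a, b) in self.layout if a == goal}
        return [p for p in observed if p in related and p != goal]

# Example: "room 302 is in the level 3 east wing, off the east corridor, past the kitchen"
m = AbstractMap()
m.add_membership("room 302", "level 3 east wing")
m.add_connection("room 302", "east corridor")
m.add_layout("room 302", "past", "kitchen")
print(m.candidate_directions("room 302", ["east corridor", "kitchen", "lobby"]))
# -> ['east corridor', 'kitchen']
```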

Relevância:

80.00%

Publicador:

Resumo:

This paper introduces a method that uses human-robot cooperation techniques to guide a manipulator to grasp a static target in an unstructured environment. It describes, respectively, how the position of the grasp target is obtained by combining a laser/CCD camera system with the operator's experience, and how the orientation of the grasp target is obtained by combining virtual reality technology with the operator's experience. Model-based visual guidance is then used to guide the arm to complete the grasping operation.
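
A minimal sketch of how the two sources could be merged into a single grasp command is shown below: the target position comes from the laser/CCD measurement and the orientation from the operator's choice in the virtual scene. The RPY convention and function name are assumptions for illustration, and the model-based visual guidance stage is not reproduced.

```python
import numpy as np

def compose_grasp_pose(p_sensed, rpy_operator):
    """Combine a sensed target position with an operator-chosen orientation.

    p_sensed     : (x, y, z) target position from the laser/CCD measurement,
                   expressed in the robot base frame
    rpy_operator : roll, pitch, yaw chosen by the operator in the virtual scene
    Returns a 4x4 homogeneous grasp pose for the arm controller.
    """
    roll, pitch, yaw = rpy_operator
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # ZYX (yaw-pitch-roll) rotation matrix
    R = np.array([[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
                  [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
                  [-sp,     cp * sr,                cp * cr]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p_sensed
    return T
```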

Relevância:

80.00%

Publicador:

Resumo:

To address the main difficulties of human-machine interaction in network-based intelligent robot teleoperation systems and the shortcomings of existing methods, and taking into account the characteristics of network-based multi-robot teleoperation, a multi-mode control approach is applied that enriches the ways in which the operator can interact with the robot system and improves operating efficiency. On this basis, to solve the problems that network delay causes for human-machine interaction in multi-robot teleoperation systems, an event-based method with time stamps is proposed, which improves the efficiency and performance of the system while guaranteeing its stable operation. Experiments demonstrate the effectiveness and superiority of the proposed method.
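
The time-stamped, event-based idea can be sketched as a receiver-side filter: each command carries a monotonically increasing event index plus a wall-clock stamp, and packets that are stale or arrive out of order over the network are simply dropped. This is only a hedged sketch of the ordering mechanism; the paper's stability analysis and multi-robot coordination logic are not reproduced, and the field names and age threshold are illustrative.

```python
import time
from dataclasses import dataclass

@dataclass
class Command:
    event_index: int      # monotonically increasing event counter
    timestamp: float      # wall-clock stamp attached by the operator station
    payload: dict         # e.g. {"robot": 2, "mode": "velocity", "value": [...]}

class EventBasedReceiver:
    """Robot-side filter: apply only commands that advance the event index
    and are not too old; stale or out-of-order packets are dropped."""

    def __init__(self, max_age=0.5):
        self.last_event = -1
        self.max_age = max_age        # seconds

    def accept(self, cmd: Command) -> bool:
        fresh = (time.time() - cmd.timestamp) <= self.max_age
        in_order = cmd.event_index > self.last_event
        if fresh and in_order:
            self.last_event = cmd.event_index
            return True
        return False
```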

Relevância:

80.00%

Publicador:

Resumo:

This work studies the architecture of multi-robot systems. A method combining a spatio-temporal table with a time controller is adopted to solve the coordination and cooperation problem among multiple robots. To address the specific characteristics of the formation problem, an environment-based memory learning method is proposed, giving the multi-robot formation system strong environmental adaptability. Finally, the whole multi-robot system is implemented in simulation experiments, further verifying the feasibility and effectiveness of each algorithm.

Relevância:

80.00%

Publicador:

Resumo:

In multi-robot systems, how good a robot's behaviour is often depends on the behaviour of the other robots, so joint (combined) actions must be used to achieve multi-robot cooperation; however, reinforcement learning algorithms over joint actions converge extremely slowly because the learning space is enormous. The new method proposed in this paper reduces the dimensionality of the learning space by predicting the probability of the actions each robot will execute, and it is applied to multi-robot cooperation tasks. Experimental results show that the prediction-based accelerated reinforcement learning algorithm obtains multi-robot cooperation policies faster than the original algorithm.
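
The dimensionality-reduction idea can be illustrated with a fictitious-play-style joint-action learner: each robot keeps an empirical model of the other robot's action probabilities and selects actions by the expected Q-value under that prediction, instead of exploring the full joint action space blindly. This is a standard opponent-modelling sketch in the spirit of the paper, not its actual algorithm; the two-robot setting, sizes and learning rates are illustrative assumptions.

```python
import numpy as np

class JointActionLearner:
    """Q-learning with a simple predictive model of the other robot's actions."""

    def __init__(self, n_states, n_own, n_other,
                 alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
        self.q = np.zeros((n_states, n_own, n_other))    # value of joint actions
        self.counts = np.ones((n_states, n_other))       # Laplace-smoothed prediction model
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = np.random.default_rng(seed)

    def predict_other(self, s):
        """Empirical probability of the other robot's actions in state s."""
        return self.counts[s] / self.counts[s].sum()

    def expected_q(self, s):
        """Expected value of each own action under the predicted other-robot policy."""
        return self.q[s] @ self.predict_other(s)

    def act(self, s):
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.expected_q(s)))

    def update(self, s, a_own, a_other, reward, s_next):
        self.counts[s, a_other] += 1                      # refine the prediction
        target = reward + self.gamma * np.max(self.expected_q(s_next))
        td = target - self.q[s, a_own, a_other]
        self.q[s, a_own, a_other] += self.alpha * td
```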

Relevância:

80.00%

Publicador:

Resumo:

Distribution, autonomy, coordination and cooperation are the development trends of multi-robot systems. Building on research into multi-robot systems that are easy to coordinate and make cooperate, the authors use a hierarchical structure and multi-agent concepts to construct an assembly system, MROCAS. The system features automatic task modelling and decomposition, rapid reconfiguration, good flexibility, a friendly human-machine interface, and a degree of autonomy for each robot, and it accomplishes assembly tasks quickly in relatively complex environments.

Relevância:

80.00%

Publicador:

Resumo:

Human-robot interaction is an interdisciplinary research area which aims at integrating human factors, cognitive psychology and robot technology. The ultimate goal is the development of social robots. These robots are expected to work in human environments and to understand the behavior of persons through gestures and body movements. In this paper we present a biological and real-time framework for detecting and tracking hands. This framework is based on keypoints extracted from cortical V1 end-stopped cells. Detected keypoints and the cells’ responses are used to classify the junction type. By combining annotated keypoints in a hierarchical, multi-scale tree structure, moving and deformable hands can be segregated, their movements can be obtained, and they can be tracked over time. By using hand templates with keypoints at only two scales, a hand’s gestures can be recognized.
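
A crude sketch of matching detected keypoints against a hand template at two scales is given below: the score is simply the fraction of template points with a nearby detected keypoint, maximised over the scales. It stands in for the paper's hierarchical, cell-based matching only as an illustration; the tolerance and scales are arbitrary assumptions.

```python
import numpy as np

def match_hand_template(keypoints, template, scales=(1.0, 2.0), tol=8.0):
    """Score detected keypoints against a centred hand template at two scales.

    The score is the fraction of template points that have a detected keypoint
    within `tol` pixels, maximised over the scales.
    """
    kp = np.asarray(keypoints, dtype=float)
    kp = kp - kp.mean(axis=0)                 # centre keypoints on the detected hand
    best = 0.0
    for s in scales:
        scaled = np.asarray(template, dtype=float) * s
        # distance from every scaled template point to its nearest keypoint
        d = np.linalg.norm(scaled[:, None, :] - kp[None, :, :], axis=2).min(axis=1)
        best = max(best, float(np.mean(d < tol)))
    return best
```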