934 results for Interaction human robot


Relevance: 90.00%

Abstract:

Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Electrical Engineering

Relevance: 90.00%

Abstract:

In previous work we presented a model capable of generating human-like movements for a dual arm-hand robot involved in human-robot cooperative tasks. However, the focus was on the generation of reach-to-grasp and reach-to-regrasp bimanual movements, and no synchrony in timing was taken into account. In this paper we extend the previous model to accomplish bimanual manipulation tasks by synchronously moving both arms and hands of an anthropomorphic robotic system. Specifically, the new extended model has been designed for two tasks with different degrees of difficulty. Numerical results were obtained with the IPOPT solver embedded in our MATLAB simulator.
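
As a rough illustration of how such an optimization-based motion model can be posed and handed to IPOPT (the abstract's MATLAB implementation is not shown here), the following Python sketch formulates a single-arm reach movement as a nonlinear program using CasADi; the horizon, the simple kinematic model, the joint limits, and the target configuration are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' MATLAB code): posing a single-arm
# reach movement as a nonlinear program and solving it with IPOPT via CasADi.
# Horizon, kinematics, limits, and target are illustrative assumptions.
import casadi as ca

N = 30                      # number of time steps (assumed horizon)
dt = 0.1                    # step length in seconds (assumed)
q_goal = [0.8, -0.4, 0.3]   # assumed target joint configuration (3-DOF arm)

opti = ca.Opti()
q = opti.variable(3, N + 1)   # joint angles over the horizon
dq = opti.variable(3, N)      # joint velocities (decision variables)

# Objective: smooth motion approximated by penalizing velocities,
# plus the terminal distance to the target configuration.
opti.minimize(ca.sumsqr(dq) + 100 * ca.sumsqr(q[:, N] - ca.DM(q_goal)))

for k in range(N):
    # Simple kinematic integration: q_{k+1} = q_k + dq_k * dt
    opti.subject_to(q[:, k + 1] == q[:, k] + dq[:, k] * dt)
    opti.subject_to(opti.bounded(-1.0, dq[:, k], 1.0))   # velocity limits

opti.subject_to(q[:, 0] == 0)                            # start at home pose
opti.subject_to(opti.bounded(-2.0, ca.vec(q), 2.0))      # joint limits

opti.solver("ipopt")
sol = opti.solve()
print(sol.value(q[:, N]))   # final joint configuration reached
```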

Relevance: 90.00%

Abstract:

Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics)

Relevance: 90.00%

Abstract:

Abstract taken from the author

Relevance: 90.00%

Abstract:

The 'Uncanny Valley' was conceived in 1970 by Prof Masahiro Mori and describes a possible relationship between an object's appearance or motion and how people perceive the object. Initially this research was applied without validation. Modern technology has enabled initial investigations, summarised here, which conclude that further work is required. A good design guideline for humanoid robots is desired if humanoid robots are to assist an increasingly elderly population, but is not yet possible due to technological constraints. Prosthetics is considered a good resource, as the user interaction is comparable to the anticipated level of human-robot interaction and there is a wide range of existing devices.

Relevance: 90.00%

Abstract:

This project comprises an interactive mobile robotics environment, focused on human-robot interaction. The system was developed to run on a smartphone with the Android operating system, embedded in a small mobile robot. Information provided by the smartphone's camera and microphone, as well as by proximity sensors embedded in the robot, is used as input to a control architecture implemented in software. The control architecture is behavior-based and receptive to human commands, and assists the robot's navigation. The robot is controlled either by its own behaviors or by commands emitted by humans.
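
A behavior-based controller that remains receptive to human commands can be organized as a fixed-priority arbitration loop. The sketch below illustrates that idea in Python; the behavior names, priorities, thresholds, and command vocabulary are assumptions, not the project's actual Android implementation.

```python
# Minimal sketch of a behavior-based controller receptive to human commands.
# Behavior names, priorities, and the sensor/command interfaces are assumed
# for illustration; the original system runs on an Android smartphone.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MotorCommand:
    linear: float   # forward speed
    angular: float  # turn rate

def avoid_obstacles(proximity: list) -> Optional[MotorCommand]:
    """Highest-priority behavior: back away and turn if something is close."""
    if min(proximity) < 0.2:          # metres, assumed threshold
        return MotorCommand(linear=-0.1, angular=0.5)
    return None

def follow_human_command(command: Optional[str]) -> Optional[MotorCommand]:
    """Map a recognized voice/vision command to motion, if one was given."""
    mapping = {"forward": MotorCommand(0.2, 0.0),
               "left": MotorCommand(0.0, 0.5),
               "right": MotorCommand(0.0, -0.5),
               "stop": MotorCommand(0.0, 0.0)}
    return mapping.get(command)

def wander() -> MotorCommand:
    """Default behavior when nothing else fires."""
    return MotorCommand(linear=0.15, angular=0.0)

def arbitrate(proximity: list, command: Optional[str]) -> MotorCommand:
    # Fixed-priority arbitration: safety first, then human commands, then wander.
    for output in (avoid_obstacles(proximity), follow_human_command(command)):
        if output is not None:
            return output
    return wander()

print(arbitrate(proximity=[0.8, 0.9, 1.2], command="left"))
```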

Relevance: 90.00%

Abstract:

Shared attention is a very important type of communication among human beings. It is sometimes regarded as the more complex form of communication, constituted by a sequence of four steps: mutual gaze, gaze following, imperative pointing, and declarative pointing. Some approaches have been proposed in the Human-Robot Interaction area to solve part of the shared attention process; most of the proposed works try to solve the first two steps. Models based on temporal difference, neural networks, probabilistic methods, and reinforcement learning are used in several works. In this article, we present a robotic architecture that provides a robot or agent with the capacity to learn mutual gaze, gaze following, and declarative pointing using a robotic head interacting with a caregiver. Three learning methods have been incorporated into this architecture, and their performance has been compared to find the most adequate one for use in a real experiment. The learning capabilities of this architecture have been analyzed by observing the robot interacting with a human in a controlled environment. The experimental results show that the robotic head is able to produce appropriate behavior and to learn from sociable interaction.
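
To make the learning step concrete, the sketch below shows a tabular temporal-difference (Q-learning) rule of the kind the abstract mentions, rewarding the robotic head for looking where the caregiver looks; the discretized states, actions, reward, and learning parameters are illustrative assumptions rather than the architecture described in the article.

```python
# Minimal sketch of a tabular Q-learning rule for gaze following. The
# discretized states (caregiver head direction), actions (where the robot
# head looks), and reward are illustrative assumptions.
import random

directions = ["left", "center", "right"]   # assumed discretization
Q = {(s, a): 0.0 for s in directions for a in directions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration

def reward(caregiver_gaze: str, robot_gaze: str) -> float:
    # Positive reward when the robot looks where the caregiver is looking.
    return 1.0 if caregiver_gaze == robot_gaze else -0.1

for _ in range(5000):
    state = random.choice(directions)                 # caregiver's gaze direction
    if random.random() < epsilon:                     # epsilon-greedy action choice
        action = random.choice(directions)
    else:
        action = max(directions, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(directions)            # next observed gaze
    best_next = max(Q[(next_state, a)] for a in directions)
    # Temporal-difference update
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])

# After learning, the greedy policy should follow the caregiver's gaze.
print({s: max(directions, key=lambda a: Q[(s, a)]) for s in directions})
```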

Relevance: 90.00%

Abstract:

Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a way of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices which is able to recognize a collection of 10 different hand gestures. The system was conceived to be light and to operate in a user-independent manner in real time. The recognition system was implemented on a smartphone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques and a lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with gestures made with the mobile phone.
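
A light, user-independent recognizer of this kind can be built by extracting a few statistics from each accelerometer window and comparing them against per-gesture templates. The Python sketch below shows that idea with a nearest-centroid classifier over per-axis mean and standard deviation features; the window length, feature set, and classifier are assumptions, not the technique actually evaluated in the paper.

```python
# Minimal sketch of a lightweight gesture classifier over accelerometer
# windows. Window length, features (per-axis mean and standard deviation),
# and the nearest-centroid classifier are illustrative assumptions.
import numpy as np

def features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) raw accelerometer readings -> small feature vector."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

class NearestCentroidGestures:
    def fit(self, windows, labels):
        feats = np.array([features(w) for w in windows])
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[[l == c for l in labels]].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, window):
        d = np.linalg.norm(self.centroids_ - features(window), axis=1)
        return self.labels_[int(np.argmin(d))]

# Toy training data: two fake gestures sampled at 50 Hz for one second.
rng = np.random.default_rng(0)
shake = [rng.normal(0, 3.0, size=(50, 3)) for _ in range(20)]          # large variance
tilt = [rng.normal([0, 0, 9.8], 0.3, size=(50, 3)) for _ in range(20)]  # near gravity
clf = NearestCentroidGestures().fit(shake + tilt, ["shake"] * 20 + ["tilt"] * 20)
print(clf.predict(rng.normal(0, 3.0, size=(50, 3))))   # expected: "shake"
```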

Relevance: 90.00%

Abstract:

This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes, obviating the need for them to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability of the mini-robot's kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures, enabling the operator to perform movements and monitor tasks from a distance.
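
One way to picture the tele-operation layer is as a small dispatcher that maps each recognized 3D hand gesture to a drive or camera command. The sketch below illustrates this; the gesture vocabulary and the textual command protocol are hypothetical, not the prototype's actual interface.

```python
# Minimal sketch of a gesture-to-command dispatcher for a tele-operated
# mini-robot with a motorised webcam. The gesture vocabulary and the
# command strings are hypothetical placeholders.
from typing import Optional

GESTURE_COMMANDS = {
    "swipe_forward": "DRIVE 0.2 0.0",   # linear m/s, angular rad/s
    "swipe_left":    "DRIVE 0.0 0.5",
    "swipe_right":   "DRIVE 0.0 -0.5",
    "fist":          "DRIVE 0.0 0.0",   # stop
    "point_up":      "CAM_TILT 10",     # tilt the webcam up by 10 degrees
    "point_down":    "CAM_TILT -10",
}

def dispatch(gesture: str) -> Optional[str]:
    """Translate a recognized 3D hand gesture into a robot command string.
    In a real prototype the string would be sent over the robot's
    communication link; here it is simply returned."""
    return GESTURE_COMMANDS.get(gesture)

for g in ["swipe_forward", "point_up", "fist", "unknown_gesture"]:
    print(g, "->", dispatch(g))
```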

Relevance: 90.00%

Abstract:

This research project, entitled Téléopération d'un robot collaboratif par outil haptique, addresses one of the contemporary problems of robotics, namely cooperation between humans and machines. Robotics has been expanding rapidly for two decades now: robots are increasingly present in industry, services and personal assistance, and are diversifying considerably. These new trends are taking robots out of the cages in which they used to be placed and are opening the door wide to new applications. Among these, cooperation and interaction with humans represent a real opportunity to relieve people of complex, tedious and repetitive tasks. In parallel, modern robotics is moving toward a massive development of the humanoid domain. Indeed, several social experiments have shown that human beings, who constantly interact with the systems around them, find it easier to contribute to the accomplishment of a task with a human-looking robot than with a machine. The work presented in this research project fits into a context of human-robot interaction (HRI) based on humanoid robotics. The resulting system must allow a user to interact with the machine efficiently and intuitively, while respecting certain criteria, in particular safety. By pooling the respective skills of the human and the humanoid robot, interactions are improved. Indeed, the robot can perform a large number of actions precisely and without tiring, but it is not necessarily endowed with decision-making adapted to the situation, unlike the human, who is able to adjust their behavior naturally or based on experience. In other words, this system seeks to combine human know-how and reasoning capacity with the robustness, efficiency and precision of the robot. In robotics, the term interaction also includes the notion of control. The vast majority of robots receive machine commands, generally trajectory setpoints, which they are able to interpret. However, several control interfaces can be considered, in particular those using haptic devices, which give the user tactile feedback and perception. These devices, like all those that increase the user's degree of control by adding a sensory channel, are perfectly suited to this kind of application. In this project, two haptic devices are assembled and integrated into a haptic control interface in order to command the arm of a humanoid robot. The human is thus able to direct the robot while adjusting their commands according to the information coming from the robot's various sensors, which is relayed to them visually or through sensory feedback.
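
The core of such a haptic teleoperation scheme is a loop that maps the stylus pose of the haptic device to an end-effector setpoint for the robot arm and renders a force proportional to the tracking error back to the operator. The Python sketch below illustrates that loop with simulated stand-ins for the device and the arm; the scaling factor, stiffness gain, and interfaces are assumptions, not the project's actual hardware APIs.

```python
# Minimal sketch of a haptic teleoperation loop with simulated stand-ins for
# the haptic device and the humanoid arm; gains and interfaces are assumed.
import numpy as np

WORKSPACE_SCALE = 2.0   # haptic workspace -> robot workspace scaling (assumed)
STIFFNESS = 50.0        # N/m, force-feedback gain (assumed)

class SimulatedHapticDevice:
    """Stand-in for the real haptic SDK: a fixed stylus pose, printed forces."""
    def read_position(self) -> np.ndarray:
        return np.array([0.05, 0.02, 0.10])      # metres, operator's hand pose
    def render_force(self, force: np.ndarray) -> None:
        print("force fed back to operator [N]:", np.round(force, 2))

class SimulatedArm:
    """Stand-in for the arm controller: first-order tracking of setpoints."""
    def __init__(self):
        self.position = np.zeros(3)
    def command_position(self, target: np.ndarray) -> None:
        self.position += 0.2 * (target - self.position)   # sluggish tracking
    def end_effector_position(self) -> np.ndarray:
        return self.position

device, arm = SimulatedHapticDevice(), SimulatedArm()
for _ in range(5):                                       # a few control cycles
    target = WORKSPACE_SCALE * device.read_position()    # map stylus to robot workspace
    arm.command_position(target)
    error = target - arm.end_effector_position()
    device.render_force(-STIFFNESS * error / WORKSPACE_SCALE)  # operator feels the lag
```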

Relevance: 90.00%

Abstract:

The industrial context is changing rapidly due to advancements in technology fueled by the Internet and Information Technology. The fourth industrial revolution counts integration, flexibility, and optimization as its fundamental pillars, and, in this context, Human-Robot Collaboration has become a crucial factor for manufacturing sustainability in Europe. Collaborative robots are appealing to many companies due to their low installation and running costs and high degree of flexibility, making them ideal for reshoring production facilities with a short return on investment. The ROSSINI European project aims to implement true Human-Robot Collaboration by designing, developing, and demonstrating a modular and scalable platform for integrating human-centred robotic technologies in industrial production environments. The project focuses on the safety concerns related to introducing a cobot in a shared working area and aims to lay the groundwork for a new working paradigm at the industrial level. The need for a software architecture suitable for the robotic platform employed in one of the three use cases selected to deploy and test the new technology was the main trigger of this thesis. The chosen application consists of the automatic loading and unloading of raw-material reels to an automatic packaging machine through an Autonomous Mobile Robot composed of an Autonomous Guided Vehicle, two collaborative manipulators, and an eye-on-hand vision system for performing tasks in a partially unstructured environment. The results obtained during the ROSSINI use case development were later used in the SENECA project, which addresses the need for robot-driven automatic cleaning of pharmaceutical bins in a very specific industrial context. The inherent versatility of mobile collaborative robots is evident from their deployment in the two projects with few hardware and software adjustments. The positive impact of Human-Robot Collaboration on diverse production lines is a motivation for future industrial investment in research on this increasingly popular field.
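
At the task level, the reel loading use case implies a sequence of skills: navigate to storage, detect a reel with the eye-on-hand camera, pick it, navigate to the packaging machine, and load it. The sketch below expresses that sequence as a simple state machine; the state names, transitions, and placeholder skill function are assumptions, not the ROSSINI software architecture.

```python
# Minimal sketch of the task-level sequencing implied by the reel loading use
# case, written as a simple state machine. States and skills are assumptions.
from enum import Enum, auto

class State(Enum):
    NAVIGATE_TO_STORAGE = auto()
    DETECT_REEL = auto()
    PICK_REEL = auto()
    NAVIGATE_TO_MACHINE = auto()
    LOAD_REEL = auto()
    DONE = auto()

def run_skill(state: State) -> bool:
    """Placeholder for the real robot skills (AGV navigation, vision, grasping)."""
    print(f"executing {state.name}")
    return True   # pretend every skill succeeds

def reel_loading_mission() -> State:
    order = [State.NAVIGATE_TO_STORAGE, State.DETECT_REEL, State.PICK_REEL,
             State.NAVIGATE_TO_MACHINE, State.LOAD_REEL]
    for state in order:
        if not run_skill(state):          # a real system would retry or recover here
            print(f"mission aborted in {state.name}")
            return state
    return State.DONE

print(reel_loading_mission())
```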

Relevance: 90.00%

Abstract:

Gaze estimation has gained interest in recent years for being an important cue to obtain information about the internal cognitive state of humans. Whether as the 3D gaze vector or the point of gaze (PoG), gaze estimation has been applied in various fields, such as human-robot interaction, augmented reality, medicine, aviation, and automotive. In the latter field, as part of Advanced Driver-Assistance Systems (ADAS), it allows the development of cutting-edge systems capable of mitigating road accidents by monitoring driver distraction. Gaze estimation can also be used to enhance the driving experience, for instance in autonomous driving. It can also improve comfort with augmented reality components capable of being commanded by the driver's eyes. Although several high-performance real-time inference works already exist, just a few are capable of working with only an RGB camera on computationally constrained devices, such as a microcontroller. This work aims to develop a low-cost, efficient and high-performance embedded system capable of estimating the driver's gaze using deep learning and an RGB camera. The proposed system has achieved near-SOTA performance with about 90% less memory footprint. The capability to generalize to unseen environments has been evaluated through a live demonstration, where high performance and near real-time inference were obtained using a webcam and a Raspberry Pi 4.
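
An embedded inference loop of the kind described can be sketched as follows: grab RGB frames from the webcam, preprocess them, and run a quantized network with tflite_runtime on the Raspberry Pi. In the Python sketch below, the model file gaze_net.tflite, its 224x224 input, and its yaw/pitch output are assumptions, and face detection/cropping is omitted for brevity.

```python
# Minimal sketch of an embedded gaze-inference loop: grab RGB frames,
# preprocess, and run a quantized model with tflite_runtime. The model file,
# input size, and yaw/pitch output are assumptions.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="gaze_net.tflite")   # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

cap = cv2.VideoCapture(0)                 # the RGB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    face = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], face[np.newaxis, ...])
    interpreter.invoke()
    yaw, pitch = interpreter.get_tensor(out["index"])[0]   # gaze angles (assumed output)
    print(f"gaze yaw={yaw:.2f} pitch={pitch:.2f}")
cap.release()
```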

Relevance: 80.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 80.00%

Abstract:

Project work presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the master's degree in Audiovisual and Multimedia.

Relevance: 80.00%

Abstract:

Text based on the paper presented at the conference "Autonomous systems: inter-relations of technical and societal issues", held at Monte de Caparica (Portugal), Universidade Nova de Lisboa, on November 5th and 6th, 2009, and organized by IET-Research Centre on Enterprise and Work Innovation.