867 results for Collision avoidance, Human robot cooperation, Mobile robot sensor placement
Abstract:
Mobile robotics is an area of development and exploitation of growing interest. There are examples of mobile robotics of outstanding relevance in industry, and strong growth is expected in the field of service robotics. The software architecture of virtually every mobile robot includes components responsible for governance, navigation, perception, etc., all of them of major importance. There is, however, one element that is hard to do without in this type of robot: the one in charge of controlling the speed of the device as it moves. This project proposes the development of two PID controllers, one model-based and one non-model-based. These controllers must operate on a tricycle-configuration robot available at the Department of Computing Systems and must therefore be programmed in C to run on the digital signal processor dedicated to that task in the robot (dsPIC33FJ128MC802).
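The abstract states that the controllers are written in C for the dsPIC33FJ128MC802 and gives no gains or sample times; the sketch below is only a minimal illustration of a discrete, non-model-based PID speed-control update, with all names and values assumed.

```python
# Illustrative sketch of a discrete PID speed controller of the kind described
# above. The real controllers run in C on a dsPIC33FJ128MC802; the gains,
# sample time and saturation limits here are assumptions for illustration.

class SpeedPID:
    def __init__(self, kp, ki, kd, dt, u_min=-1.0, u_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max   # actuator (e.g. PWM) saturation
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, speed_setpoint, measured_speed):
        error = speed_setpoint - measured_speed
        self.integral += error * self.dt            # rectangular integration
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Saturate the command and undo the integral step if saturated (anti-windup).
        if u > self.u_max or u < self.u_min:
            self.integral -= error * self.dt
            u = max(self.u_min, min(self.u_max, u))
        return u

# Example: 100 Hz control loop driving wheel speed toward 0.5 m/s.
pid = SpeedPID(kp=1.2, ki=0.8, kd=0.05, dt=0.01)
command = pid.update(speed_setpoint=0.5, measured_speed=0.42)
```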
Abstract:
Image-Based Visual Servoing (IBVS) is a vision-based robotic control scheme. It uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These range from the intrinsic camera parameters (which can be obtained from a prior camera calibration) to the distance along the optical axis between the camera and the visual features, i.e., the depth. This paper presents a comparative study of the performance of D-IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), which is a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system's performance.
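To illustrate why a per-feature depth matters, the sketch below builds the classical interaction matrix for point features and applies the standard IBVS law v = -λ L⁺ (s - s*). This is the textbook formulation, not the paper's D-IBVS code, and the per-feature Kinect depths are assumed inputs.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity screw v = -lambda * pinv(L) * (s - s*).

    features, desired: (N, 2) arrays of normalized image coordinates.
    depths: per-feature depth Z, e.g. read from the RGB-D sensor (assumed here).
    """
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (features - desired).reshape(-1)
    return -lam * np.linalg.pinv(L) @ error

# Example with four point features and per-feature depths in metres.
s = np.array([[0.10, 0.05], [-0.08, 0.04], [0.09, -0.06], [-0.07, -0.05]])
s_star = np.array([[0.05, 0.05], [-0.05, 0.05], [0.05, -0.05], [-0.05, -0.05]])
Z = np.array([0.92, 0.95, 0.90, 0.93])
v = ibvs_velocity(s, s_star, Z)   # 6-vector: (vx, vy, vz, wx, wy, wz)
```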
Resumo:
Traditional visual servoing systems do not deal with the topic of moving objects tracking. When these systems are employed to track a moving object, depending on the object velocity, visual features can go out of the image, causing the fail of the tracking task. This occurs specially when the object and the robot are both stopped and then the object starts the movement. In this work, we have employed a retina camera based on Address Event Representation (AER) in order to use events as input in the visual servoing system. The events launched by the camera indicate a pixel movement. Event visual information is processed only at the moment it occurs, reducing the response time of visual servoing systems when they are used to track moving objects.
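The abstract does not describe how events are turned into visual features; purely as an assumed illustration, the sketch below folds AER events into a moving centroid that could feed the servoing error, updating the estimate per event rather than per frame, which is where the latency reduction comes from.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp in seconds
    polarity: int  # +1 / -1 brightness change

class EventCentroidTracker:
    """Assumed, minimal feature extractor: an exponentially weighted centroid of
    recent AER events, updated every time a single event arrives."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.cx = None
        self.cy = None

    def on_event(self, ev: Event):
        if self.cx is None:
            self.cx, self.cy = float(ev.x), float(ev.y)
        else:
            self.cx += self.alpha * (ev.x - self.cx)
            self.cy += self.alpha * (ev.y - self.cy)
        return self.cx, self.cy

tracker = EventCentroidTracker()
for ev in [Event(120, 80, 0.0010, 1), Event(122, 81, 0.0012, 1)]:
    feature = tracker.on_event(ev)   # feed this into the servoing error term
```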
Abstract:
Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force sensing, cannot obtain data relevant to the grasping manipulation task. In particular, a new visual approach based on RGB-D data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. The approach is also used to measure changes in the shape of an object's surfaces, allowing us to find deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. Compared with other methods, the results show that our visual pipeline does not require deformation models of objects and materials, and that it works in real time with both planar and 3D household objects. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method achieves good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
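The abstract does not give the deformation criterion itself; the sketch below is an assumed, simplified version of the idea: compare the depth image of the object surface before and during the grasp inside a region of interest and emit an event when the change exceeds a threshold. The threshold and array names are hypothetical.

```python
import numpy as np

def deformation_event(depth_before, depth_now, roi, threshold_m=0.004):
    """Detect surface deformation inside a region of interest of a depth image.

    depth_before / depth_now: HxW depth maps in metres from the RGB-D sensor.
    roi: (row_slice, col_slice) covering the observed object surface.
    Returns True when the mean absolute change exceeds the threshold, which is
    when an event message would be sent to the robot controller.
    """
    before = depth_before[roi]
    now = depth_now[roi]
    valid = (before > 0) & (now > 0)          # ignore missing depth readings
    if not np.any(valid):
        return False
    deviation = np.abs(now[valid] - before[valid]).mean()
    return deviation > threshold_m

# Example with synthetic depth maps: a 5 mm dent appears inside the ROI.
h, w = 240, 320
d0 = np.full((h, w), 0.60)
d1 = d0.copy()
d1[100:140, 150:200] -= 0.005
roi = (slice(90, 150), slice(140, 210))
if deformation_event(d0, d1, roi):
    print("surface deformation detected -> notify robot controller")
```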
Abstract:
The control and coordination of multiple mobile robots is a challenging task; particularly in environments with multiple, rapidly moving obstacles and agents. This paper describes a robust approach to multi-robot control, where robustness is gained from competency at every layer of robot control. The layers are: (i) a central coordination system (MAPS), (ii) an action system (AES), (iii) a navigation module, and (iv) a low level dynamic motion control system. The multi-robot coordination system assigns each robot a role and a sub-goal. Each robot’s action execution system then assumes the assigned role and attempts to achieve the specified sub-goal. The robot’s navigation system directs the robot to specific goal locations while ensuring that the robot avoids any obstacles. The motion system maps the heading and speed information from the navigation system to force-constrained motion. This multi-robot system has been extensively tested and applied in the robot soccer domain using both centralized and distributed coordination.
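The four layers are described only at the architectural level; the sketch below is an assumed, highly simplified rendering of how a role and sub-goal assigned by the coordination layer (MAPS) could flow through the action layer (AES), the navigation layer and the force-constrained motion layer. All class and method names are hypothetical, and obstacle avoidance is omitted.

```python
import math
from dataclasses import dataclass

@dataclass
class SubGoal:
    x: float
    y: float

class Coordinator:
    """Stands in for MAPS: assigns each robot a role and a sub-goal."""
    def assign(self, robot_id):
        return "attacker", SubGoal(2.0, 1.0)

class ActionSystem:
    """Stands in for AES: assumes the role and produces a navigation goal."""
    def plan(self, role, sub_goal):
        return sub_goal            # here the behaviour is simply "reach the sub-goal"

class Navigator:
    """Produces heading and speed toward the goal (obstacle avoidance omitted)."""
    def command(self, pose, goal):
        heading = math.atan2(goal.y - pose[1], goal.x - pose[0])
        return heading, 0.5

class MotionController:
    """Maps heading/speed to force-constrained motion by limiting acceleration."""
    def __init__(self, max_accel=1.0, dt=0.1):
        self.max_accel, self.dt, self.prev_speed = max_accel, dt, 0.0

    def drive(self, heading, speed):
        max_delta = self.max_accel * self.dt
        speed = max(self.prev_speed - max_delta, min(self.prev_speed + max_delta, speed))
        self.prev_speed = speed
        return {"heading": heading, "speed": speed}

# One control tick for robot 1.
role, sub_goal = Coordinator().assign(robot_id=1)
goal = ActionSystem().plan(role, sub_goal)
heading, speed = Navigator().command(pose=(0.0, 0.0), goal=goal)
command = MotionController().drive(heading, speed)
```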
Abstract:
Tactile sensors are needed for many emerging robotic and telepresence applications such as keyhole surgery and robot operation in unstructured environments. We have proposed and demonstrated a tactile sensor consisting of a fibre Bragg grating embedded in a polymer "finger". When the sensor is placed in contact with a surface and translated tangentially across it, measurements of the changes in the reflectivity spectrum of the grating provide a measurement of the spatial distribution of forces perpendicular to the surface and thus, through the elasticity of the polymer material, of the surface roughness. Using a sensor fabricated from a polysiloxane polymer (methyl vinyl silicone rubber) spherical cap, 50 mm in diameter and 6 mm deep, with an embedded 10 mm long Bragg grating, we have characterised the first and second moments of the grating spectral response when scanned across triangular and semicircular periodic structures, both with a modulation depth of 1 mm and a period of 2 mm. The results clearly distinguish the periodicity of the surface structure and the differences between the two surface profiles. For the triangular structure a central-wavelength modulation of 4 pm is observed, including a fourth-harmonic component, and the spectral width is modulated by 25 pm. Although crude in comparison with human senses, these results clearly show the potential of such a sensor for tactile imaging, and we expect that with further development in optimising both the grating and the polymer "finger" properties a much greater sensitivity and spatial resolution is achievable.
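The first and second moments of the spectral response are the standard centroid wavelength and RMS spectral width; the sketch below computes them from a sampled reflectivity spectrum, assuming the spectrum is available as wavelength and reflected-power arrays (the example peak is synthetic).

```python
import numpy as np

def spectral_moments(wavelengths_nm, reflectivity):
    """First moment (centroid wavelength) and second moment (RMS spectral width)
    of a sampled grating reflectivity spectrum."""
    weights = reflectivity / reflectivity.sum()
    centroid = np.sum(weights * wavelengths_nm)                          # 1st moment
    width = np.sqrt(np.sum(weights * (wavelengths_nm - centroid) ** 2))  # 2nd moment
    return centroid, width

# Example: synthetic Gaussian-like reflection peak near 1550 nm.
lam = np.linspace(1549.0, 1551.0, 2001)
spectrum = np.exp(-0.5 * ((lam - 1550.002) / 0.1) ** 2)
centroid, width = spectral_moments(lam, spectrum)
# Tracking centroid and width while the finger is scanned across a surface is
# what reveals the picometre-scale modulations reported in the abstract.
```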
Abstract:
This article proposes a mathematical model for planning the actions of a mobile robot when recognizing situations under extreme operating conditions. The aim of the work is to form a concrete plan of the robot's actions by extrapolating the situation and refining it while taking into account a priori unpredictable features of the current conditions.
Abstract:
This work proposes a new autonomous navigation strategy for terrestrial mobile robots, assisted by a genetic algorithm with dynamic planning and called DPNA-GA (Dynamic Planning Navigation Algorithm optimized with Genetic Algorithm). The strategy was applied in environments, both static and dynamic, in which the location and shape of the obstacles are not known in advance. At each shift event, a control algorithm minimizes the distance between the robot and the objective and maximizes the distance from the obstacles, rescheduling the route. Using a spatial location sensor and a set of distance sensors, the proposed navigation strategy is able to dynamically plan optimal collision-free paths. Simulations performed in different environments demonstrated that the technique provides a high degree of flexibility and robustness. Several variations of the genetic parameters were applied, such as crossover rate and population size, among others. Finally, the simulation results successfully demonstrate the effectiveness and robustness of the DPNA-GA technique, validating it for real applications on terrestrial mobile robots.
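The abstract does not give DPNA-GA's chromosome encoding or operators; the sketch below only illustrates, with assumed names, weights and a toy 2-D waypoint encoding, a fitness of the stated form: reward proximity to the objective and penalize proximity to obstacles.

```python
import math
import random

def fitness(waypoint, objective, obstacles, w_goal=1.0, w_obs=0.5):
    """Assumed fitness of a candidate waypoint: being close to the objective is
    rewarded, being close to the nearest obstacle is penalized (clearance capped)."""
    d_goal = math.dist(waypoint, objective)
    d_obs = min(math.dist(waypoint, o) for o in obstacles) if obstacles else float("inf")
    return -w_goal * d_goal + w_obs * min(d_obs, 2.0)

def evolve(objective, obstacles, pop_size=30, generations=50,
           crossover_rate=0.8, mutation_std=0.2):
    """Toy generational GA over 2-D waypoints (not the authors' implementation)."""
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, objective, obstacles), reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            if random.random() < crossover_rate:                 # arithmetic crossover
                child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            else:
                child = a
            child = (child[0] + random.gauss(0, mutation_std),   # Gaussian mutation
                     child[1] + random.gauss(0, mutation_std))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, objective, obstacles))

# Example: plan the next waypoint toward (4, 3) while avoiding two obstacles.
best = evolve(objective=(4.0, 3.0), obstacles=[(2.0, 1.5), (3.0, 2.5)])
```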
Abstract:
[EN] In this paper we present Eldi, a mobile robot that has been in operation at the Elder Museum of Science and Technology in Las Palmas de Gran Canaria since December 1999. This is an ongoing project organized in three different stages, of which only the first has been accomplished. The initial phase is termed "The Player"; the second stage, currently under development, is called "The Cicerone"; and in the final phase, termed "The Vagabond", Eldi will be allowed to move erratically across the Museum. This paper focuses on the accomplished first stage, succinctly describing the physical robot, the environment and the demos developed. Finally, we summarize some important lessons learnt.
Abstract:
[EN] In this paper we present Eldi, a mobile robot that has been in daily operation at the Elder Museum of Science and Technology in Las Palmas de Gran Canaria since December 1999. This is an ongoing project organized in three different stages; we describe here the one that has been accomplished. The initial phase is termed "The Player"; the second stage, currently under development, is called "The Cicerone"; and in a final phase, termed "The Vagabond", Eldi will be allowed to move erratically across the Museum. This paper focuses on the accomplished first stage, succinctly describing the physical robot, the environment and the demos developed. Finally, we summarize some important lessons learnt.
Abstract:
In this paper we present Eldi, a mobile robot that has been in daily operation at the Elder Museum of Science and Technology in Las Palmas de Gran Canaria since last December. This is an ongoing project organized in three different stages, of which only the first has been accomplished. The initial phase is termed "The Player"; the second stage, currently under development, is called "The Cicerone"; and in a final phase, termed "The Vagabond", Eldi will be allowed to move erratically across the Museum...
Abstract:
Physical human-robot interaction is a field of study that has attracted considerable interest in recent years. A cooperative approach between the two entities holds the potential of combining the strengths of the human (such as intelligence and adaptability) with those of the robot (such as power and precision). However, putting the developed applications into service remains a delicate operation, as safety issues are still significant. Robots are generally heavy machines capable of very fast movements that can seriously injure a person nearby. This research project addresses the safety problem upstream through the development of a so-called "pre-collision" strategy. It consists in designing a motion-planning system aimed at optimizing the safety of the person during human-robot interaction tasks in an industrial context. To this end, a sampling-based algorithm was employed and adapted to the constraints of the targeted application. First, the integration of an exact collision-detection method certifies that the path found contains, a priori, no undesired contact. Then, the evaluation of relevant parameters introduces our notion of safety and defines a set of objectives to optimize. These criteria take into account the proximity to obstacles, the state of awareness of the humans present in the workspace, and the robot's potential to react in case of an unforeseen event. A novel objective-combination scheme guides the search and leads to the path judged to be the safest, for a given knowledge of the environment. The control process relies on minimal acquisition of environmental data (a visual monitoring device) so that the required hardware installation is as simple as possible. The operation of the system was validated on the Baxter industrial robot.
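The abstract names the three safety criteria but not their formulation or combination scheme; the sketch below is an assumed weighted sum of those criteria, evaluated per sampled configuration, as one might use for edge costs in a generic sampling-based planner. The weights, field names and the gaze-based awareness term are hypothetical.

```python
import math

def safety_cost(config, human, w_prox=1.0, w_aware=1.0, w_react=1.0):
    """Assumed combination of the three safety criteria named in the abstract,
    evaluated for one sampled robot configuration. Lower cost = safer sample."""
    # 1) Proximity to obstacles: penalize small clearance (clearance assumed known).
    clearance = config["min_obstacle_distance"]
    prox_cost = 1.0 / max(clearance, 1e-3)

    # 2) Human awareness: `facing` is the angle between the human's gaze and the
    #    direction toward the robot; a human facing away raises the cost.
    aware_cost = 0.5 * (1.0 - math.cos(human["facing"]))

    # 3) Reaction potential: joints already near their velocity limits leave the
    #    robot less margin to brake or retreat if something unforeseen happens.
    react_cost = sum(abs(v) / vmax for v, vmax
                     in zip(config["joint_velocities"], config["velocity_limits"]))

    return w_prox * prox_cost + w_aware * aware_cost + w_react * react_cost

# Example evaluation of one sampled configuration.
cost = safety_cost(
    config={"min_obstacle_distance": 0.35,
            "joint_velocities": [0.2, 0.1, 0.0],
            "velocity_limits": [1.0, 1.0, 1.0]},
    human={"facing": math.radians(30)},
)
```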
Abstract:
Summary: This work addresses the problem of communicating with the robots of the MAPIR department at the Universidad de Málaga, which previously could only be teleoperated through written commands in Skype. We therefore design a mobile Android client that allows connecting to a robot in real time, obtaining the image captured by its camera, and teleoperating it. For its part, the robot runs a server that serves this data to the client so that they work together. The work is developed using new technologies and protocols such as WebRTC (from Google) for the exchange of images; on the server side, NodeJS has been used.