56 results for Robotics and automatic control
Abstract:
In recent years, free hardware platforms have gained great relevance in prototype development and in technology education. A free hardware platform is essentially the design of a microprocessor-based electronic system that its authors distribute freely and that can be used without paying licenses. Among the multitude of available platforms, Arduino stands out. It is characterized by its low price and by the fact that the software needed to run the platform is free and open. All of this makes these devices easily accessible to students. This work describes the application of free hardware to laboratory experiments in engineering courses at the UA, especially in the Máster en Automática y Robótica, in which industrial or robotic systems are controlled. This contrasts with the classical experiments, which employ expensive systems that are hardly accessible to students. In addition, the experiments should be attractive and have real applications in order to draw the students' interest, with the main goal that they learn more and better in the laboratory.
Abstract:
This report presents the work carried out during the 2013/14 academic year by the members of the "Red de investigación en Visión Artificial y Robótica. Establecimiento de contenidos e implantación y seguimiento del plan de evaluación", ICE Network Code 3031. This has been the first year in which the course under study was taught, and our efforts have been directed both at assessing the materials produced in previous years and at monitoring and weighting the evaluation system proposed for the course Visión Artificial y Robótica, which consists of the continuous assessment of work carried out by the students throughout the whole semester. In addition, this work must be presented orally in the classroom; to that end, the students must also prepare the slides that support their presentation.
Abstract:
Free hardware platforms have become very important in engineering education in recent years. Among these platforms, Arduino stands out, characterized by its versatility, popularity and low price. This paper describes the implementation of four laboratory experiments for Automatic Control and Robotics courses at the University of Alicante, which have been developed based on Arduino and other existing equipment. Results were evaluated taking into account the views of students, concluding that the proposed experiments have been attractive to them and that they have acquired the intended knowledge about hardware configuration and programming.
Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home
Abstract:
This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could serve as the operator's eyes, obviating the need for the operator to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains still. The prototype was evaluated through several experiments testing the mini-robot's kinematics and communication systems by making it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures, enabling the operator to perform movements and monitor tasks from a distance.
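The gesture-based teleoperation described above can be illustrated with a minimal sketch; the gesture labels and motion commands below are assumptions for illustration, not the paper's actual protocol:

```python
# Illustrative mapping from a recognized 3D hand gesture to a motion command.
# Gesture names and commands are hypothetical, not the paper's actual protocol.
GESTURE_TO_COMMAND = {
    "swipe_left": "turn_left",
    "swipe_right": "turn_right",
    "push": "forward",
    "fist": "stop",
}

def command_for(gesture):
    """Translate a gesture label into a motion command (stop when unknown)."""
    return GESTURE_TO_COMMAND.get(gesture, "stop")  # safe default: stop

print(command_for("push"))   # forward
print(command_for("wave"))   # stop (unrecognized gesture falls back to stop)
```

Falling back to "stop" for unrecognized gestures is the natural safe default for a tele-operated robot.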
Abstract:
Paper presented at the 6th Conference of the Spanish Association for Artificial Intelligence (CAEPIA'95), Alicante, 15-17 November 1995.
Abstract:
Paper presented at the 7th Conference of the Spanish Association for Artificial Intelligence, CAEPIA, Málaga, 12-14 November 1997.
Abstract:
Paper presented at the X Workshop of Physical Agents, Cáceres, 10-11 September 2009.
Abstract:
This paper analyzes the learning experiences and opinions of a group of undergraduate students in a course on Robotics. The contents of this course were taught as a set of seminars. In each seminar, the students learned interdisciplinary knowledge of computer science, control engineering, electronics and other fields related to Robotics. The aim of this course is for the students to design and implement their own custom robotic solution for a series of tests planned by the teachers. These tests measure the behavior and mechatronic features of the students' robots. Finally, the students' robots face each other in a series of competitions. In this paper, the low-cost robotic architecture used by the students, the contents of the course, the tests used to compare the students' solutions, and the students' opinions are discussed at length.
Abstract:
This article describes the concept of the RASMA platform (Robot-Assisted Stop-Motion Animation), whose purpose is to ease the task of generating the frames needed to create an animated 2D sequence. It describes the generation of the trajectories that the objects must follow (in Unity 3D or in Adobe Flash Player), the export/import of the data files in XML, the planning of the robot's trajectories, the capture of the frames, and the final assembly of the whole sequence.
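The XML export/import step mentioned above can be sketched as follows; the element and attribute names (`trajectory`, `frame`, `index`, `x`, `y`) are assumptions for illustration, not the actual RASMA file format:

```python
# Hypothetical sketch of importing a 2D object trajectory from an XML file.
# Element/attribute names are illustrative, not the real RASMA format.
import xml.etree.ElementTree as ET

def parse_trajectory(xml_text):
    """Return a list of (frame_index, x, y) keyframes from a trajectory file."""
    root = ET.fromstring(xml_text)
    frames = []
    for frame in root.findall("frame"):
        frames.append((int(frame.get("index")),
                       float(frame.get("x")),
                       float(frame.get("y"))))
    return sorted(frames)  # ensure the frames come out in shooting order

sample = """<trajectory object="ball">
  <frame index="1" x="1.5" y="0.5"/>
  <frame index="0" x="0.0" y="0.0"/>
</trajectory>"""
print(parse_trajectory(sample))  # [(0, 0.0, 0.0), (1, 1.5, 0.5)]
```

Sorting by frame index means the animation tool can emit keyframes in any order while the robot still shoots them sequentially.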
Abstract:
Paper submitted to the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan, August 29-31, 2012.
Abstract:
Traditional visual servoing systems have been widely studied in recent years. These systems control the position of the camera attached to the robot end-effector, guiding it from any position to the desired one. These controllers can be improved by using the event-based control paradigm. The system proposed in this paper is based on the idea of activating the visual controller only when something significant has occurred in the system (e.g. when a visual feature may be lost because it is leaving the frame). Different event triggers have been defined in the image space in order to activate or deactivate the visual controller. The tests implemented to validate the proposal have proved that this new scheme prevents visual features from leaving the image while considerably reducing the system complexity. Events could be used in the future to change different parameters of the visual servoing system.
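An image-space event trigger of the kind described above can be sketched minimally; the image size, margin and coordinates below are illustrative assumptions, not the paper's actual thresholds:

```python
# Minimal sketch of an image-space event trigger: the visual controller is
# activated only when a feature approaches the image border. The margin and
# image dimensions are illustrative values, not the paper's parameters.
def controller_needed(features, width, height, margin=20):
    """Return True when any (u, v) pixel feature nears the image border."""
    for (u, v) in features:
        if (u < margin or u > width - margin or
                v < margin or v > height - margin):
            return True   # feature about to leave the frame -> event fires
    return False          # all features safely inside -> controller stays idle

print(controller_needed([(320, 240)], 640, 480))  # False: feature well inside
print(controller_needed([(630, 240)], 640, 480))  # True: near right border
```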
Abstract:
Image-Based Visual Servoing (IBVS) is a vision-based robotic control scheme. This scheme uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of different parameters that cannot be obtained directly from the image. These parameters range from the intrinsic camera parameters (which can be obtained from a previous camera calibration) to the distance along the optical axis between the camera and the visual features, that is, the depth. This paper presents a comparative study of the performance of D-IBVS when estimating the depth in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed over ROS (Robot Operating System), which is a meta-operating system for robots. The experiments prove that computing the depth value for each visual feature improves the system performance.
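The role of the depth in IBVS can be made concrete with the classical point-feature interaction matrix, where the depth Z scales the translational terms; the numeric values below are illustrative only:

```python
# Sketch of the classical point-feature interaction matrix used in IBVS.
# (x, y) are normalized image coordinates and Z is the feature depth; the
# sample values are illustrative, not measurements from the paper.
def interaction_matrix(x, y, Z):
    """2x6 matrix L such that s_dot = L @ camera_velocity for one point."""
    return [
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ]

# An inaccurate depth estimate scales the translational columns of L,
# which is why a per-feature depth measurement improves the controller:
L_good = interaction_matrix(0.1, 0.2, Z=1.0)
L_bad = interaction_matrix(0.1, 0.2, Z=2.0)
print(L_good[0][0], L_bad[0][0])  # -1.0 -0.5
```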
Abstract:
Paper submitted to ACE 2013, 10th IFAC Symposium on Advances in Control Education, University of Sheffield, UK, August 28-30, 2013.
Abstract:
Event-based visual servoing is a recently presented approach that performs the positioning of a robot using visual information only when it is required. Building on the classical image-based visual servoing control law, the scheme proposed in this paper can reduce the processing time at each loop iteration under some specific conditions. The proposed control method comes into action when an event deactivates the classical image-based controller (i.e. when no image is available to perform the tracking of the visual features). A virtual camera is then moved along a straight-line path towards the desired position. The virtual path used to guide the robot improves the behavior of the previous event-based visual servoing proposal.
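The straight-line motion of the virtual camera can be sketched as a simple linear interpolation towards the goal; the step size and 3D positions are illustrative assumptions, not the paper's values:

```python
# Hedged sketch of the virtual straight-line path: while no image is
# available, a virtual camera position is stepped linearly towards the goal.
# Step size and poses are illustrative, not the paper's parameters.
def virtual_step(current, goal, step=0.1):
    """Move the virtual camera a fraction `step` along the line to the goal."""
    return tuple(c + step * (g - c) for c, g in zip(current, goal))

pose = (0.0, 0.0, 0.0)
goal = (1.0, 0.0, 2.0)
pose = virtual_step(pose, goal)
print(pose)  # (0.1, 0.0, 0.2)
```

Repeating the step moves the virtual camera monotonically along the straight line until the real controller is reactivated.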
Abstract:
New low-cost sensors and open, free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. This method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual markers, camera calibration or previous knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light changes. Furthermore, this method was designed to develop a human interface to control domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand. Firstly, the human hand was recognized and the fingers were detected. Secondly, the movement of the fingers was analyzed and mapped so that it could be imitated by a robotic hand.
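A first step of such a point-cloud pipeline can be sketched as segmenting the hand by a depth band in front of the sensor; the toy cloud and thresholds below are assumptions for illustration, not the paper's actual segmentation method:

```python
# Simplified sketch of one possible hand-segmentation step: keep only the
# points lying in a depth band in front of the RGBD sensor. The depth band
# and the toy point cloud are illustrative assumptions.
def segment_hand(points, z_min=0.4, z_max=0.8):
    """Return the subset of (x, y, z) points inside the hand depth band."""
    return [p for p in points if z_min <= p[2] <= z_max]

cloud = [
    (0.00, 0.00, 0.5),  # hand point
    (0.10, 0.20, 0.6),  # hand point
    (0.00, 0.00, 2.5),  # background point, filtered out
]
print(segment_hand(cloud))  # keeps only the two near points
```

A real implementation would operate on clouds streamed from the sensor driver and would follow this filter with clustering to locate the individual fingertips.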