55 results for VISIÓN ARTIFICIAL
at Universidad de Alicante
Abstract:
This report presents the work carried out during the 2013/14 academic year by the members of the "Red de investigación en Visión Artificial y Robótica. Establecimiento de contenidos e implantación y seguimiento del plan de evaluación" (ICE network code 3031). This was the first year in which the course under study was taught, and our efforts were devoted both to assessing the materials produced in the preceding years and to monitoring and weighing the assessment system proposed for the Visión Artificial y Robótica course, which consists of continuous assessment of projects developed by the students throughout the semester. In addition, these projects must be presented orally in class; to that end, each student must also prepare the slides that support the presentation.
Abstract:
This paper analyzes the learning experiences and opinions gathered from a group of undergraduate students in their interaction with several online multimedia resources included in a free online course on Computer Networks. The new educational resources employed are based on the Web 2.0 approach, such as blogs, videos and virtual labs, and have been added to a website for distance self-learning.
Abstract:
Virtual and remote laboratories (VRLs) are e-learning resources which enhance the accessibility of experimental setups, providing a distance-teaching framework which meets the students' hands-on learning needs. In addition, online collaborative communication represents a practical and constructivist method to transmit knowledge and experience from the teacher to the students, overcoming physical distance and isolation. Thus, the integration of learning environments in the form of VRLs inside collaborative learning spaces is strongly desired. Considering these facts, the authors of this paper present an original approach which enables users to share practical experiences while working collaboratively through the Internet. This practical experimentation is based on VRLs, which have been integrated inside a synchronous collaborative e-learning framework. This article describes the main features of this system and its successful application to science and engineering subjects.
Abstract:
Communication presented at the VII Conferencia de la Asociación Española para la Inteligencia Artificial (CAEPIA), Málaga, 12-14 November 1997.
Abstract:
This article analyzes several teaching experiences aimed at the learning of robotics at the university level. These experiences take shape in a number of courses and subjects on robotics taught at the Universidad de Alicante. To run these courses, the authors have used several educational platforms, some developed in-house and others free and open source. The goal of these courses is to teach the design and implementation of robotic solutions to problems ranging from the control, programming and manipulation of industrial robot arms to the construction and/or programming of educational mini-robots. On the one hand, state-of-the-art teaching tools such as simulators and virtual laboratories are used to make working with robot arms more flexible; on the other hand, competitions and contests are used to motivate the students to put the skills they have learned into practice by building and programming low-cost mini-robots.
Abstract:
This paper analyzes the learning experiences and opinions of a group of undergraduate students in a course on Robotics. The contents of the course were taught as a set of seminars. In each seminar, the students learned interdisciplinary knowledge from computer science, control engineering, electronics and other fields related to Robotics. The aim of the course is for the students to design and implement their own custom robotic solution for a series of tests set by the teachers. These tests measure the behavior and mechatronic features of the students' robots. Finally, the students' robots compete against each other in several contests. The low-cost robotic architecture used by the students, the contents of the course, the tests used to compare the students' solutions and the students' opinions are discussed in detail.
Abstract:
This article describes the concept of the RASMA platform (Robot-Assisted Stop-Motion Animation), whose purpose is to ease the task of generating the frames needed to create a 2D animated sequence. It describes the generation of the trajectories the objects must follow (in Unity 3D or Adobe Flash Player), the export/import of the data files in XML, the planning of the robot's trajectories, the capture of the frames and the final assembly of the whole sequence.
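The XML export/import step mentioned above can be illustrated with a short sketch. This is not the RASMA file format (which the abstract does not specify); the `trajectory`/`keyframe` tags and the `export_trajectory` name are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

def export_trajectory(frames):
    """Serialize (t, x, y) keyframes to an XML string.

    Tag and attribute names are illustrative, not the RASMA schema.
    """
    root = ET.Element("trajectory")
    for t, x, y in frames:
        # One <keyframe> element per sampled point of the 2D path.
        ET.SubElement(root, "keyframe", t=str(t), x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")

xml_data = export_trajectory([(0, 1, 2), (1, 3, 4)])
```

The matching import side would parse the same file with `ET.fromstring` and hand the keyframes to the robot path planner.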
Abstract:
Paper submitted to the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan, August 29-31, 2012.
Abstract:
Traditional visual servoing systems have been widely studied in recent years. These systems control the position of the camera attached to the robot end-effector, guiding it from any position to the desired one. These controllers can be improved by using the event-based control paradigm. The system proposed in this paper is based on the idea of activating the visual controller only when something significant has occurred in the system (e.g., when a visual feature may be lost because it is leaving the frame). Different event triggers have been defined in the image space to activate or deactivate the visual controller. The tests implemented to validate the proposal show that this new scheme prevents the visual features from leaving the image while considerably reducing system complexity. In the future, events can be used to change different parameters of visual servoing systems.
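As a rough illustration of an image-space event trigger of the kind this abstract describes, the sketch below activates the controller when any feature enters a band near the image border. The image size, margin value and function name are assumptions for the example, not details from the paper:

```python
import numpy as np

# Assumed 640x480 image; the controller wakes up when a feature
# enters this band next to the border (illustrative values).
IMG_W, IMG_H = 640, 480
MARGIN = 40

def border_event(features):
    """Return True when any feature risks leaving the frame."""
    pts = np.asarray(features, dtype=float)  # shape (N, 2): (u, v) pixels
    near_low = (pts < MARGIN).any()                 # left or top band
    near_right = (pts[:, 0] > IMG_W - MARGIN).any()
    near_bottom = (pts[:, 1] > IMG_H - MARGIN).any()
    return bool(near_low or near_right or near_bottom)

# The visual controller runs only while the event is active.
active = border_event([(610, 240), (320, 240)])  # True: near right edge
```

Deactivation would be the symmetric check: once all features are back inside the safe region, the controller can be switched off again.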
Abstract:
Image-Based Visual Servoing (IBVS) is a vision-based robotic control scheme. This scheme uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These parameters range from the intrinsic camera parameters (which can be obtained from a previous camera calibration) to the distance, measured along the optical axis, between the camera and the visual features, that is, the depth. This paper presents a comparative study of the performance of D-IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system's performance.
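For context, the classical IBVS law this study builds on computes the camera velocity from the feature error through the interaction matrix, which depends on the depth Z of each feature. The sketch below is the textbook formulation, not the authors' code; feeding it a per-feature Z measured by an RGB-D sensor, rather than a constant estimate, is the idea the paper evaluates:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical IBVS interaction matrix for one normalized point (x, y)
    at depth Z (distance along the optical axis)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity v = -lambda * L^+ (s - s*), stacking one 2x6 block
    per feature, each block using that feature's own measured depth."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

When the measured features match the desired ones, the error and hence the commanded velocity are zero, which is the convergence condition of the scheme.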
Abstract:
Paper submitted to ACE 2013, 10th IFAC Symposium on Advances in Control Education, University of Sheffield, UK, August 28-30, 2013.
Abstract:
Event-based visual servoing is a recently presented approach that performs the positioning of a robot using visual information only when it is required. Building on the classical image-based visual servoing control law, the scheme proposed in this paper can reduce the processing time at each loop iteration under certain conditions. The proposed control method comes into action when an event deactivates the classical image-based controller (i.e., when no image is available to track the visual features). A virtual camera is then moved along a straight-line path towards the desired position. The virtual path used to guide the robot improves the behavior of the previous event-based visual servoing proposal.
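The straight-line virtual-camera motion can be pictured as a simple linear interpolation between the pose at which the image was lost and the desired pose. The function below is an illustrative assumption (positions only, ignoring orientation), not the paper's implementation:

```python
import numpy as np

def virtual_camera_path(p_start, p_goal, steps):
    """Straight-line positions for a virtual camera between the point
    where tracking was lost and the desired position (positions only)."""
    p0 = np.asarray(p_start, dtype=float)
    p1 = np.asarray(p_goal, dtype=float)
    # Evenly spaced waypoints, endpoints included.
    return [p0 + (p1 - p0) * t for t in np.linspace(0.0, 1.0, steps)]

waypoints = virtual_camera_path([0.0, 0.0, 0.5], [0.3, 0.1, 0.2], steps=10)
```

Each waypoint would then drive the controller until a new image arrives and the classical image-based loop takes over again.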
Abstract:
New low-cost sensors and free open libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGB-D sensor. It works in real time and requires no visual markers, camera calibration or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or the ambient light changes. Furthermore, the method was designed to provide a human interface for remotely controlling domestic or industrial devices. In this paper, the method was tested by operating a robotic hand. First, the human hand was recognized and the fingers were detected. Second, the movement of the fingers was analysed and mapped so that it could be imitated by a robotic hand.
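A minimal sketch of a point-cloud pipeline of this kind is shown below, assuming the hand sits in a known depth band and using a crude farthest-from-centroid heuristic for fingertip candidates. The thresholds and function names are illustrative, not the paper's method:

```python
import numpy as np

def segment_hand(points, z_min=0.3, z_max=0.8):
    """Keep points whose depth lies in the band where the hand is
    expected (assumed thresholds, in meters)."""
    pts = np.asarray(points, dtype=float)  # shape (N, 3): x, y, z
    mask = (pts[:, 2] > z_min) & (pts[:, 2] < z_max)
    return pts[mask]

def fingertip_candidates(hand_pts, k=5):
    """Crude heuristic: the k points farthest from the hand centroid.
    A real pipeline would cluster and track these over time."""
    centroid = hand_pts.mean(axis=0)
    dist = np.linalg.norm(hand_pts - centroid, axis=1)
    return hand_pts[np.argsort(dist)[-k:]]
```

The resulting fingertip positions would then be mapped to the joint space of the robotic hand to imitate the motion.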
Abstract:
Tactile sensors play an important role in robotic manipulation for performing dexterous and complex tasks. This paper presents a novel control framework for dexterous manipulation with multi-fingered robotic hands using feedback data from tactile and visual sensors. This framework permits the definition of new visual controllers which allow path tracking of the object motion, taking into account both the dynamic model of the robot hand and the grasping force of the fingertips under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers, based on a specified cost function which determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors present experimental verification of some of the controllers derived from the framework on a real robotic manipulation system.