122 results for Underwater robotics
Abstract:
Feature vectors can range from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied, so computing these features in real time for many points in the scene is impossible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and, therefore, the entire pipeline of RGB-D-based computer vision systems where such features are typically used. Using the GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. Advantageous results are obtained here by using the GPU to accelerate the computation of a 3D descriptor based on 3D semi-local surface patches of partial views, which allows the descriptor to be computed at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. The source code will be made publicly available as a contribution to the open-source Point Cloud Library.
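As a rough illustration of the kind of per-point computation being accelerated, the Python sketch below builds a simple neighborhood-based surface-patch histogram at a set of keypoints. It is a simplified CPU stand-in for the paper's semi-local surface-patch descriptor, not its actual implementation, and the radius, bin count and function names are assumptions.

```python
# Minimal CPU sketch of a neighborhood-based 3D descriptor (a hypothetical
# simplification of the semi-local surface-patch idea, not the paper's code).
import numpy as np
from scipy.spatial import cKDTree

def local_patch_descriptor(cloud, keypoints, radius=0.05, bins=8):
    """For each keypoint, histogram the signed distances of neighboring
    points to the local tangent plane estimated from the neighborhood."""
    tree = cKDTree(cloud)
    descriptors = []
    for p in keypoints:
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 3:
            descriptors.append(np.zeros(bins))
            continue
        neigh = cloud[idx] - p
        # Normal = right singular vector of the smallest singular value.
        _, _, vt = np.linalg.svd(neigh - neigh.mean(axis=0))
        normal = vt[-1]
        # Signed distances to the tangent plane, binned into a histogram.
        d = neigh @ normal
        hist, _ = np.histogram(d, bins=bins, range=(-radius, radius))
        descriptors.append(hist / max(hist.sum(), 1))
    return np.asarray(descriptors)
```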
Abstract:
Nowadays, an increasing number of robotic applications need to act in real three-dimensional (3D) scenarios. In this paper we present a new 3D registration method oriented to mobile robotics that improves previous Iterative Closest Point (ICP)-based solutions in both speed and accuracy. As an initial step, we apply a computationally inexpensive method to obtain descriptions of the planar surfaces in a 3D scene. Then, from these descriptions, we apply a force system to compute a six-degrees-of-freedom egomotion accurately and efficiently. We describe the basis of our approach and demonstrate its validity with several experiments using different kinds of 3D sensors and different real 3D environments.
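For illustration, planar-surface descriptions of the kind this method starts from could be obtained with a standard RANSAC plane fit such as the Python sketch below; the force-system egomotion step itself is not reproduced, and the iteration count and distance threshold are assumptions.

```python
# Illustrative RANSAC plane extraction (one possible way to obtain the planar
# surface descriptions the method starts from; parameters are made up).
import numpy as np

def ransac_plane(points, iters=200, threshold=0.01, rng=np.random.default_rng(0)):
    best_inliers = np.array([], dtype=int)
    best_model = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)     # point-to-plane distances
        inliers = np.flatnonzero(dist < threshold)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```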
Abstract:
The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware technology that can be exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. FPGAs have been favored for highly demanding tasks due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability and their excellent performance when implementing algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors based on FPGAs is being developed in Spain. This paper presents a review of these developments, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.
Abstract:
In this work, we present a multi-camera surveillance system based on self-organizing neural networks to represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of motion. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video rate. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements, such as time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
Abstract:
This paper presents a new dynamic visual control system for redundant robots with chaos compensation. To implement the visual servoing system, a new architecture is proposed that improves the system's maintainability and traceability. Furthermore, high performance is obtained as a result of the parallel execution of the different tasks that compose the architecture. The control component of the architecture implements a new visual servoing technique for resolving the redundancy at the acceleration level, in order to guarantee the correct motion of both the end-effector and the joints. The controller generates the torques required for tracking image trajectories. However, to guarantee the applicability of this technique, a repetitive path tracked by the robot end-effector must produce a periodic joint motion. A chaos controller is integrated into the visual servoing system, and correct performance is observed at low and high velocities. Furthermore, a method to adjust the chaos controller is proposed and validated using a real three-link robot.
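In its generic textbook form, acceleration-level redundancy resolution follows the resolved-acceleration law sketched below; this is not the paper's exact controller, and the Jacobian, its time derivative and the secondary task are placeholders supplied by the caller.

```python
# Sketch of acceleration-level redundancy resolution (generic textbook form,
# not the paper's controller; J, Jdot and the secondary task are assumptions).
import numpy as np

def resolved_acceleration(J, Jdot, qdot, xdd_des, qdd_secondary):
    """qdd = J^+ (xdd_des - Jdot qdot) + (I - J^+ J) qdd_secondary"""
    J_pinv = np.linalg.pinv(J)
    primary = J_pinv @ (xdd_des - Jdot @ qdot)       # task-space tracking term
    null_proj = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector
    return primary + null_proj @ qdd_secondary       # add secondary joint task
```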
Abstract:
The use of RGB-D sensors for mapping and recognition tasks in robotics, or more generally for virtual reconstruction, has increased in recent years. The key aspect of these sensors is that they provide both depth and color information using the same device. In this paper, we present a comparative analysis of the most important methods used in the literature for the registration of subsequent RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Then, a detailed experimental evaluation is carried out to determine the behavior of the different methods depending on the application. For both applications, we used standard datasets and a new one built for object reconstruction.
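As a reference for the simplest family of methods compared, the sketch below implements a minimal point-to-point ICP between two depth-derived point clouds (closest-point correspondences plus an SVD-based rigid alignment). It is a baseline illustration, not any particular implementation evaluated in the paper.

```python
# Minimal point-to-point ICP between two point clouds (simplified baseline).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                  # closest-point correspondences
        tgt = target[idx]
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - mu_s).T @ (tgt - mu_t)         # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:             # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step             # apply the incremental step
        R, t = R_step @ R, R_step @ t + t_step    # accumulate the transform
    return R, t
```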
Abstract:
The use of 3D data in mobile robotics provides valuable information about the robot's environment. Traditionally, stereo cameras have been used as a low-cost 3D sensor, but their lack of precision and the lack of texture on some surfaces suggest that other 3D sensors could be more suitable. In this work, we examine the use of two sensors: an infrared SR4000 and a Kinect camera. We combine the 3D data obtained by these cameras with features extracted from the corresponding 2D images, applying a Growing Neural Gas (GNG) network to the 3D data. The goal is to obtain a robust egomotion technique; the GNG network is used to reduce the camera error. To calculate the egomotion, we test two methods for 3D registration: one based on an iterative closest point algorithm and another that employs random sample consensus. Finally, a simultaneous localization and mapping method is applied to the complete sequence to reduce the global error. The error from each sensor and the mapping results from the proposed method are examined.
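A GNG network adapts a small set of reference nodes to the input point cloud. The sketch below shows one adaptation step of the standard Growing Neural Gas algorithm (node insertion and removal are omitted); the learning rates and ageing limit are arbitrary choices, not values taken from the paper.

```python
# One adaptation step of a Growing Neural Gas network over 3D input points
# (simplified from the standard GNG algorithm; parameters are assumptions).
import numpy as np

def gng_step(nodes, edges, errors, x, eps_b=0.05, eps_n=0.005, max_age=50):
    """nodes: (N, 3) array; edges: dict {(i, j): age}; errors: (N,) array."""
    dists = np.linalg.norm(nodes - x, axis=1)
    order = np.argsort(dists)
    s1, s2 = int(order[0]), int(order[1])     # two nearest units
    errors[s1] += dists[s1] ** 2              # accumulate local error
    nodes[s1] += eps_b * (x - nodes[s1])      # move the winner towards x
    for (i, j) in list(edges):
        if s1 in (i, j):
            edges[(i, j)] += 1                # age edges emanating from the winner
            other = j if i == s1 else i
            nodes[other] += eps_n * (x - nodes[other])   # drag neighbors slightly
            if edges[(i, j)] > max_age:
                del edges[(i, j)]
    edges[tuple(sorted((s1, s2)))] = 0        # create/refresh the edge s1-s2
    return nodes, edges, errors
```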
Abstract:
Depth perception is essential in many robot manipulation, visual control and navigation tasks. Time-of-Flight (ToF) cameras generate range images that provide depth measurements in real time. However, the distance computed by these cameras depends strongly on the integration time configured in the sensor and on the modulation frequency used by their built-in illumination system. This article presents a methodology for the adaptive adjustment of the integration time, together with an experimental analysis of the behavior of a ToF camera when the modulation frequency is modified. The method has been successfully tested in visual control algorithms with an 'eye-in-hand' architecture, where the sensor system consists of a ToF camera. Moreover, the same methodology can be applied to other working scenarios.
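One possible shape for an adaptive integration-time adjustment is a simple amplitude-driven feedback loop such as the hypothetical Python sketch below; the camera accessor methods and all numeric values are placeholders and do not correspond to any real ToF driver API or to the paper's method.

```python
# Hypothetical feedback loop for adapting a ToF camera's integration time
# based on the measured signal amplitude (camera API calls are placeholders).
import numpy as np

def adapt_integration_time(camera, target_amp=1200.0, gain=0.3,
                           t_min=50, t_max=2000):
    t_int = camera.get_integration_time()      # placeholder accessor
    amplitude = camera.grab_amplitude_image()  # placeholder accessor
    mean_amp = float(np.mean(amplitude))
    # Proportional update: raise the integration time if the scene returns
    # too little signal, lower it if pixels approach saturation.
    t_int *= 1.0 + gain * (target_amp - mean_amp) / target_amp
    t_int = int(np.clip(t_int, t_min, t_max))
    camera.set_integration_time(t_int)
    return t_int
```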
Abstract:
A large part of the new generation of computer numerical control systems has adopted an architecture based on robotic systems. This architecture improves the implementation of many manufacturing processes in terms of flexibility, efficiency, accuracy and speed. This paper presents a 4-axis robot tool based on a joint structure whose primary use is to machine complex shapes in certain non-contact processes. A new dynamic visual controller is proposed to control the 4-axis joint structure, where image information is used in the control loop to guide the robot tool in the machining task. In addition, this controller eliminates the chaotic joint behavior that appears during tracking of the quasi-repetitive trajectories required in machining processes. Moreover, this robot tool can be coupled to a manipulator robot to form a multi-robot platform for complex manufacturing tasks. The robot tool can therefore perform a machining task on a workpiece grasped from the workspace by a manipulator robot, which can in turn be guided by visual information given by the robot tool, thereby obtaining an intelligent multi-robot platform controlled by only one camera.
Abstract:
This paper presents the use of immersive virtual reality systems in the educational intervention with Asperger students. The starting point of this study is the cognitive style of these students, which requires an explicit teaching style supported by visual aids and highly structured environments. The proposed immersive virtual reality system not only assesses the student's behavior and progress, but is also able to adapt itself to the student's specific needs. Additionally, the immersive reality system is equipped with sensors that can determine certain behaviors of the students. This paper examines the possible inclusion of immersive virtual reality as a support tool and learning strategy in the intervention with these particular students. With this objective, two task protocols have been defined with which the behavior and interaction situations of the participating students are recorded. The conclusions of this study speak in favor of including these immersive virtual environments as a support tool in the educational intervention of Asperger syndrome students, as their social competences and executive functions improved.
Abstract:
Virtual and remote laboratories (VRLs) are e-learning resources that enhance the accessibility of experimental setups by providing a distance teaching framework which meets the students' hands-on learning needs. In addition, online collaborative communication represents a practical and constructivist method to transmit knowledge and experience from the teacher to the students, overcoming physical distance and isolation. This paper describes the extension of two open source tools: (1) the learning management system Moodle, and (2) Easy Java Simulations (EJS), a tool to create VRLs. Our extension provides: (1) synchronous collaborative support for any VRL developed with EJS (i.e., any existing VRL written in EJS can be automatically converted into a collaborative lab at no cost), and (2) support to deploy synchronous collaborative VRLs in Moodle. Using our approach, students and/or teachers can invite other users enrolled in a Moodle course to a real-time collaborative experimental session, sharing and/or supervising experiences while they practice and explore experiments using VRLs.
Abstract:
This article presents an interactive Java software platform which enables any user to easily create advanced virtual laboratories (VLs) for Robotics. This novel tool provides both support for developing applications with a full 3D interactive graphical interface and a complete functional framework for modelling and simulating arbitrary serial-link manipulators. In addition, its software architecture contains a large number of functionalities included as high-level tools, with the advantage of allowing any user to easily develop complex interactive robotic simulations with a minimum of programming. To show the features of the platform, the article describes, step by step, the implementation methodology of a complete VL for Robotics education using the presented approach. Finally, some educational results about the experience of implementing this approach are reported.
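To give a flavor of the manipulator modelling such a framework provides, the sketch below computes the forward kinematics of a serial-link manipulator from standard Denavit-Hartenberg parameters. It is written in Python for brevity (the platform itself is Java), and the example link values are purely illustrative.

```python
# Forward kinematics of a serial-link manipulator from standard
# Denavit-Hartenberg parameters (Python sketch; example values are made up).
import numpy as np

def dh_matrix(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, dh_table):
    """q: joint angles; dh_table: list of (d, a, alpha) per revolute joint."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, dh_table):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T    # pose of the end-effector in the base frame

# Example: a planar 2-link arm with unit link lengths (illustrative values).
T = forward_kinematics([np.pi / 4, np.pi / 4],
                       [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)])
```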
Abstract:
Learning and teaching processes are continually changing, so the design of learning technologies has gained interest among educators and educational institutions, from secondary school to higher education. This paper describes the successful use in education of social learning technologies and virtual laboratories designed by the authors, as well as videos developed by the students. These tools, combined with other open educational resources (OERs) in a blended-learning methodology, have been employed to teach the subject of Computer Networks. We have verified not only that the application of OERs in the learning process leads to a significant improvement in assessment results, but also that combining several OERs enhances their effectiveness. These results are supported, firstly, by a study of both students' opinions and students' behaviour over five academic years, and, secondly, by a correlation analysis between the use of OERs and the grades obtained by the students.
Abstract:
The use of 3D data in mobile robotics applications provides valuable information about the robot's environment, but the huge amount of 3D information is usually unmanageable given the robot's storage and computing capabilities. Data compression is necessary to store and manage this information while preserving as much of it as possible. In this paper, we propose a 3D lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation plus a set of point/area information. The compression system can be tuned to achieve different compression or accuracy ratios. It also supports a color segmentation stage to preserve the original scene color information and provide a realistic scene reconstruction. The design of the method provides a fast scene reconstruction useful for further visualization or processing tasks.
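One way to realize the per-plane representation described above is to project each plane's inlier points onto the plane and keep only their 2D Delaunay triangulation, as in the Python sketch below; this illustrates the idea and is not the paper's exact compression pipeline.

```python
# Sketch of a per-plane representation: project a plane's inlier points onto
# the plane and keep the 2D Delaunay triangulation plus the plane pose
# (illustrative only, not the paper's compression system).
import numpy as np
from scipy.spatial import Delaunay

def compress_plane(points, normal):
    normal = normal / np.linalg.norm(normal)
    # Build an orthonormal basis (u, v) spanning the plane.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:               # plane nearly horizontal
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    centroid = points.mean(axis=0)
    uv = (points - centroid) @ np.stack([u, v], axis=1)   # 2D coordinates
    tri = Delaunay(uv)
    # Store only the 2D vertices, the triangulation and the plane pose.
    return {"uv": uv, "simplices": tri.simplices,
            "centroid": centroid, "normal": normal, "u": u, "v": v}
```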
Abstract:
Tool path generation is one of the most complex problems in Computer Aided Manufacturing. Although some efficient strategies have been developed, most of them are only useful for standard machining. Moreover, the algorithms used for tool path computation demand high computational performance, which makes their implementation on many existing systems very slow or even impractical. Hardware acceleration is an incremental solution that can be cleanly added to these systems while keeping everything else intact: it is completely transparent to the user, and its cost and development time are much lower than replacing the computers with faster ones. This paper presents an optimisation that exploits the power of multi-core Graphics Processing Units (GPUs) to improve tool path computation. The improvement is applied to a highly accurate and robust tool path generation algorithm. As a case study, the paper presents a fully implemented algorithm used for turning-lathe machining of shoe lasts. A comparative study shows the gain achieved in terms of total computing time: the execution time is almost two orders of magnitude shorter than on modern PCs.