Abstract:
Context. Several clusters of red supergiants have been discovered in a small region of the Milky Way close to the base of the Scutum-Crux Arm and the tip of the Long Bar. Population synthesis models indicate that they must be very massive to harbour so many supergiants. Amongst these clusters, Stephenson 2, with a core grouping of 26 red supergiants, is a strong candidate to be the most massive young cluster in the Galaxy. Aims. Stephenson 2 is located close to a region where a strong over-density of red supergiants had been found. We explore the actual cluster size and its possible connection to this over-density. Methods. Taking advantage of Virtual Observatory tools, we have performed a cross-match between the DENIS, USNO-B1 and 2MASS catalogues to identify candidate obscured luminous red stars around Stephenson 2, and in a nearby control region. More than 600 infrared-bright stars fulfill our colour criteria, with the vast majority having a counterpart in the I band and >400 being sufficiently bright in I to allow observation with a 4-m class telescope. We observed a subsample of ~250 stars, using the multi-object, wide-field fibre spectrograph AF2 on the William Herschel Telescope (WHT) in La Palma, obtaining intermediate-resolution spectroscopy in the 7500–9000 Å range. We derived spectral types and luminosity classes for all these objects and measured their radial velocities. Results. Our targets turned out to be G and K supergiants, late (≥ M4) M giants, and M-type bright giants (luminosity class II) and supergiants. We found ~35 red supergiants with radial velocities similar to Stephenson 2 members, spread over the two areas surveyed. In addition, we found ~40 red supergiants with radial velocities incompatible in principle with a physical association. Conclusions. Our results show that Stephenson 2 is not an isolated cluster, but part of a huge structure likely containing hundreds of red supergiants, with radial velocities compatible with the terminal velocity at this Galactic longitude (and a distance ~6 kpc). In addition, we found evidence of several populations of massive stars at different distances along this line of sight.
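As a concrete illustration of the catalogue cross-match step, the following minimal sketch matches two source lists by sky position with astropy. The column layout, the 1-arcsec tolerance, and the closing colour cut are illustrative assumptions; the actual work used Virtual Observatory tools over DENIS, USNO-B1 and 2MASS rather than this exact code.

```python
# Minimal sketch of a positional cross-match between two catalogues,
# in the spirit of the DENIS/USNO-B1/2MASS match described above.
# The 1-arcsec tolerance is an illustrative assumption.
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

def crossmatch(ra1, dec1, ra2, dec2, max_sep_arcsec=1.0):
    """Return index pairs linking catalogue-1 sources to their nearest
    catalogue-2 counterpart within max_sep_arcsec."""
    cat1 = SkyCoord(ra=ra1 * u.deg, dec=dec1 * u.deg)
    cat2 = SkyCoord(ra=ra2 * u.deg, dec=dec2 * u.deg)
    idx, sep2d, _ = cat1.match_to_catalog_sky(cat2)
    matched = sep2d < max_sep_arcsec * u.arcsec
    return np.nonzero(matched)[0], idx[matched]

# After matching photometry across catalogues, an infrared colour cut
# (e.g. on J - Ks; illustrative, not the paper's actual criterion)
# would select the obscured luminous red star candidates.
```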
Abstract:
New low-cost sensors and free, open-source libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, robot navigation and localization, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGBD sensor. It works in real time and requires no visual markers, camera calibration, or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light changes. The method was designed as a human interface for remotely controlling domestic or industrial devices; in this paper it is tested by operating a robotic hand. First, the human hand is recognized and the fingers are detected. Second, the movement of the fingers is analysed and mapped so that it can be imitated by a robotic hand.
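The sketch below illustrates the first stages the abstract describes: back-projecting a range image into a point cloud and isolating the hand as the nearest blob. The camera intrinsics and the 10 cm depth band are assumptions for illustration, not the paper's actual parameters.

```python
# Illustrative sketch: depth image -> point cloud -> crude hand segment.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # assumed RGBD intrinsics

def depth_to_points(depth):
    """Back-project a depth image (metres) into an Nx3 point cloud
    using the pinhole model."""
    v, u = np.nonzero(depth > 0)          # valid pixels only
    z = depth[v, u]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.column_stack((x, y, z))

def segment_hand(points, band=0.10):
    """Keep points within `band` metres of the closest point: a crude
    stand-in for the markerless hand segmentation step."""
    z_min = points[:, 2].min()
    return points[points[:, 2] < z_min + band]
```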
Abstract:
Humans and machines have shared the same physical space for many years. For that shared space to work, we want robots to behave like human beings: this facilitates their social integration and their interaction with humans, and produces intelligent behavior. To achieve this goal, we need to understand how human behavior is generated and to analyze the tasks carried out by our nervous system and how they relate to one another; only then can we implement these mechanisms in robots. In this study, we propose a model of competencies, based on the human neuroregulatory system, for analyzing and decomposing behavior into functional modules. Using this model makes it possible to separate and locate the tasks to be implemented in a robot that displays human-like behavior. As an example, we show the application of the model to autonomous movement in unfamiliar environments and its implementation in several simulated and real robots with different physical configurations and devices of different natures. The main result of this study is a model of competencies that is being used to build robotic systems capable of displaying human-like behaviors while taking into account the specific characteristics of robots.
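A minimal sketch of what "decomposing behavior into functional modules" can look like in code is given below. The module names, the priority scheme, and the dictionary-based state are illustrative assumptions only, not the authors' competency model.

```python
# Illustrative sketch of behavior decomposed into prioritized
# functional modules (competencies). All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Competency:
    name: str
    applies: Callable[[dict], bool]   # can this module act on the state?
    act: Callable[[dict], str]        # produce a motor command

@dataclass
class CompetencyModel:
    modules: List[Competency] = field(default_factory=list)

    def step(self, state: dict) -> str:
        # Higher-priority modules are listed first; the first one
        # applicable to the current state produces the command.
        for m in self.modules:
            if m.applies(state):
                return m.act(state)
        return "idle"

model = CompetencyModel([
    Competency("avoid_obstacle", lambda s: s["obstacle"], lambda s: "turn"),
    Competency("move_to_goal", lambda s: True, lambda s: "forward"),
])
print(model.step({"obstacle": False}))  # -> "forward"
```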
Abstract:
The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while at the same time decreasing size and power consumption. The use of Field Programmable Gate Arrays (FPGAs) provides reprogrammable hardware that can be exploited to obtain a reconfigurable sensor system. This adaptability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored because of the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability, and their performance in algorithm implementation. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power sensors based on FPGAs is being developed in Spain. In this paper, a review of these developments is presented, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.
Abstract:
A large part of the new generation of computer numerical control systems has adopted an architecture based on robotic systems. This architecture improves the implementation of many manufacturing processes in terms of flexibility, efficiency, accuracy and speed. This paper presents a 4-axis robot tool based on a joint structure whose primary use is to machine complex shapes in non-contact processes. A new dynamic visual controller is proposed to control the 4-axis joint structure, using image information in the control loop to guide the robot tool during the machining task. In addition, this controller eliminates the chaotic joint behavior that appears while tracking the quasi-repetitive trajectories required in machining processes. Moreover, the robot tool can be coupled to a manipulator robot to form a multi-robot platform for complex manufacturing tasks: the robot tool can then machine a piece grasped from the workspace by the manipulator robot. The manipulator robot can in turn be guided by visual information provided by the robot tool, yielding an intelligent multi-robot platform controlled by a single camera.
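The core of an image-based control law of the kind used to guide such a joint structure can be sketched as follows. This is the classic proportional, kinematic form only; the paper's controller is dynamic and additionally suppresses chaotic joint behavior, which this sketch does not attempt to reproduce.

```python
# Simplified kinematic sketch of image-based visual control:
# joint velocities computed from the image feature error through the
# pseudo-inverse of the image Jacobian (interaction matrix).
import numpy as np

def ibvs_step(J_img, s, s_star, lam=0.5):
    """One control step: q_dot = -lam * pinv(J_img) @ (s - s_star).

    J_img  : (2m x n) image Jacobian mapping joint rates to feature rates
    s      : current image features, shape (2m,)
    s_star : desired image features, shape (2m,)
    lam    : proportional gain (illustrative value)
    """
    error = s - s_star
    return -lam * np.linalg.pinv(J_img) @ error
```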
Abstract:
Visual information is increasingly used in a large number of applications to guide joint structures. This paper proposes an image-based controller that allows a joint structure to be guided when it has more degrees of freedom than the task requires. The controller resolves this redundancy by combining two tasks: the primary task performs the guidance using image information, and the secondary task selects the most adequate posture of the joint structure, resolving any joint redundancy with respect to the task performed in the image space. The proposed method also employs a smoothing Kalman filter, both to detect the moments when abrupt changes occur in the tracked trajectory and to estimate and compensate for these changes. Furthermore, a direct visual control approach is proposed that integrates the visual information provided by this smoothing Kalman filter, permitting correct tracking when the measurements are noisy. All these contributions are integrated in an application that requires tracking the faces of children with Asperger syndrome.
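The primary/secondary task combination the abstract describes is commonly written as a null-space projection; a minimal sketch, with illustrative gain and dimensions, follows. The smoothing Kalman filter stage is not reproduced here.

```python
# Hedged sketch of primary/secondary task redundancy resolution:
# the posture (secondary) motion is projected into the null space of
# the primary image-based task, so it cannot disturb the image error.
import numpy as np

def redundant_control(J1, e1, q_dot_posture, lam=0.5):
    """q_dot = -lam * pinv(J1) @ e1 + (I - pinv(J1) @ J1) @ q_dot_posture

    J1            : primary-task image Jacobian (2m x n)
    e1            : primary-task image error (2m,)
    q_dot_posture : joint velocity realizing the secondary posture task (n,)
    """
    n = J1.shape[1]
    J1_pinv = np.linalg.pinv(J1)
    null_proj = np.eye(n) - J1_pinv @ J1    # null-space projector
    return -lam * J1_pinv @ e1 + null_proj @ q_dot_posture
```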
Abstract:
Traditional visual servoing systems do not address the tracking of moving objects. When such systems are used to track a moving object, the visual features can leave the image, depending on the object's velocity, causing the tracking task to fail. This happens especially when the object and the robot are both at rest and the object then starts to move. In this work, we have employed a retina camera based on Address Event Representation (AER) in order to use events as the input to the visual servoing system. The events emitted by the camera indicate pixel-level movement. Event-based visual information is processed only at the moment it occurs, reducing the response time of visual servoing systems when they are used to track moving objects.
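To illustrate how AER events can feed a visual servoing loop, the sketch below computes the centroid of recent events as the tracked feature. The (x, y, t) event layout and the 10 ms window are assumptions; a real AER retina also reports event polarity, which is ignored here.

```python
# Illustrative sketch: use the centroid of recent AER events as the
# visual feature of the moving object. No events means no motion.
import numpy as np

def event_centroid(events, t_now, window=0.01):
    """events : array of (x, y, t) rows, one per AER event
    t_now  : current time in the same units as t
    Returns the (x, y) centroid of events within the last `window`
    seconds, or None when the scene is static (no recent events)."""
    recent = events[events[:, 2] > t_now - window]
    if len(recent) == 0:
        return None          # nothing moved, nothing to track
    return recent[:, :2].mean(axis=0)
```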
Abstract:
New low-cost sensors and new free, open-source libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, and human detection and/or gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and track its fingers is proposed. This new method is based on point clouds from RGBD range images. It requires no visual markers, camera calibration, prior knowledge of the environment, or complex and expensive acquisition systems. Furthermore, this method has been used to implement a human interface for moving a robot hand: the human hand is recognized, the movement of the fingers is analyzed, and the motion is then imitated by a Barrett hand, using communication events programmed in ROS.
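The final imitation step can be sketched as a ROS node that publishes the detected finger angles. The topic name and joint names below are assumptions for illustration; the actual Barrett hand driver interface may differ.

```python
# Hedged sketch: publish tracked finger angles to a robot hand via ROS.
# Topic and joint names are hypothetical, not the paper's actual setup.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node("hand_imitation")
pub = rospy.Publisher("/bhand/joint_commands", JointState, queue_size=1)

def publish_fingers(angles):
    """angles: finger joint angles (radians) produced by the tracker."""
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ["finger1", "finger2", "finger3"]  # assumed joint names
    msg.position = list(angles)
    pub.publish(msg)   # in practice, allow time for subscribers to connect
```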
Abstract:
Purpose: To compare anterior and posterior corneal curvatures between eyes with primary open-angle glaucoma (POAG) and healthy eyes. Methods: This is a prospective, cross-sectional, observer-masked study. A total of 138 white subjects (one eye per patient) were consecutively recruited; 69 eyes had POAG (study group), and the other 69 comprised a group of healthy control eyes matched with the study eyes for age and central corneal pachymetry. Exclusion criteria included any corneal or ocular inflammatory disease, previous ocular surgery, or treatment with carbonic anhydrase inhibitors. The same masked observer performed Goldmann applanation tonometry, ultrasound pachymetry, and Orbscan II topography in all cases. Central corneal thickness, intraocular pressure, and anterior and posterior topographic elevation maps were analyzed and compared between the two groups. Results: Patients with POAG had greater forward shifting of the posterior corneal surface than healthy control eyes (p < 0.01). Significant differences in anterior corneal elevation between controls and POAG eyes were also found (p < 0.01). Conclusions: Primary open-angle glaucoma eyes have a higher elevation of the posterior corneal surface than central corneal thickness–matched nonglaucomatous eyes.
Abstract:
This work shows how robotics is taught by means of a modular robot, and the educational results obtained, in the Master's Degree in Automation and Robotics at the Escuela Politécnica Superior of the Universidad de Alicante. The article describes the results obtained using this modular robot, in both generic and specific competencies, in the Master's courses on electronics, control and programming. The learning objectives for each of these areas, their application in teaching, and the educational results obtained are presented. Among the results of the study, it is worth highlighting that students showed greater interest and strengthened their autonomous learning. To that end, the modular robot was built with tools that encourage this kind of teaching and learning, such as interactive communications for monitoring, changing and adapting various control and power parameters of the robot.
Abstract:
This work presents the design, construction and programming of a modular robot for developing both generic and specific competencies in the electronics, control and programming courses of the Master's Degree in Automation and Robotics at the Escuela Politécnica Superior of the Universidad de Alicante. The proposed modules are described, together with the learning objectives for each of them. One of the most important aspects of this study is the potential development of creativity and autonomous learning. To that end, a Bluetooth communication module is developed that makes it possible to monitor, change and adapt various control and power parameters of the robot online. This tool has also been introduced as part of the methodology in the Master's courses on Electromechanics and Automatic Control Systems. The different results obtained during and at the end of this work are presented.
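The kind of online monitoring and tuning link such a Bluetooth module provides can be sketched with a serial-port (SPP) connection via pyserial. The port name and the simple KEY=VALUE protocol below are assumptions for illustration, not the module's actual protocol.

```python
# Minimal sketch of tuning a robot parameter over a Bluetooth serial link.
import serial

def set_parameter(port, key, value, baud=9600):
    """Send a control/power parameter update to the robot and return
    its acknowledgement line (hypothetical KEY=VALUE protocol)."""
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(f"{key}={value}\n".encode())
        return link.readline().decode().strip()

# Example: lower a wheel PID proportional gain while the robot runs
# (port name and parameter name are illustrative):
# print(set_parameter("/dev/rfcomm0", "KP", 0.8))
```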
Abstract:
Sensing techniques are important for addressing the uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system, based on range imaging technology, for robot manipulation of non-rigid objects. Our proposal provides a visual perception system for complex grasping tasks, supporting the robot controller when other sensing modalities, such as tactile and force, cannot obtain data relevant to the grasping task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. The approach is also used to measure changes in the shape of the object's surfaces, allowing us to find deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out on grasping tasks involving several flexible household objects, with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, our visual pipeline does not require deformation models of objects and materials, and it works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern located on the robot forearm. The presented experiments demonstrate that the proposed method achieves good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
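The deformation-event idea can be sketched by comparing the object's current surface points against a reference capture and signalling the controller when the deviation grows. The nearest-neighbour comparison and the 5 mm threshold are illustrative assumptions, not the paper's actual detection method.

```python
# Illustrative sketch: emit a deformation event when the grasped
# object's surface deviates too far from a reference point cloud.
import numpy as np
from scipy.spatial import cKDTree

def deformation_event(reference_pts, current_pts, threshold=0.005):
    """reference_pts, current_pts : Nx3 point clouds (metres), both
    expressed in the forearm-pattern reference frame.
    Returns True (an 'event' for the robot controller) when the mean
    point-to-reference deviation exceeds `threshold` metres."""
    tree = cKDTree(reference_pts)
    dist, _ = tree.query(current_pts)   # nearest-neighbour distances
    return dist.mean() > threshold
```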