25 results for Biomimetic robotics
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
SOUZA, Anderson A. S. ; SANTANA, André M. ; BRITTO, Ricardo S. ; GONÇALVES, Luiz Marcos G. ; MEDEIROS, Adelardo A. D. Representation of Odometry Errors on Occupancy Grids. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.
Abstract:
SANTANA, André M.; SOUZA, Anderson A. S.; BRITTO, Ricardo S.; ALSINA, Pablo J.; MEDEIROS, Adelardo A. D. Localization of a mobile robot based on odometry and natural landmarks using extended Kalman Filter. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.
Abstract:
This paper reflects on the use of robotics as an educational technology and its role in fostering social and digital inclusion, outlining a field that is only now taking shape. Robotics is still a little-known tool in education, not regulated at the national level, and there is little experience with it in the Northeast of Brazil. This research presents one of the first experiments with educational robotics in Rio Grande do Norte. We report a field study conducted in a public school maintained by a major science and technology education institute of the state, seeking to review the robotics course, understand how it works, show how it is used in the school, and identify the contributions it generated for the digital inclusion of the students, based on the accounts of teachers, engineers, school management and students. To gather information, we used the focus group technique, applied in two stages, one with groups of students and another with teachers and the school administration, together with direct observations made while the robotics course was being concluded. As a result, we found that the school, through the robotics course, acts as a provider of social and digital inclusion, since it awakens in the students of this sample knowledge that enables social change. Although the students do not grasp the full depth of the meaning of inclusion, they report everyday actions that integrate technology into their social context, allowing them to exercise their cultural citizenship in full.
Abstract:
In this work, we propose a methodology for teaching robotics in elementary schools, based on Vygotsky's socio-historical theory. This methodology, together with the Lego Mindstorms (R) kit and an educational software package (an interface for controlling and programming the prototypes), forms an educational robotics system named RoboEduc. For the practical development of this work we adopted an action-research strategy, carrying out robotics activities with children between 8 and 10 years of age, elementary-level students of the Municipal School Ascendino de Almeida. The school is located in Pitimbu, on the periphery of Natal, in the state of Rio Grande do Norte. The activities focused on the construction of robotic prototypes and on their programming and control. While building the prototypes, the children develop zones of proximal development (ZPDs), learning spaces that, when well used, support the construction not only of scientific concepts but also of abilities and capacities that are important for the social and cultural interaction of each individual and of the group. The development of these practical workshops made it possible to analyse the use of the robot as a mediating element of the teaching-learning process and the contributions that robotics may bring to teaching from the elementary level onwards.
Abstract:
This work presents a cooperative navigation system for a humanoid robot and a wheeled robot using visual information, aiming to navigate the non-instrumented humanoid using information obtained from the instrumented wheeled robot. Although the humanoid has no sensors for its own navigation, it can be remotely controlled by infrared signals. The wheeled robot can therefore control the humanoid by positioning itself behind it and, through visual information, finding and guiding it. The location of the wheeled robot is obtained by merging odometry with landmark detections using the Extended Kalman Filter. The landmarks are detected visually and their features are extracted by image processing; the parameters obtained from image processing are used directly in the Extended Kalman Filter. Thus, while the wheeled robot locates and navigates the humanoid, it simultaneously computes its own location and maps the environment (SLAM). Navigation is carried out by heuristic algorithms based on the error between the current and desired pose of each robot. The main contribution of this work is the implementation of a cooperative navigation system for two robots based on visual information, which can be extended to other robotic applications, such as controlling robots without modifying their hardware or attaching communication devices to them.
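To make the final step concrete, here is a minimal sketch (not the authors' code) of a heuristic controller of the kind described, where velocity commands are derived from the distance and bearing errors between the current and desired pose; the gains and thresholds are illustrative assumptions.

```python
# Minimal pose-error controller sketch; gains are illustrative assumptions.
import math

def heuristic_nav_step(x, y, theta, x_goal, y_goal, k_rho=0.5, k_alpha=1.5):
    """Return (v, w) commands that reduce distance and heading errors."""
    dx, dy = x_goal - x, y_goal - y
    rho = math.hypot(dx, dy)                                # distance error
    alpha = math.atan2(dy, dx) - theta                      # bearing error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))    # wrap to [-pi, pi]
    v = k_rho * rho if abs(alpha) < math.pi / 2 else 0.0    # advance only when roughly facing the goal
    w = k_alpha * alpha
    return v, w

# Example: robot at the origin heading along +x, goal at (1, 1)
print(heuristic_nav_step(0.0, 0.0, 0.0, 1.0, 1.0))
```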
Abstract:
In this work, we propose a probabilistic mapping method in which the mapped environment is represented by a modified occupancy grid. The main idea of the proposed method is to allow a mobile robot to construct, in a systematic and incremental way, the geometry of the underlying space, obtaining at the end a complete map of the environment. As a consequence, the robot can move safely through the environment, based on a confidence value for the data obtained from its perceptual system. The map is represented coherently with the robot's sensory data, whether noisy or not, coming from its exteroceptive and proprioceptive sensors. The characteristic noise in the data from these sensors is treated by probabilistic modelling, so that its effects are visible in the final result of the mapping process. The results of the experiments performed indicate the viability of the methodology and its applicability to autonomous mobile robotics, thus constituting a contribution to the field.
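For reference, the following is a hedged sketch of a standard log-odds occupancy-grid update, the baseline on which a modified grid of this kind builds; the paper's specific modifications are not reproduced, and all numeric values are assumptions.

```python
# Standard Bayesian log-odds occupancy-grid update; increments are assumed values.
import numpy as np

L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0   # log-odds increments (illustrative)

def update_cell(l_prev, hit):
    """Update one cell's log-odds given an occupied (hit) or free observation."""
    return l_prev + (L_OCC if hit else L_FREE) - L_PRIOR

def probability(l):
    """Convert log-odds back to occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

grid = np.zeros((50, 50))                  # log-odds map, initially unknown (p = 0.5)
grid[10, 10] = update_cell(grid[10, 10], hit=True)
print(probability(grid[10, 10]))           # cell is now more likely occupied
```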
Abstract:
Robots are increasingly present in several areas of our society, yet they are still considered expensive equipment restricted to few people. This work consists of the development of control techniques and architectures that make it possible to build and program low-cost robots with low programming and building complexity. One key aspect of the proposed architecture is the use of audio interfaces to control actuators and read sensors, thus allowing any device that can produce sound to act as the control unit of a robot. The work also includes the development of web-based programming environments that allow computers or mobile phones to be used as control units, so that the robot can be remotely programmed and controlled. We further discuss possible applications of this low-cost robotic platform, mainly its educational use, which was experimentally validated by teachers and students of several undergraduate courses. We also present an analysis of data obtained from interviews conducted with the students before and after the use of our platform, which confirms its acceptance as a teaching support tool.
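As an illustration of the audio-interface idea, here is a minimal sketch assuming a hypothetical scheme in which each motor command is encoded as a fixed-frequency tone, so that any sound-capable device could drive the robot; the frequency table and duration are assumptions, not the platform's actual protocol.

```python
# Encode motor commands as tones written to a WAV file (hypothetical scheme).
import math, struct, wave

RATE = 44100
COMMANDS = {"forward": 1000, "left": 1500, "right": 2000, "stop": 500}  # Hz (assumed)

def command_tone(command, duration=0.2):
    """Synthesize the tone that would be sent to the robot's audio input."""
    freq = COMMANDS[command]
    n = int(RATE * duration)
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

with wave.open("forward.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    samples = command_tone("forward")
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```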
Abstract:
Motivated by social exclusion in Brazil and focusing on digital inclusion, a project was started at the Federal University of Rio Grande do Norte that addresses, at the same time, concepts of collaborative learning and educational robotics, aimed at digitally excluded children. In this context, a methodology was created that brings together technological elements (e.g. informatics and robotics) and school subjects (e.g. Portuguese, Mathematics, Geography, History), contextualized in everyday situations. We observed the educational concepts of collaborative learning and the development of capacities in these students, such as group work, logical reasoning and learning ability. This paper proposes an educational software package for teaching robotics, called RoboEduc, created to be used by digitally excluded primary school children. Its design prioritizes a friendly interface that makes the concepts of robotics and programming easy and fun to teach. With this tool, users without previous knowledge of informatics or robotics are able to control a robot, previously assembled with Lego kits, or even program it to carry out activities. This paper describes the implementation of the second version of the software, which includes the robot control already in use, the different programming levels linked to the users' learning levels, and their different interfaces and functions. A third version, improving each of the stages mentioned, is currently being implemented. To validate, test and prove the efficiency of the methodology developed for RoboEduc, experiments were carried out through robotics practice with children of the fourth and fifth grades of primary school at the Municipal School Professor Ascendino de Almeida, on the outskirts of Natal (west zone), Rio Grande do Norte. As a preliminary result, we verified that the use of robots associated with well-designed software can be extended to users who know very little about the subject, without the need for previous advanced technological knowledge. The robots and the software thus proved to be accessible and efficient tools in the process of digital inclusion.
Abstract:
In this thesis, we present the development of a dynamic model for a multirotor unmanned aerial vehicle with vertical take-off and landing characteristics, considering input nonlinearities, together with a full-state robust backstepping controller. The dynamic model is expressed using the Newton-Euler laws, aiming at a better mathematical representation of the mechanical system for analysis and control design, not only while hovering but also while taking off, landing, or flying to perform a task. The input nonlinearities are dead zone and saturation, which capture the gravitational effect and the inherent physical constraints of the rotors. The experimental multirotor aerial vehicle is equipped with an inertial measurement unit and a sonar sensor, which provide measurements of attitude and altitude. A real-time attitude estimation scheme based on the extended Kalman filter using quaternions was developed. For the robustness analysis, the sensors were modelled as the ideal value plus an unknown bias and unknown white noise. The bounded robust attitude/altitude controllers were derived based on the notion of global uniform practical asymptotic stability for real systems, which remain globally uniformly asymptotically stable if and only if their solutions are globally uniformly bounded, dealing with convergence and stability to a ball of non-null radius in the state space, under some assumptions. Lyapunov analysis was used to prove the stability of the closed-loop system, to compute bounds on the control gains, and to guarantee desired bounds on the attitude tracking errors in the presence of measurement disturbances. The control laws were tested in numerical simulations and on an experimental hexarotor developed at the UFRN Robotics Laboratory.
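As a concrete reference for the quaternion-based attitude estimation mentioned above, the sketch below shows only the gyro-driven prediction step of such an estimator (the EKF correction from the other sensors is omitted); the angular rate and time step are illustrative assumptions.

```python
# Quaternion prediction from body angular rates (scalar-first convention).
import numpy as np

def quat_predict(q, omega, dt):
    """Propagate quaternion q = [w, x, y, z] by integrating body rates omega (rad/s) over dt."""
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q_new = q + 0.5 * dt * Omega @ q
    return q_new / np.linalg.norm(q_new)   # renormalize to unit length

q = np.array([1.0, 0.0, 0.0, 0.0])                     # identity attitude
q = quat_predict(q, omega=(0.0, 0.0, 0.1), dt=0.01)    # slow yaw rate (assumed)
print(q)
```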
Abstract:
Visual attention is a very important task in autonomous robotics but, because of its complexity, it requires significant processing time. We propose an architecture for feature selection using foveated images that is guided by visual attention tasks and reduces the processing time required to perform them. Our system can be applied in both bottom-up and top-down visual attention. The foveated model determines which scales are to be used by the feature extraction algorithm, so the system can discard features that are not strictly necessary for the tasks, thus reducing the processing time. If the fovea is correctly placed, the processing time can be reduced without compromising the quality of the tasks' outputs. The distance of the fovea from the object is also analysed, and if the visual system loses tracking in top-down attention, basic fovea placement strategies can be applied. Experiments have shown that this approach can reduce processing time by up to 60%. To validate the method, we tested it with the feature extraction algorithm known as Speeded Up Robust Features (SURF), one of the most efficient approaches for feature extraction. With the proposed architecture, we can meet the real-time requirements of robot vision, mainly for application in autonomous robotics.
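The core idea of restricting feature extraction to the neighbourhood of the fovea can be sketched as follows. ORB is used here only because it ships with stock OpenCV; the paper uses SURF (available as cv2.xfeatures2d.SURF_create in opencv-contrib), and the window size is an illustrative assumption.

```python
# Detect features only inside a window around the fovea to cut processing time.
import cv2
import numpy as np

def foveated_features(image, fovea_xy, window=128):
    """Detect keypoints only inside a square window centred on the fovea."""
    x, y = fovea_xy
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.rectangle(mask, (max(x - window, 0), max(y - window, 0)),
                  (min(x + window, w - 1), min(y + window, h - 1)), 255, -1)
    detector = cv2.ORB_create()
    return detector.detectAndCompute(image, mask)

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in image
kps, desc = foveated_features(img, fovea_xy=(320, 240))
print(len(kps), "keypoints near the fovea")
```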
Abstract:
In this work, we propose methodologies and computational tools to insert robots into cultural environments. The basic idea is to have a robot in a real context (a cultural space) that can represent a user connected to the system through the Internet (a visitor avatar in the real space), while the robot also has its own representation in a mixed-reality space (a robot avatar in the virtual space). In this way, robot and avatar are not simply real and virtual objects: they play a more important role in the scene, interfering in the process and making decisions. To provide this service, we developed a module composed of a robot, communication tools, and means of integrating them with the virtual environment, and we implemented a set of behaviors to control the robot in the real space. We studied the software and hardware tools available for the robotics platform used in the experiments and developed test routines to determine their capabilities. Finally, we studied the behavior-based control model and planned and implemented all the behaviors necessary to integrate the robot into the real and virtual cultural spaces. Several experiments were conducted in order to validate the methodologies and tools developed.
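A minimal sketch of priority-based behavior arbitration, the general pattern behind behavior-based control, is shown below; the behavior set, sensor fields, and gains are hypothetical and do not reproduce the behaviors implemented in this work.

```python
# Priority-based arbitration: the highest-priority behavior that fires wins.
def avoid_obstacle(sensors):
    if sensors["front_distance"] < 0.3:          # too close: turn in place
        return {"v": 0.0, "w": 1.0}
    return None                                  # behavior not triggered

def go_to_goal(sensors):
    return {"v": 0.3, "w": 0.1 * sensors["goal_bearing"]}

BEHAVIORS = [avoid_obstacle, go_to_goal]         # highest priority first

def arbitrate(sensors):
    """Return the command of the highest-priority behavior that fires."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return {"v": 0.0, "w": 0.0}

print(arbitrate({"front_distance": 1.2, "goal_bearing": 0.4}))
```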
Abstract:
We propose a new approach to the reduction and abstraction of visual information for robot vision applications. Basically, we use a multiresolution representation in combination with a moving fovea to reduce the amount of information taken from an image. We introduce the mathematical formalization of the moving-fovea approach and the mapping functions that support this model. Two indexes (resolution and cost) are proposed to help choose the model's variables. With this theoretical approach it is possible to apply several filters, to compute disparity, and to obtain motion analysis in real time (less than 33 ms to process an image pair on a notebook with an AMD Turion Dual Core at 2 GHz). As the main result, most of the time the moving fovea allows the robot to keep a region of interest visible in both images without physically moving its robotic devices. We validate the proposed model with experimental results.
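The following sketch illustrates one plausible realization of a moving-fovea multiresolution structure: a stack of windows centred on the fovea, each covering a wider field of view but downsampled to the same pixel size. It follows the general idea described above rather than the paper's exact formalization; the number of levels and the window sizes are assumptions.

```python
# Build a stack of same-size patches around the fovea, coarsest (widest) first.
import numpy as np
import cv2

def moving_fovea(image, fovea_xy, levels=4, out_size=64):
    """Return `levels` patches around the fovea, from widest/coarsest to narrowest/finest."""
    x, y = fovea_xy
    h, w = image.shape[:2]
    patches = []
    for k in range(levels, 0, -1):
        half = out_size * (2 ** (k - 1)) // 2     # window half-width grows with level
        x0, x1 = max(x - half, 0), min(x + half, w)
        y0, y1 = max(y - half, 0), min(y + half, h)
        patch = cv2.resize(image[y0:y1, x0:x1], (out_size, out_size))
        patches.append(patch)
    return patches

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
pyramid = moving_fovea(img, fovea_xy=(320, 240))
print([p.shape for p in pyramid])
```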
Abstract:
In recent years, radio frequency identification (RFID) technology has attracted great interest in both the industrial and scientific communities. Its ability to locate and monitor objects, animals and people with active or passive tags allows easy development at a good cost-benefit ratio, and offers undeniable advantages in applications ranging from logistics to healthcare, robotics and security, among others. In this context, the elements that stand out are the RFID tags and the antennas used in RFID readers. Most tags have omnidirectional antennas, usually manufactured as modified printed dipoles. The primary purpose of a tag antenna design is to achieve the input impedance required for a good impedance match with the load impedance of the chip, while the main objective in the design of reader antennas is to achieve reduced sizes and structures with good data transmission capacity. This work presents the numerical characterization of antennas for RFID applications, divided into RFID tags and antennas for RFID readers. Three RFID tags and two reader antennas found in the literature are analysed using the Wave Concept Iterative Procedure (WCIP). Results from the literature are compared with those obtained through WCIP simulations in order to show that the method is able to analyse such structures. To illustrate the simulation results, the behavior of the electric and magnetic fields is presented. A literature review of the characteristics and principles of RFID technology is also provided, and suggestions for future work are presented.
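As a small side illustration of the tag-antenna matching objective (not taken from the paper), the power transmission coefficient tau = 4*Ra*Rc / |Za + Zc|^2 quantifies how well the antenna impedance Za matches the chip impedance Zc, reaching 1 for a conjugate match; the impedance values below are assumptions used only for illustration.

```python
# Fraction of available power delivered from the tag antenna to the chip.
def power_transmission(z_antenna, z_chip):
    """Power transmission coefficient between antenna and chip impedances."""
    return 4 * z_antenna.real * z_chip.real / abs(z_antenna + z_chip) ** 2

z_chip = complex(15, -150)                             # typical capacitive chip impedance (assumed)
print(power_transmission(complex(15, 150), z_chip))    # conjugate match -> 1.0
print(power_transmission(complex(50, 0), z_chip))      # poor match -> much less power delivered
```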
Abstract:
The development and refinement of techniques for simultaneous localization and mapping (SLAM) by an autonomous mobile robot, and the building of local 3-D maps from a sequence of images, are widely studied topics. This work presents a monocular visual SLAM technique based on the extended Kalman filter, which uses features found in a sequence of images by the SURF (Speeded Up Robust Features) descriptor and determines which of them can be used as landmarks through a technique based on delayed initialization from 3-D straight lines. For this, only the coordinates of the features found in the image and the intrinsic and extrinsic camera parameters are available; the position of a landmark can only be determined once depth information becomes available. Tests have shown that, along its route, the mobile robot detects the presence of features in the images and, through the proposed delayed-initialization technique, adds new landmarks to the state vector of the extended Kalman filter (EKF) after estimating their depth. With the estimated positions of the landmarks, it was possible to estimate the updated position of the robot at each step, obtaining good results that demonstrate the effectiveness of the proposed monocular visual SLAM system.
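To illustrate the delayed-initialization idea, the sketch below adds a landmark only once it can be triangulated from two sufficiently separated views; cv2.triangulatePoints stands in for the paper's line-based formulation, and the camera matrix, poses and pixel measurements are illustrative assumptions.

```python
# Triangulate a feature seen from two camera poses before adding it as a landmark.
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# Two camera poses: identity, and a 0.5 m translation along x (assumed baseline).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# The same feature observed in both images (pixel coordinates, assumed).
pt1 = np.array([[340.0], [250.0]])
pt2 = np.array([[290.0], [250.0]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print("landmark initialized at", X)             # only now would it enter the EKF state
```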
Abstract:
Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for localization systems in robotics and augmented reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. In this vein, this work proposes a pipeline for obtaining relative position that uses a previously calibrated camera as a positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models of the camera state transition unnecessary. Experiments assessing both the quality of the 3D reconstruction and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared with localization data gathered from a mobile robotic platform.
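One step of such an SFM-based pipeline can be sketched as follows: matched features in two calibrated views yield an essential matrix, from which the relative rotation and an up-to-scale translation are recovered; the synthetic correspondences and camera matrix are assumptions used only to keep the snippet self-contained.

```python
# Relative pose from matched features via essential-matrix estimation.
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic scene points and a known camera motion (0.2 m sideways, no rotation).
rng = np.random.default_rng(0)
pts3d = np.column_stack([rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50), rng.uniform(4, 8, 50)])
proj = lambda P, t: (P[:, :2] - t[:2]) / (P[:, 2:] - t[2]) * 500.0 + [320.0, 240.0]
pts1 = proj(pts3d, np.zeros(3))
pts2 = proj(pts3d, np.array([0.2, 0.0, 0.0]))

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("rotation:\n", np.round(R, 3))
print("translation direction:", t.ravel())      # scale is unobservable from images alone
```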