17 results for Visual navigation. Visual servoing control. UAVs. Quadrotor helicopter. Computer vision
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
This work deals with the development of a quadrotor helicopter prototype for monitoring applications in oil facilities. Anomaly detection problems can be solved through monitoring missions performed by a suitably instrumented quadrotor, i.e., one with embedded infrared thermal sensors. The proposed monitoring system aims to reduce accidents and to make possible the use of non-destructive techniques for the detection and location of leaks caused by corrosion. To this end, the implementation of a prototype, its stabilization, and a navigation strategy are proposed. The control strategy is based on dividing the problem into two hierarchical control levels: the lower level stabilizes the angles and the altitude of the vehicle at the desired values, while the higher one provides appropriate reference signals to the lower level so that the quadrotor performs the desired movements. The navigation strategy for the quadrotor helicopter uses information provided by an image acquisition system (monocular camera) embedded in the helicopter. Considering the low-level control problem solved, the proposed vision-based navigation technique treats the problem through high-level control strategies, such as relative position control, trajectory generation, and trajectory tracking. For position control, we use a visual servoing control technique based on image features. Trajectory generation is done in an offline step, producing a visual trajectory composed of a sequence of images. For the trajectory tracking problem, a continuous visual servoing control strategy is proposed, enabling a navigation strategy without metric maps. Simulation and experimental results are presented to validate the proposal.
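A minimal sketch of the image-based visual servoing idea used here for relative position control (the classic feature-error law; the gain, the interaction matrix `L`, and the feature dimensions are illustrative assumptions, not taken from the thesis):

```python
# Classic IBVS law: command a camera velocity proportional to the
# image-feature error through the pseudoinverse of the interaction matrix,
# v = -lambda * pinv(L) @ (s - s*). Values below are placeholders.
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) from feature error."""
    error = s - s_star                       # current vs. desired features
    return -gain * np.linalg.pinv(L) @ error

# Example: four point features (8 image coordinates), 8x6 interaction matrix.
s = np.random.rand(8)
s_star = np.random.rand(8)
L = np.random.rand(8, 6)
v = ibvs_velocity(s, s_star, L)              # reference for the lower level
```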
Abstract:
This work proposes a kinematic control scheme, using visual feedback, for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the 3D Cartesian position and orientation (pose) of the robot arm from a robot image obtained through a camera. A colored triangular label is placed on the manipulator tool, and efficient heuristic rules are used to obtain the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the k-means algorithm was implemented to guarantee the robustness of the vision system under lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose 3D Cartesian coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired and the actual tool pose. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
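A hedged sketch of the described control loop: a Cartesian pose error, scaled by a gain, is mapped to joint increments through the pseudoinverse of the Jacobian. The Jacobian and poses below are placeholders; in the thesis the pose comes from the triangular label in the image.

```python
# Pseudoinverse-based kinematic control step: dq = pinv(J) @ (K * error).
import numpy as np

def joint_increments(pose_desired, pose_actual, J, gain=0.1):
    """Map a 6-D pose error (position + orientation) to joint increments."""
    error = pose_desired - pose_actual       # Cartesian-space error signal
    return np.linalg.pinv(J) @ (gain * error)

q = np.zeros(5)                              # five joint angles
J = np.random.rand(6, 5)                     # placeholder 6x5 Jacobian
dq = joint_increments(np.ones(6), np.zeros(6), J)
q += dq                                      # apply increments to the joints
```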
Abstract:
In conventional robot manipulator control, the desired path is specified in Cartesian space and converted to joint space through inverse kinematics mapping. The joint references generated by this mapping are used for dynamic control in joint space. Thus, the end-effector position is, in fact, controlled indirectly, in open loop, and the accuracy of grip position control depends directly on the accuracy of the available kinematic model. In this report, a new scheme for redundant manipulator kinematic control, based on visual servoing, is proposed. In the proposed system, a robot image acquired through a CCD camera is processed to compute the position and orientation of each link of the robot arm. The robot task is specified as a temporal sequence of reference images of the robot arm. Thus, both the measured pose and the reference pose are specified in the same image space, and their difference is used to generate a Cartesian-space error for kinematic control purposes. The proposed control scheme was applied to a four-degree-of-freedom planar redundant robot arm, and experimental results are shown.
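An illustrative sketch (not the thesis code) of tracking a temporal sequence of reference poses extracted from reference images; `measure_pose` and `jacobian` are assumed callables standing in for the image processing and kinematic model. For a redundant arm, the pseudoinverse yields the minimum-norm joint motion at each step.

```python
# Track a sequence of image-derived reference poses with the same
# pseudoinverse kinematic law; all models here are placeholders.
import numpy as np

def track_reference_sequence(q, measure_pose, jacobian, references, gain=0.2):
    for pose_ref in references:              # one reference image per step
        error = pose_ref - measure_pose(q)   # error in the shared image space
        q = q + np.linalg.pinv(jacobian(q)) @ (gain * error)
    return q
```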
Abstract:
Large efforts have been made by the scientific community on tasks involving the locomotion of mobile robots. To execute this kind of task, the robot must be given the ability to navigate through the environment safely, that is, without colliding with objects. To achieve this, it is necessary to implement strategies that make it possible to detect obstacles. In this work, we address this problem by proposing a system that collects sensory information and estimates the likelihood of obstacles occurring in the mobile robot's path. Stereo cameras positioned parallel to each other, in a structure coupled to the robot, are employed as the main sensory device, making possible the generation of a disparity map. Code optimizations and a strategy for data reduction and abstraction are applied to the images, resulting in a substantial gain in execution time. This allows the high-level decision processes to perform obstacle avoidance in real time. The system can be employed in situations where the robot is remotely operated, as well as in situations where it depends only on itself to generate trajectories (the autonomous case).
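A minimal disparity-map sketch with OpenCV's block matcher, standing in for the stereo processing described above; file names, matcher parameters, and the threshold are illustrative assumptions. Nearby obstacles appear as regions of large disparity.

```python
# Compute a disparity map from a rectified stereo pair and flag likely
# obstacles by thresholding (all values below are placeholders).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)      # fixed-point disparities (x16)

# Data reduction: threshold the map to flag likely obstacles in the path.
obstacle_mask = disparity > (16 * 20)        # disparities above 20 pixels
```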
Abstract:
Visual attention is a very important task in autonomous robotics but, because of its complexity, requires significant processing time. We propose an architecture for feature selection using foveated images that is guided by visual attention tasks and reduces the processing time required to perform them. Our system can be applied to bottom-up or top-down visual attention. The foveated model determines which scales are to be used by the feature extraction algorithm. The system is able to discard features that are not strictly necessary for the tasks, thus reducing processing time. If the fovea is correctly placed, it is possible to reduce processing time without compromising the quality of the task outputs. The distance of the fovea from the object is also analyzed. If the visual system loses tracking in top-down attention, basic fovea placement strategies can be applied. Experiments have shown that this approach can reduce processing time by up to 60%. To validate the method, we tested it with Speeded Up Robust Features (SURF), one of the most efficient approaches for feature extraction. With the proposed architecture, we can meet the real-time requirements of robot vision, mainly for application in autonomous robotics.
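A sketch of the foveation idea under simplifying assumptions: the fovea is a square region of interest, and SURF is restricted to it via a mask (the actual model also selects scales per fovea level). It assumes OpenCV's contrib module exposes `cv2.xfeatures2d.SURF_create`, which is non-free and may not be present in every build.

```python
# Extract SURF features only inside a fovea-centered mask, discarding
# peripheral features to cut processing time. Parameters are illustrative.
import cv2
import numpy as np

def foveated_surf(image, fovea_center, fovea_size=128, hessian=400):
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cx, cy = fovea_center
    half = fovea_size // 2
    mask[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half] = 255
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    return surf.detectAndCompute(image, mask)  # keypoints, descriptors
```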
Abstract:
Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) started to supply mathematical tools for localization systems in robotics and augmented reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position, featuring a previously calibrated camera as positional sensor and based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information like probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
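A minimal sketch of the SFM-style relative-pose step such a pipeline relies on (no Kalman or particle filters): estimate the essential matrix between matched features of consecutive frames and decompose it into rotation and translation. Matched points `pts1`, `pts2` and the calibrated intrinsics `K` are assumed given.

```python
# Two-view relative pose from matched features with RANSAC; the recovered
# translation is only defined up to scale, as usual in monocular SFM.
import cv2

def relative_pose(pts1, pts2, K):
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t                              # camera rotation and translation
```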
Abstract:
This work uses feature-based computer vision algorithms to identify medicine boxes for the visually impaired. The system is intended for people whose vision is compromised by disease, hindering the identification of the correct medicine to be taken. We use the camera, available in several popular devices such as computers, televisions, and phones, to identify the correct medicine box from the image and to present, through audio, information about the medication, such as its dosage, indications, and contraindications. We employ an object detection model that uses algorithms to identify features on the medicine boxes, playing the audio upon detection of the features on those boxes. Experiments carried out with 15 people show that 93% consider the system useful and very helpful in identifying medicines from their boxes. This technology can thus help many people with visual impairments to take the right medicine at the time previously indicated by the physician.
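An illustrative sketch of feature-based box identification; the abstract does not name the detector, so ORB is used here as a stand-in, and the dictionary of registered boxes and the match threshold are assumptions. The matched box's audio description would then be played.

```python
# Match live-frame features against stored descriptors of known medicine
# boxes; the best-scoring box above a minimum match count is reported.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def identify_box(frame, known_boxes, min_matches=30):
    """known_boxes: dict mapping medicine name -> stored ORB descriptors."""
    _, desc = orb.detectAndCompute(frame, None)
    if desc is None:
        return None
    scores = {name: len(matcher.match(desc, stored))
              for name, stored in known_boxes.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_matches else None
```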
Abstract:
The main objective of this work was to enable the recognition of human gestures through the development of a computer program. The program captures the gestures executed by the user through a camera attached to the computer and sends to the robot the command corresponding to the gesture. In total, five gestures made by the human hand were interpreted. The software (developed in C++) made extensive use of computer vision concepts and of the open-source OpenCV library, which directly affect the overall efficiency of mobile robot control. The computer vision concepts include the use of filters to smooth/blur the image for noise reduction and of color spaces suited to the developer's working environment, as well as useful information for manipulating digital images. The OpenCV library was essential to the project, providing functions/procedures for filters, image contours, image area, the geometric center of contours, color space conversion, convex hull, and convexity defects, plus all the means necessary for characterizing image features. During the development of the software, several problems appeared, such as false positives (noise), poor performance caused by inserting several filters with oversized masks, and problems arising from the choice of color space for processing human skin tones. However, after seven versions of the control software, it was possible to minimize the occurrence of false positives through better use of filters combined with a well-dimensioned mask size (tuned at run time), all supported by a programming logic refined over the construction of the seven versions. In the end, software was produced that met the established requirements. After completion of the control software, the overall effectiveness of the various programs was evaluated; in particular, versions V (84.75%), VI (93.00%), and VII (94.67%) showed that the final program performed well in interpreting gestures, proving that mobile robot control through human gestures is possible without external accessories, giving better mobility and lower cost of maintaining such a system. The great merit of the program was its capacity to help demystify the man/machine interface, since it offers an easy and intuitive interface for the control of mobile robots. Another important observed feature is that it is not necessary to be close to the mobile robot to control it: the controlling equipment only needs the address that the Robotino passes to the program via network or Wi-Fi.
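A sketch of the convex-hull/convexity-defect step named above (the depth threshold and the Python rendering are illustrative; the thesis software is in C++). Counting deep defects between fingers is a common way to distinguish hand gestures.

```python
# Count deep convexity defects on the largest skin contour; valleys between
# extended fingers show up as defects with large depth.
import cv2
import numpy as np

def count_finger_defects(skin_mask):
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)       # largest skin blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Keep only deep defects; OpenCV stores depth scaled by 256.
    return int(np.sum(defects[:, 0, 3] > 10000))
```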
Abstract:
The development and refinement of techniques for simultaneous localization and mapping (SLAM) by an autonomous mobile robot, and for the building of local 3-D maps from a sequence of images, are widely studied in scientific circles. This work presents a monocular visual SLAM technique based on the extended Kalman filter, which uses features found in a sequence of images with the SURF (Speeded Up Robust Features) descriptor and determines which features can be used as landmarks through a delayed-initialization technique based on 3-D straight lines. For this, only the coordinates of the features found in the image and the intrinsic and extrinsic camera parameters are available. It is possible to determine the position of the landmarks only once depth information becomes available. Tests have shown that, during the route, the mobile robot detects the presence of features in the images and, through the proposed delayed-initialization technique, adds new landmarks to the state vector of the extended Kalman filter (EKF) after estimating the depth of the features. With the estimated positions of the landmarks, it was possible to estimate the updated position of the robot at each step, obtaining good results that demonstrate the effectiveness of the proposed monocular visual SLAM system.
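A bare-bones EKF skeleton illustrating the predict/update loop at the core of such a system; the motion model `f`, measurement model `h`, their Jacobians `F`, `H`, and the noise covariances are placeholders, and the delayed landmark initialization (growing the state vector once depth is known) is omitted.

```python
# One EKF iteration: predict the state, then correct it with the features
# measured in the current image. All models are placeholder callables/arrays.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict: propagate the robot state; landmark entries stay fixed in f.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measured image features.
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```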
Abstract:
This work presents the development of a coordination and cooperation method for a fleet of mini mobile robots. The scope of the development is robot soccer, a well-structured, dynamic platform developed worldwide. Robot soccer involves several fields of knowledge, including computer vision, control theory, microcontrolled circuit design, and cooperative planning, among others. For organizational purposes, the system was divided into five modules: robot, vision, localization, planning, and control. The focus of this work is limited to the planning module. To support its development, a system simulator was implemented. The simulator runs in real time and substitutes for the real robots, so the other modules remain practically unchanged between a simulation and an execution with real robots. To organize the robots' behavior and produce cooperation among them, a hierarchical architecture was adopted: at the highest level, the team's playing style is chosen; just below, the role each player must assume is decided; associated with each role is a specific action; and finally, the robot's motion reference is computed. A robot's role dictates its behavior on a given occasion. Roles are allocated dynamically during the game, so the same robot may assume different roles over the course of the match.
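A toy sketch of the dynamic role-allocation layer; the role names and the distance-to-ball criterion are illustrative assumptions, not taken from the thesis.

```python
# Assign roles by proximity to the ball: the closest robot attacks, the
# next supports, and the remaining ones defend.
def allocate_roles(robot_positions, ball_position):
    def dist(p):
        return ((p[0] - ball_position[0]) ** 2 +
                (p[1] - ball_position[1]) ** 2) ** 0.5
    order = sorted(range(len(robot_positions)),
                   key=lambda i: dist(robot_positions[i]))
    roles = {}
    for rank, robot in enumerate(order):
        roles[robot] = ("attacker", "supporter", "defender")[min(rank, 2)]
    return roles
```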
Abstract:
A challenge that remains in the robotics field is how to make a robot react to visual stimuli in real time. Traditional computer vision algorithms used to overcome this problem are still very expensive, taking too long on common computer processors. Even very simple algorithms, like image filtering or mathematical morphology operations, may take too long. Researchers have implemented image processing algorithms in highly parallel hardware devices in order to cut down processing time, with good results. Using hardware-implemented image processing techniques and a platform-oriented system based on the Nios II processor, we propose an approach that combines hardware processing and event-based programming to simplify vision-based systems while accelerating parts of the algorithms used.
Abstract:
Registration of point clouds captured by depth sensors is an important task in 3D reconstruction applications based on computer vision. In many applications with strict performance requirements, registration should be executed not only with precision but also at the same frequency as data is acquired by the sensor. This thesis proposes the use of the pyramidal sparse optical flow algorithm to incrementally register point clouds captured by RGB-D sensors (e.g., Microsoft Kinect) in real time. The accumulated error inherent to the process is subsequently minimized using a marker and pose graph optimization. Experimental results gathered by processing several RGB-D datasets validate the system proposed by this thesis in visual odometry and simultaneous localization and mapping (SLAM) applications.
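A minimal sketch of the pyramidal sparse optical flow step used to track points between consecutive RGB frames of the RGB-D stream (window size and pyramid depth are illustrative). Tracked pairs, combined with their depths, yield the 3-D correspondences for incremental registration.

```python
# Track sparse points with pyramidal Lucas-Kanade optical flow; prev_pts is
# assumed to be an Nx1x2 float32 array of feature locations.
import cv2

def track_points(prev_gray, curr_gray, prev_pts):
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)        # 3-level image pyramid
    good = status.ravel() == 1               # keep successfully tracked points
    return prev_pts[good], curr_pts[good]
```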
Abstract:
This work presents a cooperative navigation system for a humanoid robot and a wheeled robot using visual information, aiming to navigate the non-instrumented humanoid robot using information obtained from the instrumented wheeled robot. Although the humanoid has no sensors for its own navigation, it can be remotely controlled by infrared signals. Thus, the wheeled robot can control the humanoid by positioning itself behind it and, through visual information, finding and navigating it. The location of the wheeled robot is obtained by merging information from odometers and from landmark detection, using the extended Kalman filter. The landmarks are detected visually, and their features are extracted by image processing. Parameters obtained by image processing are used directly in the extended Kalman filter. Thus, while the wheeled robot locates and navigates the humanoid, it also simultaneously calculates its own location and maps the environment (SLAM). Navigation is done through heuristic algorithms based on the errors between the actual and desired pose of each robot. The main contribution of this work is the implementation of a cooperative navigation system for two robots based on visual information, which can be extended to other robotic applications, such as the ability to control robots without interfering with their hardware or attaching communication devices.
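An illustrative sketch of a heuristic navigation law of the kind mentioned above: proportional commands on the distance and heading errors between a robot's actual and desired poses (gains and names are assumptions, not the thesis's algorithm).

```python
# Proportional pose-error controller: rho is the distance to the goal,
# alpha the heading error; both are scaled into velocity commands.
import math

def heuristic_command(pose, goal, k_lin=0.5, k_ang=1.5):
    x, y, theta = pose
    gx, gy = goal
    rho = math.hypot(gx - x, gy - y)            # distance error
    alpha = math.atan2(gy - y, gx - x) - theta  # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    return k_lin * rho, k_ang * alpha           # (linear, angular) velocity
```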
Abstract:
A quadrotor is an unmanned aerial vehicle (UAV) equipped with four rotors distributed on a simple X-shaped mechanical structure. The aim of this work is to build a quadrotor aircraft and stabilize its roll, pitch, and yaw angles at a certain altitude. The stabilization control approach is based on a transformation of the input variables that allows decoupled control. The proposed strategy breaks the control problem into two hierarchical levels: the lower level, the object of this work, maintains the desired altitude and angles of the vehicle, while the higher level provides appropriate references to the lower level, performing the desired movements. A hardware and software architecture was specially developed and implemented for an experimental prototype used to test and validate the proposed control approach.
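A common form of the decoupling input transformation, given here as a hedged illustration (the thesis's exact variables and rotor geometry may differ): the total thrust and the three body torques are written in terms of the squared rotor speeds, separating the altitude, roll, pitch, and yaw channels.

```latex
% U_1: total thrust; U_2, U_3, U_4: roll, pitch, yaw torques;
% b: thrust coefficient, d: drag coefficient, l: arm length.
\begin{aligned}
U_1 &= b\,(\omega_1^2 + \omega_2^2 + \omega_3^2 + \omega_4^2) \\
U_2 &= b\,l\,(\omega_4^2 - \omega_2^2) \\
U_3 &= b\,l\,(\omega_3^2 - \omega_1^2) \\
U_4 &= d\,(\omega_2^2 + \omega_4^2 - \omega_1^2 - \omega_3^2)
\end{aligned}
```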
Abstract:
The main task, and one of the major problems in mobile robotics, is the navigation process. Conceptually, this process means driving the robot from an initial position and orientation to a goal position and orientation, along an admissible path, respecting temporal and velocity constraints. This task is accomplished through subtasks such as robot localization in the workspace, admissible path planning, trajectory generation, and motion control. Moreover, autonomous wheeled mobile robots have kinematic constraints, also called nonholonomic constraints, that prevent the robot from moving freely in every direction in its workspace, reducing the number of feasible paths between two distinct positions. This work mainly addresses the path planning and trajectory generation problems applied to wheeled mobile robots acting in a robot soccer environment. The major difficulty in this process is finding a smooth function that respects the imposed robot kinematic constraints. This work proposes a path generation strategy based on third-degree parametric polynomials for the x and y axes. The theta orientation is derived from the relation between the y and x derivatives in such a way that the generated path respects the kinematic constraint. To execute the trajectory, this work also presents a simple control strategy acting on the robot's linear and angular velocities.
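A sketch of the third-degree parametric path just described (coefficients and the sampling are placeholders): x(s) and y(s) are cubics in the parameter s, and the heading theta follows from their derivatives, so the path is tangent-continuous and satisfies the nonholonomic constraint by construction.

```python
# Cubic parametric path: x(s), y(s) are third-degree polynomials and
# theta(s) = atan2(y'(s), x'(s)) keeps the heading tangent to the path.
import numpy as np

def cubic_path(ax, ay, n=100):
    """ax, ay: length-4 coefficient arrays [a0, a1, a2, a3]."""
    s = np.linspace(0.0, 1.0, n)
    x = ax[0] + ax[1] * s + ax[2] * s**2 + ax[3] * s**3
    y = ay[0] + ay[1] * s + ay[2] * s**2 + ay[3] * s**3
    dx = ax[1] + 2 * ax[2] * s + 3 * ax[3] * s**2
    dy = ay[1] + 2 * ay[2] * s + 3 * ay[3] * s**2
    theta = np.arctan2(dy, dx)               # heading along the path
    return x, y, theta
```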