6 results for Image Based Visual Servoing
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
This work deals with the development of a quadrotor helicopter prototype for monitoring applications in oil facilities. Anomaly detection problems can be addressed through monitoring missions performed by a suitably instrumented quadrotor, e.g. one with embedded infrared thermosensors. The proposed monitoring system aims to reduce accidents and to enable the use of non-destructive techniques for detecting and locating leaks caused by corrosion. To this end, the implementation of a prototype, its stabilization, and a navigation strategy are proposed. The control strategy divides the problem into two hierarchical levels: the lower level stabilizes the angles and the altitude of the vehicle at the desired values, while the higher level provides appropriate reference signals to the lower level so that the quadrotor performs the desired movements. The navigation strategy for the quadrotor helicopter uses information provided by an image acquisition system (a monocular camera) embedded in the helicopter. Assuming the low-level control problem is solved, the proposed vision-based navigation technique treats the problem through high-level control strategies such as relative position control, trajectory generation, and trajectory tracking. For position control, a visual servoing technique based on image features is used. Trajectory generation is performed in an offline step, producing a visual trajectory composed of a sequence of images. For the trajectory tracking problem, a continuous visual servoing control strategy is proposed, enabling a navigation strategy without metric maps. Simulation and experimental results are presented to validate the proposal.
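Image-based visual servoing of the kind used for the position control above is classically a proportional law on image-feature errors. A minimal sketch in Python/NumPy (the feature vector, interaction matrix, and gain below are illustrative stand-ins, not the thesis's actual quantities):

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical IBVS law: v = -lam * pinv(L) @ (s - s_star).

    s, s_star : current and desired image-feature vectors
    L         : interaction (image Jacobian) matrix mapping camera
                velocity to feature velocity, s_dot = L @ v
    Returns the commanded camera velocity.
    """
    e = s - s_star                       # feature error in image space
    return -lam * np.linalg.pinv(L) @ e  # drives the error to zero exponentially

# toy example: a single feature with a simplified 2x2 interaction matrix
L = np.eye(2)
v = ibvs_velocity(np.array([0.2, -0.1]), np.zeros(2), L)
```

Under this law the feature error decays exponentially in the ideal case; in practice the interaction matrix must be estimated (e.g. from feature depth) at each control step.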
Abstract:
This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the 3D Cartesian position and orientation (pose) of the robot arm from a robot image obtained through a camera. A colored triangular label is attached to the robot manipulator tool, and efficient heuristic rules are used to locate the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system in the presence of lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose 3D Cartesian coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired and actual tool poses. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
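The K-means color calibration step described above amounts to clustering sampled pixel colors and then classifying new pixels by their nearest centroid. A minimal NumPy version (the sample colors, cluster count, and deterministic initialization are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Plain K-means on RGB pixel samples: returns k color centroids."""
    # deterministic init: k samples spread across the data (illustrative choice)
    centroids = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        d = np.linalg.norm(pixels[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned pixels
        centroids = np.array([pixels[labels == j].mean(axis=0) for j in range(k)])
    return centroids

def classify(pixel, centroids):
    """Label a pixel by its nearest learned color centroid."""
    return np.linalg.norm(centroids - pixel, axis=1).argmin()

# toy calibration: two well-separated color clusters
reds = np.array([[250.0, 10, 10], [240, 20, 15], [255, 5, 8]])
blues = np.array([[10.0, 20, 250], [15, 10, 240], [5, 25, 255]])
cents = kmeans(np.vstack([reds, blues]), k=2)
```

Re-running the clustering under new lighting conditions recalibrates the centroids, which is what makes the nearest-centroid classification robust to illumination changes.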
Abstract:
In conventional robot manipulator control, the desired path is specified in Cartesian space and converted to joint space through an inverse kinematics mapping. The joint references generated by this mapping are used for dynamic control in joint space. Thus, the end-effector position is, in fact, controlled indirectly, in open loop, and the accuracy of gripper position control depends directly on the accuracy of the available kinematic model. In this work, a new scheme for redundant manipulator kinematic control, based on visual servoing, is proposed. In the proposed system, a robot image acquired through a CCD camera is processed to compute the position and orientation of each link of the robot arm. The robot task is specified as a temporal sequence of reference images of the robot arm. Thus, both the measured pose and the reference pose are specified in the same image space, and their difference is used to generate a Cartesian-space error for kinematic control purposes. The proposed control scheme was applied to a four-degree-of-freedom planar redundant robot arm, and experimental results are shown.
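The redundant kinematic control loop described above can be sketched as a resolved-rate scheme: the Cartesian error is mapped to joint increments through the Jacobian pseudoinverse of a 4-DOF planar arm. The link lengths, initial configuration, target, and gain below are illustrative, not values from the thesis:

```python
import numpy as np

def planar_fk(q, lengths):
    """End-effector (x, y) of a planar serial arm with joint angles q."""
    angles = np.cumsum(q)                      # absolute link angles
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def planar_jacobian(q, lengths):
    """2 x n position Jacobian of the planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        # joint i moves all links from i onward
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    return J

def kinematic_step(q, target, lengths, gain=0.5):
    """One resolved-rate step: Cartesian error -> joint increments via
    the Jacobian pseudoinverse (redundant case: 4 joints, 2 task DOF)."""
    e = target - planar_fk(q, lengths)         # Cartesian error
    dq = gain * np.linalg.pinv(planar_jacobian(q, lengths)) @ e
    return q + dq

lengths = np.array([1.0, 1.0, 0.8, 0.6])       # illustrative link lengths
q = np.array([0.3, -0.2, 0.4, 0.1])
target = np.array([2.0, 1.0])
for _ in range(100):
    q = kinematic_step(q, target, lengths)
```

The pseudoinverse picks the minimum-norm joint increment among the infinitely many that realize the Cartesian correction, which is the standard way to resolve the redundancy.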
Abstract:
Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) started to supply mathematical tools for localization systems in robotics and augmented reality applications, in contrast to its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modeling. Accordingly, this work proposes a pipeline to obtain relative position using a previously calibrated camera as a positional sensor, based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
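A core SFM building block in a pipeline like the one above is triangulating matched features from two camera poses. A minimal linear (DLT) triangulation in NumPy, with synthetic camera matrices standing in for the thesis's calibrated setup:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the same 3D point
    Returns the 3D point in world coordinates.
    """
    # each view contributes two linear constraints on the homogeneous point
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                          # null-space vector of A
    return X[:3] / X[3]                 # dehomogenize

def project(P, X):
    """Project a 3D point with camera matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# synthetic setup: reference camera and a second camera translated along x
K = np.diag([500.0, 500.0, 1.0])        # illustrative intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noisy correspondences the SVD solution minimizes an algebraic error; full SFM pipelines typically refine it with bundle adjustment.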
Abstract:
The treatment of wastewaters contaminated with oil is of great practical interest and is fundamental in environmental issues. A relevant process that has been studied for the continuous treatment of oil-contaminated water is the equipment denominated MDIF® (a mixer-settler based on phase inversion). An important variable during the operation of the MDIF® is the water-solvent interface level in the separation section. Controlling this level is essential both to avoid dragging the solvent during water removal and to improve the efficiency of oil extraction by the solvent. In-line measurement of the oil-water interface level is still a hard task: there are few sensors able to measure it reliably, and for lab-scale systems there are no interface sensors of compatible dimensions. The objective of this work was to implement a control system for the organic solvent/water interface level in the MDIF® equipment. Detection of the interface level is based on the acquisition and treatment of images obtained dynamically through a standard camera (webcam). The control strategy was developed to operate in feedback mode: the level measurement obtained by image detection is compared to the desired level, and an action is taken on a control valve according to an implemented PID law. A control and data acquisition program was developed in Fortran to accomplish the following tasks: image acquisition; water-solvent interface identification; decision making and sending of control signals; and recording of data to files. Experimental open-loop runs were carried out on the MDIF®, applying random pulse disturbances to the input variable (water outlet flow). The interface level responses allowed process identification through transfer function models. From these models, the parameters of a PID controller were tuned by direct synthesis, and closed-loop tests were performed. Preliminary results for the feedback loop demonstrated that the sensor and the control strategy developed in this work are suitable for controlling the organic solvent-water interface level.
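The feedback loop described above (image-based level measurement, PID law, control valve) can be sketched in a few lines; the thesis's implementation is in Fortran, so this Python version, with an illustrative first-order plant and gains standing in for the identified transfer function model and direct-synthesis tuning, is only a sketch of the structure:

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_e = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt
        deriv = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.integral + self.kd * deriv

# illustrative first-order plant standing in for the identified
# interface-level dynamics: tau * dh/dt = -h + K * u
tau, K, dt = 5.0, 1.0, 0.1
level = 0.0                                    # measured interface level
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)       # illustrative gains
for _ in range(1000):
    u = pid.update(setpoint=1.0, measurement=level)  # valve command
    level += dt * (-level + K * u) / tau             # plant response
```

In the real loop the `measurement` would come from the image-processing step locating the water-solvent interface in each webcam frame, and `u` would drive the control valve.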
Abstract:
This work presents a cooperative navigation system for a humanoid robot and a wheeled robot using visual information, aiming to navigate the non-instrumented humanoid robot using information obtained from the instrumented wheeled robot. Although the humanoid has no sensors for navigation, it can be remotely controlled by infrared signals. Thus, the wheeled robot can control the humanoid by positioning itself behind it and, through visual information, finding and navigating it. The location of the wheeled robot is obtained by merging information from odometry and landmark detection using the Extended Kalman Filter. The landmarks are detected visually, and their features are extracted by image processing; the parameters obtained are used directly in the Extended Kalman Filter. Thus, while the wheeled robot locates and navigates the humanoid, it also simultaneously calculates its own location and maps the environment (SLAM). Navigation is carried out through heuristic algorithms based on the errors between the actual and desired poses of each robot. The main contribution of this work is the implementation of a cooperative navigation system for two robots based on visual information, which can be extended to other robotic applications, such as controlling robots without interfering with their hardware or attaching communication devices.
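The EKF landmark correction at the heart of the localization above can be sketched for a planar robot observing a single known landmark with a range-bearing measurement; the full thesis system also estimates landmark positions (SLAM), and all geometry and noise values below are illustrative:

```python
import numpy as np

def ekf_update(mu, Sigma, z, landmark, R):
    """EKF correction for a range-bearing observation of a known landmark.

    mu    : state estimate [x, y, theta]
    Sigma : 3x3 state covariance
    z     : measured [range, bearing]; R : 2x2 measurement noise
    """
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - mu[2]])    # expected measurement
    # Jacobian of the measurement model w.r.t. the state
    H = np.array([[-dx / r, -dy / r, 0.0],
                  [dy / q, -dx / q, -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi          # wrap bearing residual
    S = H @ Sigma @ H.T + R                              # innovation covariance
    Kk = Sigma @ H.T @ np.linalg.inv(S)                  # Kalman gain
    return mu + Kk @ y, (np.eye(3) - Kk @ H) @ Sigma

# illustrative numbers: drifted pose estimate, measurement from the true pose
true_pose = np.array([1.0, 2.0, 0.1])
mu = np.array([1.3, 1.7, 0.0])
Sigma = np.diag([0.5, 0.5, 0.2])
landmark = np.array([4.0, 5.0])
dx, dy = landmark - true_pose[:2]
z = np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - true_pose[2]])
mu2, Sigma2 = ekf_update(mu, Sigma, z, landmark, np.diag([0.01, 0.01]))
```

Between such corrections, a prediction step propagates the odometry model; in the SLAM case the landmark coordinates join the state vector and H gains corresponding columns.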