16 results for Robot vision
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection Using Affine Homography. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG. Anais... Juiz de Fora: CBA, 2008.
Abstract:
NOGUEIRA, Marcelo B.; MEDEIROS, Adelardo A. D.; ALSINA, Pablo J. Pose Estimation of a Humanoid Robot Using Images from an Mobile Extern Camera. In: IFAC WORKSHOP ON MULTIVEHICLE SYSTEMS, 2006, Salvador, BA. Anais... Salvador: MVS 2006, 2006.
Abstract:
BRITTO, Ricardo S.; MEDEIROS, Adelardo A. D.; ALSINA, Pablo J. Uma arquitetura distribuída de hardware e software para controle de um robô móvel autônomo. In: SIMPÓSIO BRASILEIRO DE AUTOMAÇÃO INTELIGENTE, 8., 2007, Florianópolis. Anais... Florianópolis: SBAI, 2007.
Abstract:
SANTANA, André M.; SOUZA, Anderson A. S.; BRITTO, Ricardo S.; ALSINA, Pablo J.; MEDEIROS, Adelardo A. D. Localization of a mobile robot based on odometry and natural landmarks using extended Kalman Filter. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.
Abstract:
The goal of this work is to propose a SLAM (Simultaneous Localization and Mapping) solution based on the Extended Kalman Filter (EKF) that enables a robot to navigate the environment using information from odometry and pre-existing lines on the floor. Initially, a segmentation step is necessary to classify parts of the image as floor or non-floor. The image processing then identifies floor lines, and the parameters of these lines are mapped to the world frame using a homography matrix. Finally, the identified lines are used in SLAM as landmarks in order to build a feature map. In parallel, using the corrected robot pose, the uncertainty about the pose, and the non-floor part of the image, it is possible to build an occupancy grid map and generate a metric map with a description of the obstacles. Greater autonomy for the robot is attained by using the two types of map obtained (the metric map and the feature map). Thus, it is possible to run path-planning tasks in parallel with localization and mapping. Practical results are presented to validate the proposal.
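As a rough illustration of the homography step described in this abstract, the sketch below (a simplification under assumed values, not the authors' implementation) maps two image points of a detected floor line to world coordinates with a fixed 3x3 homography and extracts the (rho, theta) line parameters that could serve as a landmark.

```python
import numpy as np

# Hypothetical homography from the image plane to the floor plane; in practice
# it would come from calibration. The values below are placeholders.
H = np.array([[0.01, 0.0,  -1.6],
              [0.0,  0.01, -1.2],
              [0.0,  0.0,   1.0]])

def to_world(pt_img, H):
    """Map an image point (u, v) to floor coordinates via the homography."""
    p = H @ np.array([pt_img[0], pt_img[1], 1.0])
    return p[:2] / p[2]

def line_landmark(p1_img, p2_img, H):
    """Return (rho, theta) of the floor line through two mapped image points."""
    p1, p2 = to_world(p1_img, H), to_world(p2_img, H)
    d = p2 - p1
    theta = np.arctan2(d[1], d[0]) + np.pi / 2.0   # direction of the line normal
    rho = p1[0] * np.cos(theta) + p1[1] * np.sin(theta)
    return rho, theta

print(line_landmark((120, 400), (300, 380), H))
```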
Abstract:
Large efforts have been made by the scientific community on tasks involving the locomotion of mobile robots. To execute this kind of task, we must give the robot the ability to navigate through the environment in a safe way, that is, without colliding with objects. In order to do this, it is necessary to implement strategies that make it possible to detect obstacles. In this work, we deal with this problem by proposing a system that is able to collect sensory information and to estimate the possibility of obstacles occurring in the mobile robot's path. Stereo cameras positioned parallel to each other in a structure coupled to the robot are employed as the main sensory device, making it possible to generate a disparity map. Code optimizations and a strategy for data reduction and abstraction are applied to the images, resulting in a substantial gain in execution time. This makes it possible for the high-level decision processes to execute obstacle avoidance in real time. This system can be employed in situations where the robot is remotely operated, as well as in situations where it depends only on itself to generate trajectories (the autonomous case).
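A minimal sketch of the kind of disparity-based obstacle test the abstract describes, assuming OpenCV block matching as a stand-in for the authors' pipeline; downsampling plays the role of the data-reduction step, and the threshold is an arbitrary placeholder.

```python
import cv2
import numpy as np

def coarse_disparity(left_path, right_path, scale=0.5):
    """Compute a block-matching disparity map on downsampled grayscale images.
    Parameters (scale, numDisparities, blockSize) are illustrative guesses."""
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    left = cv2.resize(left, None, fx=scale, fy=scale)
    right = cv2.resize(right, None, fx=scale, fy=scale)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return matcher.compute(left, right).astype(np.float32) / 16.0

def obstacle_mask(disparity, threshold=20.0):
    """Pixels with large disparity are close to the robot and flagged as obstacles."""
    return disparity > threshold
```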
Abstract:
In this work, we present a hardware-software architecture for controlling the autonomous mobile robot Kapeck. The hardware of the robot is composed of a set of sensors and actuators organized on a CAN bus. Two embedded computers and eight microcontroller-based boards are used in the system. One of the computers hosts the vision system, due to the significant processing needs of this kind of system. The other computer is used to coordinate and access the CAN bus and to accomplish the other activities of the robot. The microcontroller-based boards are used with the sensors and actuators. The robot has this distributed configuration in order to exhibit good real-time behavior, where the response time and the temporal predictability of the system are important. We adopted the hybrid deliberative-reactive paradigm in the proposed architecture to reconcile the reactive behavior of the sensor-actuator network and the deliberative activities required to accomplish more complex tasks.
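To make the coordinating-computer role concrete, here is a hedged sketch using the python-can library: a loop that reads hypothetical sensor frames from the bus and sends a simple reactive speed command back. The CAN identifiers, payload layout, and reactive rule are all assumptions, not the Kapeck protocol.

```python
import can

# Hypothetical CAN identifiers for one sensor board and one actuator board.
SONAR_ID = 0x101
MOTOR_ID = 0x201

def run(channel="can0"):
    """Poll sensor frames and echo a speed command over the CAN bus."""
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    try:
        while True:
            msg = bus.recv(timeout=1.0)              # blocking read with timeout
            if msg is None or msg.arbitration_id != SONAR_ID:
                continue
            distance_cm = int.from_bytes(msg.data[:2], "big")
            speed = 0 if distance_cm < 30 else 50    # crude reactive rule (assumed)
            bus.send(can.Message(arbitration_id=MOTOR_ID,
                                 data=[speed], is_extended_id=False))
    finally:
        bus.shutdown()
```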
Abstract:
This work introduces a new method for environment mapping with three-dimensional information obtained from visual information, intended for accurate robot navigation. Many approaches to 3-D mapping using occupancy grids typically require high computational effort to both build and store the map. We introduce a 2.5-D occupancy-elevation grid mapping, a discrete mapping approach in which each cell stores the occupancy probability, the height of the terrain at the corresponding place in the environment, and the variance of this height. This 2.5-dimensional representation allows a mobile robot to know whether a place in the environment is occupied by an obstacle and the height of this obstacle; thus, it can decide whether it is possible to traverse the obstacle. The sensory information necessary to construct the map is provided by a stereo vision system, which has been modeled with a robust probabilistic approach that considers the noise present in the stereo processing. The resulting maps favor the execution of tasks such as decision making in autonomous navigation, exploration, localization, and path planning. Experiments carried out with a real mobile robot demonstrate that the proposed approach yields useful maps for autonomous robot navigation.
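A generic sketch of the 2.5-D cell representation described above: each cell holds an occupancy value, a height, and a height variance, with the height fused by a one-dimensional Kalman-style update. The log-odds occupancy update, the initial variance, and the traversability threshold are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

class OccupancyElevationGrid:
    """Each cell keeps occupancy log-odds, a height estimate, and its variance."""

    def __init__(self, size_x, size_y):
        self.logodds = np.zeros((size_x, size_y))
        self.height = np.zeros((size_x, size_y))
        self.height_var = np.full((size_x, size_y), 1e3)   # start very uncertain

    def update(self, ix, iy, occupied_lo, z, z_var):
        # Occupancy: standard log-odds accumulation.
        self.logodds[ix, iy] += occupied_lo
        # Height: 1-D Kalman update fusing the new stereo measurement z.
        k = self.height_var[ix, iy] / (self.height_var[ix, iy] + z_var)
        self.height[ix, iy] += k * (z - self.height[ix, iy])
        self.height_var[ix, iy] *= (1.0 - k)

    def traversable(self, ix, iy, max_step=0.10):
        """A cell is traversable if likely free or if its obstacle is low enough."""
        return self.logodds[ix, iy] < 0.0 or self.height[ix, iy] < max_step
```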
Abstract:
This work proposes a method to determine the depth of objects in a scene using a combination of stereo vision and self-calibration techniques. By determining the relative distance between visualized objects and a robot equipped with a stereo head, it is possible to navigate in unknown environments. Stereo vision techniques supply a depth measure through the combination of two or more images of the same scene. To obtain depth estimates of the objects in the scene, a reconstruction of the scene geometry is necessary. Such a reconstruction requires the relationship between the three-dimensional world coordinates and the two-dimensional image coordinates. By obtaining the cameras' intrinsic parameters it is possible to relate these coordinate systems. These parameters can be obtained through geometric camera calibration, which is generally done by correlating image features of a calibration pattern with known dimensions. Camera self-calibration allows the intrinsic parameters to be obtained without using a known calibration pattern, making it possible to compute and update them while the robot moves through an unknown environment. In this work, a self-calibration method based on three-dimensional polar coordinates to represent image features is presented. This representation is determined by the relationship between image features and the horizontal and vertical opening angles of the cameras. Using the polar coordinates it is possible to geometrically reconstruct the scene. Through the combination of the proposed techniques it is possible to compute a depth estimate of the scene objects, allowing the robot to navigate in an unknown environment.
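For reference, the standard parallel-camera stereo relation behind the depth-estimation step is Z = f·B/d. The work itself uses a polar-coordinate formulation, so the snippet below is only the textbook relation, with example numbers that are placeholders.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two parallel cameras: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.12 m, disparity = 35 px  ->  Z = 2.4 m
print(depth_from_disparity(700.0, 0.12, 35.0))
```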
Abstract:
We propose a new approach to the reduction and abstraction of visual information for robot vision applications. Basically, we propose to use a multi-resolution representation in combination with a moving fovea to reduce the amount of information from an image. We introduce the mathematical formalization of the moving fovea approach and mapping functions that help to use this model. Two indexes (resolution and cost) are proposed that can be useful for choosing the model's variables. With this new theoretical approach, it is possible to apply several filters, to calculate disparity, and to obtain motion analysis in real time (less than 33 ms to process an image pair on a notebook with an AMD Turion Dual Core 2 GHz processor). As the main result, most of the time the moving fovea allows the robot to keep a possible region of interest visible in both images without physically moving its robotic devices. We validate the proposed model with experimental results.
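A toy sketch of the multi-resolution fovea idea, assuming concentric windows around the fovea center, each twice as large as the previous one and subsampled back to a common size so peripheral levels carry less detail. Window sizes, level count, and the naive subsampling are illustrative choices, not the paper's mapping functions.

```python
import numpy as np

def fovea_levels(image, center, num_levels=4, base_size=64):
    """Return multi-resolution views centered on the fovea position."""
    h, w = image.shape[:2]
    cy, cx = center
    levels = []
    for k in range(num_levels):
        half = (base_size << k) // 2                   # window grows per level
        y0, y1 = max(0, cy - half), min(h, cy + half)
        x0, x1 = max(0, cx - half), min(w, cx + half)
        crop = image[y0:y1, x0:x1]
        step = max(1, crop.shape[0] // base_size)      # naive subsampling factor
        levels.append(crop[::step, ::step])
    return levels

img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
views = fovea_levels(img, center=(240, 320))
print([v.shape for v in views])
```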
Abstract:
The development and refinement of techniques for simultaneous localization and mapping (SLAM) for an autonomous mobile robot, and the building of local 3-D maps from a sequence of images, are widely studied topics in the scientific community. This work presents a monocular visual SLAM technique based on the extended Kalman filter, which uses features found in a sequence of images using the SURF (Speeded Up Robust Features) descriptor and determines which features can be used as landmarks through a technique based on delayed initialization from 3-D straight lines. For this, only the coordinates of the features found in the image and the intrinsic and extrinsic camera parameters are available. It is possible to determine the position of the landmarks only when depth information is available. Tests have shown that, during the route, the mobile robot detects the presence of features in the images and, through the proposed technique for delayed initialization of landmarks, adds new landmarks to the state vector of the extended Kalman filter (EKF) after estimating the depth of the features. With the estimated position of the landmarks, it was possible to estimate the updated position of the robot at each step, obtaining good results that demonstrate the effectiveness of the monocular visual SLAM system proposed in this paper.
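As a hedged illustration of the bookkeeping involved in delayed initialization, the sketch below appends a newly initialized 3-D landmark to an EKF state vector and enlarges the covariance matrix. The fixed initial variance stands in for the uncertainty that would come from the depth-estimation step; it is not the authors' initialization scheme.

```python
import numpy as np

def add_landmark(x, P, landmark_xyz, init_var=1.0):
    """Append a 3-D landmark to the EKF state and grow the covariance matrix."""
    x_new = np.concatenate([x, np.asarray(landmark_xyz, dtype=float)])
    n, m = P.shape[0], 3
    P_new = np.zeros((n + m, n + m))
    P_new[:n, :n] = P
    P_new[n:, n:] = np.eye(m) * init_var      # placeholder initial uncertainty
    return x_new, P_new

# Robot pose (x, y, theta) plus one previously mapped landmark.
x = np.array([0.0, 0.0, 0.0, 2.0, 1.0, 0.5])
P = np.eye(6) * 0.1
x, P = add_landmark(x, P, (3.0, -1.0, 0.8))
print(x.shape, P.shape)   # (9,) (9, 9)
```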
Abstract:
This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the Cartesian 3-D position and orientation (pose) of the robot arm from a robot image obtained through a camera. A colored triangular label is placed on the robot manipulator's tool, and efficient heuristic rules are used to obtain the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system in the presence of light variations. The extrinsic camera parameters are computed from the image of four coplanar points whose Cartesian 3-D coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme consists of the difference between the desired tool pose and the actual tool pose. Gains are applied to the error signal and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator's Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
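A minimal sketch of the error-to-joint-increment mapping described in the last sentences, dq = J⁺(K·e). The 5-DOF arm is replaced here by a toy 2-link planar Jacobian so the snippet stays self-contained; the gain and joint values are placeholders.

```python
import numpy as np

def joint_increment(jacobian, pose_error, gain=0.5):
    """Map a Cartesian pose error to joint increments: dq = J^+ (K * e)."""
    return np.linalg.pinv(jacobian) @ (gain * pose_error)

def planar_jacobian(q1, q2):
    """Position Jacobian of a 2-link planar arm with unit link lengths."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

q = np.array([0.3, 0.6])
e = np.array([0.05, -0.02])        # desired minus measured tool position
print(joint_increment(planar_jacobian(*q), e))
```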
Abstract:
In conventional robot manipulator control, the desired path is specified in Cartesian space and converted to joint space through an inverse kinematics mapping. The joint references generated by this mapping are used for dynamic control in joint space. Thus, the end-effector position is, in fact, controlled indirectly, in open loop, and the accuracy of grip position control depends directly on the accuracy of the available kinematic model. In this report, a new scheme for redundant manipulator kinematic control, based on visual servoing, is proposed. In the proposed system, a robot image acquired through a CCD camera is processed in order to compute the position and orientation of each link of the robot arm. The robot task is specified as a temporal sequence of reference images of the robot arm. Thus, both the measured pose and the reference pose are specified in the same image space, and their difference is used to generate a Cartesian-space error for kinematic control purposes. The proposed control scheme was applied to a four-degree-of-freedom planar redundant robot arm, and experimental results are shown.
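Because the arm is redundant (four joints for a planar task), a common way to resolve the extra degrees of freedom is a pseudoinverse solution with a null-space term, dq = J⁺(K·e) + (I − J⁺J)z. The abstract does not state which redundancy-resolution rule is used, so this sketch is only one standard choice with placeholder numbers.

```python
import numpy as np

def redundant_increment(J, error, secondary, gain=0.5):
    """dq = J^+ (K e) + (I - J^+ J) z: the pseudoinverse term tracks the
    image-derived Cartesian error; the null-space term pursues a secondary
    objective (e.g. joint-limit avoidance) without disturbing the end-effector."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    return J_pinv @ (gain * error) + (np.eye(n) - J_pinv @ J) @ secondary

# 2-D task, 4 joints (planar redundant arm); values are illustrative only.
J = np.random.randn(2, 4)
print(redundant_increment(J, np.array([0.03, 0.01]), secondary=np.zeros(4)))
```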
Abstract:
A challenge that remains in the robotics field is how to make a robot react in real time to visual stimuli. Traditional computer vision algorithms used to overcome this problem are still very expensive, taking too long when running on common computer processors. Even very simple algorithms, like image filtering or mathematical morphology operations, may take too long. Researchers have implemented image processing algorithms in highly parallel hardware devices in order to cut down the time spent processing the algorithms, with good results. By using hardware-implemented image processing techniques and a platform-oriented system that uses the Nios II processor, we propose an approach that uses hardware processing and event-based programming to simplify vision-based systems while at the same time accelerating some parts of the algorithms used.
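For context, a software reference of one of the simple operations the abstract mentions as candidates for hardware offloading, binary erosion with a 3x3 structuring element; this is only a comparison baseline, not the hardware design itself.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def erode(binary_image):
    """Binary erosion with a 3x3 structuring element (software reference)."""
    return binary_erosion(binary_image, structure=np.ones((3, 3), dtype=bool))

img = np.zeros((8, 8), dtype=bool)
img[2:6, 2:6] = True
print(erode(img).astype(int))   # only the 2x2 interior of the block survives
```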