983 results for Robot Vision


Relevance:

40.00%

Publisher:

Abstract:

This paper presents a completely autonomous solution to participate in the Indoor Challenge of the 2013 International Micro Air Vehicle Competition (IMAV 2013). Our proposal is a multi-robot system with no centralized coordination whose robotic agents share their position estimates. The capability of each agent to navigate while avoiding collisions is a consequence of the resulting emergent behavior. Each agent consists of a ground station running an instance of the proposed architecture that communicates over WiFi with an AR Drone 2.0 quadrotor. Visual markers are employed to sense and map obstacles and to improve the pose estimation based on Inertial Measurement Unit (IMU) and ground optical flow data. Based on our architecture, each robotic agent can navigate while avoiding obstacles and other members of the multi-robot system. The solution is demonstrated and the achieved navigation performance is evaluated by means of experimental flights. This work also analyzes the capabilities of the presented solution in simulated flights of the IMAV 2013 Indoor Challenge. The CVG UPM team was awarded the First Prize in the Indoor Autonomy Challenge of the IMAV 2013 competition.
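As a hedged illustration of the marker-based correction described above (the paper's actual estimator is not given in the abstract, so the blending scheme, names and gain below are assumptions), a dead-reckoned position from IMU and optical-flow data can be pulled toward the absolute fix implied by sighting a marker of known position:

import numpy as np

def correct_pose(pred_pos, marker_rel, marker_world, alpha=0.3):
    """Blend a dead-reckoned position (IMU + optical flow) with the
    absolute position implied by a visual marker of known location.

    pred_pos     : (2,) predicted robot position in the world frame
    marker_rel   : (2,) marker position measured relative to the robot,
                   already rotated into world axes for simplicity
    marker_world : (2,) known world position of the marker
    alpha        : blending gain; 0 trusts prediction, 1 trusts marker
    """
    pos_from_marker = np.asarray(marker_world) - np.asarray(marker_rel)
    return (1 - alpha) * np.asarray(pred_pos) + alpha * pos_from_marker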

Relevance:

40.00%

Publisher:

Abstract:

This paper describes the real-time global vision system for the robot soccer team the RoboRoos. It has a highly optimised pipeline that includes thresholding, segmenting, colour normalising, object recognition, and perspective and lens correction. It has a fast ‘paint’ colour calibration system that can calibrate in any face of the YUV or HSI cube. It also autonomously selects both an appropriate camera gain and colour gains from robot regions across the field to achieve colour uniformity. Camera geometry calibration is performed automatically from a selection of keypoints on the field. The system achieves a position accuracy of better than 15 mm over a 4 m × 5.5 m field, and orientation accuracy to within 1°. It processes 614 × 480 pixels at 60 Hz on a 2.0 GHz Pentium 4 microprocessor.
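A rough sketch of the thresholding stage of such a pipeline is given below; a precomputed lookup table is a common implementation choice for this kind of system rather than a detail taken from the paper, and all names are illustrative. Once a calibration step (such as the ‘paint’ tool described) has filled the table, every pixel is classified with a single indexing operation:

import numpy as np

# lut[y, u, v] -> colour class index (field, ball, team patches, ...),
# populated offline by the colour calibration step
lut = np.zeros((256, 256, 256), dtype=np.uint8)

def classify(frame_yuv):
    """Map each YUV pixel of an (H, W, 3) uint8 frame to a colour
    class with one table lookup per pixel."""
    y, u, v = frame_yuv[..., 0], frame_yuv[..., 1], frame_yuv[..., 2]
    return lut[y, u, v]  # (H, W) array of class labels

The segmenting stage would then group connected pixels of equal class into regions before object recognition.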

Relevance:

30.00%

Publisher:

Abstract:

For robots to operate in human environments they must be able to make their own maps because it is unrealistic to expect a user to enter a map into the robot’s memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and they have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, with the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those that have been reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the solution using a camera is significantly cheaper and is more appropriate for toys and early domestic robots.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a method for vision-only topological simultaneous localisation and mapping (SLAM). Our approach does not use motion or odometric information, only a sequence of colour histograms from visited places. In particular, we address the perceptual aliasing problem which occurs when only external observations are used in topological navigation. We propose a Bayesian inference method to incrementally build a topological map by inferring spatial relations from the sequence of observations while simultaneously estimating the robot's location. The algorithm aims to build a small map which is consistent with local adjacency information extracted from the sequence of measurements. Local adjacency information is incorporated to disambiguate places which would otherwise appear to be the same. Experiments in an indoor environment show that the proposed technique is capable of dealing with perceptual aliasing using visual observations only and successfully performs topological SLAM.
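A minimal sketch of the appearance measure involved, assuming a normalised 3D colour histogram per image and histogram intersection as the similarity (the paper's actual measure and threshold may differ):

import numpy as np

def colour_histogram(image_rgb, bins=8):
    """Normalised 3D colour histogram of an (H, W, 3) uint8 image."""
    hist, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist / hist.sum()

def similar(h1, h2, threshold=0.8):
    """Histogram intersection lies in [0, 1]; 1 means identical."""
    return np.minimum(h1, h2).sum() >= threshold

Perceptual aliasing shows up exactly when similar() returns True for histograms taken at two different places, which is why the adjacency information in the sequence is needed to tell them apart.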

Relevance:

30.00%

Publisher:

Abstract:

To navigate successfully in a previously unexplored environment, a mobile robot must be able to estimate the spatial relationships of the objects of interest accurately. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to incrementally build a map of its surroundings and to localize itself in the map simultaneously. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is the panoramic view of all the landmarks in the scene. Placing landmarks in a lawn field to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot's pose and a sensor model for updating the pose. In the motion model, the error on the estimates of object positions accumulates, due mainly to wheel slippage. Quantifying the uncertainty of object positions accurately is a fundamental requirement. In bearing-only SLAM, the Probability Density Function (PDF) of landmark position should be uniform along the observed bearing. Existing methods that approximate the PDF with a Gaussian estimate do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address these problems. The main novel contributions of this thesis are:

1. A bearing-only SLAM method that does not require odometry. The proposed method relies solely on the sensor model (landmark bearings only) without relying on the motion model (odometry). The uncertainty of the estimated landmark positions depends on the vision error only, instead of the combination of both odometry and vision errors.

2. The transformation of the spatial uncertainty of objects. This thesis introduces a novel method for translating the spatial uncertainty of objects estimated from a moving frame attached to the robot into the global frame attached to the static landmarks in the environment.

3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, and the marginal probability on range is constrained to be uniform. Compared to a PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can easily be adopted in a probabilistic framework, such as a particle filtering system.

The main advantages of the proposed bearing-only SLAM system are its lower production cost and flexibility of use. The proposed system can be adopted in other domestic robots as well, such as vacuum cleaners or robotic toys, where the terrain is essentially 2D.
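As an illustration of the uniformity requirement in contribution 3, landmark hypotheses can be drawn along the observed bearing with a uniform marginal on range. This sampling sketch is an illustration under assumed parameters (r_max, bearing noise), not the construction used in the thesis:

import numpy as np

rng = np.random.default_rng(0)

def init_landmark_particles(robot_pose, bearing, n=500,
                            r_max=10.0, sigma_bearing=0.02):
    """Sample (x, y) landmark hypotheses with range ~ Uniform(0, r_max)
    and bearing ~ Normal(observed bearing, sigma_bearing)."""
    x, y, theta = robot_pose
    r = rng.uniform(0.0, r_max, n)                  # uniform range marginal
    b = bearing + rng.normal(0.0, sigma_bearing, n)
    return np.column_stack((x + r * np.cos(theta + b),
                            y + r * np.sin(theta + b)))

A single Gaussian over (x, y) cannot reproduce this shape, which is the uniformity problem the thesis raises with existing approximations.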

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system. The robot navigates using solely information from the bearing sensor space.

Most existing robot navigation systems require a ground frame (2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. Commonly used sensors such as laser range scanners, sonar, infrared, and vision do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process.

Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering.

This thesis introduces a navigation method based on reflex actions in sensor space. The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterized pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles. The trajectories of a robot using reflex navigation, like other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimized, not the distance moved on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot, and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and with experiments on a real robot.
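A minimal sketch of the reflex action under stated assumptions (a numerical gradient and a fixed gain, neither claimed to match the thesis): gradient descent on the squared distance in bearing space between the current and target sensor vectors.

import numpy as np

def bearings(pos, landmarks):
    """Sensor vector: world-frame bearing from pos to each landmark."""
    d = landmarks - pos
    return np.arctan2(d[:, 1], d[:, 0])

def ang_diff(a, b):
    """Smallest signed angular difference a - b."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def reflex_step(pos, target, landmarks, gain=0.5, eps=1e-4):
    """One gradient-descent step on E(pos) = ||s(pos) - s*||^2,
    with the gradient estimated by central differences."""
    def cost(p):
        return (ang_diff(bearings(p, landmarks), target) ** 2).sum()
    ex, ey = np.array([eps, 0.0]), np.array([0.0, eps])
    grad = np.array([(cost(pos + ex) - cost(pos - ex)) / (2 * eps),
                     (cost(pos + ey) - cost(pos - ey)) / (2 * eps)])
    return pos - gain * grad

Iterating reflex_step drives the observed bearings toward the stored target bearings without ever computing a Cartesian goal, which is the essence of navigating in sensor space.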

Relevance:

30.00%

Publisher:

Abstract:

Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive.

This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well suited for basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive mobile robots for the mass market, we propose to characterise key places of the environment based on their visual appearance through colour histograms. The approach for representing places using visual appearance is based on the fact that colour histograms change slowly as the field of vision sweeps the scene when a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate in experiments using an indoor data set that a topological map in which places are characterised using visual appearance, augmented with metric clues, provides sufficient information to perform continuous metric localisation which is robust to the kidnapped robot problem.

Many topological mapping methods build a topological map by clustering visual observations to places. However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping. We propose to incorporate neighbourhood relations for disambiguating places which are otherwise indistinguishable. We present a constraint-based stochastic local search method which integrates the approach for place disambiguation in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method to reliably localise the robot while building a topological map.

Relevance:

30.00%

Publisher:

Abstract:

This paper, which serves as an introduction to the mini-symposium on Real-Time Vision, Tracking and Control, provides a broad sketch of visual servoing, the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved.
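For readers who want the central equations, the classical image-based formulation surveyed by the column (standard in the visual servoing literature rather than specific to this paper) encodes the task as an image-feature error that is driven to zero exponentially:

\[
  e(t) = s(t) - s^{*}, \qquad
  v_c = -\lambda \, \widehat{L}_e^{+} \, e ,
\]

where s collects the measured image features, s^{*} their desired values, v_c is the commanded camera velocity, \lambda > 0 is a gain, and \widehat{L}_e^{+} is the pseudo-inverse of an estimate of the interaction matrix L_e that relates feature motion to camera motion through \dot{s} = L_e v_c.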

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the implementation of a modified particle filter for vision-based simultaneous localization and mapping of an autonomous robot in a structured indoor environment. With this method, artificial landmarks such as multi-coloured cylinders can be tracked with a camera mounted on the robot, and the position of the robot can be estimated at the same time. Experimental results in simulation and in real environments show that this approach has advantages over the extended Kalman filter under ambiguous data association and various levels of odometric noise.
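A hedged sketch of the measurement update in a landmark-based particle filter follows; the paper's specific modifications are not reproduced, and the noise parameters and names are assumptions:

import numpy as np

rng = np.random.default_rng(0)

def update(particles, weights, z_range, z_bearing, landmark,
           sigma_r=0.1, sigma_b=0.05):
    """Reweight (x, y, theta) particles by the likelihood of a
    range/bearing sighting of a known landmark, then resample."""
    dx = landmark[0] - particles[:, 0]
    dy = landmark[1] - particles[:, 1]
    r_hat = np.hypot(dx, dy)
    b_hat = np.arctan2(dy, dx) - particles[:, 2]
    db = (z_bearing - b_hat + np.pi) % (2 * np.pi) - np.pi
    w = weights * np.exp(-0.5 * ((z_range - r_hat) / sigma_r) ** 2
                         - 0.5 * (db / sigma_b) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

Because coloured cylinders can be confused with one another, the data association is ambiguous, which is exactly the regime where a particle filter's multi-hypothesis nature helps over an extended Kalman filter.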

Relevance:

30.00%

Publisher:

Abstract:

This paper describes an autonomous docking system and web interface that allow long-term unaided use of a sophisticated robot by untrained web users. These systems have been applied to the biologically inspired RatSLAM system as a foundation for testing both its long-term stability and its practicality. While docking and web interface systems already exist, this system allows for a significantly larger margin of error in docking accuracy due to its mechanical design, thereby increasing robustness against navigational errors. In addition, a standard vision sensor is used for both long-range and short-range docking, in contrast to the many systems that require both omni-directional cameras and high-resolution laser range finders for navigation. The web interface has been designed to accommodate the significant delays experienced on the Internet, and to facilitate the non-Cartesian operation of the RatSLAM system.

Relevance:

30.00%

Publisher:

Abstract:

Simultaneous Localization And Mapping (SLAM) is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. This paper presents a method for generating approximate rotational and translational velocity information from a single vehicle-mounted consumer camera, without the computationally expensive process of tracking landmarks. The method is tested by employing it to provide the odometric and visual information for the RatSLAM system while mapping a complex suburban road network. RatSLAM generates a coherent map of the environment during an 18 km trip through suburban traffic at speeds of up to 60 km/h. This result demonstrates the potential of ground-based vision-only SLAM using low-cost sensing and computational hardware.
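One simple way to obtain approximate rotational information without tracking landmarks, consistent in spirit with the method described although the paper's exact procedure is not reproduced here, is to collapse consecutive frames to one-dimensional intensity profiles and find the horizontal shift that best aligns them:

import numpy as np

def profile(gray_frame):
    """Collapse a greyscale frame to a 1D column-intensity profile."""
    return gray_frame.mean(axis=0)

def best_shift(prev_profile, curr_profile, max_shift=40):
    """Pixel shift minimising the mean absolute profile difference;
    wrap-around at the image edges is tolerated in this sketch."""
    shifts = range(-max_shift, max_shift + 1)
    errs = [np.abs(np.roll(curr_profile, s) - prev_profile).mean()
            for s in shifts]
    return list(shifts)[int(np.argmin(errs))]

# rotation per frame ~ best_shift * (horizontal field of view / width)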

Relevance:

30.00%

Publisher:

Abstract:

In this column, Dr. Peter Corke of CSIRO, Australia, gives us a description of MATLAB Toolboxes he has developed. He has been passionately developing tools to enable students and teachers to better understand the theoretical concepts behind classical robotics and computer vision through easy and intuitive simulation and visualization. The results of this labor of love have been packaged as MATLAB Toolboxes: the Robotics Toolbox and the Vision Toolbox. –Daniela Rus, RAS Education Cochair

Relevance:

30.00%

Publisher:

Abstract:

We present a technique for high-dynamic range stereo for outdoor mobile robot applications. Stereo pairs are captured at a number of different exposures (exposure bracketing), and combined by projecting the 3D points into a common coordinate frame, and building a 3D occupancy map. We present experimental results for static scenes with constant and dynamic lighting as well as outdoor operation with variable and high contrast lighting conditions.
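A minimal sketch of the fusion step, assuming each exposure yields a stereo point cloud with a known pose into the common frame (voxel size, grid extent and names are illustrative assumptions):

import numpy as np

VOXEL = 0.05  # metres per voxel side

def fuse(point_clouds, poses, grid_shape=(200, 200, 40)):
    """Accumulate stereo points from several exposures into one
    occupancy grid. point_clouds: list of (N, 3) arrays; poses: list
    of 4x4 transforms into the common world frame."""
    occ = np.zeros(grid_shape, dtype=np.uint16)
    for pts, T in zip(point_clouds, poses):
        homo = np.hstack((pts, np.ones((len(pts), 1))))
        world = (T @ homo.T).T[:, :3]
        idx = np.floor(world / VOXEL).astype(int)
        ok = np.all((idx >= 0) & (idx < grid_shape), axis=1)
        np.add.at(occ, tuple(idx[ok].T), 1)  # count hits per voxel
    return occ

Bracketing exposures means each part of the scene is well exposed in at least one image, so the combined grid covers both the shadowed and the brightly lit regions.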

Relevance:

30.00%

Publisher:

Abstract:

Performing reliable localisation and navigation within highly unstructured underwater coral reef environments is a difficult task at the best of times. Typical research and commercial underwater vehicles use expensive acoustic positioning and sonar systems which require significant external infrastructure to operate effectively. This paper is focused on the development of a robust vision-based motion estimation technique using low-cost sensors for performing real-time autonomous and untethered environmental monitoring tasks in the Great Barrier Reef without the use of acoustic positioning. The technique is experimentally shown to provide accurate odometry and terrain profile information suitable for input into the vehicle controller to perform a range of environmental monitoring tasks.
