71 results for slam
Abstract:
This paper describes the current state of RatSLAM, a Simultaneous Localisation and Mapping (SLAM) system based on models of the rodent hippocampus. RatSLAM uses a competitive attractor network to fuse visual and odometry information. Energy packets in the network represent pose hypotheses, which are updated by odometry and can be enhanced or inhibited by visual input. This paper shows the effectiveness of the system in real robot tests in unmodified indoor environments using a learning vision system. Results are shown for two test environments: a large corridor loop and the complete floor of an office building.
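The fusion mechanism lends itself to a compact illustration. Below is a minimal sketch of a one-dimensional competitive attractor over pose hypotheses, assuming a ring of pose cells, a Gaussian excitation kernel and invented energy constants; RatSLAM's actual network is three-dimensional, so this is not the paper's implementation, only a picture of how odometry shifts the energy packet while visual input injects energy at a learned cell.

```python
import numpy as np

N = 100                          # number of pose cells around the ring
activity = np.zeros(N)
activity[0] = 1.0                # initial pose hypothesis

# Local excitation kernel: each cell excites its neighbours.
kernel = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)

def step(activity, odometry_shift, visual_cell=None, visual_energy=0.2):
    # Path integration: odometry shifts the energy packet around the ring.
    a = np.roll(activity, odometry_shift)
    # Wrap-around convolution spreads each packet to neighbouring cells.
    a = np.convolve(np.concatenate([a[-3:], a, a[:3]]), kernel, mode="same")[3:-3]
    # Visual calibration: inject energy at the cell linked to the current view.
    if visual_cell is not None:
        a[visual_cell] += visual_energy
    # Global inhibition and normalisation keep one dominant packet.
    a = np.maximum(a - 0.1 * a.sum() / N, 0.0)
    return a / a.sum()

for t in range(10):
    activity = step(activity, odometry_shift=1,
                    visual_cell=5 if t == 9 else None)
print("pose estimate (cell index):", activity.argmax())
```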
Abstract:
RatSLAM is a system for vision-based Simultaneous Localisation and Mapping (SLAM) inspired by models of the rodent hippocampus. The system can produce stable representations of large, complex environments during robot experiments both indoors and outdoors. These representations are both topological and metric in nature, and can involve multiple representations of the same place as well as discontinuities. In this paper we describe a new technique known as experience mapping that can be used online with the RatSLAM system to produce world representations known as experience maps. These maps group together multiple place representations and are spatially continuous. A number of experiments have been conducted in simulation and a real world office environment. These experiments demonstrate the high degree to which experience maps are representative of the spatial arrangement of the environment.
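As a rough illustration of the idea, the sketch below relaxes a small graph of experiences whose links store odometric offsets, pulling the layout toward spatial consistency after a loop closure. The update rule, rate and toy graph are our own illustrative choices, not the paper's algorithm.

```python
import numpy as np

# Experiences are graph nodes with estimated 2-D positions.
positions = {0: np.array([0.0, 0.0]),
             1: np.array([2.0, 0.1]),
             2: np.array([2.1, 2.0])}
# Each link: (from, to, relative offset measured by odometry).
links = [(0, 1, np.array([2.0, 0.0])),
         (1, 2, np.array([0.0, 2.0])),
         (2, 0, np.array([-2.0, -2.0]))]   # loop closure

def relax(positions, links, rate=0.25, iters=50):
    for _ in range(iters):
        for i, j, offset in links:
            # Discrepancy between the stored offset and the current layout.
            error = (positions[j] - positions[i]) - offset
            # Move both endpoints so they share the correction.
            positions[i] = positions[i] + rate * 0.5 * error
            positions[j] = positions[j] - rate * 0.5 * error
    return positions

for k, p in relax(positions, links).items():
    print(k, np.round(p, 3))
```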
Abstract:
RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one-to-one correspondence between the map and environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is weakly dependent on either parameter in isolation, but strongly dependent on their joint values.
Abstract:
This paper investigates the use of the FAB-MAP appearance-only SLAM algorithm as a method for performing visual data association for RatSLAM, a semi-metric full SLAM system. While both systems have shown the ability to map large (60-70km) outdoor locations of approximately the same scale, over larger areas or longer time periods both algorithms encounter difficulties with false positive matches. By combining these algorithms using a mapping between appearance and pose space, both false positives and false negatives generated by FAB-MAP are significantly reduced during outdoor mapping using a forward-facing camera. The hybrid FAB-MAP-RatSLAM system developed demonstrates the potential for successful SLAM over long periods of time.
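The pose-space gating idea can be sketched simply: an appearance match is accepted only if the stored pose of the matched place is plausible given the current semi-metric pose estimate. The thresholds, data structures and names below are invented for illustration, not the hybrid system's actual interface.

```python
import math

def gate_matches(appearance_matches, pose_estimate, pose_of_place,
                 max_jump_m=10.0, min_prob=0.98):
    """Keep an appearance match only if its stored pose is plausible
    given the current semi-metric pose estimate."""
    accepted = []
    for place_id, prob in appearance_matches:
        if prob < min_prob:
            continue                      # reject weak appearance evidence
        px, py = pose_of_place[place_id]
        dx, dy = px - pose_estimate[0], py - pose_estimate[1]
        if math.hypot(dx, dy) <= max_jump_m:
            accepted.append(place_id)     # appearance and pose agree
    return accepted

poses = {0: (0.0, 0.0), 1: (120.0, 5.0)}
# Place 1 looks identical but lies 120 m away, so it is filtered out.
print(gate_matches([(0, 0.99), (1, 0.99)], (2.0, 1.0), poses))
```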
Abstract:
Calibration of movement tracking systems is a difficult problem faced by both animals and robots. The ability to continuously calibrate changing systems is essential for animals as they grow or are injured, and highly desirable for robot control or mapping systems due to the possibility of component wear, modification, damage and their deployment on varied robotic platforms. In this paper we use inspiration from the animal head direction tracking system to implement a self-calibrating, neurally-based robot orientation tracking system. Using real robot data we demonstrate how the system can remove tracking drift and learn to consistently track rotation over a large range of velocities. The neural tracking system provides the first steps towards a fully neural SLAM system with improved practical applicability through self-tuning and adaptation.
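A minimal sketch of the self-calibration idea: an online gain on raw angular-velocity readings is adapted whenever an absolute heading fix (for example, from vision) arrives. The learning rule and constants are ours, not the paper's neural implementation.

```python
class HeadingTracker:
    def __init__(self, gain=1.0, lr=0.05):
        self.heading = 0.0   # radians
        self.gain = gain     # corrects systematic rate-sensor scale error
        self.lr = lr

    def integrate(self, raw_rate, dt):
        self.heading += self.gain * raw_rate * dt

    def calibrate(self, observed_heading, raw_rate, dt):
        # The gap between predicted and observed heading drives gain
        # adaptation, so tracking stays consistent across velocities.
        error = observed_heading - self.heading
        if abs(raw_rate * dt) > 1e-6:
            self.gain += self.lr * error / (raw_rate * dt)
        self.heading = observed_heading

tracker = HeadingTracker(gain=0.8)        # start mis-calibrated
true_rate = 0.1                           # rad/s, constant for the demo
for step in range(200):
    tracker.integrate(true_rate, dt=1.0)
    if step % 20 == 19:                   # periodic visual heading fix
        tracker.calibrate(observed_heading=true_rate * (step + 1),
                          raw_rate=true_rate, dt=1.0)
print("learned gain:", round(tracker.gain, 3))   # converges toward 1.0
```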
Abstract:
The implementation of a robotic security solution generally requires one algorithm to route the robot around the environment and another algorithm to perform anomaly detection. Solutions to the routing problem require the robot to have a good estimate of its own pose. We present a novel security system that uses metrics generated by the localisation algorithm to perform adaptive anomaly detection. The localisation algorithm is a vision-based SLAM solution called RatSLAM, based on mechanisms within the hippocampus. The anomaly detection algorithm is based on the mechanisms used by the immune system to identify threats to the body. The system is explored using data gathered within an unmodified office environment. It is shown that the algorithm successfully reacts to the presence of people and objects in areas where they are not usually present and is tolerised against the presence of people in environments that are usually dynamic.
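As an illustration of place-conditioned tolerisation, the sketch below keeps running statistics of a localisation novelty metric per mapped place and alarms only when the current value is unusual for that place, so habitually dynamic areas build tolerance. The metric, thresholds and update rule are assumptions, not the paper's immune-inspired mechanism.

```python
from collections import defaultdict
import math

class PlaceAnomalyDetector:
    def __init__(self, z_threshold=3.0):
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # n, mean, M2
        self.z = z_threshold

    def update(self, place_id, metric):
        n, mean, m2 = self.stats[place_id]
        anomalous = False
        if n > 10:                                   # enough history to judge
            std = math.sqrt(m2 / (n - 1))
            anomalous = std > 0 and abs(metric - mean) / std > self.z
        # Welford's online update: even anomalies slowly tolerise the place.
        n += 1
        delta = metric - mean
        mean += delta / n
        m2 += delta * (metric - mean)
        self.stats[place_id] = [n, mean, m2]
        return anomalous

det = PlaceAnomalyDetector()
for i in range(50):
    det.update("corridor", 0.1 + 0.005 * (i % 3))   # quiet corridor
print(det.update("corridor", 0.9))                  # person appears: True
```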
Abstract:
The RatSLAM system can perform vision based SLAM using a computational model of the rodent hippocampus. When the number of pose cells used to represent space in RatSLAM is reduced, artifacts are introduced that hinder its use for goal directed navigation. This paper describes a new component for the RatSLAM system called an experience map, which provides a coherent representation for goal directed navigation. Results are presented for two sets of real world experiments, including comparison with the original goal memory system's performance in the same environment. Preliminary results are also presented demonstrating the ability of the experience map to adapt to simple short term changes in the environment.
Abstract:
RatSLAM is a system for vision based Simultaneous Localization and Mapping (SLAM) that has been shown to be capable of building stable representations of real world environments. In this paper we describe a method for using RatSLAM representations as the basis for navigation to designated goal locations. The method uses a new component, goal memory, to learn the temporal gradient between places. Paths are recalled or inferred from the goal memory by following the temporal gradient from the robot’s current position to the goal location. Experimental results have been gathered in a combined office and laboratory environment using a Pioneer robot. The experiments show that the robot can perform vision based SLAM on-line and in real time, and then use those representations immediately to navigate directly to designated goal locations.
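The temporal-gradient idea can be sketched with an ordinary shortest-time computation: transitions between places store travel times, and a path is recalled by repeatedly stepping to the neighbour with the smallest time-to-goal. The toy graph, place names and use of Dijkstra's algorithm are illustrative stand-ins for the goal memory described above.

```python
import heapq

transitions = {                        # place -> [(neighbour, seconds)]
    "lab":      [("corridor", 5.0)],
    "corridor": [("lab", 5.0), ("office", 8.0), ("kitchen", 6.0)],
    "office":   [("corridor", 8.0)],
    "kitchen":  [("corridor", 6.0)],
}

def time_to_goal(goal):
    # Dijkstra from the goal gives each place its temporal distance.
    dist = {goal: 0.0}
    heap = [(0.0, goal)]
    while heap:
        d, place = heapq.heappop(heap)
        if d > dist.get(place, float("inf")):
            continue
        for nbr, t in transitions[place]:
            if d + t < dist.get(nbr, float("inf")):
                dist[nbr] = d + t
                heapq.heappush(heap, (d + t, nbr))
    return dist

def recall_path(start, goal):
    dist = time_to_goal(goal)
    path = [start]
    while path[-1] != goal:            # always step down the temporal gradient
        nxt = min(transitions[path[-1]], key=lambda nt: dist[nt[0]] + nt[1])
        path.append(nxt[0])
    return path

print(recall_path("office", "kitchen"))   # office -> corridor -> kitchen
```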
Abstract:
This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to each landmark, this system maps views to poses. The challenge is to build a system that returns the same view for small changes in robot pose, but different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
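A rough sketch of the filtering-and-pooling idea: frames are average-pooled into coarse normalised signatures, matched against stored templates by mean absolute difference, and a new template is learned when nothing matches. The pooling size and match threshold are invented, not the paper's values.

```python
import numpy as np

def signature(frame, pool=8):
    h, w = frame.shape[0] // pool * pool, frame.shape[1] // pool * pool
    f = frame[:h, :w].astype(np.float32)
    # Average pooling makes the signature stable under small pose changes.
    f = f.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    return (f - f.mean()) / (f.std() + 1e-6)   # normalise against lighting

class TemplateBank:
    def __init__(self, threshold=0.45):
        self.templates, self.threshold = [], threshold

    def match_or_learn(self, frame):
        sig = signature(frame)
        for i, t in enumerate(self.templates):
            if np.abs(sig - t).mean() < self.threshold:
                return i                        # recalled an existing view
        self.templates.append(sig)
        return len(self.templates) - 1          # learned a new view

bank = TemplateBank()
rng = np.random.default_rng(0)
view = rng.integers(0, 255, (120, 160))
print(bank.match_or_learn(view))                             # 0: new template
print(bank.match_or_learn(view + rng.integers(-5, 5, view.shape)))  # 0 again
```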
Abstract:
In this article some basic laboratory bench experiments are described that are useful for teaching high school students some of the basic principles of stellar astrophysics. For example, in one experiment, students slam a plastic water-filled bottle down onto a bench, ejecting water towards the ceiling, illustrating the physics associated with a Type II supernova explosion. In another experiment, students roll marbles up and down a double ramp in an attempt to get a marble to enter a tube halfway up the slope, which illustrates quantum tunnelling in stellar cores. The experiments are reasonably low cost to either purchase or manufacture.
Abstract:
In this paper, we present recent results on using range from radio for mobile robot localization. In previous work we have shown how range readings from radio tags placed in the environment can be used to localize a robot. We have extended that work to consider robustness. Specifically, we are interested in the case where range readings are very noisy and only intermittently available. We also consider the case where the locations of the radio tags are not known ahead of time and must be solved for simultaneously with the position of the moving robot. We present results from a mobile robot, equipped with GPS for ground truth, operating over several km.
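To illustrate the range-only estimation problem, the sketch below recovers an unknown tag position from noisy, intermittent ranges taken along a known robot path by gradient descent on the squared range residuals. The paper solves robot and tag positions jointly; this one-sided version, with invented values, only shows the residual being minimised.

```python
import numpy as np

tag_true = np.array([4.0, 3.0])
robot_path = np.array([[float(t), 0.0] for t in range(10)])
rng = np.random.default_rng(1)
# Intermittent, noisy ranges: only every other pose yields a reading.
ranges = {i: np.linalg.norm(robot_path[i] - tag_true) + rng.normal(0, 0.1)
          for i in range(0, 10, 2)}

# A straight path leaves a mirror ambiguity, so start the guess off-axis.
tag = np.array([1.0, 1.0])
for _ in range(500):                    # gradient descent on squared residuals
    grad = np.zeros(2)
    for i, r in ranges.items():
        diff = tag - robot_path[i]
        d = np.linalg.norm(diff)
        if d > 1e-9:
            grad += 2.0 * (d - r) * diff / d
    tag -= 0.05 * grad / len(ranges)
print("estimated tag position:", np.round(tag, 2))   # close to (4, 3)
```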
Abstract:
The work presents a new approach to the problem of simultaneous localization and mapping (SLAM), inspired by computational models of the hippocampus of rodents. The rodent hippocampus has been extensively studied with respect to navigation tasks, and displays many of the properties of a desirable SLAM solution. RatSLAM is an implementation of a hippocampal model that can perform SLAM in real time on a real robot. It uses a competitive attractor network to integrate odometric information with landmark sensing to form a consistent representation of the environment. Experimental results show that RatSLAM can operate with ambiguous landmark information and recover from both minor and major path integration errors.
Abstract:
Probabilistic robotics, most often applied to the problem of simultaneous localisation and mapping (SLAM), requires measures of uncertainty to accompany observations of the environment. This paper describes how uncertainty can be characterised for a vision system that locates coloured landmarks in a typical laboratory environment. The paper describes a model of the uncertainty in segmentation, the internal camera model and the mounting of the camera on the robot. It explains the implementation of the system on a laboratory robot, and provides experimental results that show the coherence of the uncertainty model.
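The flavour of such an uncertainty model can be shown with first-order error propagation through a pinhole bearing equation: pixel-centroid variance from the colour segmenter is pushed through the Jacobian of the bearing with respect to the pixel coordinate. The focal length and pixel noise below are invented numbers, and the paper's full model (segmentation, internal camera and mounting) is richer than this.

```python
import math

def bearing_and_variance(u_pixels, sigma_u_pixels, focal_px=500.0):
    # Bearing to a landmark whose centroid is u pixels from the image centre.
    bearing = math.atan2(u_pixels, focal_px)
    # First-order propagation: var(theta) ~ (d theta / d u)^2 * var(u).
    dtheta_du = focal_px / (focal_px**2 + u_pixels**2)
    return bearing, (dtheta_du * sigma_u_pixels) ** 2

theta, var = bearing_and_variance(u_pixels=120.0, sigma_u_pixels=2.5)
print(f"bearing {math.degrees(theta):.1f} deg, "
      f"std {math.degrees(math.sqrt(var)):.2f} deg")
```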
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. When compared to traditional narrow field of view perspective cameras, more accurate estimates of camera egomotion can be found using the images obtained with wide-angle cameras. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the increased popularity in the use of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions in a scene taken at very different viewpoints, and this makes them suited for visual place recognition. However, the ability to estimate the camera egomotion and recognise the same scene in two different images is dependent on the ability to reliably detect and describe the same scene points, or ‘keypoints’, in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images. Applying algorithms designed for perspective images directly to wide-angle images is problematic as no account is made for the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. Each of the variants is required to find the scale-space representation of an image on the sphere, and they differ in the approaches they used to do this. Extensive experiments using real and synthetically generated wide-angle images are used to validate the two new keypoint detectors and the method of keypoint description. The best of these two new keypoint detectors is applied to vision based localisation tasks including visual odometry and visual place recognition using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed which attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual ‘bag of words’ approach to place recognition.
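A small sketch of the first step such methods rely on: mapping a wide-angle image point onto the unit viewing sphere. We assume an equidistant fisheye model (r = f·θ) for concreteness; the thesis covers any central projection, so this model choice is ours, not the author's.

```python
import numpy as np

def pixel_to_sphere(u, v, cx, cy, f):
    """Back-project pixel (u, v) to a unit vector on the viewing sphere,
    assuming an equidistant fisheye model with focal length f (px/rad)."""
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    if r < 1e-9:
        return np.array([0.0, 0.0, 1.0])   # exactly on the optical axis
    theta = r / f                          # angle from the optical axis
    s = np.sin(theta)
    return np.array([s * x / r, s * y / r, np.cos(theta)])

# A point 400 px from the centre of a fisheye with f = 300 px/rad lies
# about 76 degrees off-axis, well beyond a perspective camera's reach.
p = pixel_to_sphere(720.0, 540.0, 320.0, 540.0, 300.0)
print(p, np.degrees(np.arccos(p[2])))
```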
Abstract:
This paper presents the development of a low-cost sensor platform for use in ground-based visual pose estimation and scene mapping tasks. We seek to develop a technical solution using low-cost vision hardware that allows us to accurately estimate robot position for SLAM tasks. We present results from the application of a vision based pose estimation technique to simultaneously determine camera poses and scene structure. The results are generated from a dataset gathered traversing a local road at the St Lucia Campus of the University of Queensland. We show the accuracy of the pose estimation over a 1.6km trajectory in relation to GPS ground truth.
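As a generic illustration of the two-view geometry underlying visual pose estimation, the sketch below recovers relative camera motion from synthetic point correspondences via the essential matrix, using OpenCV. The abstract does not specify the paper's actual pipeline, so everything here (intrinsics, scene, RANSAC settings) is an assumption.

```python
import numpy as np
import cv2

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
rng = np.random.default_rng(0)

# Synthetic scene: random 3-D points seen from two camera positions.
pts3d = rng.uniform([-2, -2, 4], [2, 2, 8], (100, 3))
t_true = np.array([1.0, 0.0, 0.0])          # pure sideways translation

def project(points, t):
    p = (points - t) @ K.T                   # identity rotation, camera at t
    return p[:, :2] / p[:, 2:3]

pts1 = project(pts3d, np.zeros(3)).astype(np.float64)
pts2 = project(pts3d, t_true).astype(np.float64)

E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("recovered direction:", t.ravel())    # translation is up to scale
```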