159 results for Robot localization
at Queensland University of Technology - ePrints Archive
Abstract:
This paper presents the implementation of a modified particle filter for vision-based simultaneous localization and mapping of an autonomous robot in a structured indoor environment. Through this method, artificial landmarks such as multi-coloured cylinders can be tracked with a camera mounted on the robot, and the position of the robot can be estimated at the same time. Experimental results in simulation and in real environments show that this approach has advantages over the extended Kalman filter under ambiguous data association and various levels of odometric noise.
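As a rough sketch of the particle-filter idea in this abstract (not the authors' implementation; the motion model, noise levels and landmark coordinates below are invented for illustration), pose particles are propagated through a noisy odometry model, then weighted and resampled by the likelihood of a range-and-bearing observation of a known landmark:

    import numpy as np

    rng = np.random.default_rng(0)

    def predict(particles, v, w, dt, odo_noise=(0.05, 0.02)):
        # Propagate each pose particle (x, y, theta) through a noisy
        # unicycle motion model driven by odometry (v, w).
        n = len(particles)
        v_n = v + rng.normal(0.0, odo_noise[0], n)
        w_n = w + rng.normal(0.0, odo_noise[1], n)
        particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
        particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
        particles[:, 2] += w_n * dt
        return particles

    def update(particles, z_range, z_bearing, landmark, sig_r=0.3, sig_b=0.3):
        # Weight particles by the likelihood of an observed range and bearing
        # to a known landmark, then resample (multinomial resampling).
        dx = landmark[0] - particles[:, 0]
        dy = landmark[1] - particles[:, 1]
        r = np.hypot(dx, dy)
        b = np.arctan2(dy, dx) - particles[:, 2]
        b = (b + np.pi) % (2 * np.pi) - np.pi          # wrap to [-pi, pi)
        w = np.exp(-0.5 * ((z_range - r) / sig_r) ** 2
                   - 0.5 * ((z_bearing - b) / sig_b) ** 2)
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx].copy()

    particles = rng.uniform([-1, -1, -np.pi], [1, 1, np.pi], (500, 3))
    particles = predict(particles, v=0.3, w=0.1, dt=0.1)
    particles = update(particles, z_range=2.0, z_bearing=0.2, landmark=(2.0, 0.5))
    print("pose estimate:", particles.mean(axis=0))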
Abstract:
This paper describes a novel experiment in which two very different methods of underwater robot localization are compared. The first method is based on a geometric approach in which a mobile node moves within a field of static nodes, and all nodes are capable of estimating the range to their neighbours acoustically. The second method uses visual odometry, from stereo cameras, by integrating scaled optical flow. The fundamental algorithmic principles of each localization technique are described. We also present experimental results comparing acoustic localization with GPS for surface operation, and comparing the acoustic and visual methods for underwater operation.
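To make the geometric, range-based method concrete, here is a minimal trilateration sketch (my illustration, not the paper's code): the mobile node's position is recovered by nonlinear least squares from acoustic ranges to static nodes at known positions.

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical static-node positions and measured acoustic ranges.
    nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    ranges = np.array([7.1, 7.0, 7.2, 7.05])

    def residuals(p):
        # Difference between predicted and measured range to each static node.
        return np.linalg.norm(nodes - p, axis=1) - ranges

    sol = least_squares(residuals, x0=np.array([5.0, 5.0]))
    print("estimated position:", sol.x)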
Abstract:
In this paper, we present recent results on using range measurements from radio for mobile robot localization. In previous work we have shown how range readings from radio tags placed in the environment can be used to localize a robot. We have extended that work to consider robustness, specifically the case where range readings are very noisy and available only intermittently. We also consider the case where the locations of the radio tags are not known ahead of time and must be solved for simultaneously along with the position of the moving robot. We present results from a mobile robot equipped with GPS for ground truth, operating over several km.
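A toy version of the "unknown tag locations" case might jointly estimate robot poses and tag positions from range readings by robust nonlinear least squares. Everything below is illustrative (synthetic data, hypothetical setup, initial guesses placed near the truth), and the robust soft-L1 loss merely stands in for the paper's handling of noisy, intermittent readings:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    true_tags = np.array([[0.0, 5.0], [8.0, -2.0]])
    true_path = np.column_stack([np.linspace(0.0, 10.0, 20), np.zeros(20)])

    # Noisy, intermittent readings: (pose index, tag index, measured range).
    readings = [(i, j, np.linalg.norm(true_path[i] - true_tags[j]) + rng.normal(0, 0.3))
                for i in range(20) for j in range(2) if rng.random() < 0.6]

    def residuals(x):
        path = x[:40].reshape(20, 2)
        tags = x[40:].reshape(2, 2)
        res = [np.linalg.norm(path[i] - tags[j]) - r for i, j, r in readings]
        # Odometry prior: consecutive poses roughly 10/19 m apart along x.
        res += list((path[1:] - path[:-1] - np.array([10.0 / 19.0, 0.0])).ravel())
        res += list(path[0])             # anchor the first pose at the origin
        return np.array(res)

    x0 = np.concatenate([true_path.ravel() + rng.normal(0, 0.5, 40),
                         true_tags.ravel() + rng.normal(0, 1.0, 4)])
    sol = least_squares(residuals, x0, loss="soft_l1")   # robust to outliers
    print("estimated tag positions:\n", sol.x[40:].reshape(2, 2))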
Abstract:
This paper presents an approach to mobile robot localization, place recognition and loop closure using a monostatic ultra-wideband (UWB) radar system. The UWB radar is a time-of-flight based range measurement sensor that transmits short pulses and receives reflected waves from objects in the environment. The main idea of the proposed localization method is to treat the received waveform as a signature of place. The resulting echo waveform is very complex and depends strongly on the position of the sensor with respect to surrounding objects. On the other hand, the sensor receives similar waveforms from the same positions. Moreover, the directional characteristic of the dipole antenna is almost omnidirectional. Therefore, we can localize the sensor by finding similar waveforms in a waveform database. This paper proposes a place recognition method based on waveform matching, presents a number of experiments that illustrate the high position estimation accuracy of our UWB radar-based localization system, and shows the resulting loop detection performance in a typical indoor office environment and a forest.
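The "waveform as a signature of place" idea can be illustrated with a toy matcher (names and signals below are made up, and peak normalized cross-correlation is only one plausible similarity measure): compare an incoming echo against a stored database and return the best-matching place.

    import numpy as np

    def similarity(a, b):
        # Peak of the normalized cross-correlation between two echo waveforms.
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return np.max(np.correlate(a, b, mode="full")) / len(a)

    def recognize(echo, database):
        # Return the stored place whose waveform best matches the new echo.
        scores = {place: similarity(echo, w) for place, w in database.items()}
        return max(scores, key=scores.get)

    rng = np.random.default_rng(2)
    database = {"corridor": rng.normal(size=256), "office": rng.normal(size=256)}
    echo = database["office"] + 0.2 * rng.normal(size=256)
    print(recognize(echo, database))    # -> "office"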
Abstract:
A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a feature-based visual localization system. One approach for mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to be able to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. In order to avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
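One way to read the linking step is as gated inference: dead-reckon a position hypothesis between visual loop closures, and only commit the inferred links if the path's end point is independently confirmed by a visual match. The sketch below is a schematic of that gating logic with invented names, not the paper's implementation:

    def link_representations(segments, visual_match):
        """Dead-reckon through odometry segments between two visual loop
        closures; return the inferred (position, representation) links only
        if the end point is independently confirmed by a visual match."""
        pending, pose = [], (0.0, 0.0)
        for dx, dy, rep in segments:
            pose = (pose[0] + dx, pose[1] + dy)
            pending.append((pose, rep))
        end_pose, end_rep = pending[-1]
        # Gate: accept odometry-based inferences only on visual confirmation.
        return pending if visual_match(end_pose, end_rep) else []

    def confirm(pose, rep):
        # Stand-in for the visual matching system confirming the end point.
        return rep == "hall"

    segments = [(1.0, 0.0, "door"), (0.5, 0.5, "corner"), (0.0, 1.0, "hall")]
    print(link_representations(segments, confirm))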
Abstract:
This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to the landmark, this system maps views to poses. The challenge is to design a system that produces the same view for small changes in robot pose, but different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
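A stripped-down analogue of the filter-and-pool idea (illustrative only, not the paper's pipeline): average-pool an image into a coarse template so that small pose changes land on the same template while large ones do not, then recall by sum of absolute differences against the learned set.

    import numpy as np

    def make_template(image, pool=8):
        # Average-pool the image into coarse cells; small pose shifts fall
        # into the same cells, larger ones change the template.
        h, w = image.shape
        img = image[:h - h % pool, :w - w % pool]
        t = img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
        return (t - t.mean()) / (t.std() + 1e-9)

    def recall(template, learned):
        # Return index of the closest learned template (sum of abs diffs).
        errs = [np.abs(template - t).sum() for t in learned]
        return int(np.argmin(errs))

    rng = np.random.default_rng(3)
    views = [rng.random((64, 64)) for _ in range(5)]
    learned = [make_template(v) for v in views]
    print(recall(make_template(views[2] + 0.05 * rng.random((64, 64))), learned))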
Abstract:
This paper demonstrates some interesting connections between the hitherto disparate fields of mobile robot navigation and image-based visual servoing. A planar formulation of the well-known image-based visual servoing method leads to a bearing-only navigation system that requires no explicit localization and directly yields the desired velocity. The well-known benefits of image-based visual servoing, such as robustness, also apply to the planar case. Simulation results are presented.
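The bearing-only flavour of the approach can be caricatured as a simple control law (my reconstruction of the general concept, not the paper's derivation; gains and the error-to-velocity mapping are invented): drive the bearings of observed features toward their desired values and read off forward and angular velocity directly, with no pose estimate in the loop.

    import numpy as np

    def bearing_only_control(bearings, desired, k_v=0.5, k_w=1.0):
        """Toy planar visual-servoing law: angular velocity reduces the mean
        bearing error; forward speed is throttled while errors are large."""
        err = np.asarray(bearings) - np.asarray(desired)
        err = (err + np.pi) % (2 * np.pi) - np.pi     # wrap angle errors
        w = -k_w * err.mean()                         # steer toward goal view
        v = k_v * np.exp(-np.abs(err).max())          # slow down if far off
        return v, w

    print(bearing_only_control([0.3, -0.1, 0.2], [0.0, 0.0, 0.0]))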
Abstract:
In this paper we discuss how a network of sensors and robots can cooperate to solve important robotics problems such as localization and navigation. We use a robot to localize sensor nodes, and we then use these localized nodes to navigate robots and humans through the sensorized space. We explore these novel ideas with results from two large-scale sensor network and robot experiments involving 50 motes, two types of flying robot (an autonomous helicopter and a large indoor cable array robot), and a human-network interface. We present the distributed algorithms for localization, geographic routing, path definition and incremental navigation. We also describe how a human can be guided using a simple hand-held device that interfaces to this same environmental infrastructure.
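As a flavour of geographic routing of the kind mentioned here (a generic greedy scheme, not necessarily the paper's protocol), each node forwards a packet to the neighbour geographically closest to the destination's coordinates:

    import math

    def greedy_next_hop(node_pos, neighbours, dest):
        """Pick the neighbour geographically closest to the destination;
        return None at a local minimum (no neighbour improves on us)."""
        d_self = math.dist(node_pos, dest)
        best = min(neighbours, key=lambda n: math.dist(neighbours[n], dest))
        return best if math.dist(neighbours[best], dest) < d_self else None

    neighbours = {"a": (1.0, 2.0), "b": (4.0, 1.0), "c": (2.5, 3.0)}
    print(greedy_next_hop((0.0, 0.0), neighbours, dest=(5.0, 1.0)))  # -> "b"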
Abstract:
We consider multi-robot systems that include sensor nodes and aerial or ground robots networked together. We describe two cooperative algorithms that allow robots and sensors to enhance each other's performance. In the first algorithm, an aerial robot assists the localization of the sensors. In the second algorithm, a localized sensor network controls the navigation of an aerial robot. We present physical experiments with a flying robot and a large Mica Mote sensor network.
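A minimal cartoon of the first algorithm (robot-assisted sensor localization), under the assumption that the flying robot periodically broadcasts its own position and each node takes the centroid of the broadcast positions it hears within radio range; this is an assumed simplification, not the experiment's actual method:

    import numpy as np

    def node_estimate(heard_broadcasts):
        # Each broadcast is the robot's (x, y) at the moment the node heard
        # it; within radio range, their centroid approximates node position.
        return np.mean(np.asarray(heard_broadcasts), axis=0)

    heard = [(2.0, 3.1), (2.4, 2.8), (1.8, 3.3), (2.2, 3.0)]
    print(node_estimate(heard))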
Abstract:
This paper introduces the application of a sensor network to navigate a flying robot. We have developed distributed algorithms and efficient geographic routing techniques to incrementally guide one or more robots to points of interest based on sensor gradient fields, or along paths defined in terms of Cartesian coordinates. The robot itself is an integral part of the localization process which establishes the positions of sensors which are not known a priori. We use this system in a large-scale outdoor experiment with Mote sensors to guide an autonomous helicopter along a path encoded in the network. A simple handheld device, using this same environmental infrastructure, is used to guide humans.
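The "sensor gradient field" guidance can be illustrated with a hop-count field (topology and names below are invented): flood a hop distance outward from the goal node, then have the robot repeatedly step to the neighbouring node with the lowest count.

    from collections import deque

    def hop_field(adjacency, goal):
        # Breadth-first flood assigning each node its hop distance to goal.
        dist, queue = {goal: 0}, deque([goal])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def descend(adjacency, dist, start):
        # Follow the gradient: always step to the lowest-count neighbour.
        path, node = [start], start
        while dist[node] > 0:
            node = min(adjacency[node], key=dist.get)
            path.append(node)
        return path

    adjacency = {"A": ["B", "C"], "B": ["A", "D"],
                 "C": ["A", "D"], "D": ["B", "C"]}
    field = hop_field(adjacency, goal="D")
    print(descend(adjacency, field, start="A"))   # e.g. ['A', 'B', 'D']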
Abstract:
This work presents a new approach to the problem of simultaneous localization and mapping (SLAM), inspired by computational models of the rodent hippocampus. The rodent hippocampus has been extensively studied with respect to navigation tasks, and displays many of the properties of a desirable SLAM solution. RatSLAM is an implementation of a hippocampal model that can perform SLAM in real time on a real robot. It uses a competitive attractor network to integrate odometric information with landmark sensing to form a consistent representation of the environment. Experimental results show that RatSLAM can operate with ambiguous landmark information and recover from both minor and major path integration errors.
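A one-dimensional competitive attractor, of the general kind RatSLAM builds on, can be sketched in a few lines (this is a generic attractor update with invented parameters, not the RatSLAM code): local excitation plus global inhibition maintains a single activity bump, odometry shifts it, and landmark input injects activity at the sensed place.

    import numpy as np

    N = 100
    x = np.zeros(N)
    x[50] = 1.0                                        # initial activity bump
    k = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)   # local excitation kernel

    def attractor_step(x, shift=0, landmark=None, inhibit=0.002, inject=0.3):
        x = np.roll(x, shift)                  # path integration from odometry
        x = np.convolve(x, k, mode="same")     # local excitation
        if landmark is not None:
            x[landmark] += inject              # calibration from landmark sensing
        x = np.maximum(x - inhibit * x.sum(), 0.0)   # global inhibition
        return x / (x.max() + 1e-9)

    for _ in range(20):
        x = attractor_step(x, shift=1, landmark=72)
    print("activity bump centred near cell", int(np.argmax(x)))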
Abstract:
Probabilistic robotics, most often applied to the problem of simultaneous localisation and mapping (SLAM), requires measures of uncertainty to accompany observations of the environment. This paper describes how uncertainty can be characterised for a vision system that locates coloured landmarks in a typical laboratory environment. The paper describes a model of the uncertainty in segmentation, the internal camera model and the mounting of the camera on the robot. It explains the implementation of the system on a laboratory robot, and provides experimental results that show the coherence of the uncertainty model.
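One standard way to characterise such vision uncertainty, in the spirit of this abstract (the paper's exact model is not reproduced here; intrinsics and pixel noise below are invented), is first-order propagation: map the pixel-level noise of a detected landmark centroid through a pinhole camera model to a bearing variance.

    import numpy as np

    def bearing_and_variance(u, sigma_u, fx=500.0, cx=320.0):
        """Bearing to a landmark from its image column u (pinhole model),
        with first-order propagation of the pixel noise sigma_u."""
        bearing = np.arctan((u - cx) / fx)
        # d(bearing)/du evaluated at u; var = J * sigma_u^2 * J.
        J = (1.0 / fx) / (1.0 + ((u - cx) / fx) ** 2)
        return bearing, (J ** 2) * sigma_u ** 2

    b, var = bearing_and_variance(u=410.0, sigma_u=2.0)
    print(f"bearing = {b:.3f} rad, std = {np.sqrt(var):.4f} rad")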
Abstract:
For a mobile robot to operate autonomously in real-world environments, it must have an effective control system and a navigation system capable of providing robust localization, path planning and path execution. In this paper we describe work investigating synergies between mapping and control systems, integrating the development of a control system for mobile robot navigation with a robot SLAM system. The control system is hybrid in nature and tightly coupled with the SLAM system; it uses a combination of high- and low-level deliberative and reactive control processes to perform obstacle avoidance, exploration, global navigation and recharging, and draws upon the map learning and localization capabilities of the SLAM system. The effectiveness of this hybrid, multi-level approach was evaluated in the context of a delivery robot scenario. Over a period of two weeks the robot performed 1143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), travelled a total distance of more than 40 km, and recharged autonomously a total of 23 times. In this paper we describe the combined control and SLAM system and discuss insights gained from its successful application in a real-world context.
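The hybrid, multi-level coupling described above might be caricatured as a priority arbiter: reactive behaviours such as obstacle avoidance and recharging can pre-empt the deliberative, map-driven navigation command. Everything below is an assumed schematic, not the system's actual architecture:

    def arbitrate(battery_low, obstacle_close, nav_command):
        """Priority-based behaviour arbitration: reactive safety behaviours
        override the deliberative, map-driven navigation command."""
        if obstacle_close:
            return "avoid_obstacle"        # reactive, highest priority
        if battery_low:
            return "go_to_charger"         # draws on SLAM localization
        return nav_command                 # deliberative global navigation

    print(arbitrate(battery_low=False, obstacle_close=True,
                    nav_command="deliver_to_room_11"))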