717 results for Localization system
at Queensland University of Technology - ePrints Archive
Abstract:
Visual localization systems that are practical for autonomous vehicles in outdoor industrial applications must perform reliably in a wide range of conditions. Changing outdoor conditions cause difficulty by drastically altering the information available in the camera images. To confront the problem, we have developed a visual localization system that uses a surveyed three-dimensional (3D)-edge map of permanent structures in the environment. The map has the invariant properties necessary to achieve long-term robust operation. Previous 3D-edge map localization systems usually maintain a single pose hypothesis, making it difficult to initialize without an accurate prior pose estimate and also making them susceptible to misalignment with unmapped edges detected in the camera image. A multihypothesis particle filter is employed here to perform the initialization procedure with significant uncertainty in the vehicle's initial pose. A novel observation function for the particle filter is developed and evaluated against two existing functions. The new function is shown to further improve the abilities of the particle filter to converge given a very coarse estimate of the vehicle's initial pose. An intelligent exposure control algorithm is also developed that improves the quality of the pertinent information in the image. Results gathered over an entire sunny day and also during rainy weather illustrate that the localization system can operate in a wide range of outdoor conditions. The conclusion is that an invariant map, a robust multihypothesis localization algorithm, and an intelligent exposure control algorithm all combine to enable reliable visual localization through challenging outdoor conditions.
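The multihypothesis initialization described above can be sketched with a minimal bootstrap particle filter. This is an illustrative sketch only: the Gaussian likelihood below stands in for the paper's edge-based observation function, and the pose values and search area are invented for illustration.

```python
import math
import random

random.seed(0)  # deterministic for the example

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

def update(particles, observation_likelihood):
    """One measurement update: weight each pose hypothesis, then resample."""
    weights = [observation_likelihood(p) for p in particles]
    return resample(particles, weights)

# Coarse initialization: spread hypotheses (x, y, heading) over a large
# pose uncertainty, here a 50 m x 50 m area with unknown heading.
particles = [(random.uniform(0, 50), random.uniform(0, 50),
              random.uniform(-math.pi, math.pi)) for _ in range(200)]

# Hypothetical likelihood peaked near an assumed true pose at (10, 20);
# the real system would instead score alignment of projected map edges
# against edges detected in the camera image.
def likelihood(p):
    return math.exp(-((p[0] - 10) ** 2 + (p[1] - 20) ** 2) / 50.0)

for _ in range(5):
    particles = update(particles, likelihood)
```

After a few updates the hypotheses collapse toward the region of high observation likelihood, which is the convergence behaviour the abstract evaluates for its three observation functions.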
Abstract:
A new system is described for estimating volume from a series of multiplanar 2D ultrasound images. Ultrasound images are captured using a personal computer video digitizing card and an electromagnetic localization system is used to record the pose of the ultrasound images. The accuracy of the system was assessed by scanning four groups of ten cadaveric kidneys on four different ultrasound machines. Scan image planes were oriented either radially, in parallel or slanted at 30° to the vertical. The cross-sectional images of the kidneys were traced using a mouse and the outline points transformed to 3D space using the Fastrak position and orientation data. Points on adjacent region of interest outlines were connected to form a triangle mesh and the volume of the kidneys estimated using the ellipsoid, planimetry, tetrahedral and ray tracing methods. There was little difference between the results for the different scan techniques or volume estimation algorithms, although, perhaps as expected, the ellipsoid results were the least precise. For radial scanning and ray tracing, the mean and standard deviation of the percentage errors for the four different machines were as follows: Hitachi EUB-240, −3.0 ± 2.7%; Tosbee RM3, −0.1 ± 2.3%; Hitachi EUB-415, 0.2 ± 2.3%; Acuson, 2.7 ± 2.3%.
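The tetrahedral method mentioned above can be sketched as follows: sum the signed volumes of the tetrahedra formed by each mesh triangle and the origin; for a closed, consistently oriented surface the signed contributions cancel outside the body and leave the enclosed volume. This is a generic illustration, not the paper's implementation.

```python
def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed triangle mesh.

    vertices: list of (x, y, z) tuples.
    triangles: list of (i, j, k) index triples into vertices.
    """
    volume = 0.0
    for i, j, k in triangles:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # Signed volume of tetrahedron (origin, a, b, c) = a . (b x c) / 6
        volume += (ax * (by * cz - bz * cy)
                   + ay * (bz * cx - bx * cz)
                   + az * (bx * cy - by * cx)) / 6.0
    return abs(volume)

# A unit right tetrahedron (volume 1/6) as a tiny closed mesh.
tet_vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tet_faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
```

In the paper's setting the mesh would come from connecting the traced kidney outlines on adjacent image planes.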
Abstract:
Whole image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements and the lack of a meaningful interpretation of these arbitrary thresholds limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph's functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
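The core idea of replacing an arbitrary threshold with a probability can be sketched with Bayes' rule over two distance models: one for matching image pairs and one for non-matching pairs. The Gaussian form and all parameter values below are illustrative assumptions, not the paper's fitted distributions.

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def match_probability(distance, match_model, nonmatch_model, prior_match=0.5):
    """P(match | descriptor distance) via Bayes' rule.

    match_model / nonmatch_model: (mean, std) of the descriptor distance
    for matching and non-matching image pairs respectively.
    """
    p_d_match = gaussian_pdf(distance, *match_model)
    p_d_nonmatch = gaussian_pdf(distance, *nonmatch_model)
    num = p_d_match * prior_match
    den = num + p_d_nonmatch * (1.0 - prior_match)
    return num / den

# Illustrative models: matching pairs tend to have small distances.
match_model = (0.2, 0.1)
nonmatch_model = (0.8, 0.2)
```

The online training the abstract describes would amount to estimating these two distributions from data as the robot runs, rather than fixing them in advance.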
Abstract:
This paper presents an approach to mobile robot localization, place recognition and loop closure using a monostatic ultra-wide band (UWB) radar system. The UWB radar is a time-of-flight based range measurement sensor that transmits short pulses and receives reflected waves from objects in the environment. The main idea of the proposed localization method is to treat the received waveform as a signature of place. The resulting echo waveform is very complex and depends strongly on the position of the sensor with respect to the surrounding objects. On the other hand, the sensor receives similar waveforms from the same positions. Moreover, the directional characteristic of the dipole antenna is almost omnidirectional. Therefore, we can localize the sensor position by finding similar waveforms in a waveform database. This paper proposes a place recognition method based on waveform matching, presents a number of experiments that illustrate the high position estimation accuracy of our UWB radar-based localization system, and shows the resulting loop detection performance in a typical indoor office environment and a forest.
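The "waveform as a signature of place" idea can be sketched with a normalized correlation score between a query echo and stored waveforms; the database entry with the highest score is the recognized place. This is a minimal stand-in for whatever matching function the paper actually uses.

```python
import math

def normalized_correlation(a, b):
    """Cosine similarity between two equal-length waveforms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recognize_place(query, database):
    """Return the place id whose stored waveform best matches the query.

    database: dict mapping place id -> stored echo waveform (list of floats).
    """
    return max(database, key=lambda place: normalized_correlation(query, database[place]))
```

A loop closure would then be declared when the best score exceeds some confidence level, tying this matcher back into the localization pipeline.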
Abstract:
A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a system using a feature-based visual localization system. One approach for mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to be able to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. In order to avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
Abstract:
A coverage algorithm deploys a strategy for covering all points of a given area using some set of sensors. In the past decades, a lot of research has gone into the development of coverage algorithms. Initially, the focus was on the coverage of structured and semi-structured indoor areas, but with time, the development of better sensors, and the introduction of GPS, the focus has turned to outdoor coverage. Due to the unstructured nature of an outdoor environment, covering an outdoor area with all its obstacles while simultaneously performing reliable localization is a difficult task. In this paper, two path planning algorithms suitable for solving outdoor coverage tasks are introduced. The algorithms take into account the kinematic constraints of an under-actuated car-like vehicle, minimize trajectory curvatures, and dynamically avoid detected obstacles in the vicinity, all in real time. We demonstrate the performance of the coverage algorithm in the field by achieving 95% coverage using an autonomous tractor mower without the aid of any absolute localization system or constraints on the physical boundaries of the area.
Abstract:
Seagoing vessels have to undergo regular inspections, which are currently performed manually by ship surveyors. The main cost factor in a ship inspection is to provide access to the different areas of the ship, since the surveyor has to be close to the inspected parts, usually within arm's reach, either to perform a visual analysis or to take thickness measurements. The access to the structural elements in cargo holds, e.g., bulkheads, is normally provided by staging or by 'cherry-picking' cranes. To make ship inspections safer and more cost-efficient, we have introduced new inspection methods, tools, and systems, which have been evaluated in field trials, particularly focusing on cargo holds. More precisely, two magnetic climbing robots and a micro-aerial vehicle, which are able to assist the surveyor during the inspection, are introduced. Since localization of inspection data is mandatory for the surveyor, we also introduce an external localization system that has been verified in field trials, using a climbing inspection robot. Furthermore, the inspection data collected by the robotic systems are organized and handled by a spatial content management system that enables us to compare the inspection data of one survey with those from another, as well as to document the ship inspection when the robot team is used. Image-based defect detection is addressed by proposing an integrated solution for detecting corrosion and cracks. The systems' performance is reported, as well as conclusions on their usability, all in accordance with the output of field trials performed onboard two different vessels under real inspection conditions.
Abstract:
To navigate successfully in a previously unexplored environment, a mobile robot must be able to estimate the spatial relationships of the objects of interest accurately. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to build incrementally a map of its surroundings and to localize itself in the map simultaneously. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is the panoramic view of all the landmarks in the scene. Placing landmarks in a lawn field to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot's pose and a sensor model for updating the pose. In the motion model, the error on the estimates of object positions accumulates, due mainly to wheel slippage. Quantifying the uncertainty of object positions accurately is a fundamental requirement. In bearing-only SLAM, the Probability Density Function (PDF) of landmark position should be uniform along the observed bearing. Existing methods that approximate the PDF with a Gaussian estimation do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address the above problems. The main novel contributions of this thesis are: 1. A bearing-only SLAM method not requiring odometry. The proposed method relies solely on the sensor model (landmark bearings only) without relying on the motion model (odometry). The uncertainty of the estimated landmark positions depends on the vision error only, instead of the combination of both odometry and vision errors. 2. The transformation of the spatial uncertainty of objects. This thesis introduces a novel method for translating the spatial uncertainty of objects estimated from a moving frame attached to the robot into the global frame attached to the static landmarks in the environment. 3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, and the marginal probability on range is constrained to be uniform. Compared to the PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can easily be adopted in a probabilistic framework, such as a particle filtering system. The main advantages of our proposed bearing-only SLAM system are its lower production cost and flexibility of use. The proposed system can be adopted in other domestic robots as well, such as vacuum cleaners or robotic toys, when the terrain is essentially 2D.
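One way to realize a PDF with the properties described above (uniform marginal on range, concentrated around the observed bearing) is to sample landmark hypotheses directly in polar coordinates. The sensing interval and bearing noise below are illustrative assumptions, not values from the thesis.

```python
import math
import random

random.seed(0)  # deterministic for the example

def sample_landmark(bearing, r_min=0.5, r_max=20.0, bearing_std=0.02):
    """Draw one (x, y) landmark hypothesis from the polar PDF.

    Range is uniform over the sensing interval (uniform marginal on range);
    bearing is a narrow Gaussian about the observed bearing.
    """
    r = random.uniform(r_min, r_max)
    theta = random.gauss(bearing, bearing_std)
    return r * math.cos(theta), r * math.sin(theta)

# A cloud of hypotheses for one bearing observation at 45 degrees.
samples = [sample_landmark(math.pi / 4) for _ in range(1000)]
```

Such samples drop directly into a particle filtering framework, which is the integration the thesis highlights as an advantage over a mixture-of-Gaussians approximation.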
Automation of an underground mining vehicle using reactive navigation and opportunistic localization
Abstract:
This paper describes the implementation of an autonomous navigation system onto a 30 tonne Load-Haul-Dump truck. The control architecture is based on a robust reactive wall-following behaviour. To make it purposeful we provide driving hints derived from an approximate nodal-map. For most of the time, the vehicle is driven with weak localization (odometry). This need only be improved at intersections where decisions must be made - a technique we refer to as opportunistic localization. The truck has achieved full-speed autonomous operation at an artificial test mine, and subsequently, at an operational underground mine.
Abstract:
This paper describes a biologically inspired approach to vision-only simultaneous localization and mapping (SLAM) on ground-based platforms. The core SLAM system, dubbed RatSLAM, is based on computational models of the rodent hippocampus, and is coupled with a lightweight vision system that provides odometry and appearance information. RatSLAM builds a map in an online manner, driving loop closure and relocalization through sequences of familiar visual scenes. Visual ambiguity is managed by maintaining multiple competing vehicle pose estimates, while cumulative errors in odometry are corrected after loop closure by a map correction algorithm. We demonstrate the mapping performance of the system on a 66 km car journey through a complex suburban road network. Using only a web camera operating at 10 Hz, RatSLAM generates a coherent map of the entire environment at real-time speed, correctly closing more than 51 loops of up to 5 km in length.
Abstract:
The challenge of persistent navigation and mapping is to develop an autonomous robot system that can simultaneously localize, map and navigate over the lifetime of the robot with little or no human intervention. Most solutions to the simultaneous localization and mapping (SLAM) problem aim to produce highly accurate maps of areas that are assumed to be static. In contrast, solutions for persistent navigation and mapping must produce reliable goal-directed navigation outcomes in an environment that is assumed to be in constant flux. We investigate the persistent navigation and mapping problem in the context of an autonomous robot that performs mock deliveries in a working office environment over a two-week period. The solution was based on the biologically inspired visual SLAM system, RatSLAM. RatSLAM performed SLAM continuously while interacting with global and local navigation systems, and a task selection module that selected between exploration, delivery, and recharging modes. The robot performed 1,143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), traveled a total distance of more than 40 km over 37 hours of active operation, and recharged autonomously a total of 23 times.
Abstract:
This paper illustrates a method for finding useful visual landmarks for performing simultaneous localization and mapping (SLAM). The method is based loosely on biological principles, using layers of filtering and pooling to create learned templates that correspond to different views of the environment. Rather than using a set of landmarks and reporting range and bearing to the landmark, this system maps views to poses. The challenge is to produce a system that produces the same view for small changes in robot pose, but provides different views for larger changes in pose. The method has been developed to interface with the RatSLAM system, a biologically inspired method of SLAM. The paper describes the method of learning and recalling visual landmarks in detail, and shows the performance of the visual system in real robot tests.
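The learn-or-recall behaviour described above can be sketched as follows: a new view is compared against all stored templates (here via mean absolute difference over a pooled feature vector); a sufficiently close match recalls that template, otherwise the view is learned as a new one. The distance measure and threshold are illustrative assumptions, not the paper's filtering-and-pooling pipeline.

```python
def mean_abs_diff(a, b):
    """Mean absolute difference between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def match_or_learn(view, templates, threshold=0.1):
    """Return the index of the matching template, learning a new one if needed.

    Small pose changes should yield views within the threshold of an
    existing template; larger changes should create a new template.
    """
    for i, template in enumerate(templates):
        if mean_abs_diff(view, template) < threshold:
            return i  # recalled a familiar view
    templates.append(list(view))  # novel view: learn a new template
    return len(templates) - 1
```

In a RatSLAM-style system the returned template index, rather than a range-and-bearing measurement, is what gets associated with the robot's pose.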
Abstract:
This paper describes an autonomous navigation system for a large underground mining vehicle. The control architecture is based on a robust reactive wall-following behaviour. To make it purposeful we provide driving hints derived from an approximate nodal-map. For most of the time, the vehicle is driven with weak localization (odometry). This need only be improved at intersections where decisions must be made – a technique we refer to as opportunistic localization. The paper briefly reviews absolute and relative navigation strategies, and describes an implementation of a reactive navigation system on a 30 tonne Load-Haul-Dump truck. This truck has achieved full-speed autonomous operation at an artificial test mine, and subsequently, at an operational underground mine.
Abstract:
Probabilistic robotics, most often applied to the problem of simultaneous localisation and mapping (SLAM), requires measures of uncertainty to accompany observations of the environment. This paper describes how uncertainty can be characterised for a vision system that locates coloured landmarks in a typical laboratory environment. The paper describes a model of the uncertainty in segmentation, the internal camera model and the mounting of the camera on the robot. It explains the implementation of the system on a laboratory robot, and provides experimental results that show the coherence of the uncertainty model.
Abstract:
Systemic acquired resistance (SAR) is a broad-spectrum resistance in plants that involves the upregulation of a battery of pathogenesis-related (PR) genes. NPR1 is a key regulator in the signal transduction pathway that leads to SAR. Mutations in NPR1 result in a failure to induce PR genes in systemic tissues and a heightened susceptibility to pathogen infection, whereas overexpression of the NPR1 protein leads to increased induction of the PR genes and enhanced disease resistance. We analyzed the subcellular localization of NPR1 to gain insight into the mechanism by which this protein regulates SAR. An NPR1–green fluorescent protein fusion protein, which functions the same as the endogenous NPR1 protein, was shown to accumulate in the nucleus in response to activators of SAR. To control the nuclear transport of NPR1, we made a fusion of NPR1 with the glucocorticoid receptor hormone binding domain. Using this steroid-inducible system, we clearly demonstrate that nuclear localization of NPR1 is essential for its activity in inducing PR genes.