897 results for Concurrent localization and mapping
Abstract:
This paper presents the implementation of a modified particle filter for vision-based simultaneous localization and mapping of an autonomous robot in a structured indoor environment. Through this method, artificial landmarks such as multi-coloured cylinders can be tracked with a camera mounted on the robot, and the position of the robot can be estimated at the same time. Experimental results in simulation and in real environments show that this approach has advantages over the extended Kalman filter under ambiguous data association and various levels of odometric noise.
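As a rough illustration of the kind of landmark-based particle filter described above, the sketch below performs one predict/weight/resample cycle against a known landmark map (localization only, for brevity, rather than full SLAM); the function name, array layout and noise parameters are illustrative assumptions, not the paper's modified filter.

```python
import numpy as np

def particle_filter_step(particles, weights, odom, landmark_obs, landmark_map,
                         motion_noise=(0.02, 0.02, 0.01), obs_noise=0.1):
    """One predict/update/resample cycle for 2D landmark-based localization.

    particles:    (N, 3) array of [x, y, theta] pose hypotheses
    odom:         (dx, dy, dtheta) odometry increment in the robot frame
    landmark_obs: dict {landmark_id: measured_range}
    landmark_map: dict {landmark_id: (x, y)} known landmark positions
    """
    N = len(particles)

    # Predict: apply odometry with additive Gaussian noise per particle.
    dx, dy, dth = odom
    noise = np.random.randn(N, 3) * motion_noise
    cos_t, sin_t = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += cos_t * dx - sin_t * dy + noise[:, 0]
    particles[:, 1] += sin_t * dx + cos_t * dy + noise[:, 1]
    particles[:, 2] += dth + noise[:, 2]

    # Update: weight each particle by the likelihood of the range observations.
    for lid, measured_range in landmark_obs.items():
        lx, ly = landmark_map[lid]
        expected = np.hypot(particles[:, 0] - lx, particles[:, 1] - ly)
        weights *= np.exp(-0.5 * ((measured_range - expected) / obs_noise) ** 2)
    weights += 1e-300            # avoid an all-zero weight vector
    weights /= weights.sum()

    # Resample (systematic resampling keeps particle diversity).
    positions = (np.arange(N) + np.random.rand()) / N
    indexes = np.searchsorted(np.cumsum(weights), positions)
    return particles[indexes], np.full(N, 1.0 / N)
```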
Abstract:
The challenge of persistent appearance-based navigation and mapping is to develop an autonomous robotic vision system that can simultaneously localize, map and navigate over the lifetime of the robot. However, the computation time and memory requirements of current appearance-based methods typically scale not only with the size of the environment but also with the operation time of the platform; also, repeated revisits to locations will develop multiple competing representations which reduce recall performance. In this paper we present a solution to the persistent localization, mapping and global path planning problem in the context of a delivery robot in an office environment over a one-week period. Using a graphical appearance-based SLAM algorithm, CAT-Graph, we demonstrate constant time and memory loop closure detection with minimal degradation during repeated revisits to locations, along with topological path planning that improves over time without using a global metric representation. We compare the localization performance of CAT-Graph to openFABMAP, an appearance-only SLAM algorithm, and the path planning performance to occupancy-grid based metric SLAM. We discuss the limitations of the algorithm with regard to environment change over time and illustrate how the topological graph representation can be coupled with local movement behaviors for persistent autonomous robot navigation.
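The topological path planning mentioned above can be pictured with a generic shortest-path search whose edge costs are refined from repeated traversals, so plans improve over time; this is a hedged sketch under assumed graph and update-rule conventions, not CAT-Graph's actual planner.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a weighted topological graph.

    graph: dict {node: [(neighbour, cost), ...]}
    Returns the node sequence from start to goal, or None if unreachable.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return None

def update_edge_cost(graph, a, b, observed_time, alpha=0.3):
    """Blend a newly observed traversal time into the stored edge cost,
    so the planner's estimates improve with repeated traversals."""
    for i, (nbr, cost) in enumerate(graph[a]):
        if nbr == b:
            graph[a][i] = (nbr, (1 - alpha) * cost + alpha * observed_time)
```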
Abstract:
This paper addresses the problem of localizing the sources of contaminants spread in the environment and mapping the boundary of the affected region using an innovative swarm-intelligence-based technique. Unlike most work in this area, the algorithm is capable of localizing multiple sources simultaneously while also mapping the boundary of the contaminant spread. At the same time, the algorithm is suitable for implementation on a mobile robotic sensor network. Two types of agents, called source localization agents (S-agents) and boundary mapping agents (B-agents), are used for this purpose. The paper takes the basic glowworm swarm optimization (GSO) algorithm, which has previously been used only for multiple-signal-source localization, and modifies it considerably to make it suitable for both tasks. This requires defining new behaviour patterns for the agents, based on their terminal performance, as well as interactions between them that help the swarm split into subgroups easily, identify contaminant sources, and spread along the boundary to map its full length. Simulation results are given to demonstrate the efficacy of the algorithm.
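For readers unfamiliar with GSO, the sketch below shows one iteration of the basic algorithm (luciferin update followed by probabilistic movement toward brighter neighbours); it does not include the S-agent/B-agent modifications introduced in the paper, and the parameter values and fixed sensor range are illustrative simplifications.

```python
import numpy as np

def gso_step(positions, luciferin, fitness, rho=0.4, gamma=0.6,
             step=0.03, sensor_range=1.0):
    """One iteration of basic glowworm swarm optimization (GSO).

    positions: (N, 2) agent positions
    luciferin: (N,) current luciferin levels
    fitness:   callable mapping a position to the sensed signal strength
    """
    N = len(positions)

    # Luciferin update: decay, then deposit proportional to the sensed signal.
    luciferin = (1 - rho) * luciferin + gamma * np.array(
        [fitness(p) for p in positions])

    new_positions = positions.copy()
    for i in range(N):
        # Neighbours: agents within sensor range that glow brighter.
        dists = np.linalg.norm(positions - positions[i], axis=1)
        brighter = np.where((dists < sensor_range) & (luciferin > luciferin[i]))[0]
        if len(brighter) == 0:
            continue
        # Move probabilistically toward a brighter neighbour.
        probs = luciferin[brighter] - luciferin[i]
        probs = probs / probs.sum()
        j = np.random.choice(brighter, p=probs)
        direction = positions[j] - positions[i]
        new_positions[i] = positions[i] + step * direction / np.linalg.norm(direction)

    return new_positions, luciferin
```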
Abstract:
The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, which results in physically distinct flat surfaces being represented by a single axis. This drastically reduces the complexity of the map, but retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, which is an algorithm that accurately and automatically estimates an axis map of an environment based on sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach to create accurate stereonets safely, rapidly, and inexpensively compared to established methods. The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared to traditional and current state-of-the-art approaches.
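A minimal sketch of the LiDAR-compass idea follows, assuming axes can be treated as undirected line orientations (evaluated modulo 180°) and using a simple grid search over candidate heading corrections; the interface, search window and scoring are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def lidar_compass_heading(measured_axes, map_axes, search_deg=10.0, step_deg=0.1):
    """Estimate the heading correction that best aligns measured surface
    orientations with an axis map.

    measured_axes: array of surface orientations (radians) in the robot frame
    map_axes:      array of surface orientations (radians) in the map frame
    Axes are undirected, so residuals are evaluated modulo pi.
    Returns the heading correction (radians) and its mean residual.
    """
    candidates = np.deg2rad(np.arange(-search_deg, search_deg + step_deg, step_deg))
    best = (None, np.inf)
    for delta in candidates:
        rotated = (measured_axes + delta) % np.pi
        # For each rotated measurement, residual to the closest map axis.
        residuals = np.min(
            np.abs((rotated[:, None] - map_axes[None, :] + np.pi / 2) % np.pi
                   - np.pi / 2),
            axis=1)
        score = residuals.mean()
        if score < best[1]:
            best = (delta, score)
    return best
```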
Abstract:
For robots to operate in human environments they must be able to make their own maps because it is unrealistic to expect a user to enter a map into the robot’s memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and they have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, with the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those that have been reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the solution using a camera is significantly cheaper and is more appropriate for toys and early domestic robots.
Abstract:
For a mobile robot to operate autonomously in real-world environments, it must have an effective control system and a navigation system capable of providing robust localization, path planning and path execution. In this paper we describe work investigating synergies between mapping and control systems. We have integrated the development of a control system for navigating mobile robots with a robot SLAM system. The control system is hybrid in nature and tightly coupled with the SLAM system; it uses a combination of high- and low-level deliberative and reactive control processes to perform obstacle avoidance, exploration, global navigation and recharging, and draws upon the map learning and localization capabilities of the SLAM system. The effectiveness of this hybrid, multi-level approach was evaluated in the context of a delivery robot scenario. Over a period of two weeks the robot performed 1143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), travelled a total distance of more than 40 km, and recharged autonomously a total of 23 times. In this paper we describe the combined control and SLAM system and discuss insights gained from its successful application in a real-world context.
Abstract:
Appearance-based localization is increasingly used for loop closure detection in metric SLAM systems. Since it relies only upon the appearance-based similarity between images from two locations, it can perform loop closure regardless of accumulated metric error. However, the computation time and memory requirements of current appearance-based methods scale linearly not only with the size of the environment but also with the operation time of the platform. These properties impose severe restrictions on long-term autonomy for mobile robots, as loop closure performance will inevitably degrade with increased operation time. We present a set of improvements to the appearance-based SLAM algorithm CAT-SLAM to constrain computation scaling and memory usage with minimal degradation in performance over time. The appearance-based comparison stage is accelerated by exploiting properties of the particle observation update, and nodes in the continuous trajectory map are removed according to minimal information loss criteria. We demonstrate constant time and space loop closure detection in a large urban environment with recall performance exceeding FAB-MAP by a factor of 3 at 100% precision, and investigate the minimum computational and memory requirements for maintaining mapping performance.
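One way to picture node removal under a bounded map size is to drop the trajectory node that is most redundant with respect to its neighbours; the redundancy score below is an illustrative stand-in under assumed descriptor inputs, not CAT-SLAM's actual minimal-information-loss criterion.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two appearance descriptor vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def prune_most_redundant_node(node_descriptors, protected=frozenset()):
    """Pick one trajectory node to remove so the map stays bounded in size.

    node_descriptors: list of appearance descriptor vectors, ordered along the
    trajectory. A node whose descriptor is very similar to both of its
    neighbours carries little extra information, so removing it loses least.
    Returns the index to delete, or None if no node can be removed.
    """
    best_idx, best_redundancy = None, -np.inf
    for i in range(1, len(node_descriptors) - 1):
        if i in protected:          # e.g. recently matched loop-closure nodes
            continue
        prev_sim = cosine(node_descriptors[i], node_descriptors[i - 1])
        next_sim = cosine(node_descriptors[i], node_descriptors[i + 1])
        redundancy = min(prev_sim, next_sim)
        if redundancy > best_redundancy:
            best_idx, best_redundancy = i, redundancy
    return best_idx
```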
Abstract:
Zhikong scallop Chlamys farreri (Jones et Preston) is an economically important species in China. Understanding its immune system would be of great help in controlling diseases. In the present study, an important immunity-related gene, the Lipopolysaccharide and Beta-1,3-glucan Binding Protein (LGBP) gene, was located on C. farreri chromosomes by mapping several lgbp-containing BAC clones through fluorescence in situ hybridization (FISH). The localization of various BAC clones showed that only one locus of this gene exists in the genome of C. farreri, located on the long arm of a pair of homologous chromosomes. Molecular markers, consisting of eight single nucleotide polymorphism (SNP) markers and one insertion-deletion (indel), were developed from the LGBP gene. Indel marker testing in an F1 family revealed slightly distorted segregation (p = 0.0472). These markers can be used to map the LGBP gene to the linkage map and assign the linkage group to the corresponding chromosome. Segregation distortion of the indel marker indicates that genes with deleterious alleles might exist in the region surrounding the LGBP gene.
Abstract:
The current understanding of hormonal regulation of matrix metalloproteinase-26 (MMP-26) in the primate endometrium is incomplete. The goal of this work was to clarify estrogen and progesterone regulation of MMP-26 in the endometrium of ovariectomized, hormone-treated rhesus macaques. Ovariectomized rhesus macaques (n = 66) were treated with estradiol (E-2), E-2 plus progesterone, E-2 followed by progesterone alone, or no hormone. Endometrium was collected from the hormone-treated animals during the early, mid- and late proliferative and secretory phases of the artificial menstrual cycle. MMP-26 expression was quantified by real-time PCR, and MMP-26 transcript and protein were localized by in situ hybridization and immunohistochemistry and correlated with estrogen receptor 1 and progesterone receptor (PGR). MMP-26 was localized to glandular epithelium and was undetectable in the endometrial stroma and vasculature. MMP-26 transcript levels were minimal in the hormone-deprived macaques, and treatment with E-2 alone did not affect MMP-26 levels. Treatment with progesterone, both in the presence and absence of E-2, stimulated MMP-26 expression in the early and mid-secretory phases (P < 0.001). MMP-26 expression preceded decidualization of endometrial stroma. MMP-26 levels then declined to baseline in the late secretory phase (P < 0.01) despite continued E-2 plus progesterone treatment. Loss of detectable MMP-26 expression in the late secretory phase was correlated with late secretory phase loss of glandular epithelial PGR. Endometrial MMP-26 expression is dependent on the presence of progesterone in the early secretory phase and then gradually becomes refractory to progesterone stimulation in the late secretory phase. In the macaque, MMP-26 is a marker of the pre-decidual, secretory endometrium. During the second half of the late secretory phase, and during decidualization, MMP-26 loses its response to progesterone concurrent with the loss of epithelial PGR. The decline in MMP-26 levels between the mid- and late secretory phases may play a role in the receptive window for embryo implantation.
Abstract:
A system for simultaneous 2D estimation of rectangular room geometry and transceiver localization is proposed. The system is based on two radio transceivers, both capable of full-duplex operation (simultaneous transmission and reception). This property enables measurement of the channel impulse response (CIR) at the same place the signal is transmitted (generated), commonly known as the self-to-self CIR. Another novelty of the proposed system is spatial CIR discrimination, made possible by a receiver antenna design consisting of eight sectorized antennas with 45° aperture in the horizontal plane and total coverage equal to that of an isotropic antenna. The dimensions of a rectangular room are reconstructed directly from spatial radio impulse responses by extracting round-trip-time (RTT) information. Using a radar approach, estimates of wall and corner positions are derived. Tests were performed using measured data, and the results confirm the feasibility of the approach.
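The underlying geometry reduces to distance = c × RTT / 2 per wall echo. The sketch below assumes the first echoes are available in four opposing antenna sectors aligned with the walls, which is a deliberate simplification of the proposed system; the sector names and return convention are illustrative.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def wall_distance(rtt_seconds):
    """Distance to a reflecting wall from the round-trip time of its echo."""
    return SPEED_OF_LIGHT * rtt_seconds / 2.0

def room_dimensions(rtt_by_sector):
    """Rectangular room size from first echoes seen in four opposing sectors.

    rtt_by_sector: dict with the RTT (seconds) of the first wall echo in the
    'front', 'back', 'left' and 'right' sectors of the sectorized antenna,
    assuming the antenna boresight is aligned with the walls.
    Returns (length, width) and the transceiver offset from the back-left corner.
    """
    d_front = wall_distance(rtt_by_sector['front'])
    d_back = wall_distance(rtt_by_sector['back'])
    d_left = wall_distance(rtt_by_sector['left'])
    d_right = wall_distance(rtt_by_sector['right'])
    length = d_front + d_back
    width = d_left + d_right
    return (length, width), (d_back, d_left)
```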
Abstract:
This paper describes a biologically inspired approach to vision-only simultaneous localization and mapping (SLAM) on ground-based platforms. The core SLAM system, dubbed RatSLAM, is based on computational models of the rodent hippocampus, and is coupled with a lightweight vision system that provides odometry and appearance information. RatSLAM builds a map in an online manner, driving loop closure and relocalization through sequences of familiar visual scenes. Visual ambiguity is managed by maintaining multiple competing vehicle pose estimates, while cumulative errors in odometry are corrected after loop closure by a map correction algorithm. We demonstrate the mapping performance of the system on a 66 km car journey through a complex suburban road network. Using only a web camera operating at 10 Hz, RatSLAM generates a coherent map of the entire environment at real-time speed, correctly closing more than 51 loops of up to 5 km in length.
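Correction of accumulated odometric error after loop closure can be sketched as an iterative relaxation over a pose graph, in the spirit of (but not identical to) RatSLAM's experience-map correction; the node and link structures, weighting and iteration count below are assumptions.

```python
import numpy as np

def relax_map(positions, links, alpha=0.5, iterations=20):
    """Distribute loop-closure error through a pose graph by relaxation.

    positions: dict {node_id: np.array([x, y])} current map positions
    links:     list of (i, j, np.array([dx, dy])) odometric offsets from i to j
    Each iteration nudges every node toward the positions its links predict.
    """
    for _ in range(iterations):
        corrections = {n: np.zeros(2) for n in positions}
        counts = {n: 0 for n in positions}
        for i, j, offset in links:
            predicted_j = positions[i] + offset
            error = predicted_j - positions[j]
            # Split the disagreement between the two endpoints of the link.
            corrections[j] += 0.5 * error
            corrections[i] -= 0.5 * error
            counts[i] += 1
            counts[j] += 1
        for n in positions:
            if counts[n]:
                positions[n] = positions[n] + alpha * corrections[n] / counts[n]
    return positions
```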
Abstract:
To navigate successfully in a novel environment a robot needs to be able to Simultaneously Localize And Map (SLAM) its surroundings. The most successful solutions to this problem so far have involved probabilistic algorithms, but there has been much promising work involving systems based on the workings of part of the rodent brain known as the hippocampus. In this paper we present a biologically plausible system called RatSLAM that uses competitive attractor networks to carry out SLAM in a probabilistic manner. The system can effectively perform parameter self-calibration and SLAM in one dimension. Tests in two-dimensional environments revealed the inability of the RatSLAM system to maintain multiple pose hypotheses in the face of ambiguous visual input. These results support recent rat experiments suggesting that current competitive attractor models are not a complete solution to the hippocampal modelling problem.
Abstract:
This paper describes the current state of RatSLAM, a Simultaneous Localisation and Mapping (SLAM) system based on models of the rodent hippocampus. RatSLAM uses a competitive attractor network to fuse visual and odometry information. Energy packets in the network represent pose hypotheses, which are updated by odometry and can be enhanced or inhibited by visual input. This paper shows the effectiveness of the system in real robot tests in unmodified indoor environments using a learning vision system. Results are shown for two test environments: a large corridor loop and the complete floor of an office building.
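A minimal sketch of a one-dimensional competitive attractor update of the kind described above follows, with local excitation, global inhibition, an odometry-driven shift of the energy packet, and optional visual injection; the kernel shape, gains and FFT-based convolution are illustrative choices, not the published network.

```python
import numpy as np

def can_step(activity, excite_kernel, odom_shift=0, visual=None,
             inhibition=0.002, visual_gain=0.1):
    """One update of a 1D continuous attractor network over pose cells.

    activity:      (N,) non-negative pose-cell activity (the energy packet)
    excite_kernel: (N,) circular local-excitation weights (e.g. a Gaussian)
    odom_shift:    integer number of cells to shift the packet (path integration)
    visual:        optional (N,) activity injected by the vision system
    """
    # Local excitation: circular convolution spreads activity to neighbours.
    excited = np.real(np.fft.ifft(np.fft.fft(activity) * np.fft.fft(excite_kernel)))

    # Global inhibition keeps a single dominant packet; clamp at zero.
    inhibited = np.maximum(excited - inhibition, 0.0)

    # Path integration: odometry shifts the packet around the ring.
    shifted = np.roll(inhibited, odom_shift)

    # Visual input enhances cells associated with familiar scenes.
    if visual is not None:
        shifted = shifted + visual_gain * visual

    # Normalise total activity so the network neither dies nor saturates.
    total = shifted.sum()
    return shifted / total if total > 0 else shifted
```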
Abstract:
This paper describes a novel experiment in which two very different methods of underwater robot localization are compared. The first method is based on a geometric approach in which a mobile node moves within a field of static nodes, and all nodes are capable of acoustically estimating the range to their neighbours. The second method uses visual odometry, from stereo cameras, by integrating scaled optical flow. The fundamental algorithmic principles of each localization technique are described. We also present experimental results comparing acoustic localization with GPS for surface operation, and a comparison of acoustic and visual methods for underwater operation.
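For the acoustic approach, a position fix from ranges to the static nodes can be sketched as a linearised least-squares trilateration; the formulation below assumes a 2D layout with at least three non-collinear beacons, which is a simplification of the experiment described above.

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Least-squares 2D position from ranges to static beacon nodes.

    beacons: (M, 2) known beacon positions (M >= 3, not collinear)
    ranges:  (M,) measured ranges to each beacon
    Linearises by subtracting the first beacon's range equation from the rest:
        2(xi - x0) x + 2(yi - y0) y = r0^2 - ri^2 + (xi^2 + yi^2) - (x0^2 + y0^2)
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = beacons[0]
    r0 = ranges[0]
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```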