931 results for Nonholonomic mobile robot
Abstract:
This paper presents a robust place recognition algorithm for mobile robots that can be used for planning and navigation tasks. The proposed framework combines nonlinear dimensionality reduction, nonlinear regression under noise, and Bayesian learning to create consistent probabilistic representations of places from images. These generative models are incrementally learnt from very small training sets and used for multi-class place recognition. Recognition can be performed in near real-time and accounts for complexities such as changes in illumination, occlusions, blurring and moving objects. The algorithm was tested with a mobile robot in indoor and outdoor environments with sequences of 1579 and 3820 images, respectively. This framework has several potential applications such as map building, autonomous navigation, search-rescue tasks and context recognition.
Abstract:
Robots currently recognise and use objects through algorithms that are hand-coded or specifically trained. Such robots can operate in known, structured environments but cannot learn to recognise or use novel objects as they appear. This thesis demonstrates that a robot can develop meaningful object representations by learning the fundamental relationship between action and change in sensory state; the robot learns sensorimotor coordination. Methods based on Markov Decision Processes are experimentally validated on a mobile robot capable of gripping objects, and it is found that object recognition and manipulation can be learnt as an emergent property of sensorimotor coordination.
Abstract:
This paper presents a method to enable a mobile robot working in non-stationary environments to plan its path and localize within multiple map hypotheses simultaneously. The maps are generated using a long-term and short-term memory mechanism that ensures only persistent configurations in the environment are selected to create the maps. In order to evaluate the proposed method, experiments are conducted in an office environment. Compared to navigation systems that use only one map, our system produces superior path planning and navigation in a non-stationary environment where paths can be blocked periodically, a common scenario which poses significant challenges for typical planners.
Abstract:
The ability to build high-fidelity 3D representations of the environment from sensor data is critical for autonomous robots. Multi-sensor data fusion allows for more complete and accurate representations. Furthermore, using distinct sensing modalities (i.e. sensors using a different physical process and/or operating at different electromagnetic frequencies) usually leads to more reliable perception, especially in challenging environments, as modalities may complement each other. However, they may react differently to certain materials or environmental conditions, leading to catastrophic fusion. In this paper, we propose a new method to reliably fuse data from multiple sensing modalities, including in situations where they detect different targets. We first compute distinct continuous surface representations for each sensing modality, with uncertainty, using Gaussian Process Implicit Surfaces (GPIS). Second, we perform a local consistency test between these representations, to separate consistent data (i.e. data corresponding to the detection of the same target by the sensors) from inconsistent data. The consistent data can then be fused together, using another GPIS process, and the rest of the data can be combined as appropriate. The approach is first validated using synthetic data. We then demonstrate its benefit using a mobile robot, equipped with a laser scanner and a radar, which operates in an outdoor environment in the presence of large clouds of airborne dust and smoke.
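The local consistency test at the heart of this approach can be sketched in a simplified 1-D form. Everything below (the toy RBF Gaussian Process, the kernel parameters, and the simulated dust scenario) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def gp_predict(X_train, y_train, X_query, length=1.0, noise=0.1):
    """1-D Gaussian Process regression with an RBF kernel (toy GPIS stand-in)."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    Ks = k(X_query, X_train)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y_train
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)  # posterior variance (diag)
    return mean, np.maximum(var, 1e-9)

def consistent(mu_a, var_a, mu_b, var_b, z=2.0):
    """Local consistency test: the two modalities agree where the difference
    of their predictions lies within z combined standard deviations."""
    return np.abs(mu_a - mu_b) < z * np.sqrt(var_a + var_b)

# Toy scenario: laser and radar both sample the surface y = sin(x),
# but one modality is fooled by "dust" for x > 4.
x = np.linspace(0, 6, 40)
laser_x, laser_y = x[::2], np.sin(x[::2])
radar_x = x[1::2]
radar_y = np.sin(radar_x) + np.where(radar_x > 4, 1.5, 0.0)  # spurious returns

q = np.linspace(0.5, 5.5, 30)
mu_l, var_l = gp_predict(laser_x, laser_y, q)
mu_r, var_r = gp_predict(radar_x, radar_y, q)
mask = consistent(mu_l, var_l, mu_r, var_r)
# Fuse only where consistent (inverse-variance weighting); keep the laser
# estimate in the inconsistent (dusty) region.
fused = np.where(mask, (mu_l / var_l + mu_r / var_r) / (1 / var_l + 1 / var_r), mu_l)
```

This mirrors the paper's idea of separating consistent from inconsistent data before fusion; how the inconsistent remainder is combined is left open, as in the abstract.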
Abstract:
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
Abstract:
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrates the persistent performance of the proposed system in real changing environments, including analysis of the long-term stability.
Abstract:
This paper evaluates the performance of different text recognition techniques for a mobile robot in an indoor (university campus) environment. We compared four different methods: our own approach using existing text detection methods (Minimally Stable Extremal Regions detector and Stroke Width Transform) combined with a convolutional neural network, two modes of the open source program Tesseract, and the experimental mobile app Google Goggles. The results show that a convolutional neural network combined with the Stroke Width Transform gives the best performance in correctly matched text on images with single characters whereas Google Goggles gives the best performance on images with multiple words. The dataset used for this work is released as well.
Abstract:
This paper presents an approach to mobile robot localization, place recognition and loop closure using a monostatic ultra-wide band (UWB) radar system. The UWB radar is a time-of-flight based range measurement sensor that transmits short pulses and receives reflected waves from objects in the environment. The main idea of the proposed localization method is to treat the received waveform as a signature of place. The resulting echo waveform is very complex and depends strongly on the position of the sensor with respect to surrounding objects; on the other hand, the sensor receives similar waveforms from the same positions. Moreover, the directional characteristics of the dipole antenna are almost omnidirectional. Therefore, we can localize the sensor by finding similar waveforms in a waveform database. This paper proposes a place recognition method based on waveform matching, presents a number of experiments that illustrate the high position estimation accuracy of our UWB radar-based localization system, and shows the resulting loop detection performance in a typical indoor office environment and a forest.
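The waveform-as-signature idea can be illustrated with a minimal matching sketch. The synthetic echoes and the normalized cross-correlation score below are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

def waveform_signature(echo):
    """Normalize a received echo so matching is amplitude-invariant."""
    e = np.asarray(echo, dtype=float)
    e = e - e.mean()
    return e / (np.linalg.norm(e) + 1e-12)

def match_place(query, database):
    """Return the database index whose stored waveform best matches the
    query, i.e. the maximum normalized cross-correlation score."""
    q = waveform_signature(query)
    scores = [float(np.dot(q, waveform_signature(w))) for w in database]
    return int(np.argmax(scores)), max(scores)

# Toy database: three "places", each with a distinct synthetic echo
# (a decaying oscillation standing in for a real UWB return).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
places = [np.sin(2 * np.pi * f * t) * np.exp(-3 * t) for f in (5, 11, 23)]
db = [p + 0.05 * rng.standard_normal(t.size) for p in places]

# A fresh observation from place 1 (new noise) should match index 1.
obs = places[1] + 0.05 * rng.standard_normal(t.size)
idx, score = match_place(obs, db)
```

A real system would store many waveforms per position and handle time alignment; the sketch only shows the retrieval principle.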
Abstract:
One of the main challenges facing online and offline path planners is the uncertainty in the magnitude and direction of the environmental energy, because it is dynamic, changes with time, and is hard to forecast. This thesis develops an artificial-intelligence method that enables a mobile robot to learn from historical or forecasted data on the environmental energy available in the area of interest, supporting persistent monitoring under uncertainty using the developed algorithm.
Abstract:
Semantic perception and object labeling are key requirements for robots interacting with objects on a higher level. Symbolic annotation of objects allows the use of planning algorithms for object interaction, for instance in a typical fetch-and-carry scenario. In current research, perception is usually based on 3D scene reconstruction and geometric model matching, where trained features are matched with a 3D sample point cloud. In this work we propose a semantic perception method which is based on spatio-semantic features. These features are defined in a natural, symbolic way, such as geometry and spatial relations. In contrast to point-based model matching methods, a spatial ontology is used in which objects are described by what they "look like", similar to how a human would describe unknown objects to another person. A fuzzy-based reasoning approach matches perceivable features with a spatial ontology of the objects. The approach is able to deal with sensor noise and occlusions. Another advantage is that no training phase is needed to learn object features. The use case of the proposed method is the detection of soil sample containers in an outdoor environment, which have to be collected by a mobile robot. The approach is verified in real-world experiments.
Abstract:
This paper describes ongoing work on a system using spatial descriptions to construct abstract maps that can be used for goal-directed exploration in an unfamiliar office environment. Abstract maps contain membership, connectivity, and spatial layout information extracted from symbolic spatial information. In goal-directed exploration, the robot would then link this information with observed symbolic information and its grounded world representation. We demonstrate the ability of the system to extract and represent membership, connectivity, and spatial layout information from spatial descriptions of an office environment. In the planned study, the robot will navigate to the goal location using the abstract map to inform the best direction to explore in.
Abstract:
The world is rich with information such as signage and maps to assist humans to navigate. We present a method to extract topological spatial information from a generic bitmap floor plan and build a topometric graph that can be used by a mobile robot for tasks such as path planning and guided exploration. The algorithm first detects and extracts text in an image of the floor plan. Using the locations of the extracted text, flood fill is used to find the rooms and hallways. Doors are found by matching SURF features and these form the connections between rooms, which are the edges of the topological graph. Our system is able to automatically detect doors and differentiate between hallways and rooms, which is important for effective navigation. We show that our method can extract a topometric graph from a floor plan and is robust against ambiguous cases most commonly seen in floor plans including elevators and stairwells.
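The flood-fill stage and the door-based graph edges can be sketched on a toy occupancy grid. The grid encoding (0 = wall, 1 = free space, 2 = door) is an assumption for illustration, and the text-detection and SURF stages are omitted:

```python
from collections import deque

# A tiny hand-made "floor plan": two rooms joined by one door cell.
plan = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 1, 0],
    [0, 1, 1, 2, 1, 1, 0],
    [0, 1, 1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0],
]

def flood_fill_rooms(grid):
    """Label connected free-space regions (rooms) via BFS flood fill."""
    rooms, label, next_id = {}, {}, 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 1 and (r, c) not in label:
                queue, cells = deque([(r, c)]), [(r, c)]
                label[(r, c)] = next_id
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                                and grid[ny][nx] == 1 and (ny, nx) not in label):
                            label[(ny, nx)] = next_id
                            cells.append((ny, nx))
                            queue.append((ny, nx))
                rooms[next_id] = cells
                next_id += 1
    return rooms, label

def door_edges(grid, label):
    """Each door cell connecting two distinct rooms yields one edge of
    the topological graph."""
    edges = set()
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 2:
                adj = {label[(r + dy, c + dx)]
                       for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if (r + dy, c + dx) in label}
                if len(adj) == 2:
                    edges.add(tuple(sorted(adj)))
    return edges

rooms, label = flood_fill_rooms(plan)
edges = door_edges(plan, label)
```

In the paper, doors are found by SURF feature matching rather than a labelled grid; the sketch only shows how room regions and their door connections become nodes and edges.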
Abstract:
Robotic vision is limited by line of sight and onboard camera capabilities. Robots can acquire video or images from remote cameras, but processing additional data carries a computational burden. This paper applies the Distributed Robotic Vision Service, DRVS, to robot path planning using data outside the robot's line of sight. DRVS implements a distributed visual object detection service that distributes the computation to remote camera nodes with processing capabilities. Robots request task-specific object detection from DRVS by specifying a geographic region of interest and an object type. The remote camera nodes perform the visual processing and send the high-level object information to the robot. Additionally, DRVS relieves robots of sensor discovery by dynamically distributing object detection requests to remote camera nodes. Tested on two different indoor path planning tasks, DRVS showed a dramatic reduction in mobile robot compute load and wireless network utilization.
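The request/dispatch pattern described here can be sketched as follows; all class and field names are hypothetical, chosen for illustration rather than taken from DRVS:

```python
from dataclasses import dataclass

@dataclass
class DetectionRequest:
    region: tuple      # (x_min, y_min, x_max, y_max) geographic region of interest
    object_type: str   # e.g. "person"

class CameraNode:
    """A remote camera with onboard processing: it runs detection locally
    and returns only high-level object information (positions)."""
    def __init__(self, coverage, detections):
        self.coverage = coverage      # region this camera can see
        self.detections = detections  # {object_type: [(x, y), ...]}

    def covers(self, region):
        ax0, ay0, ax1, ay1 = self.coverage
        bx0, by0, bx1, by1 = region
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    def detect(self, request):
        x0, y0, x1, y1 = request.region
        return [(x, y) for (x, y) in self.detections.get(request.object_type, [])
                if x0 <= x <= x1 and y0 <= y <= y1]

def drvs_query(request, nodes):
    """Service side: dispatch the request to every node whose coverage
    overlaps the region, so the robot never discovers sensors itself."""
    results = []
    for node in nodes:
        if node.covers(request.region):
            results.extend(node.detect(request))
    return results

nodes = [
    CameraNode((0, 0, 5, 5), {"person": [(1, 1), (4, 4)]}),
    CameraNode((4, 0, 10, 5), {"person": [(6, 2)], "cart": [(8, 3)]}),
]
req = DetectionRequest(region=(3, 0, 7, 5), object_type="person")
hits = drvs_query(req, nodes)
```

The key point from the abstract is preserved: the robot sends a region and object type, and receives only high-level detections, with the visual processing done at the camera nodes.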
Abstract:
This paper shows that by using only symbolic language phrases, a mobile robot can purposefully navigate to specified rooms in previously unexplored environments. The robot intelligently organises a symbolic language description of the unseen environment and “imagines” a representative map, called the abstract map. The abstract map is an internal representation of the topological structure and spatial layout of symbolically defined locations. To perform goal-directed exploration, the abstract map creates a high-level semantic plan to reason about spaces beyond the robot’s known world. While completing the plan, the robot uses the metric guidance provided by a spatial layout, and grounded observations of door labels, to efficiently guide its navigation. The system is shown to complete exploration in unexplored spaces by travelling only 13.3% further than the optimal path.
Abstract:
In this paper, we propose a vision-based mobile robot localization strategy. Local scale-invariant features are used as natural landmarks in an unstructured and unmodified environment. The local character of the features we use proves to be robust to occlusion and outliers. In addition, the invariance of the features to viewpoint change makes them suitable landmarks for mobile robot localization. Scale-invariant features detected in the first exploration are indexed into a location database. Indexing and voting allow efficient global localization. The localization result is verified by epipolar geometry between the representative view in the database and the view to be localized, thus decreasing the probability of false localization. The localization system can recover the pose of the camera mounted on the robot by essential matrix decomposition; the position of the robot can then be computed easily. Both calibrated and uncalibrated cases are discussed, and relative position estimation based on a calibrated camera turns out to be the better choice. Experimental results show that our approach is effective and reliable under illumination changes, similarity transformations and extraneous features. © 2004 IEEE.
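The indexing-and-voting step can be sketched with a brute-force nearest-neighbour index. The cluster-based toy descriptors below are a simplifying assumption; the paper's actual SIFT descriptors and database structure are not reproduced:

```python
import numpy as np

def build_database(features_per_location):
    """Index local features: stack all descriptors and remember which
    location each came from (a stand-in for a real feature index)."""
    descs, locs = [], []
    for loc, feats in features_per_location.items():
        for f in feats:
            descs.append(f)
            locs.append(loc)
    return np.array(descs), np.array(locs)

def localize_by_voting(query_feats, descs, locs):
    """Each query descriptor votes for the location of its nearest
    database descriptor; the location with the most votes wins."""
    votes = {}
    for f in query_feats:
        d = np.linalg.norm(descs - f, axis=1)
        winner = str(locs[int(np.argmin(d))])
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get), votes

rng = np.random.default_rng(1)
# Three locations, each with a cluster of 128-D "descriptors".
centres = {loc: rng.standard_normal(128) for loc in ("A", "B", "C")}
db_feats = {loc: [c + 0.1 * rng.standard_normal(128) for _ in range(10)]
            for loc, c in centres.items()}
descs, locs = build_database(db_feats)

# A query view taken at location "B" (fresh noise on the same cluster).
query = [centres["B"] + 0.1 * rng.standard_normal(128) for _ in range(8)]
best, votes = localize_by_voting(query, descs, locs)
```

The subsequent epipolar verification and essential-matrix pose recovery described in the abstract would run on the winning location's matched features; they are outside this sketch.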