Abstract:
This paper proposes an approach to obtain a localisation that is robust to smoke by exploiting multiple sensing modalities: visual and infrared (IR) cameras. This localisation is based on a state-of-the-art visual SLAM algorithm. First, we show that a reasonably accurate localisation can be obtained in the presence of smoke by using only an IR camera, a sensor that is hardly affected by smoke, unlike a visual camera (operating in the visible spectrum). Second, we demonstrate that improved results can be obtained by combining the information from the two sensor modalities (visual and IR cameras). Third, we show that by detecting the impact of smoke on the visual images using a data quality metric, we can anticipate and mitigate the degradation in localisation performance by discarding the most affected data. The experimental validation presents multiple trajectories estimated by the various methods considered, all thoroughly compared to an accurate dGPS/INS reference.
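To make the gating idea concrete, the following is a minimal sketch in Python, assuming a hypothetical RMS-contrast metric and threshold in place of the paper's actual data quality metric; frame shapes and values are illustrative only.

    import numpy as np

    def contrast_metric(gray):
        # RMS contrast as a crude proxy for smoke-induced degradation;
        # stands in for the paper's actual data quality metric.
        return float(np.std(gray / 255.0))

    def select_frames(visual_gray, ir_gray, threshold=0.08):
        # Discard the visual frame when smoke depresses its contrast
        # below the (hypothetical) threshold, falling back to the IR
        # frame, which is hardly affected by smoke.
        if contrast_metric(visual_gray) >= threshold:
            return [visual_gray, ir_gray]   # fuse both modalities
        return [ir_gray]                    # IR-only localisation

    smoky = np.full((480, 640), 128, dtype=np.uint8)            # washed out
    clear = (np.random.rand(480, 640) * 255).astype(np.uint8)
    ir = (np.random.rand(480, 640) * 255).astype(np.uint8)
    print(len(select_frames(smoky, ir)), len(select_frames(clear, ir)))  # 1 2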
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
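The core transformation and its first-order error propagation can be sketched as follows, in a 2D simplification with hypothetical noise figures; the paper's full 3D model and calibration procedure are not reproduced here.

    import numpy as np

    def rot(th):
        c, s = np.cos(th), np.sin(th)
        return np.array([[c, -s], [s, c]])

    def drot(th):
        # Derivative of the rotation matrix with respect to heading.
        c, s = np.cos(th), np.sin(th)
        return np.array([[-s, -c], [c, -s]])

    def to_nav(p_s, t, th):
        # Rigid transform of a sensor-frame point into the navigation
        # frame (2D simplification of the chain analysed in the paper).
        return rot(th) @ p_s + t

    def nav_covariance(p_s, th, cov_p, var_th, cov_t):
        # First-order propagation: noise in the point, in the platform
        # heading, and in the platform position all contribute.
        J = (drot(th) @ p_s).reshape(2, 1)
        return rot(th) @ cov_p @ rot(th).T + var_th * (J @ J.T) + cov_t

    # A point 50 m ahead: a 0.5 degree heading error dominates the mapped
    # position error, which is why extrinsic calibration matters with range.
    p = np.array([50.0, 0.0])
    cov = nav_covariance(p, 0.3, np.diag([0.02**2, 0.02**2]),
                         np.deg2rad(0.5)**2, np.diag([0.05**2, 0.05**2]))
    print(np.sqrt(np.diag(cov)))   # per-axis 1-sigma mapping error (m)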
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. The paper provides a full description of the system used for data collection and of the types of environments and conditions in which these data sets have been gathered, which include the presence of airborne dust, smoke and rain.
Abstract:
This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each laser scan and camera image pair acquired, the information at corners of the laser scan is compared with the content of the image, resulting in a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
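A minimal sketch of the first step, projecting a laser point into the image while propagating extrinsic-calibration uncertainty to pixel space; the intrinsics, parameterisation, and noise figures below are illustrative assumptions, not the paper's values.

    import numpy as np

    FX, FY, CX, CY = 520.0, 520.0, 320.0, 240.0   # hypothetical intrinsics

    def rot(rx, ry, rz):
        # Roll/pitch/yaw rotation (ZYX composition).
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(rx), -np.sin(rx)],
                       [0, np.sin(rx), np.cos(rx)]])
        Ry = np.array([[np.cos(ry), 0, np.sin(ry)],
                       [0, 1, 0],
                       [-np.sin(ry), 0, np.cos(ry)]])
        Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                       [np.sin(rz), np.cos(rz), 0],
                       [0, 0, 1]])
        return Rz @ Ry @ Rx

    def project(p_laser, x):
        # x = (tx, ty, tz, rx, ry, rz): laser-to-camera extrinsics.
        p_cam = rot(*x[3:]) @ p_laser + x[:3]
        return np.array([FX * p_cam[0] / p_cam[2] + CX,
                         FY * p_cam[1] / p_cam[2] + CY])

    def pixel_covariance(p_laser, x, cov_x, eps=1e-6):
        # Numerical Jacobian of the projection w.r.t. the six extrinsic
        # parameters, then first-order propagation of their covariance.
        J = np.zeros((2, 6))
        for i in range(6):
            dx = np.zeros(6); dx[i] = eps
            J[:, i] = (project(p_laser, x + dx)
                       - project(p_laser, x - dx)) / (2 * eps)
        return J @ cov_x @ J.T

    x0 = np.array([0.1, 0.0, -0.2, 0.0, 0.0, 0.0])       # nominal extrinsics
    cov_x = np.diag([0.01**2] * 3 + [np.deg2rad(0.2)**2] * 3)
    print(pixel_covariance(np.array([1.0, 0.0, 5.0]), x0, cov_x))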
Abstract:
This work aims to promote reliability and integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicle (UGV) autonomy. For this purpose, a comprehensive UGV system, comprising many different exteroceptive and proprioceptive sensors, has been built. The first contribution of this work is a large, accurately calibrated and synchronised multi-modal data set, gathered in controlled environmental conditions, including the presence of dust, smoke and rain. The data have then been used to analyse the effects of such challenging conditions on perception and to identify common perceptual failures. The second contribution is a presentation of methods for mitigating these failures to promote perceptual integrity in adverse environmental conditions.
Abstract:
This paper presents an approach to autonomously monitor the behavior of a robot endowed with several navigation and locomotion modes, each adapted to the terrain to be traversed. The mode selection process is done in two steps: the best-suited mode is first selected on the basis of initial information or a qualitative map built on-line by the robot; then, the motions of the robot are monitored by various processes that update mode transition probabilities in a Markov system. The paper focuses on this latter selection process: the overall approach is depicted, and preliminary experimental results are presented.
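A minimal sketch of the selection step, with a hypothetical mode set and illustrative monitored probabilities; the paper's actual Markov system and monitoring processes are richer than this.

    import numpy as np

    MODES = ["mode_a", "mode_b", "mode_c"]     # hypothetical mode set

    # p_success[i, j]: monitored probability that a transition from mode
    # i to mode j succeeds on the current terrain; updated on-line.
    p_success = np.array([[0.95, 0.60, 0.70],
                          [0.80, 0.90, 0.75],
                          [0.85, 0.65, 0.92]])

    def update(i, j, succeeded, lr=0.1):
        # Exponential update after each monitored attempt, a simple
        # stand-in for the paper's transition-probability updates.
        p_success[i, j] = (1 - lr) * p_success[i, j] + lr * float(succeeded)

    def select_mode(current, terrain_fitness):
        # Pick the mode maximising transition success times the terrain
        # fitness taken from the qualitative map.
        i = MODES.index(current)
        return MODES[int(np.argmax(p_success[i] * terrain_fitness))]

    print(select_mode("mode_a", np.array([0.4, 0.9, 0.6])))   # mode_b
    update(0, 1, succeeded=True)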
Abstract:
This article presents an approach to improve and monitor the behavior of a skid-steering rover on rough terrain. An adaptive locomotion control generates speed references to avoid slipping situations. Enhanced odometry provides a better estimate of the distance travelled. A probabilistic classification procedure evaluates the locomotion efficiency on-line and detects locomotion faults. Results obtained with a Marsokhod rover are presented throughout the paper.
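The fault-detection idea can be sketched with a two-class Bayes test on the measured slip ratio; the class models and prior below are illustrative assumptions, not the rover's actual classifier.

    from math import exp, pi, sqrt

    def gaussian(x, mu, sigma):
        return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

    def slip_ratio(commanded_v, odometric_v):
        # Near 0: wheels and ground agree; near 1: spinning in place.
        return 1.0 - odometric_v / max(commanded_v, 1e-6)

    def p_fault(s, prior_fault=0.1):
        # Bayes rule over two illustrative classes: nominal locomotion
        # (low slip) versus a locomotion fault (high slip).
        l_nom = gaussian(s, 0.05, 0.05)
        l_fault = gaussian(s, 0.60, 0.20)
        num = l_fault * prior_fault
        return num / (num + l_nom * (1 - prior_fault))

    s = slip_ratio(commanded_v=0.5, odometric_v=0.2)   # 60% slip
    print(round(p_fault(s), 3))                        # close to 1.0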
Abstract:
Covertly tracking mobile targets, either animal or human, in previously unmapped outdoor natural environments using off-road robotic platforms requires both visual and acoustic stealth. Whilst the use of robots for stealthy surveillance is not new, the majority of existing approaches consider only navigation for visual covertness. However, most fielded robotic systems have a non-negligible acoustic footprint arising from the onboard sensors, motors, computers and cooling systems, and also from the wheels interacting with the terrain during motion. This time-varying acoustic signature can jeopardise any visual covertness and needs to be addressed in any stealthy navigation strategy. In previous work, we addressed the initial concepts for acoustically masking a tracking robot's movements as it travels between observation locations selected to minimise its detectability by a dynamic natural target while ensuring continuous visual tracking of the target. This work extends the overall concept by examining the utility of real-time acoustic signature self-assessment and exploiting shadows as hiding locations for use in a combined visual and acoustic stealth framework.
Abstract:
This work is motivated by the desire to covertly track mobile targets, either animal or human, in previously unmapped outdoor natural environments using off-road robotic platforms with a non-negligible acoustic signature. The use of robots for stealthy surveillance is not new; many studies exist, but they consider only the navigation problem of maintaining visual covertness. However, robotic systems also have a significant acoustic footprint from the onboard sensors, motors, computers and cooling systems, and also from the wheels interacting with the terrain during motion. All of these can jeopardise any visual covertness. In this work, we experimentally explore the concepts of opportunistically utilizing naturally occurring sounds within outdoor environments to mask the motion of a robot, and of remaining visually covert whilst maintaining constant observation of the target. Our experiments in a constrained outdoor built environment demonstrate the effectiveness of the concept, showing a reduced acoustic signature as perceived by a mobile target, allowing the robot to covertly navigate to opportunistic vantage points for observation.
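A minimal sketch of the masking test, under a free-field spherical-spreading assumption with illustrative levels; real propagation over terrain, and the robot's actual signature model, are far more involved.

    import math

    def received_level(source_db_at_1m, distance_m):
        # Spherical spreading: -6 dB per doubling of distance.
        return source_db_at_1m - 20.0 * math.log10(max(distance_m, 1.0))

    def can_move(robot_db_at_1m, dist_to_target_m, ambient_db_at_target,
                 margin_db=3.0):
        # Move only while the robot's level at the target sits at least
        # margin_db below the naturally occurring ambient sound there.
        return (received_level(robot_db_at_1m, dist_to_target_m)
                <= ambient_db_at_target - margin_db)

    # 62 dB signature at 1 m, target 40 m away, 48 dB ambient (e.g. wind):
    print(can_move(62.0, 40.0, 48.0))   # True: ~30 dB received vs 45 dB budget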
Abstract:
This paper describes the experimental evaluation of a novel Autonomous Surface Vehicle capable of navigating complex inland water reservoirs and measuring a range of water quality properties and greenhouse gas emissions. The 16 ft long solar-powered catamaran is capable of collecting water column profiles whilst in motion. It is also directly integrated with a reservoir-scale floating sensor network to allow remote mission uploads, data download and adaptive sampling strategies. This paper describes the onboard vehicle navigation and control algorithms as well as obstacle avoidance strategies. Experimental results are shown demonstrating its ability to maintain track and avoid obstacles on a variety of large-scale missions and under differing weather conditions, as well as its ability to continuously collect various water quality parameters, complementing traditional manual monitoring campaigns.
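The track-keeping element can be sketched as a cross-track-error heading controller; the gain and geometry below are illustrative, not the vehicle's actual control law.

    import math

    def cross_track_error(pos, wp_a, wp_b):
        # Signed lateral offset from the line a -> b (positive to port).
        ax, ay = wp_a; bx, by = wp_b; px, py = pos
        dx, dy = bx - ax, by - ay
        return (dx * (py - ay) - dy * (px - ax)) / math.hypot(dx, dy)

    def heading_command(pos, wp_a, wp_b, k=0.8):
        # Steer along the track, biased back toward the line in
        # proportion to the cross-track error.
        track = math.atan2(wp_b[1] - wp_a[1], wp_b[0] - wp_a[0])
        return track - math.atan(k * cross_track_error(pos, wp_a, wp_b))

    # 2 m to port of an easterly track: command points back to the line.
    cmd = heading_command((5.0, 2.0), (0.0, 0.0), (100.0, 0.0))
    print(round(math.degrees(cmd), 1))   # -58.0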
Abstract:
This paper describes the development of a novel vision-based autonomous surface vehicle with the purpose of performing coordinated docking manoeuvres with a target, such as an autonomous underwater vehicle, at the water's surface. The system architecture integrates two small processor units; the first performs vehicle control and implements a virtual-force-based docking strategy, while the second performs vision-based target segmentation and tracking. Furthermore, the architecture utilises wireless sensor network technology, allowing the vehicle to be observed by, and even integrated within, an ad-hoc sensor network. Simulated and experimental results are presented demonstrating the autonomous vision-based docking strategy on a proof-of-concept vehicle.
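A minimal sketch of a virtual-force docking loop: an attractive spring toward the tracked target plus damping, integrated forward in time; the gains, mass, and thrust limit are illustrative assumptions.

    import numpy as np

    def virtual_force(p, p_target, v, k_att=0.6, k_damp=6.0):
        # Attractive spring toward the visually tracked target, damped
        # so the approach settles rather than oscillating.
        return k_att * (p_target - p) - k_damp * v

    def step(p, v, p_target, dt=0.1, mass=30.0):
        # One control step; force saturates at a nominal thruster limit.
        f = np.clip(virtual_force(p, p_target, v), -20.0, 20.0)
        v = v + (f / mass) * dt
        return p + v * dt, v

    p, v = np.array([10.0, 5.0]), np.zeros(2)
    for _ in range(600):                    # 60 s of simulated approach
        p, v = step(p, v, np.array([0.0, 0.0]))
    print(np.round(p, 2), np.round(v, 2))   # near the target, nearly at rest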
Abstract:
This paper describes the development of small, low-cost cooperative robots for sustainable broad-acre agriculture, to increase broad-acre crop production and reduce environmental impact. The current focus of the project is to use robotics to deal with resistant weeds, a critical problem for Australian farmers. To keep the overall system affordable, our robot uses low-cost cameras and positioning sensors to perform a large-scale coverage task while also avoiding obstacles. A multi-robot coordinator assigns parts of a given field to individual robots. The paper describes the modification of an electric vehicle for autonomy and experimental results from one real robot and twelve simulated robots working in coordination for approximately two hours on a 55 hectare field in Emerald, Australia. Over this time the real robot 'sprayed' 6 hectares within its assigned field partition, missing 2.6% and overlapping 9.7%, and successfully avoided three obstacles.
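The coordinator's assignment step can be sketched as an equal-width strip partition; the field width and robot count below are illustrative, not the Emerald trial's actual geometry.

    def partition_field(width_m, n_robots):
        # Equal-width strips, one per robot: a minimal stand-in for the
        # multi-robot coordinator's field-assignment step.
        strip = width_m / n_robots
        return [(i * strip, (i + 1) * strip) for i in range(n_robots)]

    print(partition_field(720.0, 12))   # [(0.0, 60.0), (60.0, 120.0), ...]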
Abstract:
This paper describes a novel obstacle detection system for autonomous robots in agricultural field environments that uses a novelty detector to inform stereo matching. Stereo vision alone erroneously detects obstacles in environments with an ambiguous appearance and ground plane, such as broad-acre crop fields with harvested crop residue. The novelty detector estimates the probability density in image descriptor space and incorporates image-space positional understanding to identify potential regions for obstacle detection using dense stereo matching. The results demonstrate that the system is able to detect obstacles typical of a farm, both day and night. This system was successfully used as the sole means of obstacle detection for an autonomous robot performing a long-term, two-hour coverage task travelling 8.5 km.
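The novelty detector can be sketched as a kernel density estimate over descriptor space: regions whose descriptors have low density under obstacle-free training data are handed to stereo matching. Descriptor dimensionality, bandwidth, and threshold below are illustrative assumptions.

    import numpy as np

    def kde_log_density(x, train, bandwidth=0.5):
        # Gaussian kernel density estimate: low density means "unlike
        # anything seen in the obstacle-free training imagery".
        d2 = np.sum((train - x) ** 2, axis=1) / (2 * bandwidth ** 2)
        norm = len(train) * (np.sqrt(2 * np.pi) * bandwidth) ** train.shape[1]
        return float(np.log(np.sum(np.exp(-d2)) / norm + 1e-300))

    def novel_regions(descriptors, train, log_threshold=-10.0):
        # Flag image regions in low-density areas; only these are passed
        # on to dense stereo matching for obstacle confirmation.
        return [i for i, x in enumerate(descriptors)
                if kde_log_density(x, train) < log_threshold]

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(500, 4))   # crop-stubble appearance
    test = np.array([[0.1, -0.2, 0.0, 0.3],
                     [0.5, 0.4, -0.1, 0.2],
                     [6.0, 6.0, 6.0, 6.0]])       # one obstacle-like patch
    print(novel_regions(test, train))             # [2]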
Abstract:
In this paper we describe the benefits of a performance-based approach to modeling biological systems for use in robotics. Specifically, we describe the RatSLAM system, a computational model of the processes thought to drive navigation in a part of the rodent brain called the hippocampus. Unlike typical computational modeling approaches, which focus on biological fidelity, RatSLAM's development cycle has been driven primarily by performance evaluation on robots navigating in a wide variety of challenging, real-world environments. We briefly describe three seminal results, two in robotics and one in biology. In addition, we present current research on brain-inspired learning algorithms with the aim of enabling a robot to autonomously learn how best to use its sensor suite to navigate, without requiring any specific knowledge of the robot, sensor types or environment characteristics. Our aim is to drive discussion on the merits of practical, performance-focused implementations of biological models in robotics.
Abstract:
This paper presents a robust place recognition algorithm for mobile robots that can be used for planning and navigation tasks. The proposed framework combines nonlinear dimensionality reduction, nonlinear regression under noise, and Bayesian learning to create consistent probabilistic representations of places from images. These generative models are incrementally learnt from very small training sets and used for multi-class place recognition. Recognition can be performed in near real-time and accounts for complexities such as changes in illumination, occlusions, blurring and moving objects. The algorithm was tested with a mobile robot in indoor and outdoor environments with sequences of 1579 and 3820 images, respectively. This framework has several potential applications, such as map building, autonomous navigation, search-and-rescue tasks and context recognition.
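The generative-model idea can be sketched with one Gaussian model per place in a reduced descriptor space; the diagonal-Gaussian form, feature values, and place names are illustrative stand-ins for the paper's nonlinear reduction and regression machinery.

    import numpy as np

    def fit_place_model(features):
        # One generative model per place: mean and diagonal variance of
        # its (few) training descriptors, with a floor for tiny sets.
        return features.mean(axis=0), features.var(axis=0) + 1e-3

    def log_likelihood(x, model):
        mu, var = model
        return float(-0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var)))

    def classify(x, models):
        # Multi-class place recognition under a uniform prior.
        return max(models, key=lambda place: log_likelihood(x, models[place]))

    rng = np.random.default_rng(1)
    models = {
        "corridor": fit_place_model(rng.normal(0.0, 0.3, size=(8, 5))),
        "courtyard": fit_place_model(rng.normal(2.0, 0.3, size=(8, 5))),
    }
    print(classify(np.full(5, 1.9), models))   # courtyard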