532 results for swd: Image processing
Abstract:
Considering the wide spectrum of situations it may encounter, a robot navigating autonomously in outdoor environments needs to be endowed with several operating modes, for robustness and efficiency reasons. Indeed, the terrain it has to traverse may be composed of flat or rough areas, low-cohesion soils such as sand dunes, concrete roads, etc. Traversing these various kinds of environment calls for different navigation and/or locomotion functionalities, especially if the robot is endowed with different locomotion abilities, such as the robots WorkPartner, Hylos [4], Nomad or the Marsokhod rovers. Numerous rover navigation techniques have been proposed, each suited to a particular environment context (e.g. path following, obstacle avoidance in more or less cluttered environments, rough terrain traverses). However, few contributions in the literature tackle the problem of autonomously selecting the most suitable mode [3]. Most of the existing work is indeed devoted to the passive analysis of a single navigation mode, as in [2]. Fault detection is of course essential: one can imagine that proper monitoring of the Mars Exploration Rover Opportunity, by detecting non-nominal behavior of some parameters, could have prevented the rover from being stuck in a dune for several weeks. But the ability to recover from the anticipated problem by switching to a better suited navigation mode would bring greater autonomy, and therefore better overall efficiency. We propose here a probabilistic framework to achieve this, which fuses environment-related and robot-related information in order to actively control the rover operations.
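As a rough illustration of the fusion idea described in this abstract, the following sketch combines an environment observation with behavior monitors in a Bayesian update over modes. It is not the authors' implementation; the mode names, likelihood tables and penalty factor are all invented for illustration.

```python
# A minimal sketch (not the paper's method) of fusing environment- and
# robot-related evidence to select a navigation mode probabilistically.
import numpy as np

MODES = ["path_following", "obstacle_avoidance", "rough_terrain"]

# Hypothetical P(terrain_class | mode is appropriate) likelihoods.
P_TERRAIN_GIVEN_MODE = {
    "path_following":     {"flat": 0.80, "cluttered": 0.15, "rough": 0.05},
    "obstacle_avoidance": {"flat": 0.20, "cluttered": 0.70, "rough": 0.10},
    "rough_terrain":      {"flat": 0.10, "cluttered": 0.20, "rough": 0.70},
}

def select_mode(prior, terrain_class, monitor_ok):
    """Fuse a terrain observation and behavior monitors into a
    posterior over modes, then return the most probable mode."""
    post = np.array([
        prior[i]
        * P_TERRAIN_GIVEN_MODE[m][terrain_class]
        # Penalise modes whose monitor reports a fault (factor invented).
        * (1.0 if monitor_ok[m] else 0.2)
        for i, m in enumerate(MODES)
    ])
    post /= post.sum()
    return MODES[int(np.argmax(post))], post

prior = np.ones(len(MODES)) / len(MODES)
mode, post = select_mode(prior, "rough", {m: True for m in MODES})
print(mode, np.round(post, 3))
```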
Abstract:
Autonomous navigation and locomotion of a mobile robot in natural environments remain a rather open issue. Several functionalities are required to complete the usual perception/decision/action cycle. They can be divided into two main categories: navigation (perception and decision about the movement) and locomotion (movement execution). In order to face the large range of possible situations in natural environments, it is essential to make use of various complementary functionalities, defining various navigation and locomotion modes. Indeed, a number of navigation and locomotion approaches have been proposed in the literature in recent years, but none can claim to achieve autonomous navigation and locomotion in every situation. Thus, it seems relevant to endow an outdoor mobile robot with several complementary navigation and locomotion modes. Accordingly, the robot must also have means to select the most appropriate mode to apply. This thesis proposes the development of such a navigation/locomotion mode selection system, based on two types of data: an observation of the context, to determine in what kind of situation the robot has to achieve its movement, and an evaluation of the behavior of the current mode, made by monitors which influence the transitions towards other modes when the behavior of the current one is considered unsatisfactory. Hence, this document introduces a probabilistic framework for the estimation of the mode to be applied, the navigation and locomotion modes used, a qualitative terrain representation method (based on the evaluation of a difficulty computed from the placement of the robot's structure on a digital elevation map), and monitors that check the behavior of the modes used (evaluation of rolling locomotion efficiency, monitoring of the robot's attitude and configuration, etc.). Some experimental results obtained with these elements integrated on board two different outdoor robots are presented and discussed.
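The qualitative terrain representation mentioned above evaluates a difficulty from the placement of the robot on a digital elevation map. A minimal sketch of that kind of evaluation follows, assuming a grid DEM; the footprint size, plane-fitting scheme and saturation thresholds are illustrative, not those of the thesis.

```python
# Hedged sketch: score a DEM cell's difficulty from the roll, pitch and
# residual roughness of a robot-sized patch fitted with a plane.
import numpy as np

def cell_difficulty(dem, i, j, half=2, cell_size=0.1, clearance=0.15):
    """Return a difficulty in [0, 1] for the footprint centred at (i, j)."""
    patch = dem[i - half:i + half + 1, j - half:j + half + 1]
    # Fit a plane z = a*x + b*y + c to the patch (least squares).
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1] * cell_size
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(patch.size)])
    (a, b, c), *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    pitch, roll = np.arctan(a), np.arctan(b)
    # Residual roughness relative to the fitted plane.
    roughness = np.max(np.abs(patch.ravel() - A @ (a, b, c)))
    scores = [abs(pitch) / np.radians(30),   # saturate at 30 degrees
              abs(roll) / np.radians(30),
              roughness / clearance]         # saturate at body clearance
    return min(1.0, max(scores))

dem = np.random.default_rng(0).normal(0.0, 0.02, (20, 20))
print(cell_difficulty(dem, 10, 10))
```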
Abstract:
It is well recognized that many scientifically interesting sites on Mars are located in rough terrains. Therefore, to enable safe autonomous operation of a planetary rover during exploration, the ability to accurately estimate terrain traversability is critical. In particular, this estimate needs to account for terrain deformation, which significantly affects the vehicle attitude and configuration. This paper presents an approach to estimate vehicle configuration, as a measure of traversability, in deformable terrain by learning the correlation between exteroceptive and proprioceptive information in experiments. We first perform traversability estimation with rigid terrain assumptions, then correlate the output with experienced vehicle configuration and terrain deformation using a multi-task Gaussian Process (GP) framework. Experimental validation of the proposed approach was performed on a prototype planetary rover and the vehicle attitude and configuration estimate was compared with state-of-the-art techniques. We demonstrate the ability of the approach to accurately estimate traversability with uncertainty in deformable terrain.
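The paper correlates a rigid-terrain prediction with the experienced vehicle configuration using a multi-task Gaussian Process. The sketch below shows only the single-task core of that idea, with a hand-rolled GP regression from predicted to measured pitch; the kernel, hyperparameters and training data are invented.

```python
# Hedged, simplified sketch of GP regression from a rigid-terrain
# attitude prediction to the attitude experienced on deformable terrain.
import numpy as np

def rbf(a, b, ell=0.3, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

# Training pairs: predicted pitch (rigid assumption) vs measured pitch.
x_train = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
y_train = np.array([0.00, 0.04, 0.07, 0.10, 0.12, 0.13])  # deformation flattens response

sn2 = 1e-4                                   # measurement noise variance
K = rbf(x_train, x_train) + sn2 * np.eye(len(x_train))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

def predict(x_query):
    """GP posterior mean and variance at the query inputs."""
    ks = rbf(x_query, x_train)
    mean = ks @ alpha
    v = np.linalg.solve(L, ks.T)
    var = rbf(x_query, x_query).diagonal() - np.sum(v**2, axis=0)
    return mean, var

m, v = predict(np.array([0.12]))
print(f"predicted pitch {m[0]:.3f} rad, std {np.sqrt(v[0]):.3f}")
```

The posterior variance is what makes the estimate "traversability with uncertainty": queries far from the training data return a large variance rather than a confident extrapolation.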
Abstract:
Operating in vegetated environments is a major challenge for autonomous robots. Obstacle detection based only on geometric features causes the robot to treat foliage, for example small grass tussocks that could easily be driven through, as obstacles. Classifying vegetation does not solve this problem, since there might be an obstacle hidden behind the vegetation; in addition, dense vegetation typically does need to be considered an obstacle. This paper addresses this problem by augmenting a probabilistic traversability map, constructed from laser data, with ultra-wideband radar measurements. An adaptive detection threshold and a probabilistic sensor model are developed to convert the radar data to occupancy probabilities. The resulting map retains the fine resolution of the laser map, but clears from the traversability map those areas whose apparent obstacles are induced by obstacle-free foliage. Experimental results validate that this method is able to improve the accuracy of traversability maps in vegetated environments.
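A minimal sketch of the fusion step follows: a logistic inverse sensor model converts a radar return into an occupancy probability relative to an adaptive threshold, and a standard log-odds update fuses it into a laser-map cell. The constants (noise floor, margin, slope) are illustrative, not the paper's.

```python
# Hedged sketch: radar power -> occupancy probability -> log-odds fusion.
import math

def radar_occupancy_prob(power_db, noise_floor_db, margin_db=6.0, slope=0.5):
    """Logistic inverse sensor model around an adaptive detection
    threshold (estimated noise floor plus a margin)."""
    threshold = noise_floor_db + margin_db
    return 1.0 / (1.0 + math.exp(-slope * (power_db - threshold)))

def fuse(cell_logodds, p):
    """Standard log-odds occupancy update for one cell."""
    return cell_logodds + math.log(p / (1.0 - p))

# A laser map cell marked occupied by thin grass (log-odds > 0) is
# progressively cleared by low radar returns through the foliage.
cell = 1.2
for power in [-96.0, -98.0, -97.0]:          # dB, below the threshold
    p = radar_occupancy_prob(power, noise_floor_db=-100.0)
    cell = fuse(cell, p)
print(cell, 1.0 / (1.0 + math.exp(-cell)))   # log-odds and probability
```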
Abstract:
Motion planning for planetary rovers must consider control uncertainty in order to maintain the safety of the platform during navigation. Modelling such control uncertainty is difficult due to the complex interaction between the platform and its environment. In this paper, we propose a motion planning approach whereby the outcome of control actions is learned from experience and represented statistically using a Gaussian process regression model. This model is used to construct a control policy for navigation to a goal region in a terrain map built using an on-board RGB-D camera. The terrain includes flat ground, small rocks, and non-traversable rocks. We report the results of 200 simulated and 35 experimental trials that validate the approach and demonstrate the value of considering control uncertainty in maintaining platform safety.
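To illustrate why learned control uncertainty matters for planning, the toy sketch below treats each action's outcome as a Gaussian (mean displacement, standard deviation), as a GP model would provide, and penalises variance so that uncertain motions near hazards lose to safer detours. All numbers and the cost rule are invented; the paper builds a full control policy instead.

```python
# Hedged sketch: uncertainty-aware action selection with learned outcomes.
# Hypothetical learned outcomes per action on the current terrain cell:
# (mean forward advance in m, outcome standard deviation in m).
actions = {"straight":        (0.50, 0.02),
           "over_small_rock": (0.45, 0.10),
           "around_rock":     (0.35, 0.03)}

def action_cost(mean, std, dist_to_goal, hazard_margin=0.15, k=5.0):
    """Expected distance remaining, plus a penalty when the 2-sigma
    outcome spread could carry the platform into a hazard."""
    expected_remaining = max(0.0, dist_to_goal - mean)
    risk = k * max(0.0, 2.0 * std - hazard_margin)
    return expected_remaining + risk

dist = 1.0
for a, (m, s) in actions.items():
    print(a, round(action_cost(m, s, dist), 3))
print("chosen:", min(actions, key=lambda a: action_cost(*actions[a], dist)))
```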
Abstract:
This paper proposes an approach to obtain a localisation that is robust to smoke by exploiting multiple sensing modalities: visual and infrared (IR) cameras. This localisation is based on a state-of-the-art visual SLAM algorithm. First, we show that a reasonably accurate localisation can be obtained in the presence of smoke by using only an IR camera, a sensor that is hardly affected by smoke, contrary to a visual camera (operating in the visible spectrum). Second, we demonstrate that improved results can be obtained by combining the information from the two sensor modalities (visual and IR cameras). Third, we show that by detecting the impact of smoke on the visual images using a data quality metric, we can anticipate and mitigate the degradation in performance of the localisation by discarding the most affected data. The experimental validation presents multiple trajectories estimated by the various methods considered, all thoroughly compared to an accurate dGPS/INS reference.
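The third contribution hinges on a data quality metric for visual images. As a rough sketch of that gating idea, the code below scores frames by normalised gradient energy (smoke lowers contrast and thus gradients) and discards frames falling below a fraction of a clear-air baseline; the actual metric used in the paper may differ.

```python
# Hedged sketch: gate visual frames on a cheap image quality metric.
import numpy as np

def gradient_energy(gray):
    """Mean gradient magnitude of a grayscale image."""
    gx = np.diff(gray.astype(float), axis=1)
    gy = np.diff(gray.astype(float), axis=0)
    return (np.abs(gx).mean() + np.abs(gy).mean()) / 2.0

def usable(gray, baseline, ratio=0.4):
    """Accept the frame only if its quality is a reasonable fraction
    of the clear-air baseline (ratio is an assumed tuning value)."""
    return gradient_energy(gray) >= ratio * baseline

rng = np.random.default_rng(1)
clear = rng.integers(0, 255, (120, 160)).astype(np.uint8)
smoky = (0.1 * clear + 200).astype(np.uint8)     # low-contrast frame
baseline = gradient_energy(clear)
print(usable(clear, baseline), usable(smoky, baseline))
```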
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is only as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
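The core operation the paper analyses is the sensor-to-navigation-frame transformation and the first-order propagation of its errors. A minimal 2D sketch of that computation follows, using a numerical Jacobian; the pose, point and covariance values are illustrative.

```python
# Hedged sketch: map a sensor point to the navigation frame and
# propagate pose and measurement uncertainty to first order.
import numpy as np

def transform(params, point):
    """Map a sensor-frame point to the navigation frame.
    params = (x, y, yaw) of the sensor in the navigation frame (2D)."""
    x, y, yaw = params
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s], [s, c]]) @ point + np.array([x, y])

def numerical_jacobian(f, x, eps=1e-6):
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

pose = np.array([2.0, 1.0, 0.3])             # sensor pose estimate
pt = np.array([10.0, 0.5])                   # raw range point (sensor frame)
pose_cov = np.diag([0.01, 0.01, 0.0004])     # pose uncertainty
pt_cov = np.diag([0.02, 0.02])               # measurement noise

Jp = numerical_jacobian(lambda p: transform(p, pt), pose)
Jm = numerical_jacobian(lambda m: transform(pose, m), pt)
mapped = transform(pose, pt)
mapped_cov = Jp @ pose_cov @ Jp.T + Jm @ pt_cov @ Jm.T
print(mapped, np.sqrt(np.diag(mapped_cov)))
```

Note how the yaw variance enters through a Jacobian term that grows with range, which is exactly why small angular calibration errors dominate mapping error at a distance.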
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. The paper provides a full description of the system used for data collection and of the types of environments and conditions in which these data sets were gathered, which include the presence of airborne dust, smoke and rain.
Abstract:
This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each laser scan and camera image pair acquired, the information at the corners of the laser scan is compared with the content of the image, resulting in a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
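A schematic sketch of the cross-checking step is given below: laser points are projected into the image with a pinhole model, and a laser corner is scored against image evidence (here, a precomputed edge map). The intrinsics and the scoring rule are invented for illustration.

```python
# Hedged sketch: project a laser corner into the image and score the
# agreement between laser and image at that location.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],      # hypothetical camera intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_cam):
    """Pinhole projection of 3-D points given in the camera frame."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def corner_likelihood(uv, edge_map, radius=3):
    """Fraction of edge pixels near the projected laser corner: a crude
    likelihood that laser and image agree at that location."""
    u, v = int(round(uv[0])), int(round(uv[1]))
    patch = edge_map[max(0, v - radius):v + radius + 1,
                     max(0, u - radius):u + radius + 1]
    return float(patch.mean()) if patch.size else 0.0

edges = np.zeros((480, 640)); edges[230:250, 310:330] = 1.0
corner_3d = np.array([[0.0, 0.0, 5.0]])      # laser corner, camera frame
uv = project(corner_3d)[0]
print(uv, corner_likelihood(uv, edges))      # high score -> consistent
```

In the paper, the projection uncertainty propagated from the calibration would widen the search region around uv; here the fixed radius stands in for that.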
Abstract:
This work aims to promote reliability and integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicle (UGV) autonomy. For this purpose, a comprehensive UGV system, comprising many different exteroceptive and proprioceptive sensors, has been built. The first contribution of this work is a large, accurately calibrated and synchronised multi-modal data set, gathered in controlled environmental conditions, including the presence of dust, smoke and rain. The data have then been used to analyse the effects of such challenging conditions on perception and to identify common perceptual failures. The second contribution is a presentation of methods for mitigating these failures to promote perceptual integrity in adverse environmental conditions.
Abstract:
This paper presents an approach to autonomously monitor the behavior of a robot endowed with several navigation and locomotion modes adapted to the terrain to traverse. The mode selection process is done in two steps: the best suited mode is first selected on the basis of initial information or of a qualitative map built on-line by the robot. Then, the motions of the robot are monitored by various processes that update mode transition probabilities in a Markov system. The paper focuses on this latter selection process: the overall approach is depicted, and preliminary experimental results are presented.
Abstract:
This article presents an approach to improve and monitor the behavior of a skid-steering rover on rough terrain. An adaptive locomotion control generates speed references to avoid slipping situations. An enhanced odometry provides a better estimation of the distance travelled. A probabilistic classification procedure evaluates the locomotion efficiency on-line, with detection of locomotion faults. Results obtained with a Marsokhod rover are presented throughout the paper.
Abstract:
Many applications can benefit from the accurate surface temperature estimates that can be made using a passive thermal-infrared camera. However, the process of radiometric calibration which enables this can be both expensive and time consuming. An ad hoc approach for performing radiometric calibration is proposed which does not require specialized equipment and can be completed in a fraction of the time of the conventional method. The proposed approach utilizes the mechanical properties of the camera to estimate scene temperatures automatically, and uses these target temperatures to model the effect of sensor temperature on the digital output. A comparison with a conventional approach using a blackbody radiation source shows that the accuracy of the method is sufficient for many tasks requiring temperature estimation. Furthermore, a novel visualization method is proposed for displaying the radiometrically calibrated images to human operators. The representation employs an intuitive coloring scheme and allows the viewer to perceive a large variety of temperatures accurately.
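The modelling idea in this abstract can be sketched as a regression from scene and sensor temperatures to the camera's digital output, inverted at run time for temperature estimation. The linear form and the reference samples below are illustrative assumptions, not the paper's calibration model.

```python
# Hedged sketch: fit output = a*T_scene + b*T_sensor + c, then invert.
import numpy as np

# (scene_temp_C, sensor_temp_C, digital_output) reference samples, e.g.
# gathered with an ad hoc procedure instead of a blackbody source.
samples = np.array([[20.0, 25.0, 7800.0],
                    [30.0, 25.0, 8100.0],
                    [40.0, 26.0, 8395.0],
                    [20.0, 35.0, 7950.0],
                    [35.0, 34.0, 8380.0]])

A = np.column_stack([samples[:, 0], samples[:, 1], np.ones(len(samples))])
coef, *_ = np.linalg.lstsq(A, samples[:, 2], rcond=None)
a, b, c = coef

def scene_temperature(output, sensor_temp):
    """Invert the fitted model for the scene temperature."""
    return (output - b * sensor_temp - c) / a

print(scene_temperature(8200.0, 30.0))
```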
Abstract:
Energy auditing is an effective but costly approach for reducing the long-term energy consumption of buildings. When well-executed, energy loss can be quickly identified in the building structure and its subsystems. This then presents opportunities for improving energy efficiency. We present a low-cost, portable technology called "HeatWave" which allows non-experts to generate detailed 3D surface temperature models for energy auditing. This handheld 3D thermography system consists of two commercially available imaging sensors and a set of software algorithms which can be run on a laptop. The 3D model can be visualized in real-time by the operator so that they can monitor their degree of coverage as the sensors are used to capture data. In addition, results can be analyzed offline using the proposed "Spectra" multispectral visualization toolbox. The presence of surface temperature data in the generated 3D model enables the operator to easily identify and measure thermal irregularities such as thermal bridges, insulation leaks, moisture build-up and HVAC faults. Moreover, 3D models generated from subsequent audits of the same environment can be automatically compared to detect temporal changes in conditions and energy use over time.