220 results for Autonomous navigation
at Queensland University of Technology - ePrints Archive
Abstract:
This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision-based topometric map of its environment. The map consists of a globally-consistent pose graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction-independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest-path planning instead of following the nodes of the graph, as is done in most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose-graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
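A minimal illustrative sketch of the topometric structure described above, with purely hypothetical names and interfaces (this is not the released open-source code): each pose-graph node carries a local 3D point cloud, and a small helper rasterises that cloud into a 2D grid of the kind used for local path planning.

```python
# Hypothetical sketch of a topometric map node and a 2D metric-map helper.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TopometricNode:
    node_id: int
    pose: np.ndarray                            # 4x4 homogeneous pose in the global graph
    cloud: np.ndarray                           # Nx3 local point cloud (metres, node frame)
    edges: list = field(default_factory=list)   # ids of connected graph nodes

def local_occupancy_grid(node: TopometricNode, cell=0.1, size=10.0,
                         min_z=0.2, max_z=1.5) -> np.ndarray:
    """Project points in an obstacle height band onto a 2D grid centred on the node."""
    half = size / 2.0
    n_cells = int(size / cell)
    grid = np.zeros((n_cells, n_cells), dtype=bool)
    pts = node.cloud
    mask = (pts[:, 2] > min_z) & (pts[:, 2] < max_z) & \
           (np.abs(pts[:, 0]) < half) & (np.abs(pts[:, 1]) < half)
    ij = ((pts[mask, :2] + half) / cell).astype(int)
    grid[ij[:, 1], ij[:, 0]] = True             # mark occupied cells
    return grid
```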
Abstract:
In this paper, we present a monocular vision-based autonomous navigation system for Micro Aerial Vehicles (MAVs) in GPS-denied environments. The major drawback of monocular systems is that the depth scale of the scene cannot be determined without prior knowledge or additional sensors. To address this problem, we minimize a cost function consisting of a drift-free altitude measurement and an up-to-scale position estimate obtained using the visual sensor. We evaluate the scale estimator, state estimator and controller performance by comparing against ground truth data acquired with a motion capture system. All resources, including source code, tutorial documentation and system models, are available online.
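The scale-recovery idea lends itself to a short worked example. The sketch below, with synthetic data and illustrative names only, solves a simple least-squares alignment between an up-to-scale visual height estimate and a drift-free altitude measurement; the paper's full cost function and state estimator are not reproduced here.

```python
# Hedged sketch: recover the metric scale of a monocular, up-to-scale trajectory
# from a drift-free altitude sensor via least squares:
#   scale = argmin_s  sum_t ( s * z_visual[t] - z_altitude[t] )^2
import numpy as np

def estimate_scale(z_visual: np.ndarray, z_altitude: np.ndarray) -> float:
    """Closed-form least-squares scale between up-to-scale visual height and metric altitude."""
    denom = float(np.dot(z_visual, z_visual))
    if denom < 1e-9:
        raise ValueError("visual height signal has no excitation")
    return float(np.dot(z_visual, z_altitude)) / denom

# Example with synthetic data: true scale 2.5, noisy altimeter readings.
rng = np.random.default_rng(0)
z_vis = rng.uniform(0.5, 2.0, size=200)
z_alt = 2.5 * z_vis + rng.normal(0.0, 0.05, size=200)
print(estimate_scale(z_vis, z_alt))   # approximately 2.5
```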
Abstract:
This paper describes current research at the Australian Centre for Field Robotics (ACFR), in collaboration with the Commonwealth Scientific and Industrial Research Organisation (CSIRO) within the Cooperative Research Centre (CRC) for Mining Technology and Equipment (CMTE), towards achieving autonomous navigation of underground vehicles such as a Load-Haul-Dump (LHD) truck. This work is sponsored by the mining industry through the Australian Mineral Industries Research Association Limited (AMIRA). Robust and reliable autonomous navigation can only be realised by achieving high-level tasks such as path planning and obstacle avoidance, which requires determining the pose (position and orientation) of the vehicle at all times. A minimal-infrastructure localisation algorithm developed for this purpose is outlined and the corresponding results are presented. Further research issues under investigation are also outlined briefly.
Abstract:
This paper describes an autonomous navigation system for a large underground mining vehicle. The control architecture is based on a robust reactive wall-following behaviour. To make it purposeful, we provide driving hints derived from an approximate nodal map. For most of the time, the vehicle is driven with weak localization (odometry). This need only be improved at intersections where decisions must be made, a technique we refer to as opportunistic localization. The paper briefly reviews absolute and relative navigation strategies, and describes an implementation of a reactive navigation system on a 30 tonne Load-Haul-Dump truck. This truck has achieved full-speed autonomous operation at an artificial test mine and, subsequently, at an operational underground mine.
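As a rough illustration of a reactive wall-following behaviour of this kind, the sketch below shows a simple proportional steering law on wall distance and relative heading. Gains, interfaces and sign conventions are placeholders, not the LHD controller described in the paper.

```python
# Illustrative wall-following steering law (assumes the wall is followed on one
# side and that a positive command steers towards that side; conventions are
# placeholders, not taken from the vehicle controller).
def wall_following_steer(range_to_wall: float, wall_angle: float,
                         desired_range: float = 2.0,
                         k_range: float = 0.5, k_angle: float = 1.0) -> float:
    """Return a steering command that holds a constant lateral offset from the wall.

    range_to_wall : perpendicular distance to the followed wall (m)
    wall_angle    : angle between vehicle heading and the wall direction (rad)
    """
    range_error = desired_range - range_to_wall
    # Steer towards the wall when too far, away when too close,
    # while aligning the heading with the wall direction.
    return k_range * range_error - k_angle * wall_angle
```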
Abstract:
This paper reports work on the automation of a hot metal carrier, which is a 20 tonne forklift-type vehicle used to move molten metal in aluminium smelters. To achieve efficient vehicle operation, issues of autonomous navigation and materials handling must be addressed. We present our complete system and experiments demonstrating reliable operation. One of the most significant experiments was five hours of continuous operation in which the vehicle travelled over 8 km and conducted 60 load handling operations. Finally, we describe an experiment in which the vehicle and its autonomous operation were supervised from the other side of the world via a satellite phone network.
Abstract:
This paper presents an unmanned aircraft system (UAS) that uses a probabilistic model for autonomous front-on environmental sensing or photography of a target. The system is built from low-cost, readily available sensors and is intended to improve the capabilities of dynamic waypoint-based navigation systems for low-cost UAS operating in dynamic environments. The behavioural dynamics of target movement inform the design of a Kalman filter and a Markov model-based prediction algorithm. Geometrical concepts and the Haversine formula are applied to the maximum likelihood case in order to predict a future state of the target, thus delivering a new waypoint for autonomous navigation. Results from aerial filming with a low-cost UAS are presented, achieving the desired goal of a maintained front-on perspective without significantly constraining the route or pace of target movement.
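The geometric step named above, projecting a predicted target state to a new waypoint on the sphere, can be illustrated compactly. The functions below are an assumption-laden sketch using standard great-circle formulas with illustrative names; the paper's Kalman filter and Markov model are not reproduced.

```python
# Great-circle helpers: Haversine distance and forward projection of a waypoint
# from a predicted speed and bearing. Names and parameters are illustrative.
import math

R_EARTH = 6371000.0  # mean Earth radius in metres

def haversine(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R_EARTH * math.asin(math.sqrt(a))

def project_waypoint(lat, lon, bearing_deg, speed_mps, lookahead_s):
    """Destination point after travelling speed * lookahead along a constant bearing."""
    d = speed_mps * lookahead_s / R_EARTH        # angular distance
    b = math.radians(bearing_deg)
    p1, l1 = math.radians(lat), math.radians(lon)
    p2 = math.asin(math.sin(p1) * math.cos(d) + math.cos(p1) * math.sin(d) * math.cos(b))
    l2 = l1 + math.atan2(math.sin(b) * math.sin(d) * math.cos(p1),
                         math.cos(d) - math.sin(p1) * math.sin(p2))
    return math.degrees(p2), math.degrees(l2)
```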
Abstract:
There is increasing interest around the world in the use of Unmanned Aerial Vehicles (UAVs) for wildlife and feral animal monitoring. This paper describes a novel system in which a predictive dynamic application places the UAV ahead of a user, using a low-cost thermal camera and a small onboard computer that identifies heat signatures of a target animal from a predetermined altitude and transmits that target’s GPS coordinates. A map is generated, and various data sets and graphs are displayed using a GUI designed for ease of use. The paper describes the hardware and software architecture and the probabilistic model for the downward-facing camera used to detect an animal. The behavioural dynamics of target movement inform the design of a Kalman filter and Markov model-based prediction algorithm used to place the UAV ahead of the user. Geometrical concepts and the Haversine formula are applied to the maximum likelihood case in order to predict a future state of the user, thus delivering a new waypoint for autonomous navigation. Results show that the system is capable of autonomously locating animals from a predetermined height and of generating a map showing the location of the animals ahead of the user.
Automation of an underground mining vehicle using reactive navigation and opportunistic localization
Abstract:
This paper describes the implementation of an autonomous navigation system on a 30 tonne Load-Haul-Dump truck. The control architecture is based on a robust reactive wall-following behaviour. To make it purposeful, we provide driving hints derived from an approximate nodal map. For most of the time, the vehicle is driven with weak localization (odometry). This need only be improved at intersections where decisions must be made, a technique we refer to as opportunistic localization. The truck has achieved full-speed autonomous operation at an artificial test mine and, subsequently, at an operational underground mine.
Abstract:
This thesis demonstrates that robots can learn about how the world changes, and can use this information to recognise where they are, even when the appearance of the environment has changed a great deal. The ability to localise in highly dynamic environments using vision only is a key tool for achieving long-term, autonomous navigation in unstructured outdoor environments. The proposed learning algorithms are designed to be unsupervised, and can be generated by the robot online in response to its observations of the world, without requiring information from a human operator or other external source.
Abstract:
This paper reports work involved with the automation of a Hot Metal Carrier, a 20 tonne forklift-type vehicle used to move molten metal in aluminium smelters. To achieve efficient vehicle operation, issues of autonomous navigation and materials handling must be addressed. We present our complete system and experiments demonstrating reliable operation. One of the most significant experiments was five hours of continuous operation in which the vehicle travelled over 8 km and conducted 60 load handling operations. We also describe an experiment in which the vehicle and autonomous operation were supervised from the other side of the world via a satellite phone network.
Abstract:
This paper presents a robust place recognition algorithm for mobile robots. The proposed framework combines nonlinear dimensionality reduction, nonlinear regression under noise, and variational Bayesian learning to create consistent probabilistic representations of places from images. These generative models are learnt from a few images and used for multi-class place recognition, where classification is computed from a set of feature vectors. Recognition can be performed in near real-time and accounts for complexity such as changes in illumination, occlusions and blurring. The algorithm was tested with a mobile robot in indoor and outdoor environments with sequences of 1579 and 3820 images, respectively. This framework has several potential applications such as map building, autonomous navigation, search-and-rescue tasks and context recognition.
Abstract:
Autonomous navigation and locomotion of a mobile robot in natural environments remain largely open problems. Several functionalities are required to complete the usual perception/decision/action cycle. They can be divided into two main categories: navigation (perception and decision about the movement) and locomotion (execution of the movement). To cope with the wide range of situations encountered in natural environments, it is essential to combine various complementary functionalities, defining several navigation and locomotion modes. Indeed, numerous navigation and locomotion approaches have been proposed in the literature in recent years, but none can claim to achieve autonomous navigation and locomotion in every situation. It therefore seems relevant to endow an outdoor mobile robot with several complementary navigation and locomotion modes, and the robot must also have a means of selecting the most appropriate mode to apply. This thesis proposes the development of such a navigation/locomotion mode selection system, based on two types of data: an observation of the context, to determine in what kind of situation the robot has to achieve its movement, and an evaluation of the behaviour of the current mode, made by monitors that influence transitions towards other modes when the behaviour of the current mode is judged unsatisfactory. Accordingly, this document introduces a probabilistic framework for estimating the mode to be applied, the navigation and locomotion modes used, a qualitative terrain representation method (based on the evaluation of a difficulty computed from the placement of the robot's structure on a digital elevation map), and monitors that check the behaviour of the modes in use (evaluation of rolling locomotion efficiency, monitoring of the robot's attitude and configuration, etc.). Experimental results obtained with these elements integrated on board two different outdoor robots are presented and discussed.
Abstract:
A critical requirement for safe autonomous navigation of a planetary rover is the ability to accurately estimate the traversability of the terrain. This work considers the problem of predicting the attitude and configuration angles of the platform from terrain representations that are often incomplete due to occlusions and sensor limitations. Using Gaussian Processes (GP) and exteroceptive data as training input, we can provide a continuous and complete representation of terrain traversability, with uncertainty in the output estimates. In this paper, we propose a novel method that focuses on exploiting the explicit correlation in vehicle attitude and configuration during operation by learning a kernel function from vehicle experience to perform GP regression. We provide an extensive experimental validation of the proposed method on a planetary rover. We show significant improvement in the accuracy of our estimation compared with results obtained using standard kernels (Squared Exponential and Neural Network), and compared to traversability estimation made over terrain models built using state-of-the-art GP techniques.
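As a hedged illustration of GP-based traversability regression with uncertainty, the snippet below fits a standard Squared Exponential (RBF) baseline, one of the kernels the paper compares against, on synthetic terrain features. The experience-learned kernel that is central to the paper is not reproduced; data, features and shapes are placeholders.

```python
# Sketch of GP regression for attitude prediction from terrain features with an
# RBF (Squared Exponential) kernel; all data here are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(100, 3))                    # e.g. slope/roughness features
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=100)   # e.g. measured pitch (rad)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_query = rng.uniform(-1, 1, size=(5, 3))          # unobserved terrain cells
mean, std = gp.predict(X_query, return_std=True)   # prediction with uncertainty
print(mean, std)
```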
Abstract:
This paper presents a robust place recognition algorithm for mobile robots that can be used for planning and navigation tasks. The proposed framework combines nonlinear dimensionality reduction, nonlinear regression under noise, and Bayesian learning to create consistent probabilistic representations of places from images. These generative models are incrementally learnt from very small training sets and used for multi-class place recognition. Recognition can be performed in near real-time and accounts for complexity such as changes in illumination, occlusions, blurring and moving objects. The algorithm was tested with a mobile robot in indoor and outdoor environments with sequences of 1579 and 3820 images, respectively. This framework has several potential applications such as map building, autonomous navigation, search-and-rescue tasks and context recognition.
Abstract:
Autonomous navigation and picture compilation tasks require robust feature descriptions or models. Given the non-Gaussian nature of sensor observations, it will be shown that Gaussian mixture models provide a general probabilistic representation allowing analytical solutions to the update and prediction operations in the general Bayesian filtering problem. Each operation in the Bayesian filter for Gaussian mixture models multiplicatively increases the number of parameters in the representation, leading to the need for a re-parameterisation step. A computationally efficient re-parameterisation step is demonstrated, resulting in a compact and accurate estimate of the true distribution.
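A common way to realise such a re-parameterisation step is moment-matched mixture reduction. The sketch below is a simplistic placeholder rather than the paper's method: it repeatedly merges the closest pair of components, preserving the merged pair's weight, mean and covariance.

```python
# Illustrative Gaussian mixture reduction by greedy moment-matched merging.
import numpy as np

def merge_pair(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = m1 - m, m2 - m
    P = (w1 * (P1 + np.outer(d1, d1)) + w2 * (P2 + np.outer(d2, d2))) / w
    return w, m, P

def reduce_mixture(weights, means, covs, max_components):
    """Greedily merge the pair of components whose means are closest (simple heuristic)."""
    comps = list(zip(weights, means, covs))
    while len(comps) > max_components:
        i, j = min(((a, b) for a in range(len(comps)) for b in range(a + 1, len(comps))),
                   key=lambda ab: np.linalg.norm(comps[ab[0]][1] - comps[ab[1]][1]))
        merged = merge_pair(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps
```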