220 results for Autonomous navigation
Abstract:
We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model’s flexibility in representing large real-world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above the ground level, similar to what would be expected from a rat’s point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead reckoning information which loses spatial fidelity over time due to error accumulation. We rectify the loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to the HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal-directed path planning results of HiLAM in two different environments, an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
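The error accumulation in raw dead reckoning described above can be illustrated with a minimal 2D integration sketch. This is illustrative only, not the HiLAM or RatSLAM code; the step count, speed, and noise level are assumed values:

```python
import math
import random

def dead_reckon(velocities, headings, dt=0.1, noise=0.0):
    """Integrate speed/heading into a 2D path; `noise` models per-step sensor error."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    for v, h in zip(velocities, headings):
        v_n = v + random.gauss(0, noise)
        h_n = h + random.gauss(0, noise)
        x += v_n * math.cos(h_n) * dt
        y += v_n * math.sin(h_n) * dt
        path.append((x, y))
    return path

random.seed(0)
steps = 1000
vel = [1.0] * steps            # constant 1 m/s
hdg = [0.0] * steps            # driving straight along the x axis
clean = dead_reckon(vel, hdg)              # ideal odometry
noisy = dead_reckon(vel, hdg, noise=0.05)  # odometry with per-step error
drift = math.dist(clean[-1], noisy[-1])    # endpoint error grows with path length
```

Because each step's error is integrated, the drift is unbounded over time, which is why an external correction such as RatSLAM's loop-closure detection is needed to restore spatial fidelity.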
Abstract:
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
Abstract:
This paper provides a three-layered framework to monitor the positioning performance requirements of Real-time Relative Positioning (RRP) systems of the Cooperative Intelligent Transport Systems (C-ITS) that support Cooperative Collision Warning (CCW) applications. These applications exploit state data of surrounding vehicles obtained solely from the Global Positioning System (GPS) and Dedicated Short-Range Communications (DSRC) units without using other sensors. To this end, the paper argues the need for the GPS/DSRC-based RRP systems to have an autonomous monitoring mechanism, since the operation of CCW applications is meant to augment safety on roads. The advantages of autonomous integrity monitoring are essential and integral to any safety-of-life system. The proposed autonomous integrity monitoring framework requires the RRP systems to detect or predict the unavailability of their sub-systems and of the integrity monitoring module itself, and, if available, to account for effects of data link delays and breakages of DSRC links, as well as of faulty measurement sources of GPS and/or integrated augmentation positioning systems, before the information used for safety warnings/alarms becomes unavailable, unreliable, inaccurate or misleading. Hence, a monitoring framework using a tight integration and correlation approach is proposed for instantaneous reliability assessment of the RRP systems. Ultimately, using the proposed framework, the RRP systems will provide timely alerts to users when the RRP solutions cannot be trusted or used for the intended operation.
Abstract:
This paper addresses the topic of real-time decision making by autonomous city vehicles. Beginning with an overview of the state of research, the paper presents the vehicle decision making & control system architecture, explains the subcomponents which are relevant for decision making (World Model and Driving Maneuver subsystem), and presents the decision making process. Experimental test results confirm the suitability of the developed approach to deal with complex real-world urban traffic.
Abstract:
This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e. the autonomous vehicles’ ability to make appropriate driving decisions in city road traffic situations. After decomposing the problem into two consecutive decision making stages, and giving a short overview of previous work, the paper explains how Multiple Criteria Decision Making (MCDM) can be used in the process of selecting the most appropriate driving maneuver.
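A weighted-sum scheme is one common MCDM formulation for ranking alternatives. The sketch below uses hypothetical maneuvers, criteria, and weights (not taken from the paper) to show how such a selection could work:

```python
# Hypothetical criteria scores in [0, 1] (higher is better) for each candidate maneuver.
maneuvers = {
    "keep_lane":   {"safety": 0.9, "progress": 0.6, "comfort": 0.9},
    "change_left": {"safety": 0.6, "progress": 0.9, "comfort": 0.6},
    "stop":        {"safety": 1.0, "progress": 0.0, "comfort": 0.5},
}
# Assumed weights expressing the relative importance of each criterion.
weights = {"safety": 0.6, "progress": 0.3, "comfort": 0.1}

def weighted_score(scores, weights):
    """Aggregate per-criterion scores into a single utility value."""
    return sum(weights[c] * scores[c] for c in weights)

# Select the maneuver with the highest aggregate score.
best = max(maneuvers, key=lambda m: weighted_score(maneuvers[m], weights))
```

In a real system the scores would be computed from the World Model at each decision cycle; the weighted sum is only one of several MCDM aggregation strategies.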
Abstract:
This thesis presents an approach for vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task. There are challenges involved in developing such an inspection system, such as flying in close proximity to a target while maintaining a fixed stand-off distance from it. The contributions of this thesis fall into three main areas. Firstly, an approach to vehicle dynamic modeling is evaluated in simulation and experiments. Secondly, EKF-based state estimators are demonstrated, as well as estimator-free approaches such as image-based visual servoing (IBVS) validated with motion-capture ground-truth data. Thirdly, an integrated pole inspection system comprising a VTOL platform with human-in-the-loop control (shared autonomy) is demonstrated. These contributions are comprehensively explained through a series of published papers.
Abstract:
Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals goal-directed navigation facilitates finding food, seeking shelter or migration; similarly robots perform goal-directed navigation to find a charging station, get out of the rain or guide a person to a destination. This similarity in tasks extends to the environment as well; increasingly, mobile robots are operating in the same underwater, ground and aerial environments that animals do. Yet despite these similarities, goal-directed navigation research in robotics and biology has proceeded largely in parallel, linked only by a small amount of interdisciplinary research spanning both areas. Most state-of-the-art robotic navigation systems employ a range of sensors, world representations and navigation algorithms that seem far removed from what we know of how animals navigate; their navigation systems are shaped by key principles of navigation in ‘real-world’ environments including dealing with uncertainty in sensing, landmark observation and world modelling. By contrast, biomimetic animal navigation models produce plausible animal navigation behaviour in a range of laboratory experimental navigation paradigms, typically without addressing many of these robotic navigation principles. In this paper, we attempt to link robotics and biology by reviewing the current state of the art in conventional and biomimetic goal-directed navigation models, focusing on the key principles of goal-oriented robotic navigation and the extent to which these principles have been adapted by biomimetic navigation models and why.
Abstract:
This paper details the initial design and planning of a Field Programmable Gate Array (FPGA) implemented control system that will enable a path planner to interact with a MAVLink based flight computer. The design is aimed at small Unmanned Aircraft Vehicles (UAV) under autonomous operation which are typically subject to constraints arising from limited on-board processing capabilities, power and size. An FPGA implementation for the design is chosen for its potential to address such limitations through low power and high speed in-hardware computation. The MAVLink protocol offers a low bandwidth interface for the FPGA implemented path planner to communicate with an on-board flight computer. A control system plan is presented that is capable of accepting a string of GPS waypoints generated on-board from a previously developed in-hardware Genetic Algorithm (GA) path planner and feeding them to the open source PX4 autopilot, while simultaneously responding with flight status information.
Abstract:
Vision-based underwater navigation and obstacle avoidance demands robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for the simultaneous underwater image quality assessment, visibility enhancement and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allows implementation onboard an Autonomous Underwater Vehicle for improving navigation and obstacle avoidance performance.
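The physical light attenuation model referred to above is commonly a Beer–Lambert-style formation model combining an attenuated direct signal with veiling light. The per-channel sketch below uses assumed coefficients and does not reproduce the paper's actual model or parameters:

```python
import math

def attenuate(j, d, c, b):
    """Simplified underwater image formation for one color channel:
    observed = attenuated direct signal + veiling light (backscatter)."""
    t = math.exp(-c * d)            # transmission at distance d
    return j * t + b * (1.0 - t)

def restore(i, d, c, b):
    """Invert the formation model to recover scene radiance from an observation."""
    t = math.exp(-c * d)
    return (i - b * (1.0 - t)) / t

# Red light attenuates fastest in ocean water; these coefficients are assumptions.
j_true = 0.8                        # true red-channel intensity
c_red, b_red, dist = 0.6, 0.2, 3.0  # attenuation coeff., veiling light, range (m)
i_obs = attenuate(j_true, dist, c_red, b_red)   # degraded observation
j_rec = restore(i_obs, dist, c_red, b_red)      # recovered intensity, ≈ j_true
```

In practice the distance `d` per pixel comes from the stereo disparity map, which is why the paper couples visibility estimation, enhancement, and disparity computation.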
Abstract:
This paper overviews the development of a vision-based AUV along with a set of complementary operational strategies to allow reliable autonomous data collection in relatively shallow water and coral reef environments. The development of the AUV, called Starbug, encountered many challenges in terms of vehicle design, navigation and control. Some of these challenges are discussed with focus on operational strategies for estimating and reducing the total navigation error when using lower-resolution sensing modalities. Results are presented from recent field trials which illustrate the ability of the vehicle and associated operational strategies to enable rapid collection of visual data sets suitable for marine research applications.
Abstract:
There is a need for systems which can autonomously perform coverage tasks over large outdoor areas. Unfortunately, the state of the art is to use GPS-based localization, which is not suitable for precise operations near trees and other obstructions. In this paper we present a robotic platform for autonomous coverage tasks. The system architecture integrates laser-based localization and mapping using the Atlas Framework with Rapidly-Exploring Random Trees path planning and Virtual Force Field obstacle avoidance. We demonstrate the performance of the system in simulation as well as in real-world experiments.
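The Virtual Force Field idea, an attractive pull toward the goal combined with a repulsive push from nearby obstacles, can be sketched in a few lines. The gains and influence radius below are assumed values, not those used by the authors:

```python
import math

def vff_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=2.0):
    """One Virtual Force Field update: attract toward the goal,
    repel from obstacles closer than the influence radius."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            # Repulsion grows sharply as the robot approaches the obstacle.
            mag = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += mag * dx
            fy += mag * dy
    return fx, fy

pos, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.5)]
fx, fy = vff_step(pos, goal, obstacles)   # far from the obstacle: pure attraction
```

The robot would move a small step along the resulting force vector each cycle; in the paper this reactive layer runs underneath the RRT global planner, which supplies the sequence of goals.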
Abstract:
This paper describes the development and experimental evaluation of a novel vision-based Autonomous Surface Vehicle with the purpose of performing coordinated docking manoeuvres with a target, such as an Autonomous Underwater Vehicle, on the water’s surface. The system architecture integrates two small processor units; the first performs vehicle control and implements a virtual force obstacle avoidance and docking strategy, with the second performing vision-based target segmentation and tracking. Furthermore, the architecture utilises wireless sensor network technology allowing the vehicle to be observed by, and even integrated within an ad-hoc sensor network. The system performance is demonstrated through real-world experiments.