936 results for Robot control
Abstract:
We consider multi-robot systems that include sensor nodes and aerial or ground robots networked together. Such networks are suitable for tasks such as large-scale environmental monitoring or for command and control in emergency situations. We present a sensor network deployment method using autonomous aerial vehicles, describe in detail the algorithms used for deployment and for measuring network connectivity, and provide experimental data collected from field trials. A particular focus is on determining gaps in the connectivity of the deployed network and generating a repair plan to restore full connectivity. This project is the result of a collaboration between three robotics labs (CSIRO, USC, and Dartmouth). © Springer-Verlag Berlin/Heidelberg 2006.
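As an illustrative, hedged sketch (not the project's actual deployment or repair algorithm), connectivity gaps of the kind described above can be exposed by grouping the deployed nodes into connected components under an assumed communication radius; more than one component signals a gap that a repair plan must close:

```python
# Minimal sketch (not the paper's implementation): detecting connectivity
# gaps in a deployed sensor network by grouping nodes into connected
# components under an assumed communication radius.
from collections import deque
from itertools import combinations
import math

def connected_components(positions, comm_radius):
    """Group node indices into components; two nodes are linked if
    they lie within comm_radius of each other."""
    n = len(positions)
    adj = {i: [] for i in range(n)}
    for i, j in combinations(range(n), 2):
        if math.dist(positions[i], positions[j]) <= comm_radius:
            adj[i].append(j)
            adj[j].append(i)
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        components.append(comp)
    return components

# More than one component means the network has a connectivity gap that a
# repair plan (e.g. dropping extra nodes between components) must close.
nodes = [(0, 0), (40, 0), (80, 0), (300, 0)]
print(connected_components(nodes, comm_radius=60.0))  # [[0, 1, 2], [3]]
```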
Abstract:
This paper, which serves as an introduction to the mini-symposium on Real-Time Vision, Tracking and Control, provides a broad sketch of visual servoing, the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved.
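As a hedged sketch of the basic image-based approach outlined above (the classical point-feature law, not any particular system from the mini-symposium), the camera velocity is obtained from the image-feature error through the pseudo-inverse of the interaction matrix; the depths Z here are assumed estimates:

```python
# Minimal sketch of the textbook image-based visual servo (IBVS) law
# v = -lambda * pinv(L) @ (s - s_star); L is the standard interaction
# matrix for normalised point features with assumed (estimated) depths Z.
import numpy as np

def interaction_matrix(points, Z):
    """Stack the 2x6 interaction matrices for normalised image points."""
    rows = []
    for (x, y), z in zip(points, Z):
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x**2), y])
        rows.append([0, -1 / z, y / z, 1 + y**2, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(points, desired_points, Z, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) that drives the
    current features toward their desired image positions."""
    error = (np.asarray(points) - np.asarray(desired_points)).ravel()
    L = interaction_matrix(points, Z)
    return -gain * np.linalg.pinv(L) @ error

# Example: four point features, assumed depth of 1 m for each.
current = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
desired = [(0.2, 0.2), (-0.2, 0.2), (-0.2, -0.2), (0.2, -0.2)]
print(ibvs_velocity(current, desired, Z=[1.0] * 4))
```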
Abstract:
This paper describes how many of the navigation techniques developed by the robotics research community over the last decade may be applied to a class of underground mining vehicles (LHDs and haul trucks). We review the current state of the art in this area and conclude that essentially two basic navigation methods are applicable. We describe an implementation of a reactive navigation system on a 30-tonne LHD which has achieved full-speed operation at a production mine.
Abstract:
We present a novel, simple and effective approach for tele-operation of aerial robotic vehicles with haptic feedback. Such feedback provides the remote pilot with an intuitive feel of the robot's state and perceived local environment that will ensure simple and safe operation in cluttered 3D environments common in inspection and surveillance tasks. Our approach is based on energetic considerations and uses the concepts of network theory and port-Hamiltonian systems. We provide a general framework for addressing problems such as mapping the limited stroke of a 'master' joystick to the infinite stroke of a 'slave' vehicle, while preserving passivity of the closed-loop system in the face of potential time delays in communications links and limited sensor data.
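A hedged sketch of the stroke-mapping problem mentioned above (this is the common "rate control" idea, not the paper's port-Hamiltonian controller): a bounded master displacement is mapped to a slave velocity reference, so integrating it yields an unbounded slave position, while a simple force term gives the pilot a feel for the vehicle's state. All names and gains here are illustrative assumptions:

```python
# Hedged sketch (not the paper's port-Hamiltonian design): rate control maps
# a limited master stroke onto an unbounded slave workspace, with a simple
# spring-like haptic force rendered on the master.
import numpy as np

def slave_velocity_command(master_pos, max_stroke, max_speed, deadband=0.05):
    """Map joystick displacement (limited stroke) to a slave velocity
    reference; integrating it gives an unbounded slave position."""
    u = np.clip(master_pos / max_stroke, -1.0, 1.0)
    u = np.where(np.abs(u) < deadband, 0.0, u)   # ignore tiny displacements
    return max_speed * u

def haptic_feedback(slave_vel_ref, slave_vel_actual, k=2.0):
    """Force rendered on the joystick, proportional to the velocity
    tracking error, giving the pilot a feel for the vehicle's state."""
    return -k * (slave_vel_ref - slave_vel_actual)

# Example: joystick pushed 60 % forward while the vehicle lags the command.
v_ref = slave_velocity_command(np.array([0.06, 0.0, 0.0]),
                               max_stroke=0.1, max_speed=2.0)
print(v_ref, haptic_feedback(v_ref, np.array([0.8, 0.0, 0.0])))
```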
Abstract:
This paper describes a novel experiment in which two very different methods of underwater robot localization are compared. The first method is based on a geometric approach in which a mobile node moves within a field of static nodes, and all nodes are capable of estimating the range to their neighbours acoustically. The second method uses visual odometry, from stereo cameras, by integrating scaled optical flow. The fundamental algorithmic principles of each localization technique are described. We also present experimental results comparing acoustic localization with GPS for surface operation, and a comparison of acoustic and visual methods for underwater operation.
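As an illustrative, hedged sketch of the geometric, range-based idea (not the paper's specific algorithm), the mobile node's position can be recovered from acoustic ranges to the static nodes by linearising the range equations against one anchor and solving a least-squares problem:

```python
# Minimal sketch (assumed, not the paper's algorithm): estimate the mobile
# node's 2-D position from acoustic ranges to static nodes by linearising
# the range equations against the first anchor and solving least squares.
import numpy as np

def trilaterate(anchors, ranges):
    """anchors: (n,2) positions of static nodes; ranges: n measured ranges."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # (x - xi)^2 + (y - yi)^2 = ri^2; subtracting the first equation from
    # the others leaves a linear system in (x, y).
    A = 2 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three static nodes, true mobile-node position (2, 1).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = np.array([2.0, 1.0])
ranges = [np.linalg.norm(true - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))   # ~[2. 1.]
```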
Abstract:
This paper demonstrates some interesting connections between the hitherto disparate fields of mobile robot navigation and image-based visual servoing. A planar formulation of the well-known image-based visual servoing method leads to a bearing-only navigation system that requires no explicit localization and directly yields desired velocity. The well known benefits of image-based visual servoing such as robustness apply also to the planar case. Simulation results are presented.
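A hedged sketch of the bearing-only idea described above (illustrative steering rule, not the paper's exact planar IBVS formulation): with no explicit localization, the robot drives forward at constant speed and turns at a rate proportional to the average error between the currently observed landmark bearings and those seen from the goal pose:

```python
# Hedged sketch of bearing-only navigation (not the paper's exact control
# law): constant forward speed, turn rate proportional to the mean error
# between current and goal landmark bearings.
import math

def bearing_only_command(bearings, goal_bearings, v_forward=0.5, k_turn=1.0):
    """bearings/goal_bearings: angles (rad) to the same landmarks, measured
    in the robot frame now and at the goal pose. Returns (v, omega)."""
    errors = [math.atan2(math.sin(b - g), math.cos(b - g))  # wrap to [-pi, pi]
              for b, g in zip(bearings, goal_bearings)]
    omega = -k_turn * sum(errors) / len(errors)
    return v_forward, omega

# Example: all landmarks appear 0.2 rad further left than at the goal.
print(bearing_only_command([0.5, -0.3, 1.0], [0.3, -0.5, 0.8]))
```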
Abstract:
This paper considers the question of designing a fully image-based visual servo control for a class of dynamic systems. The work is motivated by the ongoing development of image-based visual servo control of small aerial robotic vehicles. The kinematics and dynamics of a rigid-body dynamical system (such as a vehicle airframe) maneuvering over a flat target plane with observable features are expressed in terms of an unnormalized spherical centroid and an optic flow measurement. The image-plane dynamics with respect to force input are dependent on the height of the camera above the target plane. This dependence is compensated by introducing virtual height dynamics and adaptive estimation in the proposed control. A fully nonlinear adaptive control design is provided that ensures asymptotic stability of the closed-loop system for all feasible initial conditions. The choice of control gains is based on an analysis of the asymptotic dynamics of the system. Results from a realistic simulation are presented that demonstrate the performance of the closed-loop system. To the author's knowledge, this paper documents the first time that an image-based visual servo control has been proposed for a dynamic system using vision measurement for both position and velocity.
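As a hedged sketch of the notation commonly used in this line of work (the paper itself should be consulted for the exact terms), the unnormalised spherical centroid and the way the unknown height enters the image kinematics can be written as:

```latex
% Hedged sketch of standard notation, not the paper's exact derivation.
% Unnormalised spherical centroid of the n observed features P_i:
\[
  q \;=\; \sum_{i=1}^{n} p_i, \qquad p_i \;=\; \frac{P_i}{\lVert P_i \rVert}.
\]
% Standard spherical image kinematics for a camera with angular velocity
% \Omega and translational velocity V:
\[
  \dot{p}_i \;=\; -\,\Omega \times p_i \;-\;
     \frac{1}{\lVert P_i \rVert}\bigl(I_3 - p_i p_i^{\top}\bigr) V .
\]
% For features on a flat target plane, \lVert P_i \rVert scales with the
% camera height above the plane, which is how the unknown height enters the
% image-plane dynamics and why the virtual height dynamics and adaptive
% estimate described in the abstract are needed.
```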
Abstract:
Performing reliable localisation and navigation within highly unstructured underwater coral reef environments is a difficult task at the best of times. Typical research and commercial underwater vehicles use expensive acoustic positioning and sonar systems which require significant external infrastructure to operate effectively. This paper is focused on the development of a robust vision-based motion estimation technique using low-cost sensors for performing real-time autonomous and untethered environmental monitoring tasks in the Great Barrier Reef without the use of acoustic positioning. The technique is experimentally shown to provide accurate odometry and terrain profile information suitable for input into the vehicle controller to perform a range of environmental monitoring tasks.
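One hedged, illustrative ingredient of terrain profiling from low-cost stereo cameras (an assumed pinhole-stereo model, not the paper's pipeline) is recovering range to the reef from disparity with a calibrated baseline:

```python
# Minimal illustrative sketch (assumed camera model, not the paper's
# pipeline): per-pixel depth, and hence a terrain profile under the vehicle,
# from stereo disparity with a calibrated baseline.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard pinhole-stereo relation Z = f * B / d (invalid where d <= 0)."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        depth = focal_px * baseline_m / d
    depth[d <= 0] = np.nan          # no match / invalid disparity
    return depth

# Example: a scanline of disparities from a 700 px focal length, 7 cm baseline rig.
scanline = np.array([35.0, 28.0, 24.5, 0.0, 21.0])
print(depth_from_disparity(scanline, focal_px=700.0, baseline_m=0.07))
# -> [1.4, 1.75, 2.0, nan, 2.33...] metres of range to the terrain below
```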
Abstract:
The development of autonomous air vehicles can be an expensive research pursuit. To alleviate some of the financial burden of this process, we have constructed a system consisting of four winches, each attached to a central pod (the simulated air vehicle) via cables: a cable-array robot. The system is capable of precisely controlling the three-dimensional position of the pod, allowing effective testing of sensing and control strategies before experimentation on a free-flying vehicle. In this paper, we present a brief overview of the system and provide a practical control strategy for such a system.
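As a hedged sketch of the geometry behind such a system (assumed anchor layout, not the project's controller), the inverse kinematics of a four-winch cable-array robot reduce to the cable length each winch must pay out to place the pod at a desired 3-D position:

```python
# Minimal sketch (assumed geometry, not the project's control strategy):
# inverse kinematics of a four-winch cable-array robot, i.e. the cable
# length from each winch anchor to a desired pod position.
import numpy as np

# Hypothetical winch anchor locations at the top of four corner towers (metres).
ANCHORS = np.array([
    [0.0,   0.0, 10.0],
    [20.0,  0.0, 10.0],
    [20.0, 20.0, 10.0],
    [0.0,  20.0, 10.0],
])

def cable_lengths(pod_position):
    """Straight-line cable length from each anchor to the pod."""
    return np.linalg.norm(ANCHORS - np.asarray(pod_position, dtype=float), axis=1)

# Example: command the pod to the centre of the workspace, 4 m above ground.
print(cable_lengths([10.0, 10.0, 4.0]))   # four equal lengths, ~15.36 m
```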
Abstract:
A vast amount of research into autonomous underwater navigation has been, and is being, conducted around the world. However, typical research and commercial platforms have limited autonomy and are generally unable to navigate efficiently within coral reef environments without tethers and significant external infrastructure. This paper outlines the development of a new robotic vehicle for underwater monitoring and surveying in highly unstructured environments and presents experimental results evaluating its performance. The hybrid AUV design developed by the CSIRO robotic reef monitoring team realises a compromise between endurance, manoeuvrability and functionality. The vehicle represents a new era in AUV design specifically focused on providing a truly low-cost research capability that will progress environmental monitoring through unaided navigation, cooperative robotics, sensor network distribution and data harvesting.
Abstract:
Visual servoing has been a viable method of robot manipulator control for more than a decade. Initial developments involved position-based visual servoing (PBVS), in which the control signal exists in Cartesian space. The younger method, image-based visual servoing (IBVS), has seen considerable development in recent years. PBVS and IBVS offer tradeoffs in performance, and neither can solve all tasks that may confront a robot. In response to these issues, several methods have been devised that partition the control scheme, allowing some motions to be performed in the manner of a PBVS system while the remaining motions are performed using an IBVS approach. To date, there has been little research that explores the relative strengths and weaknesses of these methods. In this paper we present such an evaluation. We have chosen three recent visual servo approaches for evaluation in addition to the traditional PBVS and IBVS approaches. We posit a set of performance metrics that quantitatively measure the performance of a visual servo controller for a specific task. We then evaluate each of the candidate visual servo methods for four canonical tasks with simulations and with experiments in a robotic work cell.
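A hedged sketch of the kind of quantitative metrics such an evaluation might use (illustrative definitions only, not the paper's exact metrics): trajectory length in image and Cartesian space, and the worst feature excursion from the image centre as a proxy for the risk of losing features from the field of view:

```python
# Hedged, illustrative performance metrics for comparing visual servo runs
# (not the paper's exact definitions).
import numpy as np

def path_length(samples):
    """Total length of a sampled trajectory, samples shape (T, dim)."""
    samples = np.asarray(samples, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(samples, axis=0), axis=1)))

def max_feature_excursion(feature_tracks, image_centre):
    """Largest distance any feature strays from the image centre over the run."""
    tracks = np.asarray(feature_tracks, dtype=float)     # (T, n_features, 2)
    return float(np.max(np.linalg.norm(tracks - image_centre, axis=-1)))

# Example with a toy camera path and two feature tracks.
camera_path = [(0, 0, 1.0), (0.1, 0, 0.9), (0.2, 0.05, 0.8)]
tracks = [[(100, 120), (400, 130)], [(140, 150), (360, 160)], [(200, 200), (320, 210)]]
print(path_length(camera_path),
      max_feature_excursion(tracks, image_centre=(256, 256)))
```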
Abstract:
This paper introduces the application of a sensor network to navigate a flying robot. We have developed distributed algorithms and efficient geographic routing techniques to incrementally guide one or more robots to points of interest based on sensor gradient fields, or along paths defined in terms of Cartesian coordinates. The robot itself is an integral part of the localization process which establishes the positions of sensors which are not known a priori. We use this system in a large-scale outdoor experiment with Mote sensors to guide an autonomous helicopter along a path encoded in the network. A simple handheld device, using this same environmental infrastructure, is used to guide humans.
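As an illustrative, hedged sketch of gradient-field guidance (not the deployed system's distributed algorithms or routing), the robot can simply fly toward the in-range sensor node with the smallest hop count to the node of interest:

```python
# Minimal sketch (assumed, not the deployed system): guide a robot along a
# sensor-network gradient field by heading to the in-range node with the
# smallest hop count to the point of interest.
import math

def next_waypoint(robot_pos, nodes, comm_range=30.0):
    """nodes: dict node_id -> (position, hops_to_goal). Returns the position
    of the best in-range node, or None if no node is reachable."""
    in_range = [(hops, pos) for pos, hops in nodes.values()
                if math.dist(robot_pos, pos) <= comm_range]
    if not in_range:
        return None
    return min(in_range)[1]          # position of the lowest-hop-count node

# Example gradient field: hop counts decrease toward the point of interest.
field = {
    "n1": ((10.0, 0.0), 3),
    "n2": ((25.0, 5.0), 2),
    "n3": ((60.0, 5.0), 1),   # out of range from the robot's current position
}
print(next_waypoint((0.0, 0.0), field))   # -> (25.0, 5.0)
```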
Abstract:
In this paper we describe a low-cost flight control system for a small (60-class) helicopter which is part of a larger project to develop an autonomous flying vehicle. Our approach differs from that of others in not using an expensive inertial/GPS sensing system. The primary sensors for vehicle stabilization are a low-cost inertial sensor and a pair of CMOS cameras. We describe the architecture of our flight control system and the inertial and visual sensing subsystems, and present some flight control results.
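A hedged sketch of one common ingredient in low-cost attitude stabilisation (illustrative only, not this project's filter): a complementary filter that fuses the drift-prone gyro rate with a noisy but drift-free accelerometer angle:

```python
# Hedged sketch of a complementary filter for roll or pitch estimation from
# a low-cost IMU (illustrative, not this project's sensing subsystem).
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend integrated gyro (good short-term) with the accelerometer angle
    (good long-term, no drift). Angles in radians, rates in rad/s."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: 100 Hz updates with a small constant gyro bias being corrected.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.002, accel_angle=0.0, dt=0.01)
print(angle)   # stays bounded near zero instead of drifting like pure integration
```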