480 results for bilateral haptic robot control
Abstract:
This thesis presents an approach to vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task. Developing such an inspection system involves challenges such as flying in close proximity to a target while maintaining a fixed stand-off distance from it. The contributions of this thesis fall into three main areas. Firstly, an approach to vehicle dynamic modeling is evaluated in simulation and experiments. Secondly, EKF-based state estimators are demonstrated, as well as estimator-free approaches such as image-based visual servoing (IBVS), validated with motion capture ground truth data. Thirdly, an integrated pole inspection system comprising a VTOL platform with human-in-the-loop control (shared autonomy) is demonstrated. These contributions are comprehensively explained through a series of published papers.
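The image-based visual servoing (IBVS) mentioned in this abstract can be illustrated with a minimal sketch of the classical control law v = -λ L⁺ e, which maps image-plane feature errors to a camera velocity command. This is a generic textbook formulation, not the thesis's implementation; the function name, gain, and point-feature parameterization are assumptions for illustration.

```python
import numpy as np

def ibvs_camera_velocity(points, goals, depths, lam=0.5):
    """Classical IBVS law: v = -lam * pinv(L) @ e, for point features.
    points/goals are normalized image coordinates (x, y); depths are the
    point depths Z used to build the interaction matrix L."""
    e = (np.asarray(points) - np.asarray(goals)).reshape(-1)  # stacked feature error
    rows = []
    for (x, y), Z in zip(points, depths):
        # Interaction matrix rows for a normalized image point at depth Z
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    L = np.array(rows)
    # Camera velocity twist [vx, vy, vz, wx, wy, wz] driving e toward zero
    return -lam * np.linalg.pinv(L) @ e
```

With three or more non-collinear points the pseudo-inverse yields a well-conditioned six-degree-of-freedom velocity; when the features already sit at their goals, the commanded velocity is zero.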
Abstract:
Locomotion and autonomy in humanoid robots are of utmost importance in integrating them into social and community-service roles. However, the limited range and speed of these robots severely restrict their deployment in situations where a fast response is necessary. While the ability to drive a vehicle would aid in increasing a humanoid's overall mobility, mounting and dismounting a vehicle designed for human occupants is a non-trivial problem. To address this issue, this paper presents an approach to enabling a humanoid robot to mount and dismount a vehicle, proposing a simple mounting bracket with no moving parts. In conjunction with a purpose-built robotic vehicle, the mounting bracket successfully allowed a humanoid Nao robot to mount, dismount and drive the vehicle.
Abstract:
We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks for robot vision, collision avoidance and machine learning developed in our lab, when combined, allow for safe interaction with the environment, even with noisy control signals such as the operator's hand acceleration and electromyography (EMG) signals. These bio-signals are used to execute equivalent actions, such as reaching for and grasping objects, with the robot's 7-DOF arm.
Abstract:
The design and fabrication of a prototype four-rotor vertical take-off and landing (VTOL) aerial robot for use as an indoor experimental robotics platform is presented. The flyer is termed an X4-flyer. A dynamic model of the system is developed and a pilot augmentation control design is proposed.
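The dynamic model of such a quadrotor typically starts from rigid-body translational dynamics, with a single thrust vector along the body z-axis rotated into the world frame. The sketch below shows only that translational part; the mass value, frame convention (z up) and function name are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def x4_translational_dynamics(pos, vel, thrust, R, m=0.5, g=9.81):
    """Translational dynamics of a quadrotor-style VTOL (X4-flyer sketch):
    m * acc = R @ (thrust * e3) - m * g * e3, with z pointing up.
    R is the body-to-world rotation matrix; thrust is total rotor thrust."""
    e3 = np.array([0.0, 0.0, 1.0])
    acc = (R @ (thrust * e3)) / m - g * e3   # thrust rotated to world, minus gravity
    return vel, acc                          # (d pos / dt, d vel / dt)
```

At hover, thrust equals the vehicle's weight m·g with a level attitude (R = I), giving zero acceleration; tilting R converts part of the thrust into lateral acceleration, which is what the pilot augmentation loop regulates.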
Abstract:
Electric walking draglines are physically large and powerful machines used in the mining industry. However, with the addition of suitable sensors and a controller, a dragline can be considered a numerically controlled machine, or robot, which can then perform parts of the operating cycle automatically. This paper presents an analysis of the electromechanical system, a necessary precursor to automatic control.
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology, since the work is both arduous and dangerous. Visual servoing is a means of integrating non-contact visual sensing with machine control to augment or replace operator-based control. This article describes two of our current mining automation projects in order to demonstrate some, perhaps unusual, applications of visual servoing, and also to illustrate some very real problems with robust computer vision.
Abstract:
Power line inspection is a vital function for electricity supply companies but it involves labor-intensive and expensive procedures which are tedious and error-prone for humans to perform. A possible solution is to use an unmanned aerial vehicle (UAV) equipped with video surveillance equipment to perform the inspection. This paper considers how a small, electrically driven rotorcraft conceived for this application could be controlled by visually tracking the overhead supply lines. A dynamic model for a ducted-fan rotorcraft is presented and used to control the action of an Air Vehicle Simulator (AVS), consisting of a cable-array robot. Results show how visual data can be used to determine, and hence regulate in closed loop, the simulated vehicle’s position relative to the overhead lines.
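Regulating the vehicle's position relative to the overhead lines, as this abstract describes, can be sketched as simple proportional feedback on the detected line's image parameters. The (rho, theta) line parameterization and the gains below are assumptions for illustration, not the paper's controller.

```python
def line_offset_control(rho, theta, k_y=1.0, k_psi=0.5):
    """Proportional regulation of a rotorcraft relative to an overhead line
    detected in the image in (rho, theta) form: rho is the lateral offset of
    the line from the image center, theta its angle from vertical.
    Returns a lateral velocity and a yaw-rate command (negative feedback)."""
    v_y = -k_y * rho      # drive the lateral image offset to zero
    w_z = -k_psi * theta  # align the vehicle heading with the line
    return v_y, w_z
```

With the line centered and aligned, both commands vanish; any offset produces a correcting velocity in the opposite direction, closing the visual loop described in the abstract.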
Abstract:
A large range of underground mining equipment makes use of compliant hydraulic arms for tasks such as rock-bolting, rock breaking, explosive charging and shotcreting. This paper describes a laboratory-model electro-hydraulic manipulator which is used to prototype novel control and sensing techniques. The research is aimed at improving the safety and productivity of these mining tasks through automation, in particular the application of closed-loop visual positioning of the machine's end-effector.
Abstract:
This paper discusses some of the sensing technologies and control approaches available for guiding robot manipulators on a class of underground mining machines, including drilling jumbos, bolting arms, shotcreters and explosive chargers. Data acquired with such sensors, both in the laboratory and underground, are presented.
Abstract:
We address the problem of rangefinder-based avoidance of unforeseen static obstacles during a visual navigation task. We extend previous strategies which are efficient in most cases but are still hampered by some drawbacks (e.g., risks of collision or of local minima in particular cases). The key idea is to complete the control strategy by adding a controller that gives the robot anticipative capabilities to guarantee collision avoidance, and by defining more general transition conditions to deal with local minima. Simulation results demonstrate the efficiency of the proposed strategy.
Abstract:
This paper introduces a machine-learning-based system for controlling a robotic manipulator with visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images, without any prior knowledge of configuration, is shown for the first time. We build upon the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q-Network (DQN) was demonstrated to perform target reaching after training in simulation. Naively transferring the network to real hardware and real observations failed, but experiments show that the network works when camera images are replaced with synthetic images.
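The learning rule at the heart of a DQN is the Q-learning Bellman backup; the sketch below shows it in tabular form on a toy one-dimensional "target reaching" task, where the agent slides left or right until it hits a target cell. A DQN replaces this table with a deep network fed raw-pixel images; the task, hyperparameters and function name here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def train_reacher_q(n_states=7, target=5, episodes=500, alpha=0.5,
                    gamma=0.9, eps=0.5, seed=0):
    """Tabular Q-learning on a toy 1-D target-reaching task.
    Actions: 0 = step left, 1 = step right; reward +1 on reaching the target."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))                  # states x actions
    for _ in range(episodes):
        s = int(rng.integers(n_states))          # random start each episode
        for _ in range(50):
            # epsilon-greedy action selection
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
            s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
            done = s2 == target
            r = 1.0 if done else 0.0
            # Bellman backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s, a] += alpha * (r + (0.0 if done else gamma * Q[s2].max()) - Q[s, a])
            if done:
                break
            s = s2
    return Q
```

After training, the greedy policy moves toward the target from either side; the discounted values decay geometrically with distance from the target, which is the structure the DQN must discover from pixels instead of a state index.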
Abstract:
Scene understanding has been investigated mainly from a visual-information point of view. Recently, depth has provided an extra wealth of information, allowing more geometric knowledge to be fused into scene understanding. Yet to form a holistic view, especially in robotic applications, one can create even more data by interacting with the world. In fact, humans, when growing up, seem to investigate the world around them heavily through haptic exploration. We show an application of haptic exploration on a humanoid robot in cooperation with a learning method for object segmentation. The actions performed consecutively improve the segmentation of objects in the scene.