Abstract:
Performing reliable localisation and navigation within highly unstructured underwater coral reef environments is a difficult task at the best of times. Typical research and commercial underwater vehicles use expensive acoustic positioning and sonar systems which require significant external infrastructure to operate effectively. This paper is focused on the development of a robust vision-based motion estimation technique using low-cost sensors for performing real-time autonomous and untethered environmental monitoring tasks in the Great Barrier Reef without the use of acoustic positioning. The technique is experimentally shown to provide accurate odometry and terrain profile information suitable for input into the vehicle controller to perform a range of environmental monitoring tasks.
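The abstract above describes the vision-based motion estimation only at a high level. Purely as an illustration of the general idea (not the paper's actual pipeline), a minimal monocular odometry step can be sketched with feature matching and an essential-matrix decomposition; the camera matrix `K`, the grayscale image pair and all parameter values below are assumed inputs:

```python
# Illustrative monocular visual-odometry step between consecutive frames.
# None of the names or parameter values are taken from the paper.
import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray, K):
    """Estimate relative rotation R and unit-scale translation t between two frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects mismatches caused by moving fish, particulates and blur.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # t is only known up to scale for a monocular camera
```

Because monocular translation is only recovered up to scale, such systems typically fuse a depth or altitude sensor to obtain metric odometry.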
Abstract:
For a mobile robot to operate autonomously in real-world environments, it must have an effective control system and a navigation system capable of providing robust localization, path planning and path execution. In this paper we describe work investigating synergies between mapping and control systems. We have integrated the development of a control system for navigating mobile robots with a robot SLAM system. The control system is hybrid in nature and tightly coupled with the SLAM system; it uses a combination of high- and low-level deliberative and reactive control processes to perform obstacle avoidance, exploration, global navigation and recharging, and draws upon the map learning and localization capabilities of the SLAM system. The effectiveness of this hybrid, multi-level approach was evaluated in the context of a delivery robot scenario. Over a period of two weeks the robot performed 1143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), travelled a total distance of more than 40 km, and recharged autonomously a total of 23 times. In this paper we describe the combined control and SLAM system and discuss insights gained from its successful application in a real-world context.
Abstract:
This paper presents a low-bandwidth multi-robot communication system designed to serve as a backup communication channel in the event a robot suffers a network device fault. While much research has been performed on distributing network communication across multiple robots within a system, individual robots are still susceptible to hardware failure. In the past, such robots would simply be removed from service and their tasks re-allocated to other members. However, there are times when a faulty robot might be crucial to a mission, or be able to contribute in a less communication-intensive capacity. By allowing robots to encode and decode messages as unique sequences of DTMF symbols, called words, our system facilitates continued low-bandwidth communication between robots that have lost access to network communication. Our results show that the system is capable of permitting robots to negotiate task initiation and termination, and is flexible enough to permit a pair of robots to perform a simple turn-taking task.
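The abstract does not specify the word-coding scheme, so the following is only a hypothetical illustration of mapping message bytes onto the 16 standard DTMF symbols; `encode_word`, `decode_word` and the example opcode are invented for the sketch:

```python
# Illustrative mapping of message bytes onto the 16 standard DTMF tone pairs.
# The byte-per-two-symbols framing is an assumption, not the paper's scheme.
DTMF_FREQS = {
    '1': (697, 1209), '2': (697, 1336), '3': (697, 1477), 'A': (697, 1633),
    '4': (770, 1209), '5': (770, 1336), '6': (770, 1477), 'B': (770, 1633),
    '7': (852, 1209), '8': (852, 1336), '9': (852, 1477), 'C': (852, 1633),
    '*': (941, 1209), '0': (941, 1336), '#': (941, 1477), 'D': (941, 1633),
}
NIBBLE_TO_SYMBOL = "0123456789ABCD*#"   # one DTMF symbol per 4-bit nibble

def encode_word(payload: bytes):
    """Encode each byte as two DTMF symbols (high nibble first) with their tone pairs."""
    symbols = []
    for b in payload:
        symbols.append(NIBBLE_TO_SYMBOL[b >> 4])
        symbols.append(NIBBLE_TO_SYMBOL[b & 0x0F])
    return [(s, DTMF_FREQS[s]) for s in symbols]

def decode_word(symbols):
    """Invert encode_word: pair up nibble indices back into bytes."""
    idx = [NIBBLE_TO_SYMBOL.index(s) for s in symbols]
    return bytes((hi << 4) | lo for hi, lo in zip(idx[0::2], idx[1::2]))

word = encode_word(bytes([0x01, 0x2A]))          # hypothetical "start task 42" message
print([f"{s}: {lo} Hz + {hi} Hz" for s, (lo, hi) in word])
print(decode_word([s for s, _ in word]))         # b'\x01*'
```

In practice the robots would also need to agree on tone duration and inter-symbol pauses, which are outside the scope of this sketch.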
Abstract:
Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals, goal-directed navigation facilitates finding food, seeking shelter or migrating; similarly, robots perform goal-directed navigation to find a charging station, get out of the rain or guide a person to a destination. This similarity in tasks extends to the environment as well; increasingly, mobile robots are operating in the same underwater, ground and aerial environments that animals do. Yet despite these similarities, goal-directed navigation research in robotics and biology has proceeded largely in parallel, linked only by a small amount of interdisciplinary research spanning both areas. Most state-of-the-art robotic navigation systems employ a range of sensors, world representations and navigation algorithms that seem far removed from what we know of how animals navigate; these systems are shaped by key principles of navigation in ‘real-world’ environments, including dealing with uncertainty in sensing, landmark observation and world modelling. By contrast, biomimetic animal navigation models produce plausible animal navigation behaviour in a range of laboratory experimental navigation paradigms, typically without addressing many of these robotic navigation principles. In this paper, we attempt to link robotics and biology by reviewing the current state of the art in conventional and biomimetic goal-directed navigation models, focusing on the key principles of goal-oriented robotic navigation, the extent to which these principles have been adapted by biomimetic navigation models, and why.
Abstract:
This thesis develops a novel approach to robot control that learns to account for a robot's dynamic complexities while executing various control tasks, drawing inspiration from biological sensorimotor control and machine learning. A robot that can learn its own control system can account for complex situations and adapt to changes in control conditions to maximise its performance and reliability in the real world. This research has developed two novel learning methods aimed at solving the problems of learning control for non-rigid robots, which introduce additional dynamic complexities. The new learning control system was evaluated on a real three degree-of-freedom elastic-joint robot arm in a number of experiments: initially validating the learning method and testing its ability to generalise to new tasks, then evaluating the system during a learning control task requiring continuous online model adaptation.
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the 'right' action, i.e. the action with the best possible improvement of the detector.
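The selection rule described in the last sentence reduces to a few lines; this is a toy sketch with invented improvement figures, not the paper's CGP-based detector learning:

```python
# Toy version of the action-selection rule above: prefer the manipulation action
# whose execution (and subsequent retraining) improved the detector the most.
# The improvement figures are invented; the CGP detector itself is not shown.
def select_best_action(actions, improvement_after):
    """Return the action with the largest measured detector improvement."""
    return max(actions, key=improvement_after)

measured_gain = {"poke": 0.04, "push": 0.11, "pick-up": 0.09}   # hypothetical detection-rate gains
print(select_best_action(measured_gain, measured_gain.get))     # -> push
```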
Abstract:
In contrast to a single robotic agent, multi-robot systems are highly dependent on reliable communication. Robots have to synchronize tasks or share poses and sensor readings with other agents, especially for cooperative mapping tasks where local sensor readings are incorporated into a global map. The drawback of existing communication frameworks is that most are based on a central component which has to be constantly within reach. Additionally, they do not prevent data loss between robots if a failure occurs in the communication link. During a distributed mapping task, loss of data is critical because it will corrupt the global map. In this work, we propose a cloud-based publish/subscribe mechanism which enables reliable communication between agents during a cooperative mission, using the Data Distribution Service (DDS) as a transport layer. The usability of our approach is verified by several experiments taking into account complete temporary communication loss.
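As a conceptual sketch only (the paper uses DDS as the transport layer, which is not reproduced here), the core idea of preventing data loss can be illustrated with a broker that logs every message so a temporarily disconnected subscriber can replay what it missed; all class and topic names below are invented:

```python
# Conceptual reliable publish/subscribe: a message log on the broker side lets a
# robot that temporarily lost connectivity replay everything it missed, so no
# mapping data is dropped. Illustration of the idea only, not the DDS API.
import itertools, collections

class PersistentBroker:
    def __init__(self):
        self._log = collections.defaultdict(list)   # topic -> [(seq, msg), ...]
        self._seq = itertools.count()

    def publish(self, topic, msg):
        self._log[topic].append((next(self._seq), msg))

    def fetch_since(self, topic, last_seen_seq):
        """Return all messages on `topic` newer than `last_seen_seq`."""
        return [(s, m) for s, m in self._log[topic] if s > last_seen_seq]

class Subscriber:
    def __init__(self, broker, topic):
        self.broker, self.topic, self.last_seq = broker, topic, -1

    def poll(self):
        missed = self.broker.fetch_since(self.topic, self.last_seq)
        if missed:
            self.last_seq = missed[-1][0]
        return [m for _, m in missed]

# Example: robot B goes offline while A keeps publishing scans, then catches up.
broker = PersistentBroker()
b = Subscriber(broker, "local_submaps")
broker.publish("local_submaps", {"robot": "A", "scan": 1})
broker.publish("local_submaps", {"robot": "A", "scan": 2})   # B was offline here
print(b.poll())   # B receives both scans on reconnection
```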
Abstract:
In this paper we focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training. Motivated by their ongoing success in various visual recognition tasks, we build our system upon a state-of-the-art convolutional network. We overcome its closed-set limitations by complementing the network with a series of one-vs-all classifiers that can learn to recognize new semantic classes online. Prior domain knowledge is incorporated by embedding the classification system into a Bayesian filter framework that also ensures temporal coherence. We evaluate the classification accuracy of the system on a robot that maps a variety of places on our campus in real time. We show how semantic information can boost robotic object detection performance and how the semantic map can be used to modulate the robot’s behaviour during navigation tasks. The system is made available to the community as a ROS module.
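As an illustration of the temporal-coherence idea only (the class labels, transition matrix and scores below are assumptions, not values from the paper), a recursive Bayesian filter over per-class confidences might look like this:

```python
# Illustrative temporal Bayesian filter over place-category scores. `scores`
# stands in for the per-class confidences produced by the one-vs-all classifiers
# on top of CNN features; labels and transition probabilities are assumptions.
import numpy as np

CLASSES = ["office", "corridor", "kitchen"]          # hypothetical label set
N = len(CLASSES)
# Places change slowly as the robot moves, so self-transitions dominate.
TRANSITION = np.full((N, N), 0.05 / (N - 1))
np.fill_diagonal(TRANSITION, 0.95)

def filter_step(belief, scores):
    """One predict/update cycle: belief_t is proportional to likelihood * (T @ belief_{t-1})."""
    predicted = TRANSITION @ belief
    likelihood = np.clip(scores, 1e-6, None)
    posterior = likelihood * predicted
    return posterior / posterior.sum()

belief = np.full(N, 1.0 / N)                          # uniform prior
for scores in ([0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.6, 0.3, 0.1]):
    belief = filter_step(belief, np.array(scores))
print(dict(zip(CLASSES, belief.round(3))))
```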
Abstract:
This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
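As a minimal sketch of the cue-fusion idea (the dimensions, noise levels and constant-pose model are illustrative assumptions, not the thesis' filter design), several cues can sequentially update one Kalman filter, each with its own measurement noise:

```python
# Minimal fusion of heterogeneous cues in one Kalman filter: a visual pose cue
# and a tactile/contact-based cue update the same state with different noise.
import numpy as np

class FusionKF:
    def __init__(self, dim):
        self.x = np.zeros(dim)            # state, e.g. planar object pose (x, y, theta)
        self.P = np.eye(dim)              # state covariance

    def predict(self, Q):
        self.P = self.P + Q               # constant-pose model; add process noise

    def update(self, z, R):
        # H = I: each cue measures the pose directly in this toy example
        S = self.P + R
        K = self.P @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P

kf = FusionKF(3)
kf.predict(Q=0.01 * np.eye(3))
kf.update(z=np.array([0.52, 0.10, 0.05]), R=0.05 * np.eye(3))   # visual cue (noisier)
kf.update(z=np.array([0.50, 0.11, 0.04]), R=0.01 * np.eye(3))   # tactile cue (tighter)
print(kf.x)
```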
A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain the contact mode probabilities. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation are explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.
This thesis also presents a new method for action selection involving touch. This next best touch method selects the available action for interacting with an object that will gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results demonstrate next best touch for object localization, specifically of a door handle on a door. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best localizes the object and estimates these parameters. Simulation results are then presented for localizing and determining a parameter of a screwdriver.
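A toy version of the information-gain selection can be sketched as follows; the one-dimensional handle-position belief and the binary contact sensor model are invented for illustration and are not the thesis' formulation:

```python
# Toy "next best touch": for each candidate poke location, compute the expected
# entropy reduction over the binary contact/no-contact outcome and pick the most
# informative one. Belief and sensor model values are assumptions.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

positions = np.linspace(0.0, 1.0, 11)                     # hypothetical handle positions (m)
belief = np.array([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1], float)
belief /= belief.sum()                                     # e.g. belief after one camera glance
P_CONTACT_IF_AT = 0.95                                     # assumed touch sensor model
P_CONTACT_IF_NOT = 0.05

def expected_info_gain(belief, poke_idx):
    """Expected entropy reduction from poking at positions[poke_idx]."""
    gain = 0.0
    at_poke = np.arange(len(belief)) == poke_idx
    for p_at, p_not in ((P_CONTACT_IF_AT, P_CONTACT_IF_NOT),          # outcome: contact
                        (1 - P_CONTACT_IF_AT, 1 - P_CONTACT_IF_NOT)): # outcome: no contact
        likelihood = np.where(at_poke, p_at, p_not)
        p_outcome = (likelihood * belief).sum()
        posterior = likelihood * belief / p_outcome
        gain += p_outcome * (entropy(belief) - entropy(posterior))
    return gain

best = max(range(len(positions)), key=lambda i: expected_info_gain(belief, i))
print("next best touch at", positions[best], "m")
```

Under this simple sensor model the most informative poke is at the current belief mode (0.5 m here), because the binary contact outcome carries the most information where the prior probability of contact is highest.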
Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.
Abstract:
In this final degree project, an analysis of the industrial SCARA robot has been carried out, building on concepts covered in several subjects of the Industrial Engineering degree. These four-degree-of-freedom robots are widely used in industry thanks to their fast work cycles, high payload capacity, repeatability and wide range of applications. The project is divided into two main parts: the kinematic analysis of a SCARA-type robot, with visualisation of the robot and trajectory generation using Matlab; and interaction with a real SCARA prototype, the Adept Cobra e-Vario 600. The contents developed throughout the project are structured as follows. First, the opening section describes the context in which the project is set, as well as the present-day relevance of that context in different sectors. The second section states the objectives proposed for the work and the benefits the project offers. The third section, the state of the art, reviews the historical development of robots up to the SCARA robot and their different classifications. The fourth section, the methodology, gives a detailed description of the two main parts developed in this project. Finally, the last section presents the description and duration of each task carried out to complete the work, the budget, a risk analysis and the conclusions drawn from the project.
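The planar forward kinematics of a generic 4-DOF SCARA arm are compact enough to sketch; the link lengths and sign convention below are placeholders rather than the Adept Cobra e-Vario 600's data-sheet values, and the sketch uses Python rather than the Matlab scripts described in the project:

```python
# Illustrative forward kinematics for a generic 4-DOF SCARA arm:
# two revolute joints in the plane, one prismatic z joint, one wrist roll.
import numpy as np

def scara_fk(theta1, theta2, d3, theta4, l1=0.325, l2=0.275):
    """Return end-effector (x, y, z, yaw) from joint values (rad, rad, m, rad)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    z = -d3                       # assumed convention: the prismatic joint lowers the tool
    yaw = theta1 + theta2 + theta4
    return x, y, z, yaw

print(scara_fk(np.deg2rad(30), np.deg2rad(45), 0.05, 0.0))
```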
Abstract:
In virtual assembly verification or remote maintenance tasks, bimanual haptic interfaces play a crucial role in successful task completion. This paper proposes a method for objectively comparing how well a haptic interface covers the reachable workspace of human arms. Two system configurations are analyzed for a recently introduced haptic device that is based on two DLR-KUKA lightweight robots: the standard configuration, where the device is opposite the human operator, and the ergonomic configuration, where the haptic device is mounted behind the human operator. The human operator directly controls the robotic arms using handles. The analysis is performed using a representation of the robot arm workspace. The merits of restricting the comparisons to the most significant regions of the human workspace are discussed. Using this method, the ergonomic configuration was shown to provide greater workspace correspondence. ©2010 IEEE.
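As a toy illustration of this kind of comparison (the sphere workspace models, offsets and radii below are invented stand-ins for the real workspace representations), one can score each device configuration by the fraction of sampled human-arm workspace points it covers:

```python
# Monte Carlo workspace-coverage comparison: approximate each workspace as a
# point set and score a configuration by how much of the human workspace it reaches.
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(center, radius, n=20000):
    """Uniformly sample n points inside a sphere (stand-in for a real workspace model)."""
    pts = rng.normal(size=(n, 3))
    pts *= (radius * rng.random((n, 1)) ** (1 / 3)) / np.linalg.norm(pts, axis=1, keepdims=True)
    return pts + center

def coverage(human_pts, device_center, device_radius):
    """Fraction of sampled human-workspace points inside the device workspace."""
    inside = np.linalg.norm(human_pts - device_center, axis=1) <= device_radius
    return inside.mean()

human = sample_sphere(center=np.array([0.4, 0.0, 1.2]), radius=0.6)   # region in front of the shoulder
for name, center in {"standard (opposite)": np.array([0.9, 0.0, 1.2]),
                     "ergonomic (behind)": np.array([0.5, 0.0, 1.2])}.items():
    print(name, round(coverage(human, center, 0.8), 3))
```

Restricting the human point set to the most significant regions, as the paper discusses, would simply mean weighting or filtering `human` before computing the coverage fraction.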
Abstract:
Most animals have significant behavioral expertise built in without having to explicitly learn it all from scratch. This expertise is a product of evolution of the organism; it can be viewed as a very long-term form of learning which provides a structured system within which individuals might learn more specialized skills or abilities. This paper suggests one possible mechanism for analogous robot evolution by describing a carefully designed series of networks, each one being a strict augmentation of the previous one, which control a six-legged walking machine capable of walking over rough terrain and following a person passively sensed in the infrared spectrum. As the completely decentralized networks are augmented, the robot's performance and behavior repertoire demonstrably improve. The rationale for such demonstrations is that they may provide a hint as to the requirements for automatically building massive networks to carry out complex sensory-motor tasks. The experiments with an actual robot ensure that an essence of reality is maintained and that no critical problems have been ignored.
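As a loose software analogy only (the original controller is a network of augmented finite state machines on a physical hexapod, not Python functions), the layering idea can be sketched as behaviours stacked so that each added layer may override the output of those beneath it:

```python
# Drastically simplified sketch of layered control: each added layer can override
# (subsume) the output of the layers beneath it, so the behaviour repertoire grows
# without modifying existing layers. Behaviour names and sensor keys are invented.
def walk_forward(sensors):
    return "step forward"

def avoid_obstacle(sensors):
    return "turn away" if sensors.get("obstacle") else None

def follow_person(sensors):
    return "steer toward heat source" if sensors.get("ir_target") else None

LAYERS = [walk_forward, avoid_obstacle, follow_person]   # lowest to highest priority

def control(sensors):
    command = None
    for layer in LAYERS:                 # later layers subsume earlier outputs
        output = layer(sensors)
        if output is not None:
            command = output
    return command

print(control({"ir_target": True}))      # -> steer toward heat source
print(control({"obstacle": True}))       # -> turn away
print(control({}))                       # -> step forward
```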
Abstract:
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has received little previous attention. We present a framework for computing motion strategies that are guaranteed to succeed in the presence of all three kinds of uncertainty. The motion strategies comprise sensor-based gross motions, compliant motions, and simple pushing motions.