203 results for Robotic manipulators
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models from a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push, and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the `right' action, i.e. the action with the best possible improvement of the detector.
Abstract:
We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks developed in our lab (for robot vision, collision avoidance and machine learning), when combined, allow for safe interaction with the environment. This works even with noisy control signals, such as the operator's hand acceleration and electromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as reaching for and grasping objects) with the 7 DOF arm.
Abstract:
Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space through information gain maximization, learning a world model from experience while controlling the actual iCub hardware in real time. To the best of our knowledge, this is the first embodied curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment.
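The information-gain criterion driving a curious agent can be illustrated with a toy model. The sketch below is an illustration, not the paper's actual framework: it scores one more observation under a Dirichlet model of next-state outcomes, using a plug-in approximation (the expected KL divergence between the updated and current predictive distributions). The counts and function names are assumptions for illustration.

```python
import math

def kl(p, q):
    """KL divergence between two categorical distributions (nats)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def expected_info_gain(counts):
    """Expected information gain of one more observation under a
    Dirichlet(counts) model of next-state outcomes, via a plug-in
    approximation: the expected KL between the updated and current
    predictive distributions."""
    a0 = sum(counts)
    prior = [c / a0 for c in counts]
    gain = 0.0
    for o in range(len(counts)):
        post = [(c + (1.0 if k == o else 0.0)) / (a0 + 1.0)
                for k, c in enumerate(counts)]
        gain += prior[o] * kl(post, prior)
    return gain

# A curious agent prefers the action whose outcome model is least settled:
# the barely-explored model promises more information than the settled one.
novel = expected_info_gain([1.0, 1.0, 1.0])
settled = expected_info_gain([50.0, 1.0, 1.0])
```

Under this score, exploration naturally shifts away from well-modelled regions of the state-action space toward poorly-modelled ones, which is the qualitative behaviour the abstract describes.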
Abstract:
In this paper we present, for the first time, a complete symbolic navigation system that performs goal-directed exploration of unfamiliar environments on a physical robot. We introduce a novel construct called the abstract map to link provided symbolic spatial information with observed symbolic information and actual places in the real world. Symbolic information is observed using a text recognition system that has been developed specifically for the application of reading door labels. In the study described in this paper, the robot was provided with a floor plan and a destination. The destination was specified by a room number, used both in the floor plan and on the door to the room. The robot autonomously navigated to the destination using its text recognition, abstract map, mapping, and path planning systems. The robot used the symbolic navigation system to determine an efficient path to the destination, and reached the goal in two different real-world environments. Simulation results show that the system reduces the time required to navigate to a goal when compared to random exploration.
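The abstract map is the paper's own construct; as a loose illustration of planning over symbolic place labels, one can search a graph whose nodes are room numbers read off a floor plan. The adjacency structure, room labels, and choice of breadth-first search below are all assumptions for illustration, not the paper's method.

```python
from collections import deque

def plan_to_label(adjacency, start, goal):
    """Breadth-first search over a symbolic place graph: nodes are room
    numbers read off a floor plan, edges are traversable connections.
    Returns the shortest label-to-label path, or None if unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical corridor: room S11 connects to S12, which connects to S13.
floor_plan = {"S11": ["S12"], "S12": ["S11", "S13"], "S13": ["S12"]}
```

The point of the sketch is only that once door labels are grounded to places, path planning reduces to ordinary graph search.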
Abstract:
Using cameras onboard a robot to detect a coloured stationary target outdoors is a difficult task. Apart from the complexity of separating the target from the background scenery over different ranges, there are also inconsistencies in direct and reflected illumination from the sun, clouds, and moving and stationary objects, which can vary both the illumination on the target and its colour as perceived by the camera. In this paper, we analyse the effect of environmental conditions, range to target, camera settings and image processing on the reported colours of various targets. The analysis indicates the colour space and camera configuration that provide the most consistent colour values over varying environmental conditions and ranges. This information is used to develop a detection system that provides range and bearing to detected targets. The system is evaluated under various lighting conditions, from bright sunlight to shadows and overcast days, and demonstrates robust performance. Its accuracy is compared against a laser beacon detector, with preliminary results indicating it to be a valuable asset for long-range coloured target detection.
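One reason colour-space choice matters here: hue is largely invariant to illumination intensity, while raw RGB values are not. The stdlib-only sketch below illustrates this general point; it is not the paper's detection system, and the target colours are made up.

```python
import colorsys

def hue_deg(r, g, b):
    """Hue in degrees [0, 360) for 8-bit RGB values."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

# The same hypothetical orange target in full sun and in shadow: every RGB
# channel halves, but the hue barely moves.
sunlit = (220, 110, 30)
shaded = (110, 55, 15)
```

A detector thresholding on hue therefore tolerates the sun/shadow variation that defeats a threshold on raw channel values.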
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology since the work is both arduous and dangerous. However, while the industry makes extensive use of mechanisation, it has shown a slow uptake of automation. A major cause of this is the complexity of the task and the limitations of existing automation technology, which is predicated on a structured and time-invariant working environment. Here we discuss the topic of mining automation from a robotics and computer vision perspective — as a problem in sensor-based robot control, an issue which the robotics community has been studying for nearly two decades. We then describe two of our current mining automation projects to demonstrate what is possible for both open-pit and underground mining operations.
Abstract:
This paper describes a software architecture for real-world robotic applications. We discuss issues of software reliability, testing and realistic off-line simulation, which allow the majority of the automation system to be tested off-line in the laboratory before deployment in the field. A recent project, the automation of a very large mining machine, is used to illustrate the discussion.
Abstract:
This paper discusses some of the sensing technologies available for guiding robot manipulators for a class of underground mining tasks including drilling jumbos, bolting arms, shotcreters or explosive chargers. Data acquired with such sensors, in the laboratory and underground, is presented.
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology since the work is arduous, dangerous and often repetitive. This paper describes the development of an automation system for a physically large and complex field robotic system: a 3,500 tonne mining machine (a dragline). The major components of the system are discussed with a particular emphasis on the machine/operator interface. A very important aspect of this system is that it must work cooperatively with a human operator, seamlessly passing control back and forth in order to achieve the main aim: increased productivity.
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real time, using corners as object tokens. Local image-plane constraints are employed to solve the correspondence problem, removing the need for a 3D motion model. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. The technique is novel in that feature detection and tracking is restricted to areas likely to contain meaningful image structure. Feature instantiation regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Preliminary experiments on a parallel (transputer) architecture indicate that real-time operation is achievable.
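A local image-plane constraint can be sketched as nearest-neighbour matching of corners within a small search window. This toy version is an illustration, not the paper's algorithm, and the window size is an arbitrary assumption; the point is that no 3D motion model is involved.

```python
def match_corners(prev, curr, window=10.0):
    """Match each corner in `prev` (frame t) to its nearest corner in
    `curr` (frame t+1), accepting a match only within a local search
    window on the image plane; no 3D motion model is involved."""
    matches = {}
    for i, (x, y) in enumerate(prev):
        best, best_d = None, window
        for j, (u, v) in enumerate(curr):
            d = ((x - u) ** 2 + (y - v) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
    return matches
```

Corners with no neighbour inside the window simply go unmatched, which is how a purely local scheme copes with features appearing and disappearing between frames.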
Abstract:
This paper is not about the details of yet another robot control system, but rather the issues surrounding real-world robotic implementation. It is a fact that in order to realise a future where robots co-exist with people in everyday places, we have to pass through a developmental phase that involves some risk. Putting a “Keep Out, Experiment in Progress” sign on the door is no longer possible, since we are now at a level of capability that requires testing over long periods of time in complex, realistic environments that contain people. We all know that controlling the risk is important – a serious accident could set the field back globally – but just as important is convincing others that the risks are known and controlled. In this article, we describe our experience going down this path, and we show that health and safety assessment of mobile robotics research is still unexplored territory in universities and is often ignored. We hope that the article will make robotics research labs in universities around the world take note of these issues, rather than operating under the radar, to prevent any catastrophic accidents.
Abstract:
In recent years I have begun to integrate Creative Robotics into my Ecosophically-led art practices – which I have long deployed to investigate, materialise and engage thorny, ecological questions of the Anthropocene, seeking to understand how such forms of practice may promote the cultural conditions required to assure, rather than degrade, our collective futures. Many of us would instinctively conceive of robotics as an industrially driven endeavor, shaped by the pursuit of relentless efficiencies. Instead I ask through my practices, might the nascent field of Creative Robotics still be able to emerge with radically different frames of intention? Might creative practitioners still be able to shape experiences using robotic media that retain a healthy criticality towards such productivist lineages? Could this nascent form even bring forward fresh new techniques and assemblages that better encourage conversations around sustaining a future for the future, and, if so, which of its characteristics presents the greatest opportunities? I therefore ask, when Creative Robotics and Ecosophical Practice combine forces in strategic intervention, what qualities of this hybrid might best further the central aims of Ecosophical Practice – encouraging cultural conditions required to assure a future for the future?
Abstract:
Seagoing vessels have to undergo regular inspections, which are currently performed manually by ship surveyors. The main cost factor in a ship inspection is to provide access to the different areas of the ship, since the surveyor has to be close to the inspected parts, usually within arm's reach, either to perform a visual analysis or to take thickness measurements. The access to the structural elements in cargo holds, e.g., bulkheads, is normally provided by staging or by 'cherry-picking' cranes. To make ship inspections safer and more cost-efficient, we have introduced new inspection methods, tools, and systems, which have been evaluated in field trials, particularly focusing on cargo holds. More precisely, two magnetic climbing robots and a micro-aerial vehicle, which are able to assist the surveyor during the inspection, are introduced. Since localization of inspection data is mandatory for the surveyor, we also introduce an external localization system that has been verified in field trials, using a climbing inspection robot. Furthermore, the inspection data collected by the robotic systems are organized and handled by a spatial content management system that enables us to compare the inspection data of one survey with those from another, as well as to document the ship inspection when the robot team is used. Image-based defect detection is addressed by proposing an integrated solution for detecting corrosion and cracks. The systems' performance is reported, as well as conclusions on their usability, all in accordance with the output of field trials performed onboard two different vessels under real inspection conditions.
Abstract:
The inspection of marine vessels is currently performed manually. Inspectors use tools (e.g. cameras and devices for non-destructive testing) to detect damaged areas, cracks, and corrosion in large cargo holds, tanks, and other parts of a ship. Due to the size and complex geometry of most ships, ship inspection is time-consuming and expensive. The EU-funded project INCASS develops concepts for a marine inspection robotic assistant system to improve and automate ship inspections. In this paper, we introduce our magnetic wall-climbing robot: the Marine Inspection Robotic Assistant (MIRA). This semi-autonomous lightweight system is able to climb a vessel's steel frame to deliver on-line visual inspection data. In addition, we describe the design of the robot and its subsystems, as well as its hardware and software components.
Abstract:
In contrast to a single robotic agent, multi-robot systems are highly dependent on reliable communication. Robots have to synchronize tasks and share poses and sensor readings with other agents, especially in cooperative mapping tasks, where local sensor readings are incorporated into a global map. The drawback of existing communication frameworks is that most are based on a central component which has to be constantly within reach. Additionally, they do not prevent data loss between robots if a failure occurs in the communication link. During a distributed mapping task, loss of data is critical because it will corrupt the global map. In this work, we propose a cloud-based publish/subscribe mechanism which enables reliable communication between agents during a cooperative mission, using the Data Distribution Service (DDS) as a transport layer. The usability of our approach is verified by several experiments taking into account complete temporary communication loss.
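The loss-free behaviour that durable publish/subscribe transports provide can be mimicked in a toy in-process bus: published samples are retained per topic and replayed to subscribers that join (or rejoin) later. This is an illustrative sketch only, not the paper's system and not the DDS API; all names are made up.

```python
from collections import defaultdict

class ReliableBus:
    """Toy publish/subscribe bus: every published sample is retained per
    topic and replayed to subscribers that join (or rejoin) later, so a
    temporary communication outage does not lose map data."""

    def __init__(self):
        self._history = defaultdict(list)
        self._subscribers = defaultdict(list)

    def publish(self, topic, sample):
        self._history[topic].append(sample)
        for callback in self._subscribers[topic]:
            callback(sample)

    def subscribe(self, topic, callback):
        for sample in self._history[topic]:  # replay missed samples
            callback(sample)
        self._subscribers[topic].append(callback)
```

For example, a mapper that reconnects after an outage and subscribes to a pose topic first receives every earlier pose before new ones, so the global map can be rebuilt without gaps.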