882 results for Robotic Arm
Reactive reaching and grasping on a humanoid: Towards closing the action-perception loop on the iCub
Abstract:
We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles (other objects detected in the visual stream) while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used in both autonomous and tele-operation scenarios.
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models from a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poking, pushing, and picking up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the 'right' action, i.e. the action with the best possible improvement of the detector.
Abstract:
Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze, or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space through information gain maximization, learning a world model from experience while controlling the actual iCub hardware in real time. To the best of our knowledge, this is the first ever embodied, curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment.
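The abstract gives no implementation details, but the core idea of exploring by information gain maximization over a learned transition model can be sketched in a few lines. The class below is a hypothetical toy, not the authors' iCub system: it keeps Dirichlet pseudo-counts over discrete state transitions and greedily picks the action whose expected posterior update (a KL-based surrogate) is largest, so poorly explored actions look more interesting.

```python
import math
from collections import defaultdict

class CuriousAgent:
    """Toy intrinsically motivated explorer over a discrete state-action space.

    Illustrative sketch of information-gain-driven exploration; the class,
    its names, and the surrogate gain measure are assumptions, not the
    paper's actual framework."""

    def __init__(self, n_states, actions):
        self.n_states = n_states
        self.actions = actions
        # Dirichlet pseudo-counts for the transition model P(s' | s, a)
        self.counts = defaultdict(lambda: [1.0] * n_states)

    def info_gain(self, s, a):
        """Expected KL divergence between the updated and current
        predictive distributions, averaged over likely outcomes."""
        c = self.counts[(s, a)]
        total = sum(c)
        gain = 0.0
        for s2 in range(self.n_states):
            p = c[s2] / total  # predictive probability of outcome s2
            posterior = [(cj + (1.0 if j == s2 else 0.0)) / (total + 1.0)
                         for j, cj in enumerate(c)]
            kl = sum(q * math.log(q / (cj / total))
                     for q, cj in zip(posterior, c))
            gain += p * kl
        return gain

    def select_action(self, s):
        # Greedy one-step information-gain maximization.
        return max(self.actions, key=lambda a: self.info_gain(s, a))

    def update(self, s, a, s_next):
        # Incorporate an observed transition into the world model.
        self.counts[(s, a)][s_next] += 1.0
```

Because the gain of a well-observed action shrinks roughly as 1/N, the agent naturally shifts its attention to regions of the state-action space it knows least about.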
Abstract:
In this paper we present for the first time a complete symbolic navigation system that performs goal-directed exploration of unfamiliar environments on a physical robot. We introduce a novel construct called the abstract map to link provided symbolic spatial information with observed symbolic information and actual places in the real world. Symbolic information is observed using a text recognition system that has been developed specifically for the application of reading door labels. In the study described in this paper, the robot was provided with a floor plan and a destination. The destination was specified by a room number, used both in the floor plan and on the door to the room. The robot autonomously navigated to the destination using its text recognition, abstract map, mapping, and path planning systems. The robot used the symbolic navigation system to determine an efficient path to the destination, and reached the goal in two different real-world environments. Simulation results show that the system reduces the time required to navigate to a goal when compared to random exploration.
Abstract:
Using cameras onboard a robot for detecting a coloured stationary target outdoors is a difficult task. Apart from the complexity of separating the target from the background scenery over different ranges, there are also the inconsistencies of direct and reflected illumination from the sun, clouds, and moving and stationary objects. These can vary both the illumination on the target and its colour as perceived by the camera. In this paper, we analyse the effect of environment conditions, range to target, camera settings and image processing on the reported colours of various targets. The analysis indicates the colour space and camera configuration that provide the most consistent colour values over varying environment conditions and ranges. This information is used to develop a detection system that provides range and bearing to detected targets. The system is evaluated over lighting conditions ranging from bright sunlight to shadow and overcast days and demonstrates robust performance. The accuracy of the system is compared against a laser beacon detector, with preliminary results indicating it to be a valuable asset for long-range coloured target detection.
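The abstract does not name its chosen colour space, but the general principle it relies on, that a hue-based space decouples an object's chromaticity from illumination intensity, can be sketched simply. The function below is an illustrative assumption (parameter names and thresholds are invented, not from the paper): it masks pixels whose hue lies near a target hue, so a red marker matches whether it is brightly lit or in shadow.

```python
import colorsys

def detect_coloured_target(pixels, hue_centre, hue_tol=0.05,
                           min_sat=0.3, min_val=0.2):
    """Illumination-tolerant coloured-target masking (hypothetical sketch).

    pixels: list of (r, g, b) tuples in 0..255.
    hue_centre: target hue on colorsys's [0, 1) circular scale.
    Returns a boolean mask, one entry per pixel."""
    mask = []
    for (r, g, b) in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        # Hue distance measured around the circular [0, 1) scale
        d = min(abs(h - hue_centre), 1.0 - abs(h - hue_centre))
        # Saturation/value floors reject grey and near-black pixels,
        # whose hue is numerically unstable.
        mask.append(d <= hue_tol and s >= min_sat and v >= min_val)
    return mask
```

A bright red pixel and a dim red pixel share the same hue and both pass, while a green pixel of any brightness is rejected, which is the property that makes hue-style matching attractive outdoors.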
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology since the work is both arduous and dangerous. However, while the industry makes extensive use of mechanisation it has shown a slow uptake of automation. A major cause of this is the complexity of the task, and the limitations of existing automation technology, which is predicated on a structured and time-invariant working environment. Here we discuss the topic of mining automation from a robotics and computer vision perspective, as a problem in sensor-based robot control, an issue which the robotics community has been studying for nearly two decades. We then describe two of our current mining automation projects to demonstrate what is possible for both open-pit and underground mining operations.
Abstract:
This paper describes a software architecture for real-world robotic applications. We discuss issues of software reliability, testing and realistic off-line simulation that allow the majority of the automation system to be tested off-line in the laboratory before deployment in the field. A recent project, the automation of a very large mining machine, is used to illustrate the discussion.
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology since the work is arduous, dangerous and often repetitive. This paper describes the development of an automation system for a physically large and complex field robotic system - a 3,500 tonne mining machine (a dragline). The major components of the system are discussed with a particular emphasis on the machine/operator interface. A very important aspect of this system is that it must work cooperatively with a human operator, seamlessly passing the control back and forth in order to achieve the main aim - increased productivity.
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real time, using corners as object tokens. Local image-plane constraints are employed to solve the correspondence problem, removing the need for a 3D motion model. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. The technique is novel in that feature detection and tracking is restricted to areas likely to contain meaningful image structure. Feature instantiation regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Preliminary experiments on a parallel (transputer) architecture indicate that real-time operation is achievable.
Abstract:
The consequences of falls are often dreadful for individuals with lower limb amputation using bone-anchored prostheses.[1-5] Typically, the impact on the fixation is responsible for bending the intercutaneous piece, which could lead to a complete breakage over time.[3, 5-8] The surgical replacement of this piece is possible but complex and expensive. Clearly, there is a need for solid data enabling an evidence-based design of protective devices limiting impact forces and torsion applied during a fall. The impact on the fixation during an actual fall is obviously difficult to record during a scientific experiment.[6, 8-13] Consequently, Schwartze and colleagues opted for one of the next best options science has to offer: simulation with an able-bodied participant. They recorded body movements and knee impacts on the floor while mimicking several plausible falling scenarios. Then, they calculated the forces and moments that would be applied at four levels along the femur corresponding to amputation heights.[6, 8-11, 14-25] The overall forces applied during the falls were similar regardless of the amputation height, indicating that the impact forces were simply translated along the femur. As expected, they showed that overall moments generally increased with amputation height due to changes in lever arm. This work demonstrates that devices protecting only against force overload do not require considering amputation height, while those protecting against bending moments should. Another significant contribution is to provide, for the first time, the magnitude of the impact load during different falls. This loading range is crucial to the overall design and, more precisely, the triggering threshold of protective devices. Unfortunately, the analysis of only a single able-bodied participant replicating falls greatly limits the generalisation of the findings. Nonetheless, this case study is an important milestone contributing to a better understanding of load impact during a fall. This new knowledge will improve the treatment, the safe ambulation and, ultimately, the quality of life of individuals fitted with bone-anchored prostheses.
Abstract:
This paper is not about the details of yet another robot control system, but rather the issues surrounding real-world robotic implementation. It is a fact that in order to realise a future where robots co-exist with people in everyday places, we have to pass through a developmental phase that involves some risk. Putting a "Keep Out, Experiment in Progress" sign on the door is no longer possible, since we are now at a level of capability that requires testing over long periods of time in complex, realistic environments that contain people. We all know that controlling the risk is important – a serious accident could set the field back globally – but just as important is convincing others that the risks are known and controlled. In this article, we describe our experience going down this path, and we show that health and safety assessment of mobile robotics research is still unexplored territory in universities and is often ignored. We hope that the article will prompt university robotics research labs around the world to take note of these issues, rather than operating under the radar, and thereby prevent catastrophic accidents.
Abstract:
In recent years I have begun to integrate Creative Robotics into my Ecosophically-led art practices – which I have long deployed to investigate, materialise and engage thorny, ecological questions of the Anthropocene, seeking to understand how such forms of practice may promote the cultural conditions required to assure, rather than degrade, our collective futures. Many of us would instinctively conceive of robotics as an industrially driven endeavor, shaped by the pursuit of relentless efficiencies. Instead, I ask through my practices: might the nascent field of Creative Robotics still be able to emerge with radically different frames of intention? Might creative practitioners still be able to shape experiences using robotic media that retain a healthy criticality towards such productivist lineages? Could this nascent form even bring forward fresh techniques and assemblages that better encourage conversations around sustaining a future for the future, and, if so, which of its characteristics presents the greatest opportunities? I therefore ask, when Creative Robotics and Ecosophical Practice combine forces in strategic intervention, what qualities of this hybrid might best further the central aims of Ecosophical Practice – encouraging the cultural conditions required to assure a future for the future?
Abstract:
The inspection of marine vessels is currently performed manually. Inspectors use tools (e.g. cameras and devices for non-destructive testing) to detect damaged areas, cracks, and corrosion in large cargo holds, tanks, and other parts of a ship. Due to the size and complex geometry of most ships, ship inspection is time-consuming and expensive. The EU-funded project INCASS develops concepts for a marine inspection robotic assistant system to improve and automate ship inspections. In this paper, we introduce our magnetic wall-climbing robot: the Marine Inspection Robotic Assistant (MIRA). This semiautonomous lightweight system is able to climb a vessel's steel frame to deliver online visual inspection data. In addition, we describe the design of the robot and its subsystems, as well as its hardware and software components.
Abstract:
In this work we present an autonomous mobile manipulator that is used to collect sample containers in an unknown environment. The manipulator is part of a team of heterogeneous mobile robots tasked with searching for and identifying sample containers in an unknown environment. A map of the environment, along with possible positions of sample containers, is shared between the robots in the team using a cloud-based communication interface. To grasp a container with its manipulator arm, the robot has to place itself in a position suitable for the manipulation task. This optimal base-placement pose is selected by querying a precomputed inverse reachability database.
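The idea of selecting a base pose from a precomputed inverse reachability database can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the function name, the `(dx, dy, score)` table format, and the collision callback are invented, not the authors' API): candidate offsets from the target, each scored offline for arm reachability, are tried in order of score and the first collision-free one is returned, oriented to face the container.

```python
import math

def best_base_pose(container_xy, candidate_offsets, is_free):
    """Pick a base placement from a precomputed reachability table.

    container_xy: (x, y) of the target container.
    candidate_offsets: iterable of (dx, dy, score) entries, where score
        encodes how well the arm reaches a target at that offset.
    is_free: callback (x, y) -> bool testing the pose against the map.
    Returns (x, y, yaw) or None if no candidate is collision-free."""
    cx, cy = container_xy
    for dx, dy, score in sorted(candidate_offsets, key=lambda e: -e[2]):
        x, y = cx + dx, cy + dy
        if not is_free(x, y):          # skip poses colliding with obstacles
            continue
        yaw = math.atan2(cy - y, cx - x)  # orient the base toward the target
        return (x, y, yaw)
    return None
```

The point of the precomputed table is that the expensive inverse-kinematics evaluation happens offline; at run time the query reduces to a sorted scan plus a collision check per candidate.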
Abstract:
In contrast to a single robotic agent, multi-robot systems are highly dependent on reliable communication. Robots have to synchronize tasks and share poses and sensor readings with other agents, especially for cooperative mapping tasks, where local sensor readings are incorporated into a global map. The drawback of existing communication frameworks is that most are based on a central component which has to be constantly within reach. Additionally, they do not prevent data loss between robots if a failure occurs in the communication link. During a distributed mapping task, loss of data is critical because it will corrupt the global map. In this work, we propose a cloud-based publish/subscribe mechanism which enables reliable communication between agents during a cooperative mission, using the Data Distribution Service (DDS) as a transport layer. The usability of our approach is verified by several experiments taking into account complete temporary communication loss.
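The reliability property the abstract targets, that no samples are lost across a temporary outage, is what DDS provides through its reliability and durability QoS policies. The toy class below is not the DDS API; it is a minimal model of the behaviour: every published sample is logged, each subscriber keeps a cursor, and a subscriber that was unreachable simply picks up where it left off on its next read.

```python
class DurableTopic:
    """Toy publish/subscribe topic with replay after communication loss.

    Illustrative sketch of 'no data loss across a temporary outage';
    class and method names are assumptions, not DDS calls."""

    def __init__(self):
        self.log = []        # every sample ever published, in order
        self.cursors = {}    # subscriber id -> index of next sample to read

    def publish(self, sample):
        self.log.append(sample)

    def subscribe(self, sub_id):
        self.cursors.setdefault(sub_id, 0)

    def take(self, sub_id):
        """Deliver all samples since this subscriber last read, including
        any published while it was unreachable."""
        start = self.cursors[sub_id]
        self.cursors[sub_id] = len(self.log)
        return self.log[start:]
```

For a cooperative mapping task this is exactly the guarantee that matters: a robot rejoining after a dropout receives the backlog of pose and sensor updates in order, so the shared global map stays consistent.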