Abstract:
Robots currently recognise and use objects through algorithms that are hand-coded or specifically trained. Such robots can operate in known, structured environments but cannot learn to recognise or use novel objects as they appear. This thesis demonstrates that a robot can develop meaningful object representations by learning the fundamental relationship between action and change in sensory state; the robot learns sensorimotor coordination. Methods based on Markov Decision Processes are experimentally validated on a mobile robot capable of gripping objects, and it is found that object recognition and manipulation can be learnt as an emergent property of sensorimotor coordination.
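A minimal sketch of the kind of Markov Decision Process formulation this abstract alludes to, using tabular value iteration over a toy sensorimotor problem. The states, actions, transition probabilities, and rewards below are illustrative placeholders only, not the formulation used in the thesis:

```python
# Minimal tabular value iteration over a toy sensorimotor MDP.
# States, actions, transitions and rewards are illustrative placeholders,
# not the thesis's formulation.
from collections import defaultdict

states = ["object_far", "object_near", "object_gripped"]
actions = ["approach", "grip", "retreat"]

# transitions[(s, a)] -> list of (probability, next_state, reward)
transitions = {
    ("object_far", "approach"):    [(0.9, "object_near", 0.0), (0.1, "object_far", 0.0)],
    ("object_far", "grip"):        [(1.0, "object_far", -0.1)],
    ("object_far", "retreat"):     [(1.0, "object_far", 0.0)],
    ("object_near", "approach"):   [(1.0, "object_near", 0.0)],
    ("object_near", "grip"):       [(0.8, "object_gripped", 1.0), (0.2, "object_near", -0.1)],
    ("object_near", "retreat"):    [(1.0, "object_far", 0.0)],
    ("object_gripped", "approach"): [(1.0, "object_gripped", 0.0)],
    ("object_gripped", "grip"):    [(1.0, "object_gripped", 0.0)],
    ("object_gripped", "retreat"): [(1.0, "object_gripped", 0.0)],
}

def value_iteration(gamma=0.95, tol=1e-6):
    """Compute the optimal state values by repeated Bellman backups."""
    V = defaultdict(float)
    while True:
        delta = 0.0
        for s in states:
            q_values = [
                sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[(s, a)])
                for a in actions
            ]
            best = max(q_values)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return dict(V)

print(value_iteration())
```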
Abstract:
This paper introduces a minimalist approach for producing a visual hybrid map of a mobile robot’s working environment. The proposed system uses omnidirectional images along with odometry information to build an initial dense pose-graph map. A two-level hybrid map is then extracted from the dense graph. The hybrid map consists of global and local levels. The global level contains a sparse topological map extracted from the initial graph using a dual clustering approach. The local level contains a spherical view stored at each node of the global level. The spherical views provide both an appearance signature for the nodes, which the robot uses to localize itself in the environment, and heading information when the robot uses the map for visual navigation. To show the usefulness of the map, an experiment was conducted in which the map was used for multiple visual navigation tasks inside an office workplace.
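A sketch of how such a two-level hybrid map might be represented: a sparse topological graph whose nodes store an appearance signature (standing in for the spherical view) and whose edges carry relative headings. The class and method names, and the nearest-signature localization rule, are assumptions for illustration, not the paper's code:

```python
# Illustrative two-level hybrid map: global topological graph, local
# appearance signatures at each node. Not the paper's implementation.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    signature: list                               # descriptor extracted from the spherical view
    neighbours: dict = field(default_factory=dict)  # node_id -> heading to neighbour (rad)

class HybridMap:
    def __init__(self):
        self.nodes = {}

    def add_node(self, node_id, signature):
        self.nodes[node_id] = Node(node_id, signature)

    def connect(self, a, b, heading_a_to_b):
        # store the heading in both directions so the map supports navigation either way
        self.nodes[a].neighbours[b] = heading_a_to_b
        self.nodes[b].neighbours[a] = (heading_a_to_b + math.pi) % (2 * math.pi)

    def localize(self, query_signature):
        """Return the node whose stored signature is closest to the query view."""
        def dist(sig):
            return sum((x - y) ** 2 for x, y in zip(sig, query_signature))
        return min(self.nodes.values(), key=lambda n: dist(n.signature)).node_id
```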
Abstract:
This paper is concerned with how a localised and energy-constrained robot can maximise its time in the field by taking paths and tours that minimise its energy expenditure. A significant component of a robot's energy is expended on mobility and is a function of terrain traversability. We estimate traversability online from data sensed by the robot as it moves, and use this to generate maps, explore and ultimately converge on minimum energy tours of the environment. We provide results of detailed simulations and parameter studies that show the efficacy of this approach for a robot moving over terrain with unknown traversability as well as a number of a priori unknown hard obstacles.
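A minimal sketch of energy-aware planning in the spirit of this abstract: per-cell traversability estimates are turned into edge energy costs, and Dijkstra's algorithm returns the minimum-energy path. The cost model (energy inversely related to traversability) and the grid representation are assumptions for illustration, not the paper's estimator:

```python
# Minimum-energy path over a traversability grid (illustrative cost model).
import heapq

def min_energy_path(traversability, start, goal, base_cost=1.0):
    """traversability: dict mapping (x, y) -> value in (0, 1]; higher = easier terrain."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            # reconstruct the path back to the start
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return d, path[::-1]
        if d > dist.get(cell, float("inf")):
            continue
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in traversability:
                continue                              # unknown or hard obstacle: skip
            step = base_cost / traversability[nxt]    # harder terrain costs more energy
            nd = d + step
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = cell
                heapq.heappush(queue, (nd, nxt))
    return float("inf"), []
```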
Abstract:
This paper presents a full system demonstration of dynamic sensor-based reconfiguration of a networked robot team. Robots sense obstacles in their environment locally and dynamically adapt their global geometric configuration to conform to an abstract goal shape. We present a novel two-layer planning and control algorithm for team reconfiguration that is decentralised and assumes local (neighbour-to-neighbour) communication only. The approach is designed to be resource-efficient and we show experiments using a team of nine mobile robots with modest computation, communication, and sensing. The robots use acoustic beacons for localisation and can sense obstacles in their local neighbourhood using IR sensors. Our results demonstrate globally-specified reconfiguration from local information in a real robot network, and highlight limitations of standard mesh networks in implementing decentralised algorithms.
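A toy sketch of the underlying idea of reaching a globally specified shape from local information: each robot updates its position using only its neighbours' positions and the desired offsets in the goal shape. This generic consensus-plus-offset rule is an assumption used purely to illustrate the concept; it is not the paper's two-layer algorithm:

```python
# One synchronous step of a toy decentralised shape-formation rule.
def step(positions, neighbours, goal_offsets, gain=0.2):
    """positions: {robot: (x, y)}; neighbours: {robot: [robot, ...]};
    goal_offsets[(r, n)]: desired displacement of r relative to n in the goal shape."""
    new_positions = {}
    for r, (x, y) in positions.items():
        dx = dy = 0.0
        for n in neighbours[r]:
            nx, ny = positions[n]
            ox, oy = goal_offsets[(r, n)]
            # move toward the position implied by this neighbour and the goal shape
            dx += (nx + ox) - x
            dy += (ny + oy) - y
        k = gain / max(len(neighbours[r]), 1)
        new_positions[r] = (x + k * dx, y + k * dy)
    return new_positions
```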
Abstract:
"This work considers a mobile service robot which uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the current position in the environment. Due to the nature of real-world environments such as houses and offices, where the appearance keeps changing, the internal representation may become out of date after some time. To solve this problem the robot needs to be able to adapt its internal representation continually to the changes in the environment. This paper presents a method for creating an adaptive map for long-term appearance-based localization of a mobile robot using long-term and short-term memory concepts, with omni-directional vision as the external sensor."--publisher website
Abstract:
Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as using this information for interaction with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the 3D locations of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time.
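A sketch of the "where is object X most likely?" query this abstract describes: observation counts of each object at each map node, recency-weighted so that older evidence fades as the environment changes. The weighting scheme and class names are assumptions for illustration only, not the paper's memory model:

```python
# Illustrative recency-weighted object-location memory over map nodes.
from collections import defaultdict

class ObjectLocationMemory:
    def __init__(self, decay=0.95):
        self.scores = defaultdict(lambda: defaultdict(float))  # object -> node -> score
        self.decay = decay

    def observe(self, detections):
        """detections: iterable of (object_name, node_id) seen in the current pass."""
        for obj in self.scores:                    # older evidence fades
            for node in self.scores[obj]:
                self.scores[obj][node] *= self.decay
        for obj, node in detections:
            self.scores[obj][node] += 1.0

    def most_likely_locations(self, obj, top_k=3):
        ranked = sorted(self.scores[obj].items(), key=lambda kv: kv[1], reverse=True)
        return ranked[:top_k]
```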
Abstract:
We present a framework and first set of simulations for evolving a language for communicating about space. The framework comprises two components: (1) An established mobile robot platform, RatSLAM, which has a "brain" architecture based on the rodent hippocampus with the ability to integrate visual and odometric cues to create internal maps of its environment. (2) A language learning system based on a neural network architecture that has been designed and implemented with the ability to evolve generalizable languages which can be learned by naive learners. A study is presented in which visual scenes and internal maps streamed from the simulated world of the robots are used to evolve languages. This study investigated the structure of the evolved languages, showing that with these inputs, expressive languages can effectively categorize the world. Ongoing studies are extending these investigations to evolve languages that use the full power of the robots' representations in populations of agents.
Abstract:
UAVs could one day save the lives of lost civilians and those sent to find them, and a competition in outback Australia is proving how soon that day might come. We have all seen news stories of people who ventured beyond the day-to-day reach of the community and got lost: search parties are formed, aircraft drafted in, and often large sums of money expended in the quest to find them.
Abstract:
In 2013, ten teams from German universities and research institutes participated in a national robot competition called SpaceBot Cup organized by the DLR Space Administration. The robots had one hour to autonomously explore and map a challenging Mars-like environment, find, transport, and manipulate two objects, and navigate back to the landing site. Localization without GPS in an unstructured environment was a major issue, as were mobile manipulation and very restricted communication. This paper describes our system of two rovers operating on the ground plus a quadrotor UAV simulating an observing orbiting satellite. We relied on ROS (robot operating system) as the software infrastructure and describe the main ROS components utilized in performing the tasks. Despite (or because of) faults, communication loss, and breakdowns, it was a valuable experience with many lessons learned.
Abstract:
Have you ever wished you were Doctor Who and could pop yourself and your students into a Tardis and teleport them to a historical event or to meet a historical figure? We all know that unfortunately time travel is not (yet) possible, but maybe student and teacher teleportation just might be – sort of. Over the past few centuries and in lieu of time travel our communities have developed museums as a means of experiencing some of our history...
Abstract:
This paper is not about the details of yet another robot control system, but rather the issues surrounding real-world robotic implementation. It is a fact that in order to realise a future where robots co-exist with people in everyday places, we have to pass through a developmental phase that involves some risk. Putting a “Keep Out, Experiment in Progress” sign on the door is no longer possible since we are now at a level of capability that requires testing over long periods of time in complex realistic environments that contain people. We all know that controlling the risk is important – a serious accident could set the field back globally – but just as important is convincing others that the risks are known and controlled. In this article, we describe our experience going down this path and we show that health and safety assessment for mobile robotics research is still unexplored territory in universities and is often ignored. We hope that the article will make robotics research labs in universities around the world take note of these issues, rather than operating under the radar, in order to prevent any catastrophic accidents.
Abstract:
The research reported in this paper explores autonomous technologies for agricultural farming applications and is focused on the development of multiple cooperative agricultural robots (AgBots). These are highly autonomous, small, lightweight, and unmanned machines that operate cooperatively (as opposed to a traditional single heavy machine) and are suited to work on broadacre land (large-scale crop operations on land parcels greater than 4,000 m²). Since this is a new, and potentially disruptive, technology, little is yet known about farmer attitudes towards robots, how robots might be incorporated into current farming practice, and how best to marry the capability of the robot with the work of the farmer. This paper reports preliminary insights (with a focus on farmer-robot control) gathered from field visits and contextual interviews with farmers, and contributes knowledge that will enable further work toward the design and application of agricultural robotics.