159 results for Robot localization
Abstract:
This paper describes a novel vision-based texture-tracking method to guide autonomous vehicles in agricultural fields where the crop rows are challenging to detect. Existing methods require sufficient visual difference between the crop and soil for segmentation, or explicit knowledge of the structure of the crop rows. This method works by extracting and tracking the direction and lateral offset of the dominant parallel texture in a simulated overhead view of the scene, and hence abstracts away crop-specific details such as colour, spacing and periodicity. The results demonstrate that the method is able to track crop rows across fields with extremely varied appearance during day and night. We also demonstrate that this method can autonomously guide a robot along the crop rows.
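To make the texture-tracking idea concrete, the sketch below estimates a dominant texture direction with a structure tensor and tracks lateral drift by cross-correlating 1-D intensity profiles between frames. It is a minimal illustration under assumed inputs (a grayscale bird's-eye-view image as a NumPy array), not the authors' implementation.

```python
# Minimal sketch: dominant parallel-texture direction + lateral offset
# in an overhead view. Assumes a grayscale NumPy image; not the paper's code.
import numpy as np
from scipy import ndimage


def dominant_texture_orientation(img):
    """Dominant texture direction in radians, measured from the image x-axis."""
    iy, ix = np.gradient(img.astype(float))
    jxx, jyy, jxy = (ix * ix).sum(), (iy * iy).sum(), (ix * iy).sum()
    # The structure tensor gives the dominant *gradient* direction;
    # the texture (crop rows) runs perpendicular to it.
    grad_theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return grad_theta + np.pi / 2.0


def lateral_offset(img, theta, prev_profile=None):
    """Track the lateral shift of the parallel texture between frames."""
    # Rotate so the texture aligns with the image columns (sign conventions
    # depend on the camera/axis setup and may need flipping in practice).
    aligned = ndimage.rotate(img.astype(float), 90.0 - np.degrees(theta),
                             reshape=False)
    profile = aligned.mean(axis=0)      # 1-D intensity profile across the rows
    profile = profile - profile.mean()
    if prev_profile is None:
        return 0.0, profile
    corr = np.correlate(profile, prev_profile, mode="full")
    shift = corr.argmax() - (len(prev_profile) - 1)   # lateral drift in pixels
    return float(shift), profile
```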
Abstract:
For robots operating in outdoor environments, a number of factors, including weather, time of day, rough terrain, high speeds, and hardware limitations, make vision-based simultaneous localization and mapping with current techniques infeasible, largely because of image blur and/or underexposure, especially on smaller platforms and low-cost hardware. In this paper, we present novel visual place-recognition and odometry techniques that address the challenges posed by low lighting, perceptual change, and low-cost cameras. Our primary contribution is a novel two-step algorithm that combines fast low-resolution whole-image matching with a higher-resolution patch-verification step, as well as image saliency methods that simultaneously improve performance and decrease computing time. The algorithms are demonstrated using consumer cameras mounted on a small vehicle in a mixed urban and vegetated environment and a car traversing highway and suburban streets, at different times of day and night and in various weather conditions. The algorithms achieve reliable mapping over the course of a day, both when incrementally incorporating new visual scenes from different times of day into an existing map, and when using a static map comprising visual scenes captured at only one point in time. Using the two-step place-recognition process, we demonstrate for the first time single-image, error-free place recognition at recall rates above 50% across a day-night dataset without prior training or utilization of image sequences. This place-recognition performance enables topologically correct mapping across day-night cycles.
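The two-step idea can be sketched as follows: a cheap sum-of-absolute-differences comparison on heavily downsampled, normalised frames produces a shortlist, and a higher-resolution patch-verification pass re-ranks it. This is a hedged sketch under assumed inputs (8-bit grayscale frames in a simple list-based database), not the paper's exact pipeline; the image-saliency step is omitted.

```python
# Sketch of coarse whole-image matching followed by patch verification.
import cv2
import numpy as np

LOW_RES = (48, 24)   # (width, height) used for the coarse matching step
TOP_K = 5            # number of candidates passed to the verification step
PATCH = 40           # side length (pixels) of the verification patches


def _normalise(img):
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-6)


def coarse_scores(query, database):
    """Sum-of-absolute-differences over low-resolution, normalised frames."""
    q = _normalise(cv2.resize(query, LOW_RES, interpolation=cv2.INTER_AREA))
    scores = []
    for ref in database:
        r = _normalise(cv2.resize(ref, LOW_RES, interpolation=cv2.INTER_AREA))
        scores.append(np.abs(q - r).mean())
    return np.asarray(scores)                    # lower = more similar


def verify(query, candidate):
    """Re-score a shortlisted frame by matching full-resolution patches."""
    h, w = query.shape
    ys, xs = np.mgrid[PATCH:h - PATCH:3j, PATCH:w - PATCH:3j].astype(int)
    correlations = []
    for y, x in zip(ys.ravel(), xs.ravel()):
        patch = np.ascontiguousarray(query[y:y + PATCH, x:x + PATCH])
        res = cv2.matchTemplate(candidate, patch, cv2.TM_CCOEFF_NORMED)
        correlations.append(res.max())
    return float(np.mean(correlations))          # higher = better verified


def recognise_place(query, database):
    """Return the index of the best-matching database frame."""
    shortlist = np.argsort(coarse_scores(query, database))[:TOP_K]
    return int(max(shortlist, key=lambda i: verify(query, database[i])))
```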
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology, since the work is arduous, dangerous, and often repetitive. This paper presents a broad overview of the issues involved in the development of a physically large and complex field robotic system—a 3500-tonne mining machine (dragline). Draglines are “walking cranes” used in open-pit coal mining to remove the material covering a coal seam. The critical issues of robust load position sensing, modeling of the dynamics of the electrical drive system and the swinging load, control strategies, the operator interface, and automation system architecture are addressed. An important aspect of this system is that it must work cooperatively with a human operator, seamlessly passing control back and forth in order to achieve the main aim—increased productivity.
Abstract:
Background: Maintenance of communication is important for people with dementia living in long-term care. The purpose of this study was to assess the feasibility of using "Giraff", a telepresence robot, to enhance engagement between family and a person with dementia living in long-term care. Methods: A mixed-methods approach involving semi-structured interviews, call records and video observational data was used. Five people with dementia and a family member each participated in a discussion via the Giraff robot a minimum of six times over a six-week period. A feasibility framework was used to assess feasibility and included video analysis of emotional response and engagement. Results: Twenty-six calls with an average duration of 23 minutes took place. Residents showed a general state of positive emotions across the calls, with a high level of engagement and a minimal level of negative emotions. Participants enjoyed the experience, and families reported that the Giraff robot offered the opportunity to reduce social isolation. A number of software and hardware challenges were encountered. Conclusions: Participants perceived this novel approach to engaging families and people with dementia as a feasible option. Participants were observed, and also reported, to enjoy the experience. The technical challenges identified have been addressed in a newer version of the robot. Future research should include a feasibility trial of longer duration, with a larger sample and a cost analysis.
Abstract:
Locomotion and autonomy in humanoid robots are of utmost importance in integrating them into social and community-service roles. However, the limited range and speed of these robots severely limit their ability to be deployed in situations where a fast response is necessary. While the ability for a humanoid to drive a vehicle would aid in increasing its overall mobility, the ability to mount and dismount a vehicle designed for human occupants is a non-trivial problem. To address this issue, this paper presents an innovative approach to enabling a humanoid robot to mount and dismount a vehicle by proposing a simple mounting bracket involving no moving parts. In conjunction with a purpose-built robotic vehicle, the mounting bracket successfully allowed a humanoid Nao robot to mount, dismount and drive the vehicle.
Abstract:
This paper presents a low-bandwidth multi-robot communication system designed to serve as a backup communication channel in the event a robot suffers a network device fault. While much research has been performed in the area of distributing network communication across multiple robots within a system, individual robots are still susceptible to hardware failure. In the past, such robots would simply be removed from service, and their tasks re-allocated to other members. However, there are times when a faulty robot might be crucial to a mission, or be able to contribute in a less communication intensive area. By allowing robots to encode and decode messages into unique sequences of DTMF symbols, called words, our system is able to facilitate continued low-bandwidth communication between robots without access to network communication. Our results have shown that the system is capable of permitting robots to negotiate task initiation and termination, and is flexible enough to permit a pair of robots to perform a simple turn taking task.
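One simple way to realise such an encoding, shown below, maps each message byte to two of the 16 standard DTMF keys (one per hexadecimal nibble). The mapping and function names are illustrative assumptions, not the authors' actual word scheme.

```python
# Illustrative only: encode a short inter-robot message as a DTMF symbol
# sequence ("word") and decode it back. Each byte becomes two of the 16
# standard DTMF keys (0-9, A-D, *, #), i.e. 4 bits per symbol.
DTMF_KEYS = "0123456789ABCD*#"
KEY_TO_NIBBLE = {k: i for i, k in enumerate(DTMF_KEYS)}


def encode(message: bytes) -> str:
    """Return the DTMF 'word' (symbol sequence) for a byte string."""
    symbols = []
    for byte in message:
        symbols.append(DTMF_KEYS[byte >> 4])     # high nibble
        symbols.append(DTMF_KEYS[byte & 0x0F])   # low nibble
    return "".join(symbols)


def decode(word: str) -> bytes:
    """Recover the byte string from a DTMF symbol sequence."""
    nibbles = [KEY_TO_NIBBLE[s] for s in word]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))


# Example: announce that task 3 is starting.
word = encode(b"START:3")
assert decode(word) == b"START:3"
```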
Abstract:
Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals, goal-directed navigation facilitates finding food, seeking shelter or migrating; similarly, robots perform goal-directed navigation to find a charging station, get out of the rain or guide a person to a destination. This similarity in tasks extends to the environment as well; increasingly, mobile robots are operating in the same underwater, ground and aerial environments that animals do. Yet despite these similarities, goal-directed navigation research in robotics and biology has proceeded largely in parallel, linked only by a small amount of interdisciplinary research spanning both areas. Most state-of-the-art robotic navigation systems employ a range of sensors, world representations and navigation algorithms that seem far removed from what we know of how animals navigate; these navigation systems are shaped by key principles of navigation in 'real-world' environments, including dealing with uncertainty in sensing, landmark observation and world modelling. By contrast, biomimetic animal navigation models produce plausible animal navigation behaviour in a range of laboratory experimental navigation paradigms, typically without addressing many of these robotic navigation principles. In this paper, we attempt to link robotics and biology by reviewing the current state of the art in conventional and biomimetic goal-directed navigation models, focusing on the key principles of goal-oriented robotic navigation and the extent to which these principles have been adapted by biomimetic navigation models, and why.
Abstract:
This thesis demonstrates that robots can learn about how the world changes, and can use this information to recognise where they are, even when the appearance of the environment has changed a great deal. The ability to localise in highly dynamic environments using vision only is a key tool for achieving long-term, autonomous navigation in unstructured outdoor environments. The proposed learning algorithms are designed to be unsupervised, and can be generated by the robot online in response to its observations of the world, without requiring information from a human operator or other external source.
Abstract:
The aim of this ethnographic study was to understand welding practices in shipyard environments with the purpose of designing an interactive welding robot that can help workers with their daily job. The robot is meant to be deployed for automatic welding on jack-up rig structures. The design of the robot turns out to be a challenging task due to several problematic working conditions on the shipyard, such as dust, irregular floor, high temperature, wind variations, elevated working platforms, narrow spaces, and circular welding paths requiring a robotic arm with more than 6 degrees of freedom. Additionally, the environment is very noisy and the workers – mostly foreigners – have a very basic level of English. These two issues need to be taken into account when designing the interactive user interface for the robot. Ideally, the communication flow between the two parties involved should be as frictionless as possible. The paper presents the results of our field observations and welders’ interviews, as well as our robot design recommendation for the next project stage.
Abstract:
Localization of technology is now widely applied to the preservation and revival of the culture of indigenous peoples around the world, most commonly through the translation into indigenous languages, which has been proven to increase the adoption of technology. However, this current form of localization excludes two demographic groups, which are key to the effectiveness of localization efforts in the African context: the younger generation (under the age of thirty) with an Anglo-American cultural view who have no need or interest in their indigenous culture; and the older generation (over the age of fifty) who are very knowledgeable about their indigenous culture, but have little or no knowledge on the use of a computer. This paper presents the design of a computer game engine that can be used to provide an interface for both technology and indigenous culture learning for both generations. Four indigenous Ugandan games are analyzed and identified for their attractiveness to both generations, to both rural and urban populations, and for their propensity to develop IT skills in older generations.
Abstract:
In this paper, the recent results of the space project IMPERA are presented. The goal of IMPERA is the development of a multirobot planning and plan execution architecture with a focus on a lunar sample collection scenario in an unknown environment. We describe the implementation and verification of different modules that are integrated into a distributed system architecture. The modules include a mission planning approach for a multirobot system and modules for task and skill execution within a lunar use-case scenario. The skills needed for the test scenario include cooperative exploration and mapping strategies for an unknown environment, the localization and classification of sample containers using a novel approach to semantic perception, and the skill of transporting sample containers to a collection point using a mobile manipulation robot. Additionally, we present our approach to a reliable communication framework that can deal with communication loss during the mission. The modules are tested in several experiments covering planning and plan execution, communication, coordinated exploration, perception, and object transportation. The overall system integration is tested in a mission-scenario experiment using three robots.
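As an illustration of the reliability problem such a communication framework has to solve, the sketch below shows a basic acknowledgement-and-retry send loop over a lossy link. The transport callables `send_raw` and `recv_ack` are placeholders, and the sketch is not the IMPERA framework itself.

```python
# Sketch of reliable delivery over a lossy link: resend until acknowledged.
import time
import uuid


def send_reliable(send_raw, recv_ack, payload, retries=5, timeout=2.0):
    """Send `payload` until an acknowledgement with a matching id arrives."""
    message = {"id": str(uuid.uuid4()), "payload": payload}
    for _ in range(retries):
        send_raw(message)                      # may be silently lost in transit
        deadline = time.monotonic() + timeout
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break                          # timed out: resend the message
            ack = recv_ack(timeout=remaining)  # returns None if nothing arrived
            if ack is not None and ack.get("ack") == message["id"]:
                return True                    # delivery confirmed by the peer
    return False                               # give up after `retries` attempts
```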
Abstract:
We propose the use of optical flow information as a method for detecting and describing changes in the environment, from the perspective of a mobile camera. We analyze the characteristics of the optical flow signal and demonstrate how robust flow vectors can be generated and used for the detection of depth discontinuities and appearance changes at key locations. To successfully achieve this task, a full discussion on camera positioning, distortion compensation, noise filtering, and parameter estimation is presented. We then extract statistical attributes from the flow signal to describe the location of the scene changes. We also employ clustering and the dominant shape of the flow vectors to increase descriptiveness. Once a database of nodes (where a node is a detected scene change) and their corresponding flow features is created, matching can be performed whenever nodes are encountered, such that topological localization can be achieved. We retrieve the most likely node according to the Mahalanobis and Chi-square distances between the current frame and the database. The results illustrate the applicability of the technique for detecting and describing scene changes in diverse lighting conditions, considering indoor and outdoor environments and different robot platforms.
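The node-retrieval step can be illustrated with a short sketch: given per-node means and covariances of the flow-derived descriptors, the most likely node is the one with the smallest Mahalanobis distance to the current frame's descriptor. The data layout (a list of dicts with 'id', 'mean' and 'cov') is an assumption for illustration, not the paper's implementation, and only the Mahalanobis case is shown, not the Chi-square distance.

```python
# Sketch: retrieve the most likely node by Mahalanobis distance between the
# current flow-feature descriptor and each node's stored statistics.
import numpy as np


def mahalanobis(x, mean, cov):
    """Mahalanobis distance of descriptor x from a node's (mean, cov)."""
    diff = x - mean
    # A small ridge term keeps the covariance invertible in this sketch.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return float(np.sqrt(diff @ cov_inv @ diff))


def most_likely_node(descriptor, nodes):
    """nodes: list of dicts with keys 'id', 'mean' (D,) and 'cov' (D, D)."""
    distances = [mahalanobis(descriptor, n["mean"], n["cov"]) for n in nodes]
    best = int(np.argmin(distances))
    return nodes[best]["id"], distances[best]
```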