935 results for robotics manipulators
Abstract:
This paper presents a new metric, which we call the lighting variance ratio, for quantifying descriptors in terms of their variance under illumination changes. In many applications it is desirable to have descriptors that are robust to changes in illumination, especially in outdoor environments. The lighting variance ratio is useful for comparing descriptors and determining whether a descriptor is lighting invariant enough for a given environment. The metric is analysed across a number of datasets, cameras and descriptors. The results show that the upright SIFT descriptor is typically the most lighting invariant descriptor.
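The abstract does not define the metric precisely, but a ratio of within-keypoint variance (across lighting conditions) to between-keypoint variance is one plausible reading. The sketch below, with the hypothetical helper `lighting_variance_ratio`, illustrates that idea; the paper's actual formula may differ.

```python
import numpy as np

def lighting_variance_ratio(descs):
    # descs: array of shape (n_points, n_conditions, dim) --
    # the same keypoints described under several illumination conditions.
    # Within-point variance: how much a descriptor drifts with lighting.
    within = descs.var(axis=1).mean()
    # Between-point variance: how distinct different keypoints remain.
    means = descs.mean(axis=1)
    between = means.var(axis=0).mean()
    # A small ratio suggests a lighting-invariant yet discriminative descriptor.
    return within / between
```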
Abstract:
This paper describes a lightweight, modular and energy efficient robotic vehicle platform designed for broadacre agriculture - the Small Robotic Farm Vehicle (SRFV). The current trend in farming is towards increasingly large machines that optimise the individual farmer’s productivity. Instead, the SRFV is designed to promote the sustainable intensification of agriculture by allowing farmers to concentrate on more important farm management tasks. The robot has been designed with a user-centred approach which focuses the outcomes of the project on the needs of the key project stakeholders. In this way user and environmental considerations for broadacre farming have informed the vehicle platform configuration, locomotion, power requirements and chassis construction. The resultant design is a lightweight, modular four-wheeled differential steer vehicle incorporating custom twin in-hub electric drives with emergency brakes. The vehicle is designed for a balance between low soil impact, stability, energy efficiency and traction. The paper includes modelling of the robot’s dynamics during an emergency brake in order to determine the potential for tipping. The vehicle is powered by a selection of energy sources including rechargeable lithium batteries and petrol-electric generators.
Abstract:
This paper presents a trajectory-tracking control strategy for a class of mechanical systems in Hamiltonian form. The class is characterised by a symplectic interconnection arising from the use of generalised coordinates and full actuation. The tracking error dynamics are modelled as a port-Hamiltonian System (PHS). The control action is designed to take the error dynamics into a desired closed-loop PHS characterised by a constant mass matrix and a potential energy with a minimum at the origin. A transformation of the momentum and a feedback control are exploited to obtain a constant generalised mass matrix in closed loop. The stability of the closed-loop system is shown using the closed-loop Hamiltonian as a Lyapunov function. The paper also considers the addition of integral action to design a robust controller that ensures tracking in spite of disturbances. As a case study, the proposed control design methodology is applied to a fully actuated robotic manipulator.
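The structure referred to above can be made concrete. A fully actuated mechanical system with generalised coordinates q and momenta p has the standard symplectic port-Hamiltonian form (this is the textbook starting point for such designs; the paper's specific momentum transformation is not given in the abstract):

```latex
\begin{bmatrix} \dot{q} \\ \dot{p} \end{bmatrix}
=
\begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}
\begin{bmatrix} \nabla_q H \\ \nabla_p H \end{bmatrix}
+
\begin{bmatrix} 0 \\ I \end{bmatrix} u,
\qquad
H(q,p) = \tfrac{1}{2}\, p^{\top} M(q)^{-1} p + V(q).
```

The control law is then chosen so that the error state satisfies equations of the same form with M(q) replaced by a constant M_d and V by a V_d with a minimum at the origin, so the closed-loop Hamiltonian serves directly as a Lyapunov function.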
Abstract:
The autonomous capabilities of collaborative unmanned aircraft systems are growing rapidly. Without appropriate transparency, the effectiveness of the future multiple Unmanned Aerial Vehicle (UAV) management paradigm will be significantly limited by the human agent's cognitive abilities, with the operator's Cognitive Workload (CW) and Situation Awareness (SA) becoming disproportionate. This poses a challenge in evaluating the impact of robot autonomous-capability feedback, which allows the human agent, in a supervisory role, greater transparency into the robot's autonomous status. This paper presents the motivation, aim, related works, experiment theory, methodology, results and discussion, and the future work succeeding this preliminary study. The results illustrate that, with greater transparency of a UAV's autonomous capability, an overall improvement in the subjects' cognitive abilities was evident: with 95% confidence, the test subjects' mean CW showed a statistically significant reduction, while their mean SA showed a significant increase.
Abstract:
This paper presents a framework for synchronising multiple triggered sensors with respect to a local clock using standard computing hardware. Providing sensor measurements with accurate and meaningful timestamps is important for many sensor fusion, state estimation and control applications. Accurately synchronising sensor timestamps can be performed with specialised hardware, however, performing sensor synchronisation using standard computing hardware and non-real-time operating systems is difficult due to inaccurate and temperature sensitive clocks, variable communication delays and operating system scheduling delays. Results show the ability of our framework to estimate time offsets to sub-millisecond accuracy. We also demonstrate how synchronising timestamps with our framework results in a tenfold reduction in image stabilisation error for a vehicle driving on rough terrain. The source code will be released as an open source tool for time synchronisation in ROS.
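The estimator used in the paper is not specified in the abstract, but a common trick for this setting is to exploit the fact that scheduling and communication delays only ever make the local receive stamp later, never earlier. A minimal sketch (the function name and data layout are assumptions):

```python
def estimate_offset(pairs):
    """Estimate the sensor-to-local clock offset from stamped trigger events.

    pairs: list of (t_sensor, t_local) timestamps for the same events.
    OS scheduling and transport only delay the local receive time, so the
    smallest observed difference is the least-delayed, best offset estimate.
    """
    return min(t_local - t_sensor for t_sensor, t_local in pairs)
```

A real implementation would additionally track clock drift over time (e.g. with a filter), since cheap oscillators are temperature sensitive.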
Abstract:
With the increasing need to adapt to new environments, data-driven approaches have been developed to estimate terrain traversability by learning the rover's response to the terrain based on experience. Multiple learning inputs are often used to adequately describe the various aspects of terrain traversability. In a complex learning framework, it can be difficult to identify the relevance of each learning input to the resulting estimate. This paper addresses the suitability of each learning input by systematically analyzing the impact of each input on the estimate. Sensitivity Analysis (SA) methods provide a means to measure the contribution of each learning input to the estimate variability. Using a variance-based SA method, we characterize how the prediction changes as one or more of the inputs change, and also quantify the prediction uncertainty attributed to each of the inputs in a framework of dependent inputs. We propose an approach built on Analysis of Variance (ANOVA) decomposition to examine the prediction made in a near-to-far learning framework based on multi-task GP regression. We demonstrate the approach by analyzing the impact of driving speed and terrain geometry on the prediction of the rover's attitude and chassis configuration in a Mars-analogue terrain using our prototype rover Mawson.
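A variance-based first-order sensitivity index can be sketched with plain Monte-Carlo binning: Var(E[Y|X_i]) / Var(Y) is the fraction of output variance explained by input X_i alone. The snippet below assumes independent inputs for simplicity, whereas the paper explicitly handles dependent inputs; all names are illustrative.

```python
import numpy as np

def first_order_index(x, y, bins=10):
    # Fraction of Var(Y) explained by one input: Var(E[Y|X]) / Var(Y).
    # Bin x by quantiles so every bin holds roughly the same number of samples.
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    # Conditional mean of y within each bin approximates E[Y|X in bin].
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()
```

When y is a deterministic function of x the index approaches 1; when y is independent of x it approaches 0.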
Abstract:
This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced in order to work well in outdoor unstructured environments by using entropy metrics, weighted histograms and proper thresholding of quality metrics. Thus our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real-time even when odometry is not available. Our experimental results have demonstrated a successful mapping of the largest area ever mapped to date using only a single laser scanner. We also demonstrate our ability to solve the lost robot problem by localizing a robot to a previously built map without any prior initialization.
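The core of histogram-based matching can be illustrated in a few lines: circularly cross-correlate two angle histograms and take the best shift as the relative rotation, with no initial alignment guess. This toy sketch omits the paper's entropy metrics, weighting and quality thresholding:

```python
import numpy as np

def best_rotation(hist_a, hist_b):
    # Circular cross-correlation of two angle histograms; the argmax shift
    # is the relative rotation between the two scans, found without any
    # prior guess as to their alignment.
    n = len(hist_a)
    scores = [np.dot(hist_a, np.roll(hist_b, s)) for s in range(n)]
    return int(np.argmax(scores))  # rotation expressed in histogram bins
```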
Abstract:
It is commonplace to use digital video cameras in robotic applications. These cameras have built-in exposure control, but they have no knowledge of the environment, the lens being used, or the important areas of the image, and so do not always produce optimal image exposure. Therefore, it is desirable and often necessary to control the exposure from outside the camera. In this paper we present a scheme for exposure control which enables the user application to determine the area of interest. The proposed scheme introduces an intermediate transparent layer between the camera and the user application which combines information from both for optimal exposure. We present results from indoor and outdoor scenarios using directional and fish-eye lenses, showing the performance and advantages of this framework.
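As a rough illustration of off-camera exposure control, a proportional update driving the mean intensity of the user-selected region towards a target might look as follows (the control law, names and parameters are assumptions, not the paper's):

```python
def exposure_update(current_exposure, roi_pixels, target=0.5, gain=0.8):
    """One step of a simple proportional controller on the region of interest.

    roi_pixels: list of normalised intensities in [0, 1] from the area the
    user application has marked as important.
    """
    mean = sum(roi_pixels) / len(roi_pixels)
    # Scale exposure up when the region is too dark, down when too bright.
    return current_exposure * (1.0 + gain * (target - mean))
```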
Abstract:
Changing environments pose a serious problem to current robotic systems aiming at long-term operation under varying seasons or local weather conditions. This paper builds on our previous work, in which we proposed learning to predict the changes in an environment. Our key insight is that the occurring scene changes are in part systematic, repeatable and therefore predictable. The goal of our work is to support existing approaches to place recognition by learning how the visual appearance of an environment changes over time and by using this learned knowledge to predict its appearance under different environmental conditions. We describe the general idea of appearance change prediction (ACP) and investigate properties of our novel implementation based on vocabularies of superpixels (SP-ACP). Our previous work showed that the proposed approach significantly improves the performance of SeqSLAM and BRIEF-Gist for place recognition on a subset of the Nordland dataset under extremely different environmental conditions in summer and winter. This paper deepens the understanding of the proposed SP-ACP system and evaluates the influence of its parameters. We present the results of a large-scale experiment on the complete 10 h Nordland dataset and appearance change predictions between different combinations of seasons.
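The prediction step of SP-ACP can be caricatured as a per-word translation: each visual word observed under one condition is replaced by its learned counterpart under the other. A toy sketch, with all names illustrative:

```python
def predict_appearance(observed_words, translation):
    # translation: learned mapping from visual words in one condition
    # (e.g. summer) to their typical counterparts in another (e.g. winter).
    # Words without a learned counterpart are kept unchanged.
    return [translation.get(w, w) for w in observed_words]
```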
Abstract:
In 2013, ten teams from German universities and research institutes participated in a national robot competition called SpaceBot Cup organized by the DLR Space Administration. The robots had one hour to autonomously explore and map a challenging Mars-like environment, find, transport, and manipulate two objects, and navigate back to the landing site. Localization without GPS in an unstructured environment was a major issue as was mobile manipulation and very restricted communication. This paper describes our system of two rovers operating on the ground plus a quadrotor UAV simulating an observing orbiting satellite. We relied on ROS (robot operating system) as the software infrastructure and describe the main ROS components utilized in performing the tasks. Despite (or because of) faults, communication loss and breakdowns, it was a valuable experience with many lessons learned.
Abstract:
We show that the parallax motion resulting from non-nodal rotation in panorama capture can be exploited for light field construction from commodity hardware. Automated panoramic image capture typically seeks to rotate a camera exactly about its nodal point, for which no parallax motion is observed. This can be difficult or impossible to achieve due to limitations of the mounting or optical systems, and consequently a wide range of captured panoramas suffer from parallax between images. We show that by capturing such imagery over a regular grid of camera poses, then appropriately transforming the captured imagery to a common parameterisation, a light field can be constructed. The resulting four-dimensional image encodes scene geometry as well as texture, allowing an increasingly rich range of light field processing techniques to be applied. Employing an Ocular Robotics REV25 camera pointing system, we demonstrate light field capture, refocusing and low-light image enhancement.
Reactive reaching and grasping on a humanoid: Towards closing the action-perception loop on the iCub
Abstract:
We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles - other objects detected in the visual stream - while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore we show that this system can be used both in autonomous and tele-operation scenarios.
Abstract:
We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks developed in our lab (for robot vision, collision avoidance and machine learning) allow, when combined, for safe interaction with the environment. This works even with noisy control signals such as the operator's hand acceleration and their electromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as reaching and grasping of objects) on the 7 DOF arm.
Abstract:
In this paper we present, for the first time, a complete symbolic navigation system that performs goal-directed exploration of unfamiliar environments on a physical robot. We introduce a novel construct called the abstract map to link provided symbolic spatial information with observed symbolic information and actual places in the real world. Symbolic information is observed using a text recognition system that has been developed specifically for the application of reading door labels. In the study described in this paper, the robot was provided with a floor plan and a destination. The destination was specified by a room number, used both in the floor plan and on the door to the room. The robot autonomously navigated to the destination using its text recognition, abstract map, mapping, and path planning systems. The robot used the symbolic navigation system to determine an efficient path to the destination, and reached the goal in two different real-world environments. Simulation results show that the system reduces the time required to navigate to a goal when compared to random exploration.
Abstract:
This paper reports work involved with the automation of a Hot Metal Carrier — a 20 tonne forklift-type vehicle used to move molten metal in aluminium smelters. To achieve efficient vehicle operation, issues of autonomous navigation and materials handling must be addressed. We present our complete system and experiments demonstrating reliable operation. One of the most significant experiments was five hours of continuous operation, during which the vehicle travelled over 8 km and conducted 60 load handling operations. We also describe an experiment where the vehicle and autonomous operation were supervised from the other side of the world via a satellite phone network.