840 results for Robotic Grasping
Abstract:
Persistent monitoring of the ocean is not optimally accomplished by repeatedly executing a fixed path in a fixed location. The ocean is dynamic, and so should be the paths executed to monitor and observe it. An open question merging autonomy and optimal sampling is how and when to alter a path or decision while still achieving the desired science objectives. Additionally, many marine robotic deployments can last multiple weeks to months, making it very difficult for individuals to continuously monitor and retask them as needed. This problem becomes increasingly complex when multiple platforms are operating simultaneously. There is a need for monitoring and adaptation of the robotic fleet via teams of scientists working in shifts; crowds are ideal for this task. In this paper, we present a novel application of crowd-sourcing to extend the autonomy of persistent-monitoring vehicles to enable nonrepetitious sampling over long periods of time. We present a framework that enables the control of a marine robot by anybody with an internet-enabled device. Voters are provided with the current vehicle location, gathered science data, and predicted ocean features through the associated decision support system. Results are included from a simulated implementation of our system on a Wave Glider operating in Monterey Bay, with the science objective of maximizing the sum of observed nitrate values collected.
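As a loose sketch of how such crowd-sourced retasking could work (the function and tie-breaking heuristic below are illustrative assumptions, not the paper's actual decision support system), votes from internet users can be tallied to choose the vehicle's next waypoint, with ties broken by the ocean model's nitrate prediction:

```python
# Hypothetical crowd-vote aggregation for waypoint selection.
from collections import Counter

def select_next_waypoint(votes, predicted_nitrate):
    """Pick the most-voted candidate waypoint; break ties using the
    ocean model's predicted nitrate value at each waypoint."""
    tally = Counter(votes)                      # votes: list of waypoint ids
    best_count = max(tally.values())
    tied = [w for w, c in tally.items() if c == best_count]
    return max(tied, key=lambda w: predicted_nitrate[w])

votes = ["A", "B", "B", "C", "A", "B"]
predicted_nitrate = {"A": 12.1, "B": 14.7, "C": 9.3}
print(select_next_waypoint(votes, predicted_nitrate))   # -> "B"
```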
Abstract:
This paper proposes an online learning control system that uses the strategy of Model Predictive Control (MPC) in a model-based, locally weighted learning framework. The new approach, named Locally Weighted Learning Model Predictive Control (LWL-MPC), is proposed as a solution for learning to control robotic systems with nonlinear and time-varying dynamics. This paper demonstrates the capability of LWL-MPC to perform online learning while controlling the joint trajectories of a low-cost, three-degree-of-freedom elastic joint robot. The learning performance is investigated both in an initial learning phase and when the system dynamics change due to a heavy object being added to the tool point. An experiment on the real elastic joint robot is presented, and LWL-MPC is shown to successfully learn to control the system with and without the object. The results highlight the capability of the learning control system to accommodate the lack of mechanical consistency and linearity in a low-cost robot arm.
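A minimal sketch of the LWL-MPC idea, assuming a locally weighted regression model of the transition dynamics inside a simple random-shooting MPC loop (kernel width, horizon, cost, and action sampling are all illustrative choices, not the paper's implementation):

```python
import numpy as np

def lwr_predict(X, Y, query, h=0.5):
    """Locally weighted least-squares prediction of the next state at
    `query` from stored (state-action -> next-state) transitions."""
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * h ** 2))
    W = np.diag(w)
    beta = np.linalg.pinv(X.T @ W @ X) @ X.T @ W @ Y
    return query @ beta

def mpc_action(X, Y, state, goal, horizon=5, n_samples=64):
    """Receding-horizon control: sample action sequences, roll them out
    through the learned model, and apply the first action of the best."""
    best_cost, best_a0 = np.inf, 0.0
    for _ in range(n_samples):
        actions = np.random.uniform(-1, 1, horizon)
        s, cost = state, 0.0
        for a in actions:
            s = lwr_predict(X, Y, np.append(s, a))   # predicted next state
            cost += np.sum((s - goal) ** 2)
        if cost < best_cost:
            best_cost, best_a0 = cost, actions[0]
    return best_a0
```

Because the model is re-fitted locally at every query, newly observed transitions (for example, after a heavy object is attached) immediately reshape the predictions; no global retraining step is needed.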
Abstract:
Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach, motivated by the need for faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in time that is independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbours (FLANN) [Muja and Lowe, 2009].
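A minimal sketch of the constant-time lookup this enables on an organized depth image: a pixel's candidate neighbours are simply its adjacent pixels, so no tree search over the whole cloud is required (the discontinuity threshold below is an illustrative assumption):

```python
import numpy as np

def neighbours(depth, u, v, jump=0.05):
    """Return 8-connected neighbours of pixel (u, v) that are not
    separated from it by a depth discontinuity (> `jump` metres)."""
    h, w = depth.shape
    out = []
    for du in (-1, 0, 1):
        for dv in (-1, 0, 1):
            if du == dv == 0:
                continue
            uu, vv = u + du, v + dv
            if 0 <= uu < h and 0 <= vv < w:
                if abs(depth[uu, vv] - depth[u, v]) <= jump:
                    out.append((uu, vv))
    return out   # O(1) per query, independent of point cloud size
```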
Abstract:
This paper presents the design of μAV, a palm-sized open-source micro quadrotor constructed on a single Printed Circuit Board. The aim of the micro quadrotor is to provide a lightweight (approximately 86 g) and cheap robotic research platform that can be used for a range of robotic applications. One possible application could be a cheap test bed for robotic swarm research. The goal of this paper is to give an overview of the design and capabilities of the micro quadrotor. The micro quadrotor is equipped with a 9-degree-of-freedom Inertial Measurement Unit and a Gumstix Overo® Computer-On-Module, which can run the widely used Robot Operating System (ROS) for use with other research algorithms.
Abstract:
This paper presents a new multi-scale place recognition system inspired by the recent discovery of overlapping, multi-scale spatial maps stored in the rodent brain. By training a set of Support Vector Machines to recognize places at varying levels of spatial specificity, we are able to validate spatially specific place recognition hypotheses against broader place recognition hypotheses without sacrificing localization accuracy. We evaluate the system in a range of experiments using cameras mounted on a motorbike and on a human in two different environments. At 100% precision, the multi-scale approach results in a 56% average improvement in recall rate across both datasets. We analyse the results and then discuss future work that may lead to improvements in both robotic mapping and our understanding of sensory processing and encoding in the mammalian brain.
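A minimal sketch of the multi-scale validation idea, assuming one classifier per spatial scale and a known coarse-to-fine place hierarchy (the features, labels, and `parent_of` mapping are hypothetical; the paper's SVM setup may differ):

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_scales(X, labels_per_scale):
    """One classifier per spatial scale, ordered coarse -> fine."""
    return [LinearSVC().fit(X, y) for y in labels_per_scale]

def recognise(clfs, x, parent_of):
    """Return the finest place hypothesis consistent with the next-coarser
    one; otherwise fall back to the broader hypothesis."""
    preds = [clf.predict(x.reshape(1, -1))[0] for clf in clfs]
    for coarse, fine in zip(preds, preds[1:]):
        if parent_of[fine] != coarse:
            return coarse          # fine hypothesis failed validation
    return preds[-1]               # most spatially specific hypothesis
```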
Abstract:
In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch matching. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously been solved only by integrating over 320-frame-long image sequences. The system achieves 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
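A minimal sketch of the two-stage matching, assuming sum-of-absolute-differences whole-image scores, a z-score style cohort normalisation, and a central-patch verification step (resolutions, patch size, and the exact normalisation are illustrative assumptions):

```python
import numpy as np

def whole_image_scores(query, database):
    """Fast, intolerant stage: SAD between the low-res query and each
    low-res database image (smaller is better)."""
    return np.array([np.abs(query - img).sum() for img in database])

def cohort_normalise(scores):
    """Score each candidate relative to the cohort of all candidates."""
    return (scores - scores.mean()) / (scores.std() + 1e-9)

def best_match(query, database, patch=8):
    scores = cohort_normalise(whole_image_scores(query, database))
    i = int(np.argmin(scores))
    # Tolerant stage: verify with a local sub-image patch comparison.
    cy, cx = query.shape[0] // 2, query.shape[1] // 2
    q = query[cy - patch:cy + patch, cx - patch:cx + patch]
    d = database[i][cy - patch:cy + patch, cx - patch:cx + patch]
    return i, scores[i], np.abs(q - d).mean()
```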
Abstract:
An important aspect of robotic path planning is ensuring that the vehicle is in the best location to collect the data necessary for the problem at hand. Given that features of interest are dynamic and move with oceanic currents, vehicle speed is an important factor in any planning exercise to ensure vehicles are at the right place at the right time. Here, we examine different Gaussian process models to find a suitable predictive kinematic model that enables the speed of an underactuated, autonomous surface vehicle to be accurately predicted given a set of input environmental parameters.
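A minimal sketch of such a predictive kinematic model using an off-the-shelf Gaussian process regressor (the input features, kernel, and data are illustrative assumptions, not the models compared in the paper):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical rows of [wind speed, wave height, current speed] -> speed (m/s)
X = np.array([[3.0, 0.5, 0.2], [6.0, 1.2, 0.4], [9.0, 2.0, 0.1]])
y = np.array([0.6, 1.1, 1.4])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
mean, std = gp.predict(np.array([[5.0, 1.0, 0.3]]), return_std=True)
print(f"predicted speed: {mean[0]:.2f} +/- {std[0]:.2f} m/s")
```

The predictive variance is part of what makes a GP attractive here: a planner can discount waypoints whose arrival times depend on poorly characterised conditions.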
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
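A minimal sketch of the core coordinate transformation (sensor frame to body frame via the extrinsic calibration, then body frame to navigation frame via the vehicle pose); the angles and offsets are illustrative values only:

```python
import numpy as np

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def homogeneous(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

T_body_sensor = homogeneous(rot_z(0.02), np.array([1.2, 0.0, 0.8]))   # extrinsics
T_nav_body    = homogeneous(rot_z(1.57), np.array([10.0, 5.0, 0.0]))  # vehicle pose

p_sensor = np.array([4.0, 0.5, -0.2, 1.0])      # LIDAR return, homogeneous coords
p_nav = T_nav_body @ T_body_sensor @ p_sensor   # mapped into the navigation frame
print(p_nav[:3])
```

The spatial error model the abstract describes would follow from propagating uncertainty in each element of this chain (calibration angles, pose estimate, timing) through the same composition.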
Abstract:
The vast majority of current robot mapping and navigation systems require specific well-characterized sensors that may require human-supervised calibration and are applicable only in one type of environment. Furthermore, if a sensor degrades in performance, either through damage to itself or changes in environmental conditions, the effect on the mapping system is usually catastrophic. In contrast, the natural world presents robust, reasonably well-characterized solutions to these problems. Using simple movement behaviors and neural learning mechanisms, rats calibrate their sensors for mapping and navigation in an incredibly diverse range of environments and then go on to adapt to sensor damage and changes in the environment over the course of their lifetimes. In this paper, we introduce similar movement-based autonomous calibration techniques that calibrate place recognition and self-motion processes as well as methods for online multisensor weighting and fusion. We present calibration and mapping results from multiple robot platforms and multisensory configurations in an office building, university campus, and forest. With moderate assumptions and almost no prior knowledge of the robot, sensor suite, or environment, the methods enable the bio-inspired RatSLAM system to generate topologically correct maps in the majority of experiments.
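As a hedged illustration of the multisensor weighting-and-fusion step only (not the RatSLAM calibration procedure itself), one simple scheme weights each sensor by the inverse variance of its recent residuals, so a damaged or degraded sensor is automatically down-weighted:

```python
import numpy as np

def fuse(estimates, recent_errors):
    """Weighted fusion of per-sensor estimates, with weights set online
    from the inverse variance of each sensor's recent residuals."""
    w = np.array([1.0 / (np.var(e) + 1e-9) for e in recent_errors])
    w /= w.sum()
    return float(w @ estimates), w

estimates = np.array([2.1, 2.5, 4.0])          # e.g. heading change per sensor
errors = [np.array([0.1, -0.1, 0.05]),
          np.array([0.2, 0.1, -0.2]),
          np.array([1.5, -2.0, 1.0])]          # degraded sensor
fused, weights = fuse(estimates, errors)
print(fused, weights)                          # degraded sensor gets low weight
```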
Abstract:
This work is motivated by the desire to covertly track mobile targets, either animal or human, in previously unmapped outdoor natural environments using off-road robotic platforms with a non-negligible acoustic signature. The use of robots for stealthy surveillance is not new; many studies exist, but they consider only the navigation problem of maintaining visual covertness. However, robotic systems also have a significant acoustic footprint from the onboard sensors, motors, computers and cooling systems, and from the wheels interacting with the terrain during motion. All of these can jeopardise any visual covertness. In this work, we experimentally explore the concepts of opportunistically utilizing naturally occurring sounds within outdoor environments to mask the motion of a robot, and of being visually covert whilst maintaining constant observation of the target. Our experiments in a constrained outdoor built environment demonstrate the effectiveness of the concept by showing a reduced acoustic signature as perceived by a mobile target, allowing the robot to covertly navigate to opportunistic vantage points for observation.
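A minimal sketch of the opportunistic masking rule, assuming the robot can estimate both the ambient noise level at the target and its own acoustic signature as heard from the target (the dB comparison and margin are illustrative assumptions):

```python
def may_move(ambient_db_at_target, robot_db_at_target, margin_db=3.0):
    """True if naturally occurring sound would mask the robot's motion."""
    return ambient_db_at_target >= robot_db_at_target + margin_db

# A wind gust raises ambient noise at the target to 62 dB while the robot's
# motion would contribute 55 dB there: safe to move toward the vantage point.
print(may_move(62.0, 55.0))   # True
print(may_move(50.0, 55.0))   # False: wait for masking noise
```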
Abstract:
Real-time image analysis and classification onboard robotic marine vehicles, such as AUVs, is a key step in the realisation of adaptive mission planning for large-scale habitat mapping in previously unexplored environments. This paper describes a novel technique to train, process, and classify images collected onboard an AUV operated in relatively shallow waters with poor visibility and non-uniform lighting. The approach utilises Förstner feature detectors and Laws texture energy masks for image characterisation, and a bag-of-words approach for feature recognition. To improve classification performance, we propose a usefulness gain to learn the importance of each histogram component for each class. Experimental results illustrate the performance of the system in characterising a variety of marine habitats and its ability to run onboard an AUV's main processor, making it suitable for real-time mission planning.
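A minimal sketch of a per-class usefulness weighting on bag-of-words histograms (the gain formula below, relative word frequency per class, is an illustrative stand-in for the learned gain in the paper):

```python
import numpy as np

def usefulness_gain(hists, labels, n_classes):
    """gain[c, w]: how much more often visual word w occurs in class c
    than on average across all training histograms."""
    overall = hists.mean(axis=0) + 1e-9
    return np.array([hists[labels == c].mean(axis=0) / overall
                     for c in range(n_classes)])

def classify(hist, gain):
    """Assign the class with the largest weighted-histogram response."""
    return int(np.argmax(gain @ hist))

hists = np.array([[5, 1, 0], [4, 2, 1], [0, 1, 6], [1, 0, 5]], dtype=float)
labels = np.array([0, 0, 1, 1])
gain = usefulness_gain(hists, labels, n_classes=2)
print(classify(np.array([4.0, 1.0, 0.0]), gain))    # -> 0
```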
Abstract:
Light of Extinction presents a diverse series of views into the complex antics of a semi-autonomous gaggle of robotic actants. Audiences initially enter into the 'back end' of the experience to be rudely confronted with the raw, messy operations of a horde of object-manipulating robotic forms. Seen through viewing apertures, these 'things' deny any opportunity to grasp their imagined order. Audiences then flow on into the 'front end' of the work where now, seen through another aperture, the very same forms seemingly coordinate a stunning deep-field choreography, floating lusciously within inky landscapes of media, noise and embodied sound. As one series of conceptions slips into extinction, so others flow on in. The idea of the 'extinction of human experience' expresses a projected fear for that which will disappear when biodiverse worlds have descended into an era of permanent darkness. 'Light of Extinction' re-positions this anthropomorphic lament in order to suggest a more rounded acknowledgement of what might still remain, suggesting the previously unacknowledged power and place of autonomous, synthetic creation. Momentary disbelief gives way to a relieving celebration of the imagined birth of 'things', without need for staples such as conventional light or the harmonious lullabies of long-extinguished sounds.
Abstract:
This paper proposes a method for designing set-point regulation controllers for a class of underactuated mechanical systems in Port-Hamiltonian System (PHS) form. A new set of potential shape variables in closed loop is proposed, which can replace the set of open-loop shape variables, i.e., the configuration variables that appear in the kinetic energy. With this choice, the closed-loop potential energy contains free functions of the new variables. By expressing the regulation objective in terms of these new potential shape variables, the desired equilibrium can be assigned, and there is freedom to reshape the potential energy to achieve performance whilst maintaining the PHS form in closed loop. This complements contemporary results in the literature, which preserve the open-loop shape variables. As a case study, we consider a robotic manipulator mounted on a flexible base and compensate for the motion of the base while positioning the end effector with respect to the ground reference. We compare the proposed control strategy with special cases that correspond to other energy-shaping strategies previously proposed in the literature.
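As a hedged sketch of the structure involved, in standard port-Hamiltonian and energy-shaping notation (this shows the generic matching objective, not the paper's new shape variables or its specific derivation):

```latex
% Open-loop PHS, with the Hamiltonian split into kinetic and potential energy:
\dot{x} = \bigl[J(x) - R(x)\bigr]\,\nabla H(x) + g(x)\,u,
\qquad
H(q,p) = \tfrac{1}{2}\,p^{\top} M^{-1}(q)\,p + V(q).

% Energy shaping seeks a feedback u = \beta(x) such that the closed loop is
% again a PHS with a desired Hamiltonian H_d, whose reshaped potential V_d
% has a strict minimum at the target configuration q^{*}:
\dot{x} = \bigl[J_d(x) - R_d(x)\bigr]\,\nabla H_d(x),
\qquad
q^{*} = \arg\min_{q} V_d(q).
```

The paper's contribution, in these terms, is the freedom to construct V_d from a new set of potential shape variables rather than from the open-loop configuration variables appearing in the kinetic energy.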
Abstract:
A robust visual tracking system requires an object appearance model that is able to handle occlusion, pose, and illumination variations in the video stream. This can be difficult to accomplish when the model is trained using only a single image. In this paper, we first propose a tracking approach based on affine subspaces (constructed from several images), which are able to accommodate the above-mentioned variations. We use affine subspaces not only to represent the object, but also the candidate areas that the object may occupy. We furthermore propose a novel approach to measuring affine subspace-to-subspace distance via the non-Euclidean geometry of Grassmann manifolds. The tracking problem is then treated as an inference task in a Markov chain Monte Carlo framework via particle filtering. Quantitative evaluation on challenging video sequences indicates that the proposed approach obtains considerably better performance than several recent state-of-the-art methods such as Tracking-Learning-Detection and MILtrack.
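A minimal sketch of the geometric core, assuming each image set is reduced to an orthonormal basis and compared via principal angles (the projection distance used here is one common Grassmann metric; the paper's exact construction may differ):

```python
import numpy as np

def subspace_basis(images, dim=3):
    """Orthonormal basis spanning the vectorised image set; subtracting
    the mean captures the linear part of an affine subspace."""
    A = np.stack([im.ravel() for im in images], axis=1).astype(float)
    A -= A.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :dim]

def grassmann_distance(U1, U2):
    """Projection distance from the principal angles between subspaces."""
    s = np.clip(np.linalg.svd(U1.T @ U2, compute_uv=False), -1.0, 1.0)
    theta = np.arccos(s)              # principal angles
    return float(np.sqrt(np.sum(np.sin(theta) ** 2)))
```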