840 results for Robotic grasping


Relevance:

100.00%

Publisher:

Abstract:

Neal, M. J., Boyce, D., Rowland, J. J., Lee, M. H., and Olivier, P. L. Robotic grasping by showing: an experimental comparison of two novel algorithms. In Proceedings of IFAC SICICA '97, pages 345-350, Annecy, France, 1997.

Relevance:

70.00%

Publisher:

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing multiple visual cues for manipulator tracking, namely appearance- and feature-based, shape-based, and silhouette-based cues. A similar framework is developed for in-hand object pose estimation, fusing the above visual cues together with kinesthetic cues such as force-torque and tactile measurements. The cues are extracted from multiple sensor modalities and fused in a variety of Kalman filters.
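The abstract does not give the filter equations; as a minimal sketch of the fusion idea, the following assumes two visual cues that both observe a toy 2D manipulator position, with made-up noise levels, and fuses them in a single stacked Kalman update.

```python
import numpy as np

# Minimal sketch: fuse two visual cues (e.g., a feature-based and a
# silhouette-based position estimate) in one Kalman filter update.
# The models and noise levels below are illustrative, not from the thesis.

x = np.zeros(2)          # state: 2D manipulator position (toy example)
P = np.eye(2) * 1.0      # state covariance

# Both cues observe the full state directly; stack their observation models.
H = np.vstack([np.eye(2), np.eye(2)])     # 4x2 stacked observation matrix
R = np.diag([0.05, 0.05, 0.20, 0.20])     # cue 1 is trusted more than cue 2

z = np.array([0.42, 0.17, 0.48, 0.12])    # stacked measurements from both cues

# Standard Kalman update with the stacked measurement.
y = z - H @ x                             # innovation
S = H @ P @ H.T + R                       # innovation covariance
K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
x = x + K @ y
P = (np.eye(2) - K @ H) @ P

print("fused estimate:", x)
```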

A hybrid estimator is developed to estimate both continuous states (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple-model estimator is used to compute and maintain the probability of each contact mode. The thesis also develops an estimation framework for model parameters associated with object grasping: dual and joint state-parameter estimation are explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center-of-mass estimation.
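A static multiple-model estimator of the kind described maintains a discrete belief over contact modes via Bayes' rule. The sketch below is illustrative only: the mode names and measurement likelihoods are invented, not taken from the thesis.

```python
import numpy as np

# Illustrative static multiple-model update: maintain a probability for each
# contact-mode hypothesis from measurement likelihoods.

modes = ["no-contact", "fingertip-on-face-A", "fingertip-on-face-B"]
p = np.array([1/3, 1/3, 1/3])            # prior mode probabilities

def update_mode_probs(p, likelihoods):
    """Bayes update: posterior is proportional to prior times the likelihood
    of the measurement under each contact-mode hypothesis."""
    post = p * likelihoods
    return post / post.sum()

# Likelihood of the latest force/tactile measurement under each mode (made up).
likelihoods = np.array([0.02, 0.60, 0.25])
p = update_mode_probs(p, likelihoods)
print(dict(zip(modes, np.round(p, 3))))
```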

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. The two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects, from the available actions for interacting with an object, the one expected to gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework maintains this belief over time, and kinesthetic measurements such as contact and tactile measurements update the state belief after every interactive action. Simulation and experimental results demonstrate next best touch for object localization, specifically of a door handle on a door. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated with the same action selection technique, choosing the touching action that best both localizes the object and estimates the parameters. Simulation results are presented for localizing a screwdriver and determining one of its parameters.
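In information-theoretic terms, the information gain metric described here is the expected entropy reduction of the belief. A minimal sketch under assumed conditions: a discrete belief over three pose hypotheses, binary contact/no-contact observations, and an invented sensor model.

```python
import numpy as np

def entropy(b):
    b = b[b > 0]
    return -(b * np.log(b)).sum()

# Hypothetical sensor model: p(contact | hypothesis, action).
# Rows: candidate touch actions; columns: object-pose hypotheses.
p_contact = np.array([
    [0.9, 0.1, 0.1],
    [0.1, 0.9, 0.1],
    [0.5, 0.5, 0.5],   # uninformative action
])

belief = np.array([1/3, 1/3, 1/3])

def expected_info_gain(belief, p_c):
    """Expected entropy reduction for one touch action, marginalizing over
    the binary contact/no-contact outcome."""
    gain = entropy(belief)
    for outcome_prob in (p_c, 1.0 - p_c):
        pz = (belief * outcome_prob).sum()        # marginal prob of outcome
        if pz > 0:
            post = belief * outcome_prob / pz     # posterior belief
            gain -= pz * entropy(post)
    return gain

gains = [expected_info_gain(belief, p_contact[a]) for a in range(3)]
best = int(np.argmax(gains))
print("next best touch: action", best, "gains:", np.round(gains, 3))
```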

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation, and the best touching action is selected to discriminate among the possible model classes. Simulation results are presented to validate the theory.

Relevance:

30.00%

Publisher:

Abstract:

We propose a system with a tight integration between the computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles - other objects detected in the visual stream - while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used in both autonomous and tele-operation scenarios.
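The abstract does not spell out the reaching controller; one common way to realize reach-while-avoid behavior that adapts on the fly is a potential field, sketched below with illustrative gains and a single obstacle (not the paper's actual method).

```python
import numpy as np

# Illustrative potential-field reaching step: attract the end effector to the
# target, repel it from detected obstacles, and recompute every cycle so a
# moved obstacle deflects the trajectory. Gains and geometry are made up.

def reach_step(ee, target, obstacles, k_att=1.0, k_rep=0.05, influence=0.3):
    v = k_att * (target - ee)                      # attraction to the target
    for obs in obstacles:
        d = ee - obs
        dist = np.linalg.norm(d)
        if 0 < dist < influence:                   # repel only inside the zone
            v += k_rep * (1/dist - 1/influence) * d / dist**3
    return ee + 0.05 * v                           # small step along the field

ee = np.array([0.0, 0.0, 0.0])
target = np.array([0.5, 0.2, 0.1])
obstacles = [np.array([0.25, 0.1, 0.05])]
for _ in range(100):
    ee = reach_step(ee, target, obstacles)
print("end effector near target:", np.round(ee, 3))
```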

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses the problem of developing automatic grasping capabilities for robotic hands. Using 2-jointed and 4-jointed models of the hand, we establish the geometric conditions necessary for achieving form closure grasps of cylindrical objects. We then define and show how to construct the grasping pre-image for quasi-static (friction dominated) and zero-G (inertia dominated) motions, for sensorless and sensor-driven grasps, with and without arm motions. The approach does not rely on detailed modeling, and it is computationally inexpensive, reliable, and easy to implement. Example behaviors were successfully implemented on the Salisbury hand and on a planar 2-fingered, 4 degree-of-freedom hand.
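The geometric conditions themselves are in the thesis; as general background, first-order form closure can be tested numerically: a planar grasp is form closed iff the contact wrenches span the wrench space and some strictly positive combination of them sums to zero. The sketch below applies this standard test (not the thesis's construction) to an illustrative four-contact grasp of a square.

```python
import numpy as np
from scipy.optimize import linprog

def wrench(p, n):
    """Planar contact wrench (moment, fx, fy) for a frictionless contact
    at point p with inward unit normal n."""
    return np.array([p[0] * n[1] - p[1] * n[0], n[0], n[1]])

def form_closure(F):
    """F: 3 x j matrix whose columns are planar contact wrenches."""
    if np.linalg.matrix_rank(F) < 3:
        return False
    # Feasibility LP: find k with F k = 0 and every k_i >= 1 (hence k > 0).
    res = linprog(c=np.ones(F.shape[1]), A_eq=F, b_eq=np.zeros(3),
                  bounds=[(1, None)] * F.shape[1])
    return res.status == 0

# Four frictionless contacts in a "pinwheel" pattern on a unit square.
contacts = [(( 0.5, -1.0), (0,  1)),   # bottom edge, pushing up
            ((-0.5,  1.0), (0, -1)),   # top edge, pushing down
            ((-1.0,  0.5), (1,  0)),   # left edge, pushing right
            (( 1.0, -0.5), (-1, 0))]   # right edge, pushing left
F = np.column_stack([wrench(np.array(p), np.array(n)) for p, n in contacts])
print("form closure:", form_closure(F))   # True for this arrangement
```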

Relevance:

30.00%

Publisher:

Abstract:

The goal of this research is to develop the prototype of a tactile sensing platform for anthropomorphic manipulation research. We investigate this problem through the fabrication and simple control of a planar 2-DOF robotic finger inspired by anatomic consistency, self-containment, and adaptability. The robot is equipped with a tactile sensor array based on optical transducer technology, whereby localized changes in light intensity within an illuminated foam substrate correspond to the distribution and magnitude of forces applied to the sensor surface plane. The integration of tactile perception is a key component in realizing robotic systems that interact organically with the world. Such natural behavior is characterized by compliant performance that can initiate internal, and respond to external, force application in a dynamic environment. However, most current manipulators that support some form of haptic feedback either derive only proprioceptive sensation or limit tactile sensors to the mechanical fingertips. These constraints are due to the technological challenges involved in high-resolution, multi-point tactile perception. In this work, however, we take the opposite approach, emphasizing the role of full-finger tactile feedback in the refinement of manual capabilities. To this end, we propose and implement a control framework for sensorimotor coordination analogous to infant-level grasping and fixturing reflexes. This thesis details the mechanisms used to achieve these sensory, actuation, and control objectives, along with the design philosophies and biological influences behind them. The results of behavioral experiments with a simple tactilely-modulated control scheme are also described. The hope is to integrate the modular finger into an engineered analog of the human hand with a complete haptic system.
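As an illustration of the sensing principle, the sketch below converts localized intensity drops in a toy 8x8 image into a per-taxel force map and feeds its total into a reflex-style flexion command; the calibration gain, threshold, and control law are all invented, not the thesis's.

```python
import numpy as np

def intensity_to_force(intensity, baseline, gain=2.5):
    """Convert localized drops in light intensity within the foam substrate
    into an approximate per-taxel normal force map (arbitrary units)."""
    return np.clip(gain * (baseline - intensity), 0.0, None)

def flexion_reflex(force_map, threshold=0.5):
    """Infant-style grasp reflex: command finger flexion proportional to
    total contact force once any taxel exceeds a contact threshold."""
    if force_map.max() < threshold:
        return 0.0                            # no contact: hold posture
    return min(1.0, 0.1 * force_map.sum())   # normalized flexion command

baseline = np.full((8, 8), 0.9)               # unloaded illumination image
frame = baseline.copy()
frame[3:5, 2:4] -= 0.4                        # something presses a 2x2 patch
forces = intensity_to_force(frame, baseline)
print("flexion command:", flexion_reflex(forces))
```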

Relevance:

30.00%

Publisher:

Abstract:

During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically because the camera is mounted at the robot effector, which moves to adapt its path and correctly grasp objects. For this reason, in this type of environment, a visual recognition system must be able to recognize objects and obtain their positions in the scene automatically and autonomously. Furthermore, in industrial environments, all objects that are manipulated by robots are made of the same material and cannot be differentiated by features such as texture or color. In this work, first, a study and analysis of 3D recognition descriptors has been carried out for application in these environments. Second, a visual recognition system based on a specific distributed client-server architecture is proposed for recognizing industrial objects that lack these appearance features. Our system is designed to overcome recognition problems when objects can be distinguished only by geometric shape and the simplicity of those shapes could create ambiguity. Finally, real tests are performed and illustrated to verify the satisfactory performance of the proposed system.
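The specific 3D descriptors studied are not named in the abstract; the sketch below illustrates the general idea of recognition by geometric shape alone, using a simple D2 shape distribution (a histogram of pairwise point distances) as a stand-in descriptor and a chi-square distance for matching.

```python
import numpy as np

rng = np.random.default_rng(0)

def d2_descriptor(points, bins=32, pairs=2000):
    """D2 shape distribution: normalized histogram of pairwise distances
    between randomly sampled point pairs of a cloud."""
    i = rng.integers(0, len(points), pairs)
    j = rng.integers(0, len(points), pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()

def chi2(a, b):
    return 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-9))

cube = rng.uniform(-1, 1, (500, 3))                   # point cloud of a cube
ball = rng.normal(size=(500, 3))
ball /= np.linalg.norm(ball, axis=1, keepdims=True)   # points on a sphere
scene = rng.uniform(-1, 1, (500, 3)) + 0.01 * rng.normal(size=(500, 3))

models = {"cube": d2_descriptor(cube), "sphere": d2_descriptor(ball)}
query = d2_descriptor(scene)
match = min(models, key=lambda k: chi2(models[k], query))
print("recognized as:", match)       # expected: cube
```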

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research in robotics has advanced the state of the art in the intervention capabilities of autonomous systems. In fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, systems for ground environments (both indoor and outdoor) have reached a readiness level that allows high-level autonomous operations. In contrast, the underwater environment remains very difficult for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally requires systems with low power consumption to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments in which human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field, within a hostile environment, for the duration of the mission. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they need the support of an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and exploited as observers of underwater fauna, seabeds, shipwrecks, and so on. On the other hand, underwater operations like object recovery and equipment maintenance are still challenging to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose, configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet bus. The adopted design philosophy aims at high flexibility in the range of feasible perception applications, which should not be as limited as with special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time.

Performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunity offered by the MARIS national project. Design issues such as power consumption, heat dissipation, and network capacity have been evaluated in different scenarios. Finally, real-world experiments, conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.
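As background on the stereo-perception core, depth follows from disparity in a calibrated, rectified pair via Z = fB/d. The sketch below uses placeholder focal-length and baseline values and deliberately omits the underwater refraction and turbidity effects the platform must contend with.

```python
import numpy as np

# Minimal stereo triangulation sketch: depth from disparity for a calibrated,
# rectified pair. Focal length and baseline are assumed, not the platform's.

f_px = 1200.0        # focal length in pixels (assumed)
baseline_m = 0.30    # stereo baseline in meters (assumed)

disparity = np.array([[48.0, 24.0],
                      [16.0, 12.0]])      # toy disparity map in pixels

with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0, f_px * baseline_m / disparity, np.inf)
print(depth_m)       # 7.5 m at 48 px disparity, 30 m at 12 px
```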

Relevance:

30.00%

Publisher:

Abstract:

Soft robots are robots made mostly or completely of soft, deformable, or compliant materials. As humanoid robotic technology takes on a wider range of applications, it has become apparent that such robots could replace humans in dangerous environments. Robotic hands currently built for these environments are very difficult and costly to manufacture, so a robotic hand with a simple architecture and cheap fabrication techniques is needed. The goal of this thesis is to detail the design, fabrication, modeling, and testing of the SUR Hand, a soft, underactuated robotic hand designed to be cheaper and easier to manufacture than conventional hands while maintaining much of their dexterity and precision. This thesis details the design process for the soft pneumatic fingers, compliant palm, and flexible wrist. It also discusses a semi-empirical model for finger design and the creation and validation of grasping models.

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

Mobile devices in the near future will need to collaborate to fulfill their function, and collaboration will be achieved through communication. We use the real-world example of robotic soccer to derive the structures required for robotic communication. A review of related work finds no existing examples that come close to providing a robotic ad hoc network (RANET). The RANET we suggest uses existing structures drawn from the areas of wireless networks, peer-to-peer systems, and software life-cycle management. Gaps are found in the existing structures, so we describe how to extend some of them to satisfy the design. The RANET design supports robot cooperation by exchanging messages, discovering needed skills that other robots on the network may possess, and transferring those skills. The network is built on top of a Bluetooth wireless network and uses JXTA to communicate and transfer skills; OSGi bundles form the skills that can be transferred. To test the final design, a reference implementation is produced. Deficiencies are found in some third-party software, specifically JXTA, JamVM, and GNU Classpath. Lastly, we look at how to fix these deficiencies by porting the JXTA C implementation to the target robotic platform and potentially eliminating the TCP/IP layer, using UDP instead of TCP, or using an adaptive TCP/IP stack. We also propose future areas of investigation: seeding the configuration of the Personal Area Network (PAN) Bluetooth protocol extension so that a Bluetooth TCP/IP link is formed more quickly, and using STP to allow multi-hop messaging and transfer of skills.
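The actual stack is JXTA over Bluetooth with OSGi bundles as transferable skills; the sketch below only mimics the skill-advertisement idea with a plain UDP broadcast on a LAN, and every field and port in it is hypothetical.

```python
import json
import socket

# Conceptual stand-in for the RANET skill-discovery exchange; this is not
# the JXTA protocol, just the same idea over a UDP broadcast.

DISCOVERY_PORT = 50000   # arbitrary port for this sketch

def advertise_skills(robot_id, skills):
    """Broadcast which bundled skills this robot can share with peers."""
    msg = json.dumps({"robot": robot_id, "skills": skills}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", DISCOVERY_PORT))

def listen_for_skills(timeout=5.0):
    """Collect one skill advertisement from a peer, if any arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", DISCOVERY_PORT))
        s.settimeout(timeout)
        try:
            data, addr = s.recvfrom(4096)
            return addr[0], json.loads(data)
        except socket.timeout:
            return None

advertise_skills("striker-2", ["ball-tracking", "kick-planner"])
```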

Relevance:

20.00%

Publisher:

Abstract:

The implementation of a robotic security solution generally requires one algorithm to route the robot around the environment and another algorithm to perform anomaly detection. Solutions to the routing problem require the robot to have a good estimate of its own pose. We present a novel security system that uses metrics generated by the localisation algorithm to perform adaptive anomaly detection. The localisation algorithm is a vision-based SLAM solution called RatSLAM, based on mechanisms within the hippocampus. The anomaly detection algorithm is based on the mechanisms used by the immune system to identify threats to the body. The system is explored using data gathered within an unmodified office environment. It is shown that the algorithm successfully reacts to the presence of people and objects in areas where they are not usually present and is tolerised against the presence of people in environments that are usually dynamic.
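A rough sketch of the immune-inspired idea under assumed conditions: per-place statistics of a localisation metric define "self", large deviations raise an anomaly, and deviations that recur often enough are tolerised into the model. The metric, thresholds, and tolerisation rule are invented, not RatSLAM's.

```python
import numpy as np

class PlaceDetector:
    """Toy immune-style anomaly detector for one place in the environment."""

    def __init__(self, n_sigma=3.0, tolerise_after=20):
        self.samples = []            # history of the metric at this place
        self.anomaly_count = 0
        self.n_sigma = n_sigma
        self.tolerise_after = tolerise_after

    def observe(self, metric):
        if len(self.samples) < 10:               # still learning "self"
            self.samples.append(metric)
            return False
        mu, sd = np.mean(self.samples), np.std(self.samples) + 1e-6
        if abs(metric - mu) > self.n_sigma * sd:
            self.anomaly_count += 1
            if self.anomaly_count >= self.tolerise_after:
                self.samples.append(metric)      # tolerise: absorb into self
                return False
            return True                          # raise an anomaly
        self.samples.append(metric)
        return False

det = PlaceDetector()
for m in np.random.default_rng(1).normal(1.0, 0.05, 50):
    det.observe(m)                               # learn the normal office
print("anomaly:", det.observe(2.0))              # person where none usually is
```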

Relevance:

20.00%

Publisher:

Abstract:

Conventional cameras have limited dynamic range; as a result, vision-based robots cannot effectively view an environment comprising both sunny outdoor areas and darker indoor areas. This paper presents an approach to extend the effective dynamic range of a camera by changing its exposure level in real time to form a sequence of images that collectively cover a wide range of radiance. Individual control algorithms for each image have been developed to maximize the viewable area across the sequence. Spatial discrepancies between images, caused by the moving robot, are reduced by a real-time image registration process. The sequence is then combined by merging color and contour information. By integrating these techniques it becomes possible to operate a vision-based robot in scenes with a wide radiance range.
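The paper's own exposure controllers and color/contour merging are not reproduced here; as a sketch of the register-then-merge stage, the following uses OpenCV's MTB alignment and Mertens exposure fusion on three synthetic "exposures" standing in for frames captured at different shutter settings.

```python
import cv2
import numpy as np

# Build a synthetic wide-radiance test scene and three exposure variants.
base = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (128, 1))
base = cv2.cvtColor(base, cv2.COLOR_GRAY2BGR)
exposures = [cv2.convertScaleAbs(base, alpha=a) for a in (0.4, 1.0, 2.5)]

# Compensate small camera motion between frames, then fuse the sequence.
cv2.createAlignMTB().process(exposures, exposures)   # in-place MTB alignment
fused = cv2.createMergeMertens().process(exposures)  # float32 result in [0, 1]

result = np.clip(fused * 255, 0, 255).astype(np.uint8)
print("fused image:", result.shape, result.dtype)
```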

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes using optical flow from a moving robot to provide force feedback to an operator's joystick, facilitating collision-free teleoperation. Optical flow is measured by wide-angle cameras on board the vehicle and used to generate a virtual environmental force that is reflected to the user through the joystick, as well as fed back into the control of the vehicle. The coupling between optical flow (velocity) and force is modelled as an impedance - in this case an optical impedance. We show that the proposed control is dissipative and prevents the vehicle from colliding with the environment, while providing the operator with a natural feel for the remote environment. The paper focuses on applications to aerial robotic vehicles; however, the ideas apply directly to other force-actuated vehicles such as submersibles or space vehicles, and the authors believe the approach has potential for the control of terrestrial vehicles and even teleoperation of manipulators. Experimental results are provided for a simulated aerial robot in a virtual environment controlled by a haptic joystick.
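A toy rendering of the optical-impedance coupling F = -Z v, with the flow imbalance between the two wide-angle cameras playing the role of the velocity; the impedance value and flow magnitudes are illustrative, not the paper's identified model.

```python
import numpy as np

def reflected_force(flow_left, flow_right, impedance=0.8):
    """Map an optical-flow (velocity) imbalance to a lateral joystick force:
    approaching one wall raises flow on that side and pushes the operator's
    hand away from it, F = -Z * v."""
    v_imbalance = np.mean(flow_left) - np.mean(flow_right)
    return -impedance * v_imbalance

# Flow magnitudes (pixels/frame) from the two cameras; left side is closer.
flow_left = np.array([3.2, 2.9, 3.5])
flow_right = np.array([1.0, 1.2, 0.9])
print("joystick force command:", reflected_force(flow_left, flow_right))
```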

Relevance:

20.00%

Publisher:

Abstract:

This paper describes technologies we have developed to perform autonomous large-scale off-world excavation. A scale dragline excavator, of a size similar to that required for lunar excavation, was made capable of autonomous control. Systems have been put in place to allow remote operation of the machine from anywhere in the world. Algorithms have been developed for fully autonomous digging and dumping of material, taking into account machine and terrain constraints and regolith variability. Experimental results are presented showing the ability to autonomously excavate and move large amounts of regolith and accurately place it at a specified location.