772 results for robotic grasping


Relevance:

100.00%

Publisher:

Abstract:

Robotic grasping has received increasing attention for a few decades. While progress has been made in this field, robotic hands are still nowhere near the capability of human hands. However, in the past few years, the increase in computational power and the availability of commercial tactile sensors have made it easier to develop techniques that exploit the feedback from the hand itself, the sense of touch. The focus of this thesis lies in the use of this sense. The work described in this thesis approaches robotic grasping from two viewpoints: robotic systems and data-driven grasping. The robotic systems viewpoint describes a complete architecture for the act of grasping and, to a lesser extent, more general manipulation. Two central properties the architecture was designed for are hardware independence and the use of sensors during grasping; these properties enable the use of multiple different robotic platforms within the architecture. From the data-driven viewpoint, new methods are proposed that can be incorporated into the grasping process. The first of these methods is a novel way of learning grasp stability from the tactile and haptic feedback of the hand, instead of analytically solving the stability from a set of known contacts between the hand and the object. By learning from the data directly, there is no need to know the properties of the hand, such as its kinematics, which allows the method to be used with complex hands. The second novel method, probabilistic grasping, combines the fields of tactile exploration and grasp planning. By employing well-known statistical methods and pre-existing knowledge of an object, object properties, such as pose, can be inferred together with their uncertainty. This uncertainty is then used by a grasp planning process that plans stable grasps under the inferred uncertainty.
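
The stability-learning idea lends itself to a compact illustration: train a classifier on labelled tactile/haptic feature vectors rather than solving the contact mechanics analytically. The sketch below is a minimal, hypothetical example; the feature layout, synthetic data, and use of scikit-learn's LogisticRegression are assumptions for illustration, not the implementation from the thesis.

```python
# Minimal sketch: learning grasp stability from tactile/haptic features.
# Hypothetical data layout: one row per grasp attempt, columns are tactile
# cell pressures plus joint torques; label 1 = stable, 0 = unstable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for recorded grasp data (replace with real sensor logs).
n_grasps, n_taxels, n_joints = 500, 24, 7
X = rng.normal(size=(n_grasps, n_taxels + n_joints))
y = (X[:, :n_taxels].sum(axis=1) > 0).astype(int)   # synthetic "stability" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At run time, the feature vector measured after closing the hand would be
# passed to clf.predict_proba() to decide whether to lift or to regrasp.
```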

Relevance:

100.00%

Publisher:

Abstract:

Robotic grasping is an important research topic in robotics: for robots to attain more general-purpose utility, grasping is a necessary skill, yet it is very challenging to master. In general, robots use their perception abilities, such as images from a camera, to identify grasps for a given, usually unknown, object. A grasp describes how a robotic end-effector needs to be positioned to securely grab an object and lift it without losing it; at the moment, state-of-the-art solutions are still far behind human performance. In the last 5–10 years, deep learning methods have come to dominate the field, overcoming classical problems such as the arduous and time-consuming process of deriving a task-specific algorithm analytically. This thesis presents the progress and the main approaches in the robotic grasping field and the potential of deep learning methods for robotic grasping. Based on that, a Convolutional Neural Network (CNN) that generates a grasp pose from a camera view has been implemented inside a ROS environment. The developed components have been integrated into a pick-and-place application for a Panda robot from Franka Emika. The application includes various features related to object detection and selection. These features have been kept as generic as possible, so that they can easily be replaced or removed without additional effort for improvement or re-testing.
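
As a rough illustration of the CNN component, the sketch below regresses a planar grasp rectangle (centre, angle encoding, gripper width) from an RGB-D image. The architecture, input size, and output encoding are assumptions made for this example; it is not the network or the ROS integration used in the thesis.

```python
# Minimal sketch: a CNN that maps an RGB-D image to a planar grasp pose
# (x, y, sin 2θ, cos 2θ, width). Architecture and shapes are illustrative only.
import torch
import torch.nn as nn

class GraspPoseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2, padding=2), nn.ReLU(),   # 4 channels: RGB + depth
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 5)   # x, y, sin 2θ, cos 2θ, gripper width

    def forward(self, rgbd):
        return self.head(self.features(rgbd).flatten(1))

model = GraspPoseCNN()
rgbd = torch.rand(1, 4, 224, 224)    # placeholder camera frame
grasp = model(rgbd)                  # would be trained on labelled grasp examples
print(grasp.shape)                   # torch.Size([1, 5])
# In a ROS pipeline, the predicted rectangle would be converted to a 6-DOF
# end-effector pose and sent to the motion planner controlling the Panda arm.
```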

Relevance:

60.00%

Publisher:

Abstract:

Robots operating in complex and changing environments need the ability to manipulate and grasp objects. This work reviews earlier research on, and the current state of, robotic grasping and machine learning of robotic grasp points. Modern methods are surveyed, and the machine-learning-based classifier of Le is implemented, because it offers the best success rate among the methods studied and can be adapted to the robot available. The implemented method uses features derived from intensity and depth images to classify potential grasp points. The results of this implementation are presented.

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses the problem of developing automatic grasping capabilities for robotic hands. Using a 2-jointed and a 4-jointed model of the hand, we establish the geometric conditions necessary for achieving form closure grasps of cylindrical objects. We then define and show how to construct the grasping pre-image for quasi-static (friction dominated) and zero-G (inertia dominated) motions, for sensorless and sensor-driven grasps, with and without arm motions. The approach does not rely on detailed modeling, and it is computationally inexpensive, reliable, and easy to implement. Example behaviors were successfully implemented on the Salisbury hand and on a planar 2-fingered, 4 degree-of-freedom hand.
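
Form closure of the kind analysed here can also be tested numerically: a planar object held by frictionless point contacts is (first-order) form closed when the contact wrenches positively span the 3-D wrench space, i.e. the origin lies strictly inside their convex hull. The sketch below applies that standard test; the contact placement is made up for illustration and this is not the geometric construction developed in the thesis.

```python
# Minimal sketch: first-order form-closure test for a planar object held by
# frictionless point contacts. Each contact at position p with inward unit
# normal n contributes the wrench (n_x, n_y, p x n).
import numpy as np
from scipy.spatial import ConvexHull

def planar_form_closure(positions, normals, eps=1e-9):
    p = np.asarray(positions, dtype=float)
    n = np.asarray(normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    torque = p[:, 0] * n[:, 1] - p[:, 1] * n[:, 0]   # 2-D cross product p x n
    wrenches = np.column_stack([n, torque])           # one (fx, fy, tau) row per contact
    if len(wrenches) < 4:                             # a planar body needs >= 4 frictionless contacts
        return False
    try:
        hull = ConvexHull(wrenches)
    except Exception:                                  # degenerate (coplanar) wrench set
        return False
    # hull.equations rows are [a, b] with a.x + b <= 0 inside the hull, so the
    # origin is strictly interior iff every offset b is negative.
    return bool(np.all(hull.equations[:, -1] < -eps))

# Four frictionless contacts on the edges of the square [-1, 1]^2, placed
# asymmetrically so the contact normals do not all pass through a single point.
positions = [(0.5, -1.0), (-0.3, 1.0), (1.0, -0.4), (-1.0, 0.2)]
normals = [(0.0, 1.0), (0.0, -1.0), (-1.0, 0.0), (1.0, 0.0)]
print(planar_form_closure(positions, normals))   # True: the wrenches positively span R^3
```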

Relevance:

30.00%

Publisher:

Abstract:

The goal of this research is to develop the prototype of a tactile sensing platform for anthropomorphic manipulation research. We investigate this problem through the fabrication and simple control of a planar 2-DOF robotic finger inspired by anatomic consistency, self-containment, and adaptability. The robot is equipped with a tactile sensor array based on optical transducer technology whereby localized changes in light intensity within an illuminated foam substrate correspond to the distribution and magnitude of forces applied to the sensor surface plane. The integration of tactile perception is a key component in realizing robotic systems which organically interact with the world. Such natural behavior is characterized by compliant performance that can initiate internal, and respond to external, force application in a dynamic environment. However, most of the current manipulators that support some form of haptic feedback either solely derive proprioceptive sensation or only limit tactile sensors to the mechanical fingertips. These constraints are due to the technological challenges involved in high resolution, multi-point tactile perception. In this work, however, we take the opposite approach, emphasizing the role of full-finger tactile feedback in the refinement of manual capabilities. To this end, we propose and implement a control framework for sensorimotor coordination analogous to infant-level grasping and fixturing reflexes. This thesis details the mechanisms used to achieve these sensory, actuation, and control objectives, along with the design philosophies and biological influences behind them. The results of behavioral experiments with a simple tactilely-modulated control scheme are also described. The hope is to integrate the modular finger into an engineered analog of the human hand with a complete haptic system.
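
The grasping reflex described here can be caricatured as a simple feedback loop: close the finger until the summed response of the tactile array crosses a target, then hold. The sketch below is a toy simulation with invented sensor and actuator models; it only illustrates the idea of tactile modulation and is not the control framework of the thesis.

```python
# Toy sketch of a tactile-modulated grasp reflex: flex the finger until the
# total force reported by a (simulated) full-finger tactile array exceeds a
# target, then stop. Sensor and plant models are invented placeholders.
import numpy as np

def read_tactile_array(flexion, contact_at=0.6, cells=16):
    """Fake optical-foam tactile array: no signal before contact, then roughly
    linear growth of per-cell intensity as the finger flexes further."""
    depth = max(0.0, flexion - contact_at)
    rng = np.random.default_rng(int(flexion * 1e3))
    return depth * (1.0 + 0.1 * rng.standard_normal(cells))

def grasp_reflex(force_target=2.0, step=0.01, max_flexion=1.0):
    flexion = 0.0
    while flexion < max_flexion:
        total = read_tactile_array(flexion).sum()
        if total >= force_target:
            return flexion, total        # contact force reached: hold this posture
        flexion += step                  # otherwise keep closing the finger
    return flexion, read_tactile_array(flexion).sum()

posture, force = grasp_reflex()
print(f"stopped at flexion {posture:.2f} with summed tactile response {force:.2f}")
```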

Relevance:

30.00%

Publisher:

Abstract:

During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically because the robot moves to adapt its path and correctly grasp objects; the camera is mounted on the robot end-effector. For this reason, in this type of environment, a visual recognition system must be implemented that recognizes objects and obtains their positions in the scene automatically and autonomously. Furthermore, in industrial environments, all objects manipulated by robots are made of the same material and cannot be differentiated by features such as texture or color. In this work, first, a study and analysis of 3D recognition descriptors has been completed for application in these environments. Second, a visual recognition system based on a specific distributed client-server architecture is proposed for the recognition of industrial objects that lack these appearance features. Our system has been implemented to overcome the recognition problems that arise when objects can only be recognized by their geometric shape and the simplicity of the shapes could create ambiguity. Finally, real tests are performed and illustrated to verify the satisfactory performance of the proposed system.
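
Recognition by geometric shape alone typically relies on 3D descriptors of the kind surveyed here. The sketch below is a generic, hypothetical example using Open3D's FPFH descriptors on synthetic sphere/cube clouds with a plain nearest-neighbour matching score; it is not the descriptors, architecture, or data of the proposed system.

```python
# Minimal sketch: telling apart textureless objects by 3-D shape descriptors
# (FPFH via Open3D). Synthetic clouds and the matching score are placeholders.
import numpy as np
import open3d as o3d

def make_cloud(points):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))
    return pcd

def fpfh(pcd):
    feat = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.4, max_nn=60))
    return np.asarray(feat.data).T          # one 33-D descriptor per point

def match_score(scene_desc, model_desc):
    # Mean distance from each scene descriptor to its nearest model descriptor.
    d = np.linalg.norm(scene_desc[:, None, :] - model_desc[None, :, :], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
sphere = rng.standard_normal((500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)        # points on a unit sphere
cube = rng.uniform(-1, 1, (500, 3))
cube[np.arange(500), rng.integers(0, 3, 500)] = rng.choice([-1.0, 1.0], 500)  # cube surface

scene = fpfh(make_cloud(sphere + 0.01 * rng.standard_normal((500, 3))))
models = {"sphere": fpfh(make_cloud(sphere)), "cube": fpfh(make_cloud(cube))}
print(min(models, key=lambda name: match_score(scene, models[name])))   # expected: "sphere"
```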

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research activities in robotics have advanced the state of the art regarding the intervention capabilities of autonomous systems. The state of the art in fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, applied to ground environments (both indoor and outdoor), has now reached a readiness level that allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally requires systems with low power consumption in order to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments in which human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field, within a hostile environment, for the duration of the mission. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for a long time, but they need the support of an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and exploited as observers of underwater fauna, the seabed, shipwrecks, and so on. On the other hand, underwater operations like object recovery and equipment maintenance are still challenging tasks to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose and configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet network. The adopted design philosophy aims at achieving high flexibility in terms of feasible perception applications, which should not be as limited as in the case of special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time.
The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunities offered by the MARIS national project. Design issues such as energy consumption, heat dissipation, and network capabilities have been evaluated in different scenarios. Finally, real-world experiments, conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.
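
At its core, stereo perception of the kind this platform provides reduces to disparity estimation and triangulation. The snippet below is a generic sketch using OpenCV's semi-global block matcher on a synthetic rectified pair; the image data and calibration constants are placeholders, not values from the MARIS system.

```python
# Generic sketch: depth from a rectified stereo pair with OpenCV's semi-global
# block matcher (SGBM). The synthetic pair and calibration are placeholders;
# a real system would use calibrated, rectified camera frames.
import cv2
import numpy as np

# Synthetic rectified pair: a textured image and a copy shifted by 12 px,
# i.e. a fronto-parallel scene with constant disparity.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (240, 320), dtype=np.uint8)
true_disparity = 12
right = np.roll(left, -true_disparity, axis=1)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,     # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,          # smoothness penalties for small / large disparity steps
    P2=32 * 5 * 5,
)

# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

fx = 800.0        # focal length in pixels (placeholder calibration)
baseline = 0.12   # stereo baseline in metres (placeholder calibration)
valid = disparity > 0
depth = fx * baseline / disparity[valid]    # z = f * B / d

print("median disparity [px]:", np.median(disparity[valid]))   # ~12
print("median depth [m]:", np.median(depth))                   # ~8
```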

Relevance:

30.00%

Publisher:

Abstract:

Soft robots are robots made mostly or completely of soft, deformable, or compliant materials. As humanoid robotic technology takes on a wider range of applications, it has become apparent that such robots could replace humans in dangerous environments. Current robotic hands intended for these environments are very difficult and costly to manufacture. Therefore, a robotic hand with a simple architecture and cheap fabrication techniques is needed. The goal of this thesis is to detail the design, fabrication, modeling, and testing of the SUR Hand. The SUR Hand is a soft, underactuated robotic hand designed to be cheaper and easier to manufacture than conventional hands, yet it maintains much of their dexterity and precision. This thesis details the design process for the soft pneumatic fingers, compliant palm, and flexible wrist. It also discusses a semi-empirical model for finger design and the creation and validation of grasping models.

Relevance:

30.00%

Publisher:

Abstract:

The first mechanical automaton concept is found in a Chinese text written in the 3rd century BC, while Computer Vision was born in the late 1960s. Visual perception applied to machines (i.e. Machine Vision) is therefore a young and exciting alliance. When robots came in, the new field of Robotic Vision was born, and these terms began to be erroneously interchanged. In short, we can say that Machine Vision is an engineering domain concerned with the industrial use of vision. Robotic Vision, instead, is a research field that tries to incorporate robotics aspects into computer vision algorithms. Visual servoing, for example, is one of the problems that cannot be solved by computer vision alone. Accordingly, a large part of this work deals with boosting popular Computer Vision techniques by exploiting robotics, e.g. using kinematics to localize a vision sensor mounted as the robot end-effector. The remainder of this work is dedicated to the counterpart, i.e. the use of computer vision to solve real robotic problems such as grasping objects or navigating while avoiding obstacles. A brief survey of the mapping data structures most widely used in robotics is presented, along with SkiMap, a novel sparse data structure created both for robotic mapping and as a general-purpose 3D spatial index. Several approaches to Object Detection and Manipulation that exploit the aforementioned mapping strategies are then proposed, along with a completely new Machine Teaching facility intended to simplify the training procedure of modern Deep Learning networks.
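
One of the robotics-boosted vision ideas mentioned here, using the arm's kinematics to localize an eye-in-hand camera, reduces to composing homogeneous transforms. The sketch below assumes a known base-to-end-effector pose from forward kinematics and a fixed hand-eye transform; the numeric values are illustrative only.

```python
# Sketch: localizing an eye-in-hand camera by composing the forward-kinematics
# pose of the end-effector with a fixed hand-eye calibration transform.
# All numeric values are illustrative placeholders.
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Base -> end-effector, as produced by the arm's forward kinematics.
T_base_ee = transform(rot_z(np.pi / 4), [0.4, 0.1, 0.6])

# End-effector -> camera, from hand-eye calibration (fixed mounting offset).
T_ee_cam = transform(np.eye(3), [0.0, 0.0, 0.05])

# Camera pose in the robot base frame: composition of the two transforms.
T_base_cam = T_base_ee @ T_ee_cam

# A point observed 0.5 m in front of the camera, expressed in the base frame.
p_cam = np.array([0.0, 0.0, 0.5, 1.0])
p_base = T_base_cam @ p_cam
print("camera position in base frame:", T_base_cam[:3, 3])
print("observed point in base frame: ", p_base[:3])
```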

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, robotic applications are widespread and most manipulation tasks are solved efficiently. However, Deformable Objects (DOs) still represent a huge limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevents the use of model-based approaches (since they are excessively computationally complex) and makes sensory data difficult to interpret. This thesis reports the research activities aimed at addressing applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with a particular focus on electric wires. In all the works, a significant effort was made to study effective strategies for analyzing sensory signals with various machine learning algorithms. In the first part of the document, the main focus is on wire terminals, i.e. their detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed; then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention. Together with this strategy, a generic object detector based on Convolutional Neural Networks is extended for orientation prediction. The insertion task is also extended by developing a closed-loop controller capable of guiding the insertion of a longer and curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the latter part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations by exploiting Deep Q-learning and finds the best releasing point. The success of the solution relies on a reliable interpretation of the DLO shape; for this reason, further developments are made on the visual segmentation.
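
As an illustration of the recurrent force-estimation step mentioned above, the sketch below maps a short sequence of sensor readings to a contact-force estimate with a small GRU. The input size, data, and architecture are assumptions for illustration, not the network used in the thesis.

```python
# Sketch: estimating contact force during wire insertion from a sequence of
# sensor readings with a small recurrent network. Shapes and data are invented.
import torch
import torch.nn as nn

class ForceEstimator(nn.Module):
    def __init__(self, n_inputs=8, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)     # estimated (fx, fy, fz)

    def forward(self, seq):                  # seq: (batch, time, n_inputs)
        out, _ = self.gru(seq)
        return self.head(out[:, -1])         # force estimate at the last time step

model = ForceEstimator()
readings = torch.rand(4, 50, 8)              # 4 insertions, 50 time steps, 8 signals
force = model(readings)
print(force.shape)                           # torch.Size([4, 3])
# Training would regress these outputs against forces measured by an F/T sensor.
```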

Relevance:

20.00%

Publisher:

Abstract:

Background: Minimally invasive techniques have been revolutionary and provide clinical evidence of decreased morbidity and efficacy comparable to that of traditional open surgery. Computer-assisted surgical devices have recently been approved for general surgical use. Aim: The aim of this study was to report the first known case of pancreatic resection with the use of a computer-assisted, or robotic, surgical device in Latin America. Patient and Methods: A 37-year-old female with a previous history of radical mastectomy for bilateral breast cancer due to a BRCA2 mutation presented with an acute pancreatitis episode. Radiologic investigation disclosed an intraductal pancreatic neoplasm located in the neck of the pancreas, with atrophy of the body and tail. The main pancreatic duct was enlarged. The surgical decision was to perform a laparoscopic subtotal pancreatectomy using the da Vinci® robotic system (Intuitive Surgical, Sunnyvale, CA). Five trocars were used. Pancreatic transection was achieved with a vascular endoscopic stapler. The surgical specimen was removed without an additional incision. Results: Operative time was 240 minutes. Blood loss was minimal, and the patient did not receive a transfusion. The recovery was uneventful, and the patient was discharged on postoperative day 4. Conclusions: Subtotal laparoscopic pancreatic resection can be performed safely. The da Vinci robotic system allowed for technical refinements of laparoscopic pancreatic resection. Robotic assistance improved the dissection and control of major blood vessels due to three-dimensional visualization of the operative field and instruments with wrist-type end-effectors.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a mixed validation approach, based on coloured Petri nets and 3D graphic simulation, for the design of supervisory systems in manufacturing cells with multiple robots. The coloured Petri net is used to model the cell behaviour at a high level of abstraction: it models the activities of each cell component and their coordination by a supervisory system. The graphical simulation is used to analyse and validate the cell behaviour in a 3D environment, allowing the detection of collisions and the calculation of process times. The motivation for this work comes from the aeronautic industry. The automation of a fuselage assembly process requires the integration of robots with other cell components such as metrological or vision systems. In this cell, the robot trajectories are defined by the supervisory system and result from the coordination of the cell components. The paper presents the application of the approach to an aircraft assembly cell under integration in Brazil. This case study shows the feasibility of the approach and supports the discussion of its main advantages and limits. © 2011 Elsevier Ltd. All rights reserved.
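
To make the Petri-net side of the approach concrete, the sketch below runs the token game of an ordinary (uncoloured) place/transition net for a toy two-step robot cell; it is a simplification of the coloured nets used in the paper, and the net structure is invented for illustration.

```python
# Sketch: token-game simulation of a small place/transition Petri net modelling
# a toy robot cell (part available -> robot picks -> robot places). This is an
# ordinary net, i.e. a simplification of the coloured nets used in the paper.

# Each transition lists the places it consumes tokens from and produces to.
transitions = {
    "pick":  {"consume": ["part_available", "robot_idle"], "produce": ["part_in_gripper"]},
    "place": {"consume": ["part_in_gripper"],              "produce": ["part_placed", "robot_idle"]},
}

marking = {"part_available": 3, "robot_idle": 1, "part_in_gripper": 0, "part_placed": 0}

def enabled(name):
    return all(marking[p] >= 1 for p in transitions[name]["consume"])

def fire(name):
    for p in transitions[name]["consume"]:
        marking[p] -= 1
    for p in transitions[name]["produce"]:
        marking[p] += 1

# Fire any enabled transition until the net deadlocks (all parts placed).
while True:
    ready = [t for t in transitions if enabled(t)]
    if not ready:
        break
    fire(ready[0])

print(marking)   # expect all three parts in "part_placed" and the robot idle
```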

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE • To evaluate early trifecta outcomes after robotic-assisted radical prostatectomy (RARP) performed by a high-volume surgeon. PATIENTS AND METHODS • We prospectively evaluated 1100 consecutive patients who underwent RARP performed by one surgeon. In all, 541 men were considered potent before RARP; of these, 404 underwent bilateral full nerve sparing and were included in this analysis. • Baseline and postoperative urinary and sexual functions were assessed using self-administered validated questionnaires. • Postoperative continence was defined as the use of no pads; potency was defined as the ability to achieve and maintain satisfactory erections for sexual intercourse more than 50% of the time, with or without the use of oral phosphodiesterase type 5 inhibitors; biochemical recurrence (BCR) was defined as two consecutive PSA levels of >0.2 ng/mL after RARP. • Results were compared between three age groups: Group 1, <55 years; Group 2, 56-65 years; and Group 3, >65 years. RESULTS • The trifecta rates at 6 weeks, 3, 6, 12 and 18 months after RARP were 42.8%, 65.3%, 80.3%, 86% and 91%, respectively. • There were no statistically significant differences in the continence and BCR-free rates between the three age groups at all postoperative intervals analysed. • Nevertheless, younger men had higher potency rates and a shorter time to recovery of sexual function when compared with older men at 6 weeks, 3, 6 and 12 months after RARP (P < 0.01 at all time points). • Similarly, younger men also had a shorter time to achieving the trifecta and higher trifecta rates at 6 weeks, 3 and 6 months after RARP compared with older men (P < 0.01 at all time points). CONCLUSION • RARP offers excellent short-term trifecta outcomes when performed by an experienced surgeon. • Younger men had a shorter time to achieving the trifecta and higher overall trifecta rates when compared with older men at 6 weeks, 3 and 6 months after RARP.

Relevance:

20.00%

Publisher:

Abstract:

Background: Perioperative complications following robotic-assisted radical prostatectomy (RARP) have been previously reported in recent series. Few studies, however, have used standardized systems to classify surgical complications, and that inconsistency has hampered accurate comparisons between different series or surgical approaches. Objective: To assess trends in the incidence of, and to classify, perioperative surgical complications following RARP in 2500 consecutive patients. Design, setting, and participants: We analyzed 2500 patients who underwent RARP for treatment of clinically localized prostate cancer (PCa) from August 2002 to February 2009. Data were prospectively collected in a customized database and retrospectively analyzed. Intervention: All patients underwent RARP performed by a single surgeon. Measurements: The data were collected prospectively in a customized database. Complications were classified using the Clavien grading system. To evaluate trends regarding complications and radiologic anastomotic leaks, we compared eight groups of 300 patients each, categorized according to the surgeon's experience (number of cases). Results and limitations: Our median operative time was 90 min (interquartile range [IQR]: 75-100 min). The median estimated blood loss was 100 ml (IQR: 100-150 ml). Our conversion rate was 0.08%, comprising two procedures converted to standard laparoscopy due to robot malfunction. One hundred and forty complications were observed in 127 patients (5.08%). The following percentages of patients presented graded complications: grade 1, 2.24%; grade 2, 1.8%; grade 3a, 0.08%; grade 3b, 0.48%; grade 4a, 0.40%. There were no cases of multiple organ dysfunction or death (grades 4b and 5). There were significant decreases in the overall complication rate (p = 0.0034) and in the number of anastomotic leaks (p < 0.001) as the surgeon's experience increased. Conclusions: RARP is a safe option for the treatment of clinically localized PCa, presenting low complication rates in experienced hands. Although the robotic system provides the surgeon with enhanced vision and dexterity, proficiency is only accomplished with consistent surgical volume; complication rates showed a tendency to decrease as the surgeon's experience increased. © 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Context: The purpose of this article is to review the history of robotic surgery and its impact on teaching, along with a description of historical and current robots used in the medical arena. Summary of evidence: Although the history of robots dates back 2000 years or more, the last two decades have seen an outstanding revolution in medicine due to the changes that robotic surgery has brought to the way surgery is performed, taught, and practiced. Conclusions: Robotic surgery has evolved into a complete and self-contained field with enormous potential for future development. The results to date have shown that this technology is capable of providing good outcomes and quality care for patients. © 2011 AEU. Published by Elsevier España, S.L. All rights reserved.