979 results for Visione Robotica Calibrazione Camera Robot Hand Eye


Relevance: 100.00%

Abstract:

The thesis investigates the hand-eye calibration problem, i.e. the geometric transformation between the reference frames of the camera and of the robot's end effector, presenting the problem and analysing the various solutions proposed in the literature. An implementation developed in collaboration with the company SpecialVideo is then presented, based on the algorithm proposed by Konstantinos Daniilidis, which formulates the problem using dual quaternions and solves the rotational and translational parts of the transformation simultaneously. The work concludes with an analysis of the effectiveness of the implemented method on simulated data and with proposals for possible extensions, so that the work can later be reused in the company's software with real data and with different types of cameras, mainly line-scan or laser cameras.
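
For reference, a minimal sketch of the underlying AX = XB estimation step, not the SpecialVideo implementation described above: it builds synthetic gripper and target poses from a known camera-to-gripper transform and recovers that transform with OpenCV's implementation of the Daniilidis dual-quaternion solver (cv2.calibrateHandEye). All pose values are illustrative.

import cv2
import numpy as np

def random_rotation(rng):
    # Rotation matrix from a random rotation vector (axis-angle).
    rvec = rng.uniform(-1.0, 1.0, 3)
    R, _ = cv2.Rodrigues(rvec)
    return R

rng = np.random.default_rng(0)

# Ground-truth camera pose in the gripper frame (the unknown X in AX = XB).
R_x = random_rotation(rng)
t_x = np.array([[0.05], [0.02], [0.10]])

# Fixed calibration target expressed in the robot base frame.
R_tb = random_rotation(rng)
t_tb = np.array([[0.60], [0.10], [0.30]])

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    R_g = random_rotation(rng)                      # gripper -> base (from robot FK)
    t_g = rng.uniform(-0.5, 0.5, (3, 1))
    # camera -> base = (gripper -> base) @ (camera -> gripper)
    R_c2b = R_g @ R_x
    t_c2b = R_g @ t_x + t_g
    # target -> camera = inv(camera -> base) @ (target -> base)
    R_tc = R_c2b.T @ R_tb
    t_tc = R_c2b.T @ (t_tb - t_c2b)
    R_g2b.append(R_g); t_g2b.append(t_g)
    R_t2c.append(R_tc); t_t2c.append(t_tc)

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_DANIILIDIS)
print("rotation error:", np.linalg.norm(R_est - R_x))
print("translation error:", np.linalg.norm(t_est - t_x))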

Relevance: 100.00%

Abstract:

The main purpose of robot calibration is the correction of possible errors in the robot parameters. This paper presents a method for the kinematic calibration of a parallel robot equipped with a camera in hand. In order to preserve the mechanical configuration of the robot, the camera is used to acquire incremental positions of the end effector from a spherical object fixed in the world reference frame. The positions of the end effector are related to the incremental positions of the resolvers of the robot's motors, and a kinematic model of the robot is used to find a new set of parameters which minimizes the errors in the kinematic equations. Additionally, properties of the spherical object and the intrinsic camera parameters are used to model the projection of the object in the image and improve the spatial measurements. Finally, the robotic system is designed to carry out tracking tasks, and the calibration of the robot is validated by integrating the errors of the visual controller.
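
The abstract does not give the parallel robot's kinematic model, so the following hedged sketch only illustrates the generic calibration step it describes: fit the kinematic parameters so that the model's predicted end-effector positions match the camera measurements, here with a planar two-link arm standing in for the real mechanism and synthetic data throughout.

import numpy as np
from scipy.optimize import least_squares

def forward_kinematics(params, joints):
    # params = link lengths (the parameters to be calibrated).
    l1, l2 = params
    q1, q2 = joints[:, 0], joints[:, 1]
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=1)

rng = np.random.default_rng(1)
true_params = np.array([0.52, 0.31])          # actual link lengths
nominal_params = np.array([0.50, 0.30])       # nominal (erroneous) values
joints = rng.uniform(-np.pi, np.pi, (40, 2))  # resolver readings per pose

# Camera-measured end-effector positions (with a little measurement noise).
measured = forward_kinematics(true_params, joints) + rng.normal(0, 1e-4, (40, 2))

def residuals(params):
    # Errors in the kinematic equations to be minimized.
    return (forward_kinematics(params, joints) - measured).ravel()

fit = least_squares(residuals, nominal_params)
print("calibrated parameters:", fit.x)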

Relevance: 100.00%

Abstract:

New low-cost sensors and new open, free libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, human detection and gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and track the fingers is proposed. The method is based on point clouds from range (RGB-D) images. It requires no visual markers, camera calibration, knowledge of the environment, or complex, expensive acquisition systems. Furthermore, the method has been implemented to create a human interface for moving a robot hand: the human hand is recognized, the movement of the fingers is analysed, and the motion is then imitated by a Barrett hand using communication events programmed in ROS.
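
A hedged sketch of the RGB-D front end such a pipeline assumes (not the authors' code): back-project a depth image into a point cloud with the pinhole model and keep the points closest to the camera as the hand candidate, without markers or an environment model. The intrinsics and the synthetic depth image are made up.

import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5          # illustrative Kinect-like intrinsics
depth = np.full((480, 640), 2.0, dtype=np.float32)   # background at 2 m
depth[200:300, 300:400] = 0.6                        # a "hand" blob at 0.6 m

# Back-project every pixel (u, v, depth) into camera coordinates.
v, u = np.indices(depth.shape)
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Segment the hand as the points nearer than a depth threshold, then take an
# extreme point as a crude fingertip candidate (camera y points down).
hand = cloud[cloud[:, 2] < 1.0]
fingertip_candidate = hand[np.argmin(hand[:, 1])]
print(hand.shape, fingertip_candidate)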

Relevance: 100.00%

Abstract:

Humans can effortlessly manipulate objects in their hands, dexterously sliding and twisting them within their grasp. Robots, however, have none of these capabilities; they simply grasp objects rigidly in their end effectors. To investigate this common form of human manipulation, an analysis of the controlled slipping of a grasped object within a robot hand was performed. The Salisbury robot hand demonstrated many of these controlled slipping techniques, illustrating many of the results of this analysis. First, the possible slipping motions were found as a function of the location, orientation, and types of contact between the hand and the object. Second, for a given grasp, the contact types were determined as a function of the grasping force and the external forces on the object. Finally, by changing the grasping force, the robot modified the constraints on the object and effected controlled slipping motions.
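
The contact-type test that underlies controlled slipping can be stated compactly: a Coulomb contact sticks while the tangential load stays inside the friction cone and slips once it leaves it, so reducing the normal (grasping) force induces slip. The sketch below is only an illustration with made-up forces, not the paper's analysis.

import numpy as np

def contact_type(f_normal, f_tangential, mu=0.5):
    # Coulomb friction test: sticking while ||f_t|| <= mu * f_n, slipping otherwise.
    return "sticking" if np.linalg.norm(f_tangential) <= mu * f_normal else "slipping"

external_tangential = np.array([0.8, 0.0])   # N, tangential load the finger must resist
for grip in (4.0, 2.0, 1.0):                 # N, progressively reduced grasping force
    print(f"grip {grip:.1f} N -> {contact_type(grip, external_tangential)}")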

Relevance: 100.00%

Abstract:

The problem of the appropriate distribution of forces among the fingers of a four-fingered robot hand is addressed. The finger-object interactions are modelled as point frictional contacts, hence the system is indeterminate and an optimal solution is required for controlling the forces acting on the object. A fast and efficient method for computing the grasping and manipulation forces is presented, in which the computation is based on the true model of the nonlinear friction cone of contact. Results are compared with previously employed methods that linearize the cone constraints and minimize the internal forces.
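
A hedged sketch of the force-distribution problem with the true (nonlinear) friction cone, reduced to a planar two-contact grasp so it stays short; the paper treats a four-fingered spatial grasp with its own solver. Geometry, friction coefficient and external wrench are made up.

import numpy as np
from scipy.optimize import minimize

mu = 0.4
contacts = [(np.array([-1.0, 0.0]), np.array([1.0, 0.0])),    # (position, inward normal)
            (np.array([ 1.0, 0.0]), np.array([-1.0, 0.0]))]
w_ext = np.array([0.0, -9.81, 0.0])                           # external wrench (fx, fy, torque)

def wrench_residual(f):
    # Force and torque balance: finger wrench plus external wrench must vanish.
    f = f.reshape(2, 2)
    force = f.sum(axis=0)
    torque = sum(p[0] * fi[1] - p[1] * fi[0] for (p, _), fi in zip(contacts, f))
    return np.array([force[0], force[1], torque]) + w_ext

def cone_margin(f):
    # mu * normal component minus tangential magnitude, per contact (>= 0 required).
    f = f.reshape(2, 2)
    return np.array([mu * fi @ n - abs(fi @ np.array([-n[1], n[0]]))
                     for (_, n), fi in zip(contacts, f)])

res = minimize(lambda f: f @ f, x0=np.ones(4),
               constraints=[{"type": "eq", "fun": wrench_residual},
                            {"type": "ineq", "fun": cone_margin}],
               method="SLSQP")
print("contact forces:", res.x.reshape(2, 2))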

Relevance: 100.00%

Abstract:

This paper presents a novel method for the calibration of a parallel robot, which allows a more accurate configuration than one based on nominal parameters. A camera installed in the robot hand is used as the main sensor; it determines the relative position of the robot with respect to a spherical object fixed in the robot's working area. The positions of the end effector are related to the incremental positions of the resolvers of the robot motors. A kinematic model of the robot is used to find a new set of parameters which minimizes the errors in the kinematic equations. Additionally, properties of the spherical object and the intrinsic camera parameters are used to model the projection of the object in the image and thereby improve the spatial measurements. Finally, several working tests, both static and tracking, are executed in order to verify how the behaviour of the robotic system improves when calibrated parameters are used instead of nominal ones. It should be emphasized that the proposed method requires neither external nor expensive sensors, which makes it well suited to robots used in teaching and research activities.
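
A hedged sketch of the sphere-projection idea mentioned above: under a pinhole model the image of a sphere of known radius R shrinks roughly as f*R/Z, so the measured blob radius gives the depth and the blob centre gives the ray to the sphere centre. The paper's projection model is more careful (a sphere actually projects to an ellipse); the intrinsics and measurements below are illustrative.

import numpy as np

fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0   # assumed intrinsics (pixels)
R = 0.02                                       # known sphere radius (m)

def sphere_center_from_blob(u, v, r_px):
    Z = fx * R / r_px                  # depth from apparent radius (small-angle approx.)
    X = (u - cx) * Z / fx              # back-project the blob centre
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

print(sphere_center_from_blob(u=400.0, v=260.0, r_px=32.0))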

Relevance: 100.00%

Abstract:

This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the Cartesian 3D position and orientation (pose) of the robot arm from an image of the robot obtained through a camera. A colored triangular label is placed on the tool of the robot manipulator, and efficient heuristic rules are used to locate the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system under lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose Cartesian 3D coordinates, relative to a fixed frame, are known. Two distinct tool poses obtained from the image, the initial and the final one, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired and the actual tool pose. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
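
The control law described above reduces to a short computation: scale the pose error by a gain and map it to joint increments through the pseudoinverse of the manipulator Jacobian. The sketch below uses a random stand-in Jacobian; the paper computes the real one for its 5-DOF arm.

import numpy as np

def control_step(pose_current, pose_desired, jacobian, gain=0.5):
    error = pose_desired - pose_current             # 6-vector: position + orientation error
    dq = np.linalg.pinv(jacobian) @ (gain * error)  # joint increments
    return dq

J = np.random.default_rng(2).normal(size=(6, 5))    # illustrative 6x5 Jacobian (5 DOF)
pose_now = np.zeros(6)
pose_goal = np.array([0.10, 0.05, 0.00, 0.0, 0.0, 0.1])
print(control_step(pose_now, pose_goal, J))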

Relevance: 100.00%

Abstract:

Image-Based Visual Servoing (IBVS) is a vision-based robot control scheme. It uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These range from the intrinsic camera parameters (which can be obtained from a previous camera calibration) to the distance along the optical axis between the camera and the visual features, i.e. the depth. This paper presents a comparative study of the performance of D-IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system performance.
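
A hedged sketch of where the depth enters IBVS: the interaction matrix of a normalized point feature depends on Z, and the camera velocity is obtained as v = -lambda * L^+ * e. Feature coordinates and depths below are illustrative, not the paper's data.

import numpy as np

def interaction_matrix(x, y, Z):
    # Standard interaction matrix of a normalized image point (x, y) at depth Z.
    return np.array([[-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                     [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x]])

features = np.array([[0.10, 0.05], [-0.08, 0.12], [0.02, -0.10], [-0.05, -0.04]])
desired  = np.array([[0.00, 0.00], [-0.10, 0.10], [0.05, -0.12], [-0.08, -0.02]])
depths   = np.array([0.9, 1.1, 1.0, 0.95])         # per-feature Z, e.g. from an RGB-D sensor

L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
error = (features - desired).ravel()
v_camera = -0.5 * np.linalg.pinv(L) @ error        # 6-vector camera velocity command
print(v_camera)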

Relevance: 100.00%

Abstract:

This thesis focuses on robotic topology. In particular, this work stresses the importance of the topography of parts in robot vision. We start from the definitions of polytope and of extended Gaussian image, and then move on to some key notions of robotics, such as the definition of the pose of an object, of "peg in the hole", and of shape from X. These notions allow us to state the theorems of Minkowski and Alexandrov, which are then used in the construction of the EGI method. This method is finally used to determine the attitude of an object in space, allowing the robot arm to grasp it.
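
A hedged sketch of the EGI construction (not the thesis code): each face of a polyhedral model contributes its area at the position of its unit normal on the Gauss sphere, and by the Minkowski and Alexandrov results this area-weighted normal distribution characterizes a convex shape, so matching EGIs yields the object's attitude. The tetrahedron below is only an illustrative model.

import numpy as np

vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]   # outward-oriented triangles

def extended_gaussian_image(vertices, faces):
    egi = {}
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = np.cross(b - a, c - a)                      # normal with magnitude = 2 * area
        area = 0.5 * np.linalg.norm(n)
        unit = tuple(np.round(n / np.linalg.norm(n), 3))
        egi[unit] = egi.get(unit, 0.0) + area           # accumulate area per normal direction
    return egi

for normal, area in extended_gaussian_image(vertices, faces).items():
    print(normal, round(area, 3))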

Relevance: 100.00%

Abstract:

Laparoscopic surgery (LS) has revolutionized traditional surgical techniques by introducing minimally invasive procedures for diagnosis and local therapies. LS has undeniable advantages, such as small patient incisions, reduced postoperative pain and faster recovery. On the other hand, the restricted view of the anatomical target, the difficult handling of the surgical instruments, the restricted mobility inside the human body, the need for dexterous hand-eye coordination, and inadequate, non-ergonomic surgical instruments may restrict LS to more specialized surgeons. To overcome these limitations, this work presents a new robotic surgical handheld system, the EndoRobot. The EndoRobot was designed to be used in clinical practice or even as a surgical simulator. It integrates an electromechanical system with 3 degrees of freedom. Each degree can be manipulated independently and combined with different levels of sensitivity, allowing fast and slow movements. Among other features, the EndoRobot runs on battery or an external power supply, enables the use of bipolar radiofrequency to prevent bleeding while cutting, and allows plug-and-play exchange of the laparoscopic forceps. As a surgical simulator, the system is also instrumented to measure and transmit, in real time, its position and orientation to training software able to monitor and assist the trainee's surgical movements.

Relevance: 100.00%

Abstract:

OBJECTIVE: The objective of this trial was to assess which type of warm-up has the highest effect on virtual reality (VR) laparoscopy performance. The following warm-up strategies were applied: a hands-on exercise (group 1), a cognitive exercise (group 2), and no warm-up (control, group 3). DESIGN: This is a 3-arm randomized controlled trial. SETTING: The trial was conducted at the department of surgery of the University Hospital Basel in Switzerland. PARTICIPANTS: A total of 94 participants, all laypersons without any surgical or VR experience, completed the study. RESULTS: A total of 96 participants were randomized, 31 to group 1, 31 to group 2, and 32 to group 3. There were 2 postrandomization exclusions. In the multivariate analysis, we found no evidence that the intervention had an effect on VR performance as represented by 6 calculated subscores of accuracy, time, and path length for (1) camera manipulation and (2) hand-eye coordination combined with 2-handed maneuvers (p = 0.795). Neither the comparison of the average of the intervention groups (groups 1 and 2) vs control (group 3) nor the pairwise comparisons revealed any significant differences in VR performance, neither multivariate nor univariate. VR performance improved with increasing performance score in the cognitive exercise warm-up (iPad 3D puzzle) for accuracy, time, and path length in the camera navigation task. CONCLUSIONS: We were unable to show an effect of the 2 tested warm-up strategies on VR performance in laypersons. We are currently designing a follow-up study including surgeons rather than laypersons with a longer warm-up exercise, which is more closely related to the final task.

Relevance: 100.00%

Abstract:

Recently a substantial amount of research has been done in the field of dextrous manipulation and hand manoeuvres. The main concern has been how to control robot hands so that they can execute manipulation tasks with the same dexterity and intuition as human hands. This paper surveys multi-fingered robot hand research and development topics, which include robot hand design, object force distribution and control, grip transforms, grasp stability and its synthesis, grasp stiffness and compliance motion, and robot arm-hand coordination. Three main topics are presented in this article. The first is an introduction to the subject. The second concentrates on examples of mechanical manipulators used in research and the methods employed to control them. The third presents work that has been done in the field of object manipulation.

Relevance: 100.00%

Abstract:

Manipulation of an object by a multi-fingered robot hand requires task planning, which involves the computation of joint-space vectors and fingertip forces. To implement a task as fast as possible, these computations have to be carried out in minimum time. The state of the art in multi-fingered robot hand design has shown the possible use of remotely driven finger joints. Such remotely driven hands require the computation of tendon displacements to evaluate joint-space vectors before signals are sent to the actuators. Alternatively, a direct-drive hand is a mechanical hand in which the shafts of the articulated joints are directly coupled to the rotors of motors with high output torques. This article is divided into two main sections. The first presents a brief view of manipulation using the direct-drive approach. The second presents ongoing research on the design of a four-fingered articulated hand in the Department of Cybernetics at the University of Reading.
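
A hedged sketch of the tendon-displacement computation that a remotely driven design requires: for an idealized tendon routed over circular joint pulleys, the tendon excursion is the sum of pulley radius times joint rotation. Radii and angles are made up, and real routings add couplings this simple model ignores.

import numpy as np

pulley_radii = np.array([0.008, 0.006, 0.005])      # m, one pulley per finger joint
joint_angles = np.radians([30.0, 45.0, 20.0])       # desired joint-space vector (rad)

tendon_displacement = float(np.dot(pulley_radii, joint_angles))
print(f"required tendon excursion: {tendon_displacement * 1000:.2f} mm")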

Relevance: 100.00%

Abstract:

Sensing techniques are important for solving the problems of uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting the robot controller when other sensor systems, such as tactile and force sensors, cannot obtain data relevant to the grasping and manipulation task. In particular, a new visual approach based on RGB-D data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. The approach is also used to measure changes in the shape of the object's surfaces, which allows us to find deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the results show that our visual pipeline does not require deformation models of the objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern placed on the robot forearm. The presented experiments demonstrate that the proposed method achieves good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
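
A hedged sketch of the deformation test described above (not the authors' pipeline): compare the object's current surface points against a reference cloud captured at first contact and raise an event when enough points deviate beyond a threshold, using a KD-tree for the nearest-neighbour lookup. The clouds and thresholds are synthetic.

import numpy as np
from scipy.spatial import cKDTree

# Reference surface patch captured at first contact, then a simulated dent.
xs, ys = np.meshgrid(np.linspace(-0.05, 0.05, 50), np.linspace(-0.05, 0.05, 50))
reference = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
current = reference.copy()
dent = np.linalg.norm(current[:, :2], axis=1) < 0.01
current[dent, 2] -= 0.008                           # dent pressed in by a fingertip (8 mm)

def deformation_event(reference, current, threshold=0.003, ratio=0.01):
    # Distance from every current point to its nearest reference point.
    distances, _ = cKDTree(reference).query(current)
    deformed_fraction = np.mean(distances > threshold)
    return deformed_fraction > ratio, deformed_fraction

event, fraction = deformation_event(reference, current)
print(f"deformation event: {event} ({fraction:.1%} of points displaced)")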