5 results for haptic grasping

at Universidad de Alicante


Relevance:

20.00%

Abstract:

Sensing techniques are important for resolving the uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force sensors, cannot obtain useful data for the grasping and manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. The approach is also used to measure changes in the shape of an object's surface, allowing us to detect deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In contrast to other methods, our visual pipeline does not require deformation models of objects and materials, and it works with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern placed on the robot forearm. The presented experiments demonstrate that the proposed method provides good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
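
The deformation-detection step described in this abstract can be illustrated with a short sketch. The Python/NumPy fragment below is not the authors' code: the function name, the object mask, and the 5 mm threshold are assumptions. It flags a deformation by comparing depth values inside the grasped-object region against a reference frame captured before contact, which is the point at which an event message would be sent to the robot controller.

import numpy as np

def surface_deformation_detected(reference_depth, current_depth, object_mask,
                                 threshold_m=0.005):
    """Return True when the mean absolute depth change inside the grasped-object
    region exceeds threshold_m (metres), relative to a pre-contact reference."""
    ref = reference_depth[object_mask]
    cur = current_depth[object_mask]
    valid = (ref > 0) & (cur > 0)      # ignore pixels with missing depth readings
    if not np.any(valid):
        return False
    mean_change = np.mean(np.abs(cur[valid] - ref[valid]))
    return bool(mean_change > threshold_m)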

Relevance:

10.00%

Abstract:

Tactile sensors play an important role in robotic manipulation for performing dexterous and complex tasks. This paper presents a novel control framework for dexterous manipulation with multi-fingered robotic hands using feedback data from tactile and visual sensors. The framework permits the definition of new visual controllers that track the path of the object's motion while taking into account both the dynamics model of the robot hand and the grasping force at the fingertips under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers, based on a specified cost function that determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors present experimental verification of some of the controllers derived from the framework on a real robotic manipulation system.
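
The role of the cost function in spreading control effort over the joints can be made concrete with a textbook construction (not the paper's controller; the Jacobian, the weight matrix, and all names below are illustrative assumptions): resolving a desired fingertip velocity through a weighted pseudo-inverse, so that heavily weighted joints contribute less motion.

import numpy as np

def weighted_joint_command(J, x_dot_des, W):
    """Minimise 0.5 * q_dot^T W q_dot subject to J q_dot = x_dot_des.
    The weight matrix W acts as the cost that distributes effort over the
    joints; the solution is the weighted pseudo-inverse of J applied to the
    desired fingertip velocity."""
    W_inv = np.linalg.inv(W)
    JWJt = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JWJt, x_dot_des)

# Example: a 4-joint finger, a 3-D fingertip velocity, last joint penalised most.
J = np.array([[0.10, 0.08, 0.05, 0.02],
              [0.00, 0.06, 0.04, 0.02],
              [0.02, 0.01, 0.03, 0.01]])
W = np.diag([1.0, 1.0, 1.0, 10.0])
q_dot = weighted_joint_command(J, np.array([0.01, 0.0, 0.005]), W)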

Relevance:

10.00%

Abstract:

PURPOSE: To evaluate and compare the visual, refractive, contrast sensitivity, and aberrometric outcomes with a diffractive bifocal and trifocal intraocular lens (IOL) of the same material and haptic design. METHODS: Sixty eyes of 30 patients undergoing bilateral cataract surgery were enrolled and randomly assigned to one of two groups: the bifocal group, including 30 eyes implanted with the bifocal diffractive IOL AT LISA 801 (Carl Zeiss Meditec, Jena, Germany), and the trifocal group, including eyes implanted with the trifocal diffractive IOL AT LISA tri 839 MP (Carl Zeiss Meditec). Analyses of visual and refractive outcomes, contrast sensitivity, ocular aberrations (OPD-Scan III; Nidek, Inc., Gamagori, Japan), and the defocus curve were performed during a 3-month follow-up period. RESULTS: No statistically significant differences between groups were found in 3-month postoperative uncorrected and corrected distance visual acuity (P > .21). However, uncorrected, corrected, and distance-corrected near and intermediate visual acuities were significantly better in the trifocal group (P < .01). No significant differences between groups were found in postoperative spherical equivalent (P = .22). In the binocular defocus curve, visual acuity was significantly better for defocus of -0.50 to -1.50 diopters in the trifocal group (P < .04) and -3.50 to -4.00 diopters in the bifocal group (P < .03). No statistically significant differences were found between groups in most of the postoperative corneal, internal, and ocular aberrations (P > .31), or in contrast sensitivity for most frequencies analyzed (P > .15). CONCLUSIONS: Trifocal diffractive IOLs provide significantly better intermediate vision than bifocal IOLs, with equivalent postoperative levels of visual and ocular optical quality.

Relevance:

10.00%

Abstract:

Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and understanding rather than on efficiency; however, the library can also be used for research purposes. JavaVis is an open-source Java library oriented to the teaching of computer vision. It consists of a framework with several features that meet these teaching demands. It has been designed to be easy to use: the user does not have to deal with internal structures or the graphical interface, and should the student need to add a new algorithm, it can be done simply enough. After sketching the library, we focus on the experience students gain from using it in several computer vision courses. Our main goal is to find out whether the students understand what they are doing, that is, how much the library helps them grasp the basic concepts of computer vision. Over the last four years we have conducted surveys to assess how much students have improved their skills by using this library.
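
As an illustration of the plugin-style design the abstract describes (students add an algorithm without touching internal structures or the GUI), the sketch below is a minimal Python mock-up; it is not JavaVis's actual API, which is written in Java, and every class and method name here is hypothetical.

from abc import ABC, abstractmethod
import numpy as np

class VisionFunction(ABC):
    """Hypothetical base class a student would extend; the framework only ever
    calls process_image(), so internals and the GUI stay hidden."""
    @abstractmethod
    def process_image(self, image: np.ndarray) -> np.ndarray: ...

class Threshold(VisionFunction):
    """Example student algorithm: binarise a greyscale image at a given level."""
    def __init__(self, level: int = 128):
        self.level = level
    def process_image(self, image: np.ndarray) -> np.ndarray:
        return (image >= self.level).astype(np.uint8) * 255

def run_pipeline(image, functions):
    """Apply a sequence of VisionFunction objects in order, as a teaching
    framework might do behind its graphical interface."""
    for f in functions:
        image = f.process_image(image)
    return image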

Relevance:

10.00%

Abstract:

During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically because the robot moves to adapt its path and correctly grasp objects; this is because the camera is mounted on the robot's end effector. For this reason, in this type of environment a visual recognition system must be implemented that recognizes objects and obtains their positions automatically and autonomously. Furthermore, in industrial environments, all objects manipulated by robots are made of the same material and cannot be differentiated by features such as texture or color. In this work, first, a study and analysis of 3D recognition descriptors has been carried out for application in these environments. Second, a visual recognition system built on a specific distributed client-server architecture is proposed for the recognition of industrial objects that lack these appearance features. Our system has been implemented to overcome recognition problems that arise when objects can only be recognized by their geometric shape and the simplicity of those shapes could create ambiguity. Finally, some real tests are performed and illustrated to verify the satisfactory performance of the proposed system.
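
To make the idea of recognition by geometric shape alone concrete, the sketch below uses a D2 shape distribution (a histogram of distances between random point pairs) as a stand-in global descriptor; it is not one of the descriptors evaluated in the work, and the database layout and names are assumptions. Ambiguity between simple shapes shows up as near-equal descriptor distances.

import numpy as np

def d2_descriptor(points, n_pairs=5000, n_bins=32, seed=0):
    """D2 shape distribution of an (N, 3) point cloud: histogram of distances
    between random point pairs, normalised by the largest sampled distance,
    so it depends only on geometry, not on texture or color."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / (d.max() + 1e-12), bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def recognize(query_points, model_descriptors):
    """Nearest-neighbour match of the query descriptor against a database
    mapping model name -> descriptor; returns the best-matching model name."""
    q = d2_descriptor(query_points)
    return min(model_descriptors,
               key=lambda name: np.linalg.norm(q - model_descriptors[name]))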