875 results for Object manipulation
Abstract:
To manipulate an object skillfully, the brain must learn its dynamics, specifying the mapping between applied force and motion. A fundamental issue in sensorimotor control is whether such dynamics are represented in an extrinsic frame of reference tied to the object or an intrinsic frame of reference linked to the arm. Although previous studies have suggested that objects are represented in arm-centered coordinates [1-6], all of these studies have used objects with unusual and complex dynamics. Thus, it is not known how objects with natural dynamics are represented. Here we show that objects with simple (or familiar) dynamics and those with complex (or unfamiliar) dynamics are represented in object- and arm-centered coordinates, respectively. We also show that objects with simple dynamics are represented with an intermediate coordinate frame when vision of the object is removed. These results indicate that object dynamics can be flexibly represented in different coordinate frames by the brain. We suggest that with experience, the representation of the dynamics of a manipulated object may shift from a coordinate frame tied to the arm toward one that is linked to the object. The additional complexity required to represent dynamics in object-centered coordinates would be economical for familiar objects because such a representation allows object use regardless of the orientation of the object in hand.
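The coordinate-frame distinction can be illustrated with a toy linear dynamics model: if the learned mapping from velocity to force rotates with the object, predictions after the object is reoriented in the hand differ from those of an arm-fixed mapping. The anisotropic matrix and the 90-degree rotation below are hypothetical values for illustration, not quantities from the study:

```python
import numpy as np

def rot(theta):
    """2D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical anisotropic object dynamics learned at the original
# orientation: force = M @ velocity, expressed in arm coordinates.
M = np.diag([2.0, 0.5])

v = np.array([1.0, 0.0])           # hand velocity
R = rot(np.pi / 2)                 # object rotated 90 degrees in the hand

arm_centered = M @ v               # prediction ignores the rotation
object_centered = R @ M @ R.T @ v  # mapping rotates with the object

print(arm_centered)      # [2. 0.]
print(object_centered)   # [0.5 0.]
```

An object-centered representation thus predicts different forces once the object is reoriented, which is what makes it reusable regardless of the object's orientation in the hand.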
Abstract:
Somatosensory object discrimination has been shown to involve widespread cortical and subcortical structures in both cerebral hemispheres. In this study we aimed to identify the networks involved in tactile object manipulation by applying principal component analysis (PCA) to data from individual subjects. We expected to find more than one network.
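As a minimal illustration of the analysis style described above, PCA via SVD on a synthetic single-subject data matrix; the dimensions, latent signals, and noise level are made up for illustration and are not the study's data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one subject's data: 100 scans x 20 regions,
# built from two latent "networks" plus noise (hypothetical values).
t = np.linspace(0, 10, 100)
net1 = np.outer(np.sin(t), rng.normal(size=20))
net2 = np.outer(np.cos(3 * t), rng.normal(size=20))
X = net1 + net2 + 0.1 * rng.normal(size=(100, 20))

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)

# With two latent networks, two components dominate the variance.
print(np.round(explained[:3], 3))
```

The rows of `Vt` corresponding to the dominant components are the spatial patterns one would interpret as candidate networks.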
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push, and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the `right' action, i.e. the action with the best possible improvement of the detector.
Abstract:
Background: The relationship between the normal and tangential force components (grip force, GF, and load force, LF, respectively) acting on the digit-object interface during object manipulation reveals neural mechanisms involved in movement control. Here, we examined whether the type of feedback provided to participants during exertion of LF would influence GF-LF coordination and task performance. Methods: Sixteen young (24.7 ± 3.8 years old) volunteers isometrically exerted a continuous sinusoidal FZ (vertical component of LF) by pulling up on a fixed instrumented handle and relaxing, under two feedback conditions: targeting and tracking. In the targeting condition, the FZ exertion range was delimited by horizontal lines representing the upper (10 N) and lower (1 N) targets, with the frequency (0.77 or 1.53 Hz) dictated by a metronome. In the tracking condition, a sinusoidal template set at the same frequencies and range was presented, and participants had to superimpose their exerted FZ on it. Task performance was assessed by the absolute errors at peaks (AEPeak) and valleys (AEValley), and GF-LF coordination by GF-LF ratios, maximum cross-correlation coefficients (rmax), and time lags. Results: The results revealed no effect of feedback and no feedback-by-frequency interaction on any variable. AEPeak and the GF-LF ratio were higher, and rmax lower, at 1.53 Hz than at 0.77 Hz. Conclusion: These findings indicate that the type of feedback does not influence task performance or GF-LF coordination. We therefore recommend the use of tracking tasks when assessing GF-LF coordination during isometric LF exertion on externally fixed instrumented handles, because they are easier to understand and provide additional indices (e.g., RMSE) of voluntary force control. © 2013 Pedão et al.; licensee BioMed Central Ltd.
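The GF-LF coordination indices mentioned above (maximum cross-correlation coefficient and its time lag) can be sketched in Python on synthetic signals; the sampling rate, force range, gain, and imposed 3-sample shift are hypothetical values, not the authors' data or analysis code:

```python
import numpy as np

fs = 100.0                      # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
lf = 5.5 + 4.5 * np.sin(2 * np.pi * 0.77 * t)  # load force, 1-10 N
gf = 1.2 * np.roll(lf, 3)       # grip force, shifted by 3 samples

def gf_lf_coupling(gf, lf, fs, max_lag_s=0.5):
    """Maximum cross-correlation coefficient and its time lag (s)."""
    g = (gf - gf.mean()) / gf.std()   # standardize both signals
    l = (lf - lf.mean()) / lf.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.mean(np.roll(g, -k) * l) for k in lags])
    best = np.argmax(r)
    return r[best], lags[best] / fs

r_max, lag = gf_lf_coupling(gf, lf, fs)
print(round(r_max, 3), lag)  # perfect coupling at a 0.03 s lag
```

Because both signals are standardized, the constant gain between GF and LF does not affect rmax; only the waveform similarity and the temporal offset do.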
Abstract:
This paper describes the design of a modular multi-finger haptic device for virtual object manipulation. The mechanical structure is based on one module per finger and can be scaled up to three fingers. Mechanical configurations for two and three fingers are based on the use of one and two redundant axes, respectively. As demonstrated, redundant axes significantly increase the workspace and prevent link collisions, which is their main asset with respect to other multi-finger haptic devices. The location of the redundant axes and the link dimensions have been optimized to guarantee proper workspace, manipulability, force capability, and inertia for the device. The mechanical design of the haptic device and a thimble adaptable to different finger sizes have also been developed for virtual object manipulation.
Reactive reaching and grasping on a humanoid: Towards closing the action-perception loop on the iCub
Abstract:
We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles - other objects detected in the visual stream - while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used in both autonomous and tele-operation scenarios.
Abstract:
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these raise a clear need to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of this research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable, and robust, and requires only a small training set (it was tested with 5 to 10 images per experiment). Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proof-of-concept integrations of the motion and action sides. First, reactive reaching and grasping is shown, allowing the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables use of the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
Abstract:
On a daily basis, humans interact with a vast range of objects and tools. A class of tasks that can pose a serious challenge to our motor skills are those that involve manipulating objects with internal degrees of freedom, such as when folding laundry or using a lasso. Here, we use the framework of optimal feedback control to make predictions of how humans should interact with such objects. We confirm the predictions experimentally in a two-dimensional object manipulation task, in which subjects learned to control six different objects with complex dynamics. We show that the non-intuitive behavior observed when controlling objects with internal degrees of freedom can be accounted for by a simple cost function representing a trade-off between effort and accuracy. In addition to using a simple linear, point-mass optimal control model, we also used an optimal control model that considers the non-linear dynamics of the human arm. We find that the more realistic optimal control model captures aspects of the data that cannot be accounted for by the linear model or other previous theories of motor control. The results suggest that our everyday interactions with objects can be understood through optimality principles, and they advocate the use of more realistic optimal control models for the study of human motor neuroscience.
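The effort-accuracy trade-off described above can be sketched with a linear point-mass optimal control model (a finite-horizon LQR); all parameters below are illustrative stand-ins, not the values or model used in the study:

```python
import numpy as np

# Point-mass reaching as finite-horizon LQR: a quadratic cost trades
# terminal accuracy against summed effort (illustrative parameters).
dt, m, N = 0.01, 1.0, 60              # 0.6 s movement
A = np.array([[1.0, dt], [0.0, 1.0]]) # state: [position, velocity]
B = np.array([[0.0], [dt / m]])       # control: force on the mass
Qf = np.diag([1e4, 1e2])              # accuracy: end on target, at rest
R = np.array([[1e-3]])                # effort penalty on force

# Backward Riccati recursion for the time-varying feedback gains.
P = Qf
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Forward simulation of a 10 cm reach (state relative to the target).
x = np.array([[-0.10], [0.0]])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u

print(np.round(x.ravel(), 4))  # near [0, 0]: on target, at rest
```

Raising `R` relative to `Qf` makes the model tolerate endpoint error to save effort, which is the trade-off the abstract's cost function captures.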
Abstract:
Robotic manipulanda are extensively used in the investigation of the motor control of human arm movements. They permit the application of translational forces to the arm based on its state and can be used to probe issues ranging from mechanisms of neural control to biomechanics. However, most current designs are optimized for studying either motor learning or stiffness. Even fewer include end-point torque control, which is important for the simulation of objects and the study of tool use. Here we describe a modular, general-purpose, two-dimensional planar manipulandum (vBOT) primarily optimized for dynamic learning paradigms. It employs a carbon fibre arm arranged as a parallelogram, which is driven by motors via timing pulleys. The design minimizes the intrinsic dynamics of the manipulandum without active compensation. A novel variant of the design (WristBOT) can apply torques at the handle using an add-on cable drive mechanism. In a second variant (StiffBOT), a more rigid arm can be substituted and zero-backlash belts can be used, making the StiffBOT more suitable for the study of stiffness. The three variants can be used with custom-built display rigs, mounting, and air tables. We investigated the performance of the vBOT and its variants in terms of effective end-point mass, viscosity, and stiffness. Finally, we present an object manipulation task using the WristBOT. This demonstrates that subjects can perceive the orientation of the principal axis of an object based on haptic feedback arising from its rotational dynamics.
Abstract:
Bio-inspired designs can provide an answer to engineering problems such as swimming strategies at the micron or nano-scale. Scientists are now designing artificial micro-swimmers that can mimic flagella-powered swimming of micro-organisms. In an application such as lab-on-a-chip, in which micro-object manipulation in small flow geometries could be achieved by micro-swimmers, control of the swimming direction becomes an important aspect for retrieval and control of the micro-swimmer. A bio-inspired approach for swimming direction reversal (a flagellum bearing mastigonemes) can be used to design such a system and is explored in the present work. We analyze the system using a computational framework in which the equations of solid mechanics and fluid dynamics are solved simultaneously. The fluid dynamics of Stokes flow is represented by a 2D Stokeslets approach, while the solid mechanics behavior is realized using Euler-Bernoulli beam elements. The working principle of a flagellum bearing mastigonemes can be broken up into two parts: (1) the contribution of the base flagellum and (2) the contribution of the mastigonemes, which act like cilia. These contributions are counteractive, and the net motion (velocity and direction) is a superposition of the two. In the present work, we also perform a dimensional analysis to understand the underlying physics associated with the system parameters, such as the height of the mastigonemes, the number of mastigonemes, the flagellar wavelength and amplitude, the flagellum length, and the mastigoneme rigidity. Our results provide fundamental physical insight into the swimming of a flagellum with mastigonemes and provide guidelines for the design of artificial flagellar systems.
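The 2D Stokeslet approach mentioned above can be sketched as follows, assuming the standard two-dimensional Oseen tensor G_ij = -δ_ij ln r + r_i r_j / r²; the force, viscosity, and evaluation point are arbitrary illustrative values, not the paper's simulation setup:

```python
import numpy as np

def stokeslet_2d(x, x0, F, mu=1.0):
    """Velocity at x due to a point force F at x0 in 2D Stokes flow,
    using the Oseen tensor G_ij = -delta_ij ln r + r_i r_j / r^2."""
    r = np.asarray(x, float) - np.asarray(x0, float)
    rn = np.linalg.norm(r)
    G = -np.log(rn) * np.eye(2) + np.outer(r, r) / rn**2
    return G @ np.asarray(F, float) / (4.0 * np.pi * mu)

# Velocity at (1, 1) induced by a unit force along +x at the origin.
F = np.array([1.0, 0.0])
u = stokeslet_2d([1.0, 1.0], [0.0, 0.0], F)
print(np.round(u, 4))

# The flow is incompressible: the numerical divergence should vanish.
h = 1e-5
div = ((stokeslet_2d([1 + h, 1], [0, 0], F)[0] -
        stokeslet_2d([1 - h, 1], [0, 0], F)[0]) / (2 * h) +
       (stokeslet_2d([1, 1 + h], [0, 0], F)[1] -
        stokeslet_2d([1, 1 - h], [0, 0], F)[1]) / (2 * h))
print(abs(div) < 1e-6)  # True
```

Summing such singular solutions along the discretized flagellum and mastigonemes is what lets the velocity field of the whole swimmer be assembled from point-force contributions.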
Abstract:
Current models of motor learning posit that skill acquisition involves both the formation and decay of multiple motor memories that can be engaged in different contexts. Memory formation is assumed to be context dependent, so that errors most strongly update motor memories associated with the current context. In contrast, memory decay is assumed to be context independent, so that movement in any context leads to uniform decay across all contexts. We demonstrate that for both object manipulation and force-field adaptation, contrary to previous models, memory decay is highly context dependent. We show that the decay of memory associated with a given context is greatest for movements made in that context, with more distant contexts showing markedly reduced decay. Thus, both memory formation and decay are strongest for the current context. We propose that this apparently paradoxical organization provides a mechanism for optimizing performance. While memory decay tends to reduce force output, memory formation can correct for any errors that arise, allowing the motor system to regulate force output so as to both minimize errors and avoid unnecessary energy expenditure. The motor commands for any given context thus result from a balance between memory formation and decay, while memories for other contexts are preserved.
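The contrast between context-independent and context-dependent decay can be illustrated with a toy two-memory state-space model; the retention rates below are made-up values for illustration, not parameters fitted to the study's data:

```python
import numpy as np

def washout(decay):
    """Memories for contexts 0 and 1 after 50 error-free trials
    performed in context 0, given per-context retention rates."""
    x = np.array([1.0, 1.0])    # both memories start fully adapted
    for _ in range(50):
        x = decay * x           # each trial is made in context 0
    return x

# Classic assumption: movement in any context decays all memories alike.
uniform = washout(np.array([0.97, 0.97]))
# Context-dependent decay: the distant context is largely spared.
contextual = washout(np.array([0.97, 0.999]))

print(np.round(uniform, 3))     # both memories decay together
print(np.round(contextual, 3))  # distant-context memory preserved
```

Under the context-dependent scheme, only the memory tied to the current context loses force output, which matches the abstract's claim that memories for other contexts are preserved.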
Abstract:
ROSSI: Emergence of communication in Robots through Sensorimotor and Social Interaction. T. Ziemke, A. Borghi, F. Anelli, C. Gianelli, F. Binkovski, G. Buccino, V. Gallese, M. Huelse, M. Lee, R. Nicoletti, D. Parisi, L. Riggio, A. Tessari, E. Sahin. International Conference on Cognitive Systems (CogSys 2008), University of Karlsruhe, Karlsruhe, Germany, 2008. Sponsorship: EU-FP7.