2 results for visuomotoric, visual feedback, intermanual transfer

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Abstract:

Reaching and grasping an object is an action that can be performed in light, under visual guidance, as well as in darkness, under proprioceptive control only. Area V6A is a visuomotor area involved in the control of reaching movements. Besides neurons activated by the execution of reaching movements, V6A shows passive somatosensory and visual responses. This suggests for V6A a multimodal capability of integrating sensory and motor-related information. We wanted to know whether this integration occurs during reaching movements, and in the present study we tested whether visual feedback influences the reaching activity of V6A neurons. In order to better address this question, we also interpreted the neural data in light of the kinematics of the reaching performance. We used an experimental paradigm that allowed us to examine V6A responses against two different visual backgrounds, light and dark. In these conditions, the monkey performed an instructed-delay reaching task, moving the hand towards different target positions located in the peripersonal space. During the execution of the reaching task, the visual feedback produced a variety of modulation patterns, some of them unexpected. Since reach-related discharges had already been demonstrated in V6A in the absence of visual feedback, we expected two types of neural modulation: 1) the addition of light to the environment would enhance the reach-related discharges recorded in the dark; 2) the light would leave the neural response unmodified. Unexpectedly, the results show a complex pattern of modulation that argues against a simple additive interaction between visual and motor-related signals.
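To make the additive hypothesis concrete, the following minimal sketch (not part of the thesis) shows what a purely additive interaction between visual and reach-related signals would predict for a single neuron; the firing rates, the single fitted offset, and the helper additive_prediction are hypothetical illustrations.

```python
import numpy as np

def additive_prediction(rate_dark, visual_offset):
    # Purely additive model: the discharge in the light equals the discharge
    # recorded in the dark plus a fixed, target-independent visual component.
    return rate_dark + visual_offset

# Hypothetical reach-related firing rates (spikes/s) of one neuron,
# one value per target position in the peripersonal space.
rate_dark = np.array([12.0, 18.0, 9.0, 15.0])
rate_light = np.array([20.0, 14.0, 9.5, 30.0])

# Best additive fit: a single offset shared by all targets.
visual_offset = np.mean(rate_light - rate_dark)
residual = rate_light - additive_prediction(rate_dark, visual_offset)

# Large, target-dependent residuals are incompatible with a simple
# additive combination of visual and motor-related signals.
print(residual)
```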

Relevance:

30.00%

Abstract:

This thesis deals with Visual Servoing and its closely connected disciplines: projective geometry, image processing, robotics and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques, Image Based Visual Servoing (IBVS). In Image Based Visual Servoing the robot is driven on-line by a feedback control loop closed directly in the 2D space of the camera sensor. The work considers the case of a monocular system with a single camera mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and the goal system configurations: the robot Cartesian motion is thus generated only by means of visual information. However, the execution of a positioning control task by IBVS is not straightforward, because singularity problems may occur and local minima may be reached in which the current image is very close to the target one but the 3D positioning task is far from being fulfilled; this happens in particular for large camera displacements, when the initial and the goal target views are noticeably different. To overcome the singularity and local-minima drawbacks, while maintaining the robustness of IBVS with respect to modeling and camera calibration errors, suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory is made of a path plus a time law). The generated image-plane paths must be feasible, i.e. compliant with the rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local-minima problems. In addition, the planned image trajectories must generate camera velocity screws that are smooth and within the allowed bounds of the robot. We show that a scaled 3D motion planning algorithm can be devised in order to generate feasible image-plane trajectories. Since the image paths are generated off-line, it is also possible to tune the planning parameters so as to keep the target inside the camera field of view, even in those unfortunate cases where the target feature points would otherwise leave the camera image due to the 3D robot motion. To test the validity of the proposed approach, both experimental and simulation results are reported, also taking into account the influence of noise on the path planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector: the results demonstrate the good performance and the feasibility of the proposed approach.
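As a point of reference for the control loop described above, the following minimal sketch shows the classic IBVS law v = -λ L⁺ (s - s*) for point features; it is not the planning scheme developed in the thesis, and the function names, the fixed gain, and the assumption of known feature depths are illustrative choices.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Image Jacobian (interaction matrix) of a single point feature at
    # normalized image coordinates (x, y) with estimated depth Z.
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y**2, -x * y,        -x],
    ])

def ibvs_velocity(features, goal_features, depths, gain=0.5):
    # Classic IBVS law: camera velocity screw v = -gain * L^+ * (s - s*),
    # where L stacks the interaction matrices of all tracked points.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(goal_features)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: two tracked point features, their goal positions, and estimated depths.
v = ibvs_velocity(features=[(0.10, -0.05), (-0.20, 0.15)],
                  goal_features=[(0.00, 0.00), (-0.10, 0.10)],
                  depths=[0.8, 1.0])
print(v)  # 6-vector [vx, vy, vz, wx, wy, wz] in the camera frame
```

When the stacked interaction matrix loses rank, the pseudo-inverse becomes ill-conditioned; this is the singularity problem that the planned image-plane trajectories discussed in the abstract are designed to avoid.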