979 results for Visione Robotica Calibrazione Camera Robot Hand Eye
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar. The eye-gaze behavior of that avatar had previously been recorded from a user with a head-mounted eye tracker. The first experiment assessed the difference between users' abilities to judge which objects an avatar is looking at with only head gaze displayed, and with both eye- and head-gaze data displayed. The results show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or a binocular eye tracker would be required, by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment examined the effects of stereo versus mono viewing of the scene, with the subjects again asked to identify where the avatar was looking; it showed no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
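As an illustrative aside (not taken from the paper): the convergence cue examined in the second experiment can be made concrete by triangulating the two gaze rays. A minimal sketch, assuming each eye's position and direction are available in a common coordinate frame; all names and values are hypothetical.

```python
import numpy as np

def vergence_point(p_left, d_left, p_right, d_right, eps=1e-9):
    """Approximate the 3D fixation point as the midpoint of the
    closest points between the two (possibly skew) gaze rays."""
    w0 = p_left - p_right
    a = np.dot(d_left, d_left)
    b = np.dot(d_left, d_right)
    c = np.dot(d_right, d_right)
    d = np.dot(d_left, w0)
    e = np.dot(d_right, w0)
    denom = a * c - b * b
    if denom < eps:                 # rays nearly parallel: no usable convergence
        return None
    t = (b * e - c * d) / denom     # parameter along the left-eye ray
    s = (a * e - b * d) / denom     # parameter along the right-eye ray
    closest_left = p_left + t * d_left
    closest_right = p_right + s * d_right
    return 0.5 * (closest_left + closest_right)

# Example: eyes 6 cm apart, both converging on a point 1 m ahead.
left, right = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
print(vergence_point(left, target - left, right, target - right))  # ~ [0. 0. 1.]
```

When the rays are nearly parallel (distant or divergent gaze), the denominator vanishes and convergence provides no usable depth cue.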
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. Access Grid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge which objects an avatar is looking at with only head movements displayed, the eyes remaining static, and with both eye-gaze and head-movement information displayed. The results show that eye gaze is of vital importance to subjects' correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results of the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques developed to turn the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze, and in its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
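As a hedged sketch of the general technique described, not the authors' software: turning tracked eye movements into world-space gaze on an avatar amounts to composing the eye-in-head direction reported by the eye tracker with the tracked head pose. Function names, angle conventions, and offsets below are assumptions.

```python
import numpy as np

def yaw_pitch_to_dir(yaw, pitch):
    """Unit gaze direction in head coordinates from eye-tracker
    yaw/pitch angles (radians); +z is assumed to be straight ahead."""
    return np.array([
        np.sin(yaw) * np.cos(pitch),
        np.sin(pitch),
        np.cos(yaw) * np.cos(pitch),
    ])

def world_gaze_ray(head_pos, head_rot, eye_offset, eye_yaw, eye_pitch):
    """Compose the tracked head pose with eye-in-head angles to get a
    world-space gaze ray (origin, direction) for the avatar's eye.
    head_rot is a 3x3 rotation matrix from head to world coordinates."""
    origin = head_pos + head_rot @ eye_offset          # eye position in world
    direction = head_rot @ yaw_pitch_to_dir(eye_yaw, eye_pitch)
    return origin, direction

# Example: head at the origin facing +z, left eye 3 cm to the left,
# gaze rotated 10 degrees off straight ahead.
origin, direction = world_gaze_ray(
    np.zeros(3), np.eye(3), np.array([-0.03, 0.0, 0.0]),
    np.radians(10.0), 0.0)
print(origin, direction)
```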
Abstract:
The spectral content of the myoelectric signals from the muscles of the remnant forearms of three persons with congenital absences (CA) of their forearms was compared with signals from their intact contra-lateral limbs, from similar muscles in three persons with acquired losses (AL), and from seven persons with no loss (NL). The observed bandwidth for the CA subjects was broader, with peak energy between 200 and 300 Hz, while the signals from the contra-lateral limbs and from the AL and NL subjects peaked in the 100-150 Hz range. The mean skew of the signals was 46.3 +/- 6.7 for the AL subjects and 45.4 +/- 8.7 for the NL subjects, while the signals from those with CAs had a skew of 11.0 +/- 11. The structure of the muscles of one CA subject was observed ultrasonically; the muscle showed greater disruption than normally developed muscles. It is speculated that the myographic signal reflects the structure of the muscle, which has developed in a more disorganized manner because the muscle was not stretched by other muscles across the missing distal joint, even in the muscles that are used regularly to control arm prostheses.
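As an illustrative aside (not from the paper): the kind of spectral comparison described, a peak-energy band and a skew of the spectrum, can be sketched as below. The sampling rate, window settings, and the exact skewness definition are assumptions and may differ from the measure the authors used.

```python
import numpy as np
from scipy.signal import welch

def emg_spectral_stats(signal, fs=1000.0):
    """Peak frequency and spectral skewness of a myoelectric signal.
    The PSD is treated as a distribution over frequency; skewness here
    is the standard third standardized moment of that distribution."""
    f, psd = welch(signal, fs=fs, nperseg=256)
    peak_freq = f[np.argmax(psd)]            # centre of the peak-energy band
    p = psd / psd.sum()                      # normalize PSD to a distribution
    mean = np.sum(f * p)
    var = np.sum((f - mean) ** 2 * p)
    skew = np.sum((f - mean) ** 3 * p) / var ** 1.5
    return peak_freq, skew

# Example on synthetic noise (a stand-in for a real EMG recording)
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
print(emg_spectral_stats(x))
```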
Abstract:
This paper presents a new strategy for controlling rigid robot manipulators in the presence of parametric uncertainties or unmodelled dynamics. The strategy combines an adaptation law with a well-known robust controller proposed by Spong, which is derived using Lyapunov's direct method. Although the tracking problem for manipulators has been successfully solved with different strategies, there are conditions under which their efficiency is limited; in particular, their performance degrades when unknown payload masses or model disturbances are introduced. The aim of this work is to show that the proposed strategy performs better than existing algorithms, as verified by real-time experiments on a PUMA 560 robot. (c) 2006 Elsevier Ltd. All rights reserved.
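A minimal sketch of the general idea, not Spong's controller or the paper's algorithm: a gradient-type adaptation law layered on a computed-torque-style tracking controller for a single link with an unknown payload. All gains and the plant model are illustrative.

```python
import numpy as np

# 1-DOF pendulum link: m*l^2 * qdd + m*g*l * sin(q) = tau, with the
# payload-dependent parameters theta = [m*l^2, m*g*l] unknown to the
# controller. The adaptation law estimates them online while tracking.
def simulate(T=10.0, dt=1e-3, lam=5.0, K=20.0, gamma=2.0):
    theta_true = np.array([0.5, 4.9])     # actual m*l^2, m*g*l
    theta_hat = np.zeros(2)               # controller's running estimate
    q, qd = 0.0, 0.0
    for i in range(int(T / dt)):
        t = i * dt
        # Desired trajectory and its derivatives
        q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
        e, ed = q - q_des, qd - qd_des
        s = ed + lam * e                  # sliding-type tracking error
        qdd_r = qdd_des - lam * ed        # reference acceleration
        Y = np.array([qdd_r, np.sin(q)])  # linear-in-parameters regressor
        tau = Y @ theta_hat - K * s       # certainty-equivalence control
        theta_hat -= gamma * Y * s * dt   # gradient adaptation law
        # Integrate the true plant dynamics (forward Euler)
        qdd = (tau - theta_true[1] * np.sin(q)) / theta_true[0]
        qd += qdd * dt
        q += qd * dt
    return e, theta_hat

print(simulate())  # final tracking error should be near zero
```

The point of the combination is that the robust term (here, the -K*s feedback) keeps errors bounded while the adaptation absorbs the unknown payload, which is the failure mode of fixed-gain controllers the abstract highlights.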
Abstract:
Rodney Brooks has been called the self-styled "Bad Boy of Robotics". In the 1990s he gained this dubious honour by orchestrating a string of highly evocative robots from his Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, USA.
Abstract:
Calibrated cameras are an extremely useful resource for computer vision scenarios. Typically, cameras are calibrated using calibration targets or measurements of the observed scene, or self-calibrated from features matched between cameras with overlapping fields of view. This paper considers an approach to camera calibration based on observations of a pedestrian and compares the resulting calibration with a commonly used approach that requires measurements of the scene.
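For context, a sketch of the calibration-target route mentioned in the abstract (not the scene-measurement baseline used in the paper's comparison): OpenCV's standard checkerboard calibration. Board dimensions, square size, and image paths are placeholders.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                 # inner corners per row and column (assumed board)
square = 0.025                   # square edge length in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):        # placeholder image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsics:\n", K)
```

Pedestrian-based calibration replaces the checkerboard's known geometry with assumptions about a walking person (e.g. roughly constant height over a ground plane), which is what removes the need to measure the scene.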