856 results for eye movements
Abstract:
Two experiments examined imitation of lateralised body movement sequences presented at six viewing angles (0°, 60°, 120°, 180°, 240°, and 300° rotation relative to the participant's body). Experiment 1 found that, when participants were instructed simply to "do what the model does", at all viewing angles they produced more actions using the same side of the body as the model (anatomical matches) than actions using the opposite side (anatomical non-matches). In Experiment 2 participants were instructed to produce either anatomical matches or anatomical non-matches of observed actions. When the model was viewed from behind (0°), the anatomically matching group were more accurate than the anatomically non-matching group, but the non-matching group was superior when the model faced the participant (180° and 240°). No reliable differences were observed between groups at 60°, 120°, and 300°. In combination, the results of Experiments 1 and 2 suggest that, when they are confronting a model, people choose to imitate the hard way; they attempt to match observed actions anatomically, in spite of the fact that anatomical matching is more subject to error than anatomical non-matching.
Abstract:
Automatically extracting interesting objects from videos is a very challenging task and is applicable to many research areas such as robotics, medical imaging, content-based indexing and visual surveillance. Automated visual surveillance is a major research area in computational vision, and a commonly applied technique for extracting objects of interest is motion segmentation. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-time video sequences do not arise only from interesting objects; environmental conditions such as wind, cloud cover, rain and snow may be present, in addition to rapid lighting changes, poor footage quality, moving shadows and reflections. This list provides only a sample of the challenges present. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, using current neuro-biological, physiological and psychological research in primate vision as inspiration.
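To make the basic idea of motion segmentation concrete (this is only a minimal sketch, not the thesis's own pipeline), the code below thresholds frame-to-frame differences to produce a foreground mask. The video filename, the threshold of 25, and the morphological clean-up step are illustrative assumptions.

```python
# Minimal motion-segmentation sketch: frame differencing + thresholding.
# Illustrative only; a robust system needs far more (shadow handling,
# lighting compensation, background modelling, etc.).
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")     # hypothetical input video
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Temporal change: absolute difference between consecutive frames.
    diff = cv2.absdiff(gray, prev_gray)
    # Pixels that changed by more than the threshold are treated as moving.
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Morphological opening suppresses isolated noise pixels.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    cv2.imshow("motion mask", mask)
    prev_gray = gray
    if cv2.waitKey(30) & 0xFF == 27:      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```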
Abstract:
Understanding human movement is key to improving input devices and interaction techniques. This paper presents a study of mouse movements of motion-impaired users, with the aim of gaining a better understanding of impaired movement. The cursor trajectories of six motion-impaired users and three able-bodied users are studied according to their submovement structure. Several aspects of the movement are studied, including the frequency and duration of pauses between submovements, verification times, the number of submovements, the peak speed of submovements and the accuracy of submovements in two dimensions. Results include findings that some motion-impaired users pause more often and for longer than able-bodied users, require up to five times more submovements to complete the same task, and exhibit a correlation between error and peak submovement speed that does not exist for able-bodied users.
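As a rough illustration of how a cursor trajectory might be parsed into submovements (the paper's own analysis is not reproduced here), the sketch below segments a sampled path wherever speed exceeds a threshold. The 100 Hz sampling interval, the speed threshold, and the synthetic trajectory are assumptions for the example.

```python
# Illustrative submovement segmentation for a cursor trajectory.
# Assumed (not from the paper): 10 ms sampling interval and a simple
# speed-threshold definition of pauses between submovements.
import numpy as np

def segment_submovements(x, y, dt_ms=10.0, speed_thresh=0.05):
    """Return (start, end) sample indices of submovements, where a
    submovement is a maximal run of samples with speed above threshold."""
    dx = np.diff(x)
    dy = np.diff(y)
    speed = np.hypot(dx, dy) / dt_ms        # pixels per ms between samples
    moving = speed > speed_thresh
    submovements = []
    start = None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            submovements.append((start, i))
            start = None
    if start is not None:
        submovements.append((start, len(moving)))
    return submovements, speed

# Synthetic example: two bursts of movement separated by a pause.
x = np.concatenate([np.linspace(0, 100, 100),
                    np.full(100, 100.0),
                    np.linspace(100, 180, 100)])
y = np.zeros_like(x)
subs, speed = segment_submovements(x, y)
print("number of submovements:", len(subs))
for s, e in subs:
    print(f"  samples {s}-{e}, peak speed {speed[s:e].max():.3f} px/ms")
```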
Abstract:
The authors demonstrate four real-time reactive responses to movement in everyday scenes using an active head/eye platform. They first describe the design and realization of a high-bandwidth four-degree-of-freedom head/eye platform and visual feedback loop for the exploration of motion processing within active vision. The vision system divides processing into two scales and two broad functions. At a coarse, quasi-peripheral scale, detection and segmentation of new motion occur across the whole image; at a fine scale, tracking of already detected motion takes place within a foveal region. Several simple coarse-scale motion sensors, which run concurrently at 25 Hz with latencies around 100 ms, are detailed. The use of these sensors to drive the following real-time responses is discussed: (1) head/eye saccades to moving regions of interest; (2) a panic response to looming motion; (3) an opto-kinetic response to continuous motion across the image; and (4) smooth pursuit of a moving target using motion alone.
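A toy sketch of the coarse-scale idea, under the assumption of an ordinary webcam rather than the authors' dedicated platform: downsample frames, difference them, and take the centroid of the changed region as the target of a "saccade". The camera index, thresholds, and coarse resolution are all hypothetical choices for illustration.

```python
# Toy coarse-scale motion sensor driving a simulated saccade target.
# Not the paper's hardware or controllers; purely illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                         # assumed camera index
prev = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Coarse, quasi-peripheral scale: heavy downsampling of the image.
    coarse = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (64, 48))
    if prev is not None:
        diff = cv2.absdiff(coarse, prev).astype(np.float32)
        if diff.max() > 20:                       # any significant motion?
            ys, xs = np.nonzero(diff > 20)
            cx, cy = xs.mean(), ys.mean()         # centroid of moving region
            # The real system would command a head/eye saccade here;
            # this sketch just reports the target in coarse-image coords.
            print(f"saccade target (coarse coords): ({cx:.1f}, {cy:.1f})")
    prev = coarse
    if cv2.waitKey(40) & 0xFF == 27:              # ~25 Hz loop, Esc to quit
        break

cap.release()
```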
Abstract:
Motor imagery, passive movement, and movement observation have been suggested to activate the sensorimotor system without overt movement. The present study investigated these three covert movement modes together with overt movement in a within-subject design to allow for a fine-grained comparison of their ability to activate the sensorimotor system, i.e. the premotor, primary motor, and somatosensory cortices. For this, 21 healthy volunteers underwent functional magnetic resonance imaging (fMRI). In addition, we explored the ability of the different covert movement modes to activate the sensorimotor system in a pilot study of 5 stroke patients suffering from chronic severe hemiparesis. Results demonstrated that while all covert movement modes activated sensorimotor areas, there were profound differences between modes and between healthy volunteers and patients. In healthy volunteers, the pattern of neural activation in overt execution was best resembled by passive movement, followed by motor imagery, and lastly by movement observation. In patients, attempted overt execution was best resembled by motor imagery, followed by passive movement, and lastly by movement observation. Our results indicate that for severely hemiparetic stroke patients motor imagery may be the preferred way to activate the sensorimotor system without overt behavior. In addition, the clear differences between the covert movement modes point to the need for within-subject comparisons.
Abstract:
Recent evidence suggests that the mirror neuron system responds to the goals of actions, even when the end of the movement is hidden from view. To investigate whether this predictive ability might be based on the detection of early differences between actions with different outcomes, we used electromyography (EMG) and motion tracking to assess whether two actions with different goals (grasp to eat and grasp to place) differed from each other in their initial reaching phases. In a second experiment, we then tested whether observers could detect early differences and predict the outcome of these movements, based on seeing only part of the actions. Experiment 1 revealed early kinematic differences between the two movements, with grasp-to-eat movements characterised by an earlier peak acceleration and a different grasp position compared to grasp-to-place movements. There were also significant differences in forearm muscle activity in the reaching phase of the two actions. The behavioural data arising from Experiments 2a and 2b indicated that observers are not able to predict whether an object is going to be brought to the mouth or placed until after the grasp has been completed. This suggests that the early kinematic differences are either not visible to observers, or that they are not used to predict the end-goals of actions. These data are discussed in the context of the mirror neuron system.
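For readers unfamiliar with kinematic measures such as "earlier peak acceleration", the short sketch below shows how peak-acceleration timing could be extracted from a tracked position trace by numerical differentiation. The 200 Hz sampling rate and the synthetic reach profile are assumptions, not the study's recordings or settings.

```python
# Illustrative extraction of peak-acceleration timing from a 1-D position
# trace, as one might obtain from motion tracking (synthetic data only).
import numpy as np

fs = 200.0                          # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)     # 1-second reach
# Synthetic minimum-jerk-like reach profile, 30 cm amplitude.
tau = t / t[-1]
pos = 0.30 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

vel = np.gradient(pos, 1.0 / fs)    # numerical differentiation
acc = np.gradient(vel, 1.0 / fs)

i_peak = int(np.argmax(acc))
print(f"peak acceleration {acc[i_peak]:.2f} m/s^2 at {t[i_peak]*1000:.0f} ms")
```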
Abstract:
Consistent with a negativity bias account, neuroscientific and behavioral evidence demonstrates modulation of even early sensory processes by unpleasant, potentially threat-relevant information. The aim of this research is to assess the extent to which pleasant and unpleasant visual stimuli presented extrafoveally capture attention and impact eye movement control. We report an experiment examining deviations in saccade metrics in the presence of emotional image distractors that are close to a nonemotional target. We additionally manipulate saccade latency to test when the emotional distractor has its biggest impact on oculomotor control. The results demonstrate that saccade landing position was pulled toward unpleasant distractors, and that this pull was due to quick saccade responses. Overall, these findings support a negativity bias account of early attentional control and highlight the need to consider the time course of motivated attention when affect is implicit.
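One way such a distractor "pull" could be quantified (a sketch under assumptions, not the authors' analysis) is to project each saccade's landing error onto the target-to-distractor direction and compare short- versus long-latency saccades via a median split. All column values below are synthetic and the median split is an illustrative choice.

```python
# Illustrative saccade-landing "pull" analysis with fabricated trial data.
import numpy as np

def distractor_pull(land_xy, target_xy, distractor_xy):
    """Signed component of landing error along the target->distractor axis
    (positive = landing pulled toward the distractor)."""
    err = land_xy - target_xy
    direction = distractor_xy - target_xy
    direction = direction / np.linalg.norm(direction, axis=-1, keepdims=True)
    return np.sum(err * direction, axis=-1)

rng = np.random.default_rng(0)
n = 100                                              # synthetic trials
target = np.tile([10.0, 0.0], (n, 1))                # degrees of visual angle
distractor = target + rng.normal([0.0, 3.0], 0.2, size=(n, 2))
latency = rng.uniform(120, 300, n)                   # ms
# Fabricated landings in which fast saccades deviate more toward distractor.
pull_true = np.where(latency < 200, 0.8, 0.2)
landing = target + np.column_stack([rng.normal(0, 0.3, n),
                                    pull_true + rng.normal(0, 0.3, n)])

pull = distractor_pull(landing, target, distractor)
fast = latency < np.median(latency)
print(f"mean pull, fast saccades: {pull[fast].mean():.2f} deg")
print(f"mean pull, slow saccades: {pull[~fast].mean():.2f} deg")
```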
Abstract:
Movements away from the natal or home territory are important to many ecological processes, including gene flow, population regulation, and disease epidemiology, yet quantitative data on these behaviors are lacking. Red foxes exhibit 2 periods of extraterritorial movements: when an individual disperses and when males search neighboring territories for extrapair copulations during the breeding season. Using radiotracking data collected at 5-min interfix intervals, we compared the movement parameters, including distance moved, speed of movement, and turning angles, of dispersal and reproductive movements to those of normal territorial movements; the instantaneous separation distances of dispersal and extraterritorial movements from the movements of resident adults; and the frequency of locations within the 95%, 60%, and 30% harmonic mean isopleths of adult fox home territories to randomly generated fox movements. Foxes making reproductive movements traveled farther than when undertaking other types of movement, and dispersal movements were straighter. Reproductive and dispersal movements were faster than territorial movements and also differed in intensity of search and thoroughness. Foxes making dispersal movements avoided direct contact with territorial adults and moved through peripheral areas of territories. The converse was true for reproductive movements. Although similar in some basic characteristics, dispersal and reproductive movements are fundamentally different both behaviorally and spatially and are likely to have different ultimate purposes and contrasting effects on spatial processes such as disease transmission.
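For concreteness, the sketch below computes the kind of per-step movement parameters the abstract mentions (distance moved, speed, turning angle) from a sequence of fixes taken at 5-minute intervals. The coordinate system (metres) and the example fixes are assumptions; this is not the authors' dataset or code.

```python
# Illustrative movement parameters from radiotracking fixes at 5-min intervals.
import numpy as np

def movement_parameters(x, y, interval_min=5.0):
    """Per-step distance (m), speed (m/min), and turning angle (radians,
    signed change of heading between successive steps)."""
    dx = np.diff(x)
    dy = np.diff(y)
    step = np.hypot(dx, dy)
    speed = step / interval_min
    heading = np.arctan2(dy, dx)
    # Wrap heading changes into (-pi, pi].
    turn = np.angle(np.exp(1j * np.diff(heading)))
    return step, speed, turn

# Hypothetical sequence of fixes (metres), not real fox data.
x = np.array([0.0, 120.0, 260.0, 300.0, 290.0])
y = np.array([0.0,  40.0,  60.0, 180.0, 330.0])
step, speed, turn = movement_parameters(x, y)
print("step distances (m):  ", np.round(step, 1))
print("speeds (m/min):      ", np.round(speed, 1))
print("turning angles (deg):", np.round(np.degrees(turn), 1))
```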