36 results for The Gaze
Abstract:
We measured the movements of soccer players heading a football in a fully immersive virtual reality environment. In mid-flight the ball's trajectory was altered from its normal quasi-parabolic path to a linear one, producing a jump in the rate of change of the angle of elevation of gaze (α) from player to ball. One reaction time later the players adjusted their speed so as to increase the rate of change of α when the disturbance had reduced it, and to reduce it when the disturbance had increased it. Since the result of the player's movement was to regain a rate of change close to its value before the disturbance, the data suggest that the players have an expectation of, and memory for, the pattern that the rate of change of α will follow during the flight. The results support the general claim that players intercepting balls use servo control strategies and are consistent with the particular claim of Optic Acceleration Cancellation theory that the servo strategy allows α to increase at a steadily decreasing rate.
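The Optic Acceleration Cancellation strategy described in this abstract can be sketched numerically. In the following illustrative Python fragment (the launch speed, launch angle and the fielder's starting distance are invented for the example, and air drag is ignored), a fielder who runs so that tan α grows at a constant rate, which makes α itself rise at a steadily decreasing rate, necessarily arrives at the landing point just as the ball comes down:

```python
import math

g = 9.81                               # gravity (m/s^2)
v0, theta = 20.0, math.radians(60)     # hypothetical launch speed and angle
vy, vx = v0 * math.sin(theta), v0 * math.cos(theta)
t_flight = 2 * vy / g                  # flight time of a drag-free ball

d0 = 25.0            # fielder's assumed initial distance from the launch point (m)
c = vy / d0          # growth rate of tan(alpha), fixed by the starting geometry

def fielder_distance(t):
    """Horizontal distance from fielder to ball that keeps tan(alpha) = c * t,
    i.e. gaze elevation alpha rising at a steadily decreasing rate."""
    # ball height h(t) = vy*t - g*t^2/2, and tan(alpha) = h(t)/d, so d = h/(c*t)
    return (vy - 0.5 * g * t) / c

# Following the strategy brings the fielder onto the ball at the moment it lands:
print(round(abs(fielder_distance(t_flight)), 6))   # -> 0.0 (interception)
```

The point of the sketch is that the fielder never needs to predict the landing spot explicitly: holding the angular pattern of α is sufficient, which is what makes the servo interpretation attractive.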
Abstract:
Background: Shifting gaze and attention ahead of the hand is a natural component in the performance of skilled manual actions. Very few studies have examined the precise co-ordination between the eye and hand in children with Developmental Coordination Disorder (DCD). Methods: This study directly assessed the maturity of eye-hand co-ordination in children with DCD. A double-step pointing task was used to investigate the coupling of the eye and hand in 7-year-old children with and without DCD. Sequential targets were presented on a computer screen, and eye and hand movements were recorded simultaneously. Results: There were no differences between typically developing (TD) and DCD groups when completing fast single-target tasks. There were very few differences in the completion of the first movement in the double-step tasks, but differences did occur during the second sequential movement. One factor appeared to be the propensity for the DCD children to delay their hand movement until some period after the eye had landed on the target. This resulted in a marked increase in eye-hand lead during the second movement, disrupting the close coupling and leading to a slower and less accurate hand movement among children with DCD. Conclusions: In contrast to skilled adults, both groups of children preferred to foveate the target prior to initiating a hand movement if time allowed. The TD children, however, were more able to reduce this foveation period and shift towards a feedforward mode of control for hand movements. The children with DCD persevered with a look-then-move strategy, which led to an increase in error. For the group of DCD children in this study, there was no evidence of a problem in speed or accuracy of simple movements, but there was a difficulty in concatenating the sequential shifts of gaze and hand required for the completion of everyday tasks or typical assessment items.
Abstract:
During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
Abstract:
The efficacy of explicit and implicit learning paradigms was examined during the very early stages of learning the perceptual-motor anticipation task of predicting ball direction from temporally occluded footage of soccer penalty kicks. In addition, the effect of instructional condition on point-of-gaze during learning was examined. A significant improvement in horizontal prediction accuracy was observed in the explicit learning group; however, similar improvement was evident in a placebo group who watched footage of soccer matches. Only the explicit learning intervention resulted in changes in eye movement behaviour and increased awareness of relevant postural cues. Results are discussed in terms of methodological and practical issues regarding the employment of implicit perceptual training interventions.
Abstract:
This article explores young infants' ability to learn new words in situations providing tightly controlled social and salience cues to their reference. Four experiments investigated whether, given two potential referents, 15-month-olds would attach novel labels to (a) an image toward which a digital recording of a face turned and gazed, (b) a moving image versus a stationary image, (c) a moving image toward which the face gazed, and (d) a gazed-on image versus a moving image. Infants successfully used the recorded gaze cue to form new word-referent associations and also showed learning in the salience condition. However, their behavior in the salience condition and in the experiments that followed suggests that, rather than basing their judgments of the words' reference on the mere presence or absence of the referent's motion, infants were strongly biased to attend to the consistency with which potential referents moved when a word was heard.
Abstract:
A desktop tool for replay and analysis of gaze-enhanced multiparty virtual collaborative sessions is described. We linked three CAVE(TM)-like environments, creating a multiparty collaborative virtual space in which avatars are animated with 3D gaze as well as head and hand motions in real time. Log files are recorded for subsequent playback and analysis using the proposed software tool. During replay the user can rotate the viewpoint and navigate in the simulated 3D scene. The playback mechanism relies on multiple distributed log files captured at every site. This structure enables an observer to experience latencies of movement and information transfer for every site, which is important for conversation analysis. Playback uses an event-replay algorithm, modified to allow fast traversal of the scene by selective rendering of nodes and to simulate fast random access. The tool's analysis module can show each participant's 3D gaze points and the areas where gaze has been concentrated.
Abstract:
Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, and they constantly dart to and fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals and can indicate emotional responses before the viewer becomes aware of them. In this paper we discuss a method of tracking a user's eye movements, and use these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
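The core of identifying a viewed virtual object from a tracked gaze direction is a ray cast from the eye into the scene. The Python sketch below illustrates the idea (the object names, bounding-sphere representation and all coordinates are hypothetical; the paper does not specify its scene test): it returns the nearest object whose bounding sphere the gaze ray intersects.

```python
import math

def gaze_target(origin, direction, objects):
    """Return the id of the nearest object hit by the gaze ray, or None.
    `direction` is assumed to be a unit vector; `objects` maps an id to a
    (center, radius) bounding sphere, a common simplification."""
    best_id, best_t = None, math.inf
    for obj_id, (center, radius) in objects.items():
        # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t >= 0
        oc = [o - c for o, c in zip(origin, center)]
        b = 2 * sum(d * v for d, v in zip(direction, oc))
        c_term = sum(v * v for v in oc) - radius ** 2
        disc = b * b - 4 * c_term          # quadratic 'a' is 1 for a unit ray
        if disc < 0:
            continue                        # ray misses this sphere
        t = (-b - math.sqrt(disc)) / 2      # nearer of the two intersections
        if 0 <= t < best_t:
            best_id, best_t = obj_id, t
    return best_id

# Hypothetical scene: looking straight down -z from the origin
scene = {"cube": ((0, 0, -5), 1.0), "sphere": ((3, 0, -5), 1.0)}
print(gaze_target((0, 0, 0), (0, 0, -1), scene))   # -> cube
```

Taking the nearest hit rather than any hit matters: an object partially occluded by a closer one should not be reported as the gaze target.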
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the access grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar. The eye-gaze behavior of that avatar had previously been recorded from a user with the use of a head-mounted eye tracker. The first experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head gaze being viewed and also with eye- and head-gaze data being displayed. The results from the experiment show that eye gaze is of vital importance to subjects in correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye-tracker would be required. This was examined by testing subjects' ability to identify where an avatar was looking from their eye direction alone, or by eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono-viewing of the scene, with the subjects being asked to identify where the avatar was looking.
This experiment showed that there was no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
Abstract:
The essay asserts that, since pioneering work in the 1970s and 80s (in Screen in particular), the study of classical Hollywood cinema has failed adequately to acknowledge and understand the role of spectacle therein. This essay outlines theoretical but, even more, practical understandings of particular kinds of spectacle; they are susceptible to the practice of close analysis. Seeking to discuss spectacle in precise terms and in particular contexts, I define two kinds of spectacle associated with the historical film: ‘the decor of history’ and ‘the spectacular vista’. The example of Gone with the Wind illustrates the interrelationship between these two kinds of spectacle and their associations with particular ideas of femininity and masculinity. This gendering of spectacle is related to ‘the historical gaze’, a performative gesture that exemplifies the wider rhetoric of historical films, in their seeking to address the historical knowledge of the film spectator and to uphold a vision of history as being driven by powerful men, aware of their own destiny. Over the course of the three famous hilltop scenes in Gone with the Wind, one can plot Scarlett O'Hara's increased access to this kind of foresight and fortitude coded as ‘masculine’. This character arc can also be traced through Scarlett's shifting place within the film's use of spectacle: she begins the film wholly preoccupied with the domestic world of lavish parties and beautiful gowns; however, after her encounter with cataclysmic history visualized as a vast, terrible spectacle (the fall of Atlanta), Scarlett assumes the role occupied by her broken and emasculated father.
Abstract:
This paper presents a review of the design and development of the Yorick series of active stereo camera platforms and their integration into real-time closed loop active vision systems, whose applications span surveillance, navigation of autonomously guided vehicles (AGVs), and inspection tasks for teleoperation, including immersive visual telepresence. The mechatronic approach adopted for the design of the first system, including head/eye platform, local controller, vision engine, gaze controller and system integration, proved to be very successful. The design team comprised researchers with experience in parallel computing, robot control, mechanical design and machine vision. The success of the project has generated sufficient interest to sanction a number of revisions of the original head design, including the design of a lightweight compact head for use on a robot arm, and the further development of a robot head to look specifically at increasing visual resolution for visual telepresence. The controller and vision processing engines have also been upgraded, to include the control of robot heads on mobile platforms and control of vergence through tracking of an operator's eye movement. This paper details the hardware development of the different active vision/telepresence systems.
Abstract:
The objective of a Visual Telepresence System is to provide the operator with a high fidelity image from a remote stereo camera pair linked to a pan/tilt device such that the operator may reorient the camera position by use of head movement. Systems such as these which utilise virtual reality style helmet mounted displays have a number of limitations. The geometry of the camera positions and of the displays is generally fixed and is most suitable only for viewing elements of a scene at a particular distance. To address such limitations, a prototype system has been developed where the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust the display system as well as the cameras and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance and accuracy of the system are assessed with respect to eye movement.
Abstract:
In an essay from 1910 the architect and critic Adolf Loos distinguishes between buildings that are for everyday practical use and buildings made for contemplation. The latter type, he asserts, may be considered as both architecture and works of art. He refers to only two types of contemplative architecture, namely the tomb and the monument. There are certain paintings made in the early part of the twentieth century that do not observe this separation, such as certain works by Hopper and de Chirico. Here the commonplace is simultaneously experienced in the way a tomb might be. This mortifying gaze condemns building by inducing a sense that space has become inhospitable and alienating. It could be argued that these and other paintings made around this time, such as Carlo Carrà's The Abandoned House (1916), are like premonitions of what will occur when building observes the prescription laid down by Loos and omits an aesthetic dimension. However, it might also suggest that buildings need their tombs, or at least some space that is not completely assimilable by the daily, practical and functional needs of an inhabitant.
Abstract:
Visual telepresence systems which utilize virtual reality style helmet mounted displays have a number of limitations. The geometry of the camera positions and of the display is fixed and is most suitable only for viewing elements of a scene at a particular distance. In such a system, the operator's ability to gaze around without use of head movement is severely limited. A trade-off must be made between a poor viewing resolution and a narrow field of view. To address these limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores the reasons why it is necessary to actively adjust both the display system and the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance of the system is assessed against a fixed camera/display system when operators are assigned basic tasks involving depth and distance/size perception. The sensitivity to variations in transient performance of the display and camera vergence is also assessed.
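The distance dependence that motivates dynamic convergence control in such systems can be seen from simple geometry. In this Python sketch (the baseline separation is an assumed figure, roughly an adult interpupillary distance, not a value from the paper), the total vergence angle needed for two cameras to fixate a point directly ahead grows steeply as the point approaches:

```python
import math

BASELINE = 0.065   # hypothetical inter-camera separation (m)

def vergence_angle(depth):
    """Total vergence angle (radians) for two cameras symmetrically
    converging on a point `depth` metres ahead of the baseline midpoint."""
    # each camera rotates inward by atan((BASELINE/2) / depth)
    return 2 * math.atan2(BASELINE / 2, depth)

# Nearby fixation points demand far more vergence than distant ones, which
# is why a fixed camera/display geometry only suits one viewing distance:
for d in (0.3, 1.0, 5.0):
    print(f"{d:4.1f} m -> {math.degrees(vergence_angle(d)):5.2f} deg")
```

A fixed rig bakes in one such angle, so any scene element away from that depth appears with the wrong disparity; driving the camera and display geometry from the operator's measured eye vergence removes that constraint.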
Abstract:
The authors present an active vision system which performs a surveillance task in everyday dynamic scenes. The system is based around simple, rapid motion processors and a control strategy which uses both position and velocity information. The surveillance task is defined in terms of two separate behavioral subsystems, saccade and smooth pursuit, which are demonstrated individually on the system. It is shown how these and other elementary responses to 2D motion can be built up into behavior sequences, and how judicious close cooperation between vision and control results in smooth transitions between the behaviors. These ideas are demonstrated by an implementation of a saccade to smooth pursuit surveillance system on a high-performance robotic hand/eye platform.
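The two behavioural subsystems can be caricatured in a few lines of Python. In this sketch the threshold, gain, timestep and one-dimensional geometry are all invented for illustration (the actual system controls a 2D robotic head from position and velocity estimates): a saccade snaps gaze onto a distant target, and smooth pursuit then matches the target's velocity with a small positional correction.

```python
SACCADE_THRESHOLD = 2.0   # deg; error above this triggers a saccade (hypothetical)

def gaze_command(gaze_pos, target_pos, target_vel, dt):
    """One control step combining the two behaviours."""
    error = target_pos - gaze_pos
    if abs(error) > SACCADE_THRESHOLD:
        return target_pos                           # saccade: ballistic jump
    # smooth pursuit: follow target velocity plus a proportional correction
    return gaze_pos + (target_vel + 0.5 * error / dt) * dt

# Target drifting at 5 deg/s, starting 10 deg away from the gaze direction:
gaze, target, dt = 0.0, 10.0, 0.02
for _ in range(50):
    gaze = gaze_command(gaze, target, 5.0, dt)      # first step saccades
    target += 5.0 * dt                              # then pursuit locks on
print(round(abs(target - gaze), 3))                 # -> 0.0
```

The hand-off between the behaviours is just the threshold test, which mirrors the paper's point that judicious cooperation between vision and control yields smooth transitions between saccade and pursuit.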