56 results for gaze

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

Recent studies have identified a distributed network of brain regions thought to support cognitive reappraisal processes underlying emotion regulation in response to affective images, including parieto-temporal regions and lateral/medial regions of prefrontal cortex (PFC). A number of these commonly activated regions are also known to underlie visuospatial attention and oculomotor control, which raises the possibility that people use attentional redeployment rather than, or in addition to, reappraisal as a strategy to regulate emotion. We predicted that a significant portion of the observed variance in brain activation during emotion regulation tasks would be associated with differences in how participants visually scan the images while regulating their emotions. We recorded brain activation using fMRI and quantified patterns of gaze fixation while participants increased or decreased their affective response to a set of affective images. fMRI results replicated previous findings on emotion regulation with regulation differences reflected in regions of PFC and the amygdala. In addition, our gaze fixation data revealed that when regulating, individuals changed their gaze patterns relative to a control condition. Furthermore, this variation in gaze fixation accounted for substantial amounts of variance in brain activation. These data point to the importance of controlling for gaze fixation in studies of emotion regulation that use visual stimuli.
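
As an illustration of the abstract's central point (that gaze variation can absorb variance otherwise attributed to regulation), here is a minimal sketch in Python on simulated data; the variable names and effect sizes are invented and do not come from the study.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 120

    # Simulated per-trial measures (illustrative, not the study's data):
    # regulation condition (0 = attend, 1 = regulate) and a gaze metric
    # such as mean fixation distance from the image's emotional region.
    condition = rng.integers(0, 2, n_trials)
    gaze_shift = rng.normal(0, 1, n_trials)
    activation = 0.5 * condition + 0.8 * gaze_shift + rng.normal(0, 1, n_trials)

    # Design matrix with and without the gaze covariate.
    X_full = np.column_stack([np.ones(n_trials), condition, gaze_shift])
    X_reduced = X_full[:, :2]

    def r_squared(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid.var() / y.var()

    # Variance explained by regulation alone vs. with gaze fixation
    # controlled; the gap is the share attributable to gaze.
    print(r_squared(X_reduced, activation), r_squared(X_full, activation))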

Relevance:

20.00%

Publisher:

Abstract:

Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback in guiding the hand to the target (feedback loop); shifting gaze generates ocular proprioception, which can be used to update a movement (feedback-feedforward); or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback on eye-to-head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye-movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.
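
A minimal sketch of the buffering idea the abstract ends with, assuming a fixed 160 ms delay inside the estimated 120-200 ms window; the sampling interval and gaze trajectory are invented for illustration.

    from collections import deque

    # Hypothetical feedforward sketch: the hand controller consumes an
    # efference-copy signal of eye position buffered by ~160 ms, so the
    # hand can keep aiming at where the eye *was* even after gaze has
    # jumped ahead to the next target in a double-step trial.
    BUFFER_MS = 160          # assumed delay within the 120-200 ms window
    SAMPLE_MS = 10           # assumed sampling interval

    buffer = deque(maxlen=BUFFER_MS // SAMPLE_MS)

    def hand_target(eye_position):
        """Return the buffered gaze sample the hand should steer toward."""
        buffer.append(eye_position)
        # Until the buffer fills, fall back on the oldest sample available.
        return buffer[0]

    # Gaze steps to a second target mid-sequence (a double-step trial):
    trace = [(0.0, 0.0)] * 10 + [(10.0, 0.0)] * 20 + [(10.0, 8.0)] * 20
    for t, eye in enumerate(trace):
        hand = hand_target(eye)
        if t == 32:              # just after gaze jumps again, the hand
            print(eye, hand)     # still aims at the buffered earlier target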

Relevance:

20.00%

Publisher:

Abstract:

A desktop tool for replay and analysis of gaze-enhanced multiparty virtual collaborative sessions is described. We linked three CAVE(TM)-like environments, creating a multiparty collaborative virtual space where avatars are animated with 3D gaze as well as head and hand motions in real time. Log files are recorded for subsequent playback and analysis using the proposed software tool. During replay, the user can rotate the viewpoint and navigate in the simulated 3D scene. The playback mechanism relies on multiple distributed log files captured at every site. This structure enables an observer to experience latencies of movement and information transfer for every site, which is important for conversation analysis. Playback uses an event-replay algorithm, modified to allow fast traversal of the scene by selective rendering of nodes and to simulate fast random access. The tool's analysis module can show each participant's 3D gaze points and areas where gaze has been concentrated.
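
The event-replay mechanism described (merging per-site logs while preserving each site's observed latencies, with coarse random access by skipping to a start time) might look like the following sketch; the record format is an assumption, not the tool's actual log schema.

    import heapq

    # Minimal sketch of replaying multiple per-site logs, assuming each
    # log is a time-sorted list of (timestamp_ms, site, event) records as
    # captured locally. Merging by timestamp preserves per-site latencies,
    # which the abstract notes matter for conversation analysis.
    def replay(logs, start_ms=0):
        merged = heapq.merge(*logs)
        for timestamp, site, event in merged:
            if timestamp < start_ms:      # crude random access: skip to
                continue                  # the requested start time
            yield timestamp, site, event

    site_a = [(0, "A", "gaze"), (40, "A", "head")]
    site_b = [(25, "B", "gaze"), (60, "B", "hand")]
    for record in replay([site_a, site_b], start_ms=20):
        print(record)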

Relevance:

20.00%

Publisher:

Abstract:

Participants' eye gaze is generally not captured or represented in immersive collaborative virtual environment (ICVE) systems. We present EyeCVE, which uses mobile eye-trackers to drive the gaze of each participant's virtual avatar, thus supporting remote mutual eye contact and awareness of others' gaze in a perceptually unfragmented shared virtual workspace. We detail trials in which participants took part in three-way conferences between remote CAVE(TM) systems linked via EyeCVE. Eye-tracking data were recorded and used to evaluate interaction, confirming the system's support for the use of gaze as a communicational and management resource in multiparty conversational scenarios. We point toward subsequent investigation of eye tracking in ICVEs for enhanced remote social interaction and analysis.
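
One plausible reading of how a tracker sample becomes avatar gaze is sketched below: the gaze vector, reported in the wearer's head frame, is rotated into the shared world frame by the tracked head orientation. This is an illustrative reconstruction, not EyeCVE's actual code, and it simplifies head pose to yaw only.

    import numpy as np

    # Illustrative sketch: an avatar's eye direction is the tracker's
    # gaze vector (head frame) rotated into world space by head yaw.
    def yaw_matrix(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def avatar_gaze_world(gaze_in_head, head_yaw):
        """Rotate a head-frame gaze vector into the shared world frame."""
        v = yaw_matrix(head_yaw) @ np.asarray(gaze_in_head, float)
        return v / np.linalg.norm(v)

    # Wearer looks 20 degrees left of straight ahead, head turned 90 deg.
    print(avatar_gaze_world([np.sin(np.radians(-20)), 0,
                             np.cos(np.radians(-20))], np.radians(90)))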

Relevance:

20.00%

Publisher:

Abstract:

Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first telepresence system that allows people in different physical locations not only to see what each other are doing but to follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants that reproduce not only body, head, and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze, as well as activity and gesture, to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough and often enough to interpret their focus during a multi-way interaction, along with communicating other verbal and non-verbal language. The spatial challenge met was to maintain communicational eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical and especially temporal characteristics of the system.
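
The spatial-alignment challenge amounts to registering each site's tracking coordinates against a shared frame of reference, roughly as in this sketch; the transform parameters are placeholders, not values from the system.

    import numpy as np

    # Hedged sketch of spatial alignment: each site registers its local
    # tracking coordinates against a shared reference frame, so a remote
    # participant's position (and hence gaze origin) can be re-expressed
    # locally. Parameters below are illustrative placeholders.
    def to_shared_frame(p_local, yaw, origin):
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        return R @ np.asarray(p_local, float) + np.asarray(origin, float)

    # Site B's tracker origin sits 2 m along x in the shared frame and
    # is rotated 180 degrees relative to it.
    print(to_shared_frame([0.5, 1.7, 0.0], np.pi, [2.0, 0.0, 0.0]))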

Relevance:

20.00%

Publisher:

Abstract:

Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, and they will constantly dart to and fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals and can indicate emotional responses before the viewer becomes aware of them. In this paper we discuss a method of tracking a user's eye movements, and use these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
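
Identifying the viewed virtual object from a gaze ray is essentially a nearest-hit ray cast; a minimal sketch, assuming objects are approximated by bounding spheres and using invented scene contents:

    import numpy as np

    # Cast the gaze ray into the scene and take the nearest intersected
    # bounding sphere; scene contents and names are illustrative.
    def first_hit(origin, direction, spheres):
        origin, d = np.asarray(origin, float), np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        best = (np.inf, None)
        for name, center, radius in spheres:
            oc = np.asarray(center, float) - origin
            t = oc @ d                    # distance to closest approach
            miss2 = oc @ oc - t * t       # squared miss distance
            if t > 0 and miss2 <= radius * radius and t < best[0]:
                best = (t, name)
        return best[1]

    scene = [("teapot", (0, 1.5, 3), 0.3), ("lamp", (1, 2.0, 5), 0.4)]
    print(first_hit((0, 1.6, 0), (0, -0.02, 1), scene))   # -> teapot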

Relevance:

20.00%

Publisher:

Abstract:

In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar. The eye-gaze behavior of that avatar had previously been recorded from a user with the use of a head-mounted eye tracker. The first experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head gaze being viewed and with eye- and head-gaze data being displayed. The results from the experiment show that eye gaze is of vital importance for subjects correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye-tracker would be required. This was examined by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with the subjects being asked to identify where the avatar was looking. This experiment showed that there was no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
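
The role of convergence in the second experiment can be made concrete: with both eye rays available, the fixation point can be triangulated as the midpoint of their closest approach, whereas a single ray gives direction but no depth. A sketch with illustrative geometry (6.4 cm interocular distance, fixation 1 m ahead):

    import numpy as np

    # Triangulate the binocular fixation point as the midpoint of the
    # closest approach between the two eye rays.
    def fixation_point(o_left, d_left, o_right, d_right):
        o1, o2 = np.asarray(o_left, float), np.asarray(o_right, float)
        d1 = np.asarray(d_left, float); d1 /= np.linalg.norm(d1)
        d2 = np.asarray(d_right, float); d2 /= np.linalg.norm(d2)
        w = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b               # ~0 for parallel rays
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2

    # Eyes 6.4 cm apart, both converging on a point 1 m straight ahead.
    print(fixation_point([-0.032, 0, 0], [0.032, 0, 1],
                         [0.032, 0, 0], [-0.032, 0, 1]))   # ~ (0, 0, 1)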

Relevance:

20.00%

Publisher:

Abstract:

For efficient collaboration between participants, eye gaze is seen as being critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced, as opposed to approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements being displayed, while the eyes remained static, and with eye-gaze and head-movement information being displayed. The results from the experiment show that eye gaze is of vital importance for subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work that is now being undertaken following the positive results from the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use and the software and techniques that were developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
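
The calibration step mentioned (turning raw tracker output into eye gaze in the virtual world) is commonly done by fitting a mapping from raw pupil coordinates to known target angles; the sketch below uses a least-squares affine fit with invented calibration values, and is not the authors' procedure.

    import numpy as np

    # The user fixates a few known targets; an affine map from raw
    # tracker coordinates to gaze angles is fitted by least squares.
    raw = np.array([[0.1, 0.2], [0.8, 0.2], [0.1, 0.9],
                    [0.8, 0.9], [0.45, 0.55]])        # pupil x, y
    angles = np.array([[-15, 10], [15, 10], [-15, -10],
                       [15, -10], [0, 0]], float)     # degrees yaw, pitch

    A = np.column_stack([raw, np.ones(len(raw))])     # affine design
    coeffs, *_ = np.linalg.lstsq(A, angles, rcond=None)

    def calibrated_gaze(pupil_xy):
        """Map a raw tracker sample to (yaw, pitch) in degrees."""
        return np.array([*pupil_xy, 1.0]) @ coeffs

    print(calibrated_gaze([0.45, 0.55]))              # ~ (0, 0)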

Relevance:

20.00%

Publisher:

Abstract:

The objective of a visual telepresence system is to provide the operator with a high-fidelity image from a remote stereo camera pair linked to a pan/tilt device, such that the operator may reorient the camera position by use of head movement. Systems such as these, which utilise virtual-reality-style helmet-mounted displays, have a number of limitations. The geometry of the camera positions and of the displays is generally fixed and is most suitable only for viewing elements of a scene at a particular distance. To address such limitations, a prototype system has been developed where the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust the display system as well as the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image-processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance and accuracy of the system are assessed with respect to eye movement.
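
A rough sketch of the control idea: the vergence angle the camera pair (and matching displays) should adopt follows from the operator's fixation distance, and a mechanical servo can step toward it. The baseline, gain, and pure proportional control are assumptions for illustration, not the paper's control algorithm.

    import math

    BASELINE_M = 0.064        # assumed camera separation

    def target_vergence(fixation_distance_m):
        """Total vergence angle (radians) for a given fixation depth."""
        return 2 * math.atan(BASELINE_M / (2 * fixation_distance_m))

    def control_step(current, fixation_distance_m, gain=0.2):
        """One proportional step of the mechanical vergence servo."""
        return current + gain * (target_vergence(fixation_distance_m) - current)

    angle = 0.0
    for _ in range(30):                  # converge on a 0.5 m target
        angle = control_step(angle, 0.5)
    print(math.degrees(angle))           # ~7.3 degrees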

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Humans from an early age look longer at preferred stimuli, and also typically look longer at facial expressions of emotion, particularly happy faces. Atypical gaze patterns towards social stimuli are common in Autism Spectrum Conditions (ASC). However, it is unknown if gaze fixation patterns have any genetic basis. In this study, we tested if variations in the cannabinoid receptor 1 (CNR1) gene are associated with gaze duration towards happy faces. This gene was selected because CNR1 is a key component of the endocannabinoid system, involved in processing reward, and in our previous fMRI study we found that variations in CNR1 modulate the striatal response to happy (but not disgust) faces. The striatum is involved in guiding gaze to rewarding aspects of a visual scene. We aimed to validate and extend this result in another sample using a different technique (gaze tracking). METHODS: 30 volunteers (13 males, 17 females) from the general population observed dynamic emotion expressions on a screen while their eye movements were recorded. They were genotyped for the same four SNPs in the CNR1 gene tested in our earlier fMRI study. RESULTS: Two SNPs (rs806377 and rs806380) were associated with differential gaze duration for happy (but not disgust) faces. Importantly, the allelic groups associated with greater striatal response to happy faces in the fMRI study were associated with longer gaze duration for happy faces. CONCLUSIONS: These results suggest CNR1 variations modulate striatal function that underlies the perception of signals of social reward such as happy faces. This suggests CNR1 is a key element in the molecular architecture of perception of certain basic emotions. This may have implications for understanding neurodevelopmental conditions marked by atypical eye contact and facial emotion processing, such as ASC.
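
The shape of the association test (gaze duration for happy faces compared across allelic groups of a SNP) can be sketched as below on simulated data; the genotype labels, effect sizes, and use of a one-way ANOVA are illustrative assumptions, not the study's exact analysis.

    import numpy as np
    from scipy.stats import f_oneway

    # Compare simulated gaze duration for happy faces across allelic
    # groups of one SNP; all values here are invented for illustration.
    rng = np.random.default_rng(1)
    genotype = np.repeat(np.array(["AA", "AG", "GG"]), 10)   # n = 30
    effect = {"AA": 0.0, "AG": 0.3, "GG": 0.6}
    gaze_ms = np.array([900 + 150 * effect[g] for g in genotype])
    gaze_ms = gaze_ms + rng.normal(0, 60, size=30)

    groups = [gaze_ms[genotype == g] for g in ("AA", "AG", "GG")]
    print(f_oneway(*groups))             # F statistic and p value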

Relevance:

20.00%

Publisher:

Abstract:

Visual telepresence systems which utilize virtual-reality-style helmet-mounted displays have a number of limitations. The geometry of the camera positions and of the display is fixed and is most suitable only for viewing elements of a scene at a particular distance. In such a system, the operator's ability to gaze around without use of head movement is severely limited. A trade-off must be made between poor viewing resolution and a narrow field of view. To address these limitations, a prototype system has been developed where the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores the reasons why it is necessary to actively adjust both the display system and the cameras, and furthermore justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image-processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance of the system is assessed against a fixed camera/display system, with operators assigned basic tasks involving depth and distance/size perception. The sensitivity to variations in transient performance of the display and camera vergence is also assessed.
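
The resolution/field-width trade-off cited is simple arithmetic once a display's pixel count is fixed; the pixel count below is an assumed example value, not the prototype's specification.

    # Back-of-envelope version of the trade-off: with a fixed display of
    # 1280 horizontal pixels, widening the field of view proportionally
    # coarsens the angular resolution per pixel.
    PIXELS = 1280                         # assumed display width
    for fov_deg in (30, 60, 90):
        print(fov_deg, "deg FOV ->", fov_deg / PIXELS * 60, "arcmin/pixel")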