91 results for EYE-MOVEMENTS
Abstract:
Eye gaze is an important conversational resource that until now could only be supported across a distance if people were rooted to the spot. We introduce EyeCVE, the world's first tele-presence system that allows people in different physical locations not only to see what each other is doing but to follow each other's eyes, even when walking about. Projected into each space are avatar representations of remote participants that reproduce not only body, head and hand movements, but also those of the eyes. Spatial and temporal alignment of remote spaces allows the focus of gaze, as well as activity and gesture, to be used as a resource for non-verbal communication. The temporal challenge met was to reproduce eye movements quickly enough and often enough for their focus to be interpreted during a multi-way interaction, alongside other verbal and non-verbal communication. The spatial challenge met was to maintain communicational eye gaze while allowing free movement of participants within a virtually shared common frame of reference. This paper reports on the technical, and especially the temporal, characteristics of the system.
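As an illustration of the spatial alignment this abstract describes, here is a minimal sketch of mapping a remote participant's tracked head position and gaze direction into the local room's shared frame via a rigid transform; the transform and all values are illustrative assumptions, not EyeCVE's implementation.

```python
# A minimal sketch of spatially aligning remote spaces: a rigid transform
# (rotation about the vertical axis + translation) maps a remote
# participant's tracked head position and gaze direction into the local
# shared frame. All names and values are hypothetical.
import numpy as np

def make_rigid_transform(rotation_deg, translation):
    """Homogeneous 4x4 rigid transform: yaw rotation plus translation."""
    theta = np.radians(rotation_deg)
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    T[:3, 3] = translation
    return T

remote_to_local = make_rigid_transform(90.0, [2.0, 0.0, -1.0])

head_remote = np.array([0.5, 1.7, 0.3, 1.0])   # homogeneous position (m)
gaze_remote = np.array([0.0, -0.1, 1.0, 0.0])  # direction vector, w = 0

head_local = remote_to_local @ head_remote
gaze_local = remote_to_local @ gaze_remote
gaze_local /= np.linalg.norm(gaze_local[:3])

print(head_local[:3], gaze_local[:3])
```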
Abstract:
Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, constantly darting to and fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals, and can indicate emotional responses before the viewer becomes aware of them. In this paper we discuss a method of tracking a user's eye movements, and use these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
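A minimal sketch of the gaze-to-object step this abstract describes: cast a ray from the viewpoint along the measured gaze direction and report the nearest virtual object whose bounding sphere it intersects. The scene contents below are illustrative assumptions, not the authors' tool.

```python
# Identify the viewed virtual object by intersecting the gaze ray with
# each object's bounding sphere and keeping the nearest hit.
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to a sphere, or None if it misses."""
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0 else None

def viewed_object(origin, gaze_dir, scene):
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    hits = []
    for name, (center, radius) in scene.items():
        t = ray_sphere_hit(origin, gaze_dir, np.asarray(center, float), radius)
        if t is not None:
            hits.append((t, name))
    return min(hits)[1] if hits else None   # nearest hit wins

# Placeholder scene: object name -> (sphere center, radius), metres.
scene = {"teapot": ([0.0, 1.0, 2.0], 0.2), "lamp": ([1.0, 1.5, 3.0], 0.3)}
print(viewed_object(np.zeros(3), np.array([0.0, 0.45, 1.0]), scene))  # teapot
```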
Abstract:
For efficient collaboration between participants, eye gaze is seen as being critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced, as opposed to approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements displayed, while the eyes remained static, and with both eye-gaze and head-movement information displayed. The results from the experiment show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results of the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use and the software and techniques that were developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
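A minimal sketch of the calibration step mentioned at the end of this abstract: fitting an affine map from raw 2D eye-tracker output to known gaze angles collected while the user fixates calibration targets. Real systems, including the one described here, may use higher-order fits; everything below is an illustrative assumption.

```python
# Fit an affine map from raw tracker coordinates to gaze angles using
# least squares over five calibration points (placeholder values).
import numpy as np

raw = np.array([[0.2, 0.3], [0.8, 0.3], [0.2, 0.7], [0.8, 0.7], [0.5, 0.5]])
angles = np.array([[-15.0, 10.0], [15.0, 10.0], [-15.0, -10.0],
                   [15.0, -10.0], [0.0, 0.0]])  # degrees (azimuth, elevation)

A = np.hstack([raw, np.ones((len(raw), 1))])    # affine design matrix
coeffs, *_ = np.linalg.lstsq(A, angles, rcond=None)

def raw_to_gaze(sample):
    """Map one raw tracker sample to calibrated gaze angles (degrees)."""
    return np.append(sample, 1.0) @ coeffs

print(raw_to_gaze([0.65, 0.4]))   # -> approximately [7.5, 5.0]
```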
Abstract:
Perceptual multimedia quality is of paramount importance to the continued take-up and proliferation of multimedia applications: users will not use and pay for applications if they are perceived to be of low quality. Whilst distributed multimedia quality has traditionally been characterised by Quality of Service (QoS) parameters, these neglect the user's perspective on the issue of quality. In order to redress this shortcoming, we characterise the user's multimedia perspective using the Quality of Perception (QoP) metric, which encompasses not only a user's satisfaction with the quality of a multimedia presentation, but also his/her ability to analyse, synthesise and assimilate the informational content of multimedia. In recognition of the fact that monitoring eye movements offers insights into visual perception, as well as the associated attention mechanisms and cognitive processes, this paper reports the results of a study investigating the impact of differing multimedia presentation frame rates on user QoP and eye-path data. Our results show that the provision of higher frame rates, usually assumed to provide better multimedia presentation quality, does not significantly impact the median coordinate value of eye-path data. Moreover, higher frame rates do not significantly increase the level of participant information assimilation, although they do significantly improve overall user enjoyment and quality perception of the multimedia content being shown.
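For concreteness, a minimal sketch of the eye-path measure analysed in this study: the median (x, y) coordinate of a participant's eye path per frame-rate condition. The data values are fabricated placeholders showing the computation only, not results from the study.

```python
# Compute the median eye-path coordinate for each frame-rate condition.
import numpy as np

eye_paths = {                      # frame rate (fps) -> fixation coordinates
    5:  np.array([[512, 380], [530, 402], [498, 390], [505, 371]]),
    15: np.array([[508, 385], [525, 399], [500, 392], [511, 377]]),
    25: np.array([[515, 382], [528, 397], [495, 388], [509, 374]]),
}

for fps, path in eye_paths.items():
    mx, my = np.median(path, axis=0)
    print(f"{fps:>2} fps: median eye-path coordinate = ({mx:.0f}, {my:.0f})")
```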
Abstract:
Visual telepresence seeks to extend existing teleoperative capability by supplying the operator with a 3D interactive view of the remote environment. This is achieved through the use of a stereo camera platform which, through appropriate 3D display devices, provides a distinct image to each eye of the operator, and which is slaved directly from the operator's head and eye movements. However, the resolution within current head-mounted displays remains poor, thereby reducing the operator's visual acuity. This paper reports on the feasibility of incorporating eye tracking to increase resolution, and investigates the stability and control issues for such a system. Continuous-domain and discrete simulations are presented which indicate that eye tracking provides a stable feedback loop for tracking applications, though some empirical testing (currently being initiated) of such a system will be required to overcome the indicated stability problems associated with microsaccades of the human operator.
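A minimal sketch of the kind of discrete simulation this abstract refers to: a camera platform tracking the operator's gaze through a proportional feedback loop with one sample of measurement delay, run at several gains to probe stability. The gains and delay are illustrative assumptions, not the paper's parameters.

```python
# Discrete-time closed-loop tracking: the platform chases a step change
# in gaze angle through a proportional controller with delayed feedback.
import numpy as np

def simulate(gain, steps=60, delay=1):
    gaze = np.zeros(steps)
    gaze[10:] = 5.0                    # gaze steps 5 deg at sample 10
    platform = np.zeros(steps)
    for t in range(1, steps):
        error = gaze[t - delay] - platform[t - 1]   # delayed measurement
        platform[t] = platform[t - 1] + gain * error
    return platform

for gain in (0.3, 1.0, 1.8):
    p = simulate(gain)
    print(f"gain {gain}: final error = {5.0 - p[-1]:+.3f}, "
          f"max overshoot = {p.max() - 5.0:+.3f}")
```

Low gains converge smoothly; higher gains overshoot and oscillate, illustrating the stability margin the paper investigates.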
Abstract:
Autism spectrum conditions (autism) affect ~1% of the population and are characterized by deficits in social communication. Oxytocin has been widely reported to affect social-communicative function and its neural underpinnings. Here we report the first evidence that intranasal oxytocin administration improves a core problem that individuals with autism have in using eye contact appropriately in real-world social settings. A randomized double-blind, placebo-controlled, within-subjects design is used to examine how intranasal administration of 24 IU of oxytocin affects gaze behavior for 32 adult males with autism and 34 controls in a real-time interaction with a researcher. This interactive paradigm bypasses many of the limitations encountered with conventional static or computer-based stimuli. Eye movements are recorded using eye tracking, providing an objective measurement of looking patterns. The measure is shown to be sensitive to the reduced eye contact commonly reported in autism, with the autism group spending less time looking to the eye region of the face than controls. Oxytocin administration selectively enhanced gaze to the eyes in both the autism and control groups (transformed mean eye-fixation difference per second = 0.082; 95% CI: 0.025–0.14, P = 0.006). Within the autism group, oxytocin has the most effect on fixation duration in individuals with impaired levels of eye contact at baseline (Cohen's d = 0.86). These findings demonstrate that the potential benefits of oxytocin in autism extend to a real-time interaction, providing evidence of a therapeutic effect in a key aspect of social communication.
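A minimal sketch of the gaze measure underlying this study: the proportion of looking time falling inside an eye-region area of interest (AOI), plus a Cohen's d for a placebo-versus-oxytocin contrast. All data values are fabricated placeholders illustrating the computation, not the study's results.

```python
# Proportion of fixation time inside an eye-region AOI, and an effect
# size for a two-condition contrast.
import numpy as np

def eye_region_proportion(fixations, eye_aoi):
    """fixations: (x, y, duration_ms); eye_aoi: (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = eye_aoi
    total = sum(d for _, _, d in fixations)
    in_aoi = sum(d for x, y, d in fixations
                 if xmin <= x <= xmax and ymin <= y <= ymax)
    return in_aoi / total

def cohens_d(a, b):
    pooled = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return (np.mean(b) - np.mean(a)) / pooled

fixations = [(410, 220, 300), (415, 230, 250), (500, 480, 400)]
print(eye_region_proportion(fixations, (380, 200, 450, 260)))  # 0.579

placebo = np.array([0.21, 0.25, 0.18, 0.23])   # placeholder proportions
oxytocin = np.array([0.29, 0.33, 0.27, 0.31])
print(cohens_d(placebo, oxytocin))
```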
Abstract:
Compared to skilled adult readers, children typically make more fixations that are longer in duration, shorter saccades, and more regressions, thus reading more slowly (Blythe & Joseph, 2011). Recent attempts to understand the reasons for these differences have discovered some similarities (e.g., children and adults target their saccades similarly; Joseph, Liversedge, Blythe, White, & Rayner, 2009) and some differences (e.g., children’s fixation durations are more affected by lexical variables; Blythe, Liversedge, Joseph, White, & Rayner, 2009) that have yet to be explained. In this article, the E-Z Reader model of eye-movement control in reading (Reichle, 2011; Reichle, Pollatsek, Fisher, & Rayner, 1998) is used to simulate various eye-movement phenomena in adults versus children in order to evaluate hypotheses about the concurrent development of reading skill and eye-movement behavior. These simulations suggest that the primary difference between children and adults is their rate of lexical processing, and that different rates of (post-lexical) language processing may also contribute to some phenomena (e.g., children’s slower detection of semantic anomalies; Joseph et al., 2008). The theoretical implications of this hypothesis are discussed, including possible alternative accounts of these developmental changes, how reading skill and eye movements change across the entire lifespan (e.g., college-aged vs. elderly readers), and individual differences in reading ability.
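A highly simplified sketch inspired by the lexical-processing-rate hypothesis above: if children complete the first stage of lexical processing more slowly by some multiplier, frequency effects on processing time grow in absolute terms, lengthening fixations. The equation form and all parameter values are illustrative assumptions, not the published E-Z Reader parameters.

```python
# Time to complete a first lexical stage as a log function of word
# frequency, scaled by a slower processing rate for children.
import math

def lexical_stage_ms(freq_per_million, rate_multiplier=1.0,
                     alpha1=120.0, alpha2=10.0):
    """Illustrative stage duration: slower for rarer words."""
    base = alpha1 - alpha2 * math.log(freq_per_million)
    return base * rate_multiplier

for word, freq in [("the", 50000), ("table", 200), ("gourd", 2)]:
    adult = lexical_stage_ms(freq)
    child = lexical_stage_ms(freq, rate_multiplier=1.6)  # slower lexical rate
    print(f"{word:>6}: adult {adult:6.1f} ms   child {child:6.1f} ms")
```

Note how the adult–child gap widens for low-frequency words, mirroring the finding that children's fixation durations are more affected by lexical variables.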
Abstract:
We investigated the processes of how adult readers evaluate and revise their situation model during reading by monitoring their eye movements as they read narrative texts and subsequent critical sentences. In each narrative text, a short introduction primed a knowledge-based inference, followed by a target concept that was either expected (e.g., “oven”) or unexpected (e.g., “grill”) in relation to the inferred concept. Eye movements showed that readers detected a mismatch between the new unexpected information and their prior interpretation, confirming their ability to evaluate inferential information. Just below the narrative text, a critical sentence included a target word that was either congruent (e.g., “roasted”) or incongruent (e.g., “barbecued”) with the expected but not the unexpected concept. Readers spent less time reading the congruent than the incongruent target word, reflecting the facilitation of prior information. In addition, when the unexpected (but not expected) concept had been presented, participants with lower verbal (but not visuospatial) working memory span exhibited longer reading times and made more regressions (from the critical sentence to previous information) on encountering congruent information, indicating difficulty in inhibiting their initial incorrect interpretation and revising their situation model.
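A minimal sketch of two of the reading-time measures used in studies like this one: first-pass gaze duration on a target region and a count of regressive saccades back into earlier text. The fixation sequence is a fabricated placeholder.

```python
# Derive first-pass gaze duration and regression count from a fixation
# sequence; words are indexed positions in the sentence.
def reading_measures(fixations, target_region):
    """fixations: list of (word_index, duration_ms) in temporal order."""
    gaze_duration = 0
    first_pass_done = False
    regressions = 0
    max_word = -1
    for word, dur in fixations:
        if word < max_word:
            regressions += 1          # saccade landed left of furthest point
        max_word = max(max_word, word)
        if word == target_region and not first_pass_done:
            gaze_duration += dur      # accumulate consecutive first-pass time
        elif gaze_duration > 0:
            first_pass_done = True    # left the target: first pass is over
    return gaze_duration, regressions

fixations = [(0, 210), (1, 190), (3, 250), (3, 180), (2, 220), (4, 200)]
print(reading_measures(fixations, target_region=3))  # (430, 1)
```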
Abstract:
Models of perceptual decision making often assume that sensory evidence is accumulated over time in favor of the various possible decisions, until the evidence in favor of one of them outweighs the evidence for the others. Saccadic eye movements are among the most frequent perceptual decisions that the human brain performs. We used stochastic visual stimuli to identify the temporal impulse response underlying saccadic eye movement decisions. Observers performed a contrast search task, with temporal variability in the visual signals. In experiment 1, we derived the temporal filter observers used to integrate the visual information. The integration window was restricted to the first ~100 ms after display onset. In experiment 2, we showed that observers cannot perform the task if there is no useful information to distinguish the target from the distractor within this time epoch. We conclude that (1) observers did not integrate sensory evidence up to a criterion level, (2) observers did not integrate visual information up to the start of the saccadic dead time, and (3) variability in saccade latency does not correspond to variability in the visual integration period. Instead, our results support a temporal filter model of saccadic decision making. The temporal impulse response identified by our methods corresponds well with estimates of integration times of V1 output neurons.
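A minimal sketch of the temporal-filter account supported here, contrasted with full integration: the decision weights only signal falling inside a brief window (~100 ms) after display onset. The frame rate, noise level, and filter shape are illustrative assumptions.

```python
# Temporal-filter decision: dot the contrast streams with a window that
# is non-zero only over the first ~100 ms, then pick the larger response.
import numpy as np

rng = np.random.default_rng(0)
frames = 30                                  # 300 ms display, 10 ms samples
target = 1.0 + rng.normal(0, 0.5, frames)    # higher-contrast target stream
distractor = 0.8 + rng.normal(0, 0.5, frames)

window = np.zeros(frames)
window[:10] = 1.0                            # only the first ~100 ms counts

def decide(a, b, w):
    """Choose the stimulus with more filtered evidence."""
    return "target" if np.dot(w, a) > np.dot(w, b) else "distractor"

print(decide(target, distractor, window))            # temporal-filter model
print(decide(target, distractor, np.ones(frames)))   # full integration, contrast
```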
Abstract:
Saccadic eye movements and fixations are the behavioral means by which we visually sample text during reading. Human oculomotor control is governed by a complex neurophysiological system involving the brain stem, superior colliculus, and several cortical areas [1, 2]. A very widely held belief among researchers investigating primate vision is that the oculomotor system serves to orient the visual axes of both eyes to fixate the same target point in space. It is argued that such precise positioning of the eyes is necessary to place images on corresponding retinal locations, such that on each fixation a single, nondiplopic, visual representation is perceived [3]. Vision works actively through a continual sampling process involving saccades and fixations [4]. Here we report that during normal reading, the eyes do not always fixate the same letter within a word. We also demonstrate that saccadic targeting is yoked and based on a unified cyclopean percept of a whole word since it is unaffected if different word parts are delivered exclusively to each eye via a dichoptic presentation technique. These two findings together suggest that the visual signal from each eye is fused at a very early stage in the visual pathway, even when the fixation disparity is greater than one character (0.29 deg), and that saccade metrics for each eye are computed on the basis of that fused signal.
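A minimal arithmetic sketch of the unit conversion behind the 0.29 deg figure above: the visual angle subtended by one character at a given character width and viewing distance, and a fixation disparity expressed in both units. The width and distance are illustrative assumptions chosen so that one character subtends roughly 0.29 deg.

```python
# Convert a fixation disparity measured in characters into degrees of
# visual angle, given character width and viewing distance.
import math

def char_width_deg(char_width_mm, viewing_distance_mm):
    """Visual angle subtended by one character, in degrees."""
    return math.degrees(2 * math.atan(char_width_mm / (2 * viewing_distance_mm)))

one_char = char_width_deg(char_width_mm=3.0, viewing_distance_mm=600.0)
disparity_chars = 1.4                    # eyes fixating 1.4 characters apart
print(f"1 character = {one_char:.2f} deg")
print(f"disparity   = {disparity_chars * one_char:.2f} deg")
```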
Abstract:
We investigated infants' sensitivity to spatiotemporal structure. In Experiment 1, circles appeared in a statistically defined spatial pattern. At test, 11-month-olds, but not 8-month-olds, looked longer at a novel spatial sequence. Experiment 2 presented different color/shape stimuli, but only the location sequence was violated during test; 8-month-olds preferred the novel spatial structure, but 5-month-olds did not. In Experiment 3, the locations but not the color/shape pairings were constant at test; 5-month-olds showed a novelty preference. Experiment 4 examined "online learning": we recorded eye movements of 8-month-olds watching a spatiotemporal sequence. Saccade latencies to predictable locations decreased. We argue that temporal order statistics involving informative spatial relations become available to infants during the first year after birth, assisted by multiple cues.
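A minimal sketch of the statistical-learning idea in Experiment 4: estimate transitional probabilities between screen locations from an exposure sequence, then predict the next location, paralleling the drop in saccade latency to predictable locations. The sequence itself is a placeholder.

```python
# Learn location-to-location transitional probabilities and predict the
# most likely next location.
from collections import Counter, defaultdict

exposure = [1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 1, 2, 3, 4] * 5   # location IDs

transitions = defaultdict(Counter)
for prev, nxt in zip(exposure, exposure[1:]):
    transitions[prev][nxt] += 1

def predict_next(location):
    """Most probable next location, with its transitional probability."""
    counts = transitions[location]
    nxt, n = counts.most_common(1)[0]
    return nxt, n / sum(counts.values())

print(predict_next(1))   # (2, 1.0): location 2 always follows location 1
print(predict_next(3))   # 3 is followed by 1 or 4, so probability < 1
```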
Abstract:
Saccadic eye-movements to a visual target are less accurate if there are distracters close to its location (local distracters). The addition of more distracters, remote from the target location (remote distracters), invokes an involuntary increase in the response latency of the saccade and attenuates the effect of local distracters on accuracy. This may be due to the target and distracters directly competing (direct route) or to the remote distracters acting to impair the ability to disengage from fixation (indirect route). To distinguish between these we examined the development of saccade competition by recording saccade latency and accuracy for responses made to a target and local distracter, compared with those made with the addition of a remote distracter. The direct route would predict that the remote distracter impacts on the developing competition between target and local distracter, while the indirect route would predict no change, as the accuracy benefit here derives from accessing the same competitive process but at a later stage. We found that the presence of the remote distracter did not change the pattern of accuracy improvement. This suggests that the remote distracter was acting along an indirect route that inhibits disengagement from fixation, slows saccade initiation, and enables more accurate saccades to be made.
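A minimal sketch of the indirect route favoured by these results: a fixation node inhibits movement initiation, and a remote distracter boosts fixation activity, delaying disengagement and lengthening saccade latency. The dynamics and parameters are illustrative assumptions, not a fitted model.

```python
# Fixation activity decays over time; movement evidence grows once the
# fixation inhibition falls below the target drive. A remote distracter
# is modelled as an initial boost to fixation activity.
def saccade_latency(fixation_boost, threshold=1.0, steps=600):
    fixate = 1.0 + fixation_boost      # remote distracter boosts fixation
    move = 0.0
    for t in range(steps):
        fixate *= 0.99                 # fixation activity decays
        move += 0.02 - 0.02 * fixate   # target drive minus fixation inhibition
        if move >= threshold:
            return t
    return None

print("no remote distracter:", saccade_latency(0.0), "steps")
print("remote distracter:  ", saccade_latency(0.5), "steps")
```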
Abstract:
BACKGROUND: Humans from an early age look longer at preferred stimuli, and also typically look longer at facial expressions of emotion, particularly happy faces. Atypical gaze patterns towards social stimuli are common in Autism Spectrum Conditions (ASC). However, it is unknown if gaze fixation patterns have any genetic basis. In this study, we tested if variations in the cannabinoid receptor 1 (CNR1) gene are associated with gaze duration towards happy faces. This gene was selected because CNR1 is a key component of the endocannabinoid system, involved in processing reward, and in our previous fMRI study we found that variations in CNR1 modulate the striatal response to happy (but not disgust) faces. The striatum is involved in guiding gaze to rewarding aspects of a visual scene. We aimed to validate and extend this result in another sample using a different technique (gaze tracking). METHODS: Thirty volunteers (13 males, 17 females) from the general population observed dynamic emotion expressions on a screen while their eye movements were recorded. They were genotyped for the identical four SNPs in the CNR1 gene tested in our earlier fMRI study. RESULTS: Two SNPs (rs806377 and rs806380) were associated with differential gaze duration for happy (but not disgust) faces. Importantly, the allelic groups associated with greater striatal response to happy faces in the fMRI study were associated with longer gaze duration for happy faces. CONCLUSIONS: These results suggest CNR1 variations modulate striatal function that underlies the perception of signals of social reward, such as happy faces. This suggests CNR1 is a key element in the molecular architecture of perception of certain basic emotions. This may have implications for understanding neurodevelopmental conditions marked by atypical eye contact and facial emotion processing, such as ASC.
Abstract:
We investigated whether attention shifts and eye movement preparation are mediated by shared control mechanisms, as claimed by the premotor theory of attention. ERPs were recorded in three tasks where directional cues presented at the beginning of each trial instructed participants to direct their attention to the cued side without eye movements (Covert task), to prepare an eye movement in the cued direction without attention shifts (Saccade task) or both (Combined task). A peripheral visual Go/Nogo stimulus that was presented 800 ms after cue onset signalled whether responses had to be executed or withheld. Lateralised ERP components triggered during the cue–target interval, which are assumed to reflect preparatory control mechanisms that mediate attentional orienting, were very similar across tasks. They were also present in the Saccade task, which was designed to discourage any concomitant covert attention shifts. These results support the hypothesis that saccade preparation and attentional orienting are implemented by common control structures. There were however systematic differences in the impact of eye movement programming and covert attention on ERPs triggered in response to visual stimuli at cued versus uncued locations. It is concluded that, although the preparatory processes underlying saccade programming and covert attentional orienting may be based on common mechanisms, they nevertheless differ in their spatially specific effects on visual information processing.
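A minimal sketch of how lateralised ERP components such as those analysed here are conventionally derived: averaging the cue-locked waveform contralateral minus ipsilateral to the cued side over homologous electrode pairs. The EEG data below are fabricated placeholders; electrode names follow the 10-20 convention.

```python
# Contralateral-minus-ipsilateral difference waveform over a homologous
# posterior electrode pair, averaged across cue-left and cue-right trials.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_samples = 40, 200            # 800 ms cue-target interval, 4 ms bins

# epochs[cued side][electrode]: trials x samples (placeholder EEG data)
epochs = {side: {ch: rng.normal(0, 1, (n_trials, n_samples))
                 for ch in ("PO7", "PO8")} for side in ("left", "right")}

def lateralised_component(epochs):
    """(contra - ipsi) averaged over cue-left and cue-right trials."""
    contra = (epochs["left"]["PO8"].mean(0) + epochs["right"]["PO7"].mean(0)) / 2
    ipsi = (epochs["left"]["PO7"].mean(0) + epochs["right"]["PO8"].mean(0)) / 2
    return contra - ipsi

diff = lateralised_component(epochs)
print(diff.shape, diff.mean())           # 200-sample difference waveform
```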
Abstract:
Remote transient changes in the environment, such as the onset of visual distractors, impact on the execution of target-directed saccadic eye movements. Studies that have examined the latency of the saccade response have shown conflicting results. When there was an element of target selection, saccade latency increased as the distance between distractor and target increased. In contrast, when target selection is minimized by restricting the target to appear on one axis position, latency has been found to be slowest when the distractor is shown at fixation and to reduce as it moves away from this position, rather than from the target. Here we report four experiments examining saccade latency as target and distractor positions are varied. We find support for both a dependence of saccade latency on distractor distance from target and from fixation: saccade latency was longer when the distractor was shown close to fixation and even longer still when shown at the location opposite (180° from) the target. We suggest that this is due to inhibitory interactions between the distractor, fixation and the target interfering with fixation disengagement and target selection.
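A minimal sketch consistent with the latency pattern reported here: predicted latency rises with the distractor's distance from the target and receives an extra inhibitory contribution when the distractor lies near fixation, making the opposite (180°) location slowest. The functional form and parameters are illustrative assumptions, not fitted values from these experiments.

```python
# Latency = baseline + target-distance term + fixation-proximity term.
import math

def predicted_latency(distractor_deg, target_deg=8.0, base_ms=160.0,
                      w_target=2.0, w_fixation=12.0, sigma=6.0):
    """Distractor/target positions in degrees along the horizontal axis."""
    near_fixation = math.exp(-abs(distractor_deg) / sigma)
    return (base_ms
            + w_target * abs(distractor_deg - target_deg)
            + w_fixation * near_fixation)

for pos in (8.0, 4.0, 0.0, -8.0):        # -8 deg = opposite the target
    print(f"distractor at {pos:+5.1f} deg: {predicted_latency(pos):.0f} ms")
```

Running this gives the fastest latency with the distractor at the target, slower near fixation, and slowest opposite the target, matching the ordering in the abstract.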