33 results for Eye Movement

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Abstract:

Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Abstract:

Consistent with a negativity bias account, neuroscientific and behavioral evidence demonstrates modulation of even early sensory processes by unpleasant, potentially threat-relevant information. The aim of this research is to assess the extent to which pleasant and unpleasant visual stimuli presented extrafoveally capture attention and impact eye movement control. We report an experiment examining deviations in saccade metrics in the presence of emotional image distractors that are close to a nonemotional target. We additionally manipulate the saccade latency to test when the emotional distractor has its biggest impact on oculomotor control. The results demonstrate that saccade landing position was pulled toward unpleasant distractors, and that this pull was driven by quick saccade responses. Overall, these findings support a negativity bias account of early attentional control and highlight the need to consider the time course of motivated attention when affect is implicit.

Relevance:

100.00%

Abstract:

It has been suggested that the evidence used to support a decision to move our eyes and the confidence we have in that decision are derived from a common source. Alternatively, confidence may be based on further post-decisional processes. In three experiments we examined this. In Experiment 1, participants chose between two targets on the basis of varying levels of evidence (i.e., the direction of motion coherence in a Random-Dot-Kinematogram). They indicated this choice by making a saccade to one of two targets and then indicated their confidence. Saccade trajectory deviation was taken as a measure of the inhibition of the non-selected target. We found that as evidence increased so did confidence and deviations of saccade trajectory away from the non-selected target. However, a correlational analysis suggested they were not related. In Experiment 2 an option to opt-out of the choice was offered on some trials if choice proved too difficult. In this way we isolated trials on which confidence in target selection was high (i.e., when the option to opt-out was available but not taken). Again saccade trajectory deviations were found not to differ in relation to confidence. In Experiment 3 we directly manipulated confidence, such that participants had high or low task confidence. They showed no differences in saccade trajectory deviations. These results support post-decisional accounts of confidence: evidence supporting the decision to move the eyes is reflected in saccade control, but the confidence that we have in that choice is subject to further post-decisional processes.

Relevance:

100.00%

Abstract:

Compared to skilled adult readers, children typically make more fixations that are longer in duration, shorter saccades, and more regressions, thus reading more slowly (Blythe & Joseph, 2011). Recent attempts to understand the reasons for these differences have discovered some similarities (e.g., children and adults target their saccades similarly; Joseph, Liversedge, Blythe, White, & Rayner, 2009) and some differences (e.g., children’s fixation durations are more affected by lexical variables; Blythe, Liversedge, Joseph, White, & Rayner, 2009) that have yet to be explained. In this article, the E-Z Reader model of eye-movement control in reading (Reichle, 2011; Reichle, Pollatsek, Fisher, & Rayner, 1998) is used to simulate various eye-movement phenomena in adults versus children in order to evaluate hypotheses about the concurrent development of reading skill and eye-movement behavior. These simulations suggest that the primary difference between children and adults is their rate of lexical processing, and that different rates of (post-lexical) language processing may also contribute to some phenomena (e.g., children’s slower detection of semantic anomalies; Joseph et al., 2008). The theoretical implications of this hypothesis are discussed, including possible alternative accounts of these developmental changes, how reading skill and eye movements change across the entire lifespan (e.g., college-aged vs. elderly readers), and individual differences in reading ability.
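
To give a rough sense of the kind of hypothesis such simulations evaluate, the Python sketch below implements a toy lexical-processing stage whose duration shrinks with log word frequency and is scaled by a rate multiplier, so that a slower "child-like" rate lengthens simulated fixation durations. It is not the published E-Z Reader implementation; the function names, parameter values, and corpus counts are illustrative assumptions only.

```python
import random
import math

# Toy simulation loosely inspired by E-Z Reader's assumption that the duration
# of lexical processing decreases with word frequency, and that a slower
# lexical-processing rate (as hypothesized for children) lengthens fixations.
# All parameter values are illustrative, not the published model's estimates.

def lexical_stage_duration(log_frequency, rate_multiplier, base=250.0, slope=15.0):
    """Mean duration (ms) of the lexical processing stage for one word."""
    return rate_multiplier * (base - slope * log_frequency)

def simulate_mean_fixation(log_frequencies, rate_multiplier, n_trials=1000, noise_sd=25.0):
    """Average simulated fixation duration across words and trials."""
    total = 0.0
    for _ in range(n_trials):
        for log_f in log_frequencies:
            # Add Gaussian trial-to-trial noise and floor at a minimal fixation time.
            total += max(80.0, random.gauss(
                lexical_stage_duration(log_f, rate_multiplier), noise_sd))
    return total / (n_trials * len(log_frequencies))

word_log_freqs = [math.log10(f) for f in (5, 50, 500, 5000)]  # hypothetical corpus counts
print("adult-like rate :", round(simulate_mean_fixation(word_log_freqs, 1.0)), "ms")
print("child-like rate :", round(simulate_mean_fixation(word_log_freqs, 1.6)), "ms")
```

Under these assumptions, increasing only the rate multiplier lengthens mean fixation durations and amplifies the frequency effect, which is the qualitative pattern the abstract attributes to children's slower lexical processing.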

Relevance:

100.00%

Abstract:

We investigated the processes of how adult readers evaluate and revise their situation model during reading by monitoring their eye movements as they read narrative texts and subsequent critical sentences. In each narrative text, a short introduction primed a knowledge-based inference, followed by a target concept that was either expected (e.g., “oven”) or unexpected (e.g., “grill”) in relation to the inferred concept. Eye movements showed that readers detected a mismatch between the new unexpected information and their prior interpretation, confirming their ability to evaluate inferential information. Just below the narrative text, a critical sentence included a target word that was either congruent (e.g., “roasted”) or incongruent (e.g., “barbecued”) with the expected but not the unexpected concept. Readers spent less time reading the congruent than the incongruent target word, reflecting the facilitation of prior information. In addition, when the unexpected (but not expected) concept had been presented, participants with lower verbal (but not visuospatial) working memory span exhibited longer reading times and made more regressions (from the critical sentence to previous information) on encountering congruent information, indicating difficulty in inhibiting their initial incorrect interpretation and revising their situation model.

Relevance:

70.00%

Abstract:

For efficient collaboration between participants, eye gaze is seen as being critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round table conditions (e.g. life size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced, as opposed to approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements being displayed, while the eyes remained static, and with both eye gaze and head movement information being displayed. The results from the experiment show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results from the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use and the software and techniques that were developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and in its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.

Relevance:

70.00%

Abstract:

Lexical compounds in English are constrained in that the non-head noun can be an irregular but not a regular plural (e.g. mice eater vs. *rats eater), a contrast that has been argued to derive from a morphological constraint on modifiers inside compounds. In addition, bare nouns are preferred over plural forms inside compounds (e.g. mouse eater vs. mice eater), a contrast that has been ascribed to the semantics of compounds. Measuring eye movements during reading, this study examined how morphological and semantic information become available over time during the processing of a compound. We found that the morphological constraint affected both early and late eye-movement measures, whereas the semantic constraint for singular non-heads only affected late measures of processing. These results indicate that morphological information becomes available earlier than semantic information during the processing of compounds.

Relevance:

70.00%

Abstract:

Using the eye movement monitoring technique, the present study examined whether wh-dependency formation is sensitive to island constraints in second language (L2) sentence comprehension, and whether the presence of an intervening relative clause island has any effects on learners’ ability to ultimately resolve long wh-dependencies. Participants included proficient learners of L2 English from typologically different language backgrounds (German, Chinese), as well as a group of native English-speaking controls. Our results indicate that both the learners and the native speakers were sensitive to relative clause islands during processing, irrespective of typological differences between the learners’ L1s, but that the learners had more difficulty than native speakers linking distant wh-fillers to their lexical subcategorizers during processing. We provide a unified processing-based account for our findings.

Relevance:

70.00%

Abstract:

The hypothesis that pronouns can be resolved via either the syntax or the discourse representation has played an important role in linguistic accounts of pronoun interpretation (e.g. Grodzinsky & Reinhart, 1993). We report the results of an eye-movement monitoring study investigating the relative timing of syntactically-mediated variable binding and discourse-based coreference assignment during pronoun resolution. We examined whether ambiguous pronouns are preferentially resolved via either the variable binding or coreference route, and in particular tested the hypothesis that variable binding should always be computed before coreference assignment. Participants’ eye movements were monitored while they read sentences containing a pronoun and two potential antecedents, a c-commanding quantified noun phrase and a non c-commanding proper name. Gender congruence between the pronoun and either of the two potential antecedents was manipulated as an experimental diagnostic for dependency formation. In two experiments, we found that participants’ reading times were reliably longer when the linearly closest antecedent mismatched in gender with the pronoun. These findings fail to support the hypothesis that variable binding is computed before coreference assignment, and instead suggest that antecedent recency plays an important role in affecting the extent to which a variable binding antecedent is considered. We discuss these results in relation to models of memory retrieval during sentence comprehension, and interpret the antecedent recency preference as an example of forgetting over time.

Relevance:

70.00%

Abstract:

While eye movements have been used widely to investigate how skilled adult readers process written language, relatively little research has used this methodology with children. This is unfortunate because, as we discuss here, eye-movement studies have significant potential to inform our understanding of children’s reading development. We consider some of the empirical and theoretical issues that arise when using this methodology with children, illustrating our points with data from an experiment examining word frequency effects in 8-year-old children’s sentence reading. Children showed significantly longer gaze durations to low- than to high-frequency words, demonstrating that linguistic characteristics of text drive children’s eye movements as they read. We discuss these findings within the broader context of how eye-movement studies can inform our understanding of children’s reading, and can assist with the development of appropriately targeted interventions to support children as they learn to read.

Relevance:

70.00%

Abstract:

Children’s eye movements during reading. In this chapter, we evaluate the literature on children’s eye movements during reading to date. We describe the basic, developmental changes that occur in eye movement behaviour during reading, discuss age-related changes in the extent and time course of information extraction during fixations in reading, and compare the effects of visual and linguistic manipulations in the text on children’s eye movement behaviour in relation to skilled adult readers. We argue that future research will benefit from examining how eye movement behaviour during reading develops in relation to language and literacy skills, and use of computational modelling with children’s eye movement data may improve our understanding of the mechanisms that underlie the progression from beginning to skilled reader.

Relevance:

60.00%

Abstract:

In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.

Relevance:

60.00%

Abstract:

Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback in guiding the hand to the target (feedback loop), shifting gaze generates ocular-proprioception which can be used to update a movement (feedback-feedforward), or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback from eye to head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial movements and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.

Relevance:

60.00%

Abstract:

The efficacy of explicit and implicit learning paradigms was examined during the very early stages of learning the perceptual-motor anticipation task of predicting ball direction from temporally occluded footage of soccer penalty kicks. In addition, the effect of instructional condition on point-of-gaze during learning was examined. A significant improvement in horizontal prediction accuracy was observed in the explicit learning group; however, similar improvement was evident in a placebo group who watched footage of soccer matches. Only the explicit learning intervention resulted in changes in eye movement behaviour and increased awareness of relevant postural cues. Results are discussed in terms of methodological and practical issues regarding the employment of implicit perceptual training interventions. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

60.00%

Abstract:

Models of perceptual decision making often assume that sensory evidence is accumulated over time in favor of the various possible decisions, until the evidence in favor of one of them outweighs the evidence for the others. Saccadic eye movements are among the most frequent perceptual decisions that the human brain performs. We used stochastic visual stimuli to identify the temporal impulse response underlying saccadic eye movement decisions. Observers performed a contrast search task, with temporal variability in the visual signals. In experiment 1, we derived the temporal filter observers used to integrate the visual information. The integration window was restricted to the first ~100 ms after display onset. In experiment 2, we showed that observers cannot perform the task if there is no useful information to distinguish the target from the distractor within this time epoch. We conclude that (1) observers did not integrate sensory evidence up to a criterion level, (2) observers did not integrate visual information up to the start of the saccadic dead time, and (3) variability in saccade latency does not correspond to variability in the visual integration period. Instead, our results support a temporal filter model of saccadic decision making. The temporal impulse response identified by our methods corresponds well with estimates of integration times of V1 output neurons.
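
To make the contrast between the two accounts concrete, the minimal Python sketch below compares two decision rules applied to a noisy, time-varying evidence stream: (a) accumulation of evidence until a fixed threshold is crossed, and (b) integration over only the first ~100 ms after display onset, as in a temporal-filter account. The stimulus statistics, threshold, and time step are hypothetical assumptions for illustration, not the paper's stimuli or fitted parameters.

```python
import random

# Compare accumulate-to-threshold vs. fixed early integration window for a
# simulated saccadic target/distractor decision. All numbers are illustrative.

DT_MS = 10  # simulation time step (ms)

def noisy_contrast_difference(t_ms, signal=0.2, noise_sd=0.5):
    """Momentary target-minus-distractor contrast evidence at time t.
    For simplicity the evidence is stationary over time in this sketch."""
    return random.gauss(signal, noise_sd)

def accumulate_to_threshold(threshold=3.0, max_t_ms=1000):
    """(a) Integrate evidence until |sum| reaches a criterion; return choice and time."""
    total, t = 0.0, 0
    while abs(total) < threshold and t < max_t_ms:
        total += noisy_contrast_difference(t)
        t += DT_MS
    return ("target" if total > 0 else "distractor"), t

def fixed_window_filter(window_ms=100):
    """(b) Integrate only the first ~100 ms after onset, then commit to a choice."""
    total = sum(noisy_contrast_difference(t) for t in range(0, window_ms, DT_MS))
    return "target" if total > 0 else "distractor"

trials = 2000
acc_correct = sum(accumulate_to_threshold()[0] == "target" for _ in range(trials)) / trials
win_correct = sum(fixed_window_filter() == "target" for _ in range(trials)) / trials
print(f"accumulate-to-threshold accuracy: {acc_correct:.2f}")
print(f"fixed 100 ms window accuracy:     {win_correct:.2f}")
```

The key behavioural difference is that rule (a) keeps using evidence arriving late in the trial, whereas rule (b) is blind to anything outside the early window, which is the property the abstract's second experiment exploits to distinguish the accounts.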