856 results for eye movements
Abstract:
Corticobasal degeneration is a rare, progressive neurodegenerative disease and a member of the 'parkinsonian' group of disorders, which also includes Parkinson's disease, progressive supranuclear palsy, dementia with Lewy bodies and multiple system atrophy. The most common initial symptom is limb clumsiness, usually affecting one side of the body, with or without accompanying rigidity or tremor. Subsequently, the disease affects gait and slowly progresses to involve the ipsilateral arm and leg. Apraxia and dementia are the most common cortical signs. Corticobasal degeneration can be difficult to distinguish from other parkinsonian syndromes, but if ocular signs and symptoms are present, they may aid clinical diagnosis. Typical ocular features include increased latency of saccadic eye movements ipsilateral to the side exhibiting apraxia, impaired smooth pursuit movements and visuo-spatial dysfunction, especially involving spatial rather than object-based tasks. Less typical features include reduction in saccadic velocity, vertical gaze palsy, visual hallucinations, sleep disturbance and an impaired electroretinogram. Aspects of primary vision such as visual acuity and colour vision are usually unaffected. Managing problems with walking, movement, daily tasks and speech is an important aspect of care.
Abstract:
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.
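The band-limited power comparison described in the second experiment can be sketched with a minimal spectral computation. This is illustrative only: synthetic noise stands in for the actual ear-canal recordings, and the sampling rate is an assumed value; only the 0.1-4 kHz band comes from the abstract.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` within [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].mean()

fs = 48_000  # assumed microphone sampling rate (Hz)
rng = np.random.default_rng(0)
baseline = rng.normal(scale=1.0, size=fs)      # 1 s of ear-canal noise
anticipation = rng.normal(scale=0.5, size=fs)  # quieter when a sound is expected

# Ratio below 1 corresponds to the reported drop in 0.1-4 kHz power.
drop = band_power(anticipation, fs, 100, 4000) / band_power(baseline, fs, 100, 4000)
print(f"relative 0.1-4 kHz power during anticipation: {drop:.2f}")
```

The same ratio computed in higher- and lower-frequency bands would, per the abstract, show a slight increase rather than a drop.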
Abstract:
For over 50 years, the Satisfaction of Search effect, more recently known as the Subsequent Search Miss (SSM) effect, has plagued the field of radiology. Defined as a decrease in additional-target accuracy after detecting a prior target in a visual search, SSM errors are known to underlie both real-world search errors (e.g., a radiologist is more likely to miss a tumor if a different tumor was previously detected) and more simplified, lab-based search errors (e.g., an observer is more likely to miss a target 'T' if a different target 'T' was previously detected). Unfortunately, little was known about this phenomenon's cognitive underpinnings, and SSM errors have proven difficult to eliminate. However, more recently, experimental research has provided evidence for three different theories of SSM errors: the Satisfaction account, the Perceptual Set account, and the Resource Depletion account. A series of studies examined performance in a multiple-target visual search and aimed to provide support for the Resource Depletion account—a first target consumes cognitive resources, leaving less available to process additional targets.
To assess a potential mechanism underlying SSM errors, eye movements were recorded in a multiple-target visual search and were used to explore whether a first target may result in an immediate decrease in second-target accuracy, which is known as an attentional blink. To determine whether other known attentional distractions amplified the effect that finding a first target has on second-target detection, distractors within the immediate vicinity of the targets (i.e., clutter) were measured and compared to accuracy for a second target. To better understand which characteristics of attention were impacted by detecting a first target, individual differences within four characteristics of attention were compared to second-target misses in a multiple-target visual search.
The results demonstrated that an attentional blink underlies SSM errors, with a decrease in second-target accuracy from 135 ms to 405 ms after detecting or re-fixating a first target. The effects of clutter were exacerbated after finding a first target, causing a greater decrease in second-target accuracy as clutter increased around a second target. The attentional characteristics of modulation and vigilance were correlated with second-target misses, suggesting that worse attentional modulation and vigilance are predictive of more second-target misses. Taken together, these results are used as the foundation to support a new theory of SSM errors, the Flux Capacitor theory. The Flux Capacitor theory predicts that once a target is found, it is maintained as an attentional template in working memory, which consumes attentional resources that could otherwise be used to detect additional targets. This theory not only proposes why attentional resources are consumed by a first target, but encompasses the research in support of all three SSM theories in an effort to establish a grand, unified theory of SSM errors.
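The time-binned comparison behind the reported 135-405 ms blink window can be sketched as follows. The lag and hit values here are entirely hypothetical; only the window boundaries come from the abstract.

```python
import numpy as np

# Hypothetical trials: lag (ms) between first-target detection and the
# second-target fixation, and whether the second target was detected (1/0).
lags = np.array([50, 120, 150, 200, 300, 400, 450, 600, 800, 1000])
hits = np.array([1,   1,   0,   0,   0,   0,   1,   1,   1,    1])

# Second-target accuracy inside vs. outside the 135-405 ms blink window.
in_window = (lags >= 135) & (lags <= 405)
acc_in = hits[in_window].mean()
acc_out = hits[~in_window].mean()
print(f"second-target accuracy: {acc_in:.2f} in window, {acc_out:.2f} outside")
```

An attentional-blink pattern appears as markedly lower accuracy inside the window than outside it.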
Abstract:
Young infants' learning of words for abstract concepts like 'all gone' and 'eat,' in contrast to their learning of more concrete words like 'apple' and 'shoe,' may follow a relatively protracted developmental course. We examined whether infants know such abstract words. Parents named one of two events shown in side-by-side videos while their 6-16-month-old infants (n=98) watched. On average, infants successfully looked at the named video by 10 months, but not earlier, and infants' looking at the named referent increased robustly at around 14 months. Six-month-olds already understand concrete words in this task (Bergelson & Swingley, 2012). A video-corpus analysis of unscripted mother-infant interaction showed that mothers used the tested abstract words less often in the presence of their referent events than they used concrete words in the presence of their referent objects. We suggest that referential uncertainty in abstract words' teaching conditions may explain the later acquisition of abstract than concrete words, and we discuss the possible role of changes in social-cognitive abilities over the 6-14 month period.
Abstract:
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF’s local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.
Abstract:
Moving through a stable, three-dimensional world is a hallmark of our motor and perceptual experience. This stability is constantly being challenged by movements of the eyes and head, inducing retinal blur and retino-spatial misalignments for which the brain must compensate. To do so, the brain must account for eye and head kinematics to transform two-dimensional retinal input into the reference frame necessary for movement or perception. The four studies in this thesis used both computational and psychophysical approaches to investigate several aspects of this reference frame transformation. In the first study, we examined the neural mechanism underlying the visuomotor transformation for smooth pursuit using a feedforward neural network model. After training, the model performed the general, three-dimensional transformation using gain modulation. This gave mechanistic significance to gain modulation observed in cortical pursuit areas while also providing several testable hypotheses for future electrophysiological work. In the second study, we asked how anticipatory pursuit, which is driven by memorized signals, accounts for eye and head geometry using a novel head-roll updating paradigm. We showed that the velocity memory driving anticipatory smooth pursuit relies on retinal signals, but is updated for the current head orientation. In the third study, we asked how forcing retinal motion to undergo a reference frame transformation influences perceptual decision making. We found that simply rolling one's head impairs perceptual decision making in a way captured by stochastic reference frame transformations. In the final study, we asked how torsional shifts of the retinal projection occurring with almost every eye movement influence orientation perception across saccades. We found a pre-saccadic, predictive remapping consistent with maintaining a purely retinal (but spatially inaccurate) orientation perception throughout the movement. 
Together these studies suggest that, despite their spatial inaccuracy, retinal signals play a surprisingly large role in our seamless visual experience. This work therefore represents a significant advance in our understanding of how the brain performs one of its most fundamental functions.
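The core retinal-to-spatial transformation studied throughout the thesis can be reduced, for illustration, to a 2-D rotation of a retinal motion vector by the head-roll angle. This is a deliberately simplified sketch; the full transformation described in the thesis also accounts for eye position and 3-D eye-head kinematics.

```python
import numpy as np

def retinal_to_spatial(vec_retinal, head_roll_deg):
    """Rotate a 2-D retinal motion vector by the head-roll angle to
    recover its direction in spatial (head-upright) coordinates."""
    t = np.deg2rad(head_roll_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ vec_retinal

# With the head rolled 90 degrees, rightward motion on the retina
# corresponds to upward motion in space.
spatial = retinal_to_spatial(np.array([1.0, 0.0]), 90.0)
print(np.round(spatial, 3))  # → [0. 1.]
```

Injecting noise into the rotation angle is one way to model the stochastic reference-frame transformations that, per the third study, capture the perceptual-decision costs of head roll.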
Abstract:
Recent empirical studies about the neurological executive nature of reading in bilinguals differ in their evaluations of the degree of selective manifestation in lexical access as implicated by data from early and late reading measures in the eye-tracking paradigm. Currently two scenarios are plausible: (1) Lexical access in reading is fundamentally language non-selective, and top-down effects from semantic context can influence the degree of selectivity in lexical access; (2) Cross-lingual lexical activation is actuated via bottom-up processes without being affected by top-down effects from sentence context. In an attempt to test these hypotheses empirically, this study analyzed reader-text events arising when cognate facilitation and semantic constraint interact in a 2×2 factorially designed experiment tracking the eye movements of 26 Swedish-English bilinguals reading in their L2. Stimulus conditions consisted of high- and low-constraint sentences embedded with either a cognate or a non-cognate control word. The results showed clear signs of cognate facilitation in both early and late reading measures and in both sentence conditions. This evidence in favour of the non-selective hypothesis indicates that the manifestation of non-selective lexical access in reading is not constrained by top-down effects from semantic context.
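The cognate-facilitation contrast in the 2×2 design can be sketched with hypothetical reading-time data. Condition names and durations below are illustrative inventions, not the study's measurements.

```python
import numpy as np

# Hypothetical first-fixation durations (ms) in the 2x2 design:
# sentence constraint (high/low) x word type (cognate/control).
durations = {
    ("high", "cognate"): [210, 205, 215],
    ("high", "control"): [240, 235, 245],
    ("low",  "cognate"): [220, 215, 225],
    ("low",  "control"): [250, 245, 255],
}

# Cognate facilitation = control minus cognate reading time, per constraint level.
for constraint in ("high", "low"):
    effect = (np.mean(durations[(constraint, "control")])
              - np.mean(durations[(constraint, "cognate")]))
    print(f"{constraint}-constraint cognate facilitation: {effect:.0f} ms")
```

Facilitation of similar size in both constraint conditions corresponds to the study's finding that semantic context did not constrain non-selective lexical access.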
Abstract:
The degree to which a person relies on visual stimuli for spatial orientation is termed visual dependency (VD). VD is considered a perceptual trait or cognitive style influenced by psychological factors and mediated by central re-weighting of the sensory inputs involved in spatial orientation. VD is often measured using the rod-and-disk test, wherein participants align a central rod to the subjective visual vertical (SVV) in the presence of a background that is either stationary or rotating around the line of sight - dynamic SVV. Although this task has been employed to assess VD in health and vestibular disease, it is unknown what effect torsional nystagmic eye movements may have on individual performance. Using caloric ear irrigation, 3D video-oculography and the rod-and-disk test, we show that caloric torsional nystagmus modulates measures of visual dependency and demonstrate that increases in tilt after irrigation are positively correlated with changes in ocular torsional eye movements. When the direction of the slow phase of the torsional eye movement induced by the caloric is congruent with that induced by the rotating visual stimulus, there is a significant increase in tilt. When these two torsional components are in opposition there is a decrease. These findings show that measures of visual dependence can be influenced by oculomotor responses induced by caloric stimulation. The findings are of significance for clinical studies as they indicate that VD, which often increases in vestibular disorders, is not only modulated by changes in cognitive style but also by eye movements, in particular nystagmus.
Abstract:
Processing language is postulated to involve a mental simulation, or re-enactment of perceptual, motor, and introspective states that were acquired experientially (Barsalou, 1999, 2008). One such aspect that is mentally simulated during processing of certain concepts is spatial location. For example, upon processing the word “moon” the prominent spatial location of the concept (e.g. ‘upward’) is mentally simulated. In six eye-tracking experiments, we investigate how mental simulations of spatial location affect processing. We first address a conflict in previous literature whereby processing is shown to be impacted in both a facilitatory and inhibitory way. Two of our experiments showed that mental simulations of spatial association facilitate saccades launched toward compatible locations; however, a third experiment showed an inhibitory effect on saccades launched towards incompatible locations. We investigated these differences with further experiments, which led us to conclude that the nature of the effect (facilitatory or inhibitory) is dependent on the demands of the task and, in fitting with the theory of Grounded Cognition (Barsalou, 2008), that mental simulations impact processing in a dynamic way. Three further experiments explored the nature of verticality – specifically, whether ‘up’ is perceived as away from gravity, or above our head. Using similar eye-tracking methods, and by manipulating the position of participants, we were able to dissociate these two possible standpoints. The results showed that mental simulations of spatial location facilitated saccades to compatible locations, but only when verticality was dissociated from gravity (i.e. ‘up’ was above the participant’s head). We conclude that this is not due to an ‘embodied’ mental simulation, but rather a result of heavily ingrained visuo-motor association between vertical space and eye movements.
Abstract:
This paper investigates how textbook design may influence students’ visual attention to graphics, photos and text in current geography textbooks. Eye tracking, a visual method of data collection and analysis, was utilised to precisely monitor students’ eye movements while observing geography textbook spreads. In an exploratory study utilising random sampling, the eye movements of 20 students (secondary school students 15–17 years of age and university students 20–24 years of age) were recorded. The research entities were double-page spreads of current German geography textbooks covering an identical topic, taken from five separate textbooks. A two-stage test was developed. Each participant was given the task of first looking at the entire textbook spread to determine what was being explained on the pages. In the second stage, participants solved one of the tasks from the exercise section. Overall, each participant studied five different textbook spreads and completed five set tasks. After the eye tracking study, each participant completed a questionnaire. The results may verify textbook design as one crucial factor for successful knowledge acquisition from textbooks. Based on the eye tracking documentation, learning-related challenges posed by images and complex image-text structures in textbooks are elucidated and related to educational psychology insights and findings from visual communication and textbook analysis.
Abstract:
Background: Shifting gaze and attention ahead of the hand is a natural component in the performance of skilled manual actions. Very few studies have examined the precise co-ordination between the eye and hand in children with Developmental Coordination Disorder (DCD). Methods: This study directly assessed the maturity of eye-hand co-ordination in children with DCD. A double-step pointing task was used to investigate the coupling of the eye and hand in 7-year-old children with and without DCD. Sequential targets were presented on a computer screen, and eye and hand movements were recorded simultaneously. Results: There were no differences between typically developing (TD) and DCD groups when completing fast single-target tasks. There were very few differences in the completion of the first movement in the double-step tasks, but differences did occur during the second sequential movement. One factor appeared to be the propensity for the DCD children to delay their hand movement until some period after the eye had landed on the target. This resulted in a marked increase in eye-hand lead during the second movement, disrupting the close coupling and leading to a slower and less accurate hand movement among children with DCD. Conclusions: In contrast to skilled adults, both groups of children preferred to foveate the target prior to initiating a hand movement if time allowed. The TD children, however, were more able to reduce this foveation period and shift towards a feedforward mode of control for hand movements. The children with DCD persevered with a look-then-move strategy, which led to an increase in error. For the group of DCD children in this study, there was no evidence of a problem in speed or accuracy of simple movements, but there was a difficulty in concatenating the sequential shifts of gaze and hand required for the completion of everyday tasks or typical assessment items.
Abstract:
A method for simultaneous iontophoretic injections of the anterograde tracer Phaseolus vulgaris leukoagglutinin and the retrograde tracer fluorogold was used to characterize in the rat a hypothalamothalamocortical pathway ending in a region thought to regulate attentional mechanisms by way of eye and head movements. The relevant medial hypothalamic nuclei receive pheromonal information from the amygdala and project to specific parts of the thalamic nucleus reuniens and anteromedial nucleus, which then project to a specific lateral part of the retrosplenial area (or medial visual cortex). This cortical area receives a convergent input from the lateral posterior thalamic nucleus and projects to the superior colliculus. Bidirectional connections with the hippocampal formation suggest that activity in this circuit is modified by previous experience. Striking parallels with basal ganglia circuitry are noted.