28 results for Visual Word-recognition
Abstract:
A substantial amount of evidence has been collected to propose an exclusive role for the dorsal visual pathway in the control of guided visual search mechanisms, specifically in the preattentive direction of spatial selection [Vidyasagar, T. R. (1999). A neuronal model of attentional spotlight: Parietal guiding the temporal. Brain Research and Reviews, 30, 66-76; Vidyasagar, T. R. (2001). From attentional gating in macaque primary visual cortex to dyslexia in humans. Progress in Brain Research, 134, 297-312]. Moreover, it has been suggested recently that the dorsal visual pathway is specifically involved in the spatial selection and sequencing required for orthographic processing in visual word recognition. In this experiment we manipulate the demands for spatial processing in a word-recognition (lexical decision) task by presenting target words either in a normal spatial configuration or with the constituent letters of each word spatially shifted relative to each other. Accurate word recognition in the shifted-words condition should impose higher spatial encoding demands, thereby making greater demands on the dorsal visual stream. Magnetoencephalographic (MEG) neuroimaging revealed a high-frequency (35-40 Hz) right posterior parietal activation, consistent with dorsal stream involvement, occurring between 100 and 300 ms post-stimulus onset, and then again at 200-400 ms. Moreover, this signal was stronger in the shifted-words condition than in the normal-words condition. This result provides neurophysiological evidence that the dorsal visual stream may play an important role in visual word recognition and reading. These results further provide a plausible link between early-stage theories of reading and the magnocellular-deficit theory of dyslexia, which characterises many types of reading difficulty. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
We used magnetoencephalography (MEG) to map the spatiotemporal evolution of cortical activity for visual word recognition. We show that for five-letter words, activity in the left hemisphere (LH) fusiform gyrus expands systematically in both the posterior-anterior and medial-lateral directions over the course of the first 500 ms after stimulus presentation. Contrary to what would be expected from cognitive models and hemodynamic studies, the component of this activity that spatially coincides with the visual word form area (VWFA) is not active until around 200 ms post-stimulus, and critically, this activity is preceded by and co-active with activity in parts of the inferior frontal gyrus (IFG, BA44/6). The spread of activity in the VWFA for words does not appear in isolation but is co-active, in parallel, with the spread of activity in the anterior middle temporal gyrus (aMTG, BA 21 and 38), the posterior middle temporal gyrus (pMTG, BA37/39), and the IFG. © 2004 Elsevier Inc. All rights reserved.
Abstract:
Background - It is well established that the left inferior frontal gyrus plays a key role in the cerebral cortical network that supports reading and visual word recognition. Less clear is when in time this contribution begins. We used magnetoencephalography (MEG), which has both good spatial and excellent temporal resolution, to address this question. Methodology/Principal Findings - MEG data were recorded during a passive viewing paradigm, chosen to emphasize the stimulus-driven component of the cortical response, in which right-handed participants were presented with words, consonant strings, and unfamiliar faces at central vision. Time-frequency analyses showed a left-lateralized inferior frontal gyrus (pars opercularis) response to words between 100-250 ms in the beta frequency band that was significantly stronger than the response to consonant strings or faces. The left inferior frontal gyrus response to words peaked at ~130 ms. This was significantly later than the peak response of the left middle occipital gyrus (~115 ms), but not significantly different from the peak response of the left mid fusiform gyrus (~140 ms), at a location coincident with the fMRI-defined visual word form area (VWFA). Significant responses to words were also detected in other parts of the reading network, including the anterior middle temporal gyrus, the left posterior middle temporal gyrus, the angular and supramarginal gyri, and the left superior temporal gyrus. Conclusions/Significance - These findings suggest very early interactions between the vision and language domains during visual word recognition, with speech motor areas being activated at the same time as the orthographic word-form is being resolved within the fusiform gyrus. This challenges the conventional view of a temporally serial processing sequence for visual word recognition in which letter forms are initially decoded, interact with their phonological and semantic representations, and only then gain access to a speech code.
Abstract:
This thesis investigates various aspects of peripheral vision, which is known not to be as acute as vision at the point of fixation. Differences between foveal and peripheral vision are generally thought to be of a quantitative rather than a qualitative nature. However, the rate of decline in sensitivity between foveal and peripheral vision is known to be task dependent, and the mechanisms underlying the differences are not yet well understood. Several experiments described here have employed a psychophysical technique referred to as 'spatial scaling'. Thresholds are determined at several eccentricities for ranges of stimuli which are magnified versions of one another. Using this methodology a parameter called the E2 value is determined, which defines the eccentricity at which stimulus size must double in order to maintain performance equivalent to that at the fovea. Experiments of this type have evaluated the eccentricity dependencies of detection tasks (kinetic and static presentation of a differential light stimulus), resolution tasks (bar orientation discrimination in the presence of flanking stimuli, word recognition and reading performance), and relative localisation tasks (curvature detection and discrimination). Most tasks could be made equal across the visual field by appropriate magnification. E2 values are found to vary widely depending on the task, and possible reasons for such variations are discussed. The dependence of positional acuity thresholds on stimulus eccentricity, separation and spatial scale parameters is also examined. The relevance of each factor in producing 'Weber's law' for position can be determined from the results.
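The E2 definition above corresponds to the standard spatial-scaling relation in which the threshold stimulus size S needed to hold performance constant grows linearly with eccentricity E from its foveal value S_0 (a conventional formulation consistent with the description here, not necessarily the exact notation used in the thesis):

\[ S(E) = S_0 \left( 1 + \frac{E}{E_2} \right) \]

At E = E_2 the required size is exactly twice the foveal value, so a small E_2 indicates a steep fall-off in peripheral performance for that task and a large E_2 a shallow one.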
Abstract:
Objectives Ecstasy is a recreational drug whose active ingredient, 3,4-methylenedioxymethamphetamine (MDMA), acts predominantly on the serotonergic system. Although MDMA is known to be neurotoxic in animals, the long-term effects of recreational Ecstasy use in humans remain controversial, but one commonly reported consequence is mild cognitive impairment, particularly affecting verbal episodic memory. Although event-related potentials (ERPs) have made significant contributions to our understanding of human memory processes, until now they have not been applied to study the long-term effects of Ecstasy. The aim of this study was to examine the effects of past Ecstasy use on recognition memory for both verbal and non-verbal stimuli using ERPs. Methods We compared the ERPs of 15 Ecstasy/polydrug users with those of 14 cannabis users and 13 non-illicit drug users as controls. Results Despite equivalent memory performance, Ecstasy/polydrug users showed an attenuated late positivity over left parietal scalp sites, a component associated with the specific memory process of recollection. Conclusions This effect was only found in the word recognition task, which is consistent with evidence that left hemisphere cognitive functions are disproportionately affected by Ecstasy, probably because the serotonergic system is laterally asymmetrical. Experimentally, decreasing central serotonergic activity through acute tryptophan depletion also selectively impairs recollection, and this too suggests the importance of the serotonergic system. Overall, our results suggest that Ecstasy users, who also use a wide range of other drugs, show a durable abnormality in a specific ERP component thought to be associated with recollection.
Abstract:
To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz-transformed signals and one original signal, from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz-transformed signals. We further show that the expected responses of even and odd symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation using both symmetric and asymmetric filters to account for some perceptual phase distortions observed in image signals - notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by virtue of its representing phase, orientation and energy as orthogonal signal quantities.
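As a minimal sketch of the construction this abstract builds on (the standard frequency-domain form of the 2-D Riesz transform, i.e. the monogenic signal; the SVD-based estimator and the Bayesian weighting described above are not reproduced here), the two Riesz-transformed signals and the original signal yield local energy, orientation and phase as follows:

import numpy as np

def monogenic_signal(image):
    # Riesz (monogenic) representation of a 2-D image patch.
    # The patch is assumed to have been bandpass filtered already,
    # matching the quadrature-filter setting described in the abstract.
    rows, cols = image.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)
    radius = np.sqrt(U**2 + V**2)
    radius[0, 0] = 1.0                           # avoid division by zero at DC
    H1 = -1j * U / radius                        # first Riesz transfer function
    H2 = -1j * V / radius                        # second Riesz transfer function
    F = np.fft.fft2(image)
    r1 = np.real(np.fft.ifft2(F * H1))           # first Riesz-transformed signal
    r2 = np.real(np.fft.ifft2(F * H2))           # second Riesz-transformed signal
    odd = np.sqrt(r1**2 + r2**2)                 # odd (edge-like) component
    energy = np.sqrt(image**2 + r1**2 + r2**2)   # local energy (scalar)
    orientation = np.arctan2(r2, r1)             # local orientation
    phase = np.arctan2(odd, image)               # local phase (even vs. odd)
    return energy, orientation, phase

The three returned arrays correspond to the 3-D signal-space vector mentioned above: the original (even) signal and the two Riesz (odd) components, re-expressed as energy, orientation and phase.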
Abstract:
BACKGROUND: Glue ear or otitis media with effusion (OME) is common in children and may be associated with hearing loss (HL). For most children it has no long lasting effects on cognitive development but it is unclear whether there are subgroups at higher risk of sequelae. OBJECTIVES: To examine the association between a score comprising the number of times a child had OME and HL (OME/HL score) in the first four/five years of life and IQ at ages 4 and 8. To examine whether any association between OME/HL and IQ is moderated by socioeconomic, child or family factors. METHODS: Prospective, longitudinal cohort study: the Avon Longitudinal Study of Parents and Children (ALSPAC). 1155 children were tested using tympanometry on up to nine occasions and for hearing for speech (word recognition) on up to three occasions between the ages of 8 months and 5 years. An OME/HL score was created and associations with IQ at ages 4 and 8 were examined. Potential moderators included a measure of the child's cognitive stimulation at home (HOME score). RESULTS: For the whole sample at age 4 the group with the highest 10% OME/HL scores had performance IQ 5 points lower [95% CI -9, -1] and verbal IQ 6 points lower [95% CI -10, -3] than the unaffected group. By age 8 the evidence for group differences was weak. There were significant interactions between OME/HL and the HOME score: those with high OME/HL scores and low 18 month HOME scores had lower IQ at ages 4 and 8 than those with high OME/HL scores and high HOME scores. Adjusted mean differences ranged from 5 to 8 IQ points at ages 4 and 8. CONCLUSIONS: The cognitive development of children from homes with lower levels of cognitive stimulation is susceptible to the effects of glue ear and hearing loss.
Abstract:
Human object recognition is considered to be largely invariant to translation across the visual field. However, the origin of this invariance to positional changes has remained elusive, since numerous studies found that the ability to discriminate between visual patterns develops in a largely location-specific manner, with only a limited transfer to novel visual field positions. In order to reconcile these contradicting observations, we traced the acquisition of categories of unfamiliar grey-level patterns within an interleaved learning and testing paradigm that involved either the same or different retinal locations. Our results show that position invariance is an emergent property of category learning. Pattern categories acquired over several hours at a fixed location in either the peripheral or central visual field gradually become accessible at new locations without any position-specific feedback. Furthermore, categories of novel patterns presented in the left hemifield are learnt distinctly faster and generalized better to other locations than those learnt in the right hemifield. Our results suggest that, during learning, initially position-specific representations of categories based on spatial pattern structure become encoded in a relational, position-invariant format. Such representational shifts may provide a generic mechanism to achieve perceptual invariance in object recognition.
Abstract:
We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures. © 2012 Psychonomic Society, Inc.
Abstract:
Recent experimental studies have shown that development towards adult performance levels in configural processing in object recognition is delayed through middle childhood. Whilst part-changes to animal and artefact stimuli are processed with levels of accuracy similar to those of adults from 7 years of age, relative size changes to stimuli result in a significant decrease in relative performance for participants aged between 7 and 10. Two sets of computational experiments were run using the JIM3 artificial neural network with adult and 'immature' versions to simulate these results. One set progressively decreased the number of neurons involved in the representation of view-independent metric relations within multi-geon objects. A second set of computational experiments involved decreasing the number of neurons that represent view-dependent (nonrelational) object attributes in JIM3's Surface Map. The simulation results that showed the best qualitative match to the empirical data occurred when artificial neurons representing metric-precision relations were entirely eliminated. These results therefore provide further evidence for the late development of relational processing in object recognition and suggest that children in middle childhood may recognise objects without forming structural description representations.
Abstract:
Automatic Term Recognition (ATR) is a fundamental processing step preceding more complex tasks such as semantic search and ontology learning. Of the large number of methodologies available in the literature, only a few are able to handle both single-word and multi-word terms. In this paper we present a comparison of five such algorithms and propose a combined approach using a voting mechanism. We evaluated the six approaches using two different corpora and show how the voting algorithm performs best on one corpus (a collection of texts from Wikipedia) and less well on the Genia corpus (a standard life science corpus). This indicates that the choice and design of the corpus has a major impact on the evaluation of term recognition algorithms. Our experiments also showed that single-word terms can be equally important and make up a fairly large proportion of terms in certain domains. As a result, algorithms that ignore single-word terms may cause problems for tasks built on top of ATR. Effective ATR systems also need to take into account both the unstructured text and the structured aspects, which means that information extraction techniques need to be integrated into the term recognition process.
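A minimal sketch of one way such a voting mechanism could combine the ranked outputs of the individual ATR algorithms (a simple Borda-style count; the paper's actual five algorithms and their weighting are not specified here, so the names in the usage comment are purely illustrative):

from collections import defaultdict

def vote_terms(rankings, top_k=100):
    # rankings: a list of ranked candidate-term lists (best first),
    # one per ATR algorithm; the scoring of the individual algorithms
    # is assumed to have been done elsewhere.
    scores = defaultdict(float)
    for ranked_terms in rankings:
        n = len(ranked_terms)
        for rank, term in enumerate(ranked_terms):
            scores[term] += (n - rank) / n   # higher-ranked terms earn larger votes
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Illustrative usage with three hypothetical single/multi-word term rankings:
# combined = vote_terms([cvalue_ranked, tfidf_ranked, weirdness_ranked])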
Abstract:
Parkinson's disease (PD) is a common disorder of middle-aged and elderly people, in which there is degeneration of the extra-pyramidal motor system. In some patients, the disease is associated with a range of visual signs and symptoms, including defects in visual acuity, colour vision, the blink reflex, pupil reactivity, saccadic and smooth pursuit movements and visual evoked potentials. In addition, there may be psychophysical changes, disturbances of complex visual functions such as visuospatial orientation and facial recognition, and chronic visual hallucinations. Some of the treatments associated with PD may have adverse ocular reactions. If visual problems are present, they can have an important effect on overall motor function, and quality of life of patients can be improved by accurate diagnosis and correction of such defects. Moreover, visual testing is useful in separating PD from other movement disorders with visual symptoms, such as dementia with Lewy bodies (DLB), multiple system atrophy (MSA) and progressive supranuclear palsy (PSP). Although not central to PD, visual signs and symptoms can be an important though obscure aspect of the disease and should not be overlooked.
Abstract:
PURPOSE: To provide a consistent standard for the evaluation of different types of presbyopic correction. SETTING: Eye Clinic, School of Life and Health Sciences, Aston University, Birmingham, United Kingdom. METHODS: Presbyopic corrections examined were accommodating intraocular lenses (IOLs), simultaneous multifocal and monovision contact lenses, and varifocal spectacles. Binocular near visual acuity measured with different optotypes (uppercase letters, lowercase letters, and words) and reading metrics assessed with the Minnesota Near Reading chart (reading acuity, critical print size [CPS], CPS reading speed) were intercorrelated (Pearson product moment correlations) and assessed for concordance (intraclass correlation coefficients [ICC]) and agreement (Bland-Altman analysis) for indication of clinical usefulness. RESULTS: Nineteen accommodating IOL cases, 40 simultaneous contact lens cases, and 38 varifocal spectacle cases were evaluated. Other than CPS reading speed, all near visual acuity and reading metrics correlated well with each other (r>0.70, P<.001). Near visual acuity measured with uppercase letters was highly concordant (ICC, 0.78) and in close agreement with that measured with lowercase letters (+/- 0.17 logMAR). Near word acuity agreed well with reading acuity (+/- 0.16 logMAR), which in turn agreed well with near visual acuity measured with uppercase letters (+/- 0.16 logMAR). Concordance (ICC, 0.18 to 0.46) and agreement (+/- 0.24 to 0.30 logMAR) of CPS with the other near metrics was moderate. CONCLUSION: Measurement of near visual ability in presbyopia should be standardized to include assessment of near visual acuity with logMAR uppercase-letter optotypes, the smallest logMAR print size that maintains maximum reading speed (CPS), and reading speed. J Cataract Refract Surg 2009; 35:1401-1409 (C) 2009 ASCRS and ESCRS
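A minimal sketch of the Bland-Altman agreement calculation behind figures such as '+/- 0.17 logMAR' above (bias and 95% limits of agreement between two paired measures; the variable names are illustrative and the study's exact analysis settings are not reproduced):

import numpy as np

def bland_altman(method_a, method_b):
    # Paired measurements from two near-vision metrics, e.g. uppercase-letter
    # acuity vs. lowercase-letter acuity, both in logMAR.
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()                      # mean difference between methods
    half_width = 1.96 * diff.std(ddof=1)    # half-width of the 95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

# Illustrative usage with hypothetical paired acuity data:
# bias, (lower, upper) = bland_altman(acuity_uppercase, acuity_lowercase)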
Abstract:
This paper presents a case study of the use of a visual interactive modelling system to investigate issues involved in the management of a hospital ward. Visual interactive modelling (VIM) systems are seen to offer the learner the opportunity to explore operational management issues from a varied perspective and to provide an interactive system in which the learner receives feedback on the consequences of their actions. However, maximising the potential learning experience for a student requires the recognition that they need a task structure which helps them to understand the concepts involved. These factors can be incorporated into the visual interactive model by providing an interface customised to guide the student through the experimentation. Recent developments in VIM systems, in terms of their connectivity with the programming language Visual Basic, facilitate this customisation.
Abstract:
This study explores the relationship between attentional processing mediated by visual magnocellular (MC) processing and reading ability. Reading ability in a group of primary school children was compared to performance on a visually cued coherent motion detection task. The results showed that a brief spatial cue was more effective in drawing attention either away from or towards a visual target in the group of readers ranked in the upper 25% of the sample compared to lower-ranked readers. Regression analysis showed a significant relationship between attentional processing and reading when the effects of age and intellectual ability were removed. Results suggested that visual attentional processing was more strongly related to non-word reading than to irregular word reading. (C) 2004 Lippincott Williams & Wilkins, Inc.