166 results for Visual word recognition


Relevance: 100.00%

Abstract:

Empirical investigations have typically treated consciousness as an all-or-none phenomenon. However, a recent theoretical account opens up this perspective by proposing a partial level (between nil and full) of conscious perception. In the well-studied case of single-word reading, short-lived exposure can trigger incomplete word-form recognition, in which letters fall short of forming a whole word in conscious perception, thereby hindering word-meaning access and report. Hence, the progression from incomplete to complete word-form recognition straightforwardly mirrors a transition from partial to full-blown consciousness. We therefore hypothesized that this putative functional bottleneck to consciousness (i.e. the perceptual boundary between partial and full conscious perception) would emerge at a key hub region for word-form recognition during reading, namely the left occipito-temporal junction. We applied a real-time staircase procedure and titrated subjective reports at the threshold between partial (letters) and full (whole word) conscious perception. This experimental approach allowed us to collect trials with identical physical stimulation that nonetheless reflected distinct levels of perceptual experience. Oscillatory brain activity was monitored with magnetoencephalography and revealed that the transition from partial to full word-form perception was accompanied by alpha-band (7-11 Hz) power suppression in the posterior left occipito-temporal cortex. This modulation of rhythmic activity extended anteriorly towards the visual word form area (VWFA), a region whose selectivity for word forms in perception is highly debated. The current findings provide electrophysiological evidence for a functional bottleneck to consciousness, thereby empirically instantiating the recently proposed partial perspective on consciousness. Moreover, they offer an entirely new outlook on the functioning of the VWFA as a late bottleneck to full-blown conscious word-form perception.
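
The real-time staircase described above is, in essence, an adaptive algorithm: exposure is shortened after a "whole word" report and lengthened after a "letters only" report, so that trials converge on the partial/full boundary. The sketch below illustrates the general idea; the callable `present_word`, the step size, and all other parameters are hypothetical placeholders, not the authors' actual procedure.

```python
# Minimal sketch of a real-time 1-up/1-down staircase on exposure duration,
# assuming a hypothetical present_word(duration_ms) that returns the subjective
# report: "word" (full perception) or "letters" (partial perception).
# Step sizes, bounds and trial counts are illustrative, not the study's values.

def run_staircase(present_word, start_ms=50.0, step_ms=8.3,
                  min_ms=8.3, max_ms=200.0, n_trials=120):
    duration = start_ms
    history = []
    for _ in range(n_trials):
        report = present_word(duration)          # "word" or "letters"
        history.append((duration, report))
        if report == "word":
            duration = max(min_ms, duration - step_ms)   # make it harder
        else:
            duration = min(max_ms, duration + step_ms)   # make it easier
    return history
```

The exposure duration around which reports alternate between the two responses estimates the threshold at which physically identical stimuli yield partial vs. full word-form perception.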

Relevance: 100.00%

Abstract:

Top-down contextual influences play a major part in speech understanding, especially in hearing-impaired patients with deteriorated auditory input. These influences are most obvious in difficult listening situations, such as listening to sentences in noise, but can also be observed at the word level under more favorable conditions, as in one of the most commonly used tasks in audiology, i.e., repeating isolated words in silence. This study aimed to explore the role of top-down contextual influences and their dependence on lexical factors and patient-specific factors using standard clinical linguistic material. Spondaic word perception was tested in 160 hearing-impaired patients aged 23-88 years with a four-frequency average pure-tone threshold ranging from 21 to 88 dB HL. Sixty spondaic words were randomly presented at a level adjusted to correspond to a speech perception score between 40% and 70% of the performance-intensity function obtained with monosyllabic words. Phoneme and whole-word recognition scores were used to calculate two context-influence indices (the j factor and the ratio of word scores to phonemic scores), which were correlated with linguistic factors, such as phonological neighborhood density, and with several indices of word occurrence frequency. Contextual influence was greater for spondaic words than for the monosyllabic words used in similar studies, with an overall j factor of 2.07 (SD = 0.5). For both indices, context use decreased with increasing hearing loss once the average hearing loss exceeded 55 dB HL. In right-handed patients, significantly greater context influence was observed for words presented to the right ear than to the left, especially in patients with many years of education. The correlations between raw word scores (and context-influence indices) and word occurrence frequencies showed a significant age-dependent effect: perception scores correlated more strongly with occurrence frequencies drawn from the years of the patients' youth, indicating a "historic" word-frequency effect. This effect was still observed in patients with few years of formal education, but recent occurrence frequencies based on current word exposure had a stronger influence in those patients, especially the younger ones.
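
The two context-influence indices mentioned above have simple closed forms: under the j-factor model (Boothroyd and Nittrouer), the probability of recognizing a whole word equals the probability of recognizing its parts raised to the power j, so j = log(p_word) / log(p_phoneme); the second index is simply the ratio of the word score to the phoneme score. The snippet below shows the arithmetic; the example proportions are illustrative values chosen so that j lands near the reported overall value of about 2.07, and are not data from the study.

```python
import math

def j_factor(word_score, phoneme_score):
    """j = log(p_word) / log(p_phoneme). A j close to the number of parts
    implies independent part recognition; a smaller j implies more
    top-down context use."""
    return math.log(word_score) / math.log(phoneme_score)

def word_to_phoneme_ratio(word_score, phoneme_score):
    return word_score / phoneme_score

# Illustrative proportions: 55% whole-word and 75% phoneme recognition.
print(j_factor(0.55, 0.75))             # ~2.08, near the reported overall 2.07
print(word_to_phoneme_ratio(0.55, 0.75))
```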

Relevance: 100.00%

Abstract:

Traditionally, the ventral occipito-temporal (vOT) area, but not the superior parietal lobules (SPLs), is thought to belong to the neural system for visual word recognition. However, some dyslexic children who exhibit a visual attention span disorder, i.e. poor multi-element parallel processing, also show reduced SPL activation when engaged in visual multi-element categorization tasks. We investigated whether these parietal regions further contribute to letter-identity processing within strings. Adult skilled readers and dyslexic participants with a visual attention span disorder were administered a letter-string comparison task under fMRI. Dyslexic adults were less accurate than skilled readers at detecting letter-identity substitutions within strings. In skilled readers, letter-identity differences were related to enhanced activation of the left vOT; specific neural responses were also found in the superior and inferior parietal regions, including the SPLs bilaterally. Two brain regions specifically related to substituted-letter detection, the left SPL and the left vOT, were less activated in dyslexic participants. These findings suggest that the left SPL, like the left vOT, may contribute to letter-string processing.

Relevance: 100.00%

Abstract:

Morphology is the aspect of language concerned with the internal structure of words. In the past decades, a large body of masked-priming (behavioral and neuroimaging) data has suggested that the visual word recognition system automatically decomposes any morphologically complex word into its constituent morphemes. Yet how morphological processing relates to other reading processes (e.g., orthography and semantics), as well as its underlying neuronal mechanisms, remains to be determined. In the current magnetoencephalography study, we addressed morphology from the perspective of the unification framework: using the Hold/Release paradigm, morphological unification was simulated as the assembly of internal morphemic units into a whole word. Trials representing real words were divided into words with a transparent (true) or a nontransparent (pseudo) morphological relationship. Morphological unification of truly suffixed words was faster and more accurate and additionally enhanced induced oscillations in the narrow gamma band (60-85 Hz, 260-440 ms) in the left posterior occipito-temporal junction. This neural signature could not be explained by mere automatic lexical processing (i.e., stem perception); more likely, it reflects a semantic-access step during the morphological unification process. By demonstrating the validity of unification at the morphological level, this study adds to the broad empirical evidence for unification across other language processes. Furthermore, we argue that morphological unification relies on the retrieval of lexical-semantic associations, reflected in induced gamma-band oscillations in a cerebral hub region for visual word-form processing.
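
Induced (non-phase-locked) power in a narrow band and time window, as reported above, can be estimated by subtracting the evoked response from each trial, band-pass filtering, and averaging the Hilbert envelope over the window of interest. The sketch below illustrates one such estimator for a single sensor or source time course; the array shapes, filter order, and the Hilbert-based estimate itself are assumptions made for illustration, not the authors' MEG pipeline.

```python
# Sketch: induced narrow-gamma (60-85 Hz) power in a 260-440 ms window,
# assuming `epochs` of shape (n_trials, n_times) for one sensor/source,
# sampled at `fs` Hz, with the epoch starting at time t0_s relative to onset.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def induced_gamma_power(epochs, fs, band=(60.0, 85.0),
                        window_s=(0.26, 0.44), t0_s=0.0):
    # Remove the evoked (phase-locked) component to keep induced activity only.
    induced = epochs - epochs.mean(axis=0, keepdims=True)
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, induced, axis=1)
    power = np.abs(hilbert(filtered, axis=1)) ** 2     # analytic amplitude^2
    times = np.arange(epochs.shape[1]) / fs + t0_s
    mask = (times >= window_s[0]) & (times <= window_s[1])
    return power[:, mask].mean(axis=1)                 # one value per trial
```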

Relevance: 100.00%

Abstract:

Here we adopt a novel strategy to investigate phonological assembly. Participants performed a visual lexical decision task in English in which the letters of words and letter strings were delivered either sequentially (promoting phonological assembly) or simultaneously (not promoting phonological assembly). A region-of-interest analysis confirmed that regions previously associated with phonological assembly in studies contrasting different word types (e.g. words versus pseudowords) were also identified by our novel task, which controls for a number of confounding variables. Specifically, the left pars opercularis, the superior part of the ventral precentral gyrus and the supramarginal gyrus were all recruited more during sequential than simultaneous delivery, even when various psycholinguistic characteristics of the stimuli were controlled. This suggests that sequential delivery of orthographic stimuli is a useful tool for exploring how readers with various levels of proficiency use sublexical phonological processing during visual word recognition.
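
The sequential/simultaneous manipulation is essentially a presentation schedule. The toy generator below shows one way such schedules could be laid out for a single trial; the onset timing, the underscore placeholders, and the choice to leave earlier letters on screen are illustrative assumptions, not the study's display parameters.

```python
# Toy schedule generator for sequential vs. simultaneous letter delivery in a
# lexical decision trial. Timings and the frame format are illustrative.

def delivery_schedule(item, mode, letter_soa=0.2):
    """Return a list of (onset_s, text_shown) frames for one trial."""
    if mode == "simultaneous":
        return [(0.0, item)]                      # whole string at once
    frames = []
    for i in range(len(item)):
        # One variant of sequential delivery: letters appear one by one in
        # their string positions and remain on screen.
        shown = "".join(c if j <= i else "_" for j, c in enumerate(item))
        frames.append((i * letter_soa, shown))
    return frames

print(delivery_schedule("TABLE", "sequential"))
print(delivery_schedule("TABLE", "simultaneous"))
```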

Relevance: 90.00%

Abstract:

Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. Whether the effect generalizes in this way, and whether it is equivalent when memory discrimination is performed in the visual vs. the auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared with sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories on unisensory object discrimination was greater when the task was performed in the auditory than in the visual modality. Fourth, there was no evidence of a correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object-processing mechanisms across modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

Relevance: 80.00%

Abstract:

Using optimized voxel-based morphometry, we performed grey matter density analyses on 59 age-, sex- and intelligence-matched young adults with three distinct, progressive levels of musical training intensity or expertise. Structural brain adaptations in musicians have repeatedly been demonstrated in areas involved in auditory perception and motor skills. However, musical activities are not confined to auditory perception and motor performance but are entangled with higher-order cognitive processes. Consequently, neuronal systems involved in such higher-order processing may also be shaped by experience-driven plasticity. We modelled expertise as a three-level regressor to study possible linear relationships between expertise and grey matter density. The key finding of this study is a functional dissimilarity between areas exhibiting increases versus decreases in grey matter as a function of musical expertise. Grey matter density increased with expertise in areas known for their involvement in higher-order cognitive processing: the right fusiform gyrus (visual pattern recognition), right mid-orbital gyrus (tonal sensitivity), left inferior frontal gyrus (syntactic processing, executive function, working memory), left intraparietal sulcus (visuo-motor coordination) and bilateral posterior cerebellar Crus II (executive function, working memory), as well as in an auditory-processing area, the left Heschl's gyrus. Conversely, grey matter density decreased with expertise in bilateral perirolandic and striatal areas related to sensorimotor function, possibly reflecting high automation of motor skills. Moreover, a multiple regression analysis showed that grey matter density in the right mid-orbital area and the inferior frontal gyrus predicted accuracy in detecting fine-grained incongruities in tonal music.
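
Modelling expertise as a three-level regressor amounts to testing, voxel by voxel, for a linear relationship between a coded expertise level (e.g., 1, 2, 3) and grey matter density. The sketch below shows that idea with an ordinary least-squares fit on toy data; the variable names, group sizes, and the absence of covariates are simplifying assumptions, not the study's VBM model.

```python
# Minimal sketch: per-voxel slope of grey matter density on a three-level
# expertise regressor (1 = low, 2 = intermediate, 3 = high).
import numpy as np

def expertise_slopes(gm_density, expertise):
    """gm_density: (n_subjects, n_voxels); expertise: (n_subjects,) in {1,2,3}.
    Returns the per-voxel slope of density on expertise."""
    X = np.column_stack([np.ones_like(expertise, dtype=float), expertise])
    betas, *_ = np.linalg.lstsq(X, gm_density, rcond=None)
    return betas[1]          # positive slope = density increases with expertise

# Toy usage with random data: 59 subjects, 1000 voxels.
rng = np.random.default_rng(0)
density = rng.normal(size=(59, 1000))
levels = np.repeat([1, 2, 3], [20, 20, 19]).astype(float)
print(expertise_slopes(density, levels).shape)     # (1000,)
```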

Relevance: 80.00%

Abstract:

Multisensory experiences enhance perception and facilitate memory retrieval, even when only unisensory information is available for accessing such memories. Using fMRI, we identified human brain regions involved in discriminating visual stimuli according to past multisensory vs. unisensory experiences. Subjects performed a completely orthogonal task, discriminating repeated from initial image presentations intermixed within a continuous recognition task. Half of the initial presentations were multisensory, and all repetitions were exclusively visual. Despite only single-trial exposures to the initial image presentations, accuracy in indicating image repetitions was significantly improved by past auditory-visual multisensory experiences relative to images only encountered visually. Similarly, regions within the lateral occipital complex, areas typically associated with visual object recognition, were more active for visual stimuli with multisensory than with unisensory pasts. Additional differential responses were observed in the anterior cingulate and frontal cortices. Multisensory experiences are thus registered by the brain even when of no immediate behavioral relevance and can be used to categorize memories. These data reveal the functional efficacy of multisensory processing.
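
The design hinges on a specific trial structure: every image appears twice in a continuous stream, the initial presentation is auditory-visual for half of the images and purely visual for the rest, and every repetition is visual only. The sketch below builds such a sequence while guaranteeing that each repeat follows its initial presentation; the slot-sampling scheme and trial format are illustrative assumptions rather than the published trial lists.

```python
# Toy construction of a continuous recognition sequence: each image appears
# twice, half of the initial presentations are auditory-visual, all repeats
# are visual-only, and every repeat is placed after its initial presentation.
import random

def build_sequence(images, seed=0):
    rng = random.Random(seed)
    imgs = list(images)
    rng.shuffle(imgs)
    half = len(imgs) // 2
    slots = [None] * (2 * len(imgs))
    free = list(range(len(slots)))
    rng.shuffle(free)
    for i, img in enumerate(imgs):
        first, second = sorted((free.pop(), free.pop()))
        context = "audio-visual" if i < half else "visual-only"
        slots[first] = {"image": img, "presentation": "initial",
                        "initial_context": context}
        slots[second] = {"image": img, "presentation": "repeat",
                         "initial_context": context}   # repeat itself is visual-only
    return slots

seq = build_sequence([f"img{i:02d}" for i in range(40)])
print(seq[0])
```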

Relevance: 80.00%

Abstract:

We perceive our environment through multiple sensory channels. Nonetheless, research has traditionally focused on sensory processing within single modalities. Investigating how our brain integrates multisensory information is therefore crucial for understanding how organisms cope with a constantly changing, dynamic environment. During my thesis I investigated how multisensory events impact our perception and brain responses, both when auditory-visual stimuli are presented simultaneously and when multisensory events at one point in time impact later unisensory processing. In "Looming signals reveal synergistic principles of multisensory integration" (Cappe, Thelen et al., 2012) we investigated the neuronal substrates involved in detecting motion in depth under multisensory vs. unisensory conditions. We showed that congruent auditory-visual looming (i.e. approaching) signals are preferentially integrated by the brain. Further, we showed that early effects under these conditions are relevant for behavior, effectively speeding up responses to the combined stimulus presentations. In "Electrical neuroimaging of memory discrimination based on single-trial multisensory learning" (Thelen et al., 2012), we investigated the behavioral impact of single encounters with meaningless auditory-visual object pairings on subsequent visual object recognition. In addition to showing that these encounters lead to impaired recognition accuracy upon repeated visual presentations, we showed that the brain discriminates images as early as ~100 ms post-stimulus onset according to the initial encounter context. In "Single-trial multisensory memories affect later visual and auditory object recognition" (Thelen et al., in review) we addressed whether auditory object recognition is affected by single-trial multisensory memories, and whether recognition accuracy for sounds is affected by the initial encounter context in the same way as for visual objects. We found that this is indeed the case. Based on our behavioral findings, we propose that a common underlying brain network is differentially involved during the encoding and retrieval of images and sounds.

Relevance: 80.00%

Abstract:

The ability to identify letters and encode their positions is a crucial step in the word recognition process. However, despite their word-identification problems, the ability of dyslexic children to encode letter identity and letter position within strings has not been systematically investigated. This study aimed to fill this gap and further explored how letter-identity and letter-position encoding are modulated by letter context in developmental dyslexia. For this purpose, a letter-string comparison task was administered to French dyslexic children and two control groups matched for chronological age (CA) and reading age (RA). Children had to judge whether two successively and briefly presented four-letter strings were identical or different. Letter position and letter identity were manipulated through the transposition (e.g., RTGM vs. RMGT) or substitution (e.g., TSHF vs. TGHD) of two letters. Non-words, pseudo-words, and words were used as stimuli to investigate sub-lexical and lexical effects on letter encoding. Dyslexic children showed both substitution and transposition detection problems relative to CA controls. A detection advantage for substitutions over transpositions was found only for words in dyslexic children, whereas it extended to pseudo-words in RA controls and to all types of items in CA controls. Letters were better identified by the dyslexic group when they belonged to orthographically familiar strings. Letter-position encoding was severely impaired in dyslexic children, who, in contrast to CA controls, showed no word-context effect. Overall, the current findings point to a marked letter-identity and letter-position encoding disorder in developmental dyslexia.
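
The two manipulations are easy to make concrete: a transposition swaps two letters of the reference string, whereas a substitution replaces them with different letters. The toy generators below reproduce the RTGM/RMGT and TSHF/TGHD style of examples; the manipulated positions and the consonant set drawn from are illustrative choices, not the construction rules of the French stimulus set.

```python
# Toy generators for the two string manipulations used in the comparison task.
import random

def transpose(s, i=1, j=3):
    """Swap the letters at positions i and j (e.g., RTGM -> RMGT)."""
    chars = list(s)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def substitute(s, positions=(1, 3), alphabet="BCDFGHJKLMNPQRSTVWXZ", seed=0):
    """Replace the letters at the given positions with different consonants
    (e.g., TSHF -> TGHD-style changes)."""
    rng = random.Random(seed)
    chars = list(s)
    for p in positions:
        chars[p] = rng.choice([c for c in alphabet if c != chars[p]])
    return "".join(chars)

print(transpose("RTGM"))      # RMGT
print(substitute("TSHF"))     # two-letter substitution at positions 1 and 3
```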

Relevance: 80.00%

Abstract:

Cortical electrical stimulation mapping was used to study the neural substrates of writing in the temporoparietal cortex. We identified the sites involved in oral language (sentence reading and naming) and in writing from dictation, in order to spare these areas during the removal of brain tumours in 30 patients (23 in the left hemisphere and 7 in the right). Electrostimulation of the cortex impaired writing ability at 62 restricted cortical sites (0.25 cm²). These were found in the left temporoparietal lobe and were mostly located along the superior temporal gyrus (Brodmann areas 22 and 42). Stimulation of the right temporoparietal lobe in right-handed patients produced no writing impairments. However, locations varied considerably between individuals. Stimulation produced combined symptoms (affecting both oral language and writing) in fourteen patients, whereas in eight other patients stimulation induced pure agraphia, with no oral language disturbance, at twelve of the identified sites. Each detected site affected writing in a different way. We identified the various stages of the auditory-to-motor pathway of writing from dictation: comprehension of the dictated sentences (word-deafness sites), lexico-semantic retrieval, and phonological processing. In the group analysis, the barycentres of the different types of writing interference revealed a hierarchical functional organization along the superior temporal gyrus, from initial word recognition to lexico-semantic and phonological processes along the ventral and dorsal comprehension pathways, supporting the previously described auditory-to-motor process. The left posterior Sylvian region thus supports different aspects of writing that are extremely specialized and localized, sometimes segregated in a way that could account for the pure agraphia long described after damage to this region.
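
The group analysis summarizes each class of stimulation-induced writing interference by the barycentre (mean coordinate) of the sites that produced it. The snippet below shows that computation on made-up coordinates; the site positions and interference labels are placeholders, not the patients' data.

```python
# Sketch of group-level barycentres for stimulation sites grouped by the type
# of writing interference they produced. Coordinates and labels are invented.
import numpy as np

def barycentres(coords, labels):
    """coords: (n_sites, 3) stereotactic coordinates; labels: length-n list of
    interference types. Returns {type: mean (x, y, z)}."""
    coords = np.asarray(coords, dtype=float)
    labels = np.asarray(labels)
    return {lab: coords[labels == lab].mean(axis=0) for lab in set(labels)}

sites = [[-58, -20, 5], [-62, -28, 8], [-54, -42, 20], [-50, -48, 24]]
types = ["word deafness", "word deafness", "pure agraphia", "pure agraphia"]
print(barycentres(sites, types))
```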

Relevance: 40.00%

Abstract:

In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on the comprehension of auditory and visual words and on the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), as well as an auditory object (sound recognition) task and the Token Test. Ninety-two auditory comprehension interference sites were observed. We found that auditory comprehension involved a few fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension appear to rely on two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved two pathways, one along the anterior part of the left superior temporal gyrus and one posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated the perceptual consciousness attached to speech comprehension: the initial word-discrimination process can be considered an "automatic" stage, attentional feedback not being impaired by stimulation as it is at the lexical-semantic stage. A multimodal organization of the superior temporal gyrus was also detected, since some neurones appeared to be involved in the comprehension of visual material and in naming. These findings demonstrate a finely graded, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with dual-stream models of language comprehension.

Relevance: 30.00%

Abstract:

Previous studies have demonstrated that a region in the left ventral occipito-temporal (LvOT) cortex is highly selective for the visual forms of written words and objects relative to closely matched visual stimuli. Here, we investigated why LvOT activation is not higher for reading than for picture naming even though written words and pictures of objects have grossly different visual forms. To compare neuronal responses to words and pictures within the same LvOT area, we used functional magnetic resonance imaging adaptation and instructed participants to name target stimuli that followed briefly presented masked primes of either the same stimulus type as the target (word-word, picture-picture) or a different stimulus type (picture-word, word-picture). We found that activation throughout posterior and anterior parts of LvOT was reduced when the prime had the same name/response as the target, irrespective of whether the prime-target relationship was within or between stimulus types. As posterior LvOT is a visual form processing area, and there was no visual-form similarity between the different stimulus types, we suggest that our results indicate automatic top-down influences from pictures to words and from words to pictures. This novel perspective motivates further investigation of the functional properties of this intriguing region.

Relevance: 30.00%

Abstract:

Brittle cornea syndrome (BCS) is an autosomal recessive disorder characterised by extreme corneal thinning and fragility. Corneal rupture can therefore occur either spontaneously or following minimal trauma in affected patients. Causative pathogenic mutations in two genes, ZNF469 and PRDM5, have now been identified and collectively account for the condition in nearly all patients with BCS ascertained to date. Effective molecular diagnosis is therefore available for affected patients and for those at risk of being heterozygous carriers of BCS. We have previously identified mutations in ZNF469 in 14 families (in addition to 6 reported by others in the literature) and in PRDM5 in 8 families (with 1 further family now published by others). Clinical features include extreme corneal thinning with rupture, high myopia, blue sclerae, deafness of mixed aetiology with hypercompliant tympanic membranes, and variable skeletal manifestations. Corneal rupture may be the presenting feature of BCS and may be incorrectly attributed to non-accidental injury. Mainstays of management include prevention of ocular rupture through protective polycarbonate spectacles, careful monitoring of visual and auditory function, and assessment for skeletal complications such as developmental dysplasia of the hip. Effective management depends upon appropriate identification of affected individuals, which may be challenging given the phenotypic overlap of BCS with other connective tissue disorders.