68 results for Visual Speaker Recognition, Visual Speech Recognition, Cascading Appearance-Based Features
at Université de Lausanne, Switzerland
Abstract:
We analyzed the coherence of electroencephalographic (EEG) signals recorded symmetrically from the two hemispheres, while subjects (n = 9) were viewing visual stimuli. Considering the many common features of the callosal connectivity in mammals, we expected that, as in our animal studies, interhemispheric coherence (ICoh) would increase only with bilateral iso-oriented gratings located close to the vertical meridian of the visual field, or extending across it. Indeed, a single grating that extended across the vertical meridian significantly increased the EEG ICoh in normal adult subjects. These ICoh responses were obtained from occipital and parietal derivations and were restricted to the gamma frequency band. They were detectable with different EEG references and were robust across and within subjects. Other unilateral and bilateral stimuli, including identical gratings that were effective in anesthetized animals, did not affect ICoh in humans. This fact suggests the existence of regulatory influences, possibly of a top-down kind, on the pattern of callosal activation in conscious human subjects. In addition to establishing the validity of EEG coherence analysis for assaying cortico-cortical connectivity, this study extends to the human brain the finding that visual stimuli cause interhemispheric synchronization, particularly in frequencies of the gamma band. It also indicates that the synchronization is carried out by cortico-cortical connections and suggests similarities in the organization of visual callosal connections in animals and in man.
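The abstract does not specify the authors' analysis pipeline; as a purely illustrative sketch, interhemispheric coherence between a pair of symmetric EEG channels can be computed as magnitude-squared coherence averaged over a gamma band (taken here as 30-80 Hz; the sampling rate, band edges, and synthetic signals below are all assumptions, not details from the study):

```python
import numpy as np
from scipy.signal import coherence

fs = 500  # assumed sampling rate in Hz
rng = np.random.default_rng(0)

# Synthetic stand-ins for two symmetric occipital channels that share
# a common driving signal plus independent noise.
shared = rng.standard_normal(fs * 10)
left = shared + 0.5 * rng.standard_normal(fs * 10)
right = shared + 0.5 * rng.standard_normal(fs * 10)

# Welch-based magnitude-squared coherence, 1 Hz frequency resolution.
f, Cxy = coherence(left, right, fs=fs, nperseg=fs)

# Average coherence over the gamma band (30-80 Hz, an assumed range).
gamma = (f >= 30) & (f <= 80)
icoh_gamma = Cxy[gamma].mean()
```

Magnitude-squared coherence is bounded between 0 (no linear coupling at that frequency) and 1 (perfect coupling), so a band average such as `icoh_gamma` can be compared across stimulus conditions.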
Abstract:
We report two unrelated patients with a multisystem disease involving liver, eye, immune system, connective tissue, and bone, caused by biallelic mutations in the neuroblastoma amplified sequence (NBAS) gene. Both presented as infants with recurrent episodes triggered by fever with vomiting, dehydration, and elevated transaminases. They had frequent infections, hypogammaglobulinemia, reduced natural killer cells, and the Pelger-Huët anomaly of their granulocytes. Their facial features were similar with a pointed chin and proptosis; loose skin and reduced subcutaneous fat gave them a progeroid appearance. Skeletal features included short stature, slender bones, epiphyseal dysplasia with multiple phalangeal pseudo-epiphyses, and small C1-C2 vertebrae causing cervical instability and myelopathy. Retinal dystrophy and optic atrophy were present in one patient. NBAS is a component of the syntaxin-18 complex and is involved in nonsense-mediated mRNA decay control. Putative loss-of-function mutations in NBAS are already known to cause disease in humans. A specific founder mutation has been associated with short stature, optic nerve atrophy and Pelger-Huët anomaly of granulocytes (SOPH) in the Siberian Yakut population. A more recent report associates NBAS mutations with recurrent acute liver failure in infancy in a group of patients of European descent. Our observations indicate that the phenotypic spectrum of NBAS deficiency is wider than previously known and includes skeletal, hepatic, metabolic, and immunologic aspects. Early recognition of the skeletal phenotype is important for preventive management of cervical instability. © 2015 Wiley Periodicals, Inc.
Abstract:
OBJECTIVES: To describe the clinical features of idiopathic chiasmal neuritis in a large cohort of patients and to report their visual and neurologic outcomes. DESIGN: A retrospective medical record review of consecutive patients with chiasmal neuritis at a single institution. Patients with clinical or radiographic evidence of inflammation involving the intraorbital optic nerve and patients with a systemic inflammatory or neoplastic disorder were excluded. RESULTS: Twenty patients were identified (14 female, 6 male; mean age, 37 years). Visual acuity at initial examination ranged from 20/15 to light perception. Progressive visual loss beyond 1 month was documented in 1 patient. Twelve of 15 patients who underwent magnetic resonance imaging demonstrated chiasmal enlargement and/or enhancement; 6 patients had 1 or more white matter lesions. Follow-up time ranged from 2 weeks to 22 years, with a mean of 5.7 years. The final median visual acuity was 20/20 (range, 20/15-20/50) and visual fields were normal or improved. Of 15 patients with a minimum follow-up interval of 1 year, 6 developed multiple sclerosis. CONCLUSIONS: The demographic and clinical features of idiopathic chiasmal neuritis resemble those of idiopathic optic neuritis. Visual prognosis is excellent. In this series, 40% of patients subsequently developed multiple sclerosis.
Abstract:
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated the perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing.
Abstract:
Brittle cornea syndrome (BCS) is an autosomal recessive disorder characterised by extreme corneal thinning and fragility. Corneal rupture can therefore occur either spontaneously or following minimal trauma in affected patients. Two genes, ZNF469 and PRDM5, have now been identified, in which causative pathogenic mutations collectively account for the condition in nearly all patients with BCS ascertained to date. Therefore, effective molecular diagnosis is now available for affected patients, and those at risk of being heterozygous carriers for BCS. We have previously identified mutations in ZNF469 in 14 families (in addition to 6 reported by others in the literature), and in PRDM5 in 8 families (with 1 further family now published by others). Clinical features include extreme corneal thinning with rupture, high myopia, blue sclerae, deafness of mixed aetiology with hypercompliant tympanic membranes, and variable skeletal manifestations. Corneal rupture may be the presenting feature of BCS, and it is possible that this may be incorrectly attributed to non-accidental injury. Mainstays of management include the prevention of ocular rupture by provision of protective polycarbonate spectacles, careful monitoring of visual and auditory function, and assessment for skeletal complications such as developmental dysplasia of the hip. Effective management depends upon appropriate identification of affected individuals, which may be challenging given the phenotypic overlap of BCS with other connective tissue disorders.
Abstract:
A right-handed man developed a sudden transient, amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of recognition of facial and vocal expression of emotion, we found an impairment of the vocal perception of fear, but not that of other emotions, such as joy, sadness and anger. Such selective impairment of fear perception was not present in the recognition of facial expression of emotion. Thus emotional perception varies according to the different aspects of emotions and the different modality of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In view of recent data suggesting that the amygdala plays a role in the recognition of fear in the auditory as well as the visual modality, this could suggest that the hippocampus may be part of the auditory pathway of fear recognition.
Abstract:
The current state of empirical investigations refers to consciousness as an all-or-none phenomenon. However, a recent theoretical account opens up this perspective by proposing a partial level (between nil and full) of conscious perception. In the well-studied case of single-word reading, short-lived exposure can trigger incomplete word-form recognition wherein letters fall short of forming a whole word in one's conscious perception thereby hindering word-meaning access and report. Hence, the processing from incomplete to complete word-form recognition straightforwardly mirrors a transition from partial to full-blown consciousness. We therefore hypothesized that this putative functional bottleneck to consciousness (i.e. the perceptual boundary between partial and full conscious perception) would emerge at a major key hub region for word-form recognition during reading, namely the left occipito-temporal junction. We applied a real-time staircase procedure and titrated subjective reports at the threshold between partial (letters) and full (whole word) conscious perception. This experimental approach allowed us to collect trials with identical physical stimulation, yet reflecting distinct perceptual experience levels. Oscillatory brain activity was monitored with magnetoencephalography and revealed that the transition from partial-to-full word-form perception was accompanied by alpha-band (7-11 Hz) power suppression in the posterior left occipito-temporal cortex. This modulation of rhythmic activity extended anteriorly towards the visual word form area (VWFA), a region whose selectivity for word-forms in perception is highly debated. The current findings provide electrophysiological evidence for a functional bottleneck to consciousness thereby empirically instantiating a recently proposed partial perspective on consciousness. Moreover, the findings provide an entirely new outlook on the functioning of the VWFA as a late bottleneck to full-blown conscious word-form perception.
Abstract:
Evidence from neuropsychological and activation studies (Clarke et al., 2000; Maeder et al., 2000) suggests that sound recognition and localisation are processed by two anatomically and functionally distinct cortical networks. We report here on a case of a patient that had an interruption of auditory information and we show: i) the effects of this interruption on cortical auditory processing; ii) the effect of the workload on activation pattern. A 36 year old man suffered from a small left mesencephalic haemorrhage, due to cavernous angioma; the left inferior colliculus was resected in the surgical approach of the vascular malformation. In the acute stage, the patient complained of auditory hallucinations and of auditory loss in the right ear, while tonal audiometry was normal. At 12 months, auditory recognition, auditory localisation (assessed by ITD and IID cues) and auditory motion perception were normal (Clarke et al., 2000), while verbal dichotic listening was deficient on the right side. Sound recognition and sound localisation activation patterns were investigated with fMRI, using a passive and an active paradigm. In normal subjects, distinct cortical networks were involved in sound recognition and localisation, in both the passive and active paradigms (Maeder et al., 2000a, 2000b). Passive listening to environmental and spatial stimuli as compared to rest strongly activated the right auditory cortex, but failed to activate the left primary auditory cortex. The specialised networks for sound recognition and localisation could not be visualised on the right and only minimally on the left convexity. A very different activation pattern was obtained in the active condition where a motor response was required. Workload not only increased the activation of the right auditory cortex, but also allowed the activation of the left primary auditory cortex.
The specialised networks for sound recognition and localisation were almost completely present in both hemispheres. These results show that increasing the workload can i) help to recruit cortical regions in the auditory deafferented hemisphere; and ii) lead to processing auditory information within specific cortical networks.
References:
Clarke et al. (2000). Neuropsychologia 38: 797-807.
Maeder et al. (2000a). Neuroimage 11: S52.
Maeder et al. (2000b). Neuroimage 11: S33.
Abstract:
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
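Discrimination in the continuous recognition task above is indexed by the sensitivity measure d', i.e. the z-transformed hit rate minus the z-transformed false-alarm rate. As an illustrative sketch (the study's exact correction for extreme rates of 0 or 1 is not stated in the abstract, so a log-linear correction is assumed here):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to counts, 1 to totals) keeps both
    rates strictly inside (0, 1); this correction is an assumption, not
    a detail reported in the abstract.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: many hits, few false alarms -> positive d'.
sensitivity = d_prime(hits=45, misses=5, false_alarms=10, correct_rejections=40)
```

A d' of 0 corresponds to chance-level discrimination (equal hit and false-alarm rates); larger values indicate better separation of old from new items.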
Abstract:
Increasing evidence suggests that working memory and perceptual processes are dynamically interrelated due to modulating activity in overlapping brain networks. However, the direct influence of working memory on the spatio-temporal brain dynamics of behaviorally relevant intervening information remains unclear. To investigate this issue, subjects performed a visual proximity grid perception task under three different visual-spatial working memory (VSWM) load conditions. VSWM load was manipulated by asking subjects to memorize the spatial locations of 6 or 3 disks. The grid was always presented between the encoding and recognition of the disk pattern. As a baseline condition, grid stimuli were presented without a VSWM context. VSWM load altered both perceptual performance and neural networks active during intervening grid encoding. Participants performed faster and more accurately on a challenging perceptual task under high VSWM load as compared to the low load and the baseline condition. Visual evoked potential (VEP) analyses identified changes in the configuration of the underlying sources in one particular period occurring 160-190 ms post-stimulus onset. Source analyses further showed an occipito-parietal down-regulation concurrent to the increased involvement of temporal and frontal resources in the high VSWM context. Together, these data suggest that cognitive control mechanisms supporting working memory may selectively enhance concurrent visual processing related to an independent goal. More broadly, our findings are in line with theoretical models implicating the engagement of frontal regions in synchronizing and optimizing mnemonic and perceptual resources towards multiple goals.
Abstract:
Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of this generalization and the equivalence of effects when memory discrimination was being performed in the visual vs. auditory modality were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence of a correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.
Abstract:
Impaired visual search is a hallmark of spatial neglect. When searching for a unique feature (e.g., color) neglect patients often show only slight visual field asymmetries. In contrast, when the target is defined by a combination of features (e.g., color and form) they exhibit a severe deficit of contralesional search. This finding suggests a selective impairment of the serial deployment of spatial attention. Here, we examined this deficit with a preview paradigm. Neglect patients searched for a target defined by the conjunction of shape and color, presented together with varying numbers of distracters. The presentation time was varied such that on some trials participants previewed the target together with same-shape/different-color distracters, for 300 or 600 ms prior to the appearance of additional different-shape/same-color distracters. On the remaining trials the target and all distracters were shown simultaneously. Healthy participants exhibited a serial search strategy only when all items were presented simultaneously, whereas in both preview conditions a pop-out effect was observed. Neglect patients showed a similar pattern when the target was presented in the right hemifield. In contrast, when searching for a target in the left hemifield they showed serial search in the no-preview condition, as well as with a preview of 300 ms, and partly even at 600 ms. A control experiment suggested that the failure to fully benefit from item preview was probably independent of accurate perception of time. Our results, when viewed in the context of existing literature, lead us to conclude that the visual search deficit in neglect reflects two additive factors: a biased representation of attentional priority in favor of ipsilesional information and exaggerated capture of attention by ipsilesional abrupt onsets.
Abstract:
Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory and that this involvement is independent from low-level visual processing. Most importantly we demonstrate that V1 and V5/MT are functionally linked and temporally synchronized during time encoding whereas they are functionally independent and operate serially (V1 followed by V5/MT) while maintaining temporal information in working memory. These data challenge the traditional view of V1 and V5/MT as visuo-spatial feature detectors and highlight the functional contribution and the temporal dynamics of these brain regions in the processing of time in the millisecond range. The present project resulted in the paper entitled: 'How the visual brain encodes and keeps track of time' by Paolo Salvioni, Lysiann Kalmbach, Micah Murray and Domenica Bueti that is now submitted for publication to the Journal of Neuroscience.
Abstract:
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area, which positively correlated with auditory speech recovery, was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.