985 results for Auditory-visual Integration


Abstract:

The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as [da] or [ða], was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials: [ba], [da], and [ða] (as in "then"). Visual-fixation durations in the test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control-group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect and support the conclusion that prelinguistic infants integrate auditory and visual speech information.
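As an editorial illustration of the habituation-test logic described above, the sketch below applies a common infant-habituation criterion (looking time falling to 50% of its initial level) and compares test-trial fixation durations. The criterion, trial counts, and durations are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def habituated(fixation_times, window=3, criterion=0.5):
    """Return the trial index at which mean fixation over a sliding
    window drops below `criterion` times the initial-window mean."""
    baseline = np.mean(fixation_times[:window])
    for i in range(window, len(fixation_times) - window + 1):
        if np.mean(fixation_times[i:i + window]) < criterion * baseline:
            return i
    return None  # infant never reached the habituation criterion

# Hypothetical per-trial fixation durations (seconds) for one infant
habituation_trials = [22.1, 19.4, 18.0, 12.3, 9.8, 8.5, 7.9]
print("habituated at trial:", habituated(habituation_trials))

# Test phase: longer fixation = more novel. A McGurk-habituated infant
# should look longer at [ba] (novel) than at [da]/[ða] (familiar).
test = {"ba": 14.2, "da": 8.1, "dha": 8.7}
print(max(test, key=test.get), "treated as most novel")
```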

Abstract:

An object's motion relative to an observer can convey ethologically meaningful information. Approaching or looming stimuli can signal threats or collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and, more recently, multisensory looming stimuli, no study has investigated whether looming signals are integrated across modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli produced facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings: maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex, ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. These results support a new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perception and action.
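The facilitation reported above is commonly quantified with a multisensory response enhancement index, comparing the audio-visual condition against the better of the two unisensory conditions. The sketch below computes that standard index from reaction times; the data are hypothetical, and this is not necessarily the authors' exact analysis.

```python
import numpy as np

def multisensory_enhancement(rt_av, rt_a, rt_v):
    """Percent multisensory response enhancement for reaction times:
    how much faster the audio-visual condition is than the faster of
    the two unisensory conditions."""
    best_unisensory = min(np.mean(rt_a), np.mean(rt_v))
    return 100.0 * (best_unisensory - np.mean(rt_av)) / best_unisensory

# Hypothetical mean detection RTs (ms) per condition
rt_av_looming = [412, 398, 405, 390]
rt_a_looming  = [455, 462, 448, 470]
rt_v_looming  = [440, 452, 435, 449]
print(f"enhancement: "
      f"{multisensory_enhancement(rt_av_looming, rt_a_looming, rt_v_looming):.1f}%")
```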

Abstract:

The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, psychophysics studies have shown that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the visual spatial cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones, in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.
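Localization accuracy in a setup like this is typically scored as signed azimuth error (bias) and root-mean-square error between reported and target directions. Below is a minimal sketch under those standard definitions; the wrap-around handling and sample responses are assumptions, not details from the paper.

```python
import numpy as np

def azimuth_errors(reported_deg, target_deg):
    """Signed azimuth errors in degrees, wrapped to (-180, 180] so that
    responses on either side of the rear midline compare correctly."""
    err = (np.asarray(reported_deg) - np.asarray(target_deg) + 180.0) % 360.0 - 180.0
    return err

# Hypothetical pointing responses to sources at known azimuths
target   = [-60, -30,  0, 30, 60]
reported = [-52, -27,  3, 36, 51]

err = azimuth_errors(reported, target)
print(f"bias: {err.mean():+.1f} deg, RMS error: {np.sqrt((err**2).mean()):.1f} deg")
```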

Abstract:

Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended, but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
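The additive model mentioned above tests whether the response to the audio-visual pair differs from the sum of the unisensory responses, and topographic (as opposed to strength) modulations are typically assessed on maps normalized by their global field power. Here is a minimal sketch of these standard quantities on hypothetical electrode data, not the authors' pipeline:

```python
import numpy as np

def gfp(v):
    """Global field power: spatial standard deviation of the
    average-referenced scalp map at one time point."""
    v = v - v.mean()
    return np.sqrt((v ** 2).mean())

def dissimilarity(u, v):
    """Global map dissimilarity between two scalp maps, each
    average-referenced and scaled to unit GFP, so that pure strength
    differences cancel and only topographic differences remain."""
    u = (u - u.mean()) / gfp(u)
    v = (v - v.mean()) / gfp(v)
    return np.sqrt(((u - v) ** 2).mean())

rng = np.random.default_rng(0)
n_electrodes = 64  # illustrative montage size
erp_a, erp_v, erp_av = (rng.standard_normal(n_electrodes) for _ in range(3))

# Additive-model residual: nonlinear interaction = AV - (A + V)
interaction = erp_av - (erp_a + erp_v)
print("interaction GFP:", gfp(interaction))
print("topographic dissimilarity, AV vs A+V:", dissimilarity(erp_av, erp_a + erp_v))
```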

Abstract:

Evidence of multisensory interactions within low-level cortices and at early post-stimulus latencies has prompted a paradigm shift in conceptualizations of sensory organization. However, the mechanisms of these interactions and their link to behavior remain largely unknown. One behaviorally salient stimulus is a rapidly approaching (looming) object, which can indicate a potential threat. Based on findings from humans and nonhuman primates suggesting that there is selective multisensory (auditory-visual) integration of looming signals, we tested whether looming sounds would selectively modulate the excitability of visual cortex. We combined transcranial magnetic stimulation (TMS) over the occipital pole with psychophysics for "neurometric" and psychometric assays of changes in low-level visual cortex excitability (i.e., phosphene induction) and perception, respectively. Across three experiments we show that structured looming sounds considerably enhance visual cortex excitability relative to other sound categories and white-noise controls. The time course of this effect showed that modulation of visual cortex excitability began to differ between looming and stationary sounds for sound portions of very short duration (80 ms), significantly below the perceptual discrimination threshold (by 35 ms). Visual perception is thus rapidly and efficiently boosted by sounds through early, preperceptual, and stimulus-selective modulation of neuronal excitability within low-level visual cortex.
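Phosphene-based excitability measures of this kind are usually read off a psychometric function relating TMS intensity to the probability of reporting a phosphene. The sketch below fits a cumulative Gaussian to hypothetical report rates to estimate a phosphene threshold; all values and parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(intensity, threshold, slope):
    """Probability of reporting a phosphene as a cumulative Gaussian
    of TMS intensity (% of maximum stimulator output)."""
    return norm.cdf(intensity, loc=threshold, scale=slope)

# Hypothetical phosphene report rates at each tested TMS intensity
intensity   = np.array([40, 45, 50, 55, 60, 65, 70], dtype=float)
report_rate = np.array([0.0, 0.05, 0.20, 0.55, 0.80, 0.95, 1.0])

(threshold, slope), _ = curve_fit(psychometric, intensity, report_rate,
                                  p0=[55.0, 5.0])
print(f"estimated phosphene threshold: {threshold:.1f}% stimulator output")
```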

Abstract:

Recent advances in non-invasive brain imaging allow the visualization of the different aspects of complex brain dynamics. Approaches based on a combination of imaging techniques facilitate the investigation, and the linking, of multiple aspects of information processing, and are becoming a leading tool for understanding the neural basis of various brain functions. Perception, motion, and cognition involve the formation of cooperative neuronal assemblies distributed over the cerebral cortex. In this research, we explore the characteristics of interhemispheric assemblies in the visual brain by taking advantage of the complementary strengths of EEG (electroencephalography) and fMRI (functional magnetic resonance imaging): high temporal resolution for EEG and high spatial resolution for fMRI. In the first part of this thesis we investigate the response of the visual areas to an interhemispheric perceptual grouping task, in which coherent elements in the two visual hemifields must be grouped into a single image. We use EEG coherence as a measure of synchronization and the BOLD (Blood Oxygenation Level Dependent) response as a measure of the related brain activation. The increase of interhemispheric EEG coherence, restricted to the occipital electrodes and to the EEG beta band, and its linear relation to the BOLD responses in the VP/V4 area point to a trans-hemispheric synchronous neuronal assembly involved in early perceptual grouping. This result encouraged us to explore, with the same multimodal approach, the formation of synchronous trans-hemispheric networks induced by stimuli of various spatial frequencies. We found that the involvement of the ventral and medio-dorsal visual networks is modulated by the spatial-frequency content of the stimulus. Thus, based on the combination of EEG coherence and fMRI BOLD data, we identified visual networks with different sensitivity to integrating low vs. high spatial frequencies. In the second part of this work we test the hypothesis that the increase of brain activity during perceptual grouping depends on the activity of the callosal axons interconnecting the visual areas involved. Because the corpus callosum matures progressively over the first two decades of life, we investigated, in children aged 7-13 years, the functional (activation with fMRI) and morphological (myelination of the corpus callosum with Magnetization Transfer Imaging, MTI) aspects of spatial integration. In children, the activation associated with spatial integration across visual fields was localized, as in adults, in the visual ventral stream, but was limited to a part of the area activated in adults. The strong correlation between individual BOLD responses in this area and the myelination of the splenial fibers points to myelination as a significant factor in the development of the spatial integration ability.
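Interhemispheric synchronization of the kind measured here is commonly quantified as magnitude-squared coherence between electrode pairs, averaged within a frequency band. Below is a minimal sketch using scipy.signal.coherence on synthetic signals; the electrode names, sampling rate, and beta-band limits are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 512.0                      # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic left/right occipital signals sharing a 22 Hz component
shared = np.sin(2 * np.pi * 22 * t)
o1 = shared + 0.8 * rng.standard_normal(t.size)   # "O1" electrode
o2 = shared + 0.8 * rng.standard_normal(t.size)   # "O2" electrode

f, cxy = coherence(o1, o2, fs=fs, nperseg=1024)

beta = (f >= 13) & (f <= 30)    # beta band, illustrative limits
print(f"mean interhemispheric beta coherence: {cxy[beta].mean():.2f}")
```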

Abstract:

Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
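Discrimination in an old/new continuous recognition task is indexed by d', the difference between z-transformed hit and false-alarm rates. Here is a minimal sketch with a standard log-linear correction for extreme rates; the counts are hypothetical, not data from the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate),
    with a log-linear correction so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for sounds first encountered with a congruent image
print(f"d' (congruent context): {d_prime(42, 8, 6, 44):.2f}")
# ...and for sounds first encountered alone
print(f"d' (unisensory context): {d_prime(35, 15, 9, 41):.2f}")
```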

Abstract:

This paper studies the auditory, visual, and combined audio-visual recognition of vowels by severely and profoundly hearing-impaired children.

Abstract:

This paper discusses a speech perception test and a scoring method for predicting the likelihood of success with mainstreaming.

Abstract:

Auditory-visual speech perception testing was completed using word- and consonant-level stimuli in individuals with known degrees of dementia of the Alzheimer's type. The correlations between the cognitive measures and the speech perception measures (A-only, V-only, AV, VE, or AE) did not reveal significant relationships.
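A minimal sketch of the kind of correlational analysis described: Pearson correlations between a cognitive score and each speech perception measure. The scores and measure groupings below are hypothetical stand-ins, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores: one cognitive measure, several perception measures
cognitive = np.array([12, 18, 9, 22, 15, 11, 20, 7])
perception = {
    "A-only": np.array([55, 70, 40, 78, 62, 50, 74, 35]),
    "V-only": np.array([20, 25, 30, 22, 18, 27, 24, 21]),
    "AV":     np.array([68, 80, 52, 88, 71, 60, 85, 45]),
}

for measure, scores in perception.items():
    r, p = pearsonr(cognitive, scores)
    print(f"{measure:7s} r = {r:+.2f}, p = {p:.3f}")
```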

Abstract:

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is rendered ambiguous by noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not provide a clear mechanism as to how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. To better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information. We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.
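The reliability-dependent weighting described above, relying less on the noisier modality, is often formalized as maximum-likelihood cue integration, in which each modality's estimate is weighted by its inverse variance. The sketch below illustrates that standard model rather than the authors' fMRI analysis; all numbers are hypothetical.

```python
import numpy as np

def mle_integrate(estimates, variances):
    """Maximum-likelihood combination of independent Gaussian cues:
    weight each cue by its reliability (inverse variance)."""
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    weights = inv_var / inv_var.sum()
    combined = float(np.dot(weights, estimates))
    combined_var = 1.0 / inv_var.sum()
    return combined, combined_var, weights

# Hypothetical auditory and visual estimates of the same speech feature:
# a noisy (unreliable) auditory cue and a clean visual cue
estimate, variance, weights = mle_integrate(estimates=[0.2, 0.8],
                                            variances=[4.0, 1.0])
print(f"combined estimate {estimate:.2f}, variance {variance:.2f}, "
      f"weights A={weights[0]:.2f} V={weights[1]:.2f}")
```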