934 results for Auditory perception
Abstract:
Investigations of gray matter changes in relation to auditory verbal hallucinations (AVH) have reported conflicting results. Assuming that alterations in gray matter might be related to specific symptoms of schizophrenia, this study aimed to investigate changes in cortical thickness specific to AVH. It was hypothesized that schizophrenia patients suffering from persistent AVH would show significant differences in cortical thickness in regions involved in language production and perception when compared to schizophrenia patients who had never experienced hallucinations.
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
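As a rough illustration of the max-criterion test described above, the sketch below checks whether a region's audiovisual response exceeds its stronger unisensory response. The region names and beta values are hypothetical, and the function is not from the study's analysis pipeline.

```python
# Illustrative sketch (not the authors' code) of the max-criterion
# A < AV > V for multisensory integration, applied to hypothetical
# per-region response estimates (e.g., GLM beta weights).

def meets_max_criterion(beta_a, beta_v, beta_av):
    """True if the audiovisual response exceeds the stronger
    unisensory response (the max-criterion: A < AV > V)."""
    return beta_av > max(beta_a, beta_v)

# Hypothetical mean betas (A, V, AV) for three candidate regions
regions = {
    "superior_temporal":        (1.2, 0.9, 1.8),
    "ventral_occipitotemporal": (0.4, 1.5, 1.6),
    "prefrontal":               (0.7, 0.8, 0.6),
}

for name, (a, v, av) in regions.items():
    verdict = "multisensory" if meets_max_criterion(a, v, av) else "not multisensory"
    print(name, verdict)
```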
Abstract:
The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies: the primary auditory cortex, the planum temporale, and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence that the planum temporale plays a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans.
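The spherical-searchlight idea, decoding a condition label from the local multivoxel pattern around each voxel, can be sketched as follows. Everything here (the synthetic data, sphere radius, and scikit-learn classifier) is an illustrative assumption, not the study's actual pipeline.

```python
# Minimal searchlight-decoding sketch on synthetic data. With pure
# noise, cross-validated accuracy hovers around chance (~0.5).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
shape = (10, 10, 10)                        # toy brain volume
n_trials = 40
labels = np.repeat([0, 1], n_trials // 2)   # e.g., leftward vs. rightward motion
data = rng.normal(size=(n_trials,) + shape)

# precompute the (x, y, z) coordinates of every voxel
coords = np.array(np.unravel_index(np.arange(np.prod(shape)), shape)).T

def searchlight_accuracy(center, radius=2):
    """Cross-validated decoding accuracy for the sphere around `center`."""
    in_sphere = np.linalg.norm(coords - center, axis=1) <= radius
    X = data.reshape(n_trials, -1)[:, in_sphere]
    return cross_val_score(LinearSVC(), X, labels, cv=5).mean()

# map accuracy at a few example sphere centers
for c in [(5, 5, 5), (2, 7, 3)]:
    print(c, round(searchlight_accuracy(np.array(c)), 2))
```

In a full analysis, the sphere would be swept over every voxel to produce a whole-brain accuracy map, which is then tested against chance.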
Abstract:
Patients with schizophrenia spectrum disorders often hold atypical views of cause-effect relationships, especially when positive and disorganization symptoms are manifest. Altered perceived causality is prominent in delusional ideation, in ideas of reference, and in the mentalizing ability (theory of mind [ToM]) of patients. Perceiving causal relationships may be understood either as higher-order cognitive reasoning or as low-level information processing. In the present study, perception of causality was investigated as a low-level, preattentional capability similar to gestalt-like perceptual organization. Thirty-one patients (24 men and 7 women; mean age 27.7 years) and an equal number of healthy control subjects, matched to the patients for age and sex, were tested. A visual paradigm was used in which two identical discs move, from opposite sides of a monitor, steadily toward and then past one another. Their coincidence generates an ambiguous, bistable percept (the discs either "stream through" or "bounce off" one another). The bouncing percept, i.e., perceived causality, is enhanced when auditory stimuli are presented at the time of coincidence. Psychopathology was measured using the Positive and Negative Syndrome Scale. Positive symptoms were strongly associated with increased perceived causality, and disorganization with attenuated perceived causality. Patients as a group did not differ significantly from controls, but symptom subgroups showed specifically altered perceived causality. Perceived causality, as a basic preattentional process, may contribute to higher-order cognitive alterations and ToM deficiencies. It is suggested that cognitive remediation therapy should address both increased and reduced perception of causality.
Abstract:
This article describes a series of experiments carried out to measure the sense of presence in auditory virtual environments. The study compares self-created signals with signals created by the surrounding environment. Furthermore, it investigates whether the room characteristics of the simulated environment affect the perception of presence during vocalization or when listening to speech. Finally, the experiments provide information about the influence of background signals on the sense of presence. In the experiments, subjects rated the degree of perceived presence in an auditory virtual environment on a perceptual scale. The results indicate which parameters have the greatest influence on the perception of presence and which are of minor importance. They show that, on the one hand, an external speaker has more influence on the sense of presence than an adequate presentation of one’s own voice; on the other hand, both room reflections and adequately presented background signals significantly increase the perceived presence in the virtual environment.
Abstract:
We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3,360 practice trials, distributed over twelve days (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvement in the same task was found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness.
Abstract:
Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is ambiguous because of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not provide a clear mechanism for how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. To better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information. We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.
Abstract:
BACKGROUND: The observation of conspecifics influences our bodily perceptions and actions: contagious yawning, contagious itching, and empathy for pain are all examples of mechanisms based on resonance between our own body and those of others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory, and tactile information, it has not yet been associated with the perception of self-motion. METHODOLOGY/PRINCIPAL FINDINGS: We investigated whether viewing our own body, the body of another, or an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction of this effect when observing someone else's body in motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms. CONCLUSIONS/SIGNIFICANCE: The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for distinguishing between the self and the other. Finally, our results hint at the presence of a "vestibular mirror neuron system".
Abstract:
Two and a half millennia ago, Pythagoras initiated the scientific study of the pitch of sounds; yet our understanding of the mechanisms of pitch perception remains incomplete. Physical models of pitch perception try to explain from elementary principles why certain physical characteristics of the stimulus lead to particular pitch sensations. There are two broad categories of pitch-perception models: place or spectral models consider pitch to be mainly related to the Fourier spectrum of the stimulus, whereas periodicity or temporal models hold that its characteristics in the time domain are more important. Current models from either class are usually computationally intensive, implementing a series of steps more or less supported by auditory physiology. However, the brain has to analyze and react in real time to an enormous amount of information from the ear and other senses. How is all this information efficiently represented and processed in the nervous system? A proposal from nonlinear and complex systems research is that dynamical attractors may form the basis of neural information processing. Because the auditory system is a complex and highly nonlinear dynamical system, it is natural to suppose that dynamical attractors may carry perceptual and functional meaning. Here we show that this idea, scarcely developed in current pitch models, can be successfully applied to pitch perception.
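To make the place-versus-periodicity distinction concrete, the sketch below implements a toy temporal (periodicity) model: it estimates pitch from the lag of the strongest autocorrelation peak and recovers the 200 Hz "missing fundamental" of a three-harmonic tone, something a naive spectral-peak model would miss. It illustrates the model class discussed above, not the dynamical-attractor proposal itself; all parameters are arbitrary choices for the demo.

```python
# Toy temporal (periodicity) pitch estimator: pick the lag of the
# strongest autocorrelation peak within a plausible pitch range.
import numpy as np

fs = 16000                                  # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)              # 50 ms of signal
# "missing fundamental" stimulus: harmonics of 200 Hz, but no 200 Hz component
x = sum(np.sin(2 * np.pi * f * t) for f in (400, 600, 800))

# autocorrelation for non-negative lags
ac = np.correlate(x, x, mode="full")[len(x) - 1:]
lo, hi = int(fs / 500), int(fs / 50)        # search lags for 50-500 Hz pitch
lag = lo + np.argmax(ac[lo:hi])
print(f"estimated pitch: {fs / lag:.1f} Hz")  # ~200 Hz, the absent fundamental
```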
Abstract:
Cortical representational plasticity has been well documented after peripheral and central injuries and after improvements in perceptual and motor abilities. This has led to the inference that changes in cortical representations parallel, and account for, the improvement in performance during skill acquisition. There have also been several examples of rapidly induced changes in cortical neuronal response properties, for example by intracortical microstimulation or by classical conditioning paradigms. This report describes similar rapidly induced changes in a cortically mediated perception in human subjects, the ventriloquism aftereffect, which presumably reflects a corresponding change in the cortical representation of acoustic space. The ventriloquism aftereffect is an enduring shift in the perceived spatial location of acoustic stimuli after a period of exposure to spatially disparate, simultaneously presented acoustic and visual stimuli. Exposure to a mismatch of 8° for 20–30 min is sufficient to shift the perception of acoustic space by approximately the same amount across subjects and acoustic frequencies. Given that the cerebral cortex is necessary for the perception of acoustic space, it is likely that the ventriloquism aftereffect reflects a change in the cortical representation of acoustic space. Comparisons between the responses of single cortical neurons in the behaving macaque monkey and the stimulus parameters that give rise to the ventriloquism aftereffect suggest that the changes in the cortical representation of acoustic space may begin as early as the primary auditory cortex.
Abstract:
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments showing that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting sound localization ability across different stimulus frequencies and bandwidths, in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting the dysfunctional 'mirror neuron' systems that have been hypothesised to be at the core of the condition. In the present study, a computer program utilizing speech-synthesizer software and a 'virtual' head (Baldi) delivered speech stimuli for identification in auditory, visual, or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD was also trained to develop their speech-reading ability. Training improved visual accuracy, which in turn improved the children's ability to use visual information in their processing of speech. Overall results were compared with predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate at recognizing stimuli in the unimodal conditions, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in the recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
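The abstract does not name its integration model, but a standard choice in this literature (associated with Massaro's work on Baldi) is the fuzzy logical model of perception (FLMP); treating FLMP as representative here is an assumption, sketched minimally below.

```python
# Sketch of one standard audiovisual integration model, the fuzzy
# logical model of perception (FLMP). The abstract does not specify
# its models, so using FLMP here is an illustrative assumption.

def flmp(a, v):
    """Probability of one response alternative given auditory support
    `a` and visual support `v`, each in [0, 1] (FLMP combination rule)."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

# Example: weak auditory evidence (0.4) combined with strong visual
# evidence (0.9) yields a response dominated by the visual source.
print(flmp(0.4, 0.9))   # ~0.857
```

Under this kind of model, improving the visual channel (e.g., through speech-reading training) raises bimodal performance multiplicatively, which is consistent with the training effect the study reports.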