22 results for Auditory sentence processing


Relevance: 40.00%

Abstract:

PURPOSE: We aimed to further clarify whether aphasic patients' difficulties in understanding non-canonical sentence structures, such as passive or object-verb-subject sentences, can be attributed to impaired recognition of morphosyntactic cues and to problems in integrating competing interpretations. METHODS: A sentence-picture matching task with canonical and non-canonical spoken sentences was performed with concurrent eye tracking. Accuracy, reaction time, and eye-tracking data (fixations) of 50 healthy subjects and 12 aphasic patients were analysed. RESULTS: Patients showed increased error rates and reaction times, as well as delayed fixation preferences for target pictures in non-canonical sentences. Patients' fixation patterns differed from those of healthy controls and revealed deficits in recognizing and immediately integrating morphosyntactic cues. CONCLUSION: Our study corroborates the notion that difficulties in understanding syntactically complex sentences are attributable to a processing deficit encompassing delayed, and therefore impaired, recognition and integration of cues, as well as increased competition between interpretations.

Relevance: 30.00%

Abstract:

Patients with schizophrenia are impaired in many aspects of auditory processing, but indirect evidence suggests that intensity perception is intact. However, because the extraction of meaning from dynamic intensity relies on structures that appear to be altered in schizophrenia, we hypothesized that the perception of auditory looming is impaired as well. Twenty inpatients with schizophrenia and 20 control participants, matched for age, gender, and education, gave intensity ratings of rising-intensity (looming) and falling-intensity (receding) sounds with different mean intensities. Intensity change was overestimated for looming as compared with receding sounds in both groups. However, healthy individuals showed a stronger effect at higher mean intensity, in keeping with previous findings, whereas patients with schizophrenia lacked this modulation. We discuss how this might support the notion of a more general deficit in extracting emotional meaning from different sensory cues, including intensity and pitch.

Relevance: 30.00%

Abstract:

Among other auditory operations, the analysis of the different sound levels received at the two ears is fundamental for localizing a sound source. In animals, these so-called interaural level differences are coded by excitatory-inhibitory neurons, yielding asymmetric hemispheric activity patterns for acoustic stimuli with maximal interaural level differences. In human auditory cortex, the temporal blood oxygen level-dependent (BOLD) response to auditory input, as measured by functional magnetic resonance imaging (fMRI), consists of at least two independent components: an initial transient and a subsequent sustained signal, which, on a different time scale, are consistent with electrophysiological response patterns in humans and animals. However, their specific functional role remains unclear. Animal studies suggest that these temporal components are based on different neural networks and have specific roles in representing the external acoustic environment. Here we hypothesized that the transient and sustained response components are differentially involved in coding interaural level differences and therefore play different roles in spatial information processing. Healthy subjects underwent monaural and binaural acoustic stimulation, and BOLD responses were measured using high signal-to-noise-ratio fMRI. In the anatomically segmented Heschl's gyrus, the transient response was bilaterally balanced, independent of the side of stimulation, whereas the sustained response was contralateralized. This dissociation suggests a differential role of these two independent temporal response components, with an initial bilateral transient signal subserving rapid sound detection and a subsequent lateralized sustained signal subserving detailed sound characterization.
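The interaural level difference discussed above has a simple textbook definition: the level ratio between the two ear signals, expressed in decibels. As a minimal illustration (not part of the study; function name and signature are ours), it can be computed from the RMS amplitudes of the left- and right-ear signals:

```python
import numpy as np

def interaural_level_difference(left, right):
    """Interaural level difference (ILD) in dB.

    Positive values indicate more energy at the right ear. Uses the
    textbook definition 20*log10(RMS ratio); this sketch is
    illustrative and not taken from the study described above.
    """
    rms_left = np.sqrt(np.mean(np.square(left)))
    rms_right = np.sqrt(np.mean(np.square(right)))
    return 20.0 * np.log10(rms_right / rms_left)
```

For identical signals at both ears the ILD is 0 dB; doubling the right-ear amplitude raises it by about 6 dB, and a monaural stimulus corresponds to the maximal (unbounded) ILD.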

Relevance: 30.00%

Abstract:

Triggered event-related functional magnetic resonance imaging requires sparse intervals of temporally resolved functional data acquisition whose initiation corresponds to the occurrence of an event, typically an epileptic spike in the electroencephalographic trace. However, conventional fMRI time series are greatly affected by non-steady-state magnetization effects, which obscure initial blood oxygen level-dependent (BOLD) signals. Here, conventional echo-planar imaging was combined with a post-processing solution based on principal component analysis: the dominant eigenimages of the time series were removed to filter out the global signal changes induced by magnetization decay and to recover BOLD signals starting with the first functional volume. This approach was compared with a physical solution using radiofrequency preparation, which nullifies magnetization effects. As an application of the method, the detectability of the initial transient BOLD response in the auditory cortex, elicited by the onset of acoustic scanner noise, was used to demonstrate that post-processing-based removal of magnetization effects detects brain activity patterns identical to those obtained with radiofrequency preparation. Using the auditory responses as an ideal experimental model of triggered brain activity, our results suggest that reducing initial magnetization effects by removing a few principal components from fMRI data may be useful in the analysis of triggered event-related echo-planar time series. The implications of this study are discussed with special attention to remaining technical limitations and to the additional neurophysiological issues raised by triggered acquisition.
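The post-processing idea described above, discarding the dominant eigenimages of the time series to suppress the global magnetization-decay signal, can be sketched with a truncated singular value decomposition. This is a minimal sketch under the assumption that the decay loads on the first principal component(s); it is not the authors' implementation, and all names and defaults are illustrative:

```python
import numpy as np

def remove_dominant_components(data, n_remove=1):
    """Suppress the n_remove strongest principal components of an
    fMRI time series.

    data: array of shape (n_volumes, n_voxels). The global signal
    change induced by magnetization decay is assumed (as in the
    abstract above) to dominate the first component(s).
    """
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered series: columns of u are temporal modes,
    # rows of vt are the corresponding spatial eigenimages.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    s_filtered = s.copy()
    s_filtered[:n_remove] = 0.0          # zero out the dominant modes
    return (u * s_filtered) @ vt + mean  # reconstruct without them
```

Removing only a few components keeps most of the variance while strongly attenuating the shared decay trend, which is why the BOLD signal in the first volumes becomes recoverable.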

Relevance: 30.00%

Abstract:

Speech melody, or prosody, subserves linguistic, emotional, and pragmatic functions in speech communication. Prosodic perception is based on the decoding of acoustic cues, with a predominant role of frequency-related information perceived as the speaker's pitch. Evaluating prosodic meaning is a cognitive function implemented in cortical and subcortical networks that generate continuously updated affective or linguistic impressions of the speaker. Various brain-imaging methods allow delineation of the neural structures involved in prosody processing. In contrast to functional magnetic resonance imaging techniques, DC (direct current, slow) components of the EEG measure cortical activation directly and without temporal delay. Activation patterns obtained with this method are highly task-specific and intraindividually reproducible. The studies presented here investigated the topography of prosodic stimulus processing as a function of acoustic stimulus structure and of linguistic or affective task demands, respectively. Data obtained from DC-potential measurements demonstrated that the right hemisphere has a predominant role in processing emotions from the tone of voice, irrespective of emotional valence. However, right-hemisphere involvement is modulated by various speech- and language-related conditions that are associated with left-hemisphere participation in prosody processing. The degree of left-hemisphere involvement depends on several factors, such as (i) articulatory demands on the perceiver of prosody (and possibly also on the producer), (ii) a relative left-hemisphere specialization in processing the temporal cues that mediate prosodic meaning, and (iii) the propensity of prosody to act on the segmental level to modulate word or sentence meaning. The specific role of top-down effects, in terms of either linguistically or affectively oriented attention, on the lateralization of stimulus processing is not clear and requires further investigation.

Relevance: 30.00%

Abstract:

BACKGROUND: It is well known that specific peripheral activation patterns are associated with the emotional valence of sounds. However, it is unclear how these effects adapt over time, and the personality traits influencing these processes are also unclear. Anxiety disorders influence the autonomic activation related to emotional processing, yet trait anxiety has never been studied in the context of affective auditory stimuli. METHODS: Heart rate, skin conductance, zygomatic muscle activity, and subjective ratings of emotional valence and arousal were recorded in healthy subjects during the presentation of pleasant, unpleasant, and neutral sounds. Recordings were repeated 1 week later to examine possible time-dependent changes related to habituation and sensitization processes. RESULTS AND CONCLUSION: There was no generalized habituation or sensitization process related to the repeated presentation of affective sounds, but rather specific adaptation processes for each physiological measure. These observations are consistent with previous studies using affective pictures and simple tones. Skin conductance showed the strongest changes over time, including habituation during the first presentation session and sensitization at the end of the second, whereas facial electromyographic activity habituated only for the neutral stimuli and heart rate did not habituate at all. Finally, we showed that trait anxiety influenced the orienting reaction to affective sounds, but not the adaptation processes related to their repeated presentation.

Relevance: 30.00%

Abstract:

The amygdala has been studied extensively for its critical role in associative fear conditioning in animals and humans. Noxious stimuli, such as those used for fear conditioning, are most effective in eliciting behavioral responses and amygdala activation when experienced in an unpredictable manner. Here we show, using a translational approach in mice and humans, that unpredictability per se, without interaction with motivational information, is sufficient to induce sustained neural activity in the amygdala and to elicit anxiety-like behavior. Exposing mice to mere temporal unpredictability within a series of neutral sound pulses in an otherwise neutral sensory environment increased expression of the immediate-early gene c-fos and prevented rapid habituation of single-neuron activity in the basolateral amygdala. At the behavioral level, unpredictable, but not predictable, auditory stimulation induced avoidance and anxiety-like behavior. In humans, functional magnetic resonance imaging revealed that temporal unpredictability causes sustained neural activity in the amygdala and anxiety-like behavior, as quantified by enhanced attention toward emotional faces. Our findings show that unpredictability per se is an important feature of the sensory environment that influences habituation of neuronal activity in the amygdala and emotional behavior, and they indicate that regulation of amygdala habituation represents an evolutionarily conserved mechanism for adapting behavior in anticipation of temporally unpredictable events.