998 results for Auditory sentence processing
Abstract:
Given that the auditory system is rather well developed by the end of the third trimester of pregnancy, it is likely that couplings between acoustics and motor activity can be integrated from the very beginning of postnatal life. The aim of the present mini-review was to summarize and discuss studies on early auditory-motor integration, focusing particularly on upper-limb movements (one of the most crucial means of interacting with the environment) in association with auditory stimuli, to further our understanding of their significance for early infant development. Many studies have investigated the relationship between various infant behaviors (e.g., sucking, visual fixation, head turning) and auditory stimuli, and established that human infants display couplings between action and environmental sensory stimulation from just after birth, clearly indicating a propensity for intentional behavior. Surprisingly few studies, however, have investigated the associations between upper-limb movements and different auditory stimuli in newborns and young infants, particularly in infants born at risk for developmental disorders or delays. Findings from studies of early auditory-motor interaction support the view that the developing integration of sensory and motor systems is a fundamental part of the process guiding the development of goal-directed action in infancy, of great importance for continued motor, perceptual, and cognitive development. At-risk infants (e.g., those born preterm) may display an increased incidence of central auditory processing disorders, negatively affecting early sensorimotor integration and resulting in long-term consequences for gesturing, language development, and social communication. Consequently, there is a need for more studies on such implications.
Abstract:
It has been demonstrated that, on abrupt withdrawal, patients chronically exposed to benzodiazepines (Bzp) can experience a number of symptoms indicative of a dependent state. In clinical patients, the earliest and most persistent sign of withdrawal from chronic Bzp treatment is anxiety. Anxiety-like effects following abrupt interruption of chronic Bzp treatment can also be reproduced in laboratory animals; indeed, signs ranging from irritability to extreme fear behaviours and seizures have already been described. As anxiety remains one of the most important symptoms of Bzp withdrawal, in this study we evaluated the anxiety levels of rats withdrawn from diazepam. We also studied the effects of chronic diazepam treatment, and of 48-h withdrawal, on motor performance and on the preattentive sensory gating of rats in three animal models of anxiety: the elevated plus-maze (EPM), ultrasonic vocalizations (USV), and startle + prepulse inhibition tests. The data showed anxiolytic-like and anxiogenic-like profiles of the chronic diazepam regimen and of its withdrawal, respectively, in the EPM test, 22-kHz USV, and startle reflex. Neither chronic diazepam treatment nor its withdrawal altered prepulse inhibition (PPI). However, PPI increased in both sucrose- and diazepam-pretreated rats at 48-h withdrawal, suggesting a procedural rather than a withdrawal-specific effect on sensory gating processes. It is also possible that the prepulse functions as a conditioned stimulus signaling the delivery of an aversive event, such as the startle-eliciting auditory stimulus. All these findings are indicative of a sensitization of the neural substrates of aversion in diazepam-withdrawn animals without concomitant changes in the processing of sensory information.
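Prepulse inhibition, the gating measure this study relies on, is conventionally reported as the percent reduction of startle amplitude when a weak prepulse precedes the startle pulse. A minimal sketch of that standard formula follows; the amplitude values are hypothetical, not data from this study:

```python
# Percent prepulse inhibition: %PPI = 100 * (pulse_alone - prepulse_pulse) / pulse_alone
def percent_ppi(pulse_alone: float, prepulse_pulse: float) -> float:
    """Percent reduction of startle amplitude when a prepulse precedes the pulse."""
    return 100.0 * (pulse_alone - prepulse_pulse) / pulse_alone

# Hypothetical startle amplitudes (arbitrary units); higher %PPI = stronger gating.
print(percent_ppi(pulse_alone=850.0, prepulse_pulse=510.0))  # 40.0
```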
Abstract:
The information presented in this paper reflects the author's experience in previous cross-sectional studies conducted in Brazil, in comparison with the current literature. Over the last 10 years, auditory evoked potentials (AEPs) have been used in children with learning disabilities. This method is well suited to analyzing the temporal quality of processing, and it indicates the specific neural demands and circuits of the sensory and cognitive processes in this clinical population. Some studies of children with dyslexia and learning disabilities are presented here to illustrate the use of AEPs in this population.
Abstract:
Introduction Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, suggesting that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective To characterize the auditory middle latency response and performance on phonological awareness tests, and to investigate correlations between them, in a group of children with learning disorders. Methods The study included 25 students with learning disabilities. Phonological awareness was assessed, and auditory middle latency responses were recorded with electrodes placed over the left and right hemispheres. The correlation between the measurements was assessed using the Spearman rank correlation coefficient. Results There was some correlation between the tests, most notably between the Pa component and syllabic awareness, which showed a moderate negative correlation. Conclusion When the phonological awareness subtests were administered, specifically phonemic awareness, the students scored below expectations for their age group, while on the objective examination prolonged Pa latency in the contralateral pathway was observed. Weak to moderate negative correlations were observed for Pa wave latency, and weak positive correlations for Na-Pa amplitude.
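The correlation analysis named here (Spearman's rank coefficient between Pa latency and phonological awareness scores) can be sketched as follows. The data values and variable names are hypothetical, illustrating only the kind of moderate negative correlation the abstract reports:

```python
from scipy.stats import spearmanr

# Hypothetical per-child measurements: Pa latency (ms) and syllabic awareness score.
pa_latency_ms  = [28.1, 30.4, 26.7, 33.2, 29.5, 31.8]
syllabic_score = [14, 18, 20, 12, 9, 16]

rho, p_value = spearmanr(pa_latency_ms, syllabic_score)
# A negative rho indicates that longer Pa latencies pair with lower awareness scores.
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```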
Abstract:
This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech were examined on intelligibility and on simultaneous visual-motor performance. The goal was to extend the generalizability of speech perception results to environments outside the laboratory. The effect of bottom-up information was evaluated with natural, cell phone, and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while performing the simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that natural speech was more intelligible than synthetic speech, and that synthetic speech was better perceived than cell phone speech. The visual-motor methodology provided independent, supplementary information and a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. Cell phone speech allowed better simultaneous pursuit-rotor performance only at low intelligibility levels, when participants ignored the listening task. Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources. Additional measures, such as attentional demands and performance on simultaneous tasks, were also important in characterizing the perception of different kinds of speech in complex listening environments.
Abstract:
This study verified the effects of contralateral noise on otoacoustic emissions and auditory evoked potentials. Short-, middle-, and late-latency auditory evoked potentials, as well as otoacoustic emissions, were assessed with and without contralateral white noise. Twenty-five normal-hearing subjects of both genders, aged 18 to 30 years, were tested. In general, latencies of the various auditory potentials were increased and amplitudes diminished in the noise condition for short-, middle-, and late-latency responses combined in the same subject. The amplitude of otoacoustic emissions decreased significantly in the condition with contralateral noise compared with the condition without noise. Our results indicate that most subjects presented different responses between conditions (with and without noise) in all tests, suggesting that the efferent system acts on both caudal and rostral portions of the auditory system.
Abstract:
The caudomedial nidopallium (NCM) is a telencephalic area involved in auditory processing and memorization in songbirds, but the synaptic mechanisms associated with auditory processing in NCM are largely unknown. To identify potential changes in synaptic transmission induced by auditory stimulation in NCM, we used a slice preparation for patch-clamp recordings of synaptic currents in the NCM of adult zebra finches (Taeniopygia guttata) sacrificed after sound isolation followed by exposure to conspecific song or silence. Although post-synaptic GABAergic and glutamatergic currents in the NCM of control and song-exposed birds did not differ in frequency, amplitude, or duration after song exposure, we observed a higher probability of bursting glutamatergic currents after blockade of GABAergic transmission in song-exposed birds compared with controls. Both song-exposed males and females presented an increased probability of expressing bursting glutamatergic currents; however, bursting was more common in males, where bursts appeared even without blockade of GABAergic transmission. Our data show that song exposure changes the excitability of the glutamatergic neuronal network, increasing the probability of the generation of bursts of glutamatergic currents, but does not affect basic parameters of glutamatergic and GABAergic synaptic currents.
Abstract:
Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli that would otherwise go undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, allowing the early recruitment of fight-or-flight behavioural responses even before conscious, detailed recognition of potential threats can take place. Moreover, the present dissertation extends knowledge of crossmodal blindsight by showing that, unlike visual detection, visual orientation discrimination cannot be crossmodally enhanced by sound in the absence of a functional striate cortex. This finding demonstrates, on the one hand, that the striate cortex plays a causal role in the crossmodal enhancement of visual orientation sensitivity and, on the other, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes that improve simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.
Abstract:
Patients with schizophrenia are impaired in many aspects of auditory processing, but indirect evidence suggests that intensity perception is intact. However, because the extraction of meaning from dynamic intensity relies on structures that appear to be altered in schizophrenia, we hypothesized that the perception of auditory looming is impaired as well. Twenty inpatients with schizophrenia and 20 control participants, matched for age, gender, and education, gave intensity ratings of rising-intensity (looming) and falling-intensity (receding) sounds with different mean intensities. Intensity change was overestimated for looming as compared with receding sounds in both groups. However, healthy individuals showed a stronger effect at higher mean intensity, in keeping with previous findings, while patients with schizophrenia lacked this modulation. We discuss how this might support the notion of a more general deficit in extracting emotional meaning from different sensory cues, including intensity and pitch.
Abstract:
The group analysed some syntactic and phonological phenomena that presuppose the existence of interrelated components within the lexicon, motivating the assumption that there are sublexicons within the global lexicon of a speaker. This result is confirmed by experimental findings in neurolinguistics. Hungarian-speaking agrammatic aphasics were tested in several ways, the results showing that the sublexicon of closed-class lexical items provides a highly automated complex device for processing surface sentence structure. Analysing Hungarian ellipsis data from a semantic-syntactic perspective, the group established that the lexicon is best conceived of as split into at least two main sublexicons: the store of semantic-syntactic feature bundles and a separate store of sound forms. On this basis they proposed a format for representing open-class lexical items whose meanings are connected via certain semantic relations. They also proposed a new classification of verbs to account for their contribution to the aspectual reading of the sentence depending on the referential type of the argument, and a new account of the syntactic and semantic behaviour of aspectual prefixes. The partitioned sets of lexical items are sublexicons on phonological grounds, differing in degree of phonotactic grammaticality. These degrees of phonotactic grammaticality are tied up with the problem of psychological reality: how many such degrees native speakers are actually sensitive to. The group developed a hierarchical construction network as an extension of the original General Inheritance Network formalism, and this framework was then used as a platform for the implementation of the grammar fragments.
Abstract:
Among other auditory operations, the analysis of the different sound levels received at the two ears is fundamental for the localization of a sound source. In animals, these so-called interaural level differences are coded by excitatory-inhibitory neurons, yielding asymmetric hemispheric activity patterns for acoustic stimuli with maximal interaural level differences. In human auditory cortex, the temporal blood oxygen level-dependent (BOLD) response to auditory inputs, as measured by functional magnetic resonance imaging (fMRI), consists of at least two independent components: an initial transient and a subsequent sustained signal, which, on a different time scale, are consistent with electrophysiological human and animal response patterns. However, their specific functional role remains unclear. Animal studies suggest that these temporal components are based on different neural networks and play specific roles in representing the external acoustic environment. Here we hypothesized that the transient and sustained response constituents are differentially involved in coding interaural level differences and therefore play different roles in spatial information processing. Healthy subjects underwent monaural and binaural acoustic stimulation, and BOLD responses were measured using high signal-to-noise-ratio fMRI. In the anatomically segmented Heschl's gyrus, the transient response was bilaterally balanced, independent of the side of stimulation, whereas the sustained response was contralateralized. This dissociation suggests a differential role of these two independent temporal response components, with an initial bilateral transient signal subserving rapid sound detection and a subsequent lateralized sustained signal subserving detailed sound characterization.
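One common way to quantify the contralateral bias reported for the sustained component is a simple lateralization index over contralateral and ipsilateral response amplitudes. The sketch below is illustrative only, with hypothetical BOLD amplitudes; it is not the authors' analysis pipeline:

```python
# Illustrative lateralization index: LI = (contra - ipsi) / (contra + ipsi),
# ranging from -1 (fully ipsilateral) to +1 (fully contralateral).
def lateralization_index(contra: float, ipsi: float) -> float:
    return (contra - ipsi) / (contra + ipsi)

# Hypothetical BOLD amplitudes (% signal change) for monaural right-ear stimulation:
li_transient = lateralization_index(contra=1.02, ipsi=0.98)  # ~0.02: bilaterally balanced
li_sustained = lateralization_index(contra=0.90, ipsi=0.45)  # ~0.33: contralateralized
print(f"transient LI = {li_transient:.2f}, sustained LI = {li_sustained:.2f}")
```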
Abstract:
Triggered event-related functional magnetic resonance imaging requires sparse intervals of temporally resolved functional data acquisition whose initiation corresponds to the occurrence of an event, typically an epileptic spike in the electroencephalographic trace. However, conventional fMRI time series are greatly affected by non-steady-state magnetization effects, which obscure initial blood oxygen level-dependent (BOLD) signals. Here, conventional echo-planar imaging and a post-processing solution based on principal component analysis were employed to remove the dominant eigenimages of the time series, filtering out the global signal changes induced by magnetization decay and recovering BOLD signals starting with the first functional volume. This approach was compared with a physical solution using radiofrequency preparation, which nullifies magnetization effects. As an application of the method, the detectability of the initial transient BOLD response in the auditory cortex, elicited by the onset of acoustic scanner noise, was used to demonstrate that post-processing-based removal of magnetization effects detects brain activity patterns identical to those obtained with radiofrequency preparation. Using the auditory responses as an ideal experimental model of triggered brain activity, our results suggest that reducing the initial magnetization effects by removing a few principal components from fMRI data may be useful in the analysis of triggered event-related echo-planar time series. The implications of this study are discussed with special attention to remaining technical limitations and to the additional neurophysiological issues of triggered acquisition.
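The post-processing step described above (discarding the dominant eigenimages so that the global magnetization-decay signal is filtered out) can be sketched with a truncated SVD. This is a minimal reconstruction under assumed data shapes and an assumed number of removed components, not the authors' implementation:

```python
import numpy as np

def remove_dominant_components(ts: np.ndarray, n_remove: int = 2) -> np.ndarray:
    """Remove the first n_remove principal components (dominant eigenimages)
    from an fMRI time series of shape (n_volumes, n_voxels)."""
    mean = ts.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(ts - mean, full_matrices=False)
    s[:n_remove] = 0.0  # zero the dominant components (magnetization decay)
    return U @ np.diag(s) @ Vt + mean

# Hypothetical data: 100 volumes x 5000 voxels with a global T1-driven decay confound.
rng = np.random.default_rng(0)
decay = np.exp(-np.arange(100) / 3.0)[:, None]
data = 100 + decay * rng.uniform(5, 10, (1, 5000)) + rng.normal(0, 0.5, (100, 5000))
cleaned = remove_dominant_components(data, n_remove=2)
```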
Abstract:
Speech melody, or prosody, subserves linguistic, emotional, and pragmatic functions in speech communication. Prosodic perception is based on the decoding of acoustic cues, with a predominant role of frequency-related information perceived as the speaker's pitch. Evaluation of prosodic meaning is a cognitive function implemented in cortical and subcortical networks that generate continuously updated affective or linguistic speaker impressions. Various brain-imaging methods allow delineation of the neural structures involved in prosody processing. In contrast to functional magnetic resonance imaging techniques, DC (direct current, slow) components of the EEG directly measure cortical activation without temporal delay. Activation patterns obtained with this method are highly task-specific and intraindividually reproducible. The studies presented here investigated the topography of prosodic stimulus processing as a function of acoustic stimulus structure and of linguistic or affective task demands, respectively. Data obtained from measuring DC potentials demonstrated that the right hemisphere has a predominant role in processing emotions from tone of voice, irrespective of emotional valence. However, right hemisphere involvement is modulated by diverse speech- and language-related conditions that are associated with left hemisphere participation in prosody processing. The degree of left hemisphere involvement depends on several factors, such as (i) articulatory demands on the perceiver of prosody (possibly also the poser), (ii) a relative left hemisphere specialization in processing temporal cues mediating prosodic meaning, and (iii) the propensity of prosody to act on the segment level to modulate word or sentence meaning. The specific role of top-down effects, in terms of either linguistically or affectively oriented attention, on the lateralization of stimulus processing is not clear and requires further investigation.
Abstract:
BACKGROUND: It is well known that there are specific peripheral activation patterns associated with the emotional valence of sounds. However, it is unclear how these effects adapt over time, and the personality traits influencing these processes are also unclear. Anxiety disorders influence the autonomic activation related to emotional processing, but personality anxiety traits have never been studied in the context of affective auditory stimuli. METHODS: Heart rate, skin conductance, zygomatic muscle activity, and subjective ratings of emotional valence and arousal were recorded in healthy subjects during the presentation of pleasant, unpleasant, and neutral sounds. Recordings were repeated 1 week later to examine possible time-dependent changes related to habituation and sensitization processes. RESULTS AND CONCLUSION: There was no generalized habituation or sensitization process related to the repeated presentation of affective sounds, but rather specific adaptation processes for each physiological measure. These observations are consistent with previous studies performed with affective pictures and simple tones. Skin conductance showed the strongest changes over time, including habituation during the first presentation session and sensitization at the end of the second presentation session, whereas facial electromyographic activity habituated only for the neutral stimuli and heart rate did not habituate at all. Finally, we showed that trait anxiety influenced the orienting reaction to affective sounds, but not the adaptation processes related to their repeated presentation.
Abstract:
The amygdala has been studied extensively for its critical role in associative fear conditioning in animals and humans. Noxious stimuli, such as those used for fear conditioning, are most effective in eliciting behavioral responses and amygdala activation when experienced in an unpredictable manner. Here, we show, using a translational approach in mice and humans, that unpredictability per se, without interaction with motivational information, is sufficient to induce sustained neural activity in the amygdala and to elicit anxiety-like behavior. Exposing mice to mere temporal unpredictability within a time series of neutral sound pulses in an otherwise neutral sensory environment increased expression of the immediate-early gene c-fos and prevented rapid habituation of single-neuron activity in the basolateral amygdala. At the behavioral level, unpredictable, but not predictable, auditory stimulation induced avoidance and anxiety-like behavior. In humans, functional magnetic resonance imaging revealed that temporal unpredictability causes sustained neural activity in the amygdala and anxiety-like behavior, as quantified by enhanced attention toward emotional faces. Our findings show that unpredictability per se is an important feature of the sensory environment, influencing habituation of neuronal activity in the amygdala and emotional behavior, and indicate that regulation of amygdala habituation represents an evolutionarily conserved mechanism for adapting behavior in anticipation of temporally unpredictable events.