872 results for Phonetic Perception
Abstract:
We explored the ability of older (60-80 years old) and younger (18-23 years old) musicians and nonmusicians to judge the similarity of transposed melodies varying in rhythm, mode, and/or contour (Experiment 1) and to discriminate among melodies differing only in rhythm, mode, or contour (Experiment 2). Similarity ratings did not vary greatly among groups, with tunes differing only in mode rated as most similar. In the same/different discrimination task, musicians performed better than nonmusicians, but we found no age differences. We also found that discrimination of major from minor tunes was difficult for everyone, even for musicians. Mode is apparently a subtle dimension in music, despite its deliberate use in composition and despite people's ability to label minor as "sad" and major as "happy."
Abstract:
Auditory imagery for songs was studied in two groups of patients with left or right temporal-lobe excision for control of epilepsy, and a group of matched normal control subjects. Two tasks were used. In the perceptual task, subjects saw the text of a familiar song and simultaneously heard it sung. On each trial they judged whether the second of two capitalized lyrics was higher or lower in pitch than the first. The imagery task was identical in all respects except that no song was presented, so that subjects had to generate an auditory image of the song. The results indicated that all subjects found the imagery task more difficult than the perceptual task, but patients with right temporal-lobe damage performed significantly worse on both tasks than either patients with left temporal-lobe lesions or normal control subjects. These results support the idea that imagery arises from activation of a neural substrate shared with perceptual mechanisms, and provide evidence for a right temporal-lobe specialization for this type of auditory imaginal processing.
Abstract:
This study aimed to assess speech perception and communication skills in adolescents aged 8 to 18 years who received cochlear implants for pre- and peri-lingual deafness.
Abstract:
The actual pattern of non-invasive ventilation (NIV) utilisation in Swiss ICUs has never been reported. Using a survey methodology, we developed a questionnaire sent to the directors of the 79 adult ICUs to identify the perceived pattern of NIV utilisation. We obtained a response rate of 62%. The overall utilisation rate for NIV was 26% of all mechanical ventilations, but we found significant differences in utilisation rates among the linguistic areas, ranging from 20% in the German-speaking part to 48% in the French-speaking part (p < 0.01). NIV was mainly indicated for acute exacerbations of COPD (AeCOPD), acute cardiogenic pulmonary edema (ACPE) and acute respiratory failure (ARF) in selected do-not-intubate patients. In ACPE, CPAP was used much less often than bi-level ventilation, and it was still being applied in AeCOPD. The first-line interface was a facial mask (81%) and the preferred type of ventilator was an ICU machine with an NIV module (69%). The perceived use of NIV is generally high in Switzerland, but regional variations are remarkable. The indications for NIV use are in accordance with international guidelines. A high percentage of units consider selected do-not-intubate conditions an important additional indication.
Abstract:
Neuropsychological studies have suggested that imagery processes may be mediated by neuronal mechanisms similar to those used in perception. To test this hypothesis, and to explore the neural basis for song imagery, 12 normal subjects were scanned using the water bolus method to measure cerebral blood flow (CBF) during the performance of three tasks. In the control condition subjects saw pairs of words on each trial and judged which word was longer. In the perceptual condition subjects also viewed pairs of words, this time drawn from a familiar song; simultaneously they heard the corresponding song, and their task was to judge the change in pitch of the two cued words within the song. In the imagery condition, subjects performed precisely the same judgment as in the perceptual condition, but with no auditory input. Thus, to perform the imagery task correctly an internal auditory representation must be accessed. Paired-image subtraction of the resulting pattern of CBF, together with matched MRI for anatomical localization, revealed that both perceptual and imagery tasks produced similar patterns of CBF changes, as compared to the control condition, in keeping with the hypothesis. More specifically, both perceiving and imagining songs are associated with bilateral neuronal activity in the secondary auditory cortices, suggesting that processes within these regions underlie the phenomenological impression of imagined sounds. Other CBF foci elicited in both tasks include areas in the left and right frontal lobes and in the left parietal lobe, as well as the supplementary motor area. This latter region implicates covert vocalization as one component of musical imagery. Direct comparison of imagery and perceptual tasks revealed CBF increases in the inferior frontal polar cortex and right thalamus. We speculate that this network of regions may be specifically associated with retrieval and/or generation of auditory information from memory.
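The paired-image subtraction logic described in this abstract can be sketched in a few lines. The arrays, values, and threshold below are hypothetical placeholders for normalized CBF volumes and do not reproduce the study's actual acquisition, preprocessing, or statistics.

```python
import numpy as np

# Hypothetical sketch of a paired-image (task-minus-control) subtraction.
# Each array stands in for a group-averaged, spatially normalized CBF volume
# flattened to voxels; values are simulated, not the study's data.
rng = np.random.default_rng(0)
n_voxels = 10_000
cbf_control = rng.normal(50.0, 5.0, n_voxels)                   # word-length judgment
cbf_perception = cbf_control + rng.normal(0.5, 1.0, n_voxels)   # perceptual condition
cbf_imagery = cbf_control + rng.normal(0.4, 1.0, n_voxels)      # imagery condition

# Voxelwise subtraction images: task minus control.
diff_perception = cbf_perception - cbf_control
diff_imagery = cbf_imagery - cbf_control

# Flag voxels whose CBF increase exceeds an arbitrary illustrative threshold.
threshold = 1.5
foci_perception = diff_perception > threshold
foci_imagery = diff_imagery > threshold

# Regions reported as common to both tasks would correspond to voxels that
# survive the threshold in both subtraction images.
shared_foci = foci_perception & foci_imagery
print(f"{shared_foci.sum()} voxels active in both task-minus-control contrasts")
```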
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
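As a rough illustration of the max-criterion cited in this abstract (the AV response exceeding both unisensory responses), the sketch below applies the A < AV > V test to hypothetical per-region response estimates. The region names and values are invented for illustration and are not the study's data or analysis code.

```python
import numpy as np

# Hypothetical per-region response estimates (e.g., mean beta weights) for the
# auditory-only (A), visual-only (V), and audio-visual (AV) conditions in the
# twelve candidate regions; all values are simulated.
regions = [f"region_{i}" for i in range(1, 13)]
rng = np.random.default_rng(1)
resp_A = rng.normal(1.0, 0.3, len(regions))
resp_V = rng.normal(1.0, 0.3, len(regions))
resp_AV = rng.normal(1.3, 0.3, len(regions))

# Max-criterion for multisensory integration: the AV response must exceed the
# stronger of the two unisensory responses, i.e. AV > max(A, V).
meets_max_criterion = resp_AV > np.maximum(resp_A, resp_V)

for name, ok in zip(regions, meets_max_criterion):
    print(f"{name}: {'meets' if ok else 'fails'} max-criterion")
print(f"{meets_max_criterion.sum()} of {len(regions)} regions meet the criterion")
```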