993 results for Speech perception
Abstract:
An open prospective study was conducted among patients visiting an urban medical polyclinic for the first time without an appointment, to assess whether immigrants (who represent more than half of our patients) are aware of the health effects of smoking, whether the level of acculturation influences knowledge, and whether doctors give similar advice to Swiss and foreign smokers. 226 smokers, 105 Swiss (46.5%) and 121 foreign-born (53.5%), participated in the study. 32.2% (95% CI [24.4%; 41.1%]) of migrants and 9.6% [5.3%; 16.8%] of Swiss patients were not aware of the negative effects of smoking. After adjustment for age, the multivariate model showed that the estimated odds of "ignorance of health effects of smoking" were higher for people lacking mastery of the local language compared with those mastering it (odds ratio (OR) = 7.5 [3.6; 15.8], p < 0.001), and higher for men (OR = 4.3 [1.9; 10.0], p < 0.001). Advice to stop smoking was given with similar frequency to immigrants (31.9% [24.2%; 40.8%]) and Swiss patients (29.0% [21.0%; 38.5%]). Nonintegrated patients did not appear to receive less counselling than integrated patients (OR = 1.1 [0.6; 2.1], p = 0.812). We conclude that the level of knowledge among male immigrants who are not integrated or unable to speak the local language is lower than among integrated foreign-born and Swiss patients. Smoking cessation counselling by a doctor was given only to a minority of patients, but such counselling appeared to be given irrespective of nationality.
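The abstract reports adjusted odds ratios from a multivariate model; a simpler, related quantity can be illustrated directly. The sketch below reconstructs approximate 2x2 counts from the reported percentages and computes the crude (unadjusted) odds ratio for unawareness comparing migrants with Swiss patients, with a Woolf 95% confidence interval. Note this is not the study's age-adjusted language-mastery OR of 7.5; the counts are rounded reconstructions, not the original data.

```python
import math

# Approximate counts reconstructed from the abstract's percentages
# (assumption: 32.2% of 121 migrants and 9.6% of 105 Swiss patients
# were unaware of the health effects of smoking).
migrants_unaware = round(0.322 * 121)   # 39
migrants_aware = 121 - migrants_unaware
swiss_unaware = round(0.096 * 105)      # 10
swiss_aware = 105 - swiss_unaware

# Crude (unadjusted) odds ratio for "unaware", migrants vs Swiss
odds_ratio = (migrants_unaware / migrants_aware) / (swiss_unaware / swiss_aware)

# 95% CI via the standard error of the log odds ratio (Woolf's method)
se_log_or = math.sqrt(1 / migrants_unaware + 1 / migrants_aware
                      + 1 / swiss_unaware + 1 / swiss_aware)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}; {ci_high:.2f}]")
# → OR = 4.52, 95% CI [2.12; 9.61]
```

The crude OR differs from the adjusted ones in the abstract because it ignores age and conflates nationality with language mastery; the study's multivariate model separates those effects.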
Abstract:
Puhe ("Speech"; Finnish)
Abstract:
Speech by Governor Culver.
Abstract:
Behavioral and brain responses to identical stimuli can vary with experimental and task parameters, including the context of stimulus presentation or attention. More surprisingly, computational models suggest that noise-related random fluctuations in brain responses to stimuli would alone be sufficient to engender perceptual differences between physically identical stimuli. In two experiments combining psychophysics and EEG in healthy humans, we investigated brain mechanisms whereby identical stimuli are (erroneously) perceived as different (higher vs lower in pitch or longer vs shorter in duration) in the absence of any change in the experimental context. Even though, as expected, participants' percepts of identical stimuli varied randomly, a classification algorithm based on a mixture of Gaussians model (GMM) showed that there was sufficient information in single-trial EEG to reliably predict participants' judgments of the stimulus dimension. By contrasting electrical neuroimaging analyses of auditory evoked potentials (AEPs) to the identical stimuli as a function of participants' percepts, we identified the precise timing and neural correlates (strength vs topographic modulations) as well as intracranial sources of these erroneous perceptions. In both experiments, AEP differences first occurred ∼100 ms after stimulus onset and resulted from topographic modulations reflecting changes in the configuration of active brain networks. Source estimations localized the origin of variations in perceived pitch of identical stimuli within right temporal and left frontal areas, and of variations in perceived duration within right temporoparietal areas. We discuss our results as providing neurophysiologic evidence for the contribution of random fluctuations in brain activity to conscious perception.
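The single-trial decoding idea above can be sketched with a likelihood-ratio classifier: fit a Gaussian density to the EEG features of each percept class and assign held-out trials to the class with higher log-density. This is a one-component-per-class simplification of the GMM classifier the abstract describes, run on synthetic stand-in features (the data, feature dimensions, and class separation below are invented for illustration, not taken from the study).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial EEG features (e.g. AEP amplitudes
# at a few electrodes); values are illustrative only.
n_trials, n_features = 200, 4
X_higher = rng.normal(loc=0.5, scale=1.0, size=(n_trials, n_features))
X_lower = rng.normal(loc=-0.5, scale=1.0, size=(n_trials, n_features))

def fit_gaussian(X):
    """Fit a single multivariate Gaussian (one-component 'mixture')."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_density(X, mean, cov):
    """Log multivariate normal density at each row of X."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)  # per-trial Mahalanobis
    return -0.5 * (mahal + d * np.log(2 * np.pi) + np.log(np.linalg.det(cov)))

# Train on the first half of each class, test on the second half
params_h = fit_gaussian(X_higher[:100])
params_l = fit_gaussian(X_lower[:100])

X_test = np.vstack([X_higher[100:], X_lower[100:]])
y_test = np.array([1] * 100 + [0] * 100)
y_pred = (log_density(X_test, *params_h) > log_density(X_test, *params_l)).astype(int)
accuracy = (y_pred == y_test).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Above-chance accuracy on held-out trials is the kind of evidence the study uses to argue that trial-to-trial EEG variability carries perceptual information; a full GMM would simply use several Gaussian components per class instead of one.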
Abstract:
The physiological basis of human cerebral asymmetry for language remains mysterious. We have used simultaneous physiological and anatomical measurements to investigate the issue. Concentrating on neural oscillatory activity in speech-specific frequency bands and exploring interactions between gestural (motor) and auditory-evoked activity, we find, in the absence of language-related processing, that left auditory, somatosensory, articulatory motor, and inferior parietal cortices show specific, lateralized, speech-related physiological properties. With the addition of ecologically valid audiovisual stimulation, activity in auditory cortex synchronizes with left-dominant input from the motor cortex at frequencies corresponding to syllabic, but not phonemic, speech rhythms. Our results support theories of language lateralization that posit a major role for intrinsic, hardwired perceptuomotor processing in syllabic parsing and are compatible both with the evolutionary view that speech arose from a combination of syllable-sized vocalizations and meaningful hand gestures and with developmental observations suggesting phonemic analysis is a developmentally acquired process.
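Synchronization between motor and auditory activity at syllable-rate frequencies, as described above, is typically quantified with spectral coherence. The sketch below simulates two signals sharing a ~5 Hz "syllabic" rhythm plus independent noise and shows that coherence is elevated in the 4-7 Hz band but not in a higher control band. The sampling rate, band edges, and signals are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 250.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)    # 60 s of simulated signal

# Shared ~5 Hz syllable-rate component plus independent noise in each
# signal, mimicking motor-to-auditory coupling at syllabic rhythms.
syllabic = np.sin(2 * np.pi * 5 * t)
motor = syllabic + rng.normal(scale=1.0, size=t.size)
auditory = syllabic + rng.normal(scale=1.0, size=t.size)

# Magnitude-squared coherence via Welch's method
f, Cxy = coherence(motor, auditory, fs=fs, nperseg=1024)

syll_band = Cxy[(f >= 4) & (f <= 7)].mean()     # syllable-rate band
ctrl_band = Cxy[(f >= 25) & (f <= 35)].mean()   # control band, no shared input
print(f"4-7 Hz coherence: {syll_band:.2f}, 25-35 Hz: {ctrl_band:.2f}")
```

In the simulation, only the band containing the shared rhythm shows appreciable coherence; the study's claim is analogous but between recorded auditory and motor cortical activity, at syllabic but not phonemic rates.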