62 results for SPEECH
Abstract:
Telephone communication is a challenge for many hearing-impaired individuals. One important technical reason for this difficulty is the restricted frequency range (0.3-3.4 kHz) of conventional landline telephones. Internet telephony (voice over Internet protocol [VoIP]) is transmitted with a larger frequency range (0.1-8 kHz) and therefore includes more frequencies relevant to speech perception. According to a recently published, laboratory-based study, the theoretical advantage of ideal VoIP conditions over conventional telephone quality has translated into improved speech perception by hearing-impaired individuals. However, the speech perception benefits of nonideal VoIP network conditions, which may occur in daily life, have not been explored. VoIP use cannot be recommended to hearing-impaired individuals before its potential under more realistic conditions has been examined.
Abstract:
The aim was to investigate the effect of different speech tasks, i.e. recitation of prose (PR), alliteration (AR) and hexameter (HR) verses, and a control task (mental arithmetic (MA) with voicing of the result), on end-tidal CO2 (PETCO2), cerebral hemodynamics and oxygenation. CO2 levels in the blood are known to strongly affect cerebral blood flow. Speech changes the breathing pattern and may thus affect CO2 levels. Measurements were performed on 24 healthy adult volunteers during the performance of the 4 tasks. Tissue oxygen saturation (StO2) and absolute concentrations of oxyhemoglobin ([O2Hb]), deoxyhemoglobin ([HHb]) and total hemoglobin ([tHb]) were measured by functional near-infrared spectroscopy (fNIRS), and PETCO2 by a gas analyzer. Statistical analysis was applied to the differences between the baseline period before the task, the 2 recitation periods, and the 5 baseline periods after the task. The 2 brain hemispheres and the 4 tasks were tested separately. A significant decrease in PETCO2 was found during all 4 tasks, with the smallest decrease during the MA task. During the recitation tasks, a statistically significant (p < 0.05) decrease in StO2 occurred during PR and AR in the right prefrontal cortex (PFC) and during AR and HR in the left PFC. [O2Hb] decreased significantly during PR, AR and HR in both hemispheres. [HHb] increased significantly during the AR task in the right PFC. [tHb] decreased significantly during HR in the right PFC and during PR, AR and HR in the left PFC. During the MA task, StO2 increased and [HHb] decreased significantly. We conclude that changes in breathing (hyperventilation) during the tasks led to lower CO2 pressure in the blood (hypocapnia), which was predominantly responsible for the measured changes in cerebral hemodynamics and oxygenation.
In conclusion, our findings demonstrate that PETCO2 should be monitored during functional brain studies investigating speech with neuroimaging modalities such as fNIRS and fMRI, to ensure a correct interpretation of changes in hemodynamics and oxygenation.
Abstract:
BACKGROUND AND OBJECTIVE: In the Swiss version of the Freiburg speech intelligibility test, five test words from the original German recording which are rarely used in Switzerland were replaced. Furthermore, differences in the transfer functions between headphone and loudspeaker presentation are not taken into account during calibration. New settings for the levels of the individual test words in the recommended recording and small changes in the calibration procedures led us to verify the currently used normative values. PATIENTS AND METHODS: Speech intelligibility was measured in 20 subjects with normal hearing using monosyllabic words and numbers presented via headphones and loudspeakers. RESULTS: On average, 50% speech intelligibility was reached at levels 7.5 dB lower under free-field conditions than with headphone presentation. The average difference between numbers and monosyllabic words was 9.6 dB, which is considerably lower than the 14 dB of the current normative curves. CONCLUSIONS: There is good agreement between our measurements and the normative values for tests using monosyllabic words and headphones, but not for numbers or free-field measurements.
Abstract:
Speech melody, or prosody, subserves linguistic, emotional, and pragmatic functions in speech communication. Prosodic perception is based on the decoding of acoustic cues, with a predominant role for frequency-related information perceived as the speaker's pitch. Evaluation of prosodic meaning is a cognitive function implemented in cortical and subcortical networks that generate continuously updated affective or linguistic speaker impressions. Various brain-imaging methods allow delineation of the neural structures involved in prosody processing. In contrast to functional magnetic resonance imaging techniques, DC (direct current, slow) components of the EEG directly measure cortical activation without temporal delay. Activation patterns obtained with this method are highly task specific and intraindividually reproducible. The studies presented here investigated the topography of prosodic stimulus processing as a function of acoustic stimulus structure and of linguistic or affective task demands. Data obtained from measuring DC potentials demonstrated that the right hemisphere has a predominant role in processing emotions from the tone of voice, irrespective of emotional valence. However, right hemisphere involvement is modulated by diverse speech- and language-related conditions that are associated with left hemisphere participation in prosody processing. The degree of left hemisphere involvement depends on several factors, such as (i) articulatory demands on the perceiver of prosody (possibly also the poser), (ii) a relative left hemisphere specialization in processing temporal cues mediating prosodic meaning, and (iii) the propensity of prosody to act on the segment level in order to modulate word or sentence meaning. The specific role of top-down effects, in terms of either linguistically or affectively oriented attention, on the lateralization of stimulus processing is not clear and requires further investigation.
Abstract:
CONCLUSIONS: Speech understanding is better with the Baha Divino than with the Baha Compact in competing noise from the rear. No difference was found for speech understanding in quiet. Subjectively, overall sound quality and speech understanding were rated better for the Baha Divino. OBJECTIVES: To compare speech understanding in quiet and in noise, as well as subjective ratings, for two different bone-anchored hearing aids: the recently developed Baha Divino and the Baha Compact. PATIENTS AND METHODS: Seven adults with bilateral conductive or mixed hearing losses who were users of a bone-anchored hearing aid were tested with the Baha Compact in quiet and in noise. Tests were repeated after 3 months of use with the Baha Divino. RESULTS: There was no significant difference between the two types of Baha for speech understanding in quiet when tested with German numbers and monosyllabic words at presentation levels between 50 and 80 dB. For speech understanding in noise, an advantage of 2.3 dB for the Baha Divino over the Baha Compact was found when noise was emitted from a loudspeaker to the rear of the listener and the directional microphone noise reduction system was activated. Subjectively, the Baha Divino was rated statistically significantly better in terms of overall sound quality.
Abstract:
Speech coding might have an impact on the music perception of cochlear implant users. This questionnaire study compares the musical activities and perception of postlingually deafened cochlear implant users with three different coding strategies (CIS, ACE, SPEAK), using the Munich Music Questionnaire. Overall, the self-reported music perception of CIS, SPEAK, and ACE users did not differ substantially.
Abstract:
Open-ended interviews of 90 min duration with 38 patients were analyzed with respect to speech stylistics, shown by Schucker and Jacobs to differentiate individuals with type A personality features from those with type B. In our patients, type A/B had been assessed with the Bortner Personality Inventory. The stylistics studied were: repeated words, swallowed words, interruptions, simultaneous speech, silence latency between question and answer (SL), speed of speech, uneven speed of speech (USS), explosive words (PW), uneven speech volume (USV), and speech volume. Correlations between the two raters were high for all speech categories. Positive correlations were found between the extent of type A and SL (r = 0.33; p = 0.022), USS (r = 0.51; p = 0.002), PW (r = 0.46; p = 0.003) and USV (r = 0.39; p = 0.012). Our results indicate that the speech of type A individuals in nonstress open-ended interviews tends to show higher emotional tension (positive correlations for USS, PW and USV) and is more controlled in conversation (positive correlation for SL).