960 results for audiovisual speech perception


Relevance: 100.00%

Abstract:

Background: Word deafness is a rare condition in which pathologically degraded speech perception results in impaired repetition and comprehension but otherwise intact linguistic skills. Although impaired linguistic systems in aphasias resulting from damage to the neural language system (here termed central impairments) have been consistently shown to be amenable to external influences such as linguistic or contextual information (e.g., cueing effects in naming), it is not known whether similar influences can be shown for aphasia arising from damage to a perceptual system (here termed peripheral impairments). Aims: This study aimed to investigate the extent to which pathologically degraded speech perception could be facilitated or disrupted by providing visual as well as auditory information. Methods and Procedures: In three word repetition tasks, the participant with word deafness (AB) repeated words under different conditions: words were repeated in the context of a pictorial or written target, a distractor (semantic, unrelated, rhyme, or phonological neighbour), or a blank page (nothing). Accuracy and error types were analysed. Results: AB was impaired at repetition in the blank condition, confirming her degraded speech perception. Repetition was significantly facilitated when accompanied by a picture or a written example of the word and significantly impaired by the presence of a written rhyme. Errors in the blank condition were primarily formal, whereas errors in the rhyme condition were primarily miscues (saying the distractor word rather than the target). Conclusions: Cross-modal input can both facilitate and further disrupt repetition in word deafness. The cognitive mechanisms behind these findings are discussed. Both top-down influence from the lexical layer on perceptual processes and intra-lexical competition within the lexical layer may play a role.

Relevance: 100.00%

Abstract:

The primary purpose of this study was to evaluate speech perception and localization abilities in children who have received sequential cochlear implants, with the first implant received before age 4 and the second implant received before age 12. Results indicate performance in the bilateral cochlear implant condition is significantly better than listening with each implant alone for the outcome measures used in this study.

Relevance: 100.00%

Abstract:

New technology in the Freedom® speech processor for cochlear implants was developed to improve how incoming acoustic sound is processed; this applies not only to new users but also to users of previous generations of cochlear implants. Aim: To identify the contribution of this technology, when applied to the Nucleus 22®, to speech perception tests in silence and in noise and to audiometric thresholds. Methods: A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra® processor was revised and optimized before starting the tests. Troubleshooting was used to identify malfunction. To identify the contribution of the Freedom® technology for the Nucleus 22®, auditory thresholds and speech perception tests were performed in free field in soundproof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare groups. Results: The Freedom® technology applied to the Nucleus 22® showed a statistically significant difference in all speech perception tests and audiometric thresholds. Conclusion: The Freedom® technology improved the speech perception performance and audiometric thresholds of patients with the Nucleus 22®.
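The paired comparison described above can be sketched as follows. This is a minimal illustration, assuming `scipy` is available; the patient scores are invented stand-ins, not data from the study.

```python
# Hedged sketch: Wilcoxon signed-rank test on paired speech-perception
# scores (older Spectra map vs. Freedom processor). Data are invented.
from scipy.stats import wilcoxon

# Hypothetical % correct monosyllable scores for the same 17 patients.
spectra = [42, 50, 38, 55, 47, 60, 33, 52, 45, 58, 40, 49, 36, 53, 44, 57, 41]
freedom = [55, 61, 50, 66, 58, 72, 41, 63, 54, 70, 49, 60, 44, 65, 52, 69, 50]

# The test ranks the absolute paired differences and compares the rank
# sums of positive vs. negative differences.
stat, p = wilcoxon(spectra, freedom)
print(f"W = {stat}, p = {p:.4f}")
```

Because every invented "Freedom" score exceeds its paired "Spectra" score, the rank sum for one sign is zero and the p-value is very small, mirroring the kind of significant paired difference the abstract reports.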

Relevance: 100.00%

Abstract:

Objective: To characterize the P1 component of long-latency auditory evoked potentials (LLAEPs) in cochlear implant users with auditory neuropathy spectrum disorder (ANSD) and to determine, firstly, whether it correlates with speech perception performance and, secondly, whether it correlates with other variables related to cochlear implant use. Methods: This study was conducted at the Center for Audiological Research at the University of São Paulo. The sample included 14 pediatric (4-11 years of age) cochlear implant users with ANSD, of both sexes, with profound prelingual hearing loss. Patients with hypoplasia or agenesis of the auditory nerve were excluded from the study. LLAEPs produced in response to speech stimuli were recorded using a Smart EP USB Jr. system. The subjects' speech perception was evaluated using tests 5 and 6 of the Glendonald Auditory Screening Procedure (GASP). Results: The P1 component was detected in 12/14 (85.7%) children with ANSD. Latency of the P1 component correlated with duration of sensory hearing deprivation (p = 0.007, r = 0.7278), but not with duration of cochlear implant use. An analysis of groups assigned according to GASP performance (k-means clustering) revealed that aspects of prior central auditory system development reflected in the P1 component are related to behavioral auditory skills. Conclusions: In children with ANSD using cochlear implants, the P1 component can serve as a marker of central auditory cortical development and a predictor of the implanted child's speech perception performance.
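The latency-deprivation correlation reported above is a standard Pearson product-moment correlation; a minimal sketch follows, with the formula as given in any statistics reference but all data values invented for illustration.

```python
# Hedged sketch: Pearson's r between duration of auditory deprivation
# and P1 latency. All numbers below are invented, not from the study.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: deprivation in years, P1 latency in ms.
deprivation = [1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0]
latency     = [230, 245, 238, 260, 255, 270, 268, 285, 278, 300, 290, 310]

r = pearson_r(deprivation, latency)  # strong positive correlation
```

A positive r of this kind is what the abstract's r = 0.7278 expresses: longer deprivation going with later P1 latencies.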

Relevance: 100.00%

Abstract:

Introduction: In recent years, the benefits associated with the use of cochlear implants (CIs), especially with regard to speech perception, have proven to surpass those produced by the use of hearing aids, making CIs a highly efficient resource for patients with severe/profound hearing loss. However, few studies so far have assessed the satisfaction of adult CI users. Objective: To analyze the relationship between the level of speech perception and the degree of satisfaction of adult CI users. Method: This was a prospective cross-sectional study conducted in the Audiological Research Center (CPA) of the Hospital of Craniofacial Anomalies, University of São Paulo (HRAC/USP), in Bauru, São Paulo, Brazil. A total of 12 CI users with pre-lingual or post-lingual hearing loss participated in this study. The following tools were used in the assessment: the "Satisfaction with Amplification in Daily Life" (SADL) questionnaire, culturally adapted to Brazilian Portuguese; a speech perception test under quiet conditions; and the Hearing in Noise Test (HINT) Brazil under free-field conditions. The SADL results were then related to the speech perception scores. Results: The participants in the study were on the whole satisfied with their devices, and the degree of satisfaction correlated positively with the ability to perceive monosyllabic words under quiet conditions. Satisfaction did not correlate with the level of speech perception in noisy environments. Conclusion: Assessments of satisfaction may help professionals to identify what other factors, in addition to speech perception, may contribute to the satisfaction of CI users, in order to reorganize the intervention process and improve users' quality of life.

Relevance: 100.00%

Abstract:

This study aimed to assess speech perception and communication skills in adolescents between 8 and 18 years of age who received cochlear implants for pre- and peri-lingual deafness.

Relevance: 100.00%

Abstract:

Telephone communication is a challenge for many hearing-impaired individuals. One important technical reason for this difficulty is the restricted frequency range (0.3-3.4 kHz) of conventional landline telephones. Internet telephony (voice over Internet protocol [VoIP]) is transmitted with a larger frequency range (0.1-8 kHz) and therefore includes more frequencies relevant to speech perception. According to a recently published, laboratory-based study, the theoretical advantage of ideal VoIP conditions over conventional telephone quality has translated into improved speech perception by hearing-impaired individuals. However, the speech perception benefits of nonideal VoIP network conditions, which may occur in daily life, have not been explored. VoIP use cannot be recommended to hearing-impaired individuals before its potential under more realistic conditions has been examined.
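The bandwidth argument above can be made concrete with a small sketch. The band limits come from the abstract; the cue frequencies are invented but typical values, used only to show which speech components each band passes.

```python
# Hedged illustration: which speech-relevant frequencies fall inside the
# narrowband landline range (0.3-3.4 kHz) vs. the wideband VoIP range
# (0.1-8 kHz). Cue frequencies are invented, merely typical examples.

LANDLINE = (300, 3400)   # Hz, conventional telephone band
VOIP = (100, 8000)       # Hz, wideband VoIP

# Hypothetical cues: F0 of a low male voice, typical F1-F3 values,
# and the high-frequency frication energy of /s/.
cues = {"F0": 110, "F1": 500, "F2": 1500, "F3": 2500, "/s/ frication": 6000}

def in_band(freq, band):
    lo, hi = band
    return lo <= freq <= hi

landline_cues = [name for name, f in cues.items() if in_band(f, LANDLINE)]
voip_cues = [name for name, f in cues.items() if in_band(f, VOIP)]
```

Under these example values, the landline band drops both the low fundamental and the /s/ frication, while the VoIP band passes all five cues, which is the sense in which the wider range "includes more frequencies relevant to speech perception."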

Relevance: 100.00%

Abstract:

OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600, and C500), and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. RESULTS: A higher frame rate (>7 fps), higher camera resolution (>640 × 480 px), and shorter image/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full-screen mode. There was a significant median gain in speech perception of +8.5 percentage points (p = 0.009) across all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception gain +11.8 percentage points, p = 0.032). CONCLUSION: Webcams have the potential to improve telecommunication for hearing-impaired individuals.
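The audio-visual gain analysis above boils down to a median of paired differences; a minimal sketch follows, with invented scores chosen only so the arithmetic is visible (the study's actual data and +8.5-point gain are not reproduced here).

```python
# Hedged sketch: median per-subject gain (in percentage points) from
# adding visual cues. Paired audio-only vs. audio-visual sentence
# scores for 21 hypothetical CI users; all values are invented.
import statistics

audio_only   = [40, 55, 62, 30, 48, 70, 35, 52, 60, 45, 38,
                66, 50, 58, 42, 33, 72, 47, 56, 61, 39]
audio_visual = [52, 63, 68, 44, 57, 75, 48, 60, 67, 55, 50,
                72, 59, 65, 53, 45, 78, 56, 63, 69, 50]

# One gain value per subject, then the median across subjects.
gains = [av - ao for ao, av in zip(audio_only, audio_visual)]
median_gain = statistics.median(gains)
```

Reporting the median rather than the mean, as the study does, keeps the summary robust to a few subjects with unusually large or small gains.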

Relevance: 100.00%

Abstract:

How speech is separated perceptually from other speech remains poorly understood. Recent research suggests that the ability of an extraneous formant to impair intelligibility depends on the modulation of its frequency, but not its amplitude, contour. This study further examined the effect of formant-frequency variation on intelligibility by manipulating the rate of formant-frequency change. Target sentences were synthetic three-formant (F1 + F2 + F3) analogues of natural utterances. Perceptual organization was probed by presenting stimuli dichotically (F1 + F2C + F3C; F2 + F3), where F2C + F3C constitute a competitor for F2 and F3 that listeners must reject to optimize recognition. Competitors were derived using formant-frequency contours extracted from extended passages spoken by the same talker and processed to alter the rate of formant-frequency variation, such that rate scale factors relative to the target sentences were 0, 0.25, 0.5, 1, 2, and 4 (0 = constant frequencies). Competitor amplitude contours were either constant, or time-reversed and rate-adjusted in parallel with the frequency contour. Adding a competitor typically reduced intelligibility; this reduction increased with competitor rate until the rate was at least twice that of the target sentences. Similarity in the results for the two amplitude conditions confirmed that formant amplitude contours do not influence across-formant grouping. The findings indicate that competitor efficacy is not tuned to the rate of the target sentences; most probably, it depends primarily on the overall rate of frequency variation in the competitor formants. This suggests that, when segregating the speech of concurrent talkers, differences in speech rate may not be a significant cue for across-frequency grouping of formants.
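The rate-scaling manipulation above can be sketched as a simple time-rescaling of a formant-frequency contour. This is a toy linear-interpolation version under invented contour values; the study itself used full formant synthesis, not this transform.

```python
# Hedged sketch: rescale how fast a formant-frequency contour varies,
# keeping the same duration. A rate factor of 0 yields the constant-
# frequency condition. The F2 contour values (Hz) are invented.

def rescale_rate(contour, factor):
    """Return a contour whose frequency variation runs `factor` times
    as fast over the same number of samples (0 -> constant)."""
    if factor == 0:
        return [contour[0]] * len(contour)
    n = len(contour)
    out = []
    for i in range(n):
        # Position in the source contour; wrapping lets factors > 1
        # cycle through the pattern more than once.
        pos = (i * factor) % (n - 1)
        lo = int(pos)
        frac = pos - lo
        out.append(contour[lo] * (1 - frac) + contour[lo + 1] * frac)
    return out

f2 = [1100, 1250, 1400, 1300, 1150, 1200, 1350, 1500, 1400, 1250]
half_rate = rescale_rate(f2, 0.5)  # variation at half the original rate
constant  = rescale_rate(f2, 0)    # the 0-scale (flat) condition
```

The rate factors in the study (0, 0.25, 0.5, 1, 2, 4) would each be one call to such a function, producing competitors whose frequency variation is slower or faster than the target's.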

Relevance: 100.00%

Abstract:

In an isolated syllable, a formant will tend to be segregated perceptually if its fundamental frequency (F0) differs from that of the other formants. This study explored whether similar results are found for sentences, and specifically whether differences in F0 (ΔF0) also influence across-formant grouping in circumstances where the exclusion or inclusion of the manipulated formant critically determines speech intelligibility. Three-formant (F1 + F2 + F3) analogues of almost continuously voiced natural sentences were synthesized using a monotonous glottal source (F0 = 150 Hz). Perceptual organization was probed by presenting stimuli dichotically (F1 + F2C + F3; F2), where F2C is a competitor for F2 that listeners must resist to optimize recognition. Competitors were created using time-reversed frequency and amplitude contours of F2, and F0 was manipulated (ΔF0 = ±8, ±2, or 0 semitones relative to the other formants). Adding F2C typically reduced intelligibility, and this reduction was greatest when ΔF0 = 0. There was an additional effect of absolute F0 for F2C, such that competitor efficacy was greater for higher F0s. However, competitor efficacy was not due to energetic masking of F3 by F2C. The results are consistent with the proposal that a grouping “primitive” based on common F0 influences the fusion and segregation of concurrent formants in sentence perception.
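The F0 differences tested (±8, ±2, and 0 semitones) map onto concrete competitor fundamentals via the standard semitone ratio; a minimal sketch, with the 150-Hz baseline taken from the abstract:

```python
# Hedged sketch: convert semitone offsets into competitor F0 values
# relative to the 150-Hz monotonous source stated in the abstract.

BASE_F0 = 150.0  # Hz

def shift_semitones(f0, semitones):
    """One semitone is a frequency ratio of 2**(1/12)."""
    return f0 * 2 ** (semitones / 12)

conditions = [-8, -2, 0, 2, 8]
f0s = {st: round(shift_semitones(BASE_F0, st), 1) for st in conditions}
```

This makes explicit that an 8-semitone offset is a large F0 separation (roughly a factor of 1.59 up or down from 150 Hz), while 2 semitones is a much smaller one, which is the dimension along which competitor efficacy varied.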

Relevance: 100.00%

Abstract:

How speech is separated perceptually from other speech remains poorly understood. In a series of experiments, perceptual organisation was probed by presenting three-formant (F1+F2+F3) analogues of target sentences dichotically, together with a competitor for F2 (F2C), or for F2+F3, which listeners must reject to optimise recognition. To control for energetic masking, the competitor was always presented in the opposite ear to the corresponding target formant(s). Sine-wave speech was used initially, and different versions of F2C were derived from F2 using separate manipulations of its amplitude and frequency contours. F2Cs with time-varying frequency contours were highly effective competitors, whatever their amplitude characteristics, whereas constant-frequency F2Cs were ineffective. Subsequent studies used synthetic-formant speech to explore the effects of manipulating the rate and depth of formant-frequency change in the competitor. Competitor efficacy was not tuned to the rate of formant-frequency variation in the target sentences; rather, the reduction in intelligibility increased with competitor rate relative to the rate for the target sentences. Therefore, differences in speech rate may not be a useful cue for separating the speech of concurrent talkers. Effects of competitors whose depth of formant-frequency variation was scaled by a range of factors were explored using competitors derived either by inverting the frequency contour of F2 about its geometric mean (plausibly speech-like pattern) or by using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Competitor efficacy depended on the overall depth of frequency variation, not depth relative to that for the other formants. Furthermore, the triangle-wave competitors were as effective as their more speech-like counterparts. 
Overall, the results suggest that formant-frequency variation is critical for the across-frequency grouping of formants but that this grouping does not depend on speech-specific constraints.
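The two competitor constructions described above (inversion of the F2 frequency contour about its geometric mean, and scaling the depth of frequency variation) can be sketched as follows. Treating the scaling as log-domain excursions about the geometric mean is an assumption consistent with the description, and the contour values are invented.

```python
# Hedged sketch: invert a formant-frequency contour about its geometric
# mean, and scale its depth of variation. Contour values (Hz) invented.
import math

def geometric_mean(freqs):
    return math.exp(sum(math.log(f) for f in freqs) / len(freqs))

def invert_about_gm(freqs):
    """f -> gm**2 / f: mirrors the contour in log frequency, keeping
    the same geometric mean."""
    gm = geometric_mean(freqs)
    return [gm * gm / f for f in freqs]

def scale_depth(freqs, factor):
    """Expand or compress log-frequency excursions about the geometric
    mean by `factor` (0 -> constant at the geometric mean)."""
    gm = geometric_mean(freqs)
    return [gm * (f / gm) ** factor for f in freqs]

f2 = [1000, 1500, 1200, 900, 1400]
inverted = invert_about_gm(f2)   # plausibly speech-like competitor
flat = scale_depth(f2, 0)        # zero-depth (constant) competitor
```

Inversion preserves the overall extent and rate of frequency variation while destroying the original pattern, which is why it yields a competitor matched in depth to F2 yet distinct from it.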