31 results for auditory cues
in University of Queensland eSpace - Australia
Abstract:
In this paper, we describe an algorithm that automatically detects and labels peaks I-VII of the normal, suprathreshold auditory brainstem response (ABR). The algorithm proceeds in three stages, with the option of a fourth: (1) all candidate peaks and troughs in the ABR waveform are identified using zero crossings of the first derivative, (2) peaks I-VII are identified from these candidate peaks based on their latency and morphology, (3) if required, peaks II and IV are identified as points of inflection using zero crossings of the second derivative and (4) interpeak troughs are identified before peak latencies and amplitudes are measured. The performance of the algorithm was estimated on a set of 240 normal ABR waveforms recorded using a stimulus intensity of 90 dBnHL. When compared to an expert audiologist, the algorithm correctly identified the major ABR peaks (I, III and V) in 96-98% of the waveforms and the minor ABR peaks (II, IV, VI and VII) in 45-83% of waveforms. Whilst peak II was correctly identified in only 83% and peak IV in 77% of waveforms, it was shown that 5% of the peak II identifications and 31% of the peak IV identifications came as a direct result of allowing these peaks to be found as points of inflection. Copyright (C) 2005 S. Karger AG, Basel.
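To make the three-stage procedure concrete, here is a minimal Python sketch (not the authors' implementation) of the derivative-based peak picking it describes: candidate peaks and troughs from zero crossings of the first derivative, optional inflection points from zero crossings of the second derivative, and labelling of a peak as the largest candidate inside a latency window. The sampling rate and latency windows are illustrative placeholders, not the study's values.

```python
# Minimal sketch of derivative-based ABR peak picking; windows and rates are invented.
import numpy as np

def candidate_extrema(waveform):
    """Stage 1: candidate peaks and troughs from zero crossings of the 1st derivative."""
    d1 = np.gradient(waveform)
    sign_change = np.diff(np.sign(d1))
    peaks = np.where(sign_change < 0)[0]    # slope goes + to - : local maxima
    troughs = np.where(sign_change > 0)[0]  # slope goes - to + : local minima
    return peaks, troughs

def inflection_points(waveform):
    """Stage 3 option: points of inflection from zero crossings of the 2nd derivative."""
    d2 = np.gradient(np.gradient(waveform))
    return np.where(np.diff(np.sign(d2)) != 0)[0]

def label_peak(waveform, candidates, fs, window_ms):
    """Stage 2 idea: take the largest candidate inside a latency window (in ms)."""
    lo, hi = (int(ms * fs / 1000.0) for ms in window_ms)
    in_window = candidates[(candidates >= lo) & (candidates <= hi)]
    if in_window.size == 0:
        return None
    return in_window[np.argmax(waveform[in_window])]

# Hypothetical usage: label wave V within a made-up 5.0-6.5 ms window at fs = 20 kHz.
# peaks, troughs = candidate_extrema(abr)
# wave_v = label_peak(abr, peaks, fs=20000, window_ms=(5.0, 6.5))
```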
Abstract:
Despite several decades of research, neither clinicians nor academics can agree on a single definition of central auditory processing (CAP) or central auditory processing disorder (CAPD). This article considers why this is the case, and comments on the resulting implications for CAP assessment and CAPD rehabilitation in the clinic.
Abstract:
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as 'da' or 'tha', was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. (C) 2004 Wiley Periodicals, Inc.
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training. (C) 2004 Elsevier Ltd. All rights reserved.
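The comparison against integration and non-integration model predictions can be made concrete with a small sketch. The multiplicative rule below is in the spirit of fuzzy-logical integration models commonly fitted to bimodal speech identification data; it is offered only to illustrate the contrast, not as the specific formulation used in the study, and the unimodal accuracies are invented.

```python
# Illustrative contrast between "integration" and "non-integration" predictions
# for bimodal (auditory + visual) identification accuracy. All values are invented.
def integration_prediction(p_aud, p_vis):
    """Multiplicative combination of auditory and visual evidence (integration)."""
    return (p_aud * p_vis) / (p_aud * p_vis + (1 - p_aud) * (1 - p_vis))

def non_integration_prediction(p_aud, p_vis, w_aud=0.5):
    """Probability mixture: only one modality is used on any given trial."""
    return w_aud * p_aud + (1 - w_aud) * p_vis

p_aud, p_vis = 0.70, 0.60                         # hypothetical unimodal accuracies
print(integration_prediction(p_aud, p_vis))       # ~0.78: bimodal gain over either modality alone
print(non_integration_prediction(p_aud, p_vis))   # 0.65: no gain beyond a weighted average
```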
Abstract:
The lack of standardized tests of central auditory processing disorder (CAPD) in South Africa (SA) led to the formation of a SA CAPD Taskforce, and the interim development of a "low linguistically loaded" CAPD test protocol using test recordings from the 'Tonal and Speech Materials for Auditory Perceptual Assessment Disc 2.0'. This study inferentially compared the performance of 16 SA English first-language and 16 SA English second-language adult speakers on this test protocol, and descriptively compared their performances to previously published American normative data. Comparisons between the SA English first and second language speakers showed a poorer right ear performance (p < .05) by the second language speakers on the two-pair dichotic digits test only. Equivalent performances (p > .05) were observed for the left ear on the two-pair dichotic digits test, and on the frequency patterns test, the duration patterns test, the low-pass filtered speech test, the 45% time compressed speech test, the speech masking level difference test, and the consonant vowel consonant (CVC) binaural fusion test. Comparisons between the SA English and the American normative data showed many large differences (up to 37.1% with respect to predicted pass criteria as calculated by mean-2SD cutoffs), with the SA English speakers performing both better and worse depending on the test involved. As a result, the American normative data were not considered appropriate for immediate use as normative data in SA. Instead, the preliminary data provided in this study were recommended as interim normative data for both SA English first and second language adult speakers, until larger scale SA normative data can be obtained.
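The mean-2SD pass criteria mentioned above can be computed directly from a normative sample. The following sketch uses invented percent-correct scores purely to illustrate the calculation.

```python
# Minimal sketch of a "mean - 2 SD" pass criterion for one test and one ear.
# The scores are invented; in practice they would be the normative group's
# percent-correct scores for that test.
import numpy as np

scores = np.array([92.0, 88.0, 95.0, 90.0, 85.0, 93.0])   # hypothetical % correct
cutoff = scores.mean() - 2 * scores.std(ddof=1)            # sample SD (ddof=1)
print(f"pass criterion: {cutoff:.1f}% correct")            # scores below this fall outside the norm
```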
Abstract:
Objective: To examine the relationship between the auditory brainstem response (ABR) and its reconstructed waveforms following discrete wavelet transformation (DWT), and to comment on the resulting implications for ABR DWT time-frequency analysis. Methods: ABR waveforms were recorded from 120 normal hearing subjects at 90, 70, 50, 30, 10 and 0 dBnHL, decomposed using a 6-level DWT, and reconstructed at individual wavelet scales (frequency ranges) A6, D6, D5 and D4. These waveforms were then compared for general correlations, and for patterns of change due to stimulus level, and subject age, gender and test ear. Results: The reconstructed ABR DWT waveforms showed 3 primary components: a large-amplitude waveform in the low-frequency A6 scale (0-266.6 Hz) with its single peak corresponding in latency with ABR waves III and V; a mid-amplitude waveform in the mid-frequency D6 scale (266.6-533.3 Hz) with its first 5 waves corresponding in latency to ABR waves I, III, V, VI and VII; and a small-amplitude, multiple-peaked waveform in the high-frequency D5 scale (533.3-1066.6 Hz) with its first 7 waves corresponding in latency to ABR waves I, II, III, IV, V, VI and VII. Comparisons between ABR waves I, III and V and their corresponding reconstructed ABR DWT waves showed strong correlations and similar, reliable, and statistically robust changes due to stimulus level and subject age, gender and test ear groupings. Limiting these findings, however, was the unexplained absence of a small number (2%, or 117/6720) of reconstructed ABR DWT waves, despite their corresponding ABR waves being present. Conclusions: Reconstructed ABR DWT waveforms can be used as valid time-frequency representations of the normal ABR, but with some limitations. In particular, the unexplained absence of a small number of reconstructed ABR DWT waves in some subjects, probably resulting from the lack of shift invariance ('shift variance') inherent to the DWT process, needs to be addressed. Significance: This is the first report of the relationship between the ABR and its reconstructed ABR DWT waveforms in a large normative sample. (C) 2004 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
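A minimal sketch of the decompose-and-reconstruct step, assuming the PyWavelets package: the waveform is decomposed with a 6-level DWT and a single scale is reconstructed by zeroing every other coefficient array before inverting the transform. The wavelet family ('db4') and the synthetic input are placeholders, not the study's actual recordings or filter choices.

```python
# Sketch of 6-level DWT decomposition and single-scale reconstruction (PyWavelets assumed).
import numpy as np
import pywt

fs = 20000                                     # hypothetical sampling rate (Hz)
t = np.arange(0, 0.015, 1 / fs)                # 15 ms analysis window
abr = np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.005)   # stand-in for a recorded ABR

# wavedec returns [cA6, cD6, cD5, cD4, cD3, cD2, cD1] for level=6.
coeffs = pywt.wavedec(abr, 'db4', level=6)

def reconstruct_scale(coeffs, keep_index, n_samples):
    """Zero every coefficient array except one, then invert the transform to get
    the waveform contribution of that single scale (A6, D6, D5, ...)."""
    kept = [c if i == keep_index else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(kept, 'db4')[:n_samples]

a6 = reconstruct_scale(coeffs, 0, len(abr))    # low-frequency approximation (A6)
d6 = reconstruct_scale(coeffs, 1, len(abr))    # mid-frequency detail (D6)
d5 = reconstruct_scale(coeffs, 2, len(abr))    # higher-frequency detail (D5)
```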
Abstract:
This study examined the role of global processing speed in mediating age increases in auditory memory span in 5- to 13-year-olds. Children were tested on measures of memory span, processing speed, single-word speech rate, phonological sensitivity, and vocabulary. Structural equation modeling supported a model in which age-associated increases in processing speed predicted the availability of long-term memory phonological representations for redintegration processes. The availability of long-term phonological representations, in turn, explained variance in memory span. Maximum speech rate did not predict independent variance in memory span. (c) 2005 Elsevier Inc. All rights reserved.
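The mediation logic tested here with structural equation modelling can be illustrated in a simplified, regression-based form. This is not the authors' SEM; the variable names and data frame are hypothetical, and it only sketches the question of whether processing speed accounts for the age-related increase in memory span.

```python
# Simplified regression-based mediation check (not the authors' structural equation model).
# Assumes a DataFrame with one row per child and hypothetical columns:
# age, processing_speed, memory_span.
import pandas as pd
import statsmodels.formula.api as smf

def mediation_check(df: pd.DataFrame) -> dict:
    total = smf.ols("memory_span ~ age", data=df).fit()                      # total effect of age
    a_path = smf.ols("processing_speed ~ age", data=df).fit()                # age -> speed
    direct = smf.ols("memory_span ~ age + processing_speed", data=df).fit()  # age effect controlling speed
    return {
        "total_age_effect": total.params["age"],
        "age_to_speed": a_path.params["age"],
        "direct_age_effect": direct.params["age"],   # shrinks toward zero if speed mediates
    }
```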