2 results for Concurrent Java components

in Aston University Research Archive


Relevance:

30.00%

Publisher:

Abstract:

Keyword identification in one of two simultaneous sentences is improved when the sentences differ in F0, particularly when they are almost continuously voiced. Sentences of this kind were recorded, monotonised using PSOLA, and re-synthesised to give a range of harmonic ΔF0s (0, 1, 3, and 10 semitones). They were additionally re-synthesised by LPC with the LPC residual frequency shifted by 25% of F0, to give excitation with inharmonic but regularly spaced components. Perceptual identification of frequency-shifted sentences showed a similar large improvement with nominal ΔF0 as seen for harmonic sentences, although overall performance was about 10% poorer. We compared performance with that of two autocorrelation-based computational models comprising four stages: (i) peripheral frequency selectivity and half-wave rectification; (ii) within-channel periodicity extraction; (iii) identification of the two major peaks in the summary autocorrelation function (SACF); (iv) a template-based approach to speech recognition using dynamic time warping. One model sampled the correlogram at the target-F0 period and performed spectral matching; the other deselected channels dominated by the interferer and performed matching on the short-lag portion of the residual SACF. Both models reproduced the monotonic increase observed in human performance with increasing ΔF0 for the harmonic stimuli, but not for the frequency-shifted stimuli. A revised version of the spectral-matching model, which groups patterns of periodicity that lie on a curve in the frequency-delay plane, showed a closer match to the perceptual data for frequency-shifted sentences. The results extend the range of phenomena originally attributed to harmonic processing to grouping by common spectral pattern.
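For readers unfamiliar with this class of model, the sketch below illustrates stages (i)-(iii) under simplifying assumptions: a small Butterworth band-pass filterbank stands in for peripheral frequency selectivity, each half-wave rectified channel is autocorrelated over short lags, and the channels are summed into a summary autocorrelation function whose two largest peaks in a plausible F0 range serve as the two F0 estimates. The channel spacing, filter order, and peak-picking heuristic are illustrative choices, not the published models' parameters, and stage (iv), the DTW-based recogniser, is omitted.

```python
# Minimal SACF sketch (stages i-iii), illustrative parameters only.
import numpy as np
from scipy.signal import butter, lfilter


def short_lag_autocorr(x, max_lag):
    """Autocorrelation at lags 0..max_lag-1, computed directly (stage ii)."""
    return np.array([np.dot(x[:len(x) - lag], x[lag:]) for lag in range(max_lag)])


def sacf(signal, fs, n_channels=16, f_lo=100.0, f_hi=4000.0, max_lag_s=0.02):
    """Summary autocorrelation function over a simple band-pass filterbank."""
    max_lag = int(max_lag_s * fs)
    summary = np.zeros(max_lag)
    for fc in np.geomspace(f_lo, f_hi, n_channels):
        # Stage (i): band-pass filtering and half-wave rectification.
        lo, hi = 0.8 * fc / (fs / 2), min(1.2 * fc / (fs / 2), 0.99)
        b, a = butter(2, [lo, hi], btype="band")
        channel = np.maximum(lfilter(b, a, signal), 0.0)
        # Stage (ii): within-channel periodicity, summed across channels.
        summary += short_lag_autocorr(channel - channel.mean(), max_lag)
    return summary


def two_major_peaks(summary, fs, f_min=80.0, f_max=400.0):
    """Stage (iii): the two largest SACF peaks in a plausible F0 range, as Hz."""
    lo, hi = int(fs / f_max), int(fs / f_min)
    lags = np.arange(lo, hi)
    seg = summary[lo:hi]
    is_peak = (seg[1:-1] > seg[:-2]) & (seg[1:-1] > seg[2:])
    peak_lags = lags[1:-1][is_peak]
    peak_heights = seg[1:-1][is_peak]
    return fs / peak_lags[np.argsort(peak_heights)[-2:]]


# Example: a mixture of two harmonic complexes whose F0s differ by 3 semitones.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
def complex_tone(f0): return sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 11))
mixture = complex_tone(120.0) + complex_tone(120.0 * 2 ** (3 / 12))
print(two_major_peaks(sacf(mixture, fs), fs))  # expect values near 120 and 143 Hz
```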

Relevance:

30.00%

Publisher:

Abstract:

A sudden increase in the amplitude of a component often causes its segregation from a complex tone, and shorter rise times enhance this effect. We explored whether this also occurs in implant listeners (n = 8). Condition 1 used a 3.5-s “complex tone” comprising concurrent stimulation on five electrodes distributed across the array of the Nucleus CI24 implant. For each listener, the baseline stimulus level on each electrode was set at 50% of the dynamic range (DR). Two 1-s increments of 12.5%, 25%, or 50% DR were introduced in succession on adjacent electrodes within the “inner” three of those activated. Both increments had rise and fall times of 30 and 970 ms or vice versa. Listeners reported which increment was higher in pitch. Some listeners performed above chance for all increment sizes, but only for 50% increments did all listeners perform above chance. No significant effect of rise time was found. Condition 2 replaced amplitude increments with decrements. Only three listeners performed above chance even for 50% decrements. One exceptional listener performed well for 50% decrements with fall and rise times of 970 and 30 ms but around chance for fall and rise times of 30 and 970 ms, indicating successful discrimination based on a sudden rise back to baseline stimulation. Overall, the results suggest that implant listeners can use amplitude changes against a constant background to pick out components from a complex, but generally these must be large compared with those required in normal hearing. For increments, performance depended mainly on above-baseline stimulation of the target electrodes, not rise time. With one exception, performance for decrements was typically very poor.
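As a concrete illustration of the level arithmetic described above, the sketch below maps percentages of an electrode's dynamic range onto stimulation levels and builds a simplified trapezoidal envelope with 30-ms and 970-ms rise and fall times. The per-electrode threshold and comfort values, the electrode numbers, and the linear ramp shape are hypothetical placeholders for illustration, not the study's stimulation parameters.

```python
# Hypothetical %DR-to-level arithmetic for a 1-s increment on one electrode.
import numpy as np

# Made-up per-electrode threshold (T) and comfort (C) levels, clinical units.
T = {20: 100, 18: 105, 16: 110, 14: 108, 12: 102}
C = {20: 180, 18: 190, 16: 195, 14: 188, 12: 185}


def level_at_percent_dr(electrode, percent):
    """Map a percentage of the dynamic range onto a clinical current level."""
    dr = C[electrode] - T[electrode]
    return T[electrode] + (percent / 100.0) * dr


def increment_envelope(duration_s, rise_s, fall_s, rate_hz=1000):
    """Trapezoidal 0..1 envelope with the given rise and fall times (simplified)."""
    n = int(duration_s * rate_hz)
    n_rise, n_fall = int(rise_s * rate_hz), int(fall_s * rate_hz)
    env = np.ones(n)
    env[:n_rise] = np.linspace(0.0, 1.0, n_rise)
    env[-n_fall:] = np.linspace(1.0, 0.0, n_fall)
    return env


# Example: a 1-s, 25% DR increment on electrode 16 with a 30-ms rise and
# 970-ms fall, superimposed on the 50% DR baseline.
baseline = level_at_percent_dr(16, 50)
peak = level_at_percent_dr(16, 50 + 25)
levels = baseline + (peak - baseline) * increment_envelope(1.0, 0.030, 0.970)
print(round(baseline, 1), round(peak, 1), levels.shape)
```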