29 results for Sound Reverberation
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Perceptual compensation for reverberation was measured by embedding test words in contexts that were either spoken phrases or processed versions of this speech. The processing gave steady-spectrum contexts with no changes in the shape of the short-term spectral envelope over time, but with fluctuations in the temporal envelope. Test words were from a continuum between "sir" and "stir." When the amount of reverberation in test words was increased, to a level above the amount in the context, they sounded more like "sir." However, when the amount of reverberation in the context was also increased, to the level present in the test word, there was perceptual compensation in some conditions so that test words sounded more like "stir" again. Experiments here found compensation with speech contexts and with some steady-spectrum contexts, indicating that fluctuations in the context's temporal envelope can be sufficient for compensation. Other results suggest that the effectiveness of speech contexts is partly due to the narrow-band "frequency-channels" of the auditory periphery, where temporal-envelope fluctuations can be more pronounced than they are in the sound's broadband temporal envelope. Further results indicate that for compensation to influence speech, the context needs to be in a broad range of frequency channels. (c) 2007 Acoustical Society of America.
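To make the manipulation concrete, the sketch below shows one common way to impose a controlled amount of reverberation on a dry recording: convolution with an impulse response whose tail decays exponentially. The exponentially decaying noise tail, the RT60 parameterization, and all function names are illustrative assumptions, not the processing used in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthetic_rir(fs, rt60, length_s=1.0, seed=0):
    """Exponentially decaying noise as a crude stand-in for a room
    impulse response; rt60 is the time taken for a 60 dB energy decay.
    (Illustrative assumption, not the study's actual rooms.)"""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * length_s)) / fs
    decay = 10.0 ** (-3.0 * t / rt60)      # reaches -60 dB at t = rt60
    return rng.standard_normal(t.size) * decay

def add_reverb(dry, fs, rt60):
    """Convolve a dry recording with the synthetic impulse response,
    then normalize so the peak level matches across conditions."""
    wet = fftconvolve(dry, synthetic_rir(fs, rt60))
    return wet / np.max(np.abs(wet))
```

Raising `rt60` for the test word alone models the "more reverberation in the word than in the context" condition; applying the same `rt60` to the context as well models the matched condition.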
Abstract:
Perceptual effects of room reverberation on a "sir" or "stir" test word can be observed when the level of reverberation in the word is increased, while the reverberation in a surrounding 'context' utterance remains at a minimal level. The result is that listeners make more "sir" identifications. When the context's reverberation is also increased, to approach the level in the test word, extrinsic perceptual compensation is observed, so that the number of listeners' "sir" identifications reduces to a value similar to that found with minimal reverberation. Thus far, compensation effects have only been observed with speech or speech-like contexts in which the short-term spectrum changes as the speaker's articulators move. The results reported here show that some noise contexts with static short-term spectra can also give rise to compensation. From these experiments it would appear that compensation requires a context with a temporal envelope that fluctuates to some extent, so that parts of it resemble offsets. These findings are consistent with a rather general kind of perceptual compensation mechanism: one that is informed by the 'tails' that reverberation adds at offsets. Other results reported here show that narrow-band contexts do not bring about compensation, even when their temporal envelopes are the same as those of the more effective wideband contexts. These results suggest that compensation is confined to the frequency range occupied by the context, and that in a wideband sound it might operate in a 'band by band' manner.
Abstract:
Listeners were asked to identify modified recordings of the words "sir" and "stir," which were spoken by an adult male British-English speaker. Steps along a continuum between the words were obtained by a pointwise interpolation of their temporal envelopes. These test words were embedded in a longer "context" utterance and played with different amounts of reverberation. Increasing only the test word's reverberation shifts the listener's category boundary so that more "sir" identifications are made. This effect reduces when the context's reverberation is also increased, indicating perceptual compensation that is informed by the context. Experiment 1 finds that compensation is more prominent in rapid speech, that it varies between rooms, that it is more prominent when the test word's reverberation is high, and that it increases with the context's reverberation. Further experiments show that compensation persists when the room is switched between the context and the test word, when presentation is monaural, and when the context is reversed. However, compensation reduces when the context's reverberation pattern is reversed, as well as when noise versions of the context are used. "Tails" that reverberation introduces at the ends of sounds and at spectral transitions may inform the compensation mechanism about the amount of reflected sound in the signal. (c) 2005 Acoustical Society of America.
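A minimal sketch of the envelope-interpolation idea follows, assuming time-aligned, equal-length recordings, a broadband Hilbert-transform envelope, and "stir" as the fine-structure carrier; all of these are simplifying assumptions for illustration, not the published procedure.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(x):
    """Temporal envelope as the magnitude of the analytic signal."""
    return np.abs(hilbert(x))

def continuum_step(sir, stir, w):
    """Pointwise interpolation between the two words' envelopes:
    w = 0.0 gives the "sir" envelope, w = 1.0 the "stir" envelope.
    Assumes sir and stir are time-aligned arrays of equal length."""
    env = (1.0 - w) * envelope(sir) + w * envelope(stir)
    fine = stir / np.maximum(envelope(stir), 1e-9)  # unit-envelope carrier
    return env * fine
```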
Abstract:
In an ideal "reverberant" room, the energy of the impulse response decays smoothly, at a constant rate in dB/s, so that gradually decaying tails are added at the ends of sounds. Conversely, a single echo gives a flat energy decay up to the echo's arrival time, which then drops abruptly, so that sounds with only echoes lack the decaying-tail feature of reverberation. The perceptual effects of these types of reflection pattern were measured with test words from a continuum of steps between "sir" and "stir", which were each embedded in a carrier phrase. When the proportion of reflected sound in test words is increased, to a level above the amount in the carrier, the test words sound more like "sir". However, when the proportion of reflected sound in the carrier is also increased, to match the amount in the test word, there can be a perceptual compensation where test words sound more like "stir" again. A reference condition used real-room reverberation from recordings at different source-to-receiver distances. In a synthetic-reverberation condition, the reflection pattern was from a "colorless" impulse response, comprising exponentially decaying reflections that were spaced at intervals. In a synthetic-echo condition, the reflection pattern was obtained from the synthetic reverberation by removing the intervals between reflections before delaying the resulting cluster relative to the direct sound. Compensation occurred in the reference condition and in different types of synthetic reverberation, but not in synthetic-echo conditions. This result indicates that the presence of tails from reverberation informs the compensation mechanism.
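The two synthetic conditions can be sketched directly from this description: an impulse response of exponentially decaying reflections at regular intervals, and an echo version in which the intervals are removed so the reflections form a compact cluster delayed relative to the direct sound. The spacing, decay, and delay values below are arbitrary illustrative choices, not the study's parameters.

```python
import numpy as np

def synthetic_reverb_ir(fs, rt60, spacing_s=0.004, length_s=0.5):
    """Direct sound followed by regularly spaced reflections whose
    amplitudes follow a smooth exponential (constant dB/s) decay."""
    ir = np.zeros(int(fs * length_s))
    ir[0] = 1.0                                   # direct sound
    step = int(fs * spacing_s)
    for i in range(step, ir.size, step):
        ir[i] = 10.0 ** (-3.0 * (i / fs) / rt60)  # -60 dB at t = rt60
    return ir

def synthetic_echo_ir(fs, rt60, delay_s=0.05, **kw):
    """The same reflections with the intervals removed: a compact
    cluster delayed relative to the direct sound, so the gradually
    decaying tail of reverberation is absent."""
    taps = synthetic_reverb_ir(fs, rt60, **kw)
    cluster = taps[taps > 0][1:]        # reflections only, gaps removed
    delay = int(fs * delay_s)
    ir = np.zeros(delay + cluster.size)
    ir[0] = 1.0                         # direct sound
    ir[delay:] = cluster
    return ir
```

Convolving the same speech with these two impulse responses yields signals with matched reflected-sound energy, but with and without the decaying tails that, on this account, drive compensation.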
Abstract:
Perceptual constancy effects are observed when differing amounts of reverberation are applied to a context sentence and a test word embedded in it. Adding reverberation to members of a "sir"-"stir" test-word continuum causes temporal-envelope distortion, which has the effect of eliciting more "sir" responses from listeners. If the same amount of reverberation is also applied to the context sentence, the number of "sir" responses decreases again, indicating an "extrinsic" compensation for the effects of reverberation. Such a mechanism would effect perceptual constancy of phonetic perception when temporal envelopes vary in reverberation. This experiment asks whether such effects precede or follow grouping. Eight auditory-filter-shaped noise bands were modulated with the temporal envelopes that arise when speech is played through these filters. The resulting "gestalt" percept is the appropriate speech rather than the sound of noise bands, presumably due to across-channel "grouping." These sounds were played to listeners in "matched" conditions, where reverberation was present in the same bands in both context and test word, and in "mismatched" conditions, where the bands in which reverberation was added differed between context and test word. Constancy effects were obtained in matched conditions, but not in mismatched conditions, indicating that this type of constancy in hearing precedes across-channel grouping.
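The stimuli described here are, in essence, noise-vocoded speech. Below is a minimal sketch under simplifying assumptions: a Butterworth band-pass bank with log-spaced edges stands in for the auditory-filter-shaped bands, and a Hilbert transform extracts each band's temporal envelope.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, lo=100.0, hi=8000.0, seed=0):
    """Replace each frequency band of the speech with noise carrying
    that band's temporal envelope, then sum the modulated bands."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(lo, hi, n_bands + 1)   # log-spaced band edges
    out = np.zeros(speech.size)
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, speech)))     # band envelope
        noise = sosfiltfilt(sos, rng.standard_normal(speech.size))
        out += env * noise
    return out / np.max(np.abs(out))
```

Reverberating the speech feeding selected bands before envelope extraction would then give matched or mismatched conditions of the kind described above.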
Abstract:
Infants' responses in speech sound discrimination tasks can be nonmonotonic over time. Stager and Werker (1997) reported such data in a bimodal habituation task. In this task, 8-month-old infants were capable of discriminations that involved minimal contrast pairs, whereas 14-month-old infants were not. It was argued that the older infants' attenuated performance was linked to their processing of the stimuli for meaning. The authors suggested that these data are diagnostic of a qualitative shift in infant cognition. We describe an associative connectionist model showing a similar decrement in discrimination without any qualitative shift in processing. The model suggests that responses to phonemic contrasts may be a nonmonotonic function of experience with language. The implications of this idea are discussed. The model also provides a formal framework for studying habituation-dishabituation behaviors in infancy.
Abstract:
Three experiments investigated irrelevant sound interference of lip-read lists. In Experiment 1, an acoustically changing sequence of nine irrelevant utterances was more disruptive to spoken immediate identification of lists of nine lip-read digits than nine repetitions of the same utterances (the changing-state effect; Jones, Madden, & Miles, 1992). Experiment 2 replicated this finding when lip-read items were sampled with replacement from the nine digits to form the lip-read lists. In Experiment 3, when the irrelevant sound was confined to the retention interval of a delayed recall task, a changing-state pattern of disruption also occurred. Results confirm a changing-state effect in memory for lip-read items but also point to the possibility that, for lip-reading, changing-state effects may occur at an earlier, perceptual stage.
Abstract:
Four experiments investigate the hypothesis that irrelevant sound interferes with serial recall of auditory items in the same fashion as with visually presented items. In Experiment 1 an acoustically changing sequence of 30 irrelevant utterances was more disruptive than 30 repetitions of the same utterance (the changing-state effect; Jones, Madden, & Miles, 1992) whether the to-be-remembered items were visually or auditorily presented. Experiment 2 showed that two different utterances spoken once (a heterogeneous compound suffix; LeCompte & Watkins, 1995) produced less disruption to serial recall than 15 repetitions of the same sequence. Disruption thus depends on the number of sounds in the irrelevant sequence. In Experiments 3a and 3b the number of different sounds, the "token-set" size (Tremblay & Jones, 1998), in an irrelevant sequence also influenced the magnitude of disruption in both irrelevant sound and compound suffix conditions. The results support the view that the disruption of memory for auditory items, like memory for visually presented items, is dependent on the number of different irrelevant sounds presented and the size of the set from which these sounds are taken. Theoretical implications are discussed.
Abstract:
Three experiments attempted to clarify the effect of altering the spatial presentation of irrelevant auditory information. Previous research using serial recall tasks demonstrated a left-ear disadvantage for the presentation of irrelevant sounds (Hadlington, Bridges, & Darby, 2004). Experiments 1 and 2 examined the effects of manipulating the location of irrelevant sound on either a mental arithmetic task (Banbury & Berry, 1998) or a missing-item task (Jones & Macken, 1993; Experiment 4). Experiment 3 altered the amount of change in the irrelevant stream to assess how this affected the level of interference elicited. Two prerequisites appear necessary to produce the left-ear disadvantage: the presence of ordered structural changes in the irrelevant sound and the requirement for serial order processing of the attended information. The existence of a left-ear disadvantage highlights the role of the right hemisphere in the obligatory processing of auditory information. (c) 2006 Published by Elsevier Inc.
Abstract:
Two experiments examine the effect on an immediate recall test of simulating a reverberant auditory environment in which auditory distracters in the form of speech are played to the participants (the 'irrelevant sound effect'). An echo-intensive environment, simulated by the addition of reverberation to the speech, reduced the extent of 'changes in state' in the irrelevant speech stream by smoothing the profile of the waveform. In both experiments, the reverberant auditory environment produced significantly smaller irrelevant sound distraction effects than an echo-free environment. Results are interpreted in terms of the changing-state hypothesis, which states that the acoustic content of irrelevant sound, rather than its phonology or semantics, determines the extent of the irrelevant sound effect (ISE). Copyright (C) 2007 John Wiley & Sons, Ltd.
Abstract:
The assumption that ignoring irrelevant sound in a serial recall situation is identical to ignoring a non-target channel in dichotic listening is challenged. Dichotic listening is open to moderating effects of working memory capacity (Conway et al., 2001) whereas irrelevant sound effects (ISE) are not (Beaman, 2004). A right ear processing bias is apparent in dichotic listening, whereas the bias is to the left ear in the ISE (Hadlington et al., 2004). Positron emission tomography (PET) imaging data (Scott et al., 2004, submitted) show bilateral activation of the superior temporal gyrus (STG) in the presence of intelligible, but ignored, background speech and right hemisphere activation of the STG in the presence of unintelligible background speech. It is suggested that the right STG may be involved in the ISE and a particularly strong left ear effect might occur because of the contralateral connections in audition. It is further suggested that left STG activity is associated with dichotic listening effects and may be influenced by working memory span capacity. The relationship of this functional and neuroanatomical model to known neural correlates of working memory is considered.
Abstract:
Two experiments examine the effects of extraneous speech and nonspeech noise on a visual short-term memory task administered to younger and older adults. Experiment 1 confirms an earlier report that playing task-irrelevant speech is no more distracting for older adults than for younger adults (Rouleau & Belleville, 1996), indicating that "irrelevant sound effects" in short-term memory operate in a different manner to recalling targets in the presence of competing speech (Tun, O'Kane, & Wingfield, 2002). Experiment 2 extends this result to nonspeech noise and demonstrates that the result cannot be ascribed to hearing difficulties amongst the older age group, although the data also show that older adults rated the noise as less annoying and uncomfortable than younger adults did. Implications for theories of the irrelevant sound effect, and for cognitive ageing, are discussed.
Abstract:
High-span individuals (as measured by the operation span [OSPAN] technique) are less likely than low-span individuals to notice their own names in an unattended auditory stream (A. R. A. Conway, N. Cowan, & M. F. Bunting, 2001). The possibility that OSPAN accounts for individual differences in auditory distraction on an immediate recall test was examined. There was no evidence that high-OSPAN participants were more resistant to the disruption caused by irrelevant speech in serial or in free recall. Low-OSPAN participants did, however, make more semantically related intrusion errors from the irrelevant sound stream in a free recall test (Experiment 4). Results suggest that OSPAN mediates semantic components of auditory distraction that are dissociable from other aspects of the irrelevant sound effect.