61 results for Hate Speech
in CentAUR: Central Archive University of Reading - UK
Abstract:
Infants' responses in speech sound discrimination tasks can be nonmonotonic over time. Stager and Werker (1997) reported such data in a bimodal habituation task. In this task, 8-month-old infants were capable of discriminations that involved minimal contrast pairs, whereas 14-month-old infants were not. It was argued that the older infants' attenuated performance was linked to their processing of the stimuli for meaning. The authors suggested that these data are diagnostic of a qualitative shift in infant cognition. We describe an associative connectionist model showing a similar decrement in discrimination without any qualitative shift in processing. The model suggests that responses to phonemic contrasts may be a nonmonotonic function of experience with language. The implications of this idea are discussed. The model also provides a formal framework for studying habituation-dishabituation behaviors in infancy.
Abstract:
The authors examined in 3 experiments whether background noise can be habituated to in the laboratory, using memory-for-prose tasks. Experiment 1 showed that background speech can be habituated to after 20 min of exposure, and that meaning and repetition had no effect on the degree of habituation seen. Experiment 2 showed that office noise without speech can also be habituated to. Finally, Experiment 3 showed that a 5-min period of quiet, but not a change in voice, was sufficient to partially restore the disruptive effects of the background noise previously habituated to. These results are interpreted in light of current theories regarding the effects of background noise and habituation; practical implications for office planning are discussed.
Abstract:
It has been previously demonstrated that extensive activation in the dorsolateral temporal lobes is associated with masking a speech target with a speech masker, consistent with the hypothesis that competition for central auditory processes is an important factor in informational masking. Here, masking from speech and two additional maskers derived from the original speech were investigated. One of these is spectrally rotated speech, which is unintelligible and has a similar (inverted) spectrotemporal profile to speech. The authors also controlled for the possibility of "glimpsing" of the target signal during modulated masking sounds by using speech-modulated noise as a masker in a baseline condition. Functional imaging results reveal that masking speech with speech leads to bilateral superior temporal gyrus (STG) activation relative to a speech-in-noise baseline, while masking speech with spectrally rotated speech leads solely to right STG activation relative to the baseline. This result is discussed in terms of hemispheric asymmetries for speech perception, and interpreted as showing that masking effects can arise through two parallel neural systems, in the left and right temporal lobes. This has implications for the competition for resources caused by speech and rotated speech maskers, and may illuminate some of the mechanisms involved in informational masking.
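Spectral rotation of the kind described above can be sketched with simple amplitude modulation: multiplying a band-limited signal by a cosine mirrors its spectrum, which is the core of the classic rotation technique. This is only a hedged illustration, not the authors' stimulus-generation code; the centre frequency, sample rate, and the assumption that the input is band-limited to twice the centre frequency are all illustrative choices.

```python
import numpy as np

def spectrally_rotate(signal, fs, f_center=2000.0):
    """Invert a signal's spectrum about f_center by amplitude modulation
    with a cosine at 2 * f_center.  A component at frequency f maps to
    2*f_center - f (the rotated image) and 2*f_center + f (an unwanted
    image that a real implementation would low-pass filter away).
    Assumes the input is band-limited to [0, 2 * f_center]."""
    n = np.arange(len(signal))
    carrier = np.cos(2 * np.pi * 2 * f_center * n / fs)
    return signal * carrier
```

Because rotation preserves the spectrotemporal complexity of speech while destroying intelligibility, it makes a useful control masker, as in the study above.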
Abstract:
Perceptual compensation for reverberation was measured by embedding test words in contexts that were either spoken phrases or processed versions of this speech. The processing gave steady-spectrum contexts with no changes in the shape of the short-term spectral envelope over time, but with fluctuations in the temporal envelope. Test words were from a continuum between "sir" and "stir." When the amount of reverberation in test words was increased, to a level above the amount in the context, they sounded more like "sir." However, when the amount of reverberation in the context was also increased, to the level present in the test word, there was perceptual compensation in some conditions so that test words sounded more like "stir" again. Experiments here found compensation with speech contexts and with some steady-spectrum contexts, indicating that fluctuations in the context's temporal envelope can be sufficient for compensation. Other results suggest that the effectiveness of speech contexts is partly due to the narrow-band "frequency-channels" of the auditory periphery, where temporal-envelope fluctuations can be more pronounced than they are in the sound's broadband temporal envelope. Further results indicate that for compensation to influence speech, the context needs to be in a broad range of frequency channels. (c) 2007 Acoustical Society of America.
Abstract:
Perceptual effects of room reverberation on a "sir" or "stir" test word can be observed when the level of reverberation in the word is increased, while the reverberation in a surrounding 'context' utterance remains at a minimal level. The result is that listeners make more "sir" identifications. When the context's reverberation is also increased, to approach the level in the test word, extrinsic perceptual compensation is observed, so that the number of listeners' "sir" identifications reduces to a value similar to that found with minimal reverberation. Thus far, compensation effects have only been observed with speech or speech-like contexts in which the short-term spectrum changes as the speaker's articulators move. The results reported here show that some noise contexts with static short-term spectra can also give rise to compensation. From these experiments it would appear that compensation requires a context with a temporal envelope that fluctuates to some extent, so that parts of it resemble offsets. These findings are consistent with a rather general kind of perceptual compensation mechanism, one that is informed by the 'tails' that reverberation adds at offsets. Other results reported here show that narrow-band contexts do not bring about compensation, even when their temporal envelopes are the same as those of the more effective wideband contexts. These results suggest that compensation is confined to the frequency range occupied by the context, and that in a wideband sound it might operate in a 'band by band' manner.
Abstract:
Listeners were asked to identify modified recordings of the words "sir" and "stir," which were spoken by an adult male British-English speaker. Steps along a continuum between the words were obtained by a pointwise interpolation of their temporal-envelopes. These test words were embedded in a longer "context" utterance, and played with different amounts of reverberation. Increasing only the test-word's reverberation shifts the listener's category boundary so that more "sir"-identifications are made. This effect reduces when the context's reverberation is also increased, indicating perceptual compensation that is informed by the context. Experiment I finds that compensation is more prominent in rapid speech, that it varies between rooms, that it is more prominent when the test-word's reverberation is high, and that it increases with the context's reverberation. Further experiments show that compensation persists when the room is switched between the context and the test word, when presentation is monaural, and when the context is reversed. However, compensation reduces when the context's reverberation pattern is reversed, as well as when noise-versions of the context are used. "Tails" that reverberation introduces at the ends of sounds and at spectral transitions may inform the compensation mechanism about the amount of reflected sound in the signal. (c) 2005 Acoustical Society of America.
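The "pointwise interpolation of temporal envelopes" used to build the "sir"-"stir" continuum can be sketched as follows. This is a simplified illustration, not the published stimulus-construction procedure: the crude rectify-and-smooth envelope extractor, the smoothing window, and the choice to impose the blended envelope on the "stir" token are all assumptions made for the sketch.

```python
import numpy as np

def envelope(x, win=160):
    """Crude temporal envelope: full-wave rectify, then smooth
    with a moving-average window of `win` samples."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def continuum_step(sir, stir, alpha):
    """One step along a 'sir'-'stir' continuum: blend the two words'
    temporal envelopes pointwise (alpha = 0 gives the 'stir' envelope,
    alpha = 1 the 'sir' envelope) and impose the blend on the 'stir'
    recording by per-sample rescaling."""
    env_sir, env_stir = envelope(sir), envelope(stir)
    target = (1 - alpha) * env_sir * 0 + (1 - alpha) * env_stir + alpha * env_sir
    eps = 1e-9  # guard against division by zero in silent stretches
    return stir * (target / (env_stir + eps))
```

Varying alpha in small steps yields the graded series of test words whose category boundary the experiments then track under different amounts of reverberation.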
Abstract:
In an ideal "reverberant" room, the energy of the impulse responses decays smoothly, at a constant rate in dB/s, so that gradually-decaying tails are added at the ends of sounds. Conversely, a single echo gives a flat energy-decay up to the echo's arrival time, which then drops abruptly, so that sounds with only echoes lack the decaying-tail feature of reverberation. The perceptual effects of these types of reflection pattern were measured with test-words from a continuum of steps between "sir" and "stir", which were each embedded in a carrier phrase. When the proportion of reflected sound in test-words is increased, to a level above the amount in the carrier, the test words sound more like "sir". However, when the proportion of reflected sound in the carrier is also increased, to match the amount in the test word, there can be a perceptual compensation where test words sound more like "stir" again. A reference condition used real-room reverberation from recordings at different source-to-receiver distances. In a synthetic-reverberation condition, the reflection pattern was from a "colorless" impulse response, comprising exponentially-decaying reflections that were spaced at intervals. In a synthetic-echo condition, the reflection pattern was obtained from the synthetic reverberation by removing the intervals between reflections before delaying the resulting cluster relative to the direct sound. Compensation occurred in the reference condition and in different types of synthetic reverberation, but not in synthetic-echo conditions. This result indicates that the presence of tails from reverberation informs the compensation mechanism.
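The synthetic impulse response described above, a direct sound followed by regularly spaced reflections whose level decays at a constant rate in dB/s, can be sketched in a few lines. All parameter values here (sample rate, reflection spacing, decay rate) are illustrative assumptions, not those of the original stimuli.

```python
import numpy as np

def exp_decay_ir(fs, duration, decay_db_per_s, spacing=32):
    """Synthetic 'colorless'-style impulse response: a unit direct sound
    at time zero, then reflections every `spacing` samples whose
    amplitude follows a constant energy-decay rate of
    `decay_db_per_s` dB per second."""
    ir = np.zeros(int(fs * duration))
    ir[0] = 1.0  # direct sound
    for k in range(spacing, len(ir), spacing):
        t = k / fs
        ir[k] = 10 ** (-decay_db_per_s * t / 20)  # dB decay -> amplitude
    return ir
```

The contrasting synthetic-echo condition would collapse the reflections into a single delayed cluster with a flat level, removing exactly the gradually decaying tail that, per the result above, the compensation mechanism appears to rely on.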
Abstract:
While the beneficial effect of levodopa on traditional motor control tasks has been well documented over the decades, its effect on speech motor control has rarely been objectively examined and the existing literature remains inconclusive. This paper aims to examine the effect of levodopa on speech in patients with Parkinson's disease. It was hypothesized that levodopa would improve preparatory motor-set-related activity and alleviate hypophonia. Patients fasted and abstained from levodopa overnight. Motor examination and speech testing were performed the following day, pre-levodopa during their "off" state, then at hourly intervals post-medication to obtain the best "on" state. All speech stimuli showed a consistent tendency toward increased loudness and faster rate during the "on" state, but this was accompanied by a greater extent of intensity decay. Pitch and articulation remained unchanged. Levodopa effectively upscaled the overall gain setting of vocal amplitude and tempo, similar to its well-known effect on limb movement. However, unlike limb movement, this effect on the final acoustic product of speech may or may not be advantageous, depending on the existing speech profile of individual patients. (C) 2007 Movement Disorder Society.