35 results for Whale sounds

in CentAUR: Central Archive University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

The experiment asks whether constancy in hearing precedes or follows grouping. Listeners heard speech-like sounds comprising 8 auditory-filter-shaped noise bands that had temporal envelopes corresponding to those arising in these filters when a speech message is played. The 'context' words in the message were "next you'll get _ to click on", into which a "sir" or "stir" test word was inserted. These test words were from an 11-step continuum that was formed by amplitude modulation. Listeners identified the test words appropriately and quite consistently, even though they had the 'robotic' quality typical of this type of 8-band speech. The speech-like effects of these sounds appear to be a consequence of auditory grouping. Constancy was assessed by comparing the influence of room reflections on the test word across conditions where the context had either the same level of reflections or a much lower level. Constancy effects were obtained with these 8-band sounds, but only in 'matched' conditions, where the room reflections were in the same bands in both the context and the test word. This was not the case in a comparison 'mismatched' condition, where no constancy effects were found. It would appear that this type of constancy in hearing precedes the across-channel grouping whose effects are so apparent in these sounds. This result is discussed in terms of the ubiquity of grouping across different levels of representation.
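The 8-band stimuli described here are a form of noise-band vocoding: speech is filtered into bands, each band's temporal envelope is extracted, and that envelope modulates noise confined to the same band. A minimal sketch of that pipeline, assuming log-spaced band edges, Butterworth filters, and Hilbert-envelope extraction (none of these parameters are taken from the study itself):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, lo=100.0, hi=7500.0):
    """Crude noise-band vocoder sketch: filter speech into log-spaced
    bands, take each band's temporal envelope, and use it to modulate
    noise filtered into the same band. Band edges, filter order, and
    envelope method are illustrative assumptions."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(speech))
    out = np.zeros_like(speech, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))        # temporal envelope of this band
        carrier = sosfiltfilt(sos, noise)  # noise confined to the same band
        out += env * carrier
    return out
```

Summing the envelope-modulated noise bands yields intelligible but 'robotic' speech of the kind the abstract describes.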

Relevance:

20.00%

Publisher:

Abstract:

Adults diagnosed with autism spectrum disorder (ASD) show a reduced sensitivity (degree of selective response) to social stimuli such as human voices. In order to determine whether this reduced sensitivity is a consequence of years of poor social interaction and communication or is present prior to significant experience, we used functional MRI to examine cortical sensitivity to auditory stimuli in infants at high familial risk for later emerging ASD (HR group, N = 15), and compared this to infants with no family history of ASD (LR group, N = 18). The infants (aged between 4 and 7 months) were presented with voice and environmental sounds while asleep in the scanner and their behaviour was also examined in the context of observed parent-infant interaction. Whereas LR infants showed early specialisation for human voice processing in right temporal and medial frontal regions, the HR infants did not. Similarly, LR infants showed stronger sensitivity than HR infants to sad vocalisations in the right fusiform gyrus and left hippocampus. Also, in the HR group only, there was an association between each infant's degree of engagement during social interaction and the degree of voice sensitivity in key cortical regions. These results suggest that at least some infants at high-risk for ASD have atypical neural responses to human voice with and without emotional valence. Further exploration of the relationship between behaviour during social interaction and voice processing may help better understand the mechanisms that lead to different outcomes in at risk populations.

Relevance:

10.00%

Publisher:

Abstract:

Why are humans musical? Why do people in all cultures sing or play instruments? Why do we appear to have specialized neurological apparatus for hearing and interpreting music as distinct from other sounds? And how does our musicality relate to language and to our evolutionary history? Anthropologists and archaeologists have paid little attention to the origin of music and musicality — far less than for either language or ‘art’. While art has been seen as an index of cognitive complexity and language as an essential tool of communication, music has suffered from our perception that it is an epiphenomenal ‘leisure activity’, and archaeologically inaccessible to boot. Nothing could be further from the truth, according to Steven Mithen; music is integral to human social life, he argues, and we can investigate its ancestry with the same rich range of analyses — neurological, physiological, ethnographic, linguistic, ethological and even archaeological — which have been deployed to study language. In The Singing Neanderthals Steven Mithen poses these questions and proposes a bold hypothesis to answer them. Mithen argues that musicality is a fundamental part of being human, that this capacity is of great antiquity, and that a holistic protolanguage of musical emotive expression predates language and was an essential precursor to it. This is an argument with implications which extend far beyond the mere origins of music itself into the very motives of human origins. Any argument of such range is bound to attract discussion and critique; we here present commentaries by archaeologists Clive Gamble and Iain Morley and linguists Alison Wray and Maggie Tallerman, along with Mithen's response to them. Whether right or wrong, Mithen has raised fascinating and important issues. 
And it adds a great deal of charm to the time-honoured, perhaps shopworn image of the Neanderthals shambling ineffectively through the pages of Pleistocene prehistory to imagine them humming, crooning or belting out a cappella harmonies as they went.

Relevance:

10.00%

Publisher:

Abstract:

In this research, a cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were presented simultaneously with liked and disliked odours and sounds, with neutral-neutral pairings serving as controls. The results confirm previous findings that the affective evaluation of previously neutral visual stimuli shifts in the direction of contingently presented affective auditory stimuli. In addition, this research shows the presence of conditioning with affective odours having no logical connection with the pictures.

Relevance:

10.00%

Publisher:

Abstract:

Four experiments investigate the hypothesis that irrelevant sound interferes with serial recall of auditory items in the same fashion as with visually presented items. In Experiment 1 an acoustically changing sequence of 30 irrelevant utterances was more disruptive than 30 repetitions of the same utterance (the changing-state effect; Jones, Madden, & Miles, 1992) whether the to-be-remembered items were visually or auditorily presented. Experiment 2 showed that two different utterances spoken once (a heterogeneous compound suffix; LeCompte & Watkins, 1995) produced less disruption to serial recall than 15 repetitions of the same sequence. Disruption thus depends on the number of sounds in the irrelevant sequence. In Experiments 3a and 3b the number of different sounds, the "token-set" size (Tremblay & Jones, 1998), in an irrelevant sequence also influenced the magnitude of disruption in both irrelevant sound and compound suffix conditions. The results support the view that the disruption of memory for auditory items, like memory for visually presented items, is dependent on the number of different irrelevant sounds presented and the size of the set from which these sounds are taken. Theoretical implications are discussed.

Relevance:

10.00%

Publisher:

Abstract:

It has previously been demonstrated that masking a speech target with a speech masker is associated with extensive activation in the dorsolateral temporal lobes, consistent with the hypothesis that competition for central auditory processes is an important factor in informational masking. Here, masking from speech and two additional maskers derived from the original speech was investigated. One of these is spectrally rotated speech, which is unintelligible and has a similar (inverted) spectrotemporal profile to speech. The authors also controlled for the possibility of “glimpsing” of the target signal during modulated masking sounds by using speech-modulated noise as a masker in a baseline condition. Functional imaging results reveal that masking speech with speech leads to bilateral superior temporal gyrus (STG) activation relative to a speech-in-noise baseline, while masking speech with spectrally rotated speech leads solely to right STG activation relative to the baseline. This result is discussed in terms of hemispheric asymmetries for speech perception, and interpreted as showing that masking effects can arise through two parallel neural systems, in the left and right temporal lobes. This has implications for the competition for resources caused by speech and rotated speech maskers, and may illuminate some of the mechanisms involved in informational masking.
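Spectrally rotated speech of the kind used as a masker is classically made by ring modulation: band-limit the signal to 0..fc, multiply by a carrier at fc so a component at f maps to fc - f, then low-pass again to discard the fc + f image. A sketch under assumed values (fc = 4 kHz, 8th-order Butterworth filters), not the study's exact processing:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spectrally_rotate(x, fs, fc=4000.0):
    """Invert the spectrum of x within 0..fc (f maps to fc - f) by
    ring modulation with a carrier at fc, band-limiting before and
    after. fc and the filter order are illustrative assumptions."""
    t = np.arange(len(x)) / fs
    sos = butter(8, fc, btype="lowpass", fs=fs, output="sos")
    xl = sosfiltfilt(sos, x)               # keep only 0..fc
    rot = xl * np.cos(2 * np.pi * fc * t)  # images at fc - f and fc + f
    return sosfiltfilt(sos, rot)           # remove the fc + f image
```

A 1 kHz tone, for example, comes out concentrated near 3 kHz; applied to speech, this inverts the spectrotemporal profile while leaving it unintelligible, as the abstract notes.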

Relevance:

10.00%

Publisher:

Abstract:

The story of cocaine: from royalty to popular refresher... Surprising as it sounds, Coca Cola gets its name from one of its original ingredients - cocaine. And although the drug's now illegal, this wasn't always the case. But how did it become popular, what led to its downfall, and how does Coca Cola come into it?

Relevance:

10.00%

Publisher:

Abstract:

Listeners were asked to identify modified recordings of the words "sir" and "stir," which were spoken by an adult male British-English speaker. Steps along a continuum between the words were obtained by a pointwise interpolation of their temporal-envelopes. These test words were embedded in a longer "context" utterance, and played with different amounts of reverberation. Increasing only the test-word's reverberation shifts the listener's category boundary so that more "sir"-identifications are made. This effect reduces when the context's reverberation is also increased, indicating perceptual compensation that is informed by the context. Experiment I finds that compensation is more prominent in rapid speech, that it varies between rooms, that it is more prominent when the test-word's reverberation is high, and that it increases with the context's reverberation. Further experiments show that compensation persists when the room is switched between the context and the test word, when presentation is monaural, and when the context is reversed. However, compensation reduces when the context's reverberation pattern is reversed, as well as when noise-versions of the context are used. "Tails" that reverberation introduces at the ends of sounds and at spectral transitions may inform the compensation mechanism about the amount of reflected sound in the signal. (c) 2005 Acoustical Society of America.
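The 11-step continuum built by pointwise interpolation of temporal envelopes amounts to a linear mix of the two (time-aligned) envelopes at each sample. A sketch, assuming the envelopes have already been extracted and aligned:

```python
import numpy as np

def envelope_continuum(env_a, env_b, n_steps=11):
    """Pointwise interpolation between two time-aligned temporal
    envelopes: step 0 is env_a (e.g. "sir"), the last step is env_b
    (e.g. "stir"), and intermediate steps mix the two linearly."""
    env_a = np.asarray(env_a, dtype=float)
    env_b = np.asarray(env_b, dtype=float)
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - w) * env_a + w * env_b for w in weights]
```

Each interpolated envelope would then be imposed on a carrier to synthesize one step of the test-word continuum.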

Relevance:

10.00%

Publisher:

Abstract:

In an ideal "reverberant" room, the energy of the impulse response decays smoothly, at a constant rate in dB/s, so that gradually-decaying tails are added at the ends of sounds. Conversely, a single echo gives a flat energy-decay up to the echo's arrival time, which then drops abruptly, so that sounds with only echoes lack the decaying-tail feature of reverberation. The perceptual effects of these types of reflection pattern were measured with test-words from a continuum of steps between "sir" and "stir", which were each embedded in a carrier phrase. When the proportion of reflected sound in test-words is increased, to a level above the amount in the carrier, the test words sound more like "sir". However, when the proportion of reflected sound in the carrier is also increased, to match the amount in the test word, there can be a perceptual compensation where test words sound more like "stir" again. A reference condition used real-room reverberation from recordings at different source-to-receiver distances. In a synthetic-reverberation condition, the reflection pattern was from a "colorless" impulse response, comprising exponentially-decaying reflections that were spaced at intervals. In a synthetic-echo condition, the reflection pattern was obtained from the synthetic reverberation by removing the intervals between reflections before delaying the resulting cluster relative to the direct sound. Compensation occurred in the reference condition and in different types of synthetic reverberation, but not in synthetic-echo conditions. This result indicates that the presence of tails from reverberation informs the compensation mechanism.
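The two synthetic reflection patterns can be sketched as impulse responses: regularly spaced, exponentially decaying reflections for the 'colorless' reverberation, and the same decaying cluster with its intervals removed for the echo condition. All numeric parameters below (decay time, spacing, delay) are illustrative assumptions, not the study's settings:

```python
import numpy as np

def synthetic_irs(fs=16000, rt60=0.7, spacing_ms=8.0, n_refl=40, delay_ms=50.0):
    """Build a 'colorless' reverberant impulse response (regularly
    spaced reflections, 60 dB down in energy after rt60 seconds) and
    an 'echo' version in which the intervals between reflections are
    removed, so the same decaying cluster arrives as one dense burst
    after the direct sound."""
    spacing = int(fs * spacing_ms / 1000)
    delay = int(fs * delay_ms / 1000)
    pos = delay + np.arange(n_refl) * spacing     # reflection sample times
    gains = 10.0 ** (-3.0 * (pos / fs) / rt60)    # -60 dB at t = rt60
    reverb = np.zeros(pos[-1] + 1)
    reverb[0] = 1.0                               # direct sound
    reverb[pos] = gains
    echo = np.zeros_like(reverb)
    echo[0] = 1.0
    echo[delay + np.arange(n_refl)] = gains       # intervals removed
    return reverb, echo
```

Convolving the test words with each response reproduces the contrast the abstract describes: the reverberant response adds decaying tails, while the echo response does not.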

Relevance:

10.00%

Publisher:

Abstract:

Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated the insula and anterior cingulate cortices in the empathic response to another's pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate these questions we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, insula activation to negative sounds, relative to positive or neutral sounds, was greater in expert than in novice meditators. The strength of activation in the insula was also associated with self-reported intensity of the meditation for both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison between meditation and rest states between experts and novices also showed increased activation in the amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds and enhanced mentation in response to emotional human vocalizations for experts than for novices during meditation. Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.

Relevance:

10.00%

Publisher:

Abstract:

Three experiments attempted to clarify the effect of altering the spatial presentation of irrelevant auditory information. Previous research using serial recall tasks demonstrated a left-ear disadvantage for the presentation of irrelevant sounds (Hadlington, Bridges, & Darby, 2004). Experiments 1 and 2 examined the effects of manipulating the location of irrelevant sound on either a mental arithmetic task (Banbury & Berry, 1998) or a missing-item task (Jones & Macken, 1993; Experiment 4). Experiment 3 altered the amount of change in the irrelevant stream to assess how this affected the level of interference elicited. Two prerequisites appear necessary to produce the left-ear disadvantage: the presence of ordered structural changes in the irrelevant sound, and the requirement for serial order processing of the attended information. The existence of a left-ear disadvantage highlights the role of the right hemisphere in the obligatory processing of auditory information. (c) 2006 Published by Elsevier Inc.