747 results for Sounds.
Abstract:
Although poetry, as a literary text, is writing, it nevertheless preserves deep ties to the spoken word. Gonzalo Rojas, an intimate connoisseur of the laws of poetry, handles that relationship with particular skill. Indeed, his writing is especially sensitive to the oral evocations that verses, words, syllables, and even sounds stir in the reader of poetry. Hence the importance of silence in his writing: not only as a theme or a rhetorical device, nor merely as the silence inherent in any interiorized expression such as poetry, but above all silence as the other of the word, as that which is always present without being said and which points toward the deepest levels of the unnameable.
Abstract:
The Ivory-billed Woodpecker (Campephilus principalis) disappeared from the forests of southeastern North America in the early 20th century and for more than 50 years has been widely considered extinct. On 21 May 2005, we detected a bird that we identified as an Ivory-billed Woodpecker in the mature swamp forest along the Choctawhatchee River in the panhandle of Florida. During a subsequent year of research, members of our small search team observed birds that we identified as Ivory-billed Woodpeckers on 14 occasions. We heard sounds that matched descriptions of Ivory-billed Woodpecker acoustic signals on 41 occasions. We recorded 99 putative double knocks and 210 putative kent calls. We located cavities in the size range reported for Ivory-billed Woodpeckers and larger than those of Pileated Woodpeckers (Dryocopus pileatus) that have been reported in the literature or that we measured in Alabama. We documented unique foraging signs consistent with the feeding behavior of Ivory-billed Woodpeckers. Our evidence suggests that Ivory-billed Woodpeckers may be present in the forests along the Choctawhatchee River and warrants an expanded search of this bottomland forest habitat.
Abstract:
Why are humans musical? Why do people in all cultures sing or play instruments? Why do we appear to have specialized neurological apparatus for hearing and interpreting music as distinct from other sounds? And how does our musicality relate to language and to our evolutionary history? Anthropologists and archaeologists have paid little attention to the origin of music and musicality — far less than for either language or ‘art’. While art has been seen as an index of cognitive complexity and language as an essential tool of communication, music has suffered from our perception that it is an epiphenomenal ‘leisure activity’, and archaeologically inaccessible to boot. Nothing could be further from the truth, according to Steven Mithen; music is integral to human social life, he argues, and we can investigate its ancestry with the same rich range of analyses — neurological, physiological, ethnographic, linguistic, ethological and even archaeological — which have been deployed to study language. In The Singing Neanderthals Steven Mithen poses these questions and proposes a bold hypothesis to answer them. Mithen argues that musicality is a fundamental part of being human, that this capacity is of great antiquity, and that a holistic protolanguage of musical emotive expression predates language and was an essential precursor to it. This is an argument with implications which extend far beyond the mere origins of music itself into the very motives of human origins. Any argument of such range is bound to attract discussion and critique; we here present commentaries by archaeologists Clive Gamble and Iain Morley and linguists Alison Wray and Maggie Tallerman, along with Mithen's response to them. Whether right or wrong, Mithen has raised fascinating and important issues. And it adds a great deal of charm to the time-honoured, perhaps shopworn image of the Neanderthals shambling ineffectively through the pages of Pleistocene prehistory to imagine them humming, crooning or belting out a cappella harmonies as they went.
Abstract:
In this research, a cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were presented simultaneously with liked and disliked odours and sounds, with neutral-neutral pairings serving as controls. The results confirm previous findings that the affective evaluation of previously neutral visual stimuli shifts in the direction of contingently presented affective auditory stimuli. In addition, this research shows the presence of conditioning with affective odours having no logical connection with the pictures.
Abstract:
Four experiments investigate the hypothesis that irrelevant sound interferes with serial recall of auditory items in the same fashion as with visually presented items. In Experiment 1 an acoustically changing sequence of 30 irrelevant utterances was more disruptive than 30 repetitions of the same utterance (the changing-state effect; Jones, Madden, & Miles, 1992) whether the to-be-remembered items were visually or auditorily presented. Experiment 2 showed that two different utterances spoken once (a heterogeneous compound suffix; LeCompte & Watkins, 1995) produced less disruption to serial recall than 15 repetitions of the same sequence. Disruption thus depends on the number of sounds in the irrelevant sequence. In Experiments 3a and 3b the number of different sounds, the "token-set" size (Tremblay & Jones, 1998), in an irrelevant sequence also influenced the magnitude of disruption in both irrelevant sound and compound suffix conditions. The results support the view that the disruption of memory for auditory items, like memory for visually presented items, is dependent on the number of different irrelevant sounds presented and the size of the set from which these sounds are taken. Theoretical implications are discussed.
Abstract:
It has previously been demonstrated that masking a speech target with a speech masker is associated with extensive activation in the dorsolateral temporal lobes, consistent with the hypothesis that competition for central auditory processes is an important factor in informational masking. Here, masking from speech and from two additional maskers derived from the original speech was investigated. One of these is spectrally rotated speech, which is unintelligible but has a similar (inverted) spectrotemporal profile to speech. The authors also controlled for the possibility of “glimpsing” the target signal during modulated masking sounds by using speech-modulated noise as a masker in a baseline condition. Functional imaging results reveal that masking speech with speech leads to bilateral superior temporal gyrus (STG) activation relative to a speech-in-noise baseline, whereas masking speech with spectrally rotated speech leads solely to right STG activation relative to the baseline. This result is discussed in terms of hemispheric asymmetries for speech perception, and is interpreted as showing that masking effects can arise through two parallel neural systems, in the left and right temporal lobes. This has implications for the competition for resources caused by speech and rotated-speech maskers, and may illuminate some of the mechanisms involved in informational masking.
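The spectral rotation mentioned above is commonly implemented by ring-modulating band-limited speech with a sinusoid and low-pass filtering the result (after Blesser, 1972). A minimal Python sketch under that assumption follows; the 4 kHz cutoff, the filter order, and the input file name are illustrative choices, not details taken from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.io import wavfile

def spectrally_rotate(x, fs, fc=4000.0):
    """Mirror the spectrum of x about fc/2: a component at f maps to fc - f."""
    sos = butter(8, fc, btype="low", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)              # band-limit the input to [0, fc]
    t = np.arange(len(x)) / fs
    x = x * np.cos(2 * np.pi * fc * t)   # ring modulation: f -> fc - f and fc + f
    return sosfiltfilt(sos, x)           # keep the inverted band, drop the image

fs, speech = wavfile.read("speech.wav")  # hypothetical input recording
rotated = spectrally_rotate(speech.astype(float), fs)
```

The rotated signal keeps speech-like temporal modulations but mirrors the spectral profile, which is why it is unintelligible yet spectrotemporally similar to the original.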
Abstract:
The story of cocaine: from royalty to popular refresher... Surprising as it sounds, Coca Cola gets its name from one of its original ingredients - cocaine. And although the drug's now illegal, this wasn't always the case. But how did it become popular, what led to its downfall, and how does Coca Cola come into it?
Abstract:
Listeners were asked to identify modified recordings of the words "sir" and "stir," which were spoken by an adult male British-English speaker. Steps along a continuum between the words were obtained by a pointwise interpolation of their temporal envelopes. These test words were embedded in a longer "context" utterance and played with different amounts of reverberation. Increasing only the test word's reverberation shifts the listener's category boundary so that more "sir" identifications are made. This effect is reduced when the context's reverberation is also increased, indicating perceptual compensation that is informed by the context. Experiment 1 finds that compensation is more prominent in rapid speech, that it varies between rooms, that it is more prominent when the test word's reverberation is high, and that it increases with the context's reverberation. Further experiments show that compensation persists when the room is switched between the context and the test word, when presentation is monaural, and when the context is reversed. However, compensation is reduced when the context's reverberation pattern is reversed, as well as when noise versions of the context are used. "Tails" that reverberation introduces at the ends of sounds and at spectral transitions may inform the compensation mechanism about the amount of reflected sound in the signal.
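The continuum construction described here can be sketched briefly. In the minimal Python version below, an envelope is taken as the smoothed Hilbert magnitude, the two envelopes are interpolated pointwise, and the blend is reimposed on the fine structure of one recording; the 50 Hz smoothing cutoff, the equal-length recordings, and the use of "stir" as the carrier are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def envelope(x, fs, cutoff=50.0):
    """Smoothed temporal envelope: Hilbert magnitude, low-pass filtered."""
    sos = butter(2, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, np.abs(hilbert(x)))

def continuum_step(sir, stir, fs, alpha):
    """Pointwise envelope interpolation: alpha=0 gives the 'stir' envelope,
    alpha=1 the 'sir' envelope. Assumes equal-length recordings."""
    blended = alpha * envelope(sir, fs) + (1 - alpha) * envelope(stir, fs)
    fine = stir / (envelope(stir, fs) + 1e-9)  # flatten 'stir' to fine structure
    return fine * blended                      # impose the interpolated envelope
```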
Abstract:
In an ideal "reverberant" room, the energy of the impulse response decays smoothly, at a constant rate in dB/s, so that gradually decaying tails are added at the ends of sounds. Conversely, a single echo gives a flat energy decay up to the echo's arrival time, which then drops abruptly, so that sounds with only echoes lack the decaying-tail feature of reverberation. The perceptual effects of these types of reflection pattern were measured with test words from a continuum of steps between "sir" and "stir", each embedded in a carrier phrase. When the proportion of reflected sound in test words is increased, to a level above the amount in the carrier, the test words sound more like "sir". However, when the proportion of reflected sound in the carrier is also increased, to match the amount in the test word, there can be a perceptual compensation whereby test words sound more like "stir" again. A reference condition used real-room reverberation from recordings at different source-to-receiver distances. In a synthetic-reverberation condition, the reflection pattern was from a "colorless" impulse response, comprising exponentially decaying reflections that were spaced at intervals. In a synthetic-echo condition, the reflection pattern was obtained from the synthetic reverberation by removing the intervals between reflections before delaying the resulting cluster relative to the direct sound. Compensation occurred in the reference condition and in different types of synthetic reverberation, but not in synthetic-echo conditions. This result indicates that the presence of tails from reverberation informs the compensation mechanism.
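The two synthetic reflection patterns contrasted in this study lend themselves to a short sketch. The Python fragment below builds a "colorless"-style impulse response with exponentially decaying reflections spaced at intervals, and an echo version in which the intervals are removed and the cluster is delayed; the decay rate, spacing, reflection count, and delay are assumed values, not the study's parameters.

```python
import numpy as np

fs = 16000                     # sample rate (Hz)
n_refl = 40                    # number of reflections
spacing = int(0.005 * fs)      # 5 ms between successive reflections
rate_db_per_s = 60.0           # smooth energy-decay rate, in dB/s

def reverb_ir():
    """Direct sound plus exponentially decaying, regularly spaced reflections."""
    ir = np.zeros(n_refl * spacing + 1)
    ir[0] = 1.0                                   # direct sound
    for k in range(1, n_refl + 1):
        t = k * spacing / fs
        ir[k * spacing] = 10 ** (-rate_db_per_s * t / 20)
    return ir

def echo_ir(delay_s=0.05):
    """The same reflections with the intervals removed: one delayed cluster,
    giving a flat energy decay that then drops abruptly."""
    d = int(delay_s * fs)
    ir = np.zeros(d + n_refl + 1)
    ir[0] = 1.0
    for k in range(1, n_refl + 1):
        ir[d + k] = 10 ** (-rate_db_per_s * (k * spacing / fs) / 20)
    return ir

# Convolving a test word with either response yields the reverberant
# or echo-only stimulus, e.g. np.convolve(word, reverb_ir()).
```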
Abstract:
Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated the insula and anterior cingulate cortices in the empathic response to another's pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate these questions, we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, activation in the insula in response to negative sounds, relative to positive or neutral sounds, was greater in expert than in novice meditators. The strength of activation in the insula was also associated with self-reported intensity of the meditation for both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison of meditation vs. rest states between experts and novices also showed increased activation in the amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds and enhanced mentation in response to emotional human vocalizations in experts than in novices during meditation. Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.
Abstract:
Three experiments attempted to clarify the effect of altering the spatial presentation of irrelevant auditory information. Previous research using serial recall tasks demonstrated a left-ear disadvantage for the presentation of irrelevant sounds (Hadlington, Bridges, & Darby, 2004). Experiments 1 and 2 examined the effects of manipulating the location of irrelevant sound on either a mental arithmetic task (Banbury & Berry, 1998) or a missing-item task (Jones & Macken, 1993; Experiment 4). Experiment 3 altered the amount of change in the irrelevant stream to assess how this affected the level of interference elicited. Two prerequisites appear necessary to produce the left-ear disadvantage: the presence of ordered structural changes in the irrelevant sound and the requirement for serial-order processing of the attended information. The existence of a left-ear disadvantage highlights the role of the right hemisphere in the obligatory processing of auditory information.
Office noise and employee concentration: identifying causes of disruption and potential improvements
Abstract:
A field study assessed subjective reports of distraction from various office sounds among 88 employees at two sites. In addition, the study examined the amount of exposure the workers had to the noise in order to determine any evidence for habituation. Finally, respondents were asked how they would improve their environment (with respect to noise) and to rate examples of improvements with regard to their job satisfaction and performance. Of the sample, 99% reported that their concentration was impaired by various components of office noise, especially telephones left ringing at vacant desks and people talking in the background. No evidence for habituation to these sounds was found. These results are interpreted in the light of previous research regarding the effects of noise in offices and the 'irrelevant sound effect'.
Abstract:
Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be enhanced according to the user's mood, the quality of experience would be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR), and fuzzy values acting on close-to-affect-target musical notes retrieved from CBR for each context. It modifies music pieces according to the interpretation of the user's emotive state as computed by the emotive-input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and provides the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. Suitable previous cases are selected and retrieved by the CBR subsystem to compute solutions for new cases; the resulting affect values control the music synthesis process, allowing a level of interactivity that makes for an engaging environment in which to experiment with and learn about expression in music.
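The octant-keyed case repository described above can be illustrated with a minimal sketch. In the Python fragment below, each of the eight PAD octants stores one base case; the parameter names (tempo_scale, velocity_scale, mode_shift) are hypothetical stand-ins for the "configurable mapping of affectivisation parameters", and the retrieval is a simple sign-based octant lookup rather than the CALLAS implementation.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Case:
    pad_signs: tuple      # signs of (pleasure, arousal, dominance)
    tempo_scale: float    # hypothetical affectivisation parameters
    velocity_scale: float
    mode_shift: int

# One base reference emotive state per octant of PAD temperament space.
case_repository = {
    s: Case(s,
            tempo_scale=1.0 + 0.2 * s[1],        # arousal nudges tempo
            velocity_scale=1.0 + 0.15 * s[2],    # dominance nudges loudness
            mode_shift=1 if s[0] > 0 else -1)    # pleasure nudges mode
    for s in product((-1, 1), repeat=3)
}

def retrieve(pleasure, arousal, dominance):
    """Retrieve the stored base case for the octant containing the PAD point."""
    key = tuple(1 if v >= 0 else -1 for v in (pleasure, arousal, dominance))
    return case_repository[key]

# e.g. a mildly pleasant, high-arousal, low-dominance state:
case = retrieve(0.3, 0.8, -0.2)
```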