42 results for Consonance dissonance sounds


Relevance: 10.00%

Abstract:

It has been previously demonstrated that masking a speech target with a speech masker is associated with extensive activation in the dorsolateral temporal lobes, consistent with the hypothesis that competition for central auditory processes is an important factor in informational masking. Here, masking from speech and from two additional maskers derived from the original speech was investigated. One of these is spectrally rotated speech, which is unintelligible and has a similar (inverted) spectrotemporal profile to speech. The authors also controlled for the possibility of "glimpsing" of the target signal during modulated masking sounds by using speech-modulated noise as a masker in a baseline condition. Functional imaging results reveal that masking speech with speech leads to bilateral superior temporal gyrus (STG) activation relative to a speech-in-noise baseline, while masking speech with spectrally rotated speech leads solely to right STG activation relative to the baseline. This result is discussed in terms of hemispheric asymmetries for speech perception, and interpreted as showing that masking effects can arise through two parallel neural systems, in the left and right temporal lobes. This has implications for the competition for resources caused by speech and rotated speech maskers, and may illuminate some of the mechanisms involved in informational masking.

Relevance: 10.00%

Abstract:

Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated the insula and anterior cingulate cortices in the empathic response to another's pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate these questions we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, insula activation in response to negative sounds, relative to positive or neutral sounds, was greater in expert than in novice meditators. The strength of activation in the insula was also associated with self-reported intensity of the meditation for both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison of meditation vs. rest between experts and novices also showed increased activation in the amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds and enhanced mentation in response to emotional human vocalizations in experts than in novices during meditation.
Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.

Relevance: 10.00%

Abstract:

Three experiments attempted to clarify the effect of altering the spatial presentation of irrelevant auditory information. Previous research using serial recall tasks demonstrated a left-ear disadvantage for the presentation of irrelevant sounds (Hadlington, Bridges, & Darby, 2004). Experiments 1 and 2 examined the effects of manipulating the location of irrelevant sound on either a mental arithmetic task (Banbury & Berry, 1998) or a missing-item task (Jones & Macken, 1993; Experiment 4). Experiment 3 altered the amount of change in the irrelevant stream to assess how this affected the level of interference elicited. Two prerequisites appear necessary to produce the left-ear disadvantage: the presence of ordered structural changes in the irrelevant sound, and the requirement for serial-order processing of the attended information. The existence of a left-ear disadvantage highlights the role of the right hemisphere in the obligatory processing of auditory information. (c) 2006 Published by Elsevier Inc.

Relevance: 10.00%

Abstract:

A field study assessed subjective reports of distraction from various office sounds among 88 employees at two sites. In addition, the study examined the amount of exposure the workers had to the noise in order to determine any evidence for habituation. Finally, respondents were asked how they would improve their environment (with respect to noise), and to rate examples of improvements with regard to their job satisfaction and performance. Of the sample, 99% reported that their concentration was impaired by various components of office noise, especially telephones left ringing at vacant desks and people talking in the background. No evidence for habituation to these sounds was found. These results are interpreted in the light of previous research regarding the effects of noise in offices and the 'irrelevant sound effect'.

Relevance: 10.00%

Abstract:

Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be adapted to the user’s mood, the quality of experience would be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR) and fuzzy values acting on close-to-affect-target musical notes retrieved from the CBR per context. It modifies music pieces according to the interpretation of the user’s emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and provides the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. Suitable previous cases are selected and retrieved by the CBR subsystem to compute solutions for new cases; the resulting affect values control the music synthesis process, allowing a level of interactivity that makes for an engaging environment in which to experiment with and learn about expression in music.
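The octant-based case retrieval described above can be sketched as follows. This is a hypothetical illustration, not the CALLAS implementation: the octant labels follow the standard PAD temperament naming, while the case parameters and function names are invented for the example.

```python
# Hypothetical sketch of PAD-octant case lookup; labels and parameters
# are illustrative, not those of the CALLAS framework.

def pad_octant(pleasure: float, arousal: float, dominance: float) -> str:
    """Map a PAD triple (each in [-1, 1]) to one of the eight octants
    of temperament space by the sign of each dimension."""
    signs = tuple(v >= 0 for v in (pleasure, arousal, dominance))
    octants = {
        (True, True, True): "exuberant",
        (False, False, False): "bored",
        (True, False, True): "relaxed",
        (False, True, False): "anxious",
        (True, True, False): "dependent",
        (False, False, True): "disdainful",
        (True, False, False): "docile",
        (False, True, True): "hostile",
    }
    return octants[signs]

# Illustrative case repository (a subset of octants shown), each case
# holding a configurable mapping of affectivisation parameters.
CASE_BASE = {
    "exuberant": {"tempo_scale": 1.2, "mode": "major", "velocity": 0.9},
    "anxious":   {"tempo_scale": 1.1, "mode": "minor", "velocity": 0.7},
    "relaxed":   {"tempo_scale": 0.8, "mode": "major", "velocity": 0.5},
    "bored":     {"tempo_scale": 0.7, "mode": "minor", "velocity": 0.4},
}

def retrieve_case(p, a, d):
    """Retrieve the stored case for the octant of the user's emotive state."""
    return CASE_BASE.get(pad_octant(p, a, d))

print(pad_octant(0.6, 0.5, 0.3))     # exuberant
print(retrieve_case(0.6, 0.5, 0.3))
```

In a fuller system the retrieved case would then be adapted (the CBR "reuse" step) before its parameters drive synthesis.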

Relevance: 10.00%

Abstract:

This paper addresses the problem of wayfinding assistance in Virtual Environments (VEs). A number of navigation aids, such as maps, agents, trails and acoustic landmarks, are available to support navigation in VEs; however, most of these aids are visually dominated. This work-in-progress describes a sound-based approach that aims to assist the task of 'route decision' during navigation in a VE using music. Furthermore, through the use of musical sounds it aims to reduce the cognitive load associated with other visually and physically dominated tasks. To achieve these goals, the approach exploits the benefits provided by music to ease and enhance the task of wayfinding, while keeping the user's experience in the VE smooth and enjoyable.

Relevance: 10.00%

Abstract:

Inspired by a type of synesthesia in which colour typically induces musical notes, the MusiCam project investigates this unusual condition, particularly the transition from colour to sound. MusiCam explores the potential benefits of this idiosyncrasy as a mode of human-computer interaction (1-10), providing a host of meaningful applications spanning control, communication and composition. Colour data is captured by means of an off-the-shelf webcam, and music is generated in real time through regular speakers. By making colour-based gestures, users can actively control the parameters of sounds, compose melodies and motifs, or mix multiple tracks on the fly. The system shows great potential both as an interactive medium and as a musical controller. The trials conducted to date have produced encouraging results, and only hint at the new possibilities achievable with such a device.
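A colour-to-note mapping of this general kind can be sketched as below. This is only an illustration, assuming hue selects a degree of a C-major scale; MusiCam's actual mapping and scale choices are not described here, and the function name is hypothetical.

```python
import colorsys

# Illustrative colour-to-note mapping (not MusiCam's actual scheme):
# hue chooses a scale degree; brightness could drive note velocity.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI notes C4..C5

def colour_to_note(r: int, g: int, b: int) -> int:
    """Map an RGB pixel (0-255 per channel) to a MIDI note number."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    degree = min(int(h * len(C_MAJOR)), len(C_MAJOR) - 1)
    return C_MAJOR[degree]

print(colour_to_note(255, 0, 0))   # red: hue 0.0 -> degree 0 -> 60 (C4)
print(colour_to_note(0, 0, 255))   # blue: hue ~0.67 -> degree 5 -> 69 (A4)
```

In a live system the RGB values would come from webcam frames, with the mapped notes sent to a synthesizer in real time.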

Relevance: 10.00%

Abstract:

Perceptual constancy effects are observed when differing amounts of reverberation are applied to a context sentence and a test‐word embedded in it. Adding reverberation to members of a “sir”‐“stir” test‐word continuum causes temporal‐envelope distortion, which has the effect of eliciting more sir responses from listeners. If the same amount of reverberation is also applied to the context sentence, the number of sir responses decreases again, indicating an “extrinsic” compensation for the effects of reverberation. Such a mechanism would effect perceptual constancy of phonetic perception when temporal envelopes vary in reverberation. This experiment asks whether such effects precede or follow grouping. Eight auditory‐filter shaped noise‐bands were modulated with the temporal envelopes that arise when speech is played through these filters. The resulting “gestalt” percept is the appropriate speech rather than the sound of noise‐bands, presumably due to across‐channel “grouping.” These sounds were played to listeners in “matched” conditions, where reverberation was present in the same bands in both context and test‐word, and in “mismatched” conditions, where the bands in which reverberation was added differed between context and test‐word. Constancy effects were obtained in matched conditions, but not in mismatched conditions, indicating that this type of constancy in hearing precedes across‐channel grouping.
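The construction of such noise-band stimuli can be sketched as a simple noise vocoder: filter the signal into bands, extract each band's temporal envelope, and use it to modulate a matching band of noise. In this sketch, rectangular FFT bands stand in for the auditory-filter shapes used in the experiment, and all parameters (band edges, smoothing window, test signal) are illustrative.

```python
import numpy as np

def band_filter(x, sr, lo, hi):
    """Brick-wall band-pass via FFT masking (a stand-in for
    auditory-filter shapes)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    X[(freqs < lo) | (freqs >= hi)] = 0
    return np.fft.irfft(X, n=len(x))

def envelope(x, sr, win_ms=10):
    """Temporal envelope: rectify, then smooth with a moving average."""
    win = max(1, int(sr * win_ms / 1000))
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def noise_vocode(speech, sr, edges):
    """Sum of noise bands, each modulated by the speech envelope
    in the corresponding band."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(speech))
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(band_filter(speech, sr, lo, hi), sr)
        out += env * band_filter(noise, sr, lo, hi)
    return out

sr = 16000
t = np.arange(sr) / sr
# Stand-in "speech": a carrier with slow amplitude modulation.
speech_like = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 4 * t))
edges = np.geomspace(100, 7500, 9)          # 8 log-spaced bands
y = noise_vocode(speech_like, sr, edges)
print(y.shape)
```

Listeners hear such stimuli as the speech rather than as noise bands, which is the across-channel "grouping" the experiment probes.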

Relevance: 10.00%

Abstract:

Listeners can attend to one of several simultaneous messages by tracking one speaker’s voice characteristics. Using differences in the location of sounds in a room, we ask how well cues arising from spatial position compete with these characteristics. Listeners decided which of two simultaneous target words belonged in an attended “context” phrase when it was played simultaneously with a different “distracter” context. Talker difference was in competition with position difference, so the response indicates which cue‐type the listener was tracking. Spatial position was found to override talker difference in dichotic conditions when the talkers are similar (male). The salience of cues associated with differences in sounds’ bearings decreased with distance between listener and sources. These cues are more effective binaurally. However, there appear to be other cues that increase in salience with distance between sounds. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. Distances between spectra of the room’s impulse responses at different locations, calculated using a gammatone filterbank (with ERB‐spaced CFs), were computed; comparison with listeners’ responses suggested some slight monaural loudness cues, but also monaural “timbre” cues arising from the temporal‐ and spectral‐envelope differences in the speech from different locations.
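The spectral-distance computation can be sketched as follows. This is a simplified stand-in for the analysis described: rectangular 1-ERB-wide bands replace true gammatone filters, random noise replaces the measured impulse responses, and the ERB formulas are the standard Glasberg & Moore approximations.

```python
import numpy as np

def erb_space(lo=100.0, hi=8000.0, n=32):
    """n centre frequencies equally spaced on the ERB-number scale
    (Glasberg & Moore)."""
    erb = lambda f: 21.4 * np.log10(1 + 0.00437 * f)
    inv = lambda e: (10 ** (e / 21.4) - 1) / 0.00437
    return inv(np.linspace(erb(lo), erb(hi), n))

def excitation_pattern(x, sr, cfs):
    """Energy per band around each CF (bandwidth ~1 ERB); rectangular
    bands stand in for gammatone filter shapes."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    bw = 24.7 * (0.00437 * cfs + 1)          # ERB bandwidth at each CF
    return np.array([spec[(freqs >= c - b / 2) & (freqs < c + b / 2)].sum()
                     for c, b in zip(cfs, bw)])

def spectral_distance(x, y, sr):
    """Euclidean distance between log excitation patterns of two
    impulse responses."""
    cfs = erb_space()
    ex = np.log10(excitation_pattern(x, sr, cfs) + 1e-12)
    ey = np.log10(excitation_pattern(y, sr, cfs) + 1e-12)
    return np.linalg.norm(ex - ey)

sr = 16000
rng = np.random.default_rng(1)
ir_near, ir_far = rng.standard_normal((2, sr // 4))  # stand-in impulse responses
print(spectral_distance(ir_near, ir_far, sr))
```

Comparing such distances across source locations against listeners' responses is what suggested the monaural loudness and "timbre" cues reported above.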

Relevance: 10.00%

Abstract:

In a “busy” auditory environment listeners can selectively attend to one of several simultaneous messages by tracking one listener's voice characteristics. Here we ask how well other cues compete for attention with such characteristics, using variations in the spatial position of sound sources in a (virtual) seminar room. Listeners decided which of two simultaneous target words belonged in an attended “context” phrase when it was played with a simultaneous “distracter” context that had a different wording. Talker difference was in competition with a position difference, so that the target‐word chosen indicates which cue‐type the listener was tracking. The main findings are that room‐acoustic factors provide some tracking cues, whose salience increases with distance separation. This increase is more prominent in diotic conditions, indicating that these cues are largely monaural. The room‐acoustic factors might therefore be the spectral‐ and temporal‐envelope effects of reverberation on the timbre of speech. By contrast, the salience of cues associated with differences in sounds' bearings tends to decrease with distance, and these cues are more effective in dichotic conditions. In other conditions, where a distance and a bearing difference cooperate, they can completely override a talker difference at various distances.

Relevance: 10.00%

Abstract:

A speech message played several metres from the listener in a room is usually heard to have much the same phonetic content as it does when played nearby, even though the different amounts of reflected sound make the temporal envelopes of these signals very different. To study this ‘constancy’ effect, listeners heard speech messages and speech-like sounds comprising 8 auditory-filter shaped noise-bands that had temporal envelopes corresponding to those in these filters when the speech message is played. The ‘contexts’ were “next you’ll get _to click on”, into which a “sir” or “stir” test word was inserted. These test words were from an 11-step continuum, formed by amplitude modulation. Listeners identified the test words appropriately, even in the 8-band conditions where the speech had a ‘robotic’ quality. Constancy was assessed by comparing the influence of room reflections on the test word across conditions where the context had either the same level of room reflections (i.e. from the same, far distance), or where it had a much lower level (i.e. from nearby). Constancy effects were obtained with both the natural- and the 8-band speech. Results are considered in terms of the degree of ‘matching’ between the context’s and test-word’s bands.

Relevance: 10.00%

Abstract:

Another Proof of the Preceding Theory was produced as part of a residency run by Artists in Archeology in conjunction with the Stonehenge Riverside project. The film explores the relationship between science, work and ritual, imagining archaeology as a future cult. As two robed disciples stray off from the dig, they are drawn to the drone of the stones and proceed to play the henge like a gigantic Theremin. Just as a Theremin is played with the hand interfering in an electric circuit and producing sound without contact, so the stones respond to the choreographed bodily proximity. Finally, one of the two continues alone to the avenue at Avebury, where the magnetic pull of the stones reaches its climax. Shot on VHS, the film features a score by Zuzushi Monkey, with percussion and theremin sounds mirroring the action. The performers are mostly artists and archaeologists from the art and archaeology teams. The archaeologists were encouraged to perform their normal work in the robes, in an attempt to explore the meeting points of science and ritual and interrogate our relationship to an ultimately unknowable prehistoric past where activities we do not understand are relegated to the realm of religion. Stonehenge has unique acoustic properties: its large sarsen stones are finely worked on the inside and left rough on the outside, intensifying sound waves within the inner horseshoe. But since the stones’ real use, having been built over centuries, remains ambiguous, the film proposes that our attempts to decode them may themselves become encoded in their cumulative meaning for future researchers.

Relevance: 10.00%

Abstract:

When speech is in competition with interfering sources in rooms, monaural indicators of intelligibility fail to take account of the listener’s abilities to separate target speech from interfering sounds using the binaural system. In order to incorporate these segregation abilities and their susceptibility to reverberation, Lavandier and Culling [J. Acoust. Soc. Am. 127, 387–399 (2010)] proposed a model which combines effects of better-ear listening and binaural unmasking. A computationally efficient version of this model is evaluated here under more realistic conditions that include head shadow, multiple stationary noise sources, and real-room acoustics. Three experiments are presented in which speech reception thresholds were measured in the presence of one to three interferers using real-room listening over headphones, simulated by convolving anechoic stimuli with binaural room impulse responses measured with dummy-head transducers in five rooms. Without fitting any parameter of the model, there was close correspondence between measured and predicted differences in threshold across all tested conditions. The model’s components of better-ear listening and binaural unmasking were validated both in isolation and in combination. The computational efficiency of this prediction method allows the generation of complex “intelligibility maps” from room designs. © 2012 Acoustical Society of America
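The headphone simulation used in such experiments can be sketched as follows: each ear's signal is the anechoic stimulus convolved with that ear's binaural room impulse response. In this sketch, exponentially decaying random noise stands in for the measured dummy-head BRIRs, and the signal is a tone rather than speech.

```python
import numpy as np

def binauralize(anechoic, brir_left, brir_right):
    """Return an (n, 2) stereo array: the anechoic signal convolved
    with the left- and right-ear room impulse responses."""
    left = np.convolve(anechoic, brir_left)
    right = np.convolve(anechoic, brir_right)
    return np.stack([left, right], axis=1)

sr = 16000
t = np.arange(sr // 2) / sr
target = np.sin(2 * np.pi * 440 * t)          # stand-in for a speech stimulus
rng = np.random.default_rng(0)
# Stand-in BRIRs: decaying noise, one response per ear.
decay = np.exp(-np.arange(2048) / 400)
brir_l, brir_r = rng.standard_normal((2, 2048)) * decay
stereo = binauralize(target, brir_l, brir_r)
print(stereo.shape)   # (len(target) + 2047, 2)
```

Summing several such binauralized sources (target plus interferers) at different levels is how speech reception thresholds are then measured adaptively over headphones.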

Relevance: 10.00%

Abstract:

The fundamental principles of the teaching methodology followed for dyslexic learners revolve around the need for a multisensory approach, which would advocate repetition of learning tasks in an enjoyable way. The introduction of multimedia technologies in the field of education has supported the merging of new tools (digital camera, scanner) and techniques (sounds, graphics, animation) into a meaningful whole. Dyslexic learners are now given the opportunity to express their ideas using these alternative media and to participate actively in the educational process. This paper discusses the preliminary findings of a single case study of two English monolingual dyslexic children working together to create an open-ended multimedia project on a laptop computer. The project aimed to examine whether the multimedia environment could enhance the dyslexic learners’ skills in composition. Analysis of the data indicated that the technological facilities gave the children the opportunity to enhance the style and content of their work for a variety of audiences and to develop responsibilities connected to authorship.