747 results for Sounds.


Relevance: 20.00%

Abstract:

Fundamental Sounds was a live, intercultural and multidisciplinary concert that presented a new synthesis of music, performance and visual arts, addressing the imperative of sustainability in a new and evocative form. The outcome was a ninety-minute concert, performed at a major concert hall venue, involving four live musicians, numerous performers and large-scale projections. The images and the concert were scripted in three key phases that spoke to three epochs of human evolution identified by ontological designer and futurist Tony Fry – 'Pre-Settlement', 'Settlement' and the era he suggests we have now entered – 'Unsettlement' (in mind, body and spirit). The entire work was professionally recorded for presentation on DVD and audio CD.

Fundamental Sounds achieved a new synthesis between quality performance forms and cogent critical ideas, engendering an increasingly reflective position for audiences around today's "era of unsettlement" – an epoch Fry argues we must now move quickly to displace by adopting fundamentally sustainable modes of being and becoming.

The concert was well attended and evoked a range of strong, reflective reactions from its audiences, who were also invited to join a subsequent 'community of change' initiated at that time.

Relevance: 20.00%

Abstract:

Sounds of the Suburb was a commissioned public art proposal based upon a brief set by Queensland Rail for the major redevelopment of their Brunswick Street Railway Station, Fortitude Valley, Brisbane. I proposed a large-scale electronic artwork to be distributed across the glass-fronted structure of the station's new concourse building. It was designed as a network of LED-based 'tracking', along which would travel electronically animated 'trains' of text synchronised to the actual train timetables. Each message packet moved endlessly through a complex spatial network of 'tracks' and 'stations' set inside, outside and across the concourse. The design was underpinned by a large-scale image of sound waves etched onto the architecture's glass, and was accompanied by two inset monitors, each presenting ghosted images of passenger movements within the concourse, time-delay recorded and then cross-combined in real time to form new composites.

Each moving, reprogrammable phrase was conceived as a 'train of thought' and ostensibly contained an idea or concept about the popular cultures surrounding contemporary music – thereby meeting the brief that the work should speak to the diverse musical cultures central to Fortitude Valley's image as an entertainment hub. These cultural 'memes', gathered from both passengers and the music press, were situated alongside quotes from philosophies of networking, speed and digital ecologies. These texts would continually propagate, replicate and cross-fertilise as they moved throughout the 'network', thereby writing a constantly evolving 'textual soundscape' of that place. This idea was further cemented through the pace, scale and rhythm of passenger movements continually recorded and re-presented on the smaller screens.
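The mechanics of the proposed display – message packets stepping along a looped network of 'stations' in time with a schedule – can be sketched in a few lines. This is a hypothetical toy, not the proposal's actual software; the station names and messages are invented for illustration.

```python
# Minimal sketch: text 'trains' advance one station at a time
# around a looped network of display 'stations'.
stations = ["concourse_in", "facade_east", "facade_west", "concourse_out"]

def advance(positions):
    """Move every message packet one station along the loop,
    wrapping from the last station back to the first."""
    return {msg: (i + 1) % len(stations) for msg, i in positions.items()}

# Two packets, each a reprogrammable 'train of thought'
positions = {"train of thought #1": 0, "train of thought #2": 3}
positions = advance(positions)
# after one step, #2 has wrapped back to the first station
print({msg: stations[i] for msg, i in positions.items()})
```

In the real work, each step would be synchronised to the live train timetable rather than to a fixed clock tick.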

Relevance: 20.00%

Abstract:

The creative practice: the adaptation of the picture book The Empty City (Megarrity/Oxlade, Hachette 2007) into an innovative, interdisciplinary performance for children that combines live performance, music, projected animation and performing objects. The researcher, in the combined roles of writer/composer, proposes deliberate experiments in music, narrative and emotion in the various drafts of the adaptation, and tests them in process and in the performance product. A particular method of composing music for live performance is tested against the emergent needs of a collaborative, intermedial process. The unpredictable site of research means that this project addresses both pre-determined and emerging points of inquiry. This analysis (directed by audience reception) finds that critical incidents of intermediality between music, narrative, action and emotion translate directly into highlights of the performance.

Relevance: 20.00%

Abstract:

These wordless songs were composed as music first, and soundtrack second. There is a difference. A soundtrack will always be connected with whatever it is accompanying. Music doesn't necessarily need to reference anything else. The Empty City transformed a picture book into a non-verbal performance combining the live and the animated. Without spoken words, the show would dance on the dangerous intersection of music, image and action. In both theatre and film (and this production drew on both traditions) soundtrack and music are often added at the end, when everything has been pre-determined – a passive, responsive mode for such a powerful artform. It's literally added in 'post'. In The Empty City, music was present from its inception and grew with the show. It was active in process and product. It frequently led rehearsals and shaped other key decisions in virtual and live performance. Rather than tailor-make music towards pre-determined moments, independent compositions created without specific reference to narrative experimented with the creation of a flock of small musical pieces. I was interested in seeing how they flew and where they roosted, rather than having them born and raised in (narrative) captivity. The sonic palette is largely acoustic, incorporating ukulele and prepared piano, supported by a range of other elements tending towards electronica. Eventually more than seventy pieces of music were made for this show, twice the number used. These pieces were then placed in relation to the emerging scenes, and adapted in duration, texture and progression to develop a relationship with each scene. In this way, music (even when it's synced) has a conversation with a performance, an exchange that may result in surprise rather than fulfillment of expectation. Leitmotif emerged from loops and layers, as the pieces of music 'conversed' with each other, rather than being premeditated and imposed.

Nineteen of these tracks are compiled for this release, which finds the compositions (which progressed through many versions) poised at the moment between their fullest iteration as 'music' and their editing and full incorporation into a synchronised soundtrack. They are released as they began: as 'music-alone' (Kivy). In picture-book writing, the mutual interplay of text and image is sometimes referred to as interanimation, and this is the kind of symbiosis this project sought in the creation of the soundtrack. Reviewers noted the important role of the soundtrack in two separate productions of The Empty City: "The original score…takes centre stage" (Borhani, 2013); "…swept up in its repetition of sounds and images, like a Bach fugue" (Zampatti, 2013).

Relevance: 20.00%

Abstract:

Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a wide body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unknown. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes about living and manmade objects relative to a high-level auditory perceptual baseline to investigate the differential patterns of response to these contrasting types of object-related attributes, whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report here a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.

Relevance: 20.00%

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).

Relevance: 20.00%

Abstract:

A travel article about Nova Scotia, and the area's annual Celtic music festival. I ARRIVED in Cape Breton on the occasion of the Fibre Festival, run not only by the South Haven Guild of Weavers but also the Baddeck Quilters Guild. And yet I might not have noticed that it was on, had it not been for a car, shrouded entirely by a quilt cover, that was parked outside the Volunteer Fire Department Hall. I was on my way to the Alexander Graham Bell Museum a little further along Baddeck's main street. But I stopped, for who wouldn't stop to look at the various fibres of Cape Breton. The hall had been divided between weavers and quilters. Naturally, I left hoping that one day this ancient divide might be healed...

Relevance: 20.00%

Abstract:

Comprehension of a complex acoustic signal - speech - is vital for human communication, with numerous brain processes required to convert the acoustics into an intelligible message. In four studies in the present thesis, cortical correlates for different stages of speech processing in a mature linguistic system of adults were investigated. In two further studies, developmental aspects of cortical specialisation and its plasticity in adults were examined. In the present studies, electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings of the mismatch negativity (MMN) response elicited by changes in repetitive unattended auditory events and the phonological mismatch negativity (PMN) response elicited by unexpected speech sounds in attended speech inputs served as the main indicators of cortical processes. Changes in speech sounds elicited the MMNm, the magnetic equivalent of the electric MMN, that differed in generator loci and strength from those elicited by comparable changes in non-speech sounds, suggesting intra- and interhemispheric specialisation in the processing of speech and non-speech sounds at an early automatic processing level. This neuronal specialisation for the mother tongue was also reflected in the more efficient formation of stimulus representations in auditory sensory memory for typical native-language speech sounds compared with those formed for unfamiliar, non-prototype speech sounds and simple tones. Further, adding a speech or non-speech sound context to syllable changes was found to modulate the MMNm strength differently in the left and right hemispheres. Following the acoustic-phonetic processing of speech input, phonological effort related to the selection of possible lexical (word) candidates was linked with distinct left-hemisphere neuronal populations. In summary, the results suggest functional specialisation in the neuronal substrates underlying different levels of speech processing. 
Subsequently, plasticity of the brain's mature linguistic system was investigated in adults, in whom representations for an aurally-mediated communication system, Morse code, were found to develop within the same hemisphere where representations for the native-language speech sounds were already located. Finally, recording and localization of the MMNm response to changes in speech sounds was successfully accomplished in newborn infants, encouraging future MEG investigations on, for example, the state of neuronal specialisation at birth.

Relevance: 20.00%

Abstract:

Autism and Asperger syndrome (AS) are neurodevelopmental disorders characterised by deficient social and communication skills, as well as restricted, repetitive patterns of behaviour. The language development in individuals with autism is significantly delayed and deficient, whereas in individuals with AS, the structural aspects of language develop quite normally. Both groups, however, have semantic-pragmatic language deficits. The present thesis investigated auditory processing in individuals with autism and AS. In particular, the discrimination of and orienting to speech and non-speech sounds was studied, as well as the abstraction of invariant sound features from speech-sound input. Altogether five studies were conducted with auditory event-related brain potentials (ERP); two studies also included a behavioural sound-identification task. In three studies, the subjects were children with autism, in one study children with AS, and in one study adults with AS. In children with autism, even the early stages of sound encoding were deficient. In addition, these children had altered sound-discrimination processes characterised by enhanced spectral but deficient temporal discrimination. The enhanced pitch discrimination may partly explain the auditory hypersensitivity common in autism, and it may compromise the filtering of relevant auditory information from irrelevant information. Indeed, it was found that when sound discrimination required abstracting invariant features from varying input, children with autism maintained their superiority in pitch processing, but lost it in vowel processing. Finally, involuntary orienting to sound changes was deficient in children with autism in particular with respect to speech sounds. This finding is in agreement with previous studies on autism suggesting deficits in orienting to socially relevant stimuli. In contrast to children with autism, the early stages of sound encoding were fairly unimpaired in children with AS. 
However, sound discrimination and orienting were rather similarly altered in these children as in those with autism, suggesting correspondences in the auditory phenotype in these two disorders which belong to the same continuum. Unlike children with AS, adults with AS showed enhanced processing of duration changes, suggesting developmental changes in auditory processing in this disorder.

Relevance: 20.00%

Abstract:

Data on the influence of unilateral vocal fold paralysis on breathing, especially other than information obtained by spirometry, are relatively scarce. Even less is known about the effect of its treatment by vocal fold medialization. Consequently, there was a need to study the issue by combining multiple instruments capable of assessing airflow dynamics and voice. This need was emphasized by a recently developed medialization technique, autologous fascia injection; its effects on breathing have not previously been investigated. A cohort of ten patients with unilateral vocal fold paralysis was studied before and after autologous fascia injection by using flow-volume spirometry, body plethysmography and acoustic analysis of breathing and voice. Preoperative results were compared with those of ten healthy controls. A second cohort of 11 subjects with unilateral vocal fold paralysis was studied pre- and postoperatively by using flow-volume spirometry, impulse oscillometry, acoustic analysis of voice, voice handicap index and subjective assessment of dyspnoea. Preoperative peak inspiratory flow and specific airway conductance were significantly lower and airway resistance was significantly higher in the patients than in the healthy controls (78% vs. 107%, 73% vs. 116% and 182% vs. 125% of predicted; p = 0.004, p = 0.004 and p = 0.026, respectively). Patients had a higher root mean square of spectral power of tracheal sounds than controls, and three of them had wheezes as opposed to no wheezing in healthy subjects. Autologous fascia injection significantly improved acoustic parameters of the voice in both cohorts and voice handicap index in the latter cohort, indicating that this procedure successfully improved voice in unilateral vocal fold paralysis. 
Peak inspiratory flow decreased significantly as a consequence of this procedure (from 4.54 ± 1.68 l to 4.21 ± 1.26 l, p = 0.03, in pooled data of both cohorts), but no change occurred in the other variables of flow-volume spirometry, body-plethysmography and impulse oscillometry. Eight of the ten patients studied by acoustic analysis of breathing had wheezes after vocal fold medialization compared with only three patients before the procedure, and the numbers of wheezes per recorded inspirium and expirium increased significantly (from 0.02 to 0.42 and from 0.03 to 0.36; p = 0.028 and p = 0.043, respectively). In conclusion, unilateral vocal fold paralysis was observed to disturb forced breathing and also to cause some signs of disturbed tidal breathing. Findings of flow volume spirometry were consistent with variable extra-thoracic obstruction. Vocal fold medialization by autologous fascia injection improved the quality of the voice in patients with unilateral vocal fold paralysis, but also decreased peak inspiratory flow and induced wheezing during tidal breathing. However, these airflow changes did not appear to cause significant symptoms in patients.

Relevance: 20.00%

Abstract:

Speech has both auditory and visual components (heard speech sounds and seen articulatory gestures). During all perception, selective attention facilitates efficient information processing and enables concentration on high-priority stimuli. Auditory and visual sensory systems interact at multiple processing levels during speech perception and, further, the classical motor speech regions seem also to participate in speech perception. Auditory, visual, and motor-articulatory processes may thus work in parallel during speech perception, their use possibly depending on the information available and the individual characteristics of the observer. Because of their subtle speech perception difficulties possibly stemming from disturbances at elemental levels of sensory processing, dyslexic readers may rely more on motor-articulatory speech perception strategies than do fluent readers. This thesis aimed to investigate the neural mechanisms of speech perception and selective attention in fluent and dyslexic readers. We conducted four functional magnetic resonance imaging experiments, during which subjects perceived articulatory gestures, speech sounds, and other auditory and visual stimuli. Gradient echo-planar images depicting blood oxygenation level-dependent contrast were acquired during stimulus presentation to indirectly measure brain hemodynamic activation. Lip-reading activated the primary auditory cortex, and selective attention to visual speech gestures enhanced activity within the left secondary auditory cortex. Attention to non-speech sounds enhanced auditory cortex activity bilaterally; this effect showed modulation by sound presentation rate. 
A comparison between fluent and dyslexic readers' brain hemodynamic activity during audiovisual speech perception revealed stronger activation of predominantly motor speech areas in dyslexic readers during a contrast test that allowed exploration of the processing of phonetic features extracted from auditory and visual speech. The results show that visual speech perception modulates hemodynamic activity within auditory cortex areas once considered unimodal, and suggest that the left secondary auditory cortex specifically participates in extracting the linguistic content of seen articulatory gestures. They are strong evidence for the importance of attention as a modulator of auditory cortex function during both sound processing and visual speech perception, and point out the nature of attention as an interactive process (influenced by stimulus-driven effects). Further, they suggest heightened reliance on motor-articulatory and visual speech perception strategies among dyslexic readers, possibly compensating for their auditory speech perception difficulties.

Relevance: 20.00%

Abstract:

Limited data exist on cervical auscultation (CA) sounds in normal swallows of various food and fluid textures during the transitional feeding period of 4–36 months. This study documents the acoustic and perceptual parameters of swallowing sounds in healthy children aged 4–36 months over a range of food and fluid consistencies.

Relevance: 20.00%

Abstract:

A rain forest dusk chorus consists of a large number of individuals of acoustically communicating species signaling at the same time. How different species achieve effective intra-specific communication in this complex and noisy acoustic environment is not well understood. In this study we examined acoustic masking interference in an assemblage of rain forest crickets and katydids. We used signal structures and spacing of signalers to estimate temporal, spectral and active space overlap between species. We then examined these overlaps for evidence of strategies of masking avoidance in the assemblage: we asked whether species whose signals have high temporal or spectral overlap avoid calling together. Whereas we found evidence that species with high temporal overlap may avoid calling together, there was no relation between spectral overlap and calling activity. There was also no correlation between the spectral and temporal overlaps of the signals of different species. In addition, we found little evidence that species calling in the understorey actively use spacing to minimize acoustic overlap. Increasing call intensity and tuning receivers, however, emerged as powerful strategies to minimize acoustic overlap. Effective acoustic overlaps were on average close to zero for most individuals in natural, multispecies choruses, even in the absence of behavioral avoidance mechanisms such as inhibition of calling or active spacing. Thus, call temporal structure, intensity and frequency together provide sufficient parameter space for several species to call together yet communicate effectively with little interference in the apparent cacophony of a rain forest dusk chorus.
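A spectral-overlap measure of the kind the study estimates can be illustrated with a minimal sketch: the shared bandwidth between two species' call bands, expressed as a fraction of the narrower band. The frequency bands below are invented for illustration and are not the study's data.

```python
def band_overlap(a, b):
    """Fraction of the narrower frequency band shared with the other.

    a and b are (low, high) frequency intervals in kHz.
    Returns 0.0 when the two call bands do not intersect.
    """
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if hi <= lo:
        return 0.0
    return (hi - lo) / min(a[1] - a[0], b[1] - b[0])

# Hypothetical call frequency bands (kHz) for three signalers
bands = {
    "cricket_A": (4.0, 6.0),
    "cricket_B": (5.0, 7.0),
    "katydid_C": (12.0, 18.0),
}

print(band_overlap(bands["cricket_A"], bands["cricket_B"]))  # 0.5
print(band_overlap(bands["cricket_A"], bands["katydid_C"]))  # 0.0
```

The study's finding that receiver tuning reduces effective overlap corresponds here to a receiver weighting only the part of the band it is sensitive to, shrinking the effective intersection further.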

Relevance: 20.00%

Abstract:

Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: sentences ($\sim 1$ s); phonemes ($\sim 10^{-1}$ s); glottal pulses ($\sim 10^{-2}$ s); and formants ($\sim 10^{-3}$ s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is, however, a discord: current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, and the longer structures are ignored. The reason for this is two-fold. Firstly, it is a difficult technical problem to construct an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract short temporal information) and for long duration (to extract long temporal information). The contribution of this work is to develop a new statistical model for natural sounds that captures structure across a wide range of time-scales, and to provide efficient learning and inference algorithms. We demonstrate the success of this approach on a missing-data task.
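The nested time-scales described above can be made concrete with a toy synthetic signal. This is an illustration of the problem, not the thesis's actual model: slow and fast envelopes (loosely analogous to sentence- and phoneme-scale structure) modulate a faster oscillation at roughly the glottal-pulse scale.

```python
import numpy as np

fs = 8000                    # sample rate (Hz)
t = np.arange(2 * fs) / fs   # two seconds of time samples

# Three nested time-scales, a crude analogue of speech structure:
sentence = 0.5 * (1 + np.sin(2 * np.pi * 0.5 * t))   # ~1 s envelope
phoneme = 0.5 * (1 + np.sin(2 * np.pi * 8.0 * t))    # ~10^-1 s envelope
pulses = np.sin(2 * np.pi * 100.0 * t)               # ~10^-2 s oscillation

# Shorter-scale structure rides on the longer-scale envelopes
signal = sentence * phoneme * pulses
```

A model of the kind the abstract describes must infer the slow envelopes from the composite signal alone, which is exactly why it must process data at high resolution and over long durations simultaneously.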