70 results for Sounds.
at Queensland University of Technology - ePrints Archive
Abstract:
Fundamental Sounds was a live, intercultural and multidisciplinary concert that presented a new synthesis of music, performance and visual arts, addressing the imperative of sustainability in a new and evocative form. The outcome was a ninety-minute concert, performed at a major concert hall venue, involving four live musicians, numerous performers and large-scale projections. The images and the concert were scripted in three key phases that spoke to three epochs of human evolution identified by ontological designer and futurist Tony Fry – ‘Pre-Settlement’, ‘Settlement’ and the era that he suggests we have now entered, ‘Unsettlement’ (in mind, body and spirit). The entire work was professionally recorded for presentation on DVD and audio CD.

Fundamental Sounds achieved a new synthesis between quality performance forms and cogent critical ideas, engendering an increasingly reflective position for audiences around today’s ‘era of unsettlement’ – an epoch Fry has recognised we must now move quickly to displace through adopting fundamentally sustainable modes of being and becoming.

The concert was well attended and evoked a range of strong, reflective reactions from its audiences, who were also invited to join and participate in a subsequent ‘community of change’ initiated at that time.
Abstract:
Sounds of the Suburb was a commissioned public art proposal based upon a brief set by Queensland Rail for the major redevelopment of their Brunswick Street Railway Station, Fortitude Valley, Brisbane. I proposed a large-scale electronic artwork to be distributed across the glass-fronted structure of the station’s new concourse building. It was designed as a network of LED-based ‘tracking’, along which would travel electronically animated ‘trains’ of text synchronised to the actual train timetables. Each message packet moved endlessly through a complex spatial network of ‘tracks’ and ‘stations’ threaded inside, outside and through the concourse. The design was underpinned by a large-scale image of sound waves etched onto the architecture’s glass and was accompanied by two inset monitors, each presenting ghosted images of passenger movements within the concourse, time-delay recorded and then cross-combined in real time to form new composites.

Each moving, reprogrammable phrase was conceived as a ‘train of thought’ and ostensibly contained an idea or concept about the popular cultures surrounding contemporary music – thereby meeting the brief that the work should speak to the diverse musical cultures central to Fortitude Valley’s image as an entertainment hub. These cultural ‘memes’, gathered from both passengers and the music press, were situated alongside quotes from philosophies of networking, speed and digital ecologies. These texts would continually propagate, replicate and cross-fertilise as they moved throughout the ‘network’, thereby writing a constantly evolving ‘textual soundscape’ of that place. This idea was further cemented through the pace, scale and rhythm of passenger movements continually recorded and re-presented on the smaller screens.
Abstract:
The creative practice: the adaptation of the picture book The Empty City (Megarrity/Oxlade, Hachette 2007) into an innovative, interdisciplinary performance for children which combines live performance, music, projected animation and performing objects. The researcher, in the combined roles of writer/composer, proposes deliberate experiments in music, narrative and emotion in the various drafts of the adaptation, and tests them in process and performance product. A particular method of composing music for live performance is tested against the emergent needs of a collaborative, intermedial process. The unpredictable site of research means that this project looks to address both pre-determined and emerging points of inquiry. This analysis (directed by audience reception) finds that critical incidents of intermediality between music, narrative, action and emotion translate directly into highlights of the performance.
Abstract:
These wordless songs were composed as music first, and soundtrack second. There is a difference. A soundtrack will always be connected with whatever it is accompanying. Music doesn’t necessarily need to reference anything else. The Empty City transformed a picture book into a non-verbal performance combining the live and animated. Without spoken words, the show would dance on the dangerous intersection of music, image and action. In both theatre and film (and this production drew on both traditions), soundtrack and music are often added at the end, when everything has been pre-determined – a passive, responsive mode for such a powerful artform. It’s literally added in ‘post’. In The Empty City, music was present from its inception and grew with the show. It was active in process and product. It frequently led rehearsals and shaped other key decisions in virtual and live performance. Rather than tailor-making music for pre-determined moments, I experimented with a flock of small, independent compositions created without specific reference to narrative. I was interested in seeing how they flew and where they roosted, rather than having them born and raised in (narrative) captivity. The sonic palette is largely acoustic, incorporating ukulele and prepared piano, supported by a range of other elements tending towards electronica. Eventually more than seventy pieces of music were made for this show, twice the number used. These pieces were then placed in relation to the emerging scenes, then adapted in duration, texture and progression to develop a relationship with each scene. In this way, music (even when it’s synced) has a conversation with a performance, an exchange that may result in surprise rather than fulfilment of expectation. Leitmotif emerged from loops and layers, as the pieces of music ‘conversed’ with each other, rather than being premeditated and imposed.
Nineteen of these tracks are compiled for this release, which finds the compositions (which progressed through many versions) poised at the moment between their fullest iteration as ‘music’ and their editing and full incorporation into a synchronised soundtrack. They are released as they began: as ‘music-alone’ (Kivy). In picture-book writing, the mutual interplay of text and image is sometimes referred to as interanimation, and this is the kind of symbiosis this project sought in the creation of the soundtrack. Reviewers noted the important role of the soundtrack in two separate productions of The Empty City: “The original score…takes centre stage” (Borhani, 2013); “…swept up in its repetition of sounds and images, like a Bach fugue” (Zampatti, 2013).
Abstract:
Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a wide body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unknown. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes about living and manmade objects relative to a high-level auditory perceptual baseline to investigate the differential patterns of response to these contrasting types of object-related attributes, whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report here a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
Abstract:
A travel article about Nova Scotia, and the area's annual Celtic music festival. I ARRIVED in Cape Breton on the occasion of the Fibre Festival, run not only by the South Haven Guild of Weavers but also the Baddeck Quilters Guild. And yet I might not have noticed that it was on, had it not been for a car, shrouded entirely by a quilt cover, that was parked outside the Volunteer Fire Department Hall. I was on my way to the Alexander Graham Bell Museum a little further along Baddeck's main street. But I stopped, for who wouldn't stop to look at the various fibres of Cape Breton. The hall had been divided between weavers and quilters. Naturally, I left hoping that one day this ancient divide might be healed...
Abstract:
Limited data exist on cervical auscultation (CA) sounds in normal swallows of various food and fluid textures during the transitional feeding period of 4–36 months. This study documents the acoustic and perceptual parameters of swallowing sounds in healthy children aged 4–36 months over a range of food and fluid consistencies.
Abstract:
Amphibian is a 10’00’’ musical work which explores new musical interfaces and approaches to hybridising performance practices from the popular music, electronic dance music and computer music traditions. The work is designed to be presented in a range of contexts associated with the electro-acoustic, popular and classical music traditions. The work is for two performers using two synchronised laptops, an electric guitar and a custom-designed gestural interface for vocal performers - the e-Mic (Extended Mic-stand Interface Controller). This interface was developed by one of the co-authors, Donna Hewitt. The e-Mic allows a vocal performer to manipulate the voice in real time through the capture of physical gestures via an array of sensors - pressure, distance, tilt - along with ribbon controllers and an X-Y joystick microphone mount. Performance data are then sent to a computer, running audio-processing software, which is used to transform the audio signal from the microphone. In this work, data are also exchanged between performers via a local wireless network, allowing performers to work with shared data streams. The duo employs the gestural conventions of guitarist and singer (i.e. 'a band' in a popular music context), but transforms these sounds and gestures into new digital music. The gestural language of popular music is deliberately subverted and taken into a new context. The piece thus explores the nexus between the sonic and performative practices of electro-acoustic music and intelligent electronic dance music (‘idm’). This work was situated in the research fields of new musical interfacing, interaction design, experimental music composition and performance. The contexts in which the research was conducted were live musical performance and studio music production. The work investigated new methods for musical interfacing, performance data mapping, hybrid performance and compositional practices in electronic music. The research methodology was practice-led.
New insights were gained from the iterative experimental workshopping of gestural inputs, musical data mapping, inter-performer data exchange, software patch design, and data and audio processing chains. In respect of interfacing, there were innovations in the design and implementation of a novel sensor-based gestural interface for singers, the e-Mic, one of the few existing gestural controllers for singers. This work explored the compositional potential of sharing real-time performance data between performers and deployed novel methods for inter-performer data exchange and mapping. As regards stylistic and performance innovation, the work explored and demonstrated an approach to the hybridisation of the gestural and sonic language of popular music with recent ‘post-digital’ approaches to laptop-based experimental music. The development of the work was supported by an Australia Council Grant. Research findings have been disseminated via a range of international conference publications, recordings, radio interviews (ABC Classic FM), broadcasts, and performances at international events and festivals. The work was curated into the major Australian international festival, Liquid Architecture, and was selected by an international music jury (through blind peer review) for presentation at the International Computer Music Conference in Belfast, N. Ireland.
Abstract:
Sleeper is an 18'00" musical work for live performer and laptop computer which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions. These include the 2003 International Computer Music Conference (Singapore) where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit, the perceptual ambiguities of de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically the work investigates Paul Virilio’s theory that technology ‘collapses space’ via the relationship of technology to speed. Technically this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. 
The work explores, tests and extends the music perception theories of ‘reduced listening’ (Schaeffer, 1967) and ‘surrogacy’ (Smalley, 1997) by demonstrating how, through specific audio processing techniques, sounds may be shifted away from ‘causal’ listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time- and frequency-domain processing techniques may be used to achieve this shift.
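One canonical frequency-domain technique in this class is phase randomisation, which preserves a sound's spectral colour while destroying the transient cues that support causal listening. The following sketch is an assumed, illustrative example of this class of techniques, not the work's actual processing network:

```python
# Illustrative sketch: a frequency-domain transformation that obscures a
# sound's causal identity while preserving its magnitude spectrum.
# (An assumed example of the class of techniques described, not the
# work's actual processing network.)
import numpy as np

def phase_randomise(signal, seed=0):
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    magnitude = np.abs(spectrum)
    random_phase = rng.uniform(0, 2 * np.pi, magnitude.shape)
    # keep the DC and Nyquist bins real-valued so the output is real
    random_phase[0] = 0.0
    if signal.size % 2 == 0:
        random_phase[-1] = 0.0
    return np.fft.irfft(magnitude * np.exp(1j * random_phase), n=signal.size)
```

Because only the phases change, the output has exactly the same magnitude spectrum as the input, yet its temporal envelope (and hence its recognisable 'cause') is scrambled.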
Abstract:
The SoundCipher software library provides an easy way to create music in the Processing development environment. With the SoundCipher library added to Processing, you can write software programs that make music to go along with your graphics, and you can add sounds to enhance your Processing animations or games. SoundCipher provides an easy interface for playing 'notes' on the JavaSound synthesizer, for playback of audio files, and for communicating via MIDI. It provides accurate scheduling and allows events to be organised in musical time, using beats and tempo. It uses a 'score' metaphor that allows the construction of simple or complex musical arrangements. SoundCipher is designed to facilitate the basics of algorithmic music and interactive sound design as well as providing a platform for sophisticated computational music. It allows integration with the Minim library when more sophisticated audio and synthesis functionality is required, and with the oscP5 library for communicating via Open Sound Control.
Abstract:
This paper explores a method of comparative analysis and classification of data through perceived design affordances. Included is discussion about the musical potential of data forms that are derived through eco-structural analysis of musical features inherent in audio recordings of natural sounds. A system of classification of these forms is proposed based on their structural contours. The classifications include four primitive types: steady, iterative, unstable and impulse. The classification extends previous taxonomies used to describe the gestural morphology of sound. The methods presented are used to provide compositional support for eco-structuralism.
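To make the four primitive types concrete, a contour classifier of this general kind could be sketched as follows. The features and thresholds here are illustrative assumptions, not the paper's published method:

```python
# Illustrative sketch: classifying an amplitude envelope into the four
# primitive contour types named in the abstract (steady, iterative,
# unstable, impulse). All thresholds and features are assumptions for
# illustration, not the authors' published classification method.
import numpy as np

def classify_contour(envelope):
    env = np.asarray(envelope, dtype=float)
    env = env / (env.max() + 1e-12)  # normalise to [0, 1]

    # 'impulse': energy concentrated in a brief burst
    active = env > 0.1
    if active.mean() < 0.1:
        return "impulse"

    # 'steady': little variation around the mean level
    if env.std() < 0.05:
        return "steady"

    # 'iterative': strongly periodic contour (autocorrelation peak)
    centred = env - env.mean()
    ac = np.correlate(centred, centred, mode="full")[env.size - 1:]
    ac = ac / (ac[0] + 1e-12)
    if ac[1:].max() > 0.5:
        return "iterative"

    # otherwise: aperiodic, highly varying
    return "unstable"
```

For example, a flat envelope would be labelled "steady", a short burst in silence "impulse", and a slowly pulsing envelope "iterative" under these assumed thresholds.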
Abstract:
Network Jamming systems provide real-time collaborative media performance experiences for novice or inexperienced users. In this paper we will outline the theoretical and developmental drivers for our Network Jamming software, called jam2jam. jam2jam employs generative algorithmic techniques with particular implications for accessibility and learning. We will describe how theories of engagement have directed the design and development of jam2jam and show how iterative testing cycles in numerous international sites have informed the evolution of the system and its educational potential. Generative media systems present an opportunity for users to leverage computational systems to make sense of complex media forms through interactive and collaborative experiences. Generative music and art are relatively new phenomena that use procedural invention as a creative technique to produce music and visual media. These kinds of systems present a range of affordances that can facilitate new kinds of relationships with music and media performance and production. Early systems have demonstrated the potential to provide access to collaborative ensemble experiences for users with little formal musical or artistic expertise. This presentation examines the educational affordances of these systems, evidenced by field data drawn from the Network Jamming Project. These generative performance systems enable access to a unique kind of music/media ensemble performance with very little musical/media knowledge or skill, and they further offer the possibility of unique interactive relationships with artists and creative knowledge through collaborative performance. Through the process of observing, documenting and analysing young people interacting with the generative media software jam2jam, a theory of meaningful engagement has emerged from the need to describe and codify how users experience creative engagement with music/media performance and the locations of meaning.
In this research we observed that the musical metaphors and practices of ‘ensemble’ or collaborative performance and improvisation as a creative process for experienced musicians can be made available to novice users. The relational meanings of these musical practices afford access to high-level personal, social and cultural experiences. Within the creative process of collaborative improvisation lies a series of modes of creative engagement that move from appreciation through exploration, selection and direction toward embodiment. The expressive sounds and visions made in real time by collaborating improvisers are immediate and compelling. Generative media systems let novices access these experiences with simple interfaces that allow them to make highly professional and expressive sonic and visual content simply by using gestures and being attentive and perceptive to their collaborators. These kinds of experiences present the potential for highly complex expressive interactions with sound and media as a performance. Evidence that has emerged from this research suggests that collaborative performance with generative media is transformative and meaningful. In this presentation we draw out these ideas around an emerging theory of meaningful engagement that has evolved from the development of network jamming software. Primarily we focus on demonstrating how these experiences might lead to understandings that may be of educational and social benefit.