934 results for Sounds(waterways)
Abstract:
Humpback whale ‘‘social sounds’’ appear to be used in communication when whales interact, but they have received little study in comparison to the song. During experiments as part of the Humpback whale Acoustics Research Collaboration (HARC), whales migrating past the study site on the east coast of Australia produced a wide range of social sounds. Whales were tracked visually using a theodolite, and singers were tracked acoustically using an array of five widely spaced hydrophones. Source levels of social sounds were calculated from the received level of the sounds, corrected for measured propagation loss. Playbacks of social sounds were made using a J11 transducer, and the consequent reactions were recorded in terms of the change in direction of the migrating whales in relation to the playback position. In one playback, a DTAG was placed on a female with calf. Playback of social sounds resulted in significant changes in the course of the migrating whales, in some cases towards the transducer and in others away from it. From the estimates of source levels it is possible to assess the effectiveness of the playback and the range of influence of social sounds. [Work supported by ONR and DSTO.]
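The source-level estimate described above follows the standard passive relation: source level equals received level plus transmission (propagation) loss, all in dB. A minimal sketch of that arithmetic (the method name and example values are illustrative, not from the study):

```java
// Sketch of the passive source-level relation: SL = RL + TL,
// where RL is the received level at the hydrophone (dB re 1 uPa)
// and TL is the measured propagation loss (dB) along the path.
public class SourceLevel {
    static double sourceLevel(double receivedLevelDb, double transmissionLossDb) {
        // Adding the loss back onto the received level recovers the
        // level at the source (1 m reference distance by convention).
        return receivedLevelDb + transmissionLossDb;
    }

    public static void main(String[] args) {
        // Illustrative values only: a sound received at 110 dB after
        // 60 dB of measured propagation loss implies a 170 dB source.
        System.out.println(sourceLevel(110.0, 60.0)); // prints 170.0
    }
}
```

Measuring TL empirically, as done here, avoids relying on idealized spherical or cylindrical spreading models.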
Abstract:
Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
Cetaceans are aquatic mammals that rely primarily on sound for most daily tasks. A compendium of sounds is emitted for orientation, prey detection, and predator avoidance, and to communicate. Communicative sounds are among the most studied Cetacean signals, particularly those referred to as tonal sounds. Because tonal sounds have been studied especially well in social dolphins, it has been assumed these sounds evolved as a social adaptation. However, whistles have been reported in ‘solitary’ species and have been secondarily lost three times in social lineages. Clearly, therefore, it is necessary to examine closely the association, if any, between whistles and sociality instead of merely assuming it. Several hypotheses have been proposed to explain the evolutionary history of Cetacean tonal sounds. The main goal of this dissertation is to cast light on the evolutionary history of tonal sounds by testing these hypotheses using a combination of comparative phylogenetic and field methods. This dissertation provides the first species-level phylogeny of Cetacea and phylogenetic tests of evolutionary hypotheses of cetacean communicative signals. The evolution of tonal sounds is complex in that it has likely been shaped by a combination of factors that may influence different aspects of their acoustical structure. At the inter-specific level, these results suggest that only tonal sound minimum frequency is constrained by body size. Group size also influences tonal sound minimum frequency: species that live in large groups tend to produce higher frequency tonal sounds. The evolutionary history of tonal sounds and sociality may be intertwined, but in a complex manner, rejecting simplistic views such as the hypothesis that tonal sounds evolved ‘for’ social communication in dolphins. Levels of social and tonal sound complexity nevertheless correlate, indicating the importance of tonal sounds in social communication.
At the intraspecific level, tonal sound variation in frequency and temporal parameters may be a product of genetic isolation and local levels of underwater noise. This dissertation provides one of the first insights into the evolution of Cetacean tonal sounds in a phylogenetic context, and points out key species where future studies would be valuable to enrich our understanding of other factors also playing a role in tonal sound evolution.
Abstract:
The present study investigated the development of sensitivity to temporal synchrony between sounds of impact and pauses in the movement of an object by infants of 2½, 4 and 6 months of age. Ninety infants were tested across four experiments with side-by-side videos of a red and white square and a blue and yellow triangle, along with a centralized soundtrack which was synchronized with only one of the films. This preference phase was then followed by a search phase, where the two films were accompanied by intermittent bursts of the soundtrack from each object. The 2½-month-olds showed no evidence of matching films and soundtracks on the basis of synchrony; however, 4-month-olds looked more on the second block of trials to the object which paused when the sound occurred and directed more first looks during the preference phase to the matching object. Six-month-olds demonstrated significantly more first looks to the mismatched object during the search phase only. These results suggest that infants relate impact sounds with synchronous pauses in continuous motion by the age of four months.
Abstract:
Aims and Scope: No sound class requires so much basic knowledge of phonology, acoustics, aerodynamics, and speech production as obstruents (turbulent sounds) do. This book is intended to bridge a gap by introducing the reader to the world of obstruents from a multidisciplinary perspective. It starts with a review of typological processes, continues with various contributions to the phonetics-phonology interface, explains the realization of specific turbulent sounds in endangered languages, and finishes with surveys of obstruents from a sociophonetic, physical and pathological perspective.
Abstract:
In the past, canals were developed and some rivers were heavily altered, driven by the need for good transportation infrastructure. Major investments were made in navigation locks, weirs and artificial embankments, and many of these assets are now reaching the end of their technical lifetime. Since then, integrated water resource management (IWRM) has emerged as a concept for managing and developing water bodies in general. Two pressing problems arise from these developments: (1) major reinvestment is needed in order to maintain the transportation function of these waterways, and (2) it is not clear how the implementation of IWRM can be brought into harmony with such reinvestment. This paper aims to illustrate the problems in capital-intensive parts of waterway systems, and argues for exploring value-driven solutions that rely on the inclusion of multiple values, thus solving both funding problems and stakeholder conflicts. The focus on value in cooperative strategies is key to defining viable implementation strategies for waterway projects.
Abstract:
Research in various fields has shown that students benefit from teacher action demonstrations during instruction, establishing the need to better understand the effectiveness of different demonstration types across student proficiency levels. This study centres upon a piano learning and teaching environment in which beginner and intermediate piano students (N=48) learning to perform a specific type of staccato were submitted to three different (group-exclusive) teaching conditions: audio-only demonstration of the musical task; observation of the teacher's action demonstration followed by student imitation (blocked-observation); and observation of the teacher's action demonstration whilst alternating imitation of the task with the teacher's performance (interleaved-observation). Learning was measured in relation to students' range of wrist amplitude (RWA) and ratio of sound and inter-sound duration (SIDR) before, during and after training. Observation and imitation of the teacher's action demonstrations had a beneficial effect on students' staccato knowledge retention at different times after training: students submitted to interleaved-observation presented significantly shorter note duration and larger wrist rotation, and as such were more proficient at the learned technique in each of the lesson and retention tests than students in the other learning conditions. There were no significant differences in performance or retention for students of different proficiency levels. These findings have relevant implications for instrumental music pedagogy and other contexts where embodied action is an essential aspect of the learning process.
Abstract:
The MARS (Media Asset Retrieval System) Project is the collaborative effort of public broadcasters, libraries and schools in the Puget Sound region to create a digital online resource that provides access to content produced by public broadcasters via the public libraries.

Convergence Consortium
The Convergence Consortium is a model for community collaboration, including organizations such as public broadcasters, libraries, museums, and schools in the Puget Sound region, to assess the needs of their constituents and pool resources to develop solutions to meet those needs. Specifically, the archives of public broadcasters have been identified as significant resources for the local communities and nationally. These resources can be accessed on the broadcasters' websites and through libraries, used by schools, and integrated with text and photographic archives from other partners.

MARS’ goal
Create an online resource that provides effective access to the content produced locally by KCTS (Seattle PBS affiliate) and KUOW (Seattle NPR affiliate). The broadcasts will be made searchable using the CPB Metadata Element Set (under development) and controlled vocabularies (to be developed). This will ensure a user-friendly search and navigation mechanism and user satisfaction. Furthermore, the resource can search the local public library's catalog concurrently and provide the user with relevant TV material, radio material, and books on a given subject. The ultimate goal is to produce a model that can be used in cities around the country. The current phase of the project assesses the community's needs, analyzes the current operational systems, and makes recommendations for the design of the resource.

Deliverables
• Literature review of the issues surrounding the organization, description and representation of media assets
• Needs assessment report of internal and external stakeholders
• Profile of the systems in the area of managing and organizing media assets for public broadcasting nationwide

Activities
• Analysis of information seeking behavior
• Analysis of collaboration within the respective organizations
• Analysis of the scope and context of the proposed system
• Examining the availability of information resources and exchange of resources among users
Abstract:
The MARS (Media Asset Retrieval System) Project is a collaboration between public broadcasters, libraries and schools in the Puget Sound region to assess the needs of their constituents and pool resources to develop solutions to meet those needs. The Project's ultimate goal is to create a digital online resource that will provide access to content produced by public broadcasters and libraries. The MARS Project is funded by a grant from the Corporation for Public Broadcasting (CPB) Television Future Fund.

Convergence Consortium
The Convergence Consortium is a model for community collaboration, including representatives from public broadcasting, libraries and schools in the Puget Sound region. They meet regularly to consider collaborative efforts that will be mutually beneficial to their institutions and constituents. Specifically, the archives of public broadcasters have been identified as significant resources that can be accessed through libraries and used by schools, and integrated with text and photographic archives from other partners.

Using the work-centered framework, we collected data through interviews with nine engineers and observation of their searching while they performed their regular, job-related searches on the Web. The framework was used to analyze the data on two levels: 1) the activities and organizational relationships and constraints of work domains, and 2) users' cognitive and social activities and their subjective preferences during searching.
Abstract:
Amphibian is a 10'00" musical work which explores new musical interfaces and approaches to hybridising performance practices from the popular music, electronic dance music and computer music traditions. The work is designed to be presented in a range of contexts associated with the electro-acoustic, popular and classical music traditions. The work is for two performers using two synchronised laptops, an electric guitar and a custom-designed gestural interface for vocal performers, the e-Mic (Extended Mic-stand Interface Controller). This interface was developed by one of the co-authors, Donna Hewitt. The e-Mic allows a vocal performer to manipulate the voice in real time through the capture of physical gestures via an array of sensors (pressure, distance, tilt) along with ribbon controllers and an X-Y joystick microphone mount. Performance data are then sent to a computer running audio-processing software, which is used to transform the audio signal from the microphone. In this work, data are also exchanged between performers via a local wireless network, allowing performers to work with shared data streams. The duo employs the gestural conventions of guitarist and singer (i.e. 'a band' in a popular music context), but transforms these sounds and gestures into new digital music. The gestural language of popular music is deliberately subverted and taken into a new context. The piece thus explores the nexus between the sonic and performative practices of electro-acoustic music and intelligent electronic dance music ('idm'). This work was situated in the research fields of new musical interfacing, interaction design, experimental music composition and performance. The contexts in which the research was conducted were live musical performance and studio music production. The work investigated new methods for musical interfacing, performance data mapping, hybrid performance and compositional practices in electronic music. The research methodology was practice-led.
New insights were gained from the iterative experimental workshopping of gestural inputs, musical data mapping, inter-performer data exchange, software patch design, and data and audio processing chains. In respect of interfacing, there were innovations in the design and implementation of a novel sensor-based gestural interface for singers, the e-Mic, one of the few existing gestural controllers for singers. This work explored the compositional potential of sharing real-time performance data between performers and deployed novel methods for inter-performer data exchange and mapping. As regards stylistic and performance innovation, the work explored and demonstrated an approach to the hybridisation of the gestural and sonic language of popular music with recent ‘post-digital’ approaches to laptop-based experimental music. The development of the work was supported by an Australia Council Grant. Research findings have been disseminated via a range of international conference publications, recordings, radio interviews (ABC Classic FM), broadcasts, and performances at international events and festivals. The work was curated into the major Australian international festival, Liquid Architecture, and was selected by an international music jury (through blind peer review) for presentation at the International Computer Music Conference in Belfast, N. Ireland.
Abstract:
Sleeper is an 18'00" musical work for live performer and laptop computer which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions. These include the 2003 International Computer Music Conference (Singapore) where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit, the perceptual ambiguities of de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically the work investigates Paul Virilio’s theory that technology ‘collapses space’ via the relationship of technology to speed. Technically this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. 
The work explores, tests and extends the music perception theories of ‘reduced listening’ (Schaeffer, 1967) and ‘surrogacy’ (Smalley, 1997), by demonstrating how, through specific audio processing techniques, sounds may be shifted away from ‘causal’ listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time and frequency domain processing techniques may be used to achieve this shift.
Abstract:
The SoundCipher software library provides an easy way to create music in the Processing development environment. With the SoundCipher library added to Processing, you can write software programs that make music to go along with your graphics, and you can add sounds to enhance your Processing animations or games. SoundCipher provides an easy interface for playing 'notes' on the JavaSound synthesizer, for playback of audio files, and for communicating via MIDI. It provides accurate scheduling and allows events to be organised in musical time, using beats and tempo. It uses a 'score' metaphor that allows the construction of simple or complex musical arrangements. SoundCipher is designed to facilitate the basics of algorithmic music and interactive sound design as well as to provide a platform for sophisticated computational music. It allows integration with the Minim library when more sophisticated audio and synthesis functionality is required, and with the oscP5 library for communicating via Open Sound Control.
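A minimal Processing sketch in the style of the SoundCipher introductory examples gives a feel for the "playing notes" interface described above (this assumes the SoundCipher library is installed in Processing; class and method names follow its published API, and the note values are illustrative):

```processing
// Minimal SoundCipher sketch: play a note on the JavaSound synthesizer.
import arb.soundcipher.*;

SoundCipher sc;

void setup() {
  sc = new SoundCipher(this);
  // playNote(pitch, dynamic, duration):
  // MIDI pitch 60 (middle C), dynamic 100, for 1 beat of musical time.
  sc.playNote(60, 100, 1.0);
}
```

Scheduling in beats rather than milliseconds is what lets the library's 'score' metaphor stay tempo-independent: changing the tempo rescales all event times without editing the score.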