962 results for Sound recordings
Abstract:
Environmental characteristics can modify a species' acoustics according to habitat, time of day and time of year. This study therefore investigated the relationships between seasons, tidal state, the daily tidal cycle, time of day and habitat and the sound emissions of S. guianensis. Sound recordings were made in Curral's Cove and in the Guaraíras Lagoon Complex (CLG), in the municipality of Tibau do Sul/RN. Whistles were emitted at lower frequencies during the rainy season and spring tides, while click frequencies were higher; whistles, clicks and calls had higher frequencies during ebb tide. These modifications may be related to turbidity and prey availability. Whistle and click occurrence was higher at night, probably because luminosity is lower. Furthermore, the reduction in whistle and click frequencies overnight allows sound to travel longer distances and compensates for limited vision, while an increase in minimum frequency appears to be needed to catch prey. The low occurrence of calls could be related to small group sizes. The acoustic changes at night may be partly influenced by light levels as well as by prey availability, which is greater in this period. Whistle frequencies and click initial frequency were higher in the CLG than in Curral's Cove, which permitted good precision; however, click central frequency was lower, which may be connected to tracking the area. Several factors may be associated with such modifications, including background noise and bottom type, among others. This study supports the hypothesis that S. guianensis shows acoustic plasticity in response to the local conditions in which the species is embedded and adapts to environmental changes.
Abstract:
This thesis documents the creation of the "acousmatic documentary" Littorale, a musical work with an informative aim, developed by means of in situ sound recordings, excerpts from sound archives, and the testimonies of seven informants. By weaving links between the two media disciplines of acousmatic composition and documentary, the work retraces the history of an impressive corpus of folk songs collected in 1918 by the ethnologist Marius Barbeau in the coastal villages of Sainte-Anne-des-Monts and Tourelle, Haute-Gaspésie. The compositional approach thus develops along three communicating axes: highlighting pre-existing but under-exploited links between documentary and acousmatic music, field research on the song repertoire and its resurgence among the current population of Haute-Gaspésie, and the composition of the three musical movements that make up Littorale. Through the investigation of identity issues arising from the rediscovery of the repertoire and the clarification of certain historical ambiguities related to it, this alliance of two media genres aims at the emergence of an informative and socially relevant compositional approach.
Abstract:
Copyright markets, it is said, are 'winner takes all' markets favouring the interests of corporate investors over the interests of primary creators. However, little is known about popular music creators' 'lived experience' of copyright. This thesis interrogates key aspects of copyright transactions between creators and investors operating in the UK music industries, using analysis of various copyright-related documents and semi-structured interviews with creators and investors. The research found considerable variety in the types of 'deal' creators enter into and considerable divergence in the potential rewards. It was observed that new-entrant creators have little comprehension of the basic tenets of copyright, but with experience they become more 'copyright aware'. Documentary and interview evidence reveals that creators routinely assign copyright to third-party investors for the full term of copyright in sound recordings; the justification for this is questionable. An almost inevitable consequence of this asymmetry of understanding of copyright and asymmetry of bargaining power is that creators become alienated from their copyright works. The empirical evidence presented here supports historic and contemporary calls for a statutory mechanism limiting the maximum copyright assignment period to ten years.
Abstract:
Mode of access: Internet.
Abstract:
The development and recording of 10 songs for a CD to accompany DeepBlue's new live orchestra production "Who Are You", which began touring Australia and Asia in 2012.
Abstract:
Environmental monitoring has become increasingly important due to the significant impact of human activities and climate change on biodiversity. Environmental sound sources such as rain and insect vocalizations are a rich and underexploited source of information in environmental audio recordings. This paper is concerned with the classification of rain within acoustic sensor recordings. We present the novel application of a set of features for classifying environmental acoustics: acoustic entropy, the acoustic complexity index, spectral cover, and background noise. In order to improve the performance of the rain classification system, we automatically classify segments of environmental recordings into the classes of heavy rain or non-rain. A decision tree classifier is experimentally compared with other classifiers. The experimental results show that our system is effective in classifying segments of environmental audio recordings, with an accuracy of 93% for the binary classification of heavy rain/non-rain.
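As a concrete point of reference, the sketch below shows the kind of binary heavy-rain/non-rain decision-tree classification described in this abstract, using scikit-learn. It is a minimal sketch, not the authors' code: the feature values and labels are random placeholders standing in for per-segment features (acoustic entropy, acoustic complexity index, spectral cover, background noise).

```python
# Minimal sketch of heavy-rain / non-rain segment classification with a decision tree.
# Feature matrix and labels are placeholders, not the study's data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# One row per audio segment; columns are the four summary features named in the abstract:
# [acoustic_entropy, acoustic_complexity_index, spectral_cover, background_noise_dB]
X = rng.random((500, 4))                 # placeholder feature values
y = (X[:, 3] > 0.6).astype(int)          # placeholder labels: 1 = heavy rain, 0 = non-rain

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In practice the features would be computed from spectrograms of fixed-length recording segments before fitting the classifier.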
Abstract:
This thesis is concerned with the detection and prediction of rain in environmental recordings using different machine learning algorithms. The results obtained in this research will help ecologists to efficiently analyse environmental data and monitor biodiversity.
Abstract:
This paper addresses the problem of separating pitched sounds in monaural recordings. We present a novel feature for estimating the parameters of overlapping harmonics which considers the covariance of partials of pitched sounds. Sound templates are formed from the monophonic parts of the mixture recording. A match for every note is found among these templates on the basis of the covariance profile of their harmonics. The matching template for the note provides the second-order characteristics for the overlapped harmonics of the note. The algorithm is tested on instrument sounds from the RWC music database. The results clearly show that the covariance characteristics can be used to reconstruct overlapping harmonics effectively.
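As a rough illustration of the idea (not the paper's implementation), the following Python sketch matches a note against monophonic templates by the covariance profile of its partial amplitude envelopes, using only the partials that are not overlapped by another source. All function and variable names are hypothetical.

```python
# Sketch: covariance-profile matching of harmonic templates (illustrative only).
import numpy as np

def partial_envelopes(frames_mag, f0, sr, n_fft, n_partials=10):
    """Amplitude envelope of each harmonic: nearest STFT bin to k*f0 in every frame."""
    bins = np.round(np.arange(1, n_partials + 1) * f0 * n_fft / sr).astype(int)
    return frames_mag[:, bins]                     # shape (n_frames, n_partials)

def covariance_profile(envelopes):
    """Covariance matrix of the partial envelopes, normalised to unit Frobenius norm."""
    c = np.cov(envelopes.T)
    return c / (np.linalg.norm(c) + 1e-12)

def best_template(note_env, template_envs, reliable):
    """Pick the template whose covariance profile best matches the note's,
    comparing only the non-overlapped partials given by the boolean mask `reliable`."""
    prof = covariance_profile(note_env)[np.ix_(reliable, reliable)]
    dists = [np.linalg.norm(prof - covariance_profile(t)[np.ix_(reliable, reliable)])
             for t in template_envs]
    return int(np.argmin(dists))
```

The matched template's covariance structure could then be used to re-estimate the amplitudes of the harmonics that overlap with another note.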
Abstract:
Acoustic recorders were used to document black drum (Pogonias cromis) sound production during their spawning season in southwest Florida. Diel patterns of sound production were similar to those of other sciaenid fishes and demonstrated increased sound levels from the late afternoon to early evening, a period that lasted up to 12 hours during peak season. Peak sound production occurred from January through March, when water temperatures were between 18° and 22°C. Seasonal trends in sound production matched patterns of black drum reproductive readiness and spawning reported previously for populations in the Gulf of Mexico. Total acoustic energy of nightly chorus events was estimated by integrating the sound pressure amplitude over the time it remained above a threshold based on daytime background levels. Maximum chorus sound level was highly correlated with total acoustic energy and was used to quantitatively represent nightly black drum sound production. This study gives evidence that long-term passive acoustic recordings can provide information on the timing and location of black drum reproductive behavior similar to that provided by traditional, more costly methods. The methods and results have broad application for the study of many other fish species, including commercially and recreationally valuable reef fishes that produce sound in association with reproductive behavior.
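A minimal sketch of the energy measure described above might look as follows, assuming a calibrated pressure time series; the threshold factor and reference pressure are illustrative assumptions, not values from the study.

```python
# Sketch: total acoustic energy of a nightly chorus above a daytime-derived threshold.
import numpy as np

def chorus_energy(pressure, sr, daytime_background_rms, factor=2.0):
    """Integrate squared pressure (arbitrary units) over samples exceeding the threshold."""
    threshold = factor * daytime_background_rms    # illustrative threshold rule
    above = np.abs(pressure) > threshold
    return np.sum(pressure[above] ** 2) / sr       # sum of p^2 * dt during the chorus

def max_chorus_level_db(pressure, p_ref=1e-6):
    """Maximum sound level (dB re 1 µPa, assumed underwater reference) as nightly summary."""
    return 20 * np.log10(np.max(np.abs(pressure)) / p_ref)
```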
Abstract:
The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding, that when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source with a location dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that there are low-level mechanisms that may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes have a great deal of influence on the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sound as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.
Abstract:
This article documents the creation of a work by the authors based on a score written by the composer John Cage entitled 'Owenvarragh: A Belfast Circus on The Star Factory.' The article is part of a documentary portfolio in the journal which also includes a volume of the poetry created by Dowling in accordance with the instructions of the Cage score, and a series of documentary videos on the creation of the work and its first performance. Cage's score is based on his work 'Roaratorio: An Irish Circus on Finnegans Wake' (1979) and provides a set of detailed instructions for the musical realisation of a literary work. The article documents this first fully realised version of the score since Cage first produced 'Roaratorio' in 1979. The work, which was motivated by the Cage centenary year in 2012, musically realises Carson's book 'The Star Factory' (1998), a novelistic autobiography of Carson's Belfast childhood. The score required the creation of a fixed media piece based on over 300 field recordings of the sounds and places mentioned in the book, a volume of poetry created from the book which is recited to form the rhythmic spine of the work, and the arrangement of a performance including these two components along with live musical performance by the authors in collaboration with three other musicians under their direction, and a video installation created for the work. The piece has been performed three times: in association with the Sonorities 2012 Festival at Queen's University of Belfast (March 2012), at The Belfast Festival at Queen's (October 2012), and in the Rymer Auditorium of the University of York (June 2013).
Additional information:
The work which the article documents was conceived by Monaghan and Dowling, and the project was initiated by Monaghan after she received a student prize to support its development and first performance. Elements of the project will be included in her PhD dissertation, for which Dowling is a supervisor. Monaghan created the fixed media piece based on over 300 field recordings, the largest single aspect of realising Cage's score. Dowling was responsible for initiating the collaboration with Ciaran Carson, and for two other components: the creation of a volume of poetry derived from the literary work, which is recited in the performance, and the creation of, and supervision of the technical work on, a video which accompanies the piece. The co-authors consulted closely during the work on these large components from May 2011 until March 2012, when the first performance took place. The co-authors also shared in numerous other artistic and organisational aspects of the production, including the arrangement and performance of the music, musical direction to other performers, and marketing.
Abstract:
The characteristics of moving sound sources have strong implications for the listener's perception of distance and estimation of velocity. Modifications of typical sound emissions, such as those currently occurring due to the trend towards electromobility, have an impact on pedestrian safety in road traffic. Investigations of the relevant cues for the velocity and distance perception of moving sound sources are therefore of interest not only to the psychoacoustic community, but also for several applications, such as virtual reality, noise pollution and safety aspects of road traffic. This article describes a series of psychoacoustic experiments in this field. Dichotic and diotic stimuli from a set of real-life recordings of a passing passenger car and a motorcycle were presented to test subjects, who were asked to determine the velocity of the object and its minimal distance from the listener. The results of these psychoacoustic experiments show that the estimated velocity is strongly linked to the object's distance. Furthermore, it could be shown that binaural cues contribute significantly to the perception of velocity. A further experiment showed that, independently of the type of vehicle, the main parameter for distance determination is the maximum sound pressure level at the listener's position. The article suggests a system architecture for the adequate consideration of moving sound sources in virtual auditory environments. Virtual environments can thus be used to investigate the influence of new vehicle powertrain concepts, and the related sound emissions of these vehicles, on pedestrians' ability to estimate the distance and velocity of moving objects.
Abstract:
Multichannel audio has advanced by leaps and bounds in recent years, not only in playback techniques but also in recording techniques. This project therefore brings both together: a microphone array, the EigenMike32 from MH Acoustics, and a playback system based on Wave Field Synthesis technology, installed by Iosono at Jade Hochschule Oldenburg. To link these two points of the audio chain, two different types of encoding are proposed: reproduction of the EigenMike32's horizontal capsule ring, and third-order Ambisonics (Higher Order Ambisonics, HOA), an encoding technique based on spherical harmonics in which the acoustic field is simulated rather than the individual sources. Both were developed in the Matlab environment, supported by the Isophonics script collection called Spatial Audio Matlab Toolbox. To evaluate them, a series of listening tests was carried out in which they were compared with recordings made simultaneously with a dummy head, assumed to be the method closest to the way we hear. These tests also included recordings and encodings made with a Schoeps double MS (DMS) setup, which are explained in the project "3D audio rendering through Ambisonics techniques: from multi-microphone recordings (DMS Schoeps) to a WFS system, through Matlab". The test procedure consisted of a set of four audio excerpts repeated four times for each recorded situation (a conversation, a class, a street and a university canteen).
The results were unexpected: the third-order HOA encoding fell below the "Good" rating, possibly because material made for a three-dimensional array was reproduced on a two-dimensional one. On the other hand, the encoding that consisted of extracting the horizontal-plane microphones maintained the "Good" rating in all situations. It is concluded that HOA should continue to be tested with deeper knowledge of spherical harmonics, while the other, much simpler encoder can be used for situations without much spatial complexity.
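As a rough companion to the HOA encoding described above (the original work was done in Matlab), the Python sketch below encodes spherical-array signals into third-order Ambisonics channels with a least-squares spherical-harmonic encoder. Capsule directions and signals are placeholders, the normalisation convention is simplified, and the radial equalisation required by a rigid-sphere array such as the EigenMike32 is omitted.

```python
# Sketch: third-order Ambisonics (HOA) encoding via real spherical harmonics.
import numpy as np
from scipy.special import sph_harm

def real_sh(m, n, az, colat):
    """Real-valued spherical harmonic of degree n, order m (simplified convention)."""
    if m > 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(m, n, az, colat).real
    if m < 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(-m, n, az, colat).imag
    return sph_harm(0, n, az, colat).real

def encoding_matrix(az, colat, order=3):
    """Rows: microphones; columns: (order+1)^2 spherical-harmonic channels (ACN order)."""
    cols = [real_sh(m, n, az, colat) for n in range(order + 1) for m in range(-n, n + 1)]
    return np.column_stack(cols)

# 32 hypothetical capsule directions (radians) and a placeholder block of mic signals.
rng = np.random.default_rng(0)
az, colat = rng.uniform(0, 2 * np.pi, 32), rng.uniform(0, np.pi, 32)
mics = rng.standard_normal((32, 4800))        # (capsules, samples)
Y = encoding_matrix(az, colat, order=3)       # (32, 16)
hoa = np.linalg.pinv(Y) @ mics                # 16 third-order HOA channels
```

The resulting HOA channels would then be decoded to the WFS or loudspeaker layout by a separate decoding stage, which is outside the scope of this sketch.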