998 results for Spatial Audio
Abstract:
Performance at the Joinery, Dublin, at a Spatial Music Collective concert
Abstract:
SSR stands for SoundScape Renderer (a tool for real-time spatial audio reproduction providing a variety of rendering algorithms); it is a program written mostly in C++. The program lets the user listen both to previously recorded sounds and to live sound. From the listener's point of view, each sound is heard as if it were produced at whatever point the program decides; what makes the project interesting is that the sound can change position, move, and so on, all in real time. This is achieved without modifying the sound when recording it, but rather when playing it back: the program computes the variations needed so that the sound reaches the listener as if it were actually generated at a point in space, or as close to that as possible. The sensation of movement is simply the same effect with the point changing over time. The idea was to create a web application based on the HTML5 Canvas that would communicate with the program as a remote user interface. That would solve all the compatibility problems, since any device capable of displaying web pages could run an application based on web standards, for example a Windows system or a mobile phone with a browser. The protocol had to be WebSocket, because it is an HTML5 protocol and offers the latency “guarantees” that an application needing real-time information requires. It provides asynchronous full-duplex communication with little payload overhead, which is exactly what avoiding ordinary HTTP polling was meant to achieve. The problem that arose was that the program's existing network interface was not compatible with WebSocket, owing to the mandatory initial handshake the protocol performs, so a new network interface was needed. It was then decided to switch to JSON as the format for message interchange. In the end the project comprises not only the Canvas-based web application but also a working server and the definition of a new network user interface with its accompanying protocol.

ABSTRACT. This project aims to become a part of the SSR tool and to extend its capabilities in the field of access. SSR is an acronym for SoundScape Renderer, a program mostly written in C++ that allows you to hear recorded or live sound, with a variety of sound equipment, as if the sound came from a desired place in space. The SSR web page surely explains it better: “The SoundScape Renderer (SSR) is a tool for real-time spatial audio reproduction providing a variety of rendering algorithms.” The application can be used through a graphical interface written in Qt, but it also has a network interface for external applications to use. This network interface communicates using XML messages; a good example of it is the Android client, which is already working. To use the application, it should be started by loading an audio source and the desired environment, so that the renderer knows what to do. At that moment the server binds its port, and anyone can use the network interface. Since the network interface is documented, anyone can write an application that interacts with it, so the application can have as many user interfaces as desired. The part developed in this project has nothing to do with audio rendering, nor even with the reproduction of spatial audio; it concerns the interfaces used by the SSR application.
As can be deduced from the title, “Distributed Web Interface for Real-Time Spatial Audio Reproduction System”, this work aims only to offer a web interface for the SSR (“Real-Time Spatial Audio Reproduction System”). The idea is not to build a new graphical interface for the SSR but to allow more types of interfaces and communication. To allow more graphical interfaces, this project uses a new network interface. At present the SSR application uses only XML for data interchange, but this new network interface supports JSON. The project comprises the server that launches the application, the user interface, and the new network interface. It is split into these modules so that new user interfaces can be created that communicate with the server, or new servers that communicate with the user interface, by defining a complete network interface for data interchange.
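To make the data-interchange idea concrete, the following is a minimal sketch of what a client of such a JSON network interface could look like. The endpoint URL, port, and message schema are assumptions for illustration, since defining the actual protocol is part of the project itself.

```python
# Minimal sketch of a WebSocket client sending a JSON control message.
# The endpoint URL, port, and the "set-source-position" message type are
# hypothetical; the real JSON network interface defines its own schema.
import asyncio
import json

import websockets  # third-party: pip install websockets


async def move_source(source_id: int, x: float, y: float) -> None:
    # Connect to the (assumed) server endpoint and send one position update.
    async with websockets.connect("ws://localhost:4711") as ws:
        message = {
            "type": "set-source-position",  # hypothetical message type
            "id": source_id,
            "position": {"x": x, "y": y},   # metres in the scene plane
        }
        await ws.send(json.dumps(message))


if __name__ == "__main__":
    asyncio.run(move_source(1, 0.5, -1.2))
```

Because WebSocket gives full-duplex asynchronous messaging, the same connection could also carry scene updates pushed from the server back to the Canvas application, which is what makes it preferable to polling here.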
Abstract:
Digital systems can generate left and right audio channels that create the effect of virtual sound source placement (spatialization) by processing an audio signal through pairs of Head-Related Transfer Functions (HRTFs) or, equivalently, Head-Related Impulse Responses (HRIRs). The spatialization effect is better when individually measured HRTFs or HRIRs are used than when generic ones (e.g., from a mannequin) are used. However, the measurement process is not available to the majority of users. There is ongoing interest in finding mechanisms to customize HRTFs or HRIRs to a specific user, in order to achieve an improved spatialization effect for that subject. Unfortunately, the current models used for HRTFs and HRIRs contain over a hundred parameters, and none of those parameters can be easily related to the characteristics of the subject. This dissertation proposes an alternative model for the representation of HRTFs, which contains at most 30 parameters, all of which have a defined functional significance. It also presents methods to obtain the values of the parameters in the model that make it approximately equivalent to an individually measured HRTF. This conversion is achieved by the systematic deconstruction of HRIR sequences through an augmented version of the Hankel Total Least Squares (HTLS) decomposition approach. An average 95% match (fit) was observed between the original HRIRs and those reconstructed from the Damped and Delayed Sinusoids (DDSs) found by the decomposition process, for ipsilateral source locations. The dissertation also introduces and evaluates an HRIR customization procedure, based on a multilinear model implemented through a 3-mode tensor, for mapping anatomical data from the subjects to the HRIR sequences at different sound source locations. This model uses the Higher-Order Singular Value Decomposition (HOSVD) method to represent the HRIRs and is capable of generating customized HRIRs from easily attainable anatomical measurements of a new intended user of the system. Listening tests were performed to compare the spatialization performance of customized, generic and individually measured HRIRs when they are used for synthesized spatial audio. Statistical analysis of the results confirms that the type of HRIRs used for spatialization is a significant factor in spatialization success, with the customized HRIRs yielding better results than generic HRIRs.
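The two operations this abstract builds on can be sketched briefly: rendering spatial audio by convolving a mono signal with an HRIR pair, and modelling an HRIR as a sum of damped and delayed sinusoids (DDSs). The parameter names and the simple real-valued DDS form below are illustrative assumptions; the dissertation fits its model to measured HRIRs via an augmented HTLS decomposition, which is not reproduced here.

```python
# Sketch: binaural rendering with an HRIR pair, and a generic
# damped-and-delayed-sinusoid (DDS) reconstruction of an HRIR.
# The HRIR data and DDS parameter values are placeholders.
import numpy as np
from scipy.signal import fftconvolve


def binaural_render(mono: np.ndarray,
                    hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with one HRIR pair -> (N, 2) stereo array."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=1)


def dds_hrir(fs: float, length: int, amps, freqs, dampings, delays, phases):
    """Rebuild an HRIR as a sum of damped sinusoids, each starting at its own
    delay: h(t) = sum_k a_k exp(-d_k (t - tau_k)) cos(2 pi f_k (t - tau_k) + phi_k)."""
    t = np.arange(length) / fs
    h = np.zeros(length)
    for a, f, d, tau, phi in zip(amps, freqs, dampings, delays, phases):
        tt = t - tau
        on = tt >= 0.0  # each sinusoid is silent before its delay
        h[on] += a * np.exp(-d * tt[on]) * np.cos(2 * np.pi * f * tt[on] + phi)
    return h
```

A few such (amplitude, frequency, damping, delay, phase) tuples per HRIR is what keeps the proposed model down to tens of parameters instead of hundreds of raw filter taps.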
Abstract:
Future generations of mobile communication devices will serve more and more as multimedia platforms capable of reproducing high-quality audio. In order to achieve 3-D sound perception, the reproduction quality of audio via headphones can be significantly increased by applying binaural technology. To be independent of individual head-related transfer functions (HRTFs) and to guarantee good performance for all listeners, an adaptation of the synthesized sound field to the listener's head movements is required. In this article several methods of head-tracking for mobile communication devices are presented and compared. A system for testing the identified methods is set up, and experiments are performed to evaluate the pros and cons of each method. The implementation of such a device in a 3-D audio system is described, and applications making use of such a system are identified and discussed.
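How head-tracking data is typically applied in such a binaural system can be sketched as follows: the tracker's yaw reading counter-rotates each virtual source before the HRTF lookup, so the scene stays fixed in the room as the head turns. The 5-degree measurement grid and the nearest-neighbour lookup are assumptions; real renderers usually interpolate between measured directions.

```python
# Sketch: yaw compensation in a head-tracked binaural renderer.
# Azimuths are in degrees, counter-clockwise; the HRIR grid is assumed.
import numpy as np

AZIMUTH_GRID = np.arange(0, 360, 5)  # assumed 5-degree HRIR measurement grid


def compensated_azimuth(source_az: float, head_yaw: float) -> float:
    """Source direction relative to the rotated head, wrapped to [0, 360)."""
    return (source_az - head_yaw) % 360.0


def nearest_hrir_index(az: float) -> int:
    """Index of the closest measured direction on the assumed grid."""
    diff = (AZIMUTH_GRID - az + 180.0) % 360.0 - 180.0
    return int(np.argmin(np.abs(diff)))


# Example: a source fixed at 30 degrees; the listener turns 20 degrees left.
idx = nearest_hrir_index(compensated_azimuth(30.0, 20.0))
print(AZIMUTH_GRID[idx])  # -> 10: the source now sits 10 degrees off the nose
```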
Abstract:
This article describes a series of experiments which were carried out to measure the sense of presence in auditory virtual environments. Within the study, a comparison is drawn between self-created signals and signals created by the surrounding environment. Furthermore, it is investigated whether the room characteristics of the simulated environment affect the perception of presence during vocalization or when listening to speech. Finally, the experiments give information about the influence of background signals on the sense of presence. In the experiments, subjects rated the degree of perceived presence in an auditory virtual environment on a perceptual scale. It is described which parameters have the most influence on the perception of presence and which ones are of minor influence. The results show that, on the one hand, an external speaker has more influence on the sense of presence than an adequate presentation of one's own voice. On the other hand, both room reflections and adequately presented background signals significantly increase the perceived presence in the virtual environment.
Abstract:
‘Dark Cartographies’ is a slowly evolving meditation upon seasonal change, life after light and the occluding shadows of human influence. Through creating experiences of the many ‘times of a night’, the work allows participants to experience deep engagement with rich spectra of hidden place and sound. By amplifying and shining light upon a myriad of lives lived in blackness, ‘Dark Cartographies’ tempts us to re-understand seasonal change as actively-embodied temporality, inflected by our climate-changing disturbances. ‘Dark Cartographies’ uses custom interactive systems, illusionary techniques and real time spatial audio that draw upon a rich array of media, including seasonal, nocturnal field recordings sourced in the Far North Queensland region and detailed observations of foliage & flowering phases. By drawing inspiration from the subtle transitions between what Europeans named ‘Summer’ and ‘Autumn’, and by including the body and its temporal disturbances within the work, ‘Dark Cartographies’ creates compellingly immersive environments that wrap us in atmospheres beyond sight and hearing. ‘Dark Cartographies’ is a dynamic new installation directed & choreographed by environmental cycles; alluding to a new framework for making works that we call ‘Seasonal’. This powerful, responsive & experiential work draws attention to that which will disappear when biodiverse worlds have descended into an era of permanent darkness – an ‘extinction of human experience’. By tapping into the deeply interlocking seasonal cycles of environments that are themselves intimately linked with social, geographical & political concerns, participating audiences are therefore challenged to see the night, their locality & ecologies in new ways through extending their personal limits of perception, imagery & comprehension.
Abstract:
An evolving meditation upon the complex, periodic processes that mark Australia’s seasonality, and our increasing ability to disturb them. By amplifying and shining light upon a myriad of mysterious lives lived in blackness, the work presents a sensuous, deep engagement with the rich, irregular spectra of seasonal forms: whilst hinting at a far less comforting background increasingly framed by anthropogenic climate change. ‘Temporal’ uses custom interactive systems, illusionary techniques and real time spatial audio processes that draw upon a rich array of media, including seasonal, nocturnal field recordings sourced in the Bundaberg region and detailed observations of foliage & flowering phases from that region. By drawing inspiration from the subtle transitions between what Europeans once named ‘Summer’ and ‘Autumn’ and the multiple seasons recognised by other cultures, whilst also including bodily disturbances within the work, ‘Temporal’ creates a compellingly immersive environment that wraps audiences in luscious yet ominous atmospheres beyond sight and hearing. This work completes a two-year-long project of dynamic mediated installations that have been presented in Sydney, Beijing, Cairns and Bundanon, that have each been somehow choreographed by environmental cycles; alluding to a new framework for making works that we named ‘Seasonal’. These powerful, responsive & experiential works each draw attention to that which will disappear when biodiverse worlds have descended into an era of permanent darkness – an ‘extinction of human experience’. By tapping into the deeply interlocking seasonal cycles of environments that are themselves intimately linked with social, geographical & political concerns, participating audiences are therefore challenged to see the night, their locality & ecologies in new ways through extending their personal limits of perception, imagery & comprehension.
Abstract:
Music for Sleeping & Waking Minds is an 8-hour composition intended for overnight listening. It features 4 performers who wear custom-designed EEG sensors. The performers rest and fall asleep as they naturally would. Over the course of one night, their brainwave activity generates a spatial audio environment. Audiences are invited to sleep or listen as they wish. Composition & concept by Gascia Ouzounian. Physiological interface and interaction design by R. Benjamin Knapp. Audio interface and interaction design by Eric Lyon.
Abstract:
Ambisonics is a scalable spatial audio technique that attempts to present a sound scene to listeners over as large an area as possible. A localisation experiment was carried out to investigate the performance of a first and third order system at three listening positions - one in the centre and two off-centre. The test used a reverse target-pointer adjustment method to determine the error, both signed and absolute, for each combination of listening position and system. The signed error was used to indicate the direction and magnitude of the shifts in panning angle introduced for the off-centre listening positions. The absolute error was used as a measure of the performance of each listening position and system combination, for a comparison of their overall performance. A comparison was made between the degree of image shifting between the two systems and the robustness of their off-centre performance.
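For context, the loudspeaker gains whose relative amplitudes such an experiment probes can be sketched with horizontal-only circular-harmonic encoding and a plain sampling decoder; moving from first to third order only adds harmonics, which narrows the panned image. The eight-speaker ring and the simple normalization below are assumptions for illustration, not the experiment's actual setup.

```python
# Sketch: horizontal-only Ambisonic panning gains on a loudspeaker ring.
# Encoding uses circular harmonics up to the given order; decoding is a
# basic sampling (projection) decoder, one common textbook choice.
import numpy as np


def circ_harmonics(theta: float, order: int) -> np.ndarray:
    """[1, cos(theta), sin(theta), ..., cos(M*theta), sin(M*theta)]."""
    comps = [1.0]
    for m in range(1, order + 1):
        comps += [np.cos(m * theta), np.sin(m * theta)]
    return np.array(comps)


def panning_gains(source_az: float, speaker_az: np.ndarray,
                  order: int) -> np.ndarray:
    """Per-loudspeaker gain for a source at source_az (radians)."""
    enc = circ_harmonics(source_az, order)
    gains = np.array([circ_harmonics(az, order) @ enc for az in speaker_az])
    return gains / np.abs(gains).sum()  # crude normalization for comparison


speakers = np.radians(np.arange(0, 360, 45))       # assumed 8-speaker ring
print(panning_gains(np.radians(30), speakers, 1))  # first order: broad image
print(panning_gains(np.radians(30), speakers, 3))  # third order: sharper image
```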
Abstract:
Ambisonics is a spatial audio technique that attempts to recreate a physical sound field over as large an area as possible. Higher Order Ambisonic systems modelled with near-field loudspeakers in free field, as well as in a simulated room, are investigated. The influence of reflections on the image quality is analysed objectively for both a studio-sized and a large reproduction environment, using the relative intensity of the reproduced sound field. The results of a simulated enclosed HOA system in the studio-sized room are compared to sound field measurements in the reproduced area.
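One standard objective way to summarize the directional behaviour of a reproduced sound field from loudspeaker gains is via the Gerzon velocity (rV) and energy (rE) vectors. Whether this study uses exactly these quantities is an assumption; they are shown here only as the usual metrics of this kind for Ambisonic reproduction.

```python
# Sketch: Gerzon velocity (rV) and energy (rE) vectors for a 2-D ring.
# Their magnitudes (ideally close to 1) and directions summarize how well
# the reproduced field points at the intended source; reflections and
# off-centre listening typically lower the magnitudes.
import numpy as np


def gerzon_vectors(gains: np.ndarray, speaker_az: np.ndarray):
    """Return (|rV|, angle_V_deg, |rE|, angle_E_deg)."""
    unit = np.stack([np.cos(speaker_az), np.sin(speaker_az)], axis=1)
    rv = (gains[:, None] * unit).sum(axis=0) / gains.sum()
    re = ((gains ** 2)[:, None] * unit).sum(axis=0) / (gains ** 2).sum()
    return (np.linalg.norm(rv), np.degrees(np.arctan2(rv[1], rv[0])),
            np.linalg.norm(re), np.degrees(np.arctan2(re[1], re[0])))
```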
Abstract:
Ambisonics and Higher Order Ambisonics (HOA) are scalable spatial audio techniques that attempt to present a sound scene to listeners over as large an area as possible. A localisation experiment was carried out to investigate the performance of a first and third order system at three listening positions - one in the centre and two off-centre - using a 5 m radius loudspeaker array. The results are briefly presented and compared to those of an earlier experiment on a 2.2 m loudspeaker array. In both experiments the off-centre listeners were placed such that the ratio of their distance from the centre to the array radius was constant. The test used a reverse target-pointer adjustment method to determine the error, both signed and absolute, for each combination of listening position and system. The results for both arrays were found to be very similar, suggesting that the relative amplitudes of the loudspeakers, which were the same in both cases, were more dominant for localisation than the arrival time differences, which differed between array sizes.
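That interpretation can be illustrated with a back-of-envelope computation: if the off-centre listener sits at a fixed fraction of the array radius, the per-loudspeaker 1/r amplitude ratios are identical for both arrays, while the spread of arrival times grows with the radius. Only the 2.2 m and 5 m radii come from the abstract; the offset fraction and speaker count below are assumptions.

```python
# Sketch: relative amplitudes vs. arrival-time spread for an off-centre
# listener, for two array radii. Amplitude ratios depend only on the
# offset-to-radius ratio; delay spread scales with the absolute radius.
import numpy as np

C = 343.0  # speed of sound in m/s


def offcentre_geometry(radius: float, offset_fraction: float = 0.5,
                       n_speakers: int = 16):
    az = np.linspace(0.0, 2.0 * np.pi, n_speakers, endpoint=False)
    speakers = radius * np.stack([np.cos(az), np.sin(az)], axis=1)
    listener = np.array([offset_fraction * radius, 0.0])
    dist = np.linalg.norm(speakers - listener, axis=1)
    rel_amp = (1.0 / dist) / (1.0 / dist).max()  # 1/r attenuation, normalized
    delay_spread_ms = (dist.max() - dist.min()) / C * 1e3
    return rel_amp, delay_spread_ms


for r in (2.2, 5.0):
    amp, spread = offcentre_geometry(r)
    print(f"radius {r} m: delay spread {spread:.2f} ms, "
          f"amplitude range {amp.min():.3f}..1.000")  # same for both radii
```

Running this shows identical normalized amplitude patterns for both radii but a delay spread more than twice as large for the 5 m array, which is consistent with amplitude cues dominating if localisation results match across arrays.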