974 results for surround sound
Abstract:
My thesis, Kahden kulttuurin ääniä (Sounds of Two Cultures), is a multiform work whose artistic part consists of six pieces of musique concrète that I designed and recorded. The material was recorded in the summer and autumn of 2006 in Finland and Japan. In the written part I examined the history of the genre and, reflecting on my own working process, the means by which I turned field recordings into a musical work, divided into three themes for each country's sound material: nature sounds, city sounds, and the sounds of religious services. No script was written in advance; instead, in keeping with the traditions of musique concrète, the pieces were assembled only at the digital audio workstation. The work was compiled and mixed in stereo and surround (5.0) in Pro Tools during 2007. Some of the sound material was left unprocessed, while some was processed almost beyond recognition. Kahden kulttuurin ääniä differs from the classics of the genre in its fast tempo and its wide variety of moods; at times the work approaches contemporary subgenres of electronic music. The pieces contain many elements that evoke different associations in different people, which is why musique concrète could well serve, among other things, as a tool for music therapy.
Abstract:
In my thesis I examine the dramaturgy of spatial sound in theatre sound design. The thread running through the work is the question of whether God, freedom, or happiness can be brought to the theatre audience as a credible experience by means of spatial sound. The literature on sound generally concentrates on its measurable or technical properties, so I decided to focus purely on a dramaturgical understanding of spatial sound. Because there is hardly any literature on the dramaturgy of spatial sound, the written part is based largely on the experiential, so-called tacit knowledge I have gained through my professional work. The sound designer's role described here is framed for a theatre whose core is storytelling through human presence, and the work is aimed at directors and sound designers who already have experience of theatrical storytelling. I treat the dramaturgy of sound from a very narrow perspective, so the real value of the work emerges only when it is reflected against the reader's own thinking about the use of space, human presence, and sound design. The work begins by discussing time as a central dimension of experience, the effect of silence on the experiencer's sense of self, and the human being's presence in space. I consider the meaning of space as experienced spatiality, setting aside space as a place. I then examine how the use of space affects the structures of human pleasure from the spectator's point of view, after which the work moves to the section on sound proper, the choreography of spatial sound. In that chapter I present a model for creating interesting, tension-filled spatial sound by setting sound in opposition to silence, so that silence becomes a significant part of the performance's use of space and its sonic storytelling. Despite its theoretical starting point, the work aims at sound design that is experienced purely emotionally and whose core is the deepening of the performer's presence by means of spatial sound.
Abstract:
This thesis had two goals: to explore the transformation of Hollywood from the 1930s to the present, and to investigate how Contemporary Hollywood functions in a growing attention economy. Evident in the types of films that it produces as well as in its evolving industrial structure, Contemporary Hollywood differs significantly from the Classical Hollywood of the 1930s. New digital technologies like surround sound and computer-generated imagery (CGI) have allowed studios to create a different type of film, like the blockbuster, and to exercise more extensive control over their films. Additionally, growing exhibition and distribution platforms have fundamentally altered the industrial landscape of Hollywood. In order to combat this more egalitarian distribution system, Contemporary Hollywood has turned to conglomeratization. But what has caused such a radical shift in the form and function of Contemporary Hollywood and its films? This thesis argues that Hollywood is failing to thrive in this new media landscape not because of changing technologies, but because of a changing consumer. Richard Lanham theorizes that we are living in a growing attention economy, in which human attention is the most valuable commodity in an information-saturated society. For the current consumer there is near-constant media over-stimulation: he or she is exposed to any number of screens (mobile phones, laptops, tablets, televisions, etc.) at any given time. Because we can access anything from anywhere at any time, we've become somewhat schizophrenic and impatient in terms of the media that we consume in our lives.
Abstract:
This project involves the creation of a Graphical User Interface (GUI) in the MATLAB environment which provides a graphical representation of an HRTF (Head-Related Transfer Function) database. The head-related transfer function is a very useful tool in the study of the human ability to perceive the surrounding sound environment and to localise sound sources in the space around the listener. The binaural HRTF (the pair of left-ear and right-ear HRTFs) is itself of special interest, since the differences between the HRTFs of the two ears carry the information that our auditory system uses to perceive the sound field. For this reason, the graphical interface created here is of great benefit to the study of this field. The interaural differences are characterised in amplitude and in time, and vary with frequency. By taking the inverse Fourier transform of the HRTF, the head-related impulse response (HRIR) is obtained.
As well as being of great use in the creation of software and surround sound generating devices, the HRIR is used to obtain the ITD (Interaural Time Difference) and ILD (Interaural Level Difference), commonly called "spatial localisation parameters". The HRTF database contains the binaural information for different sound source locations, forming a grid of spherical coordinates that surrounds the subject's head. According to the measurements carried out in the anechoic chamber at the EUITT (School of Telecommunications Engineering), this grid has a resolution of 10º in elevation and 5º in azimuth. The receivers are two microphones housed in the acoustic mannequin known as HATS (Head and Torso Simulator), Brüel&Kjaer model 4100D, which reproduces the physical features that influence perception of the surroundings: the shapes of the pinna, head, neck, and human torso. Interpolation must be computed for all points not contained in the HRTF database; this process is extremely important not only to extend the capability of the database but also for comparisons with other databases used in this field. The graphical user interface is conceived for simple, clear, predictable, and interactive use. From the first outline of the program its philosophy was clear, driven by the needs of a user who requires a practical, intuitive tool. Its single-window design brings together the data-retrieval components and those that plot the HRTFs, the HRIRs, and the ITD and ILD spatial localisation parameters. The user can switch between plots while entering the coordinates of the points to be displayed, defined by phi (elevation) and theta (azimuth); this is what makes the information shown in the interface easy to access and read. In addition, the user can enter values included in the database or intermediate values, in which case the interface interpolates them. The interpolation method is inverse distance weighting between points: depending on the values entered by the user, an interpolation over two or four points is carried out, those points being the neighbours of the entered phi or theta value. To add versatility, the interface can also export the displayed plots as image files, so that the user can extract the data of interest for any phi and theta value. The project is completed with a research and comparative study of the function and application of HRTF databases within the scientific and research literature. Related information was collated from research journals such as the JAES (Journal of the Audio Engineering Society) and publications of the ASA (Acoustical Society of America) and the IEEE (Institute of Electrical and Electronics Engineers), as well as the Web of Knowledge, among others. In addition to these sources, more common channels such as Google Scholar and the university's "Ingenio" portal to its electronic resources were used. The study broadens knowledge of the practical uses of HRTFs.
The majority of studies focus their efforts on improving the perception of the sound event by simulating it over stereo or multichannel playback. With HRTFs this is possible through the analysis and calculation of data such as regressions, which are very useful for predicting a measurement from the existing data. Another field of special interest is the generation of 3D sound. Using an HRTF database it is possible to synthesise a binaural signal. Algorithms implemented on DSP devices use interaural delays and spectral differences to achieve convincing surround sound, with reverberation effects playing an important role in making the result believable. Because of the computational complexity this requires, many studies concentrate on developing more efficient systems, with goals such as generating 3D sound in real time.
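As a rough sketch of two of the operations described above, and not the project's MATLAB implementation, the Python fragment below interpolates an HRIR between neighbouring measured directions by inverse distance weighting and then derives the ITD (cross-correlation lag) and ILD (RMS level ratio). The sampling rate, array shapes, and neighbour selection are assumptions.

```python
import numpy as np

fs = 44_100  # assumed sampling rate, Hz

def idw_interpolate(hrirs, angles, target):
    """Inverse-distance-weighted HRIR at target = (phi, theta), given HRIRs
    measured at the neighbouring directions in `angles` (list of (phi, theta)).
    Azimuth wrap-around is ignored for brevity."""
    d = np.array([np.hypot(p - target[0], t - target[1]) for p, t in angles])
    if np.any(d == 0):                      # target coincides with a measured point
        return np.asarray(hrirs)[np.argmin(d)]
    w = 1.0 / d
    w /= w.sum()
    return np.tensordot(w, np.asarray(hrirs), axes=1)

def itd_ild(hrir_left, hrir_right):
    """ITD from the cross-correlation peak, ILD as an RMS ratio in dB."""
    xcorr = np.correlate(hrir_left, hrir_right, mode="full")
    lag = np.argmax(xcorr) - (len(hrir_right) - 1)
    itd = lag / fs
    ild = 20 * np.log10(np.sqrt(np.mean(hrir_left**2)) /
                        np.sqrt(np.mean(hrir_right**2)))
    return itd, ild

# Toy usage with synthetic "measured" HRIRs at two neighbouring azimuths
rng = np.random.default_rng(0)
measured = [rng.standard_normal(256) for _ in range(2)]
hrir_interp = idw_interpolate(measured, [(0, 30), (0, 35)], target=(0, 32))
print(itd_ild(hrir_interp, measured[0]))
```

In the interface itself, the two- or four-point neighbourhood would be taken from the 10º-elevation by 5º-azimuth measurement grid around the requested phi and theta.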
Abstract:
This is a study of a monochromatic planar perturbation impinging upon a canonical acoustic hole. We show that acoustic hole scattering shares key features with black hole scattering. The interference of wave fronts passing in opposite senses around the hole creates regular oscillations in the scattered intensity. We examine this effect by applying a partial wave method to compute the differential scattering cross section for a range of incident wavelengths. We demonstrate the existence of a scattering peak in the backward direction, known as the glory. We show that the glory created by the canonical acoustic hole is approximately 170 times less intense than the glory created by the Schwarzschild black hole, for equivalent horizon-to-wavelength ratios. We hope that direct experimental observations of such effects may be possible in the near future.
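For orientation only (a textbook relation, not an equation quoted from the paper), the partial wave method computes the differential scattering cross section from the phase shifts \(\delta_l\) induced by the effective potential of the acoustic hole:

\[
f(\theta) = \frac{1}{2ik}\sum_{l=0}^{\infty}(2l+1)\left(e^{2i\delta_l}-1\right)P_l(\cos\theta),
\qquad
\frac{d\sigma}{d\Omega} = \lvert f(\theta)\rvert^{2},
\]

where \(k\) is the wavenumber of the incident perturbation and \(P_l\) are the Legendre polynomials; the regular oscillations and the backward glory arise from interference between partial waves passing on opposite sides of the hole.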
Abstract:
This paper presents a novel adaptive control scheme, with improved convergence rate, for the equalization of harmonic disturbances such as engine noise. First, modifications for improving the convergence speed of the standard filtered-X LMS control are described. Equalization capabilities are then implemented, allowing the independent tuning of harmonics. Finally, by providing the desired order-versus-engine-speed profiles, the pursued sound quality attributes can be achieved. The proposed control scheme is first demonstrated with a simple secondary path model and then experimentally validated with the aid of a vehicle mockup which is excited with engine noise. The engine excitation is provided by a real-time sound quality equivalent engine simulator. Stationary and transient engine excitations are used to assess the control performance. The results reveal that the proposed controller is capable of large order-level reductions (up to 30 dB) for stationary excitation, which allows a comfortable margin for equalization. The same holds for slow run-ups (>15 s) thanks to the improved convergence rate. This margin, however, gets narrower with shorter run-ups (≤10 s). (c) 2010 Elsevier Ltd. All rights reserved.
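As a point of reference for the baseline algorithm mentioned above, here is a minimal, generic filtered-X LMS loop in Python. It is an illustrative sketch under assumed signal names and an assumed FIR secondary-path model, not the authors' modified, equalizing controller.

```python
import numpy as np

def fxlms(x, d, sec_path, L=64, mu=1e-3):
    """Generic filtered-X LMS sketch.
    x        : reference signal (e.g., an engine-order reference)
    d        : disturbance measured at the error sensor
    sec_path : assumed FIR model of the secondary path
    """
    w = np.zeros(L)                     # adaptive filter weights
    xbuf = np.zeros(L)                  # reference buffer
    fxbuf = np.zeros(L)                 # filtered-reference buffer
    spbuf = np.zeros(len(sec_path))     # buffer for filtering x through sec_path
    ybuf = np.zeros(len(sec_path))      # buffer for the control signal
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                                # control output
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[n] = d[n] + sec_path @ ybuf               # error = disturbance + anti-noise through plant
        spbuf = np.roll(spbuf, 1); spbuf[0] = x[n]
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = sec_path @ spbuf   # filtered reference
        w -= mu * e[n] * fxbuf                      # LMS weight update
    return e, w

# Toy usage: a 120 Hz tonal disturbance and a crude delayed secondary path
fs = 2000
n = np.arange(4 * fs)
x = np.sin(2 * np.pi * 120 * n / fs)
d = 0.8 * np.sin(2 * np.pi * 120 * n / fs + 0.7)
e, w = fxlms(x, d, sec_path=np.r_[np.zeros(3), 0.9, 0.1, 0.05, 0.02, 0.01])
```

In the spirit of the paper's equalization, the weight update would be driven by each order's deviation from its desired level (taken from the order-versus-engine-speed profile) rather than by the raw error alone.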
Abstract:
Active control solutions appear to be a feasible approach to cope with the steadily increasing requirements for noise reduction in the transportation industry. Active controllers tend to be designed with a target on sound pressure level reduction. However, the perceived control efficiency for the occupants can be assessed more accurately if psychoacoustic metrics are taken into account. Therefore, this paper aims to evaluate, numerically and experimentally, the effect of a feedback controller on the sound quality of a vehicle mockup excited with engine noise. The proposed simulation scheme is described and experimentally validated. The engine excitation is provided by a sound quality equivalent engine simulator, running on a real-time platform that delivers harmonic excitation as a function of the driving condition. The controller performance is evaluated in terms of specific loudness and roughness. It is shown that the use of a quite simple control strategy, such as velocity feedback, can result in a satisfactory loudness reduction with slightly spread roughness, improving the overall perception of the engine sound. (C) 2008 Elsevier Ltd. All rights reserved.
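To make the control strategy concrete, the sketch below applies direct velocity feedback to a single-degree-of-freedom oscillator under a 100 Hz harmonic excitation. All numerical values and the model itself are illustrative assumptions, not the mockup's identified dynamics.

```python
import numpy as np

# Minimal illustration of direct velocity feedback on a 1-DOF structural mode
m, c, k = 1.0, 2.0, 5.0e4        # assumed modal mass, damping, stiffness
g = 50.0                         # velocity feedback gain (acts as added damping)
fs = 10_000                      # simulation rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_exc = np.sin(2 * np.pi * 100 * t)   # engine-like 100 Hz harmonic excitation

x = v = 0.0
resp = np.zeros_like(t)
for n, F in enumerate(f_exc):
    u = -g * v                   # feedback force proportional to velocity
    a = (F + u - c * v - k * x) / m
    v += a / fs                  # semi-implicit Euler integration
    x += v / fs
    resp[n] = x
```

Because the feedback force is proportional to velocity, it behaves as added damping, which is the mechanism behind the loudness reduction reported above.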
Abstract:
Swallowing dynamics involves the coordination and interaction of several muscles and nerves which allow correct food transport from mouth to stomach without laryngotracheal penetration or aspiration. Clinical swallowing assessment depends on the evaluator's knowledge of the anatomic structures and neurophysiological processes involved in swallowing. Any alteration in those steps is called oropharyngeal dysphagia, which may have many causes, such as neurological or mechanical disorders. Videofluoroscopy of swallowing is presently considered the best exam to objectively assess the dynamics of swallowing, but it must be conducted under certain restrictions, owing to the patient's exposure to radiation, which limits periodic repetition for monitoring swallowing therapy. Another method, called cervical auscultation, is a promising new diagnostic tool for the assessment of swallowing disorders. The potential to diagnose dysphagia in a noninvasive manner by assessing the sounds of swallowing is a highly attractive option for the dysphagia clinician. Even so, the captured sound contains an amount of noise, which can hamper the evaluator's decision. To that end, the present paper proposes the use of a filter to improve the quality of the audible sound and facilitate the interpretation of the examination. The wavelet denoising approach is used to decompose the noisy signal. The signal-to-noise ratio was evaluated to demonstrate the quantitative results of the proposed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
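A minimal wavelet-denoising sketch in Python (using the PyWavelets package; the wavelet family, decomposition level, and threshold rule are assumptions rather than the paper's settings) illustrates the approach:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db8", level=5):
    """Decompose, soft-threshold the detail coefficients, and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def snr_db(reference, estimate):
    """SNR in dB against a reference signal (an idealization: in practice a
    clean reference swallowing sound is not directly available)."""
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))
```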
Abstract:
Sound source localization (SSL) is an essential task in many applications involving speech capture and enhancement. As such, speaker localization with microphone arrays has received significant research attention. Nevertheless, existing SSL algorithms for small arrays still have two significant limitations: lack of range resolution, and accuracy degradation with increasing reverberation. The latter is natural and expected, given that strong reflections can have amplitudes similar to that of the direct signal, but different directions of arrival. Therefore, correctly modeling the room and compensating for the reflections should reduce the degradation due to reverberation. In this paper, we show a stronger result. If modeled correctly, early reflections can be used to provide more information about the source location than would have been available in an anechoic scenario. The modeling not only compensates for the reverberation, but also significantly increases resolution for range and elevation. Thus, we show that under certain conditions and limitations, reverberation can be used to improve SSL performance. Prior attempts to compensate for reverberation tried to model the room impulse response (RIR). However, RIRs change quickly with speaker position, and are nearly impossible to track accurately. Instead, we build a 3-D model of the room, which we use to predict early reflections, which are then incorporated into the SSL estimation. Simulation results with real and synthetic data show that even a simplistic room model is sufficient to produce significant improvements in range and elevation estimation, tasks which would be very difficult when relying only on direct path signal components.
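As an illustration of how early reflections can be predicted from a room model, here is a bare-bones first-order image-source sketch with made-up geometry; it is not the authors' 3-D room model or SSL estimator.

```python
import numpy as np

c = 343.0                                   # speed of sound, m/s
room = np.array([6.0, 4.0, 3.0])            # assumed shoebox room (Lx, Ly, Lz), metres
src = np.array([2.0, 1.5, 1.2])             # hypothetical source position
mic = np.array([4.0, 2.5, 1.0])             # hypothetical microphone position

# First-order image sources: mirror the source across each of the six walls
images = []
for axis in range(3):
    for wall in (0.0, room[axis]):
        img = src.copy()
        img[axis] = 2 * wall - src[axis]
        images.append(img)

# Direct path plus early reflections: path length, delay, and direction of arrival
for img in [src] + images:
    d = np.linalg.norm(img - mic)
    delay_ms = 1e3 * d / c
    doa = (img - mic) / d
    print(f"path {d:5.2f} m, delay {delay_ms:5.2f} ms, DOA {np.round(doa, 2)}")
```

Each predicted reflection contributes an additional delay/direction pair that an SSL estimator can match against the observed signals, which is what adds range and elevation information beyond the direct path alone.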
Abstract:
In order to effectively suppress the noise radiation from large electrical power transformers, both the structure-borne and air-borne sound fields need to be characterised. The characterisation can be made either from theoretical predictions or by in-situ measurements. This paper presents a study of the sound radiation from a large power transformer in a substation. The radiation pattern can be predicted from the measured acceleration distribution, and the predicted value is not affected by other noise sources. Alternatively, the far-field sound pressure level can be predicted from the sound pressure level measured at NEMA locations. Both the near- and far-field radiated power can be measured in situ using the sound intensity technique. It is shown that both the vibration of the transformer tank wall and the radiated noise consist of a series of tonal components, mainly at the first few harmonics of 100 Hz. Also, neglecting the noise radiation from the transformer (top and bottom) lids does not affect the accuracy of the transformer radiation characterisation. (C) 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
Bees generate thoracic vibrations with their indirect flight muscles in various behavioural contexts. The main frequency component of non-flight vibrations, during which the wings are usually folded over the abdomen, is higher than that of thoracic vibrations that drive the wing movements for flight. So far, this has been concluded from an increase in natural frequency of the oscillating system in association with the wing adduction. In the present study, we measured the thoracic oscillations in stingless bees during stationary flight and during two types of non-flight behaviour, annoyance buzzing and forager communication, using laser vibrometry. As expected, the flight vibrations met all tested assumptions for resonant oscillations: slow build-up and decay of amplitude; increased frequency following reduction of the inertial load; and decreased frequency following an increase of the mass of the oscillating system. Resonances, however, do not play a significant role in the generation of non-flight vibrations. The strong decrease in main frequency at the end of the pulses indicates that these were driven at a frequency higher than the natural frequency of the system. Despite significant differences regarding the main frequency components and their oscillation amplitudes, the mechanism of generation is apparently similar in annoyance buzzing and forager vibrations. Both types of non-flight vibration induced oscillations of the wings and the legs in a similar way. Since these body parts transform thoracic oscillations into airborne sounds and substrate vibrations, annoyance buzzing can also be used to study mechanisms of signal generation and transmission potentially relevant in forager communication under controlled conditions.
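The tested assumptions correspond to the idealized picture of the thorax as a simple resonant oscillator (a generic relation, not a model fitted in the paper), with natural frequency

\[
f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m}},
\]

so that reducing the inertial load (the effective mass \(m\), as with wing adduction) raises the resonance frequency while adding mass lowers it, consistent with the flight vibrations but not with the non-flight pulses.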
Abstract:
In stingless bees, recruitment of hive bees to food sources involves thoracic vibrations produced by foragers during trophallaxis. The temporal pattern of these vibrations correlates with the sugar concentration of the collected food. One possible pathway for transferring such information to nestmates is through airborne sound. In the present study, we investigated the transformation of thoracic vibrations into air particle velocity, sound pressure, and jet airflows in the stingless bee Melipona scutellaris. Whereas particle velocity and sound pressure were found all around and above vibrating individuals, there was no evidence for a jet airflow as in honey bees. The largest particle velocities were measured 5 mm above the wings (16.0 ± 4.8 mm s⁻¹). Around a vibrating individual, we found maximum particle velocities of 8.6 ± 3.0 mm s⁻¹ (horizontal particle velocity) in front of the bee's head and of 6.0 ± 2.1 mm s⁻¹ (vertical particle velocity) behind its wings. Wing oscillations, which are mainly responsible for air particle movements in honey bees, contributed significantly to vertically oriented particle oscillations only close to the abdomen in M. scutellaris (distances ≤ 5 mm). Almost 80% of the hive bees attending trophallactic food transfers stayed within a range of 5 mm from the vibrating foragers. It remains to be shown, however, whether air particle velocity alone is strong enough to be detected by the Johnston's organ of the bee antenna. Taking the physiological properties of the honey bee's Johnston's organ as the reference, M. scutellaris hive bees are able to detect the forager vibrations through particle movements at distances of up to 2 cm.
Abstract:
Cervical auscultation presents as a noninvasive screening assessment of swallowing. Until now, the focus of acoustic research in swallowing has been the characterization of swallowing sounds. However, it may be that the technique is also suitable for the detection of respiratory sounds post swallow. A healthy relationship between swallowing and respiration is widely accepted as pivotal to safe swallowing. Previous investigators have shown that the expiratory phase of respiration commonly occurs prior to and after swallowing. That the larynx is valved shut during swallowing is also accepted, and previous research indicates that the larynx releases valved air immediately post swallow in healthy individuals. The current investigation sought to explore acoustic evidence of a release of subglottic air post swallow in nondysphagic individuals using a noninvasive medium. Fifty-nine healthy individuals spanning the ages of 18 to 60+ years swallowed 5 and 10 milliliter (ml) thin and thick liquid boluses. Objective acoustic analysis was used to verify the presence of the sound and to characterize its morphological features. The sound, dubbed the glottal release sound, was found to consistently occur in close proximity following the swallowing sound. The results indicated that the sound has distinct morphological features and that these change depending on the volume and viscosity of the bolus swallowed. Further research will be required to translate this information into a clinical tool.