30 results for Headphones
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
BACKGROUND AND OBJECTIVE: In the Swiss version of the Freiburg speech intelligibility test, five test words from the original German recording that are rarely used in Switzerland were exchanged. Furthermore, differences in the transfer functions between headphone and loudspeaker presentation are not taken into account during calibration. New settings for the levels of the individual test words in the recommended recording and small changes in calibration procedures prompted a verification of the currently used normative values. PATIENTS AND METHODS: Speech intelligibility was measured in 20 subjects with normal hearing using monosyllabic words and numbers presented via headphones and loudspeakers. RESULTS: On average, 50% speech intelligibility was reached at levels 7.5 dB lower under free-field conditions than under headphone presentation. The average difference between numbers and monosyllabic words was 9.6 dB, considerably lower than the 14 dB of the current normative curves. CONCLUSIONS: Our measurements agree well with the normative values for tests using monosyllabic words and headphones, but not for numbers or free-field measurements.
Abstract:
Future generations of mobile communication devices will increasingly serve as multimedia platforms capable of reproducing high-quality audio. To achieve 3-D sound perception, the reproduction quality of audio via headphones can be significantly increased by applying binaural technology. To be independent of individual head-related transfer functions (HRTFs) and to guarantee good performance for all listeners, the synthesized sound field must be adapted to the listener's head movements. In this article, several head-tracking methods for mobile communication devices are presented and compared. A system for testing the identified methods is set up, and experiments are performed to evaluate the pros and cons of each method. The implementation of such a device in a 3-D audio system is described, and applications making use of such a system are identified and discussed.
Abstract:
For enhanced immersion in a virtual scene, more than just the visual sense should be addressed by a Virtual Reality system. Additional auditory stimulation has much potential, as it realizes a multisensory system. This is especially useful when the user does not have to wear any additional hardware, e.g., headphones. Creating a virtual sound scene with spatially distributed sources requires a technique for adding spatial cues to audio signals and an appropriate reproduction method. In this paper we present a real-time audio rendering system that combines dynamic crosstalk cancellation and multi-track binaural synthesis for virtual acoustical imaging. This makes it possible to simulate spatially distributed sources and, in addition, near-to-head sources for a freely moving listener in room-mounted virtual environments without any headphones. Special focus is placed on near-to-head acoustics, and requirements with respect to head-related transfer function databases are discussed.
Abstract:
The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's adherence to the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones under static conditions. The auditory localization performance observed in the present study is in line with that reported under real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.
Abstract:
The current study was designed to test the effect of lateralized attention on prospective memory performance in a dichotic listening task. The practice phase of the experiment consisted of a semantic decision task during which the participants were presented with different words on either side via headphones. Depending on the experimental condition, the participants were required to focus on the words presented on the left or right side and to decide whether these words were abstract or concrete. Thereafter, the participants were informed about the prospective memory task. They were instructed to press a distinct key whenever they heard a word denoting an animal in the same task later in the experiment. The participants were explicitly informed that the prospective memory cues could appear on either side of the headphones. This was followed by a retention interval filled with unrelated tasks. Next, the participants performed the prospective memory task. The results revealed more prospective hits for the attended side. This finding suggests that noticing a prospective memory cue is not an automatic process but requires attention.
Abstract:
Objectives: It has been repeatedly demonstrated that athletes in a state of ego depletion do not perform up to their capabilities in high-pressure situations. We assume that momentarily available self-control strength determines whether individuals in high-pressure situations can resist distracting stimuli. Design/method: In the present study, we applied a between-subjects design, as 31 experienced basketball players were randomly assigned to a depletion group or a non-depletion group. Participants performed 30 free throws while listening to statements representing worrisome thoughts (as frequently experienced in high-pressure situations) over stereo headphones. Participants were instructed to block out these distracting audio messages and focus on the free throws. We postulated that depleted participants would be more likely to be distracted. They were also assumed to perform worse in the free-throw task. Results: The results supported our assumption, as depleted participants paid more attention to the distracting stimuli. In addition, they displayed worse performance in the free-throw task. Conclusions: These results indicate that sufficient levels of self-control strength can serve as a buffer against distracting stimuli under pressure.
Abstract:
Athletes in a state of ego depletion do not perform up to their capabilities in high-pressure situations (e.g., Englert & Bertrams, 2012). We assume that momentarily available self-control strength determines whether individuals in high-pressure situations can resist distracting stimuli. In the present study, we applied a between-subjects design, as 31 experienced basketball players were randomly assigned to a depletion group or a non-depletion group. Participants performed 30 free throws while listening to statements representing worrisome thoughts (as frequently experienced in high-pressure situations; Oudejans, Kuijpers, Kooijman, & Bakker, 2011) over stereo headphones. Participants were instructed to block out these distracting audio messages and focus on the free throws. We postulated that depleted participants would be more likely to be distracted and would perform worse in the free-throw task. The results supported our assumption, as depleted participants paid more attention to the distracting stimuli and displayed worse performance in the free-throw task. These results indicate that sufficient levels of self-control strength can serve as a buffer against increased distractibility under pressure. Implementing self-control training in workout routines may be a useful approach (e.g., Oaten & Cheng, 2007).
Abstract:
In this paper we present the design and implementation of a wearable application in Prolog. The application program is a "sound spatializer": given an audio signal and real-time data from a head-mounted compass, a signal is generated for stereo headphones that appears to come from a position in space. We describe high-level and low-level optimizations and transformations that were applied in order to fit this application on the wearable device. The end application operates comfortably in real time on a wearable computer and has a memory footprint that remains constant over time, enabling it to run on continuous audio streams. Comparison with a version hand-written in C shows that the C version is no more than 20-40% faster; a small price to pay for a high-level description.
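The core idea behind such a sound spatializer can be illustrated with a much simpler technique than the paper's Prolog implementation: panning a mono signal to stereo using interaural time and level differences (ITD/ILD). The sketch below is a minimal stand-in, not the authors' system; the head radius, the Woodworth-style ITD approximation, and the cosine level law are illustrative assumptions, and a real renderer would use HRTFs.

```python
import numpy as np

def spatialize(mono, azimuth_deg, fs=44100, head_radius=0.0875, c=343.0):
    """Pan a mono signal to stereo with interaural time and level
    differences (a crude stand-in for HRTF-based rendering).
    azimuth_deg: source direction, 0 = straight ahead, +90 = full right."""
    az = np.radians(azimuth_deg)
    # Woodworth-style ITD approximation: (a/c) * (theta + sin(theta))
    itd = head_radius / c * (az + np.sin(az))
    delay = int(round(abs(itd) * fs))          # whole-sample delay for the far ear
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    far = far * (0.5 + 0.5 * np.cos(az))       # simple level difference
    if itd >= 0:                               # source to the right: left ear is far
        left, right = far, near
    else:
        left, right = near, far
    return np.stack([left, right], axis=1)

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
stereo = spatialize(tone, 60, fs)              # tone appears off to the right
```

In the paper's application the azimuth would come from the head-mounted compass each frame, so the apparent source position stays fixed in space as the head turns.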
Abstract:
Active noise control, or active noise cancellation, consists of attenuating the noise present in an acoustic environment by emitting a signal equal to, but in phase opposition with, the noise to be attenuated. The sum of both signals in the acoustic medium results in mutual cancellation, so that the residual noise level is much lower than the original. The operation of these systems is based on the behavior principles of wave phenomena discovered by Augustin-Jean Fresnel, Christiaan Huygens and Thomas Young, among others. Since the 1930s, prototypes of active noise control systems have been developed, though these first ideas were impractical or required such frequent manual adjustment that they were unusable. In the 1970s, the American researcher Bernard Widrow developed the theory of adaptive signal processing and the least mean squares (LMS) algorithm. This made it possible to implement digital filters whose response adapts dynamically to varying environmental conditions. With the emergence of digital signal processors in the 1980s and their later evolution, active noise cancellation systems based on adaptive digital signal processing became feasible. Nowadays, active noise control systems are implemented in automobiles, planes, headphones, and racks of professional equipment. Active noise control is based on the FxLMS algorithm, a modified version of the LMS adaptive filtering algorithm that compensates for the acoustic response of the environment. A noise reference signal can thus be filtered dynamically to emit the signal that produces the cancellation. As the acoustic cancellation space is limited to dimensions of about one tenth of the wavelength, noise reduction is only viable at low frequencies; the limit is generally accepted to be around 500 Hz. At mid and high frequencies, passive conditioning and isolation methods, which give very good results, must be used. The objective of this project is to develop an active cancellation system for periodic noise, using consumer electronics and a DSP development kit based on a very low-cost processor. A series of code modules written in C were developed for the DSP, performing the appropriate signal processing on the noise reference. This processed signal, once emitted, produces the acoustic cancellation. Using the implemented code, tests were performed in which the noise signal to be removed is generated inside the DSP itself. This signal is emitted through a loudspeaker simulating the noise source to be cancelled, while another loudspeaker emits a version of the same signal filtered with the FxLMS algorithm. Tests with different versions of the algorithm yielded attenuations of 20-35 dB measured in narrow frequency bands around the generator frequency, and 8-15 dB measured in broadband.
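The FxLMS update described above can be sketched in a few lines. The simulation below is a minimal single-channel illustration, not the project's C DSP code: the primary path P, secondary path S, step size, and filter length are hypothetical, and the secondary-path estimate is assumed perfect. The reference is filtered through the secondary-path estimate before the LMS weight update, which is the defining feature of FxLMS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FIR acoustic paths, for illustration only
P = np.array([0.0, 0.9, 0.4, 0.1])   # primary path: noise source -> error mic
S = np.array([0.0, 0.7, 0.25])       # secondary path: speaker -> error mic
S_hat = S.copy()                     # assume a perfect estimate of S

L = 8            # adaptive filter length
mu = 0.005       # LMS step size
w = np.zeros(L)  # adaptive filter weights

n = 20000
t = np.arange(n)
# Periodic noise reference with a little broadband measurement noise
x = np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.standard_normal(n)

x_buf = np.zeros(L)            # reference history for the adaptive filter
fx_buf = np.zeros(L)           # filtered-reference history for the update
y_buf = np.zeros(len(S))       # anti-noise history for the secondary path
xs_buf = np.zeros(len(S_hat))  # reference history for S_hat filtering
xp_buf = np.zeros(len(P))      # reference history for the primary path

errors = np.zeros(n)
for i in range(n):
    xp_buf = np.roll(xp_buf, 1); xp_buf[0] = x[i]
    d = P @ xp_buf                         # noise arriving at the error mic
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[i]
    y = w @ x_buf                          # anti-noise sample
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d - S @ y_buf                      # residual noise at the error mic
    xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x[i]
    fx = S_hat @ xs_buf                    # reference filtered through S_hat
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w += mu * e * fx_buf                   # FxLMS weight update
    errors[i] = e

before = np.mean(errors[:1000] ** 2)
after = np.mean(errors[-1000:] ** 2)
print(f"attenuation: {10 * np.log10(before / after):.1f} dB")
```

Filtering the reference through the secondary-path estimate keeps the gradient aligned with the error actually measured at the microphone; plain LMS without this step can diverge when the secondary path introduces delay.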
Abstract:
"October 1966."
Abstract:
"December 1966."
Abstract:
Small groups of athletes (maximum size 8) were taught to voluntarily control their finger temperature, in a test of the feasibility of thermal biofeedback as a tool for coaches. The objective was to decrease precompetitive anxiety among 140 young competitive athletes (track and field, N=61; swimming, N=79), 66 females and 74 males, mean age 14.8 years, age range 8.9-20.5 years, from local high schools and swimming clubs. The biofeedback (visual and auditory) was provided by small, battery-powered devices connected to thermistors attached to the middle finger of the dominant hand. An easily readable digital LCD display, in 0.01 °C increments, provided visual feedback, while a musical tone, which descended in pitch with increasing finger temperature, provided the audio component via small headphones. Eight 20-minute sessions were scheduled, with 48 hours between sessions. The measures employed in this pretest-posttest study were Levenson's locus of control scale (IPC) and the Competitive Sport Anxiety Inventory (CSAI-2). The results indicated that, while significant control of finger temperature was achieved, F(1, 160)=5.30, p
Abstract:
Reassembled, Slightly Askew is an autobiographical, immersive audio-based artwork based on Shannon Sickels' experience of falling critically ill with a rare brain infection and her journey of rehabilitation with an acquired brain injury. Audience members experience Reassembled individually, listening to the audio via headphones while lying on a bed. The piece makes use of binaural microphone technology and spatial sound design techniques, causing listeners to feel they are inside Shannon's head, viscerally experiencing her descent into coma, brain surgeries, early days in the hospital, and re-integration into the world with a hidden disability. It is a new kind of storytelling, never before applied to this topic, that places the listener safely in the first-person perspective with the aim of increasing empathy and understanding. Reassembled… was made through a five-year collaboration with an interdisciplinary team of artists led by Shannon Sickels (writer & performer), Paul Stapleton (composer & sound designer), Anna Newell (director), Hanna Slattne (dramaturgy), Stevie Prickett (choreography), and Shannon's consultant neurosurgeon and head injury nurse. Its development and production have been made possible with the support of a Wellcome Trust Arts Award, the Arts Council NI, the Sonic Arts Research Centre, Belfast's Metropolitan Arts Centre, and grants from the Arts & Disability Award Ireland scheme. In its 2015 premiere year, Reassembled had 99 shows across Northern Ireland, including at the Cathedral Quarter Arts Festival (the MAC, Belfast) and the BOUNCE Arts & Disability Forum Festival (Lyric Theatre, Belfast). It was awarded 5 stars in The Stage, received a Hospital Club h100 Theatre & Performance Award, and has been shared at medical conferences and trainings across the UK.
It continues to be presented in diverse artistic and educational contexts, including as part of A Nation's Theatre Festival in 2016 at Battersea Arts Centre in London, where it received four-star reviews in the Guardian, Time Out London and the Evening Standard. "A real-life ordeal, captured by a daring, disorientating artistic collaboration, which works brilliantly on so many levels…It should be available on prescription." — The Stage ★★★★★ www.reassembled.co.uk Audio clips and documentary footage available here: http://www.paulstapleton.net/portfolio/reassembled-slightly-askew