747 results for Sounds.


Relevance:

10.00%

Publisher:

Abstract:

There is historical evidence that cathedrals used to host several different activities at the same time. This project aims to define the areas of the Cathedral of Toledo in which listening conditions are considered acceptable while several activities take place simultaneously, by means of a subjective listening evaluation test. To this end, a scenario was set up with three sound sources, one behind the choir and two in side chapels. Several auralizations were obtained from an acoustic model built in Odeon, combining the signals from the three sources, and these were used in the listening test. To make the test easier both for the subjects taking it and for the subsequent processing of the data, a tool was developed in MATLAB: a graphical interface that plays the different auralizations, as well as the sounds from the individual sources, and records the subjects' judgments in an Excel file. Once the participants' data had been collected, a statistical analysis was carried out to obtain results on the acoustic horizon of the Cathedral of Toledo for the proposed scenario.
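
The MATLAB interface described above boils down to two core operations: playing an auralization to the subject and logging a judgment to Excel. A minimal sketch of those two operations follows; the file names, subject ID, and 5-point scale are invented for illustration, and this is not the project's actual code:

```matlab
% Minimal sketch of a listening-test step: play one auralization
% and append the subject's judgment to an Excel results file.
[y, fs] = audioread('auralization_source1_pos3.wav');  % hypothetical file name
sound(y, fs);                                          % play it to the subject

judgment = {datestr(now), 'subject01', 'pos3', 4};     % e.g. 4 on a 5-point scale
writecell(judgment, 'results.xlsx', 'WriteMode', 'append');  % one row per rating
```

In a full tool this pair of operations would sit behind GUI callbacks, one per auralization and one per submitted rating, so the Excel file accumulates a row per judgment across the test session.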

Relevance:

10.00%

Publisher:

Abstract:

This research addresses the acoustic behavior of the Jesuit churches of the city of Córdoba (Argentina) and of San Ignacio Mini, located in the town of San Ignacio, Misiones province (Argentina), built two centuries ago and declared World Heritage Sites. The objective is to evaluate the parameters that determine speech comprehension and the suitability of each church for singing and religious music. The first stage of the research examined the interior construction characteristics of each temple and proposed an analysis methodology for comparing the results of objective in situ measurements with the subjective judgments gathered through surveys, in order to characterize each sound space acoustically. For the objective characterization of each temple, the selected parameters were those that summarize the acoustic properties related to music and speech, together with those that measure the effective proportion of early reflections, regarded as subjective indices of the listener's ability to distinguish sounds. The measured values were compared with the subjective preferences obtained from the opinion surveys. High reverberation times, above the values considered optimal for each enclosure, were found in all the churches. The quality indices were analyzed, showing how the different materials influence the acoustic behavior of each enclosure. For the subjective evaluation, a previously validated survey was used that favors an easy association between acoustic and psychoacoustic parameters; this made it possible to identify the objective parameters, simulated with an audience present, that were strongly related to the subjective judgments, as well as those with lower correlation. A search for graphic and photographic material and other historical documents made it possible to reconstruct each church for modeling and to evaluate the behavior of all the temples with a congregation present, since measurements could not be taken under that condition. The interest in obtaining more accurate acoustic data for the San Ignacio Mini church, which now lies in ruins, led to the use of more powerful calculation tools, such as the image-source method "Ray Tracing Impact", by means of which an auralization was produced. For this purpose, an audio file representing the male voice of a priest speaking the Jesuit-Guaraní language was used, thereby recovering intangible cultural heritage.
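
As background for the reverberation-time findings, the classical Sabine relation (standard room-acoustics material, not a result of this study) ties the reverberation time to room geometry and materials:

$$T_{60} = \frac{0.161\,V}{\sum_i \alpha_i S_i}$$

where $V$ is the room volume in m³, $S_i$ are the surface areas in m², and $\alpha_i$ their absorption coefficients. The large volumes and hard, low-absorption stone surfaces of these churches push $T_{60}$ well above the optimum for speech, consistent with the measurements reported above.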

Relevance:

10.00%

Publisher:

Abstract:

This project is intended as a tool for teaching literacy (reading and writing) to children with disabilities, by means of an application running on a tablet with the Android operating system (OS). There is a gap in the world of tablet applications in this field, and this project attempts to contribute at least one application that can serve as a first point of contact for those interested. To establish the functionality best suited to the purpose of the tool, speech therapy professionals from a special education school were consulted, and the structure of the tool was shaped in collaboration with them. The application was implemented in Java for Android. Different resources were included, such as images, pictograms, and recorded speech, combining freely licensed elements with purpose-made ones to give the tool its final form. In general, the application can be used to teach any child to read and write, but it has been given certain features that orient it specifically toward children with special educational needs. Particular care was taken with the aesthetics, keeping them as simple and soft as possible, in order to hold the children's attention and avoid distracting them with unnecessary visual elements. Visual and auditory stimuli were added to encourage interest (applause for correct answers, different colors to distinguish right and wrong answers, and so on), and the largest possible font sizes were used (for visual impairments). The market offers a huge number of Android devices with very diverse characteristics, including screen size, resolution, and OS version; the application was developed to cover as large a proportion of them as possible. The minimum screen size requirement is seven inches, since the intrinsic nature of the tool makes it of limited use on smaller screens. Nevertheless, a configuration for small devices such as smartphones was also prepared, not for their value as a literacy-teaching tool (although in some cases that could be viable) but rather for testing and training by the teachers, parents, or tutors who will do the teaching work on tablet devices. Another requirement for running the application, as mentioned, is a minimum OS version, below which (very obsolete versions) the application is not viable. May this project thus help to cover, through the use of technology, an area of teaching with great opportunities for improvement.

Relevance:

10.00%

Publisher:

Abstract:

We are witnessing a fundamental transformation in how the Internet of Things (IoT) affects the experience users have with data-driven devices, smart appliances, and connected products. The experience of a place is commonly defined as the result of a series of user engagements with a surrounding place in order to carry out daily activities (Golledge, 2002). Knowing about users' experiences therefore becomes vital to the process of designing a map. In the near future, a user will be able to interact directly with any IoT device placed in his or her surroundings, and very little is known about what kinds of interactions and experiences a map might offer (Roth, 2015). The main challenge is to develop an experience design process to devise maps capable of supporting different user experience dimensions, such as the cognitive, sensory-physical, affective, and social dimensions (Tussyadiah and Zach, 2012). For example, in a smart city of the future, IoT devices allowing multimodal interaction with a map could help tourists assimilate knowledge about points of interest (cognitive experience), associate sounds and smells with these places (sensory-physical experience), connect to them emotionally (affective experience), and relate to other nearby tourists (social experience). This paper describes a conceptual framework for developing a Mapping Experience Design (MXD) process for building maps for the smart connected places of the future. Our MXD process is focused on the cognitive dimension of an experience, in which a person perceives a place as a "living entity" that uses and feeds on his or her experiences. We want to help people undergo a meaningful experience of a place by mapping what is communicated during their interactions with the IoT devices situated in that place. Our purpose is to understand how maps can support a person's experience in making better decisions in real time.

Relevance:

10.00%

Publisher:

Abstract:

Speech is the main communication tool available to human beings; it not only allows us to express our thoughts and feelings but also distinguishes each of us as an individual. The analysis of the speech signal is fundamental for many applications, such as speech synthesis and recognition, coding, detection of pathologies, and speaker identification and recognition. Commercial and open-source tools for this task are available on the market. The aim of this Final Degree Project is to gather several speech signal analysis algorithms into a single tool operated through a graphical environment. These algorithms are used by the research group Aplicaciones MultiMedia y Acústica at the Universidad Politécnica de Madrid to carry out its research work and to offer training workshops to undergraduate students at the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación. Applying the algorithms has so far been difficult because they were developed over several years, by different people and in different programming environments. The existing programs have been adapted into a single MATLAB tool that provides: voice detection; voiced/unvoiced detection; extraction and manual review of the fundamental frequency of voiced sounds; and extraction and manual review of the formants of voiced sounds. In all cases the user can adjust the analysis parameters, and the functionality of the existing algorithms has been maintained and, in some cases, extended. The analysis results can be handled directly in the application or saved to a file. Finally, a user manual was written for the application, and a standalone application was generated that can be installed and run even when MATLAB, or the appropriate version of it, is not available.
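
As an illustration of the kind of algorithm such a tool bundles, here is a minimal MATLAB sketch of fundamental-frequency estimation for one voiced frame using the autocorrelation method. It is not the project's actual code; the function name, search range, and voicing threshold are invented for the example:

```matlab
function f0 = estimate_f0(frame, fs)
% Fundamental frequency of a voiced speech frame via autocorrelation.
% Illustrative sketch only (uses xcorr from the Signal Processing Toolbox).
frame = frame(:) - mean(frame);        % remove DC offset
r = xcorr(frame, 'coeff');             % normalized autocorrelation
r = r(numel(frame):end);               % r(k+1) is now the value at lag k
lagMin = round(fs / 400);              % shortest period searched: 400 Hz
lagMax = round(fs / 60);               % longest period searched: 60 Hz
[peak, k] = max(r(lagMin+1:lagMax+1)); % strongest peak in the lag range
lag = lagMin + k - 1;                  % convert back to a lag in samples
if peak > 0.3                          % crude voicing threshold
    f0 = fs / lag;                     % peak lag -> fundamental frequency
else
    f0 = NaN;                          % frame treated as unvoiced
end
end
```

In practice this runs frame by frame over a sliding window (typically 20-40 ms), and a graphical front end like the one described lets the user inspect and manually correct the resulting F0 contour.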

Relevance:

10.00%

Publisher:

Abstract:

The dynamic responses of the hearing organ to acoustic overstimulation were investigated using the guinea pig isolated temporal bone preparation. The organ was loaded with the fluorescent Ca2+ indicator Fluo-3, and the cochlear electric responses to low-level tones were recorded through a microelectrode in the scala media. After overstimulation, the amplitude of the cochlear potentials decreased significantly. In some cases, rapid recovery was seen with the potentials returning to their initial amplitude. In 12 of 14 cases in which overstimulation gave a decrease in the cochlear responses, significant elevations of the cytoplasmic [Ca2+] in the outer hair cells were seen. [Ca2+] increases appeared immediately after terminating the overstimulation, with partial recovery taking place in the ensuing 30 min in some preparations. Such [Ca2+] changes were not seen in preparations that were stimulated at levels that did not cause an amplitude change in the cochlear potentials. The overstimulation also gave rise to a contraction, evident as a decrease of the width of the organ of Corti. The average contraction in 10 preparations was 9 μm (SE 2 μm). Partial or complete recovery was seen within 30–45 min after the overstimulation. The [Ca2+] changes and the contraction are likely to produce major functional alterations and consequently are suggested to be a factor contributing strongly to the loss of function seen after exposure to loud sounds.

Relevance:

10.00%

Publisher:

Abstract:

Echolocating big brown bats (Eptesicus fuscus) broadcast ultrasonic frequency-modulated (FM) biosonar sounds (20–100 kHz frequencies; 10–50 μs periods) and perceive target range from echo delay. Knowing the acuity for delay resolution is essential to understand how bats process echoes because they perceive target shape and texture from the delay separation of multiple reflections. Bats can separately perceive the delays of two concurrent electronically generated echoes arriving as little as 2 μs apart, thus resolving reflecting points as close together as 0.3 mm in range (two-point threshold). This two-point resolution is roughly five times smaller than the shortest periods in the bat’s sounds. Because the bat’s broadcasts are 2,000–4,500 μs long, the echoes themselves overlap and interfere with each other, to merge together into a single sound whose spectrum is shaped by their mutual interference depending on the size of the time separation. To separately perceive the delays of overlapping echoes, the bat has to recover information about their very small delay separation that was transferred into the spectrum when the two echoes interfered with each other, thus explicitly reconstructing the range profile of targets from the echo spectrum. However, the bat’s 2-μs resolution limit is so short that the available spectral cues are extremely limited. Resolution of delay seems overly sharp just for interception of flying insects, which suggests that the bat’s biosonar images are of higher quality to suit a wider variety of orientation tasks, and that biosonar echo processing is correspondingly more sophisticated than has been suspected.
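
The quoted 0.3 mm two-point range follows from straightforward echo arithmetic, using a nominal speed of sound in air of c ≈ 343 m/s:

$$\Delta r = \frac{c\,\Delta t}{2} = \frac{343\ \text{m/s} \times 2\ \mu\text{s}}{2} \approx 0.34\ \text{mm}$$

The factor of two accounts for the two-way travel: an extra 2 μs of echo delay means the sound covered the additional range twice, out and back.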

Relevance:

10.00%

Publisher:

Abstract:

During metamorphosis, ranid frogs shift from a purely aquatic to a partly terrestrial lifestyle. The central auditory system undergoes functional and neuroanatomical reorganization in parallel with the development of new sound conduction pathways adapted for the detection of airborne sounds. Neural responses to sounds can be recorded from the auditory midbrain of tadpoles shortly after hatching, with higher rates of synchronous neural activity and lower sharpness of tuning than observed in postmetamorphic animals. Shortly before the onset of metamorphic climax, there is a brief “deaf” period during which no auditory activity can be evoked from the midbrain, and a loss of connectivity is observed between medullary and midbrain auditory nuclei. During the final stages of metamorphic development, auditory function and neural connectivity are restored. The acoustic communication system of the adult frog emerges from these periods of anatomical and physiological plasticity during metamorphosis.

Relevance:

10.00%

Publisher:

Abstract:

We compared magnetoencephalographic responses for natural vowels and for sounds consisting of two pure tones that represent the two lowest formant frequencies of these vowels. Our aim was to determine whether spectral changes in successive stimuli are detected differently for speech and nonspeech sounds. The stimuli were presented in four blocks applying an oddball paradigm (20% deviants, 80% standards): (i) /α/ tokens as deviants vs. /i/ tokens as standards; (ii) /e/ vs. /i/; (iii) complex tones representing /α/ formants vs. /i/ formants; and (iv) complex tones representing /e/ formants vs. /i/ formants. Mismatch fields (MMFs) were calculated by subtracting the source waveform produced by standards from that produced by deviants. As expected, MMF amplitudes for the complex tones reflected acoustic deviation: the amplitudes were stronger for the complex tones representing /α/ than /e/ formants, i.e., when the spectral difference between standards and deviants was larger. In contrast, MMF amplitudes for the vowels were similar despite their different spectral composition, whereas the MMF onset time was longer for /e/ than for /α/. Thus the degree of spectral difference between standards and deviants was reflected by the MMF amplitude for the nonspeech sounds and by the MMF latency for the vowels.
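
In the subtraction described above, the mismatch field is simply the difference of the two source waveforms:

$$\mathrm{MMF}(t) = s_{\mathrm{deviant}}(t) - s_{\mathrm{standard}}(t)$$

so its amplitude and latency quantify how the response to the rare stimulus departs from the response to the frequent one.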

Relevance:

10.00%

Publisher:

Abstract:

Neuronal models predict that retrieval of specific event information reactivates brain regions that were active during encoding of this information. Consistent with this prediction, this positron-emission tomography study showed that remembering that visual words had been paired with sounds at encoding activated some of the auditory brain regions that were engaged during encoding. After word-sound encoding, activation of auditory brain regions was also observed during visual word recognition when there was no demand to retrieve auditory information. Collectively, these observations suggest that information about the auditory components of multisensory event information is stored in auditory responsive cortex and reactivated at retrieval, in keeping with classical ideas about “redintegration,” that is, the power of part of an encoded stimulus complex to evoke the whole experience.

Relevance:

10.00%

Publisher:

Abstract:

A fundamental question in human memory is how the brain represents sensory-specific information during the process of retrieval. One hypothesis is that regions of sensory cortex are reactivated during retrieval of sensory-specific information (1). Here we report findings from a study in which subjects learned a set of picture and sound items and were then given a recall test during which they vividly remembered the items while imaged by using event-related functional MRI. Regions of visual and auditory cortex were activated differentially during retrieval of pictures and sounds, respectively. Furthermore, the regions activated during the recall test comprised a subset of those activated during a separate perception task in which subjects actually viewed pictures and heard sounds. Regions activated during the recall test were found to be represented more in late than in early visual and auditory cortex. Therefore, results indicate that retrieval of vivid visual and auditory information can be associated with a reactivation of some of the same sensory regions that were activated during perception of those items.

Relevance:

10.00%

Publisher:

Abstract:

Owls and other animals, including humans, use the difference in arrival time of sounds between the ears to determine the direction of a sound source in the horizontal plane. When an interaural time difference (ITD) is conveyed by a narrowband signal such as a tone, human beings may fail to derive the direction represented by that ITD. This is because they cannot distinguish the true ITD contained in the signal from its phase equivalents that are ITD ± nT, where T is the period of the stimulus tone and n is an integer. This uncertainty is called phase-ambiguity. All ITD-sensitive neurons in birds and mammals respond to an ITD and its phase equivalents when the ITD is contained in narrowband signals. It is not known, however, if these animals show phase-ambiguity in the localization of narrowband signals. The present work shows that barn owls (Tyto alba) experience phase-ambiguity in the localization of tones delivered by earphones. We used sound-induced head-turning responses to measure the sound-source directions perceived by two owls. In both owls, head-turning angles varied as a sinusoidal function of ITD. One owl always pointed to the direction represented by the smaller of the two ITDs, whereas a second owl always chose the direction represented by the larger ITD (i.e., ITD − T).
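
A worked example of these phase equivalents (numbers chosen for illustration): a 5 kHz tone has period T = 200 μs, so a true ITD of 60 μs is indistinguishable, within a single tone, from

$$\mathrm{ITD} \pm nT = \{\dots,\ -140\ \mu\text{s},\ 60\ \mu\text{s},\ 260\ \mu\text{s},\ \dots\}$$

On such a stimulus, the first owl described above would turn toward the direction encoded by 60 μs, while the second would turn toward −140 μs (ITD − T), on the opposite side of the midline.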

Relevance:

10.00%

Publisher:

Abstract:

The barn owl (Tyto alba) uses interaural time difference (ITD) cues to localize sounds in the horizontal plane. Low-order binaural auditory neurons with sharp frequency tuning act as narrow-band coincidence detectors; such neurons respond equally well to sounds with a particular ITD and its phase equivalents and are said to be phase ambiguous. Higher-order neurons with broad frequency tuning are unambiguously selective for single ITDs in response to broad-band sounds and show little or no response to phase equivalents. Selectivity for single ITDs is thought to arise from the convergence of parallel, narrow-band frequency channels that originate in the cochlea. ITD tuning to variable bandwidth stimuli was measured in higher-order neurons of the owl’s inferior colliculus to examine the rules that govern the relationship between frequency channel convergence and the resolution of phase ambiguity. Ambiguity decreased as stimulus bandwidth increased, reaching a minimum at 2–3 kHz. Two independent mechanisms appear to contribute to the elimination of ambiguity: one suppressive and one facilitative. The integration of information carried by parallel, distributed processing channels is a common theme of sensory processing that spans both modality and species boundaries. The principles underlying the resolution of phase ambiguity and frequency channel convergence in the owl may have implications for other sensory systems, such as electrolocation in electric fish and the computation of binocular disparity in the avian and mammalian visual systems.
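
The resolving power of frequency-channel convergence can be seen with two invented channels sharing a true ITD of 60 μs; each narrow-band channel responds at its own set of phase equivalents:

$$5\ \text{kHz}\ (T = 200\ \mu\text{s}):\ \{\dots,\ -140,\ 60,\ 260,\ \dots\}\ \mu\text{s}$$
$$4\ \text{kHz}\ (T = 250\ \mu\text{s}):\ \{\dots,\ -190,\ 60,\ 310,\ \dots\}\ \mu\text{s}$$

Only the true ITD of 60 μs appears in both sets, so a broadband neuron pooling the two channels responds unambiguously there; widening the stimulus bandwidth recruits more channels and further thins out the spurious coincidences, consistent with the observed decrease in ambiguity with bandwidth.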

Relevance:

10.00%

Publisher:

Abstract:

Two and a half millennia ago Pythagoras initiated the scientific study of the pitch of sounds; yet our understanding of the mechanisms of pitch perception remains incomplete. Physical models of pitch perception try to explain from elementary principles why certain physical characteristics of the stimulus lead to particular pitch sensations. There are two broad categories of pitch-perception models: place or spectral models consider that pitch is mainly related to the Fourier spectrum of the stimulus, whereas for periodicity or temporal models its characteristics in the time domain are more important. Current models from either class are usually computationally intensive, implementing a series of steps more or less supported by auditory physiology. However, the brain has to analyze and react in real time to an enormous amount of information from the ear and other senses. How is all this information efficiently represented and processed in the nervous system? A proposal of nonlinear and complex systems research is that dynamical attractors may form the basis of neural information processing. Because the auditory system is a complex and highly nonlinear dynamical system, it is natural to suppose that dynamical attractors may carry perceptual and functional meaning. Here we show that this idea, scarcely developed in current pitch models, can be successfully applied to pitch perception.
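
The classic test case separating the two model families (standard psychoacoustics background, not a result of this paper) is the missing fundamental: a complex of pure tones at 600, 800, and 1000 Hz is heard with a pitch of 200 Hz even though no energy is present at that frequency,

$$f_0 = \gcd(600,\ 800,\ 1000)\ \text{Hz} = 200\ \text{Hz}, \qquad T_0 = \frac{1}{f_0} = 5\ \text{ms}.$$

Place models must infer the 200 Hz pitch from the spectral pattern of the harmonics, whereas temporal models read it off the common 5 ms periodicity of the waveform; a dynamical-attractor account, as proposed here, would have the auditory system settle into an attractor associated with that period.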