908 results for Sound Localization
Abstract:
OBJECTIVE: to evaluate the auditory-skill performance and the middle-ear condition of children aged 4 to 6 years. METHOD: sound detection testing (pediatric audiometer at 20 dB HL), the Simplified Auditory Processing Assessment (ASPA), and acoustic immittance measurements (handtymp with a 226 Hz probe tone) were administered to 61 children with a mean age of 5.65 years. Fisher's exact test, with a significance level of p < 0.05, was used to compare the results of the auditory-skill tests with the acoustic immittance measurements. RESULTS: 24.6% of the children showed an alteration in at least one of the auditory skills investigated. Tympanometric alterations were found in 34.4% of the children, and 64% were classified as "fail" on the ipsilateral acoustic reflex screening. Younger children showed a higher occurrence of middle-ear alterations, but there was no statistically significant difference between ages on the tests performed. CONCLUSION: younger children showed a higher occurrence of alterations on the auditory-skill tests and on the acoustic immittance measurements. Programs that screen and monitor middle-ear condition and auditory skills at preschool and school age may eliminate or minimize conditions that would disturb social and linguistic development.
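The comparison in this abstract rests on Fisher's exact test at p < 0.05. As a minimal, hedged sketch of how such a 2x2 comparison can be run (the contingency counts below are hypothetical placeholders, not the study's data):

```python
# Minimal sketch of a Fisher's exact test like the one the abstract describes.
# The 2x2 contingency counts are hypothetical placeholders, NOT the study's data.
from scipy.stats import fisher_exact

#                tympanometry normal, tympanometry altered
table = [[32, 8],    # auditory skills normal  (hypothetical counts)
         [8, 13]]    # auditory skills altered (hypothetical counts)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```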
Abstract:
The present work is a historical survey of Gestalt trends in psychological research between the late 19th century and the first half of the 20th, with particular reference to sound and musical perception, based on a reconsideration of the experimental and theoretical literature. Ernst Mach and Christian von Ehrenfels gave rise to the debate about Gestaltqualität, which grew notably thanks to the 'Graz School' (Alexius Meinong, Stephan Witasek, Anton Faist, Vittorio Benussi), where the object theory and the production theory of perception were worked out. Stumpf's research on Tonpsychologie and Franz Brentano's tradition of 'act psychology' were directly involved in this debate, in opposition to Wilhelm Wundt's conception of the discipline; this came clearly to light in Stumpf's controversy with Carl Lorenz and Wundt over Tondistanzen. Stumpf's concept of Verschmelzung and his views on consonance and concordance led him into disputes with Theodor Lipps and Felix Krueger that lasted more than two decades. Carl Stumpf was responsible for the education of a new generation of scholars during his teaching at the Berlin University: his pupils Wolfgang Köhler, Kurt Koffka and Max Wertheimer established the so-called 'Berlin School' and promoted official Gestalt theory from the 1910s onward. From 1922 until 1938, together with other distinguished scientists, they founded and led «Psychologische Forschung», a scientific journal in which the 'Gestalt laws' and many other acoustical studies on different themes (such as sound localization, successive comparison, and phonetic phenomena) were presented. During the 1920s Erich Moritz von Hornbostel made important contributions toward the definition of an organic Tonsystem in which sound phenomena could find adequate arrangement. The last section of the work describes Albert Wellek's studies, Kurt Huber's vowel researches, and aspects of melody perception, apparent movement, and the phi phenomenon in the acoustical field. The work also contains considerations on the relationships among tone psychology, musical psychology, Gestalt psychology, musical aesthetics, and musical theory. Finally, the way Gestalt psychology changed earlier interpretations is exemplified by the decisive renewal of perception theory, the abandonment of the Konstanzannahme, and repercussions on the theory of meaning as organization and on feelings in musical experience.
Abstract:
Objective: To investigate objective and subjective effects of an adjunctive contralateral routing of signal (CROS) device at the untreated ear in patients with a unilateral cochlear implant (CI). Design: Prospective study of 10 experienced adult unilateral CI users with bilateral severe-to-profound hearing loss. Speech-in-noise reception (SNR) and sound localization were measured with and without the additional CROS device. SNR was measured by applying speech signals at the untreated/CROS side while noise came from the front (S90N0); for S0N90, the signal sources were switched. Sound localization was measured in a 12-loudspeaker full-circle setup. To evaluate subjective benefit, patients tried the device for 2 weeks at home and then filled out the abbreviated Speech, Spatial and Qualities of Hearing Scale and the Bern Benefit in Single-Sided Deafness questionnaire. Results: In the S90N0 setting, all patients showed a highly significant SNR improvement when wearing the additional CROS device (mean 6.4 dB, p < 0.001). In the unfavorable S0N90 setting, only a minor deterioration of speech understanding was noted (mean -0.66 dB, p = 0.54). Sound localization did not improve substantially with CROS. In the two questionnaires, 12 of 14 items showed an improvement in mean values, but none of the improvements was statistically significant. Conclusion: Patients with a unilateral CI benefit from a contralateral CROS device, particularly in a noisy environment when speech comes from the CROS-ear side. © 2014 S. Karger AG, Basel.
Abstract:
Computational maps are of central importance to a neuronal representation of the outside world. In a map, neighboring neurons respond to similar sensory features. A well studied example is the computational map of interaural time differences (ITDs), which is essential to sound localization in a variety of species and allows resolution of ITDs of the order of 10 μs. Nevertheless, it is unclear how such an orderly representation of temporal features arises. We address this problem by modeling the ontogenetic development of an ITD map in the laminar nucleus of the barn owl. We show how the owl's ITD map can emerge from a combined action of homosynaptic spike-based Hebbian learning and its propagation along the presynaptic axon. In spike-based Hebbian learning, synaptic strengths are modified according to the timing of pre- and postsynaptic action potentials. In unspecific axonal learning, a synapse's modification gives rise to a factor that propagates along the presynaptic axon and affects the properties of synapses at neighboring neurons. Our results indicate that both Hebbian learning and its presynaptic propagation are necessary for map formation in the laminar nucleus, but the latter can be orders of magnitude weaker than the former. We argue that this algorithm is important for the formation of computational maps, particularly when timing plays a key role.
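As a rough illustration of the learning rule this abstract describes, here is a minimal sketch under assumed parameters; the function names, the exponential STDP window, and the propagation factor epsilon are illustrative choices, not the paper's code:

```python
# Sketch of spike-based Hebbian (STDP-like) learning plus a weaker, unspecific
# change that propagates along the presynaptic axon to synapses on neighboring
# neurons. Window shape and all parameter values are illustrative assumptions.
import numpy as np

def stdp_delta(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=0.5):
    """Weight change for a pre/post spike pair, dt_ms = t_post - t_pre.
    A sub-millisecond window is plausible for microsecond-range ITD tuning."""
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_ms)   # pre before post: potentiate
    return -a_minus * np.exp(dt_ms / tau_ms)      # post before pre: depress

def update_weights(w, axon, neuron, t_pre_ms, t_post_ms, epsilon=1e-3):
    """w[i, j] is the synapse from axon i onto neuron j. The Hebbian change at
    (axon, neuron) also spreads, scaled by the much smaller epsilon, to every
    synapse that axon makes (the 'presynaptic propagation' of the model)."""
    dw = stdp_delta(t_post_ms - t_pre_ms)
    w[axon, neuron] += dw         # homosynaptic Hebbian update
    w[axon, :] += epsilon * dw    # weak propagated update along the axon
    np.clip(w, 0.0, 1.0, out=w)
    return w
```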
Abstract:
Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the “cocktail party effect”) are permanently impaired by chronically plugging one ear, whether in infancy or, especially, in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.
Abstract:
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Abstract:
Understanding how the brain processes vocal communication sounds is one of the most challenging problems in neuroscience. Our understanding of how the cortex accomplishes this unique task should greatly facilitate our understanding of cortical mechanisms in general. Perception of species-specific communication sounds is an important aspect of the auditory behavior of many animal species and is crucial for their social interactions, reproductive success, and survival. The principles of neural representations of these behaviorally important sounds in the cerebral cortex have direct implications for the neural mechanisms underlying human speech perception. Our progress in this area has been relatively slow, compared with our understanding of other auditory functions such as echolocation and sound localization. This article discusses previous and current studies in this field, with emphasis on nonhuman primates, and proposes a conceptual platform to further our exploration of this frontier. It is argued that the prerequisite condition for understanding cortical mechanisms underlying communication sound perception and production is an appropriate animal model. Three issues are central to this work: (i) neural encoding of statistical structure of communication sounds, (ii) the role of behavioral relevance in shaping cortical representations, and (iii) sensory–motor interactions between vocal production and perception systems.
Abstract:
Unilateral hearing loss (PAUn) is characterized by decreased hearing in only one ear. Individuals with this type of hearing loss may show impairment of the auditory skills of sound localization, temporal processing, temporal ordering, and temporal resolution. The aim of this study was to assess the auditory skills of temporal ordering, temporal resolution, and sound localization before and after the fitting of a hearing aid (AASI). Twenty-two individuals between 18 and 60 years of age, diagnosed with sensorineural or mixed PAUn of mild to severe degree, were evaluated. The study was divided into two stages: pre- and post-fitting of the AASI. In both stages, the individuals underwent a case-history interview, the Sound Source Localization Auditory Ability Questionnaire, the simplified auditory processing assessment (ASPA), and the Random Gap Detection Test (RGDT). The present study found statistically significant differences in the ASPA assessment (except in the sequential memory test for nonverbal sounds, TMSnV), in the RGDT, and in the Sound Localization Auditory Ability Questionnaire. The study concluded that, with effective use of the AASI, individuals with PAUn showed improvement in the auditory skills of sound localization and of temporal ordering and resolution.
Abstract:
Introduction: The cochlear implant (IC) is widely accepted as a form of intervention and (re)habilitation for severe and profound hearing losses across age groups. Nevertheless, users of a unilateral IC report complaints involving sound localization and speech understanding in noise, generated by the abnormal pattern of sensory stimulation. To provide the benefits of binaural hearing, bilateral stimulation is recommended, either through bilateral ICs or by fitting a hearing aid (AASI) contralateral to the IC. The latter condition is referred to as bimodal stimulation, since two modes of stimulation operate concurrently: electrical (IC) and acoustic (AASI). There are insufficient data in the literature on the pediatric population clarifying or demonstrating the development of the auditory cortex in bimodal hearing; notably, no studies in children were found. Objective: To characterize the P1-N1-P2 complex of the cortical auditory evoked potential (PEAC) in users of bimodal stimulation and to verify whether it correlates with speech perception tests. Method: Descriptive case-series study, recording the PEAC in five children using bimodal stimulation, following the methodology proposed by Ventura (2008) and using the Smart EP USB Jr system from Intelligent Hearing Systems. The speech sound /da/ was presented in free field. The exam was performed in three conditions: IC only, IC plus AASI, and AASI only. The cortical potential data were analyzed after two judges experienced in evoked potentials marked the presence or absence of the components of the P1-N1-P2 complex. Results: The PEAC was recorded in all children in all test conditions, and a correlation with the speech perception tests could be observed. Recording the PEAC proved to be a feasible procedure for assessing children with bimodal stimulation; however, there are still not enough data regarding its use for the assessment and indication of bilateral ICs.
Abstract:
Dynamical principles in recent psychology / Madison Bentley -- Some neglected aspects of a history of psychology / Coleman R. Griffith -- A preliminary study of the emotions / C.A. Ruckmick -- A comment upon the psychology of the audience / Coleman R. Griffith -- Leading and legibility / Madison Bentley -- The printing of backbone titles on thin books and magazines / P.N. Gould, L.C. Raines and C.A. Ruckmick -- Experiments in sound localization / C.A. Ruckmick -- The intensive summation of thermal sensations / Annette Baron and Madison Bentley.
Abstract:
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixel count, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others.
This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract more bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire the extra dimensions at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results demonstrate that appropriate coding strategies can increase sensing capacity by a factor of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task by engineering means usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor can localize multiple speakers in both stationary and dynamic auditory scenes, and can distinguish mixed conversations from independent sources with a high audio recognition rate.
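The reconstruction step common to these compressive systems can be sketched as basic sparse recovery: measure y = A x with far fewer measurements than unknowns and recover the sparse x. The ISTA solver and random matrix below are generic stand-ins, not the dissertation's calibrated sensor models:

```python
# Generic compressive-sensing recovery sketch: ISTA (iterative soft-thresholding)
# solving min_x 0.5*||A @ x - y||^2 + lam*||x||_1. The random A stands in for a
# calibrated sensor transfer matrix; all dimensions here are illustrative.
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                           # unknowns, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)                    # noiseless demo measurement
print("recovery error:", np.linalg.norm(x_hat - x_true))
```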
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include multiple stimuli of different modalities, such as visual and auditory; multiple stimuli of the same modality, such as two auditory stimuli; and the integration of stimuli from the sensory organs (i.e., the ears) with stimuli delivered by brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory learning, a problem with implications for the general problem of how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound, an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses: (1) the brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism; (2) the brain uses a 'guess and check' heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were given the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony or simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, which made the inferior colliculus an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
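One standard way to quantify the entrainment this frequency-tagging approach relies on is the vector-strength statistic, which measures phase locking of spikes to each sound's AM rate. The sketch below is my own illustration under assumed spike times and AM rates, not necessarily the dissertation's exact analysis:

```python
# Hedged sketch of the frequency-tagging analysis: quantify how strongly a
# neuron's spike train entrains to each sound's AM rate using vector strength.
# Spike times and modulation rates below are hypothetical placeholders.
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength in [0, 1]: 1 = perfect phase locking to the AM cycle."""
    phases = 2 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
    return np.abs(np.mean(np.exp(1j * phases)))

spikes = np.array([0.012, 0.063, 0.114, 0.163, 0.215])  # hypothetical spike times (s)
for f_am in (20.0, 35.0):   # the two sources' AM "tags" (hypothetical rates)
    print(f"VS at {f_am:g} Hz: {vector_strength(spikes, f_am):.2f}")
```

A high vector strength at one AM rate and a low one at the other attributes the neuron's activity to the correspondingly tagged source.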
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
Involuntary episodic memories are memories that come into consciousness without preceding retrieval effort. These memories are commonplace and are relevant to multiple mental disorders, yet they are vastly understudied. We use a novel paradigm to elicit involuntary memories in the laboratory so that we can study their neural basis. In session one, an encoding session, sounds are presented with picture pairs or alone. In session two, in the scanner, sound-picture pairs and unpaired sounds are re-encoded. Immediately afterward, participants are split into two groups: a voluntary and an involuntary group. Both groups perform a sound localization task in which they hear the sounds and indicate the side from which they are coming; the voluntary group additionally tries to remember the pictures that were paired with the sounds. Looking at neural activity, we find a main effect of condition (paired vs. unpaired sounds), showing similar activity in both groups for voluntary and involuntary memories in regions typically associated with retrieval. There is also a main effect of group (voluntary vs. involuntary) in the dorsolateral prefrontal cortex, a region typically associated with cognitive control. Turning to connectivity similarities and differences between groups, there is again a main effect of condition: paired > unpaired sounds are associated with a recollection network. In addition, three group differences were found: (1) increased connectivity between the pulvinar nucleus of the thalamus and the recollection network for the voluntary group, (2) a stronger association between the voluntary group and a network that includes regions typically found in frontoparietal and cingulo-opercular networks, and (3) shorter path lengths for about half of the nodes in these networks for the voluntary group. Finally, we use the same paradigm, with the addition of emotional pictures, to compare involuntary memories in people with posttraumatic stress disorder (PTSD) to trauma controls. There were two main findings: (1) a similar pattern of activity was found for paired > unpaired sounds in both groups, but this activity was delayed in the PTSD group; (2) a similar pattern of activity was found for high > low emotion stimuli, but it occurred earlier in the PTSD group than in the control group. Our results suggest that involuntary and voluntary memories share the same neural representation, but that voluntary memories are associated with additional cognitive control processes. They also suggest that disorders associated with cognitive deficits, like PTSD, can affect the processing of involuntary memories.
Abstract:
A new algorithm based on a signal-subspace approach is proposed for localizing a sound source in shallow water. In the first instance we assume an ideal channel with plane-parallel boundaries and known reflection properties. The sound source is assumed to emit a broadband stationary stochastic signal. The algorithm takes into account the spatial distribution of all images and the reflection characteristics of the sea bottom. It is shown that both the range and the depth of a source can be measured accurately with the help of a vertical array of sensors. For good results the number of sensors should be greater than the number of significant images; localization is possible even with a smaller array, but at the cost of higher side lobes. Next, we allow the channel to be stochastically perturbed, which results in random phase errors in the reflection coefficients. The most notable effect of the phase errors is to introduce into the spectral matrix an extra term which may be regarded as signal-generated coloured noise. Computer simulations show that the signal peak height is reduced considerably as a consequence of the random phase errors.
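A hedged sketch of the signal-subspace idea behind such algorithms follows: a MUSIC-style pseudospectrum formed by projecting candidate replica vectors onto the noise subspace of the spectral matrix. Plane-wave replicas for a vertical array stand in for the paper's image-method channel model, and all parameters are illustrative assumptions:

```python
# MUSIC-style subspace localizer sketch: candidate replicas nearly orthogonal
# to the noise subspace of the sensor spectral matrix produce spectrum peaks.
import numpy as np

def music_spectrum(R, replicas, n_sources):
    """R: (m, m) Hermitian spectral matrix; replicas: (n_candidates, m) rows."""
    _, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = eigvecs[:, : R.shape[0] - n_sources]    # noise-subspace basis
    num = np.linalg.norm(replicas, axis=1) ** 2
    den = np.sum(np.abs(replicas @ En.conj()) ** 2, axis=1)  # ||En^H a||^2
    return num / den                             # peaks at candidate source positions

# Illustrative use: m-sensor vertical array, plane-wave replicas over angle.
m, f, c, d = 16, 200.0, 1500.0, 3.75             # sensors, Hz, m/s, spacing (m)
k = 2 * np.pi * f / c
angles = np.linspace(-np.pi / 2, np.pi / 2, 361)
replicas = np.exp(1j * k * d * np.outer(np.sin(angles), np.arange(m)))

a_true = replicas[200]                           # synthetic single source
R = np.outer(a_true, a_true.conj()) + 0.01 * np.eye(m)
spectrum = music_spectrum(R, replicas, n_sources=1)
print(f"estimated angle: {np.degrees(angles[np.argmax(spectrum)]):.1f} deg")
```

For the shallow-water problem the replicas would instead be computed over a (range, depth) grid from the image-source model, but the subspace projection is the same.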
Abstract:
Binaural experiments are described which indicate that the ability of the brain to localize a desired sound and to suppress undesired sounds coming from other directions can be traced in part to the different times of arrival of a sound at the two ears. It is suggested that the brain inserts a time delay in one of the two nerve paths associated with the ears so as to be able to compare, and thus concentrate on, those sounds arriving at the ears with this particular time-of-arrival difference. The ability to perceive weak sounds binaurally in the presence of noise is shown to be a simple function of the directions of the desired sound and the noise. An explanation is given for the effect reported by Koenig that front-rear confusion is avoided by head movements.
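The delay-and-compare mechanism this abstract proposes is, computationally, a cross-correlation over interaural lag. A minimal sketch with synthetic signals (the sampling rate and the 0.5 ms ITD are arbitrary choices for illustration):

```python
# Minimal sketch of the delay-comparison idea: estimate the interaural time
# difference (ITD) as the lag maximizing the cross-correlation of the two ear
# signals. The signals, sampling rate, and ITD below are synthetic choices.
import numpy as np

def estimate_itd(left, right, fs):
    """Lag (s) of `right` relative to `left`; positive = right ear lags."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))
    return lags[np.argmax(corr)] / fs

fs = 48_000
rng = np.random.default_rng(1)
src = rng.standard_normal(4800)          # 100 ms of broadband source
itd_samples = 24                         # true ITD of 0.5 ms (source off to the left)
left = src
right = np.roll(src, itd_samples)        # right ear receives the sound later
print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e3:.2f} ms")  # ~0.50 ms
```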