999 results for Spatial Hearing
Abstract:
Spatial hearing refers to a set of abilities enabling us to determine the location of sound sources, redirect our attention toward relevant acoustic events, and recognize separate sound sources in noisy environments. Determining the location of sound sources plays a key role in the way humans perceive and interact with their environment. Deficits in sound localization are observed after lesions to the neural tissues supporting these functions and can result in serious handicaps in everyday life. These deficits can, however, be remediated (at least to a certain degree) thanks to the surprising capacity for reorganization that the human brain possesses following damage and/or learning, namely brain plasticity. In this thesis, our aim was to investigate the functional organization of auditory spatial functions and the learning-induced plasticity of these functions. Overall, we describe the results of three studies. The first study, entitled "The role of the right parietal cortex in sound localization: A chronometric single pulse transcranial magnetic stimulation study" (At et al., 2011; study A), investigated the role of the right parietal cortex in spatial functions and its chronometry (i.e., the critical time window of its contribution to sound localization). We concentrated on the behavioral changes produced by the temporary inactivation of the parietal cortex with transcranial magnetic stimulation (TMS). We found that the integrity of the right parietal cortex is crucial for localizing sounds in space and determined a critical time window of its involvement, suggesting a right parietal dominance for auditory spatial discrimination in both hemispaces.
In "Distributed coding of the auditory space in man: evidence from training-induced plasticity" (At et al., 2013a; study B), we used electroencephalography (EEG) to investigate the neurophysiological correlates of, and the changes induced in, the coding of different sub-regions of the right auditory hemispace by multi-day auditory spatial training in healthy subjects. We report that the coding of sound locations is distributed over numerous auditory regions, that particular auditory areas code specifically for precise parts of auditory space, and that this specificity is enhanced with training. In the third study, "Training-induced changes in auditory spatial mismatch negativity" (At et al., 2013b; study C), we investigated the pre-attentive neurophysiological changes induced by a four-day training in healthy subjects using a passive mismatch negativity (MMN) paradigm. We showed that training changed the mechanisms for the relative representation of sound positions rather than the representations of specific lateralizations themselves, and that it altered coding in right parahippocampal regions.
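Studies A-C all probe how the cortex processes binaural localization cues, chiefly interaural time and level differences. As general background, and not a method from these studies, the classic Woodworth spherical-head approximation of the interaural time difference can be sketched as follows; the head radius and speed of sound are typical assumed values:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Interaural time difference (seconds) for a distant source at the
    given azimuth, using Woodworth's spherical-head approximation:
    ITD = (r / c) * (theta + sin(theta)), with theta in radians."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / speed_of_sound * (theta + math.sin(theta))
```

For a source straight ahead (0°) the ITD is zero; at 90° azimuth the model predicts roughly 0.65 ms, close to the commonly cited maximum human ITD.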
Abstract:
In cases of hearing loss, spatial localization is degraded and hinders speech understanding, even when hearing aids are worn. The present study modified the shape of participants' outer ears with silicone prostheses in order to induce changes in spectral cues (HRTFs) similar to those caused by hearing aids, and to explore which perceptual feedback mechanisms (visual, spectral, or tactile) allow listeners to switch from the new set of HRTFs back to the original set once the prostheses are removed. The results show that participants adapted to the new HRTFs within four training sessions. Upon removal of the prostheses, participants immediately returned to their original performance. The present data do not allow us to conclude whether switching from one set of HRTFs to another is influenced by any of the perceptual feedback mechanisms studied. Adaptation to the prostheses persisted for up to four weeks after their removal.
Abstract:
A speech message played several metres from the listener in a room is usually heard to have much the same phonetic content as it does when played nearby, even though the different amounts of reflected sound make the temporal envelopes of these signals very different. To study this ‘constancy’ effect, listeners heard speech messages and speech-like sounds comprising 8 auditory-filter-shaped noise bands whose temporal envelopes corresponded to those arising in these filters when the speech message is played. The ‘contexts’ were “next you’ll get _ to click on”, into which a “sir” or “stir” test word was inserted. These test words were drawn from an 11-step continuum formed by amplitude modulation. Listeners identified the test words appropriately, even in the 8-band conditions where the speech had a ‘robotic’ quality. Constancy was assessed by comparing the influence of room reflections on the test word across conditions where the context had either the same level of room reflections (i.e., from the same, far distance) or a much lower level (i.e., from nearby). Constancy effects were obtained with both the natural and the 8-band speech. Results are considered in terms of the degree of ‘matching’ between the context’s and the test word’s bands.
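The abstract says only that the "sir"-"stir" continuum was formed by amplitude modulation, without detailing the procedure. As a purely schematic illustration of such a continuum, the sketch below varies the depth of a brief /t/-like dip in an otherwise flat amplitude envelope; all names, step counts, and sample positions here are illustrative assumptions, not the study's actual stimuli:

```python
def continuum_envelopes(n_steps=11, n_samples=200, dip_start=60, dip_end=100):
    """Return n_steps amplitude envelopes; step 0 has no dip ('sir'-like),
    the final step has a full silent gap ('stir'-like), and intermediate
    steps linearly interpolate the dip depth."""
    envelopes = []
    for step in range(n_steps):
        depth = step / (n_steps - 1)  # 0.0 (no dip) .. 1.0 (full gap)
        envelope = [1.0] * n_samples
        for i in range(dip_start, dip_end):
            envelope[i] = 1.0 - depth
        envelopes.append(envelope)
    return envelopes
```

Multiplying a recorded word by each envelope in turn would yield an ordered series of test words differing only in the modulated segment.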
Abstract:
The characteristics of moving sound sources have strong implications for the listener's distance perception and estimation of velocity. Changes in typical vehicle sound emissions, currently under way due to the trend towards electromobility, affect pedestrian safety in road traffic. Investigations of the relevant cues for the velocity and distance perception of moving sound sources are therefore of interest not only to the psychoacoustic community, but also for several applications, such as virtual reality, noise pollution, and the safety aspects of road traffic. This article describes a series of psychoacoustic experiments in this field. Dichotic and diotic stimuli from a set of real-life recordings of a passing passenger car and a motorcycle were presented to test subjects, who in turn were asked to determine the velocity of the object and its minimal distance from the listener. The results of these psychoacoustic experiments show that the estimated velocity is strongly linked to the object's distance. Furthermore, it could be shown that binaural cues contribute significantly to the perception of velocity. In a further experiment, it was shown that, independently of the type of vehicle, the main parameter for distance determination is the maximum sound pressure level at the listener's position. The article suggests a system architecture for the adequate consideration of moving sound sources in virtual auditory environments. Virtual environments can thus be used to investigate the influence of new vehicle powertrain concepts and the related sound emissions on pedestrians' ability to estimate the distance and velocity of moving objects.
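The finding that the maximum sound pressure level at the listener's position drives distance judgments relates to the free-field distance law, under which the level of a point source falls by 6 dB per doubling of distance. A minimal sketch of that relationship, under idealized assumptions (point source, no ground reflections or air absorption):

```python
import math

def spl_at_distance(spl_ref_db, ref_distance_m, distance_m):
    """Free-field sound pressure level (dB) of a point source at distance_m,
    given a reference level measured at ref_distance_m (inverse-square law:
    level falls by 20*log10 of the distance ratio)."""
    return spl_ref_db - 20.0 * math.log10(distance_m / ref_distance_m)
```

A vehicle measured at 80 dB from 1 m would, under these assumptions, be heard at about 74 dB from 2 m and 60 dB from 10 m; listeners could in principle invert this relation only if they assume a typical source level for the vehicle type.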
Abstract:
Determining where sounds come from is fundamental to interacting effectively with our environment. Auditory localization is an important and complex faculty of the human auditory system. The brain must decode the acoustic signal to extract the cues that allow it to localize a sound source. These auditory localization cues depend partly on morphological and environmental properties that cannot be anticipated by genetic encoding. The processing of these cues must therefore be adjusted by experience during development. In adulthood, plasticity in auditory localization still exists. This plasticity has been studied at the behavioral level, but very little is known about its neural correlates and mechanisms. The present research aimed to examine this plasticity, as well as the mechanisms encoding auditory localization cues, both behaviorally and through the neural correlates of the observed behavior. In the first two studies, we imposed a perceptual shift of horizontal auditory space using digital earplugs. We showed that young adults can rapidly adapt to a substantial perceptual shift. Using high-resolution functional MRI, we observed changes in auditory cortical activity accompanying this adaptation, in terms of hemispheric lateralization. We were also able to confirm the hemifield-coding hypothesis as the representation of horizontal auditory space. In a third study, we modified the most important auditory cue for the perception of vertical space using silicone earmolds. We showed that adaptation to this modification was followed by no aftereffect upon removal of the molds, even on the very first presentation of a sound stimulus.
This result is consistent with the hypothesis of a many-to-one mapping mechanism, through which several spectral profiles can be associated with the same spatial position. In a fourth study, using functional MRI and taking advantage of the adaptation to the silicone molds, we revealed the encoding of sound elevation in the human auditory cortex.
Abstract:
OBJECTIVE: To identify and quantify sources of variability in scores on the Speech, Spatial and Qualities of Hearing Scale (SSQ) and its short forms among normal-hearing and hearing-impaired subjects using a French-language version of the SSQ. DESIGN: Multi-regression analyses of SSQ scores were performed using age, gender, years of education, hearing loss, and hearing-loss asymmetry as predictors. Similar analyses were performed for each subscale (Speech, Spatial, and Qualities), for several SSQ short forms, and for differences in subscale scores. STUDY SAMPLE: One hundred normal-hearing subjects (NHS) and 230 hearing-impaired subjects (HIS). RESULTS: Hearing loss in the better ear and hearing-loss asymmetry were the two main predictors of scores on the overall SSQ, the three main subscales, and the SSQ short forms. The greatest difference between the NHS and HIS was observed for the Speech subscale, and the NHS showed scores well below the maximum of 10. An age effect was observed mostly on the Speech subscale items, and the number of years of education had a significant influence on several Spatial and Qualities subscale items. CONCLUSION: Strong similarities between SSQ scores obtained across different populations and languages, and between the SSQ and its short forms, underline their potential for international use.
Abstract:
Hearing impairment affects millions of people worldwide, giving rise to various problems, notably psychosocial ones, that compromise the individual's quality of life. Hearing impairment influences behavior, particularly by hindering communication. With technological advances, assistive products, in particular hearing aids and cochlear implants, improve that quality of life by improving communication. Assessment scales allow us to determine how hearing impairment influences daily life, with or without amplification, and how it affects the individual's psychosocial, emotional, or professional functioning; this information is important for determining the need for, and the success of, amplification, regardless of the type and degree of hearing impairment. The aim of the present study was the translation and adaptation to Portuguese culture of The Speech, Spatial and Qualities of Hearing Scale (SSQ), developed by Stuart Gatehouse and William Noble in 2004. This work was carried out in the hearing centers of Widex Portugal. After the translation and back-translation procedures, the Portuguese version was tested on 12 individuals aged between 36 and 80 years, of whom 6 had used a hearing aid for more than a year, one had used one for less than a year, and 5 had never used one. With the translation and cultural adaptation into European Portuguese of the "Questionário sobre as Qualidades Espaciais do Discurso – SSQ", we contribute to a better assessment of individuals who are undergoing, or will undergo, auditory rehabilitation programs.
Abstract:
Consonant imprecision has been reported to be a common feature of the dysarthric speech disturbances exhibited by individuals who have sustained a traumatic brain injury (TBI). Inaccurate tongue placements against the hard palate during consonant articulation may be one factor underlying the imprecision. To investigate this hypothesis, electropalatography (EPG) was used to assess the spatial characteristics of the tongue-to-palate contacts exhibited by three males (aged 23-29 years) with dysarthria following severe TBI. Five nonneurologically impaired adults served as control subjects. Twelve single-syllable words of CV or CVC construction (where initial C = /t, d, s, z, k, g/ and V = /i, a/) were read aloud three times by each subject while wearing an EPG palate. Spatial characteristics were analyzed in terms of the location, pattern, and amount of tongue-to-palate contact at the frame of maximum contact during production of each consonant. The results revealed that for the majority of consonants, the patterns and locations of contacts exhibited by the TBI subjects were consistent with the contacts generated by the group of control subjects. One notable exception was one subject's production of the alveolar fricatives, in which complete closure across the palate was demonstrated, rather than the characteristic groove configuration. Major discrepancies were also noted in relation to the amount of tongue-to-palate contact exhibited, with two TBI subjects consistently demonstrating increased contacts compared to the control subjects. The implications of these findings for the development of treatment programs for dysarthric speech disorders subsequent to TBI are highlighted.
Abstract:
Knowledge of the spatial variability of noise levels and the construction of kriging maps can help in evaluating the salubrity of environments occupied by agricultural workers. The objective of this research was therefore to characterize, using geostatistics, the spatial variability of the noise level generated by four agricultural machines, and to verify whether the values are within the limits of human comfort. The machines evaluated were a harvester, a chainsaw, a brushcutter, and a tractor. The data were collected at the height of the operator's ear and at different distances. The results showed that geostatistics, via the kriging technique, made it possible to delineate areas with different noise levels from the collected data. With the exception of the harvester, all of the machines presented noise levels above 85 dB(A) near the operator, requiring the use of hearing protection.
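The noise maps in this study were produced by kriging, which weights neighboring measurements according to a fitted variogram model. Kriging proper requires that variogram estimation step; as a simpler stand-in that illustrates the same spatial-interpolation idea (explicitly not the method used in the study), inverse-distance weighting can be sketched as:

```python
def idw_noise_level(samples, query, power=2.0):
    """Interpolate a noise level (dB) at the query (x, y) point from
    measured (x, y, level) samples using inverse-distance weighting:
    nearer measurements get larger weights."""
    weighted_sum = weight_total = 0.0
    for x, y, level in samples:
        dist_sq = (query[0] - x) ** 2 + (query[1] - y) ** 2
        if dist_sq == 0.0:
            return level  # query coincides with a measurement point
        weight = 1.0 / dist_sq ** (power / 2.0)
        weighted_sum += weight * level
        weight_total += weight
    return weighted_sum / weight_total
```

Midway between a 90 dB(A) and an 80 dB(A) measurement this returns 85 dB(A); real kriging would additionally honor the spatial correlation structure of the noise field and provide an estimation variance at each map point.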
Abstract:
Bone-anchored hearing implants (BAHI) are routinely used to alleviate the effects of the acoustic head shadow in single-sided sensorineural deafness (SSD). In this study, the influence of the directional microphone setting and the maximum power output of the BAHI sound processor on speech understanding in noise in a laboratory setting was investigated. Eight adult BAHI users with SSD participated in this pilot study. Speech understanding in noise was measured using a new Slovak speech-in-noise test in two different spatial settings, with speech coming from the side of the BAHI and noise from the front (S90N0), or vice versa (S0N90). In both spatial settings, speech understanding was measured without a BAHI, with a Baha BP100 in omnidirectional mode, with a BP100 in directional mode, with a BP110 power in omnidirectional mode, and with a BP110 power in directional mode. In spatial setting S90N0, speech understanding in noise with either sound processor and in either microphone mode was improved by 2.2-2.8 dB (p = 0.004-0.016). In spatial setting S0N90, speech understanding in noise was reduced by either BAHI, but was significantly better by 1.0-1.8 dB if the directional microphone system was activated (p = 0.046), compared to the omnidirectional setting. With the limited number of subjects in this study, no statistically significant differences were found between the two sound processors.
Abstract:
Peripheral auditory neurons are tuned to single frequencies of sound. In the central auditory system, excitatory (or facilitatory) and inhibitory neural interactions take place at multiple levels and produce neurons with sharp level-tolerant frequency-tuning curves, neurons tuned to parameters other than frequency, cochleotopic (frequency) maps that differ from the peripheral cochleotopic map, and computational maps. The response properties of these neurons have been considered to arise solely from the divergent and convergent projections of neurons in the ascending auditory system. Recent research on the corticofugal (descending) auditory system, however, indicates that the corticofugal system adjusts and improves auditory signal processing by modulating neural responses and maps. The corticofugal function consists of at least the following subfunctions. (i) Egocentric selection, for short-term modulation of auditory signal processing according to auditory experience. Egocentric selection, based on focused positive feedback associated with widespread lateral inhibition, is mediated by the cortical neural net working together with the corticofugal system. (ii) Reorganization, for long-term modulation of the processing of behaviorally relevant auditory signals. Reorganization is based on egocentric selection working together with nonauditory systems. (iii) Gain control, based on overall excitatory, facilitatory, or inhibitory corticofugal modulation. Egocentric selection can be viewed as selective gain control. (iv) Shaping (or even creation) of the response properties of neurons. Filter properties of neurons in the frequency, amplitude, time, and spatial domains can be sharpened by the corticofugal system. Sharpening of tuning is one of the functions of egocentric selection.
Abstract:
High-fidelity eye tracking is combined with a perceptual grouping task to provide insight into the likely mechanisms underlying the compensation of retinal image motion caused by movement of the eyes. The experiments describe the covert detection of minute temporal and spatial offsets incorporated into a test stimulus. Analysis of eye motion on individual trials indicates that the temporal offset sensitivity is actually due to motion of the eye inducing artificial spatial offsets in the briefly presented stimuli. The results have strong implications for two popular models of compensation for fixational eye movements, namely efference copy and image-based models. If an efference copy model is assumed, the results place constraints on the spatial accuracy and source of compensation. If an image-based model is assumed, then limitations are placed on the integration time window over which motion estimates are calculated.
Abstract:
Visual acuity is limited by the size and density of the smallest retinal ganglion cells, which correspond to the midget ganglion cells in primate retina and the beta ganglion cells in cat retina, both of which have concentric receptive fields that respond at either light-On or light-Off. In contrast, the smallest ganglion cells in the rabbit retina are the local edge detectors (LEDs), which respond to spot illumination at both light-On and light-Off. However, the LEDs do not predominate in the rabbit retina, and the question arises: what role do they play in fine spatial vision? We studied the morphology and physiology of LEDs in the isolated rabbit retina and examined how their response properties are shaped by the excitatory and inhibitory inputs. Although the LEDs comprise only ~15% of the ganglion cells, neighboring LEDs are separated by 30-40 µm on the visual streak, which is sufficient to account for the grating acuity of the rabbit. The spatial and temporal receptive-field properties of LEDs are generated by distinct inhibitory mechanisms. The strong inhibitory surround acts presynaptically to suppress both the excitation and the inhibition elicited by center stimulation. The temporal properties, characterized by sluggish onset, sustained firing, and low bandwidth, are mediated by the temporal properties of the bipolar cells and by postsynaptic interactions between the excitatory and inhibitory inputs. We propose that the LEDs signal fine spatial detail during visual fixation, when high temporal frequencies are minimal.
Abstract:
The aim was to describe the outcome of neonatal hearing screening (NHS) and audiological diagnosis in neonates in the NICU. The sample was divided into Group I (GI), neonates who underwent NHS in one step, and Group II (GII), neonates who underwent a test and retest NHS. The NHS procedure was the automated auditory brainstem response. NHS was performed in 82.1% of surviving neonates. For GI, the referral rate was 18.6% and the false-positive rate was 62.2% (normal hearing at the diagnostic stage). In GII, with retest, the referral rate dropped to 4.1% and the false-positive rate to 12.5%. Sensorineural hearing loss was found in 13.2% of infants and conductive hearing loss in 26.4% of cases. There was one case of auditory neuropathy spectrum disorder (1.9%). The dropout rate over the whole process was 21.7% for GI and 24.03% for GII. We concluded that it was not possible to perform universal NHS in the studied sample or, in many cases, to apply it within the first month of life. The retest reduced the failure and false-positive rates and did not increase dropout, indicating that it is a recommendable step in NHS programs in the NICU. The incidence of hearing loss was 2.9%, comprising sensorineural hearing loss (0.91%), conductive hearing loss (1.83%), and auditory neuropathy spectrum disorder (0.19%).