999 results for auditory design
Abstract:
Although the effects of cannabis on perception are well documented, little is known about their neural basis or how they may contribute to the formation of psychotic symptoms. We used functional magnetic resonance imaging (fMRI) to assess the effects of Delta-9-tetrahydrocannabinol (THC) and cannabidiol (CBD) during visual and auditory processing in healthy volunteers. In total, 14 healthy volunteers were scanned on three occasions. Identical-appearing capsules containing 10 mg THC, 600 mg CBD, or placebo were allocated in a balanced, double-blinded, pseudo-randomized crossover design. Plasma levels of each substance, physiological parameters, and measures of psychopathology were taken at baseline and at regular intervals following ingestion of the substances. During fMRI scanning, volunteers listened passively to words being read and viewed a radial visual checkerboard in alternating blocks. Administration of THC was associated with increases in anxiety, intoxication, and positive psychotic symptoms, whereas CBD had no significant symptomatic effects. THC decreased activation relative to placebo in bilateral temporal cortices during auditory processing, and increased and decreased activation in different visual areas during visual processing. CBD was associated with activation in right temporal cortex during auditory processing, and when contrasted, THC and CBD had opposite effects in the right posterior superior temporal gyrus, the right-sided homolog of Wernicke's area. Moreover, the attenuation of activation in this area (maximum at 61, -15, -2) by THC during auditory processing was correlated with its acute effect on psychotic symptoms. Single doses of THC and CBD differentially modulate brain function in areas that process auditory and visual stimuli and relate to induced psychotic symptoms. Neuropsychopharmacology (2011) 36, 1340-1348; doi:10.1038/npp.2011.17; published online 16 March 2011
Abstract:
Speech understanding disorders in the elderly may be due to peripheral or central auditory dysfunctions. Asymmetry of results in dichotic testing increases with age and may reflect reduced interhemispheric transmission and cognitive decline. Aim: To investigate auditory processing in aged people with no hearing complaints. Study design: clinical, prospective. Materials and Methods: Twenty-two volunteers, aged between 55 and 75 years, were evaluated. They reported no hearing complaints and had auditory thresholds of at most 40 dB HL up to 4 kHz, speech recognition scores of at least 80%, and peripheral symmetry between the ears. We used two kinds of tests: speech in noise and dichotic alternating disyllables (SSW). Results were compared between males and females, right and left ears, and between age groups. Results: There were no significant differences between genders on either test. Left ears showed worse results in the competitive condition of the SSW test. Individuals aged 65 or older performed more poorly than those aged 55 to 64. Conclusion: Central auditory tests showed worse performance with aging. The use of a dichotic test in the auditory evaluation of the elderly may help in the early identification of degenerative processes, which are common among these patients.
Abstract:
Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that the explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the segregation of sound objects in noisy environments remains preserved. By systematically assessing patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use can be disturbed; (2) dissociations of impaired explicit vs. preserved implicit use occur rather frequently; and (3) different types of sound localisation deficits can be associated with preserved implicit use. Conceptually, the dissociation between explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessments of auditory spatial functions in clinical settings, especially when adaptation to the auditory environment is at stake. Further, systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained following brain damage.
Abstract:
Mapping the human auditory cortex with standard functional imaging techniques is difficult because of its small size and angular position along the Sylvian fissure. As a result, the exact number and location of auditory cortex areas in the human remains unknown. In a first experiment, we measured the two largest tonotopic areas of primary auditory cortex (PAC, A1 and R) using high-resolution functional MRI at 7 Tesla relative to the underlying anatomy of Heschl's gyrus (HG). The data reveal a clear anatomical-functional relationship that indicates the location of PAC across the range of common morphological variants of HG (single gyri, partial duplication and complete duplication). Human PAC tonotopic areas are oriented along an oblique posterior-to-anterior axis with mirror-symmetric frequency gradients perpendicular to HG, as in the macaque. In a second experiment, we tested whether these primary frequency-tuned units were modulated by selective attention to preferred vs. non-preferred sound frequencies in the dynamic manner needed to account for human listening abilities in noisy environments, such as cocktail parties or busy streets. We used a dual-stream selective attention experiment in which subjects attended to one of two competing tonal streams presented simultaneously to different ears. Attention to low-frequency tones (250 Hz) enhanced neural responses within low-frequency-tuned voxels relative to high-frequency tones (4000 Hz), and vice versa when attention switched from high to low. Human PAC is thus able to tune into attended frequency channels and can switch frequencies on demand, like a radio. In a third experiment, we investigated repetition suppression effects for environmental sounds within primary and non-primary early-stage auditory areas, identified with the tonotopic mapping design. Repeated presentations of sounds from the same sources, as compared to different sources, gave repetition suppression effects within posterior and medial non-primary areas of the right hemisphere, reflecting their potential involvement in semantic representations. These three studies were conducted at 7 Tesla with high-resolution imaging. However, 7 Tesla scanners are, for the moment, not yet used for clinical diagnosis and mostly reside in institutions external to hospitals. Thus, hospital-based clinical functional and structural studies are mainly performed using lower-field systems (1.5 or 3 Tesla). In a fourth experiment, we acquired tonotopic maps at 3 and 7 Tesla and evaluated the consistency of a tonotopic mapping paradigm between scanners. Mirror-symmetric gradients within PAC were highly similar at 7 and 3 Tesla across renderings at different spatial resolutions. We concluded that the tonotopic mapping paradigm is robust and suitable for defining primary tonotopic areas at 3 Tesla as well. Finally, in a fifth study, we investigated whether focal brain lesions alter tonotopic representations in the intact ipsi- and contralesional primary auditory cortex in three patients with hemispheric or cerebellar lesions, with and without auditory complaints. We found evidence for tonotopic reorganisation at the level of the primary auditory cortex in cases of brain lesions, independently of auditory complaints. Overall, these results reflect a certain degree of plasticity within primary auditory cortex in different populations of subjects, assessed at different field strengths.
Abstract:
Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population Receptive Field (pRF) model, developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise, and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, thereby reducing biases due to habituation, expectation, and estimation artifacts, as well as reducing the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within the PAC than in surrounding auditory responsive regions (non-PAC).
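To make the modelling approach concrete, below is a minimal sketch of a pRF-style tonotopic fit under stated assumptions: each voxel's tuning is modelled as a Gaussian over log frequency, the predicted BOLD time course is the stimulus drive passed through that tuning curve and convolved with a haemodynamic response, and the preferred frequency and bandwidth are found by grid search. All names and inputs (stim, bold, hrf, the candidate grids) are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def gaussian_tuning(freqs_log, center_log, sigma_log):
        # Gaussian population receptive field over log frequency.
        return np.exp(-0.5 * ((freqs_log - center_log) / sigma_log) ** 2)

    def predict_timecourse(stim, freqs_log, center_log, sigma_log, hrf):
        # stim: (n_timepoints, n_freqs) matrix of presented frequency content.
        drive = stim @ gaussian_tuning(freqs_log, center_log, sigma_log)
        return np.convolve(drive, hrf)[: stim.shape[0]]  # truncate to scan length

    def fit_voxel(bold, stim, freqs_log, hrf, centers_log, sigmas_log):
        # Grid search over candidate preferred frequencies and bandwidths.
        best = (None, None, -np.inf)
        for c in centers_log:
            for s in sigmas_log:
                pred = predict_timecourse(stim, freqs_log, c, s, hrf)
                r = np.corrcoef(pred, bold)[0, 1]
                if r > best[2]:
                    best = (c, s, r)
        return best  # (preferred log frequency, bandwidth estimate, fit quality)

Because the stimulus matrix is arbitrary, such a fit does not require a frequency progression, which is the flexibility the abstract highlights over phase-encoding designs.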
Abstract:
Recent evidence suggests the human auditory system is organized, like the visual system, into a ventral 'what' pathway, devoted to identifying objects, and a dorsal 'where' pathway, devoted to the localization of objects in space [1]. Several brain regions have been identified in these two pathways, but until now little is known about the temporal dynamics of these regions. We investigated this issue using 128-channel auditory evoked potentials (AEPs). Stimuli were stationary sounds created by varying interaural time differences and environmental real recorded sounds. Stimuli of each condition (localization, recognition) were presented through earphones in a blocked design, while subjects determined their position or meaning, respectively. AEPs were analyzed in terms of their topographical scalp potential distributions (segmentation maps) and underlying neuronal generators (source estimation) [2]. Fourteen scalp potential distributions (maps) best explained the entire data set. Ten maps were nonspecific (associated with auditory stimulation in general), two were specific for sound localization, and two were specific for sound recognition (P-values ranging from 0.02 to 0.045). Condition-specific maps appeared at two distinct time periods: ~200 ms and ~375-550 ms post-stimulus. The brain sources associated with the maps specific for sound localization were mainly situated in the inferior frontal cortices, confirming previous findings [3]. The sources associated with sound recognition were predominantly located in the temporal cortices, with a weaker activation in the frontal cortex. The data show that sound localization and sound recognition engage different brain networks that are apparent at two distinct time periods. References: 1. Maeder et al., Neuroimage, 2001. 2. Michel et al., Brain Research Reviews, 2001. 3. Ducommun et al., Neuroimage, 2002.
Abstract:
Preattentive perception of occasional deviant stimuli in a stream of standard stimuli can be recorded with the cognitive event-related potential (ERP) mismatch negativity (MMN). Earlier detection of stimuli at the auditory cortex can be examined with the N1 and P2 ERPs. The MMN recording does not require co-operation, it correlates with the perceptual threshold, and even complex sounds can be used as stimuli. The aim of this study was to examine different aspects that should be considered when measuring discrimination of hearing with ERPs. The MMN was found to be stimulus-intensity-dependent: as the intensity of sine-wave stimuli was increased from 40 to 80 dB HL, MMN mean amplitudes increased. The effect of stimulus frequency on the MMN was studied such that the pitch difference would be equal in each stimulus block according to the psychophysiological mel scale or the difference limen of frequency (DLF); however, the blocks differed from each other. Contralateral white-noise masking (50 dB EML) was found to attenuate the MMN amplitude when the right ear was stimulated. The N1 amplitude was attenuated, whereas the P2 amplitude was not affected by contralateral white-noise masking. The perception and production of vowels by four postlingually deafened patients with a cochlear implant were also studied; the MMN response could be elicited in the patient with the best vowel perception abilities. The results of these studies show that, in MMN recordings, the stimulus parameters and the design of the recording procedure have a great influence on the results.
Abstract:
The paper describes an auditory interface using directional sound as a possible support for pilots during approach in an instrument landing scenario. Several ways of producing directional sound are illustrated. One approach, using speaker pairs and controlling the power distribution between the speakers, is evaluated experimentally. Results show that power distribution alone is insufficient for positioning single isolated sound events, although discrimination in the horizontal plane performs better than in the vertical plane. Additional sound parameters to compensate for this are proposed.
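As an illustration of controlling the power distribution between a speaker pair, the sketch below uses a standard constant-power panning law; the abstract does not specify the exact law used in the study, so this is an assumption for illustration only.

    import numpy as np

    def constant_power_gains(pan):
        # pan in [0, 1]: 0 = all power to the left speaker, 1 = all to the right.
        # Returns (g_left, g_right) with g_left**2 + g_right**2 == 1 (constant power).
        theta = pan * np.pi / 2
        return np.cos(theta), np.sin(theta)

    # Place a single sound event slightly left of centre between the speaker pair.
    g_left, g_right = constant_power_gains(0.25)
    print(f"left gain = {g_left:.3f}, right gain = {g_right:.3f}")

Keeping the summed power constant avoids loudness changes as the virtual position moves, but, consistent with the experimental result above, the gain ratio alone conveys only a coarse sense of position.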
Abstract:
The treatment of auditory-verbal short-term memory (STM) deficits in aphasia is a growing avenue of research (Martin & Reilly, 2012; Murray, 2012). STM treatment requires precise timing, which makes it well suited to computerised delivery. We have designed software that provides STM treatment for aphasia. The treatment is based on matching listening span tasks (Howard & Franklin, 1990), aiming to improve the temporal maintenance of multi-word sequences (Salis, 2012). The person listens to pairs of word lists that differ in word order and decides whether the pairs are the same or different. This approach does not require speech output and is suitable for persons with aphasia who have limited or no output. We describe the software and how feedback from clinicians shaped its design.
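For illustration, a minimal sketch of the same/different trial logic described above follows; the word list, span length, and 50% "different" rate are hypothetical choices, not taken from the actual software.

    import random

    def make_trial(words, span):
        # Build one matching listening span trial: two word lists and whether
        # they are identical (same order) or differ by a swap of two positions.
        list_a = random.sample(words, span)
        is_same = random.random() < 0.5          # hypothetical 50% "same" rate
        list_b = list(list_a)
        if not is_same:
            i, j = random.sample(range(span), 2)
            list_b[i], list_b[j] = list_b[j], list_b[i]
        return list_a, list_b, is_same

    words = ["cat", "door", "tree", "lamp", "fish", "road"]   # illustrative items
    print(make_trial(words, span=3))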
Abstract:
CONTEXT AND OBJECTIVE: Children and adolescents living in situations of social vulnerability present a range of health problems. Even so, claims about the existence of cognitive and/or sensory alterations in this population remain controversial. The aim of this study was to investigate aspects of auditory processing by applying brainstem auditory evoked potential (BAEP) testing and behavioural auditory processing assessment in street children and adolescents, compared with a control group. DESIGN AND SETTING: Cross-sectional study at the Auditory Processing Laboratory, Faculdade de Medicina da Universidade de São Paulo. METHODS: Auditory processing tests were applied to a study group of 27 individuals in situations of social vulnerability, subdivided into 11 children (7 to 10 years) and 16 adolescents (11 to 16 years) of both sexes, and compared with a control group of 21 participants without complaints, subdivided into 10 children and 11 adolescents matched for age. BAEPs were also recorded to investigate the integrity of the auditory pathway. RESULTS: For both age ranges, significant differences between the study and control groups were found for most of the tests applied; the study group performed statistically worse than the control group on all tests except the pediatric speech intelligibility test. Only one child presented an altered BAEP result. CONCLUSIONS: The results showed poorer performance of the study group (children and adolescents) on behavioural auditory processing tests, despite integrity of the auditory pathway at the brainstem level, as demonstrated by normal BAEP results.
Abstract:
Objectives: The effects of chronic musical auditory stimulation on the cardiovascular system have been investigated in the literature. However, data regarding the acute effects of different styles of music on cardiac autonomic regulation are lacking. The literature indicates that auditory stimulation with white noise above 50 dB induces cardiac responses. We aimed to evaluate the acute effects of classical baroque and heavy metal music of different intensities on cardiac autonomic regulation. Study design: The study was performed in 16 healthy men aged 18-25 years. All procedures were performed in the same soundproof room. We analyzed heart rate variability (HRV) in the time domain (standard deviation of normal-to-normal R-R intervals [SDNN], root mean square of successive differences [RMSSD], and percentage of adjacent NN intervals differing by more than 50 ms [pNN50]) and the frequency domain (low frequency [LF], high frequency [HF], and LF/HF ratio). HRV was recorded at rest for 10 minutes. Subsequently, the volunteers were exposed to one of the two musical styles (classical baroque or heavy metal) for five minutes through an earphone, followed by a five-minute period of rest, and then to the other style for another five minutes. The subjects were exposed to three equivalent sound levels (60-70 dB, 70-80 dB and 80-90 dB). The sequence of songs was randomized for each individual. Results: Auditory stimulation with heavy metal music did not influence HRV indices in the time and frequency domains in any of the three equivalent sound level ranges. The same was observed for classical baroque music at the three equivalent sound level ranges. Conclusion: Musical auditory stimulation of different intensities did not influence cardiac autonomic regulation in men.
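For reference, the time-domain HRV indices named above can be computed from a series of normal-to-normal R-R intervals as in the sketch below; these are the standard definitions rather than the authors' specific analysis pipeline, and the example R-R series is illustrative only.

    import numpy as np

    def time_domain_hrv(rr_ms):
        # Compute SDNN, RMSSD and pNN50 from normal-to-normal R-R intervals (ms).
        rr = np.asarray(rr_ms, dtype=float)
        diffs = np.diff(rr)
        sdnn = rr.std(ddof=1)                          # overall variability
        rmssd = np.sqrt(np.mean(diffs ** 2))           # short-term variability
        pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)  # % successive diffs > 50 ms
        return sdnn, rmssd, pnn50

    rr_example = [812, 790, 805, 798, 860, 842, 815]   # illustrative R-R series (ms)
    print(time_domain_hrv(rr_example))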
Abstract:
Objective: To characterize auditory steady-state response thresholds in term newborns and infants. Design: The study was cross-sectional, using auditory steady-state response assessment, and the real-ear-to-dial difference was measured in the external auditory canal. Study sample: The study group included 60 newborns and infants between 0 and 6 months of age. Results: A statistically significant difference was found in auditory steady-state response thresholds across carrier frequencies, but not across ages. Furthermore, there is an association between auditory steady-state response thresholds and the real-ear-to-dial difference. Conclusion: The same thresholds can be used as a normality reference for this age range, with distinct values for the different carrier frequencies. The influence of external auditory canal amplification should be taken into account.
Abstract:
Studies of cortical auditory evoked potentials using speech stimuli in normal-hearing individuals are important for understanding how stimulus complexity influences the characteristics of the cortical potentials generated. OBJECTIVE: To characterize the cortical auditory evoked potential and the P3 auditory cognitive potential elicited by vocalic and consonantal contrast stimuli in normal-hearing individuals. METHOD: 31 individuals with no risk for hearing, neurological, or language alterations, aged between 7 and 30 years, participated in this study. The cortical auditory evoked potentials and the P3 auditory cognitive potential were recorded at the Fz and Cz active channels using consonantal (/ba/-/da/) and vocalic (/i/-/a/) speech contrasts. DESIGN: A cross-sectional prospective cohort study. RESULTS: We found a statistically significant difference between the speech contrast used and the latencies of the N2 (p = 0.00) and P3 (p = 0.00) components, as well as between the active channel considered (Fz/Cz) and the P3 latency and amplitude values. These correlations did not occur for the exogenous components N1 and P2. CONCLUSION: The speech stimulus contrast, vocalic or consonantal, must be taken into account in the analysis of the cortical auditory evoked potential, the N2 component, and the auditory cognitive P3 potential.