988 results for Auditory temporal processing
Abstract:
The extraction of information about neural activity timing from BOLD signal is a challenging task as the shape of the BOLD curve does not directly reflect the temporal characteristics of electrical activity of neurons. In this work, we introduce the concept of neural processing time (NPT) as a parameter of the biophysical model of the hemodynamic response function (HRF). Through this new concept we aim to infer more accurately the duration of neuronal response from the highly nonlinear BOLD effect. The face validity and applicability of the concept of NPT are evaluated through simulations and analysis of experimental time series. The results of both simulation and application were compared with summary measures of HRF shape. The experiment that was analyzed consisted of a decision-making paradigm with simultaneous emotional distracters. We hypothesize that the NPT in primary sensory areas, like the fusiform gyrus, is approximately the stimulus presentation duration. On the other hand, in areas related to processing of an emotional distracter, the NPT should depend on the experimental condition. As predicted, the NPT in fusiform gyrus is close to the stimulus duration and the NPT in dorsal anterior cingulate gyrus depends on the presence of an emotional distracter. Interestingly, the NPT in right but not left dorsal lateral prefrontal cortex depends on the stimulus emotional content. The summary measures of HRF obtained by a standard approach did not detect the variations observed in the NPT. Hum Brain Mapp, 2012. (C) 2010 Wiley Periodicals, Inc.
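The abstract notes that the BOLD curve does not directly reflect neural timing. As a rough illustration of why (a minimal linear-convolution sketch with a canonical double-gamma HRF, not the paper's nonlinear biophysical model), tripling the assumed neural duration shifts the BOLD peak only slightly:

```python
import math
import numpy as np

def gamma_pdf(t, a):
    """Gamma probability density with shape a and unit scale."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = t[pos] ** (a - 1) * np.exp(-t[pos]) / math.gamma(a)
    return out

def canonical_hrf(t, a1=6.0, a2=16.0, ratio=1 / 6):
    """Double-gamma canonical HRF (SPM-like shape, unit dispersion)."""
    return gamma_pdf(t, a1) - ratio * gamma_pdf(t, a2)

def bold_response(duration_s, dt=0.1, t_max=32.0):
    """Convolve a boxcar of the given neural duration with the HRF."""
    t = np.arange(0.0, t_max, dt)
    neural = (t < duration_s).astype(float)          # boxcar "neural activity"
    bold = np.convolve(neural, canonical_hrf(t))[: t.size] * dt
    return t, bold
```

Under this hypothetical linear model, `bold_response(3.0)` peaks only about a second later than `bold_response(1.0)` even though the neural durations differ by two seconds, illustrating why duration must be inferred by fitting a model rather than read off the curve's shape.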
Abstract:
The caudomedial nidopallium (NCM) is a telencephalic area involved in auditory processing and memorization in songbirds, but the synaptic mechanisms associated with auditory processing in NCM are largely unknown. To identify potential changes in synaptic transmission induced by auditory stimulation in NCM, we used a slice preparation for patch-clamp recordings of synaptic currents in the NCM of adult zebra finches (Taeniopygia guttata) sacrificed after sound isolation followed by exposure to conspecific song or silence. Although post-synaptic GABAergic and glutamatergic currents in the NCM of control and song-exposed birds did not differ in frequency, amplitude or duration after song exposure, we observed a higher probability of generation of bursting glutamatergic currents after blockade of GABAergic transmission in song-exposed birds as compared to controls. Both song-exposed males and females presented an increase in the probability of expression of bursting glutamatergic currents; however, bursting was more common in males, in which it appeared even without blockade of GABAergic transmission. Our data show that song exposure changes the excitability of the glutamatergic neuronal network, increasing the probability of the generation of bursts of glutamatergic currents, but does not affect basic parameters of glutamatergic and GABAergic synaptic currents.
Abstract:
There is evidence that the explicit lexical-semantic processing deficits which characterize aphasia may be observed in the absence of implicit semantic impairment. The aim of this article was to critically review the international literature on lexical-semantic processing in aphasia, as tested through the semantic priming paradigm. Specifically, this review focused on aphasia and lexical-semantic processing, the methodological strengths and weaknesses of the semantic priming paradigms used, and recent evidence from neuroimaging studies on lexical-semantic processing. Furthermore, evidence on dissociations between implicit and explicit lexical-semantic processing reported in the literature is discussed and interpreted with reference to functional neuroimaging evidence from healthy populations. There is evidence that semantic priming effects can be found in both fluent and non-fluent aphasias, and that these effects are related to an extensive network which includes the temporal lobe, the prefrontal cortex, the left frontal gyrus, the left temporal gyrus and the cingulate cortex.
Abstract:
The aim of this study was to describe the speech-language pathology aspects of auditory processing, reading and writing in a male patient diagnosed with Silver-Russell syndrome. At two months of age the patient presented with weight-height deficit; broad forehead; small, prominent, low-set ears; high-arched palate; mild micrognathia; bluish sclerae; café-au-lait spots; overlapping of the first and second toes on the right; gastroesophageal reflux; high-pitched voice and cry; mild delay in neuropsychomotor development; and difficulty gaining weight, receiving the diagnosis of the syndrome. In the psychological assessment, performed at 8 years of age, the patient showed a normal intellectual level, with cognitive difficulties involving sustained attention, concentration, immediate verbal memory, and emotional and behavioral processes. To assess reading and writing and their underlying processes at 9 years of age, the following instruments were used: the Reading Comprehension of Expository Texts test, the Phonological Skills Profile, the Auditory Discrimination Test, spontaneous writing, the School Performance Test (TDE), the Rapid Automatized Naming test, and a phonological working memory task. He showed difficulties on all tests, with scores below those expected for his age. In the auditory processing assessment, monotic, diotic and dichotic tests were administered. Alterations were found in sustained and selective auditory attention, sequential memory for verbal and non-verbal sounds, and temporal resolution. It is concluded that the patient presents alterations in the learning of reading and writing that may be secondary to Silver-Russell syndrome, although such difficulties may also result from the alterations in auditory processing skills.
Abstract:
The effect produced by a warning stimulus (WS) in reaction time (RT) tasks is commonly attributed to a facilitation of sensorimotor mechanisms by alertness. Recently, evidence was presented that this effect is also related to a proactive inhibition of motor control mechanisms. This inhibition would hinder responding to the WS instead of to the target stimulus (TS). Some studies have shown that auditory WS produce a stronger facilitatory effect than visual WS. The present study investigated whether the former also produce a stronger inhibitory effect than the latter. In a first session, the RTs to a visual target were evaluated in two groups of volunteers. In a second session, subjects reacted to the visual target both with (50% of the trials) and without (50% of the trials) a WS. On trials in which a WS was presented, one group received a visual WS and the other group an auditory WS. In the first session, the mean RTs of the two groups did not differ significantly. In the second session, the mean RT of both groups was shorter in the presence of the WS than in its absence. The mean RT in the absence of the auditory WS was significantly longer than the mean RT in the absence of the visual WS, whereas mean RTs did not differ significantly between the visual and auditory WS-present conditions. The longer RTs of the auditory WS group as opposed to the visual WS group in the WS-absent trials suggest that auditory WS exert a stronger inhibitory influence on responsivity than visual WS.
Abstract:
The human brain is equipped with a flexible audio-visual system, which interprets and guides responses to external events according to the spatial alignment, temporal synchronization and effectiveness of unimodal signals. The aim of the present thesis was to explore the possibility that such a system might represent the neural correlate of sensory compensation after damage to one sensory pathway. To this purpose, three experimental studies were conducted, which addressed the immediate, short-term and long-term effects of audio-visual integration in patients with Visual Field Defect (VFD). Experiment 1 investigated whether the integration of stimuli from different modalities (cross-modal) and from the same modality (within-modal) has a different, immediate effect on localization behaviour. Patients had to localize modality-specific stimuli (visual or auditory), cross-modal stimulus pairs (visual-auditory) and within-modal stimulus pairs (visual-visual). Results showed that cross-modal stimuli evoked a greater improvement than within-modal stimuli, consistent with a Bayesian explanation. Moreover, even when visual processing was impaired, cross-modal stimuli improved performance in an optimal fashion. These findings support the hypothesis that the improvement derived from multisensory integration is not attributable to simple target redundancy, and show that optimal integration of cross-modal signals occurs at processing stages that are not consciously accessible. Experiment 2 examined the possibility of inducing a short-term improvement of localization performance without explicit knowledge of the visual stimulus. Patients with VFD and patients with neglect had to localize weak sounds before and after a brief exposure to passive cross-modal stimulation, which comprised spatially disparate or spatially coincident audio-visual stimuli.
After exposure to spatially disparate stimuli in the affected field, only patients with neglect exhibited a shift of auditory localization toward the visual attractor (the so-called Ventriloquism After-Effect). In contrast, after adaptation to spatially coincident stimuli, both neglect and hemianopic patients exhibited a significant improvement of auditory localization, demonstrating the occurrence of an after-effect of multisensory enhancement. These results suggest the presence of two distinct recalibration mechanisms, each mediated by a different neural route: a geniculo-striate circuit and a colliculus-extrastriate circuit, respectively. Finally, Experiment 3 verified whether systematic audio-visual stimulation could exert a long-lasting effect on patients' oculomotor behaviour. Eye movement responses during a visual search task and a reading task were studied before and after visual (control) or audio-visual (experimental) training in a group of twelve patients with VFD and twelve control subjects. Results showed that prior to treatment, patients' performance differed significantly from that of controls with respect to fixation and saccade parameters; after audio-visual training, all patients showed an improvement in ocular exploration characterized by fewer fixations and refixations, quicker and larger saccades, and reduced scanpath length. Similarly, reading parameters were significantly affected by the training, with respect to the specific impairments observed in left- and right-hemisphere-damaged patients. The present findings provide evidence that systematic audio-visual stimulation may encourage a more organized pattern of visual exploration with long-lasting effects. In conclusion, results from these studies clearly demonstrate that the beneficial effects of audio-visual integration can be retained in the absence of explicit processing of the visual stimulus.
Surprisingly, an improvement of spatial orienting can be obtained not only when an on-line response is required, but also after either a brief or a long adaptation to audio-visual stimulus pairs, suggesting the maintenance of mechanisms subserving cross-modal perceptual learning after damage to the geniculo-striate pathway. The colliculus-extrastriate pathway, which is spared in patients with VFD, seems to play a pivotal role in this sensory compensation.
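The "optimal fashion" claim in Experiment 1 refers to reliability-weighted (maximum-likelihood) cue combination. A minimal sketch with illustrative numbers (not data from the thesis): when vision is degraded (high variance), the fused estimate leans on audition, and its variance still falls below that of either cue alone.

```python
def fuse(mu_v, var_v, mu_a, var_a):
    """Flat-prior maximum-likelihood fusion of two noisy Gaussian cues."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)  # reliability weight of vision
    mu = w_v * mu_v + (1.0 - w_v) * mu_a               # fused location estimate
    var = (var_v * var_a) / (var_v + var_a)            # always <= min(var_v, var_a)
    return mu, var

# Degraded vision (variance 9) vs. reliable audition (variance 1):
mu, var = fuse(mu_v=10.0, var_v=9.0, mu_a=12.0, var_a=1.0)
# mu = 11.8 (pulled toward the auditory cue), var = 0.9 (< 1.0)
```

This "better than the best single cue" variance reduction is the signature that distinguishes optimal integration from simple target redundancy.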
Abstract:
Numerous studies show that temporal intervals are represented through a spatial code extending from left to right, with short intervals represented to the left of long ones. Moreover, this spatial arrangement of time can be influenced by the manipulation of spatial attention. This thesis contributes to the current debate on the relationship between the spatial representation of time and spatial attention through the use of a technique that modulates spatial attention, namely Prismatic Adaptation (PA). The first part is devoted to the mechanisms underlying this relationship. We showed that shifting spatial attention with PA toward one side of space distorts the representation of temporal intervals in accordance with the side of the attentional shift. This occurs with both visual and auditory stimuli, even though the auditory modality is not directly involved in the visuo-motor PA procedure. This result suggested that the spatial code used to represent time is a central mechanism influenced at high levels of spatial cognition. The thesis continues with an investigation of the cortical areas mediating the space-time interaction, using neuropsychological, neurophysiological and neuroimaging methods. In particular, we showed that areas in the right hemisphere are crucial for time processing, whereas areas in the left hemisphere are crucial for the PA procedure and for PA to affect temporal intervals. Finally, the thesis addresses disorders of the spatial representation of time. The results indicate that a spatial-attention deficit after right-hemisphere damage produces a deficit in the spatial representation of time, which negatively affects patients' daily lives.
Particularly interesting are the results obtained with PA. A PA treatment that is effective in reducing the spatial-attention deficit also reduces the deficit in the spatial representation of time, improving patients' quality of life.
Abstract:
Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise be undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, therefore allowing the early recruitment of flight-or-fight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike with visual detection, sound cannot crossmodally enhance visual orientation discrimination in the absence of functional striate cortex. 
This finding demonstrates, on the one hand, that the striate cortex plays a causative role in crossmodally enhancing visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes leading to the improvement of simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.
Abstract:
This thesis reports on the experimental realization, characterization and application of a novel microresonator design. The so-called “bottle microresonator” sustains whispering-gallery modes in which light fields are confined near the surface of the micron-sized silica structure by continuous total internal reflection. While whispering-gallery-mode resonators in general exhibit outstanding properties in terms of both temporal and spatial confinement of light fields, their monolithic design makes tuning of their resonance frequency difficult. This impedes their use, e.g., in cavity quantum electrodynamics (CQED) experiments, which investigate the interaction of single quantum mechanical emitters of predetermined resonance frequency with a cavity mode. In contrast, the highly prolate shape of the bottle microresonator gives rise to a customizable mode structure, enabling full tunability. The thesis is organized as follows: In chapter I, I give a brief overview of different types of optical microresonators. Important quantities, such as the quality factor Q and the mode volume V, which characterize the temporal and spatial confinement of the light field, are introduced. In chapter II, a wave-equation calculation of the modes of a bottle microresonator is presented. The intensity distribution of different bottle modes is derived and their mode volume is calculated. A brief description of light propagation in ultra-thin optical fibers, which are used to couple light into and out of bottle modes, is given as well. The chapter concludes with a presentation of the fabrication techniques for both structures. Chapter III presents experimental results on highly efficient, nearly lossless coupling of light into bottle modes, as well as their spatial and spectral characterization. Ultra-high intrinsic quality factors exceeding 360 million as well as full tunability are demonstrated.
In chapter IV, the bottle microresonator in add-drop configuration, i.e., with two ultra-thin fibers coupled to one bottle mode, is discussed. The highly efficient, nearly lossless coupling characteristics of each fiber, combined with the resonator's high intrinsic quality factor, enable resonant power transfer between the two fibers with efficiencies exceeding 90%. Moreover, the favorable ratio of absorption to the nonlinear refractive index of silica yields optical Kerr bistability at record low powers on the order of 50 µW. Combined with the add-drop configuration, this allows one to route optical signals between the outputs of both ultra-thin fibers simply by varying the input power, thereby enabling applications in all-optical signal processing. Finally, in chapter V, I discuss the potential of the bottle microresonator for CQED experiments with single atoms. Its Q/V ratio, which determines the ratio of the atom-cavity coupling rate to the dissipative rates of the subsystems, aligns with the values obtained for state-of-the-art CQED microresonators. In combination with its full tunability and the possibility of highly efficient light transfer to and from the bottle mode, this makes the bottle microresonator a unique tool for quantum optics applications.
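The >90% add-drop transfer quoted above follows the standard coupled-mode-theory expression for a resonator coupled to two waveguides (a generic textbook model, not equations taken from this thesis): the drop-port power fraction is 4·k1·k2 / ((k0 + k1 + k2)² + Δ²), so symmetric coupling much stronger than the intrinsic loss rate approaches unit transfer on resonance.

```python
def drop_efficiency(delta, k0, k1, k2):
    """Drop-port power transfer of an add-drop resonator (coupled-mode theory).

    delta : detuning from resonance (same units as the decay rates)
    k0    : intrinsic (absorption/scattering) field decay rate
    k1,k2 : field coupling rates to the bus and drop fibers
    """
    k_tot = k0 + k1 + k2
    return 4.0 * k1 * k2 / (k_tot ** 2 + delta ** 2)

# Strong, symmetric coupling (k1 = k2 >> k0) approaches unit transfer on
# resonance, consistent with the >90% efficiencies quoted above.
eff = drop_efficiency(delta=0.0, k0=1.0, k1=50.0, k2=50.0)
```

With these illustrative rates the on-resonance efficiency is about 0.98, and it falls off as a Lorentzian with detuning.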
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that can only work in the context of High Performance Computing with massive amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook for solving face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm: a new, mainly unsupervised method that is more biologically inspired. It tries to draw insights from the computational neuroscience community in order to incorporate into the learning process concepts such as time, context and attention, which are typical of the human brain. Ultimately, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
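As a minimal illustration of the "automatic feature extraction" that CNNs perform (a generic sketch, not code from the dissertation), the core operation is a learned 2-D convolution; here a hand-written edge kernel stands in for a learned filter:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' 2-D convolution (cross-correlation convention,
    as in most deep-learning frameworks): slide the kernel over the image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds only where intensity changes left-to-right.
img = np.zeros((5, 5)); img[:, 3:] = 1.0            # dark | bright image
edge = conv2d_valid(img, np.array([[-1.0, 1.0]]))   # 1x2 edge-detector kernel
```

In a trained CNN such kernels are not hand-designed but learned from data, which is exactly the feature-extraction automation the abstract refers to.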
Abstract:
Training can change the functional and structural organization of the brain, and animal models demonstrate that the hippocampal formation is particularly susceptible to training-related neuroplasticity. In humans, however, direct evidence for functional plasticity of the adult hippocampus induced by training is still missing. Here, we used musicians' brains as a model to test for plastic capabilities of the adult human hippocampus. Using functional magnetic resonance imaging optimized for the investigation of auditory processing, we examined brain responses induced by temporal novelty in otherwise isochronous sound patterns in musicians and musical laypersons, since the hippocampus has previously been suggested to be crucially involved in various forms of novelty detection. In the first, cross-sectional experiment, we identified enhanced neural responses to temporal novelty in the anterior left hippocampus of professional musicians, pointing to expertise-related differences in hippocampal processing. In the second experiment, we evaluated neural responses to acoustic temporal novelty in a longitudinal approach to disentangle training-related changes from predispositional factors. For this purpose, we examined an independent sample of music academy students before and after two semesters of intensive aural skills training. After this training period, hippocampal responses to temporal novelty in sounds were enhanced in the music students, and statistical interaction analysis of brain activity changes over time suggests training rather than predisposition effects. Thus, our results provide direct evidence for functional changes of the adult human hippocampus related to musical training.
Abstract:
We used fMRI to investigate the neuronal correlates of encoding and recognizing heard and imagined melodies. Ten participants were shown lyrics of familiar verbal tunes; they either heard the tune along with the lyrics, or they had to imagine it. In a subsequent surprise recognition test, they had to identify the titles of tunes that they had heard or imagined earlier. The functional data showed substantial overlap during melody perception and imagery, including secondary auditory areas. During imagery compared with perception, an extended network including pFC, SMA, intraparietal sulcus, and cerebellum showed increased activity, in line with the increased processing demands of imagery. Functional connectivity of anterior right temporal cortex with frontal areas was increased during imagery compared with perception, indicating that these areas form an imagery-related network. Activity in right superior temporal gyrus and pFC was correlated with the subjective rating of imagery vividness. Similar to the encoding phase, the recognition task recruited overlapping areas, including inferior frontal cortex associated with memory retrieval, as well as left middle temporal gyrus. The results present new evidence for the cortical network underlying goal-directed auditory imagery, with a prominent role of the right pFC both for the subjective impression of imagery vividness and for on-line mental monitoring of imagery-related activity in auditory areas.
When that tune runs through your head: a PET investigation of auditory imagery for familiar melodies
Abstract:
The present study used positron emission tomography (PET) to examine the cerebral activity pattern associated with auditory imagery for familiar tunes. Subjects either imagined the continuation of nonverbal tunes cued by their first few notes, listened to a short sequence of notes as a control task, or listened and then reimagined that short sequence. Subtraction of the activation in the control task from that in the real-tune imagery task revealed primarily right-sided activation in frontal and superior temporal regions, plus supplementary motor area (SMA). Isolating retrieval of the real tunes by subtracting activation in the reimagine task from that in the real-tune imagery task revealed activation primarily in right frontal areas and right superior temporal gyrus. Subtraction of activation in the control condition from that in the reimagine condition, intended to capture imagery of unfamiliar sequences, revealed activation in SMA, plus some left frontal regions. We conclude that areas of right auditory association cortex, together with right and left frontal cortices, are implicated in imagery for familiar tunes, in accord with previous behavioral, lesion and PET data. Retrieval from musical semantic memory is mediated by structures in the right frontal lobe, in contrast to results from previous studies implicating left frontal areas for all semantic retrieval. The SMA seems to be involved specifically in image generation, implicating a motor code in this process.
Abstract:
Four experiments examined how people operate on memory representations of familiar songs. The tasks were similar to those used in studies of visual imagery. In one task, subjects saw a one-word lyric from a song and then saw a second lyric; they then had to say whether the second lyric was from the same song as the first. In a second task, subjects mentally compared the pitches of notes corresponding to song lyrics. In both tasks, reaction time increased as a function of the distance in beats between the two lyrics in the actual song, and in some conditions reaction time increased with the starting beat of the earlier lyric. Imagery instructions modified the main results somewhat in the first task, but not in the second, much harder task. The results suggest that song representations have temporal-like characteristics.