988 results for Auditory temporal processing


Relevance: 30.00%

Abstract:

Human electrophysiological studies support a model whereby sensitivity to so-called illusory contour stimuli is first seen within the lateral occipital complex. A challenge to this model posits that the lateral occipital complex is a general site for crude region-based segmentation, based on findings of equivalent hemodynamic activations in the lateral occipital complex to illusory contour and so-called salient region stimuli, a stimulus class that lacks the classic bounding contours of illusory contours. Using high-density electrical mapping of visual evoked potentials, we show that early lateral occipital cortex activity is substantially stronger to illusory contour than to salient region stimuli, whereas later lateral occipital complex activity is stronger to salient region than to illusory contour stimuli. Our results suggest that equivalent hemodynamic activity to illusory contour and salient region stimuli probably reflects temporally integrated responses, a result of the poor temporal resolution of hemodynamic imaging. The temporal precision of visual evoked potentials is critical for establishing viable models of completion processes and visual scene analysis. We propose that crude spatial segmentation analyses, which are insensitive to illusory contours, occur first within dorsal visual regions, not the lateral occipital complex, and that initial illusory contour sensitivity is a function of the lateral occipital complex.


Auditory evoked potentials are informative of intact cortical functions in comatose patients. The integrity of auditory functions, evaluated using mismatch negativity paradigms, has been associated with patients' chances of survival. However, because auditory discrimination is assessed at various delays after coma onset, it remains unclear whether this impairment depends on the time of the recording. We hypothesized that impairment in auditory discrimination capabilities is indicative of coma progression rather than of the comatose state itself, and that rudimentary auditory discrimination remains intact during the acute stages of coma. We studied 30 post-anoxic comatose patients resuscitated from cardiac arrest and five healthy, age-matched controls. Using a mismatch negativity paradigm, we performed two electroencephalography recordings with a standard 19-channel clinical montage: the first within 24 h of coma onset, under mild therapeutic hypothermia, and the second one day later, under normothermic conditions. We analysed the electroencephalography responses with a multivariate decoding algorithm that automatically quantifies neural discrimination at the single-patient level. Results showed high average decoding accuracy in discriminating sounds for both control subjects and comatose patients. Importantly, accurate decoding was largely independent of patients' chance of survival. However, the progression of auditory discrimination between the first and second recordings was informative of a patient's chance of survival: a deterioration of auditory discrimination was observed in all non-survivors, so the absence of deterioration predicted survival with a 100% positive predictive value. We show, for the first time, evidence of intact auditory processing even in comatose patients who do not survive, and that the progression of sound discrimination over time is informative of a patient's chance of survival.
Tracking auditory discrimination in comatose patients could provide new insight into the chance of awakening, in a quantitative and automatic fashion, during the early stages of coma.
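The 100% positive predictive value reported above follows from simple confusion-matrix arithmetic: if every non-survivor deteriorates, then a non-deteriorating patient is never a non-survivor, so the "predict survival when discrimination does not deteriorate" rule produces no false positives. A minimal sketch (the function name and the patient counts below are purely illustrative, not taken from the study):

```python
def positive_predictive_value(tp, fp):
    """PPV = TP / (TP + FP): the fraction of positive predictions
    (here, predicted survivors) that turn out to be correct."""
    return tp / (tp + fp)

# Hypothetical counts: predict "survivor" whenever auditory discrimination
# does NOT deteriorate between the two recordings. If all non-survivors
# deteriorated, no non-survivor is ever predicted to survive.
true_positives = 10   # non-deteriorating patients who did awake (illustrative)
false_positives = 0   # non-deteriorating patients who did not survive
print(positive_predictive_value(true_positives, false_positives))  # 1.0
```

Note that a 100% PPV constrains only the positive predictions; it says nothing about how many survivors are missed (sensitivity), which is why the abstract frames it as a one-directional predictor.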


Neuroimaging studies analyzing neurophysiological signals are typically based on comparing averages of peri-stimulus epochs across experimental conditions. This approach can, however, be problematic for high-level cognitive tasks, where response variability across trials is expected to be high, and in cases where subjects cannot be considered part of a group. The main goal of this thesis has been to address this issue by developing a novel approach for analyzing electroencephalography (EEG) responses at the single-trial level. This approach takes advantage of the spatial distribution of the electric field on the scalp (topography) and exploits repetitions across trials to quantify the degree of discrimination between experimental conditions through a classification scheme. In the first part of this thesis, I developed and validated this new method (Tzovara et al., 2012a,b). Its general applicability was demonstrated with three separate datasets, two in the visual modality and one in the auditory modality. This development then made it possible to pursue two new lines of research, one in basic and one in clinical neuroscience, which constitute the second and third parts of this thesis, respectively. In the second part of this thesis (Tzovara et al., 2012c), I employed the developed method to assess the timing of exploratory decision-making. Using single-trial topographic EEG activity during presentation of a choice's payoff, I could predict the subjects' subsequent decisions. This prediction was based on a topographic difference which appeared on average at ~516ms after presentation of the payoff and was subject-specific. These results exploit for the first time the temporal correlates of individual subjects' decisions and additionally show that the underlying neural generators start differentiating their responses as early as ~880ms before the button press.
Finally, in the third part of this project, I focused on a clinical study with the goal of assessing the degree of intact neural functions in comatose patients. Auditory EEG responses were assessed through a classical mismatch negativity paradigm during the very early phase of coma, which is currently under-investigated. By taking advantage of the decoding method developed in the first part of the thesis, I could quantify the degree of auditory discrimination at the single-patient level (Tzovara et al., in press). Our results showed for the first time that even patients who do not survive the coma can discriminate sounds at the neural level during the first hours after coma onset. Importantly, an improvement in auditory discrimination during the first 48 hours of coma was predictive of awakening and survival, with a 100% positive predictive value.
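The abstract does not fully specify the single-trial classification scheme, so the following is only a simplified stand-in for the actual method: it scores each held-out trial's scalp topography by its correlation with condition-mean topographies estimated from training trials, and returns a cross-validated decoding accuracy. All names and parameters here are illustrative.

```python
import numpy as np

def decode_single_trials(trials_a, trials_b, n_folds=5, rng=None):
    """Toy single-trial topographic decoder.
    trials_a, trials_b: (n_trials, n_channels) arrays, one row per trial.
    Each held-out trial is assigned to the condition whose mean topography
    (computed on training trials only) it correlates with most strongly."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = np.vstack([trials_a, trials_b])
    y = np.array([0] * len(trials_a) + [1] * len(trials_b))
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    correct = 0
    for test_idx in folds:
        train = np.ones(len(y), dtype=bool)
        train[test_idx] = False
        # Condition templates estimated from training trials only
        t0 = X[train & (y == 0)].mean(axis=0)
        t1 = X[train & (y == 1)].mean(axis=0)
        for i in test_idx:
            r0 = np.corrcoef(X[i], t0)[0, 1]
            r1 = np.corrcoef(X[i], t1)[0, 1]
            correct += int((r1 > r0) == (y[i] == 1))
    return correct / len(y)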

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Report for the scientific sojourn carried out at the University Medical Center, Swiss, from 2010 to 2012. Abundant evidence suggests that negative emotional stimuli are prioritized in the perceptual systems, eliciting enhanced neural responses in early sensory regions as compared with neutral information. This facilitated detection is generally paralleled by larger neural responses in early sensory areas, relative to the processing of neutral information. In this sense, the amygdala and other limbic regions, such as the orbitofrontal cortex, may play a critical role by sending modulatory projections onto the sensory cortices via direct or indirect feedback.The present project aimed at investigating two important issues regarding these mechanisms of emotional attention, by means of functional magnetic resonance imaging. In Study I, we examined the modulatory effects of visual emotion signals on the processing of task-irrelevant visual, auditory, and somatosensory input, that is, the intramodal and crossmodal effects of emotional attention. We observed that brain responses to auditory and tactile stimulation were enhanced during the processing of visual emotional stimuli, as compared to neutral, in bilateral primary auditory and somatosensory cortices, respectively. However, brain responses to visual task-irrelevant stimulation were diminished in left primary and secondary visual cortices in the same conditions. The results also suggested the existence of a multimodal network associated with emotional attention, presumably involving mediofrontal, temporal and orbitofrontal regions Finally, Study II examined the different brain responses along the low-level visual pathways and limbic regions, as a function of the number of retinal spikes during visual emotional processing. The experiment used stimuli resulting from an algorithm that simulates how the visual system perceives a visual input after a given number of retinal spikes. 
The results validated the visual model in human subjects and suggested differential emotional responses in the amygdala and visual regions as a function of spike-levels. A list of publications resulting from work in the host laboratory is included in the report.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Report for the scientific sojourn carried out at the University Medical Center, Swiss, from 2010 to 2012. Abundant evidence suggests that negative emotional stimuli are prioritized in the perceptual systems, eliciting enhanced neural responses in early sensory regions as compared with neutral information. This facilitated detection is generally paralleled by larger neural responses in early sensory areas, relative to the processing of neutral information. In this sense, the amygdala and other limbic regions, such as the orbitofrontal cortex, may play a critical role by sending modulatory projections onto the sensory cortices via direct or indirect feedback.The present project aimed at investigating two important issues regarding these mechanisms of emotional attention, by means of functional magnetic resonance imaging. In Study I, we examined the modulatory effects of visual emotion signals on the processing of task-irrelevant visual, auditory, and somatosensory input, that is, the intramodal and crossmodal effects of emotional attention. We observed that brain responses to auditory and tactile stimulation were enhanced during the processing of visual emotional stimuli, as compared to neutral, in bilateral primary auditory and somatosensory cortices, respectively. However, brain responses to visual task-irrelevant stimulation were diminished in left primary and secondary visual cortices in the same conditions. The results also suggested the existence of a multimodal network associated with emotional attention, presumably involving mediofrontal, temporal and orbitofrontal regions Finally, Study II examined the different brain responses along the low-level visual pathways and limbic regions, as a function of the number of retinal spikes during visual emotional processing. The experiment used stimuli resulting from an algorithm that simulates how the visual system perceives a visual input after a given number of retinal spikes. 
The results validated the visual model in human subjects and suggested differential emotional responses in the amygdala and visual regions as a function of spike-levels. A list of publications resulting from work in the host laboratory is included in the report.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Traditionally, the ventral occipito-temporal (vOT) area, but not the superior parietal lobules (SPLs), is thought as belonging to the neural system of visual word recognition. However, some dyslexic children who exhibit a visual attention span disorder - i.e. poor multi-element parallel processing - further show reduced SPLs activation when engaged in visual multi-element categorization tasks. We investigated whether these parietal regions further contribute to letter-identity processing within strings. Adult skilled readers and dyslexic participants with a visual attention span disorder were administered a letter-string comparison task under fMRI. Dyslexic adults were less accurate than skilled readers to detect letter identity substitutions within strings. In skilled readers, letter identity differs related to enhanced activation of the left vOT. However, specific neural responses were further found in the superior and inferior parietal regions, including the SPLs bilaterally. Two brain regions that are specifically related to substituted letter detection, the left SPL and the left vOT, were less activated in dyslexic participants. These findings suggest that the left SPL, like the left vOT, may contribute to letter string processing.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

SUMMARY The human auditory cortex, located on the supratemporal plane of the temporal lobe, is divided in a primary auditory area and several non-primary areas surrounding it. These different areas show anatomical and functional differences. Many studies have focussed on auditory areas in non-human primates, using investigation techniques such as electrophysiological recordings, tracing of neural connections, or immunohistochemical and histochemical staining. Some of these studies have suggested parallel and hierarchical organization of the cortical auditory areas as well as subcortical auditory relays. In humans, only few studies have investigated these regions immunohistochemically, but activation and lesion studies speak in favour of parallel and hierarchical organization, very similar to that of non-human primates. Calcium-binding proteins and metabolic markers were used to investigate possible correlates of hierarchical and parallel organization in man. Calcium-binding proteins, parvalbumin, calretinin and calbindin, modulate the concentration of intracellular free calcium ions and were found in distinct subpopulations of GABAergic neurons in non-human primates species. In our study, their distribution showed several differences between auditory areas: the primary auditory area was darkly stained for both parvalbumin and calbindin, and their expression rapidly decreased while moving away from the primary area. This staining pattern suggests a hierarchical organization of the areas, in which the more darkly stained areas could correspond to an earlier integration level and the areas showing light staining may correspond to higher level integration areas. Parallel organization of primary and non-primary auditory areas was suggested by the complementarity, within a given area, between parvalbumin and calbindin expression across layers. 
To investigate the possible differences in the energetic metabolism of the cortical auditory areas, several metabolic markers were used: cytochrome oxidase and LDH1 were used as oxidative metabolism markers and LDH5 was used as glycolytic metabolism marker. The results obtained show a difference in the expression of enzymes involved in oxidative metabolism between areas. In the primary auditory area the oxidative metabolism markers were maximally expressed in layer IV. In contrast, higher order areas showed maximal staining in supragranular layers. The expression of LDH5 varied in patches, but did not differ between the different hierarchical auditory areas. The distribution of the two LDH enzymes isoforms also provides information about cellular aspects of metabolic organization, since neurons expressed the LDH1 isoform whereas astrocytes express primarily LDH5, but some astrocytes also contained the LDH1 isoform. This cellular distribution pattern supports the hypothesis of the existence of an astrocyte-neuron lactate shuttle, previously suggested in rodent studies, and in particular of lactate transfer from astrocytes, which produce lactate from the glucose obtained from the circulation, to neurons that use lactate as energy substrate. In conclusion, the hypothesis of parallel and hierarchical organization of the auditory areas can be supported by CaBPs, cytochrome oxidase and LDH1 distribution. Moreover, the two LDHs cellular distribution pattern support the hypothesis of an astrocyte-neuron lactate shuttle in human cortex.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

ABSTRACT This thesis is composed of two main parts. The first addressed the question of whether the auditory and somatosensory systems, like their visual counterpart, comprise parallel functional pathways for processing identity and spatial attributes (so-called `what' and `where' pathways, respectively). The second part examined the independence of control processes mediating task switching across 'what' and `where' pathways in the auditory and visual modalities. Concerning the first part, electrical neuroimaging of event-related potentials identified the spatio-temporal mechanisms subserving auditory (see Appendix, Study n°1) and vibrotactile (see Appendix, Study n°2) processing during two types of blocks of trials. `What' blocks varied stimuli in their frequency independently of their location.. `Where' blocks varied the same stimuli in their location independently of their frequency. Concerning the second part (see Appendix, Study n°3), a psychophysical task-switching paradigm was used to investigate the hypothesis that the efficacy of control processes depends on the extent of overlap between the neural circuitry mediating the different tasks at hand, such that more effective task preparation (and by extension smaller switch costs) is achieved when the anatomical/functional overlap of this circuitry is small. Performance costs associated with switching tasks and/or switching sensory modalities were measured. Tasks required the analysis of either the identity or spatial location of environmental objects (`what' and `where' tasks, respectively) that were presented either visually or acoustically on any given trial. Pretrial cues informed participants of the upcoming task, but not of the sensory modality. - In the audio-visual domain, the results showed that switch costs between tasks were significantly smaller when the sensory modality of the task switched versus when it repeated. 
In addition, switch costs between the senses were correlated only when the sensory modality of the task repeated across trials and not when it switched. The collective evidence not only supports the independence of control processes mediating task switching and modality switching, but also the hypothesis that switch costs reflect competitive interterence between neural circuits that in turn can be diminished when these neural circuits are distinct. - In the auditory and somatosensory domains, the findings show that a segregation of location vs. recognition information is observed across sensory systems and that these happen around 100ms for both sensory modalities. - Also, our results show that functionally specialized pathways for audition and somatosensation involve largely overlapping brain regions, i.e. posterior superior and middle temporal cortices and inferior parietal areas. Both these properties (synchrony of differential processing and overlapping brain regions) probably optimize the relationships across sensory modalities. - Therefore, these results may be indicative of a computationally advantageous organization for processing spatial anal identity information.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The human primary auditory cortex (AI) is surrounded by several other auditory areas, which can be identified by cyto-, myelo- and chemoarchitectonic criteria. We report here on the pattern of calcium-binding protein immunoreactivity within these areas. The supratemporal regions of four normal human brains (eight hemispheres) were processed histologically, and serial sections were stained for parvalbumin, calretinin or calbindin. Each calcium-binding protein yielded a specific pattern of labelling, which differed between auditory areas. In AI, defined as area TC [see C. von Economo and L. Horn (1930) Z. Ges. Neurol. Psychiatr.,130, 678-757], parvalbumin labelling was dark in layer IV; several parvalbumin-positive multipolar neurons were distributed in layers III and IV. Calbindin yielded dark labelling in layers I-III and V; it revealed numerous multipolar and pyramidal neurons in layers II and III. Calretinin labelling was lighter than that of parvalbumin or calbindin in AI; calretinin-positive bipolar and bitufted neurons were present in supragranular layers. In non-primary auditory areas, the intensity of labelling tended to become progressively lighter while moving away from AI, with qualitative differences between the cytoarchitectonically defined areas. In analogy to non-human primates, our results suggest differences in intrinsic organization between auditory areas that are compatible with parallel and hierarchical processing of auditory information.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Repetition of environmental sounds, like their visual counterparts, can facilitate behavior and modulate neural responses, exemplifying plasticity in how auditory objects are represented or accessed. It remains controversial whether such repetition priming/suppression involves solely plasticity based on acoustic features and/or also access to semantic features. To evaluate contributions of physical and semantic features in eliciting repetition-induced plasticity, the present functional magnetic resonance imaging (fMRI) study repeated either identical or different exemplars of the initially presented object; reasoning that identical exemplars share both physical and semantic features, whereas different exemplars share only semantic features. Participants performed a living/man-made categorization task while being scanned at 3T. Repeated stimuli of both types significantly facilitated reaction times versus initial presentations, demonstrating perceptual and semantic repetition priming. There was also repetition suppression of fMRI activity within overlapping temporal, premotor, and prefrontal regions of the auditory "what" pathway. Importantly, the magnitude of suppression effects was equivalent for both physically identical and semantically related exemplars. That the degree of repetition suppression was irrespective of whether or not both perceptual and semantic information was repeated is suggestive of a degree of acoustically independent semantic analysis in how object representations are maintained and retrieved.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Evidence from human and non-human primate studies supports a dual-pathway model of audition, with partially segregated cortical networks for sound recognition and sound localisation, referred to as the What and Where processing streams. In normal subjects, these two networks overlap partially on the supra-temporal plane, suggesting that some early-stage auditory areas are involved in processing of either auditory feature alone or of both. Using high-resolution 7-T fMRI we have investigated the influence of positional information on sound object representations by comparing activation patterns to environmental sounds lateralised to the right or left ear. While unilaterally presented sounds induced bilateral activation, small clusters in specific non-primary auditory areas were significantly more activated by contra-laterally presented stimuli. Comparison of these data with histologically identified non-primary auditory areas suggests that the coding of sound objects within early-stage auditory areas lateral and posterior to primary auditory cortex AI is modulated by the position of the sound, while that within anterior areas is not.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Spatial patterns of coherent activity across different brain areas have been identified during the resting-state fluctuations of the brain. However, recent studies indicate that resting-state activity is not stationary, but shows complex temporal dynamics. We were interested in the spatiotemporal dynamics of the phase interactions among resting-state fMRI BOLD signals from human subjects. We found that the global phase synchrony of the BOLD signals evolves on a characteristic ultra-slow (<0.01Hz) time scale, and that its temporal variations reflect the transient formation and dissolution of multiple communities of synchronized brain regions. Synchronized communities reoccurred intermittently in time and across scanning sessions. We found that the synchronization communities relate to previously defined functional networks known to be engaged in sensory-motor or cognitive function, called resting-state networks (RSNs), including the default mode network, the somato-motor network, the visual network, the auditory network, the cognitive control networks, the self-referential network, and combinations of these and other RSNs. We studied the mechanism originating the observed spatiotemporal synchronization dynamics by using a network model of phase oscillators connected through the brain's anatomical connectivity estimated using diffusion imaging human data. The model consistently approximates the temporal and spatial synchronization patterns of the empirical data, and reveals that multiple clusters that transiently synchronize and desynchronize emerge from the complex topology of anatomical connections, provided that oscillators are heterogeneous.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

We investigated respiratory responses during film clip viewing and their relation to the affective dimensions of valence and arousal. Seventy-six subjects participated in a study using a between groups design. To begin with, all participants viewed an emotionally neutral film clip. Then, they were presented with one out of four emotional film clips: a positive high-arousal, a positive low-arousal, a negative high-arousal and a negative low-arousal clip. Respiration, skin conductance level, heart rate, corrugator activity and affective judgments were measured. Expiratory time was shorter and inspiratory duty cycle, mean expiratory flow and minute ventilation were larger during the high-arousal clips compared to the low-arousal clips. The pleasantness of the stimuli had no influence on any respiratory measure. These findings confirm the importance of arousal in respiratory responding but also evidence differences in comparison to previous studies using visual and auditory stimuli. [Authors]

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Discriminating complex sounds relies on multiple stages of differential brain activity. The specific roles of these stages and their links to perception were the focus of the present study. We presented 250ms duration sounds of living and man-made objects while recording 160-channel electroencephalography (EEG). Subjects categorized each sound as that of a living, man-made or unknown item. We tested whether/when the brain discriminates between sound categories even when not transpiring behaviorally. We applied a single-trial classifier that identified voltage topographies and latencies at which brain responses are most discriminative. For sounds that the subjects could not categorize, we could successfully decode the semantic category based on differences in voltage topographies during the 116-174ms post-stimulus period. Sounds that were correctly categorized as that of a living or man-made item by the same subjects exhibited two periods of differences in voltage topographies at the single-trial level. Subjects exhibited differential activity before the sound ended (starting at 112ms) and on a separate period at ~270ms post-stimulus onset. Because each of these periods could be used to reliably decode semantic categories, we interpreted the first as being related to an implicit tuning for sound representations and the second as being linked to perceptual decision-making processes. Collectively, our results show that the brain discriminates environmental sounds during early stages and independently of behavioral proficiency and that explicit sound categorization requires a subsequent processing stage.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Since the early days of functional magnetic resonance imaging (fMRI), retinotopic mapping emerged as a powerful and widely-accepted tool, allowing the identification of individual visual cortical fields and furthering the study of visual processing. In contrast, tonotopic mapping in auditory cortex proved more challenging primarily because of the smaller size of auditory cortical fields. The spatial resolution capabilities of fMRI have since advanced, and recent reports from our labs and several others demonstrate the reliability of tonotopic mapping in human auditory cortex. Here we review the wide range of stimulus procedures and analysis methods that have been used to successfully map tonotopy in human auditory cortex. We point out that recent studies provide a remarkably consistent view of human tonotopic organisation, although the interpretation of the maps continues to vary. In particular, there remains controversy over the exact orientation of the primary gradients with respect to Heschl's gyrus, which leads to different predictions about the location of human A1, R, and surrounding fields. We discuss the development of this debate and argue that literature is converging towards an interpretation that core fields A1 and R fold across the rostral and caudal banks of Heschl's gyrus, with tonotopic gradients laid out in a distinctive V-shaped manner. This suggests an organisation that is largely homologous with non-human primates. This article is part of a Special Issue entitled Human Auditory Neuroimaging.