80 results for Visual and auditory processing
at Université de Lausanne, Switzerland
Abstract:
Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. Whether the effect generalizes in this way, and whether it is equivalent when memory discrimination is performed in the visual vs. the auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. the visual modality. Fourth, there was no evidence of a correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.
Abstract:
Various lines of evidence accumulated over the past 30 years indicate that the cerebellum, long recognized as essential for motor control, also has considerable influence on perceptual processes. In this paper, we bring together experts from psychology and neuroscience, with the aim of providing a succinct but comprehensive overview of key findings related to the involvement of the cerebellum in sensory perception. The contributions cover such topics as anatomical and functional connectivity, evolutionary and comparative perspectives, visual and auditory processing, biological motion perception, nociception, self-motion, timing, predictive processing, and perceptual sequencing. While no single explanation has yet emerged concerning the role of the cerebellum in perceptual processes, this consensus paper summarizes the impressive empirical evidence on this problem and highlights diversities as well as commonalities between existing hypotheses. In addition to work with healthy individuals and patients with cerebellar disorders, it is also apparent that several neurological conditions in which perceptual disturbances occur, including autism and schizophrenia, are associated with cerebellar pathology. A better understanding of the involvement of the cerebellum in perceptual processes will thus likely be important for identifying and treating perceptual deficits that may at present go unnoticed and untreated. This paper provides a useful framework for further debate and empirical investigations into the influence of the cerebellum on sensory perception.
Abstract:
The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, it is often the case that biological motion is defined by input to more than one sensory modality. For this reason, here in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on perceptions of visually defined biological motion. Unlike data from previous studies investigating audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. 
Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion. Based on these data and evidence from neurophysiological and neuroimaging studies we discuss the neural mechanisms likely to underlie this effect.
Abstract:
Recent evidence suggests the human auditory system is organized, like the visual system, into a ventral 'what' pathway, devoted to identifying objects, and a dorsal 'where' pathway, devoted to the localization of objects in space [1]. Several brain regions have been identified in these two pathways, but until now little has been known about the temporal dynamics of these regions. We investigated this issue using 128-channel auditory evoked potentials (AEPs). Stimuli were stationary sounds created by varying interaural time differences, and real recorded environmental sounds. Stimuli of each condition (localization, recognition) were presented through earphones in a blocked design, while subjects determined their position or meaning, respectively. AEPs were analyzed in terms of their topographical scalp potential distributions (segmentation maps) and underlying neuronal generators (source estimation) [2]. Fourteen scalp potential distributions (maps) best explained the entire data set. Ten maps were nonspecific (associated with auditory stimulation in general), two were specific for sound localization, and two were specific for sound recognition (P-values ranging from 0.02 to 0.045). Condition-specific maps appeared at two distinct time periods: ~200 ms and ~375-550 ms post-stimulus. The brain sources associated with the maps specific for sound localization were mainly situated in the inferior frontal cortices, confirming previous findings [3]. The sources associated with sound recognition were predominantly located in the temporal cortices, with weaker activation in the frontal cortex. The data show that sound localization and sound recognition engage different brain networks that are apparent at two distinct time periods. References: 1. Maeder et al. Neuroimage 2001. 2. Michel et al. Brain Research Reviews 2001. 3. Ducommun et al. Neuroimage 2002.
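The topographic analysis described above rests on clustering scalp potential maps into a small set of template maps. A much-simplified, polarity-invariant k-means over ERP time-point topographies conveys the idea; this is a generic sketch, not the authors' actual segmentation pipeline, and all names and parameters are illustrative:

```python
import numpy as np

def fit_template_maps(erp, n_maps=4, n_iter=20):
    """Cluster the time-point topographies of an ERP (channels x time)
    into a few template maps, ignoring polarity (simplified sketch)."""
    erp = np.asarray(erp, float)
    # Normalise each time-point topography to unit norm.
    maps = erp / (np.linalg.norm(erp, axis=0, keepdims=True) + 1e-12)
    # Deterministic initialisation: templates spread evenly across the epoch.
    idx = np.linspace(0, maps.shape[1] - 1, n_maps).astype(int)
    templates = maps[:, idx].copy()
    labels = np.zeros(maps.shape[1], dtype=int)
    for _ in range(n_iter):
        # Polarity-invariant similarity: absolute spatial correlation.
        labels = np.abs(templates.T @ maps).argmax(axis=0)
        for k in range(n_maps):
            sel = maps[:, labels == k]
            if sel.size:
                # New template: first left singular vector of assigned maps.
                templates[:, k] = np.linalg.svd(sel, full_matrices=False)[0][:, 0]
    return templates, labels
```

The labels segment the epoch into periods dominated by one map each, which is how condition-specific maps and their time windows can be identified.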
Abstract:
Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the segregation of sound objects in noisy environments remains preserved. By systematically assessing patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use of auditory spatial cues can be disturbed; (2) dissociations between impaired explicit and preserved implicit use occur rather frequently; and (3) different types of sound localisation deficits can be associated with preserved implicit use. Conceptually, the dissociation between explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessments of auditory spatial functions in clinical settings, especially when adaptation to the auditory environment is at stake. Further, systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained following brain damage.
Abstract:
Whether the somatosensory system, like its visual and auditory counterparts, comprises parallel functional pathways for processing identity and spatial attributes (so-called what and where pathways, respectively) has hitherto been studied in humans using neuropsychological and hemodynamic methods. Here, electrical neuroimaging of somatosensory evoked potentials (SEPs) identified the spatio-temporal mechanisms subserving vibrotactile processing during two types of blocks of trials. What blocks varied stimuli in their frequency (22.5 Hz vs. 110 Hz) independently of their location (left vs. right hand); where blocks varied the same stimuli in their location independently of their frequency. This yielded a 2x2 within-subjects factorial design, counterbalancing the hand stimulated (left/right) and trial type (what/where). Responses to physically identical somatosensory stimuli differed within 200 ms post-stimulus onset, within the same timeframe we previously identified for audition (De Santis, L., Clarke, S., Murray, M.M., 2007. Automatic and intrinsic auditory "what" and "where" processing in humans revealed by electrical neuroimaging. Cereb Cortex 17, 9-17). Initially (100-147 ms), responses to each hand were stronger to the what than the where condition in a statistically indistinguishable network within the hemisphere contralateral to the stimulated hand, arguing against hemispheric specialization as the principal basis for somatosensory what and where pathways. Later (149-189 ms), responses differed topographically, indicative of the engagement of distinct configurations of brain networks. A common topography described responses to the where condition irrespective of the hand stimulated. By contrast, different topographies accounted for the what condition, and also as a function of the hand stimulated.
Parallel, functionally specialized pathways are observed across sensory systems and may be indicative of a computationally advantageous organization for processing spatial and identity information.
Abstract:
Neuroimaging studies analyzing neurophysiological signals are typically based on comparing averages of peri-stimulus epochs across experimental conditions. This approach can, however, be problematic for high-level cognitive tasks, where response variability across trials is expected to be high, and in cases where subjects cannot be considered part of a group. The main goal of this thesis has been to address this issue by developing a novel approach for analyzing electroencephalography (EEG) responses at the single-trial level. This approach takes advantage of the spatial distribution of the electric field on the scalp (topography) and exploits repetitions across trials to quantify the degree of discrimination between experimental conditions through a classification scheme. In the first part of this thesis, I developed and validated this new method (Tzovara et al., 2012a,b). Its general applicability was demonstrated with three separate datasets, two in the visual modality and one in the auditory modality. This development then opened two new lines of research, one in basic and one in clinical neuroscience, which represent the second and third parts of this thesis, respectively. In the second part of this thesis (Tzovara et al., 2012c), I employed the developed method to assess the timing of exploratory decision-making. Using single-trial topographic EEG activity during presentation of a choice's payoff, I could predict the subjects' subsequent decisions. This prediction was due to a topographic difference which appeared on average at ~516 ms after the presentation of the payoff and was subject-specific. These results exploit for the first time the temporal correlates of individual subjects' decisions and additionally show that the underlying neural generators start differentiating their responses already ~880 ms before the button press.
Finally, in the third part of this project, I focused on a clinical study with the goal of assessing the degree of intact neural function in comatose patients. Auditory EEG responses were assessed through a classical mismatch negativity paradigm during the very early phase of coma, which is currently under-investigated. By taking advantage of the decoding method developed in the first part of the thesis, I could quantify the degree of auditory discrimination at the single-patient level (Tzovara et al., in press). Our results showed for the first time that even patients who do not survive the coma can discriminate sounds at the neural level during the first hours after coma onset. Importantly, an improvement in auditory discrimination during the first 48 hours of coma was predictive of awakening and survival, with 100% positive predictive value.
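The single-trial classification scheme summarized above can be illustrated, in a deliberately minimal form, by leave-one-out decoding of the experimental condition from single-trial scalp topographies with a nearest-class-mean rule. This is a hypothetical stand-in for the thesis method (which models distributions of topographies), intended only to show the cross-validated discrimination logic:

```python
import numpy as np

def decode_single_trials(trials, labels):
    """Leave-one-out decoding of condition from single-trial topographies
    (trials x channels) with a nearest-class-mean rule (illustrative)."""
    trials = np.asarray(trials, float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    n = len(trials)
    correct = 0
    for i in range(n):
        keep = np.arange(n) != i
        # Class means computed without the held-out trial.
        means = np.stack([trials[keep & (labels == c)].mean(axis=0)
                          for c in classes])
        # Predict the class whose mean topography is nearest.
        pred = classes[np.linalg.norm(means - trials[i], axis=1).argmin()]
        correct += int(pred == labels[i])
    return correct / n
```

Decoding accuracy above chance, assessed per subject or per patient, is the kind of single-trial discrimination measure the thesis uses to quantify, e.g., auditory discrimination in individual comatose patients.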
Abstract:
We perceive our environment through multiple sensory channels. Nonetheless, research has traditionally focused on the investigation of sensory processing within single modalities. Investigating how our brain integrates multisensory information is thus of crucial importance for understanding how organisms cope with a constantly changing and dynamic environment. During my thesis, I investigated how multisensory events impact our perception and brain responses, both when auditory-visual stimuli are presented simultaneously and when multisensory events at one point in time affect later unisensory processing. In "Looming signals reveal synergistic principles of multisensory integration" (Cappe, Thelen et al., 2012), we investigated the neuronal substrates involved in motion detection in depth under multisensory vs. unisensory conditions. We have shown that congruent auditory-visual looming (i.e. approaching) signals are preferentially integrated by the brain. Further, we show that early effects under these conditions are relevant for behavior, effectively speeding up responses to these combined stimulus presentations. In "Electrical neuroimaging of memory discrimination based on single-trial multisensory learning" (Thelen et al., 2012), we investigated the behavioral impact of single encounters with meaningless auditory-visual object pairings upon subsequent visual object recognition. In addition to showing that these encounters lead to impaired recognition accuracy upon repeated visual presentations, we have shown that the brain discriminates images as soon as ~100 ms post-stimulus onset according to the initial encounter context. In "Single-trial multisensory memories affect later visual and auditory object recognition" (Thelen et al., in review), we addressed whether auditory object recognition is affected by single-trial multisensory memories, and whether recognition accuracy for sounds is affected by the initial encounter context in the same way as for visual objects.
We found that this is in fact the case. Based on these behavioral findings, we propose that a common underlying brain network is differentially involved during the encoding and retrieval of images and sounds.
Abstract:
We investigated respiratory responses during film clip viewing and their relation to the affective dimensions of valence and arousal. Seventy-six subjects participated in a study using a between-groups design. To begin with, all participants viewed an emotionally neutral film clip. Then, they were presented with one out of four emotional film clips: a positive high-arousal, a positive low-arousal, a negative high-arousal and a negative low-arousal clip. Respiration, skin conductance level, heart rate, corrugator activity and affective judgments were measured. Expiratory time was shorter and inspiratory duty cycle, mean expiratory flow and minute ventilation were larger during the high-arousal clips compared to the low-arousal clips. The pleasantness of the stimuli had no influence on any respiratory measure. These findings confirm the importance of arousal in respiratory responding but also evidence differences in comparison to previous studies using visual and auditory stimuli.
Abstract:
Accurate processing of the order between sensory events at the millisecond time scale is crucial for both sensori-motor and cognitive functions. Temporal order judgment (TOJ) is the ability to discriminate the order of presentation of several stimuli presented in rapid succession. The aim of the present thesis is to further investigate the spatio-temporal brain mechanisms supporting TOJ. In three studies we focus on the dependency of TOJ accuracy on the brain states preceding the presentation of TOJ stimuli, the neural correlates of accurate vs. inaccurate TOJ, and whether and how TOJ performance can be improved with training. In "Pre-stimulus beta oscillations within left posterior sylvian regions impact auditory temporal order judgment accuracy" (Bernasconi et al., 2011), we investigated whether the brain activity immediately preceding the presentation of the stimuli modulates TOJ performance. By contrasting the electrophysiological activity before stimulus presentation as a function of TOJ accuracy, we observed stronger pre-stimulus beta (20 Hz) oscillatory activity within the left posterior sylvian region (PSR) before accurate than inaccurate TOJ trials. In "Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment" (Bernasconi et al., 2010a) and "Plastic brain mechanisms for attaining auditory temporal order judgment proficiency" (Bernasconi et al., 2010b), we investigated the spatio-temporal brain dynamics underlying auditory TOJ. In both studies we observed a topographic modulation as a function of TOJ performance at ~40 ms after the onset of the first sound, indicating the engagement of distinct configurations of intracranial generators. Source estimations in the first study revealed bilateral PSR activity for both accurate and inaccurate TOJ trials. Moreover, activity within left, but not right, PSR correlated with TOJ performance. Source estimations in the second study revealed a training-induced left lateralization of the initially bilateral (i.e. PSR) brain response. Again, the activity within the left PSR correlated with TOJ performance. Based on these results, we suggest that a "temporal stamp" is established within left PSR for the first sound of the pair at early stages (i.e. ~40 ms) of cortical processing, but is critically modulated by inputs from right PSR (Bernasconi et al., 2010a, b). The "temporal stamp" on the first sound may be established via a sensory gating or prior-entry mechanism. Behavioral and brain responses to identical stimuli can vary due to attentional modulation, experimental and task parameters, or "internal noise". In a fourth experiment (Bernasconi et al., 2011b), we investigated where and when "neural noise" manifests during stimulus processing. Contrasting the AEPs to identical sounds perceived as high vs. low pitch, we found a topographic modulation at ca. 100 ms after sound onset. Source estimation revealed activity within regions compatible with pitch discrimination. Thus, we provide neurophysiological evidence for the variation in perception induced by "neural noise".
Abstract:
Résumé: Le développement rapide de nouvelles technologies comme l'imagerie médicale a permis l'expansion des études sur les fonctions cérébrales. Le rôle principal des études fonctionnelles cérébrales est de comparer l'activation neuronale entre différents individus. Dans ce contexte, la variabilité anatomique de la taille et de la forme du cerveau pose un problème majeur. Les méthodes actuelles permettent les comparaisons interindividuelles par la normalisation des cerveaux en utilisant un cerveau standard. Les cerveaux standards les plus utilisés actuellement sont le cerveau de Talairach et le cerveau de l'Institut Neurologique de Montréal (MNI) (SPM99). Les méthodes de recalage qui utilisent le cerveau de Talairach, ou celui de MNI, ne sont pas suffisamment précises pour superposer les parties plus variables d'un cortex cérébral (p.ex., le néocortex ou la zone perisylvienne), ainsi que les régions qui ont une asymétrie très importante entre les deux hémisphères. Le but de ce projet est d'évaluer une nouvelle technique de traitement d'images basée sur le recalage non-rigide et utilisant les repères anatomiques. Tout d'abord, nous devons identifier et extraire les structures anatomiques (les repères anatomiques) dans le cerveau à déformer et celui de référence. La correspondance entre ces deux jeux de repères nous permet de déterminer en 3D la déformation appropriée. Pour les repères anatomiques, nous utilisons six points de contrôle qui sont situés : un sur le gyrus de Heschl, un sur la zone motrice de la main et le dernier sur la fissure sylvienne, bilatéralement. Evaluation de notre programme de recalage est accomplie sur les images d'IRM et d'IRMf de neuf sujets parmi dix-huit qui ont participés dans une étude précédente de Maeder et al. Le résultat sur les images anatomiques, IRM, montre le déplacement des repères anatomiques du cerveau à déformer à la position des repères anatomiques de cerveau de référence. 
The distance from the brain to be deformed to the reference brain decreases after registration. Registration of the functional (fMRI) images shows no significant variation: the small number of landmarks (six control points) is not sufficient to produce changes in the statistical maps. This thesis opens the way to a new registration technique for the cerebral cortex, whose main direction is the registration of multiple points representing a cerebral sulcus. Abstract: The fast development of new technologies such as digital medical imaging has brought about an expansion of brain functional studies. One key methodological issue in such studies is comparing neuronal activation between individuals. In this context, the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by normalising subjects' brains to a standard brain. Widely used standard brains are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute standard brain (SPM99). However, more precise methods are lacking for superimposing the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) and brain regions that are highly asymmetric between the two cerebral hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image-processing technique based on non-linear, model-based registration. Contrary to intensity-based approaches, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and the target images, and from their correspondence we determine the appropriate deformation in 3D. As landmarks, we use six control points, situated bilaterally: one on Heschl's gyrus, one on the motor hand area, and one on the sylvian fissure.
The evaluation of this model-based approach is performed on the MRI and fMRI images of nine of the eighteen subjects who participated in the Maeder et al. study. Results on the anatomical (MRI) images show the movement of the deforming brain's control points to the locations of the reference brain's control points. The distance from the deforming brain to the reference brain is smaller after registration than before. Registration of the functional (fMRI) images does not show a significant variation: the small number of registration landmarks (six) is evidently not sufficient to produce significant modification of the fMRI statistical maps. This thesis opens the way to a new computational technique for cortex registration, whose main direction will be improvement of the registration algorithm by using not a single point as a landmark but many points representing a particular sulcus.
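The core of the landmark-based approach above is to derive a transform from corresponding control points and verify that the landmark distance shrinks after registration. A minimal sketch of that idea follows, reduced to a translation-only alignment of landmark centroids (the thesis uses a full non-rigid 3D deformation); the six control points are hypothetical coordinates, not values from the study.

```python
# Landmark-based registration, translation-only simplification.
# Coordinates are illustrative only (one point per site: Heschl's
# gyrus, motor hand area, sylvian fissure -- bilaterally).

def centroid(points):
    """Mean 3D position of a list of (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def register_translation(moving, target):
    """Shift `moving` landmarks so their centroid matches `target`'s."""
    cm, ct = centroid(moving), centroid(target)
    shift = tuple(ct[i] - cm[i] for i in range(3))
    moved = [tuple(p[i] + shift[i] for i in range(3)) for p in moving]
    return moved, shift

def mean_distance(a, b):
    """Mean Euclidean distance between corresponding landmarks."""
    return sum(
        sum((pa[i] - pb[i]) ** 2 for i in range(3)) ** 0.5
        for pa, pb in zip(a, b)
    ) / len(a)

# Six hypothetical control points in the reference brain ...
target = [(30, 10, 5), (-30, 10, 5), (25, -20, 40),
          (-25, -20, 40), (35, 0, 10), (-35, 0, 10)]
# ... and the same points in a "deforming" brain, offset uniformly.
moving = [(p[0] + 4, p[1] - 3, p[2] + 2) for p in target]

before = mean_distance(moving, target)
registered, shift = register_translation(moving, target)
after = mean_distance(registered, target)
print(before, after)  # the distance shrinks after registration
```

A real implementation would estimate an affine or non-rigid (e.g., thin-plate spline) transform from the six correspondences instead of a pure translation, but the evaluation logic, comparing landmark distances before and after registration, is the same.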
Resumo:
Brittle cornea syndrome (BCS) is an autosomal recessive disorder characterised by extreme corneal thinning and fragility. Corneal rupture can therefore occur either spontaneously or following minimal trauma in affected patients. Two genes, ZNF469 and PRDM5, have now been identified, in which causative pathogenic mutations collectively account for the condition in nearly all patients with BCS ascertained to date. Therefore, effective molecular diagnosis is now available for affected patients, and those at risk of being heterozygous carriers for BCS. We have previously identified mutations in ZNF469 in 14 families (in addition to 6 reported by others in the literature), and in PRDM5 in 8 families (with 1 further family now published by others). Clinical features include extreme corneal thinning with rupture, high myopia, blue sclerae, deafness of mixed aetiology with hypercompliant tympanic membranes, and variable skeletal manifestations. Corneal rupture may be the presenting feature of BCS, and it is possible that this may be incorrectly attributed to non-accidental injury. Mainstays of management include the prevention of ocular rupture by provision of protective polycarbonate spectacles, careful monitoring of visual and auditory function, and assessment for skeletal complications such as developmental dysplasia of the hip. Effective management depends upon appropriate identification of affected individuals, which may be challenging given the phenotypic overlap of BCS with other connective tissue disorders.