992 results for auditory attention detection
Abstract:
Attention, attentional blink, rapid serial visual presentation, RSVP, ERP, EEG, fMRI, gamma-band, oscillatory activity
Abstract:
Auditory spatial functions are of crucial importance in everyday life. Determining the origin of sound sources in space plays a key role in a variety of tasks, including orienting attention and disentangling the complex acoustic patterns that reach our ears in noisy environments. Following brain damage, auditory spatial processing can be disrupted, resulting in severe handicaps. Complaints of patients with sound localization deficits include the inability to locate their crying child or being overloaded by sounds in crowded public places. Yet the brain has a large capacity for reorganization following damage and/or learning. This phenomenon, referred to as plasticity, is believed to underlie both post-lesional functional recovery and learning-induced improvement. The aim of this thesis was to investigate the organization and plasticity of different aspects of auditory spatial functions. Overall, we report the outcomes of three studies. In the study entitled "Learning-induced plasticity in auditory spatial representations" (Spierer et al., 2007b), we focused on the neurophysiological and behavioral changes induced by auditory spatial training in healthy subjects. We found that relatively brief auditory spatial discrimination training improves performance and modifies the cortical representation of the trained sound locations, suggesting that cortical auditory representations of space are dynamic and subject to rapid reorganization. In the same study, we tested the generalization and persistence of training effects over time, as these are two determining factors in the development of neurorehabilitative interventions. In "The path to success in auditory spatial discrimination" (Spierer et al., 2007c), we investigated the neurophysiological correlates of successful spatial discrimination and contributed to the modeling of the anatomo-functional organization of auditory spatial processing in healthy subjects.
We showed that discrimination accuracy depends on superior temporal plane (STP) activity in response to the first sound of a pair of stimuli. Our data support a model wherein refinement of spatial representations occurs within the STP, and interactions with parietal structures allow transformations into the coordinate frames required for higher-order computations, including absolute localization of sound sources. In "Extinction of auditory stimuli in hemineglect: space versus ear" (Spierer et al., 2007a), we investigated auditory attentional deficits in brain-damaged patients. This work provides insight into the auditory neglect syndrome and its relation to neglect symptoms in the visual modality. Apart from contributing to a basic understanding of the cortical mechanisms underlying auditory spatial functions, the outcomes of these studies also contribute to the development of neurorehabilitation strategies, which are currently being tested in clinical populations.
Abstract:
General background: Multisensory stimuli are easier to recognize, can improve learning, and are processed faster than unisensory ones. As such, the ability of an organism to extract and synthesize relevant sensory inputs across multiple sensory modalities shapes its perception of, and interaction with, the environment. A major question in the field is how the brain extracts and fuses relevant information to create a unified perceptual representation (and also how it segregates unrelated information). This fusion between the senses has been termed "multisensory integration", a notion that derives from seminal single-cell animal studies performed in the superior colliculus, a subcortical structure shown to create a multisensory output differing from the sum of its unisensory inputs. At the cortical level, integration of multisensory information has traditionally been deferred to higher-order associative cortical regions within the frontal, temporal and parietal lobes, after extensive processing within sensory-specific, segregated pathways. However, many anatomical, electrophysiological and neuroimaging findings now speak for multisensory convergence and interactions as a distributed process beginning much earlier than previously appreciated, within the initial stages of sensory processing. The work presented in this thesis is aimed at studying the neural basis and mechanisms of how the human brain combines sensory information between the senses of hearing and touch. Early-latency non-linear auditory-somatosensory neural response interactions have been repeatedly observed in humans and non-human primates. Whether these early, low-level interactions directly influence behavioral outcomes remains an open question, as they have been observed under diverse experimental circumstances such as anesthesia, passive stimulation, and speeded reaction time tasks.
Under laboratory settings, it has been demonstrated that simple reaction times to auditory-somatosensory stimuli are facilitated over their unisensory counterparts whether or not the stimuli are delivered to the same spatial location, suggesting that auditory-somatosensory integration must occur in cerebral regions with large-scale spatial representations. However, experiments that required spatial processing of the stimuli have observed effects limited to spatially aligned conditions, or varying depending on which body part was stimulated. Whether these divergences stem from task requirements and/or the need for spatial processing has not been firmly established. Hypotheses and experimental results: In a first study, we hypothesized that early non-linear auditory-somatosensory neural response interactions are relevant to behavior. Performing a median split according to reaction time on a subset of behavioral and electroencephalographic data, we found that the earliest non-linear multisensory interactions measured within the EEG signal (i.e., between 40 and 83 ms post-stimulus onset) were specific to fast reaction times, indicating a direct link between early neural response interactions and behavior. In a second study, we hypothesized that the relevance of spatial information for task performance has an impact on behavioral measures of auditory-somatosensory integration. Across two psychophysical experiments we show that facilitated detection occurs even when attending to spatial information, with no modulation according to the spatial alignment of the stimuli. On the other hand, discrimination performance with probes, quantified using sensitivity (d'), is impaired following multisensory trials in general, and significantly more so following misaligned multisensory trials. In a third study, we hypothesized that behavioral improvements might vary depending on which body part is stimulated.
Preliminary results suggest a possible dissociation between behavioral improvements and ERPs. RTs to multisensory stimuli were modulated by space only when somatosensory stimuli were delivered to the neck, whereas multisensory ERPs were modulated by spatial alignment for both types of somatosensory stimuli. Conclusion: This thesis provides insight into the functional role played by early, low-level multisensory interactions. Combining psychophysics and electrical neuroimaging techniques, we demonstrate the behavioral relevance of early, low-level interactions in the normal human system. Moreover, we show that these early interactions are impervious to top-down influences on spatial processing, suggesting their occurrence within cerebral regions having access to large-scale spatial representations. We finally highlight specific interactions between auditory space and somatosensory stimulation of different body parts. Gaining an in-depth understanding of how multisensory integration normally operates is of central importance, as it will ultimately permit us to consider how the impaired brain could benefit from rehabilitation with multisensory stimulation.
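The sensitivity index d' used in the second study is a standard signal-detection measure; a minimal sketch of its computation, with purely illustrative hit and false-alarm rates (not data from the thesis):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for aligned vs. misaligned multisensory trials:
# a lower d' on misaligned trials would mirror the impairment reported above.
print(d_prime(0.85, 0.20))  # ≈ 1.88
print(d_prime(0.75, 0.20))  # ≈ 1.52
```

Because d' subtracts the false-alarm criterion out, it separates discrimination ability from response bias, which is why it is preferred over raw accuracy in such designs.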
Abstract:
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.
Abstract:
Mapping the human auditory cortex with standard functional imaging techniques is difficult because of its small size and angular position along the Sylvian fissure. As a result, the exact number and location of auditory cortex areas in the human remain unknown. In a first experiment, we measured the two largest tonotopic areas of primary auditory cortex (PAC; A1 and R) using high-resolution functional MRI at 7 Tesla, relative to the underlying anatomy of Heschl's gyrus (HG). The data reveal a clear anatomo-functional relationship that indicates the location of PAC across the range of common morphological variants of HG (single gyrus, partial duplication and complete duplication). Human PAC tonotopic areas are oriented along an oblique posterior-to-anterior axis, with mirror-symmetric frequency gradients perpendicular to HG, as in the macaque. In a second experiment, we tested whether these primary frequency-tuned units were modulated by selective attention to preferred vs. non-preferred sound frequencies in the dynamic manner needed to account for human listening abilities in noisy environments, such as cocktail parties or busy streets. We used a dual-stream selective attention experiment in which subjects attended to one of two competing tonal streams presented simultaneously to different ears. Attention to low-frequency tones (250 Hz) enhanced neural responses within low-frequency-tuned voxels relative to high-frequency tones (4000 Hz), and vice versa when attention switched from high to low. Human PAC is thus able to tune into attended frequency channels and can switch frequencies on demand, like a radio. In a third experiment, we investigated repetition suppression effects for environmental sounds within primary and non-primary early-stage auditory areas, identified with the tonotopic mapping design.
Repeated presentations of sounds from the same sources, as compared to different sources, gave repetition suppression effects within posterior and medial non-primary areas of the right hemisphere, reflecting their potential involvement in semantic representations. These three studies were conducted at 7 Tesla with high-resolution imaging. However, 7 Tesla scanners are, for the moment, not yet used for clinical diagnosis and mostly reside in institutions external to hospitals. Hospital-based clinical functional and structural studies are therefore mainly performed using lower-field systems (1.5 or 3 Tesla). In a fourth experiment, we acquired tonotopic maps at 3 and 7 Tesla and evaluated the consistency of a tonotopic mapping paradigm between scanners. Mirror-symmetric gradients within PAC were highly similar at 7 and 3 Tesla across renderings at different spatial resolutions. We concluded that the tonotopic mapping paradigm is robust and suitable for the definition of primary tonotopic areas at 3 Tesla as well. Finally, in a fifth study, we examined whether focal brain lesions alter tonotopic representations in the intact ipsi- and contralesional primary auditory cortex in three patients with hemispheric or cerebellar lesions, with and without auditory complaints. We found evidence of tonotopic reorganisation at the level of the primary auditory cortex in cases of brain lesions, independently of auditory complaints. Overall, these results reflect a certain degree of plasticity within primary auditory cortex in different populations of subjects, assessed at different field strengths.
Abstract:
The study of a psychiatric disease such as schizophrenia in an animal model relies on different approaches attempting to replicate brain perturbations similar to those observed in the illness. In the present work, the behavioural consequences of a functional deficit in brain connectivity and coordination were assessed in rats with a transitory (50%) glutathione (GSH) deficit induced during postnatal development (PND 5 to PND 16) by daily injections of BSO (L-buthionine-(S,R)-sulfoximine). We searched for a theoretical syndrome associating ecologically relevant behavioural adaptive deficits resulting from the weakening of sensory integration processes. Our results revealed a significant and specific deficit of BSO-treated rats in spatial orientation tasks designed to test cognitive mapping abilities. Treated rats behaved as if impaired in the proactive strategies supported by an abstract representation such as a cognitive map. In contrast, their performance was preserved whenever the environmental conditions allowed for adaptive reactive strategies, an equivalent of the visual affordances described by Gibson (1958). This supports our thesis that BSO-treated rats have difficulties elaborating a global representation of the environment. This deficit was completely or partially compensated by the development of an increased attention to the environment's visual details. This compensatory reactive strategy requires a rich environment allowing for continuous adjustment to visual cues. However, such adjustment does not allow predictions and expectancies about what should be met and perceived in a certain direction when familiar visual spatial cues are missing. Such competencies require orientation based on the use of an abstract spatial representation, independent of the specific sensory modalities that contributed to its elaboration. The impairment of such spatial representation in BSO-treated rats could result from a deficit in the integration and organization of perceptual information. Our model leads to the hypothesis that these fundamental deficits might account for certain symptoms of schizophrenia. They would also interfere with the capacity to elaborate the spatial representation necessary for optimal orientation in natural, artificial or symbolic environments.
Abstract:
The 2×2 MIMO profiles included in Mobile WiMAX specifications are Alamouti's space-time code (STC) for transmit diversity and spatial multiplexing (SM). The former has full diversity and the latter has full rate, but neither of them has both of these desired features. An alternative 2×2 STC, which is both full rate and full diversity, is the Golden code. It is the best known 2×2 STC, but it has high decoding complexity. Recently, attention has turned to decoder complexity: this issue was included in the STC design criteria, and different STCs were proposed. In this paper, we first present a full-rate, full-diversity 2×2 STC design leading to substantially lower complexity of the optimum detector compared to the Golden code, with only a slight performance loss. We provide the general optimized form of this STC and show that the scheme achieves the diversity-multiplexing frontier for square QAM signal constellations. We then present a variant of the proposed STC, which provides a further decrease in detection complexity at the cost of a 25% rate reduction, and show that it offers an interesting trade-off between the Alamouti scheme and SM.
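For context, the Alamouti code mentioned above transmits two symbols over two antennas and two time slots; its orthogonality is what keeps maximum-likelihood detection per-symbol simple. A minimal numerical sketch of the standard encoding (not code from the paper):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """2x2 Alamouti codeword: rows = transmit antennas, columns = time slots.
    Slot 1 sends (s1, s2); slot 2 sends (-s2*, s1*)."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

X = alamouti_encode(1 + 1j, 1 - 1j)
# Orthogonality: X @ X^H = (|s1|^2 + |s2|^2) * I, which decouples the two
# symbols at the receiver and enables symbol-by-symbol ML detection.
print(X @ X.conj().T)  # → 4 * identity matrix for these symbols
```

The Golden code gives up this orthogonality in exchange for full rate, which is exactly why its optimum detector is more complex, the trade-off the abstract addresses.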
Abstract:
Background: Earlier contributions have documented significant changes in sensory and attention-related endogenous event-related potential (ERP) components and θ-band oscillatory responses during working memory activation in patients with schizophrenia. In patients with first-episode psychosis, such studies are still scarce and mostly focused on auditory sensory processing. The present study aimed to explore whether subtle deficits of cortical activation are present in these patients before the decline of working memory performance. Methods: We assessed exogenous and endogenous ERPs and frontal θ event-related synchronization (ERS) in patients with first-episode psychosis and healthy controls who successfully performed an adapted 2-back working memory task, including 2 visual n-back working memory tasks as well as oddball detection and passive fixation tasks. Results: We included 15 patients with first-episode psychosis and 18 controls in this study. Compared with controls, patients with first-episode psychosis displayed increased latencies of early visual ERPs and of the phasic θ ERS culmination peak in all conditions. However, they also showed rapid recruitment of working memory-related neural generators, even in pure attention tasks, as indicated by the decreased N200 latency and increased amplitude of sustained θ ERS in detection compared with controls. Limitations: Owing to the limited sample size, no distinction was made between patients with first-episode psychosis with positive and negative symptoms. Although we controlled for the global load of neuroleptics, medication effects cannot be totally ruled out. Conclusion: The present findings support the concept of a blunted electroencephalographic response in patients with first-episode psychosis, who recruit their neural generators maximally in simple attention conditions without being able to modulate brain activation further as the complexity of working memory tasks increases.
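Event-related synchronization as used above is conventionally expressed as the percentage change of band power relative to a pre-stimulus baseline; a toy sketch of that computation on synthetic data (the window bounds and time course are illustrative, not the study's pipeline):

```python
import numpy as np

def ers_percent(power, times, baseline=(-0.5, 0.0), window=(0.2, 0.6)):
    """ERS as % change of (e.g. theta-band) power vs. pre-stimulus baseline."""
    base = power[(times >= baseline[0]) & (times < baseline[1])].mean()
    post = power[(times >= window[0]) & (times < window[1])].mean()
    return 100.0 * (post - base) / base

# Synthetic power time course: baseline power 1.0, doubling 200-600 ms
# after stimulus onset -> ERS of +100% (a synchronization increase).
t = np.linspace(-0.5, 1.0, 1501)
p = np.ones_like(t)
p[(t >= 0.2) & (t < 0.6)] = 2.0
print(ers_percent(p, t))  # → 100.0
```

A negative value of the same measure would be event-related desynchronization (ERD); the sign convention is what distinguishes the two.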
Abstract:
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799-1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception after sound offset, as a function of individual auditory versus visual preference/dominance, using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to the different L-sound profiles (velocities). In addition, we found that this effect depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple detection threshold per sensory modality. Whereas individuals with "visual preference" showed enhanced phosphene perception irrespective of L-sound velocity, those with "auditory preference" showed differential peaks in phosphene perception whose delays after sound offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration, possibly to support prompt identification of, and reaction to, potentially dangerous approaching objects. The observed interindividual differences favor the idea that, unlike the early effects, this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.
Resumo:
Background: Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to localize the caudate structure suitably. However, the atlas prior information may not represent the structure of interest correctly, so a more flexible technique may be needed for accurate segmentation. Method: We present CaudateCut: a new fully automatic method for segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures such as the caudate nucleus by defining new data and boundary potentials for the energy function. In particular, we exploit intensity and geometry information, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results: We apply the novel CaudateCut method to the segmentation of the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, with a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion: CaudateCut generates segmentation results that are comparable to gold-standard segmentations and that are reliable for analyzing the neuroanatomical abnormalities that differentiate healthy controls from children with ADHD.
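The energy-minimization idea behind Graph Cut segmentation can be sketched in miniature. In the toy 1-D example below, the data term (negative log-likelihood under an atlas prior) and the contrast-sensitive boundary term are hypothetical stand-ins for illustration only, not the actual CaudateCut potentials:

```python
import numpy as np

# Illustrative Graph Cut-style segmentation energy on a 1-D toy "scan".
# E(L) = sum_p D_p(L_p) + lam * sum_{adjacent p,q} B_pq * [L_p != L_q]

def segmentation_energy(labels, atlas_prob, intensity, lam=1.0):
    eps = 1e-9
    # Data term: penalize labels that disagree with the atlas prior.
    data = np.where(labels == 1,
                    -np.log(atlas_prob + eps),
                    -np.log(1.0 - atlas_prob + eps)).sum()
    # Boundary term: label changes are cheap across strong intensity edges,
    # expensive between similar-intensity neighbours.
    changes = labels[:-1] != labels[1:]
    contrast = np.exp(-np.diff(intensity) ** 2)
    return data + lam * contrast[changes].sum()

atlas_prob = np.array([0.10, 0.20, 0.90, 0.95, 0.90, 0.20])  # P(caudate) per voxel
intensity = np.array([10.0, 11.0, 40.0, 42.0, 41.0, 12.0])
coherent = np.array([0, 0, 1, 1, 1, 0])    # follows the prior and the edges
fragmented = np.array([0, 1, 0, 1, 0, 1])

print(segmentation_energy(coherent, atlas_prob, intensity) <
      segmentation_energy(fragmented, atlas_prob, intensity))  # True
```

A min-cut solver then searches over all labelings for the one minimizing this energy; the sketch only evaluates two candidate labelings to show that the energy prefers spatially coherent, atlas-consistent segmentations.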
Resumo:
Mismatch negativity (MMN) overlaps with other auditory event-related potential (ERP) components. We examined the ERPs of 50 9- to 11-year-old children for the vowels /i/ and /y/ and equivalent complex tones. The goal was to separate the MMN from obligatory ERP components using principal component analysis and an equal-probability control condition. In addition to the deviant-minus-standard contrast, we employed the deviant-minus-control contrast to see whether obligatory processing contributes to the MMN in children. When looking only at the speech deviant-minus-standard contrast, the MMN starts around 112 ms. However, when both contrasts are examined, the MMN emerges for speech at 160 ms, whereas for nonspeech the MMN is observed at 112 ms regardless of contrast. We argue that the discriminative response to speech stimuli at 112 ms is obligatory in nature rather than reflecting change-detection processing.
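The logic of the two contrasts can be illustrated on synthetic ERPs. The decomposition of the deviant response into an obligatory component plus a later change-detection component, and all waveform shapes, amplitudes, and latencies below, are assumptions for illustration, not the study's data:

```python
import numpy as np

fs = 500.0                          # sampling rate (Hz)
t = np.arange(0.0, 0.3, 1.0 / fs)   # 0-300 ms epoch

def component(peak_s, amp=-2.0, width=0.02):
    # Gaussian-shaped negative ERP deflection peaking at peak_s seconds.
    return amp * np.exp(-((t - peak_s) ** 2) / (2 * width ** 2))

obligatory = component(0.112)            # evoked by any infrequent sound
change_detection = component(0.160)      # "true" MMN to the change

standard = np.zeros_like(t)
control = obligatory                     # equal-probability control keeps the obligatory part
deviant = obligatory + change_detection

def onset_ms(diff, thresh=-0.5):
    # First time point at which the difference wave exceeds the threshold.
    return 1000.0 * t[np.flatnonzero(diff < thresh)[0]]

# Deviant-minus-standard onsets earlier than deviant-minus-control, because
# its early portion reflects obligatory processing, not change detection.
print(onset_ms(deviant - standard), onset_ms(deviant - control))
```

Subtracting the control response rather than the standard removes the obligatory deflection from the difference wave, which is exactly why the speech MMN onset shifts later once both contrasts are considered.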
Resumo:
Previous research has provided inconsistent results regarding the spatial modulation of auditory-somatosensory interactions. The present study reports three experiments designed to investigate the nature of these interactions in the space close to the head. Human participants made speeded detection responses to unimodal auditory, somatosensory, or simultaneous auditory-somatosensory stimuli. In Experiment 1, electrocutaneous stimuli were presented to either earlobe, while auditory stimuli were presented from the same versus opposite side, and from one of two distances (20 vs. 70 cm) from the participant's head. The results demonstrated a spatial modulation of auditory-somatosensory interactions when auditory stimuli were presented close to the head. In Experiment 2, electrocutaneous stimuli were delivered to the hands, which were placed either close to or far from the head, while the auditory stimuli were again presented at one of the two distances. The results revealed that the spatial modulation observed in Experiment 1 was specific to the particular body part stimulated (the head) rather than to the region of space (i.e., around the head) in which the stimuli were presented. The results of Experiment 3 demonstrated that sounds containing high-frequency components are particularly effective in eliciting this auditory-somatosensory spatial effect. Taken together, these findings help to resolve inconsistencies in the previous literature and suggest that auditory-somatosensory multisensory integration is modulated by both the body surface stimulated and the acoustic spectra of the stimuli presented.
Resumo:
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand.
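The "channel tuning" result can be sketched on simulated voxel responses. The multiplicative gain model, its value (1.3), and the voxel counts are illustrative assumptions, not the study's measured effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 100
tuning = np.array(["low"] * 50 + ["high"] * 50)   # preferred frequency per voxel
baseline = rng.normal(1.0, 0.05, n_vox)           # dual-stream response, no attention bias

def block_response(attended, gain=1.3):
    resp = baseline.copy()
    resp[tuning == attended] *= gain              # enhance attended-frequency voxels
    return resp

attend_low = block_response("low")
attend_high = block_response("high")

# Attention index: mean low-tuned minus mean high-tuned response per block.
idx_low = attend_low[tuning == "low"].mean() - attend_low[tuning == "high"].mean()
idx_high = attend_high[tuning == "low"].mean() - attend_high[tuning == "high"].mean()
print(idx_low > 0 > idx_high)  # the index flips sign when attention switches
```

Comparing this index across the 30 s attend-low and attend-high blocks is one simple way to expose the frequency-specific enhancement and its reversal that the study describes.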
Resumo:
Although paraphrasing is the linguistic mechanism underlying many plagiarism cases, little attention has been paid to its analysis in the framework of automatic plagiarism detection. Therefore, state-of-the-art plagiarism detectors find it difficult to detect cases of paraphrase plagiarism. In this article, we analyse the relationship between paraphrasing and plagiarism, paying special attention to which paraphrase phenomena underlie acts of plagiarism and which of them are detected by plagiarism detection systems. With this aim in mind, we created the P4P corpus, a new resource that uses a paraphrase typology to annotate a subset of the PAN-PC-10 corpus for automatic plagiarism detection. The results of the Second International Competition on Plagiarism Detection were analysed in the light of this annotation. The experiments presented show that (i) more complex paraphrase phenomena and a high density of paraphrase mechanisms make plagiarism detection more difficult, (ii) lexical substitutions are the paraphrase mechanisms used most when plagiarising, and (iii) paraphrase mechanisms tend to shorten the plagiarised text. For the first time, the paraphrase mechanisms behind plagiarism have been analysed, providing critical insights for the improvement of automatic plagiarism detection systems.
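Why lexical substitutions defeat overlap-based detectors can be shown with a toy n-gram containment measure. The measure and the example sentences below are hypothetical illustrations, not the PAN competition systems or corpus material:

```python
# Toy illustration: a few word substitutions destroy most shared trigrams.

def ngrams(text, n=3):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def containment(suspicious, source, n=3):
    # Fraction of the suspicious document's n-grams found in the source.
    susp = ngrams(suspicious, n)
    return len(susp & ngrams(source, n)) / len(susp) if susp else 0.0

source = "the brain bears a large capacity for reorganization following damage"
verbatim = "the brain bears a large capacity for reorganization following damage"
substituted = "the brain holds a big ability for reorganization following injury"

print(containment(verbatim, source))     # 1.0
print(containment(substituted, source))  # 0.125: four substitutions break most trigrams
```

Each substituted word removes up to n overlapping n-grams at once, which is consistent with the finding that a high density of paraphrase mechanisms makes plagiarism harder to detect.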
Resumo:
Spatial hearing refers to a set of abilities enabling us to determine the location of sound sources, redirect our attention toward relevant acoustic events, and recognize separate sound sources in noisy environments. Determining the location of sound sources plays a key role in the way humans perceive and interact with their environment. Deficits in sound localization are observed after lesions to the neural tissues supporting these functions and can result in serious handicaps in everyday life. These deficits can, however, be remediated (at least to a certain degree) thanks to the surprising capacity for reorganization that the human brain possesses following damage and/or learning, namely brain plasticity. In this thesis, our aim was to investigate the functional organization of auditory spatial functions and the learning-induced plasticity of these functions. Overall, we describe the results of three studies. The first study, "The role of the right parietal cortex in sound localization: A chronometric single pulse transcranial magnetic stimulation study" (At et al., 2011), study A, investigated the role of the right parietal cortex in spatial functions and its chronometry (i.e., the critical time window of its contribution to sound localization). We concentrated on the behavioral changes produced by the temporary inactivation of the parietal cortex with transcranial magnetic stimulation (TMS). We found that the integrity of the right parietal cortex is crucial for localizing sounds in space and determined a critical time window of its involvement, suggesting a right parietal dominance for auditory spatial discrimination in both hemispaces.
In "Distributed coding of the auditory space in man: evidence from training-induced plasticity" (At et al., 2013a), study B, we used electroencephalography (EEG) to investigate the neurophysiological correlates of the different subregions of the right auditory hemispace and the changes induced by multi-day auditory spatial training in healthy subjects. We report that sound locations are coded in a distributed fashion over numerous auditory regions, that particular auditory areas code specifically for precise parts of auditory space, and that this regional specificity is enhanced by training. In the third study, "Training-induced changes in auditory spatial mismatch negativity" (At et al., 2013b), study C, we investigated the pre-attentive neurophysiological changes induced by a four-day training in healthy subjects with a passive mismatch negativity (MMN) paradigm. We showed that training changed the mechanisms underlying the relative representation of sound positions rather than the specific lateralizations themselves, and that it changed the coding in right parahippocampal regions.