92 results for sensory stimuli
at Université de Lausanne, Switzerland
Abstract:
The neural response to a violation of sequences of identical sounds is a typical example of the brain's sensitivity to auditory regularities. Previous literature interprets this effect as a pre-attentive and unconscious processing of sensory stimuli. By contrast, a violation of auditory global regularities, i.e. those based on repeating groups of sounds, is typically detectable when subjects can consciously perceive them. Here, we challenge the notion that global detection implies consciousness by testing the neural response to global violations in a group of 24 patients with post-anoxic coma (three females, age range 45-87 years), treated with mild therapeutic hypothermia and sedation. By applying a decoding analysis to electroencephalographic responses to standard versus deviant sound sequences, we found above-chance decoding performance in 10 of 24 patients (Wilcoxon signed-rank test, P < 0.001), despite five of them being mildly hypothermic, sedated and unarousable. Furthermore, consistent with previous findings based on the mismatch negativity, the progression of this decoding performance was informative of patients' chances of awakening (78% predictive of awakening). Our results show for the first time that detection of global regularities at the neural level can occur despite a deeply unconscious state.
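As a rough illustration of this kind of single-patient analysis, the sketch below (Python, using scikit-learn and SciPy) cross-validates a standard-versus-deviant classifier and tests whether its performance exceeds chance. The array names, classifier, and statistical procedure are assumptions for illustration, not the study's actual pipeline.

```python
# Hypothetical sketch of a standard-vs-deviant EEG decoding analysis; the
# array names, classifier and statistical test are illustrative assumptions,
# not the study's actual pipeline.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_deviance(epochs, labels, n_repeats=20, n_splits=5, seed=0):
    """Repeated cross-validated AUC for standard (0) vs. deviant (1) epochs.

    `epochs` is assumed to be (n_trials, n_features), e.g. flattened
    channels x time points; `labels` is the matching 0/1 vector.
    """
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    rng = np.random.RandomState(seed)
    aucs = []
    for _ in range(n_repeats):
        cv = StratifiedKFold(n_splits=n_splits, shuffle=True,
                             random_state=rng.randint(1_000_000))
        aucs.append(cross_val_score(clf, epochs, labels,
                                    cv=cv, scoring="roc_auc").mean())
    aucs = np.asarray(aucs)
    # Signed-rank test of the repeated estimates against chance (AUC = 0.5);
    # purely illustrative, as the repeats are not independent samples.
    _, p_value = wilcoxon(aucs - 0.5, alternative="greater")
    return aucs.mean(), p_value
```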
Abstract:
PURPOSE: We characterized the pupil responses that reflect rod, cone, and melanopsin function in a genetically homogeneous cohort of patients with autosomal dominant retinitis pigmentosa (adRP). METHODS: Nine patients with Gly56Arg mutation of the NR2E3 gene and 12 control subjects were studied. Pupil and subjective visual responses to red and blue light flashes over a 7 log-unit range of intensities were recorded under dark and light adaptation. The pupil responses were plotted against stimulus intensity to obtain red-light and blue-light response curves. RESULTS: In the dark-adapted blue-light stimulus condition, patients showed significantly higher threshold intensities for visual perception and for a pupil response compared to controls (P = 0.02 and P = 0.006, respectively). The rod-dependent, blue-light pupil responses decreased with disease progression. In contrast, the cone-dependent pupil responses (light-adapted red-light stimulus condition) did not differ between patients and controls. The difference in the retinal sensitivity to blue and red stimuli was the most sensitive parameter to detect photoreceptor dysfunction. Unexpectedly, the melanopsin-mediated pupil response was decreased in patients (P = 0.02). CONCLUSIONS: Pupil responses of patients with NR2E3-associated adRP demonstrated reduced retinal sensitivity to dim blue light under dark adaptation, presumably reflecting decreased rod function. Rod-dependent pupil responses were quantifiable in all patients, including those with non-recordable scotopic electroretinogram, and correlated with the extent of clinical disease. Thus, the chromatic pupil light reflex can be used to monitor photoreceptor degeneration over a larger range of disease progression compared to standard electrophysiology.
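For illustration only, one minimal way to turn such measurements into a response curve and threshold is sketched below (Python/SciPy). The arrays `log_intensity` and `constriction`, the sigmoid model, and the 10%-of-maximum criterion are assumptions, not the study's method.

```python
# Illustrative sketch: fit a sigmoid to pupil constriction amplitude versus
# log stimulus intensity and read off a criterion threshold. The model and
# criterion are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, top, x50, slope):
    """Saturating pupil response as a function of log intensity."""
    return top / (1.0 + np.exp(-(x - x50) / slope))

def fit_response_curve(log_intensity, constriction, criterion=0.1):
    """Fit the curve and return the log intensity evoking `criterion` x max."""
    p0 = [constriction.max(), np.median(log_intensity), 1.0]
    (top, x50, slope), _ = curve_fit(sigmoid, log_intensity, constriction, p0=p0)
    # Invert the sigmoid at criterion * top to obtain a threshold intensity.
    threshold = x50 - slope * np.log((1.0 - criterion) / criterion)
    return {"top": top, "x50": x50, "slope": slope, "threshold": threshold}
```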
Abstract:
BACKGROUND: Members of the degenerin/epithelial (DEG/ENaC) sodium channel family are mechanosensors in C. elegans, and Nav1.7 and Nav1.8 voltage-gated sodium channel knockout mice have major deficits in mechanosensation. β and γENaC sodium channel subunits are present with acid sensing ion channels (ASICs) in mammalian sensory neurons of the dorsal root ganglia (DRG). The extent to which epithelial or voltage-gated sodium channels are involved in transduction of mechanical stimuli is unclear. RESULTS: Here we show that deleting β and γENaC sodium channels in sensory neurons does not result in mechanosensory behavioural deficits. We had shown previously that Nav1.7/Nav1.8 double knockout mice have major deficits in behavioural responses to noxious mechanical pressure. However, all classes of mechanically activated currents in DRG neurons are unaffected by deletion of the two sodium channels. In contrast, the ability of Nav1.7/Nav1.8 knockout DRG neurons to generate action potentials is compromised, with 50% of small-diameter sensory neurons unable to respond to electrical stimulation in vitro. CONCLUSION: Behavioural deficits in Nav1.7/Nav1.8 knockout mice reflect a failure of action potential propagation in a mechanosensitive set of sensory neurons rather than a loss of primary transduction currents. DEG/ENaC sodium channels are not mechanosensors in mouse sensory neurons.
Abstract:
Neural comparisons of bilateral sensory inputs are essential for visual depth perception and accurate localization of sounds in space. All animals, from single-cell prokaryotes to humans, orient themselves in response to environmental chemical stimuli, but the contribution of spatial integration of neural activity in olfaction remains unclear. We investigated this problem in Drosophila melanogaster larvae. Using high-resolution behavioral analysis, we studied the chemotaxis behavior of larvae with a single functional olfactory neuron on either the left or right side of the head, allowing us to examine unilateral or bilateral olfactory input. We developed new spectroscopic methods to create stable odorant gradients in which odor concentrations were experimentally measured. In these controlled environments, we observed that a single functional neuron provided sufficient information to permit larval chemotaxis. We found additional evidence that the overall accuracy of navigation is enhanced by the increase in the signal-to-noise ratio conferred by bilateral sensory input.
Abstract:
Multisensory interactions are observed in species from single-cell organisms to humans. Important early work was primarily carried out in the cat superior colliculus and a set of critical parameters for their occurrence was defined. Primary among these were temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment was also a critical parameter for the temporally earliest multisensory interactions that are observed in lower-level sensory cortices of the human. While multisensory interactions in humans have been shown behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear if such effects are due to early sensory level integration or later perceptual level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in and out of spatial register. Subjects more rapidly detected multisensory than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses, rather than the activation of regions specifically responsive to multisensory pairs. Using the local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
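The "whole versus sum of parts" comparison referred to here is the standard additive-model test for non-linear multisensory interactions. A minimal sketch is given below (Python/NumPy/SciPy), assuming hypothetical per-subject ERP arrays `erp_as`, `erp_a`, `erp_s` of shape (n_subjects, n_times) and an illustrative test window; the exact windows and statistics in the study may differ.

```python
# Minimal sketch of the additive-model test for multisensory interactions:
# compare the paired response (AS) with the sum of unisensory responses (A + S).
# Array names, shapes and the time window are assumptions for illustration.
import numpy as np
from scipy.stats import ttest_1samp

def nonlinear_interaction(erp_as, erp_a, erp_s, times, window=(0.05, 0.10)):
    """Return per-subject AS - (A + S) values in a window and a paired test."""
    diff = erp_as - (erp_a + erp_s)              # non-linear interaction term
    mask = (times >= window[0]) & (times <= window[1])
    window_mean = diff[:, mask].mean(axis=1)     # one value per subject
    t_stat, p_value = ttest_1samp(window_mean, 0.0)
    return window_mean, t_stat, p_value
```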
Abstract:
Calbindin and calretinin are two homologous calcium-binding proteins that are expressed by subpopulations of primary sensory neurons. In the present work, we have studied the distribution of the neurons expressing calbindin and calretinin in dorsal root ganglia of the rat and their peripheral projections. Calbindin and calretinin immunoreactivities were expressed by subpopulations of large- and small-sized primary sensory neurons and colocalized in a majority of large-sized ones. The axons emerging from calbindin- or calretinin-immunoreactive neurons innervated muscle spindles, Pacini corpuscles and subepidermal lamellar corpuscles in the glabrous skin, formed palisades of lanceolate endings around hairs and vibrissae, and gave rise to intraepidermal nerve endings in the digital skin. Since most of these afferents are considered as rapidly adapting mechanoreceptors, it is concluded that calbindin- or calretinin-expressing neurons innervate particular mechanoreceptors that display physiological characteristics of rapid adaptation to stimuli.
Abstract:
Accurate perception of taste information is crucial for animal survival. In adult Drosophila, gustatory receptor neurons (GRNs) perceive chemical stimuli of one specific gustatory modality associated with a stereotyped behavioural response, such as aversion or attraction. We show that GRNs of Drosophila larvae employ a surprisingly different mode of gustatory information coding. Using a novel method for calcium imaging in the larval gustatory system, we identify a multimodal GRN that responds to chemicals of different taste modalities with opposing valence, such as sweet sucrose and bitter denatonium, reliant on different sensory receptors. This multimodal neuron is essential for bitter compound avoidance, and its artificial activation is sufficient to mediate aversion. However, the neuron is also essential for the integration of taste blends. Our findings support a model for taste coding in larvae, in which distinct receptor proteins mediate different responses within the same, multimodal GRN.
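As a purely illustrative aside, calcium responses of this kind are typically quantified as a change in fluorescence relative to a pre-stimulus baseline (dF/F0); a minimal sketch, assuming a hypothetical single-ROI trace and sampling rate, is shown below.

```python
# Minimal sketch of a dF/F0 calculation for a calcium-imaging trace; the
# trace, sampling rate and baseline window are hypothetical.
import numpy as np

def delta_f_over_f0(trace, fs, baseline_s=5.0):
    """Normalize a fluorescence trace to its pre-stimulus baseline mean."""
    trace = np.asarray(trace)
    f0 = trace[: int(baseline_s * fs)].mean()
    return (trace - f0) / f0
```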
Abstract:
Whether the somatosensory system, like its visual and auditory counterparts, comprises parallel functional pathways for processing identity and spatial attributes (so-called what and where pathways, respectively) has hitherto been studied in humans using neuropsychological and hemodynamic methods. Here, electrical neuroimaging of somatosensory evoked potentials (SEPs) identified the spatio-temporal mechanisms subserving vibrotactile processing during two types of blocks of trials. What blocks varied stimuli in their frequency (22.5 Hz vs. 110 Hz) independently of their location (left vs. right hand). Where blocks varied the same stimuli in their location independently of their frequency. In this way, there was a 2x2 within-subjects factorial design, counterbalancing the hand stimulated (left/right) and trial type (what/where). Responses to physically identical somatosensory stimuli differed within 200 ms post-stimulus onset, which is within the same timeframe we previously identified for audition (De Santis, L., Clarke, S., Murray, M.M., 2007. Automatic and intrinsic auditory "what" and "where" processing in humans revealed by electrical neuroimaging. Cereb Cortex 17, 9-17.). Initially (100-147 ms), responses to each hand were stronger to the what than the where condition in a statistically indistinguishable network within the hemisphere contralateral to the stimulated hand, arguing against hemispheric specialization as the principal basis for somatosensory what and where pathways. Later (149-189 ms), responses differed topographically, indicative of the engagement of distinct configurations of brain networks. A common topography described responses to the where condition irrespective of the hand stimulated. By contrast, different topographies accounted for the what condition and also as a function of the hand stimulated. Parallel, functionally specialized pathways are observed across sensory systems and may be indicative of a computationally advantageous organization for processing spatial and identity information.
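The distinction drawn here between modulations of response strength and of topography maps onto two standard electrical-neuroimaging measures, global field power and global map dissimilarity. A generic sketch is given below, assuming average-referenced ERP arrays of shape (n_channels, n_times); this is the textbook formulation, not necessarily the study's exact implementation.

```python
# Sketch of two standard electrical-neuroimaging measures: global field power
# (response strength) and global map dissimilarity (topography). ERPs are
# assumed to be arrays of shape (n_channels, n_times).
import numpy as np

def global_field_power(erp):
    """Spatial standard deviation across electrodes at each time point."""
    avg_ref = erp - erp.mean(axis=0, keepdims=True)
    return avg_ref.std(axis=0)

def global_dissimilarity(erp_a, erp_b):
    """Strength-independent topographic difference between two ERPs."""
    def scale(erp):
        avg_ref = erp - erp.mean(axis=0, keepdims=True)
        gfp = avg_ref.std(axis=0)
        return avg_ref / np.where(gfp > 0, gfp, 1.0)   # guard against flat maps
    return np.sqrt(((scale(erp_a) - scale(erp_b)) ** 2).mean(axis=0))
```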
Abstract:
Research has suggested that exogenous opioid substances can have direct effects on cardiac muscle or influence neurotransmitter release via presynaptic modulation of neuronal inputs to the heart. In the present study, multiple-labelling immunohistochemistry was employed to determine the distribution of endogenous opioid peptides within the guinea-pig heart. Approximately 40% of cardiac ganglion cells contained immunoreactivity for dynorphin A (1-8), dynorphin A (1-17) and dynorphin B whilst 20% displayed leu-enkephalin immunoreactivity. Different populations of opioid-containing ganglion cells were identified according to the co-existence of opioid immunoreactivity with immunoreactivity for somatostatin and neuropeptide Y. Immunoreactivity for prodynorphin-derived peptides was observed in many sympathetic axons in the heart and was also observed, though to a lesser extent, in sensory axons. Leu-enkephalin immunoreactivity was observed in occasional sympathetic and sensory axons. No immunoreactivity was observed for met-enkephalin-arg-gly-leu or for beta-endorphin. These results demonstrate that prodynorphin-derived peptides are present in parasympathetic, sympathetic and sensory nerves within the heart, but suggest that only the prodynorphin gene is expressed in guinea-pig cardiac nerves. This study has shown that endogenous opioid peptides are well placed to regulate cardiac function via both autonomic and sensory pathways.
Abstract:
Abstract (English)

General background
Multisensory stimuli are easier to recognize, can improve learning and are processed faster than unisensory ones. As such, the ability of an organism to extract and synthesize relevant sensory inputs across multiple sensory modalities shapes its perception of and interaction with the environment. A major question in the field is how the brain extracts and fuses relevant information to create a unified perceptual representation (but also how it segregates unrelated information). This fusion between the senses has been termed "multisensory integration", a notion that derives from seminal single-cell animal studies performed in the superior colliculus, a subcortical structure shown to create a multisensory output differing from the sum of its unisensory inputs. At the cortical level, integration of multisensory information has traditionally been deferred to higher, classical associative cortical regions within the frontal, temporal and parietal lobes, after extensive processing within sensory-specific, segregated pathways. However, many anatomical, electrophysiological and neuroimaging findings now argue for multisensory convergence and interactions as a distributed process beginning much earlier than previously appreciated, within the initial stages of sensory processing.
The work presented in this thesis is aimed at studying the neural basis and mechanisms of how the human brain combines sensory information between the senses of hearing and touch. Early-latency non-linear auditory-somatosensory neural response interactions have been repeatedly observed in humans and non-human primates. Whether these early, low-level interactions directly influence behavioral outcomes remains an open question, as they have been observed under diverse experimental circumstances such as anesthesia, passive stimulation, and speeded reaction time tasks. Under laboratory settings, it has been demonstrated that simple reaction times to auditory-somatosensory stimuli are facilitated over their unisensory counterparts whether or not the stimuli are delivered to the same spatial location, suggesting that auditory-somatosensory integration must occur in cerebral regions with large-scale spatial representations. However, experiments that required spatial processing of the stimuli have observed effects limited to spatially aligned conditions, or effects varying depending on which body part was stimulated. Whether these divergences stem from task requirements and/or the need for spatial processing has not been firmly established.

Hypotheses and experimental results
In a first study, we hypothesized that early non-linear auditory-somatosensory neural response interactions are relevant to behavior. Performing a median split, according to reaction time, of a subset of behavioral and electroencephalographic data, we found that the earliest non-linear multisensory interactions measured within the EEG signal (between 40 and 83 ms post-stimulus onset) were specific to fast reaction times, indicating a direct link between early neural response interactions and behavior.
In a second study, we hypothesized that the relevance of spatial information for task performance has an impact on behavioral measures of auditory-somatosensory integration. Across two psychophysical experiments we show that facilitated detection occurs even when attending to spatial information, with no modulation according to the spatial alignment of the stimuli.
On the other hand, discrimination performance with probes, quantified using the sensitivity index (d'), is impaired following multisensory trials in general, and significantly more so following misaligned multisensory trials.
In a third study, we hypothesized that behavioral improvements might vary depending on which body part is stimulated (the hand vs. the neck). Preliminary results suggest a possible dissociation between behavioral improvements and ERPs: reaction times to multisensory stimuli were modulated by space only when somatosensory stimuli were delivered to the neck, whereas multisensory ERPs were modulated by spatial alignment for both types of somatosensory stimuli.

Conclusion
This thesis provides insight into the functional role played by early, low-level multisensory interactions. Combining psychophysics and electrical neuroimaging techniques, we demonstrate the behavioral relevance of early, low-level interactions in the normal human system. Moreover, we show that these early interactions are impervious to top-down influences on spatial processing, suggesting that they occur within cerebral regions having access to large-scale spatial representations. We finally highlight specific interactions between auditory space and somatosensory stimulation of different body parts. Gaining an in-depth understanding of how multisensory integration normally operates is of central importance, as it will ultimately allow us to consider how the impaired brain could benefit from rehabilitation with multisensory stimulation.
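Two of the analyses summarized above, the reaction-time median split (first study) and the d' sensitivity index (second study), can be sketched as follows (Python/NumPy/SciPy; variable names, data shapes and the log-linear correction are illustrative assumptions, not the thesis code).

```python
# Illustrative sketches of a reaction-time median split and of the d'
# sensitivity index; names, shapes and the correction are assumptions.
import numpy as np
from scipy.stats import norm

def median_split(reaction_times, trial_data):
    """Split single-trial data into fast and slow halves by reaction time."""
    reaction_times = np.asarray(reaction_times)
    median_rt = np.median(reaction_times)
    fast = trial_data[reaction_times <= median_rt]
    slow = trial_data[reaction_times > median_rt]
    return fast, slow

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps the z-transform finite for perfect rates.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```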
Abstract:
Real-world objects are often endowed with features that violate Gestalt principles. In our experiment, we examined the neural correlates of binding under conflict conditions in terms of the binding-by-synchronization hypothesis. We presented an ambiguous stimulus ("diamond illusion") to 12 observers. The display consisted of four oblique gratings drifting within circular apertures. Its interpretation fluctuates between bound ("diamond") and unbound (component gratings) percepts. To model a situation in which Gestalt-driven analysis contradicts the perceptually explicit bound interpretation, we modified the original diamond (OD) stimulus by speeding up one grating. Using OD and modified diamond (MD) stimuli, we managed to dissociate the neural correlates of Gestalt-related (OD vs. MD) and perception-related (bound vs. unbound) factors. Their interaction was expected to reveal the neural networks synchronized specifically in the conflict situation. The synchronization topography of EEG was analyzed with the multivariate S-estimator technique. We found that good Gestalt (OD vs. MD) was associated with a higher posterior synchronization in the beta-gamma band. The effect of perception manifested itself as reciprocal modulations over the posterior and anterior regions (theta/beta-gamma bands). Specifically, higher posterior and lower anterior synchronization supported the bound percept, and the opposite was true for the unbound percept. The interaction showed that binding under challenging perceptual conditions is sustained by enhanced parietal synchronization. We argue that this distributed pattern of synchronization relates to the processes of multistage integration ranging from early grouping operations in the visual areas to maintaining representations in the frontal networks of sensory memory.
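The multivariate S-estimator mentioned above summarizes whole-head synchronization as one minus the normalized entropy of the eigenvalue spectrum of the channel correlation matrix. A minimal sketch of that idea is given below (Python/NumPy), assuming an `eeg` array of shape (n_channels, n_times); preprocessing such as band-pass filtering and windowing, which the study applies per frequency band, is omitted.

```python
# Sketch of an S-estimator-style synchronization index: 1 minus the normalized
# entropy of the eigenvalue spectrum of the channel correlation matrix.
# The input array and any preprocessing (filtering, windowing) are assumed.
import numpy as np

def s_estimator(eeg):
    """Return a 0-1 synchronization index for an (n_channels, n_times) segment."""
    corr = np.corrcoef(eeg)                   # channel-by-channel correlations
    eigvals = np.linalg.eigvalsh(corr)        # real, non-negative eigenvalues
    eigvals = np.clip(eigvals, 1e-12, None)   # guard against round-off
    lam = eigvals / eigvals.sum()             # normalize to a distribution
    entropy = -(lam * np.log(lam)).sum()
    # 0 = unsynchronized (flat spectrum), 1 = fully synchronized channels.
    return 1.0 - entropy / np.log(eeg.shape[0])
```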
Abstract:
Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.
Abstract:
Experts in the field of conversion disorder have suggested that the upcoming DSM-V edition put less weight on associated psychological factors and emphasise the role of clinical findings. Indeed, a critical step in reaching a diagnosis of conversion disorder is careful bedside neurological examination, aimed at excluding organic signs and identifying 'positive' signs suggestive of a functional disorder. These positive signs are well known to all trained neurologists, but their validity is still not established. The aim of this study is to provide current evidence regarding their sensitivity and specificity. We conducted a systematic search on motor, sensory and gait functional signs in Embase, Medline and PsycINFO from 1965 to June 2012. Studies in English, German or French reporting objective data on more than 10 participants in a controlled design were included in a systematic review. Other relevant signs are discussed in a narrative review. Eleven controlled studies (out of 147 eligible articles) describing 14 signs (7 motor, 5 sensory, 2 gait) reported sensitivities ranging from 8% to 100% but consistently high specificities of 92-100%. Studies were of evidence class III, only two had a blinded design, and none reported on the inter-rater reliability of the signs. Clinical signs for functional neurological symptoms are numerous but only 14 have been validated; overall they have low sensitivity but high specificity, and their use should thus be recommended, especially with the introduction of the new DSM-V criteria.
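For concreteness, the two validity measures at stake here follow directly from a 2x2 table of sign presence versus diagnosis; a small worked example with made-up counts is shown below.

```python
# Worked example of sensitivity and specificity from a 2x2 table; the counts
# below are invented purely for illustration.
def sensitivity_specificity(true_pos, false_neg, false_pos, true_neg):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return (true_pos / (true_pos + false_neg),
            true_neg / (true_neg + false_pos))

# A sign present in 12 of 40 patients with a functional disorder and in
# 1 of 50 patients with an organic disorder:
print(sensitivity_specificity(true_pos=12, false_neg=28, false_pos=1, true_neg=49))
# -> (0.3, 0.98): low sensitivity, high specificity, as for the signs reviewed here
```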
Abstract:
Primary sensory neurons display various neuronal phenotypes which may be influenced by factors present in central or peripheral targets. In the case of DRG cells expressing substance P (SP), the influence of peripheral or central targets on the neuronal expression of this neuropeptide was tested. DRG cells were cultured from chick embryos at E6 or E10 (before or after the establishment of functional connections with targets). Preprotachykinin mRNA was visualized in DRG cell cultures by either Northern blot or in situ hybridization using a labeled antisense riboprobe, while the neuropeptide SP was detected by immunostaining with a monoclonal antibody. In DRG cell cultures from E10, only 60% of neurons expressed SP. In contrast, DRG cell cultures prepared at E6 showed a significant hybridization signal and SP-like immunoreactivity in virtually all neurons (98%). The addition of extracts from muscle, skin, brain or spinal cord to DRG cells cultured at E6 reduced the percentage of neurons expressing preprotachykinin mRNA and SP-like immunoreactivity by 20%. Our results indicate that factors derived from targets inhibit SP expression in a subset of primary sensory neurons and act on the transcriptional control of the preprotachykinin gene.