990 results for Neuroscience informatique
Abstract:
Throughout life, the brain develops representations of its environment that allow the individual to make the best use of it. How these representations develop during the pursuit of rewards remains a mystery. It is reasonable to think that the cortex is the seat of these representations and that the basal ganglia play an important role in reward maximization. In particular, dopaminergic neurons appear to encode a reward prediction error signal. This thesis studies the problem by building, with the help of machine learning, a computational model that integrates a large body of neurological evidence. After an introduction to the mathematical framework and to several machine learning algorithms, an overview of learning in psychology and neuroscience, and a review of models of learning in the basal ganglia, the thesis comprises three articles. The first shows that it is possible to learn to maximize rewards while developing better representations of the inputs. The second article addresses the important, still unsolved problem of the representation of time. It demonstrates that a representation of time can be acquired automatically in an artificial neural network acting as a working memory. The representation developed by the model closely resembles the activity of cortical neurons in similar tasks. Moreover, the model shows that using the reward prediction error signal can accelerate the construction of these temporal representations. It further shows that such a representation, acquired automatically in the cortex, can provide the basal ganglia with the information needed to explain the dopaminergic signal. Finally, the third article evaluates the explanatory and predictive power of the model in different situations, such as the presence or absence of a stimulus during the wait for reward (classical versus trace conditioning). In addition to making very interesting predictions related to the interval-timing literature, the article reveals certain shortcomings of the model that will need to be addressed. In short, this thesis extends current models of learning in the basal ganglia and the dopaminergic system to the concurrent development of temporal representations in the cortex and to the interactions between these two structures.
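The reward prediction error attributed here to dopaminergic neurons is usually formalized as the temporal-difference (TD) error of reinforcement learning; the standard textbook form is quoted below for orientation only and is not taken from the thesis itself.

```latex
\delta_t \;=\; r_{t+1} \;+\; \gamma\, V(s_{t+1}) \;-\; V(s_t)
```

Here $V$ is the learned value estimate of state $s_t$, $r_{t+1}$ the reward received on the transition, and $\gamma$ a discount factor.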
Abstract:
Previous chapters have presented the latest findings in neuroscience research and have pointed to potential treatment and prevention strategies. However, there are many ethical implications of the research itself, as well as of the treatment and prevention strategies, that must be considered. The rapid pace of change in the field of neuroscience brings with it a host of new ethical issues that need to be addressed. This chapter considers the important ethical and human rights issues raised by neuroscience research on psychoactive substance dependence.
Abstract:
The brain is a complex system that, in the normal condition, has emergent properties like those associated with activity-dependent plasticity in learning and memory, and, in pathological situations, manifests abnormal long-term phenomena like the epilepsies. Data from our laboratory and from the literature were classified qualitatively as sources of complexity and emergent properties from the behavioral to the electrophysiological, cellular, molecular, and computational levels. We used models such as brainstem-dependent acute audiogenic seizures and forebrain-dependent kindled audiogenic seizures. Additionally, we used chemical or electrical experimental models of temporal lobe epilepsy that induce status epilepticus with behavioral, anatomical, and molecular sequelae such as spontaneous recurrent seizures and long-term plastic changes. Current computational neuroscience tools will help the interpretation, storage, and sharing of the exponentially growing body of information derived from those studies. These strategies are considered solutions for dealing with the complexity of brain pathologies such as the epilepsies. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
Mental health awareness has been rising worldwide, motivated by its social and economic costs. Despite the investment in neuroscience research in recent years, little is known about the underlying mechanisms in the brain that are correlated with psychiatric conditions. This project, through two feature articles suitable for publication in magazines, provides perspectives on mental health research. The first presents an example where psychiatry joins forces with neuroscience and computer science in an interdisciplinary effort to improve the lives of those affected by mental disorders. The second article gathers opinions which claim that mental health research priorities should be set by patients themselves, or even that people with lived experience of mental health issues should have an active role in that research. This project was planned and researched while I was an Erasmus student at Nottingham Trent University, in the United Kingdom.
Abstract:
Quantum indeterminism is frequently invoked as a solution to the problem of how a disembodied soul might interact with the brain (as Descartes proposed), and is sometimes invoked in theories of libertarian free will even when they do not involve dualistic assumptions. Taking as an example the Eccles-Beck model of interaction between self (or soul) and brain at the level of synaptic exocytosis, I here evaluate the plausibility of these approaches. I conclude that Heisenbergian uncertainty is too small to affect synaptic function, and that amplification by chaos or by other means does not provide a solution to this problem. Furthermore, even if Heisenbergian effects did modify brain functioning, the changes would be swamped by those due to thermal noise. Cells and neural circuits have powerful noise-resistance mechanisms that provide adequate protection against thermal noise and must therefore be more than sufficient to buffer against Heisenbergian effects. Other forms of quantum indeterminism must be considered, because these can be much greater than Heisenbergian uncertainty, but they have not so far been shown to play a role in the brain.
Abstract:
In his timely article, Cherniss offers his vision for the future of "Emotional Intelligence" (EI). However, his goal of clarifying the concept by distinguishing definitions from models and his support for "Emotional and Social Competence" (ESC) models will, in our opinion, not make the field advance. To be upfront, we agree that emotions are important for effective decision-making, leadership, performance and the like; however, at this time, EI and ESC have not yet demonstrated incremental validity over and above IQ and personality tests in meta-analyses (Harms & Credé, 2009; Van Rooy & Viswesvaran, 2004). If there is a future for EI, we see it in the ability model of Mayer, Salovey and associates (e.g., Mayer, Caruso, & Salovey, 2000), which detractors and supporters agree holds the most promise (Antonakis, Ashkanasy, & Dasborough, 2009; Zeidner, Roberts, & Matthews, 2008). With its use of quasi-objective scoring measures, the ability model grounds EI in existing frameworks of intelligence, thus differentiating itself from ESC models and their self-rated trait inventories. In fact, we do not see the value of ESC models: they overlap too much with current personality models to offer anything new for science and practice (Zeidner et al., 2008). In this commentary we raise three concerns with Cherniss's suggestions for ESC models: (1) there are important conceptual problems in both the definition of ESC and the distinction of ESC from EI; (2) Cherniss's interpretation of neuroscience findings as supporting the constructs of EI and ESC is outdated; and (3) his interpretation of the famous marshmallow experiment as indicating the existence of ESCs is flawed. Building on the promise of ability models, we conclude by providing suggestions to improve research in EI.
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance or spectral properties). Otherwise, interpretation of the results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here, we present some new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can, however, be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level aims at equalizing individual stimuli in terms of their mean luminance: each data point in the stimulus is adjusted so that the stimulus matches a standard value defined across the stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency), using a dissimilarity metric that equals the root mean square of the distance between the two populations of objects as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are then used to minimize, in a completely data-driven manner, the spectral differences between the image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
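A minimal Python/NumPy sketch of the two levels described above is given below. The function names, the exact form of the dissimilarity metric, and the permutation scheme are illustrative assumptions, not the authors' published implementation; stimuli are assumed to be same-sized grayscale images stored as 2-D arrays.

```python
import numpy as np


def equalize_luminance(images, standard=None):
    """Level 1: shift each image so its mean luminance equals a standard value
    defined across the whole stimulus battery."""
    if standard is None:
        standard = np.mean([img.mean() for img in images])  # battery-wide standard
    return [img - img.mean() + standard for img in images]


def spectral_profile(img):
    """Amplitude spectrum of an image, averaged along the x- and y-frequency axes."""
    amp = np.abs(np.fft.fft2(img))
    return amp.mean(axis=0), amp.mean(axis=1)


def mean_profiles(population):
    """Mean spectral profiles (x and y) over a population of images."""
    profiles = [spectral_profile(img) for img in population]
    return (np.mean([p[0] for p in profiles], axis=0),
            np.mean([p[1] for p in profiles], axis=0))


def dissimilarity(pop_a, pop_b):
    """Root-mean-square distance between the spectral profiles of two
    populations, along the x- and y-dimensions of the images."""
    ax, ay = mean_profiles(pop_a)
    bx, by = mean_profiles(pop_b)
    d_x = np.sqrt(np.mean((ax - bx) ** 2))
    d_y = np.sqrt(np.mean((ay - by) ** 2))
    return 0.5 * (d_x + d_y)


def minimize_spectral_difference(stimuli, n_a, n_iter=10000, seed=0):
    """Level 2: randomized permutations of the stimuli into two sets, keeping
    the split with the smallest spectral dissimilarity (fully data-driven)."""
    rng = np.random.default_rng(seed)
    best_split, best_d = None, np.inf
    for _ in range(n_iter):
        idx = rng.permutation(len(stimuli))
        set_a = [stimuli[i] for i in idx[:n_a]]
        set_b = [stimuli[i] for i in idx[n_a:]]
        d = dissimilarity(set_a, set_b)
        if d < best_d:
            best_split, best_d = (idx[:n_a], idx[n_a:]), d
    return best_split, best_d
```

In this sketch the luminance-equalized battery would be passed to minimize_spectral_difference, which returns the index split with the lowest residual spectral difference after the requested number of random permutations.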
Abstract:
Minimal models for the explanation of decision-making in computational neuroscience are based on the analysis of the evolution of the average firing rates of two interacting neuron populations. While these models typically lead to a multi-stable scenario for the basic derived dynamical systems, noise is an important feature of the model, taking into account finite-size effects and the robustness of the decisions. These stochastic dynamical systems can be analyzed by carefully studying their associated Fokker-Planck partial differential equation. In particular, we discuss the existence, positivity and uniqueness of the solution of the stationary equation, as well as of the time-evolving problem. Moreover, we prove convergence of the solution to the stationary state, which represents the probability distribution of finding the neuron families in each of the decision states characterized by their average firing rates. Finally, we propose a numerical scheme allowing for simulations of the Fokker-Planck equation that are in agreement with those obtained recently by a moment method applied to the stochastic differential system. Our approach leads to a more detailed analytical and numerical study of this decision-making model in computational neuroscience.
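For orientation, the Fokker-Planck equation associated with such a two-population firing-rate model has the generic drift-diffusion form sketched below, where $\nu_1, \nu_2$ denote the average firing rates, $F$ the deterministic drift and $\sigma$ the noise strength; the specific drift field and boundary conditions analyzed in the article are not reproduced here.

```latex
\partial_t\, p(\nu, t)
  \;=\; -\,\nabla_{\nu} \cdot \big( F(\nu)\, p(\nu, t) \big)
  \;+\; \frac{\sigma^{2}}{2}\, \Delta_{\nu}\, p(\nu, t),
  \qquad \nu = (\nu_1, \nu_2)
```

The stationary distribution discussed in the abstract is the solution obtained by setting $\partial_t p = 0$ in this equation.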