8 results for Dopaminergic agonists
in Cambridge University Engineering Department Publications Database
Abstract:
Midbrain dopaminergic neurons are endowed with endogenous slow pacemaking properties. In recent years, many different groups have studied the basis for this phenomenon, often with conflicting conclusions. In particular, the role of a slowly-inactivating L-type calcium channel in the depolarizing phase between spikes is controversial, and the analysis of slow oscillatory potential (SOP) recordings during the blockade of sodium channels has led to conflicting conclusions. Based on a minimal model of a dopaminergic neuron, our analysis suggests that the same experimental protocol may lead to drastically different observations in almost identical neurons. For example, complete L-type calcium channel blockade eliminates spontaneous firing in one neuron yet has almost no effect in another differing by less than 1% in its maximal sodium conductance. The same prediction can be reproduced in a state-of-the-art detailed model of a dopaminergic neuron. Some of these predictions are confirmed experimentally using single-cell recordings in brain slices. When sodium channels are blocked, our minimal model exhibits SOPs that are uncorrelated with the spiking activity, as has been shown experimentally. We also show that block of a specific conductance (in this case, the SK conductance) can have a different effect on these two oscillatory behaviors (pacemaking and SOPs), despite the fact that they share the same initiating mechanism. These results highlight the fact that computational approaches, besides their well-known confirmatory and predictive value in neurophysiology, may also be useful for resolving apparent discrepancies between experimental results. © 2011 Drion et al.
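The knife-edge sensitivity described in this abstract, where a sub-1% difference in a maximal conductance decides whether blockade silences a cell, can be illustrated with a toy leaky integrate-and-fire sketch. This is a hypothetical illustration, not the authors' conductance-based model; `drive`, `leak`, and `threshold` are assumed illustrative parameters:

```python
def fires(drive, leak=1.0, threshold=1.0, dt=0.01, t_max=100.0):
    """Return True if a toy leaky integrate-and-fire cell reaches
    threshold within t_max. The voltage relaxes toward drive/leak,
    so whether the cell ever fires hinges on drive/leak >= threshold."""
    v = 0.0
    t = 0.0
    while t < t_max:
        v += dt * (drive - leak * v)  # forward-Euler membrane update
        if v >= threshold:
            return True
        t += dt
    return False

# Two "neurons" whose drive differs by 1% straddle the firing boundary:
print(fires(1.005))  # crosses threshold and spikes
print(fires(0.995))  # saturates below threshold, stays silent
```

Because the steady-state voltage sits just above threshold in one case and just below it in the other, an identical perturbation produces qualitatively different outcomes, mirroring the bifurcation-type sensitivity the minimal model predicts.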
Abstract:
Midbrain dopaminergic neurons in the substantia nigra, pars compacta and ventral tegmental area are critically important in many physiological functions. These neurons exhibit firing patterns that include tonic slow pacemaking, irregular firing and bursting, and the amount of dopamine that is present in the synaptic cleft is much increased during bursting. The mechanisms responsible for the switch between these spiking patterns remain unclear. Using both in-vivo recordings combined with microiontophoretic or intraperitoneal drug applications and in-vitro experiments, we have found that M-type channels, which are present in midbrain dopaminergic cells, modulate the firing during bursting without affecting the background low-frequency pacemaker firing. Thus, a selective blocker of these channels, 10,10-bis(4-pyridinylmethyl)-9(10H)-anthracenone dihydrochloride, specifically potentiated burst firing. Computer modeling of the dopamine neuron confirmed the possibility of a differential influence of M-type channels on excitability during various firing patterns. Therefore, these channels may provide a novel target for the treatment of dopamine-related diseases, including Parkinson's disease and drug addiction. Moreover, our results demonstrate that the influence of M-type channels on the excitability of these slow pacemaker neurons is conditional upon their firing pattern. © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Abstract:
Background: Bradykinesia is a cardinal feature of Parkinson's disease (PD). Despite its disabling impact, the precise cause of this symptom remains elusive. Recent thinking suggests that bradykinesia may be more than simply a manifestation of motor slowness, and may in part reflect a specific deficit in the operation of motivational vigour in the striatum. In this paper we test the hypothesis that movement time in PD can be modulated by the specific nature of the motivational salience of possible action-outcomes. Methodology/Principal Findings: We developed a novel movement time paradigm involving winnable rewards and avoidable painful electrical stimuli. The faster the subjects performed an action the more likely they were to win money (in appetitive blocks) or to avoid a painful shock (in aversive blocks). We compared PD patients when OFF dopaminergic medication with controls. Our key finding is that PD patients OFF dopaminergic medication move faster to avoid aversive outcomes (painful electric shocks) than to reap rewarding outcomes (winning money) and, unlike controls, do not speed up in the current trial having failed to win money in the previous one. We also demonstrate that sensitivity to distracting stimuli is valence specific. Conclusions/Significance: We suggest this pattern of results can be explained in terms of low dopamine levels in the Parkinsonian state leading to an insensitivity to appetitive outcomes, and thus an inability to modulate movement speed in the face of rewards. By comparison, sensitivity to aversive stimuli is relatively spared. Our findings point to a rarely described property of bradykinesia in PD, namely its selective regulation by everyday outcomes. © 2012 Shiner et al.
Abstract:
The role dopamine plays in decision-making has important theoretical, empirical and clinical implications. Here, we examined its precise contribution by exploiting the lesion deficit model afforded by Parkinson's disease. We studied patients in a two-stage reinforcement learning task, while they were ON and OFF dopamine replacement medication. Contrary to expectation, we found that dopaminergic drug state (ON or OFF) did not impact learning. Instead, the critical factor was drug state during the performance phase, with patients ON medication choosing correctly significantly more frequently than those OFF medication. This effect was independent of drug state during initial learning and appears to reflect a facilitation of generalization for learnt information. This inference is bolstered by our observation that neural activity in nucleus accumbens and ventromedial prefrontal cortex, measured during simultaneously acquired functional magnetic resonance imaging, represented learnt stimulus values during performance. This effect was expressed solely during the ON state with activity in these regions correlating with better performance. Our data indicate that dopamine modulation of nucleus accumbens and ventromedial prefrontal cortex exerts a specific effect on choice behaviour distinct from pure learning. The findings are in keeping with the substantial other evidence that certain aspects of learning are unaffected by dopamine lesions or depletion, and that dopamine plays a key role in performance that may be distinct from its role in learning. © 2012 The Author.
Abstract:
Reward processing is linked to specific neuromodulatory systems with a dopaminergic contribution to reward learning and motivational drive being well established. Neuromodulatory influences on hedonic responses to actual receipt of reward, or punishment, referred to as experienced utility are less well characterized, although a link to the endogenous opioid system is suggested. Here, in a combined functional magnetic resonance imaging-psychopharmacological investigation, we used naloxone to block central opioid function while subjects performed a gambling task associated with rewards and losses of different magnitudes, in which the mean expected value was always zero. A graded influence of naloxone on reward outcome was evident in an attenuation of pleasure ratings for larger reward outcomes, an effect mirrored in attenuation of brain activity to increasing reward magnitude in rostral anterior cingulate cortex. A more striking effect was seen for losses such that under naloxone all levels of negative outcome were rated as more unpleasant. This hedonic effect was associated with enhanced activity in anterior insula and caudal anterior cingulate cortex, areas implicated in aversive processing. Our data indicate that a central opioid system contributes to both reward and loss processing in humans and directly modulates the hedonic experience of outcomes.
Abstract:
Theories of instrumental learning are centred on understanding how success and failure are used to improve future decisions. These theories highlight a central role for reward prediction errors in updating the values associated with available actions. In animals, substantial evidence indicates that the neurotransmitter dopamine might have a key function in this type of learning, through its ability to modulate cortico-striatal synaptic efficacy. However, no direct evidence links dopamine, striatal activity and behavioural choice in humans. Here we show that, during instrumental learning, the magnitude of reward prediction error expressed in the striatum is modulated by the administration of drugs enhancing (3,4-dihydroxy-L-phenylalanine; L-DOPA) or reducing (haloperidol) dopaminergic function. Accordingly, subjects treated with L-DOPA have a greater propensity to choose the most rewarding action relative to subjects treated with haloperidol. Furthermore, incorporating the magnitude of the prediction errors into a standard action-value learning algorithm accurately reproduced subjects' behavioural choices under the different drug conditions. We conclude that dopamine-dependent modulation of striatal activity can account for how the human brain uses reward prediction errors to improve future decisions.
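The "standard action-value learning algorithm" this abstract refers to can be sketched as a Rescorla-Wagner/Q-learning update in which the reward prediction error is scaled before updating the stored value. This is a sketch under assumed names, not the study's fitted model: `dopamine_gain` is a hypothetical parameter standing in for the drug's enhancement (L-DOPA) or reduction (haloperidol) of prediction-error magnitude.

```python
def update_value(value, reward, alpha, dopamine_gain=1.0):
    """One action-value learning step: the reward prediction error
    (reward - value) is scaled by a hypothetical dopamine gain
    before being applied with learning rate alpha."""
    prediction_error = reward - value
    return value + alpha * dopamine_gain * prediction_error

def simulate(dopamine_gain, trials=10, alpha=0.3):
    """Learn the value of an always-rewarded action; a higher gain
    drives the estimate toward the true value (1.0) faster."""
    v = 0.0
    for _ in range(trials):
        v = update_value(v, 1.0, alpha, dopamine_gain)
    return v
```

With this scheme, an enhanced-gain learner accumulates value faster than a reduced-gain learner, which is one simple way a greater propensity to choose the most rewarding action could emerge.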
Abstract:
Decision making in an uncertain environment poses a conflict between the opposing demands of gathering and exploiting information. In a classic illustration of this 'exploration-exploitation' dilemma, a gambler choosing between multiple slot machines balances the desire to select what seems, on the basis of accumulated experience, the richest option, against the desire to choose a less familiar option that might turn out more advantageous (and thereby provide information for improving future decisions). Far from representing idle curiosity, such exploration is often critical for organisms to discover how best to harvest resources such as food and water. In appetitive choice, substantial experimental evidence, underpinned by computational reinforcement learning (RL) theory, indicates that a dopaminergic, striatal and medial prefrontal network mediates learning to exploit. In contrast, although exploration has been well studied from both theoretical and ethological perspectives, its neural substrates are much less clear. Here we show, in a gambling task, that human subjects' choices can be characterized by a computationally well-regarded strategy for addressing the explore/exploit dilemma. Furthermore, using this characterization to classify decisions as exploratory or exploitative, we employ functional magnetic resonance imaging to show that the frontopolar cortex and intraparietal sulcus are preferentially active during exploratory decisions. In contrast, regions of striatum and ventromedial prefrontal cortex exhibit activity characteristic of an involvement in value-based exploitative decision making. The results suggest a model of action selection under uncertainty that involves switching between exploratory and exploitative behavioural modes, and provide a computationally precise characterization of the contribution of key decision-related brain systems to each of these functions.
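The exploitation side of strategies for the explore/exploit dilemma is commonly modeled with softmax (Boltzmann) action selection, in which an inverse-temperature parameter controls how deterministically the highest-valued option is chosen. A minimal sketch, with an illustrative `beta` parameter rather than the paper's fitted model:

```python
import math

def softmax(values, beta):
    """Convert action values into choice probabilities. High beta
    (inverse temperature) concentrates probability on the best-valued
    option (exploitation); low beta spreads it out (exploration)."""
    weights = [math.exp(beta * v) for v in values]
    total = sum(weights)
    return [w / total for w in weights]
```

For example, `softmax([1.0, 0.0], 5.0)` puts nearly all probability on the first option, while `softmax([1.0, 0.0], 0.0)` chooses both options equally often, so occasional selections of apparently poorer slot machines fall out naturally at moderate beta.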