7 results for Neuroeconomics
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Neuroeconomics is a rapidly growing research discipline that aims to describe the neural substrate of decision-making using the incentivized decision tasks introduced in experimental economics. This novel combination of economic decision theory and neuroscience makes it possible to examine more closely how social, psychological and neural factors interact with the motivational forces that may underlie psychiatric problems. Game theory can provide psychiatry with computationally principled measures of cognitive dysfunction; given the relatively high heritability of these measures, they may contribute to improving phenotypic definitions of psychiatric conditions. The game-theoretical concept of optimal behavior also allows psychopathology to be described as deviation from optimal functioning. Neuroeconomists have successfully used normative or near-normative models to interpret the function of neurotransmitters; these models have the potential to significantly improve neurotransmitter theories of psychiatric disorders. This paper reviews recent evidence from neuroeconomics and psychiatry in support of applying economic concepts such as risk/uncertainty preference, time preference and social preference to psychiatric research in order to improve diagnostic classification, prevention and therapy.
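To make one of these concepts concrete, time preference is typically measured with delay-discounting tasks and summarized by a single discount parameter. The short sketch below assumes the standard hyperbolic form V = A / (1 + kD); the function name and the numbers are illustrative and are not taken from the paper.

    # Illustrative sketch of hyperbolic delay discounting, a standard way to
    # quantify time preference; the parameter values below are made up for the example.
    def discounted_value(amount, delay_days, k):
        """Subjective present value of a delayed reward: V = A / (1 + k * D)."""
        return amount / (1.0 + k * delay_days)

    # A steeper discounter (larger k) devalues the same delayed reward more.
    for k in (0.01, 0.1):
        print(k, round(discounted_value(100.0, 30.0, k), 1))
    # k = 0.01 -> 76.9 ; k = 0.1 -> 25.0

An individual's fitted k could then serve as one of the quantitative, heritable phenotypes the abstract argues for.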
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision-making. Such decision-making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-timing dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms: neurotransmitter concentrations determine plasticity, and learning occurs fully online. Furthermore, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, with reward delivered only after the last action, as is the case in any kind of board game. The third task is the inspection game studied in neuroeconomics, in which an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior that is consistent with behavioral data from humans and monkeys and that itself exhibits the properties of a mixed Nash equilibrium. These examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, as well as with the learning of mixed strategies in two-opponent games.
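As a rough illustration of how the three ingredients named in this abstract (a global reward signal, a population feedback signal for the spatial problem, and synaptic eligibility traces for the temporal problem) can be combined, here is a minimal Python sketch. It is not the authors' rule: the escape-noise firing model, the mean-activity population signal, the trace time constant and the learning rate are all simplifying assumptions made for the example.

    import numpy as np

    # Minimal sketch of reward-modulated plasticity with eligibility traces and a
    # population feedback signal; all names and parameters are illustrative.
    rng = np.random.default_rng(0)
    n_pre, n_post = 50, 10                          # input channels, population size
    w = 0.1 * rng.standard_normal((n_post, n_pre))  # synaptic weights
    elig = np.zeros_like(w)                         # per-synapse eligibility traces
    tau_e, lr = 200.0, 0.01                         # trace time constant (ms), learning rate

    def step(x_pre, reward=None, dt=1.0):
        """One time step: stochastic spiking, trace decay/update, reward-gated plasticity."""
        global w, elig
        p = 1.0 / (1.0 + np.exp(-(w @ x_pre)))      # escape-noise spiking probability
        y = (rng.random(n_post) < p).astype(float)  # binary spikes of the population
        # Temporal credit assignment: decaying traces of recent pre/post correlations,
        # accumulated before the (possibly delayed) reward arrives.
        elig = elig * np.exp(-dt / tau_e) + np.outer(y - p, x_pre)
        if reward is not None:
            # Spatial credit assignment: each neuron's update is additionally modulated
            # by a population feedback signal, sketched here as the neuron's activity
            # relative to the population mean; reward converts traces into weight changes.
            pop_feedback = y - y.mean()
            w += lr * reward * pop_feedback[:, None] * elig
        return y

    # Example call: one input pattern per millisecond, reward delivered at this step.
    spikes = step(rng.random(n_pre), reward=1.0)

Because the weight change is driven by traces of past activity rather than by the current state alone, a sketch of this shape can in principle learn even when the reward arrives only after unrelated stimuli and actions, which is the non-Markovian setting the abstract describes.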
Abstract:
Learning by reinforcement is important in shaping animal behavior, but behavioral decision-making is likely to involve the integration of many synaptic events in space and time. When a single reinforcement signal is used to modulate synaptic plasticity, a twofold problem therefore arises: different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-timing dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms: neurotransmitter concentrations determine plasticity, and learning occurs fully online. Furthermore, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, with reward delivered only after the last action, as is the case in any kind of board game. The third is the inspection game that has been studied in neuroeconomics. It has only a mixed Nash equilibrium and demonstrates that the model also copes with stochastic reward delivery and with the learning of mixed strategies.
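Since this abstract turns on the fact that the inspection game has only a mixed Nash equilibrium, a small worked example may help. The payoff numbers below are invented for illustration and are not taken from the cited studies; the point is that no pure-strategy equilibrium exists, so each player must randomize at exactly the rate that leaves the opponent indifferent.

    import numpy as np

    # 2x2 inspection game with illustrative payoffs (not the paper's values).
    # Rows: worker (0 = work, 1 = shirk); columns: inspector (0 = inspect, 1 = don't).
    W = np.array([[2.0, 2.0],     # working earns the wage minus effort, inspected or not
                  [0.0, 4.0]])    # shirking: caught and unpaid vs. paid without effort
    I = np.array([[1.0, 3.0],     # worker works: inspection cost wasted vs. full surplus
                  [2.0, -2.0]])   # worker shirks: shirker caught vs. wage paid for nothing

    # No pure Nash equilibrium exists with these payoffs; in the mixed equilibrium
    # each player randomizes so that the *opponent* is indifferent between actions.
    q = (W[1, 1] - W[0, 1]) / (W[0, 0] - W[1, 0] + W[1, 1] - W[0, 1])   # P(inspect)
    p = (I[1, 1] - I[1, 0]) / (I[0, 0] - I[0, 1] + I[1, 1] - I[1, 0])   # P(work)
    print(f"P(work) = {p:.2f}, P(inspect) = {q:.2f}")   # 0.67 and 0.50 for these payoffs

A learner that comes to play near these probabilities without ever being told the opponent's payoffs is exhibiting exactly the mixed-strategy behavior referred to above.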
Abstract:
Objective: Impaired social interaction and repetitive behavior are key features of autism spectrum disorder (ASD). In the present study we compared social decision-making in subjects with and without ASD. Subjects performed five social decision-making games designed to assess trust, fairness, cooperation and competition behavior, and social value orientation. Methods: 19 adults with autism spectrum disorder and 17 controls, matched for age and education, participated in the study. Each subject performed five social decision-making tasks. In the trust game, subjects could maximize their gain by sharing some of their money with another person. In the punishment game, subjects played two versions of the Dictator's Dilemma: in the dictator condition they could share an amount of 0-100 points with another person, while in the punishment condition the opponent was able to punish the subject if he/she was not satisfied with the amount of points received. In the cooperation game, subjects played with a small group of three people, each of whom could anonymously select an amount of 5, 7.5 or 10 Swiss francs; the goal of the game was to achieve a high group minimum. In the competition game, subjects performed a dexterity task. Before performing the task, they were asked whether they wanted to compete (winner takes all) or cooperate (sharing the jointly achieved amount of points) with a randomly selected person. Lastly, subjects performed a social value orientation task in which they played both for themselves and for another person. Results: There was no overall difference between healthy controls and ASD subjects in investment in the trust game. However, healthy controls increased their investment across trials, whereas ASD subjects did not. A similar pattern was found for the punishment game. Furthermore, ASD subjects showed a decreased investment in the dictator condition of the punishment game. There were no mean differences in competition behavior or social value orientation. Conclusions: The results provide evidence for differences between ASD subjects and healthy controls in social decision-making. Subjects with ASD showed more consistent behavior than healthy controls in the trust game and the Dictator's Dilemma. The present findings provide evidence for impaired social learning in ASD.
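For readers unfamiliar with the trust game mentioned above, the following sketch shows its incentive structure. The endowment of 10 units and the multiplier of 3 are common laboratory defaults assumed here for illustration, not necessarily the parameters used in this study.

    # Illustrative one-shot trust game payoffs (endowment and multiplier assumed).
    def trust_game_payoffs(endowment, sent, multiplier, returned):
        """Return (investor_payoff, trustee_payoff) for a single round."""
        assert 0 <= sent <= endowment and 0 <= returned <= sent * multiplier
        investor = endowment - sent + returned    # keeps the rest, receives what is returned
        trustee = sent * multiplier - returned    # receives the multiplied transfer
        return investor, trustee

    # Investing pays off only if enough is returned: sending everything and getting
    # back half of the multiplied amount leaves both players better off than sending nothing.
    print(trust_game_payoffs(10, 10, 3, 15))   # -> (15, 15), versus (10, 0) if nothing is sent

Increasing one's investment over repeated rounds, as the healthy controls did, is the behavioral signature of learning that the trustee can be relied on to return part of the multiplied amount.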