941 results for Temporal-difference learning


Relevance:

100.00%

Publisher:

Abstract:

There has been recent interest in using temporal difference learning methods to attack problems of prediction and control. While these algorithms have been brought to bear on many problems, they remain poorly understood. The purpose of this thesis is to explore these algorithms further, presenting a framework for viewing them, raising a number of practical issues, and examining those issues in the context of several case studies. This includes applying the TD(lambda) algorithm to: 1) learning to play tic-tac-toe from the outcome of self-play and of play against a perfectly-playing opponent, and 2) learning simple one-dimensional segmentation tasks.
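
As a rough illustration of the update at the heart of these case studies, the following is a minimal tabular TD(lambda) sketch; the step size, trace decay, and state encoding are illustrative assumptions, not the thesis's actual settings.

```python
# Minimal tabular TD(lambda) with accumulating eligibility traces.
# Step size, discount, trace decay, and state keys are illustrative assumptions.
from collections import defaultdict

alpha, gamma, lam = 0.1, 1.0, 0.8   # assumed hyperparameters
V = defaultdict(float)              # state values, e.g. keyed by a board string
e = defaultdict(float)              # eligibility traces

def td_lambda_step(s, r, s_next, terminal):
    """Update V after observing the transition s -> s_next with reward r."""
    target = r if terminal else r + gamma * V[s_next]
    delta = target - V[s]                    # TD error
    e[s] += 1.0                              # accumulate trace for current state
    for state in list(e):
        V[state] += alpha * delta * e[state]
        e[state] *= gamma * lam              # decay every trace
```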

Relevance:

100.00%

Publisher:

Abstract:

We introduce a diffusion-based algorithm in which multiple agents cooperate to predict a common, global state-value function by sharing local estimates and local gradient information among neighbors. Our algorithm is a fully distributed implementation of gradient temporal-difference learning with linear function approximation, making it applicable to multiagent settings. Simulations illustrate the benefit of cooperation in learning, as made possible by the proposed algorithm.
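
The combine-then-adapt structure of such diffusion strategies can be sketched as follows; the uniform combination matrix and the use of a plain TD(0) gradient in place of the paper's full gradient temporal-difference recursion are simplifying assumptions.

```python
# Combine-then-adapt sketch for distributed linear TD over a network of agents.
# The uniform combination matrix C and the plain TD(0) gradient (in place of
# the full gradient-TD recursion) are simplifying assumptions.
import numpy as np

n_agents, d = 4, 8
alpha, gamma = 0.05, 0.9
W = np.zeros((n_agents, d))                        # row k: agent k's weights
C = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic weights

def diffusion_td_step(W, phis, rewards, phis_next):
    """One cooperative step: each agent adapts locally, then averages neighbors."""
    psi = np.empty_like(W)
    for k in range(n_agents):
        delta = rewards[k] + gamma * W[k] @ phis_next[k] - W[k] @ phis[k]
        psi[k] = W[k] + alpha * delta * phis[k]    # adapt: local TD(0) step
    return C @ psi                                 # combine: neighbor averaging
```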

Relevance:

100.00%

Publisher:

Abstract:

In probabilistic decision tasks, an expected value (EV) of a choice is calculated, and after the choice has been made, this can be updated based on a temporal difference (TD) prediction error between the EV and the reward magnitude (RM) obtained. The EV is computed as the probability of obtaining a reward multiplied by the RM. To understand the contribution of different brain areas to these decision-making processes, functional magnetic resonance imaging activations related to EV versus RM (or outcome) were measured in a probabilistic decision task. Activations in the medial orbitofrontal cortex were correlated with both RM and EV, and were confirmed in a conjunction analysis to extend toward the pregenual cingulate cortex. From these representations, TD reward prediction errors could be produced. Activations in areas that receive projections from the orbitofrontal cortex, including the ventral striatum, midbrain, and inferior frontal gyrus, were correlated with the TD error. Activations in the anterior insula were correlated negatively with EV, occurring when low reward outcomes were expected, and also with the uncertainty of the reward, implicating this region in basic and crucial decision-making parameters: low expected outcomes and uncertainty.
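
For concreteness, the two quantities and the prediction error relating them can be illustrated numerically (the numbers below are invented for illustration):

```python
# Invented numbers: a 70% chance of a 4-unit reward.
p, magnitude = 0.7, 4.0
EV = p * magnitude        # expected value = 2.8
obtained = 4.0            # reward actually delivered on this trial
delta = obtained - EV     # TD prediction error = +1.2 (better than expected)
```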

Relevance:

100.00%

Publisher:

Abstract:

Throughout life, the brain develops representations of its environment that allow the individual to make the most of it. How these representations develop during the pursuit of rewards remains a mystery. It is reasonable to think that the cortex is the seat of these representations and that the basal ganglia play an important role in reward maximization. In particular, dopaminergic neurons appear to encode a reward prediction error signal. This thesis studies the problem by using machine learning to build a computational model that integrates a large body of neurological evidence. After an introduction to the mathematical framework and some machine learning algorithms, an overview of learning in psychology and neuroscience, and a review of models of learning in the basal ganglia, the thesis comprises three articles. The first shows that it is possible to learn to maximize rewards while developing better representations of the inputs. The second article addresses the important, still unsolved problem of the representation of time. It demonstrates that a representation of time can be acquired automatically in an artificial neural network acting as working memory. The representation developed by the model closely resembles the activity of cortical neurons in similar tasks. Moreover, the model shows that using the reward prediction error signal can speed up the construction of these temporal representations. Finally, it shows that such a representation, acquired automatically in the cortex, can provide the basal ganglia with the information needed to explain the dopaminergic signal. The third article evaluates the explanatory and predictive power of the model in various situations, such as the presence or absence of a stimulus (classical or trace conditioning) while waiting for the reward. Besides making very interesting predictions in connection with the literature on time intervals, the article reveals certain shortcomings of the model that will need to be addressed. In short, this thesis extends current models of learning in the basal ganglia and the dopaminergic system to the concurrent development of temporal representations in the cortex and to the interactions between these two structures.
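
For context, here is a sketch of the standard tapped-delay-line ("complete serial compound") TD model of the dopamine signal, the fixed-representation baseline that work like this extends with learned temporal representations; all parameters are illustrative assumptions.

```python
# Tapped-delay-line ("complete serial compound") TD model of the dopamine
# signal; trial length, reward time, and learning parameters are assumptions.
import numpy as np

T, gamma, alpha = 20, 0.98, 0.1
w = np.zeros(T)                    # one weight per delay since stimulus onset

def run_trials(reward_time=15, n_trials=200):
    for _ in range(n_trials):
        for t in range(T - 1):
            x_t, x_next = np.eye(T)[t], np.eye(T)[t + 1]
            r = 1.0 if t + 1 == reward_time else 0.0
            delta = r + gamma * w @ x_next - w @ x_t   # dopamine-like TD error
            w[:] += alpha * delta * x_t
    return w                       # after training, value ramps up to the reward
```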

Relevance:

100.00%

Publisher:

Abstract:

Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior which is consistent with behavioral data from humans and monkeys, itself revealing properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
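
The overall structure of such a rule, an eligibility trace whose conversion into a weight change is gated jointly by reward and a population feedback signal, might be sketched as follows; the trace dynamics and constants are illustrative assumptions, not the paper's exact rule.

```python
# One synapse, one time step: a spike-timing eligibility trace whose conversion
# into a weight change is gated by reward and a population feedback signal.
# Trace dynamics and constants are assumptions, not the paper's exact rule.
tau_e, lr = 50.0, 0.01            # trace time constant (ms), learning rate

def synapse_step(w, e, pre_spike, post_spike, dt, reward, pop_feedback):
    e += (dt / tau_e) * (-e)                 # trace decays...
    if pre_spike and post_spike:
        e += 1.0                             # ...and marks pre/post coincidences
    w += lr * reward * pop_feedback * e      # spatial credit via population signal
    return w, e
```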

Relevance:

100.00%

Publisher:

Abstract:

Learning by reinforcement is important in shaping animal behavior. But behavioral decision making is likely to involve the integration of many synaptic events in space and time. So when a single reinforcement signal is used to modulate synaptic plasticity, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but by a population feedback signal as well. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second one involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It has only a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.

Relevance:

100.00%

Publisher:

Abstract:

We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In the cascade of these so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions, and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness, not of the immediately preceding response, but of a decision made earlier in the stimulus-decision sequence. The proposed model therefore does not rely on temporal contiguity between decision and pertinent reward and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different population neurons. Simulations on other tasks, such as sequential decision making, serve to highlight the robustness of the proposed scheme and, further, contrast its performance to that of temporal-difference-based approaches to reinforcement learning.

Relevance:

100.00%

Publisher:

Abstract:

In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions, and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme with that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
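
The cascade structure described here can be sketched as a chain of low-pass filters; the time constants and gating below are illustrative assumptions rather than the paper's exact dynamics.

```python
# Three-stage trace cascade: each stage low-pass filters the previous one,
# correlating synaptic coincidences first with the behavioral decision and
# finally with reinforcement. Time constants are illustrative assumptions.
def cascade_step(c1, c2, c3, coincidence, decision, reward, dt,
                 tau1=20.0, tau2=200.0, tau3=2000.0):
    c1 += dt * (-c1 / tau1 + coincidence)    # stage 1: pre/post coincidences
    c2 += dt * (-c2 / tau2 + c1 * decision)  # stage 2: gated by the decision
    c3 += dt * (-c3 / tau3 + c2)             # stage 3: slow trace awaiting reward
    dw = reward * c3                         # weight change when reward arrives
    return c1, c2, c3, dw
```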

Relevance:

100.00%

Publisher:

Abstract:

Shared attention is a very important type of communication among human beings. The term is sometimes reserved for the more complex form of communication constituted by a sequence of four steps: mutual gaze, gaze following, imperative pointing, and declarative pointing. Some approaches have been proposed in the Human-Robot Interaction area to solve part of the shared attention process; that is, most proposed works try to solve the first two steps. Models based on temporal difference, neural networks, probabilistic methods, and reinforcement learning are used in several works. In this article, we present a robotic architecture that gives a robot or agent the capacity to learn mutual gaze, gaze following, and declarative pointing using a robotic head interacting with a caregiver. Three learning methods have been incorporated into this architecture, and their performance has been compared to find the most adequate for use in real experiments. The learning capabilities of this architecture have been analyzed by observing the robot interacting with a human in a controlled environment. The experimental results show that the robotic head is able to produce appropriate behavior and to learn from social interaction.

Relevance:

100.00%

Publisher:

Abstract:

Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change in time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
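
A minimal sketch of policy-gradient learning of a mixed strategy in a 2x2 game follows, with a simple Bernoulli policy standing in for the paper's spiking population model; the payoffs, opponent strategy, and learning rate are invented for illustration.

```python
# REINFORCE on a Bernoulli policy in a 2x2 matrix game, standing in for the
# paper's spiking population model. Payoffs, opponent strategy, and learning
# rate are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
theta, lr = 0.0, 0.05                 # policy logit and learning rate

def play_round(payoff, opponent_p):
    """payoff[a, b]: own reward when playing a against the opponent's b."""
    global theta
    p = 1.0 / (1.0 + np.exp(-theta))  # P(own action = 1)
    a = int(rng.random() < p)
    b = int(rng.random() < opponent_p)
    r = payoff[a, b]
    grad_logp = (1.0 - p) if a else -p    # d log pi(a) / d theta
    theta += lr * r * grad_logp           # stochastic reward gradient ascent
    return r
```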

Relevance:

90.00%

Publisher:

Abstract:

Accurate perception of the temporal order of sensory events is a prerequisite for numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information, i.e. a temporal 'stamp', is extracted within the early stages of cortical processing within left PSR but is critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms.

Relevance:

90.00%

Publisher:

Abstract:

The spatio-temporal variations in diversity and abundance of deep-sea macrofaunal assemblages (excluding meiofaunal taxa such as Nematoda, Copepoda and Ostracoda) from the Blanes Canyon (BC) and the adjacent open slope are described. The Catalan Sea basin is characterized by the presence of numerous submarine canyons, which are globally acknowledged as biodiversity hot-spots due to their disturbance regime and enhanced conveying of organic matter. This area is subject to local deep-sea fishing activities and to recurrent cold-water cascading events from the shelf. The upper canyon (~900 m), middle slope (~1200 m) and lower slope (~1500 m) habitats were investigated during three different months (October 2008, May 2009 and September 2009). A total of 624 specimens belonging to 16 different taxa were found in the 67 analyzed samples collected from the two study areas. Of these, Polychaeta, Mollusca and Crustacea were always the most abundant groups. As expected, the patterns of species diversity and evenness differed in time and space. Both in BC and on the open slope, taxon diversity and abundance were highest at the shallowest depth and lowest at 1500 m depth, probably owing to different trophic regimes at these depths. The abundance of filter-feeders was higher inside BC than on the adjacent open slope, accompanied by an increase in predatory polychaetes. Surface deposit-feeders were more abundant on the open slope than in BC, along with a decrease in filter-feeders and their predators. These differences are probably due to the larger quantities of suspended organic matter reaching the canyon. The multivariate analyses conducted on major taxa point out major differences in effective taxon richness between depths and stations. In September 2009 the analyzed communities doubled their abundances, with a corresponding increase in taxon richness. This could be related to a mobilizing event, such as the release of food supply accumulated in a nepheloid layer associated with the arrival of autumn. The highest abundance in BC was detected at the shallowest depth and in late summer (September), probably due to higher food availability caused by stronger flood events from the Tordera River; the effects of such events seemed to involve the adjacent open slope as well. The nMDS conducted on major taxa abundance shows a slight temporal difference between the samples from the three campaigns, with a clear clustering of the September 2009 samples. All depths and all months were dominated by Polychaeta, which were identified to family level and submitted to further analysis. Family richness has a clear minimum at the 1200 m depth of BC, highlighting the presence of a general impact affecting the populations on the middle slope. Three different matrices were created, each with a different taxonomic level (All Taxa "AT", Phylum Level "PL" and Polychaeta Families "PF"). Multivariate analyses (MDS, SIMPER) conducted on the PL matrix showed clear spatial differences between stations (BC and open slope) and depths. MDSs conducted on the other two matrices (AT and PF) showed similar patterns, but different from the PL analysis. A second-stage analysis was conducted to understand the differences between taxonomic levels, and the PL level was chosen as the most representative of the variation. The faunal differences observed were explained by depth, station and season.
All work was carried out at the Centre d'estudis avançats de Blanes (CEAB-CSIC), within the framework of the Spanish PROMETEO project "Estudio Integrado de Cañones y Taludes PROfundos del MEdiTErráneo Occidental: un hábitat esencial", Ref. CTM2007-66316-C02-01/MAR.

Relevance:

90.00%

Publisher:

Abstract:

This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here refers to a class of learning methods based on approximate dynamic programming, which finds particular application in artificial intelligence and can be used for the autonomous control of simulated agents or real hardware robots in dynamic and unpredictable environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation methods. The goal of this thesis is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of nonparametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data, so the explicit choice of nodes/basis functions is no longer necessary and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are also linear approximators, which are technically easy to handle and for which the existing convergence guarantees of reinforcement learning remain valid (unlike, say, feed-forward neural networks). All these theoretical advantages, however, come with a very practical problem: the computational cost of regularization networks inherently scales as O(n**3), where n is the number of data points. This is especially problematic because in reinforcement learning the learning process happens online: the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this thesis accordingly falls into two parts. In the first part, we formulate an efficient learning algorithm for regularization networks that solves general regression tasks and is specifically tailored to the requirements of online learning. Our approach is based on the procedure of recursive least squares but can, in constant time, insert not only new data points but also new basis functions into the existing model. This is made possible by the "subset of regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at runtime. In the second part, we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into an overall system for the autonomous learning of optimal behavior. Altogether, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions.
In doing so, we do not rely on a model of the environment, operate largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup keepaway problem and the control of a (simulated) octopus tentacle.
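
A minimal recursive least-squares TD (LSTD) sketch of the kind this approach builds on appears below; the Sherman-Morrison update is standard, while the thesis's own contribution (growing the basis online via the subset-of-regressors approximation) is only hinted at. The feature dimension, regularizer, and discount are assumptions.

```python
# Recursive least-squares TD (LSTD) via Sherman-Morrison, the O(d^2) building
# block such systems rest on; the online basis growth (subset of regressors)
# is omitted. Feature dimension, regularizer, and discount are assumptions.
import numpy as np

d, reg, gamma = 16, 1.0, 0.95
Ainv = np.eye(d) / reg    # running inverse of the regularized LSTD matrix
b = np.zeros(d)

def lstd_step(phi, r, phi_next):
    """Incorporate one transition (phi, r, phi_next) in O(d^2)."""
    global Ainv, b
    u = phi - gamma * phi_next       # A grows by the rank-one term phi u^T
    u_row = Ainv.T @ u               # u^T Ainv, as a vector
    Ainv -= np.outer(Ainv @ phi, u_row) / (1.0 + u @ (Ainv @ phi))
    b += r * phi
    return Ainv @ b                  # current value-function weights
```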

Relevance:

80.00%

Publisher:

Abstract:

An apparently paradoxical role for IFN-g in human Chagas' disease was observed when studying the pattern of cytokine production by peripheral blood mononuclear cells (PBMC) obtained from two groups of chagasic patients after specific stimulation with Trypanosoma cruzi-derived antigens. The groups studied were 1) patients treated with benznidazole during the acute phase of Trypanosoma cruzi infection and 2) chronically infected untreated patients. In the treated group, higher levels of IFN-g were produced by PBMC from individuals cured after treatment than from non-cured patients. In contrast, in the chronically infected (untreated) group, higher levels of IFN-g were produced by PBMC from cardiac patients than from asymptomatic (indeterminate) patients. This apparently paradoxical role for IFN-g in human Chagas' disease is discussed in terms of a possible temporal difference in IFN-g production during the initial stages of infection (acute phase) in the presence or absence of chemotherapy. The maintenance of an immune response with high levels of IFN-g production during the chronic phase of the infection may favor cure or influence the development of the cardiac form of the disease.

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. Tracked finger trajectories from the videos were post-processed and analysed using various filtering and smoothing methods. Position derivatives of the trajectories, speed and acceleration, were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy on the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves from the tracking data. Local Regression filtering and the Unscented Kalman Smoother gave the best results in the tests. Furthermore, the results show that the tracking and filtering methods are suitable for high-speed hand tracking and trajectory-data post-processing.
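
A hedged sketch of the trajectory post-processing step, smoothing followed by numerical differentiation, is given below; Savitzky-Golay filtering stands in for the filters compared in the thesis, and the frame rate and filter parameters are invented for illustration.

```python
# Smoothing a tracked trajectory and differentiating it numerically.
# Savitzky-Golay stands in for the filters compared in the thesis; the frame
# rate and filter parameters are invented for illustration.
import numpy as np
from scipy.signal import savgol_filter

fps = 500.0                          # assumed high-speed camera rate
dt = 1.0 / fps

def speed_and_acceleration(xy):
    """xy: (N, 2) array of tracked finger positions, one row per frame."""
    smooth = savgol_filter(xy, window_length=21, polyorder=3, axis=0)
    vel = np.gradient(smooth, dt, axis=0)    # first derivative: velocity
    acc = np.gradient(vel, dt, axis=0)       # second derivative: acceleration
    return np.linalg.norm(vel, axis=1), np.linalg.norm(acc, axis=1)
```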