32 results for Computational learning theory

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Mainstreaming the LforS approach is a challenge due to diverging institutional priorities, customs, and expectations of classically trained staff. A workshop to test LforS theory and practice, and to explore how to mainstream it, took place in a concrete context in a rural district of Mozambique, focusing on agricultural, forest and water resources. The evaluation showed that the principles of interaction applied made it possible to link rational knowledge with practical experience through mutual learning and iterative self-reflection. The combination of learning techniques was considered useful; participants called for further opportunities to apply the LforS methodology, proposing next steps.

Relevance:

40.00%

Publisher:

Abstract:

This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents the application of graph theory to the low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, or a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure to be used for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method to visualize time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
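As a concrete, hedged illustration of the graph matching machinery surveyed in Part II, the short sketch below uses the networkx package (my own choice of tooling, not one prescribed by the book) to test two small graphs for isomorphism and to compute a graph edit distance as one possible graph distance measure.

```python
# A minimal sketch of graph matching as a distance computation.
# Assumes the networkx package; the book's own algorithms may differ.
import networkx as nx

# Two small graphs standing in for structural pattern descriptions.
g1 = nx.cycle_graph(4)   # a 4-node cycle
g2 = nx.path_graph(4)    # a 4-node chain

# Exact matching: isomorphism test.
print("isomorphic:", nx.is_isomorphic(g1, g2))

# Inexact matching: graph edit distance as a dissimilarity measure
# (cost of node/edge insertions, deletions and substitutions).
print("edit distance:", nx.graph_edit_distance(g1, g2))
```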

Relevance:

40.00%

Publisher:

Abstract:

There is growing interest in and knowledge about the interplay of learning and emotion. However, the different approaches and empirical studies are only loosely connected to each other. To keep this research field from fragmenting further, a shared basis of theory and research is needed. The presentation aims to give an overview of the state of the art, to develop a general framework for theory and research, and to outline crucial topics for future work. It focuses on the influence of emotions on learning. First, theories about the impact of emotions on learning are introduced. Second, the importance of these theories for school learning is discussed. Third, empirical evidence from school-based research on the role of emotions in learning is presented. Finally, further research demands are stressed.

Relevance:

40.00%

Publisher:

Abstract:

In 1999, all student teachers at the secondary I level at the University of Bern who had to undertake an internship were asked to participate in a study on learning processes during the practicum. A total of 150 students and their mentors took part, across three types of practicum: an introductory practicum (after the first half-year of studies), an intermediate practicum (after two years of studies) and a final practicum (after three years of studies). At the end of the practicum, student teachers and mentors completed questionnaires on preparing, teaching and post-processing lessons. All student teachers additionally rated their professional skills and aspects of personality (attitudes towards pupils, self-assuredness and well-being) before and after the practicum. Forty-six student teachers wrote daily semi-structured diaries about essential learning situations during their practicum. Results indicate that in each practicum students improved significantly in preparing, conducting and post-processing lessons. The mentors rated these changes as greater than the student teachers did. From the perspective of the student teachers, their general teaching skills also improved, and their attitudes toward pupils became more open. Furthermore, their self-esteem and subjective well-being increased during the practicum. Diary data confirmed that there are no differences between the practicum levels in terms of learning outcomes, but they give a first insight into different ways of learning during the internship.

Relevance:

30.00%

Publisher:

Abstract:

We investigate a recently proposed model for decision learning in a population of spiking neurons where synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model, binary population decision making based on spike/no-spike coding, a detailed computational analysis is given of how learning performance depends on population size and task complexity. Next, we extend the basic model to n-ary decision making and show that it can also be used in conjunction with other population codes, such as rate or even latency coding.
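A minimal numerical caricature of such a model is sketched below: Bernoulli units stand in for spiking neurons, the binary decision is a majority vote over spike/no-spike responses, and the weight update is modulated by the reward together with a per-neuron population-agreement signal. All names, parameters and the toy task are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs, n_trials, lr = 50, 10, 3000, 0.1
w = rng.normal(0.0, 0.1, (n_neurons, n_inputs))

stimuli = rng.random((2, n_inputs))          # two stimulus patterns (toy task)
correct = np.array([0, 1])                   # their target binary decisions
hits = []

for t in range(n_trials):
    k = int(rng.integers(0, 2))
    x = stimuli[k]
    p = 1.0 / (1.0 + np.exp(-w @ x))         # per-neuron spike probability
    s = (rng.random(n_neurons) < p).astype(float)   # spike / no-spike code
    decision = int(s.mean() > 0.5)           # binary population decision
    reward = 1.0 if decision == correct[k] else -1.0
    agree = s if decision == 1 else 1.0 - s  # population-agreement feedback
    # Reward- and population-modulated, REINFORCE-like weight update.
    w += lr * reward * ((s - p) * agree)[:, None] * x[None, :]
    hits.append(reward > 0)

print("accuracy over the last 500 trials:", np.mean(hits[-500:]))
```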

Relevance:

30.00%

Publisher:

Abstract:

Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, the rule works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, with reward only delivered at the last action, as is the case in board games. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior which is consistent with behavioral data from humans and monkeys and which itself reveals properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
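The eligibility-trace component of such a scheme can be written down compactly. The following sketch shows only that mechanism, under my own simplifying assumptions (probabilistic point neurons, a single episode, a placeholder population signal), not the full spiking model of the paper: a Hebbian coincidence term is accumulated into a decaying trace, and the weight change is applied only when the possibly delayed reward arrives.

```python
import numpy as np

def update_trace(e, pre, post, p_post, decay=0.9):
    """Decay the eligibility trace and add the current pre/post coincidence.

    pre    : presynaptic spikes (0/1 vector)
    post   : postsynaptic spikes (0/1 vector)
    p_post : postsynaptic firing probabilities (the neurons' expectations)
    """
    return decay * e + np.outer(post - p_post, pre)

def apply_reward(w, e, reward, pop_signal, lr=0.05):
    """Weight change modulated by the reward and a population feedback signal."""
    return w + lr * reward * pop_signal[:, None] * e

# Toy usage: 3 postsynaptic neurons, 4 presynaptic inputs, one episode.
rng = np.random.default_rng(1)
w = np.zeros((3, 4))
e = np.zeros_like(w)
for step in range(10):
    pre = rng.integers(0, 2, 4).astype(float)
    p_post = 1.0 / (1.0 + np.exp(-w @ pre))
    post = (rng.random(3) < p_post).astype(float)
    e = update_trace(e, pre, post, p_post)
reward = 1.0                                  # delayed reward after the episode
pop_signal = np.ones(3)                       # placeholder population feedback
w = apply_reward(w, e, reward, pop_signal)
```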

Relevance:

30.00%

Publisher:

Abstract:

The discovery of binary dendritic events such as local NMDA spikes in dendritic subbranches led to the suggestion that dendritic trees could be computationally equivalent to a 2-layer network of point neurons, with a single output unit represented by the soma and input units represented by the dendritic branches. Although this interpretation endows a neuron with high computational power, it is not functionally clear why nature would have preferred the dendritic solution with a single but complex neuron, as opposed to the network solution with many but simple units. We show that the dendritic solution has a distinct advantage over the network solution when considering different learning tasks. Its key property is that the dendritic branches receive immediate feedback from the somatic output spike, while in the corresponding network architecture the feedback would require additional backpropagating connections to the input units. Assuming a reinforcement learning scenario, we formally derive a learning rule for the synaptic contacts on the individual dendritic trees which depends on the presynaptic activity, the local NMDA spikes, the somatic action potential, and a delayed reinforcement signal. We test the model for two scenarios: the learning of binary classifications and of precise spike timings. We show that the immediate feedback represented by the backpropagating action potential supplies the individual dendritic branches with enough information to efficiently adapt their synapses and to speed up the learning process.
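One illustrative reading of the rule's structure (my own schematic, not the paper's exact derivation) is sketched below: the update of a synapse combines its presynaptic activity, the local NMDA spike of its branch, the somatic action potential fed back to all branches, and a delayed reward.

```python
import numpy as np

def dendritic_update(w, pre, nmda, soma_spike, p_soma, reward, lr=0.02):
    """Schematic plasticity rule for synapses on dendritic branches.

    w[b, j]    : weight of synapse j on branch b
    pre[b, j]  : presynaptic activity per synapse
    nmda[b]    : 1.0 if branch b produced a local NMDA spike, else 0.0
    soma_spike : 1.0 if the soma fired (backpropagating feedback), else 0.0
    p_soma     : somatic firing probability (expectation term)
    reward     : delayed reinforcement signal
    """
    return w + lr * reward * (soma_spike - p_soma) * nmda[:, None] * pre

# Toy usage: 5 branches with 8 synapses each.
rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.1, (5, 8))
pre = rng.integers(0, 2, (5, 8)).astype(float)
nmda = rng.integers(0, 2, 5).astype(float)
w = dendritic_update(w, pre, nmda, soma_spike=1.0, p_soma=0.6, reward=1.0)
```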


Relevance:

30.00%

Publisher:

Abstract:

In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme to that of temporal-difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
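The cascade structure can be caricatured as nested traces, as in the sketch below (names, time constants and the gating signals are illustrative assumptions, not the paper's equations): the first trace stores pre/post coincidences, the second correlates that trace with the behavioral decision, and the reward finally converts the second trace into a weight change.

```python
import numpy as np

class SynapseCascade:
    """Toy cascade of synaptic memory traces for delayed reinforcement."""

    def __init__(self, shape, tau1=0.9, tau2=0.98, lr=0.05):
        self.e1 = np.zeros(shape)   # pre/post coincidence trace
        self.e2 = np.zeros(shape)   # decision-gated trace
        self.tau1, self.tau2, self.lr = tau1, tau2, lr

    def on_activity(self, pre, post, p_post):
        self.e1 = self.tau1 * self.e1 + np.outer(post - p_post, pre)

    def on_decision(self, decision_signal):
        # decision_signal: per-neuron agreement with the behavioral decision
        self.e2 = self.tau2 * self.e2 + decision_signal[:, None] * self.e1

    def on_reward(self, w, reward):
        return w + self.lr * reward * self.e2

# Toy usage: 4 postsynaptic neurons, 6 presynaptic inputs.
rng = np.random.default_rng(3)
w = np.zeros((4, 6))
cascade = SynapseCascade(w.shape)
pre = rng.integers(0, 2, 6).astype(float)
p_post = 1.0 / (1.0 + np.exp(-w @ pre))
post = (rng.random(4) < p_post).astype(float)
cascade.on_activity(pre, post, p_post)
cascade.on_decision(decision_signal=post)    # placeholder decision gating
w = cascade.on_reward(w, reward=1.0)         # delayed reinforcement
```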

Relevance:

30.00%

Publisher:

Abstract:

Disturbances in reward processing have been implicated in bulimia nervosa (BN). Abnormalities in processing reward-related stimuli might be linked to dysfunctions of the catecholaminergic neurotransmitter system, but findings have been inconclusive. A powerful way to investigate the relationship between catecholaminergic function and behavior is to examine behavioral changes in response to experimental catecholamine depletion (CD). The purpose of this study was to uncover putative catecholaminergic dysfunction in remitted subjects with BN who performed a reinforcement-learning task after CD. CD was achieved by oral alpha-methyl-para-tyrosine (AMPT) in 19 unmedicated female subjects with remitted BN (rBN) and 28 demographically matched healthy female controls (HC). Sham depletion was administered as identical capsules containing diphenhydramine. The study design was a randomized, double-blind, placebo-controlled, crossover, single-site experimental trial. The main outcome measure was reward learning in a probabilistic reward task analyzed using signal-detection theory. Secondary outcome measures included self-report assessments, including the Eating Disorder Examination-Questionnaire. Relative to healthy controls, rBN subjects were characterized by blunted reward learning in the AMPT condition but not in the placebo condition. Highlighting the specificity of these findings, the groups did not differ in their ability to perceptually distinguish between stimuli. Increased CD-induced anhedonic (but not eating disorder) symptoms were associated with a reduced response bias toward the more frequently rewarded stimulus. In conclusion, under CD, rBN subjects showed reduced reward learning compared with healthy control subjects. These deficits uncover a disturbance of the central reward processing system in rBN related to altered brain catecholamine levels, which might reflect a trait-like deficit increasing vulnerability to BN.
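For reference, the signal-detection quantities usually reported for this kind of probabilistic reward task are a response bias and a discriminability computed from the hit/miss counts for the rich (more frequently rewarded) and lean stimuli; the exact analysis pipeline of this study is an assumption on my part. A common formulation, with a 0.5 correction against empty cells, is sketched below.

```python
import math

def sdt_measures(rich_correct, rich_incorrect, lean_correct, lean_incorrect):
    """Response bias (log b) and discriminability (log d) for a reward task."""
    rc, ri = rich_correct + 0.5, rich_incorrect + 0.5
    lc, li = lean_correct + 0.5, lean_incorrect + 0.5
    log_b = 0.5 * math.log((rc * li) / (ri * lc))   # response bias
    log_d = 0.5 * math.log((rc * lc) / (ri * li))   # discriminability
    return log_b, log_d

# Toy counts: more hits on the more frequently rewarded ("rich") stimulus.
print(sdt_measures(rich_correct=80, rich_incorrect=20,
                   lean_correct=60, lean_incorrect=40))
```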

Relevance:

30.00%

Publisher:

Abstract:

Dynamic core-shell nanoparticles have received increasing attention in recent years. This paper presents a detailed study of Au-Hg nanoalloys, whose constituent elements show a large difference in cohesive energy. A simple method to prepare Au@Hg particles with precise control over the composition up to 15 atom% mercury is introduced, based on reacting a citrate-stabilized gold sol with elemental mercury. Transmission electron microscopy shows an increase of particle size with increasing mercury content and, together with X-ray powder diffraction, points towards the presence of a core-shell structure with a gold core surrounded by an Au-Hg solid-solution layer. The amalgamation process is described by pseudo-zero-order reaction kinetics, which indicates slow dissolution of mercury in water as the rate-determining step, followed by fast scavenging by nanoparticles in solution. Once adsorbed at the surface, slow diffusion of Hg into the particle lattice occurs, to a depth of ca. 3 nm, independent of the Hg concentration. Discrete dipole approximation calculations relate the UV-vis spectra to the microscopic details of the nanoalloy structure. Segregation energies and metal distribution in the nanoalloys were modeled by density functional theory calculations. The results indicate slow metal interdiffusion at the nanoscale, which has important implications for synthetic methods aimed at core-shell particles.
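The pseudo-zero-order description implies that the amount of mercury taken up by the particles grows linearly in time, since slow dissolution of Hg in water limits the rate; the apparent rate constant is then simply the slope of a linear fit. The sketch below illustrates this with made-up placeholder numbers, not data from the paper.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # time (arbitrary units)
c = np.array([0.0, 0.9, 2.1, 2.9, 4.2, 5.0])   # Hg taken up (arbitrary units)

# Zero-order kinetics: c(t) = k * t, so a straight-line fit gives k.
k, intercept = np.polyfit(t, c, 1)
print(f"apparent rate constant k = {k:.2f} per time unit")
```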

Relevance:

30.00%

Publisher:

Abstract:

Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change in time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
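For orientation, the sketch below shows the bare form of a policy-gradient (REINFORCE) update in a 2x2 zero-sum game with a mixed Nash equilibrium. It is my own simplified logit version, used only to illustrate what following the stochastic reward gradient means; its dynamics are not those of the spike-based population model, and the payoff matrix is a matching-pennies placeholder rather than the inspector game of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])     # zero-sum payoffs for player 1
theta1, theta2, lr = 0.0, 0.0, 0.01          # action-preference logits

def act(theta):
    p = 1.0 / (1.0 + np.exp(-theta))         # probability of choosing action 1
    a = int(rng.random() < p)
    return a, p

for t in range(5000):
    a1, p1 = act(theta1)
    a2, p2 = act(theta2)
    r1 = A[a1, a2]                           # player 2 receives -r1
    # REINFORCE: move each logit along its stochastic reward gradient.
    theta1 += lr * r1 * (a1 - p1)
    theta2 += lr * (-r1) * (a2 - p2)

print("mixed strategies:", 1.0 / (1.0 + np.exp(-theta1)),
      1.0 / (1.0 + np.exp(-theta2)))
```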

Relevance:

30.00%

Publisher:

Abstract:

We study synaptic plasticity in a complex neuronal cell model where NMDA-spikes can arise in certain dendritic zones. In the context of reinforcement learning, two kinds of plasticity rules are derived, zone reinforcement (ZR) and cell reinforcement (CR), which both optimize the expected reward by stochastic gradient ascent. For ZR, the synaptic plasticity response to the external reward signal is modulated exclusively by quantities which are local to the NMDA-spike initiation zone in which the synapse is situated. CR, in addition, uses nonlocal feedback from the soma of the cell, provided by mechanisms such as the backpropagating action potential. Simulation results show that, compared to ZR, the use of nonlocal feedback in CR can drastically enhance learning performance. We suggest that the availability of nonlocal feedback for learning is a key advantage of complex neurons over networks of simple point neurons, which have previously been found to be largely equivalent with regard to computational capability.
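Read side by side, the two rules can be caricatured as below (an illustrative reading of the description above, not the paper's formulas): zone reinforcement modulates a zone-local plasticity term by the reward alone, while cell reinforcement additionally multiplies in somatic feedback such as the backpropagating action potential.

```python
import numpy as np

def local_term(pre, nmda, p_nmda):
    """Plasticity term local to one NMDA-spike initiation zone.

    pre    : presynaptic activity of the synapses in this zone (vector)
    nmda   : 1.0 if the zone generated an NMDA spike, else 0.0
    p_nmda : expected probability of an NMDA spike in this zone
    """
    return (nmda - p_nmda) * pre

def zr_update(w_zone, pre, nmda, p_nmda, reward, lr=0.02):
    # Zone reinforcement: only zone-local quantities plus the reward.
    return w_zone + lr * reward * local_term(pre, nmda, p_nmda)

def cr_update(w_zone, pre, nmda, p_nmda, soma_spike, p_soma, reward, lr=0.02):
    # Cell reinforcement: additionally gated by somatic feedback.
    return w_zone + lr * reward * (soma_spike - p_soma) * local_term(pre, nmda, p_nmda)

# Toy usage for the synapses of one zone.
pre = np.array([1.0, 0.0, 1.0])
w_zone = np.zeros(3)
w_zr = zr_update(w_zone, pre, nmda=1.0, p_nmda=0.3, reward=1.0)
w_cr = cr_update(w_zone, pre, nmda=1.0, p_nmda=0.3,
                 soma_spike=1.0, p_soma=0.6, reward=1.0)
```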