24 results for Action Learning Cycle
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The assumption that social skills are necessary ingredients of collaborative learning is well established but rarely empirically tested. In addition, most theories on collaborative learning focus on social skills only at the personal level, while the social skill configurations within a learning group might be of equal importance. Using the integrative framework, this study investigates which social skills at the personal level and at the group level are predictive of task-related e-mail communication, satisfaction with performance and perceived quality of collaboration. Data collection took place in a technology-enhanced long-term project-based learning setting for pre-service teachers. For data collection, two questionnaires were used, one at the beginning and one at the end of the learning cycle, which lasted 3 months. During the project phase, the e-mail communication between group members was captured as well. The investigation of 60 project groups (N = 155 for the questionnaires; group size: two or three students) and 33 groups for the e-mail communication (N = 83) revealed that personal social skills played only a minor role compared to group-level configurations of social skills in predicting satisfaction with performance, perceived quality of collaboration and communication behaviour. Members from groups that showed a high and/or homogeneous configuration of specific social skills (e.g., cooperation/compromising, leadership) were usually more satisfied and saw their group as more efficient than members from groups with a low and/or heterogeneous configuration of skills.
Abstract:
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for the discrete classification and the continuous regression tasks. The suggested learning rules also speed up with increasing population size as opposed to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation as opposed to the classical weight- or node-perturbation as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning as compared to exploration in the neuron or weight space.
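The action-perturbation idea in the abstract above can be caricatured in a few lines: explore by injecting noise into the low-dimensional action rather than into the many weights, and credit the noise with the resulting change in reward. This is a minimal rate-based sketch, not the paper's spiking model; the linear policy, the quadratic reward and all constants are illustrative assumptions.

```python
import numpy as np

# Toy continuous-action task: a linear policy a = w . x should learn to
# match a target action for each stimulus x. Exploration happens in the
# scalar action, not in the n_inputs-dimensional weight vector.
rng = np.random.default_rng(0)
n_inputs, n_trials, sigma, lr = 50, 5000, 0.1, 0.003

w_target = rng.normal(size=n_inputs) / np.sqrt(n_inputs)  # defines the task
w = np.zeros(n_inputs)

def reward(action, x):
    """Higher reward for actions closer to the (hypothetical) target."""
    return -(action - w_target @ x) ** 2

for _ in range(n_trials):
    x = rng.normal(size=n_inputs)
    a0 = w @ x                        # unperturbed action proposal
    noise = sigma * rng.normal()      # perturbation of the scalar action
    # the reward difference credits the single action-space noise source,
    # instead of perturbing each of the n_inputs weights separately
    delta_r = reward(a0 + noise, x) - reward(a0, x)
    w += lr * delta_r * noise * x / sigma**2   # stochastic gradient ascent

test_x = rng.normal(size=(500, n_inputs))
err = np.mean((test_x @ w - test_x @ w_target) ** 2)
```

Because one shared noise source drives all synapses, the number of perturbed quantities does not grow with population size, which gives some intuition for why exploration in action space can outpace weight- or node-perturbation.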
Abstract:
Over the last decade, the end-state comfort effect (e.g., Rosenbaum et al., 2006) has received a considerable amount of attention. However, some of the underlying mechanisms are still to be investigated, amongst others, how sequential planning affects end-state comfort and how this effect develops over learning. In a two-step sequencing task, e.g., postural comfort can be planned on the intermediate position (next state) or on the actual end position (final state). It might be hypothesized that, in initial acquisition, next state’s comfort is crucial for action planning but that, in the course of learning, final state’s comfort is taken more and more into account. To test this hypothesis, a variant of Rosenbaum’s vertical stick transportation task was used. Participants (N = 16, right-handed) received extensive practice on a two-step transportation task (10,000 trials over 12 sessions). From the initial position on the middle stair of a staircase in front of the participant, the stick had to be transported either 20 cm upwards and then 40 cm downwards or 20 cm downwards and then 40 cm upwards (N = 8 per subgroup). Participants were supposed to produce fluid movements without changing grasp. In the pre- and posttest, participants were tested on both two-step sequencing tasks as well as on 20 cm single-step upwards and downwards movements (10 trials per condition). For the test trials, grasp height was calculated kinematographically. In the pretest, large end/next/final-state comfort effects for single-step transportation tasks and large next-state comfort effects for sequenced tasks were found. However, no change in grasp height from pre- to posttest could be revealed. Results show that, in vertical stick transportation sequences, the final state is not taken into account when planning grasp height. Instead, action planning seems to be solely based on aspects of the next action goal that is to be reached.
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action with non-related stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields a learning behavior which is consistent with behavioral data from humans and monkeys, revealing properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
Abstract:
Learning by reinforcement is important in shaping animal behavior. But behavioral decision making is likely to involve the integration of many synaptic events in space and time. So, in using a single reinforcement signal to modulate synaptic plasticity, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but by a population feedback signal as well. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task the reward is delayed beyond the last action with non-related stimuli and actions appearing in between. The second one involves an action sequence which is itself extended in time and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It only has a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.
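How an eligibility trace bridges the gap between an action and a delayed reward can be caricatured in a rate-based toy (this is not the paper's spiking rule; the two-pattern task, the delay and all constants are made up for illustration): each synapse keeps an exponentially decaying trace of its recent contribution, and a single global reward signal, arriving several steps late, modulates all traces at once.

```python
import numpy as np
from collections import deque

# Two stimulus patterns; the correct action equals the pattern index.
# Reward for an action is only delivered `delay` steps later, with other
# decisions happening in between (a tiny non-Markovian task).
rng = np.random.default_rng(1)
n_in, delay, gamma, lr, steps = 10, 3, 0.6, 0.2, 8000

patterns = rng.normal(size=(2, n_in))
w = np.zeros(n_in)
trace = np.zeros(n_in)               # per-synapse eligibility trace
pending = deque([0.0] * delay)       # rewards in transit
baseline = 0.5                        # running reward baseline

for _ in range(steps):
    k = rng.integers(2)
    x = patterns[k]
    p = 1.0 / (1.0 + np.exp(-(w @ x)))      # probability of action 1
    action = int(rng.random() < p)
    # eligibility decays each step and accumulates the local policy-
    # gradient term; it is what links the delayed reward to this action
    trace = gamma * trace + (action - p) * x
    pending.append(float(action == k))       # reward, delivered later
    r = pending.popleft()                    # reward for the action at t-delay
    w += lr * (r - baseline) * trace         # global signal x local trace
    baseline += 0.01 * (r - baseline)
```

Only the trace component laid down `delay` steps ago correlates with the arriving reward; the more recent components are zero-mean noise, so the rule still climbs the reward gradient in expectation.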
Abstract:
The discovery of binary dendritic events such as local NMDA spikes in dendritic subbranches led to the suggestion that dendritic trees could be computationally equivalent to a 2-layer network of point neurons, with a single output unit represented by the soma, and input units represented by the dendritic branches. Although this interpretation endows a neuron with a high computational power, it is functionally not clear why nature would have preferred the dendritic solution with a single but complex neuron, as opposed to the network solution with many but simple units. We show that the dendritic solution has a distinguished advantage over the network solution when considering different learning tasks. Its key property is that the dendritic branches receive an immediate feedback from the somatic output spike, while in the corresponding network architecture the feedback would require additional backpropagating connections to the input units. Assuming a reinforcement learning scenario we formally derive a learning rule for the synaptic contacts on the individual dendritic trees which depends on the presynaptic activity, the local NMDA spikes, the somatic action potential, and a delayed reinforcement signal. We test the model for two scenarios: the learning of binary classifications and of precise spike timings. We show that the immediate feedback represented by the backpropagating action potential supplies the individual dendritic branches with enough information to efficiently adapt their synapses and to speed up the learning process.
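The 2-layer view described above can be sketched directly: each dendritic branch applies a steep, NMDA-spike-like nonlinearity to its locally summed synaptic input, and the soma sums the branch outputs, exactly as a hidden layer feeding a single output unit would. The branch count, weights and thresholds below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# One neuron as a 2-layer network: branches = hidden units, soma = output.
rng = np.random.default_rng(2)
n_branches, syn_per_branch = 8, 20

w = rng.normal(size=(n_branches, syn_per_branch))  # synapses per branch

def branch_nonlinearity(u, threshold=2.0, gain=4.0):
    """Steep sigmoid as a stand-in for the all-or-none NMDA spike."""
    return 1.0 / (1.0 + np.exp(-gain * (u - threshold)))

def soma_output(x):
    """x: presynaptic activity, shape (n_branches, syn_per_branch)."""
    branch_input = np.sum(w * x, axis=1)   # local dendritic summation
    branch_out = branch_nonlinearity(branch_input)
    return branch_out.sum()                # somatic integration; an output
                                           # spike would be a threshold on this
```

In the corresponding point-neuron network the "branch" units would need extra backpropagating connections to receive the output unit's feedback; in the dendritic tree the somatic spike reaches every branch for free, which is the advantage the abstract argues for.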
Abstract:
microRNAs (miRNAs) are small non-coding RNAs that are frequently involved in carcinogenesis. Although many miRNAs form part of integrated networks, little information is available on how they interact with each other to control cellular processes. miR-34a and miR-15a/16 are functionally related; they share common targets and control similar processes, including G1-S cell cycle progression and apoptosis. The aim of this study was to investigate the combined action of miR-34a and miR-15a/16 in non-small cell lung cancer (NSCLC) cells.
Abstract:
The primary aim was to investigate the effect of combined butafosfan and cyanocobalamin on liver metabolism in early lactating cows through mRNA expression measurements of genes encoding 31 enzymes and transport proteins of major metabolic processes in the liver, using 16 multiparous early lactating dairy cows. The treatments included i.v. injection of 10 mL/100 kg of body weight combined butafosfan and cyanocobalamin (TG, n = 8) on 3 consecutive days at 25 +/- 3 d in milk, or injection with physiological saline solution similarly applied (CG, n = 8). Results include a higher daily milk production for TG cows (41.1 +/- 0.9 kg, mean +/- SEM) compared with CG cows (39.5 +/- 0.7 kg). In plasma, the concentration of inorganic phosphorus was lower in the TG cows (1.25 +/- 0.08 mmol/L) after the treatment than in the CG cows (1.33 +/- 0.07 mmol/L). The plasma beta-hydroxybutyrate concentration was 0.65 +/- 0.13 mmol/L for all cows before the treatment, and remained unaffected post treatment. A notable result was that, in the liver, the mRNA abundance of acyl-coenzyme A synthetase long-chain family member 1, involved in fatty acid oxidation and biosynthesis, was lower across time points after the treatment for TG compared with CG cows (17.5 +/- 0.15 versus 18.1 +/- 0.24 cycle threshold, log(2), respectively). In conclusion, certain effects of combined butafosfan and cyanocobalamin were observed on mRNA abundance of a gene in the liver of nonketotic early lactating cows.
Abstract:
We study synaptic plasticity in a complex neuronal cell model where NMDA-spikes can arise in certain dendritic zones. In the context of reinforcement learning, two kinds of plasticity rules are derived, zone reinforcement (ZR) and cell reinforcement (CR), which both optimize the expected reward by stochastic gradient ascent. For ZR, the synaptic plasticity response to the external reward signal is modulated exclusively by quantities which are local to the NMDA-spike initiation zone in which the synapse is situated. CR, in addition, uses nonlocal feedback from the soma of the cell, provided by mechanisms such as the backpropagating action potential. Simulation results show that, compared to ZR, the use of nonlocal feedback in CR can drastically enhance learning performance. We suggest that the availability of nonlocal feedback for learning is a key advantage of complex neurons over networks of simple point neurons, which have previously been found to be largely equivalent with regard to computational capability.
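A hedged sketch of zone-level reinforcement of the kind described above: each dendritic zone emits a stochastic NMDA-spike-like event, the soma combines the events into a stochastic output spike, and a global reward signal modulates each synapse through quantities local to its zone (its own event minus its event probability), which is the ZR flavour. Per the abstract, CR would additionally use nonlocal somatic feedback (e.g. the backpropagating action potential); that factor is omitted here. The architecture, task and constants are illustrative assumptions.

```python
import numpy as np

# Tiny binary classification: two stimulus classes, target spike = class.
rng = np.random.default_rng(3)
n_branches, n_syn, steps, lr = 8, 20, 20000, 0.1

x_pat = rng.normal(size=(2, n_branches, n_syn))  # one stimulus per class
w = np.zeros((n_branches, n_syn))
baseline = 0.5

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for _ in range(steps):
    k = rng.integers(2)                        # class; target spike = k
    x = x_pat[k]
    p_b = sigmoid(np.sum(w * x, axis=1))       # zone event probabilities
    b = (rng.random(n_branches) < p_b).astype(float)   # NMDA-like events
    p_s = sigmoid(2.0 * (b.sum() - n_branches / 2))    # somatic spike prob.
    s = float(rng.random() < p_s)
    r = float(s == k)                          # single global reward signal
    # zone reinforcement: reward x (event - event probability) x input,
    # i.e. only quantities local to the zone modulate its synapses
    w += lr * (r - baseline) * ((b - p_b)[:, None] * x)
    baseline += 0.01 * (r - baseline)
```

This local rule performs stochastic gradient ascent on the expected reward; the paper's point is that adding the somatic feedback term (CR) can drastically reduce the variance of exactly this kind of estimate.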
Abstract:
Sustainable natural resource use requires that multiple actors reassess their situation in a systemic perspective. This can be conceptualised as a social learning process between actors from rural communities and experts from outside organisations. A specifically designed workshop, oriented towards a systemic view of natural resource use and the enhancement of mutual learning between local and external actors, provided the background for evaluating the potentials and constraints of intensified social learning processes. Case studies in rural communities in India, Bolivia, Peru and Mali showed that changes in the narratives of the workshop participants followed a similar temporal sequence, relatively independently of their specific contexts. Social learning processes were found to be more likely to be successful if they 1) opened new space for communicative action, allowing for an intersubjective re-definition of the present situation, and 2) contributed to rebalancing the relationships between social capital and social, emotional and cognitive competencies within and between local and external actors.
Abstract:
This article provides a selective overview of the functional neuroimaging literature with an emphasis on emotional activation processes. Emotions are fast and flexible response systems that provide basic tendencies for adaptive action. From the range of involved component functions, we first discuss selected automatic mechanisms that control basic adaptational changes. Second, we illustrate how neuroimaging work has contributed to the mapping of the network components associated with basic emotion families (fear, anger, disgust, happiness), and secondary dimensional concepts that organise the meaning space for subjective experience and verbal labels (emotional valence, activity/intensity, approach/withdrawal, etc.). Third, results and methodological difficulties are discussed in view of our own neuroimaging experiments that investigated the component functions involved in emotional learning. The amygdala, prefrontal cortex, and striatum form a network of reciprocal connections that show topographically distinct patterns of activity as a correlate of up- and down-regulation processes during an emotional episode. Emotional modulations of other brain systems have attracted recent research interest. Emotional neuroimaging calls for more representative designs that highlight the modulatory influences of regulation strategies and socio-cultural factors responsible for inhibitory control and extinction. We conclude by emphasising the relevance of the temporal process dynamics of emotional activations that may provide improved prediction of individual differences in emotionality.
Abstract:
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
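The bistable, spike-driven synapse the abstract describes can be sketched for a single synapse: an internal variable X drifts toward one of two stable states (which is what preserves memories indefinitely without stimulation), and a presynaptic spike kicks X up or down depending on the postsynaptic depolarization V and a calcium-like variable C that integrates postsynaptic action potentials. All thresholds and constants below are illustrative assumptions, not the paper's parameters.

```python
# One bistable synapse: internal state X in [0, 1], binary efficacy readout.
theta_X, drift = 0.5, 0.01       # bistability threshold and drift rate
theta_V = -55.0                  # depolarization threshold (mV, assumed)
c_low, c_high = 1.0, 3.0         # plasticity window on the calcium variable
a_up, b_down = 0.1, 0.1          # candidate jump sizes

def update_synapse(X, V, C, presyn_spike, dt=1.0):
    """One time step of the internal synaptic variable X."""
    if presyn_spike:
        if V > theta_V and c_low < C < c_high:
            X += a_up            # candidate potentiation (depolarized soma)
        elif V <= theta_V and c_low < C < c_high:
            X -= b_down          # candidate depression (hyperpolarized soma)
    else:
        # no presynaptic spike: relax toward the nearer stable state;
        # this bistable drift makes memories robust to spontaneous activity
        X += drift * dt if X > theta_X else -drift * dt
    return min(1.0, max(0.0, X))

def efficacy(X, j_low=0.1, j_high=1.0):
    """Binary synaptic weight read out from the internal state."""
    return j_high if X > theta_X else j_low
```

Outside the plasticity window on C no change is triggered at all, which is one way such models stop learning once the postsynaptic response is already clearly correct or clearly wrong.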