21 results for ASSIGNMENT
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action with non-related stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking.
Applying our algorithm to this game yields learning behavior which is consistent with behavioral data from humans and monkeys, both of which reveal properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
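The rule described above combines three ingredients: a decaying eligibility trace per synapse (temporal credit), a population feedback signal (spatial credit), and reward modulation. A minimal toy sketch of such a rule is given below; all names, sizes, and constants (`tau_e`, `eta`, the outer-product tagging) are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_synapses = 4, 6
w = rng.normal(0.0, 0.1, size=(n_neurons, n_synapses))  # synaptic weights
eligibility = np.zeros_like(w)                          # per-synapse memory trace

tau_e = 20.0  # eligibility time constant (ms) -- illustrative value
eta = 0.05    # learning rate -- illustrative value

def step(pre_spikes, post_spikes, reward, pop_feedback, dt=1.0):
    """One timestep of the sketched rule: decay the traces, tag co-active
    synapses, and let reward x population feedback gate the weight change."""
    eligibility[:] = eligibility * np.exp(-dt / tau_e)   # temporal credit: trace decays
    eligibility[:] += np.outer(post_spikes, pre_spikes)  # tag recently co-active synapses
    w[:] += eta * reward * pop_feedback * eligibility    # spatial credit via population signal
```

Because the weight update is gated by the product of reward and population feedback rather than by reward alone, synapses in neurons that did not contribute to the decision receive less credit, which is the spatial half of the assignment problem.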
Abstract:
Learning by reinforcement is important in shaping animal behavior. However, behavioral decision making is likely to involve the integration of many synaptic events in space and time. Hence, in using a single reinforcement signal to modulate synaptic plasticity, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but by a population feedback signal as well. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task the reward is delayed beyond the last action with non-related stimuli and actions appearing in between. The second one involves an action sequence which is itself extended in time and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It only has a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.
Abstract:
We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In the cascade of these so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions, and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness, not of the immediately preceding response, but of a decision made earlier on in the stimulus-decision sequence. So the proposed model does not rely on the temporal contiguity between decision and pertinent reward and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different population neurons. Simulations on other tasks such as sequential decision making serve to highlight the robustness of the proposed scheme and, further, contrast its performance to that of temporal difference based approaches to reinforcement learning.
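The cascade described above chains three correlations: pre/post activity, behavioral decision, and reinforcement. A caricature of such a cascade is sketched below as three coupled low-pass filters per synapse; the time constants and the Euler update are illustrative assumptions, not the paper's model.

```python
tau1, tau2 = 10.0, 50.0  # trace time constants per cascade stage -- assumed
eta = 0.01               # learning rate -- assumed

class CascadeSynapse:
    """Sketch of a synaptic eligibility-trace cascade: a Hebbian coincidence
    feeds trace e1, the behavioral decision gates e1 into e2, and external
    reinforcement finally converts e2 into a weight change."""
    def __init__(self):
        self.e1 = 0.0   # stage 1: pre/post coincidence trace
        self.e2 = 0.0   # stage 2: decision-tagged trace
        self.w = 0.0    # synaptic weight

    def update(self, pre, post, decision, reward, dt=1.0):
        self.e1 += dt * (-self.e1 / tau1 + pre * post)          # correlate pre with post
        self.e2 += dt * (-self.e2 / tau2 + decision * self.e1)  # correlate with the decision
        self.w += eta * reward * self.e2                        # correlate with reinforcement
```

Because `e2` outlives the decision that created it, a reward arriving many timesteps later can still reach the synapses that were relevant, which is how the cascade dispenses with temporal contiguity.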
Abstract:
In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme to that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
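For the temporal-difference baseline that the abstract contrasts against, the textbook building block is the tabular TD(0) value update, which bootstraps from the value of the next state and therefore presupposes a Markovian state description. A minimal sketch (toy 5-state chain, illustrative constants):

```python
import numpy as np

gamma, alpha = 0.9, 0.1   # discount factor and learning rate -- illustrative
V = np.zeros(5)           # value table for a toy 5-state chain

def td0_update(s, r, s_next):
    """One tabular TD(0) step: move V[s] toward the bootstrapped target
    r + gamma * V[s_next]."""
    delta = r + gamma * V[s_next] - V[s]   # TD error
    V[s] += alpha * delta
    return delta
```

The reliance on `V[s_next]` is exactly what a synaptic cascade avoids: the cascade stores credit locally in traces rather than in a state-indexed value table, which is why it can cope with non-Markovian reward contingencies.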
Abstract:
OBJECTIVE: The aim of this study was to estimate intra- and post-operative risk using the American Society of Anaesthesiologists (ASA) classification, which is an important predictor for planning an intervention and the entire operating programme. STUDY DESIGN: In this retrospective study, 4435 consecutive patients undergoing elective and emergency surgery at the Gynaecological Clinic of the University Hospital of Zurich were included. The ASA classification for pre-operative risk assessment was determined by an anaesthesiologist after a thorough physical examination. We observed several pre-, intra- and post-operative parameters, such as age, body mass index, duration of anaesthesia, duration of surgery, blood loss, duration of post-operative stay, complicated post-operative course, morbidity and mortality. The different risk factors were investigated with a multiple linear regression model for the log-transformed duration of hospitalisation. RESULTS: Age and obesity were responsible for a higher ASA classification. ASA grade correlates with the duration of anaesthesia and the duration of the surgery itself. There was a significant difference in blood loss between ASA grades I (113+/-195 ml) and III (222+/-470 ml) and between classes II (176+/-432 ml) and III. The duration of post-operative hospitalisation could also be correlated with ASA class: ASA class I=1.7+/-3.0 days, ASA class II=3.6+/-4.3 days, ASA class III=6.8+/-8.2 days, and ASA class IV=6.2+/-3.9 days. The mean post-operative in-hospital stay was 2.5+/-4.0 days without complications, and 8.7+/-6.7 days with post-operative complications. The multiple linear regression model showed that the ASA classification was not the only variable containing important information about the duration of hospitalisation: parameters such as age, class of diagnosis and post-operative complications also influenced it.
CONCLUSION: This study shows that the ASA classification can be used as a good and early available predictor for the planning of an intervention in gynaecological surgery. The ASA classification helps the surgeon to assess the peri-operative risk profile, from which important information can be derived for planning the operating programme.
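The study's analysis fits a multiple linear regression to the log-transformed duration of hospitalisation. A sketch of that kind of model on synthetic data is shown below; the predictors, coefficients, and generated values are illustrative assumptions and bear no relation to the actual Zurich patient data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic predictors (illustrative only): ASA grade, age, complication flag.
asa = rng.integers(1, 5, n).astype(float)
age = rng.uniform(20, 90, n)
complication = rng.integers(0, 2, n).astype(float)

# Assumed generative model for log length-of-stay, with Gaussian noise.
log_los = 0.2 + 0.4 * asa + 0.005 * age + 0.8 * complication + rng.normal(0, 0.2, n)

# Ordinary least squares on [intercept, ASA, age, complication].
X = np.column_stack([np.ones(n), asa, age, complication])
coef, *_ = np.linalg.lstsq(X, log_los, rcond=None)
```

Fitting on the log scale keeps the strongly right-skewed length-of-stay data closer to the regression's normality assumption, and the exponentiated coefficients then read as multiplicative effects on the stay duration.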
Abstract:
The loci of the porcine tumour necrosis factor genes, alpha (TNFA) and beta (TNFB), have been chromosomally assigned by radioactive in situ hybridization. The genomic probes for TNFA and TNFB yielded signals above 7p11-q11, a region that has been shown earlier to carry the porcine major histocompatibility locus (SLA). These mapping data along with preliminary molecular studies suggest a genomic organization of the SLA that is similar to that of human and murine major histocompatibility complexes.
Abstract:
We have cloned the complete coding region of the porcine TNFSF10 gene. The porcine TNFSF10 cDNA has an ORF of 870 nucleotides and shares 85% identity with human TNFSF10, and 75% and 72% identity with rat and mouse Tnfsf10 coding sequences, respectively. The deduced porcine TNFSF10 protein consists of 289 amino acids with a calculated molecular mass of 33.5 kDa and a predicted pI of 8.15. The amino acid sequence similarities are 86%, 72% and 70% when compared with the human, rat and mouse sequences, respectively. Northern blot analysis detected TNFSF10-specific transcripts (approximately 1.7 kb) in various organs of a 10-week-old pig, suggesting ubiquitous expression. Real-time RT-PCR studies of various organs from fetal (days 73 and 98) and postnatal stages (two weeks, eight months) demonstrated developmental and tissue-specific regulation of TNFSF10 mRNA abundance. The chromosomal location of the porcine TNFSF10 gene was determined by FISH of a specific BAC clone to metaphase chromosomes. This TNFSF10 BAC clone has been assigned to SSC13q34-->q36. Additionally, the localization of the TNFSF10 gene was verified by RH mapping on the porcine IMpRH panel.
Abstract:
Pinschers affected by coat color dilution show a specific pigmentation phenotype. The dilute pigmentation phenotype leads to a silver-blue appearance of the eumelanin-containing fur and a pale sandy color of pheomelanin-containing fur. In Pinscher breeding, dilute black-and-tan dogs are called "blue," and dilute red or brown animals are termed "fawn" or "Isabella fawn." Coat color dilution in Pinschers is sometimes accompanied by hair loss and a recurrent infection of the hair follicles. In humans and mice, several well-characterized genes are responsible for similar pigment variations. To investigate the genetic cause of the coat color dilution in Pinschers, we isolated BAC clones containing the canine ortholog of the known murine color dilution gene Mlph. RH mapping of the canine MLPH gene was performed using an STS marker derived from BAC sequences. Additionally, one MLPH BAC clone was used as a probe for FISH mapping, and the canine MLPH gene was assigned to CFA25q24.