26 results for time-place learning

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

90.00%

Publisher:

Abstract:

In this paper, a simulation model of glucose-insulin metabolism for Type 1 diabetes patients is presented. The proposed system is based on the combination of Compartmental Models (CMs) and artificial Neural Networks (NNs). The model aims at providing an accurate system that assists Type 1 diabetes patients in managing their blood glucose profile and in recognizing dangerous metabolic states. Data from a Type 1 diabetes patient, stored in a database, have been used as input to the hybrid system. The data contain information about measured blood glucose levels, insulin intake, and descriptions of food intake, along with the corresponding times. The data are passed to three separate CMs, which produce estimates of (i) the effect of Short Acting (SA) insulin intake on blood insulin concentration, (ii) the effect of Intermediate Acting (IA) insulin intake on blood insulin concentration, and (iii) the effect of carbohydrate intake on blood glucose absorption from the gut. The outputs of the three CMs are passed to a Recurrent NN (RNN) to predict subsequent blood glucose levels. The RNN is trained with the Real Time Recurrent Learning (RTRL) algorithm. The resulting blood glucose predictions are promising for the use of the proposed model in blood glucose level estimation for Type 1 diabetes patients.
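
To make the data flow above concrete, here is a minimal sketch, assuming a toy one-compartment insulin absorption model and illustrative network sizes, of a compartmental stage feeding a small RNN trained online with RTRL for one-step-ahead glucose prediction. Nothing below reproduces the paper's actual models or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def insulin_compartment(doses, k_abs=0.025, dt=5.0):
    """Toy one-compartment absorption model (assumed, not the paper's CM):
    plasma insulin level resulting from a sequence of subcutaneous doses."""
    level, out = 0.0, []
    for d in doses:
        level += d                    # dose enters the depot
        level *= np.exp(-k_abs * dt)  # first-order transfer/decay
        out.append(level)
    return np.array(out)

# RNN: h_t = tanh(W [h_{t-1}; x_t; 1]),  prediction y_t = v . h_t
n, nx = 8, 3          # hidden units; inputs: SA insulin, IA insulin, gut glucose
m = n + nx + 1
W = rng.normal(0.0, 0.1, (n, m))
v = rng.normal(0.0, 0.1, n)
P = np.zeros((n, n, m))   # RTRL sensitivities P[k, i, j] = dh_k / dW_ij
h = np.zeros(n)
lr = 0.01

def rtrl_step(x, target):
    """One online RTRL update; returns the glucose prediction for this step."""
    global h, P, W, v
    z = np.concatenate([h, x, [1.0]])
    h_new = np.tanh(W @ z)
    # P[k,i,j] <- (1 - h_k^2) * (delta_ki * z_j + sum_l W_kl * P[l,i,j])
    P_new = np.einsum('kl,lij->kij', W[:, :n], P)
    P_new[np.arange(n), np.arange(n), :] += z
    P_new *= (1.0 - h_new ** 2)[:, None, None]
    y = v @ h_new
    err = y - target                      # squared-error loss gradient factor
    W -= lr * err * np.einsum('k,kij->ij', v, P_new)
    v -= lr * err * h_new
    h, P = h_new, P_new
    return y
```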

Relevance:

80.00%

Publisher:

Abstract:

In this paper, an Insulin Infusion Advisory System (IIAS) for Type 1 diabetes patients who use insulin pumps for Continuous Subcutaneous Insulin Infusion (CSII) is presented. The purpose of the system is to estimate the appropriate insulin infusion rates. The system is based on a Non-Linear Model Predictive Controller (NMPC) which uses a hybrid model. The model comprises a Compartmental Model (CM), which simulates the absorption of glucose into the blood due to meal intake, and a Neural Network (NN), which simulates the glucose-insulin kinetics. The NN is a Recurrent NN (RNN) trained with the Real Time Recurrent Learning (RTRL) algorithm. The output of the model consists of short-term glucose predictions, which provide input to the NMPC so that the latter can estimate the optimum insulin infusion rates. For the development and evaluation of the IIAS, data generated from a Mathematical Model (MM) of a Type 1 diabetes patient have been used. The proposed control strategy is evaluated under multiple meal disturbances, various noise levels, and additional time delays. The results indicate that the implemented IIAS is capable of handling multiple meals corresponding to realistic meal profiles, as well as large noise levels and time delays.
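
The receding-horizon logic of an NMPC like the one described can be sketched compactly. In the sketch below, `predict_glucose` is a hypothetical stand-in for the hybrid CM+RNN model, and the target, horizon, and pump limit are assumed values; only the first optimized rate is applied before the problem is re-solved at the next step.

```python
import numpy as np
from scipy.optimize import minimize

TARGET = 6.0    # mmol/L glucose setpoint (assumed)
HORIZON = 6     # prediction steps
U_MAX = 2.0     # max infusion rate in U/h (assumed pump limit)

def predict_glucose(g0, rates, meals):
    """Placeholder model: glucose drifts up with meals, down with insulin.
    In the paper this role is played by the CM+RNN hybrid."""
    g, out = g0, []
    for u, m in zip(rates, meals):
        g = g + 0.3 * m - 0.8 * u
        out.append(g)
    return np.array(out)

def nmpc_step(g0, meals_forecast):
    """Optimize the infusion sequence over the horizon; apply only the first rate."""
    def cost(rates):
        g = predict_glucose(g0, rates, meals_forecast)
        # track the target, with a small penalty on abrupt rate changes
        return np.sum((g - TARGET) ** 2) + 0.1 * np.sum(np.diff(rates) ** 2)
    res = minimize(cost, x0=np.full(HORIZON, 0.5),
                   bounds=[(0.0, U_MAX)] * HORIZON, method='L-BFGS-B')
    return res.x[0]   # receding horizon: first move only

rate = nmpc_step(g0=9.0, meals_forecast=np.array([2.0, 0, 0, 0, 0, 0]))
print(f"suggested infusion rate: {rate:.2f} U/h")
```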

Relevance:

80.00%

Publisher:

Abstract:

In this paper, two models for the simulation of glucose-insulin metabolism in children with Type 1 diabetes are presented. The models are based on the combined use of Compartmental Models (CMs) and artificial Neural Networks (NNs). Data from four children with Type 1 diabetes, stored in a database, have been used as input to the models. The data contain glucose levels from a continuous glucose monitoring system, insulin intake, and food intake, along with the corresponding times. The CMs estimate the influence of administered insulin on plasma insulin concentration and the effect of food intake on the rate of glucose appearance in the blood from the gut. The outputs of the CMs, along with previous glucose measurements, are fed to an NN, which provides short-term predictions of glucose values. For comparison, two different NN architectures have been tested: a Feed-Forward NN (FFNN) trained with the back-propagation algorithm with adaptive learning rate and momentum, and a Recurrent NN (RNN) trained with the Real Time Recurrent Learning (RTRL) algorithm. The results indicate that the best prediction performance is achieved by the RNN.
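
For the feed-forward baseline, a minimal sketch of back-propagation with momentum and a simple adaptive learning rate (grow on improvement, shrink otherwise) looks as follows; the input layout and all hyperparameters are assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
nin, nh = 6, 10                      # e.g. 2 CM outputs + 4 past glucose samples (assumed)
W1, b1 = rng.normal(0, 0.1, (nh, nin)), np.zeros(nh)
W2, b2 = rng.normal(0, 0.1, nh), 0.0
vel = [np.zeros_like(p) for p in (W1, b1, W2)]   # momentum buffers
lr, mom, prev_loss = 0.05, 0.9, np.inf

def train_step(x, target):
    """One backprop step with momentum and a heuristic adaptive learning rate."""
    global b2, lr, prev_loss
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2
    err = y - target
    loss = 0.5 * err ** 2
    # backpropagate the squared error
    gW2 = err * h
    dh = err * W2 * (1 - h ** 2)
    gW1, gb1 = np.outer(dh, x), dh
    # adaptive rate: expand while the loss keeps dropping, contract otherwise
    lr = lr * 1.05 if loss < prev_loss else lr * 0.7
    prev_loss = loss
    for i, (p, g) in enumerate(zip((W1, b1, W2), (gW1, gb1, gW2))):
        vel[i] = mom * vel[i] - lr * g
        p += vel[i]                   # in-place parameter update
    b2 -= lr * err
    return y
```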

Relevance:

80.00%

Publisher:

Abstract:

Is numerical mimicry a third way of establishing truth? Kevin Heng received his M.S. and Ph.D. in astrophysics from the Joint Institute for Laboratory Astrophysics (JILA) and the University of Colorado at Boulder. He was at the Institute for Advanced Study in Princeton from 2007 to 2010, first as a Member and later as the Frank & Peggy Taplin Member. From 2010 to 2012 he was a Zwicky Prize Fellow at ETH Zürich (the Swiss Federal Institute of Technology). In 2013, he joined the Center for Space and Habitability (CSH) at the University of Bern, Switzerland, as a tenure-track assistant professor, where he leads the Exoplanets and Exoclimes Group. He has worked on, and maintains, a broad range of interests in astrophysics: shocks, extrasolar asteroid belts, planet formation, fluid dynamics, brown dwarfs, and exoplanets. He coordinates the Exoclimes Simulation Platform (ESP), an open-source set of theoretical tools designed for studying the basic physics and chemistry of exoplanetary atmospheres and climates (www.exoclime.org). He is involved in CHEOPS (Characterizing Exoplanet Satellite), a space telescope mission approved by the European Space Agency (ESA) and led by Switzerland. He spends a fair amount of time humbly learning the lessons gleaned from studying the Earth and Solar System planets, as related to him by atmospheric, climate, and planetary scientists. He received a Sigma Xi Grant-in-Aid of Research in 2006.

Relevance:

40.00%

Publisher:

Abstract:

In this chapter I explore the ambiguous, contradictory, and often transient ways the past enters into our lives. I shed light on the interplay of mobility and temporality in the lifeworlds of two Somalis who left Mogadishu at the outbreak of the war in the 1990s. Looking into the ways they actively make sense of this crucial 'memory-place' (Ricoeur 2004), a place that has been turned into a landscape of ruins and rubble, alternative understandings of memory and temporality will emerge. Instead of producing a continuum between here and there, and now and then, the stories and photographs discussed in this chapter form dialectical images: images that refuse to be woven into a coherent picture of the past. By emphasising the dialectical ways these two individuals make sense of Mogadishu's past and present, I follow Walter Benjamin's cue to rethink deeply modern analytical categories such as history, memory, and temporality by highlighting the brief, fragmented moments of their appearance in everyday life.

Relevance:

30.00%

Publisher:

Abstract:

Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior which is consistent with behavioral data from humans and monkeys and reveals properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
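
The shape of such a rule can be sketched with Bernoulli "spiking" units: each synapse keeps a decaying eligibility trace of the score function of its neuron's spiking, and a delayed scalar reward converts the trace into a weight change. This is a generic reward-gradient sketch, not the authors' rule; in particular, the population feedback signal is simplified here to a constant reward baseline.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 20, 10
W = rng.normal(0.0, 0.1, (n_out, n_in))
TAU_E = 20.0     # eligibility-trace decay constant in steps (assumed)
LR = 0.05
BASELINE = 0.5   # reward baseline; stands in for the paper's population feedback

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def trial(T=50):
    """One trial: spikes accumulate eligibility; reward arrives only at the end."""
    elig = np.zeros_like(W)
    activity = 0.0
    for _ in range(T):
        x = (rng.random(n_in) < 0.2).astype(float)   # presynaptic spikes
        p = sigmoid(W @ x)                           # firing probabilities
        s = (rng.random(n_out) < p).astype(float)    # postsynaptic spikes
        # trace stores the score function d log P(s)/dW = (s - p) x^T
        elig = (1.0 - 1.0 / TAU_E) * elig + np.outer(s - p, x)
        activity += s.mean() / T
    reward = 1.0 if activity > 0.5 else 0.0          # toy delayed reward
    return LR * (reward - BASELINE) * elig           # reward-modulated update

for _ in range(500):
    W += trial()
```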

Relevance:

30.00%

Publisher:

Abstract:

Learning by reinforcement is important in shaping animal behavior. Behavioral decision making, however, is likely to involve the integration of many synaptic events in space and time, so when a single reinforcement signal is used to modulate synaptic plasticity, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It has only a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.
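
Since both versions of this work highlight the inspection game, the game-theoretic target of the learning can be stated in a few lines: with no pure equilibrium, each player must randomize so as to make the opponent indifferent. The payoff entries below are illustrative, not taken from the paper.

```python
import numpy as np

# rows: worker (shirk, work); cols: inspector (inspect, no-inspect)
A = np.array([[-1.0,  1.0],    # worker's payoffs (assumed values)
              [ 0.0,  0.0]])
B = np.array([[ 1.0, -1.0],    # inspector's payoffs (assumed values)
              [-0.5,  0.0]])

# mixed equilibrium: each player's mix makes the *other* player indifferent
# q = P(inspect) equalizes the worker's payoff for shirk vs work
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
# p = P(shirk) equalizes the inspector's payoff for inspect vs no-inspect
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
print(f"P(shirk) = {p:.2f}, P(inspect) = {q:.2f}")
```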

Relevance:

30.00%

Publisher:

Abstract:

We have developed a haptic-based approach for retraining interjoint coordination following stroke, called time-independent functional training (TIFT), and implemented this mode in the ARMin III robotic exoskeleton. The ARMin III robot was developed by Drs. Robert Riener and Tobias Nef at the Swiss Federal Institute of Technology Zurich (Eidgenossische Technische Hochschule Zurich, or ETH Zurich) in Zurich, Switzerland. In the TIFT mode, the robot maintains arm movements within the proper kinematic trajectory via haptic walls at each joint. These walls focus training on interjoint coordination and provide highly intuitive real-time feedback on performance: arm movements advance within the trajectory only if their coordination is correct. In initial testing, 37 nondisabled subjects received a single session of learning of a complex pattern. Subjects were randomized to TIFT, to visual demonstration, or to time-dependent (TD) training, in which they moved along with the robot as it moved through the pattern. We examined visual demonstration to separate the effects of action observation on motor learning from the effects of the two haptic guidance methods. During these training trials, TIFT subjects reduced error and the interaction forces between the robot and the arm, while the performance of TD subjects did not change. All groups showed significant learning of the trajectory during unassisted recall trials, but we observed no difference in learning between groups, possibly because this learning task is dominated by vision. Further testing in stroke populations is warranted.
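
The core idea of TIFT, that progress along the trajectory is earned by correct coordination rather than granted by a clock, can be sketched as follows. The two-joint pattern, wall stiffness, and corridor width are invented for illustration and have nothing to do with the ARMin III parameters.

```python
import numpy as np

K_WALL = 30.0   # N*m/rad, assumed haptic wall stiffness
TOL = 0.05      # rad, assumed half-width of the haptic corridor

def desired_posture(s):
    """Illustrative 2-joint coordination pattern, parameterized by progress s in [0, 1]."""
    return np.array([0.8 * s, 0.4 * np.sin(np.pi * s)])   # shoulder, elbow angles

def tift_step(q, s, ds=0.01):
    """One control tick: wall torques push toward the path; progress s advances
    only while the joints stay inside the corridor (coordination is correct)."""
    err = q - desired_posture(s)
    # torque only outside the corridor, proportional to penetration depth
    torque = -K_WALL * np.clip(np.abs(err) - TOL, 0.0, None) * np.sign(err)
    if np.all(np.abs(err) < TOL):
        s = min(s + ds, 1.0)          # subject earns advancement along the pattern
    return torque, s
```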

Relevance:

30.00%

Publisher:

Abstract:

This publication offers concrete suggestions for implementing an integrative and learning-oriented approach to agricultural extension with the goal of fostering sustainable development. It targets governmental and non-governmental organisations, development agencies, and extension staff working in the field of rural development. The book looks into the conditions and trends that influence extension today, and outlines new challenges and necessary adaptations. It offers a basic reflection on the goals, the criteria for success, and the form of a state-of-the-art approach to extension. The core of the book consists of a presentation of Learning for Sustainability (LforS), an example of an integrative, learning-oriented approach that is based on three crucial elements: stakeholder dialogue, knowledge management, and organisational development. Awareness raising and capacity building, social mobilisation, and monitoring and evaluation are additional building blocks. The structure and organisation of the LforS approach, as well as a selection of appropriate methods and tools, are presented. The authors also address key aspects of developing and managing a learning-oriented extension approach. The book illustrates how LforS can be implemented by presenting two case studies, one from Madagascar and one from Mongolia. It addresses conceptual questions and is at the same time practice-oriented. In contrast to other extension approaches, LforS does not limit its focus to production-related aspects and the development of value chains: it also addresses livelihood issues in a broad sense. With its focus on learning processes, LforS seeks to create a better understanding of the links between different spheres and different levels of decision-making; it also seeks to foster the integration of the different actors' perspectives.

Relevance:

30.00%

Publisher:

Abstract:

Humans and animals face decision tasks in uncertain multi-agent environments where an agent's strategy may change over time due to the co-adaptation of other agents' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and inspector games. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain the automated decision learning of a Nash equilibrium in two-player games.
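
To illustrate how a policy gradient learner can settle on a mixed strategy against a co-adapting opponent, the sketch below uses matching pennies, the simplest game whose only Nash equilibrium is mixed, with plain sigmoid policies in place of the paper's spiking population code.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.zeros(2)   # policy parameters: theta[0] for player A, theta[1] for player B
LR = 0.02

def play(n_rounds=20000):
    """Both players update by the score-function (REINFORCE) gradient;
    empirical play approaches the mixed Nash equilibrium (0.5, 0.5)."""
    global theta
    freq = np.zeros(2)
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-theta))        # P(heads) for each player
        a = (rng.random(2) < p).astype(float)   # sampled actions (1 = heads)
        freq += a
        r_a = 1.0 if a[0] == a[1] else -1.0     # A wins on a match, B otherwise
        r = np.array([r_a, -r_a])               # zero-sum payoffs
        theta += LR * r * (a - p)               # d log pi / d theta = a - p
    return freq / n_rounds

print("empirical P(heads):", play())            # both near 0.5, the mixed NE
```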