21 results for Learning Society
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
There is a growing demand for a better understanding of the link between research, policy and practice in development. This article provides findings from a study that aimed to gain insights into how researchers engage with their non-academic partners. It draws on experiences from the National Centre of Competence in Research North-South programme, a development research network of Swiss, African, Asian and Latin American institutions. Conceptually, this study is concerned with research effectiveness as a means to identify knowledge useful for society. Research can be improved and adapted by monitoring the effects of interactions between researchers and non-academic partners; therefore, a monitoring and learning approach was chosen. This study reveals researchers' strategies in engaging with non-academic partners and points to framing conditions considered decisive for successful interactions. It concludes that researchers need to systematically analyse the socio-political context in which they intervene. By providing insights from the ground and reflecting on them in the light of the latest theoretical concepts, this article contributes to the emerging literature founded on practice-based experience.
Abstract:
Radiation metabolomics employing mass spectral technologies represents a plausible means of high-throughput, minimally invasive radiation biodosimetry. A simplified metabolomics protocol is described that employs ubiquitous gas chromatography-mass spectrometry and open source software, including the random forests machine learning algorithm, to uncover latent biomarkers of 3 Gy gamma radiation in rats. Urine was collected from six male Wistar rats and six sham-irradiated controls for 7 days, 4 prior to irradiation and 3 after irradiation. Water and food consumption, urine volume, body weight, and sodium, potassium, calcium, chloride, phosphate and urea excretion showed major effects from exposure to gamma radiation. The metabolomics protocol uncovered several urinary metabolites that were significantly up-regulated (glyoxylate, threonate, thymine, uracil, p-cresol) and down-regulated (citrate, 2-oxoglutarate, adipate, pimelate, suberate, azelaate) as a result of radiation exposure. Thymine and uracil were shown to derive largely from thymidine and 2'-deoxyuridine, which are known radiation biomarkers in the mouse. The radiation metabolomic phenotype in rats appeared to derive from oxidative stress and effects on kidney function. Gas chromatography-mass spectrometry is a promising platform on which to develop the field of radiation metabolomics further and to assist in the design of instrumentation for use in detecting biological consequences of environmental radiation release.
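The biomarker-discovery step this abstract describes can be sketched in simplified form. As a stand-in for the random-forest importance ranking, the toy below scores each metabolite by a plain standardized group difference; the metabolite names come from the abstract, but all intensity values are made-up illustrative numbers, not the study's data.

```python
# Simplified stand-in for the abstract's biomarker ranking: instead of a
# random forest, score each metabolite by the absolute difference of group
# means in pooled-standard-deviation units. All numbers are illustrative.
from statistics import mean, stdev

def score(exposed, control):
    """Standardized group difference for one metabolite."""
    pooled = (stdev(exposed) + stdev(control)) / 2
    return abs(mean(exposed) - mean(control)) / pooled

samples = {
    "glyoxylate": ([9.1, 8.7, 9.4], [5.0, 5.3, 4.8]),   # up-regulated
    "citrate":    ([3.2, 3.0, 3.5], [7.9, 8.2, 7.7]),   # down-regulated
    "creatinine": ([6.0, 6.4, 5.8], [6.1, 5.9, 6.3]),   # unchanged (toy)
}
ranked = sorted(samples, key=lambda m: score(*samples[m]), reverse=True)
print(ranked[-1])  # the unchanged metabolite ranks last
```

A random forest additionally captures interactions between metabolites; this univariate score only illustrates the idea of separating radiation-responsive markers from stable ones.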
Abstract:
Training a system to recognize handwritten words is a task that requires a large amount of data with their correct transcription. However, the creation of such a training set, including the generation of the ground truth, is tedious and costly. One way of reducing the high cost of labeled training data acquisition is to exploit unlabeled data, which can be gathered easily. Making use of both labeled and unlabeled data is known as semi-supervised learning. One of the most general versions of semi-supervised learning is self-training, where a recognizer iteratively retrains itself on its own output on new, unlabeled data. In this paper we propose to apply semi-supervised learning, and in particular self-training, to the problem of cursive, handwritten word recognition. The special focus of the paper is on retraining rules that define what data are actually being used in the retraining phase. In a series of experiments it is shown that the performance of a neural network based recognizer can be significantly improved through the use of unlabeled data and self-training if appropriate retraining rules are applied.
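The self-training loop and the confidence-threshold retraining rule described above can be sketched as follows. The "recognizer" here is a deliberately trivial nearest-centroid classifier on one-dimensional data, and the threshold and data are illustrative assumptions, not the paper's neural network recognizer.

```python
# Minimal self-training sketch: a recognizer iteratively retrains itself on
# its own confident output on unlabeled data. The classifier, confidence
# measure, and data are toy stand-ins for illustration only.

def train_centroids(labeled):
    """Fit a nearest-centroid 'recognizer' on (value, label) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Return (label, confidence); confidence shrinks with distance."""
    label = min(centroids, key=lambda y: abs(x - centroids[y]))
    return label, 1.0 / (1.0 + abs(x - centroids[label]))

def self_train(labeled, unlabeled, threshold=0.5, rounds=3):
    """Retraining rule: keep only predictions above a confidence threshold."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = train_centroids(labeled)
        confident, rest = [], []
        for x in pool:
            y, conf = predict(model, x)
            (confident if conf >= threshold else rest).append((x, y))
        if not confident:
            break
        labeled += confident            # retrain on the recognizer's own output
        pool = [x for x, _ in rest]
    return train_centroids(labeled)

model = self_train([(0.0, "a"), (10.0, "b")], [1.0, 2.0, 9.0, 8.5])
print(predict(model, 1.5)[0])  # → a
```

The retraining rule is the interesting knob: too low a threshold feeds the recognizer its own mistakes, too high a threshold leaves the unlabeled data unused.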
Abstract:
Prior studies suggest that clients need to actively govern knowledge transfer to vendor staff in offshore outsourcing. In this paper, we analyze longitudinal data from four software maintenance offshore outsourcing projects to explore why governance may be needed for knowledge transfer and how governance and the individual learning of vendor engineers interact over time. Our results suggest that self-control is central to learning, but may be hampered by low levels of trust and expertise at the outset of projects. For these foundations to develop, clients initially need to exert high amounts of formal and clan controls to enforce learning activities against barriers to knowledge sharing. Once learning activities occur, trust and expertise increase and control portfolios may show greater emphases on self-control.
Abstract:
This paper examines the social impacts of weather extremes and the processes of social and communicative learning a society undertakes to find alternative ways to deal with the consequences of a crisis. At the beginning of the 20th century, hunger seemed to have been banished from Europe. Switzerland – like many other European countries – was embedded in a globally interdependent trade system, which provided the necessary goods. But at the end of World War I, the very cold and wet summers of 1916/17 (causing crop failure) and the difficulties of wartime trade led to malnutrition and enormous increases in the general cost of living in Switzerland, which shocked the population and contributed to the revolutionary uprisings of 1918. The experience of malnutrition during the last two years of the war made clear that the traditional ways of supplying food in Switzerland lacked crisis stability. Various agents in the field of food production, distribution and consumption therefore searched for alternative ways of securing the food supply: politicians, industrialists, consumer groups, left-wing communitarians and farmers developed several strategies for new approaches to food production. Traditionally, there had been political conflicts in Switzerland between farmers and consumers over price policies, and these largely fuelled the conflict of 1918. Consumers accused farmers of withholding food in order to obtain extortionate prices, while the farmers pointed to the bad harvests as the cause of the price rises. The collaboration of these groups in the search for new forms of food stability made social integration possible again. In addition to other crisis factors, weather extremes can have disastrous impacts and shake a society's self-confidence to its core. But even such crises can lead to processes of substantial learning that allow confidence to regenerate and exert a positive influence on political stabilization.
The paper focuses on the process of learning and the alternative methods of food production suggested by various agents working in the field during the interwar period. To this end, documents of the various associations are analyzed and newspapers are taken into consideration. Through a discourse analysis of food production during the interwar period, the possible solutions that the agents envisaged are brought to light.
Abstract:
Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback–Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing-dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation, while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.
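The sign structure the abstract attributes to the derived rule (pre-before-post elicits potentiation, otherwise depression) can be illustrated with a minimal pairwise STDP sketch. The exponential window, time constant and amplitudes below are common textbook choices, assumed for illustration; they are not the paper's derived rule.

```python
# Minimal pairwise-STDP sketch matching the sign structure described in the
# abstract: pre-before-post -> potentiation, otherwise -> depression.
# Amplitudes (a_plus, a_minus) and the time constant tau are illustrative
# assumptions, not values from the paper.
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one spike pair; dt = t_post - t_pre (ms)."""
    if dt > 0:   # presynaptic spike precedes postsynaptic spike
        return a_plus * math.exp(-dt / tau)
    else:        # postsynaptic spike precedes (or coincides)
        return -a_minus * math.exp(dt / tau)

print(stdp_dw(10.0) > 0)   # potentiation for pre-before-post
print(stdp_dw(-10.0) < 0)  # depression for post-before-pre
```

The paper's contribution is precisely that such a window need not be postulated: it falls out of minimizing a bound on the KL divergence between target and model distributions.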
Abstract:
Implicit task sequence learning (TSL) can be considered an extension of implicit sequence learning, which is typically tested with the classical serial reaction time task (SRTT). By design, in the SRTT there is a correlation between the sequence of stimuli to which participants must attend and the sequence of motor movements/key presses with which participants must respond. The TSL paradigm makes it possible to disentangle this correlation and to separately manipulate the presence/absence of a sequence of tasks, a sequence of responses, and even other streams of information such as stimulus locations or stimulus-response mappings. Here I review the state of TSL research, which points to the critical role of the presence of correlated streams of information in implicit sequence learning. On a more general level, I propose that beyond correlated streams of information, a simple statistical learning mechanism may also be involved in implicit sequence learning, and that the relative contribution of these two explanations differs according to task requirements. With this differentiation, conflicting results can be integrated into a coherent framework.
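The kind of simple statistical learning mechanism the review invokes can be sketched as the estimation of first-order transition probabilities from a stream of symbols. The sequence below is a made-up illustration, not experimental data.

```python
# Toy sketch of a simple statistical learning mechanism: estimate first-order
# transition probabilities between successive elements of a sequence.
from collections import Counter, defaultdict

def transition_probs(seq):
    """Map each element to the conditional probabilities of its successors."""
    counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

probs = transition_probs("ABABCABABC")
print(probs["A"]["B"])  # → 1.0; "B" always follows "A" in this toy sequence
```

A learner tracking such statistics would anticipate the likely next element without any awareness of the full sequence, which is one way the statistical account differs from learning correlated streams as wholes.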
Abstract:
The artificial pancreas is at the forefront of research towards automatic insulin infusion for patients with type 1 diabetes. Due to the high inter- and intra-patient variability of the diabetic population, the need for personalized approaches has been raised. This study presents an adaptive, patient-specific control strategy for glucose regulation based on reinforcement learning, and more specifically on the Actor-Critic (AC) learning approach. The control algorithm provides daily updates of the basal rate and insulin-to-carbohydrate (IC) ratio in order to optimize glucose regulation. A method for the automatic and personalized initialization of the control algorithm is designed based on the estimation of the transfer entropy (TE) between insulin and glucose signals. The algorithm has been evaluated in silico in adults, adolescents and children for 10 days. Three scenarios of initialization, to i) zero values, ii) random values and iii) TE-based values, have been comparatively assessed. The results have shown that when TE-based initialization is used, the algorithm achieves faster learning, with 98%, 90% and 73% in the A+B zones of the Control Variability Grid Analysis for adults, adolescents and children respectively after five days, compared to 95%, 78% and 41% for random initialization and 93%, 88% and 41% for zero initial values. Furthermore, in the case of children, the daily Low Blood Glucose Index reduces much faster when the TE-based tuning is applied. The results imply that automatic and personalized tuning based on TE reduces the learning period and improves the overall performance of the AC algorithm.
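The Actor-Critic mechanism underlying such a controller can be sketched in minimal form: a critic tracks expected performance, and the actor shifts its dose parameter in the direction of actions that beat that expectation. The toy reward below replaces the glucose model, and all parameters, units and rates are illustrative assumptions, not the study's controller.

```python
# Minimal Actor-Critic sketch of the kind of update the abstract describes.
# The glucose model is replaced by a toy reward peaking at dose = 1.0
# (hypothetical units); learning rates and exploration noise are assumptions.
import random

random.seed(0)

def toy_reward(dose):
    """Stand-in for glucose-regulation quality: best near dose = 1.0."""
    return -(dose - 1.0) ** 2

actor_mean = 0.0   # policy parameter: mean daily dose (hypothetical units)
critic_v = 0.0     # critic's value estimate of the (single) state
alpha_actor, alpha_critic, sigma = 0.05, 0.1, 0.3

for _ in range(2000):
    dose = actor_mean + random.gauss(0.0, sigma)   # explore around the mean
    r = toy_reward(dose)
    td_error = r - critic_v                        # one-step TD error
    critic_v += alpha_critic * td_error            # critic learns the baseline
    # Actor: Gaussian policy-gradient step; (dose - actor_mean) is the
    # log-likelihood gradient direction, with 1/sigma^2 folded into the rate.
    actor_mean += alpha_actor * td_error * (dose - actor_mean)

print(round(actor_mean, 1))
```

The study's TE-based initialization would correspond here to starting `actor_mean` near a patient-specific estimate instead of zero, shortening exactly this learning transient.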
Abstract:
This paper applies a policy analysis approach to the question of how to effectively regulate micropollution in a sustainable manner. Micropollution is a complex policy problem characterized by a huge number and diversity of chemical substances, as well as various entry paths into the aquatic environment. It challenges traditional water quality management by calling for new technologies in wastewater treatment and behavioral changes in industry, agriculture and civil society. In light of such challenges, the question arises of how to regulate such a complex phenomenon so that water quality is maintained in the future. What can we learn from past experiences in water quality regulation? To answer these questions, policy analysis focuses strongly on the design and choice of policy instruments and the mix of such measures. In this paper, we review instruments commonly used in past water quality regulation. We evaluate their ability to respond to the characteristics of a more recent water quality problem, i.e., micropollution, in a sustainable way. In this way, we develop a new framework that integrates both the problem dimension (i.e., causes and effects of a problem) and the sustainability dimension (e.g., long-term, cross-sectoral and multi-level) to assess which policy instruments are best suited to regulate micropollution. We conclude that sustainability criteria help to identify an appropriate instrument mix of end-of-pipe and source-directed measures to reduce aquatic micropollution.