951 results for institutional learning
Abstract:
The paper presents an approach to the mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce both the spatial patterns and the extreme values simultaneously. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) for training a neural network to reproduce extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of Doppler radar imagery when used as an external drift or as an input to the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as by analyzing data patterns.
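To make the hybrid idea concrete (a machine-learning trend model combined with geostatistical interpolation of the residuals), here is a minimal sketch, assuming scikit-learn and PyKrige and using synthetic station data; scikit-learn's default optimizer stands in for the conjugate-gradient and Levenberg-Marquardt training compared in the paper, and all variable names and values are illustrative.

import numpy as np
from sklearn.neural_network import MLPRegressor
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
# Synthetic stations: coordinates, elevation (a stand-in for the external drift) and precipitation.
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
elev = rng.uniform(300, 2500, 200)
precip = 0.02 * elev + 5 * np.sin(x / 15) + rng.normal(0, 2, 200)

# 1) Non-linear trend with a small neural network (coordinates + elevation as inputs).
X = np.column_stack([x, y, elev])
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(X, precip)
residuals = precip - net.predict(X)

# 2) Ordinary kriging of the residuals captures the remaining spatial correlation.
ok = OrdinaryKriging(x, y, residuals, variogram_model="exponential")
gridx = np.linspace(0, 100, 50)
gridy = np.linspace(0, 100, 50)
res_grid, res_var = ok.execute("grid", gridx, gridy)

# 3) Final map = neural-network trend on the grid + kriged residuals.
gx, gy = np.meshgrid(gridx, gridy)
elev_grid = np.full(gx.shape, 1000.0)  # placeholder drift values on the grid
trend_grid = net.predict(np.column_stack([gx.ravel(), gy.ravel(), elev_grid.ravel()]))
precip_map = trend_grid.reshape(gx.shape) + res_grid

Kriging with external drift would instead pass the drift variable directly into the kriging system; the residual approach above is only one of several ways to couple the two families of models.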
Abstract:
In an uncertain environment, probabilities are key to predicting future events and making adaptive choices. However, little is known about how humans learn such probabilities and where and how they are encoded in the brain, especially when they concern more than two outcomes. During functional magnetic resonance imaging (fMRI), young adults learned the probabilities of uncertain stimuli through repetitive sampling. Stimuli represented payoffs and participants had to predict their occurrence to maximize their earnings. Choices indicated loss and risk aversion but unbiased estimation of probabilities. The BOLD response in medial prefrontal cortex and the angular gyri increased linearly with the probability of the currently observed stimulus, untainted by its value. Connectivity analyses during rest and task revealed that these regions belonged to the default mode network. The reactivation of past outcomes in memory is proposed as a possible mechanism to explain the engagement of the default mode network in probability learning. A BOLD response related to value was detected only at decision time, mainly in the striatum. It is concluded that activity in inferior parietal and medial prefrontal cortex reflects the amount of evidence accumulated in favor of competing and uncertain outcomes.
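As a purely illustrative sketch of the estimation problem the participants faced (tracking the relative frequencies of more than two outcomes through repeated sampling), the following few lines show a frequency-based estimator; the probabilities, trial count and variable names are invented and this is not the study's analysis code.

import numpy as np

rng = np.random.default_rng(1)
true_p = np.array([0.5, 0.3, 0.2])  # hypothetical probabilities of three stimuli
counts = np.zeros_like(true_p)

for trial in range(300):
    outcome = rng.choice(len(true_p), p=true_p)  # one stimulus is sampled and observed
    counts[outcome] += 1
    estimate = counts / counts.sum()             # relative frequencies: unbiased in the long run
    # estimate[outcome] is the evidence accumulated so far for the outcome just observed,
    # the quantity with which the reported parietal/prefrontal BOLD response scaled linearly.

print(true_p, estimate)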
Abstract:
In fear conditioning, an animal learns to associate an unconditioned stimulus (US), such as a shock, with a conditioned stimulus (CS), such as a tone, so that the presentation of the CS alone can trigger conditioned responses. Recent research on the lateral amygdala (LA) has shown that following cued fear conditioning, only a subset of highly excitable neurons is recruited into the memory trace. Their selective deletion after fear conditioning results in a selective erasure of the fearful memory. I hypothesize that the recruitment of highly excitable neurons depends on responsiveness to stimuli, intrinsic excitability and local connectivity. In addition, I hypothesize that neurons recruited for an initial memory also participate in subsequent memories, and that changes in neuronal excitability affect secondary fear learning. I further hypothesize that the activation of these LA networks is, in itself, sufficient to induce a fearful memory. To address these hypotheses, I will show that A) a rat can learn to associate two successive short-term fearful memories; B) neuronal populations in the LA are competitively recruited into the memory traces depending on individual neuronal advantages as well as advantages granted by the local network. By performing two successive cued fear conditioning experiments, I found that rats were able to learn and extinguish the two successive short-term memories when tested 1 hour after learning for each memory. These rats were equipped with a system of stable extracellular recordings that I developed, which allowed me to monitor neuronal activity during fear learning. 233 individual putative pyramidal neurons could modulate their firing rate in response to the conditioned tone (conditioned neurons) and/or non-conditioned tones (generalizing neurons). Out of these recorded putative pyramidal neurons, 86 (37%) were conditioned to one or both tones. More precisely, one population of neurons encoded a shared memory while another group of neurons likely encoded the memories' new features. Notably, in spite of successful behavioral extinction, the firing rate of those conditioned neurons in response to the conditioned tone remained unchanged throughout memory testing. Furthermore, by analyzing the pre-conditioning characteristics of the conditioned neurons, I determined that it was possible to predict neuronal recruitment based on three factors: 1) initial sensitivity to auditory inputs, with tone-sensitive neurons being more easily recruited than tone-insensitive neurons; 2) baseline excitability levels, with more highly excitable neurons being more likely to become conditioned; and 3) the number of afferent connections received from local neurons, with neurons destined to become conditioned receiving more connections than non-conditioned neurons. Finally, we found that the US could be satisfactorily replaced during fear conditioning by bilateral injections of bicuculline, a γ-aminobutyric acid (GABA) receptor antagonist.
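To make the three-factor prediction concrete, here is a minimal sketch using logistic regression on synthetic values; the thesis's actual statistical model is not specified here, so the classifier, feature distributions and labels are purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 233                                     # recorded putative pyramidal neurons
tone_sensitivity = rng.normal(size=n)       # factor 1: initial sensitivity to auditory inputs
baseline_excitability = rng.normal(size=n)  # factor 2: baseline excitability level
local_afferents = rng.poisson(3, size=n)    # factor 3: afferent connections from local neurons

X = np.column_stack([tone_sensitivity, baseline_excitability, local_afferents])
# Synthetic labels in which all three factors raise the chance of recruitment (illustration only).
logit = 0.8 * tone_sensitivity + 0.6 * baseline_excitability + 0.4 * (local_afferents - 3) - 0.6
recruited = rng.random(n) < 1 / (1 + np.exp(-logit))

clf = LogisticRegression()
print(cross_val_score(clf, X, recruited, cv=5).mean())  # how well recruitment can be predicted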
Abstract:
An active learning method is proposed for the semi-automatic selection of training sets in remote sensing image classification. The method iteratively adds to the current training set the unlabeled pixels for which the predictions of an ensemble of classifiers based on bagged training sets show maximum entropy. In this way, the algorithm selects the pixels that are the most uncertain and that will improve the model if added to the training set. The user is asked to label such pixels at each iteration. Experiments using support vector machines (SVM) on an 8-class QuickBird image show the excellent performance of the method, which equals the accuracy of both a model trained with ten times more pixels and a model whose training set was built using a state-of-the-art SVM-specific active learning method.
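A minimal sketch of the query step described above (entropy of the class votes of a bagged ensemble), assuming scikit-learn's SVC as the base classifier and class labels encoded as 0..K-1; the QuickBird data and the exact ensemble settings of the paper are not reproduced here.

import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

def entropy_query(X_train, y_train, X_pool, n_models=10, n_queries=10, seed=0):
    """Return indices of the pool pixels whose bagged class predictions have maximum entropy."""
    rng = np.random.default_rng(seed)
    n_classes = len(np.unique(y_train))      # classes assumed to be encoded 0 .. K-1
    votes = np.zeros((len(X_pool), n_classes))
    for _ in range(n_models):
        Xb, yb = resample(X_train, y_train, random_state=int(rng.integers(10**6)))
        clf = SVC(kernel="rbf", gamma="scale").fit(Xb, yb)
        votes[np.arange(len(X_pool)), clf.predict(X_pool)] += 1
    p = votes / n_models                     # empirical class distribution per pool pixel
    safe_p = np.where(p > 0, p, 1.0)         # log(1) = 0, so classes with no votes add nothing
    entropy = -np.sum(p * np.log(safe_p), axis=1)
    return np.argsort(entropy)[-n_queries:]  # the most uncertain pixels

At each iteration the returned pixels would be labeled by the user, moved from the pool into the training set, and the query repeated until the labeling budget is exhausted.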
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns taking into account real-space constraints such as geomorphology, networks and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM, automatic mapping of soil and water system pollution using ANN, natural hazards risk analysis (avalanches, landslides), assessments of renewable resources (wind fields) with SVM and ANN models, etc. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
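As a rough illustration of the kind of workflow described above (an SVM classifier in a high-dimensional geo-feature space with a simple feature-relevance step), the following sketch assumes scikit-learn and uses synthetic data in place of DEM- or remote-sensing-derived features.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_samples, n_features = 500, 30                        # e.g. 30 DEM / remote-sensing derived features
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)   # synthetic nonlinear two-class target

model = make_pipeline(
    StandardScaler(),                # geo-features on very different scales need standardisation for SVM
    SelectKBest(f_classif, k=10),    # crude relevance filter over the 30 candidate inputs
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
print(cross_val_score(model, X, y, cv=5).mean())       # cross-validated classification accuracy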