882 results for reinforcement learning, cryptography, machine learning, deep learning, Deep Q-Learning (DQN), AES
Abstract:
Ecologically and evolutionarily oriented research on learning has traditionally been carried out on vertebrates and bees. While less sophisticated than those animals, fruit flies (Drosophila) are capable of several forms of learning and have the advantage of a short generation time, which makes them an ideal system for experimental evolution studies. This review summarizes the insights into evolutionary questions about learning gained over the last decade from evolutionary experiments on Drosophila. These experiments demonstrate that Drosophila have the genetic potential to evolve substantially improved learning performance in ecologically relevant learning tasks. In at least one set of selected populations the improved learning generalized to a task other than the one used to impose selection, involving a different behavior, different stimuli, and a different sensory channel for the aversive reinforcement. This improvement in learning ability was associated with reductions in other fitness-related traits, such as larval competitive ability and lifespan, pointing to evolutionary trade-offs of improved learning. These trade-offs were confirmed by other evolutionary experiments in which reduced learning performance was observed as a correlated response to selection for tolerance to larval nutritional stress or for delayed aging. Such trade-offs could be one reason why fruit flies have not fully exploited their evolutionary potential for learning ability. Finally, another evolutionary experiment with Drosophila provided the first direct evidence for the long-standing idea that learning can under some circumstances accelerate, and under others slow down, genetically based evolutionary change. These results demonstrate the usefulness of fruit flies as a model system for addressing evolutionary questions about learning.
Abstract:
Background: One characteristic of post-traumatic stress disorder (PTSD) is an inability to adapt to a safe environment, i.e., to change behavior when predictions of adverse outcomes are not met. Recent studies have also indicated that PTSD patients have altered pain processing, with hyperactivation of the putamen and insula to aversive stimuli (Geuze et al., 2007). The present study examined neuronal responses to aversive and predicted aversive events. Methods: Twenty-four trauma-exposed non-PTSD controls and nineteen subjects with PTSD underwent fMRI during a partial reinforcement fear conditioning paradigm, with a mild electric shock as the unconditioned stimulus (UCS). Three conditions were analyzed: actual presentations of the UCS, events when a UCS was expected but omitted (CS+), and events when the UCS was neither expected nor delivered (CS-). Results: The UCS evoked significant alterations in the pain matrix, consisting of the brainstem, the midbrain, the thalamus, the insula, the anterior and middle cingulate, and the contralateral somatosensory cortex. PTSD subjects displayed bilaterally elevated putamen activity to the electric shock compared to controls. In trials when the UCS was expected but omitted, significant activations were observed in the brainstem, the midbrain, the anterior insula and the anterior cingulate. PTSD subjects displayed similar activations, but also elevated activations in the amygdala and the posterior insula. Conclusions: These results indicate altered fear and safety learning in PTSD; the neuronal activations are further explored in terms of functional connectivity using psychophysiological interaction analyses.
Abstract:
The EVS4CSCL project arises in the context of a Computer Supported Collaborative Learning (CSCL) environment. Previous UOC projects created a generic CSCL platform (CLPL) to facilitate the development of CSCL applications. A discussion forum (DF) was the first application developed on the framework. This discussion forum differed from other products on the marketplace in its focus on the learning process. The DF covered the specification and elaboration phases of the discussion learning process, but it lacked support for the consensus phase. The consensus phase in a learning environment is not something to be achieved but something to be tested. Such tests are usually carried out with Electronic Voting System (EVS) tools, but a consensus test is not an assessment test: we do not evaluate students by their answers but by their discussion activity. Our educational EVS could be used as a discussion catalyst, proposing a discussion about the results of an initial query, or it could be used after a discussion period to show how the discussion changed the students' minds (consensus). It could also be used by the teacher as a quick way to find out where a student needs reinforcement. That is important in a distance-learning environment, where there is no direct contact between teacher and student and it is difficult to detect learning gaps. In an educational environment, assessment is a must, and the EVS provides direct assessment through peer usefulness evaluation and teacher marks on every query created, as well as indirect assessment from statistics on user activity.
Abstract:
We present a novel filtering method for multispectral satellite image classification. The proposed method learns a set of spatial filters that maximize the class separability of a binary support vector machine (SVM) through a gradient descent approach. Regularization issues are discussed in detail, and a Frobenius-norm regularization is proposed to efficiently exclude uninformative filter coefficients. Experiments carried out on multiclass one-against-all classification and target detection show the capabilities of the learned spatial filters.
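As a rough illustration of the kind of procedure this abstract describes (not the authors' implementation), the sketch below tunes a single 3x3 spatial filter by gradient descent on a linear SVM hinge loss with a Frobenius-norm penalty; the synthetic single-band image, labels, step sizes and the finite-difference gradient are illustrative assumptions only.

```python
# Illustrative sketch: learn a 3x3 spatial filter whose output improves a
# linear SVM's separability, with a Frobenius-norm penalty on the filter.
import numpy as np
from scipy.ndimage import convolve
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic single-band "image": class-1 pixels sit in a textured region.
img = rng.normal(size=(64, 64))
labels = np.zeros((64, 64), dtype=int)
labels[16:48, 16:48] = 1
img[16:48, 16:48] += 0.8 * np.sin(np.arange(16, 48))[None, :]  # add texture

def objective(filt, lam=0.1):
    """Mean hinge loss of a linear SVM on filtered features + Frobenius penalty."""
    feat = convolve(img, filt.reshape(3, 3), mode="reflect")
    X = np.stack([img.ravel(), feat.ravel()], axis=1)
    y = labels.ravel()
    svm = LinearSVC(C=1.0, dual=False).fit(X, y)
    signs = 2 * y - 1                      # {0,1} labels -> {-1,+1}
    scores = svm.decision_function(X)
    hinge = np.maximum(0.0, 1.0 - signs * scores).mean()
    return hinge + lam * np.sum(filt ** 2)

# Gradient descent with a finite-difference gradient (keeps the sketch simple).
filt = rng.normal(scale=0.1, size=9)
eps, lr = 1e-3, 0.5
for it in range(20):
    base = objective(filt)
    grad = np.zeros_like(filt)
    for i in range(9):
        pert = filt.copy()
        pert[i] += eps
        grad[i] = (objective(pert) - base) / eps
    filt -= lr * grad
    print(f"iter {it:2d}  objective {base:.4f}")
```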
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. MKL has recently gained considerable attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, each exploiting a different source of information. An efficient algorithm named SimpleMKL, which wraps a Support Vector Regression model to optimize the MKL weights, is used for the analysis. In this way, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested with simulated linear and nonlinear time series (AutoRegressive, Hénon and Lorenz series).
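A simplified sketch of the MKL idea described above (not SimpleMKL itself): each candidate lag of a synthetic AR(2) series defines its own RBF basis kernel, an SVR with a precomputed kernel is fitted on the weighted combination, and the kernel weights are updated from the dual coefficients and renormalised onto the simplex; lags whose weights shrink toward zero are treated as uninformative. The data, step size and crude projection below are assumptions for illustration only.

```python
# Simplified MKL sketch (not the SimpleMKL algorithm): one RBF kernel per
# candidate lag of an AR(2) series; weights updated from the SVR duals.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# AR(2) series: only lags 1 and 2 are informative, lag 3 is a distractor.
n = 300
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)

lags = [1, 2, 3]
X = np.stack([x[3 - l:-l] for l in lags], axis=1)   # columns = lagged inputs
y = x[3:]

Ks = [rbf_kernel(X[:, [j]], gamma=1.0) for j in range(len(lags))]
d = np.full(len(lags), 1.0 / len(lags))             # kernel weights on the simplex

for it in range(30):
    K = sum(w * Kj for w, Kj in zip(d, Ks))
    svr = SVR(kernel="precomputed", C=10.0, epsilon=0.01).fit(K, y)
    beta = np.zeros(len(y))
    beta[svr.support_] = svr.dual_coef_.ravel()
    # Standard MKL gradient form: dJ/d_m = -0.5 * beta^T K_m beta
    grad = np.array([-0.5 * beta @ Kj @ beta for Kj in Ks])
    d = np.maximum(d - 0.05 * grad, 0.0)
    d /= d.sum()                                    # crude projection to the simplex

print("kernel weight per lag:", dict(zip(lags, np.round(d, 3))))
```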
Abstract:
The influence of proximal olfactory cues on place learning and memory was tested in two different spatial tasks. Rats were trained to find a hole leading to their home cage or a single food source in an array of petri dishes. The two apparatuses differed in both the type of reinforcement (return to the home cage or food reward) and the local characteristics of the goal (masked holes or salient dishes). In both cases, the goal was in a fixed location relative to distant visual landmarks and could be marked by a local olfactory cue. Thus, the position of the goal was defined by two sets of redundant cues, each of which was sufficient to allow discrimination of the goal location. These experiments were conducted with two strains of hooded rats (Long-Evans and PVG), which show different speeds of acquisition in place learning tasks. They revealed that the presence of an olfactory cue marking the goal facilitated learning of its location and that the facilitation persisted after removal of the cue. Thus, the proximal olfactory cue appeared to potentiate learning and memory of the goal location relative to distant environmental cues. This facilitating effect was detected only when the expression of spatial memory was not already optimal, i.e., during the early phase of acquisition. It was not limited to a particular strain.
Abstract:
The present research deals with an application of artificial neural networks to multitask learning from spatial environmental data. The real case study (sediment contamination of Lake Geneva) comprises 8 pollutants. There are different relationships between these variables, from linear correlations to strong nonlinear dependencies. The main idea is to construct subsets of pollutants that can be efficiently modeled together within the multitask framework. The proposed two-step approach is based on: 1) a criterion of nonlinear predictability of each variable, obtained by analyzing all possible models composed from the remaining variables using a General Regression Neural Network (GRNN) as the model; 2) multitask learning of the best subset using a multilayer perceptron, followed by spatial predictions. The results of the study are analyzed using both machine learning and geostatistical tools.
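Since a General Regression Neural Network is essentially Nadaraya-Watson kernel regression, step 1 of such an approach can be sketched as scoring each variable's leave-one-out predictability from the remaining variables. The sketch below uses made-up data and an arbitrary bandwidth, so it only illustrates the criterion, not the study's actual models.

```python
# Sketch of the predictability criterion with synthetic data: score how well
# each "pollutant" is predicted from the others using a GRNN
# (Nadaraya-Watson kernel regression) and leave-one-out error.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 4
Z = rng.normal(size=(n, p))
Z[:, 1] = 0.8 * Z[:, 0] + 0.1 * rng.normal(size=n)      # linearly related pair
Z[:, 3] = np.tanh(Z[:, 2]) + 0.1 * rng.normal(size=n)   # nonlinearly related pair

def grnn_loo_mse(X, y, sigma=1.0):
    """Leave-one-out MSE of a GRNN: kernel-weighted average of training targets."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                 # leave each point out of its own prediction
    pred = (W @ y) / W.sum(axis=1)
    return np.mean((pred - y) ** 2)

for k in range(p):
    others = [j for j in range(p) if j != k]
    err = grnn_loo_mse(Z[:, others], Z[:, k])
    print(f"variable {k}: LOO MSE predicted from the rest = {err:.3f}")
```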
Abstract:
Our work focuses on reducing the workload of designers of adaptive courses in the complex task of authoring adaptive learning designs adjusted to specific user characteristics and the user context. We propose an adaptation platform consisting of a set of intelligent agents, where each agent carries out an independent adaptation task. The agents apply machine learning techniques to support user modelling for the adaptation process.
Abstract:
This paper presents SiMR, a simulator of the Rudimentary Machine designed to be used in a first course on computer architecture in Software Engineering and Computer Engineering programmes. The Rudimentary Machine contains all the basic elements of a RISC computer, and SiMR allows editing, assembling and executing programmes for this processor. SiMR is used at the Universitat Oberta de Catalunya as one of the most important resources in the Virtual Computing Architecture and Organisation Laboratory: students work at home with the simulator, and reports containing their work are automatically generated for evaluation by lecturers. The results of a survey show that most students consider SiMR a highly necessary or even indispensable resource for learning the basic concepts of computer architecture.
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted tools that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, especially in bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, but their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions; another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in learning algorithms.
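The fast cross-validation mentioned in the abstract is not spelled out here, but a standard result of that kind for regularized least-squares (shown below as a generic sketch, not the thesis' specific algorithm) is that exact leave-one-out residuals follow from a single fit via the hat matrix H = K (K + lambda*I)^(-1), with loo_i = (y_i - f_i) / (1 - H_ii). Data and hyperparameters are illustrative.

```python
# Generic sketch: exact leave-one-out residuals of kernel regularized
# least-squares from one matrix inversion, via the hat matrix.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

K = rbf_kernel(X, gamma=0.5)
for lam in (1e-3, 1e-1, 1e1):
    H = K @ np.linalg.inv(K + lam * np.eye(len(y)))   # smoother / hat matrix
    f = H @ y                                         # in-sample predictions
    loo = (y - f) / (1.0 - np.diag(H))                # exact LOO residuals
    print(f"lambda={lam:g}  LOO MSE={np.mean(loo ** 2):.4f}")
```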
Abstract:
Network virtualisation is gaining considerable attention as a solution to the ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organisation capabilities. We propose a multi-agent learning algorithm that carries out substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time, while ensuring that virtual network quality-of-service requirements such as packet drop rate and virtual link delay are not affected.
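A toy sketch of the decentralised idea (illustrative only, not the paper's algorithm): each substrate node runs an independent tabular Q-learning agent that chooses a reservation level for its hosted virtual resources and is rewarded for meeting demand without over-reserving. The state/action discretisation, reward shape and learning rates below are assumptions.

```python
# Toy sketch: one independent Q-learning agent per substrate node picks a
# reservation level; reward favours meeting demand while reserving little.
import numpy as np

rng = np.random.default_rng(0)
N_NODES, LEVELS, EPISODES = 4, 5, 3000        # reservation levels 0..4 units
Q = np.zeros((N_NODES, LEVELS, LEVELS))       # Q[node, observed demand, action]
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(demand, reserved):
    if reserved < demand:                     # dropped virtual traffic: heavy penalty
        return -2.0 * (demand - reserved)
    return 1.0 - 0.2 * (reserved - demand)    # demand met, minus idle-reservation cost

for ep in range(EPISODES):
    demand = rng.integers(0, LEVELS, size=N_NODES)    # per-node virtual load
    for node in range(N_NODES):
        s = demand[node]
        a = rng.integers(LEVELS) if rng.random() < eps else int(np.argmax(Q[node, s]))
        r = reward(s, a)
        s_next = rng.integers(0, LEVELS)              # next observed demand
        Q[node, s, a] += alpha * (r + gamma * Q[node, s_next].max() - Q[node, s, a])

print("greedy reservation per observed demand, node 0:", Q[0].argmax(axis=1))
```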
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
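A minimal sketch of the regularized least-squares flavour of preference learning discussed above (a generic pairwise objective, not the thesis' exact algorithms): with the all-pairs Laplacian L = n*I - 1*1^T, minimising the squared error over score differences plus a kernel regulariser has the closed-form dual solution a = (L K + lambda*I)^(-1) L y. The data and hyperparameters below are illustrative.

```python
# Minimal pairwise regularized least-squares ranker (generic sketch).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 150
X = rng.normal(size=(n, 3))
scores = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)  # true ranking scores

K = rbf_kernel(X, gamma=0.3)
L = n * np.eye(n) - np.ones((n, n))          # encodes all preference pairs
lam = 1.0
a = np.linalg.solve(L @ K + lam * np.eye(n), L @ scores)

f = K @ a                                    # learned ranking scores
tau, _ = kendalltau(f, scores)
print("Kendall tau vs. true ordering:", round(tau, 3))
```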