775 results for Learning from Examples
Abstract:
This thesis attempts to quantify the amount of information needed to learn certain tasks. The tasks chosen vary from learning functions in a Sobolev space using radial basis function networks to learning grammars in the principles and parameters framework of modern linguistic theory. These problems are analyzed from the perspective of computational learning theory and certain unifying perspectives emerge.
Abstract:
This paper describes the processes used by students to learn from worked-out examples and from working through problems. Evidence is derived from protocols of students learning secondary school mathematics and physics. The students acquired knowledge from the examples in the form of productions (condition --> action): first discovering conditions under which the actions are appropriate and then elaborating the conditions to enhance efficiency. Students devoted most of their attention to the condition side of the productions. Subsequently, they generalized the productions for broader application and acquired specialized productions for special problem classes.
Abstract:
An approach is proposed for inferring implicative logical rules from examples. The concept of a good diagnostic test for a given set of positive examples lies at the basis of this approach. The process of inferring good diagnostic tests is considered as a process of inductive common sense reasoning. The incremental approach to learning is implemented in the algorithm DIAGaRa for inferring implicative rules from examples.
Abstract:
Learning from mistakes has proven to be an effective way of learning in interactive document classification. In this paper we propose an approach to learning effectively from mistakes in the email filtering process. Our system employs both the SVM and Winnow machine learning algorithms to learn from misclassified email documents and refine the email filtering process accordingly. Our experiments show that training an email filter in this way becomes much more effective and faster.
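The mistake-driven updates of the Winnow algorithm mentioned in this abstract can be sketched in a few lines. The toy binary features and target below are illustrative assumptions (e.g. word-presence features for an email), not the paper's actual filtering setup:

```python
# Minimal Winnow sketch: multiplicative weight updates made only on mistakes.
# Features are binary (word present / absent); the target and data are toy values.

def winnow_predict(w, x, theta):
    """Predict 1 (e.g. 'spam') iff the weighted feature sum reaches the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def winnow_train(examples, n_features, alpha=2.0, epochs=10):
    w = [1.0] * n_features
    theta = n_features / 2
    for _ in range(epochs):
        for x, y in examples:
            y_hat = winnow_predict(w, x, theta)
            if y_hat != y:                      # learn only from misclassified examples
                factor = alpha if y == 1 else 1.0 / alpha
                w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, theta

# Toy target: positive iff feature 0 or feature 2 is present.
data = [([1, 0, 0, 0], 1), ([0, 0, 1, 0], 1), ([0, 1, 0, 1], 0), ([0, 0, 0, 0], 0)]
w, theta = winnow_train(data, 4)
print([winnow_predict(w, x, theta) for x, _ in data])  # [1, 1, 0, 0]
```

Because the weights change only on misclassified examples, training cost scales with the number of mistakes rather than the number of documents, which matches the "learning from mistakes" framing above.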
Abstract:
This briefing provides a summary of learning from three workshops on HEA, and examples of completed or near-completed HEAs to illustrate these learning points. It is recognised that this experience is evolving.
Abstract:
Learning case examples and best practice from the pilot areas of Communities for Health. These pilots detail how communities have addressed a wide range of health issues and tackled health inequalities. Rural and urban deprived areas have worked to address obesity, healthy eating, mental health and sexual health.
Abstract:
There are many learning problems for which the examples given by the teacher are ambiguously labeled. In this thesis, we will examine one framework of learning from ambiguous examples known as Multiple-Instance learning. Each example is a bag, consisting of any number of instances. A bag is labeled negative if all instances in it are negative. A bag is labeled positive if at least one instance in it is positive. Because the instances themselves are not labeled, each positive bag is an ambiguous example. We would like to learn a concept which will correctly classify unseen bags. We have developed a measure called Diverse Density and algorithms for learning from multiple-instance examples. We have applied these techniques to problems in drug design, stock prediction, and image database retrieval. These serve as examples of how to translate the ambiguity in the application domain into bags, as well as successful examples of applying Diverse Density techniques.
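The bag-labeling rule described in this abstract (a bag is positive iff at least one instance in it is positive) can be made concrete in a few lines. The instance test and data below are hypothetical illustrations, not from the thesis:

```python
# Multiple-instance labeling rule: a bag is labeled positive if ANY instance
# in it is positive, negative only if ALL instances are negative.

def bag_label(instances, is_positive):
    """Return +1 if at least one instance is positive, else -1."""
    return 1 if any(is_positive(x) for x in instances) else -1

# Hypothetical 1-D instances; the "positive" region is a small interval around 0.5.
is_positive = lambda x: abs(x - 0.5) < 0.1

positive_bag = [0.1, 0.52, 0.9]   # ambiguous: only one instance is actually positive
negative_bag = [0.1, 0.3, 0.9]    # every instance is negative

print(bag_label(positive_bag, is_positive))  # 1
print(bag_label(negative_bag, is_positive))  # -1
```

The asymmetry is the source of the ambiguity: a negative bag certifies every instance, while a positive bag tells us only that some unidentified instance is positive — which is what Diverse Density is designed to resolve.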
Abstract:
We study the dynamics of on-line learning in multilayer neural networks where training examples are sampled with repetition and where the number of examples scales with the number of network weights. The analysis is carried out using the dynamical replica method aimed at obtaining a closed set of coupled equations for a set of macroscopic variables from which both training and generalization errors can be calculated. We focus on scenarios whereby training examples are corrupted by additive Gaussian output noise and regularizers are introduced to improve the network performance. The dependence of the dynamics on the noise level, with and without regularizers, is examined, as well as that of the asymptotic values obtained for both training and generalization errors. We also demonstrate the ability of the method to approximate the learning dynamics in structurally unrealizable scenarios. The theoretical results show good agreement with those obtained by computer simulations.
Abstract:
In this chapter, the way in which varied terms such as Networked learning, e-learning and Technology Enhanced Learning (TEL) have each become colonised to support a dominant, economically-based world view of educational technology is discussed. Critical social theory about technology, language and learning is brought into dialogue with examples from a corpus-based Critical Discourse Analysis (CDA) of UK policy texts for educational technology between 1997 and 2012. Though these policy documents offer much promise for enhancement of people’s performance via technology, the human presence to enact such innovation is missing. Given that ‘academic workload’ is a ‘silent barrier’ to the implementation of TEL strategies (Gregory and Lodge, 2015), the analysis further exposes, through empirical examples, that the academic labour of both staff and students appears to be unacknowledged. Global neoliberal capitalist values have strongly territorialised the contemporary university (Hayes & Jandric, 2014), utilising existing naïve, utopian arguments about what technology alone achieves. Whilst the chapter reveals how humans are easily ‘evicted’, even from discourse about their own learning (Hayes, 2015), it also challenges staff and students to seek to re-occupy the important territory of policy to subvert the established order. We can use the very political discourse that has disguised our networked learning practices, in new explicit ways, to restore our human visibility.
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and thereby mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets show a clear advantage of the proposed approaches when the training set is small.
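One standard way to encode the "vary smoothly over neighborhoods" encouragement above is the graph Laplacian quadratic form f^T L f = (1/2) * sum_ij W_ij (f_i - f_j)^2, which penalizes decisions f that flip between graph-adjacent points. This sketch uses a toy adjacency matrix, not the dissertation's actual regularizer:

```python
import numpy as np

# Toy 3-node path graph: node 0 -- node 1 -- node 2.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # symmetric adjacency matrix
L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian

f_smooth = np.array([1.0, 1.0, 1.0])     # constant over the graph
f_rough  = np.array([1.0, -1.0, 1.0])    # flips sign across every edge

print(f_smooth @ L @ f_smooth)  # 0.0  -> no smoothness penalty
print(f_rough @ L @ f_rough)    # 8.0  -> heavily penalized
```

Adding a term like lambda * f^T L f to a training loss is the usual manifold-regularization mechanism; the principal components of the adjacency matrix mentioned in the second approach are eigenvectors of closely related matrices.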
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods there are, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
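The per-datum deviation statistic at the core of the anomaly detection above is the norm of the residual after projecting each datum onto the tracked subspace. A minimal sketch, assuming the subspace is given as an orthonormal basis Q (the streaming updates and multiscale tree are omitted):

```python
import numpy as np

def deviation(x, Q):
    """Norm of the component of x orthogonal to span(Q), where Q has
    orthonormal columns: the per-datum anomaly statistic."""
    return np.linalg.norm(x - Q @ (Q.T @ x))

# Toy stream in R^3: points near the x-axis, plus one abrupt outlier.
Q = np.array([[1.0], [0.0], [0.0]])      # assumed learned 1-D subspace
stream = [np.array([2.0, 0.0, 0.0]),
          np.array([-1.0, 0.1, 0.0]),
          np.array([0.0, 3.0, 4.0])]     # the anomaly

scores = [deviation(x, Q) for x in stream]
print(scores)  # the last score is far larger -> flag as an abrupt change
```

Thresholding this sequence of scores gives a multivariate analogue of classical changepoint detection: an abrupt jump in the deviation statistic signals that the data no longer lie near the slowly varying model.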
Abstract:
Except for a few large scale projects, language planners have tended to talk and argue among themselves rather than to see language policy development as an inherently political process. A comparison with a social policy example, taken from the United States, suggests that it is important to understand the problem and to develop solutions in the context of the political process, as this is where decisions will ultimately be made.
Abstract:
This article was written by a Swiss-German historical demographer after having visited different Brazilian universities in 1984 as a guest professor. It aims at promoting a real dialog between developed and developing countries, commencing the discussion with the question: Can we learn from each other? An affirmative answer is given, but not in the superficial manner in which the discussion partners simply want to give each other some "good advice" or in which the one declares his country's own development to be the solely valid standard. Three points are emphasized: 1. Using infant mortality in S. Paulo from 1908 to 1983 as an example, it is shown that Brazil has at its disposal excellent, highly varied research literature that is unjustifiably unknown to us (in Europe) for the most part. Brazil by no means needs our tutoring lessons as regards the causal relationships; rather, we could learn two things from Brazil about this. For one, it becomes clear that our almost exclusively medical-biological view is inappropriate for passing a judgment on the present-day problems in Brazil and that any conclusions so derived are thus only transferable to a limited extent. For another, we need to reinterpret the history of infant mortality in our own countries up to the past few decades in a much more encompassing "Brazilian" sense. 2. A fruitful dialog can only take place if both partners frankly present their problems. For this reason, the article refers with much emphasis to our present problems in dealing with death and dying, problems arising near the end of the demographic and epidemiologic transitions: the superannuation of the population, chronic, incurable illnesses as the main causes of death, the manifold dependencies of more and more elderly and very old people at the end of a long life. Brazil seems to be catching up to us in this and will be confronted with these problems sooner or later. A far-sighted discussion already at this time thus seems useful.
3. The article, however, does not want to conclude with the rather depressing state of affairs of problems successively superseding one another. Despite the caution that is definitely warranted when prognoses are made on the basis of extrapolations from historical findings, the foreseeable development, especially of the epidemiologic transition in the direction of a rectangular survival curve, nevertheless provides good reason for being rather optimistic about the future: first with regard to the development in our own countries, but then, assuming that the present similar tendencies of development hold, also with regard to Brazil.