35 results for computer-mediated learning
Abstract:
Context has traditionally been regarded in vision research as a determinant for the interpretation of sensory information on the basis of previously acquired knowledge. Here we propose a novel, complementary perspective by showing that context also specifically affects visual category learning. In two experiments involving sets of compound Gabor patterns, we explored how context, as given by the stimulus set to be learned, affects the internal representation of pattern categories. In Experiment 1, we changed the (local) context of the individual signal classes by changing the configuration of the learning set. In Experiment 2, we varied the (global) context of a fixed class configuration by changing the degree of signal accentuation. Generalization performance was assessed in terms of the ability to recognize contrast-inverted versions of the learning patterns. Both contextual variations yielded distinct effects on learning and generalization, indicating a change in internal category representation. Computer simulations suggest that the latter is related to changes in the set of attributes underlying the production rules of the categories. The implications of these findings for phenomena of contrast (in)variance in visual perception are discussed.
Abstract:
Sustained fixation of a bright coloured stimulus will, on extinction of the stimulus and continued steady fixation, induce an afterimage whose colour is complementary to that of the initial stimulus, an effect thought to be caused by fatigue of the cones and/or of cone-opponent processes. However, to date, very little is known about the specific pathway that generates the coloured afterimage. Using isoluminant coloured stimuli, recent studies have shown that pupil constriction is induced by both onset and offset of the stimulus, the latter being attributed specifically to the subsequent emergence of the coloured afterimage. The aim of this study was to investigate how the offset pupillary constriction is generated in terms of input signals from discrete functional elements of the magno- and/or parvo-cellular pathways, which are known principally to convey luminance and colour signals, respectively. Changes in pupil size were monitored continuously by digital analysis of an infra-red image of the pupil while observers viewed isoluminant green pulsed, ramped or luminance-masked stimuli presented on a computer monitor. It was found that the amplitude of the offset pupillary constriction decreases when a pulsed stimulus is replaced by a temporally ramped stimulus, and is eliminated by a luminance mask. These findings indicate for the first time that the pupillary constriction associated with a coloured afterimage is mediated by the magno-cellular pathway. © 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
In recent years, learning word vector representations has attracted much interest in Natural Language Processing. Word representations, or embeddings, learned using unsupervised methods help address the limitation of traditional bag-of-words approaches, which fail to capture contextual semantics. In this paper we go beyond vector representations at the word level and propose a novel framework that learns higher-level feature representations of n-grams, phrases and sentences using a deep neural network built from stacked Convolutional Restricted Boltzmann Machines (CRBMs). These representations are shown to map syntactically and semantically related n-grams to nearby locations in the hidden feature space. We additionally incorporate these higher-level features into supervised classifier training for two sentiment analysis tasks: subjectivity classification and sentiment classification. Our results demonstrate the success of the proposed framework, with a 4% improvement in accuracy for subjectivity classification and improved results for sentiment classification over models trained without the higher-level features.
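The abstract's central claim — that related n-grams land at nearby locations in a learned feature space — can be illustrated with a minimal sketch. This is not the paper's CRBM model: the word vectors below are hypothetical toy values, and averaging word vectors is a deliberately simple stand-in for the learned n-gram composition; only the nearness test itself is the point.

```python
import math

# Toy word vectors (hypothetical values, for illustration only; the paper
# learns its n-gram features with stacked Convolutional RBMs, not shown here).
word_vecs = {
    "good":  [0.9, 0.1, 0.3],
    "great": [0.8, 0.2, 0.3],
    "movie": [0.1, 0.9, 0.5],
    "film":  [0.2, 0.8, 0.6],
}

def ngram_vec(words):
    """Compose an n-gram representation by averaging its word vectors."""
    dim = len(next(iter(word_vecs.values())))
    return [sum(word_vecs[w][i] for w in words) / len(words) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity: higher means closer in the feature space."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Semantically related bigrams end up close together...
sim_related = cosine(ngram_vec(["good", "movie"]), ngram_vec(["great", "film"]))
# ...while a mismatched pairing is further apart.
sim_unrelated = cosine(ngram_vec(["good", "great"]), ngram_vec(["movie", "film"]))
print(sim_related > sim_unrelated)  # True
```

In the paper's setting the composition is learned rather than a fixed average, but the evaluation of "closeby locations" reduces to exactly this kind of similarity comparison.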
Abstract:
The quantum Jensen-Shannon divergence kernel [1] was recently introduced in the context of unattributed graphs, where it was shown to outperform several commonly used alternatives. In this paper, we study the separability properties of this kernel and propose a way to compute a low-dimensional kernel embedding in which the separation of the different classes is enhanced. The idea stems from the observation that multidimensional scaling embeddings of this kernel show a strong horseshoe-shaped distribution, a pattern known to arise when long-range distances are not estimated accurately. Here we propose to use Isomap to embed the graphs, using only local distance information, onto a new vector space with higher class separability. The experimental evaluation shows the effectiveness of the proposed approach. © 2013 Springer-Verlag.
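The pipeline the abstract describes — turn a kernel matrix into distances, then trust only local distances via Isomap — can be sketched end to end. This is an assumption-laden illustration, not the paper's implementation: the kernel below is an ordinary linear kernel over hypothetical data standing in for the quantum Jensen-Shannon divergence kernel, and the Isomap here is the textbook construction (kNN graph, shortest-path geodesics, classical MDS).

```python
import numpy as np

def kernel_to_distance(K):
    """Kernel-induced distance: d_ij = sqrt(k_ii + k_jj - 2 k_ij)."""
    d = np.diag(K)
    D2 = np.maximum(d[:, None] + d[None, :] - 2.0 * K, 0.0)
    return np.sqrt(D2)

def isomap(D, n_neighbors=3, n_components=2):
    """Minimal Isomap: kNN graph -> geodesic (shortest-path) distances
    -> classical MDS. Only local distances are trusted, as in the paper."""
    n = D.shape[0]
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]   # keep local edges only
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]                      # symmetrise the graph
    for k in range(n):                               # Floyd-Warshall geodesics
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    J = np.eye(n) - np.ones((n, n)) / n              # classical MDS on geodesics
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical kernel over 6 "graphs" forming two tight clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (3, 2)), rng.normal(3, 0.1, (3, 2))])
K = X @ X.T                  # linear kernel, stand-in for the QJSD kernel
Y = isomap(kernel_to_distance(K))
print(Y.shape)  # (6, 2)
```

With a real QJSD kernel one would only swap the construction of `K`; the distance conversion and embedding steps are unchanged.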
Abstract:
Graph-based representations have been used with considerable success in computer vision for the abstraction and recognition of object shape and scene structure. Despite this, the methodology available for learning structural representations from sets of training examples is relatively limited. In this paper we take a simple yet effective Bayesian approach to attributed graph learning. We present a naïve node-observation model, in which we make the important assumption that the observation of each node and each edge is independent of the others; we then propose an EM-like approach to learn a mixture of these models and a Minimum Message Length criterion for component selection. Moreover, to avoid the bias that could arise from a single estimate of the node correspondences, we estimate the sampling probability over all possible matches. Finally, we show the utility of the proposed approach on popular computer vision tasks such as 2D and 3D shape recognition. © 2011 Springer-Verlag.
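The EM-over-a-mixture idea in this abstract can be sketched in a heavily simplified form. The sketch below assumes node correspondences are known and fixed (the paper explicitly avoids this by marginalising over matches), and it omits the Minimum Message Length model-selection step: each graph is reduced to a binary edge-indicator vector, and each mixture component is an independent-Bernoulli model over those indicators, in the spirit of the naive independence assumption.

```python
import numpy as np

def em_bernoulli_mixture(X, n_components=2, n_iter=50, seed=0):
    """EM for a mixture of independent-Bernoulli models over binarised
    edge observations (a simplification of the naive node-observation
    model: node correspondences are assumed known and fixed)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_components, 1.0 / n_components)        # mixing weights
    theta = rng.uniform(0.3, 0.7, (n_components, d))      # edge probabilities
    for _ in range(n_iter):
        # E-step: responsibilities from per-component log-likelihoods
        log_p = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and smoothed Bernoulli parameters
        Nk = r.sum(axis=0)
        pi = Nk / n
        theta = (r.T @ X + 1.0) / (Nk[:, None] + 2.0)     # Laplace smoothing
    return pi, theta, r

# Hypothetical data: edge-indicator vectors from two structural classes,
# one activating the first three edge slots, the other the last three.
rng = np.random.default_rng(1)
A = (rng.random((10, 6)) < 0.9).astype(float) * np.array([1, 1, 1, 0, 0, 0])
B = (rng.random((10, 6)) < 0.9).astype(float) * np.array([0, 0, 0, 1, 1, 1])
pi, theta, r = em_bernoulli_mixture(np.vstack([A, B]))
print(np.round(pi, 2))   # fitted mixing weights
```

The paper's full model additionally treats correspondences probabilistically and scores the number of components with MML, neither of which is attempted here.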