34 results for 380305 Knowledge Representation and Machine Learning
Abstract:
Successful motor performance requires the ability to adapt motor commands to task dynamics. A central question in movement neuroscience is how these dynamics are represented. Although it is widely assumed that dynamics (e.g., force fields) are represented in intrinsic, joint-based coordinates (Shadmehr R, Mussa-Ivaldi FA. J Neurosci 14: 3208-3224, 1994), recent evidence has questioned this proposal. Here we reexamine the representation of dynamics in two experiments. By testing generalization following changes in shoulder, elbow, or wrist configurations, the first experiment tested for extrinsic, intrinsic, or object-centered representations. No single coordinate frame accounted for the pattern of generalization. Rather, generalization patterns were better accounted for by a mixture of representations or by models that assumed local learning and graded, decaying generalization. A second experiment, in which we replicated the design of an influential study that had suggested encoding in intrinsic coordinates (Shadmehr and Mussa-Ivaldi 1994), yielded similar results. That is, we could not find evidence that dynamics are represented in a single coordinate system. Taken together, our experiments suggest that internal models do not employ a single coordinate system when generalizing and may well be represented as a mixture of coordinate systems, as a single system with local learning, or both.
Abstract:
This paper investigates several approaches to bootstrapping a new spoken language understanding (SLU) component in a target language, given a large dataset of semantically annotated utterances in some other source language. The aim is to reduce the cost of porting a spoken dialogue system from one language to another by minimising the amount of data required in the target language. Since word-level semantic annotations are costly, Semantic Tuple Classifiers (STCs) are used in conjunction with statistical machine translation models, both of which are trained from unaligned data, to further reduce development time. The paper presents experiments in which a French SLU component in the tourist information domain is bootstrapped from English data. Results show that training STCs on automatically translated data produced the best performance for predicting the utterance's dialogue act type; however, individual slot/value pairs are best predicted by training STCs on the source language and using them to decode translated utterances. © 2010 ISCA.
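The two bootstrapping strategies compared in this abstract (train an STC on automatically translated data vs. train on the source language and translate utterances at decode time) can be sketched as follows. Note that `translate` and `SemanticTupleClassifier` here are toy stand-ins for the paper's statistical MT system and STC model, not real APIs:

```python
def translate(utterance, direction):
    # Placeholder for a statistical machine translation system
    # (e.g. English<->French); identity function as a stand-in.
    return utterance

class SemanticTupleClassifier:
    """Toy stand-in: a real STC learns per-semantic-tuple classifiers
    over n-gram features of the utterance."""
    def __init__(self):
        self.memory = {}
    def train(self, pairs):
        # pairs: (utterance, semantics) tuples, no word-level alignment.
        self.memory = dict(pairs)
    def decode(self, utterance):
        return self.memory.get(utterance, {})

# Hypothetical source-language (English) training data.
english_data = [
    ("cheap hotel in the centre",
     {"type": "inform", "pricerange": "cheap"}),
]

# Strategy A: translate the training data, then train a French STC.
french_train = [(translate(u, "en->fr"), sem) for u, sem in english_data]
stc_fr = SemanticTupleClassifier()
stc_fr.train(french_train)

# Strategy B: train an English STC; at run time, translate the French
# input into English and decode the translation.
stc_en = SemanticTupleClassifier()
stc_en.train(english_data)

def understand_french(utterance):
    return stc_en.decode(translate(utterance, "fr->en"))
```

Strategy A pays the translation cost once at training time; Strategy B keeps the classifier in the source language but makes every run-time prediction depend on translation quality, which matches the abstract's finding that the two strategies favour different subtasks.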
Abstract:
Information-theoretic active learning has been widely studied for probabilistic models. For simple regression, an optimal myopic policy is easily tractable. However, for other tasks and with more complex models, such as classification with nonparametric models, the optimal solution is harder to compute, and current approaches make approximations to achieve tractability. We propose an approach that expresses information gain in terms of predictive entropies and apply this method to the Gaussian Process Classifier (GPC). Our approach makes minimal approximations to the full information-theoretic objective. Our experimental performance compares favourably to that of many popular active learning algorithms, with equal or lower computational complexity. We also compare well to decision-theoretic approaches, which are privy to more information and require much more computational time. Second, by further developing a reformulation of binary preference learning as a classification problem, we extend our algorithm to Gaussian Process preference learning.
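The information-gain objective expressed through predictive entropies (the entropy of the averaged predictive distribution minus the average entropy of the individual sampled predictions) can be sketched with Monte Carlo samples of the predictive probability. The function names and the toy sampling setup below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def binary_entropy(p):
    # Entropy in nats of a Bernoulli with success probability p.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def predictive_information_gain(prob_samples):
    """Information-gain score for binary classification.

    prob_samples: array of shape (S, N), S Monte Carlo samples of
    P(y=1 | x, f) for N candidate points (e.g. samples of the latent
    GP function squashed through the link function).

    Returns, per candidate point:
        H[ mean_s p_s ]  -  mean_s H[ p_s ]
    i.e. entropy of the marginal prediction minus expected
    conditional entropy; high where the samples disagree.
    """
    marginal_entropy = binary_entropy(prob_samples.mean(axis=0))
    expected_conditional_entropy = binary_entropy(prob_samples).mean(axis=0)
    return marginal_entropy - expected_conditional_entropy

# Toy usage: score 5 candidate points from 1000 sampled predictions
# and query the one with the highest expected information gain.
samples = np.random.beta(2, 2, size=(1000, 5))
best = int(np.argmax(predictive_information_gain(samples)))
```

A point where every sample agrees on p = 0.5 scores zero (the uncertainty is irreducible noise), while a point whose samples split between confident 0 and confident 1 scores highly, which is exactly the distinction this acquisition criterion is designed to make.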