20 results for Representations of language
in the Cambridge University Engineering Department Publications Database
Abstract:
To manipulate an object skillfully, the brain must learn its dynamics, specifying the mapping between applied force and motion. A fundamental issue in sensorimotor control is whether such dynamics are represented in an extrinsic frame of reference tied to the object or an intrinsic frame of reference linked to the arm. Although previous studies have suggested that objects are represented in arm-centered coordinates [1-6], all of these studies have used objects with unusual and complex dynamics. Thus, it is not known how objects with natural dynamics are represented. Here we show that objects with simple (or familiar) dynamics and those with complex (or unfamiliar) dynamics are represented in object- and arm-centered coordinates, respectively. We also show that objects with simple dynamics are represented with an intermediate coordinate frame when vision of the object is removed. These results indicate that object dynamics can be flexibly represented in different coordinate frames by the brain. We suggest that with experience, the representation of the dynamics of a manipulated object may shift from a coordinate frame tied to the arm toward one that is linked to the object. The additional complexity required to represent dynamics in object-centered coordinates would be economical for familiar objects because such a representation allows object use regardless of the orientation of the object in hand.
Abstract:
Rhythmic and discrete arm movements occur ubiquitously in everyday life, and there is a debate as to whether these two classes of movements arise from the same or different underlying neural mechanisms. Here we examine interference in a motor-learning paradigm to test whether rhythmic and discrete movements employ at least partially separate neural representations. Subjects were required to make circular movements of their right hand while they were exposed to a velocity-dependent force field that perturbed the circularity of the movement path. The direction of the force-field perturbation reversed at the end of each block of 20 revolutions. When subjects made only rhythmic or only discrete circular movements, interference was observed when switching between the two opposing force fields. However, when subjects alternated between blocks of rhythmic and discrete movements, such that each was uniquely associated with one of the perturbation directions, interference was significantly reduced. Only in this case did subjects learn to corepresent the two opposing perturbations, suggesting that different neural resources were employed for the two movement types. Our results provide further evidence that rhythmic and discrete movements employ at least partially separate control mechanisms in the motor system.
Abstract:
Uncertainty is ubiquitous in our sensorimotor interactions, arising from factors such as sensory and motor noise and ambiguity about the environment. A quintessential property of the Bayesian framework for making inferences about the state of the world so as to select actions, and one that sets it apart from previous theories, is the requirement to represent the uncertainty associated with those inferences in the form of probability distributions. In the context of sensorimotor control and learning, the Bayesian framework suggests that, to respond optimally to environmental stimuli, the central nervous system needs to construct estimates of the sensorimotor transformations, in the form of internal models, as well as to represent the structure of the uncertainty in the inputs, the outputs, and the transformations themselves. Here we review Bayesian inference and learning models that have been successful in demonstrating the sensitivity of the sensorimotor system to different forms of uncertainty, as well as recent studies aimed at characterizing the representation of uncertainty at different computational levels.
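As a minimal illustration of the precision-weighted combination such a Bayesian account implies (a sketch, not the authors' model; all numbers are arbitrary), the snippet below fuses a Gaussian prior over a state with a noisy sensory observation:

```python
import numpy as np

# Gaussian prior over a state (e.g., a target position) and a noisy observation.
mu_prior, var_prior = 0.0, 4.0      # prior belief about the state
x_obs, var_obs = 2.0, 1.0           # sensory measurement and its noise variance

# Bayesian (precision-weighted) combination: the posterior is also Gaussian.
precision_prior, precision_obs = 1.0 / var_prior, 1.0 / var_obs
var_post = 1.0 / (precision_prior + precision_obs)
mu_post = var_post * (precision_prior * mu_prior + precision_obs * x_obs)

print(f"posterior mean = {mu_post:.3f}, posterior variance = {var_post:.3f}")
# The estimate is pulled toward the more reliable source (here the observation),
# which is the sense in which representing uncertainty shapes the inference.
```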
Abstract:
Skillful tool use requires knowledge of the dynamic properties of tools in order to specify the mapping between applied force and tool motion. Importantly, this mapping depends on the orientation of the tool in the hand. Here we investigate the representation of dynamics during skillful manipulation of a tool that can be grasped at different orientations. We ask whether the motor system uses a single general representation of dynamics for all grasp contexts or whether it uses multiple grasp-specific representations. Using a novel robotic interface, subjects rotated a virtual tool whose orientation relative to the hand could be varied. Subjects could immediately anticipate the force direction for each orientation of the tool based on its visual geometry, and, with experience, they learned to parameterize the force magnitude. Surprisingly, this parameterization of force magnitude showed limited generalization when the orientation of the tool changed. Had subjects parameterized a single general representation, full generalization would be expected. Thus, our results suggest that object dynamics are captured by multiple representations, each of which encodes the mapping associated with a specific grasp context. We suggest that the concept of grasp-specific representations may provide a unifying framework for interpreting previous results related to dynamics learning.
Abstract:
A dynamical system can exhibit structure on multiple levels. Different system representations can capture different elements of a dynamical system's structure. We consider LTI input-output dynamical systems and present four representations of structure: complete computational structure, subsystem structure, signal structure, and input-output sparsity structure. We then explore some of the mathematical relationships among these different representations of structure. In particular, we show that signal and subsystem structure are fundamentally different ways of representing system structure. A signal structure does not always specify a unique subsystem structure, nor does a subsystem structure always specify a unique signal structure. We illustrate these concepts with a numerical example. © 2011 AACC American Automatic Control Council.
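A much simpler illustration of the general point (not the paper's signal/subsystem distinction, and with illustrative matrices): the same input-output transfer function can be realized either as an explicit cascade of two subsystems or as a single monolithic state-space model, so the coarser input-output description does not pin down the internal decomposition.

```python
import numpy as np
from scipy import signal

# Subsystem structure made explicit: a cascade of two first-order blocks,
# 1/(s + 1) followed by 1/(s + 2).
A1, B1, C1 = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
A2, B2, C2 = np.array([[-2.0]]), np.array([[1.0]]), np.array([[1.0]])
A_cascade = np.block([[A1, np.zeros((1, 1))], [B2 @ C1, A2]])
B_cascade = np.vstack([B1, np.zeros((1, 1))])
C_cascade = np.hstack([np.zeros((1, 1)), C2])

# A monolithic realization with no meaningful internal decomposition.
A_mono = np.array([[0.0, 1.0], [-2.0, -3.0]])
B_mono = np.array([[0.0], [1.0]])
C_mono = np.array([[1.0, 0.0]])

# Both realizations share the transfer function 1/((s + 1)(s + 2)):
# identical input-output behaviour, different internal structure.
print(signal.ss2tf(A_cascade, B_cascade, C_cascade, np.zeros((1, 1))))
print(signal.ss2tf(A_mono, B_mono, C_mono, np.zeros((1, 1))))
```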
Abstract:
Humans have been shown to adapt to the temporal statistics of timing tasks so as to optimize the accuracy of their responses, in agreement with the predictions of Bayesian integration. This suggests that they build an internal representation of both the experimentally imposed distribution of time intervals (the prior) and of the error (the loss function). The responses of a Bayesian ideal observer depend crucially on these internal representations, which have previously been studied only for simple distributions. To study the nature of these representations, we asked subjects to reproduce time intervals drawn from underlying temporal distributions of varying complexity, from uniform to highly skewed or bimodal, while also varying the error mapping that determined the performance feedback. Interval reproduction times were affected by both the distribution and the feedback, in good agreement with a performance-optimizing Bayesian observer and actor model. Bayesian model comparison highlighted that subjects were integrating the provided feedback and represented the experimental distribution with a smoothed approximation. A nonparametric reconstruction of the subjective priors from the data shows that they are generally in agreement with the true distributions up to third-order moments, but with systematically heavier tails. In particular, higher-order statistical features (kurtosis, multimodality) seem much harder to acquire. Our findings suggest that humans have only minor constraints on learning lower-order statistical properties of unimodal (including peaked and skewed) distributions of time intervals under the guidance of corrective feedback, and that their behavior is well explained by Bayesian decision theory.
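A toy sketch of such a Bayesian observer-actor (not the authors' fitted model; the prior, noise level, and grid are hypothetical): discretize the interval axis, combine a skewed prior with a Gaussian measurement likelihood, and reproduce the posterior mean, which is the optimal response under squared-error loss.

```python
import numpy as np

# Discretized grid of possible interval durations (seconds).
t = np.linspace(0.4, 1.2, 401)

# Hypothetical experimental prior: a skewed mixture over intervals.
prior = (0.7 * np.exp(-0.5 * ((t - 0.6) / 0.05) ** 2)
         + 0.3 * np.exp(-0.5 * ((t - 0.9) / 0.10) ** 2))
prior /= prior.sum()

def reproduce(t_measured, sensory_sd=0.08):
    """Posterior over the true interval given a noisy measurement, then the
    reproduction that minimizes expected squared error (the posterior mean)."""
    likelihood = np.exp(-0.5 * ((t_measured - t) / sensory_sd) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return float(np.sum(posterior * t))

print(reproduce(0.75))  # responses are biased toward the mass of the prior
```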
Abstract:
Acoustic communication in drosophilid flies is based on the production and perception of courtship songs, which facilitate mating. Despite decades of research on courtship songs and behavior in Drosophila, central auditory responses have remained uncharacterized. In this study, we report on intracellular recordings from central neurons that innervate the Drosophila antennal mechanosensory and motor center (AMMC), the first relay for auditory information in the fly brain. These neurons produce graded-potential (nonspiking) responses to sound; we compare recordings from AMMC neurons to extracellular recordings of the receptor neuron population [Johnston's organ neurons (JONs)]. We discover that, while steady-state response profiles for tonal and broadband stimuli are significantly transformed between the JON population in the antenna and AMMC neurons in the brain, transient responses to pulses present in natural stimuli (courtship song) are not. For pulse stimuli in particular, AMMC neurons simply low-pass filter the receptor population response, thus preserving low-frequency temporal features (such as the spacing of song pulses) for analysis by postsynaptic neurons. We also compare responses in two closely related Drosophila species, Drosophila melanogaster and Drosophila simulans, and find that pulse song responses are largely similar, despite differences in the spectral content of their songs. Our recordings inform how downstream circuits may read out behaviorally relevant information from central neurons in the AMMC.
Abstract:
We present the Unified Form Language (UFL), which is a domain-specific language for representing weak formulations of partial differential equations with a view to numerical approximation. Features of UFL include support for variational forms and functionals, automatic differentiation of forms and expressions, arbitrary function space hierarchies for multifield problems, general differential operators and flexible tensor algebra. With these features, UFL has been used to effortlessly express finite element methods for complex systems of partial differential equations in near-mathematical notation, resulting in compact, intuitive and readable programs. We present in this work the language and its construction. An implementation of UFL is freely available as an open-source software library. The library generates abstract syntax tree representations of variational problems, which are used by other software libraries to generate concrete low-level implementations. Some application examples are presented and libraries that support UFL are highlighted. © 2014 ACM.
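For flavour, the weak form of a Poisson problem expressed in UFL looks roughly like the following (a sketch using the legacy FiniteElement-based API; the exact imports depend on the UFL version in use):

```python
from ufl import (Coefficient, FiniteElement, TestFunction, TrialFunction,
                 dx, grad, inner, triangle)

# Piecewise-linear Lagrange element on triangles.
element = FiniteElement("Lagrange", triangle, 1)

u = TrialFunction(element)   # unknown
v = TestFunction(element)    # test function
f = Coefficient(element)     # source term

# Weak Poisson problem: find u such that a(u, v) = L(v) for all v.
a = inner(grad(u), grad(v)) * dx
L = f * v * dx
```

Form compilers that consume UFL turn the abstract syntax trees of `a` and `L` into low-level element assembly code.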
Abstract:
Language models (LMs) are often constructed by building multiple individual component models that are combined using context-independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context-dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required. Several approaches are described in this paper. The first approach is based on MAP estimation, where interpolation weights of lower-order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion with respect to corpus size. A range of schemes to combine weight information obtained from training data and test data hypotheses is also proposed to improve robustness during context-dependent LM adaptation. In addition, a minimum Bayes risk (MBR) based discriminative training scheme is proposed. An efficient weighted finite state transducer (WFST) decoding algorithm for context-dependent interpolation is also presented. The proposed technique was evaluated using a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
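A minimal sketch of context-dependent interpolation (not the paper's implementation; the component models, contexts and weights below are toy placeholders): per-context weights determine how much each component LM contributes to P(word | history).

```python
# P(word | history) = sum_k w_k(history) * P_k(word | history),
# with the weights chosen by a discrete function of the previous word.
def interpolated_prob(word, history, components, context_weights, default_weights):
    weights = context_weights.get(history[-1:], default_weights)
    return sum(w * lm(word, history) for w, lm in zip(weights, components))

# Two toy component models returning hypothetical probabilities.
lm_news    = lambda w, h: {"bank": 0.02, "market": 0.05}.get(w, 1e-4)
lm_finance = lambda w, h: {"bank": 0.06, "market": 0.08}.get(w, 1e-4)

# Discrete history-dependent weights: after "the", trust the finance LM more.
ctx_weights = {("the",): (0.3, 0.7)}
print(interpolated_prob("bank", ("the",), [lm_news, lm_finance],
                        ctx_weights, default_weights=(0.5, 0.5)))
```

Because every context can in principle carry its own weight vector, the number of parameters grows quickly, which is why the robust (MAP-smoothed) estimation schemes above are needed.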
Discriminative language model adaptation for Mandarin broadcast speech transcription and translation
Abstract:
This paper investigates unsupervised test-time adaptation of language models (LMs) using discriminative methods for a Mandarin broadcast speech transcription and translation task. A standard approach to adapting interpolated language models is to optimize the component weights by minimizing the perplexity on supervision data. This is a widely made approximation for language modeling in automatic speech recognition (ASR) systems. For speech translation tasks, it is unclear whether a strong correlation still exists between perplexity and the various error cost functions used in the recognition and translation stages. The proposed minimum Bayes risk (MBR) based approach provides a flexible framework for unsupervised LM adaptation. It generalizes to a variety of recognition and translation error metrics. LM adaptation is performed at the audio document level using either the character error rate (CER) or the translation edit rate (TER) as the cost function. An efficient parameter estimation scheme using the extended Baum-Welch (EBW) algorithm is proposed. Experimental results on a state-of-the-art speech recognition and translation system are presented. The MBR-adapted language models gave the best recognition and translation performance and reduced the TER score by up to 0.54% absolute. © 2007 IEEE.
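A rough sketch of the MBR idea at the level of an N-best list (a deliberate simplification: sentence-level interpolation of component LM scores and a grid search stand in for the paper's per-document, extended Baum-Welch estimation; all scores and error values are hypothetical): pick the interpolation weight that minimizes the expected error under the hypothesis posteriors it induces.

```python
import numpy as np

# Hypothetical N-best list: (acoustic log-prob, component LM log-probs, error),
# where "error" is e.g. the CER or TER of that hypothesis against a reference.
nbest = [
    (-10.0, (-3.0, -5.0), 0.10),
    (-10.5, (-4.5, -2.5), 0.05),
    (-11.0, (-2.0, -6.0), 0.20),
]

def expected_error(lmbda):
    """Expected error under hypothesis posteriors induced by weight lmbda on
    component LM 1 (and 1 - lmbda on component LM 2)."""
    scores = np.array([ac + np.logaddexp(np.log(lmbda) + l1,
                                         np.log(1.0 - lmbda) + l2)
                       for ac, (l1, l2), _ in nbest])
    post = np.exp(scores - scores.max())
    post /= post.sum()
    errors = np.array([e for *_, e in nbest])
    return float(post @ errors)

# Grid search over the interpolation weight in place of the EBW update.
grid = np.linspace(0.05, 0.95, 19)
print(grid[np.argmin([expected_error(l) for l in grid])])
```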