865 results for Representations
Abstract:
Skillful tool use requires knowledge of the dynamic properties of tools in order to specify the mapping between applied force and tool motion. Importantly, this mapping depends on the orientation of the tool in the hand. Here we investigate the representation of dynamics during skillful manipulation of a tool that can be grasped at different orientations. We ask whether the motor system uses a single general representation of dynamics for all grasp contexts or whether it uses multiple grasp-specific representations. Using a novel robotic interface, subjects rotated a virtual tool whose orientation relative to the hand could be varied. Subjects could immediately anticipate the force direction for each orientation of the tool based on its visual geometry, and, with experience, they learned to parameterize the force magnitude. Surprisingly, this parameterization of force magnitude showed limited generalization when the orientation of the tool changed. Had subjects parameterized a single general representation, full generalization would be expected. Thus, our results suggest that object dynamics are captured by multiple representations, each of which encodes the mapping associated with a specific grasp context. We suggest that the concept of grasp-specific representations may provide a unifying framework for interpreting previous results related to dynamics learning.
Abstract:
We consider the robust control of plants with saturation nonlinearities from an input/output viewpoint. First, we present a parameterization for anti-windup control based on coprime factorizations of the controller. Second, we propose a synthesis method which exploits the freedom to choose a particular coprime factorization.
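For orientation, here is a minimal sketch of the standard coprime-factorization anti-windup construction; the paper's specific parameterization and synthesis method may differ in detail.

```latex
% Sketch of the standard coprime-factorization anti-windup construction;
% the paper's specific parameterization may differ.
% Let the linear controller K admit a left coprime factorization K = V^{-1}U,
% so the unconstrained control law Vu = Ue can be rewritten as
% u = (I - V)u + Ue. Feeding back the saturated input instead gives the
% anti-windup implementation
\[
  u = (I - V)\,\mathrm{sat}(u) + U\,e ,
\]
% which reduces to u = Ke whenever \mathrm{sat}(u) = u. The freedom in
% choosing the coprime factors (U, V) is the design freedom a synthesis
% method can exploit.
```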
Abstract:
This paper presents some developments in query expansion and document representation of our spoken document retrieval system and shows how various retrieval techniques affect performance for different sets of transcriptions derived from a common speech source. We modify the document representation by combining several query-expansion techniques, knowledge-based on the one hand and statistics-based on the other. Taken together, these techniques can improve Average Precision by over 19% relative to a system similar to that which we presented at TREC-7. These new experiments have also confirmed that the degradation of Average Precision due to a word error rate (WER) of 25% is quite small (3.7% relative) and can be reduced to almost zero (0.2% relative). The overall improvement of the retrieval system can also be observed for seven different sets of transcriptions from different recognition engines, with WERs ranging from 24.8% to 61.5%. We hope to repeat these experiments when larger document collections become available, in order to evaluate the scalability of these techniques.
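As a hedged illustration only (not the system described above), the statistics-based side of query expansion is commonly implemented as blind relevance feedback: retrieve with the original query, then add the highest-weighted terms from the top-ranked documents. A minimal Python sketch:

```python
# Hedged sketch of statistics-based query expansion via blind relevance
# feedback -- illustrative only, not the retrieval system described above.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def expand_query(query, documents, top_docs=5, extra_terms=10):
    """Add the highest-weighted terms from the top-ranked documents."""
    vec = TfidfVectorizer(stop_words="english")
    doc_matrix = vec.fit_transform(documents)               # docs x terms
    query_vec = vec.transform([query])
    scores = (doc_matrix @ query_vec.T).toarray().ravel()   # cosine scores
    top = np.argsort(scores)[::-1][:top_docs]
    # Sum TF-IDF weights over the top-ranked documents and pick new terms.
    term_weights = np.asarray(doc_matrix[top].sum(axis=0)).ravel()
    terms = vec.get_feature_names_out()
    ranked = terms[np.argsort(term_weights)[::-1]]
    original = set(query.lower().split())
    new_terms = [t for t in ranked if t not in original][:extra_terms]
    return query + " " + " ".join(new_terms)

docs = ["the prime minister gave a speech on the economy",
        "economic growth slowed according to the treasury",
        "the football match ended in a draw"]
print(expand_query("economy speech", docs, top_docs=2, extra_terms=3))
```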
Abstract:
In sensorimotor integration, sensory input and motor output signals are combined to provide an internal estimate of the state of both the world and one's own body. Although a single perceptual and motor snapshot can provide information about the current state, computational models show that the state can be optimally estimated by a recursive process in which an internal estimate is maintained and updated by the current sensory and motor signals. These models predict that an internal state estimate is maintained or stored in the brain. Here we report a patient with a lesion of the superior parietal lobe who shows both sensory and motor deficits consistent with an inability to maintain such an internal representation between updates. Our findings suggest that the superior parietal lobe is critical for sensorimotor integration, by maintaining an internal representation of the body's state.
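The recursive estimation process referred to here is commonly formalized as a Kalman filter. A minimal Python sketch of maintaining and updating an internal state estimate from a motor command (efference copy) and a noisy sensory observation; all quantities are illustrative assumptions, not values from the study:

```python
# Minimal Kalman-filter sketch of recursive sensorimotor state estimation.
# Noise levels and the motor command are assumptions for illustration.
import numpy as np

def simulate(n_steps=100, q=0.01, r=0.1, dt=0.1):
    rng = np.random.default_rng(0)
    x = 0.0                 # true limb position
    x_hat, p = 0.0, 1.0     # internal estimate and its uncertainty
    estimates = []
    for _ in range(n_steps):
        u = 0.5 * dt                              # motor command (efference copy)
        x = x + u + rng.normal(0, np.sqrt(q))     # true state evolves noisily
        # Predict: update the internal estimate from the motor command.
        x_hat = x_hat + u
        p = p + q
        # Correct: combine the prediction with a noisy sensory observation.
        z = x + rng.normal(0, np.sqrt(r))
        k = p / (p + r)                           # Kalman gain
        x_hat = x_hat + k * (z - x_hat)
        p = (1 - k) * p
        estimates.append(x_hat)
    return np.array(estimates)

print(simulate()[:5])
```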
Abstract:
A dynamical system can exhibit structure on multiple levels. Different system representations can capture different elements of a dynamical system's structure. We consider LTI input-output dynamical systems and present four representations of structure: complete computational structure, subsystem structure, signal structure, and input-output sparsity structure. We then explore some of the mathematical relationships among these different representations of structure. In particular, we show that signal and subsystem structure are fundamentally different ways of representing system structure. A signal structure does not always specify a unique subsystem structure, nor does a subsystem structure always specify a unique signal structure. We illustrate these concepts with a numerical example. © 2011 AACC American Automatic Control Council.
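As a hedged, hypothetical illustration (not the paper's numerical example), consider a single-input single-output system whose input-output map hides an internal chain of signals:

```latex
% Hypothetical illustration, not the numerical example from the paper.
% Complete computational structure (state space):
\[
  \dot{x} = \begin{bmatrix} -1 & 0 \\ 1 & -2 \end{bmatrix} x
          + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u ,
  \qquad
  y = \begin{bmatrix} 0 & 1 \end{bmatrix} x .
\]
% Input-output sparsity structure: a single, full transfer function,
\[
  y = G(s)\,u , \qquad G(s) = \frac{1}{(s+1)(s+2)} .
\]
% If the intermediate state w = x_1 is also manifest (measured), the signal
% structure additionally exposes the chain
\[
  w = \frac{1}{s+1}\,u , \qquad y = \frac{1}{s+2}\,w ,
\]
% information that is invisible in G(s) alone.
```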
Abstract:
We propose a new learning method to infer a mid-level feature representation that combines the advantage of semantic attribute representations with the higher expressive power of non-semantic features. The idea lies in augmenting an existing attribute-based representation with additional dimensions, for which an autoencoder model is coupled with a large-margin principle. This construction allows a smooth transition between the zero-shot regime with no training examples, the unsupervised regime with training examples but without class labels, and the supervised regime with training examples and class labels. The resulting optimization problem can be solved efficiently, because several of the necessary steps have closed-form solutions. Through extensive experiments we show that the augmented representation achieves better results in terms of object categorization accuracy than the semantic representation alone. © 2012 Springer-Verlag.
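As a hypothetical sketch only (the paper's exact objective and solver differ), the following conveys the flavor of coupling an autoencoder term on augmented, non-semantic dimensions with a large-margin term on the combined representation:

```python
# Hypothetical sketch of an objective coupling an autoencoder with a
# large-margin term on augmented (non-semantic) dimensions. Illustrative
# only; the paper's exact formulation and optimization differ.
import numpy as np

rng = np.random.default_rng(0)
n, d_feat, d_attr, d_aug = 50, 20, 8, 4
X = rng.normal(size=(n, d_feat))                    # low-level features
A = rng.normal(size=(n, d_attr))                    # given semantic attribute scores
y = rng.integers(0, 2, size=n)                      # class labels (supervised regime)
W = rng.normal(size=(d_feat, d_aug)) * 0.1          # encoder for augmented dims
w_cls = rng.normal(size=(d_attr + d_aug,)) * 0.1    # linear classifier

def objective(W, w_cls, lam=1.0):
    Z = X @ W                               # augmented, non-semantic dimensions
    recon = np.mean((X - Z @ W.T) ** 2)     # (tied-weight) autoencoder term
    R = np.hstack([A, Z])                   # attributes + augmented dimensions
    margins = (2 * y - 1) * (R @ w_cls)
    hinge = np.mean(np.maximum(0.0, 1.0 - margins))  # large-margin term
    return recon + lam * hinge

print(objective(W, w_cls))
```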
Abstract:
While a large amount of research over the past two decades has focused on discrete abstractions of infinite-state dynamical systems, many structural and algorithmic details of these abstractions remain unknown. To clarify the computational resources needed to perform discrete abstractions, this paper examines the algorithmic properties of an existing method for deriving finite-state systems that are bisimilar to linear discrete-time control systems. We explicitly find the structure of the finite-state system, show that it can be enormous compared to the original linear system, and give conditions to guarantee that the finite-state system is reasonably sized and efficiently computable. Though constructing the finite-state system is generally impractical, we see that special cases could be amenable to satisfiability-based verification techniques. © 2009 IEEE.
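For reference, a hedged statement of the bisimulation notion involved, given here for transition systems in general; the paper works with the specific construction for linear discrete-time control systems:

```latex
% Generic definition, for orientation only; the paper uses the specific
% construction for linear discrete-time control systems.
% A relation R \subseteq X_1 \times X_2 between the state sets of two
% transition systems is a bisimulation if, whenever (x_1, x_2) \in R,
\[
  x_1 \xrightarrow{\;u\;} x_1' \;\Rightarrow\;
  \exists\, x_2' :\; x_2 \xrightarrow{\;u\;} x_2' \ \text{and}\ (x_1', x_2') \in R ,
\]
\[
  x_2 \xrightarrow{\;u\;} x_2' \;\Rightarrow\;
  \exists\, x_1' :\; x_1 \xrightarrow{\;u\;} x_1' \ \text{and}\ (x_1', x_2') \in R ,
\]
% and related states produce the same outputs/observations.
```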
Abstract:
Humans have been shown to adapt to the temporal statistics of timing tasks so as to optimize the accuracy of their responses, in agreement with the predictions of Bayesian integration. This suggests that they build an internal representation both of the experimentally imposed distribution of time intervals (the prior) and of the error mapping (the loss function). The responses of a Bayesian ideal observer depend crucially on these internal representations, which have only been studied previously for simple distributions. To study the nature of these representations, we asked subjects to reproduce time intervals drawn from underlying temporal distributions of varying complexity, from uniform to highly skewed or bimodal, while also varying the error mapping that determined the performance feedback. Interval reproduction times were affected by both the distribution and the feedback, in good agreement with a performance-optimizing Bayesian observer and actor model. Bayesian model comparison highlighted that subjects were integrating the provided feedback and represented the experimental distribution with a smoothed approximation. A nonparametric reconstruction of the subjective priors from the data shows that they are generally in agreement with the true distributions up to third-order moments, but with systematically heavier tails. In particular, higher-order statistical features (kurtosis, multimodality) seem much harder to acquire. Our findings suggest that humans have only minor constraints on learning lower-order statistical properties of unimodal (including peaked and skewed) distributions of time intervals under the guidance of corrective feedback, and that their behavior is well explained by Bayesian decision theory.
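A minimal sketch of the Bayesian observer-actor idea, under illustrative assumptions (Gaussian prior and noise, quadratic loss) rather than the fitted models of the study: combine the prior over intervals with a noisy measurement and report the estimate that minimizes the expected loss.

```python
# Minimal Bayesian observer-actor sketch for interval reproduction.
# Prior, noise level, and loss are illustrative assumptions, not the
# quantities estimated in the study.
import numpy as np

t = np.linspace(0.3, 1.5, 601)                   # candidate intervals (s)
prior = np.exp(-0.5 * ((t - 0.8) / 0.15) ** 2)   # assumed smooth prior
prior /= prior.sum()

def reproduce(t_measured, sigma=0.1):
    """Posterior over the true interval given one noisy measurement, and the
    reproduction minimizing expected squared error (the posterior mean)."""
    likelihood = np.exp(-0.5 * ((t_measured - t) / sigma) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return np.sum(t * posterior)

# Reproductions are biased toward the prior mean, the signature regression
# effect predicted by Bayesian integration.
for tm in (0.6, 0.8, 1.0):
    print(f"measured {tm:.2f} s -> reproduced {reproduce(tm):.3f} s")
```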
Abstract:
Acoustic communication in drosophilid flies is based on the production and perception of courtship songs, which facilitate mating. Despite decades of research on courtship songs and behavior in Drosophila, central auditory responses have remained uncharacterized. In this study, we report on intracellular recordings from central neurons that innervate the Drosophila antennal mechanosensory and motor center (AMMC), the first relay for auditory information in the fly brain. These neurons produce graded-potential (nonspiking) responses to sound; we compare recordings from AMMC neurons to extracellular recordings of the receptor neuron population [Johnston's organ neurons (JONs)]. We discover that, while steady-state response profiles for tonal and broadband stimuli are significantly transformed between the JON population in the antenna and AMMC neurons in the brain, transient responses to pulses present in natural stimuli (courtship song) are not. For pulse stimuli in particular, AMMC neurons simply low-pass filter the receptor population response, thus preserving low-frequency temporal features (such as the spacing of song pulses) for analysis by postsynaptic neurons. We also compare responses in two closely related Drosophila species, Drosophila melanogaster and Drosophila simulans, and find that pulse song responses are largely similar, despite differences in the spectral content of their songs. Our recordings inform how downstream circuits may read out behaviorally relevant information from central neurons in the AMMC.
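The low-pass transformation described here can be illustrated with a simple filtering sketch; the pulse spacing and cutoff below are arbitrary choices, not values fitted to the recordings.

```python
# Illustrative low-pass filtering of a simulated receptor-population response
# to a song-like pulse train; parameters are arbitrary, not from the study.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000                                     # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
pulse_times = np.arange(0.05, 0.45, 0.035)      # ~35 ms inter-pulse interval
response = np.zeros_like(t)
for pt in pulse_times:                          # brief transient per pulse
    response += np.exp(-0.5 * ((t - pt) / 0.002) ** 2)

b, a = butter(2, 100 / (fs / 2), btype="low")   # 100 Hz low-pass filter
filtered = filtfilt(b, a, response)

# The filtered trace keeps the slow temporal structure (pulse spacing) while
# attenuating fast fluctuations.
print(f"peak of filtered response at t = {np.argmax(filtered) / fs:.3f} s")
```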
Abstract:
Humans develop rich mental representations that guide their behavior in a variety of everyday tasks. However, it is unknown whether these representations, often formalized as priors in Bayesian inference, are specific for each task or subserve multiple tasks. Current approaches cannot distinguish between these two possibilities because they cannot extract comparable representations across different tasks [1-10]. Here, we develop a novel method, termed cognitive tomography, that can extract complex, multidimensional priors across tasks. We apply this method to human judgments in two qualitatively different tasks, "familiarity" and "odd one out," involving an ecologically relevant set of stimuli, human faces. We show that priors over faces are structurally complex and vary dramatically across subjects, but are invariant across the tasks within each subject. The priors we extract from each task allow us to predict with high precision the behavior of subjects for novel stimuli both in the same task as well as in the other task. Our results provide the first evidence for a single high-dimensional structured representation of a naturalistic stimulus set that guides behavior in multiple tasks. Moreover, the representations estimated by cognitive tomography can provide independent, behavior-based regressors for elucidating the neural correlates of complex naturalistic priors. © 2013 The Authors.
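As a hedged, highly simplified sketch (the actual method infers a flexible, high-dimensional prior from trial-by-trial choices), the following shows one shared prior over a hypothetical two-dimensional face-feature space driving decision rules for both tasks:

```python
# Highly simplified sketch of one shared prior driving two tasks; the feature
# space, prior, and decision rules are assumptions for illustration, not the
# fitted models of the study.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
prior = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 0.5]])

def familiarity_choice(face_a, face_b):
    """Pick the face with higher prior density as the more 'familiar' one."""
    return "A" if prior.pdf(face_a) > prior.pdf(face_b) else "B"

def odd_one_out(faces):
    """Pick the face with the lowest prior density as the odd one out."""
    densities = [prior.pdf(f) for f in faces]
    return int(np.argmin(densities))

faces = rng.normal(size=(3, 2))      # three faces in a 2-D feature space
print(familiarity_choice(faces[0], faces[1]))
print(odd_one_out(faces))
```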