107 results for Retinal adaptation
Abstract:
The motor system responds to perturbations with reflexes, such as the vestibulo-ocular reflex or stretch reflex, whose gains adapt in response to novel and fixed changes in the environment, such as magnifying spectacles or standing on a tilting platform. Here we demonstrate a reflex response to shifts in the hand's visual location during reaching, which occurs before the onset of voluntary reaction time, and investigate how its magnitude depends on statistical properties of the environment. We examine the change in reflex response to two different distributions of visuomotor discrepancies, both of which have zero mean and equal variance across trials. Critically, one distribution is task-relevant and the other task-irrelevant. The task-relevant discrepancies are maintained to the end of the movement, whereas the task-irrelevant discrepancies are transient such that no discrepancy exists at the end of the movement. The reflex magnitude was assessed using identical probe trials under both distributions. We find opposite directions of adaptation of the reflex response under these two distributions, with increased reflex magnitudes for task-relevant variability and decreased reflex magnitudes for task-irrelevant variability. This demonstrates modulation of reflex magnitudes in the absence of a fixed change in the environment, and shows that reflexes are sensitive to the statistics of the task, with modulation depending on whether the variability is task-relevant or task-irrelevant.
Abstract:
Picking up an empty milk carton that we believe to be full is a familiar example of adaptive control, because the adaptation process of estimating the carton's weight must proceed simultaneously with the control process of moving the carton to a desired location. Here we show that the motor system initially generates highly variable behavior in such unpredictable tasks but eventually converges to stereotyped patterns of adaptive responses predicted by a simple optimality principle. These results suggest that adaptation can become specifically tuned to identify task-specific parameters in an optimal manner.
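As a rough illustration of the simultaneous estimation-and-control problem described above, the sketch below moves a point mass of unknown weight to a target while continuously re-estimating that weight from the commanded force and the observed acceleration. The dynamics, controller gains, and estimator are illustrative assumptions, not the task or the optimality principle analyzed in the study.

```python
# Minimal sketch of adaptive control: estimate an unknown mass while moving it.
# All dynamics, gains, and the estimator are assumptions for illustration only.

def adaptive_reach(m_true=0.2, m_hat0=1.0, target=0.1, dt=0.01, steps=300):
    """Move a point mass to `target` while re-estimating its unknown mass."""
    x, v, m_hat = 0.0, 0.0, m_hat0
    kp, kd, eta = 40.0, 12.0, 0.2                  # PD gains and estimator smoothing
    for _ in range(steps):
        u = m_hat * (kp * (target - x) - kd * v)   # force scaled by current mass estimate
        a = u / m_true                             # acceleration the object actually shows
        if abs(a) > 1e-6:
            m_hat += eta * (u / a - m_hat)         # pull estimate toward F / a
        v += a * dt
        x += v * dt
    return x, m_hat

print(adaptive_reach())   # position near 0.1, mass estimate near 0.2
```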
Abstract:
This paper proposes an HMM-based approach to generating emotional intonation patterns. A set of models was built to represent syllable-length intonation units. In a classification framework, the models were able to detect a sequence of intonation units from raw fundamental frequency values. Using the models in a generative framework, we were able to synthesize smooth and natural-sounding pitch contours. As a case study for emotional intonation generation, Maximum Likelihood Linear Regression (MLLR) adaptation was used to transform the neutral model parameters with a small amount of happy and sad speech data. Perceptual tests showed that listeners could identify the speech with the sad intonation 80% of the time. On the other hand, listeners formed a bimodal distribution in their ability to detect the system-generated happy intonation, and on average listeners were able to detect happy intonation only 46% of the time. © Springer-Verlag Berlin Heidelberg 2005.
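For readers unfamiliar with MLLR, the sketch below shows only the application of an already-estimated mean transform, mu' = A mu + b, to a set of Gaussian means. The transform values and dimensionality are assumptions for illustration; estimating W from the small amount of emotional speech data (the step actually performed in the paper) and adapting variances are omitted.

```python
# Sketch of applying an MLLR mean transform to shift "neutral" HMM parameters.
# Transform values and dimensions are illustrative; transform estimation is omitted.
import numpy as np

def apply_mllr_means(means, W):
    """means: (n_states, dim) Gaussian means; W: (dim, dim + 1) MLLR transform.
    Returns adapted means mu' = A mu + b, with W = [b, A] and extended
    mean vector xi = [1, mu]."""
    n, _ = means.shape
    xi = np.hstack([np.ones((n, 1)), means])           # extended mean vectors
    return xi @ W.T                                    # (n, dim) adapted means

# Toy usage: identity rotation plus a small bias raises every mean slightly.
mu = np.random.randn(5, 3)
W = np.hstack([0.1 * np.ones((3, 1)), np.eye(3)])      # b = 0.1, A = I
print(apply_mllr_means(mu, W) - mu)                    # ~0.1 everywhere
```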
Abstract:
As the use of found data increases, more systems are being built using adaptive training. Here transforms are used to represent unwanted acoustic variability, e.g. speaker and acoustic environment changes, allowing a canonical model that models only the "pure" variability of speech to be trained. Adaptive training may be described within a Bayesian framework. By using complexity control approaches to ensure robust parameter estimates, standard point-estimate adaptive training can be justified within this Bayesian framework. However, during recognition there is usually no control over the amount of data available. It is therefore preferable to use a full Bayesian approach to applying transforms during recognition rather than the standard point estimates. This paper discusses various approximations to Bayesian approaches, including a new variational Bayes approximation. The application of these approaches to state-of-the-art adaptively trained systems using both CAT and MLLR transforms is then described and evaluated on a large vocabulary speech recognition task. © 2005 IEEE.
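The contrast between point-estimate and Bayesian use of a transform can be caricatured as follows. The bias-only transform, unit-variance canonical model, Gaussian posterior, and Monte Carlo averaging below are illustrative stand-ins, not the CAT/MLLR transforms or the variational Bayes approximation evaluated in the paper.

```python
# Toy contrast: score data with a point-estimate transform vs. averaging the
# likelihood over a posterior on the transform. Everything here is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mean):
    """Log density of a unit-variance Gaussian."""
    return -0.5 * ((x - mean) ** 2 + np.log(2 * np.pi))

x = rng.normal(loc=1.0, scale=1.0, size=5)          # a few adaptation frames

# Canonical model: unit-variance Gaussian; transform: a scalar mean bias b.
b_map = x.mean()                                     # point estimate of the transform
ll_point = log_gauss(x, b_map).sum()

# Bayesian-style decoding: average the likelihood over p(b | data),
# approximated here as N(b_map, 1/N) and handled by Monte Carlo.
samples = rng.normal(b_map, 1.0 / np.sqrt(len(x)), size=2000)
ll_bayes = np.log(np.mean([np.exp(log_gauss(x, b).sum()) for b in samples]))

print(ll_point, ll_bayes)   # the Bayesian value is lower: it hedges over b
```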
Abstract:
Discriminative mapping transforms (DMTs) are an approach to robustly adding discriminative training to unsupervised linear adaptation transforms. In unsupervised adaptation, DMTs are more robust to unreliable transcriptions than directly estimating adaptation transforms in a discriminative fashion. They were previously proposed for use with MLLR transforms, with the associated need to explicitly transform the model parameters. In this work the DMT is extended to CMLLR transforms. As these operate in the feature space, it is only necessary to apply a different linear transform at the front-end rather than modifying the model parameters. This is useful for rapidly changing speakers/environments. The performance of DMTs with CMLLR was evaluated on the WSJ 20k task. Experimental results show that DMTs based on constrained linear transforms yield a 3% to 6% relative gain over MLE transforms in unsupervised speaker adaptation. © 2011 IEEE.
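The practical advantage of feature-space transforms mentioned above can be sketched as follows: the transform is applied to the incoming feature stream and the acoustic model is never touched, so switching speaker or environment means switching (A, b). The feature dimensionality, transform values, and the omitted log-determinant term are assumptions for illustration.

```python
# Sketch of a CMLLR-style front-end transform: features change, the model does not.
# Values and dimensions are illustrative; the log|A| likelihood term is omitted.
import numpy as np

def cmllr_frontend(features, A, b):
    """features: (n_frames, dim); returns A x + b per frame."""
    return features @ A.T + b

# Switching speaker/environment = switching (A, b); the model stays fixed.
frames = np.random.randn(100, 13)                    # e.g. a 13-dim MFCC stream
transforms = {
    "speaker_1": (np.eye(13) * 0.9, np.zeros(13)),
    "speaker_2": (np.eye(13) * 1.1, 0.05 * np.ones(13)),
}
A, b = transforms["speaker_2"]
adapted = cmllr_frontend(frames, A, b)
print(adapted.shape)
```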
Abstract:
Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
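A minimal version of the trial-by-trial state-space models discussed above is sketched below, with a single-rate and a dual-rate variant driven by the same error signal. The retention and learning-rate values are illustrative, not parameters fitted in the study, and the multiple-context generalization machinery is omitted.

```python
# Sketch of single-rate vs. dual-rate state-space adaptation:
# x(n+1) = A x(n) + B e(n), with e(n) = perturbation - net adaptive state.
# Retention (A) and learning-rate (B) values are illustrative only.
import numpy as np

def simulate(perturbation, A, B):
    """A, B: per-process retention factors and learning rates."""
    states = np.zeros(len(A))
    net = []
    for p in perturbation:
        error = p - states.sum()           # what the learner failed to predict
        states = A * states + B * error    # each process updates from the same error
        net.append(states.sum())
    return np.array(net)

p = np.ones(100)                                         # constant perturbation
single = simulate(p, A=np.array([0.99]), B=np.array([0.2]))
dual = simulate(p, A=np.array([0.92, 0.999]), B=np.array([0.25, 0.02]))
print(single[-1], dual[-1])   # both approach the perturbation, with different time courses
```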
Abstract:
This study compared the mechanisms of adaptation to stable and unstable dynamics from the perspective of changes in joint mechanics. Subjects were instructed to make point-to-point movements in force fields generated by a robotic manipulandum which interacted with the arm in either a stable or an unstable manner. After subjects adjusted to the initial disturbing effects of the force fields, they were able to produce normal straight movements to the target. In the case of the stable interaction, subjects modified the joint torques in order to appropriately compensate for the force field. No change in joint torque or endpoint force was required or observed in the case of the unstable interaction. After adaptation, the endpoint stiffness of the arm was measured by applying displacements to the hand in eight different directions midway through the movements. This was compared to the stiffness measured similarly during movements in a null force field. After adaptation, the endpoint stiffness under both the stable and unstable dynamics was modified relative to the null field. Adaptation to unstable dynamics was achieved by selective modification of endpoint stiffness in the direction of the instability. To investigate whether the change in endpoint stiffness could be accounted for by the change in joint torque or endpoint force, we estimated the change in stiffness on each trial based on the change in joint torque relative to the null field. For stable dynamics the change in endpoint stiffness was accurately predicted. However, for unstable dynamics the change in endpoint stiffness could not be reproduced. In fact, the predicted endpoint stiffness was similar to that in the null force field. Thus, the change in endpoint stiffness seen after adaptation to stable dynamics was directly related to the changes in net joint torque necessary to compensate for the dynamics, in contrast to adaptation to unstable dynamics, where a selective change in endpoint stiffness occurred without any modification of net joint torque.
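The stiffness-measurement logic described above can be illustrated with a small least-squares reconstruction, dF ≈ -K dx, from probes in eight directions. The stiffness values, probe amplitude, and noise level below are made up for illustration and are not the measured data.

```python
# Sketch of recovering an endpoint stiffness matrix from small hand displacements
# and the resulting restoring forces, dF = -K dx. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
K_true = np.array([[300.0, 50.0],
                   [50.0, 500.0]])                               # N/m, stiffer along y

angles = np.arange(8) * np.pi / 4                                # eight probe directions
dx = 0.008 * np.column_stack([np.cos(angles), np.sin(angles)])   # 8 mm displacements
dF = -dx @ K_true.T + rng.normal(scale=0.05, size=dx.shape)      # measured restoring forces

# Solve dF = -dx K^T in the least-squares sense.
K_est = -np.linalg.lstsq(dx, dF, rcond=None)[0].T
print(np.round(K_est))   # close to K_true
```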
Abstract:
Recently, we demonstrated that humans can learn to make accurate movements in an unstable environment by controlling the magnitude, shape, and orientation of the endpoint impedance. Although previous studies of human motor learning suggest that the brain acquires an inverse dynamics model of the novel environment, it is not known whether this control mechanism is operative in unstable environments. We compared learning of multijoint arm movements in a "velocity-dependent force field" (VF), which interacted with the arm in a stable manner, and learning in a "divergent force field" (DF), where the interaction was unstable. The characteristics of error evolution were markedly different in the two fields. The direction of trajectory error in the DF alternated to the left and right during the early stage of learning; that is, signed error was inconsistent from movement to movement and could not have guided learning of an inverse dynamics model. This contrasted sharply with trajectory error in the VF, which was initially biased and decayed in a manner consistent with rapid feedback error learning. EMG recorded before and after learning in the DF and VF is also consistent with different learning and control mechanisms for adapting to stable and unstable dynamics, that is, inverse dynamics model formation and impedance control. We also investigated adaptation to a rotated DF to examine the interplay between inverse dynamics model formation and impedance control. Our results suggest that an inverse dynamics model can function in parallel with an impedance controller to compensate for a consistent perturbing force in unstable environments.
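The distinction drawn above, between learning an inverse dynamics model from consistent errors and failing to do so when the signed error alternates, can be caricatured with a one-dimensional feedback-error-learning sketch. The scalar "inverse model", the gains, and the trial structure are illustrative assumptions, and impedance control itself is not modeled.

```python
# Sketch of feedback error learning: the feedback controller's contribution is
# used as the teaching signal for a feedforward inverse model. A consistent
# (VF-like) disturbance is learned away; a sign-alternating (DF-like) one is not.
# All values are illustrative.
import numpy as np

def feedback_error_learning(disturbance, lr=0.5):
    """disturbance: per-trial force the environment demands."""
    inverse_model = 0.0
    fb = []
    for d in disturbance:
        feedback = d - inverse_model      # residual supplied by the feedback loop
        inverse_model += lr * feedback    # feedback command acts as the error signal
        fb.append(abs(feedback))
    return np.array(fb), inverse_model

consistent = np.full(40, 3.0)                       # VF-like: same bias every trial
alternating = 3.0 * (-1.0) ** np.arange(40)         # DF-like: sign flips trial to trial
print(feedback_error_learning(consistent)[1])       # converges toward the 3 N bias
print(feedback_error_learning(alternating)[1])      # oscillates; no stable value is learned
```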