123 results for adaptation, reuse
Abstract:
The motor system responds to perturbations with reflexes, such as the vestibulo-ocular reflex or stretch reflex, whose gains adapt in response to novel and fixed changes in the environment, such as magnifying spectacles or standing on a tilting platform. Here we demonstrate a reflex response to shifts in the hand's visual location during reaching, which occurs before the onset of voluntary reaction time, and investigate how its magnitude depends on statistical properties of the environment. We examine the change in reflex response to two different distributions of visuomotor discrepancies, both of which have zero mean and equal variance across trials. Critically, one distribution is task relevant and the other task irrelevant. The task-relevant discrepancies are maintained to the end of the movement, whereas the task-irrelevant discrepancies are transient, such that no discrepancy exists at the end of the movement. The reflex magnitude was assessed using identical probe trials under both distributions. We find opposite directions of adaptation of the reflex response under these two distributions, with increased reflex magnitudes for task-relevant variability and decreased reflex magnitudes for task-irrelevant variability. This demonstrates modulation of reflex magnitudes in the absence of a fixed change in the environment, and shows that reflexes are sensitive to the statistics of tasks, with modulation depending on whether the variability is task relevant or task irrelevant.
Abstract:
A significant proportion of the processing delays within the visual system are luminance dependent. Placing an attenuating filter over one eye therefore causes a temporal delay between the eyes, producing an illusion of motion in depth for objects moving in the fronto-parallel plane, known as the Pulfrich effect. We have used this effect to study adaptation to such an interocular delay in two normal subjects wearing 75% attenuating neutral density filters over one eye. In two separate experimental periods, both subjects showed about 60% adaptation over 9 days. Reciprocal effects were seen on removal of the filters. To isolate the site of adaptation we also measured the subjects' flicker fusion frequencies (FFFs) and contrast sensitivity functions (CSFs). Both subjects showed significant adaptation in their FFFs. An attempt to model the Pulfrich and FFF adaptation curves with a change in a single parameter in Kelly's [(1971) Journal of the Optical Society of America, 71, 537-546] retinal model was only partially successful. Although we have demonstrated adaptation in normal subjects to induced time delays in the visual system, we postulate that this may at least partly represent retinal adaptation to the change in mean luminance.
Abstract:
Picking up an empty milk carton that we believe to be full is a familiar example of adaptive control, because the adaptation process of estimating the carton's weight must proceed simultaneously with the control process of moving the carton to a desired location. Here we show that the motor system initially generates highly variable behavior in such unpredictable tasks but eventually converges to stereotyped patterns of adaptive responses predicted by a simple optimality principle. These results suggest that adaptation can become specifically tuned to identify task-specific parameters in an optimal manner.
Abstract:
This paper proposes an HMM-based approach to generating emotional intonation patterns. A set of models were built to represent syllable-length intonation units. In a classification framework, the models were able to detect a sequence of intonation units from raw fundamental frequency values. Using the models in a generative framework, we were able to synthesize smooth and natural-sounding pitch contours. As a case study for emotional intonation generation, Maximum Likelihood Linear Regression (MLLR) adaptation was used to transform the neutral model parameters with a small amount of happy and sad speech data. Perceptual tests showed that listeners could identify the speech with the sad intonation 80% of the time. On the other hand, listeners formed a bimodal distribution in their ability to detect the system-generated happy intonation, and on average listeners were able to detect happy intonation only 46% of the time. © Springer-Verlag Berlin Heidelberg 2005.
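The MLLR adaptation mentioned in this last abstract works by estimating a shared affine transform of the Gaussian mean vectors of the neutral HMMs, so that a small amount of emotional speech can re-tune many model parameters at once. A minimal sketch of applying such a mean transform, mu' = A*mu + b, is shown below; the matrix and bias values are illustrative placeholders, not parameters from the paper.

```python
def apply_mllr_mean_transform(mean, A, b):
    """Apply the MLLR affine transform mu' = A @ mu + b to one Gaussian mean vector.

    In full MLLR, A and b are estimated by maximum likelihood from the
    adaptation data (here, a small amount of happy or sad speech) and shared
    across regression classes of Gaussians.
    """
    return [sum(a_ij * m_j for a_ij, m_j in zip(row, mean)) + b_i
            for row, b_i in zip(A, b)]

# Illustrative 2-D example: a mean vector of (F0 in Hz, delta-F0) for a
# neutral intonation model, shifted toward a higher pitch as might occur
# when adapting toward "happy" speech. Values are hypothetical.
neutral_mean = [120.0, 0.5]
A = [[1.0, 0.0],
     [0.0, 1.0]]        # regression matrix (identity here for clarity)
b = [15.0, 0.0]         # bias raising the mean F0

adapted_mean = apply_mllr_mean_transform(neutral_mean, A, b)
print(adapted_mean)  # [135.0, 0.5]
```

The key property this illustrates is that one transform (A, b) can adapt every tied mean in a regression class, which is why MLLR needs only a small amount of adaptation data.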