Abstract:
Accumulator models that integrate incoming sensory information into motor plans provide a robust framework for understanding decision making. However, their applicability to situations that demand a change of plan poses an interesting problem for the brain: the current motor plan must be interrupted by a competing motor plan that is necessarily weaker in strength. To understand how changes of mind are expressed in behavior, we used a version of the double-step task called the redirect task, in which monkeys were trained to modify a saccade plan. We microstimulated the frontal eye fields during redirect behavior and systematically measured the deviation of the evoked saccade from the response field to causally track the changing saccade plan. To identify the underlying mechanisms, we assessed eight computational models of redirect behavior. The model that included an independent, spatially specific inhibitory process, in addition to the two accumulators representing the preparatory processes of the initial and final motor plans, best predicted performance and the pattern of saccade deviations in the task. This inhibitory process suppressed the preparation of the initial motor plan, allowing the final motor plan to proceed unhindered. Thus, changes of mind are consistent with the notion of a spatially specific inhibitory process that suppresses the current, inappropriate plan, allowing expression of the new plan.
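The architecture this abstract describes (two racing accumulators plus a separate inhibitory unit acting only on the old plan) can be sketched as a minimal race simulation. All rates, the delay, the noise level, and the threshold below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def simulate_redirect(inhibition_gain, delay=30, threshold=1.0,
                      n_steps=400, seed=0):
    """Minimal race sketch of the redirect task: accumulators for the
    initial (go1) and final (go2) saccade plans, plus a spatially
    specific inhibitory unit that suppresses only the initial plan
    after the target step.  Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    go1 = go2 = inhib = 0.0
    # The final plan is deliberately weaker (lower rate) than the initial one.
    rate1, rate2, rate_inhib, noise = 0.02, 0.015, 0.05, 0.005
    for t in range(n_steps):
        go1 = max(0.0, go1 + rate1 - inhibition_gain * inhib
                  + noise * rng.standard_normal())
        if t >= delay:           # the target has stepped to a new location
            go2 = max(0.0, go2 + rate2 + noise * rng.standard_normal())
            inhib += rate_inhib  # inhibition builds against the old plan
        if go1 >= threshold:
            return 'initial', t  # saccade to the old target (error)
        if go2 >= threshold:
            return 'final', t    # redirected saccade (correct)
    return 'none', None
```

With the inhibitory unit active, the weaker final plan wins the race; with `inhibition_gain=0` the stronger initial plan reaches threshold first, which illustrates why a pure two-accumulator race cannot produce a successful redirect.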
Abstract:
Neurons in the primate lateral intraparietal area (area LIP) carry visual, saccade-related, and eye position activities. The visual and saccade activities are anchored in a retinotopic framework, and the overall response magnitude is modulated by eye position. It was proposed that this modulation by eye position might be the basis of a distributed coding of target locations in head-centered space. Other recording studies demonstrated that area LIP is involved in oculomotor planning. Together, these results suggest that area LIP transforms sensory information for motor functions. In this thesis I further explore the role of area LIP in processing saccadic eye movements by observing the effects of reversible inactivation of this area. Macaque monkeys were trained to perform visually guided saccades, memory saccades, and a double saccade task that examines the use of the eye position signal. Finally, by intermixing visual saccades with trials in which two targets were presented on opposite sides of the fixation point, I examined visual extinction behavior.
In chapter 2, I will show that lesion of area LIP results in increased latency of contralesional visual and memory saccades. Contralesional memory saccades are also hypometric and slower. Moreover, the impairment of memory saccades does not vary with the duration of the delay period, suggesting that the oculomotor deficits observed after inactivation of area LIP are not due to the disruption of spatial memory.
In chapter 3, I will show that lesion of area LIP does not severely affect spontaneous eye movements. However, after inactivation of area LIP the monkeys made fewer contralesional saccades and tended to confine their gaze to the ipsilesional field. In contrast, lesion of area LIP results in extinction of the contralesional stimulus. When the initial fixation position was varied so that the retinal and spatial locations of the targets could be dissociated, the extinction behavior was best described in a head-centered coordinate frame.
In chapter 4, I will show that inactivation of area LIP disrupts the use of the eye position signal to compute the second movement correctly in the double saccade task. If the first saccade steps into the contralesional field, the error rate and latency of the second saccade both increase. Furthermore, the direction of the first eye movement has little effect on the impairment of the second saccade. I will argue that this study provides important evidence that the extraretinal signal used for saccadic localization is eye position rather than a displacement vector.
In chapter 5, I will demonstrate that in monkeys with parietal inactivation the eye drifts toward the lesioned side at the end of memory saccades in darkness. This result suggests that the eye position activity in the posterior parietal cortex is active in nature and subserves gaze holding.
Overall, these results further support the view that area LIP neurons encode spatial locations in a craniotopic framework and that area LIP is involved in processing voluntary eye movements.
Abstract:
How animals use sensory information to weigh the risks and benefits of behavioral decisions remains poorly understood. Inter-male aggression is triggered when animals perceive both the presence of an appetitive resource, such as food or females, and the presence of competing conspecific males. How such signals are detected and integrated to control the decision to fight is not clear. Here we use the vinegar fly, Drosophila melanogaster, to investigate the manner in which food and females promote aggression.
In the first chapter, we explore how food controls aggression. As in many other species, food promotes aggression in flies, but it has been unclear whether food increases aggression per se, or whether aggression is a secondary consequence of increased social interactions caused by aggregation of flies on food. Furthermore, nothing is known about how animals evaluate the quality and quantity of food in the context of competition. We show that food promotes aggression independently of any effect on the frequency of contact between males. Food increases aggression but not courtship between males, suggesting that the effect of food on aggression is specific. Next, we show that flies tune the level of aggression according to the absolute amount of food rather than other parameters, such as the area or concentration of food. Sucrose, a sugar present in many fruits, is sufficient to promote aggression, and detection of sugar via gustatory receptor neurons is necessary for food-promoted aggression. Furthermore, we show that while food is necessary for aggression, too much food decreases aggression. Finally, we show that flies exhibit behavior consistent with a territorial strategy. These data suggest that flies use sweet-sensing gustatory information to guide their decision to fight over a limited quantity of a food resource.
Following up on the findings of the first chapter, we asked how the presence of a conspecific female resource promotes male-male aggression. In the absence of food, group-housed male flies, which normally do not fight even in the presence of food, fight in the presence of females. Unlike food, the presence of females strongly influences proximity between flies. Nevertheless, as group-housed flies do not fight even when they are in small chambers, it is unlikely that the presence of females indirectly increases aggression by first increasing proximity. Unlike food, the presence of females also leads to large increases in locomotion and in male-female courtship behaviors, suggesting that females may influence aggression as well as general arousal. Female cuticular hydrocarbons are required for this effect, as females that do not produce CH pheromones are unable to promote male-male aggression. In particular, 7,11-HD, a female-specific cuticular hydrocarbon pheromone critical for male-female courtship, is sufficient to mediate this effect when it is perfumed onto pheromone-deficient females or males. Recent studies showed that ppk23 labels two populations of GRNs: one that detects male cuticular hydrocarbons, and another, co-labeled by ppk23 and ppk25, that detects female cuticular hydrocarbons. I show that both of these GRN populations control aggression, presumably via detection of female or male pheromones. To further investigate the ways in which these two classes of GRNs control aggression, I developed new genetic tools to independently test the male- and female-sensing GRNs. I show that ppk25-LexA and ppk25-GAL80 faithfully recapitulate the expression pattern of ppk25-GAL4 and label a subset of ppk23+ GRNs. These tools can be used in future studies to dissect the respective functions of male-sensing and female-sensing GRNs in male social behaviors.
Finally, in the last chapter, I discuss quantitative approaches to describing how varying quantities of food and females could control the level of aggression. Flies show an inverse-U-shaped aggressive response to varying quantities of food and a flat aggressive response to varying quantities of females. I show how two simple game-theoretic models, the “prisoner’s dilemma” and the “coordination game”, could be used to describe the levels of aggression we observe. These results suggest that flies may engage in strategic decision-making based on simple comparisons of costs and benefits.
In conclusion, male-male aggression in Drosophila is controlled by simple gustatory cues from food and females, which are detected by gustatory receptor neurons. Different quantities of resource cues lead to different levels of aggression, and flies show putative territorial behavior, suggesting that fly aggression is a highly strategic adaptive behavior. How these resource cues are integrated with male pheromone cues and give rise to this complex behavior is an interesting subject, which should keep researchers busy in the coming years.
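The game-theoretic framing in the penultimate chapter can be illustrated with a small best-response computation over 2x2 payoff matrices. The payoff values below are illustrative placeholders, not quantities estimated from fly behavior.

```python
import numpy as np

def pure_nash(payoff_row, payoff_col):
    """Pure-strategy Nash equilibria of a 2x2 game.  payoff_row[i, j] is
    the row player's payoff and payoff_col[i, j] the column player's,
    for actions 0 = fight, 1 = yield."""
    eq = []
    for i in range(2):
        for j in range(2):
            # (i, j) is an equilibrium if neither player gains by
            # unilaterally switching to the other action.
            if (payoff_row[i, j] >= payoff_row[1 - i, j]
                    and payoff_col[i, j] >= payoff_col[i, 1 - j]):
                eq.append((i, j))
    return eq

# Prisoner's-dilemma-like contest: fighting dominates, so both fight
# even though mutual yielding would pay each player more.
pd_row = np.array([[1, 3], [0, 2]])
pd_col = pd_row.T
# Coordination-game-like contest: matching the opponent's action pays,
# so both all-fight and all-yield are stable.
co_row = np.array([[2, 0], [0, 1]])
co_col = co_row.T
```

`pure_nash(pd_row, pd_col)` returns only mutual fighting, while the coordination game also admits mutual yielding; which game best describes a contest could depend on the resource quantities discussed above.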
Abstract:
Human sensorimotor control has been predominantly studied using fixed tasks performed under laboratory conditions. This approach has greatly advanced our understanding of the mechanisms that integrate sensory information and generate motor commands during voluntary movement. However, experimental tasks necessarily restrict the range of behaviors that are studied. Moreover, the processes studied in the laboratory may not be the same processes that subjects call upon during their everyday lives. Naturalistic approaches thus provide an important adjunct to traditional laboratory-based studies. For example, wearable self-contained tracking systems can allow subjects to be monitored outside the laboratory, where they engage spontaneously in natural everyday behavior. Similarly, advances in virtual reality technology allow laboratory-based tasks to be made more naturalistic. Here, we review naturalistic approaches, including perspectives from psychology and visual neuroscience, as well as studies and technological advances in the field of sensorimotor control.
Abstract:
Although learning a motor skill, such as a tennis stroke, feels like a unitary experience, researchers who study motor control and learning break the processes involved into a number of interacting components. These components can be organized into four main groups. First, skilled performance requires the effective and efficient gathering of sensory information, such as deciding where and when to direct one's gaze around the court, and thus an important component of skill acquisition involves learning how best to extract task-relevant information. Second, the performer must learn key features of the task such as the geometry and mechanics of the tennis racket and ball, the properties of the court surface, and how the wind affects the ball's flight. Third, the player needs to set up different classes of control that include predictive and reactive control mechanisms that generate appropriate motor commands to achieve the task goals, as well as compliance control that specifies, for example, the stiffness with which the arm holds the racket. Finally, the successful performer can learn higher-level skills such as anticipating and countering the opponent's strategy and making effective decisions about shot selection. In this Primer we shall consider these components of motor learning using as an example how we learn to play tennis.
Abstract:
Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations (the mapping between the actual and visual location of the hand) during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior changed significantly, to a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique that can estimate the full covariance structure of a prior in a sensorimotor task and have shown that the prior over visuomotor transformations favors a rotation-like structure.
Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.
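The precision-weighted combination at the heart of such a Bayesian observer can be sketched for a Gaussian prior over transformation parameters with an identity observation model. This is a generic illustration of the inference step, with illustrative covariances, not the study's fitted model.

```python
import numpy as np

def gaussian_posterior(mu0, Sigma0, y, Sigma_obs):
    """Posterior mean and covariance for a Gaussian prior N(mu0, Sigma0)
    over transformation parameters, given one noisy observation y with
    noise covariance Sigma_obs (identity observation model; an
    illustrative stand-in for the paper's observer model)."""
    S0_inv = np.linalg.inv(Sigma0)
    So_inv = np.linalg.inv(Sigma_obs)
    Sigma_post = np.linalg.inv(S0_inv + So_inv)
    mu_post = Sigma_post @ (S0_inv @ mu0 + So_inv @ y)
    return mu_post, Sigma_post

# A prior whose covariance is broad along a "rotation-like" parameter
# axis and tight elsewhere pulls ambiguous evidence toward rotations.
mu0 = np.zeros(2)
Sigma0 = np.diag([1.0, 0.01])   # broad along axis 0, tight along axis 1
y = np.array([1.0, 1.0])        # ambiguous evidence implicating both axes
mu_post, _ = gaussian_posterior(mu0, Sigma0, y, 0.5 * np.eye(2))
```

The posterior mean follows the evidence along the broad prior axis but barely moves along the tight one, which is the rotation-favoring interpretation the abstract describes; estimating `Sigma0` from the pattern of second reaches is the paper's contribution.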
Abstract:
Optimal feedback control postulates that feedback responses depend on the task relevance of any perturbations. We test this prediction in a bimanual task, conceptually similar to balancing a laden tray, in which each hand could be perturbed up or down. Single-limb mechanical perturbations produced long-latency reflex responses ("rapid motor responses") in the contralateral limb of appropriate direction and magnitude to maintain the tray horizontal. During bimanual perturbations, rapid motor responses modulated appropriately depending on the extent to which perturbations affected tray orientation. Specifically, despite receiving the same mechanical perturbation causing muscle stretch, the strongest responses were produced when the contralateral arm was perturbed in the opposite direction (large tray tilt) rather than in the same direction or not perturbed at all. Rapid responses from shortening extensors depended on a nonlinear summation of the sensory information from the arms, with the response to a bimanual same-direction perturbation (orientation maintained) being less than the sum of the component unimanual perturbations (task relevant). We conclude that task-dependent tuning of reflexes can be modulated online within a single trial based on a complex interaction across the arms.
Abstract:
A recent study demonstrates involvement of primary motor cortex in task-dependent modulation of rapid feedback responses; cortical neurons resolve locally ambiguous sensory information, producing sophisticated responses to disturbances.
Abstract:
Decisions about noisy stimuli require evidence integration over time. Traditionally, evidence integration and decision making are described as a one-stage process: a decision is made when evidence for the presence of a stimulus crosses a threshold. Here, we show that one-stage models cannot explain psychophysical experiments on feature fusion, where two visual stimuli are presented in rapid succession. Paradoxically, the second stimulus biases decisions more strongly than the first one, contrary to predictions of one-stage models and intuition. We present a two-stage model where sensory information is integrated and buffered before it is fed into a drift diffusion process. The model is tested in a series of psychophysical experiments and explains both accuracy and reaction time distributions.
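The buffer-then-diffusion idea can be sketched minimally: a leaky buffer integrates the stimulus sequence (so later stimuli weigh more, reproducing the second-stimulus bias), and its output sets the drift of a diffusion-to-bound stage. The leak, noise, and threshold values are illustrative assumptions, not the fitted model of Rüter et al.

```python
import numpy as np

def fuse(stimuli, leak=0.3):
    """Stage 1: leaky buffering of successive stimulus values; because
    of the leak, the later stimulus dominates the buffered percept."""
    v = 0.0
    for s in stimuli:
        v = (1.0 - leak) * v + leak * s
    return v

def diffuse(drift, threshold=1.0, noise=0.1, dt=0.01, seed=0):
    """Stage 2: the buffered value drives a drift-diffusion process to
    +/- threshold; returns (choice, decision time)."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

# Two opposite stimuli in rapid succession: the second (-1) dominates.
drift = fuse([+1.0, -1.0])   # negative buffered evidence
choice, rt = diffuse(drift)
```

A one-stage model that starts diffusing at stimulus onset would instead be dominated by the first stimulus, whose evidence arrives earlier; the buffer is what inverts the temporal weighting.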
Abstract:
A key function of the brain is to interpret noisy sensory information. To do so optimally, observers must, in many tasks, take into account knowledge of the precision with which stimuli are encoded. In an orientation change detection task, we find that encoding precision not only depends on an experimentally controlled reliability parameter (shape) but also exhibits additional variability. In spite of this variability in precision, human subjects seem to take precision into account near-optimally on a trial-to-trial and item-to-item basis. Our results offer a new conceptualization of the encoding of sensory information and highlight the brain's remarkable ability to incorporate knowledge of uncertainty during complex perceptual decision-making.
Abstract:
The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as a heuristic by which to extract semantic information from multi-dimensional time series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
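The deterministic algorithm being reinterpreted can be stated compactly in its linear form. The sketch below is textbook linear SFA, not the paper's probabilistic extension: whiten the signal, then take the directions along which the whitened signal changes most slowly.

```python
import numpy as np

def linear_sfa(X, n_components=1):
    """Linear slow feature analysis: find unit-variance projections of X
    whose temporal derivative has minimal variance."""
    X = X - X.mean(axis=0)
    # Whiten, so the unit-variance constraint becomes orthonormality.
    d, E = np.linalg.eigh(np.cov(X.T))
    W_white = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = X @ W_white
    # Slowest directions are the smallest-eigenvalue eigenvectors of the
    # covariance of the discrete temporal derivative.
    d2, E2 = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
    return Z @ E2[:, :n_components]

# A slow sine mixed with a fast one: SFA should recover the slow source.
t = np.linspace(0.0, 2.0 * np.pi, 500)
S = np.column_stack([np.sin(t), np.sin(40.0 * t)])
X = S @ np.array([[1.0, 1.0], [0.5, -0.5]])   # arbitrary invertible mixing
slow = linear_sfa(X).ravel()
```

The recovered component matches the slow source up to sign and scale; the paper's result is that this solution also falls out of exact inference in a particular linear-Gaussian latent model.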
Abstract:
The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] translating visual information into representations suitable for motor control [5, 6]. For this to be accomplished, visual information pertaining to the target and the hand needs to be identified and linked to the appropriate internal representations during the movement. Meanwhile, other visual information must be filtered out, which is especially demanding in visually cluttered natural environments. If selection of the relevant sensory information for online control were achieved by visual attention, its limited capacity [7] would substantially constrain the efficiency of visuomotor feedback control. Here we demonstrate that both exogenously and endogenously cued attention facilitate the processing of visual target information [8], but not of visual hand information. Moreover, distracting visual information is filtered out more efficiently during the extraction of hand information than of target information. Our results therefore suggest the existence of a dedicated visuomotor binding mechanism that links the hand representation in the visual and motor systems.
Abstract:
It has been shown that sensory morphology and sensory-motor coordination enhance the capabilities of sensing in robotic systems. The tasks of categorization and category learning, for example, can be significantly simplified by exploiting morphological constraints, sensory-motor couplings, and the interaction with the environment. This paper argues that, in the context of sensory-motor control, it is essential to consider body dynamics derived from morphological properties and the interaction with the environment in order to gain additional insight into the underlying mechanisms of sensory-motor coordination and, more generally, the nature of perception. A locomotion model of a four-legged robot is used for case studies in both simulation and the real world. The locomotion model demonstrates how attractor states derived from body dynamics influence the sensory information, which can then be used for the recognition of stable behavioral patterns and of physical properties of the environment. A comprehensive analysis of behavior and sensory information leads to a deeper understanding of the underlying mechanisms by which body dynamics can be exploited for category learning in autonomous robotic systems.
Abstract:
One of the most important functions in individual development is the interaction and integration of the different sensory inputs. Two competing theories, the deficiency theory and the compensatory theory, address the origin and nature of the changes in visual function observed after auditory deprivation. The deficiency theory proposes that integrative processes are essential for normal development, so the loss of one sense should impair the others. In contrast, the compensatory theory states that the loss of one sense may be met by a greater reliance upon, and therefore an enhancement of, the remaining senses. Given that hearing-impaired children's learning depends primarily on visual information, it is important to characterize the differences in visual attention between them and their hearing age-mates. Differences among age groups could exist in either selective or sustained attention. Studies 1 and 2 explored the development of selective and sustained attention in hearing-impaired and hearing students of average cognitive ability, ranging in age from 7 years to college age. The results are analyzed and discussed in terms of visual attention development and the deficiency and compensatory theories. Building on the results of Studies 1 and 2, Studies 3 and 4 investigated the spatial distribution and control of visual attention in hearing-impaired and hearing students. The present work showed the following. Firstly, hearing-impaired and hearing participants had similar developmental trajectories of sustained attention: children's sustained attention improved with age and peaked in adolescence. The hearing-impaired participants had sustained attention skills comparable to those of matched hearing participants, and they were able to maintain attention and vigilance on the current task over the observation period.
Secondly, group differences in visual attention development were found between hearing-impaired and hearing participants. In childhood, visual attention developed more slowly in hearing-impaired children than in hearing children, and their selective attention was not comparable to that of their hearing peers. Their selective attention improved with age, however, so that in adulthood hearing-impaired students showed a slight advantage in selective attention over hearing ones. Thirdly, hearing-impaired and hearing participants showed a similar spatial distribution of attentional resources. In the low perceptual load condition, both groups suffered strong interference from the distractor at fixation. In contrast, in the high perceptual load condition, hearing-impaired adults suffered more interference from the peripheral distractor, suggesting that they allocated more attentional resources to the peripheral field when facing difficult tasks. Fourthly, both groups showed similar processing in the visual attention tasks: both searched in parallel for targets defined by color alone, but serially for targets defined by orientation or by a conjunction of color and orientation. Furthermore, the results indicated that the two groups controlled attention in similar ways. In summary, the present study showed that the development of visual attention depends on the integration of multimodal sensory information. Because inputs from the different senses interact and are integrated, the loss of one sense initially impairs the remaining intact senses; with development and practice, however, the functions of the intact senses gradually improve.
Abstract:
Q. Meng and M. H. Lee, 'Error-driven active learning in growing radial basis function networks for early robot learning', 2006 IEEE International Conference on Robotics and Automation (IEEE ICRA 2006), pp. 2984-2990, Orlando, Florida, USA.