994 results for VISUAL GUIDANCE
Abstract:
Locomoting through the environment typically involves anticipating impending changes in heading trajectory in addition to maintaining the current direction of travel. We explored the neural systems involved in the “far road” and “near road” mechanisms proposed by Land and Horwood (1995) using simulated forward or backward travel where participants were required to gauge their current direction of travel (rather than directly control it). During forward egomotion, the distant road edges provided future path information, which participants used to improve their heading judgments. During backward egomotion, the road edges did not enhance performance because they no longer provided prospective information. This behavioral dissociation was reflected at the neural level, where only simulated forward travel increased activation in a region of the superior parietal lobe and the medial intraparietal sulcus. Providing only near road information during a forward heading judgment task resulted in activation in the motion complex. We propose a complementary role for the posterior parietal cortex and motion complex in detecting future path information and maintaining current lane positioning, respectively. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Abstract:
This study aimed to quantify the efficiency and smoothness of voluntary movement in Huntington's disease (HD) using a graphics tablet that permits analysis of movement profiles. In particular, we aimed to ascertain whether a concurrent task (digit span) would affect the kinematics of goal-directed movements. Twelve patients with HD and their matched controls performed 12 vertical zig-zag movements, with both left and right hands (with and without the concurrent task), to large or small circular targets over long or short extents. The concurrent task was associated with shorter movement times and reduced right-hand superiority. Patients with HD were slower overall, especially with long strokes, and produced similar peak velocities for small and large targets, whereas controls better accommodated differences in target size. Patients with HD spent more time decelerating, especially with small targets, whereas controls allocated more nearly equal proportions of time to the acceleration and deceleration phases of movement, especially with large targets. Short strokes were generally more force efficient than long strokes, especially for either hand in either group in the absence of the concurrent task, and for the right hand in its presence. With the concurrent task, however, the left hand's behavior changed differentially for the two groups: for patients with HD, it became more force efficient with short strokes and even less efficient with long strokes, whereas for controls, it became more efficient with long strokes. Controls may be able to divert attention away from the inferior left hand, increasing its automaticity, whereas patients with HD, because of disease, may be forced to engage even further online visual control under the demands of a concurrent task.
Patients with HD may thus become increasingly reliant on terminal visual guidance, indicating an impairment in constructing and refining an internal representation of the movement necessary for its effective execution. Basal ganglia dysfunction may impair the ability to use internally generated cues to guide movement.
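The kinematic measures this abstract relies on (movement time, peak velocity, and the proportion of the movement spent decelerating) can all be derived from a sampled velocity profile. A minimal sketch in Python, using a synthetic bell-shaped profile rather than actual tablet data:

```python
import numpy as np

def kinematic_measures(velocity, dt):
    """Summarize one stroke from its sampled velocity profile.

    velocity: 1-D array of speeds for a single stroke
    dt: sampling interval in seconds
    Returns (peak velocity, movement time, proportion of the
    movement spent decelerating, i.e. after the velocity peak).
    """
    velocity = np.asarray(velocity, dtype=float)
    peak_idx = int(np.argmax(velocity))
    movement_time = len(velocity) * dt
    decel_proportion = (len(velocity) - peak_idx) / len(velocity)
    return velocity[peak_idx], movement_time, decel_proportion

# Synthetic symmetric (bell-shaped) profile: 100 samples at 200 Hz
n = 100
v = np.sin(np.pi * np.arange(n) / (n - 1))
peak_v, mt, decel = kinematic_measures(v, dt=0.005)
# A symmetric profile spends about half its time decelerating; the
# prolonged deceleration reported for HD would push decel well above 0.5.
```
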
Abstract:
ExPASy (http://www.expasy.org) has a worldwide reputation as one of the main bioinformatics resources for proteomics. It has now evolved into an extensible and integrative portal accessing many scientific resources, databases and software tools in different areas of the life sciences. Scientists can henceforth seamlessly access a wide range of resources in many different domains, such as proteomics, genomics, phylogeny/evolution, systems biology, population genetics and transcriptomics. The individual resources (databases, web-based and downloadable software tools) are hosted in a 'decentralized' way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions. Specifically, a single web portal provides a common entry point to a wide range of resources developed and operated by different SIB groups and external institutions. The portal features a search function across 'selected' resources. Additionally, the availability and usage of resources are monitored. The portal is aimed at both expert users and people who are not familiar with a specific domain in the life sciences. The new web interface provides, in particular, visual guidance for newcomers to ExPASy.
Abstract:
The work presented in this thesis concerns the role of the dorsal premotor cortex (PMd) in decision making (the selection of one action among multiple options) and in the visual guidance of arm movements. It describes electrophysiological experiments in the awake monkey (Macaca mulatta) that address a substantial fraction of the predictions of the affordance competition hypothesis (Cisek, 2006; Cisek, 2007a). This hypothesis proposes that the choice of any action is the outcome of a competition between internal representations of the demands and advantages of each of the presented options (affordances; Gibson, 1979). Particular attention is given to the processing of spatial information and of option value (expected value, EV) in decision making. The first study (article 1) explores how PMd reflects these two parameters during the delay period, as well as their interaction. The second study (article 2) examines the decision mechanism in greater detail and extends the results to the ventral premotor cortex (PMv). This study also addresses the representation of space and EV from a learning perspective. In a novel environment, the spatial parameters of actions appear to be represented in PMd at all times, whereas the representation of EV emerges only once the animals begin to make informed decisions about the value of the available options. The third study (article 3) explores how PMd is involved in "changes of mind" during a decision process, describing how the selection of an action is updated following a movement instruction (GO signal).
The main results of these studies are reproduced by a computational model (Cisek, 2006), suggesting that decisions between multiple alternative actions can be made through a biased-competition mechanism taking place within the same regions that specify the actions.
Abstract:
Reaching and grasping an object is an action that can be performed in light, under visual guidance, as well as in darkness, under proprioceptive control only. Area V6A is a visuomotor area involved in the control of reaching movements. Besides neurons activated by the execution of reaching movements, V6A shows passive somatosensory and visual responses. This suggests that V6A has a multimodal capability for integrating sensory and motor-related information. We wanted to know whether this integration occurs during reaching movements, and in the present study we tested whether visual feedback influenced the reaching activity of V6A neurons. To better address this question, we interpreted the neural data in the light of the kinematics of reaching performance. We used an experimental paradigm that examined V6A responses against two different visual backgrounds, light and dark. In these conditions, the monkey performed an instructed-delay reaching task, moving the hand towards different target positions located in the peripersonal space. Having already demonstrated reach-related discharges in V6A in the absence of visual feedback, we expected one of two types of neural modulation: 1) the addition of light in the environment would enhance the reach-related discharges recorded in the dark; or 2) the light would leave the neural response unmodified. Unexpectedly, the results show a complex pattern of modulation that argues against a simple additive interaction between visual and motor-related signals.
Abstract:
Percutaneous nephrolithotomy (PCNL) for the treatment of renal stones and other related renal diseases has proved its efficacy and has stood the test of time compared with open surgical methods and extracorporeal shock wave lithotripsy. However, access to the collecting system of the kidney is not easy, because the available intra-operative imaging modalities provide only a two-dimensional view of the surgical scene. With this lack of visual information, several punctures are often necessary, which increases the risk of renal bleeding; of splanchnic, vascular, or pulmonary injury; or of damage to the collecting system, which sometimes makes continuation of the procedure impossible. To address this problem, this paper proposes a workflow for the introduction of a stereotactic needle guidance system for PCNL procedures. An analysis of the imposed clinical requirements is presented, together with an instrument guidance approach that provides the physician with more intuitive planning and visual guidance for accessing the collecting system of the kidney.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include multiple stimuli of different modalities, such as visual and auditory; multiple stimuli of the same modality, such as two concurrent sounds; and the integration of stimuli from the sensory organs (i.e., the ears) with stimuli delivered through brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided learning of sound localization, a problem with implications for the general question in learning of how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
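The size of the reported shift can be checked with one line of arithmetic: 1.3 and 1.7 degrees out of a 6-degree imposed mismatch are roughly 22% and 28%.

```python
mismatch_deg = 6.0           # imposed visual-auditory offset
shifts_deg = [1.3, 1.7]      # observed shifts in auditory-only saccades

pct = [round(100 * s / mismatch_deg) for s in shifts_deg]
print(pct)  # → [22, 28]
```
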
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for understanding the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, making it an attractive target for studying stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
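The entrainment of spikes to an AM "tag" frequency described above is commonly quantified with the vector strength statistic. A minimal sketch of that computation (the 20 Hz tag rate, jitter, and spike counts are illustrative values, not the study's):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength of spike times (s) relative to a tag frequency (Hz).

    Each spike is mapped to its phase within the AM cycle; the length
    of the mean resultant vector is returned (0 = no phase locking,
    1 = perfect locking).
    """
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(0)
tag = 20.0  # Hz, hypothetical AM rate of one sound source

# Spikes locked to the 20 Hz cycle (small jitter) vs. random spikes
locked = np.arange(0, 1, 1 / tag) + rng.normal(0, 0.002, 20)
random_spikes = rng.uniform(0, 1, 20)

vs_locked = vector_strength(locked, tag)       # close to 1
vs_random = vector_strength(random_spikes, tag)  # close to 0
```

Comparing vector strength across tag frequencies then indicates which sound source is driving a neuron's spikes in a multi-source scene.
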
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
This work seeks the factors behind the choices that people with special educational needs arising from visual impairment make during the transition from high school to higher education. We take into consideration that vocational guidance and the transition to adulthood acquire specific characteristics in the case of visually impaired young people, particularly with respect to continuing on to higher education. The focus of this work is to clarify the factors that make this transition easier or harder, through the observation of visually impaired and blind people completing high school. This matter has aroused interest and concern about the strategies to follow to ensure successful entrance into, and persistence in, the chosen course of higher education. If we do not know the factors involved, however, it is difficult to design an appropriate intervention strategy. Therefore, in order to learn about the specific issues of visually impaired young people completing high school, we chose a special school for this disability and a group of its students to take part in this project.
Abstract:
Chondroitin sulfate proteoglycans display both inhibitory and stimulatory effects on cell adhesion and neurite outgrowth in vitro. The functional activity of these proteoglycans appears to be context specific and dependent on the presence of different chondroitin sulfate-binding molecules. Little is known about the role of chondroitin sulfate proteoglycans in the growth and guidance of axons in vivo. To address this question, we examined the effects of exogenous soluble chondroitin sulfates on the growth and guidance of axons arising from a subpopulation of neurons in the vertebrate brain which express NOC-2, a novel glycoform of the neural cell adhesion molecule N-CAM. Intact brains of stage 28 Xenopus embryos were unilaterally exposed to medium containing soluble exogenous chondroitin sulfates. When exposed to chondroitin sulfate, NOC-2(+) axons within the tract of the postoptic commissure failed to follow their normal trajectory across the ventral midline via the ventral commissure in the midbrain. Instead, these axons either stalled or grew into the dorsal midbrain or continued growing longitudinally within the ventral longitudinal tract. These findings suggest that chondroitin sulfate proteoglycans indirectly modulate the growth and guidance of a subpopulation of forebrain axons by regulating either matrix-bound or cell surface cues at specific choice points within the developing vertebrate brain. (C) 1998 Academic Press.
Abstract:
The application of augmented reality (AR) technology for assembly guidance is a novel approach in the traditional manufacturing domain. In this paper, we propose an AR approach for assembly guidance using a virtual interactive tool that is intuitive and easy to use. The virtual interactive tool, termed the Virtual Interaction Panel (VirIP), involves two tasks: the design of the VirIPs and the real-time tracking of an interaction pen using a Restricted Coulomb Energy (RCE) neural network. The VirIP includes virtual buttons, which carry meaningful assembly information and can be activated by the interaction pen during the assembly process. A visual assembly tree structure (VATS) is used for information management and assembly instruction retrieval in this AR environment. VATS is a hierarchical tree structure that can be easily maintained via a visual interface. This paper describes a typical scenario for assembly guidance using VirIP and VATS. The main characteristic of the proposed AR system is the intuitive way in which an assembly operator can easily step through a pre-defined assembly plan/sequence without the need for any sensor schemes or markers attached to the assembly components.
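The abstract describes VATS only as a hierarchical tree maintained through a visual interface. A minimal sketch of such an assembly-instruction tree (class and field names are hypothetical, not the paper's data model):

```python
from dataclasses import dataclass, field

@dataclass
class AssemblyNode:
    """One node of a hierarchical assembly-instruction tree (VATS-like)."""
    name: str
    instruction: str = ""
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def steps(self):
        """Yield instructions depth-first, i.e. in assembly order."""
        if self.instruction:
            yield self.instruction
        for c in self.children:
            yield from c.steps()

# Hypothetical two-level assembly plan
root = AssemblyNode("pump")
base = root.add(AssemblyNode("base", "Place base plate on fixture"))
base.add(AssemblyNode("bolt", "Insert and torque 4 bolts"))
root.add(AssemblyNode("housing", "Mount housing on base"))

plan = list(root.steps())  # instructions in depth-first order
```

Stepping through `plan` one entry at a time mirrors how an operator would advance through a pre-defined sequence via the VirIP buttons.
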
Abstract:
Near ground maneuvers, such as hover, approach and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground often using ultrasonic or laser range finders. Near ground maneuvers are naturally mastered by flying birds and insects as objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (Tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for Unmanned Aerial Vehicles (UAV) relative ground distance control. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the Tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on-board an experimental quadrotor UAV and shown not only to successfully land and approach ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
Abstract:
Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects because objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control for unmanned aerial vehicles. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented onboard an experimental quadrotor unmanned aerial vehicle and is shown not only to successfully approach and land on the ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
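Tau theory rests on the quantity tau = x / x_dot, the time-to-contact if the current closure rate were held constant; driving d(tau)/dt to a constant k with 0 < k < 0.5 closes the gap in finite time while the closure rate itself goes to zero. A minimal simulation under assumed numbers (this is not the paper's controller or its gains):

```python
def tau_landing(h0, tau0, k, dt=0.01):
    """Euler simulation of a constant tau-dot descent.

    tau = h / h_dot is the time-to-contact at the current closure rate.
    Holding d(tau)/dt = k with 0 < k < 0.5 produces a soft landing:
    both the gap and the closure rate tend to zero together.
    h0: initial height (m); tau0: initial tau (s, negative because the
    gap is closing); k: the constant tau-dot.
    """
    h, t, v = h0, 0.0, h0 / tau0
    while tau0 + k * t < -dt:       # stop once tau has closed to ~0
        tau = tau0 + k * t          # commanded tau schedule
        v = h / tau                 # closure rate realizing that schedule
        h += v * dt
        t += dt
    return h, v

# Hypothetical descent: 10 m up, 3 s initial time-to-contact, tau-dot 0.4
h_final, v_final = tau_landing(h0=10.0, tau0=-3.0, k=0.4)
# Both the residual height and the terminal speed end up near zero,
# even though height and velocity were never used separately by the law.
```

The appeal for a monocular platform is that tau is available directly from optic flow (relative image expansion), which is why no range finder is needed.
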
Abstract:
Radial glia in the developing optic tectum express the key guidance molecules responsible for topographic targeting of retinal axons. However, the extent to which the radial glia are themselves influenced by retinal inputs and visual experience remains unknown. Using multiphoton live imaging of radial glia in the optic tectum of intact Xenopus laevis tadpoles in conjunction with manipulations of neural activity and sensory stimuli, radial glia were observed to exhibit spontaneous calcium transients that were modulated by visual stimulation. Structurally, radial glia extended and retracted many filopodial processes within the tectal neuropil over minutes. These processes interacted with retinotectal synapses and their motility was modulated by nitric oxide (NO) signaling downstream of neuronal NMDA receptor (NMDAR) activation and visual stimulation. These findings provide the first in vivo demonstration that radial glia actively respond both structurally and functionally to neural activity, via NMDAR-dependent NO release during the period of retinal axon ingrowth.