907 results for Visual control
Abstract:
A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes like discontinuities in orientation and texture flow curvature as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. Object boundary output of the model is compared to computer vision algorithms using a set of human segmented photographic images. The model classifies textures and suppresses noise using a multiple scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures.
The importance of the surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. In benchmark studies, classification accuracy ranges from 95.1% to 98.6% with attention, and from 90.6% to 93.2% without attention.
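The multiple-scale oriented filterbank front end mentioned above can be illustrated with a minimal sketch, assuming Gabor-like kernels; the kernel sizes, wavelength, toy grating, and the `texture_features` helper are all hypothetical, and the dART classifier stage is not shown:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta):
    """Oriented cosine kernel under a Gaussian envelope (Gabor-like)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # coordinate along theta
    env = np.exp(-(x**2 + y**2) / (2 * (size / 4.0) ** 2))
    return env * np.cos(2 * np.pi * xr / wavelength)

def texture_features(patch, sizes=(7, 11), n_orient=4):
    """Response energies of oriented kernels at several scales."""
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    feats = []
    for size in sizes:
        h = size // 2
        window = patch[cy - h:cy + h + 1, cx - h:cx + h + 1]
        for k in range(n_orient):
            # wavelength fixed at 4.0 to match the toy grating below (an assumption)
            kern = gabor_kernel(size, wavelength=4.0, theta=k * np.pi / n_orient)
            feats.append(abs(np.sum(window * kern)))
    return np.array(feats)

# Toy input: vertical stripes with a 4-pixel period.
grating = np.tile(np.cos(2 * np.pi * np.arange(32) / 4), (32, 1))
feats = texture_features(grating)
# The theta = 0 kernel, tuned to vertical stripes, responds most at each scale.
```

In the full model such feature vectors would feed the dART classifier; here they simply illustrate orientation- and scale-selective texture measurement.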
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned.
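The VITE dynamics described above (DV = TPC − PPC, with the PPC integrating the (DV)·(GO) product) can be sketched in a few lines; the Euler step size, GO amplitude, and one-dimensional arm are illustrative assumptions:

```python
import numpy as np

def vite_trajectory(tpc, ppc0, go=1.0, dt=0.01, steps=2000):
    """Integrate the PPC toward the TPC at a rate gated by the GO signal."""
    ppc = float(ppc0)
    trace = [ppc]
    for _ in range(steps):
        dv = tpc - ppc        # Difference Vector: target minus present position
        ppc += dt * go * dv   # PPC integrates the (DV)·(GO) product
        trace.append(ppc)
    return np.array(trace)

trace = vite_trajectory(tpc=1.0, ppc0=0.0, go=2.0)
# Integration continues until the DV reaches zero, at which point PPC = TPC.
```

Scaling the GO signal changes movement speed without changing the endpoint, which is the property the model uses to separate trajectory formation from speed control.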
Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
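A minimal sketch of the VAM learning cycle, assuming a linear TPC-to-PPC map and a simple delta-rule update; the dimensionality, learning rate, and trial count are illustrative, and the gated dipole timing circuitry is not modelled:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 3
W = np.zeros((dim, dim))             # adaptive TPC -> PPC transform

for trial in range(500):
    ppc = rng.uniform(-1, 1, dim)    # ERG-style babbling drives PPC to a posture
    tpc = ppc.copy()                 # postural phase: gate copies PPC into TPC
    dv = ppc - W @ tpc               # DV serves as the error signal
    W += 0.1 * np.outer(dv, tpc)     # learning drives the DV toward zero

# Across trials the map converges, so the DV is zeroed for any posture.
err = np.linalg.norm(np.eye(dim) - W)
```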
Abstract:
A neural network is introduced which provides a solution of the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space. To do this, the network self-organizes a mapping from motion directions in 3-D space to velocity commands in joint space. Computer simulations demonstrate that, without any additional learning, the network can generate accurate movement commands that compensate for variable tool lengths, clamping of joints, distortions of visual input by a prism, and unexpected limb perturbations. Blind reaches have also been simulated.
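The motor equivalence problem can be made concrete with a standard kinematic sketch (not the self-organizing network itself): for a redundant planar three-joint arm, a Jacobian pseudoinverse maps a desired hand-motion direction to one of infinitely many joint-velocity solutions. The link lengths and sample posture are assumptions:

```python
import numpy as np

def jacobian(theta, lengths=(1.0, 1.0, 1.0)):
    """2x3 Jacobian of a planar 3-joint arm's endpoint w.r.t. joint angles."""
    J = np.zeros((2, 3))
    for i in range(3):
        for j in range(i, 3):
            a = sum(theta[:j + 1])        # absolute orientation of link j
            J[0, i] -= lengths[j] * np.sin(a)
            J[1, i] += lengths[j] * np.cos(a)
    return J

theta = np.array([0.3, 0.5, -0.2])        # current joint configuration (rad)
J = jacobian(theta)
v_hand = np.array([0.1, 0.0])             # desired hand velocity in 2-D space
dtheta = np.linalg.pinv(J) @ v_hand       # minimum-norm joint velocities
realized = J @ dtheta                     # reproduces the desired hand motion
```

Any vector in the Jacobian's null space can be added to `dtheta` without changing the hand motion, which is exactly the redundancy that permits compensation for clamped joints or altered tool lengths.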
Abstract:
Recently, a number of investigators have examined the neural loci of psychological processes enabling the control of visual spatial attention using cued-attention paradigms in combination with event-related functional magnetic resonance imaging. Findings from these studies have provided strong evidence for the involvement of a fronto-parietal network in attentional control. In the present study, we build upon this previous work to further investigate these attentional control systems. In particular, we employed additional controls for nonattentional sensory and interpretative aspects of cue processing to determine whether distinct regions in the fronto-parietal network are involved in different aspects of cue processing, such as cue-symbol interpretation and attentional orienting. In addition, we used shorter cue-target intervals that were closer to those used in the behavioral and event-related potential cueing literatures. Twenty participants performed a cued spatial attention task while brain activity was recorded with functional magnetic resonance imaging. We found functional specialization for different aspects of cue processing in the lateral and medial subregions of the frontal and parietal cortex. In particular, the medial subregions were more specific to the orienting of visual spatial attention, while the lateral subregions were associated with more general aspects of cue processing, such as cue-symbol interpretation. Additional cue-related effects included differential activations in midline frontal regions and pretarget enhancements in the thalamus and early visual cortical areas.
Abstract:
The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns, such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals.
Abstract:
Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades, respectively. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of visual signals present in the FEF, facilitating access to a common motor output pathway.
Visual functioning and quality of life in the subfoveal radiotherapy study (SFRADS): SFRADS report 2
Abstract:
Aims: To determine whether self reported visual functioning and quality of life in patients with choroidal neovascularisation caused by age related macular degeneration (AMD) are better in those treated with 12 Gy external beam radiotherapy than in untreated subjects. Methods: A multicentre single masked randomised controlled trial of 12 Gy of external beam radiation therapy (EBRT), delivered as 6x2 Gy fractions to the macula of an affected eye, versus observation. Participants were patients with AMD, aged 60 years or over, in three UK hospital units, who had subfoveal CNV and a visual acuity equal to or better than 6/60 (logMAR 1.0). Data from 199 eligible participants who were randomly assigned to 12 Gy teletherapy or observation were available for analysis. Visual function assessment, ophthalmic examination, and fundus fluorescein angiography were undertaken at baseline and at 3, 6, 12, and 24 months after study entry. To assess patient centred outcomes, subjects were asked to complete the Daily Living Tasks Dependent on Vision (DLTV) and the SF-36 questionnaires at baseline and at 6, 12, and 24 months after enrolment to the study. Cross sectional and longitudinal analyses were conducted using the arm of study as the grouping variable. Regression analysis was employed to adjust for the effect of baseline covariates on outcome at 12 and 24 months. Results: Both control and treated subjects had significant losses in visual functioning, as shown by a progressive decline in mean scores on the four dimensions of the DLTV. There were no statistically significant differences between treatment and control subjects in any of the dimensions of the DLTV at 12 or 24 months after study entry. Regression analysis confirmed that treatment status had no effect on the change in DLTV dimensional scores.
Conclusions: The small benefits noted in clinical measures of vision in treated eyes did not translate into better self reported visual functioning in patients who received treatment when compared with the control arm. These findings have implications for the design of future clinical trials and studies.
Abstract:
Primary Objective: To investigate the utility of a new method of assessing deficits in selective visual attention (SVA). Methods and Procedures: An independent groups design compared six participants with brain injuries with six participants from a non-brain-injured control group. The SensoMotoric Instruments eye movement system with a remote eye-tracking device (eye camera), together with two sets of eight stimuli, was employed to determine whether the camera would be a sensitive discriminator of SVA in these groups. Main Outcomes and Results: The attention profile displayed by the brain-injured group showed that they were slower, made more errors, were less accurate, and were more indecisive than the control group. Conclusions: The utility of eye movement analysis as an assessment method was established, with implications for rehabilitation requiring further development. Key words: selective visual attention, eye movement analysis, brain injury
Abstract:
Accurate estimates of the time-to-contact (TTC) of approaching objects are crucial for survival. We used an ecologically valid driving simulation to compare and contrast the neural substrates of egocentric (head-on approach) and allocentric (lateral approach) TTC tasks in a fully factorial, event-related fMRI design. Compared to colour control tasks, both egocentric and allocentric TTC tasks activated left ventral premotor cortex/frontal operculum and inferior parietal cortex, the same areas that have previously been implicated in temporal attentional orienting. Despite differences in visual and cognitive demands, both TTC and temporal orienting paradigms encourage the use of temporally predictive information to guide behaviour, suggesting these areas may form a core network for temporal prediction. We also demonstrated that the temporal derivative of the perceptual index tau (tau-dot) held predictive value for making collision judgements and varied inversely with activity in primary visual cortex (V1). Specifically, V1 activity increased with the increasing likelihood of reporting a collision, suggesting top-down attentional modulation of early visual processing areas as a function of subjective collision. Finally, egocentric viewpoints provoked a response bias for reporting collisions, rather than no-collisions, reflecting increased caution for head-on approaches. Associated increases in SMA activity suggest motor preparation mechanisms were engaged, despite the perceptual nature of the task.
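The perceptual index tau and its derivative can be sketched numerically, assuming a constant-velocity head-on approach (object size, speed, and starting distance are illustrative): tau is the optical angle divided by its rate of expansion, and under constant velocity it equals the time-to-contact while tau-dot stays near -1:

```python
import numpy as np

size, speed = 0.5, 10.0                  # object width (m) and approach speed (m/s)
t = np.linspace(0.0, 2.0, 2001)          # time samples (s)
z = 25.0 - speed * t                     # distance to the object (m)
theta = 2 * np.arctan(size / (2 * z))    # optical angle subtended by the object
theta_dot = np.gradient(theta, t)        # rate of optical expansion
tau = theta / theta_dot                  # optical estimate of time-to-contact
tau_dot = np.gradient(tau, t)            # close to -1 under constant velocity
```

tau starts near the true time-to-contact (2.5 s here), so a collision judgement can be made from the optic array alone, without recovering distance or speed.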
Abstract:
Rapid orientating movements of the eyes are believed to be controlled ballistically. The mechanism underlying this control is thought to involve a comparison between the desired displacement of the eye and an estimate of its actual position (obtained from the integration of the eye velocity signal). This study shows, however, that under certain circumstances fast gaze movements may be controlled quite differently and may involve mechanisms which use visual information to guide movements prospectively. Subjects were required to make large gaze shifts in yaw towards a target whose location and motion were unknown prior to movement onset. Six of those tested demonstrated remarkable accuracy when making gaze shifts towards a target that appeared during their ongoing movement. In fact their level of accuracy was not significantly different from that shown when they performed a 'remembered' gaze shift to a known stationary target (F(3,15) = 0.15, p > 0.05). The lack of a stereotypical relationship between the skew of the gaze velocity profile and movement duration indicates that on-line modifications were being made. It is suggested that a fast route from the retina to the superior colliculus could account for this behaviour and that models of oculomotor control need to be updated.
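The ballistic scheme the abstract contrasts with prospective control can be sketched as a simplified local-feedback loop (a Robinson-style model; the gain, step size, and one-dimensional eye position are assumptions): eye velocity is driven by the difference between the desired displacement and the integrated velocity signal:

```python
def saccade_endpoint(target_deg, gain=50.0, dt=0.001, steps=200):
    """Drive the eye until integrated velocity matches the desired displacement."""
    position = 0.0
    for _ in range(steps):
        motor_error = target_deg - position   # desired minus estimated displacement
        velocity = gain * motor_error         # burst-generator drive
        position += velocity * dt             # integration of the velocity signal
    return position

end = saccade_endpoint(target_deg=10.0)
# The loop lands on the target with no visual feedback during the movement.
```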
Abstract:
The control of social attention during early infancy was investigated in two studies. In both studies, an adult turned towards one of two targets within the infant's immediate visual field. We tested: (a) whether infants were able to follow the direction of the adult's head turn; and (b) whether following a head turn was accompanied by further gaze shifts between experimenter and target. In the first study, 1-month-olds did not demonstrate attention following at the group level. In addition, those infants who turned towards the same target remained fixed on it and did not shift attention again. In Study 2, we tested infants longitudinally at 2-4 months. At the group level, infants followed the adult's head turn at 3 and 4 months but not at 2 months. Those infants who turned towards the same target at 3 and 4 months also shifted gaze back and forth between experimenter and target. By 3 months, infants seem able to capitalize on the social environment to disengage and distribute attention more flexibly. The results support the claim that the control of social attention begins in early infancy, and are consistent with the hypothesis that following the attention of other people is dependent on the development of disengagement skills.
Abstract:
The Kyoto Protocol and the European Energy Performance of Buildings Directive put an onus on governments and organisations to lower their carbon footprint in order to contribute towards reducing global warming. A key parameter to be considered in buildings for energy and cost savings is indoor lighting, which has a major impact on overall energy usage and carbon dioxide emissions. Lighting control in buildings using Passive Infrared (PIR) sensors is a reliable and well established approach; however, the use of PIR sensors alone offers little saving in carbon, energy, and cost. Accurate occupancy monitoring information can greatly improve a building's lighting control strategy towards greener usage. This paper presents an approach for fusing data from Passive Infrared sensors and passive Radio Frequency Identification (RFID) based occupancy monitoring. The idea is to achieve efficient, need-based, and reliable control of lighting towards a green indoor environment, while considering the visual comfort of occupants. The proposed approach provides an estimated 13% electrical energy saving in one open-plan office of a University building in one working day. Practical implementation of RFID gateways provides real-world occupancy profiling data to be fused with Passive Infrared sensing for analysis and improvement of building lighting usage and control.
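A minimal sketch of the fusion idea, assuming a per-zone rule that combines a PIR motion flag with an RFID occupancy count; the zone names, dimming levels, and thresholds are illustrative, not the paper's control strategy:

```python
def lighting_level(pir_motion: bool, rfid_count: int) -> float:
    """Return a dimming level in [0, 1] for one lighting zone."""
    if rfid_count > 0:
        return 1.0      # tagged occupants present: full task lighting
    if pir_motion:
        return 0.6      # motion without tags: transient presence
    return 0.1          # empty zone: visual-comfort/safety minimum

# Example: fuse readings for three open-plan zones.
zones = {"zone_a": (True, 3), "zone_b": (True, 0), "zone_c": (False, 0)}
levels = {name: lighting_level(m, c) for name, (m, c) in zones.items()}
```

The RFID count prevents the classic PIR failure mode of switching lights off over a stationary occupant, which is where the energy-versus-comfort trade-off is won.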
Abstract:
Recent studies suggested that the control of hand movements in catching involves continuous vision-based adjustments. More insight into these adjustments may be gained by examining the effects of occluding different parts of the ball trajectory. Here, we examined the effects of such occlusion on lateral hand movements when catching balls approaching from different directions, with the occlusion conditions presented in blocks or in randomized order. The analyses showed that late occlusion had an effect only during the blocked presentation, and early occlusion only during the randomized presentation. During the randomized presentation, movement biases were more leftward if the preceding trial was an early occlusion trial. The effect of early occlusion during the randomized presentation suggests that the observed leftward movement bias relates to the rightward visual acceleration inherent to the ball trajectories used, while its absence during the blocked presentation seems to reflect trial-by-trial adaptations in the visuomotor gain, reminiscent of dynamic gain control in the smooth pursuit system. The movement biases during the late occlusion block were interpreted in terms of an incomplete motion extrapolation (a reduction of the velocity gain) caused by the fact that participants never saw the to-be-extrapolated part of the ball trajectory. These results underscore that continuous movement adjustments in catching depend not only on visual information, but also on visuomotor adaptations based on non-visual information.