960 results for Visual motion
Abstract:
Neuroimaging studies of cortical activation during image transformation tasks have shown that mental rotation may rely on brain regions similar to those underlying visual perceptual mechanisms. The V5 complex, which is specialised for visual motion, is one region that has been implicated. We used functional magnetic resonance imaging (fMRI) to investigate rotational and linear transformation of stimuli. Areas of significant brain activation were identified for each of the primary mental transformation tasks in contrast to its own perceptual reference task, which was cognitively matched in all respects except for the variable of interest. Analysis of group data for perception of rotational and linear motion showed activation in areas corresponding to V5 as defined in earlier studies. Both rotational and linear mental transformations activated Brodmann Area (BA) 19 but did not activate V5. An area within the inferior temporal gyrus, representing an inferior satellite area of V5, was activated by both the rotational perception and rotational transformation tasks, but showed no activation in response to linear motion perception or transformation. The findings demonstrate the extent to which the neural substrates for image transformation and perception overlap and where they are distinct, as well as revealing functional specialisation within the perception and transformation processing systems.
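The task-versus-reference contrast logic described in this abstract is the standard GLM approach in fMRI analysis; the sketch below illustrates it on synthetic data with a [1, -1] contrast. The regressors, data, and dimensions here are stand-ins for illustration, not the authors' design.

```python
import numpy as np

# Hypothetical illustration of the task-vs-reference contrast described above:
# a GLM is fit per voxel, and a [1, -1] contrast isolates activation specific
# to the mental transformation task relative to its matched perceptual task.
rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 500
task = rng.random(n_scans)       # stand-in for a convolved task regressor
reference = rng.random(n_scans)  # stand-in for the matched reference regressor
X = np.column_stack([task, reference, np.ones(n_scans)])  # design matrix
Y = rng.standard_normal((n_scans, n_voxels))              # stand-in BOLD data

beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)  # per-voxel GLM fit
c = np.array([1.0, -1.0, 0.0])                     # task > reference contrast

resid = Y - X @ beta
dof = n_scans - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
c_var = c @ np.linalg.inv(X.T @ X) @ c
t_map = (c @ beta) / np.sqrt(sigma2 * c_var)       # voxelwise t-statistics
```

Voxels whose t-statistic survives a multiple-comparisons threshold would then form the activation map for that task contrast.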
Abstract:
Emerging evidence of the high variability in the cognitive skills and deficits associated with reading achievement and dysfunction supports both a more dimensional view of the risk factors involved and the importance of discriminating between trajectories of impairment. Here we examined reading and component orthographic and phonological skills alongside measures of cognitive ability and auditory and visual sensory processing in a large group of primary school children between the ages of 7 and 12 years. We identified clusters of children with pseudoword or exception word reading scores at the 10th percentile or below relative to their age group, and a group with poor skills on both tasks. Compared to age-matched and reading-level controls, groups of children with more impaired exception word reading were best described by a trajectory of developmental delay, whereas readers with more impaired pseudoword reading or combined deficits corresponded more with a pattern of atypical development. Sensory processing deficits clustered within both of the groups with putative atypical development: auditory discrimination deficits with poor phonological awareness skills; impairments of visual motion processing in readers with broader and more severe patterns of reading and cognitive impairments. Sensory deficits have been variably associated with developmental impairments of literacy and language; these results suggest that such deficits are also likely to cluster in children with particular patterns of reading difficulty.
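The subgrouping rule described above (scores at or below the 10th percentile relative to age group, separately and in combination) can be sketched as follows; the column names, age bands, and scores are hypothetical, and the study's actual normed measures differ.

```python
import pandas as pd

# Hypothetical data layout; the study's actual variables and norms differ.
df = pd.DataFrame({
    "age_band": ["7-8", "7-8", "9-10", "9-10"],
    "pseudoword": [12, 45, 10, 50],
    "exception_word": [40, 11, 9, 48],
})

def flag_low(scores: pd.Series) -> pd.Series:
    # At or below the 10th percentile relative to the child's age group.
    return scores <= scores.quantile(0.10)

low_pw = df.groupby("age_band")["pseudoword"].transform(flag_low)
low_ew = df.groupby("age_band")["exception_word"].transform(flag_low)

df["group"] = "control"
df.loc[low_pw & ~low_ew, "group"] = "poor_pseudoword"   # phonological profile
df.loc[low_ew & ~low_pw, "group"] = "poor_exception"    # orthographic profile
df.loc[low_pw & low_ew, "group"] = "combined_deficit"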
Abstract:
The mappings from grapheme to phoneme are much less consistent in English than they are for most other languages. Therefore, the differences found between English-speaking dyslexics and controls on sensory measures of temporal processing might be related more to the irregularities of English orthography than to a general deficit affecting reading ability in all languages. However, here we show that poor readers of Norwegian, a language with a relatively regular orthography, are less sensitive than controls to dynamic visual and auditory stimuli. Consistent with results from previous studies of English readers, detection thresholds for visual motion and auditory frequency modulation (FM) were significantly higher in 19 poor readers of Norwegian compared to 22 control readers of the same age. Over two-thirds (68.4%) of the children identified as poor readers were less sensitive than controls to either or both of the visual coherent motion or auditory 2 Hz FM stimuli.
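The headline statistic, the share of poor readers falling outside the control range on either measure, can be illustrated as below; the deficit criterion used here (exceeding the controls' 95th percentile threshold) is an assumption for the sketch, not the paper's stated cutoff, and the threshold values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical detection thresholds (higher = less sensitive).
controls_motion = rng.normal(10, 2, 22)   # visual coherent-motion thresholds
controls_fm = rng.normal(5, 1, 22)        # auditory 2 Hz FM thresholds
poor_motion = rng.normal(14, 3, 19)
poor_fm = rng.normal(7, 2, 19)

# Assumed deficit criterion: exceeding the control group's 95th percentile.
motion_cut = np.percentile(controls_motion, 95)
fm_cut = np.percentile(controls_fm, 95)

deficit = (poor_motion > motion_cut) | (poor_fm > fm_cut)
print(f"{deficit.mean():.1%} of poor readers show either/both sensory deficits")
```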
Abstract:
Although placing reflective markers on pedestrians’ major joints can make pedestrians more conspicuous to drivers at night, it has been suggested that this “biological motion” effect may be reduced when visual clutter is present. We tested whether extraneous points of light affected the ability of 12 younger and 12 older drivers to see pedestrians as they drove on a closed road at night. Pedestrians wore black clothing alone or with retroreflective markings in four different configurations. One pedestrian walked in place and was surrounded by clutter on half of the trials. Another was always surrounded by visual clutter but either walked in place or stood still. Clothing configuration, pedestrian motion, and driver age influenced conspicuity but clutter did not. The results confirm that even in the presence of visual clutter pedestrians wearing biological motion configurations are recognized more often and at greater distances than when they wear a reflective vest.
Abstract:
This paper presents an implementation of an aircraft pose and motion estimator that uses visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV), or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore the applications of the unified theory for central catadioptric cameras to attitude and heading estimation, explaining how the skyline is projected onto the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft height and to measure the UAV's motion. Finally, we present a visual tracking system based on fuzzy controllers working on both a UAV and a camera pan-and-tilt platform. Every approach is tested on the COLIBRI UAV platform, including comparison of the estimated data with the inertial values measured onboard the helicopter platform and validation of the tracking schemes on real flights.
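The appearance-based visual compass mentioned above can be illustrated with a circular cross-correlation between successive unwrapped panoramic images; a pure yaw rotation then appears as a horizontal circular shift. This sketch assumes grayscale panoramas whose columns span 360 degrees, and is not the COLIBRI implementation.

```python
import numpy as np

def visual_compass(prev_pano: np.ndarray, curr_pano: np.ndarray) -> float:
    """Estimate relative yaw between two unwrapped panoramic images.

    Assumes each image is (rows, cols) grayscale with columns spanning
    360 degrees; a pure yaw rotation appears as a horizontal circular shift.
    """
    a = prev_pano.mean(axis=0)  # collapse rows to a 1-D appearance signature
    b = curr_pano.mean(axis=0)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    # Circular cross-correlation via FFT; the peak gives the column shift.
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    shift = int(np.argmax(corr))
    cols = a.size
    if shift > cols // 2:
        shift -= cols  # wrap to a signed shift
    return 360.0 * shift / cols  # degrees of relative heading change
```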
Abstract:
In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator because imagery is typically captured at a very low frame rate due to energy conservation and data storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to the modified VO algorithm. Results are demonstrated in a virtual environment in addition to low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the visual odometry pipeline are presented as a qualitative output of the solution.
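The bi-objective idea, stacking visual residuals with weighted magnetometer heading residuals in a single least-squares problem, can be shown with a toy 2-D analogue; the bearing-only camera model, weights, and synthetic data below are assumptions for illustration, not the paper's full 6-DoF stereo pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2-D analogue of a bi-objective bundle adjustment: each pose is
# (x, y, yaw); "visual" residuals are bearing errors to known landmarks,
# and magnetometer residuals penalise heading drift.
landmarks = np.array([[5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
true_poses = np.array([[0, 0, 0.1], [1, 0, 0.2], [2, 0, 0.3]])

def bearings(pose):
    dx = landmarks[:, 0] - pose[0]
    dy = landmarks[:, 1] - pose[1]
    return np.arctan2(dy, dx) - pose[2]   # landmark bearings in body frame

obs = np.array([bearings(p) for p in true_poses])   # "visual" measurements
mag = true_poses[:, 2] + 0.01 * np.random.default_rng(2).standard_normal(3)

def residuals(x, lam=0.5):                # lam weights the second objective
    poses = x.reshape(-1, 3)
    r_vis = np.ravel([bearings(p) - o for p, o in zip(poses, obs)])
    r_mag = lam * (poses[:, 2] - mag)     # magnetometer heading term
    return np.concatenate([r_vis, r_mag])

x0 = np.zeros(9)                          # deliberately poor initial guess
sol = least_squares(residuals, x0)
```

Without the heading term, yaw drift is only weakly constrained by low-overlap imagery; the second objective anchors it to an absolute reference.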
Abstract:
We employed a novel cueing paradigm to assess whether dynamically versus statically presented facial expressions differentially engaged predictive visual mechanisms. Participants were presented with a cueing stimulus that was either a static depiction of a low-intensity expressed emotion, or a dynamic sequence evolving from a neutral expression to the low-intensity expressed emotion. Following this cue and a backwards mask, participants were presented with a probe face that displayed either the same emotion (congruent) or a different emotion (incongruent) with respect to that displayed by the cue, although expressed at a high intensity. The probe face had either the same identity as, or a different identity from, the cued face. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent with the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.
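A hypothetical reconstruction of the factorial trial structure (cue type x congruency x identity) is sketched below; the factor labels, repetitions per cell, and randomisation scheme are assumptions, since the abstract does not specify them.

```python
import itertools
import random

# Hypothetical reconstruction of the factorial design described above; the
# actual emotions, timings, and counterbalancing are not given in the abstract.
cue_types = ["static_low_intensity", "dynamic_neutral_to_low"]
congruency = ["congruent", "incongruent"]
identity = ["same_identity", "different_identity"]

trials = [
    {"cue": c, "probe_emotion": g, "probe_identity": i}
    for c, g, i in itertools.product(cue_types, congruency, identity)
] * 10                      # assumed number of repetitions per cell
random.shuffle(trials)      # randomised presentation order
```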
Abstract:
This paper deals with constrained image-based visual servoing of circular and conical spiral motion about an unknown object, approximated as a single image point feature. Effective visual control of such trajectories has many applications for small unmanned aerial vehicles, including surveillance and inspection, forced landing (homing), and collision avoidance. A spherical camera model is used to derive a novel visual-predictive controller (VPC) using stability-based design methods for general nonlinear model-predictive control. In particular, a quasi-infinite horizon visual-predictive control scheme is derived. A terminal region, which is used as a constraint in the controller structure, can be used to guide appropriate reference image features for spiral tracking with respect to nominal stability and feasibility. Robustness properties are also discussed with respect to parameter uncertainty and additive noise. A comparison with competing visual-predictive control schemes is made, and some experimental results using a small quadrotor platform are given.
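The quasi-infinite horizon idea, a finite-horizon stage cost plus a terminal penalty with a terminal-region constraint, can be sketched with toy integrator dynamics for the image-feature error; the dynamics, weights, and region radius below are assumptions for illustration, not the derived VPC.

```python
import numpy as np
from scipy.optimize import minimize

# Toy quasi-infinite-horizon MPC in the spirit of the scheme described above:
# the "state" is a 2-D image-feature error with assumed integrator dynamics.
N, dt = 10, 0.1
Q, R, P = np.eye(2), 0.1 * np.eye(2), 5.0 * np.eye(2)
alpha = 0.05                       # terminal region: ||e_N|| <= alpha

def rollout(u_flat, e0):
    u = u_flat.reshape(N, 2)
    e, traj = e0.copy(), []
    for k in range(N):
        e = e + dt * u[k]          # assumed error dynamics: e-dot = u
        traj.append(e.copy())
    return np.array(traj)

def cost(u_flat, e0):
    traj = rollout(u_flat, e0)
    u = u_flat.reshape(N, 2)
    stage = sum(t @ Q @ t + v @ R @ v for t, v in zip(traj, u))
    return stage + traj[-1] @ P @ traj[-1]   # terminal penalty

e0 = np.array([0.4, -0.3])
cons = {"type": "ineq",
        "fun": lambda u: alpha - np.linalg.norm(rollout(u, e0)[-1])}
res = minimize(cost, np.zeros(2 * N), args=(e0,), constraints=cons)
```

The terminal penalty P and region radius alpha are the ingredients that let a finite horizon certify stability as if the horizon were infinite; in the paper they are derived from the nonlinear model, whereas here they are simply asserted.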
Abstract:
How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
Abstract:
How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? Consider, for example, a deer moving behind a bush. Here the partially occluded fragments of motion signals available to an observer must be coherently grouped into the motion of a single object. A 3D FORMOTION model comprises five important functional interactions involving the brain's form and motion systems that address such situations. Because the model's stages are analogous to areas of the primate visual system, we refer to the stages by corresponding anatomical names. In one of these functional interactions, 3D boundary representations, in which figures are separated from their backgrounds, are formed in cortical area V2. These depth-selective V2 boundaries select motion signals at the appropriate depths in MT via V2-to-MT signals. In another, motion signals in MT disambiguate locally incomplete or ambiguous boundary signals in V2 via MT-to-V1-to-V2 feedback. The third functional property concerns resolution of the aperture problem along straight moving contours by propagating the influence of unambiguous motion signals generated at contour terminators or corners. Here, sparse "feature tracking signals" from, e.g., line ends, are amplified to overwhelm numerically superior ambiguous motion signals along line segment interiors. In the fourth, a spatially anisotropic motion grouping process takes place across perceptual space via MT-MST feedback to integrate veridical feature-tracking and ambiguous motion signals to determine a global object motion percept. The fifth property uses the MT-MST feedback loop to convey an attentional priming signal from higher brain areas back to V1 and V2. The model's use of mechanisms such as divisive normalization, end-stopping, cross-orientation inhibition, and long-range cooperation is described. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
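Of the mechanisms listed, divisive normalization has a well-known canonical form; the sketch below shows that general form with illustrative parameters that are not fitted to the FORMOTION model.

```python
import numpy as np

def divisive_normalization(drive: np.ndarray, sigma: float = 0.1,
                           n: float = 2.0) -> np.ndarray:
    """Canonical divisive normalization: each unit's response is its own
    drive raised to an exponent, divided by the pooled drive of the whole
    population plus a semi-saturation constant. Parameter values here are
    illustrative, not taken from the FORMOTION model."""
    num = drive ** n
    return num / (sigma ** n + num.sum())

responses = divisive_normalization(np.array([0.2, 0.8, 0.4]))
```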
Abstract:
Our understanding of how the visual system processes motion transparency, the phenomenon by which multiple directions of motion are perceived to co-exist in the same spatial region, has grown considerably in the past decade. There is compelling evidence that the process is driven by global-motion mechanisms. Consequently, although transparently moving surfaces are readily segmented over an extended space, the visual system cannot separate two motion signals that co-exist in the same local region. A related issue is whether the visual system can detect transparently moving surfaces simultaneously, or whether the component signals encounter a serial 'bottleneck' during their processing. Our initial results show that, at sufficiently short stimulus durations, observers cannot accurately detect two superimposed directions; yet they have no difficulty in detecting one pattern direction in noise, supporting the serial-bottleneck scenario. However, in a second experiment, the difference in performance between the two tasks disappears when the component patterns are segregated. This discrepancy between the processing of transparent and non-overlapping patterns may be a consequence of suppressed activity of global-motion mechanisms when the transparent surfaces are presented in the same depth plane. To test this explanation, we repeated our initial experiment while separating the motion components in depth. The marked improvement in performance leads us to conclude that transparent motion signals are represented simultaneously.
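A transparent-motion stimulus of the kind described, two superimposed dot populations each fully coherent in its own direction, can be generated as follows; the dot count, speed, field size, and directions are assumed values, not the study's parameters.

```python
import numpy as np

# Sketch of a two-direction transparent random-dot stimulus: two superimposed
# dot fields, each coherent in its own direction, sharing one spatial region.
rng = np.random.default_rng(3)
n_dots, speed, field = 100, 0.02, 1.0
dirs = np.deg2rad([45.0, 135.0])              # two transparent directions

pos = rng.random((2, n_dots, 2)) * field      # one dot field per direction

def step(pos):
    vel = speed * np.stack([np.cos(dirs), np.sin(dirs)], axis=1)  # (2, 2)
    return np.mod(pos + vel[:, None, :], field)   # move and wrap each field

frames = []
for _ in range(60):                           # 60 frames of motion
    pos = step(pos)
    frames.append(pos.copy())
```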