989 results for "motion processing"
Abstract:
Despite the close interrelation between vestibular and visual processing (e.g., the vestibulo-ocular reflex), surprisingly little is known about vestibular function in visually impaired people. In this study, we investigated thresholds of passive whole-body motion discrimination (leftward vs. rightward) in nine visually impaired participants and nine age-matched sighted controls. Participants were rotated in yaw, tilted in roll, and translated along the interaural axis at two different frequencies (0.33 and 2 Hz) by means of a motion platform. Superior performance of the visually impaired participants was found in the 0.33 Hz roll tilt condition. No differences were observed in the other motion conditions. Roll tilts stimulate the semicircular canals and otoliths simultaneously. The results could thus reflect a specific improvement in canal–otolith integration in the visually impaired and are consistent with the compensatory hypothesis, which implies that the visually impaired are able to compensate for the absence of visual input.
Abstract:
The goal of our study is to determine accurate time series of geophysical Earth rotation excitations in order to learn more about global dynamic processes in the Earth system. For this purpose, we developed an adjustment model that combines precise observations from space geodetic observation systems, such as Satellite Laser Ranging (SLR), Global Navigation Satellite Systems (GNSS), Very Long Baseline Interferometry (VLBI), Doppler Orbit determination and Radiopositioning Integrated on Satellite (DORIS), satellite altimetry, and satellite gravimetry, in order to separate the geophysical excitation mechanisms of Earth rotation. Three polar motion time series are applied to derive the polar motion excitation functions (integral effect). Furthermore, we use five time-variable gravity field solutions from the Gravity Recovery and Climate Experiment (GRACE) to determine not only the integral mass effect but also the oceanic and hydrological mass effects, by applying suitable filter techniques and a land-ocean mask. For comparison, the integral mass effect is also derived from degree-2 potential coefficients estimated from SLR observations. The oceanic mass effect is additionally determined from sea level anomalies observed by satellite altimetry, after subtracting the steric sea level anomalies derived from temperature and salinity fields of the oceans. By combining all geodetically estimated excitations, the weaknesses of the individual processing strategies can be reduced and the technique-specific strengths can be exploited. The formal errors of the adjusted geodetic solutions are smaller than the RMS differences of the geophysical model solutions. The improved excitation time series can be used to refine geophysical modeling.
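The combination idea described in this abstract can be illustrated with a minimal sketch (not the authors' actual adjustment model, and with entirely hypothetical values): several excitation time series are merged by inverse-variance weighting, so that series with smaller formal errors contribute more, and the adjusted solution has a smaller variance than any single input.

```python
# Illustrative sketch of combining excitation time series by
# inverse-variance weighting; all numbers below are hypothetical.

def combine_series(series, variances):
    """Combine parallel time series (equal-length lists) using
    inverse-variance weights; returns the adjusted series and its variance."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    n = len(series[0])
    combined = [
        sum(w * s[t] for w, s in zip(weights, series)) / wsum
        for t in range(n)
    ]
    return combined, 1.0 / wsum  # variance of the weighted mean

# Hypothetical polar motion excitation values from three techniques:
slr  = [10.2, 10.8, 11.1]
gnss = [10.0, 10.9, 11.3]
vlbi = [10.4, 10.7, 11.0]
adjusted, var = combine_series([slr, gnss, vlbi], [0.4, 0.1, 0.2])
# var is smaller than the variance of any individual series.
```

The weighted mean is the simplest case of the least-squares adjustment the abstract alludes to: each technique's weakness (large formal error) automatically downweights its contribution.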
Abstract:
In this paper, a novel approach for obtaining 3D models from video sequences captured with hand-held cameras is presented. We define a pipeline that robustly deals with different types of sequences and acquisition devices. Our system follows a divide-and-conquer approach: after a frame decimation that pre-conditions the input sequence, the video is split into short-length clips. This allows the reconstruction step to be parallelized, which translates into a reduction in the amount of computational resources required. The short length of the clips permits an intensive search for the best solution at each step of reconstruction, which makes the system more robust. The process of feature tracking is embedded within the reconstruction loop for each clip, as opposed to other approaches. A final registration step merges all the processed clips into the same coordinate frame.
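The divide-and-conquer stage can be sketched as follows (an assumed interface for illustration, not the paper's implementation): frames are decimated, then split into short overlapping clips that can be reconstructed in parallel; the overlap gives the final registration step shared frames with which to merge the clip reconstructions.

```python
# Illustrative sketch of frame decimation and clip splitting;
# clip_len, overlap and keep_every are hypothetical parameters.

def decimate(frames, keep_every=2):
    """Frame decimation: keep every n-th frame to pre-condition the sequence."""
    return frames[::keep_every]

def split_into_clips(frames, clip_len=30, overlap=5):
    """Split a frame list into short clips; consecutive clips share
    `overlap` frames, later used to register them into one coordinate frame."""
    clips, start, step = [], 0, clip_len - overlap
    while start < len(frames):
        clips.append(frames[start:start + clip_len])
        if start + clip_len >= len(frames):
            break
        start += step
    return clips

frames = list(range(100))  # stand-in for 100 video frames
clips = split_into_clips(decimate(frames), clip_len=20, overlap=4)
# Each clip can now be handed to an independent reconstruction worker.
```

Because each clip is reconstructed independently, the workers can run in parallel, which is the source of the resource reduction the abstract mentions.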
Abstract:
To research and document relevant sloshing-type phenomena, a series of experiments has been conducted. The aim of this paper is to describe the setup and data processing of such experiments. A sloshing tank is subjected to angular motion. As a result, pressure records are obtained at several locations, together with the motion data, torque, and a collection of image and video information. The experimental rig and the data acquisition systems are described. Useful information for experimental sloshing research practitioners is provided. This information concerns the liquids used in the experiments, the dyeing techniques, tank building processes, synchronization of the acquisition systems, etc. A new procedure for reconstructing experimental data that takes experimental uncertainties into account is presented. This procedure is based on a least-squares spline approximation of the data. Based on a deterministic approach to the first sloshing wave impact event in a sloshing experiment, an uncertainty analysis procedure for the associated first pressure peak value is described.
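The reconstruction step rests on least-squares approximation of noisy sensor data. A toy version is sketched below; for brevity it fits an ordinary quadratic polynomial by solving the normal equations, whereas the paper uses a spline basis, and the sample values are invented.

```python
# Toy least-squares reconstruction of noisy samples (hypothetical data).
# The paper uses splines; here a quadratic polynomial basis keeps the
# normal-equation solve small enough to do by hand.

def lstsq_poly(xs, ys, degree=2):
    """Fit ys ~ sum(c[k] * x**k) in the least-squares sense."""
    n = degree + 1
    # Normal equations A c = b: A[i][j] = sum(x**(i+j)), b[i] = sum(y*x**i)
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [x * x + 0.01 * ((-1) ** i) for i, x in enumerate(xs)]  # noisy x^2
coeffs = lstsq_poly(xs, ys)  # coeffs[2] should be close to 1
```

In practice one would use a spline least-squares fitter from a numerical library; the point here is only the structure of the fit, in which the smooth approximant absorbs the experimental noise.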
Abstract:
In this paper, we consider a scenario where 3D scenes are modeled through a View+Depth representation. This representation is used at the rendering side to generate synthetic views for free-viewpoint video. The encoding of both types of data (view and depth) is carried out using two H.264/AVC encoders. In this scenario, we address the reduction of the encoding complexity of the depth data. First, an analysis of the Mode Decision and Motion Estimation processes was conducted for both view and depth sequences, in order to capture the correlation between them. Taking advantage of this correlation, we propose a fast mode decision and motion estimation algorithm for depth encoding. Results show that the proposed algorithm reduces the computational burden with a negligible loss in the quality of the rendered synthetic views. Quality measurements were conducted using the Video Quality Metric.
Abstract:
When human subjects discriminate the motion directions of two visual stimuli, their discrimination improves with practice. This improved performance has been found to be specific to the practiced directions and does not transfer to new motion directions. Indeed, such stimulus-specific learning has become a trademark finding in almost all perceptual learning studies and has been used to infer the loci of learning in the brain. For example, learning in motion discrimination has been inferred to occur in visual area MT (medial temporal cortex) of primates, where neurons are selectively tuned to motion directions. However, such a motion discrimination task is extremely difficult, as is typical of most perceptual learning tasks. When the difficulty is moderately reduced, learning transfers to new motion directions. This result challenges the idea of using simple visual stimuli to infer the locus of learning in low-level visual processes and suggests that higher-level processing is essential even in “simple” perceptual learning tasks.
Abstract:
The visual responses of neurons in the cerebral cortex were first adequately characterized in the 1960s by D. H. Hubel and T. N. Wiesel [(1962) J. Physiol. (London) 160, 106-154; (1968) J. Physiol. (London) 195, 215-243] using qualitative analyses based on simple geometric visual targets. Over the past 30 years, it has become common to consider the properties of these neurons by attempting to make formal descriptions of the transformations they execute on the visual image. Most such models have their roots in linear-systems approaches pioneered in the retina by C. Enroth-Cugell and J. R. Robson [(1966) J. Physiol. (London) 187, 517-552], but it is clear that purely linear models of cortical neurons are inadequate. We present two related models: one designed to account for the responses of simple cells in primary visual cortex (V1), and one designed to account for the responses of pattern direction selective cells in MT (or V5), an extrastriate visual area thought to be involved in the analysis of visual motion. These models share a common structure that operates in the same way on different kinds of input, and they instantiate the widely held view that computational strategies are similar throughout the cerebral cortex. Implementations of these models for Macintosh microcomputers are available and can be used to explore the models' properties.
Abstract:
The primate visual system offers unprecedented opportunities for investigating the neural basis of cognition. Even the simplest visual discrimination task requires processing of sensory signals, formation of a decision, and orchestration of a motor response. With our extensive knowledge of the primate visual and oculomotor systems as a base, it is now possible to investigate the neural basis of simple visual decisions that link sensation to action. Here we describe an initial study of neural responses in the lateral intraparietal area (LIP) of the cerebral cortex while alert monkeys discriminated the direction of motion in a visual display. A subset of LIP neurons carried high-level signals that may comprise a neural correlate of the decision process in our task. These signals are neither sensory nor motor in the strictest sense; rather they appear to reflect integration of sensory signals toward a decision appropriate for guiding movement. If this ultimately proves to be the case, several fascinating issues in cognitive neuroscience will be brought under rigorous physiological scrutiny.
Abstract:
Early visual processing analyses fine and coarse image features separately. Here we show that motion signals derived from the fine and coarse analyses are combined in a rather surprising way: coarse and fine motion sensors representing the same direction of motion inhibit one another, and an imbalance can reverse the motion perceived. Observers judged the direction of motion of patches of filtered two-dimensional noise, centered on 1 and 3 cycles/deg. When both sets of noise were present and only the 3 cycles/deg noise moved, judgments were reversed at short durations. When both sets of noise moved, judgments were correct but sensitivity was impaired. Reversals and impairments occurred both with isotropic noise and with orientation-filtered noise. The reversals and impairments could be simulated in a model of motion sensing by adding a stage in which the outputs of motion sensors tuned to 1 and 3 cycles/deg and to the same direction of motion were subtracted from one another. The subtraction model predicted, and we confirmed in experiments with orientation-filtered noise, that if the 1 cycle/deg noise flickered and the 3 cycles/deg noise moved, the 1 cycle/deg noise appeared to move in the direction opposite to the 3 cycles/deg noise, even at long durations.
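The subtraction stage described above can be reduced to a two-line toy (illustrative responses and weight, not the authors' fitted model): signed responses of coarse (1 c/deg) and fine (3 c/deg) sensors tuned to the same direction are subtracted from one another, and the sign of each channel's output indicates its perceived direction.

```python
# Toy opponent subtraction between coarse and fine motion channels.
# Response convention: positive = rightward, negative = leftward.
# The inhibition weight w = 0.6 is a hypothetical value.

def opponent_outputs(coarse_resp, fine_resp, w=0.6):
    """Each channel's output is its own response minus a weighted
    copy of the other scale's same-direction response."""
    return coarse_resp - w * fine_resp, fine_resp - w * coarse_resp

# Only the fine (3 c/deg) noise moves rightward; the coarse noise is static:
coarse_out, fine_out = opponent_outputs(coarse_resp=0.0, fine_resp=1.0)
# coarse_out comes out negative: the static coarse noise is signalled as
# moving leftward, mirroring the reversed judgments at short durations.
```

The same mechanism predicts the impairment when both scales move in the same direction: each channel's output is attenuated by the other's subtraction, lowering sensitivity without reversing direction.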
Abstract:
This work describes a neural-network-based architecture that represents and estimates object motion in videos. The architecture addresses multiple computer vision tasks, such as image segmentation, object representation and characterization, motion analysis, and tracking. The use of a neural network architecture allows for the simultaneous estimation of global and local motion and for the representation of deformable objects. It also avoids the problem of finding corresponding features while tracking moving objects. Owing to the parallel nature of neural networks, the architecture has been implemented on GPUs, which allows the system to meet a set of requirements such as time-constraint management, robustness, high processing speed, and re-configurability. Experiments are presented that demonstrate the validity of our architecture for solving problems of mobile agent tracking and motion analysis.
Abstract:
Perceptual accuracy is known to be influenced by stimulus location within the visual field. In particular, it seems to be enhanced in the lower visual hemifield (VH) for motion and space processing, and in the upper VH for object and face processing. The origins of such asymmetries have been attributed to attentional biases across the visual field and to the functional organization of the visual system. In this article, we tested content-dependent perceptual asymmetries in different regions of the visual field. Twenty-five healthy volunteers participated in this study. They performed three visual tests involving the perception of shapes, orientation, and motion in the four quadrants of the visual field. The results of the visual tests showed that perceptual accuracy was better in the lower than in the upper visual field for motion perception, and better in the upper than in the lower visual field for shape perception. Orientation perception did not show any vertical bias. No difference was found when comparing the right and left VHs. The functional organization of the visual system thus seems to indicate that the dorsal and ventral visual streams, responsible for motion and shape perception, respectively, show a bias for the lower and upper VHs, respectively. Such a bias depends on the content of the visual information.
Abstract:
Stirred mills are becoming increasingly used for fine and ultra-fine grinding. This technology is still poorly understood when used in the mineral processing context. This makes process optimisation of such devices problematic. 3D DEM simulations of the flow of grinding media in pilot scale tower mills and pin mills are carried out in order to investigate the relative performance of these stirred mills. Media flow patterns and energy absorption rates and distributions are analysed here. In the second part of this paper, coherent flow structures, equipment wear and mixing and transport efficiency are analysed. (C) 2006 Published by Elsevier Ltd.
Abstract:
Developmental learning disabilities such as dyslexia and dyscalculia have a high rate of co-occurrence in pediatric populations, suggesting that they share underlying cognitive and neurophysiological mechanisms. Dyslexia and other developmental disorders with a strong heritable component have been associated with reduced sensitivity to coherent motion stimuli, an index of visual temporal processing on a millisecond time-scale. Here we examined whether deficits in sensitivity to visual motion are evident in children who have poor mathematics skills relative to other children of the same age. We obtained psychophysical thresholds for visual coherent motion and a control task from two groups of children who differed in their performance on a test of mathematics achievement. Children with math skills in the lowest 10% of their cohort were less sensitive than age-matched controls to coherent motion, but they had statistically equivalent thresholds to controls on a coherent form control measure. Children with mathematics difficulties therefore tend to present a pattern of visual processing deficit similar to those reported previously in other developmental disorders. We speculate that reduced sensitivity to temporally defined stimuli such as coherent motion represents a common processing deficit apparent across a range of commonly co-occurring developmental disorders.
Abstract:
How does nearby motion affect the perceived speed of a target region? When a central drifting Gabor patch is surrounded by translating noise, its speed can be misperceived over a fourfold range. Typically, when the surround moves in the same direction, perceived centre speed is reduced; for opposite-direction surrounds it increases. Measuring this illusion for a variety of surround properties reveals that the motion context effects are a saturating function of surround speed (Experiment I) and contrast (Experiment II). Our analyses indicate that the effects are consistent with a subtractive process, rather than with speed being averaged over area. In Experiment III we exploit known properties of the motion system to ask where in processing these surround effects act. Using 2D plaid stimuli, we find that surround-induced shifts in the perceived speed of one plaid component produce substantial shifts in perceived plaid direction. This indicates that surrounds exert their influence early in processing, before pattern motion direction is computed. These findings relate to ongoing investigations of surround suppression for direction discrimination, and are consistent with single-cell findings of direction-tuned suppressive and facilitatory interactions in primary visual cortex (V1).
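A subtractive surround with a saturating nonlinearity, as described above, can be sketched in a few lines (gain, slope, and speed values are all hypothetical, chosen only to show the qualitative pattern): a same-direction surround lowers perceived centre speed, an opposite-direction surround raises it, and the effect saturates with surround speed.

```python
import math

# Illustrative subtractive-surround model of perceived centre speed.
# Positive speeds share the centre's direction; gain and k are
# hypothetical parameters.

def perceived_centre_speed(centre_speed, surround_speed, gain=2.0, k=1.0):
    """The surround's signed speed, passed through a saturating tanh,
    is subtracted from the centre's speed estimate."""
    return centre_speed - gain * math.tanh(k * surround_speed)

same = perceived_centre_speed(4.0, 3.0)      # same-direction surround
opposite = perceived_centre_speed(4.0, -3.0)  # opposite-direction surround
# same < 4.0 < opposite, and doubling the surround speed barely changes
# the effect once tanh has saturated.
```

Replacing the subtraction with an area average would instead pull the centre estimate toward the surround speed in both cases, which is the alternative the analyses in the abstract rule out.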
Abstract:
Background - When a moving stimulus and a briefly flashed static stimulus are physically aligned in space, the static stimulus is perceived as lagging behind the moving stimulus. This widely replicated phenomenon is known as the Flash-Lag Effect (FLE). For the first time, we employed biological motion as the moving stimulus, which is important for two reasons. First, biological motion is processed by visual as well as somatosensory brain areas, which makes it a prime candidate for elucidating the interplay between the two systems with respect to the FLE. Second, discussions about the mechanisms of the FLE tend to resort to evolutionary arguments, while most studies employ highly artificial stimuli with constant velocities. Methodology/Principal Findings - Since biological motion is ecologically valid, it follows complex patterns with changing velocity. We therefore compared biological to symbolic motion with the same acceleration profile. Our results with 16 observers revealed a qualitatively different pattern for biological compared to symbolic motion, and this pattern was predicted by the characteristics of motor resonance: the amount of anticipatory processing of perceived actions, based on the induced perspective and agency, modulated the FLE. Conclusions/Significance - Our study provides the first evidence for an FLE with non-linear motion in general and with biological motion in particular. Our results suggest that predictive coding within the sensorimotor system alone cannot explain the FLE. Our findings are compatible with visual prediction (Nijhawan, 2008), which assumes that extrapolated motion representations within the visual system generate the FLE. These representations are modulated by sudden visual input (e.g. offset signals) or by input from other systems (e.g. sensorimotor) that can boost or attenuate overshooting representations, in accordance with biased neural competition (Desimone & Duncan, 1995).