121 results for motion washout filter
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
In human motion analysis, the joint estimation of appearance, body pose and location parameters is not always tractable due to its huge computational cost. In this paper, we propose a Rao-Blackwellized Particle Filter for addressing the problem of human pose estimation and tracking. The advantage of the proposed approach is that Rao-Blackwellization allows the state variables to be split into two sets, one of which is computed analytically from the posterior probability of the remaining ones. This procedure reduces the dimensionality of the Particle Filter, thus requiring fewer particles to achieve a similar tracking performance. In this manner, location and size over the image are obtained stochastically using colour and motion cues, whereas body pose is solved analytically by applying learned human Point Distribution Models.
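The Rao-Blackwellization described above can be illustrated with a minimal sketch: one state block (location) is tracked with particles, while the other block (pose) is obtained in closed form from the particle estimate instead of being sampled. The dynamics, the Gaussian likelihood and the linear pose map below are toy assumptions for illustration, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def rao_blackwellized_pf(observations, n_particles=200):
    # Sampled block: location particles. Analytic block: pose, assumed here
    # to be a closed-form (linear) function of location, so it needs no
    # particles of its own -- that is the Rao-Blackwellized part.
    particles = rng.normal(0.0, 1.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    for y in observations:
        particles = particles + rng.normal(0.0, 0.1, n_particles)  # propagate
        weights = weights * np.exp(-0.5 * (y - particles) ** 2)    # likelihood
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    loc = float(np.sum(weights * particles))   # stochastic estimate
    pose = 0.5 * loc                           # analytic (toy) pose map
    return loc, pose
```

Because the pose block is integrated out analytically, the particle cloud only has to cover the location dimensions, which is why fewer particles suffice for the same tracking performance.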
Abstract:
This paper presents an Invariant Information Local Sub-map Filter (IILSF) as a technique for consistent Simultaneous Localisation and Mapping (SLAM) in a large environment. It harnesses the benefits of the sub-map technique to improve the consistency and efficiency of Extended Kalman Filter (EKF) based SLAM. The IILSF makes use of invariant information obtained from estimated locations of features in independent sub-maps, instead of incorporating every observation directly into the global map; the global map is then updated at regular intervals. Applying this technique to the EKF-based SLAM algorithm: (a) reduces the computational complexity of maintaining the global map estimates and (b) simplifies the transformation complexities and data association ambiguities usually experienced in fusing sub-maps together. Simulation results show that the method was able to accurately fuse local map observations into an efficient and consistent global map, in addition to significantly reducing computational cost and data association ambiguities.
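The interval-based fusion idea can be sketched as follows: feature observations accumulate in an independent local sub-map, and only at regular intervals are the locally estimated feature positions folded into the global map. This is an illustrative simplification (simple averaging in place of the IILSF's invariant-information machinery), with hypothetical class and method names.

```python
import numpy as np

class SubMapFuser:
    """Toy sketch of sub-map SLAM fusion at regular intervals."""

    def __init__(self, fuse_every=10):
        self.fuse_every = fuse_every
        self.local = {}        # feature id -> local observations (sub-map)
        self.global_map = {}   # feature id -> fused global estimate
        self.count = 0

    def observe(self, feature_id, position):
        # Observations go into the local sub-map, not the global map.
        self.local.setdefault(feature_id, []).append(np.asarray(position, float))
        self.count += 1
        if self.count % self.fuse_every == 0:   # periodic global update
            self.fuse()

    def fuse(self):
        # Fold locally estimated feature positions (here, simple means)
        # into the global map, then start a fresh sub-map.
        for fid, obs in self.local.items():
            est = np.mean(obs, axis=0)
            if fid in self.global_map:
                self.global_map[fid] = 0.5 * (self.global_map[fid] + est)
            else:
                self.global_map[fid] = est
        self.local.clear()
```

Deferring global updates in this way is what caps the cost of maintaining the global map: the expensive step runs once per interval rather than once per observation.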
Abstract:
Object tracking is an active research area due to its importance in human-computer interfaces, teleconferencing and video surveillance. However, reliable tracking of objects in the presence of occlusions, pose and illumination changes is still a challenging topic. In this paper, we introduce a novel tracking approach that fuses two cues, namely colour and spatio-temporal motion energy, within a particle filter based framework. We compute a measure of coherent motion over two image frames, which reveals the spatio-temporal dynamics of the target. At the same time, the importance of the colour and motion energy cues is determined in a reliability evaluation stage; this helps maintain the performance of the tracking system under abrupt appearance changes. Experimental results demonstrate that the proposed method outperforms other state-of-the-art techniques on the test datasets used.
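The cue-fusion step can be sketched with a simple reliability-weighted combination: each particle's weight is a convex combination of its colour score and its motion-energy score, with the mixing weights taken from the reliability evaluation. The fusion rule and names below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def fused_likelihood(colour_score, motion_score,
                     colour_reliability, motion_reliability):
    """Fuse two cue scores for one particle, weighting each cue by its
    current estimated reliability (normalised to sum to one)."""
    r = np.array([colour_reliability, motion_reliability], float)
    w = r / r.sum()
    return w[0] * colour_score + w[1] * motion_score
```

When the target's appearance changes abruptly, the colour cue's reliability drops and the motion-energy term dominates the particle weights, which is the mechanism the abstract credits for robustness.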
Abstract:
A rich-model motion vector steganalysis benefiting from both temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method achieves substantially higher detection accuracy than previous methods, even the targeted ones. The improvement in detection accuracy stems from several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation, not only spatially but also temporally, among neighbouring motion vectors over longer distances. Therefore, temporal motion vector dependency, alongside the spatial dependency, is utilized for rigorous motion vector steganalysis. Secondly, unlike the filters previously used, which were heuristically designed against a specific motion vector steganography, a diverse set of filters that can capture aberrations introduced by various motion vector steganography methods is used. Both the variety and the number of filter kernels substantially exceed those used in previous work. In addition, filters up to fifth order are employed, whereas previous methods use at most second-order filters. As a result, the proposed system captures various decorrelations over a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best knowledge of the authors, the experiments section contains the most comprehensive tests in the motion vector steganalysis field, including five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% increase in detection accuracy at low payloads and 5% at higher payloads.
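The core residual computation can be sketched with simple difference filters: an nth-order difference applied along the temporal axis and along the two spatial axes of a motion-vector field. This is a simplified stand-in for the paper's diverse kernel bank, but it shows how higher-order (up to fifth-order) filters expose decorrelations in a spatio-temporal range; all names here are illustrative.

```python
import numpy as np

def mv_residuals(mv_field, max_order=5):
    """Compute difference residuals of one motion-vector component.

    mv_field: array of shape (frames, height, width).
    Returns a dict of residual arrays keyed by (axis_name, order).
    """
    out = {}
    for order in range(1, max_order + 1):
        # Temporal dependency: differences across consecutive frames.
        out[("temporal", order)] = np.diff(mv_field, n=order, axis=0)
        # Spatial dependency: differences across neighbouring blocks.
        out[("vertical", order)] = np.diff(mv_field, n=order, axis=1)
        out[("horizontal", order)] = np.diff(mv_field, n=order, axis=2)
    return out
```

In a rich-model pipeline these residuals would then be quantised and summarised into co-occurrence features for a classifier; embedding perturbs the residual statistics more than the raw motion vectors, which is what makes the detector sensitive.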
Abstract:
This letter presents novel behaviour-based tracking of people at low resolution using instantaneous priors mediated by head pose. We extend the Kalman Filter to adaptively combine motion information with an instantaneous prior belief about where the person will go based on where they are currently looking. We apply this new method to pedestrian surveillance, using automatically derived head-pose estimates, although the theory is not limited to head-pose priors. We perform a statistical analysis of pedestrian gazing behaviour and demonstrate tracking performance on a set of simulated and real pedestrian observations. We show that by using instantaneous 'intentional' priors our algorithm significantly outperforms a standard Kalman Filter on comprehensive test data.
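One simple way to realise such an instantaneous prior is to blend the Kalman-predicted velocity with a velocity implied by the current gaze direction during the prediction step. The blending rule and all parameter names below are illustrative assumptions, not the letter's actual formulation.

```python
import numpy as np

def predict_with_gaze_prior(x, P, F, Q, gaze_dir, prior_speed, alpha):
    """Kalman prediction step with an instantaneous head-pose prior.

    x: state [px, py, vx, vy]; F, Q: transition and process-noise matrices.
    gaze_dir: current 2-D gaze direction; alpha in [0, 1] weights the prior.
    """
    # Standard Kalman prediction.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Instantaneous prior: people tend to walk where they are looking.
    g = np.asarray(gaze_dir, float)
    v_prior = prior_speed * g / np.linalg.norm(g)
    # Blend predicted velocity with the gaze-implied velocity.
    x_pred[2:] = (1.0 - alpha) * x_pred[2:] + alpha * v_prior
    return x_pred, P_pred
```

The update step is unchanged, so the prior only nudges the motion model; setting alpha = 0 recovers the standard Kalman Filter, which matches the comparison baseline in the abstract.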
Abstract:
We have examined the ability of observers to parse bimodal local-motion distributions into two global motion surfaces, either overlapping (yielding transparent motion) or spatially segregated (yielding a motion boundary). The stimuli were random dot kinematograms in which the direction of motion of each dot was drawn from one of two rectangular probability distributions. A wide range of direction distribution widths and separations was tested. The ability to discriminate the direction of motion of one of the two motion surfaces from the direction of a comparison stimulus was used as an objective test of the perception of two discrete surfaces. Performance for both transparent and spatially segregated motion was remarkably good, being only slightly inferior to that achieved with a single global motion surface. Performance was consistently better for segregated motion than for transparency. Whereas transparent motion was only perceived with direction distributions which were separated by a significant gap, segregated motion could be seen with abutting or even partially overlapping direction distributions. For transparency, the critical gap increased with the range of directions in the distribution. This result does not support models in which transparency depends on detection of a minimum size of gap defining a bimodal direction distribution. We suggest, instead, that the operations which detect bimodality are scaled (in the direction domain) with the overall range of distributions. This yields a flexible, adaptive system that determines whether a gap in the direction distribution serves as a segmentation cue or is smoothed as part of a unitary computation of global motion.
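The stimulus construction described above (per-dot directions drawn from two rectangular distributions of adjustable width and separation) can be sketched in a few lines. The function and parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def bimodal_directions(n_dots, centre1, centre2, width):
    """Draw per-dot motion directions (in degrees) from two rectangular
    (uniform) distributions, one per motion surface; the gap between the
    distributions is set by the separation of the two centres."""
    half = n_dots // 2
    d1 = rng.uniform(centre1 - width / 2, centre1 + width / 2, half)
    d2 = rng.uniform(centre2 - width / 2, centre2 + width / 2, n_dots - half)
    return np.concatenate([d1, d2]) % 360.0
```

Assigning the two halves to overlapping positions yields the transparent condition, while assigning them to separate regions of the display yields the spatially segregated condition.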
Abstract:
The mechanisms underlying the parsing of a spatial distribution of velocity vectors into two adjacent (spatially segregated) or overlapping (transparent) motion surfaces were examined using random dot kinematograms. Parsing might occur using either of two principles. Surfaces might be defined on the basis of similarity of motion vectors and then sharp perceptual boundaries drawn between different surfaces (continuity-based segmentation). Alternatively, detection of a high gradient of direction or speed separating the motion surfaces might drive the process (discontinuity-based segmentation). To establish which method is used, we examined the effect of blurring the motion direction gradient. In the case of a sharp direction gradient, each dot had one of two directions differing by 135°. With a shallow gradient, most dots had one of two directions but the directions of the remainder spanned the range between one motion-defined surface and the other. In the spatial segregation case the gradient defined a central boundary separating two regions. In the transparent version the dots were randomly positioned. In both cases all dots moved with the same speed and existed for only two frames before being randomly replaced. The ability of observers to parse the motion distribution was measured in terms of their ability to discriminate the direction of one of the two surfaces. Performance was hardly affected by spreading the gradient over at least 25% of the dots (corresponding to a 1° strip in the segregation case). We conclude that detection of sharp velocity gradients is not necessary for distinguishing different motion surfaces.
Abstract:
Motion transparency provides a challenging test case for our understanding of how visual motion, and other attributes, are computed and represented in the brain. However, previous studies of visual transparency have used subjective criteria which do not confirm the existence of independent representations of the superimposed motions. We have developed measures of performance in motion transparency that require observers to extract information about two motions jointly, and therefore test the information that is simultaneously represented for each motion. Observers judged whether two motions were at 90° to one another; the base direction was randomized so that neither motion taken alone was informative. The precision of performance was determined by the standard deviations (S.D.s) of probit functions fitted to the data. Observers also made judgments of orthogonal directions between a single motion stream and a line, for one of two transparent motions against a line and for two spatially segregated motions. The data show that direction judgments with transparency can be made with comparable accuracy to segregated (non-transparent) conditions, supporting the idea that transparency involves the equivalent representation of two global motions in the same region. The precision of this joint direction judgment is, however, 2–3 times poorer than that for a single motion stream. The precision in directional judgment for a single stream is reduced only by a factor of about 1.5 by superimposing a second stream. The major effect in performance, therefore, appears to be associated with the need to compute and compare two global representations of motion, rather than with interference between the dot streams per se. Experiment 2 tested the transparency of motions separated by a range of angles from 5° to 180° by requiring subjects to set a line matching the perceived direction of each motion. The S.D.s of these settings demonstrated that directions of transparent motions were represented independently for separations over 20°. Increasing dot speeds from 1 to 10 deg/s improved directional performance but had no effect on transparency perception. Transparency was also unaffected by variations of density between 0.1 and 19 dots/deg².
Abstract:
The importance of relative motion information when modelling a novel motor skill was examined. Participants were assigned to one of four groups. Groups 1 and 2 viewed demonstrations of a skilled cricket bowler presented in either video or point-light format. Group 3 observed a single point of light pertaining to the wrist of the skilled bowler only. Participants in Group 4 did not receive a demonstration and acted as controls. During 60 acquisition trials, participants in the demonstration groups viewed a model five times before each 10-trial block. Retention was examined the following day. Intra-limb coordination was assessed for the right elbow relative to the wrist in comparison to the model. The demonstration groups showed greater concordance with the model than the control group. However, the wrist group performed less like the model than the point-light and video groups, who did not differ from each other. These effects were maintained in retention. Relative motion information aided the acquisition of intra-limb coordination, while making this information more salient (through point lights) provided no additional benefit. The motion of the model's bowling arm was replicated more closely than the non-bowling arm, suggesting that information from the end-effector is prioritized during observation for later reproduction.
Abstract:
Using a speed-matching task, we measured the speed tuning of the dynamic motion aftereffect (MAE). The results of our first experiment, in which we co-varied dot speed in the adaptation and test stimuli, revealed a speed tuning function. We sought to tease apart what contribution, if any, the test stimulus makes towards the observed speed tuning. This was examined by independently manipulating dot speed in the adaptation and test stimuli, and measuring the effect this had on the perceived speed of the dynamic MAE. The results revealed that the speed tuning of the dynamic MAE is determined, not by the speed of the adaptation stimulus, but by the local motion characteristics of the dynamic test stimulus. The role of the test stimulus in determining the perceived speed of the dynamic MAE was confirmed by showing that, if one uses a test stimulus containing two sources of local speed information, observers report seeing a transparent MAE; this is despite the fact that adaptation is induced using a single-speed stimulus. Thus while the adaptation stimulus necessarily determines the perceived direction of the dynamic MAE, its perceived speed is determined by the test stimulus. This dissociation of speed and direction supports the notion that the processing of these two visual attributes may be partially independent.
Abstract:
The processing of motion information by the visual system can be decomposed into two general stages: point-by-point local motion extraction, followed by global motion extraction through the pooling of the local motion signals. The direction aftereffect (DAE) is a well known phenomenon in which prior adaptation to a unidirectional moving pattern results in an exaggerated perceived direction difference between the adapted direction and a subsequently viewed stimulus moving in a different direction. The experiments in this paper sought to identify where the adaptation underlying the DAE occurs within the motion processing hierarchy. We found that the DAE exhibits interocular transfer, thus demonstrating that the underlying adapted neural mechanisms are binocularly driven and must, therefore, reside in the visual cortex. The remaining experiments measured the speed tuning of the DAE, and used the derived function to test a number of local and global models of the phenomenon. Our data provide compelling evidence that the DAE is driven by the adaptation of motion-sensitive neurons at the local-processing stage of motion encoding. This is in contrast to earlier research showing that direction repulsion, which can be viewed as a simultaneous-presentation counterpart to the DAE, is a global motion process. This leads us to conclude that the DAE and direction repulsion reflect interactions between motion-sensitive neural mechanisms at different levels of the motion-processing hierarchy.
Abstract:
The design of a low loss quasi-optical beam splitter which is required to provide efficient diplexing of the bands 316.5-325.5 GHz and 349.5-358.5 GHz is presented. To minimise the filter insertion loss, the chosen architecture is a three-layer freestanding array of dipole slot elements. Floquet modal analysis and finite element method computer models are used to establish the geometry of the periodic structure and to predict its spectral response. Two different micromachining approaches have been employed to fabricate close packed arrays of 460 mm long elements in the screens that form the basic building block of the 30 mm diameter multilayer frequency selective surface. Comparisons between simulated and measured transmission coefficients for the individual dichroic surfaces are used to determine the accuracy of the computer models and to confirm the suitability of the fabrication methods.