142 results for Attitude Motion
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
A major weakness of the loading models proposed in recent years for pedestrians walking on flexible structures is their reliance on various uncorroborated assumptions, concerning both the spatio-temporal characteristics of pedestrian loading and the nature of multi-object interactions. To alleviate this problem, a framework for the determination of localised pedestrian forces on full-scale structures is presented using a wireless attitude and heading reference system (AHRS). An AHRS comprises a triad of tri-axial accelerometers, gyroscopes and magnetometers managed by a dedicated data processing unit, allowing motion in three-dimensional space to be reconstructed. A pedestrian loading model based on a single point inertial measurement from an AHRS is derived and shown to perform well against benchmark data collected on an instrumented treadmill. Unlike other models, the current model does not take any predefined form, nor does it require any extrapolation as to the timing and amplitude of pedestrian loading. In order to assess correctly the influence of the moving pedestrian on the behaviour of a structure, an algorithm for tracking the point of application of pedestrian force is developed based on data from a single AHRS attached to a foot. A set of controlled walking tests with a single pedestrian is conducted on a real footbridge for validation purposes. A remarkably good match between the measured and simulated bridge response is found, confirming the applicability of the proposed framework.
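At its core, such a single-point loading model applies Newton's second law to orientation-corrected accelerometer data. The Python sketch below illustrates the idea under stated assumptions: the AHRS outputs a unit quaternion (body-to-global) and a body-frame accelerometer reading that includes gravity, and the pedestrian is treated as a single lumped mass. The function names and conventions are illustrative, not the authors' implementation.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def pedestrian_force(q, accel_body, mass, g=9.81):
    """Estimate the vertical pedestrian force from a single AHRS reading.

    q          -- unit quaternion giving sensor orientation (body -> global)
    accel_body -- accelerometer reading in the sensor frame (m/s^2),
                  which includes the gravity component
    mass       -- assumed lumped body mass of the pedestrian (kg)
    """
    accel_global = quat_to_rotmat(q) @ accel_body            # rotate into global frame
    accel_linear = accel_global - np.array([0.0, 0.0, g])    # remove gravity
    # Newton's second law: ground reaction balances weight plus inertia.
    return mass * (accel_linear[2] + g)

# Example: a level sensor at rest reports +1 g along its z axis,
# so the estimated force equals the static weight (~735.75 N for 75 kg).
q_level = np.array([1.0, 0.0, 0.0, 0.0])
print(pedestrian_force(q_level, np.array([0.0, 0.0, 9.81]), mass=75.0))
```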
Abstract:
We have examined the ability of observers to parse bimodal local-motion distributions into two global motion surfaces, either overlapping (yielding transparent motion) or spatially segregated (yielding a motion boundary). The stimuli were random dot kinematograms in which the direction of motion of each dot was drawn from one of two rectangular probability distributions. A wide range of direction distribution widths and separations was tested. The ability to discriminate the direction of motion of one of the two motion surfaces from the direction of a comparison stimulus was used as an objective test of the perception of two discrete surfaces. Performance for both transparent and spatially segregated motion was remarkably good, being only slightly inferior to that achieved with a single global motion surface. Performance was consistently better for segregated motion than for transparency. Whereas transparent motion was only perceived with direction distributions which were separated by a significant gap, segregated motion could be seen with abutting or even partially overlapping direction distributions. For transparency, the critical gap increased with the range of directions in the distribution. This result does not support models in which transparency depends on detection of a minimum size of gap defining a bimodal direction distribution. We suggest, instead, that the operations which detect bimodality are scaled (in the direction domain) with the overall range of distributions. This yields a flexible, adaptive system that determines whether a gap in the direction distribution serves as a segmentation cue or is smoothed as part of a unitary computation of global motion.
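For concreteness, the stimulus construction can be sketched in a few lines of Python. The parameter names and values below are hypothetical; only the idea of drawing each dot's direction from one of two rectangular (uniform) distributions comes from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def bimodal_directions(n_dots, width, separation):
    """Draw one direction per dot from two rectangular (uniform) distributions.

    Each distribution spans `width` degrees; their centres are `separation`
    degrees apart, symmetric about 0 deg. Half the dots come from each.
    """
    centres = np.array([-separation / 2, separation / 2])
    labels = rng.integers(0, 2, n_dots)   # which motion surface each dot joins
    return centres[labels] + rng.uniform(-width / 2, width / 2, n_dots)

# e.g. two 40-deg-wide distributions whose inner edges abut (zero gap):
dirs = bimodal_directions(n_dots=200, width=40, separation=40)
```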
Abstract:
The mechanisms underlying the parsing of a spatial distribution of velocity vectors into two adjacent (spatially segregated) or overlapping (transparent) motion surfaces were examined using random dot kinematograms. Parsing might occur using either of two principles. Surfaces might be defined on the basis of similarity of motion vectors and then sharp perceptual boundaries drawn between different surfaces (continuity-based segmentation). Alternatively, detection of a high gradient of direction or speed separating the motion surfaces might drive the process (discontinuity-based segmentation). To establish which method is used, we examined the effect of blurring the motion direction gradient. In the case of a sharp direction gradient, each dot had one of two directions differing by 135°. With a shallow gradient, most dots had one of two directions but the directions of the remainder spanned the range between one motion-defined surface and the other. In the spatial segregation case the gradient defined a central boundary separating two regions. In the transparent version the dots were randomly positioned. In both cases all dots moved with the same speed and existed for only two frames before being randomly replaced. The ability of observers to parse the motion distribution was measured in terms of their ability to discriminate the direction of one of the two surfaces. Performance was hardly affected by spreading the gradient over at least 25% of the dots (corresponding to a 1° strip in the segregation case). We conclude that detection of sharp velocity gradients is not necessary for distinguishing different motion surfaces.
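A minimal sketch of the shallow-gradient assignment, under the same hypothetical conventions as above: most dots take one of two directions 135° apart, while a chosen fraction spans the range between them.

```python
import numpy as np

rng = np.random.default_rng(1)

def gradient_directions(n_dots, blur_fraction, angle_sep=135.0):
    """Assign dot directions for a sharp or blurred direction gradient.

    blur_fraction -- proportion of dots whose directions span the range
                     between the two surface directions (0 gives the sharp,
                     two-valued gradient of the original stimulus).
    """
    n_blur = int(round(n_dots * blur_fraction))
    n_sharp = n_dots - n_blur
    # Sharp dots take one of the two surface directions, 135 deg apart.
    sharp = rng.choice([0.0, angle_sep], size=n_sharp)
    # Blurred dots interpolate uniformly between the two directions.
    blurred = rng.uniform(0.0, angle_sep, size=n_blur)
    return np.concatenate([sharp, blurred])

dirs = gradient_directions(n_dots=400, blur_fraction=0.25)  # 25% intermediate dots
```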
Abstract:
Motion transparency provides a challenging test case for our understanding of how visual motion, and other attributes, are computed and represented in the brain. However, previous studies of visual transparency have used subjective criteria which do not confirm the existence of independent representations of the superimposed motions. We have developed measures of performance in motion transparency that require observers to extract information about two motions jointly, and therefore test the information that is simultaneously represented for each motion. Observers judged whether two motions were at 90° to one another; the base direction was randomized so that neither motion taken alone was informative. The precision of performance was determined by the standard deviations (S.D.s) of probit functions fitted to the data. Observers also made judgments of orthogonal directions between a single motion stream and a line, for one of two transparent motions against a line, and for two spatially segregated motions. The data show that direction judgments with transparency can be made with comparable accuracy to segregated (non-transparent) conditions, supporting the idea that transparency involves the equivalent representation of two global motions in the same region. The precision of this joint direction judgment is, however, 2–3 times poorer than that for a single motion stream. The precision of the directional judgment for a single stream is reduced only by a factor of about 1.5 by superimposing a second stream. The major effect on performance, therefore, appears to be associated with the need to compute and compare two global representations of motion, rather than with interference between the dot streams per se. Experiment 2 tested the transparency of motions separated by a range of angles from 5° to 180° by requiring subjects to set a line matching the perceived direction of each motion. The S.D.s of these settings demonstrated that the directions of transparent motions were represented independently for separations over 20°. Increasing dot speeds from 1 to 10 deg/s improved directional performance but had no effect on transparency perception. Transparency was also unaffected by variations of density between 0.1 and 19 dots/deg².
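The probit analysis mentioned here is a standard cumulative-Gaussian fit whose sigma parameter gives the precision (S.D.). A minimal sketch follows; the response data are invented for illustration, and only the fitting method reflects the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def probit(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical data: angular offset from 90 deg (x) versus proportion of
# "greater than 90 deg" responses (p), pooled over trials.
x = np.array([-20, -10, -5, 0, 5, 10, 20], dtype=float)
p = np.array([0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95])

(mu, sigma), _ = curve_fit(probit, x, p, p0=[0.0, 10.0])
print(f"bias = {mu:.1f} deg, precision (S.D.) = {sigma:.1f} deg")
```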
Abstract:
The importance of relative motion information when modelling a novel motor skill was examined. Participants were assigned to one of four groups. Groups 1 and 2 viewed demonstrations of a skilled cricket bowler presented in either video or point-light format. Group 3 observed a single point of light pertaining to the wrist of the skilled bowler only. Participants in Group 4 did not receive a demonstration and acted as controls. During 60 acquisition trials, participants in the demonstration groups viewed a model five times before each 10-trial block. Retention was examined the following day. Intra-limb coordination was assessed for the right elbow relative to the wrist in comparison to the model. The demonstration groups showed greater concordance with the model than the control group. However, the wrist group performed less like the model than the point-light and video groups, who did not differ from each other. These effects were maintained in retention. Relative motion information aided the acquisition of intra-limb coordination, while making this information more salient (through point lights) provided no additional benefit. The motion of the model's bowling arm was replicated more closely than that of the non-bowling arm, suggesting that information from the end-effector is prioritized during observation for later reproduction.
Abstract:
Using a speed-matching task, we measured the speed tuning of the dynamic motion aftereffect (MAE). The results of our first experiment, in which we co-varied dot speed in the adaptation and test stimuli, revealed a speed tuning function. We sought to tease apart what contribution, if any, the test stimulus makes towards the observed speed tuning. This was examined by independently manipulating dot speed in the adaptation and test stimuli, and measuring the effect this had on the perceived speed of the dynamic MAE. The results revealed that the speed tuning of the dynamic MAE is determined, not by the speed of the adaptation stimulus, but by the local motion characteristics of the dynamic test stimulus. The role of the test stimulus in determining the perceived speed of the dynamic MAE was confirmed by showing that, if one uses a test stimulus containing two sources of local speed information, observers report seeing a transparent MAE; this is despite the fact that adaptation is induced using a single-speed stimulus. Thus, while the adaptation stimulus necessarily determines the perceived direction of the dynamic MAE, its perceived speed is determined by the test stimulus. This dissociation of speed and direction supports the notion that the processing of these two visual attributes may be partially independent.
Abstract:
The processing of motion information by the visual system can be decomposed into two general stages: point-by-point local motion extraction, followed by global motion extraction through the pooling of the local motion signals. The direction aftereffect (DAE) is a well-known phenomenon in which prior adaptation to a unidirectional moving pattern results in an exaggerated perceived direction difference between the adapted direction and a subsequently viewed stimulus moving in a different direction. The experiments in this paper sought to identify where the adaptation underlying the DAE occurs within the motion processing hierarchy. We found that the DAE exhibits interocular transfer, thus demonstrating that the underlying adapted neural mechanisms are binocularly driven and must, therefore, reside in the visual cortex. The remaining experiments measured the speed tuning of the DAE, and used the derived function to test a number of local and global models of the phenomenon. Our data provide compelling evidence that the DAE is driven by the adaptation of motion-sensitive neurons at the local-processing stage of motion encoding. This is in contrast to earlier research showing that direction repulsion, which can be viewed as a simultaneous-presentation counterpart to the DAE, is a global motion process. This leads us to conclude that the DAE and direction repulsion reflect interactions between motion-sensitive neural mechanisms at different levels of the motion-processing hierarchy.
Abstract:
With the advent of new video standards such as MPEG-4 part-10 and H.264/H.26L, demands for advanced video coding, particularly in the area of variable block size video motion estimation (VBSME), are increasing. In this paper, we propose a new one-dimensional (1-D) very large-scale integration architecture for full-search VBSME (FSVBSME). The VBS sum of absolute differences (SAD) computation is performed by re-using the results of smaller sub-block computations. These are distributed and combined by incorporating a shuffling mechanism within each processing element. Whereas a conventional 1-D architecture can process only one motion vector (MV), this new architecture can process up to 41 MV sub-blocks (within a macroblock) in the same number of clock cycles.
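The reuse principle is simple arithmetic: the sixteen 4x4 SADs within a macroblock are the primitives, and every larger partition's SAD is a sum of them. The NumPy sketch below shows this for the square partitions only (the rectangular partitions, which bring the count to 41 sub-blocks per macroblock, are sums of the same 4x4 primitives); it illustrates the computation, not the proposed shuffling hardware.

```python
import numpy as np

def vbs_sads(cur, ref):
    """Compute SADs for the square block sizes of a 16x16 macroblock
    by reusing smaller sub-block results, as in full-search VBSME.

    cur, ref -- 16x16 arrays (current macroblock and candidate reference block)
    """
    diff = np.abs(cur.astype(int) - ref.astype(int))
    # Level 0: sixteen 4x4 SADs, the basic building blocks.
    sad4 = diff.reshape(4, 4, 4, 4).sum(axis=(1, 3))   # 4x4 grid of 4x4 SADs
    # Level 1: four 8x8 SADs, each the sum of four 4x4 SADs.
    sad8 = sad4.reshape(2, 2, 2, 2).sum(axis=(1, 3))
    # Level 2: one 16x16 SAD, the sum of the four 8x8 SADs.
    sad16 = sad8.sum()
    return sad4, sad8, sad16
```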
Abstract:
Correlated electron-ion dynamics (CEID) is an extension of molecular dynamics that allows the exchange of energy between electrons and ions to be introduced in a correct manner. The formalism is based on a systematic approximation: a small-amplitude moment expansion. This formalism is extended here to include the explicit quantum spread of the ions and a generalization of the Hartree-Fock approximation for incoherent sums of Slater determinants. We demonstrate that the resultant dynamical equations reproduce analytically the selection rules for inelastic electron-phonon scattering from perturbation theory, which control the mutually driven excitations of the two interacting subsystems. We then use CEID to make direct numerical simulations of inelastic current-voltage spectroscopy in atomic wires, and to exhibit the crossover from ionic cooling to heating as a function of the relative degree of excitation of the electronic and ionic subsystems.
Abstract:
Our understanding of how the visual system processes motion transparency, the phenomenon by which multiple directions of motion are perceived to co-exist in the same spatial region, has grown considerably in the past decade. There is compelling evidence that the process is driven by global-motion mechanisms. Consequently, although transparently moving surfaces are readily segmented over an extended space, the visual system cannot separate two motion signals that co-exist in the same local region. A related issue is whether the visual system can detect transparently moving surfaces simultaneously, or whether the component signals encounter a serial 'bottleneck' during their processing. Our initial results show that, at sufficiently short stimulus durations, observers cannot accurately detect two superimposed directions; yet they have no difficulty in detecting one pattern direction in noise, supporting the serial-bottleneck scenario. However, in a second experiment, the difference in performance between the two tasks disappears when the component patterns are segregated. This discrepancy between the processing of transparent and non-overlapping patterns may be a consequence of suppressed activity of global-motion mechanisms when the transparent surfaces are presented in the same depth plane. To test this explanation, we repeated our initial experiment while separating the motion components in depth. The marked improvement in performance leads us to conclude that transparent motion signals are represented simultaneously.