71 results for spine motion segment stiffness
Abstract:
A novel, fast automatic motion segmentation approach is presented. It differs from conventional pixel- or edge-based motion segmentation approaches in that the proposed method uses labelled regions (facets) to segment various video objects from the background. Facets are clustered into objects based on their motion and proximity details using Bayesian logic. Because the number of facets is usually much lower than the number of edges and points, using facets can greatly reduce the computational complexity of motion segmentation. The proposed method can efficiently handle the complexity of video object motion tracking, and offers potential for real-time content-based video annotation.
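The facet-based grouping described above could be sketched roughly as follows; the Facet fields, thresholds, and greedy merge rule are illustrative assumptions that stand in for the paper's Bayesian formulation rather than reproducing it.

```python
# Illustrative sketch only: groups pre-labelled facets into moving objects by
# comparing motion vectors and centroid proximity. Field names and the greedy
# merge rule are assumptions, not the paper's Bayesian method.
from dataclasses import dataclass
import numpy as np

@dataclass
class Facet:                      # hypothetical facet record
    centroid: np.ndarray          # (x, y) region centre
    motion: np.ndarray            # (dx, dy) estimated motion vector

def segment_facets(facets, motion_tol=1.0, dist_tol=40.0):
    """Assign each facet to an existing object if its motion and position are
    close enough to that object's first facet; otherwise start a new object."""
    objects = []                  # each object is a list of facet indices
    for i, f in enumerate(facets):
        placed = False
        for obj in objects:
            ref = facets[obj[0]]
            if (np.linalg.norm(f.motion - ref.motion) < motion_tol and
                    np.linalg.norm(f.centroid - ref.centroid) < dist_tol):
                obj.append(i)
                placed = True
                break
        if not placed:
            objects.append([i])
    return objects
```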
Abstract:
Despite the importance of laughter in social interactions, it remains little studied in affective computing. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received almost no attention. The aim of this study is twofold: first, an investigation into observers' perception of laughter states (hilarious, social, awkward, fake, and non-laughter) based on body movements alone, through their categorization of avatars animated with natural and acted motion capture data. Significant differences in torso and limb movements were found between animations perceived as containing laughter and those perceived as non-laughter. Hilarious laughter also differed from social laughter in the amount of bending of the spine, the amount of shoulder rotation, and the amount of hand movement. The body movement features indicative of laughter differed between sitting and standing avatar postures. Based on the positive findings of this perceptual study, the second aim is to investigate the possibility of automatically predicting the distributions of observers' ratings for the laughter states. The findings show that the automated laughter recognition rates approach human rating levels, with the Random Forest method yielding the best performance.
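As a rough illustration of the prediction task described, not the study's actual pipeline, a multi-output Random Forest can map body-movement features to per-state rating proportions; the features, data, and split below are hypothetical.

```python
# Illustrative sketch: a multi-output Random Forest mapping body-movement
# features to the proportion of observers choosing each laughter state.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

states = ["hilarious", "social", "awkward", "fake", "non-laughter"]

# Hypothetical data: 200 animations x 10 movement features (e.g. spine bend,
# shoulder rotation, hand movement); targets are per-state rating proportions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.dirichlet(np.ones(len(states)), size=200)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X[:160], y[:160])                 # train split
pred = model.predict(X[160:])               # predicted rating distributions
print(dict(zip(states, pred[0].round(2))))  # distribution for one animation
```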
Abstract:
We have examined the ability of observers to parse bimodal local-motion distributions into two global motion surfaces, either overlapping (yielding transparent motion) or spatially segregated (yielding a motion boundary). The stimuli were random dot kinematograms in which the direction of motion of each dot was drawn from one of two rectangular probability distributions. A wide range of direction distribution widths and separations was tested. The ability to discriminate the direction of motion of one of the two motion surfaces from the direction of a comparison stimulus was used as an objective test of the perception of two discrete surfaces. Performance for both transparent and spatially segregated motion was remarkably good, being only slightly inferior to that achieved with a single global motion surface. Performance was consistently better for segregated motion than for transparency. Whereas transparent motion was only perceived with direction distributions which were separated by a significant gap, segregated motion could be seen with abutting or even partially overlapping direction distributions. For transparency, the critical gap increased with the range of directions in the distribution. This result does not support models in which transparency depends on detection of a minimum size of gap defining a bimodal direction distribution. We suggest, instead, that the operations which detect bimodality are scaled (in the direction domain) with the overall range of distributions. This yields a flexible, adaptive system that determines whether a gap in the direction distribution serves as a segmentation cue or is smoothed as part of a unitary computation of global motion.
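The stimulus construction described above, with each dot's direction drawn from one of two rectangular distributions of a given width and separation, might be sketched as follows; the parameter names and values are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: draw dot directions from one of two rectangular (uniform)
# distributions centred a given angular separation apart.
import numpy as np

def bimodal_directions(n_dots, width_deg, separation_deg, rng=None):
    rng = rng or np.random.default_rng()
    centres = np.array([-separation_deg / 2.0, separation_deg / 2.0])
    which = rng.integers(0, 2, size=n_dots)            # assign dots to a surface
    offsets = rng.uniform(-width_deg / 2.0, width_deg / 2.0, size=n_dots)
    return centres[which] + offsets                    # per-dot direction (deg)

dirs = bimodal_directions(n_dots=100, width_deg=30, separation_deg=60)
```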
Abstract:
The mechanisms underlying the parsing of a spatial distribution of velocity vectors into two adjacent (spatially segregated) or overlapping (transparent) motion surfaces were examined using random dot kinematograms. Parsing might occur using either of two principles. Surfaces might be defined on the basis of similarity of motion vectors and then sharp perceptual boundaries drawn between different surfaces (continuity-based segmentation). Alternatively, detection of a high gradient of direction or speed separating the motion surfaces might drive the process (discontinuity-based segmentation). To establish which method is used, we examined the effect of blurring the motion direction gradient. In the case of a sharp direction gradient, each dot had one of two directions differing by 135°. With a shallow gradient, most dots had one of two directions but the directions of the remainder spanned the range between one motion-defined surface and the other. In the spatial segregation case the gradient defined a central boundary separating two regions. In the transparent version the dots were randomly positioned. In both cases all dots moved with the same speed and existed for only two frames before being randomly replaced. The ability of observers to parse the motion distribution was measured in terms of their ability to discriminate the direction of one of the two surfaces. Performance was hardly affected by spreading the gradient over at least 25% of the dots (corresponding to a 1° strip in the segregation case). We conclude that detection of sharp velocity gradients is not necessary for distinguishing different motion surfaces.
Abstract:
Motion transparency provides a challenging test case for our understanding of how visual motion, and other attributes, are computed and represented in the brain. However, previous studies of visual transparency have used subjective criteria which do not confirm the existence of independent representations of the superimposed motions. We have developed measures of performance in motion transparency that require observers to extract information about two motions jointly, and therefore test the information that is simultaneously represented for each motion. Observers judged whether two motions were at 90° to one another; the base direction was randomized so that neither motion taken alone was informative. The precision of performance was determined by the standard deviations (S.D.s) of probit functions fitted to the data. Observers also made judgments of orthogonal directions between a single motion stream and a line, for one of two transparent motions against a line, and for two spatially segregated motions. The data show that direction judgments with transparency can be made with comparable accuracy to segregated (non-transparent) conditions, supporting the idea that transparency involves the equivalent representation of two global motions in the same region. The precision of this joint direction judgment is, however, 2–3 times poorer than that for a single motion stream. The precision in directional judgment for a single stream is reduced only by a factor of about 1.5 by superimposing a second stream. The major effect in performance, therefore, appears to be associated with the need to compute and compare two global representations of motion, rather than with interference between the dot streams per se. Experiment 2 tested the transparency of motions separated by a range of angles from 5° to 180° by requiring subjects to set a line matching the perceived direction of each motion. The S.D.s of these settings demonstrated that directions of transparent motions were represented independently for separations over 20°. Increasing dot speeds from 1 to 10 deg/s improved directional performance but had no effect on transparency perception. Transparency was also unaffected by variations of density between 0.1 and 19 dots/deg².
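As an illustration of the probit-based precision measure mentioned above, a cumulative Gaussian can be fitted to judgment proportions and its standard deviation read off as the precision; the data values below are invented purely for demonstration.

```python
# Minimal sketch of a probit (cumulative Gaussian) fit: the fitted S.D. serves
# as the precision estimate. Judgment proportions here are made up.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

angle_offset = np.array([-20, -10, -5, 0, 5, 10, 20])          # deg from 90 deg
p_judged_larger = np.array([0.05, 0.2, 0.35, 0.5, 0.7, 0.85, 0.95])

def probit(x, mu, sd):
    return norm.cdf(x, loc=mu, scale=sd)

(mu, sd), _ = curve_fit(probit, angle_offset, p_judged_larger, p0=(0.0, 5.0))
print(f"bias = {mu:.1f} deg, precision (S.D.) = {sd:.1f} deg")
```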
Abstract:
The importance of relative motion information when modelling a novel motor skill was examined. Participants were assigned to one of four groups. Groups 1 and 2 viewed demonstrations of a skilled cricket bowler presented in either video or point-light format. Group 3 observed a single point of light pertaining to the wrist of the skilled bowler only. Participants in Group 4 did not receive a demonstration and acted as controls. During 60 acquisition trials, participants in the demonstration groups viewed a model five times before each 10-trial block. Retention was examined the following day. Intra-limb coordination was assessed for the right elbow relative to the wrist in comparison to the model. The demonstration groups showed greater concordance with the model than the control group. However, the wrist group performed less like the model than the point-light and video groups, who did not differ from each other. These effects were maintained in retention. Relative motion information aided the acquisition of intra-limb coordination, while making this information more salient (through point lights) provided no additional benefit. The motion of the model's bowling arm was replicated more closely than the non-bowling arm, suggesting that information from the end-effector is prioritized during observation for later reproduction.
Abstract:
Using a speed-matching task, we measured the speed tuning of the dynamic motion aftereffect (MAE). The results of our first experiment, in which we co-varied dot speed in the adaptation and test stimuli, revealed a speed tuning function. We sought to tease apart what contribution, if any, the test stimulus makes towards the observed speed tuning. This was examined by independently manipulating dot speed in the adaptation and test stimuli, and measuring the effect this had on the perceived speed of the dynamic MAE. The results revealed that the speed tuning of the dynamic MAE is determined, not by the speed of the adaptation stimulus, but by the local motion characteristics of the dynamic test stimulus. The role of the test stimulus in determining the perceived speed of the dynamic MAE was confirmed by showing that, if one uses a test stimulus containing two sources of local speed information, observers report seeing a transparent MAE; this is despite the fact that adaptation is induced using a single-speed stimulus. Thus while the adaptation stimulus necessarily determines perceived direction of the dynamic MAE, its perceived speed is determined by the test stimulus. This dissociation of speed and direction supports the notion that the processing of these two visual attributes may be partially independent.
Abstract:
The processing of motion information by the visual system can be decomposed into two general stages: point-by-point local motion extraction, followed by global motion extraction through the pooling of the local motion signals. The direction aftereffect (DAE) is a well known phenomenon in which prior adaptation to a unidirectional moving pattern results in an exaggerated perceived direction difference between the adapted direction and a subsequently viewed stimulus moving in a different direction. The experiments in this paper sought to identify where the adaptation underlying the DAE occurs within the motion processing hierarchy. We found that the DAE exhibits interocular transfer, thus demonstrating that the underlying adapted neural mechanisms are binocularly driven and must, therefore, reside in the visual cortex. The remaining experiments measured the speed tuning of the DAE, and used the derived function to test a number of local and global models of the phenomenon. Our data provide compelling evidence that the DAE is driven by the adaptation of motion-sensitive neurons at the local-processing stage of motion encoding. This is in contrast to earlier research showing that direction repulsion, which can be viewed as a simultaneous-presentation counterpart to the DAE, is a global motion process. This leads us to conclude that the DAE and direction repulsion reflect interactions between motion-sensitive neural mechanisms at different levels of the motion-processing hierarchy.
Abstract:
Fifty-two CFLP mice had an open femoral diaphyseal osteotomy held in compression by a four-pin external fixator. The movement of 34 of the mice in their cages was quantified before and after operation, until sacrifice at 4, 8, 16 or 24 days. Thirty-three specimens underwent histomorphometric analysis and 19 specimens underwent torsional stiffness measurement. The expected combination of intramembranous and endochondral bone formation was observed, and the model was shown to be reliable in that variation in the histological parameters of healing was small between animals at the same time point, compared to the variation between time-points. There was surprisingly large individual variation in the amount of animal movement about the cage, which correlated with both histomorphometric and mechanical measures of healing. Animals that moved more had larger external calluses containing more cartilage and demonstrated lower torsional stiffness at the same time point. Assuming that movement of the whole animal predicts, at least to some extent, movement at the fracture site, this correlation is what would be expected in a model that involves similar processes to those in human fracture healing. Models such as this, employed to determine the effect of experimental interventions, will yield more information if the natural variation in animal motion is measured and included in the analysis.
Abstract:
Obesity is a low-grade inflammatory state associated with premature cardiovascular morbidity and mortality. Along with traditional risk factors, the measurement of endothelial function, insulin resistance, inflammation and arterial stiffness may contribute to the assessment of cardiovascular risk. We conducted a randomised placebo-controlled trial to assess the effects of 12 weeks of treatment with a PPAR-alpha agonist (fenofibrate) and a PPAR-gamma agonist (pioglitazone) on these parameters in obese glucose-tolerant men. Arterial stiffness was measured using augmentation index and pulse wave velocity (PWV). E-selectin, VCAM-1 and ICAM-1 were used as markers of endothelial function. Insulin sensitivity improved with pioglitazone treatment (p=0.001) and, in keeping with this, adiponectin increased by 85.2% (p