3 results for 3D motion trajectory
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
Introduction: Sport practiced by people with disabilities has been growing in recent years, and advances in assessment and training methods have emerged as a consequence. Paralympic sport, however, lags behind these advances, with few specific studies that consider disability as an intervening factor. Transcranial direct current stimulation (tDCS) is a technique that has proven capable of modulating brain function, and studies show beneficial effects of tDCS on muscle strength, power, and fatigue during exercise. Objective: To investigate the effect of tDCS on movement control in para-powerlifters. Methods: Eight subjects underwent two motion-capture sessions, each preceded by either anodal tDCS or a sham session applied over the cerebellum. Three movements were performed with increasing load between 90-95% of 1RM. The movements were recorded by a system of 10 infrared cameras, which reconstructed the 3D trajectory of markers placed on the bar. Results: There were differences between the anodal and sham conditions in bar level (initial, final, and maximum during the eccentric and concentric phases) and in the difference between the final and initial bar levels. Moreover, bar level (final and during the eccentric phase) differed between amputee and les autres athletes. Conclusion: The findings of this study suggest that tDCS applied over the cerebellum prior to exercise in para-powerlifters acts differently according to disability.
Abstract:
Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the continuing advances in Computer Vision algorithms and computing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for localization systems in robotics and Augmented Reality applications, in contrast with its original purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a previously calibrated camera as a positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, so that additional information such as probabilistic models of camera state transition becomes unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared against localization data gathered from a mobile robotic platform.
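The relative-pose estimation that such an SFM-based pipeline relies on is grounded in two-view epipolar geometry: for a calibrated camera pair with relative rotation R and translation t, the essential matrix E = [t]ₓR must satisfy x2ᵀ E x1 = 0 for every true correspondence in normalized image coordinates. The sketch below (an illustrative numpy example under assumed synthetic poses, not the thesis's implementation) verifies this constraint:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential_from_pose(R, t):
    """Essential matrix E = [t]_x R relating two calibrated views."""
    return skew(t) @ R

def epipolar_residual(E, x1, x2):
    """Epipolar constraint x2^T E x1; zero for a true correspondence.
    x1, x2 are homogeneous normalized image coordinates (3-vectors)."""
    return float(x2 @ E @ x1)

# Synthetic relative pose: small yaw rotation plus a translation.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.0, 0.2])

X = np.array([0.3, -0.1, 4.0])   # 3D landmark in camera-1 coordinates
x1 = X / X[2]                    # normalized projection in view 1
Xc2 = R @ X + t                  # same landmark in camera-2 coordinates
x2 = Xc2 / Xc2[2]                # normalized projection in view 2

E = essential_from_pose(R, t)
print(abs(epipolar_residual(E, x1, x2)))  # ~0: the constraint holds
```

In a full pipeline the direction is reversed: E is estimated from many such correspondences (e.g. with the five-point algorithm plus RANSAC) and then decomposed to recover R and t up to scale.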
Abstract:
3D reconstruction is the process of obtaining a detailed three-dimensional graphical model that represents a real object or scene. The process uses sequences of images taken from the scene, so that it can automatically extract depth information for feature points, which are highlighted by some computational technique applied to the images that compose the dataset. Using SURF feature points, this work proposes a model for obtaining the depth of the feature points detected by the system. Finally, the proposed system extracts three important pieces of information from the image dataset: the 3D positions of the feature points; the relative rotation and translation matrices between images; and the relation between the baseline of adjacent images and the accuracy error found for the 3D points.
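Recovering the 3D position of a matched feature point from two views is typically done by linear (DLT) triangulation. The following is a minimal numpy sketch under assumed synthetic cameras (identity intrinsics, a pure horizontal baseline between the views); it is an illustration of the general technique, not the system described above:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two views with 3x4 projection matrices
    P1, P2, by solving A X = 0 via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

def project(P, X):
    """Project a 3D point with projection matrix P (3x4)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic setup: first camera at the origin, second camera shifted
# along x by the baseline (the quantity the abstract relates to accuracy).
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
baseline = 0.5
P2 = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])

X_true = np.array([0.2, -0.3, 5.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate_dlt(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

With noisy feature detections the triangulation error grows as the baseline shrinks relative to the point's depth, which is the kind of baseline-versus-accuracy relation the abstract refers to.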