4 results for 3D motion capture
in QSpace: Queen's University - Canada
Abstract:
Clinical optical motion capture allows us to obtain kinematic and kinetic outcome measures that aid clinicians in diagnosing and treating different pathologies affecting healthy gait. The long-term aim for gait centres is subject-specific analyses that can predict, prevent, or reverse the effects of pathologies through gait retraining. To track the body, anatomical segment coordinate systems are commonly created by applying markers to the surface of the skin over specific, manually palpated bony anatomy. The location and placement of these markers is subjective, and precision errors of up to 25 mm have been reported [1]. Additionally, the selection of which anatomical landmarks to use in segment models can result in large angular differences; for example, angular differences in the trunk can range up to 53° for the same motion depending on marker placement [2]. These errors can result in erroneous kinematic outcomes that either diminish or exaggerate the apparent effects of a treatment or pathology compared to healthy data. Our goal was to improve the accuracy and precision of optical motion capture outcome measures. This thesis describes two separate studies. In the first study, we aimed to establish an approach that would allow us to independently quantify the error among trunk models and, using this approach, to determine whether there was a best model for accurately tracking trunk motion. In the second study, we designed a device to improve precision for test-retest protocols that would also reduce the set-up time for motion capture experiments. Our method of comparing a kinematically derived centre of mass velocity to one derived kinetically was successful in quantifying error among trunk models. Our findings indicate that models that use lateral shoulder markers and limit the translational degrees of freedom of the trunk through shared pelvic markers result in the least error for the tasks we studied. We also successfully reduced intra- and inter-operator anatomical marker placement errors using a marker alignment device. The improved accuracy and precision resulting from the methods established in this thesis may lead to increased sensitivity to changes in kinematics and ultimately result in more consistent treatment outcomes.
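The comparison at the core of the first study lends itself to a short illustration. The Python sketch below, which is not the thesis code, contrasts a centre-of-mass (COM) velocity obtained by differentiating a model-based COM position with one obtained by integrating force-plate data via Newton's second law; the array names, sampling-rate handling, and RMS error metric are illustrative assumptions.

import numpy as np

def com_velocity_kinematic(com_position, fs):
    """Differentiate a model-based COM position (N x 3, metres) sampled at fs Hz."""
    return np.gradient(com_position, 1.0 / fs, axis=0)

def com_velocity_kinetic(grf, mass, fs, v0=np.zeros(3)):
    """Integrate the net ground reaction force (N x 3, newtons) via Newton's second law."""
    gravity = np.array([0.0, 0.0, -9.81])          # lab frame assumed z-up
    com_accel = grf / mass + gravity               # a = F/m + g
    return v0 + np.cumsum(com_accel, axis=0) / fs  # simple rectangle-rule integration

def rms_difference(v_kinematic, v_kinetic):
    """Root-mean-square difference between the two COM velocity estimates."""
    return np.sqrt(np.mean(np.sum((v_kinematic - v_kinetic) ** 2, axis=1)))

A trunk model that tracks the COM well should yield a small RMS difference between the two estimates, which is the sense in which the kinetic estimate serves as an independent reference.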
Abstract:
The ability to capture human motion allows researchers to evaluate an individual’s gait. Gait can be measured in different ways, from camera-based systems to Magnetic and Inertial Measurement Units (MIMUs). The former use cameras to track the positions of photo-reflective markers, while the latter use accelerometers, gyroscopes, and magnetometers to measure segment orientation. Both systems can be used to measure joint kinematics, but the results differ because of differences in their anatomical calibrations. The objective of this thesis was to study potential solutions for reducing joint angle discrepancies between MIMU and camera-based systems. The first study corrected the anatomical frame differences between MIMU and camera-based systems via the joint angles of both systems, comparing full lower-body correction with correction of a single joint. Single-joint correction showed slightly better alignment of the two systems, but it does not account for the fact that body segments are generally affected by more than one joint. The second study explored the possibility of anatomical landmarking using a single camera and a pointer apparatus. Results showed that anatomical landmark positions could be determined using a single camera, as the landmarks obtained with this approach agreed closely with those from a camera-based system. This thesis provides a novel way of obtaining anatomical landmarks with a single point-and-shoot camera, as well as a method for aligning anatomical frames between MIMUs and camera-based systems using joint angles.
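To make the joint-angle-based alignment idea concrete, here is a minimal sketch assuming both systems report a given joint as a time series of rotations over the same trial; it estimates a single constant offset rotation and applies it to the MIMU data. The scipy representation and the mean-rotation estimator are assumptions for illustration, not the thesis procedure, which also compared full lower-body versus single-joint correction.

from scipy.spatial.transform import Rotation as R

def estimate_offset(r_camera, r_mimu):
    """Constant rotation that best maps MIMU joint rotations onto camera-based ones."""
    return (r_camera * r_mimu.inv()).mean()   # chordal mean of the per-frame discrepancies

def apply_offset(offset, r_mimu):
    """Correct MIMU joint rotations with the estimated anatomical-frame offset."""
    return offset * r_mimu

# Synthetic check: a fixed 10 degree anatomical-frame misalignment is recovered.
true_offset = R.from_euler("z", 10, degrees=True)
r_cam = R.random(100)                  # stand-in camera-based joint rotations
r_imu = true_offset.inv() * r_cam      # the same motion seen through a misaligned frame
print(estimate_offset(r_cam, r_imu).as_euler("zyx", degrees=True))   # ~[10, 0, 0]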
Abstract:
Quantitative methods can help us understand how underlying attributes contribute to movement patterns. Applying principal components analysis (PCA) to whole-body motion data may provide an objective, data-driven method to identify unique and statistically important movement patterns. Therefore, the primary purpose of this study was to determine whether athletes’ movement patterns can be differentiated based on skill level or sport played using PCA. Motion capture data from 542 athletes performing three sport-screening movements (i.e. bird-dog, drop jump, T-balance) were analyzed with a PCA-based pattern recognition technique. Before analyzing the effects of skill level or sport on movement patterns, methodological considerations related to the motion analysis reference coordinate system were assessed. All analyses were addressed as case studies. In the first case study, referencing motion data to a global (lab-based) coordinate system rather than a local (segment-based) coordinate system affected the ability to interpret important movement features. In the second case study, which assessed the interpretability of PCs when data were referenced to a stationary versus a moving segment-based coordinate system, PCs were more interpretable when data were referenced to a stationary coordinate system for both the bird-dog and T-balance tasks. As a result of these findings, only stationary segment-based coordinate systems were used in case studies 3 and 4. During the bird-dog task, elite athletes had significantly lower scores than recreational athletes for principal component (PC) 1. For the T-balance movement, elite athletes had significantly lower scores than recreational athletes for PC 2. In both analyses the lower scores in elite athletes represented a greater range of motion. Finally, case study 4 examined differences in the movement patterns of athletes who competed in different sports, and significant differences in technique were detected during the bird-dog task. Through these case studies, this thesis highlights the feasibility of applying PCA as a movement pattern recognition technique in athletes. Future research can build on this proof-of-principle work to develop robust quantitative methods to help us better understand how underlying attributes (e.g. height, sex, ability, injury history, training type) contribute to performance.
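A brief sketch of a PCA-based pattern recognition pipeline of the kind described above, shown here with synthetic data rather than the thesis dataset: each athlete's time-normalized, whole-body waveform is flattened into one row, PCA extracts the dominant movement features, and PC scores are compared between groups. The array shapes, group labels, and t-test are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA
from scipy import stats

def movement_pca(trials, n_components=5):
    """trials: (n_athletes, n_frames * n_signals) matrix of flattened waveforms."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(trials)     # PC scores, one row per athlete
    return pca, scores

def compare_groups(scores, labels, pc=0):
    """Test whether scores on one PC differ between two skill-level groups."""
    elite = scores[labels == "elite", pc]
    recreational = scores[labels == "recreational", pc]
    return stats.ttest_ind(elite, recreational)

# Usage with synthetic data: 542 athletes, 101 frames x 30 signals each.
rng = np.random.default_rng(0)
trials = rng.normal(size=(542, 101 * 30))
labels = np.array(["elite"] * 200 + ["recreational"] * 342)
pca, scores = movement_pca(trials)
print(compare_groups(scores, labels, pc=0))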
Abstract:
Moving through a stable, three-dimensional world is a hallmark of our motor and perceptual experience. This stability is constantly being challenged by movements of the eyes and head, inducing retinal blur and retino-spatial misalignments for which the brain must compensate. To do so, the brain must account for eye and head kinematics to transform two-dimensional retinal input into the reference frame necessary for movement or perception. The four studies in this thesis used both computational and psychophysical approaches to investigate several aspects of this reference frame transformation. In the first study, we examined the neural mechanism underlying the visuomotor transformation for smooth pursuit using a feedforward neural network model. After training, the model performed the general, three-dimensional transformation using gain modulation. This gave mechanistic significance to gain modulation observed in cortical pursuit areas while also providing several testable hypotheses for future electrophysiological work. In the second study, we asked how anticipatory pursuit, which is driven by memorized signals, accounts for eye and head geometry using a novel head-roll updating paradigm. We showed that the velocity memory driving anticipatory smooth pursuit relies on retinal signals, but is updated for the current head orientation. In the third study, we asked how forcing retinal motion to undergo a reference frame transformation influences perceptual decision making. We found that simply rolling one's head impairs perceptual decision making in a way captured by stochastic reference frame transformations. In the final study, we asked how torsional shifts of the retinal projection occurring with almost every eye movement influence orientation perception across saccades. We found a pre-saccadic, predictive remapping consistent with maintaining a purely retinal (but spatially inaccurate) orientation perception throughout the movement. Together these studies suggest that, despite their spatial inaccuracy, retinal signals play a surprisingly large role in our seamless visual experience. This work therefore represents a significant advance in our understanding of how the brain performs one of its most fundamental functions.
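As a rough illustration of the reference frame transformation the network model in the first study was trained to perform, the sketch below rotates an eye-centred velocity into a body/space frame using the current 3D eye and head orientations. The axis conventions and scipy representation are assumptions, and the network architecture, training, and gain-modulation analysis themselves are not reproduced here.

import numpy as np
from scipy.spatial.transform import Rotation as R

def retinal_to_spatial(retinal_velocity, eye_in_head, head_on_body):
    """Rotate an eye-centred velocity vector into body/space coordinates."""
    return (head_on_body * eye_in_head).apply(retinal_velocity)

# Example: a purely horizontal retinal slip, viewed with the head rolled 30 degrees,
# acquires a vertical component in space that a spatially correct pursuit command
# must include (x forward, y left, z up; conventions assumed for illustration).
eye = R.identity()
head = R.from_euler("x", 30, degrees=True)     # head roll about the naso-occipital axis
retinal_slip = np.array([0.0, 10.0, 0.0])      # deg/s, horizontal on the retina
print(retinal_to_spatial(retinal_slip, eye, head))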