4 results for Kinematics
in QSpace: Queen's University - Canada
Abstract:
Measurement of joint kinematics can provide knowledge to help improve joint prosthesis design, as well as identify joint motion patterns that may lead to joint degeneration or injury. More investigation is needed into how the hip translates in live human subjects during high-amplitude motions. This work presents the design of a non-invasive method that uses registration between images from conventional Magnetic Resonance Imaging (MRI) and open MRI to calculate three-dimensional hip joint kinematics. The method was tested on a single healthy subject in three different poses. MRI protocols were developed for the conventional-gantry, high-resolution MRI and the open-gantry, low-resolution MRI. The scan time for the low-resolution protocol was just under 6 minutes. High-resolution meshes and low-resolution contours were derived from segmentation of the high-resolution and low-resolution images, respectively. The low-resolution contours described the poses as scanned, whereas the meshes described the bones’ geometries. The meshes and contours were registered to each other, and joint kinematics were calculated. Segmentation and registration were performed for both cortical and sub-cortical bone surfaces. A repeatability study was performed by comparing the kinematic results derived from three users’ segmentations of the sub-cortical bone surfaces from a low-resolution scan. The root mean squared error of all registrations was below 1.92 mm. The maximum range between segmenters in translation magnitude was 0.95 mm, and the maximum deviation from the average of all orientations was 1.27°. This work demonstrated that this non-invasive method for measuring hip kinematics is promising for capturing high-range-of-motion hip movements in vivo.
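The registration of bone meshes to pose contours, and the root-mean-squared error reported above, can be sketched as a least-squares rigid fit between corresponding point sets. This is an illustrative assumption rather than the thesis's actual pipeline; the function names and the SVD-based (Kabsch) solution are hypothetical.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid (rotation + translation) fit of one point set
    onto another (both N x 3), via the Kabsch/SVD method."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                      # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

def rms_error(source, target, R, t):
    """Root-mean-squared residual after applying the fitted transform."""
    residuals = (R @ source.T).T + t - target
    return np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
```

Given the fitted pelvis and femur transforms for a pose, the relative hip transform (and hence the joint kinematics) would follow by composing them.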
Abstract:
The ability to capture human motion allows researchers to evaluate an individual’s gait. Gait can be measured in different ways, from camera-based systems to Magnetic and Inertial Measurement Units (MIMU). The former uses cameras to track the positions of photo-reflective markers, while the latter uses accelerometers, gyroscopes, and magnetometers to measure segment orientation. Both systems can be used to measure joint kinematics, but the results vary because of differences in their anatomical calibrations. The objective of this thesis was to study potential solutions for reducing joint angle discrepancies between MIMU and camera-based systems. The first study corrected the anatomical frame differences between MIMU and camera-based systems via the joint angles of both systems, comparing full lower-body correction with correcting a single joint. Single-joint correction showed slightly better alignment of the two systems, but did not account for the fact that body segments are generally affected by more than one joint. The second study explored the possibility of anatomical landmarking using a single camera and a pointer apparatus. Results showed that anatomical landmark positions could be determined with a single camera, as the landmarks identified in this study agreed closely with those from a camera-based system. This thesis provided a novel way of obtaining anatomical landmarks with a single point-and-shoot camera, as well as a method for aligning anatomical frames between MIMUs and camera-based systems using joint angles.
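A minimal sketch of the frame-alignment idea, under the simplifying assumption that the MIMU-versus-camera misalignment for a segment can be modelled as one constant rotation. This is not the thesis's joint-angle-based method; the function names and the chordal-mean estimator are hypothetical.

```python
import numpy as np

def frame_offset(R_cam, R_imu):
    """Estimate a constant rotation C such that R_cam[t] ~= C @ R_imu[t],
    i.e. the anatomical-frame misalignment between the two systems.
    R_cam, R_imu: arrays of shape (T, 3, 3). Uses the chordal
    (SVD-projected) mean over all frames."""
    M = np.einsum('tij,tkj->ik', R_cam, R_imu)   # sum_t R_cam[t] @ R_imu[t].T
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt                            # nearest rotation to M

def corrected(R_imu, C):
    """Re-express MIMU segment orientations in the camera anatomical frame."""
    return np.einsum('ij,tjk->tik', C, R_imu)
```

With the offset applied, joint angles computed from the corrected MIMU orientations should agree more closely with the camera-based ones.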
Abstract:
Clinical optical motion capture allows us to obtain kinematic and kinetic outcome measures that aid clinicians in diagnosing and treating different pathologies affecting healthy gait. The long-term aim for gait centres is subject-specific analyses that can predict, prevent, or reverse the effects of pathologies through gait retraining. To track the body, anatomical segment coordinate systems are commonly created by applying markers to the surface of the skin over specific, manually palpated bony anatomy. The location and placement of these markers are subjective, and precision errors of up to 25 mm have been reported [1]. Additionally, the selection of which anatomical landmarks to use in segment models can result in large angular differences; for example, angular differences in the trunk can reach 53° for the same motion depending on marker placement [2]. These errors can result in erroneous kinematic outcomes that either diminish or exaggerate the apparent effects of a treatment or pathology compared to healthy data. Our goal was to improve the accuracy and precision of optical motion capture outcome measures. This thesis describes two separate studies. In the first study we aimed to establish an approach that would allow us to independently quantify the error among trunk models, and used it to determine whether there was a best model for accurately tracking trunk motion. In the second study we designed a device to improve precision for test-retest protocols that would also reduce the set-up time for motion capture experiments. Our method of comparing a kinematically derived centre-of-mass velocity to a kinetically derived one was successful in quantifying error among trunk models. Our findings indicate that models that use lateral shoulder markers and limit the translational degrees of freedom of the trunk through shared pelvic markers produce the least error for the tasks we studied.
We also successfully reduced intra- and inter-operator anatomical marker placement errors using a marker alignment device. The improved accuracy and precision resulting from the methods established in this thesis may lead to increased sensitivity to changes in kinematics, and ultimately result in more consistent treatment outcomes.
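The comparison of a kinematically derived centre-of-mass velocity with a kinetically derived one can be sketched for the vertical direction, assuming the kinetic estimate comes from integrating the ground reaction force. This is a plausible reading of the approach, not the thesis's exact method; all names and the simple integration scheme are hypothetical.

```python
import numpy as np

def com_velocity_kinematic(com_pos, dt):
    """COM velocity from a motion-capture COM position trace
    (central differences)."""
    return np.gradient(com_pos, dt, axis=0)

def com_velocity_kinetic(grf, mass, dt, v0=0.0, g=9.81):
    """Vertical COM velocity from vertical ground reaction force:
    integrate a = (F - m*g)/m from an assumed initial velocity v0."""
    acc = (grf - mass * g) / mass
    return v0 + np.cumsum(acc) * dt

def rms_diff(v1, v2):
    """Root-mean-squared difference, usable as a model-error metric."""
    return np.sqrt(np.mean((v1 - v2) ** 2))
```

A trunk model whose kinematic COM velocity sits closer to the force-plate-derived one would, under this scheme, score a lower RMS difference.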
Abstract:
Moving through a stable, three-dimensional world is a hallmark of our motor and perceptual experience. This stability is constantly being challenged by movements of the eyes and head, inducing retinal blur and retino-spatial misalignments for which the brain must compensate. To do so, the brain must account for eye and head kinematics to transform two-dimensional retinal input into the reference frame necessary for movement or perception. The four studies in this thesis used both computational and psychophysical approaches to investigate several aspects of this reference frame transformation. In the first study, we examined the neural mechanism underlying the visuomotor transformation for smooth pursuit using a feedforward neural network model. After training, the model performed the general, three-dimensional transformation using gain modulation. This gave mechanistic significance to gain modulation observed in cortical pursuit areas while also providing several testable hypotheses for future electrophysiological work. In the second study, we asked how anticipatory pursuit, which is driven by memorized signals, accounts for eye and head geometry using a novel head-roll updating paradigm. We showed that the velocity memory driving anticipatory smooth pursuit relies on retinal signals, but is updated for the current head orientation. In the third study, we asked how forcing retinal motion to undergo a reference frame transformation influences perceptual decision making. We found that simply rolling one's head impairs perceptual decision making in a way captured by stochastic reference frame transformations. In the final study, we asked how torsional shifts of the retinal projection occurring with almost every eye movement influence orientation perception across saccades. We found a pre-saccadic, predictive remapping consistent with maintaining a purely retinal (but spatially inaccurate) orientation perception throughout the movement. 
Together these studies suggest that, despite their spatial inaccuracy, retinal signals play a surprisingly large role in our seamless visual experience. This work therefore represents a significant advance in our understanding of how the brain performs one of its most fundamental functions.
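The reference frame transformation these studies revolve around can be illustrated, in a deliberately minimal form, as a 2-D rotation of a retinal vector by the head-roll angle, with a stochastic variant in which the roll estimate is noisy (as in the third study's account of impaired decision making). This is an illustrative sketch, not the thesis's computational model; both function names are hypothetical.

```python
import numpy as np

def retinal_to_spatial(v_retinal, head_roll_deg):
    """Rotate a 2-D retinal motion vector into spatial coordinates,
    compensating for head roll about the line of sight."""
    th = np.deg2rad(head_roll_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return R @ v_retinal

def noisy_transform(v_retinal, head_roll_deg, sigma_deg, rng):
    """Stochastic variant: the roll estimate feeding the transformation
    is corrupted by Gaussian noise, degrading downstream decisions."""
    est = head_roll_deg + rng.normal(0.0, sigma_deg)
    return retinal_to_spatial(v_retinal, est)
```

With the head upright the transformation is the identity; rolling the head forces a non-trivial (and, with noise, variable) rotation, which is the intuition behind the decision-making cost reported in the third study.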