2 results for Motion compensated frame interpolation
in QSpace: Queen's University - Canada
Abstract:
The ability to capture human motion allows researchers to evaluate an individual’s gait. Gait can be measured in different ways, from camera-based systems to Magnetic and Inertial Measurement Units (MIMU). The former uses cameras to track the positions of photo-reflective markers, while the latter uses accelerometers, gyroscopes, and magnetometers to measure segment orientation. Both systems can be used to measure joint kinematics, but the results differ because of differences in their anatomical calibrations. The objective of this thesis was to study potential solutions for reducing joint angle discrepancies between MIMU and camera-based systems. The first study aimed to correct the anatomical frame differences between MIMU and camera-based systems via the joint angles of both systems, comparing full lower body correction with correction of a single joint. Single joint correction showed slightly better alignment between the two systems, but did not account for the fact that body segments are generally affected by more than one joint. The second study explored the possibility of anatomical landmarking using a single camera and a pointer apparatus. Results showed that anatomical landmark positions could be determined with a single camera, since the landmarks identified in this study agreed closely with those from a camera-based system. This thesis provides a novel method for obtaining anatomical landmarks with a single point-and-shoot camera, as well as a method for aligning anatomical frames between MIMUs and camera-based systems using joint angles.
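To make the comparison of joint kinematics concrete, the sketch below is a minimal illustration (not taken from the thesis; all names, conventions, and angle values are hypothetical) of how a joint angle can be derived from the orientations of two adjacent segments, and how a fixed correction rotation, estimated from the discrepancy between the two systems, might be applied to re-align their anatomical frames.

```python
import numpy as np

def rot_x(angle_rad):
    """Rotation matrix about the x-axis (illustrative axis convention)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def joint_rotation(R_proximal, R_distal):
    """Orientation of the distal segment expressed in the proximal
    segment's anatomical frame (relative rotation)."""
    return R_proximal.T @ R_distal

# Hypothetical global-frame segment orientations from a MIMU system.
R_thigh_mimu = rot_x(np.deg2rad(5.0))
R_shank_mimu = rot_x(np.deg2rad(45.0))

# Hypothetical orientations of the same segments from a camera-based system.
R_thigh_cam = rot_x(np.deg2rad(2.0))
R_shank_cam = rot_x(np.deg2rad(45.0))

R_knee_mimu = joint_rotation(R_thigh_mimu, R_shank_mimu)
R_knee_cam  = joint_rotation(R_thigh_cam,  R_shank_cam)

# One simple way to express a single-joint anatomical-frame correction:
# the fixed rotation that maps the MIMU joint orientation onto the
# camera-based one, later reapplied to all MIMU frames of that joint.
R_correction = R_knee_cam @ R_knee_mimu.T
R_knee_corrected = R_correction @ R_knee_mimu

print(np.allclose(R_knee_corrected, R_knee_cam))  # True
```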
Abstract:
Moving through a stable, three-dimensional world is a hallmark of our motor and perceptual experience. This stability is constantly being challenged by movements of the eyes and head, inducing retinal blur and retino-spatial misalignments for which the brain must compensate. To do so, the brain must account for eye and head kinematics to transform two-dimensional retinal input into the reference frame necessary for movement or perception. The four studies in this thesis used both computational and psychophysical approaches to investigate several aspects of this reference frame transformation. In the first study, we examined the neural mechanism underlying the visuomotor transformation for smooth pursuit using a feedforward neural network model. After training, the model performed the general, three-dimensional transformation using gain modulation. This gave mechanistic significance to gain modulation observed in cortical pursuit areas while also providing several testable hypotheses for future electrophysiological work. In the second study, we asked how anticipatory pursuit, which is driven by memorized signals, accounts for eye and head geometry using a novel head-roll updating paradigm. We showed that the velocity memory driving anticipatory smooth pursuit relies on retinal signals, but is updated for the current head orientation. In the third study, we asked how forcing retinal motion to undergo a reference frame transformation influences perceptual decision making. We found that simply rolling one's head impairs perceptual decision making in a way captured by stochastic reference frame transformations. In the final study, we asked how torsional shifts of the retinal projection occurring with almost every eye movement influence orientation perception across saccades. We found a pre-saccadic, predictive remapping consistent with maintaining a purely retinal (but spatially inaccurate) orientation perception throughout the movement. Together these studies suggest that, despite their spatial inaccuracy, retinal signals play a surprisingly large role in our seamless visual experience. This work therefore represents a significant advance in our understanding of how the brain performs one of its most fundamental functions.
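As an illustration of the kind of reference frame transformation described above, the snippet below is a deliberately simplified sketch (not part of the thesis; the 2-D rotation convention and parameter names are assumptions) showing why retinal motion must be rotated by the current eye torsion and head roll before it can be interpreted in spatial coordinates.

```python
import numpy as np

def roll_matrix(angle_rad):
    """2-D rotation about the line of sight, acting on the horizontal and
    vertical components of retinal motion (torsion / head roll)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s],
                     [s,  c]])

def retinal_to_spatial(v_retinal, eye_torsion_rad, head_roll_rad):
    """Transform a 2-D retinal velocity into approximate spatial
    coordinates by compensating for eye torsion and head roll.

    A small-angle, planar simplification; the thesis treats the full
    three-dimensional transformation."""
    return roll_matrix(eye_torsion_rad) @ roll_matrix(head_roll_rad) @ v_retinal

# A purely horizontal motion on the retina...
v_retinal = np.array([1.0, 0.0])

# ...corresponds to a tilted motion in space when the head is rolled by
# 30 degrees, unless the brain compensates for the head orientation.
v_spatial = retinal_to_spatial(v_retinal,
                               eye_torsion_rad=0.0,
                               head_roll_rad=np.deg2rad(30.0))
print(v_spatial)  # approximately [0.866, 0.5]
```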