772 results for motion tracking
Abstract:
Three-dimensional imaging for the quantification of myocardial motion is a key step in the evaluation of cardiac disease. A tagged magnetic resonance imaging method that automatically tracks myocardial displacement in three dimensions is presented. Unlike other techniques, this method tracks both in-plane and through-plane motion from a single image plane without lengthening the image acquisition. A small z-encoding gradient is added to the refocusing lobe of the slice-selection gradient pulse in a slice-following CSPAMM acquisition, and a z-encoding gradient of opposite polarity is added for the orthogonal tag direction. These additional z-gradients encode the instantaneous through-plane position of the slice. The vertical and horizontal tags resolve in-plane motion, while the added z-gradients resolve through-plane motion. Postprocessing automatically decodes the acquired data and tracks the three-dimensional displacement of every material point within the image plane for each cine frame. Experiments include both a phantom and an in vivo human validation. These studies demonstrate that the simultaneous extraction of both in-plane and through-plane displacements and pathlines from tagged images is achievable. This capability should open new avenues for the automatic quantification of cardiac motion and strain for scientific and clinical purposes.
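A hedged sketch of the phase bookkeeping this implies (the notation and the exact form are mine, not the paper's): writing $u_x, u_y$ for the in-plane displacement components, $z$ for the instantaneous through-plane position, $\omega$ for the tag frequency, and $\kappa$ for the small z-encoding frequency, the harmonic phases of the vertically and horizontally tagged images behave roughly as
$$\varphi_v \approx \omega\,u_x + \kappa\,z, \qquad \varphi_h \approx \omega\,u_y - \kappa\,z.$$
Because the through-plane term enters the two tag directions with opposite sign, combining these phases (together with their complementary CSPAMM counterparts) lets postprocessing separate the in-plane displacements from the through-plane position.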
Abstract:
In this paper we present a new method to track bone movements in stereoscopic X-ray image series of the knee joint. The method is based on two different X-ray image sets: a rotational series of acquisitions of the still subject knee that allows the tomographic reconstruction of the three-dimensional volume (model), and a stereoscopic image series of orthogonal projections as the subject performs movements. Tracking the movements of bones throughout the stereoscopic image series means determining, for each frame, the best pose of every moving element (bone) previously identified in the 3D reconstructed model. The quality of a pose is reflected in the similarity between its simulated projections and the actual radiographs. We use direct Fourier reconstruction to approximate the three-dimensional volume of the knee joint. Then, to avoid the expensive computation of digitally rendered radiographs (DRR) for pose recovery, we reformulate the tracking problem in the Fourier domain. Under the hypothesis of parallel X-ray beams, we use the central-slice-projection theorem to replace the heavy 2D-to-3D registration of projections in the signal domain by efficient slice-to-volume registration in the Fourier domain. Focusing on rotational movements, the translation-relevant phase information can be discarded and we only consider scalar Fourier amplitudes. The core of our motion tracking algorithm can be implemented as a classical frame-wise slice-to-volume registration task. Preliminary results on both synthetic and real images confirm the validity of our approach.
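For context, the theorem relied on here can be stated as follows (notation mine): for a parallel projection $P_{\mathbf n}$ of the volume $f$ along beam direction $\mathbf n$,
$$\mathcal{F}_{2D}\{P_{\mathbf n}\}(\mathbf k) \;=\; \hat f(\mathbf k) \quad \text{for all } \mathbf k \cdot \mathbf n = 0,$$
i.e., the 2D Fourier transform of the radiograph equals the 3D Fourier transform of the volume restricted to the central plane orthogonal to the beam. Pose recovery over rotations then amounts to something like
$$R^{\star} = \arg\min_R \; D\big(\,|\mathcal{F}_{2D}\{P\}|,\; |\hat f|\ \text{sampled on the plane } R\,\Pi_0\,\big),$$
where $\Pi_0$ is a reference central plane and $D$ an image dissimilarity measure (both my placeholders). Discarding the phase makes the comparison insensitive to translation, which is consistent with the restriction to rotational movements.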
Abstract:
Three-dimensional imaging and quantification of myocardial function are essential steps in the evaluation of cardiac disease. We propose a tagged magnetic resonance imaging methodology called zHARP that encodes and automatically tracks myocardial displacement in three dimensions. Unlike other motion encoding techniques, zHARP encodes both in-plane and through-plane motion in a single image plane without affecting the acquisition speed. Postprocessing unravels this encoding in order to directly track the 3-D displacement of every point within the image plane throughout an entire image sequence. Experimental results include a phantom validation experiment, which compares zHARP to phase contrast imaging, and an in vivo study of a normal human volunteer. Results demonstrate that the simultaneous extraction of in-plane and through-plane displacements from tagged images is feasible.
Abstract:
PURPOSE: To implement real-time myocardial strain-encoding (SENC) imaging combined with tracking of tissue displacement in the through-plane direction. MATERIALS AND METHODS: SENC imaging was combined with the slice-following technique by implementing three-dimensional (3D) selective excitation. Adjustments were made to reduce the scan time to one heartbeat. A total of 10 volunteers and five pigs were scanned on a 3T MRI scanner. Spatial modulation of magnetization (SPAMM)-tagged images were acquired on planes orthogonal to the SENC planes for comparison. Myocardial infarction (MI) was induced in two pigs and the resulting SENC images were compared to standard delayed-enhancement (DE) images. RESULTS: The strain values computed from SENC imaging with slice-following differed significantly from those acquired without slice-following, especially during systole (P < 0.01). The strain curves computed from SENC images with and without slice-following were similar to those computed from the orthogonal SPAMM images with and without tracking of the tag-line displacement in the strain direction, respectively. The resulting SENC images showed good agreement with the DE images in identifying MI in the infarcted pigs. CONCLUSION: Correction of through-plane motion in real-time cardiac functional imaging is feasible using slice-following. The resulting strain measurements are more accurate than conventional SENC measurements in humans and animals, as validated against conventional MRI tagging.
Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied; these avoid feature extraction and matching completely. The cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful when the Lambertian assumption holds, i.e., the 3D points have constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated in real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction and in a live augmented reality application.
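The photometric cost described here can be illustrated with a small sketch (all names, the nearest-neighbour sampling, and the synthetic data are mine; the thesis additionally handles lens distortion and iterative minimization, which are omitted):

import numpy as np

def rodrigues(w):
    # Axis-angle vector (3,) -> 3x3 rotation matrix.
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    S = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * S + (1.0 - np.cos(theta)) * (S @ S)

def photometric_cost(pose, pts3d, intens_ref, img_cur, K):
    # Sum of squared intensity residuals between stored reference
    # intensities and the current image sampled at the reprojected points.
    R, t = rodrigues(pose[:3]), pose[3:]
    cam = pts3d @ R.T + t                        # points in the current camera frame
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    h, w = img_cur.shape
    u = np.clip(uv[:, 0], 0, w - 1).astype(int)  # nearest-neighbour lookup
    v = np.clip(uv[:, 1], 0, h - 1).astype(int)
    r = intens_ref - img_cur[v, u]               # raw-pixel (sensor-domain) residuals
    return float(r @ r)

# Toy usage on synthetic data
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 64.0], [0.0, 500.0, 64.0], [0.0, 0.0, 1.0]])
pts = rng.uniform([-0.2, -0.2, 1.0], [0.2, 0.2, 2.0], size=(100, 3))
img = rng.random((128, 128))
print(photometric_cost(np.zeros(6), pts, rng.random(100), img, K))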
Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach
Abstract:
Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings that show reduced neural activation to video vs. real life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual freely moved within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed with either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced in video compared to face-to-face feedback, and in complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, relevant to the imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that for tasks which require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint angle and grip aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regards to previous findings, and with suggestions for future experimentation.
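As a rough illustration of the cross-correlation part of the analysis (synthetic one-dimensional signals standing in for one grip-position coordinate; not the study's data or code):

import numpy as np

def peak_xcorr(a, b):
    # Peak normalized cross-correlation between two 1-D signals, and its lag.
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    cc = np.correlate(a, b, mode="full")
    lag = int(np.argmax(cc)) - (len(b) - 1)
    return float(cc.max()), lag

t = np.linspace(0.0, 10.0, 500)
actor = np.sin(t)                                        # stand-in for the actor's grip trace
rng = np.random.default_rng(1)
imitator = np.sin(t - 0.3) + 0.05 * rng.standard_normal(500)
r, lag = peak_xcorr(actor, imitator)                     # r close to 1; lag reflects the imitator's delay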
Abstract:
The aging population has recently become a pressing issue for modern societies around the world, and two important problems remain to be solved. The first is how to continuously monitor the movements of people who have suffered a stroke in their natural living environment, in order to provide more valuable feedback to guide clinical interventions. The second is how to guide elderly people effectively when they are at home or inside other buildings, making their lives easier and more convenient. Human motion tracking and navigation have therefore become active research fields as the number of elderly people grows. However, motion capture beyond laboratory environments remains extremely challenging, particularly obtaining accurate measurements of physical activity in free-living environments, and navigation in free-living environments also poses problems such as denied GPS signals and the moving objects commonly present in such environments. This thesis seeks to develop new technologies that enable accurate motion tracking and positioning in free-living environments. It comprises three specific goals, using our developed IMU board and a camera from The Imaging Source: (1) to develop a robust, real-time orientation algorithm using only IMU measurements; (2) to develop robust distance estimation in static free-living environments in order to estimate people's position and navigate them, while solving the scale-ambiguity problem that usually appears in monocular camera tracking by integrating data from the visual and inertial sensors; and (3) when moving objects viewed by the camera are present in free-living environments, to first design a robust scene-segmentation algorithm and then estimate the motion of the vIMU system and of the moving objects, respectively. To achieve real-time orientation tracking, an Adaptive-Gain Orientation Filter (AGOF) is proposed, based on a deterministic approach and a frequency-based approach, using only measurements from the newly developed MARG (Magnet, Angular Rate, and Gravity) sensors. To further obtain robust positioning, an adaptive frame-rate vision-aided IMU (vIMU) system is proposed to develop and implement fast ego-motion estimation algorithms, in which the orientation is first estimated in real time from the MARG sensors and then used to estimate the position from the visual and inertial data. For the case of moving objects viewed by the camera, a robust scene-segmentation algorithm is proposed to obtain the position estimate and, simultaneously, the 3D motion of the moving objects. Corresponding simulations and experiments have been carried out.
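The abstract does not detail the AGOF itself, so the sketch below shows only the general idea of fusing gyroscope integration with accelerometer/magnetometer references, using a fixed-gain complementary filter rather than the adaptive-gain design (all names, simplifications, and values are mine):

import numpy as np

def complementary_filter(gyro, acc, mag, dt, alpha=0.98):
    # Fuse gyro integration (good short-term) with accel/mag references (good long-term).
    # gyro, acc, mag: (N, 3) arrays; returns (N, 3) roll, pitch, yaw in radians.
    # For brevity, body rates are treated as Euler-angle rates and the
    # magnetometer heading is not tilt-compensated.
    rpy = np.zeros((len(gyro), 3))
    for k in range(1, len(gyro)):
        pred = rpy[k - 1] + gyro[k] * dt                  # propagate with the gyroscope
        ax, ay, az = acc[k]
        roll_acc = np.arctan2(ay, az)                     # gravity gives roll and pitch
        pitch_acc = np.arctan2(-ax, np.hypot(ay, az))
        mx, my, _ = mag[k]
        yaw_mag = np.arctan2(-my, mx)                     # magnetometer gives a heading
        meas = np.array([roll_acc, pitch_acc, yaw_mag])
        rpy[k] = alpha * pred + (1.0 - alpha) * meas      # fixed-gain blend
    return rpy

# Toy usage: a stationary sensor with gravity along +z and a horizontal field component
n = 200
angles = complementary_filter(np.zeros((n, 3)),
                              np.tile([0.0, 0.0, 9.81], (n, 1)),
                              np.tile([0.2, 0.0, 0.4], (n, 1)),
                              dt=0.01)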
Abstract:
Methods for optical motion capture often require time-consuming manual processing before the data can be used for subsequent tasks such as retargeting or character animation. These processing steps restrict the applicability of motion capture, especially for dynamic VR environments with real-time requirements. To solve these problems, we present two additional fast and automatic processing stages based on our motion capture pipeline presented in [HSK05]. A normalization step aligns the recorded coordinate systems with the skeleton structure to yield a common and intuitive data basis across different recording sessions. A second step computes a parameterization based on automatically extracted main movement axes to generate a compact motion description. Our method does not restrict the placement of marker bodies or the recording setup, and only requires a short calibration phase.
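One common way to extract such main movement axes is a PCA/SVD of the centered trajectories; the sketch below is a generic version of that idea, not necessarily the parameterization used in the paper:

import numpy as np

def main_movement_axes(trajectory, n_axes=2):
    # trajectory: (T, D) array of stacked marker coordinates over T frames.
    # Returns the n_axes principal movement axes, the compact per-frame
    # parameters along those axes, and the mean pose.
    mean = trajectory.mean(axis=0)
    centered = trajectory - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # rows of vt = axes
    axes = vt[:n_axes]
    params = centered @ axes.T                                # (T, n_axes) compact description
    return axes, params, mean

# Toy usage: 3 markers (9 stacked coordinates) drifting over 200 frames
rng = np.random.default_rng(2)
walk = np.cumsum(0.01 * rng.standard_normal((200, 9)), axis=0)
axes, params, mean = main_movement_axes(walk)
reconstruction = params @ axes + mean                         # compact parameters -> full pose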
Abstract:
Upper limb function impairment is one of the most common sequelae of central nervous system injury, especially in stroke patients and when spinal cord injury produces tetraplegia. Conventional assessment methods cannot provide an objective evaluation of patient performance or of the effectiveness of therapies. The most common assessment tools are based on rating scales, which are inefficient when measuring small changes and can yield subjective bias. In this study, we designed an inertial sensor-based monitoring system composed of five sensors to measure and analyze the complex movements of the upper limbs that are common in activities of daily living. We developed a kinematic model with nine degrees of freedom to analyze upper limb and head movements in three dimensions. This system was then validated against a commercial optoelectronic system. These findings suggest that an inertial sensor-based motion tracking system can be used in patients with upper limb impairment through data integration with a virtual reality-based neurorehabilitation system.
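To illustrate how orientations from body-worn inertial sensors feed a kinematic model, here is a deliberately simplified two-segment arm chain (the segment axis, lengths, and names are assumptions for illustration, not the nine-degree-of-freedom model of the study):

import numpy as np

def elbow_wrist(shoulder, R_upper, R_fore, l_upper=0.30, l_fore=0.25):
    # Chain segment orientations (3x3 sensor-to-global rotation matrices)
    # from the shoulder position to get elbow and wrist positions.
    axis = np.array([0.0, 0.0, -1.0])            # segment axis in its local frame
    elbow = shoulder + R_upper @ (l_upper * axis)
    wrist = elbow + R_fore @ (l_fore * axis)
    return elbow, wrist

# Toy usage: identity orientations = arm hanging straight down from the shoulder
elbow, wrist = elbow_wrist(np.array([0.0, 0.0, 1.4]), np.eye(3), np.eye(3))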
Abstract:
To gain a better understanding of fluid-structure interaction, especially when dealing with flow around an arbitrarily moving body, it is essential to develop measurement tools enabling the instantaneous detection of a moving deformable interface during flow measurements. A particularly useful application is the determination of the unsteady turbulent flow velocity field around a moving porous fishing net structure, which is of great interest for selectivity and for numerical code validation, which requires a realistic database. To this end, a representative piece of fishing net structure is used to investigate both the Turbulent Boundary Layer (TBL) developing over the horizontal porous moving fishing net structure and the turbulent flow passing through the moving porous structure. For this investigation, Time Resolved PIV measurements are carried out and combined with a motion tracking technique that measures the instantaneous motion of the deformable fishing net during the PIV measurements. Once the two-dimensional motion of the porous structure is obtained, the PIV velocity measurements are analyzed in connection with the detected motion. Finally, the TBL is characterized and the effect of the structure motion on the volumetric flow rate passing through the moving porous structure is clearly demonstrated.
Abstract:
The spine is a complex structure that provides motion in three directions: flexion and extension, lateral bending, and axial rotation. So far, investigation of the mechanical and kinematic behavior of the basic unit of the spine, the motion segment, has predominantly been the domain of in vitro experiments on spinal loading simulators. Most existing approaches to measuring spinal stiffness intraoperatively in an in vivo environment use a distractor; however, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurements to determine both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. As the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations could be determined. Validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean stiffness values computed with the proposed concept were within ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.
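The reported stiffness is essentially the slope of the moment-angle relation; a minimal least-squares sketch with synthetic numbers (the values are illustrative only, not the ovine data):

import numpy as np

rng = np.random.default_rng(3)
angle_deg = np.linspace(0.0, 6.0, 25)                     # lateral bending angle
moment_nm = 0.8 * angle_deg + rng.normal(0.0, 0.05, 25)   # synthetic moment, kept below 5 Nm
stiffness, offset = np.polyfit(angle_deg, moment_nm, 1)   # slope = bending stiffness (Nm/deg)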
Abstract:
Study Design. A comparative study of cervical range of motion in asymptomatic persons and those with whiplash. Objectives. To compare the primary and conjunct ranges of motion of the cervical spine in asymptomatic persons and those with persistent whiplash-associated disorders, and to investigate the ability of these measures of range of motion to discriminate between the groups. Summary of Background. Evidence that range of motion is an effective indicator of physical impairment in the cervical spine is not conclusive. Few studies have evaluated the ability to discriminate between asymptomatic persons and those with whiplash on the basis of range of motion or compared three-dimensional in vivo measures of range of motion in asymptomatic persons and those with whiplash-associated disorders. Methods. The study participants were 89 asymptomatic volunteers (41 men, 48 women; mean age 39.2 years) and 114 patients with persistent whiplash-associated disorders (22 men, 93 women; mean age 37.2 years) referred to a whiplash research unit for assessment of their cervical region. Range of cervical motion was measured in three dimensions with a computerized, electromagnetic, motion-tracking device. The movements assessed were flexion, extension, left and right lateral flexion, and left and right rotation. Results. Range of motion was reduced in all primary movements in patients with persistent whiplash-associated disorder. Sagittal plane movements were proportionally the most affected. On the basis of primary and conjunct range of motion, age, and gender, 90.3% of study participants could be correctly categorized as asymptomatic or as having whiplash (sensitivity 86.2%, specificity 95.3%). Conclusions. Range of motion was capable of discriminating between asymptomatic persons and those with persistent whiplash-associated disorders.
Abstract:
Hand and finger tracking is of major importance in healthcare, for rehabilitation of hand function impaired by a neurological disorder, and in virtual environment applications such as character animation for online games or movies. Current solutions consist mostly of motion tracking gloves with embedded resistive bend sensors, which often suffer from signal drift, sensor saturation, sensor displacement, and complex calibration procedures. More advanced solutions provide better tracking stability, but at a higher cost. The proposed solution aims to provide the required precision, stability, and feasibility through the combination of eleven inertial measurement units (IMUs). Each unit captures the spatial orientation of the body segment to which it is attached. To fully capture hand movement, each finger carries two units (at the proximal and distal phalanges), plus one unit at the back of the hand. The proposed glove was validated in two distinct steps: a) evaluation of the sensors' accuracy and stability over time; b) evaluation of the bending trajectories during usual finger flexion tasks based on the intra-class correlation coefficient (ICC). Results revealed that the glove was sensitive mainly to magnetic field distortions and sensor tuning. The inclusion of a hard- and soft-iron correction algorithm and of accelerometer and gyroscope drift and temperature compensation methods provided increased stability and precision. The finger trajectory evaluation yielded high ICC values, with overall reliability within satisfactory limits for the application. The developed low-cost system provides straightforward calibration and usability, qualifying the device for hand and finger tracking in the healthcare and animation industries.
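The abstract does not state which ICC form was used; as a generic illustration, the sketch below computes the common two-way consistency ICC(3,1) from a subjects-by-methods matrix of synthetic angles:

import numpy as np

def icc_3_1(data):
    # Two-way mixed-effects, consistency, single-measure ICC(3,1).
    # data: (n_subjects, k_methods), e.g., joint angles from glove vs. reference.
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Toy usage: two methods measuring the same 30 flexion angles with small noise
rng = np.random.default_rng(4)
true_angle = rng.uniform(0.0, 90.0, 30)
glove = true_angle + rng.normal(0.0, 2.0, 30)
reference = true_angle + rng.normal(0.0, 2.0, 30)
print(icc_3_1(np.column_stack([glove, reference])))       # close to 1 for good agreement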
Abstract:
Dissertation submitted to obtain the Master's Degree in Biomedical Engineering
Abstract:
Relaxation rates provide important information about tissue microstructure. Multi-parameter mapping (MPM) estimates multiple relaxation parameters from multi-echo FLASH acquisitions with different basic contrasts, i.e., proton density (PD), T1, or magnetization transfer (MT) weighting. Motion can particularly affect maps of the apparent transverse relaxation rate R2*, which are derived from the signal of PD-weighted images acquired at different echo times. To address these motion artifacts, we introduce ESTATICS, which robustly estimates R2* even from images acquired with different basic contrasts. ESTATICS extends the fitted signal model to account for the inherent contrast differences between the PDw, T1w, and MTw images. The fit was implemented as a conventional ordinary least squares optimization and as a robust fit with a small or large confidence interval. These three implementations of ESTATICS were tested on data affected by severe motion artifacts and on data with no prominent motion artifacts, as determined by visual assessment or fast optical motion tracking. ESTATICS improved the quality of the R2* maps and reduced the coefficient of variation for both types of data, with average reductions of 30% when severe motion artifacts were present. ESTATICS can be applied to any protocol comprising multiple 2D/3D multi-echo FLASH acquisitions, as used in general research and clinical settings.
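The core idea, a single R2* shared across contrasts with one log-intercept per contrast, can be sketched as an ordinary least-squares fit in the log-signal domain (the robust variants with small or large confidence intervals are omitted; names and values are illustrative):

import numpy as np

def estatics_style_ols(signals, echo_times, contrast_ids, n_contrasts):
    # Joint log-linear fit: one log-intercept per contrast, one shared R2*.
    # signals: (N,) magnitudes; echo_times: (N,) in seconds; contrast_ids: (N,) ints.
    N = len(signals)
    X = np.zeros((N, n_contrasts + 1))
    X[np.arange(N), contrast_ids] = 1.0             # per-contrast intercept columns
    X[:, -1] = -echo_times                          # shared decay column
    beta, *_ = np.linalg.lstsq(X, np.log(signals), rcond=None)
    return beta[:-1], beta[-1]                      # log-intercepts, R2* in 1/s

# Synthetic PDw/T1w/MTw echo trains decaying with the same R2* of 30 1/s
te = np.tile(np.arange(1, 7) * 2.3e-3, 3)
cid = np.repeat([0, 1, 2], 6)
s0 = np.array([1000.0, 700.0, 600.0])[cid]
log_intercepts, r2s = estatics_style_ols(s0 * np.exp(-30.0 * te), te, cid, 3)   # r2s ~ 30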