18 results for 3D motion capture
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Robotic exoskeletons can be used to study and treat patients with neurological impairments. They can guide and support the human limb over a large range of motion, which requires that the movement trajectory of the exoskeleton coincides with that of the human arm. This is straightforward to achieve for relatively simple joints such as the elbow, but very challenging for complex joints such as the human shoulder, which comprises several bones and can move with multiple rotational and translational degrees of freedom. Several research groups have therefore developed different shoulder actuation mechanisms. However, there are no experimental studies that directly compare the comfort of two different shoulder actuation mechanisms. In this study, the comfort and naturalness of the new shoulder actuation mechanism of the ARMin III exoskeleton are compared to a ball-and-socket-type shoulder actuation. The study was conducted with 20 healthy subjects, using questionnaires and 3D motion recordings to assess comfort and naturalness. The results indicate that the new shoulder actuation is slightly better than a ball-and-socket-type actuation. However, the differences are small, and under the tested conditions the comfort and naturalness of the two tested shoulder actuations do not differ substantially.
Abstract:
The unique characteristics of special populations such as pre-school children and children with Down syndrome in crisis situations, and their distorted self-image, have never been studied before because of the difficulty of reproducing a crisis. This study proposes a VR setting that attempts to model the behaviour of some special populations in times of crisis and offers them a training scenario. The sample population consisted of 30 pre-school children and 20 children with Down syndrome. The VR setting involved a high-speed PC, a VPL EyePhone 1, an MR toolkit, a vibration plate, a motion capture system and other sensors. The system measured and modelled the typical behaviour of these special populations in a Virtual Earthquake scenario with sight and sound, and computed a VR anthropomorphic model that reproduced their behaviour and emotional state. Afterwards, one group received an emotionally enhanced VR self-image as feedback for their training, one group received a plain VR self-image, and another group received verbal instructions. The findings strongly suggest that training was influenced far more by the emotionally enhanced VR self-image than by the other approaches. These findings could highlight the special role of the self-image in therapy and training and the interesting role of imagination in emotions, motives and learning. Further studies could be conducted with various scenarios in order to determine which behaviour is influenced most effectively and to establish the most natural and affective VR model. This presentation will highlight the main findings and some of the theories behind them.
Abstract:
Real cameras have a limited depth of field. The resulting defocus blur is a valuable cue for estimating the depth structure of a scene. Using coded apertures, depth can be estimated from a single frame. For optical flow estimation between frames, however, the depth-dependent degradation can introduce errors. These errors are most prominent when objects move relative to the focal plane of the camera. We incorporate coded aperture defocus blur into optical flow estimation and allow for piecewise smooth 3D motion of objects. With coded aperture flow, we can establish dense correspondences between pixels in successive coded aperture frames. We compare several approaches to computing accurate correspondences for coded aperture images showing objects with arbitrary 3D motion.
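As a minimal illustrative sketch (not the authors' method), the idea of depth-aware matching between coded aperture frames can be mocked up as follows: each candidate depth layer gets its own blur kernel (a Gaussian stand-in for the scaled coded-aperture PSF), and a per-pixel matching cost is evaluated over candidate displacements. The frame data, blur sigmas and displacement range are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def defocus_aware_cost_volume(frame0, frame1, displacements, blur_sigmas):
    """Build a matching cost volume between two grayscale frames.

    For every candidate 2D displacement and every candidate depth layer
    (encoded here by a Gaussian blur sigma standing in for the scaled
    coded-aperture PSF), frame1 is blurred, shifted and compared to frame0
    with a per-pixel sum-of-squared-differences cost.
    """
    h, w = frame0.shape
    costs = np.empty((len(blur_sigmas), len(displacements), h, w))
    for k, sigma in enumerate(blur_sigmas):
        # Approximate the depth-dependent defocus of frame1 at this layer.
        blurred = gaussian_filter(frame1, sigma) if sigma > 0 else frame1
        for d, (dy, dx) in enumerate(displacements):
            warped = nd_shift(blurred, (dy, dx), order=1, mode="nearest")
            costs[k, d] = (frame0 - warped) ** 2
    return costs

if __name__ == "__main__":
    # Synthetic example: a frame and a horizontally shifted copy of it.
    rng = np.random.default_rng(0)
    f0 = rng.random((64, 64))
    f1 = np.roll(f0, 2, axis=1)
    disps = [(0, dx) for dx in range(-3, 4)]
    costs = defocus_aware_cost_volume(f0, f1, disps, blur_sigmas=[0.0, 1.0, 2.0])
    best = costs.min(axis=0).argmin(axis=0)   # per-pixel index into disps
    print("most frequent displacement index:", np.bincount(best.ravel()).argmax())
```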
Abstract:
The spine is a complex structure that provides motion in three directions: flexion and extension, lateral bending and axial rotation. So far, the investigation of the mechanical and kinematic behavior of the basic unit of the spine, a motion segment, has predominantly been the domain of in vitro experiments on spinal loading simulators. Most existing approaches to measuring spinal stiffness intraoperatively in an in vivo environment use a distractor. However, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurements to determine both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. As the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations can be determined. The validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean stiffness values computed with the proposed concept were within ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.
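A minimal sketch, assuming the instrumented distractor reports a 3D force vector and its application point and the optoelectronic system reports the segment rotation angle, of how a moment about the lateral-bending axis and a linear moment-angle stiffness could be computed. The function names and the measurement series are hypothetical.

```python
import numpy as np

def bending_moment(force_vec, application_point, joint_center, bending_axis):
    """Moment of the applied force about a given bending axis (Nm).

    force_vec, application_point, joint_center: 3-vectors in a common frame.
    bending_axis: unit vector of the lateral-bending axis.
    """
    lever = application_point - joint_center
    moment_vec = np.cross(lever, force_vec)
    return float(np.dot(moment_vec, bending_axis))

def linear_stiffness(moments_nm, angles_rad):
    """Least-squares slope of the moment-angle relation (Nm/rad)."""
    slope, _intercept = np.polyfit(angles_rad, moments_nm, 1)
    return slope

# Hypothetical measurement series for one motion segment.
angles = np.deg2rad(np.array([0.5, 1.0, 1.5, 2.0, 2.5]))
moments = np.array([0.4, 0.9, 1.3, 1.8, 2.2])   # Nm
print(f"lateral bending stiffness: {linear_stiffness(moments, angles):.1f} Nm/rad")
```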
Abstract:
This paper presents a kernel density correlation based non-rigid point set matching method and shows its application in statistical model based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field such that the floating point set is moved to the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for robust point set matching. Incorporating this non-rigid point set matching method into a statistical model based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the X-ray radiograph by an edge detector. Our experiment, conducted on datasets of two patients and six cadavers, demonstrates a mean reconstruction error of 1.9 mm.
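A minimal sketch of the kernel-correlation idea under Gaussian kernels: the correlation between the two kernel density estimates is maximized over per-point displacements, with a simple quadratic deformation-energy penalty standing in for the paper's regularization terms. This is not the authors' exact formulation; sigma, lam and the toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def kernel_correlation(ref_pts, flo_pts, sigma):
    """Correlation of two Gaussian kernel density estimates (up to a constant):
    the sum of Gaussian affinities over all reference/floating point pairs."""
    d2 = ((ref_pts[:, None, :] - flo_pts[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum()

def objective(u_flat, ref_pts, flo_pts, sigma, lam):
    """Negative kernel correlation plus a quadratic deformation-energy penalty."""
    u = u_flat.reshape(flo_pts.shape)
    moved = flo_pts + u
    return -kernel_correlation(ref_pts, moved, sigma) + lam * np.sum(u ** 2)

def match_point_sets(ref_pts, flo_pts, sigma=1.0, lam=1e-2):
    res = minimize(objective, np.zeros(flo_pts.size),
                   args=(ref_pts, flo_pts, sigma, lam), method="L-BFGS-B")
    return flo_pts + res.x.reshape(flo_pts.shape)

# Toy example: recover a small rigid shift of a 2D point set.
rng = np.random.default_rng(1)
ref = rng.random((50, 2)) * 10
flo = ref + np.array([0.5, -0.3])          # floating set = shifted reference
moved = match_point_sets(ref, flo)
print("mean residual after matching:", np.abs(moved - ref).mean())
```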
Abstract:
Microfluidic technology has been successfully applied to isolate very rare tumor-derived epithelial cells (circulating tumor cells, CTCs) from blood with relatively high yield and purity, opening up exciting prospects for the early detection of cancer. However, a major limitation of state-of-the-art CTC-chips is their inability to characterize the behavior and function of captured CTCs, for example to obtain information on proliferative and invasive properties or, ultimately, tumor re-initiating potential. Although CTCs can be efficiently immunostained with markers reporting phenotype or fate (e.g. apoptosis, proliferation), it has not yet been possible to reliably grow captured CTCs over long periods of time and at the single-cell level. It is challenging to remove CTCs from a microchip after capture; therefore, such analyses should ideally be performed directly on-chip. To address this challenge, we merged CTC capture with three-dimensional (3D) tumor cell culture on the same microfluidic platform. PC3 prostate cancer cells were isolated from spiked blood on a transparent PDMS CTC-chip, encapsulated on-chip in a biomimetic hydrogel matrix (QGel™) that was formed in situ, and their clonal 3D spheroid growth potential was assessed by microscopy over one week in culture. The possibility of clonally expanding a subset of captured CTCs in a near-physiological in vitro model adds an important element to the expanding CTC-chip toolbox that should ultimately improve the prediction of treatment responses and disease progression.
Abstract:
The three-dimensional documentation of footwear and tyre impressions in snow offers an opportunity to capture finer detail for identification than present photographs provide. Up to now, different casting methods have been used for this purpose, and casting footwear impressions in snow has always been a difficult task. This work demonstrates that the non-destructive method of 3D optical surface scanning is suitable for the three-dimensional documentation of impressions in snow. The new method delivers more detailed results of higher accuracy than conventional casting techniques. The results of this easy-to-use, mobile 3D optical surface scanner were very satisfactory under different meteorological and snow conditions. The method is also suitable for impressions in soil, sand or other materials. In addition to the side-by-side comparison, the automatic comparison of the 3D models and the computation of deviations and data accuracy simplify the examination and deliver objective and reliable results. The results can be visualized efficiently. Data exchange between investigating authorities at a national or international level can be achieved easily with electronic data carriers.
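A minimal sketch of how the automatic comparison of two 3D surface scans could be carried out once both are available as point clouds: per-point deviations are taken as nearest-neighbour distances to the reference scan, then summarized. The point clouds and noise level below are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviations(scan_a, scan_b):
    """Per-point deviation of scan_a from scan_b (nearest-neighbour distance,
    in the same units as the input coordinates, e.g. millimetres)."""
    tree = cKDTree(scan_b)
    distances, _ = tree.query(scan_a)
    return distances

# Toy example: a flat patch versus a slightly noisy copy of it.
rng = np.random.default_rng(2)
xy = rng.random((1000, 2)) * 100.0
scan_ref = np.column_stack([xy, np.zeros(len(xy))])                # reference impression
scan_new = scan_ref + rng.normal(scale=0.05, size=scan_ref.shape)  # questioned impression
dev = surface_deviations(scan_new, scan_ref)
print(f"mean deviation {dev.mean():.3f}, 95th percentile {np.percentile(dev, 95):.3f}")
```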
Abstract:
The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four-dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of the tumor position and the multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity-modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV to provide planned dose distributions. Respiratory motion data were obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between the predicted and actual positions at each diaphragm position. Distributions of the geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with the distributions of the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose-difference and distance-to-agreement analyses were employed to quantify the results. Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy than for 4D radiation therapy, (c) was relatively insensitive to changes in beam energy and beam direction, (d) was greater for IMRT distributions than for conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiratory motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that depend on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively short response times of 0.6 s into the future, dosimetric errors due to prediction could approach the delivery errors incurred when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
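A minimal sketch of the convolution step described above, reduced to one dimension: a planned dose profile along the motion direction is convolved with a normalized histogram of geometric prediction errors. The dose profile, grid spacing and error distribution below are hypothetical.

```python
import numpy as np

def convolve_dose_with_error(dose_profile, errors_mm, bin_mm):
    """Blur a 1D planned dose profile with the distribution of geometric
    prediction errors (errors histogrammed on the same spatial grid)."""
    half_range = np.ceil(np.abs(errors_mm).max() / bin_mm) * bin_mm
    edges = np.arange(-half_range - bin_mm / 2, half_range + bin_mm, bin_mm)
    hist, _ = np.histogram(errors_mm, bins=edges)
    kernel = hist / hist.sum()                      # normalized error distribution
    return np.convolve(dose_profile, kernel, mode="same")

# Hypothetical 1D dose profile on a 2 mm grid and prediction errors in mm.
grid_mm = 2.0
positions = np.arange(-50, 51) * grid_mm
dose = np.where(np.abs(positions) <= 40, 1.0, 0.0)  # flat 8 cm field
errors = np.random.default_rng(3).normal(loc=0.0, scale=3.0, size=5000)
blurred = convolve_dose_with_error(dose, errors, grid_mm)
center = len(dose) // 2
edge = center + int(40 / grid_mm)                   # planned field edge
print(f"dose at field centre: {blurred[center]:.3f}, at field edge: {blurred[edge]:.3f}")
```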
Abstract:
This paper describes a method for digitally reconstructed radiograph (DRR) generation as well as for volume gradient projection using hardware-accelerated 2D texture mapping and accumulation buffering, and demonstrates its application in 2D-3D registration of X-ray fluoroscopy to CT images. The robustness of the present registration scheme is ensured by taking advantage of coarse-to-fine processing of the volume/image pyramids based on cubic B-splines. A human cadaveric spine specimen together with its ground truth was used to compare the present scheme with a purely software-based scheme in three respects: accuracy, speed, and capture range. Our experiments revealed equivalent accuracy and capture ranges but a much shorter registration time for the present scheme. More specifically, the results showed a 0.8 mm average target registration error, a 55-second average execution time per registration, and 10 mm and 10° capture ranges for the present scheme when tested on a 3.0 GHz Pentium 4 computer.
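A minimal CPU-side sketch of DRR generation, not the hardware-accelerated texture-mapping and accumulation-buffering pipeline of the paper, and for a parallel-beam geometry rather than the fluoroscopic perspective geometry: the CT volume is rotated to the viewing angle and attenuation is summed along the projection axis. The volume is synthetic.

```python
import numpy as np
from scipy.ndimage import rotate

def simple_drr(ct_volume, view_angle_deg):
    """Parallel-beam DRR approximation: rotate the volume about one axis to the
    desired viewing angle, then integrate (sum) attenuation along one axis."""
    rotated = rotate(ct_volume, view_angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=1)   # accumulate slices along the projection direction

# Hypothetical CT volume: a bright cube inside an otherwise empty volume.
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[20:44, 20:44, 20:44] = 1.0
drr = simple_drr(vol, view_angle_deg=30.0)
print("DRR shape:", drr.shape, "max line integral:", float(drr.max()))
```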
Abstract:
The acquisition of conventional X-ray radiographs remains the standard imaging procedure for the diagnosis of hip-related problems. However, recent studies have demonstrated the benefit of using three-dimensional (3D) surface models in clinical routine. 3D surface models of the hip joint are useful for assessing the dynamic range of motion in order to identify possible pathologies such as femoroacetabular impingement. In this paper, we present an integrated system that consists of X-ray radiograph calibration and subsequent 2D/3D hip joint reconstruction for the diagnosis and planning of hip-related problems. A mobile phantom with two different sizes of fiducials, which can be robustly detected within the images, was developed for X-ray radiograph calibration. On the basis of the calibrated X-ray images, a 3D reconstruction method for the acetabulum was developed and applied together with existing techniques to reconstruct a 3D surface model of the hip joint. X-ray radiographs of dry cadaveric hip bones and one cadaveric specimen with soft tissue were used to demonstrate the robustness of the developed fiducial detection algorithm. Computed tomography scans of the cadaveric bones were used to validate the accuracy of the integrated system. The fiducial detection sensitivity was in the same range for both fiducial sizes: 97.96% for the large fiducials and 97.62% for the small fiducials. The acetabulum and the proximal femur were reconstructed with mean surface distance errors of 1.06 and 1.01 mm, respectively. The results for fiducial detection sensitivity and 3D surface reconstruction demonstrate the capability of the integrated system for 3D hip joint reconstruction from calibrated 2D X-ray radiographs.
Abstract:
We describe a user-assisted technique for 3D stereo conversion from 2D images. Our approach exploits the geometric structure of perspective images, including vanishing points. We allow a user to indicate lines, planes, and vanishing points in the input image, and directly employ these as constraints in an image warping framework to produce a stereo pair. By sidestepping the explicit construction of a depth map, our approach is applicable to more general scenes and avoids potential artifacts of depth-image-based rendering. Our method is most suitable for scenes with large-scale structures such as buildings.
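A much-simplified sketch of producing a stereo pair by horizontal warping; it does not reproduce the paper's constraint-based warping driven by user-indicated lines, planes and vanishing points. Instead, each pixel is shifted by a disparity taken from a user-supplied region label map; the image, labels and disparities below are hypothetical.

```python
import numpy as np

def synthesize_right_view(left_img, disparity_px):
    """Forward-warp each pixel of the left view horizontally by its disparity
    (in pixels) to synthesize a right view. Collisions keep the last write;
    disoccluded pixels remain zero (holes) in this simplified sketch."""
    h, w = left_img.shape[:2]
    right = np.zeros_like(left_img)
    cols = np.arange(w)
    for y in range(h):
        target = np.clip(cols + np.round(disparity_px[y]).astype(int), 0, w - 1)
        right[y, target] = left_img[y, cols]
    return right

# Hypothetical example: a two-region label map (background / foreground building)
# with a constant disparity assigned to each user-labelled region.
rng = np.random.default_rng(4)
left = rng.random((120, 160))
labels = np.zeros((120, 160), dtype=int)
labels[30:90, 40:120] = 1                       # user-marked foreground region
disparity = np.where(labels == 1, 6.0, 2.0)     # nearer region gets larger disparity
right = synthesize_right_view(left, disparity)
print("stereo pair shapes:", left.shape, right.shape)
```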
Abstract:
Aneurysm diameter measurement is quick and easy, but suffers from the pitfall of being "too rough and ready". When semi-automated segmentation took 7-10 minutes to estimate volume, it was not a practical tool for busy, routine clinical practice. Today, the availability of automatic segmentation within seconds is bound to make volume measurement and 3D ultrasonography the tools of the future. There can be no debate.
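A minimal sketch of the volume computation once an automatic segmentation is available: count the segmented voxels and multiply by the voxel volume. The mask and voxel spacing below are hypothetical.

```python
import numpy as np

def segmentation_volume_ml(mask, voxel_spacing_mm):
    """Volume of a binary segmentation mask in millilitres.

    mask: boolean/0-1 array of segmented voxels.
    voxel_spacing_mm: (dx, dy, dz) voxel spacing in millimetres.
    """
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0   # mm^3 -> mL

# Hypothetical segmentation: a 30 x 30 x 20 voxel block at 1 x 1 x 2 mm spacing.
mask = np.zeros((128, 128, 64), dtype=bool)
mask[40:70, 40:70, 10:30] = True
print(f"aneurysm volume: {segmentation_volume_ml(mask, (1.0, 1.0, 2.0)):.1f} mL")
```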