987 results for 3D registration
Abstract:
Recent work suggests that the human ear varies significantly between different subjects and can be used for identification. In principle, therefore, using ears in addition to the face within a recognition system could improve accuracy and robustness, particularly for non-frontal views. The paper describes work that investigates this hypothesis using an approach based on the construction of a 3D morphable model of the head and ear. One issue with creating a model that includes the ear is that existing training datasets contain noise and partial occlusion. Rather than exclude these regions manually, a classifier has been developed which automates this process. When combined with a robust registration algorithm the resulting system enables full head morphable models to be constructed efficiently using less constrained datasets. The algorithm has been evaluated using registration consistency, model coverage and minimalism metrics, which together demonstrate the accuracy of the approach. To make it easier to build on this work, the source code has been made available online.
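The abstract leaves the model construction itself implicit; for context, a morphable model is conventionally built by PCA over meshes in dense correspondence. A minimal sketch (array shapes and the mode count are illustrative assumptions; the paper's classifier-based masking and robust registration happen upstream of this step):

```python
import numpy as np

def build_morphable_model(registered_meshes, n_modes=20):
    """PCA morphable model from registered head meshes (the standard
    construction; noisy/occluded regions are assumed already handled
    by the paper's classifier and robust registration).

    registered_meshes : (S, N, 3) vertices of S meshes in dense correspondence
    """
    S = registered_meshes.shape[0]
    X = registered_meshes.reshape(S, -1)      # one flattened row per subject
    mean = X.mean(0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_modes]                 # shape basis (modes x 3N)
    stddevs = s[:n_modes] / np.sqrt(S - 1)    # per-mode standard deviations
    return mean, components, stddevs

# A new head is then mean + components.T @ (coeffs * stddevs), reshaped to (N, 3).
```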
Abstract:
Significant recent progress has shown ear recognition to be a viable biometric. Good recognition rates have been demonstrated under controlled conditions, using manual registration or specialised equipment. This paper describes a new technique that improves the robustness of ear registration and recognition, addressing issues of pose variation, background clutter and occlusion. By treating the ear as a planar surface and creating a homography transform from SIFT feature matches, ears can be registered accurately. The feature matches reduce the gallery size and enable a precise ranking using a simple 2D distance algorithm. When applied to the XM2VTS database, the technique gives results comparable to PCA with manual registration. Further analysis on more challenging datasets demonstrates it to be robust to background clutter, to viewing angles of up to +/- 13 degrees and to over 20% occlusion.
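The planar-surface assumption maps directly onto the standard SIFT-plus-homography pipeline. A minimal OpenCV sketch of that pipeline (the ratio-test threshold and RANSAC tolerance are illustrative choices, not the paper's exact settings):

```python
import cv2
import numpy as np

def register_ear(probe_img, gallery_img, ratio=0.75):
    """Register a probe ear image to a gallery image by treating the
    ear as a planar surface (homography estimated from SIFT matches)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(probe_img, None)
    kp2, des2 = sift.detectAndCompute(gallery_img, None)

    # Lowe's ratio test keeps only distinctive matches.
    matcher = cv2.BFMatcher()
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < ratio * n.distance]
    if len(matches) < 4:              # a homography needs >= 4 correspondences
        return None, matches

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches while estimating the homography.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = gallery_img.shape[:2]
    registered = cv2.warpPerspective(probe_img, H, (w, h))
    return registered, matches
```

The number of surviving matches can plausibly serve as the gallery-pruning score the abstract mentions, with the final ranking computed on the registered images.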
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping: hostile conditions such as the absence of Global Positioning System coverage, extreme lighting variations and geometrically smooth surfaces are to be expected. So far, state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant, a limitation that stems from the scan-matching algorithms used to solve the localization and registration problems. This paper contributes to expanding modelling capabilities to structures characterized by uniform geometry and smooth surfaces, such as road and train tunnels. To achieve this, we combine state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the EKF with inertial data we were able to overcome the major problem of MonoSLAM implementations, known as scale-factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted with the SIFT algorithm and inserted directly into the EKF according to the Inverse Depth Parametrization. Wrong frame-to-frame feature matches were rejected through 1-Point RANSAC (Random Sample Consensus). The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel. Results from the localization strategy are presented and analyzed.
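A minimal sketch of the role inertial data plays in the EKF prediction step (the position/velocity state layout, the world-frame gravity-compensated acceleration and the noise model are simplifying assumptions; the paper's filter also carries camera orientation and inverse-depth feature states):

```python
import numpy as np

def ekf_predict(x, P, accel, dt, q_accel=0.1):
    """EKF prediction for a position/velocity state, driven by an
    accelerometer measurement instead of a constant-velocity guess --
    the inertial input that pins down MonoSLAM's otherwise ambiguous
    metric scale.

    x     : state [px, py, pz, vx, vy, vz]
    P     : 6x6 state covariance
    accel : 3-vector acceleration, assumed gravity-compensated and
            already rotated to the world frame (sketch simplification)
    """
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)             # position integrates velocity

    B = np.zeros((6, 3))
    B[:3, :] = 0.5 * dt**2 * np.eye(3)     # position gains (1/2) a dt^2
    B[3:, :] = dt * np.eye(3)              # velocity gains a dt

    x_pred = F @ x + B @ accel
    Q = q_accel * (B @ B.T)                # process noise from accel noise
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```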
Abstract:
Background: This paper presents a method that registers MRIs acquired in the prone position with surface topography (TP) and X-ray reconstructions acquired in the standing position, in order to obtain a 3D representation of a human torso incorporating the external surface, bone structures and soft tissues. Methods: TP and X-ray data are registered using landmarks. Bone structures are used to register each MRI slice using an articulated model, and the soft tissue is confined to the volume delimited by the trunk and bone surfaces using a constrained thin-plate spline. Results: The method is tested on three pre-surgical patients with scoliosis and shows a significant improvement, both qualitatively and in terms of the Dice similarity coefficient, in fitting the MRI into the standing patient model when compared to rigid and articulated-model registration. The determinant of the Jacobian of the registration deformation shows larger variations in areas closer to the surface of the torso. Conclusions: The resulting novel 3D full-torso model can provide a more complete representation of patient geometry to be incorporated in surgical simulators under development that aim at predicting the effect of scoliosis surgery on the external appearance of the patient's torso.
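The constrained thin-plate spline itself is not spelled out in the abstract; its base ingredient, a TPS warp driven by landmark correspondences, can be sketched with SciPy (the trunk/bone-surface confinement would be added on top of this unconstrained version):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(src_landmarks, dst_landmarks, points):
    """Warp `points` with a thin-plate spline fitted to landmark pairs.
    A plain (unconstrained) TPS: the paper confines the soft tissue to
    the trunk/bone volume with additional constraints omitted here.

    src_landmarks, dst_landmarks : (N, 3) corresponding landmarks
    points                       : (M, 3) soft-tissue points to deform
    """
    # Interpolating destination coordinates directly implies the
    # displacement field.
    tps = RBFInterpolator(src_landmarks, dst_landmarks,
                          kernel='thin_plate_spline')
    return tps(points)
```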
Abstract:
This paper presents a method based on articulated models for the registration of spine data extracted from multimodal medical images of patients with scoliosis. With the ultimate aim of developing a complete geometrical model of the torso of a scoliotic patient, this work presents a method for registering vertebral-column data using 3D magnetic resonance images (MRI) acquired in the prone position and X-ray data acquired in the standing position for five patients with scoliosis. The 3D shape of the vertebrae is estimated from both image modalities for each patient, and an articulated model is used to calculate the intervertebral transformations required to align the vertebrae between the two postures. Euclidean distances between anatomical landmarks are calculated to assess the multimodal registration error. Results show a decrease in Euclidean distance using the proposed method compared to rigid registration, and more physically realistic vertebral deformations compared to thin-plate-spline (TPS) registration, thus improving alignment.
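The intervertebral transformations of an articulated model are per-vertebra rigid fits, and the reported error measure is a mean landmark distance; both have standard closed forms. A sketch using the Kabsch solution (function and variable names are illustrative):

```python
import numpy as np

def kabsch(P, Q):
    """Best least-squares rigid transform (R, t) mapping landmark set
    P onto Q (both (N, 3)); one such per-vertebra transform is the
    building block of an articulated spine model."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def landmark_error(P, Q, R, t):
    """Mean Euclidean distance between transformed and target landmarks,
    the registration-error measure used in the abstract."""
    return np.linalg.norm((P @ R.T + t) - Q, axis=1).mean()
```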
Abstract:
The registration of full 3D models is an important task in computer vision. Range finders reconstruct only a partial view of an object, so multiple views must be registered to obtain a complete model. Many authors have proposed techniques to register 3D surfaces from multiple views, in which there are basically two stages to consider: first, coarse registration, in which some sort of correspondences are established; second, fine registration, which refines the initial estimate to obtain a more accurate solution. A survey of the most common techniques is presented, including experimental results for some of them.
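The fine-registration stage referred to above is classically ICP. A minimal point-to-point sketch (nearest-neighbor correspondences plus a closed-form rigid fit; the convergence tolerance and iteration cap are arbitrary choices):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(P, Q):
    # Closed-form least-squares rigid fit (Kabsch), mapping P onto Q.
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

def icp(src, dst, iters=50, tol=1e-6):
    """Minimal point-to-point ICP: the 'fine' stage that iteratively
    refines a coarse initial alignment of two partial views."""
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)       # nearest-neighbor correspondences
        R, t = best_rigid(src, dst[idx])   # rigid fit to current pairs
        src = src @ R.T + t
        if abs(prev_err - dists.mean()) < tol:
            break
        prev_err = dists.mean()
    return src
```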
Abstract:
One of the key aspects of 3D image registration is the computation of the joint intensity histogram. We propose a new approach that computes this histogram using uniformly distributed random lines to stochastically sample the overlapping volume between two 3D images. The intensity values are captured along the lines at evenly spaced positions, with a different random initial offset for each line. This method provides accurate, robust and fast mutual-information-based registration. Interpolation effects are drastically reduced, owing to the stochastic nature of the line generation, and the alignment process is also accelerated. The results obtained show better performance for the introduced method than for the classic computation of the joint histogram.
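Given the paired intensity samples that the random lines deliver, the mutual information follows from the joint histogram in the usual way. A sketch (the bin count is an arbitrary choice; the paper's contribution is the line-based sampling that produces samples_a and samples_b, not this step):

```python
import numpy as np

def mutual_information(samples_a, samples_b, bins=64):
    """Mutual information from a joint intensity histogram built over
    paired intensity samples of the two images' overlapping volume."""
    joint, _, _ = np.histogram2d(samples_a, samples_b, bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(1, keepdims=True)                 # marginal of image A
    py = pxy.sum(0, keepdims=True)                 # marginal of image B
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```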
Abstract:
Objectives: To investigate the reliability of regional three-dimensional registration and superimposition methods for the assessment of temporomandibular joint condylar morphology across subjects and longitudinally. Methods: The sample consisted of cone beam CT scans of 36 patients. The across-subject comparisons included 12 controls, mean age 41.3 +/- 12.0 years, and 12 patients with temporomandibular joint osteoarthritis, mean age 41.3 +/- 14.7 years. The individual longitudinal assessments included 12 patients with temporomandibular joint osteoarthritis, mean age 37.8 +/- 16.7 years, imaged pre-operatively, immediately after jaw surgery and at one year post-operative. Surface models of all condyles were constructed from the cone beam CT scans. Two previously calibrated observers independently performed all registration methods. A landmark-based approach was used for the registration of across-subject condylar models, and differences between the temporomandibular joint osteoarthritis and control groups were computed with shape analysis. A voxel-based approach was used for the registration of longitudinal scans, from which the x, y and z degrees of freedom for translation and rotation were calculated. Two-way random intraclass correlation coefficients tested the interobserver reliability. Results: Statistically significant differences between the control group and the osteoarthritis group were consistently located on the lateral and medial poles for both observers. The interobserver differences were <= 0.2 mm. For individual longitudinal comparisons, the mean interobserver differences were <= 0.6 mm in translation errors and 1.2 degrees in rotation errors, with excellent reliability (intraclass correlation coefficient > 0.75). Conclusions: Condylar registration for across-subject and longitudinal assessments is reliable and can be used to quantify subtle bony differences in three-dimensional condylar morphology.
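The reliability index used here, the two-way random single-measure intraclass correlation (ICC(2,1) in the Shrout-Fleiss taxonomy), has a closed form from two-way ANOVA mean squares. A sketch, assuming a subjects-by-observers rating matrix:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, single-measure intraclass correlation ICC(2,1);
    values > 0.75 are conventionally read as excellent reliability.

    ratings : (n_subjects, k_observers) matrix of measurements
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * ((ratings.mean(1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((ratings.mean(0) - grand) ** 2).sum() / (k - 1)   # observers
    sse = ((ratings - ratings.mean(1, keepdims=True)
            - ratings.mean(0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                              # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```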
Abstract:
The aim of this study was to determine whether image artifacts caused by orthodontic metal accessories interfere with the accuracy of 3D CBCT model superimposition. A dry human skull was submitted to three CBCT scans: first without orthodontic brackets (T1), then with bonded stainless steel brackets without (T2) and with (T3) orthodontic arch wires inserted into the bracket slots. Registration of the image surfaces and superimposition of the 3D models were performed. Within-subject surface distances between T1-T2, T1-T3 and T2-T3 were computed for comparison among the three data sets. The minimum and maximum Hausdorff Distance units (HDu) computed between the corresponding data points of the T1 and T2 CBCT 3D surface images were 0.000000 and 0.049280 HDu, respectively, and the mean distance was 0.002497 HDu. The minimum and maximum Hausdorff Distances between T1 and T3 were 0.000000 and 0.047440 HDu, respectively, with a mean distance of 0.002585 HDu. In the comparison between T2 and T3, the minimum, maximum and mean Hausdorff Distances were 0.000000, 0.025616 and 0.000347 HDu, respectively. In the current study, the image artifacts caused by metal orthodontic accessories did not compromise the accuracy of the 3D model superimposition. Color-coded maps of the overlaid structures complemented the computed Hausdorff Distances and demonstrated a precise fusion between the data sets.
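The Hausdorff distance between two superimposed surface models is directly available in SciPy. A sketch, assuming the surfaces are represented as point sets (the study reports normalized HDu; this returns distances in the coordinate units of the input):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(surface_a, surface_b):
    """Symmetric Hausdorff distance between two surface point sets,
    (N, 3) and (M, 3): the worst-case surface deviation reported per
    model pair in the abstract.  Mean surface distance would instead
    average the nearest-neighbor distances."""
    d_ab = directed_hausdorff(surface_a, surface_b)[0]
    d_ba = directed_hausdorff(surface_b, surface_a)[0]
    return max(d_ab, d_ba)
```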
Abstract:
This paper presents methods based on Information Filters for solving matching problems, with emphasis on real-time, or effectively real-time, applications. Both applications discussed in this work deal with ultrasound-based rigid registration in computer-assisted orthopedic surgery. In the first application, the usual workflow of rigid registration is reformulated so that the registration algorithm iterates while the surgeon is acquiring ultrasound images of the anatomy to be operated on. Using this effectively real-time approach to registration, the surgeon receives feedback that helps gauge the quality of the final registration outcome. The second application circumvents the need to attach physical markers to bones for anatomical referencing. Experiments using anatomical objects immersed in water were performed to evaluate and compare the different methods presented herein, using both 2D and real-time 3D ultrasound.
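The information filter is the Kalman filter in inverse-covariance form, and its measurement update is additive, which is what suits the register-while-scanning workflow described above. A sketch of that update for a linear measurement model (the linearity assumption and all names are illustrative, not the paper's formulation):

```python
import numpy as np

def information_update(Y, y, H, R, z):
    """Measurement update in information (inverse-covariance) form.
    The additive structure means each newly acquired ultrasound image
    contributes one term, so the registration estimate can be refined
    incrementally while the surgeon is still scanning.

    Y : information matrix (P^-1)     y : information vector (Y @ x)
    H : measurement Jacobian          R : measurement noise covariance
    z : measurement vector
    """
    Rinv = np.linalg.inv(R)
    Y_new = Y + H.T @ Rinv @ H          # information gained from z
    y_new = y + H.T @ Rinv @ z
    x_hat = np.linalg.solve(Y_new, y_new)   # recover the state estimate
    return Y_new, y_new, x_hat
```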
Abstract:
Purpose: Accurate three-dimensional (3D) models of lumbar vertebrae can enable image-based 3D kinematic analysis. The common approach to deriving 3D models is direct segmentation of CT or MRI datasets. However, these have the disadvantages of being expensive, time-consuming and/or inducing high radiation doses to the patient. In this study, we present a technique to automatically reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image. Methods: Our technique is based on a hybrid 2D/3D deformable registration strategy combining landmark-to-ray registration with a statistical shape model-based 2D/3D reconstruction scheme. Fig. 1 shows the different stages of the reconstruction process. Four cadaveric lumbar spine segments (twelve lumbar vertebrae in total) were used to validate the technique. To evaluate reconstruction accuracy, the surface models reconstructed from the lateral fluoroscopic images were compared to the associated ground-truth data derived from a 3D CT-scan reconstruction technique. For each case, a surface-based matching was first used to recover the scale and the rigid transformation between the reconstructed surface model and its ground-truth model. Results: Our technique successfully reconstructed 3D surface models of all twelve vertebrae. After recovering the scale and the rigid transformation between the reconstructed surface models and the ground-truth models, the average error of the 2D/3D surface model reconstruction over the twelve lumbar vertebrae was found to be 1.0 mm. The errors in reconstructing the surface models of all twelve vertebrae are shown in Fig. 2. The mean errors of the reconstructed surface models in comparison to their associated ground truths after iterative scaled rigid registrations ranged from 0.7 mm to 1.3 mm, and the root-mean-squared (RMS) errors ranged from 1.0 mm to 1.7 mm. The average mean reconstruction error was 1.0 mm. Conclusion: An accurate, scaled 3D reconstruction of the lumbar vertebra can be obtained from a single lateral fluoroscopic image using a statistical shape model-based 2D/3D reconstruction technique. Future work will focus on applying the reconstructed model to 3D kinematic analysis of the lumbar vertebrae, an extension of our previously reported image-based kinematic analysis. The developed method also has potential applications in surgical planning and navigation.
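One plausible reading of the landmark-to-ray step: each annotated 2D landmark back-projects to a ray from the X-ray source, and a scaled rigid (similarity) transform is fitted by alternately projecting the model landmarks onto their rays and re-fitting. A sketch under those assumptions (an illustrative reconstruction, not the authors' algorithm; the statistical shape model deformation stage is omitted):

```python
import numpy as np

def similarity_fit(P, Q):
    # Umeyama: least-squares scale s, rotation R, translation t with
    # s * R @ p + t approximating q for corresponding points.
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, S, Vt = np.linalg.svd(Pc.T @ Qc / len(P))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / Pc.var(axis=0).sum()
    t = Q.mean(0) - s * R @ P.mean(0)
    return s, R, t

def landmark_to_ray(model_pts, ray_origin, ray_dirs, iters=20):
    """Iterative landmark-to-ray registration.

    model_pts  : (N, 3) 3D landmarks on the vertebral model
    ray_origin : (3,) X-ray source position
    ray_dirs   : (N, 3) unit directions of the back-projected rays
    """
    s, R, t = 1.0, np.eye(3), np.zeros(3)
    for _ in range(iters):
        X = s * (model_pts @ R.T) + t
        # Closest point on each ray to the current landmark estimate.
        proj = ray_origin + ((X - ray_origin) * ray_dirs).sum(1, keepdims=True) * ray_dirs
        s, R, t = similarity_fit(model_pts, proj)
    return s, R, t
```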
Abstract:
OBJECTIVES: To evaluate the feasibility of fusion imaging compound tomography (FICT) of CT/MRI and single-photon emission computed tomography (SPECT) versus planar scintigraphy alone (plSc) in pre-surgical staging for vulvar cancer. MATERIALS AND METHODS: Analysis of consecutive patients with vulvar cancer who preoperatively underwent sentinel scintigraphy (planar and 3D SPECT imaging) and CT or MRI. Body markers were used for exact anatomical co-registration, and fusion datasets were reconstructed from SPECT and CT/MRI. The number and localisation of all intraoperatively identified and resected sentinel lymph nodes (SLN) were compared between planar and 3D fusion imaging. RESULTS: Twenty-six SLN were localised on planar scintigraphy. Twelve additional SLN were identified after SPECT and CT/MRI reconstruction, all of which were confirmed intraoperatively. In seven cases where single foci were identified at plSc, fusion imaging revealed grouped individual nodes, and five additional localisations were discovered at fusion imaging. In seven patients both methods identified SLN contralateral to the primary tumour site, but only fusion imaging allowed localisation of iliac SLN in four patients. All SLN predicted by fusion imaging could be localised and resected during surgery. CONCLUSIONS: Fusion imaging using SPECT and CT/MRI can detect SLN in vulvar cancer more precisely than planar imaging with respect to number and anatomical localisation. FICT revealed additional information in seven out of ten cases (70%).
Abstract:
A pilot study was conducted to detect volume changes of cerebral structures in growth hormone (GH)-deficient adults treated with GH, using serial 3D MR image processing, and to assess the need for segmentation prior to registration.
Abstract:
We consider the problem of approximating the 3D scan of a real object by an affine combination of examples. Common approaches depend either on the explicit estimation of point-to-point correspondences or on 2D projections of the target mesh; both have drawbacks. We follow an approach similar to [IF03], representing the target via an implicit function whose values at the vertices of the approximation are used to define a robust cost function. The problem is approached in two steps: first a coarse implicit representation of the whole target is approximated, and then finer, local ones; the local approximations are then merged with a Poisson-based method. We report the results of applying our method to a subset of 3D scans from the Face Recognition Grand Challenge v1.0.
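A sketch of the core fitting step as described: choose affine weights so that the combined mesh's vertices sit where an implicit function of the target is near zero. Here the unsigned distance to the scan's points stands in for the implicit function, and the robust cost, the coarse-to-fine stages and the Poisson merging are omitted (all names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def fit_affine_combination(example_meshes, target_scan):
    """Fit weights w (summing to 1, hence an affine combination) so the
    combined example mesh has small implicit-function values at its
    vertices.

    example_meshes : (K, N, 3) vertices of K corresponding example meshes
    target_scan    : (M, 3) scanned surface points
    """
    tree = cKDTree(target_scan)

    def cost(w):
        verts = np.tensordot(w, example_meshes, axes=1)   # (N, 3) combination
        return (tree.query(verts)[0] ** 2).mean()         # squared implicit values

    K = len(example_meshes)
    w0 = np.full(K, 1.0 / K)                              # start at the mean shape
    res = minimize(cost, w0,
                   constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1.0})
    return res.x
```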