5 results for Empirically-guided registration
at Universidad de Alicante
Abstract:
The use of RGB-D sensors for mapping and recognition tasks in robotics or, in general, for virtual reconstruction has increased in recent years. The key aspect of this kind of sensor is that it provides both depth and color information from the same device. In this paper, we present a comparative analysis of the most important methods used in the literature for the registration of subsequent RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Then, detailed experiments are carried out to determine the behavior of the different methods depending on the application. For both applications, we used standard datasets and a new one built for object reconstruction.
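As a concrete reference point for the pairwise frame-registration step this survey analyzes, the following is a minimal point-to-point ICP sketch in Python (numpy/scipy). It is a generic illustration of the technique, not code from the paper; the iteration count and the correspondence-rejection distance are illustrative assumptions.

# Minimal point-to-point ICP for aligning two depth frames, assuming
# both have already been converted to Nx3 point clouds. All names and
# parameter values are illustrative, not from the paper.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iters=30, max_dist=0.05):
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = source @ R.T + t
        d, idx = tree.query(moved)
        keep = d < max_dist           # reject distant (likely wrong) matches
        Ri, ti = best_rigid_transform(moved[keep], target[idx[keep]])
        R, t = Ri @ R, Ri @ t + ti    # compose the incremental update
    return R, t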
Abstract:
Many applications, including object reconstruction, robot guidance, and scene mapping, require the registration of multiple views of a scene to generate a complete geometric and appearance model of it. In real situations, the transformations between views are unknown and it is necessary to apply expert inference to estimate them. In the last few years, the emergence of low-cost depth-sensing cameras has strengthened research on this topic, motivating a plethora of new applications. Although these cameras have enough resolution and accuracy for many applications, some situations cannot be solved with general state-of-the-art registration methods due to the signal-to-noise ratio (SNR) and the resolution of the data they provide. The problem of working with low-SNR data may appear in any 3D system, so novel solutions in this respect are needed. In this paper, we propose a method, μ-MAR, able to both coarsely and finely register sets of 3D points into a common coordinate system; it targets data provided by low-cost depth-sensing cameras, although it is not restricted to these sensors. The method overcomes the noisy-data problem by means of a model-based solution for multiplane registration. Specifically, it iteratively registers 3D markers composed of multiple planes extracted from points of multiple views of the scene. As the markers and the object of interest are static in the scenario, the transformations obtained for the markers are applied to the object in order to reconstruct it. Experiments have been performed using synthetic and real data. The synthetic data allow a qualitative and quantitative evaluation by means of visual inspection and the Hausdorff distance, respectively. The real-data experiments show the performance of the proposal using data acquired by a Primesense Carmine RGB-D sensor. The method has been compared to several state-of-the-art methods. The results show that μ-MAR registers objects with high accuracy in the presence of noisy data, outperforming the existing methods.
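The building block behind such a multiplane approach is a robust plane fit. Below is a short total-least-squares plane fit via SVD, the kind of primitive a method like μ-MAR can build on; it is an illustrative sketch, not the authors' implementation.

# Total-least-squares plane fit via SVD. Illustrative sketch only;
# function names are assumptions, not from the paper.
import numpy as np

def fit_plane(points):
    """Fit n . x = d to an Nx3 array; returns unit normal n and offset d."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, normal @ centroid

def point_plane_residuals(points, normal, d):
    """Signed distances of the points to the fitted plane (a noise estimate)."""
    return points @ normal - d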
Abstract:
In this thesis, a methodology for representing 3D subjects and their deformations in adverse situations is studied. The study focuses on providing methods, based on registration techniques, to improve the data in situations where the sensor is working at the limit of its sensitivity. To this end, two methods are proposed to overcome the problems that can hinder the process under these conditions. First, a rigid registration approach based on model registration is presented, in which a model of 3D planar markers is used. This model is estimated using a proposed method that improves its quality by taking into account prior knowledge of the marker. To study the deformations, a framework is proposed to combine multiple spaces in a non-rigid registration technique. This proposal improves the quality of the alignment with a more robust matching process that makes use of all the available input data. Moreover, this framework allows the registration of multiple spaces simultaneously, providing a more general technique. Concretely, it is instantiated using colour and location in the matching process for 3D location registration.
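As an illustration of the multi-space matching idea, the sketch below finds nearest neighbours in a joint space that concatenates 3D location and colour; the weighting parameter is a hypothetical choice for illustration, not a value from the thesis.

# Joint location+colour matching sketch. 'colour_weight' balances the
# two spaces and is an assumed, illustrative parameter.
import numpy as np
from scipy.spatial import cKDTree

def joint_space_matches(src_xyz, src_rgb, dst_xyz, dst_rgb, colour_weight=0.1):
    # Concatenate position and weighted colour into one 6D space,
    # so matching uses all available input data at once.
    src = np.hstack([src_xyz, colour_weight * src_rgb])
    dst = np.hstack([dst_xyz, colour_weight * dst_rgb])
    dist, idx = cKDTree(dst).query(src)
    return idx, dist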
Abstract:
PURPOSE: To evaluate in a pilot study the visual, refractive, corneal topographic, and aberrometric changes after wavefront-guided LASIK or photorefractive keratectomy (PRK) using a high-resolution aberrometer to calculate the treatment for aberrated eyes. METHODS: Twenty aberrated eyes of 18 patients undergoing wavefront-guided LASIK or PRK using the VISX STAR S4 IR excimer laser and the iDesign aberrometer (Abbott Medical Optics, Inc., Santa Ana, CA) were enrolled in this prospective study. Three groups were differentiated: a keratoconus post-CXL group including 11 keratoconic eyes (10 patients), a post-LASIK group including 5 eyes (5 patients) with previous decentered LASIK treatments, and a post-RK group including 4 eyes (3 patients) with previous radial keratotomy. Visual, refractive, contrast sensitivity, corneal topographic, and ocular aberrometric changes were evaluated during a 6-month follow-up. RESULTS: An improvement in uncorrected (UDVA) and corrected (CDVA) distance visual acuity associated with a reduction in the spherical equivalent was observed in the three groups, but was only statistically significant in the keratoconus post-CXL and post-LASIK groups (P ≤ .04). All eyes gained one or more lines of CDVA after surgery. Improvements in contrast sensitivity were observed in the three groups, but they were only statistically significant in the keratoconus post-CXL and post-LASIK groups (P ≤ .04). Regarding aberrations, a reduction was observed in trefoil aberrations in the keratoconus post-CXL group (P = .05) and significant reductions in higher-order and primary coma aberrations in the post-LASIK group (P = .04). CONCLUSIONS: Wavefront-guided laser enhancements using the evaluated platform seem to be safe and effective in restoring visual function in aberrated eyes.
Abstract:
Since the beginning of 3D computer vision, it has been necessary to use techniques that reduce the data to a treatable size while preserving the important aspects of the scene. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this has become even more relevant. Many applications make use of these sensors and need a preprocessing step to downsample the data, in order to either reduce the processing time or improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. The bilinear- and GNG-based methods provide homogeneous point clouds, whereas the color- and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained with only a homogeneous sampling.
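As an illustration of a feature-sensitive sampling of the kind compared here, the sketch below keeps points in high-curvature areas (where neighbouring normals disagree) with higher probability; the neighbourhood size and weighting are illustrative assumptions, not the paper's settings.

# Normal-based downsampling sketch: estimate normals by local PCA, then
# sample points with probability proportional to a curvature proxy.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbh = points[nb] - points[nb].mean(axis=0)
        _, _, vt = np.linalg.svd(nbh)
        normals[i] = vt[-1]           # direction of least local variance
    return normals, idx

def normal_based_sample(points, n_out, k=16, rng=np.random.default_rng(0)):
    normals, idx = estimate_normals(points, k)
    # Curvature proxy: 1 - mean |dot| with neighbours' normals
    # (absolute value handles the sign ambiguity of PCA normals).
    dots = np.abs(np.einsum('ij,ikj->ik', normals, normals[idx])).mean(axis=1)
    weight = 1.0 - dots + 1e-6
    keep = rng.choice(len(points), size=n_out, replace=False,
                      p=weight / weight.sum())
    return points[keep]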