27 results for 3D point clouds


Relevance: 90.00%

Abstract:

This thesis studies a methodology for representing 3D subjects and their deformations in adverse situations. The study focuses on providing registration-based methods to improve the data when the sensor is working at the limit of its sensitivity. To this end, two methods are proposed to overcome the problems that can hinder the process under these conditions. First, a rigid registration technique based on model registration is presented, in which a model of 3D planar markers is used. This model is estimated with a proposed method that improves its quality by taking prior knowledge of the marker into account. To study the deformations, a framework is proposed that combines multiple spaces within a non-rigid registration technique. This proposal improves the quality of the alignment through a more robust matching process that makes use of all the available input data. Moreover, the framework allows several spaces to be registered simultaneously, yielding a more general technique. Specifically, it is instantiated using colour and location in the matching process for 3D location registration.
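
As a hedged illustration of the multi-space matching idea described in this abstract, the sketch below combines 3D location and colour into a single joint space and finds nearest-neighbour correspondences there. This is only a minimal sketch, not the thesis's actual method; the colour_weight parameter, the array layout and the use of numpy/scipy are assumptions made for the example.

import numpy as np
from scipy.spatial import cKDTree

def joint_space_matches(src_xyz, src_rgb, dst_xyz, dst_rgb, colour_weight=0.1):
    """Find correspondences using both 3D location and colour.

    src_xyz, dst_xyz: (N, 3) point coordinates.
    src_rgb, dst_rgb: (N, 3) colours normalised to [0, 1].
    colour_weight:    relative influence of colour vs. location (assumed value).
    Returns, for each source point, the index of its match in dst.
    """
    # Stack location and (weighted) colour into a single 6D joint space.
    src = np.hstack([src_xyz, colour_weight * src_rgb])
    dst = np.hstack([dst_xyz, colour_weight * dst_rgb])

    # Nearest neighbour in the joint space gives the matching that a
    # registration step could then use.
    tree = cKDTree(dst)
    _, idx = tree.query(src, k=1)
    return idx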

Relevance: 80.00%

Abstract:

Plane model extraction from three-dimensional point clouds is a necessary step in many applications such as planar object reconstruction, indoor mapping and indoor localization. Different RANdom SAmple Consensus (RANSAC)-based methods have been proposed for this purpose in recent years. In this study, we propose a novel RANSAC-based method called Multiplane Model Estimation, which can estimate multiple plane models simultaneously from a noisy point cloud, using knowledge extracted from the scene (or object) in order to reconstruct it accurately. The method comprises two steps: first, it clusters the data into planar faces that preserve constraints defined by knowledge about the object (e.g., the angles between faces); second, the plane models are estimated from these data using a novel multi-constraint RANSAC. Experiments on both the clustering and the RANSAC stages showed that the proposed method outperforms state-of-the-art methods.
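
As a rough sketch of the RANSAC plane-fitting building block that Multiplane Model Estimation extends, here is a minimal single-plane RANSAC in Python. The thresholds and iteration count are arbitrary example values, and the multi-constraint part described in the abstract (e.g., rejecting candidate planes whose normals violate known angles with already-estimated faces) is not implemented here.

import numpy as np

def fit_plane_ransac(points, n_iters=500, dist_thresh=0.01, rng=None):
    """Estimate a single plane (n, d) with n.p + d = 0 from a noisy (N, 3) cloud."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_model = None, None

    for _ in range(n_iters):
        # 1. Sample 3 points and build a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample, skip
            continue
        n /= norm
        d = -np.dot(n, p0)

        # 2. Count inliers within the distance threshold.
        dist = np.abs(points @ n + d)
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)

    return best_model, best_inliers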

Relevance: 80.00%

Abstract:

Available on GitHub: https://github.com/adririquelme/DSE

Relevance: 30.00%

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan, August 29-31, 2012.

Relevance: 30.00%

Abstract:

Aims. We study the optical and near-infrared colour excesses produced by circumstellar emission in a sample of Be/X-ray binaries. Our main goals are to explore whether previously published relations, valid for isolated Be stars, are applicable to Be/X-ray binaries, and to compute the distance to these systems after correcting for the effects of the circumstellar contamination. Methods. Simultaneous UBVRI photometry and spectra in the 3500−7000 Å spectral range were obtained for 11 optical counterparts to Be/X-ray binaries in the LMC, 5 in the SMC and 12 in the Milky Way. As a measure of the amount of circumstellar emission we used the Hα equivalent width corrected for photospheric absorption. Results. We find a linear relationship between the strength of the Hα emission line and the component of E(B − V) originating from the circumstellar disk. This relationship is valid for stars with emission lines weaker than EW ≈ −15 Å. Beyond this point, the circumstellar contribution to E(B − V) saturates at a value of ≈ 0.17 mag. A similar relationship is found for the (V − I) near-infrared colour excess, albeit with a steeper slope and saturation level. The circumstellar excess in (B − V) is found to be about five times higher for Be/X-ray binaries than for isolated Be stars with the same equivalent width EW(Hα), implying significant differences in the physical properties of their circumstellar envelopes. The distance to Be/X-ray binaries (with non-shell Be star companions) can only be correctly estimated by taking into account the excess emission in the V band produced by free-free and free-bound transitions in the circumstellar envelope. We provide a simple method for determining distances that includes this effect.
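
The relation described above can be summarised, in heavily hedged form, as a piecewise function: the circumstellar contribution to E(B − V) grows roughly linearly with |EW(Hα)| and saturates at about 0.17 mag beyond EW ≈ −15 Å. The sketch below encodes that shape only; the slope value is a placeholder implied by continuity at the saturation point, not the coefficient fitted in the paper.

def circumstellar_ebv(ew_halpha_angstrom, slope=0.17 / 15.0, saturation=0.17):
    # Hypothetical piecewise form of the circumstellar E(B-V) excess, in mag.
    # ew_halpha_angstrom: H-alpha equivalent width (negative values = emission).
    # slope: placeholder mag/Angstrom value chosen so the linear branch meets the
    #        ~0.17 mag saturation quoted in the abstract; NOT the paper's fit.
    return min(slope * abs(ew_halpha_angstrom), saturation)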

Relevance: 30.00%

Abstract:

Several recent works deal with 3D data in mobile robotics problems such as mapping or egomotion estimation. The data come from sensors such as stereo vision systems, time-of-flight cameras or 3D lasers, which provide a huge amount of unorganized 3D data. In this paper, we describe an efficient method to build complete 3D models using a Growing Neural Gas (GNG). The GNG is applied to the raw 3D data and reduces both the underlying error and the number of points while preserving the topology of the 3D data. The GNG output is then used in a 3D feature extraction method. We have performed a thorough study in which we quantitatively show that the use of GNG improves the 3D feature extraction method, and we also show that our method can be applied to any kind of 3D data. The 3D features obtained are used as input to an Iterative Closest Point (ICP)-like method to compute the 6DoF movement performed by a mobile robot. A comparison with standard ICP shows that the use of GNG improves the results. Final 3D mapping results obtained from the estimated egomotion are also shown.
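
The egomotion step mentioned above relies on an ICP-like alignment; the following is a minimal, textbook point-to-point ICP sketch (closest-point correspondences plus an SVD-based rigid transform), not the paper's exact variant, assuming numpy and scipy are available.

import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, n_iters=30):
    """Estimate R, t aligning src onto dst (both (N, 3) arrays): dst ~ src @ R.T + t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()

    for _ in range(n_iters):
        # 1. Closest-point correspondences.
        _, idx = tree.query(cur, k=1)
        matched = dst[idx]

        # 2. Best rigid transform for these correspondences (Kabsch / SVD).
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:      # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s

        # 3. Apply the incremental transform and accumulate it.
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step

    return R, t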

Relevance: 30.00%

Abstract:

Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied, so computing these features in real time for many points in the scene is impossible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and, therefore, the entire pipeline of RGB-D based computer vision systems where such features are typically used. Using the GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained by using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views, which allows the descriptor to be computed at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. The source code will be made publicly available as a contribution to the open-source Point Cloud Library.
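
What makes GPU acceleration attractive here is that each keypoint's semi-local patch is processed independently of the others. The sketch below illustrates that structure on the CPU with a deliberately simple eigenvalue-based patch descriptor; it is a stand-in for illustration only, not the descriptor proposed in the paper, and every loop iteration could map to one GPU thread.

import numpy as np
from scipy.spatial import cKDTree

def patch_descriptors(cloud, keypoints, k=50):
    """Toy semi-local patch descriptor: sorted covariance eigenvalues.

    cloud:     (N, 3) scene points.
    keypoints: (M, 3) points where descriptors are computed.
    Each iteration is independent, so the loop maps directly to
    one-thread-per-keypoint GPU execution.
    """
    tree = cKDTree(cloud)
    descs = np.empty((len(keypoints), 3))
    for i, kp in enumerate(keypoints):
        _, idx = tree.query(kp, k=k)
        patch = cloud[idx] - cloud[idx].mean(axis=0)
        cov = patch.T @ patch / k
        descs[i] = np.sort(np.linalg.eigvalsh(cov))[::-1]   # local shape signature
    return descs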

Relevance: 30.00%

Abstract:

3D sensors provides valuable information for mobile robotic tasks like scene classification or object recognition, but these sensors often produce noisy data that makes impossible applying classical keypoint detection and feature extraction techniques. Therefore, noise removal and downsampling have become essential steps in 3D data processing. In this work, we propose the use of a 3D filtering and down-sampling technique based on a Growing Neural Gas (GNG) network. GNG method is able to deal with outliers presents in the input data. These features allows to represent 3D spaces, obtaining an induced Delaunay Triangulation of the input space. Experiments show how the state-of-the-art keypoint detectors improve their performance using GNG output representation as input data. Descriptors extracted on improved keypoints perform better matching in robotics applications as 3D scene registration.
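
For readers unfamiliar with GNG, the following is a heavily simplified sketch of its core adaptation loop applied to a noisy cloud (winner adaptation, edge aging and periodic node insertion). The hyperparameters are illustrative defaults, not the values used in the paper, and details such as removal of isolated nodes are omitted.

import numpy as np

def gng_fit(data, max_nodes=200, n_samples=20000,
            eps_b=0.05, eps_n=0.005, age_max=50,
            insert_every=100, alpha=0.5, decay=0.995, rng=None):
    """Minimal Growing Neural Gas sketch for filtering/downsampling a cloud.

    data: (N, 3) raw (possibly noisy) 3D points.
    Returns node positions: a reduced, smoothed representation of the input.
    """
    rng = np.random.default_rng() if rng is None else rng
    nodes = data[rng.choice(len(data), 2, replace=False)].astype(float)
    errors = np.zeros(2)
    edges = {}                                        # (i, j), i < j -> age

    for step in range(1, n_samples + 1):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(nodes - x, axis=1)
        s1, s2 = np.argsort(d)[:2]                    # winner and runner-up

        errors[s1] += d[s1] ** 2                      # accumulate local error
        nodes[s1] += eps_b * (x - nodes[s1])          # adapt winner ...
        for (i, j), age in list(edges.items()):
            if s1 in (i, j):
                k = j if i == s1 else i
                nodes[k] += eps_n * (x - nodes[k])    # ... and its neighbours
                edges[(i, j)] = age + 1
        edges[tuple(sorted((int(s1), int(s2))))] = 0  # refresh winning edge
        edges = {e: a for e, a in edges.items() if a <= age_max}

        if step % insert_every == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(errors))                # node with largest error
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: errors[n])
                nodes = np.vstack([nodes, 0.5 * (nodes[q] + nodes[f])])
                errors = np.append(errors, errors[q] * alpha)
                errors[[q, f]] *= alpha
                r = len(nodes) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0

        errors *= decay                               # global error decay

    return nodes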

Relevance: 30.00%

Abstract:

Objective: To evaluate the efficacy of vision therapy in two cases of intermittent exotropia (IX(T)) by complementing the clinical examination with 3D video-oculography (VOG), and to show the potential applicability of this technology for such a purpose. Methods: We report the binocular alignment changes occurring after vision therapy in a 36-year-old woman with an IX(T) of 25 prism dioptres (Δ) at distance and 18 Δ at near, and a 10-year-old child with 8 Δ of IX(T) in primary position associated with 6 Δ of left-eye hypotropia. Both patients presented good corrected visual acuity in both eyes. Instability of the ocular deviation was evident in the VOG analysis, which also revealed the presence of vertical and torsional components. Binocular vision therapy was prescribed and performed, including different types of vergence, accommodation and diplopia-awareness training. Results: After therapy, excellent ranges of fusional vergence and a “to-the-nose” near point of convergence were obtained. The 3D VOG examination (SensoMotoric Instruments, Teltow, Germany) confirmed the compensation of the deviation with a high level of stability of the binocular alignment. A significant improvement could be observed after therapy in the vertical and torsional components, which became more stable. Patients were very satisfied with the outcome obtained by vision therapy. Conclusion: 3D VOG is a useful technique for providing an objective record of the compensation of the ocular deviation and the stability of the binocular alignment achieved after vision therapy in cases of IX(T), providing a detailed analysis of the vertical and torsional improvements.

Relevance: 30.00%

Abstract:

Objective: To evaluate the efficacy of treatment with vision therapy exercises in 3 cases of intermittent exotropia (XT(i)), complementing the clinical examination with 3D video-oculography, and to show the potential applicability of this technology for that purpose. Methods: We report the changes occurring after vision therapy exercises in a 36-year-old woman with XT(i) of −25 prism dioptres (pd) at distance and 18 pd at near; a 10-year-old boy with 8 pd of XT(i) in primary position, associated with +6 pd of left hypotropia; and a 63-year-old man with XT(i) of 6 pd in primary position associated with +7 pd of right hypertropia. All patients presented good corrected visual acuity in both eyes. The instability of the ocular deviation was evidenced by 3D-VOG analysis, which also revealed the presence of vertical and torsional components. Vision therapy exercises were performed, including different types of vergence, accommodation and diplopia-awareness exercises. Results: After vision therapy, excellent ranges of fusional vergence and a “to-the-nose” near point of convergence were obtained. The 3D-VOG examination (SensoMotoric Instruments, Teltow, Germany) confirmed the compensation of the deviation with stable ocular alignment. A significant improvement was observed after therapy in the vertical and torsional components, which became more stable. Patients were very satisfied with the results obtained. Conclusion: 3D-VOG is a useful technique for providing an objective method of recording the compensation and stability of the ocular deviation after vision therapy exercises in cases of XT(i), offering a detailed analysis of the improvement of the vertical and torsional components.

Relevance: 30.00%

Abstract:

Today, the professional skills demanded of university students are constantly increasing in our society. In our opinion, the content offered in official degrees needs to be complemented with other material that enriches students' overall professional knowledge in parallel; this is why, in recent years, a great variety of complementary courses has appeared at university. One of the most socially demanded technical requirements in the architecture, design and engineering fields is proficiency with 3D drawing software, which has become an indispensable reality in these sectors. This specific training therefore becomes essential beyond traditional two-dimensional design, because it opens up possibilities of spatial development that go beyond conventional orthographic projections (plans, sections or elevations), allowing the selected items to be modelled and rotated from multiple angles and perspectives. This paper analyses the teaching methodology of a complementary course for construction-industry technicians interested in computer-aided design, using a modelling program (SketchupMake) and a rendering program (Kerkythea). The course is developed from the technician's point of view, learning to use the software and apply it to professional practice, moving from a general to a more specific view through practical examples. The proposed methodology is based on the development of real examples in different professional environments such as rehabilitation, new construction, opening projects or architectural design. This multidisciplinary contribution improves students' critical judgement in different areas, encouraging new learning strategies and the independent development of three-dimensional solutions. The practical implementation of new situations, even those suggested by the students themselves, ensures active participation, saves time during the design process and increases effectiveness when generating elements that can be represented, moved or virtually tested. In conclusion, this teaching-learning methodology improves the skills and competencies of students to meet the growing professional demands of society. After finishing the course, technicians had not only improved their expertise in drawing but had also enhanced their capacity for spatial vision, both essential qualities in these sectors that can be applied to their professional development with great success.

Relevance: 30.00%

Abstract:

In recent years, the use of graphics processing units, better known as GPUs (Graphic Processing Units), has grown steadily in general-purpose applications, setting aside the purpose for which they were created, which was none other than rendering computer graphics. This growth is partly due to the evolution these devices have undergone during this time, which has given them great computing power and has extended their use from personal computers to large clusters. This fact, together with the proliferation of low-cost RGB-D sensors, has increased the number of vision applications that use this technology to solve problems, as well as to develop new applications. These improvements have not only been made on the hardware side, that is, in the devices themselves, but also on the software side, with the appearance of new development tools that make programming these GPU devices easier. This new paradigm was coined General-Purpose computation on Graphics Processing Units (GPGPU).

GPU devices are grouped into different families according to their hardware characteristics. Each new family incorporates technological improvements that allow it to achieve better performance than the previous ones. However, to obtain optimal performance from a GPU device it is necessary to configure it correctly before using it. This configuration is determined by the values assigned to a series of device parameters. Therefore, many of the implementations that currently use GPU devices for dense registration of 3D point clouds could see their performance improved with an optimal configuration of those parameters, depending on the device used. Given the lack of a detailed study of how GPU parameters affect the final performance of an implementation, carrying out such a study was considered highly desirable. The study was performed not only with different GPU parameter configurations, but also with different GPU device architectures. Its objective is to provide a decision tool that helps developers when implementing applications for GPU devices.

One of the research fields in which the use of these technologies has proliferated most is robotics, since traditionally in robotics, especially mobile robotics, combinations of sensors of different kinds and high economic cost, such as laser, sonar or contact sensors, were used to obtain data from the environment. These data were later used in computer vision applications with a very high computational cost. Both the economic cost of the sensors and the computational cost have been notably reduced thanks to these new technologies. Among the most widely used computer vision applications is point cloud registration. This process is, in general, the transformation of different point clouds into a known coordinate system. The data may come from photographs, from different sensors, etc. It is used in fields such as computer vision, medical imaging, object recognition, and the analysis of satellite images and data. Registration is used to compare or integrate the data obtained in different measurements.

This work reviews the state of the art in 3D registration methods. At the same time, it presents an in-depth study of the most widely used 3D registration method, Iterative Closest Point (ICP), and one of its best-known variants, Expectation-Maximization ICP (EMICP). The study covers both their sequential implementation and their parallel implementation on GPU devices, focusing on how the different GPU parameter configurations affect their performance. As a result of this study, a proposal is also presented to improve the use of GPU device memory, allowing larger point clouds to be processed and reducing the problem of the memory limitation imposed by the device. The behaviour of the 3D registration methods used in this work depends to a large extent on the initialization of the problem, which in this case consists of correctly choosing the transformation matrix with which the algorithm is started. Since this aspect is very important in this type of algorithm, because reaching the solution sooner or later, or even never reaching it, depends on it, this work also presents a study of the transformation space with the aim of characterizing it and facilitating the choice of the initial transformation to be used in these algorithms.
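
To make the launch-configuration point concrete, the sketch below (assuming the numba library and a CUDA-capable GPU; the kernel, sizes and block-size values are illustrative, not taken from this work) times the same brute-force nearest-neighbour kernel, the correspondence step at the heart of ICP, under several threads-per-block settings.

import math
import time
import numpy as np
from numba import cuda

@cuda.jit
def brute_force_nn(src, dst, out_idx):
    """One thread per source point: brute-force nearest neighbour in dst."""
    i = cuda.grid(1)
    if i < src.shape[0]:
        best_d, best_j = 1e30, -1
        for j in range(dst.shape[0]):
            dx = src[i, 0] - dst[j, 0]
            dy = src[i, 1] - dst[j, 1]
            dz = src[i, 2] - dst[j, 2]
            d = dx * dx + dy * dy + dz * dz
            if d < best_d:
                best_d, best_j = d, j
        out_idx[i] = best_j

def benchmark_block_sizes(n=100_000, m=50_000):
    """Time the same kernel under different launch configurations."""
    rng = np.random.default_rng(0)
    src = cuda.to_device(rng.random((n, 3), dtype=np.float32))
    dst = cuda.to_device(rng.random((m, 3), dtype=np.float32))
    out = cuda.device_array(n, dtype=np.int32)

    for threads_per_block in (64, 128, 256, 512, 1024):
        blocks = math.ceil(n / threads_per_block)
        brute_force_nn[blocks, threads_per_block](src, dst, out)  # warm-up / JIT
        cuda.synchronize()
        t0 = time.perf_counter()
        brute_force_nn[blocks, threads_per_block](src, dst, out)
        cuda.synchronize()
        print(f"{threads_per_block:4d} threads/block: "
              f"{time.perf_counter() - t0:.4f} s")

if __name__ == "__main__":
    benchmark_block_sizes()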