869 results for computational geometry
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range-sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature aiming to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the closest-neighbor search. Although they decrease the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to reduce the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean one, which is the de facto standard in the literature's implementations of the algorithm. In that analysis, the behavior of the algorithm in diverse topological spaces, each characterized by a different metric, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to have a significant, positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance on a heterogeneous set of objects, scenarios and initial configurations.
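As a rough illustration of where the distance metric enters the method, the following sketch implements a plain point-to-point ICP loop in which the correspondence search can use a cheaper Minkowski norm (Manhattan, p = 1) instead of the Euclidean distance; the point sets, metric choice and convergence test are illustrative assumptions, not the thesis's implementation.

```python
# Minimal ICP sketch: the correspondence step uses a configurable Minkowski
# norm (p = 1 is Manhattan), while the rigid transform is still estimated in
# closed form with the usual SVD (Kabsch) step.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, metric_p=1, iters=50, tol=1e-8):
    """Align `source` (Nx3) to `target` (Mx3); returns R (3x3) and t (3,)."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Correspondence step: nearest neighbours under the chosen Minkowski norm.
        dist, idx = tree.query(src, p=metric_p)
        matched = target[idx]
        # Transform step: closed-form rigid alignment (Kabsch / SVD).
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```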
Abstract:
An improvement to the two-dimensional quality Delaunay mesh generation algorithm, combining the mesh refinement strategies of Ruppert and Shewchuk, is proposed in this research. The developed technique uses the diametral lens criterion, introduced by L. P. Chew, to eliminate extremely obtuse triangles along the mesh boundary. The method splits the boundary segments and obtains an initial pre-refinement, thus reducing the number of iterations needed to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of communication and synchronization between subdomains in parallel mesh refinement.
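As a minimal sketch of the kind of boundary test involved, the snippet below checks whether a point encroaches on a boundary segment, assuming the usual angle characterization of the two criteria (a point inside Ruppert's diametral circle subtends more than 90 degrees at the segment; inside Chew's diametral lens, more than roughly 120 degrees). The thresholds and the splitting step are illustrative, not the paper's implementation.

```python
# Encroachment test sketch for boundary-segment splitting.  Assumption: the
# diametral circle and diametral lens are characterised by the angle the
# segment subtends at the query point (> 90 deg and > 120 deg respectively).
import numpy as np

def subtended_angle(p, a, b):
    """Angle (radians) subtended at point p by segment ab."""
    u, v = a - p, b - p
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def encroaches(p, a, b, criterion="lens"):
    threshold = np.deg2rad(120.0 if criterion == "lens" else 90.0)
    return subtended_angle(p, a, b) > threshold

def split_if_encroached(a, b, points, criterion="lens"):
    """Return the segment midpoint if any point encroaches, else None."""
    for p in points:
        if encroaches(p, a, b, criterion):
            return (a + b) / 2.0  # a full refiner would recurse on the two halves
    return None
```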
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, covering more than 0.5 million samples of synthetic data. Results from real data are also given.
Abstract:
Epipolar geometry is a key element in computer vision, and fundamental matrix estimation is the only way to compute it. This article surveys several methods of fundamental matrix estimation, which have been classified into linear, iterative and robust methods. All of these methods have been implemented and their accuracy analysed using real images. A summary, accompanied by experimental results, is given.
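As an illustration of the linear family of methods such a survey typically covers, the sketch below implements the standard normalized eight-point algorithm (Hartley); it follows the textbook formulation and is not the article's own code.

```python
# Normalised eight-point algorithm: solve the linear system given by the
# epipolar constraint x2^T F x1 = 0, enforce rank 2, undo the normalisation.
import numpy as np

def normalize(pts):
    """Translate to the centroid and scale so the mean distance is sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return pts_h, T

def fundamental_8point(x1, x2):
    """Estimate F from >= 8 correspondences x1 <-> x2 (Nx2 arrays)."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # One row of A per correspondence, so that A @ vec(F) = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint, then denormalise.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1
```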
Abstract:
Shape complexity has recently received attention from different fields, such as computer vision and psychology. In this paper, integral geometry and information theory tools are applied to quantify shape complexity from two different perspectives: from the inside of the object, we evaluate its degree of structure or correlation between its surfaces (inner complexity), and from the outside, we compute its degree of interaction with the circumscribing sphere (outer complexity). Our shape complexity measures are based on the following two facts: uniformly distributed global lines crossing an object define a continuous information channel, and the continuous mutual information of this channel is independent of the object discretisation and invariant to translations, rotations, and changes of scale. The measures introduced in this paper can potentially be used as shape descriptors for object recognition, image retrieval, object localisation, tumour analysis, and protein docking, among others.
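For readers unfamiliar with the channel-based quantity these measures build on, the snippet below computes the discrete mutual information I(X;Y) of a channel from its joint distribution; the paper works with the continuous analogue defined by uniformly distributed global lines, which this sketch does not reproduce.

```python
# Discrete mutual information of a channel given its joint distribution p(x, y).
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits for a joint probability matrix p(x, y)."""
    joint = joint / joint.sum()                    # normalise defensively
    px = joint.sum(axis=1, keepdims=True)          # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)          # marginal p(y)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())
```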
Abstract:
We analyze the emergence of synchronization in a population of moving integrate-and-fire oscillators. The oscillators, while moving on a plane, interact with their nearest neighbor at firing times. We discover a nonmonotonic dependence of the synchronization time on the velocity of the agents. Moreover, we find that the mechanisms that drive synchronization differ across dynamical regimes. We report the extreme situation where an interplay between the time scales involved in the dynamical processes completely inhibits the achievement of a coherent state. We also provide estimators for the transitions between the different regimes.
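A minimal simulation sketch of the model described above follows: integrate-and-fire oscillators moving ballistically on a periodic plane, each nudging the phase of its nearest neighbor when it fires. The parameter values, the phase-update rule and the coherence criterion are illustrative assumptions, not the paper's.

```python
# Toy simulation of moving pulse-coupled integrate-and-fire oscillators.
# Assumptions: ballistic motion with fixed headings, multiplicative phase kick
# to the nearest neighbour, coherence declared when all phases nearly coincide.
import numpy as np

def simulate(n=50, v=0.05, eps=0.1, box=1.0, dt=0.01, max_steps=200_000, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    pos = rng.uniform(0, box, size=(n, 2))
    ang = rng.uniform(0, 2 * np.pi, size=n)        # fixed headings
    phase = rng.uniform(0, 1, size=n)
    for step in range(max_steps):
        # Move with periodic boundaries and integrate the phases.
        pos = (pos + v * dt * np.column_stack([np.cos(ang), np.sin(ang)])) % box
        phase += dt
        # Firing: reset and pulse-couple to the nearest neighbour.
        for i in np.where(phase >= 1.0)[0]:
            phase[i] = 0.0
            d = np.linalg.norm(pos - pos[i], axis=1)
            d[i] = np.inf
            j = np.argmin(d)
            phase[j] = min(phase[j] * (1 + eps), 1.0)
        # Coherent state: all phases within a small window.
        if np.ptp(phase) < 1e-3:
            return step * dt                       # synchronization time
    return None                                    # no coherence within max_steps
```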
Abstract:
This text is a collection of procedures for inserting AutoCAD blocks more efficiently when solving previously classified problem types: the FIRST PART describes working protocols that the user must apply manually, while the SECOND PART offers routines programmed in AutoLISP and VisualLISP that free the user from that obligation.

If we left it at that, however, it might seem that the very manual methods presented first are the ones that AutoLISP later automates; it is therefore worth clarifying that the problems addressed in the FIRST PART, although close to those of the SECOND, are different. They reproduce the content of a monograph (BLOCS I GEOMETRIA: 5 EXERCICIS COMENTATS) that forms part of the support material for the course ELEMENTS DE CAD, taught by the author at the ETS d'Enginyeria de Telecomunicació de Barcelona, whose purpose is to fill the bibliographic gap detected in the geometric side of block insertion (as opposed to the more extensively covered side concerned with the most suitable data structure in each context: embedding drawings with INSERT versus linking them with REFX), by proposing a typological systematization of the cases where the scale is a linear function of a distance.

The SECOND PART goes further and extends AutoCAD's repertoire with the commands GINSERT, RATREDIT, INSERTOK, INS2D, INS3D, BLOQUEOK, DESCOMPOK, DEF-TRANSF, APL-TRANSF-V and APL-TRANSF-N, of which INS2D and INS3D (INSERTOK is a simplified version of INS2D for blocks without attributes) are the most innovative contribution and the one that pushes the potential of block insertion furthest. Summed up in one sentence, the aim is for the insertion of a block (which may be the original, a block consisting of an insertion of the original, or one consisting of an insertion of the previous one) to fit into a previously established frame, in the manner of the ESCALA or GIRA commands, which through their Referencia option apply to the selected objects the scaling or rotation needed for a reference element to reach a given size or position. To pinpoint the core of the problem, though, one observation is unavoidable: when care has been taken to refer a 2D block to an orthogonal unit square, inserting it so that it fits any rectangular frame drawn in the drawing is immediate, but it is not so immediate to concatenate insertions so that, beyond a simple combination of scaling, rotation and translation, the operation implicitly carries a shear transformation. Clearly, if we insert the block rotated and convert that insertion into a block which we insert again, this time with non-uniform scaling, the image of the original reference square will be a parallelogram; but the problem is: given a specific rhomboidal frame already drawn, what rotation must be given to the first insertion, and what rotation and scale factors must be applied to the second, so that the reference square fits the frame? The problem becomes more involved if, in addition, we want to reuse the result of the first insertion for other parallelograms, organizing a non-redundant system of intermediate insertions. INS2D and INS3D answer these questions (the latter fits the block not into a parallelogram but into a parallelepiped) and are applicable to blocks provided with attributes, not only conventional ones (those contained in the block's base plane, the only ones guaranteed to work with the INSERT command) but also attributes positioned and oriented freely.
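In linear-algebra terms, the question INS2D answers (what rotation to give the first insertion, and what rotation and scale factors to give the second, so that the block's unit reference square lands on a given parallelogram frame) is a rotate-scale-rotate decomposition of the frame's 2x2 matrix. The sketch below obtains that decomposition with an SVD; it only illustrates the geometry and is not the AutoLISP code offered in the text.

```python
# Decompose the linear map that sends the unit square onto a parallelogram
# frame as (rotation of 1st insertion) -> (non-uniform scale + rotation of
# 2nd insertion), using an SVD.  Function and parameter names are hypothetical.
import numpy as np

def insertion_parameters(p0, p1, p2):
    """Frame: origin p0; p1 - p0 and p2 - p0 are the images of the unit square's edges."""
    M = np.column_stack([p1 - p0, p2 - p0])   # unit square -> parallelogram
    U, S, Vt = np.linalg.svd(M)               # M = U @ diag(S) @ Vt
    # Keep both factors pure rotations; fold any reflection into the scale factors.
    if np.linalg.det(U) < 0:
        U[:, 1] *= -1
        S = S * [1, -1]
    if np.linalg.det(Vt) < 0:
        Vt[1, :] *= -1
        S = S * [1, -1]
    angle1 = np.degrees(np.arctan2(Vt[1, 0], Vt[0, 0]))   # rotation of the first insertion
    angle2 = np.degrees(np.arctan2(U[1, 0], U[0, 0]))     # rotation of the second insertion
    sx, sy = S                                             # non-uniform scale of the second
    return angle1, (sx, sy), angle2, p0                    # p0 is the insertion point
```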
Abstract:
In this paper, we present view-dependent, information-theoretic quality measures for pixel sampling and scene discretization in flatland. The measures are based on a definition of the mutual information of a line and have a purely geometrical basis. Several algorithms exploiting them are presented and compare well with an existing one based on depth differences.
Abstract:
In this paper we address the problem of extracting representative point samples from polygonal models. The goal of such a sampling algorithm is to find points that are evenly distributed. We propose star-discrepancy as a measure of sampling quality and introduce new sampling methods based on global line distributions. We investigate several line generation algorithms, including an efficient hardware-based sampling method. Our method contributes to the area of point-based graphics by extracting points that are more evenly distributed than those produced by current sampling algorithms.
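As an illustration of the quality measure being proposed, the sketch below approximates the star discrepancy of a 2-D point sample by brute force over origin-anchored boxes with corners taken from the sample coordinates; the paper's own evaluation and its global-line sampling algorithms are not reproduced here.

```python
# Brute-force lower-bound estimate of the star discrepancy D*(P) of a point
# set in the unit square: check boxes [0,x) x [0,y) with x, y drawn from the
# sample coordinates (plus 1.0) and compare point counts with box areas.
import numpy as np

def star_discrepancy_estimate(points):
    """Lower-bound estimate of D*(P) for points in [0,1]^2 (Nx2 array)."""
    n = len(points)
    xs = np.unique(np.append(points[:, 0], 1.0))
    ys = np.unique(np.append(points[:, 1], 1.0))
    worst = 0.0
    for x in xs:
        inside_x = points[:, 0] < x
        for y in ys:
            count = np.count_nonzero(inside_x & (points[:, 1] < y))
            worst = max(worst, abs(count / n - x * y))
    return worst
```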
Abstract:
Summary of the doctoral dissertation: Estimating single-tree attributes by airborne laser scanning : methods based on computational geometry of the 3-D point data / Jari Vauhkonen. - [Joensuu] : [Itä-Suomen yliopisto], 2010. - (Dissertationes Forestales ; 104).