962 results for 3D Computer Graphics
Abstract:
In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method for reconstructing objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested our method on several models and performed a study of the neural network parameterization, computing the quality of representation and comparing the results with other neural methods, such as Growing Neural Gas and Kohonen maps, and with classical methods such as Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration: we have redesigned and implemented the NG learning algorithm to fit it onto Graphics Processing Units using CUDA. A speed-up of 180× over the sequential CPU version is obtained.
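The core NG update is easy to state: for every input sample, all units are ranked by distance to it, and each unit moves toward the sample with a step that decays exponentially with its rank. The sketch below illustrates that rule in Python with NumPy; the parameter names (eps, lam), the fixed schedules, and the omission of the CHL edge updates and the CUDA port are simplifications for illustration, not the paper's implementation.

```python
import numpy as np

def neural_gas_epoch(points, W, eps=0.1, lam=5.0):
    # One pass over the samples: rank every unit by distance to the
    # sample, then pull each unit toward it with a step that decays
    # exponentially with rank (the classic neural gas rule).
    for x in points:
        d = np.linalg.norm(W - x, axis=1)
        ranks = np.argsort(np.argsort(d))   # rank (0 = closest) of each unit
        W += (eps * np.exp(-ranks / lam))[:, None] * (x - W)
    return W

# Hypothetical usage: 20 units fitted to a random point cloud.
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
units = neural_gas_epoch(cloud, rng.random((20, 3)))
```

In the full method, eps and lam are annealed over epochs and CHL additionally links the two closest units, so the surviving edges form the reconstructed mesh topology.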
Abstract:
Today, the professional skills required of university students are constantly increasing in our society. In our opinion, the content offered in official degrees needs to be complemented with other variables that enrich students' overall professional knowledge in parallel; this is why, in recent years, complementary courses have multiplied at universities. One of the most socially demanded technical requirements in the architectural, design and engineering fields is command of 3D drawing software, which has become indispensable in these sectors. This specific training has become essential beyond traditional two-dimensional design because it opens up possibilities of spatial development that go beyond conventional orthographic projections (plans, sections or elevations), allowing the selected items to be modelled and rotated from multiple angles and perspectives. This paper therefore analyzes the teaching methodology of a complementary course for construction-industry technicians interested in computer-aided design, using a modelling program (SketchupMake) and a rendering program (Kerkythea). The course is developed from the technician's point of view, teaching computer skills and their application to professional development, moving from a general to a specific view through practical examples. The proposed methodology is based on the development of real examples in different professional settings, such as rehabilitation, new construction, opening projects or architectural design. This multidisciplinary contribution improves students' critical thinking in different areas, encouraging new learning strategies and the independent development of three-dimensional solutions. The practical implementation of new situations, including some suggested by the students themselves, ensures active participation, saves time during the design process and increases effectiveness when generating elements that can be represented, moved or virtually tested. In conclusion, this teaching-learning methodology improves the skills and competencies of students to face the growing professional demands of society. After finishing the course, technicians not only improved their expertise in the field of drawing but also enhanced their capacity for spatial vision, both essential qualities in these sectors that can be applied to their professional development with great success.
Abstract:
In recent years, the use of graphics processing units (GPUs) in general-purpose applications has grown steadily, moving beyond the purpose for which they were created, namely the rendering of computer graphics. This growth is partly due to the evolution these devices have undergone during this time, which has endowed them with great computing power and extended their use from personal computers to large clusters. Together with the proliferation of low-cost RGB-D sensors, this has increased the number of vision applications that use this technology, both to solve existing problems and to develop new applications. These improvements have taken place not only in hardware, that is, in the devices themselves, but also in software, with the appearance of new development tools that simplify the programming of GPU devices. This new paradigm was coined General-Purpose computation on Graphics Processing Units (GPGPU). GPU devices are classified into families according to their hardware characteristics, and each new family incorporates technological improvements that allow it to outperform its predecessors. However, to obtain optimal performance from a GPU device, it must be configured correctly before use; this configuration is determined by the values assigned to a series of device parameters. Consequently, many of the implementations that currently use GPU devices for dense registration of 3D point clouds could improve their performance with an optimal configuration of those parameters for the device in question. Given the lack of a detailed study of how GPU parameters affect the final performance of an implementation, such a study was considered highly worthwhile. It was carried out not only with different GPU parameter configurations but also with different GPU device architectures, and its aim is to provide a decision tool that helps developers when implementing applications for GPU devices. One of the research fields in which these technologies proliferate most is robotics, since robotics, and especially mobile robotics, has traditionally combined sensors of different kinds and high economic cost, such as laser, sonar or contact sensors, to obtain data from the environment. These data were then used in computer vision applications with a very high computational cost. Both costs, the economic cost of the sensors and the computational cost, have been reduced considerably thanks to these new technologies. Among the most widely used computer vision applications is point cloud registration. In general, this process is the transformation of different point clouds into a known coordinate system; the data may come from photographs, from different sensors, and so on.
It is used in fields such as computer vision, medical imaging, object recognition, and the analysis of satellite images and data. Registration is used to compare or integrate the data obtained from different measurements. This work reviews the state of the art of 3D registration methods. It also presents an in-depth study of the most widely used 3D registration method, Iterative Closest Point (ICP), and of one of its best-known variants, Expectation-Maximization ICP (EMICP). The study covers both their sequential implementation and their parallel implementation on GPU devices, focusing on how different GPU parameter configurations affect their performance. As a consequence of this study, a proposal is also presented to improve the use of GPU device memory, allowing larger point clouds to be processed and reducing the memory limitation imposed by the device. The behaviour of the 3D registration methods used in this work depends heavily on the initialization of the problem, which here consists of correctly choosing the transformation matrix with which the algorithm starts. Because this aspect is crucial for this type of algorithm, determining whether the solution is reached sooner, later, or not at all, this work also presents a study of the transformation space, aimed at characterizing it and easing the choice of the initial transformation used by these algorithms.
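For readers unfamiliar with ICP, one iteration pairs each source point with its nearest destination point and then solves in closed form (via SVD) for the rigid motion that best aligns the pairs. The minimal CPU sketch below, in Python with NumPy and SciPy, illustrates only this baseline algorithm; the thesis's GPU implementations, EMICP's soft correspondences, and the parameter study are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50):
    """Minimal point-to-point ICP: nearest-neighbour matching plus
    SVD (Kabsch) alignment. src and dst are (N, 3) arrays."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:       # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

The sensitivity to initialization discussed above is visible even in this sketch: the nearest-neighbour pairing in the first iteration, and hence the basin of convergence, depends entirely on the starting transformation applied to src.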
Abstract:
The three-dimensional nature of building construction calls for 3D drawing as the best tool for design and for conveying technical and formal knowledge. The aim of this communication is to show the application of 3D graphic expression in a historical analysis of the evolution of the industrialized envelope in architecture, identifying its main technical and constructive determinants. The study compares the evolution of the use of industrialized construction systems through a graphic analysis of the most notable construction solutions. The methodology is based on identifying and studying selected industrialized construction systems composed of lightweight materials, as well as works of architecture that were influential in the evolution of the architectural envelope in the second half of the twentieth century. 3D graphic representation helps compare the analysed works in technological and formal terms, confirming the usefulness of computer-aided drawing in the constructive analysis carried out. In conclusion, by improving the understanding of the spatial characteristics of construction solutions, 3D architectural drawing contributes to the analysis of the material and functional properties of industrialized construction systems and their application to architectural design, helping to deepen knowledge of them and increasing the constructive quality and social commitment of architectural proposals.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Thesis (M.S.)--University of Illinois.
Abstract:
In this paper we investigate the difference between the adsorption of the spherical molecule argon (at 87.3 K) and the flexible normal butane (at an equivalent temperature of 150 K) in carbon slit pores. These temperatures are equivalent in the sense that they lie at the same relative distance between the respective triple and critical points. Higher equivalent temperatures (122.67 K for argon and 303 K for n-butane) are also studied to investigate the effects of temperature on the 2D-transition in adsorbed density. Grand Canonical Monte Carlo (GCMC) simulation is used to study the adsorption of these two model adsorbates. Besides the longer computation times involved in simulating n-butane adsorption, n-butane exhibits many interesting behaviors, such as: (i) the onset of adsorption occurs sooner (in terms of relative pressure); (ii) the hysteresis for 2D- and 3D-transitions is larger; (iii) a liquid-solid transition is not possible; (iv) a 2D-transition occurs for n-butane at 150 K, while for argon it occurs only in pores that accommodate two layers of molecules; (v) the maximum pore density is about one quarter that of argon; and (vi) the sieving pore width is slightly larger than that for argon. Finally, another feature obtained from the GCMC simulation is the configurational arrangement of molecules in the pores: for spherical argon the arrangement is well structured, while for n-butane it depends very much on the pore size. (C) 2004 Elsevier B.V. All rights reserved.
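The GCMC method itself hinges on two trial moves, particle insertion and deletion, accepted with the standard grand-canonical probabilities. The sketch below shows those acceptance tests in Python with reduced units; the constants are hypothetical placeholders, and the energy model that would supply du (e.g. Lennard-Jones argon plus a slit-pore wall potential) is omitted.

```python
import math
import random

# Reduced-unit constants; hypothetical values, not the paper's settings.
BETA = 1.0       # 1 / kT
MU = -3.0        # chemical potential of the gas reservoir
VOL = 1000.0     # accessible pore volume
LAMBDA3 = 1.0    # thermal de Broglie wavelength cubed

def accept_insertion(n, du):
    """Standard GCMC acceptance for inserting a particle into a system
    of n particles; du = U_new - U_old for the trial configuration."""
    acc = VOL / (LAMBDA3 * (n + 1)) * math.exp(BETA * (MU - du))
    return random.random() < min(1.0, acc)

def accept_deletion(n, du):
    """Acceptance for deleting one of n particles; du = U_new - U_old."""
    acc = LAMBDA3 * n / VOL * math.exp(-BETA * (MU + du))
    return random.random() < min(1.0, acc)
```

For flexible n-butane, each trial move additionally samples internal conformations, which is one source of the longer computation times the abstract mentions.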
Abstract:
Beyond the inherent technical challenges, current research into the three-dimensional surface correspondence problem is hampered by a lack of uniform terminology, an abundance of application-specific algorithms, and the absence of a consistent model for comparing existing approaches and developing new ones. This paper addresses these challenges by presenting a framework for analysing, comparing, developing, and implementing surface correspondence algorithms. The framework uses five distinct stages to establish correspondence between surfaces. It is general, encompassing a wide variety of existing techniques, and flexible, facilitating the synthesis of new correspondence algorithms. This paper presents a review of existing surface correspondence algorithms and shows how they fit into the correspondence framework. It also shows how the framework can be used to analyse and compare existing algorithms and to develop new ones using the framework's modular structure. Six algorithms, four existing and two new, are implemented using the framework, and each is used to match a number of surface pairs. The results demonstrate that the framework implementations are faithful to the existing algorithms and that powerful new surface correspondence algorithms can be created. (C) 2004 Elsevier Inc. All rights reserved.
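As a rough illustration of the modular structure such a framework implies, the sketch below wires five interchangeable stages into a single pipeline. The stage names (sample, describe, match, refine, verify) are hypothetical placeholders; the abstract does not enumerate the paper's actual five stages.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class CorrespondencePipeline:
    # Each stage is a plain callable, so algorithms are synthesized by
    # swapping stages rather than rewriting the whole method.
    sample: Callable[[Any], Any]        # pick candidate points on a surface
    describe: Callable[[Any], Any]      # compute local descriptors
    match: Callable[[Any, Any], Any]    # initial descriptor matching
    refine: Callable[[Any], Any]        # enforce geometric consistency
    verify: Callable[[Any], Any]        # score / accept the correspondence

    def run(self, surf_a, surf_b):
        fa = self.describe(self.sample(surf_a))
        fb = self.describe(self.sample(surf_b))
        return self.verify(self.refine(self.match(fa, fb)))
```

Under this reading, reproducing an existing algorithm amounts to choosing a specific callable for each stage, and a new algorithm is any previously untried combination.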
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. area-based matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking the solution deemed most likely. Using this formulation, the paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that a unique solution cannot be guaranteed, no matter how many images of the scene are taken, how they are oriented, or even how much color variation there is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions), which demonstrates the need for constraints to reduce the solution space to a reasonable size. Finally, because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool for numerically evaluating the usefulness of any added constraints.
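A toy version of the formulation makes the multiplicity of solutions concrete: enumerate every binary occupancy assignment of a small voxel grid and keep those consistent with all image-derived constraints. The Python sketch below uses simple boolean predicates as a hypothetical stand-in for the paper's first-order logic equations over colour and visibility.

```python
from itertools import product

def all_solutions(constraints, n_voxels):
    """Return every binary occupancy assignment of n_voxels voxels that
    satisfies all constraints: the full solution set, not one guess."""
    return [v for v in product((0, 1), repeat=n_voxels)
            if all(c(v) for c in constraints)]

# Toy example: a 2x2 grid (voxels 0..3, row-major) observed by two rays;
# each "image" only reports whether its ray hit occupied space.
hit_top = lambda v: (v[0] or v[1]) == 1     # top ray sees a surface
miss_bottom = lambda v: (v[2] or v[3]) == 0 # bottom ray sees empty space

print(len(all_solutions([hit_top, miss_bottom], 4)))  # prints 3, not 1
```

Even this four-voxel scene admits three distinct reconstructions, and because the space is discrete, the count is exact, which is what makes the formulation usable for measuring how much any added constraint shrinks the solution set.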
Abstract:
The use of 3D visualisation of digital information is a recent phenomenon. It relies on users understanding 3D perspectival spaces. Questions about universal access to such spaces have been debated since the inception of perspective in the European Renaissance. Perspective has since become a strong cultural influence in Western visual communication. Perspective imaging assists the process of experimentation through the sketching or modelling of ideas. In particular, the recent 3D modelling of an essentially non-dimensional cyberspace raises questions about how we think about information in general. While alternative methods clearly exist (such as Chinese isometry), they are rarely explored within the 3D paradigm. This paper seeks to generate further discussion of the historical background of perspective and its role in underpinning this emergent field. © 2005 IEEE.
Abstract:
For determining functional dependencies between two proteins, both represented as 3D structures, an essential condition is that they have one or more matching structural regions called patches. As the 3D structures of proteins are large, complex and constantly evolving, it is computationally expensive and very time-consuming to identify the possible locations and sizes of patches for a given protein against a large protein database. In this paper, we address a vector-space-based representation of protein structures, in which a patch is formed by the vectors within a region. Based on our previous work, a compact representation of the patch, named the patch signature, is applied here. A similarity measure for two patches is then derived from their signatures. To achieve fast patch matching in large protein databases, a match-and-expand strategy is proposed. Given a query patch, a set of small k-sized matching patches, called candidate patches, is generated in the match stage. The candidate patches are then filtered by enlarging k in the expand stage. Our extensive experimental results demonstrate encouraging performance on this biologically critical but previously computationally prohibitive problem.
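The match-and-expand strategy can be illustrated as a two-stage filter: a cheap comparison of small, size-k0 signatures over the whole database, followed by re-testing only the survivors at progressively larger sizes. In the Python sketch below, the data layout (query_sigs[k], db_sigs[k][i]) and the Euclidean signature distance are assumptions made for illustration, not the paper's actual representation.

```python
import numpy as np

def match_and_expand(query_sigs, db_sigs, k0, k_max, tol):
    """query_sigs[k] is the query patch signature at size k; db_sigs[k][i]
    is the signature of database region i at size k. Returns the indices
    of regions that still match the query at the largest size."""
    # Match stage: one cheap k0-sized scan over the entire database.
    cands = [i for i, s in enumerate(db_sigs[k0])
             if np.linalg.norm(s - query_sigs[k0]) < tol]
    # Expand stage: enlarge k, re-testing only surviving candidates.
    for k in range(k0 + 1, k_max + 1):
        cands = [i for i in cands
                 if np.linalg.norm(db_sigs[k][i] - query_sigs[k]) < tol]
    return cands
```

The design point is that the expensive large-patch comparisons are never run against the full database, only against the small candidate set produced by the match stage.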