890 results for Computer Graphics, 3D Studio Max, Unity 3D, PlayMaker, Progettazione, Sviluppo, Videogioco


Relevance: 50.00%

Abstract:

The three-dimensional nature of building construction calls for 3D drawing as the best tool for design and for conveying technical and formal knowledge. The aim of this paper is to show the application of 3D graphic expression in a historical analysis of the evolution of the industrialized envelope in architecture, identifying its main technical and constructive determinants. The study compares the evolution of the use of industrialized construction systems through a graphic analysis of the most notable construction solutions. The methodology is based on identifying and studying selected industrialized construction systems composed of lightweight materials, as well as works of architecture that were influential in the evolution of the architectural envelope in the second half of the twentieth century. 3D graphic representation helps to compare the analysed works in technological and formal terms, confirming the usefulness of computer-aided drawing in the constructive analysis carried out. In conclusion, by affording a better understanding of the spatial characteristics of construction solutions, 3D architectural drawing contributes to the analysis of the material and functional properties of industrialized construction systems and their application to architectural design, helping to refine our knowledge of them and increasing the constructive quality and social commitment of architectural proposals.

Relevance: 50.00%

Abstract:

Since the beginning of 3D computer vision, techniques that reduce the data to a tractable size while preserving the important aspects of the scene have been necessary. With the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this has become even more relevant. Many applications that use these sensors need a preprocessing step that downsamples the data, either to reduce processing time or to improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of downsampling in a real application, a 3D non-rigid registration is performed on the sampled data. From the experimentation we conclude that, depending on the purpose of the application, some sampling kernels can drastically improve the results. The bilinear- and GNG-based methods provide homogeneous point clouds, whereas the color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid registration application, a color-based sampled point cloud makes it possible to properly register two datasets in cases where intensity data are relevant to the model, outperforming a purely homogeneous sampling.
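
As a rough illustration of the contrast between a homogeneous sampling and a feature-preserving one, the sketch below implements a uniform random sampler and a simplified normal-based sampler in NumPy. It is not the paper's code: the deviation-from-the-mean-normal criterion and the 20-degree threshold are stand-ins for a real normal-based kernel.

    import numpy as np

    def uniform_downsample(points, keep_ratio, rng=None):
        """Homogeneous sampling: keep a random fraction of the cloud."""
        rng = np.random.default_rng(0) if rng is None else rng
        n_keep = max(1, int(len(points) * keep_ratio))
        idx = rng.choice(len(points), size=n_keep, replace=False)
        return points[idx]

    def normal_based_downsample(points, normals, keep_ratio,
                                angle_thresh_deg=20.0, rng=None):
        """Keep every point whose normal deviates strongly from the cloud's
        mean normal (a crude proxy for feature areas such as edges and
        corners), then fill the remaining budget with a uniform sample of
        the flat regions."""
        rng = np.random.default_rng(0) if rng is None else rng
        mean_n = normals.mean(axis=0)
        mean_n /= np.linalg.norm(mean_n)
        deviation = np.degrees(np.arccos(np.clip(normals @ mean_n, -1.0, 1.0)))
        feature_idx = np.flatnonzero(deviation > angle_thresh_deg)
        flat_idx = np.flatnonzero(deviation <= angle_thresh_deg)
        budget = max(1, int(len(points) * keep_ratio))
        if len(feature_idx) >= budget:
            keep = rng.choice(feature_idx, size=budget, replace=False)
        else:
            fill = rng.choice(flat_idx, size=budget - len(feature_idx),
                              replace=False)
            keep = np.concatenate([feature_idx, fill])
        return points[keep]

The difference in behaviour mirrors the paper's finding: the uniform sampler yields a homogeneous cloud, while the normal-based one concentrates the point budget where the geometry varies.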

Relevance: 50.00%

Abstract:

This final dissertation presents a proposed translation of several chapters of the volume "Adama ou la vie en 3D" by Valentine Goby. The book is part of the series Français d'ailleurs, supported and promoted by the Cité nationale de l'histoire de l'immigration in Paris. After a brief summary of the history of this institution, I propose a translation of three chapters dealing with the laws governing the issuing of documents to immigrants, followed by an analysis of the translation at the lexical and morphosyntactic levels, and then by an in-depth look at the political and sociological situation of France in the 1980s and 1990s, the period in which the story takes place.

Relevance: 50.00%

Abstract:

An industrial manipulator equipped with an automatic clay extruder is used to build a machine that can additively manufacture clay objects. The desired geometries are designed in 3D modeling software and then sliced into a sequence of layers with the same thickness as the extruded clay section. The profiles of each layer are transformed into trajectories for the extruder and hence for the end-effector of the manipulator. The goal of this thesis is to improve the algorithm for the inverse kinematics resolution and to integrate the routine into the development software that controls the machine (Rhino/Grasshopper). The kinematic model is described by homogeneous transformations, adopting the standard Denavit-Hartenberg convention. The function is implemented in C# and was preliminarily tested in Matlab. The outcome of this work is a substantial reduction of the algorithm's computation time, which is halved.
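
For reference, the standard Denavit-Hartenberg link transform used in such a kinematic model can be written compactly. The sketch below is in Python/NumPy rather than the thesis's C#, and the joint table in the usage line is a made-up two-link example, not the machine's actual parameters.

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Homogeneous transform between consecutive links under the standard
        Denavit-Hartenberg convention:
        Rot_z(theta) . Trans_z(d) . Trans_x(a) . Rot_x(alpha)."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])

    def forward_kinematics(dh_rows, joint_angles):
        """Chain the per-link transforms to get the end-effector pose.
        dh_rows holds the constant (d, a, alpha) parameters per link."""
        T = np.eye(4)
        for (d, a, alpha), theta in zip(dh_rows, joint_angles):
            T = T @ dh_transform(theta, d, a, alpha)
        return T

    # Example: pose of a hypothetical 2-link planar arm (d=0, alpha=0).
    T = forward_kinematics([(0.0, 0.3, 0.0), (0.0, 0.2, 0.0)],
                           [np.pi / 4, -np.pi / 6])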

Relevance: 50.00%

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance: 50.00%

Abstract:

Beyond the inherent technical challenges, current research into the three-dimensional surface correspondence problem is hampered by a lack of uniform terminology, an abundance of application-specific algorithms, and the absence of a consistent model for comparing existing approaches and developing new ones. This paper addresses these challenges by presenting a framework for analysing, comparing, developing, and implementing surface correspondence algorithms. The framework uses five distinct stages to establish correspondence between surfaces. It is general, encompassing a wide variety of existing techniques, and flexible, facilitating the synthesis of new correspondence algorithms. The paper reviews existing surface correspondence algorithms and shows how they fit into the correspondence framework. It also shows how the framework can be used to analyse and compare existing algorithms and to develop new ones using the framework's modular structure. Six algorithms, four existing and two new, are implemented using the framework. Each implemented algorithm is used to match a number of surface pairs. The results demonstrate that the framework implementations are faithful to the existing algorithms and that powerful new surface correspondence algorithms can be created. © 2004 Elsevier Inc. All rights reserved.
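
The abstract does not enumerate the five stages, so the following Python sketch only illustrates the modular, plug-and-play idea with hypothetical placeholder stages; swapping any single stage would yield a new correspondence algorithm.

    from typing import Any, Callable, List

    class CorrespondencePipeline:
        """Chains interchangeable stages over a surface pair."""
        def __init__(self, stages: List[Callable[[Any], Any]]):
            self.stages = stages

        def run(self, surface_pair):
            data = surface_pair
            for stage in self.stages:
                data = stage(data)
            return data

    # Hypothetical stage names -- placeholders, not the paper's terminology.
    def preprocess(d): return d
    def extract_features(d): return d
    def match_features(d): return d
    def filter_matches(d): return d
    def densify_correspondence(d): return d

    pipeline = CorrespondencePipeline(
        [preprocess, extract_features, match_features,
         filter_matches, densify_correspondence])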

Relevance: 50.00%

Abstract:

This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. area-based matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking the solution deemed most likely. Using this formulation, the paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that a unique solution cannot be guaranteed, no matter how many images are taken of the scene, how they are oriented, or how much color variation the scene itself contains. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 × 5 voxel space yields 10 to 10^7 solutions), which shows the need for constraints to reduce the solution space to a reasonable size. Finally, because of the discrete nature of the formulation, the size of the solution space can be calculated easily, making the formulation a useful tool for numerically evaluating the usefulness of any added constraints.
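
A toy version of this idea can be brute-forced: enumerate every occupancy/colour assignment of a tiny voxel grid and count how many reproduce the observed images. The sketch below (assumed 1-D orthographic "cameras", two voxels, two colours) is only meant to show why the solution count explodes; it is not the paper's first-order-logic formulation.

    import itertools

    def render(row, direction):
        """Orthographic 1-D camera: the first occupied voxel's colour is
        seen. direction=+1 looks from the left, -1 from the right.
        None means background (no occupied voxel along the ray)."""
        cells = row if direction == 1 else row[::-1]
        for c in cells:
            if c is not None:
                return c
        return None

    def count_solutions(observed_left, observed_right, colours=("R", "G")):
        n = 2  # two voxels per row keeps the enumeration trivial
        states = [None] + list(colours)  # each voxel: empty or one colour
        count = 0
        for row in itertools.product(states, repeat=n):
            if (render(row, 1) == observed_left
                    and render(row, -1) == observed_right):
                count += 1
        return count

    # Several distinct assignments explain the same pair of observations.
    print(count_solutions("R", "R"))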

Relevance: 50.00%

Abstract:

The use of 3D visualisation of digital information is a recent phenomenon. It relies on users understanding 3D perspectival spaces. Questions about the universal accessibility of such spaces have been debated since perspective's inception in the European Renaissance. Perspective has since become a strong cultural influence in Western visual communication. Perspective imaging assists experimentation through the sketching or modelling of ideas. In particular, the recent 3D modelling of an essentially non-dimensional Cyber-space raises questions about how we think about information in general. While alternative methods clearly exist (such as Chinese isometry), they are rarely explored within the 3D paradigm. This paper seeks to generate further discussion on the historical background of perspective and its role in underpinning this emergent field. © 2005 IEEE.

Relevance: 50.00%

Abstract:

For determining functional dependencies between two proteins, both represented as 3D structures, an essential condition is that they have one or more matching structural regions, called patches. As 3D protein structures are large, complex, and constantly evolving, it is computationally expensive and very time-consuming to identify the possible locations and sizes of patches for a given protein against a large protein database. In this paper, we address a vector-space-based representation of protein structures, in which a patch is formed by the vectors within a region. Based on our previous work, a compact representation of the patch, named a patch signature, is applied here. A similarity measure between two patches is then derived from their signatures. To achieve fast patch matching in large protein databases, a match-and-expand strategy is proposed. Given a query patch, a set of small k-sized matching patches, called candidate patches, is generated in the match stage. The candidate patches are then filtered by enlarging k in the expand stage. Our extensive experimental results demonstrate encouraging performance on this biologically critical but previously computationally prohibitive problem.
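
A minimal sketch of the match-and-expand idea follows. The signature used here (mean and spread of the member vectors) and the nested sub-patch layout are simplifying assumptions, not the paper's actual patch signature; they only show how a cheap signature comparison proposes small candidates that must keep matching as k grows.

    import numpy as np

    def signature(vectors):
        """Toy patch signature: mean and spread of the member vectors.
        The paper's compact signature is richer; this is a stand-in."""
        v = np.asarray(vectors, dtype=float)
        return np.concatenate([v.mean(axis=0), v.std(axis=0)])

    def match_and_expand(query, database_patches, k0=4, tol=0.5):
        """query, database_patches: arrays of vectors, each ordered so that
        the first k rows form a k-sized sub-patch (a simplifying
        assumption of this sketch)."""
        # Match stage: cheap signature comparison on k0-sized sub-patches.
        q_sig = signature(query[:k0])
        candidates = [p for p in database_patches
                      if len(p) >= k0
                      and np.linalg.norm(signature(p[:k0]) - q_sig) < tol]
        # Expand stage: enlarge k and re-filter the surviving candidates.
        k = k0
        while candidates and k < len(query):
            k = min(2 * k, len(query))
            q_sig = signature(query[:k])
            candidates = [p for p in candidates
                          if len(p) >= k
                          and np.linalg.norm(signature(p[:k]) - q_sig) < tol]
        return candidates

The payoff of the strategy is that the expensive large-k comparisons run only on the few candidates that survive the cheap match stage.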

Relevance: 50.00%

Abstract:

This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. Such a system is known to capture highly detailed deforming 3D surfaces at high frame rates without any expensive hardware or a synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the lighting setup and on how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step, a RANSAC search identifies purely diffuse points on the face and simultaneously estimates the diffuse reflectance model; in the second step, non-linear optimization fits a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
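
The first, RANSAC-based step can be sketched as fitting the Lambertian model I = n · L per spectral channel, where the light vector L absorbs source intensity and an assumed-constant albedo, and shadowing/clamping is ignored. This is an illustrative simplification, not the paper's calibration code.

    import numpy as np

    def ransac_lambertian(normals, intensities, iters=500, thresh=0.05,
                          rng=None):
        """normals: (n, 3) unit normals from the coarse model;
        intensities: (n,) observed pixel values for one spectral channel.
        Returns the fitted light vector L and the diffuse-inlier mask;
        the outliers would go on to the non-Lambertian fitting step."""
        rng = np.random.default_rng(0) if rng is None else rng
        n_pts = len(normals)
        best_inliers = np.zeros(n_pts, dtype=bool)
        for _ in range(iters):
            idx = rng.choice(n_pts, size=3, replace=False)  # 3 eqs, 3 unknowns
            try:
                L = np.linalg.solve(normals[idx], intensities[idx])
            except np.linalg.LinAlgError:
                continue  # degenerate (coplanar) minimal sample
            inliers = np.abs(normals @ L - intensities) < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if not best_inliers.any():
            raise ValueError("no consistent diffuse sample found")
        # Least-squares refit of I = n . L on all diffuse inliers.
        L, *_ = np.linalg.lstsq(normals[best_inliers],
                                intensities[best_inliers], rcond=None)
        return L, best_inliers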