959 results for 3D model
Abstract:
The geometry of a catchment constitutes the basis for distributed, physically based numerical modelling across geoscientific disciplines. In this paper, results from ground-penetrating radar (GPR) measurements are presented as a 3D model of total sediment thickness and active layer thickness in a periglacial catchment in western Greenland. Using the topography, the thickness and distribution of sediments are calculated. Vegetation classification and GPR measurements are used to scale active layer thickness from local measurements to catchment-scale models. Annual maximum active layer thickness varies from 0.3 m in wetlands to 2.0 m in barren areas and areas of exposed bedrock. Maximum sediment thickness is estimated at 12.3 m in the major valleys of the catchment. A method to correlate surface vegetation with active layer thickness is also presented. Using relatively simple methods, such as probing and vegetation classification, it is possible to upscale local point measurements to catchment-scale models in areas where the upper subsurface is relatively homogeneous. The resulting spatial model of active layer thickness can be used, in combination with the sediment model, as geometrical input to further studies of subsurface mass transport and hydrological flow paths in the periglacial catchment through numerical modelling.
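The vegetation-based upscaling step can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the class codes and the per-class thicknesses below are hypothetical placeholders for values that would be calibrated from local probing.

```python
import numpy as np

# Hypothetical mean active layer thickness (ALT, metres) per vegetation class,
# as would be calibrated from probing; only the wetland (0.3 m) and
# barren/bedrock (2.0 m) endpoints come from the abstract.
ALT_BY_CLASS = {0: 0.3,   # wetland
                1: 1.1,   # intermediate vegetation (illustrative value)
                2: 2.0}   # barren ground / exposed bedrock

def upscale_alt(veg_class_grid):
    """Map a 2D grid of vegetation class codes to a catchment-scale ALT grid."""
    lut = np.zeros(max(ALT_BY_CLASS) + 1)
    for cls, alt in ALT_BY_CLASS.items():
        lut[cls] = alt
    return lut[veg_class_grid]

alt_grid = upscale_alt(np.array([[0, 1], [2, 1]]))
```

Any classified vegetation raster then yields a spatial ALT model directly, which is the point-to-catchment upscaling idea the abstract describes.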
Abstract:
This work focuses on constructing the physical part of a virtual character. It presents the 3D modelling, kinematics, and animation techniques used to create virtual characters. An implementation is also included, divided into three parts: modelling the virtual character, building an inverse kinematics system, and creating animations with that system. First, an accurate 3D model is created from the original design; second, an inverse kinematics system is developed that precisely resolves the positions of the articulated parts that make up the virtual character; and third, animations are created using the kinematics system to achieve fluid, polished animations in real time. The result is an animated 3D component that is reusable, extensible, and exportable to other virtual environments.
Abstract:
The proliferation of video games and other applications of computer graphics in everyday life demands a much easier way to create animatable virtual human characters. Traditionally, this has been the job of highly skilled artists and animators who painstakingly model, rig, and animate their avatars, and usually have to tune them for each application and transmission/rendering platform. The emergence of virtual/mixed reality environments also calls for practical and cost-effective ways to produce custom models of actual people. The purpose of the present dissertation is to bring 3D human scanning closer to the average user. For this, two different techniques are presented, one passive and one active. The first is a fully automatic system for generating statically multi-textured avatars of real people captured with several standard cameras. Our system uses a state-of-the-art shape-from-silhouette technique to retrieve the shape of the subject. However, to deal with the lack of detail that is common in the facial region for this kind of technique, which does not handle concavities correctly, our system proposes an approach to improve the quality of this region. This face enhancement technique uses a generic facial model which is transformed according to the specific facial features of the subject. Moreover, the system features a novel technique for generating view-independent texture atlases computed from the original images. This static multi-texturing system yields a seamless texture atlas calculated by combining the color information from several photos. We suppress the color seams due to image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by color blending techniques. The second technique features a system to retrieve a fully animatable 3D model of a human using a commercial depth sensor.
Unlike other approaches in the current state of the art, our system does not require the user to remain completely still throughout the scanning process, nor is the depth sensor moved around the subject to cover its entire surface. Instead, the depth sensor remains static, and the skeleton tracking information is used to compensate for the user's movements during the scanning stage.
Abstract:
A 3D geological model has been built of the NW portion of the Bajo Segura Basin, chosen because it showed the least geological complexity. The basin has been divided into 7 synthems (named Ab, M1, M2, P1, P2, Pc and Q), and the top of the Las Ventanas Limestone Formation (Ve) has been used as the base of the basin. The construction of the 3D model allows a better geological understanding of the basin. The model points to greater tectonic complexity than initially assumed.
Abstract:
This work presents a 3D geometric model of growth strata cropping out in a fault-propagation fold associated with the Crevillente Fault (Abanilla-Alicante sector) in the Bajo Segura Basin (eastern Betic Cordillera, southern Spain). The analysis of this 3D model enables us to unravel the along-strike and along-section variations of the growth strata, providing constraints to assess the fold development and, hence, the fault kinematic evolution in space and time. We postulate that the observed along-strike dip variations are related to lateral variation in fault displacement. Along-section variations of the progressive unconformity opening angles indicate greater fault slip in the upper Tortonian–Messinian time span; from the Messinian on, quantitative analysis of the unconformity indicates constant or decreasing tectonic activity of the Crevillente Fault (Abanilla-Alicante sector); the lower abundance of striated pebbles in the Pliocene-Quaternary units could be interpreted as a decrease in the stress magnitude and, consequently, in the tectonic activity of the fault. At a regional scale, comparison of the growth successions cropping out at the northern and southern limits of the Bajo Segura Basin points to a southward migration of deformation in the basin. This means that the Bajo Segura Fault became active after the Crevillente Fault (Abanilla-Alicante sector), whose activity, according to our data, was probably already decreasing. Consequently, we propose that the seismic hazard at the northern limit of the Bajo Segura Basin should be lower than at the southern limit.
Abstract:
Underwater video transects have become a common tool for quantitative analysis of the seafloor. However, a major difficulty remains in the accurate determination of the area surveyed, as underwater navigation can be unreliable and image scaling does not always compensate for distortions due to perspective and topography. Depending on the camera set-up and available instruments, different methods of surface measurement are applied, which makes it difficult to compare data obtained by different vehicles. 3-D modelling of the seafloor based on 2-D video data and a reference scale can be used to compute subtransect dimensions. Focussing on the length of the subtransect, the data obtained from 3-D models created with the software PhotoModeler Scanner are compared with those determined from underwater acoustic positioning (ultra-short baseline, USBL) and bottom tracking (Doppler velocity log, DVL). 3-D model building and scaling were successfully conducted on all three tested set-ups, and the distortion of the reference scales due to substrate roughness was identified as the main source of imprecision. Acoustic positioning was generally inaccurate, and bottom tracking was unreliable on rough terrain. Subtransect lengths assessed with PhotoModeler were on average 20% longer than those derived from acoustic positioning, owing to the higher spatial resolution and the inclusion of slope. On a high-relief wall, bottom tracking and 3-D modelling yielded similar results. At present, 3-D modelling is the most powerful, albeit the most time-consuming, method for accurate determination of video subtransect dimensions.
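The effect of including slope in the length estimate is easy to see numerically. A minimal sketch (not the PhotoModeler workflow itself): summing 3D segment lengths along a reconstructed transect versus measuring its horizontal projection.

```python
import numpy as np

def transect_length(xyz):
    """Cumulative length of a transect sampled as a sequence of (x, y, z) points."""
    d = np.diff(np.asarray(xyz, dtype=float), axis=0)
    return float(np.sqrt((d ** 2).sum(axis=1)).sum())

# A 4 m horizontal run that also climbs 3 m measures 5 m in 3D:
pts = [(0.0, 0.0, 0.0), (4.0, 0.0, 3.0)]
slope_length = transect_length(pts)                                 # 5.0
planar_length = transect_length([(x, y, 0.0) for x, y, _ in pts])   # 4.0
```

Acoustic positioning effectively reports the planar length, which is one reason the 3-D model lengths in the abstract come out longer on sloping terrain.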
Abstract:
This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This system is known to capture highly detailed deforming 3D surfaces at high frame rates without any expensive hardware or synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the lighting setup and how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and to simultaneously estimate the diffuse reflectance model. In the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
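The first, RANSAC-based step can be sketched as follows, under simplifying assumptions not stated in the paper: a single scalar albedo rho with a Lambertian model I = rho * max(0, n.l), normals taken from the coarse model, and one known light direction. Points that deviate from the model (e.g. specular highlights) fall out as outliers, which is the set handed to the second, non-Lambertian stage.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_diffuse_ransac(normals, light, intensities, iters=200, tol=0.05):
    """RANSAC fit of a single diffuse albedo rho in I = rho * max(0, n.l).
    Returns the best albedo and a boolean inlier mask; outliers are the
    points that a subsequent non-Lambertian fit would handle."""
    shading = np.clip(normals @ light, 0.0, None)
    best_rho, best_inliers = None, None
    for _ in range(iters):
        i = rng.integers(len(intensities))
        if shading[i] < 1e-6:
            continue  # degenerate sample: surface faces away from the light
        rho = intensities[i] / shading[i]
        inliers = np.abs(intensities - rho * shading) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_rho, best_inliers = rho, inliers
    return best_rho, best_inliers
```

The real setup is multi-spectral (one albedo per channel) and estimates the light directions too, but the consensus logic is the same.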
Abstract:
This paper presents the University of Economics – Varna through a 3D model built with 3ds Max. Founded on 14 May 1920, the University of Economics – Varna is a cultural institution with a place and style of its own. With the emergence of three-dimensional modelling, we have entered a new stage in the evolution of computer graphics. The main goal is to preserve the historical vision, to demonstrate forward thinking, and to use future-oriented approaches.
Abstract:
We study the growth of the explosion energy after shock revival in neutrino-driven explosions in two and three dimensions (2D/3D) using multi-group neutrino hydrodynamics simulations of an 11.2 M⊙ star. The 3D model shows a faster and steadier growth of the explosion energy and already shows signs of subsiding accretion after one second. By contrast, the growth of the explosion energy in 2D is unsteady, and accretion lasts for several seconds as confirmed by additional long-time simulations of stars of similar masses. Appreciable explosion energies can still be reached, albeit at the expense of rather high neutron star masses. In 2D, the binding energy at the gain radius is larger because the strong excitation of downward-propagating g modes removes energy from the freshly accreted material in the downflows. Consequently, the mass outflow rate is considerably lower in 2D than in 3D. This is only partially compensated by additional heating by outward-propagating acoustic waves in 2D. Moreover, the mass outflow rate in 2D is reduced because much of the neutrino energy deposition occurs in downflows or bubbles confined by secondary shocks without driving outflows. Episodic constriction of outflows and vertical mixing of colder shocked material and hot, neutrino-heated ejecta due to Rayleigh–Taylor instability further hamper the growth of the explosion energy in 2D. Further simulations will be necessary to determine whether these effects are generic over a wider range of supernova progenitors.
Abstract:
The fruit of certain mango cultivars (e.g., 'Honey Gold') can develop blush on their skin. Skin blush is red pigmentation from the accumulation of anthocyanins. Anthocyanin biosynthesis is related to environmental determinants, including the light received by the fruit. It has been observed that mango skin blush varies with position in the tree canopy, but little investigation into this spatial relationship has been conducted. The objective of this preliminary study was to describe a 'Honey Gold' mango tree by capturing its three-dimensional (3D) architecture. A light path tracing model, QuasiMC, was then used to predict the light received by the fruit. The 3D model was used to better understand the relationship between mango fruit skin blush and fruit position in the canopy. The digitised mango tree mimicked the real tree at a high level of detail. Observations of mango skin blush distribution supported the proposition that sunlight exposure is an absolute requirement for anthocyanin development: no blush developed on shaded skin. The study affirmed that 3D mapping could allow for virtual experiments, for example virtual canopy thinning (e.g., 'window pruning') to admit more sunlight with a view to improving fruit blush. Improvements to 3D modelling of mango skin blush could focus on increasing accuracy, e.g., measurement of leaf light reflectance and transmission, and inclusion of the effect of shading by branches.
Abstract:
Nowadays, the new generation of computers provides performance high enough to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and an essential prerequisite for moving through that environment. Traditionally, mobile robots have used a combination of sensors from different technologies: lasers, sonars and contact sensors have typically featured in mobile robotic architectures. Color cameras, however, are an important sensor, because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide colored 3D point clouds at high frame rates, has made computer vision even more relevant to mobile robotics. The combination of visual and 3D data allows systems to apply both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance. In addition, the acquisition of a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is obtained, the system can add furniture pieces using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric features. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This Self-Organizing Map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, Self-Organizing Maps have primarily been computed offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes in a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometrical 3D compression method seeks to reduce the 3D information using plane detection as the basic structure for compressing the data. This is because our target environments are man-made, so many points belong to planar surfaces. Our method achieves good compression results in such man-made scenarios, and the detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we have also demonstrated the capabilities of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
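The plane-based compression idea can be sketched with a least-squares plane fit. This is an illustrative reconstruction under simplified assumptions (one dominant plane, a fixed distance threshold), not the thesis implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set via SVD.
    Returns (unit normal n, offset d) with n.x + d = 0 on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                       # direction of least variance
    return n, -float(n @ centroid)

def compress_planar(points, tol=0.01):
    """Store points within tol of the fitted plane as the plane parameters
    alone; only the off-plane points are kept explicitly."""
    n, d = fit_plane(points)
    off_plane = points[np.abs(points @ n + d) >= tol]
    return (n, d), off_plane
```

In a man-made scene the off-plane remainder is small, which is where the compression comes from; the detected planes can also feed surface reconstruction or plane-based registration, as the abstract notes.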
Abstract:
The purpose of this work is to present a methodology for characterizing the transmissivity of the Hercynian granites and schist–greywacke complex metasediments surrounding and underlying the old Quinta do Bispo uranium mining site. The methodology begins with modelling of lithologies and weathering levels, followed by conditional simulation of fracture density. Fracture density is then converted into a 3D model of transmissivity via a relationship with pumping-test results.
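The final conversion step can be sketched as a calibration curve. A minimal illustration, assuming a log-linear relation between simulated fracture density and pumping-test transmissivity (the actual relationship used in the work may differ):

```python
import numpy as np

def calibrate(density, transmissivity):
    """Fit log10(T) = a * density + b to collocated pumping-test data."""
    a, b = np.polyfit(density, np.log10(transmissivity), 1)
    return a, b

def density_to_transmissivity(density_grid, a, b):
    """Apply the calibration to a simulated fracture-density model,
    turning it into a 3D transmissivity model."""
    return 10.0 ** (a * np.asarray(density_grid, dtype=float) + b)
```

Applied cell by cell to the conditional simulation, this turns the fracture-density realization into the 3D transmissivity model the abstract describes.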
Abstract:
Teknova has 2D steady-state models of the calciner but wishes, in the long term, to have a 3D model that can also cover unsteady conditions and can model the loss of axisymmetry that sometimes occurs. Teknova also wishes to understand the processes happening around the tip of the upper electrode, in particular the formation of a lip on it and the shape of the empty region below it. The Study Group proposed potential models for the degree of graphitization and for the granular flow, and also considered the upper electrode in detail. The proposed model for lip formation is sublimation of carbon from the hottest parts of the furnace with redeposition in the region around the electrode, which may stick particles onto the electrode surface. In this model the region below the electrode would be a void, roughly a vertex-down conical cavity. The electric field near the lower rim of the electrode will then have a singularity, and so the most intense heating of the charge will be around the rim. We conjecture that the reason the lower electrode lasts so much longer than the upper is that it is not adjacent to a cavity like this, and therefore does not have a singularity in the field.
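The rim singularity can be made concrete with a standard electrostatics result (a textbook illustration, not taken from the Study Group report): near the sharp edge of a thin charged conducting disc of radius $a$, the surface charge density, and hence the field at the surface, diverges with the inverse square root of the distance to the edge:

```latex
% Field near the rim of a thin conducting disc of radius a
% (standard electrostatics, illustrating the rim singularity).
\sigma(r) \propto \frac{1}{\sqrt{a^2 - r^2}}
\quad\Longrightarrow\quad
E \sim \rho^{-1/2}, \qquad \rho = a - r \to 0 .
```

Since Joule heating scales as $|E|^2$, the deposition concentrates at the rim, consistent with the conjecture that the most intense heating of the charge occurs there.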
3D Surveying and Data Management towards the Realization of a Knowledge System for Cultural Heritage
Abstract:
The research activities involved the application of Geomatic techniques in the Cultural Heritage field, along two themes. Firstly, the application of high-precision surveying techniques for the restoration and interpretation of relevant monuments and archaeological finds. The main case concerns the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the monument, both geometric and radiometric aspects were crucial. The final product was the basis of a 3D information system, a shared tool through which the different figures involved in the restoration activities contributed in a multidisciplinary approach. Secondly, the arrangement of 3D databases for a Building Information Modeling (BIM) approach, in a process which involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application was conducted for the church of San Michele in Acerboli in Santarcangelo di Romagna. The survey was performed by integrating classical and modern Geomatic techniques, and the point cloud representing the church was used to develop an HBIM model in which the relevant information about the building could be stored and georeferenced. A second application concerns the domus of Obellio Firmo in Pompeii, also surveyed by integrating classical and modern Geomatic techniques. A historical analysis permitted the definition of phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the different aspects: documental, analytic and reconstructive.
Abstract:
Augmented reality (AR) is a new technology adopted in prostate surgery with the aim of improving preservation of the neurovascular bundles (NVB) and avoiding positive surgical margins (PSM). We prospectively enrolled patients diagnosed with prostate cancer (PCa) on the basis of targeted fusion biopsy with positive mpMRI. Before surgery, the enrolled patients underwent reconstruction of a virtual 3D model based on preoperative mpMRI images. The surgeon then performed RARP with the aid of the 3D model projected in AR inside the robotic console (AR-3D guided RARP). Patients undergoing AR RARP were compared with those undergoing standard RARP in the same period. Overall, PSM rates were comparable between the two groups; PSMs at the level of the index lesion were significantly lower in patients in the AR-3D group (5%) than in the control group (20%; p = 0.01). The new AR-3D guidance technique for IFS analysis may allow a reduction of PSMs at the index lesion level.