979 results for Automotive 3D modeling


Relevance:

30.00%

Publisher:

Abstract:

A limestone sample was scanned using computed tomography (CT), and the hydraulic conductivity of the 3D reconstructed sample was determined using Lattice-Boltzmann methods (LBM) at varying scales. Due to the shape and size of the original sample, it was challenging to obtain a consistent rectilinear test sample. Through visual inspection, however, 91 mm and 76 mm samples were digitally cut from the original. The samples had porosities of 58% and 64% and produced hydraulic conductivity values of K = 13.5 m/s and K = 34.5 m/s, respectively. Both of these samples were re-sampled to 1/8 and 1/64 of their original size to produce new virtual samples at lower resolutions of 0.542 mm/lu and 1.084 mm/lu, while still representing the same physical dimensions. The hydraulic conductivity tended to increase slightly as the resolution became coarser. In order to determine a representative elementary volume (REV), the 91 mm sample was also sub-sampled into blocks 1/8 and 1/64 the size of the original. The results were consistent with analytical expectations, such as those produced by the Kozeny-Carman equation. A definitive REV size was not reached, however, indicating the need for a larger sample. The methods described here demonstrate the ability of LBM to test rock structures and sizes not normally attainable.
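As a rough cross-check of the trends reported above, the Kozeny-Carman relation can be sketched in a few lines. The grain diameter used here is a hypothetical illustration value, not a measured property of the sample, so only the porosity trend (not the absolute K) is meaningful:

```python
# Hedged sketch: Kozeny-Carman estimate of hydraulic conductivity from
# porosity, for qualitative comparison with the LBM results. The grain
# diameter d_grain is an illustrative assumption, not from the study.

def kozeny_carman_K(porosity, d_grain, rho=998.0, g=9.81, mu=1.0e-3):
    """Hydraulic conductivity K (m/s) from the Kozeny-Carman relation.

    k = phi^3 * d^2 / (180 * (1 - phi)^2)   (intrinsic permeability, m^2)
    K = k * rho * g / mu                    (hydraulic conductivity, m/s)
    """
    phi = porosity
    k = phi**3 * d_grain**2 / (180.0 * (1.0 - phi)**2)
    return k * rho * g / mu

# Porosities of the two digitally cut samples (58% and 64%): higher
# porosity gives higher K, matching the ordering reported in the abstract.
for phi in (0.58, 0.64):
    print(f"phi={phi:.2f}  K={kozeny_carman_K(phi, d_grain=5e-3):.2f} m/s")
```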

Relevance:

30.00%

Publisher:

Abstract:

A two-dimensional (2D) finite-difference time-domain (FDTD) method is used to analyze two different models of multi-conductor transmission lines (MTLs). The first model is a two-conductor MTL and the second is a three-conductor MTL. Apart from the MTLs, a three-dimensional (3D) FDTD method is used to analyze a three-patch microstrip parasitic array. While the MTL analysis is entirely in the time domain, the microstrip parasitic array is a study of the scattering parameter S11 in the frequency domain. The results clearly indicate that FDTD is an efficient and accurate tool for modeling and analyzing multi-conductor transmission lines as well as microstrip antennas and arrays.
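The leapfrog update at the heart of any FDTD solver can be illustrated with a minimal 1D sketch (the analyses above are 2D and 3D; the grid size, source position, and step count below are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: a minimal 1D FDTD loop (Yee scheme, normalized units)
# showing the staggered leapfrog E/H updates behind the 2D/3D analyses.
# Grid size, source, and step counts are illustrative, not from the study.

nz, nt = 200, 300
ez = np.zeros(nz)        # electric field at integer grid points
hy = np.zeros(nz - 1)    # magnetic field, staggered half a cell
S = 0.5                  # Courant number (1D stability requires S <= 1)

for t in range(nt):
    hy += S * (ez[1:] - ez[:-1])                 # update H from curl of E
    ez[1:-1] += S * (hy[1:] - hy[:-1])           # update E from curl of H
    ez[nz // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

print("peak |Ez| on the grid:", float(np.abs(ez).max()))
```

The same two-line update pattern, extended to more field components and dimensions, is what the 2D MTL and 3D microstrip analyses iterate.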

Relevance:

30.00%

Publisher:

Abstract:

The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these is the need to stably and accurately represent both the fluid-fluid interface between water and air and the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.

We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them a heightened conservation of fluid volume and the representation of subgrid structures.

Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
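The level set idea underlying GALSM can be sketched in its simplest form: first-order upwind advection of a signed distance function whose zero contour marks the interface. This is a crude stand-in for, not an implementation of, the high-order method described above; grid, velocity, and step counts are illustrative:

```python
import numpy as np

# Hedged sketch: first-order upwind advection of a 1D level set function
# phi whose zero crossing marks the water-air interface. A far simpler
# stand-in for the gradient-augmented level set method (GALSM) of the text.

n, dx, dt, u = 400, 0.01, 0.004, 1.0   # illustrative grid and velocity
x = np.arange(n) * dx
phi = x - 1.0                           # signed distance; interface at x = 1.0

for _ in range(250):
    phi[1:] -= u * dt / dx * (phi[1:] - phi[:-1])  # upwind difference, u > 0
    phi[0] = phi[1] - dx                # simple inflow extrapolation

interface = x[np.argmin(np.abs(phi))]
print(f"interface moved to x = {interface:.2f}")   # expect 1.0 + 250*u*dt = 2.0
```

Because phi stays a signed distance here, the zero crossing translates exactly at the advection speed; resolving curved, merging interfaces in 2D/3D is where the gradient-augmented machinery earns its keep.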

Relevance:

30.00%

Publisher:

Abstract:

With the introduction of new input devices such as multi-touch surface displays, the Nintendo WiiMote, the Microsoft Kinect, and the Leap Motion sensor, among others, the field of Human-Computer Interaction (HCI) finds itself at an important crossroads that requires solving new challenges. Given the amount of three-dimensional (3D) data available today, 3D navigation plays an important role in 3D User Interfaces (3DUI). This dissertation deals with multi-touch 3D navigation and how users can explore 3D virtual worlds using a multi-touch, non-stereo, desktop display. The contributions of this dissertation include a feature-extraction algorithm for multi-touch displays (FETOUCH), a multi-touch and gyroscope interaction technique (GyroTouch), a theoretical model for multi-touch interaction using high-level Petri Nets (PeNTa), an algorithm to resolve ambiguities in the multi-touch gesture classification process (Yield), a proposed technique for navigational experiments (FaNS), a proposed gesture (Hold-and-Roll), and an experiment prototype for 3D navigation (3DNav). The verification experiment for 3DNav was conducted with 30 human subjects of both genders. The experiment used the 3DNav prototype to present a pseudo-universe, where each user was required to find five objects using the multi-touch display and five objects using a game controller (GamePad). For the multi-touch display, 3DNav used a commercial library called GestureWorks in conjunction with Yield to resolve the ambiguity posed by the multiplicity of gestures reported by the initial classification. The experiment compared both devices. The task completion time with multi-touch was slightly shorter, but the difference was not statistically significant. The design of the experiment also included an equation that determined the subjects' level of video game console expertise, which was used to divide users into two groups: casual users and experienced users.
The study found that experienced gamers performed significantly faster with the GamePad than casual users. When looking at the groups separately, casual gamers performed significantly better using the multi-touch display compared to the GamePad. Additional results are found in this dissertation.
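The completion-time comparison described above is the kind of analysis a two-sample t statistic handles. The numbers below are fabricated illustration data, not the study's measurements, and the helper is a textbook Welch statistic, not the dissertation's analysis code:

```python
import statistics as st

# Hedged sketch: a two-sample comparison of task completion times, of the
# kind behind the multi-touch vs. GamePad analysis. All numbers are
# fabricated illustration data, not the study's measurements.

multi_touch = [41.2, 38.5, 44.0, 39.8, 42.1, 40.3]   # seconds (hypothetical)
gamepad     = [43.0, 40.9, 45.6, 41.2, 44.8, 42.5]   # seconds (hypothetical)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = st.mean(a), st.mean(b)
    va, vb = st.variance(a), st.variance(b)   # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(multi_touch, gamepad)
print(f"mean diff = {st.mean(multi_touch) - st.mean(gamepad):.2f} s, t = {t:.2f}")
```

A slightly shorter multi-touch mean with a modest |t|, as in the abstract, would not clear the usual significance threshold.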

Relevance:

30.00%

Publisher:

Abstract:

The focus of this thesis is to explore and quantify the response of satellite-based gravity observations to large-scale solid mass transfer events. The gravity signature of large-scale solid mass transfers has not been deeply explored yet, mainly due to the lack of significant events during dedicated satellite gravity missions' lifespans. In light of the next generation of gravity missions, the feasibility of employing satellite gravity observations to detect submarine and surface mass transfers is of importance for geoscience (it improves the understanding of geodynamic processes) and for geodesy (it improves the understanding of the dynamic gravity field). The aim of this thesis is twofold, focusing on assessing the feasibility of using satellite gravity observations to detect large-scale solid mass transfers and on modeling the impact of these events on the gravity field. A methodology that employs 3D forward modeling simulations and 2D wavelet multiresolution analysis is suggested to estimate the impact of solid mass transfers on satellite gravity observations. The gravity signatures of various past submarine and subaerial events were estimated. Case studies were conducted to assess the sensitivity and resolvability required to observe gravity differences caused by solid mass transfers. Simulation studies were also employed to assess the expected contribution of the Next Generation of Gravity Missions to this application.
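The forward-modeling step can be illustrated at its crudest with a point-mass approximation of a transferred mass. The mass and geometry below are illustrative assumptions, not values from a real event or from the thesis's 3D models:

```python
# Hedged sketch: point-mass forward modeling of the gravity change caused
# by a displaced mass, the simplest stand-in for the 3D forward modeling
# in the text. Mass and geometry are illustrative assumptions.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def delta_g_microgal(mass_kg, depth_m, offset_m=0.0):
    """Vertical gravity attraction of a buried point mass, in microGal."""
    r2 = depth_m**2 + offset_m**2
    g_z = G * mass_kg * depth_m / r2**1.5   # vertical component, m/s^2
    return g_z * 1e8                         # 1 m/s^2 = 1e8 microGal

# a landslide-scale mass of ~1e12 kg, observed at increasing offsets:
for off in (0.0, 5e3, 20e3):
    print(f"offset {off/1e3:5.1f} km -> {delta_g_microgal(1e12, 2e3, off):9.3f} uGal")
```

The rapid decay with offset is why the thesis asks what sensitivity and spatial resolvability a gravity mission needs before such events become detectable.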

Relevance:

30.00%

Publisher:

Abstract:

The case study of the octagonal vestibule of Villa Adriana provided the opportunity to apply digital restitution and three-dimensional modeling techniques, based on geometric modeling applications, to a building of considerable historical and artistic value, with the aim of generating its photorealistic, multi-purpose digital 3D model. In the specific case of the vestibule, a three-dimensional model of this kind is useful for documentation purposes, to support construction hypotheses, and as a tool for evaluating restoration interventions. The work made it possible to assess the critical issues in three-dimensional acquisition, modeling, and photo-modeling techniques applied in archaeology; these techniques are routinely used, with different objectives, in fields such as architecture, industrial design, cinema (special effects and animated films), and video games. In design and industrial engineering, Reverse Modeling is used to perform quality control and tolerance checks on the final product, while in cinema and video games (in combination with other software) it enables the creation of realistic models, not only of objects but also of people, to be inserted into films or games. A three-dimensional model obtained through Reverse Modeling is the result of a process opposite to design and can follow several strategies, each with specific advantages and disadvantages that make it more suitable in some cases than in others. In this study, three-dimensional acquisitions performed with laser scanning and with Structure from Motion/Dense Stereo View applications were analyzed.

Relevance:

30.00%

Publisher:

Abstract:

This research focuses on finding a fashion design methodology to reliably translate innovative two-dimensional ideas on paper, via a structural design sculpture, into an intermediate model. The author, both as a fashion designer and a researcher, has witnessed the issues which arise regarding the loss of some of the initial ideas and distortion during the transfer from two-dimensional creative sketch to three-dimensional garment. Therefore, this research is concerned with fashion designers engaged in transferring a two-dimensional sketch through the method of 'sculptural form giving'. This research applies the ideal model of conceptual sculpture to the fashion design process, akin to those used in the discipline of architecture. These parallel design disciplines share similar processes for realizing design ideas. Moreover, this research investigates and formalizes processes that utilize the measurable space between the garment and the body to help transfer garment variation and scale. In summary, this research focuses on providing fashion designers with a creative method that helps them transfer their imaginative concepts through intermediate modeling.

Relevance:

30.00%

Publisher:

Abstract:

This work presents the development and application of a three-dimensional oil spill model for predicting the movement of an oil slick in the coastal waters of Singapore. In the model, the oil slick is divided into a number of small elements for simulating the processes of spreading, advection, and turbulent diffusion. The model is capable of predicting the horizontal movement of a surface oil slick. Satellite images and field observations of oil slicks on the surface of the Singapore Straits are used to validate the newly developed model. The numerical results of the oil spill model show good agreement with the observations. In this study, the 3D model was generated from the geometrical data of the Singapore Straits waters using GAMBIT, the pre-processor of the FLUENT program.
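The element-based treatment of advection and turbulent diffusion described above is commonly realized as a Lagrangian random-walk scheme. A minimal sketch, with illustrative current and diffusivity values (not the model's calibrated ones):

```python
import math, random

# Hedged sketch: a Lagrangian particle step of the kind used in oil spill
# models, combining advection by the mean current and a random-walk term
# for turbulent diffusion. All coefficients are illustrative assumptions.

def step(particles, u, v, Dh, dt):
    """Advance (x, y) particles by advection plus 2D random-walk diffusion."""
    sigma = math.sqrt(2.0 * Dh * dt)   # random-walk std for diffusivity Dh
    return [(x + u * dt + random.gauss(0.0, sigma),
             y + v * dt + random.gauss(0.0, sigma)) for x, y in particles]

random.seed(0)
slick = [(0.0, 0.0)] * 1000           # all elements released at the origin
for _ in range(100):
    slick = step(slick, u=0.3, v=0.1, Dh=1.0, dt=60.0)   # m/s, m^2/s, s

cx = sum(x for x, _ in slick) / len(slick)
cy = sum(y for _, y in slick) / len(slick)
print(f"slick centroid after 100 min: ({cx:.0f} m, {cy:.0f} m)")
```

The centroid drifts with the current (here roughly 1.8 km east, 0.6 km north after 100 minutes) while the random-walk term spreads the slick, mimicking turbulent diffusion.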

Relevance:

30.00%

Publisher:

Abstract:

The performance, energy efficiency, and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed, and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and it facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density, and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies.
Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal, and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters, and the multitude of metrics of interest to the designer (i.e., power, performance, temperature, and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for improving this work in the future.
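The heat-removal concern can be made concrete with a one-dimensional thermal resistance stack, the simplest model of why dies far from the heat sink in an air-cooled 3D stack run hotter. All resistance and power values below are illustrative assumptions, not figures from the dissertation:

```python
# Hedged sketch: a 1D thermal resistance stack for an air-cooled 3D IC.
# Stacked dies share one path to the heat sink, so heat from upper dies
# must cross every inter-layer resistance below them. All R (K/W) and
# power (W) values are illustrative assumptions.

def layer_temps(powers_w, r_inter=0.8, r_sink=0.5, t_amb=45.0):
    """Steady-state temperature of each die (index 0 is nearest the sink)."""
    temps = []
    q = sum(powers_w)                 # total heat crossing the sink interface
    t = t_amb + q * r_sink
    temps.append(t)
    for i in range(1, len(powers_w)):
        q = sum(powers_w[i:])         # heat still flowing down from layer i up
        t = t + q * r_inter
        temps.append(t)
    return temps

print(layer_temps([20.0, 15.0, 10.0]))   # dies farther from the sink run hotter
```

Even with the lowest power assigned to the top die, it ends up hottest, which is the coupling that thermal-aware 3D design and micro-fluidic cooling both attack.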

Relevance:

30.00%

Publisher:

Abstract:

As the semiconductor industry struggles to maintain its momentum along the path of Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC technology presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. We also investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activity. In contrast to the common assumption in 2D ICs that shutdown gates are cheap and can thus be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past.
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transients of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles and the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of a CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuations and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
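The clock-power trade-off discussed above (gating saves switching activity but adds control TSVs) can be sketched with the standard dynamic power model P = alpha * C * V^2 * f. All capacitance, frequency, and activity values below are illustrative assumptions, not figures from the dissertation:

```python
# Hedged sketch: the dynamic power model that makes clock trees expensive,
# P = alpha * C * V^2 * f, applied to an illustrative wire + TSV + sink
# load. Every numeric value here is an assumption for illustration.

def clock_power_mw(wire_mm, n_tsv, n_sinks, vdd=1.0, f_hz=2e9,
                   c_wire_ff_per_mm=200.0, c_tsv_ff=40.0, c_sink_ff=5.0):
    """Dynamic clock power in mW; the clock toggles every cycle (alpha = 1)."""
    c_total_f = (wire_mm * c_wire_ff_per_mm + n_tsv * c_tsv_ff
                 + n_sinks * c_sink_ff) * 1e-15
    return c_total_f * vdd**2 * f_hz * 1e3

base = clock_power_mw(wire_mm=120.0, n_tsv=64, n_sinks=4000)
# gating adds 16 control TSVs of capacitance but removes 40% of the activity
gated = clock_power_mw(wire_mm=120.0, n_tsv=64 + 16, n_sinks=4000) * 0.6
print(f"ungated {base:.1f} mW vs gated {gated:.1f} mW")
```

The sketch mirrors the dissertation's point: control TSVs add load (and compete for placement area), so gating pays off only when the activity saved outweighs the extra capacitance.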

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, new generations of computers provide performance high enough to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and is essential for allowing robots to move through these environments. Traditionally, mobile robots have used a combination of several sensors from different technologies. Lasers, sonars, and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once you have the 3D model of a room the system can add furniture pieces using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on different scene distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, GNG is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, taking advantage of a new paradigm coined General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometrical 3D compression method seeks to reduce the 3D information using plane detection as the basic structure for compressing the data. This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our proposed method achieves good compression results in such man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we have also demonstrated the benefits of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
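The plane-based compression idea can be sketched with a least-squares plane fit via SVD: a near-planar patch of points reduces to a few plane parameters plus in-plane coordinates. The point cloud and noise level below are synthetic illustrations, not the thesis's data or pipeline:

```python
import numpy as np

# Hedged sketch: the core of plane-based 3D compression. A plane is fit
# to a point patch by SVD; points on the plane can then be stored as the
# plane parameters plus 2D in-plane coordinates. Data is synthetic.

def fit_plane(points):
    """Least-squares plane through points: returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    # the normal is the singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

rng = np.random.default_rng(0)
# a noisy horizontal patch; man-made scenes are dominated by such surfaces
pts = np.column_stack([rng.uniform(0, 1, 500),
                       rng.uniform(0, 1, 500),
                       rng.normal(0, 0.002, 500)])
c, n = fit_plane(pts)
rms = np.abs((pts - c) @ n).mean()
print(f"|normal| components: {np.round(np.abs(n), 2)}, mean residual {rms:.4f} m")
# 500 xyz points (1500 floats) reduce to 4 plane params + 1000 in-plane coords
```

Detected planes like this one can feed the surface-reconstruction and plane-based registration uses mentioned above.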

Relevance:

30.00%

Publisher:

Abstract:

Additive manufacturing, including fused deposition modeling (FDM), is transforming the built world and engineering education. Deep understanding of parts created through FDM technology has lagged behind its adoption in home, work, and academic environments. Properties of parts created from bulk materials through traditional manufacturing are understood well enough to accurately predict their behavior through analytical models. Unfortunately, Additive Manufacturing (AM) process parameters create anisotropy on a scale that fundamentally affects the part properties. Understanding AM process parameters (implemented by program algorithms called slicers) is necessary to predict part behavior. Investigating the algorithms controlling print parameters (slicers) revealed stark differences in how part layers are generated. In this work, tensile testing experiments, including a full factorial design, determined that three key factors (width, thickness, and infill density) and their interactions significantly affect the tensile properties of 3D-printed test samples.
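A two-level full factorial over the three named factors can be enumerated and its main effects computed in a few lines. The factor levels and strength responses below are fabricated for illustration, not the study's measurements:

```python
from itertools import product

# Hedged sketch: a 2^3 full factorial design over the three factors named
# in the text (width, thickness, infill density), with main effects computed
# from fabricated tensile-strength responses (MPa). Levels are illustrative.

levels = {"width": (10.0, 20.0), "thickness": (2.0, 4.0), "infill": (0.2, 0.8)}
runs = list(product(*levels.values()))      # 2^3 = 8 treatment combinations

# hypothetical responses, one per run, in the same order as `runs`
strength = [18.0, 30.0, 20.0, 33.0, 19.5, 31.0, 22.0, 36.0]

def main_effect(i):
    """Average strength at the high level of factor i minus at its low level."""
    lo_level, hi_level = list(levels.values())[i]
    hi = [s for r, s in zip(runs, strength) if r[i] == hi_level]
    lo = [s for r, s in zip(runs, strength) if r[i] == lo_level]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate(levels):
    print(f"main effect of {name}: {main_effect(i):+.2f} MPa")
```

Running every factor combination is what lets a full factorial separate main effects from the factor interactions the abstract highlights.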

Relevance:

30.00%

Publisher:

Abstract:

Anthropometric measurement protocols are characterized by a profusion of discrete or localized measurements, in an attempt to fully characterize the subject's body shape. Such protocols are used intensively in fields such as sports, forensic, and reconstructive medicine, prosthesis design, ergonomics, and the manufacture of garments, accessories, etc. With the advance of algorithms for shape recovery from samplings (digitizations), anthropometric characterization has changed significantly. This article presents the process of digital characterization of body shape, from the measurement protocols applied to the subject, through the computational environment DigitLAB (developed at the CII-CAD-CAM-CG of Universidad EAFIT) for surface recovery, to the final geometric models. Comparisons are presented between the results obtained with DigitLAB and with commercial 3D shape-recovery packages. DigitLAB's results prove superior, mainly because it takes advantage of the digitization patterns (planar contact, pixel-grid range images, etc.) and provides geometric-statistical data-processing modules that allow the shape-recovery algorithms to be applied effectively. A case study aimed at the garment industry is presented, along with others carried out on test sets common in the scientific community for benchmarking algorithms.

Relevance:

30.00%

Publisher:

Abstract:

Conventional vehicles create pollution problems, contribute to global warming, and deplete high-density fuels. To address these problems, automotive companies and universities are researching hybrid electric vehicles, in which two different power devices are used to propel a vehicle. This research studies the development and testing of a dynamic model for the Prius 2010 Hybrid Synergy Drive (HSD), a power-split device. The device was modeled and integrated with a hybrid vehicle model. To add an electric-only mode for vehicle propulsion, the hybrid synergy drive was modified by adding a clutch to carrier 1. The performance of the integrated vehicle model was tested over the UDDS drive cycle using a rule-based control strategy. The dSPACE Hardware-In-the-Loop (HIL) simulator was used for the HIL simulation test. The HIL simulation results show that the integration of the developed HSD dynamic model with a hybrid vehicle model was successful. The HSD model was able to split power and isolate engine speed from vehicle speed in hybrid mode.
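The power-split behavior rests on the planetary gear's kinematic constraint. A minimal sketch, using commonly cited Prius tooth counts as an assumption (the dissertation's dynamic model is far richer than this single algebraic relation):

```python
# Hedged sketch: the kinematic constraint of the power-split planetary gear
# at the heart of the Hybrid Synergy Drive. Tooth counts follow commonly
# cited Prius values (30 sun / 78 ring); treat them as an assumption.

S, R = 30, 78   # sun (MG1 side) and ring (output side) tooth counts

def mg1_rpm(engine_rpm, ring_rpm):
    """Solve the planetary constraint S*w_sun + R*w_ring = (S+R)*w_carrier
    for the sun gear (MG1), with the engine on the carrier."""
    return ((S + R) * engine_rpm - R * ring_rpm) / S

# Electric-only mode: engine held at 0 rpm, MG1 counter-rotates
print(mg1_rpm(engine_rpm=0, ring_rpm=2000))     # negative speed
# Hybrid mode: engine speed is decoupled from vehicle (ring) speed via MG1
print(mg1_rpm(engine_rpm=2400, ring_rpm=2000))
```

This one constraint is what lets the HSD "isolate engine speed from vehicle speed": for any ring (vehicle) speed, MG1 absorbs the difference.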

Relevance:

30.00%

Publisher:

Abstract:

For the past three decades the automotive industry has faced two main conflicting challenges: improving fuel economy and meeting emissions standards. This has driven engineers and researchers around the world to develop engines and powertrains that can meet these two daunting challenges. Focusing on internal combustion engines, there are very few options to enhance their performance beyond current standards without increasing the price considerably. Homogeneous Charge Compression Ignition (HCCI) engine technology is one combustion technique with the potential to partially meet the current critical challenges, including CAFE standards and stringent EPA emissions standards. HCCI works on very lean mixtures compared to current SI engines, resulting in very low combustion temperatures and ultra-low NOx emissions. When controlled accurately, these engines also produce ultra-low soot. On the other hand, HCCI engines face a problem of high unburnt hydrocarbon and carbon monoxide emissions. The technology also faces an acute combustion control problem which, if not dealt with properly, yields highly unfavorable operating conditions and exhaust emissions. This thesis contains two main parts. One part deals with developing an HCCI experimental setup and the other focuses on developing a grey-box modeling technique to control HCCI exhaust gas emissions. The experimental part gives complete details of the modifications made to the stock engine to run in HCCI mode. This part also includes the details and specifications of all the sensors, actuators, and other auxiliary parts attached to the conventional SI engine in order to run and monitor the engine in SI mode and in future SI-HCCI mode-switching studies. In the latter part, around 600 data points from two different HCCI setups for two different engines are studied. A grey-box model for emission prediction is developed.
The grey-box model is trained with 75% of the data, and the remaining data is used for validation. An average 70% increase in accuracy in predicting engine performance is found when using the grey-box model over an empirical (black-box) model in this study. The grey-box model provides a solution to the difficulty of real-time control of an HCCI engine. The grey-box model in this thesis is the first study in the literature to develop a control-oriented model for predicting HCCI engine emissions for control purposes.
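The grey-box idea, a known physical trend plus a fitted black-box correction, with the 75/25 train/validation split described above, can be sketched on synthetic data. The model form and "engine" data below are illustrative assumptions, not the thesis's model or measurements:

```python
import random

# Hedged sketch: a grey-box model = white-box physics term + fitted
# black-box correction, trained on 75% of the data and validated on the
# remaining 25%. The data and model form are synthetic illustrations.

random.seed(1)
# synthetic points: (load, emission) where emission = known trend + an
# unmodeled quadratic term + measurement noise
data = [(x, 2.0 * x + 0.5 * x * x + random.gauss(0, 0.1))
        for x in [random.uniform(0, 3) for _ in range(600)]]
random.shuffle(data)
split = int(0.75 * len(data))          # 75/25 train/validation split
train, valid = data[:split], data[split:]

def physics(x):                        # white-box part: known first-order trend
    return 2.0 * x

# black-box part: fit residual = a * x^2 by least squares on the training set
num = sum((y - physics(x)) * x * x for x, y in train)
den = sum(x**4 for x, _ in train)
a = num / den

err = sum(abs(y - (physics(x) + a * x * x)) for x, y in valid) / len(valid)
print(f"fitted correction a = {a:.3f}, mean validation error {err:.3f}")
```

Because the physics term carries the bulk of the prediction, the fitted part stays small and cheap to evaluate, which is what makes grey-box models attractive for real-time engine control.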