46 results for "3D virtual models"
Abstract:
Modern business practices in engineering are increasingly turning to post-manufacture service provision in an attempt to generate additional revenue streams and ensure commercial sustainability. Maintainability has always been a consideration during the design process, but in the past it has generally been treated as being of tertiary importance, behind manufacturability and primary product function, in terms of design priorities. The need to draw whole-life considerations into concurrent engineering (CE) practice has encouraged companies to address issues such as maintenance earlier in the design process, giving equal importance to all aspects of the product lifecycle. Considering design for maintainability (DFM) early in the design process has the potential to significantly reduce maintenance costs and improve overall running efficiencies as well as safety levels. However, a lack of simulation tools still hinders the adaptation of CE to include practical elements of design, and further research is therefore required to develop methods by which 'hands-on' activities such as maintenance can be fully assessed and optimised as concepts develop. Virtual Reality (VR) has the potential to address this issue, but these traditionally high-cost systems can require complex infrastructure, and their use has typically focused on aesthetic aspects of mature designs. This paper examines the application of cost-effective VR technology to the rapid assessment of aircraft interior inspection during conceptual design. It focuses on the integration of VR hardware with a typical desktop engineering system and examines the challenges of data transfer, graphics quality and the development of practical user functions within the VR environment.
Conclusions drawn to date indicate that the system has the potential to improve maintenance planning by providing a usable inspection environment that is available as soon as preliminary structural models are generated during conceptual design. Challenges remain in the efficient transfer of data between the CAD and VR environments, as well as in quantifying the benefits of the proposed approach. The results of this research will help to improve product maintainability, reduce product development cycle times and lower maintenance costs.
Abstract:
There is a requirement for better integration between design and analysis tools, which is difficult due to their different objectives, separate data representations and workflows. Currently, substantial effort is required to produce a suitable analysis model from design geometry. Robust links are required between these different representations to enable analysis attributes to be transferred between different design and analysis packages for models at various levels of fidelity.
This paper describes a novel approach for integrating design and analysis models by identifying and managing the relationships between the different representations. Three key technologies, Cellular Modeling, Virtual Topology and Equivalencing, have been employed to achieve effective simulation model management. These technologies and their implementation are discussed in detail. Prototype automated tools are introduced demonstrating how multiple simulation models can be linked and maintained to facilitate seamless integration throughout the design cycle.
Abstract:
We investigate whether pure deflagration models of Chandrasekhar-mass carbon-oxygen white dwarf stars can account for one or more subclasses of the observed population of Type Ia supernova (SN Ia) explosions. We compute a set of 3D full-star hydrodynamic explosion models, in which the deflagration strength is parametrized using the multispot ignition approach. For each model, we calculate detailed nucleosynthesis yields in a post-processing step with a 384-nuclide nuclear network. We also compute synthetic observables with our 3D Monte Carlo radiative transfer code for comparison with observations. For weak and intermediate deflagration strengths (energy release E ≲ 1.1 × 10⁵¹ erg), we find that the explosion leaves behind a bound remnant enriched with 3 to 10 per cent (by mass) of deflagration ashes. However, we do not obtain the large kick velocities recently reported in the literature. We find that weak deflagrations with E ∼ 0.5 × 10⁵¹ erg fit both the light curves and spectra of 2002cx-like SNe Ia well, and models with even lower explosion energies could explain some of the fainter members of this subclass. By comparing our synthetic observables with the properties of SNe Ia, we can exclude the brightest, most vigorously ignited models as candidates for any observed class of SN Ia: their B-V colours deviate significantly from both normal and 2002cx-like SNe Ia and they are too bright to be candidates for other subclasses.
Abstract:
Integrating analysis and design models is a complex task due to differences between the models and the architectures of the toolsets used to create them. This complexity is increased with the use of many different tools for specific tasks during an analysis process. In this work various design and analysis models are linked throughout the design lifecycle, allowing them to be moved between packages in a way not currently available. Three technologies named Cellular Modeling, Virtual Topology and Equivalencing are combined to demonstrate how different finite element meshes generated on abstract analysis geometries can be linked to their original geometry. Cellular models allow interfaces between adjacent cells to be extracted and exploited to transfer analysis attributes such as mesh associativity or boundary conditions between equivalent model representations. Virtual Topology descriptions used for geometry clean-up operations are explicitly stored so they can be reused by downstream applications. Establishing the equivalence relationships between models enables analysts to utilize multiple packages for specialist tasks without worrying about compatibility issues or substantial rework.
Abstract:
Recent work suggests that the human ear varies significantly between different subjects and can be used for identification. In principle, therefore, using ears in addition to the face within a recognition system could improve accuracy and robustness, particularly for non-frontal views. The paper describes work that investigates this hypothesis using an approach based on the construction of a 3D morphable model of the head and ear. One issue with creating a model that includes the ear is that existing training datasets contain noise and partial occlusion. Rather than exclude these regions manually, a classifier has been developed which automates this process. When combined with a robust registration algorithm the resulting system enables full head morphable models to be constructed efficiently using less constrained datasets. The algorithm has been evaluated using registration consistency, model coverage and minimalism metrics, which together demonstrate the accuracy of the approach. To make it easier to build on this work, the source code has been made available online.
Abstract:
Semiconductor fabrication involves several sequential processing steps, with the result that critical production variables are often affected by a superposition of effects over multiple steps. In this paper a Virtual Metrology (VM) system for early-stage measurement of such variables is presented; the VM system seeks to express the contribution to the output variability that is due to a defined observable part of the production line. The outputs of the proposed system may be used for process monitoring and control purposes. A second contribution of this work is the introduction of Elastic Nets, a regularization and variable selection technique for modelling highly correlated datasets, as a technique for the development of VM models. Elastic Nets and the proposed VM system are illustrated using real data from a multi-stage etch process used in the fabrication of disk drive read/write heads.
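The elastic-net penalty combines an L1 term (which zeroes irrelevant inputs) with an L2 term (which spreads weight across groups of correlated inputs rather than arbitrarily picking one). A minimal numpy-only sketch of a proximal-gradient solver illustrates this grouping behaviour; the solver, data and settings below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=2000):
    # Proximal-gradient (ISTA) solver for the objective
    # (1/2n)||y - Xw||^2 + alpha*(l1_ratio*||w||_1 + (1 - l1_ratio)/2*||w||^2)
    n, p = X.shape
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + alpha * (1 - l1_ratio))
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n + alpha * (1 - l1_ratio) * w
        w = soft_threshold(w - lr * grad, lr * alpha * l1_ratio)
    return w

# Two nearly identical "sensor" columns plus three irrelevant ones.
rng = np.random.default_rng(0)
z = rng.normal(size=200)
X = np.column_stack([z + 0.01 * rng.normal(size=200),
                     z + 0.01 * rng.normal(size=200),
                     rng.normal(size=(200, 3))])
y = z + 0.1 * rng.normal(size=200)
w = elastic_net(X, y)
```

Because of the L2 term, both of the near-duplicate columns keep a share of the weight, while the irrelevant columns are thresholded to (near) zero.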
Abstract:
Increasingly, semiconductor manufacturers are exploring opportunities for virtual metrology (VM) enabled process monitoring and control as a means of reducing non-value-added metrology and achieving ever more demanding wafer fabrication tolerances. However, developing robust, reliable and interpretable VM models can be very challenging due to the highly correlated input space often associated with the underpinning data sets. A particularly pertinent example is etch rate prediction of plasma etch processes from multichannel optical emission spectroscopy data. This paper proposes a novel input-clustering based forward stepwise regression methodology for VM model building in such highly correlated input spaces. Max Separation Clustering (MSC) is employed as a pre-processing step to identify a reduced set of well-conditioned, representative variables that can then be used as inputs to state-of-the-art model building techniques such as Forward Selection Regression (FSR), Ridge regression, LASSO and Forward Selection Ridge Regression (FSRR). The methodology is validated on a benchmark semiconductor plasma etch dataset and the results obtained are compared with those achieved when the state-of-the-art approaches are applied directly to the data without the MSC pre-processing step. Significant performance improvements are observed when MSC is combined with FSR (13%) and FSRR (8.5%), but not with Ridge Regression (-1%) or LASSO (-32%). The optimal VM results are obtained using the MSC-FSR and MSC-FSRR generated models.
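Forward Selection Regression, the base learner in the methodology above, is a greedy procedure: at each step it adds the candidate variable that most reduces the residual sum of squares. The sketch below shows that core loop only; the MSC clustering pre-step and stopping criteria are omitted, and the toy data are an assumption for illustration:

```python
import numpy as np

def forward_selection(X, y, max_vars=3):
    # Greedy forward stepwise regression: at each step, add the variable
    # whose inclusion gives the lowest residual sum of squares.
    n, p = X.shape
    selected, remaining = [], list(range(p))
    for _ in range(max_vars):
        best_j, best_rss = None, np.inf
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy data: only variables 0 and 3 drive the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=300)
selected = forward_selection(X, y)
```

On well-conditioned inputs such as these, the two informative variables are recovered in the first two steps; on the highly correlated raw spectroscopy data the abstract describes, this greedy choice becomes unstable, which is the motivation for the clustering pre-step.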
Abstract:
Virtual metrology (VM) aims to predict metrology values using sensor data from production equipment and physical metrology values of preceding samples. VM is a promising technology for the semiconductor manufacturing industry as it can reduce the frequency of in-line metrology operations and provide supportive information for other operations such as fault detection, predictive maintenance and run-to-run control. The prediction models for VM can be from a large variety of linear and nonlinear regression methods and the selection of a proper regression method for a specific VM problem is not straightforward, especially when the candidate predictor set is of high dimension, correlated and noisy. Using process data from a benchmark semiconductor manufacturing process, this paper evaluates the performance of four typical regression methods for VM: multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), neural networks (NN) and Gaussian process regression (GPR). It is observed that GPR performs the best among the four methods and that, remarkably, the performance of linear regression approaches that of GPR as the subset of selected input variables is increased. The observed competitiveness of high-dimensional linear regression models, which does not hold true in general, is explained in the context of extreme learning machines and functional link neural networks.
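The GPR posterior mean that such a comparison evaluates can be written in a few lines of numpy. The sketch below uses an RBF kernel with fixed hyperparameters and a toy nonlinear response, all chosen here for illustration rather than taken from the benchmark study, and contrasts it with an ordinary linear fit:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / ell**2)

def gpr_mean(Xtr, ytr, Xte, ell=1.0, noise=0.1):
    # GPR posterior mean: k(X*, X) (K + sigma_n^2 I)^-1 y
    K = rbf(Xtr, Xtr, ell) + noise**2 * np.eye(len(Xtr))
    return rbf(Xte, Xtr, ell) @ np.linalg.solve(K, ytr)

# Toy nonlinear "metrology" response.
rng = np.random.default_rng(0)
Xtr = rng.uniform(-3, 3, size=(80, 1))
ytr = np.sin(Xtr[:, 0]) + 0.05 * rng.normal(size=80)
Xte = np.linspace(-3, 3, 100)[:, None]
yte = np.sin(Xte[:, 0])

gp_rmse = np.sqrt(np.mean((gpr_mean(Xtr, ytr, Xte) - yte) ** 2))
A = np.column_stack([np.ones(80), Xtr])
beta, *_ = np.linalg.lstsq(A, ytr, rcond=None)
lin_pred = np.column_stack([np.ones(100), Xte]) @ beta
lin_rmse = np.sqrt(np.mean((lin_pred - yte) ** 2))
```

On a low-dimensional nonlinear response the GPR error is far below the linear fit's; the abstract's observation is that, as the linear model is given a large enough selected input set, this gap closes.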
Abstract:
We present a Monte Carlo radiative transfer technique for calculating synthetic spectropolarimetry for multidimensional supernova explosion models. The approach utilizes 'virtual-packets' that are generated during the propagation of the Monte Carlo quanta and used to compute synthetic observables for specific observer orientations. Compared to extracting synthetic observables by direct binning of emergent Monte Carlo quanta, this virtual-packet approach leads to a substantial reduction in the Monte Carlo noise. This is not only vital for calculating synthetic spectropolarimetry (since the degree of polarization is typically very small) but also useful for calculations of light curves and spectra. We first validate our approach via application of an idealized test code to simple geometries. We then describe its implementation in the Monte Carlo radiative transfer code ARTIS and present test calculations for simple models for Type Ia supernovae. Specifically, we use the well-known one-dimensional W7 model to verify that our scheme can accurately recover zero polarization from a spherical model, and to demonstrate the reduction in Monte Carlo noise compared to a simple packet-binning approach. To investigate the impact of aspherical ejecta on the polarization spectra, we then use ARTIS to calculate synthetic observables for prolate and oblate ellipsoidal models with Type Ia supernova compositions.
Abstract:
Virtual metrology (VM) aims to predict metrology values using sensor data from production equipment and physical metrology values of preceding samples. VM is a promising technology for the semiconductor manufacturing industry as it can reduce the frequency of in-line metrology operations and provide supportive information for other operations such as fault detection, predictive maintenance and run-to-run control. Methods with minimal user intervention are required to perform VM in a real-time industrial process. In this paper we propose extreme learning machines (ELM) as a competitive alternative to popular methods like lasso and ridge regression for developing VM models. In addition, we propose a new way to choose the hidden layer weights of ELMs that leads to an improvement in its prediction performance.
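The basic ELM recipe is simple: fix a random hidden layer, then solve for the output weights in closed form by least squares. The sketch below shows that baseline only, with plain Gaussian random hidden weights; the paper's improved scheme for choosing those weights is not reproduced here, and the toy data are illustrative:

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    # Baseline extreme learning machine: random input-to-hidden weights,
    # closed-form least-squares solution for the output weights.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random hidden weights (untrained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy VM-style task: predict a nonlinear target from sensor-like inputs.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.01 * rng.normal(size=200)
W, b, beta = elm_fit(X[:150], y[:150])
pred = elm_predict(X[150:], W, b, beta)
rmse = np.sqrt(np.mean((pred - y[150:]) ** 2))
```

Because only the linear output layer is fitted, training reduces to one least-squares solve, which is what makes ELMs attractive for the low-intervention, real-time setting the abstract describes.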
Abstract:
Single-zone modelling is used to assess three 1D impeller loss model collections. An automotive turbocharger centrifugal compressor is used for evaluation. The individual 1D losses are presented relative to each other at three tip speeds to provide a visual description of each author's perception of the relative importance of each loss. The losses are compared with their resulting prediction of pressure ratio and efficiency, which is further compared with test data; upon comparison, a combination of the 1D loss collections is identified as providing the best performance prediction. 3D CFD simulations have also been carried out for the same geometry using a single passage model. A method of extracting 1D losses from CFD is described and utilised to draw further comparisons with the 1D losses. A 1D scroll volute model has been added to the single passage CFD results; good agreement with the test data is achieved. Shortcomings in the existing 1D loss models are identified as a result of the comparisons with 3D CFD losses. Further comparisons are drawn between the predicted 1D data, 3D CFD simulation results, and the test data using a non-dimensional method to highlight where the current errors exist in the 1D prediction.
Abstract:
The recent drive towards timely multiple product realizations has caused most Manufacturing Enterprises (MEs) to develop more flexible assembly lines supported by better manufacturing design and planning. The aim of this work is to develop a methodology which will support feasibility analyses of assembly tasks, in order to simulate either a manufacturing process or a single work-cell in which digital human models act. The methodology has been applied in a case study relating to the railway industry. Simulations were applied to help standardize the methodology and suggest new solutions for realizing ergonomic and efficient assembly processes in the railway industry.
Abstract:
This paper describes an implementation of a method capable of integrating parametric, feature-based CAD models built on commercial software (CATIA) with the SU2 software framework. To exploit the adjoint-based methods for aerodynamic optimisation within SU2, a formulation to obtain geometric sensitivities directly from the commercial CAD parameterisation is introduced, enabling the calculation of gradients with respect to CAD-based design variables. To assess the accuracy and efficiency of the alternative approach, two aerodynamic optimisation problems are investigated: an inviscid 3D problem with multiple constraints, and a 2D high-lift aerofoil viscous problem without any constraints. Initial results show the new parameterisation obtaining reliable optima, with levels of performance similar to those of the software's native parameterisations. In the final paper, details of computing CAD sensitivities will be provided, including accuracy as well as the linking of geometric sensitivities to aerodynamic objective functions and constraints; the impact on the robustness of the overall method will be assessed and alternative parameterisations will be included.
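The gradient assembly the abstract describes amounts to a chain rule: geometric sensitivities of surface points with respect to CAD design variables (dX/dα) are combined with the objective's sensitivity to those points (dJ/dX) to give dJ/dα. The toy example below illustrates only that chaining step, with a made-up parametric curve and objective standing in for the CATIA parameterisation and the SU2 adjoint:

```python
import numpy as np

def surface(alpha):
    # Toy stand-in for a CAD parameterisation: 50 surface heights
    # controlled by two design variables (purely illustrative).
    t = np.linspace(0.0, 1.0, 50)
    return alpha[0] * np.sin(np.pi * t) + alpha[1] * t * (1.0 - t)

def objective(yp):
    # Toy stand-in for an aerodynamic objective on the surface points.
    return np.sum(yp ** 2)

alpha = np.array([0.3, 0.1])
h = 1e-6

# Geometric sensitivities dX/dalpha, here by central finite differences
# on the parameterisation (shape: 50 points x 2 design variables).
dX = np.stack([(surface(alpha + h * e) - surface(alpha - h * e)) / (2 * h)
               for e in np.eye(2)], axis=1)

dJ_dX = 2.0 * surface(alpha)   # analytic dJ/dX for this toy objective
grad = dJ_dX @ dX              # chain rule: dJ/dalpha = dJ/dX . dX/dalpha

# Cross-check against direct finite differences of J(alpha).
fd = np.array([(objective(surface(alpha + h * e)) -
                objective(surface(alpha - h * e))) / (2 * h)
               for e in np.eye(2)])
```

The chained gradient matches the direct finite-difference gradient; in the paper's setting, dJ/dX would come from the SU2 adjoint solution and dX/dα from the CAD system, avoiding one flow solve per design variable.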
Abstract:
Calculations of synthetic spectropolarimetry are one means to test multidimensional explosion models for Type Ia supernovae. In a recent paper, we demonstrated that the violent merger of a 1.1 and 0.9 M⊙ white dwarf binary system is too asymmetric to explain the low polarization levels commonly observed in normal Type Ia supernovae. Here, we present polarization simulations for two alternative scenarios: the sub-Chandrasekhar mass double-detonation and the Chandrasekhar mass delayed-detonation model. Specifically, we study a 2D double-detonation model and a 3D delayed-detonation model, and calculate polarization spectra for multiple observer orientations in both cases. We find modest polarization levels (<1 per cent) for both explosion models. Polarization in the continuum peaks at ∼0.1–0.3 per cent and decreases after maximum light, in excellent agreement with spectropolarimetric data of normal Type Ia supernovae. Higher degrees of polarization are found across individual spectral lines. In particular, the synthetic Si II λ6355 profiles are polarized at levels that match remarkably well the values observed in normal Type Ia supernovae, while the low degrees of polarization predicted across the O I λ7774 region are consistent with the non-detection of this feature in current data. We conclude that our models can reproduce many of the characteristics of both flux and polarization spectra for well-studied Type Ia supernovae, such as SN 2001el and SN 2012fr. However, the two models considered here cannot account for the unusually high level of polarization observed in extreme cases such as SN 2004dt.