11 results for Level of Detail (LOD)
in Cambridge University Engineering Department Publications Database
Abstract:
Atlases and statistical models play important roles in the personalization and simulation of cardiac physiology. For the study of the heart, however, the construction of comprehensive atlases and spatio-temporal models is faced with a number of challenges, in particular the need to handle large and highly variable image datasets, the multi-region nature of the heart, and the presence of complex as well as small cardiovascular structures. In this paper, we present a detailed atlas and spatio-temporal statistical model of the human heart based on a large population of 3D+time multi-slice computed tomography sequences, and the framework for its construction. It uses spatial normalization based on nonrigid image registration to synthesize a population mean image and establish the spatial relationships between the mean and the subjects in the population. Temporal image registration is then applied to resolve each subject-specific cardiac motion and the resulting transformations are used to warp a surface mesh representation of the atlas to fit the images of the remaining cardiac phases in each subject. Subsequently, we demonstrate the construction of a spatio-temporal statistical model of shape such that the inter-subject and dynamic sources of variation are suitably separated. The framework is applied to a 3D+time data set of 138 subjects. The data is drawn from a variety of pathologies, which benefits its generalization to new subjects and physiological studies. The obtained level of detail and the extendability of the atlas present an advantage over most cardiac models published previously. © 1982-2012 IEEE.
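For intuition on the statistical shape model described above, the sketch below shows a minimal point-distribution model built by principal component analysis over corresponding mesh vertex coordinates. It is only an illustration of the basic idea: the paper's model additionally separates inter-subject and dynamic (cardiac-phase) variation, and the function names, synthetic data and retained-variance threshold here are assumptions, not the authors' pipeline.

```python
import numpy as np

# Minimal sketch of a point-distribution shape model (PCA over corresponding
# mesh vertices). Illustrative only; the paper's model separates inter-subject
# and dynamic sources of variation, which this sketch does not attempt.

def build_shape_model(shapes, var_kept=0.95):
    """shapes: (n_subjects, n_vertices*3) array of corresponding vertex coords."""
    mean = shapes.mean(axis=0)
    X = shapes - mean                        # centre the data
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / (len(shapes) - 1)           # eigenvalues of the covariance
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mean, Vt[:k], var[:k]             # mean shape, modes, mode variances

def synthesize(mean, modes, variances, b):
    """Generate a shape from mode weights b (in units of standard deviations)."""
    return mean + (b * np.sqrt(variances)) @ modes

# Usage with synthetic data: 138 subjects, 500 corresponding vertices
shapes = np.random.rand(138, 500 * 3)
mean, modes, var = build_shape_model(shapes)
new_shape = synthesize(mean, modes, var, np.zeros(len(var)))
```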
Abstract:
An inherent trade-off exists in simulation model development and employment: a trade-off between the level of detail simulated and the simulation model's computational cost. It is often desirable to simulate a high level of detail to a high degree of accuracy. However, due to the nature of design optimisation, which requires a large number of design evaluations, the application of such simulation models can be prohibitively expensive. An induction motor modelling approach that reduces the computational cost while maintaining a high level of detail and accuracy in the final design is presented. © 2012 IEEE.
Abstract:
If a product is being designed to be genuinely inclusive, then the designers need to be able to assess the level of exclusion of the product that they are working on and to identify possible areas of improvement. To be of practical use, the assessments need to be quick, consistent and repeatable. The aim of this workshop is to invite attendees to participate in the evaluation of a number of everyday objects using an assessment technique being considered by the workshop organisers. The objectives of the workshop include evaluating the effectiveness of the assessment method, evaluating the accessibility of the products being assessed and suggesting revisions to the assessment scales being used. The assessment technique is to be based on the ONS capability measures [1]. This source recognises fourteen capability scales, of which seven are particularly pertinent to product evaluation, namely: motion, dexterity, reach and stretch, vision, hearing, communication, and intellectual functioning. Each of these scales ranges from 0 (fully able) through 1 (minimal impairment) to 10 (severe impairment). The attendees will be asked to rate the products on these scales. Clearly the assessed accessibility of the product depends on the assumptions made about the context of use. The attendees will be asked to clearly note the assumptions that they are making about the context in which the product is being assessed. For instance, with a hot water bottle, assumptions have to be made about the availability of hot water and these can affect the overall accessibility rating. The workshop organisers will not specify the context of use as the aim is to identify how assessors would use the assessment method in the real world. The objects being assessed will include items such as remote controls, pill bottles, food packaging, hot water bottles and mobile telephones. The attendees will be encouraged to assess two or more products in detail. Helpers will be on hand to assist and observe the assessments. The assessments will be collated and compared and feedback about the assessment method sought from the attendees. Drawing on a preliminary review of the assessment results, initial conclusions will be presented at the end of the workshop. More detailed analyses will be made available in subsequent proceedings. It is intended that the workshop will provide attendees with an opportunity to perform hands-on assessment of a number of everyday products and identify features which are inclusive and those that are not. It is also intended to encourage an appreciation of the capabilities to be considered when evaluating accessibility.
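As a rough illustration of how one such assessment might be recorded, the sketch below stores a single product's ratings on the seven ONS-derived capability scales together with the assessor's context assumptions. The scale names follow the abstract, but the record structure, function names and example ratings are hypothetical, not the workshop's actual forms.

```python
# Hypothetical record of one product assessment on the seven ONS-derived
# capability scales (0 = fully able ... 10 = severe impairment). The field
# names, helper function and example values are illustrative assumptions.

CAPABILITY_SCALES = ["motion", "dexterity", "reach_and_stretch", "vision",
                     "hearing", "communication", "intellectual_functioning"]

def record_assessment(product, context_assumptions, ratings):
    """ratings: dict mapping each capability scale to a 0-10 rating."""
    missing = [s for s in CAPABILITY_SCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    return {"product": product,
            "context": context_assumptions,
            "ratings": {s: ratings[s] for s in CAPABILITY_SCALES}}

assessment = record_assessment(
    "hot water bottle",
    context_assumptions="hot water readily available at the tap",
    ratings={"motion": 2, "dexterity": 6, "reach_and_stretch": 3, "vision": 1,
             "hearing": 0, "communication": 0, "intellectual_functioning": 2},
)
```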
Abstract:
We propose a computational method for the coupled simulation of a compressible flow interacting with a thin-shell structure undergoing large deformations. An Eulerian finite volume formulation is adopted for the fluid and a Lagrangian formulation based on subdivision finite elements is adopted for the shell response. The coupling between the fluid and the solid response is achieved via a novel approach based on level sets. The basic approach furnishes a general algorithm for coupling Lagrangian shell solvers with Cartesian grid based Eulerian fluid solvers. The efficiency and robustness of the proposed approach are demonstrated with an airbag deployment simulation. It bears emphasis that in the proposed approach the solid and the fluid components as well as their coupled interaction are considered in full detail and modeled with an equivalent level of fidelity without any oversimplifying assumptions or bias towards a particular physical aspect of the problem.
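To convey the level-set bookkeeping at the heart of such a coupling, the sketch below computes a signed distance from Cartesian grid nodes to a Lagrangian surface and uses its sign to tag fluid versus solid sides. This is a brute-force toy under assumed simplifications (closest point to a sampled point cloud, sign from the nearest normal); the paper's coupling algorithm is considerably more elaborate, and all names and data here are illustrative.

```python
import numpy as np

# Toy level-set bookkeeping for coupling a Lagrangian surface with a Cartesian
# Eulerian grid: phi holds the signed distance from each grid node to the
# surface, and its sign tags the fluid and solid sides. Brute-force and
# illustrative only.

def signed_distance(grid_points, surface_points, surface_normals):
    """grid_points: (N, 3); surface_points/normals: (M, 3)."""
    d = np.linalg.norm(grid_points[:, None, :] - surface_points[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    # Sign from the normal of the nearest surface point (positive = fluid side)
    to_grid = grid_points - surface_points[nearest]
    sign = np.sign(np.einsum("ij,ij->i", to_grid, surface_normals[nearest]))
    return sign * d[np.arange(len(grid_points)), nearest]

# Usage: tag Cartesian nodes on either side of a synthetic planar shell patch
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 8)] * 3), axis=-1).reshape(-1, 3)
surf = np.column_stack([np.random.rand(50, 2), np.full(50, 0.5)])   # plane z = 0.5
normals = np.tile([0.0, 0.0, 1.0], (50, 1))
phi = signed_distance(grid, surf, normals)
fluid_nodes = phi > 0.0
```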
Abstract:
Design work involves uncertainty that arises from, and influences, the progressive development of solutions. This paper analyses the influences of evolving uncertainty levels on the design process. We focus on uncertainties associated with choosing the values of design parameters, and do not consider in detail the issues that arise when parameters must first be identified. Aspects of uncertainty and its evolution are discussed, and a new task-based model is introduced to describe process behaviour in terms of changing uncertainty levels. The model is applied to study two process configuration problems based on aircraft wing design: one using an analytical solution and one using Monte-Carlo simulation. The applications show that modelling uncertainty levels during design can help assess management policies, such as how many concepts should be considered during design and to what level of accuracy. © 2011 Springer-Verlag.
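The Monte-Carlo flavour of such a study can be sketched very simply: simulate a design process in which each task execution shrinks the uncertainty range of a parameter, and estimate how many iterations are needed to reach a target accuracy. The task model, reduction factors and target below are hypothetical illustrations, not the paper's wing-design configuration.

```python
import random

# Toy Monte-Carlo sketch of a design process in which each task execution
# shrinks a parameter's uncertainty range by a random factor. All numbers
# are illustrative assumptions.

def simulate_process(initial_uncertainty=1.0, target=0.05, max_iterations=50):
    """Return the number of task iterations needed to reach the target uncertainty."""
    u, iterations = initial_uncertainty, 0
    while u > target and iterations < max_iterations:
        u *= random.uniform(0.4, 0.9)   # each pass removes 10-60% of the range
        iterations += 1
    return iterations

def expected_iterations(trials=10_000):
    return sum(simulate_process() for _ in range(trials)) / trials

print(f"expected iterations to reach target accuracy: {expected_iterations():.2f}")
```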
Abstract:
Computational Design has traditionally required a great deal of geometrical and parametric data. This data can only be supplied at stages later than conceptual design, typically the detail stage, and design quality is given by some absolute fitness function. On the other hand, design evaluation offers a relative measure of design quality that requires only a sparse representation. Quality, in this case, is a measure of how well a design will complete its task.
The research intends to address the question: "Is it possible to evaluate a mechanical design at the conceptual design phase and be able to make some prediction of its quality?" Quality can be interpreted as success in the marketplace, success in performing the required task, or some other user requirement. This work aims to determine a minimum level of representation such that conceptual designs can be usefully evaluated without needing to capture detailed geometry. This representation will form the model for the conceptual designs that are being considered for evaluation. The method to be developed will be a case-based evaluation system that uses a database of previous designs to support design exploration. The method will not be able to support novel design, as case-based design implies that the model topology must be fixed.
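A case-based evaluation of this kind can be illustrated with a simple nearest-neighbour retrieval over sparse, non-geometric feature vectors: a new concept's quality is predicted from the most similar previous designs. The feature names, distance metric and quality values below are assumptions for illustration; the actual minimal representation is precisely what the research sets out to determine.

```python
import math

# Sketch of case-based retrieval over sparse, non-geometric design features.
# Feature names, metric and quality scores are illustrative assumptions.

case_base = [
    {"features": {"dof": 1, "load_rating": 0.4, "part_count": 6},  "quality": 0.72},
    {"features": {"dof": 2, "load_rating": 0.7, "part_count": 11}, "quality": 0.55},
    {"features": {"dof": 1, "load_rating": 0.5, "part_count": 5},  "quality": 0.81},
]

def distance(a, b):
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys))

def predict_quality(concept_features, k=2):
    """Predict a concept's quality as the mean quality of its k nearest cases."""
    ranked = sorted(case_base, key=lambda c: distance(c["features"], concept_features))
    return sum(c["quality"] for c in ranked[:k]) / k

print(predict_quality({"dof": 1, "load_rating": 0.45, "part_count": 5}))
```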
Abstract:
Electron tunnelling through semiconductor tunnel barriers is exponentially sensitive to the thickness of the barrier layer, and in the most common system, the AlAs tunnel barrier in GaAs, a one-monolayer variation in thickness results in a 300% variation in the tunnelling current at a fixed bias voltage. We use this degree of sensitivity to demonstrate that a level of control of 0.06 monolayer can be achieved in growth by molecular beam epitaxy, and that the geometrical variation of layer thickness across a wafer can be detected at the 0.01 monolayer level.
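The arithmetic behind these figures can be made explicit. Reading the quoted 300% variation per monolayer as a factor-of-4 change in current per monolayer of AlAs (an assumption about the intended meaning), the exponential thickness dependence translates sub-monolayer thickness changes into the following current changes.

```python
import math

# Worked example of the exponential thickness sensitivity, assuming the
# "300% variation per monolayer" means a factor-of-4 change in current
# per monolayer of barrier thickness.

factor_per_monolayer = 4.0                  # 300% variation = 4x current change
kappa = math.log(factor_per_monolayer)      # effective decay constant per monolayer

for dt in (0.06, 0.01):                     # thickness changes in monolayers
    dj = math.exp(kappa * dt) - 1.0
    print(f"{dt:.2f} ML thickness change -> {dj * 100:.1f}% current change")

# 0.06 ML -> ~8.7% current change (the demonstrated growth-control level)
# 0.01 ML -> ~1.4% current change (the detectable across-wafer variation)
```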
Abstract:
In this article, we detail the methodology developed to construct arbitrarily high-order schemes - linear and WENO - on 3D mixed-element unstructured meshes made up of general convex polyhedral elements. The approach is tailored specifically for the solution of scalar level set equations for application to incompressible two-phase flow problems. The construction of WENO schemes on 3D unstructured meshes is notoriously difficult, as it involves a much higher level of complexity than 2D approaches. This is due to the multiplicity of geometrical considerations introduced by the extra dimension, especially on mixed-element meshes. Therefore, we have specifically developed a number of algorithms to handle mixed-element meshes composed of convex polyhedra with convex polygonal faces. The contribution of this work concerns several areas of interest: the formulation of an improved methodology in 3D, the minimisation of computational runtime in the implementation through the maximum use of pre-processing operations, the generation of novel methods to handle complex 3D mixed-element meshes and finally the application of the method to the transport of a scalar level set. © 2012 Global-Science Press.
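For readers unfamiliar with WENO, the core idea of nonlinearly weighted sub-stencil reconstructions is easiest to see in the classic one-dimensional fifth-order scheme on a uniform grid, sketched below. This textbook sketch is only for intuition; the paper's schemes are built on 3D mixed-element unstructured meshes and are substantially more involved.

```python
import numpy as np

# Classic 1D fifth-order WENO (Jiang-Shu) reconstruction on a uniform grid,
# shown only to convey the idea of nonlinearly weighted sub-stencils; the
# paper's 3D unstructured construction is far more involved.

def weno5_left(v, i, eps=1e-6):
    """Left-biased reconstruction of the interface value at x_{i+1/2}
    from cell averages v[i-2..i+2]."""
    # Candidate reconstructions on the three sub-stencils
    p0 = ( 2*v[i-2] - 7*v[i-1] + 11*v[i]  ) / 6
    p1 = (  -v[i-1] + 5*v[i]   +  2*v[i+1]) / 6
    p2 = ( 2*v[i]   + 5*v[i+1] -    v[i+2]) / 6
    # Smoothness indicators
    b0 = 13/12*(v[i-2]-2*v[i-1]+v[i])**2 + 1/4*(v[i-2]-4*v[i-1]+3*v[i])**2
    b1 = 13/12*(v[i-1]-2*v[i]+v[i+1])**2 + 1/4*(v[i-1]-v[i+1])**2
    b2 = 13/12*(v[i]-2*v[i+1]+v[i+2])**2 + 1/4*(3*v[i]-4*v[i+1]+v[i+2])**2
    # Nonlinear weights from the optimal linear weights (0.1, 0.6, 0.3)
    a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# Exercise the function on smooth sample data
x = np.linspace(0, 1, 41)
v = np.sin(2 * np.pi * x)
print(weno5_left(v, 20))
```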
Abstract:
A three-dimensional (3D) numerical model is proposed to solve the electromagnetic problems involving transport current and background field of a high-Tc superconducting (HTS) system. The model is characterized by the E-J power law and H-formulation, and is successfully implemented using finite element software. We first discuss the model in detail, including the mesh methods, boundary conditions and computing time. To validate the 3D model, we calculate the ac loss and trapped-field solution for a bulk material and compare the results with the previously verified 2D solutions and an analytical solution. We then apply our model to some typical problems, such as superconducting bulk arrays and twisted conductors, which cannot be tackled by 2D models. The new 3D model could be a powerful tool for researchers and engineers to investigate problems with a greater level of complexity.
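The E-J power law named above is the constitutive relation used for the superconductor in such H-formulation models: E = E_c (J/J_c)^n. The sketch below evaluates it with typical illustrative parameter values (not the paper's); it also shows why the relation makes these problems numerically stiff.

```python
import numpy as np

# The E-J power law used as the superconductor's constitutive relation in
# H-formulation models: E = E_c * sign(J) * (|J| / J_c)^n. The parameter
# values below are typical illustrative figures, not those of the paper.

E_C = 1e-4          # V/m, electric-field criterion defining J_c
J_C = 1e8           # A/m^2, critical current density (illustrative)
N_VALUE = 25        # power-law exponent (illustrative)

def e_field(j):
    """Local electric field from current density j via the E-J power law."""
    return E_C * np.sign(j) * (np.abs(j) / J_C) ** N_VALUE

# Near J_c the field rises extremely steeply, which is what makes these
# problems numerically demanding, especially in 3D.
for frac in (0.8, 1.0, 1.2):
    print(f"J = {frac:.1f} J_c -> E = {e_field(frac * J_C):.3e} V/m")
```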
Abstract:
We present a combined analytical and numerical study of the early stages (sub-100-fs) of the nonequilibrium dynamics of photoexcited electrons in graphene. We employ the semiclassical Boltzmann equation with a collision integral that includes contributions from electron-electron (e-e) and electron-optical phonon interactions. Taking advantage of circular symmetry and employing the massless Dirac fermion (MDF) Hamiltonian, we are able to perform an essentially analytical study of the e-e contribution to the collision integral. This allows us to take particular care of subtle collinear scattering processes - processes in which incoming and outgoing momenta of the scattering particles lie on the same line - including carrier multiplication (CM) and Auger recombination (AR). These processes have a vanishing phase space for two-dimensional MDF bare bands. However, we argue that electron-lifetime effects, seen in experiments based on angle-resolved photoemission spectroscopy, provide a natural pathway to regularize this pathology, yielding a finite contribution due to CM and AR to the Coulomb collision integral. Finally, we discuss in detail the role of physics beyond the Fermi golden rule by including screening in the matrix element of the Coulomb interaction at the level of the random phase approximation (RPA), focusing in particular on the consequences of various approximations including static RPA screening, which maximizes the impact of CM and AR processes, and dynamical RPA screening, which completely suppresses them. © 2013 American Physical Society.