13 results for Computational modelling

in Greenwich Academic Literature Archive - UK


Relevance:

100.00%

Publisher:

Abstract:

Abstract not available

Relevance:

100.00%

Publisher:

Abstract:

Abstract not available

Relevance:

100.00%

Publisher:

Abstract:

Abstract not available

Relevance:

80.00%

Publisher:

Abstract:

As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors of a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, and so a static partition of the problem returns poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload for a static subdomain will change over the course of a computation and cannot be estimated beforehand. For such applications the mapping of loads to processors is required to change dynamically, at run-time, in order to maintain reasonable efficiency. The issues of dynamic load balancing are examined in the context of PHYSICA, a three-dimensional unstructured mesh multi-physics continuum mechanics computational modelling code.
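
The run-time re-mapping described above can be illustrated with a minimal sketch (illustrative only, not the PHYSICA implementation): per-cell costs are measured each timestep and the cells are re-assigned, with the measured costs as weights, whenever the imbalance exceeds a tolerance. The greedy re-assignment and the drifting cost model are hypothetical stand-ins for a proper mesh partitioner and for the real physics.

```python
# Minimal sketch of a run-time load-balancing loop (illustrative only,
# not the PHYSICA implementation).
import random

def imbalance(loads):
    """Ratio of the heaviest processor's load to the average load."""
    avg = sum(loads) / len(loads)
    return max(loads) / avg if avg > 0 else 1.0

def greedy_repartition(cell_costs, n_procs):
    """Assign each cell to the currently lightest processor, heaviest cells first."""
    parts = [[] for _ in range(n_procs)]
    loads = [0.0] * n_procs
    for cell in sorted(cell_costs, key=cell_costs.get, reverse=True):
        p = loads.index(min(loads))
        parts[p].append(cell)
        loads[p] += cell_costs[cell]
    return parts

def simulate(n_cells=1000, n_procs=8, n_steps=20, tol=1.10):
    cells = range(n_cells)
    parts = greedy_repartition({c: 1.0 for c in cells}, n_procs)   # initial static split
    for step in range(n_steps):
        # Hypothetical cost model: the per-cell cost drifts as local physics
        # (e.g. a phase change) switches on, so the static split degrades.
        costs = {c: 1.0 + (step / n_steps) * (c % 5) * random.random() for c in cells}
        loads = [sum(costs[c] for c in part) for part in parts]
        if imbalance(loads) > tol:
            parts = greedy_repartition(costs, n_procs)             # dynamic re-mapping
    return parts
```

A real partitioner would also weigh the communication cost of moving cells between processors, which the greedy assignment above ignores.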

Relevance:

70.00%

Publisher:

Abstract:

One of the core tasks of the virtual-manufacturing environment is to characterise the transformation of the state of material during each of the unit processes. This transformation in shape, material properties, etc. can only be reliably achieved through the use of models in a simulation context. Unfortunately, many manufacturing processes involve the material being treated in both the liquid and solid state, the transformation of which may be achieved by heat transfer and/or electro-magnetic fields. The computational modelling of such processes, involving the interactions amongst the various phenomena, is a considerable challenge. However, it must be addressed effectively if Virtual Manufacturing Environments are to become a reality! This contribution focuses upon one attempt to develop such a multi-physics computational toolkit. The approach uses a single discretisation procedure and provides for direct interaction amongst the component phenomena. The need to exploit parallel high-performance hardware is addressed so that simulation elapsed times can be brought within the realms of practicality. Examples of multi-physics modelling in relation to shape casting and solder joint formation reinforce the motivation for this work.
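
As a rough illustration of a "single discretisation procedure with direct interaction amongst the component phenomena", the following sketch (a hypothetical structure, not the toolkit's actual API, with crudely simplified physics) keeps all field variables in one shared store on a single mesh and lets each phenomenon module read the fields the others have just updated within the same timestep.

```python
# Illustrative sketch of a single-discretisation multi-physics loop:
# every phenomenon module works on the same mesh and the same shared
# field store, so coupling happens by reading the other modules' fields
# directly rather than by interpolating between separate codes.

class FieldStore(dict):
    """One set of cell-centred fields defined on a single shared mesh."""

class HeatTransfer:
    def update(self, fields, dt):
        # crude relaxation of temperature towards a boundary value
        fields["T"] = [t + dt * (300.0 - t) * 0.1 for t in fields["T"]]

class Solidification:
    def update(self, fields, dt):
        # solid fraction grows where the temperature has dropped low enough
        fields["fs"] = [min(1.0, fs + dt * 0.05) if t < 900.0 else fs
                        for t, fs in zip(fields["T"], fields["fs"])]

class FlowDamping:
    def update(self, fields, dt):
        # fluid velocity is damped by the solid fraction computed above
        fields["u"] = [u * (1.0 - fs) for u, fs in zip(fields["u"], fields["fs"])]

def run(n_cells=10, n_steps=50, dt=0.1):
    fields = FieldStore(T=[1000.0] * n_cells, fs=[0.0] * n_cells, u=[1.0] * n_cells)
    modules = [HeatTransfer(), Solidification(), FlowDamping()]
    for _ in range(n_steps):
        for m in modules:          # each phenomenon updates the shared state in turn
            m.update(fields, dt)
    return fields
```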

Relevance:

30.00%

Publisher:

Abstract:

Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desk top of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis supporting fire models in order to appreciate their limitations and capabilities. The five-day short course, "Principles and Practice of Fire Modelling", run by the University of Greenwich, attempts to bridge the divide between the expert and the general user, providing the latter with the expertise they need to understand the results of mathematical fire modelling. The course and the associated text book, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with a wide and varied background; they offer a friendly guide through the unfamiliar terrain of mathematical modelling. These concepts and techniques are introduced and demonstrated in seminars. Those attending also gain experience in using the methods during "hands-on" tutorial and workshop sessions. On completion of this short course, those participating should:
- be familiar with the concepts of zone and field modelling;
- be familiar with zone and field model assumptions;
- have an understanding of the capabilities and limitations of modelling software packages for zone and field modelling;
- be able to select and use the most appropriate mathematical software and demonstrate its use in compartment fire applications; and
- be able to interpret model predictions.
The result is that the fire safety engineer is empowered to realise the full value of mathematical models to help in the prediction of fire development, and to determine the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.

Relevance:

30.00%

Publisher:

Abstract:

Metal casting is a process governed by the interaction of a range of physical phenomena. Most computational models of this process address only what are conventionally regarded as the primary phenomena: heat conduction and solidification. However, to predict other phenomena, such as porosity formation, requires modelling the interaction of the fluid flow, heat transfer, solidification and the development of stress-deformation in the solidified part of the casting. This paper will describe a modelling framework called PHYSICA [1] which has the capability to simulate such multi-physical phenomena.

Relevance:

30.00%

Publisher:

Abstract:

Abstract not available

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a three-dimensional, thermo-mechanical modelling approach to the cooling and solidification phases associated with the shape casting of metals, i.e. die, sand and investment casting. Novel vertex-based Finite Volume (FV) methods are described and employed with regard to the small-strain, non-linear Computational Solid Mechanics (CSM) capabilities required to model shape casting. The CSM capabilities include the non-linear material phenomena of creep and thermo-elasto-visco-plasticity at high temperatures and thermo-elasto-visco-plasticity at low temperatures, and also multi-body deformable contact, which can occur between the metal casting and the mould. The vertex-based FV methods, which can be readily applied to unstructured meshes, are included within a comprehensive FV modelling framework, PHYSICA. The additional heat transfer (by conduction and convection), filling, porosity and solidification algorithms existing within PHYSICA for the complete modelling of all shape casting processes employ cell-centred FV methods. The thermo-mechanical coupling is performed in a staggered incremental fashion, which addresses the possible gap formation between the component and the mould, and is ultimately validated against a variety of shape casting benchmarks.
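
The staggered incremental coupling and the casting/mould gap formation mentioned above can be caricatured in a lumped, zero-dimensional sketch (all coefficients are assumed values chosen only for illustration; this is not the PHYSICA algorithm): each increment performs a thermal step followed by a mechanical step, and any predicted gap degrades the interface heat transfer coefficient used in the next thermal step.

```python
# Minimal 0D sketch of a staggered thermo-mechanical increment (illustrative only).

ALPHA = 1.0e-5      # thermal expansion coefficient of the metal (assumed)
H_CONTACT = 1000.0  # interface heat transfer coefficient in full contact (assumed)
H_GAP = 100.0       # degraded coefficient once an air gap has opened (assumed)

def thermal_step(T_cast, T_mould, h, dt):
    """Lumped heat balance: the casting cools towards the mould temperature."""
    return T_cast - h * (T_cast - T_mould) * dt * 1.0e-4

def mechanical_step(T_cast, T_ref, size):
    """Free thermal contraction of the casting; a positive value is a gap."""
    return max(0.0, ALPHA * (T_ref - T_cast) * size)

def simulate(n_increments=200, dt=1.0):
    T_cast, T_mould, T_ref, size = 700.0, 20.0, 700.0, 0.1
    h, gap = H_CONTACT, 0.0
    for _ in range(n_increments):
        T_cast = thermal_step(T_cast, T_mould, h, dt)     # thermal solve first
        gap = mechanical_step(T_cast, T_ref, size)        # then the mechanical solve
        h = H_GAP if gap > 1.0e-6 else H_CONTACT          # coupling for the next increment
    return T_cast, gap
```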

Relevance:

30.00%

Publisher:

Abstract:

A number of research groups are now developing and using finite volume (FV) methods for computational solid mechanics (CSM). These methods are proving to be equivalent and in some cases superior to their finite element (FE) counterparts. In this paper we will describe a vertex-based FV method with arbitrarily structured meshes, for modelling the elasto-plastic deformation of solid materials undergoing small strains in complex geometries. Comparisons with rational FE methods will be given.
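
To make the idea of a vertex-based FV discretisation concrete, the sketch below solves a drastically simplified problem: a one-dimensional, linear-elastic bar under an end traction (no plasticity, uniform mesh). A control volume is built around each vertex and the stress flux through its faces is balanced; in this 1D case the resulting system coincides with the linear FE one, which echoes the equivalence noted in the abstract.

```python
# Simplified 1D analogue of a vertex-based finite volume discretisation
# (linear elastic, small strain), for illustration only.
import numpy as np

def vertex_fv_bar(n_vertices=11, length=1.0, E=200e9, area=1e-4, traction=1e6):
    dx = length / (n_vertices - 1)
    k = E * area / dx                      # stress-flux coefficient on each internal face
    A = np.zeros((n_vertices, n_vertices))
    b = np.zeros(n_vertices)

    # Balance of face fluxes for the control volume around each interior vertex.
    for i in range(1, n_vertices - 1):
        A[i, i - 1] -= k
        A[i, i] += 2 * k
        A[i, i + 1] -= k

    # Left vertex fixed (u = 0); right control volume loaded by the end traction.
    A[0, 0] = 1.0
    A[-1, -1] = k
    A[-1, -2] = -k
    b[-1] = traction * area

    return np.linalg.solve(A, b)           # vertex displacements

# u = vertex_fv_bar(); the tip displacement should match traction * length / E = 5e-6 m.
```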

Relevance:

30.00%

Publisher:

Abstract:

The difficulties encountered in implementing large-scale CM codes on multiprocessor systems are now fairly well understood. Despite the claims of shared memory architecture manufacturers to provide effective parallelising compilers, these have not proved to be adequate for large or complex programs. Significant programmer effort is usually required to achieve reasonable parallel efficiencies on significant numbers of processors. The paradigm of Single Program Multiple Data (SPMD) domain decomposition with message passing, where each processor runs the same code on a subdomain of the problem and communicates through the exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability and portability across both shared and distributed memory systems, without the need to re-author the code into a new language or even to support differing message passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework for supporting 3D, unstructured mesh and continuum mechanics modelling. In PHYSICA, six inspectors are used. Part of the challenge for the automation of parallelisation is being able to prove the equivalence of inspectors so that they can be merged into as few as possible.
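
The "inspectors" mentioned above belong to the inspector/executor style of parallelisation: before the parallel loops execute, an inspector examines the mesh connectivity and the partition once and records which off-processor values each processor must fetch, so that the executor can reuse the schedule every timestep. The sketch below uses hypothetical, toy data structures (element-to-vertex lists and owner maps) purely for illustration; it is not one of the six PHYSICA inspectors.

```python
# Sketch of an "inspector": examine the data dependences of a loop once
# and record which vertex values each processor must receive from others.

def build_halo_schedule(elem_to_vertex, elem_owner, vertex_owner, n_procs):
    """For each processor, list the (owning processor, vertex) pairs it must receive."""
    schedule = {p: set() for p in range(n_procs)}
    for elem, vertices in elem_to_vertex.items():
        p = elem_owner[elem]
        for v in vertices:
            if vertex_owner[v] != p:              # data dependence crosses the partition
                schedule[p].add((vertex_owner[v], v))
    return {p: sorted(s) for p, s in schedule.items()}

# Tiny example: two triangles sharing an edge, split over two processors.
elem_to_vertex = {0: (0, 1, 2), 1: (1, 2, 3)}
elem_owner = {0: 0, 1: 1}
vertex_owner = {0: 0, 1: 0, 2: 1, 3: 1}
print(build_halo_schedule(elem_to_vertex, elem_owner, vertex_owner, n_procs=2))
# -> {0: [(1, 2)], 1: [(0, 1)]}
```

Two loops whose data dependences yield identical schedules have equivalent inspectors, which is why proving such equivalence allows them to be merged.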

Relevance:

30.00%

Publisher:

Abstract:

Unstructured mesh codes for modelling continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Parallelisation of such codes using Single Program Multiple Data (SPMD) domain decomposition techniques implemented with message passing has been demonstrated to provide high parallel efficiency, scalability to large numbers of processors P, and portability across a wide range of parallel platforms. High efficiency, especially for large P, requires that load balance is achieved in each parallel loop. For a code in which loops span a variety of mesh entity types, for example elements, faces and vertices, some compromise is required between the load balance for each entity type and the quantity of inter-processor communication required to satisfy data dependence between processors.
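
The compromise described above can be made concrete with a small sketch (toy data structures, not the actual code): given an element partition, it reports the induced load balance for each entity type and counts the cut faces as a crude proxy for inter-processor communication.

```python
# Assess one element partition: per-entity-type imbalance plus cut faces.

def entity_loads(owner, n_procs):
    """Number of entities owned by each processor."""
    loads = [0] * n_procs
    for p in owner.values():
        loads[p] += 1
    return loads

def imbalance(loads):
    """Ratio of the heaviest processor's load to the average load."""
    avg = sum(loads) / len(loads)
    return max(loads) / avg if avg > 0 else 1.0

def assess_partition(elem_owner, face_to_elems, elem_to_vertex, n_procs):
    # Faces inherit the owner of their lowest-numbered adjacent element;
    # vertices inherit that of the first element that touches them.
    face_owner = {f: elem_owner[min(elems)] for f, elems in face_to_elems.items()}
    vert_owner = {}
    for e, verts in elem_to_vertex.items():
        for v in verts:
            vert_owner.setdefault(v, elem_owner[e])
    cut_faces = sum(1 for elems in face_to_elems.values()
                    if len({elem_owner[e] for e in elems}) > 1)
    return {
        "element imbalance": imbalance(entity_loads(elem_owner, n_procs)),
        "face imbalance": imbalance(entity_loads(face_owner, n_procs)),
        "vertex imbalance": imbalance(entity_loads(vert_owner, n_procs)),
        "cut faces (communication)": cut_faces,
    }
```

Balancing the elements alone may leave the face or vertex loops unbalanced, while re-balancing every entity type independently tends to increase the number of cut faces; the partitioner has to trade these off.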