915 results for: chemical kinetics, kinetic models, detonation, evolution, CFD
Abstract:
Parallel computing is now widely used in numerical simulation, particularly for application codes based on finite difference and finite element methods. A popular and successful technique employed to parallelize such codes onto large distributed memory systems is to partition the mesh into sub-domains that are then allocated to processors. The code then executes in parallel, using the SPMD methodology, with message passing for inter-processor interactions. In order to improve the parallel efficiency of an imbalanced structured mesh CFD code, a new dynamic load balancing (DLB) strategy has been developed in which the processor partition range limits of just one of the partitioned dimensions are allowed to be non-coincidental rather than coincidental. The ‘local’ partition limit change allows greater flexibility in obtaining a balanced load distribution, as the workload increase or decrease on a processor is no longer restricted by the ‘global’ (coincidental) limit change. The automatic implementation of this generic DLB strategy within an existing parallel code is presented in this chapter, along with some preliminary results.
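The gain from non-coincidental ('local') over coincidental ('global') partition limits can be illustrated with a toy 2-D workload model. This is a hedged sketch, not the chapter's implementation: the greedy splitter and all function names are assumptions introduced purely for illustration.

```python
import numpy as np

def balanced_splits(weights, nparts):
    """Greedy split points so each part's weight is roughly total/nparts."""
    target = weights.sum() / nparts
    splits, acc, k = [], 0.0, 1
    for i, w in enumerate(weights):
        acc += w
        if acc >= k * target and k < nparts:
            splits.append(i + 1)
            k += 1
    return splits

def coincidental_partition(work, nprows, npcols):
    """Global (coincidental) limits: one set of column splits shared by every
    processor row. Returns the bottleneck (maximum per-processor) load."""
    row_bands = np.array_split(work, nprows, axis=0)
    splits = balanced_splits(work.sum(axis=0), npcols)  # chosen once, globally
    loads = [seg.sum() for band in row_bands
             for seg in np.split(band, splits, axis=1)]
    return max(loads)

def local_partition(work, nprows, npcols):
    """Non-coincidental limits: each processor row chooses its own column
    splits, giving the extra freedom described in the abstract."""
    loads = []
    for band in np.array_split(work, nprows, axis=0):
        splits = balanced_splits(band.sum(axis=0), npcols)
        loads += [seg.sum() for seg in np.split(band, splits, axis=1)]
    return max(loads)
```

On a workload whose heavy cells sit in different columns for different row bands, the local scheme achieves a lower bottleneck load than the global one, because each row band can place its own column limits.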
Abstract:
The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier–Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations. Copyright © 2000 John Wiley & Sons, Ltd.
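The domain decomposition and encapsulated message passing pattern described above can be sketched, in serial Python, as a halo exchange followed by an independent interior update on each sub-domain. This is only an emulation of the SPMD idea: a real solver such as SAUNA would perform the halo step with message passing between processors, and all names here are illustrative.

```python
import numpy as np

def serial_smooth(u):
    """One Jacobi smoothing sweep over the interior of a 1-D field."""
    v = u.copy()
    v[1:-1] = 0.5 * (u[:-2] + u[2:])
    return v

def decomposed_smooth(u, nparts):
    """The same sweep done sub-domain by sub-domain (SPMD style): each
    'rank' owns u[a:b], receives one halo value from each neighbour,
    then updates its interior independently of the other ranks."""
    n = len(u)
    bounds = np.linspace(0, n, nparts + 1, dtype=int)
    out = u.copy()
    for a, b in zip(bounds[:-1], bounds[1:]):
        halo_lo = u[a - 1] if a > 0 else None   # value from left neighbour
        halo_hi = u[b] if b < n else None       # value from right neighbour
        for g in range(a, b):
            if g == 0 or g == n - 1:
                continue                        # physical boundary: unchanged
            left = u[g - 1] if g > a else halo_lo
            right = u[g + 1] if g < b - 1 else halo_hi
            out[g] = 0.5 * (left + right)
    return out
```

Because the halo values supply exactly the neighbouring data each sub-domain is missing, the decomposed sweep reproduces the serial result for any number of partitions.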
Abstract:
The FIRE Detection and Suppression Simulation (FIREDASS) project was concerned with the development of water misting systems as a possible replacement for halon-based fire suppression systems currently used in aircraft cargo holds and ship engine rooms. As part of this program of work, a computational model was developed to help engineers optimize the design of water mist suppression systems. The model is based on Computational Fluid Dynamics (CFD) and comprises the following components: fire model; mist model; two-phase radiation model; suppression model; detector/activation model. In this paper the FIREDASS software package is described and the theory behind the fire and radiation sub-models is detailed. The fire model uses prescribed release rates for heat and gaseous combustion products to represent the fire load. Typical release rates have been determined through experimentation. The radiation model is a six-flux model coupled to the gas (and mist) phase. As part of the FIREDASS project, a detailed series of fire experiments was conducted in order to validate the fire model. Model predictions are compared with data from these experiments and good agreement is found.
Abstract:
A cell-centred finite volume (CC-FV) solid mechanics formulation, based on a computational fluid dynamics (CFD) procedure, is presented. A CFD code is modified such that the velocity variable is used as the displacement variable. Displacement and pressure fields are treated as the unknown variables. The results are validated against finite element (FE) and cell-vertex finite volume (CV-FV) predictions based on discretisation of the equilibrium equations. The developed formulation is applicable to both compressible and incompressible solid behaviour. The method is general and can be extended to the simultaneous analysis of problems involving flow, thermal and stress effects.
Abstract:
This paper describes work performed at IRSID/USINOR in France and the University of Greenwich, UK, to investigate flow structures and turbulence in a water-model container, simulating aspects typical of metal tundish operation. Extensive mean and fluctuating velocity measurements were performed at IRSID using LDA to determine the flow field and these form the basis for a numerical model validation. This apparently simple problem poses several difficulties for the CFD modelling. The flow is driven by the strong impinging jet at the inlet. Accurate description of the jet is most important and requires a localized fine grid, but also a turbulence model that predicts the correct spreading rates of the jet and impinging wall boundary layers. The velocities in the bulk of the tundish tend to be (indeed need to be) much smaller than those of the jet, leading to damping of turbulence, or even laminar flow. The authors have developed several low-Reynolds number (low-Re) k–ε model variants to compute this flow and compare against measurements. Best agreement is obtained when turbulence damping is introduced to account not only for walls, but also for low-Re regions in the bulk – the k–ε model otherwise allows turbulence to accumulate in the container due to the restricted outlet. Several damping functions are tested and the results are reported here. The k–ω model, which is more suited to transitional flow, also seems to perform well in this problem.
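The role of a low-Re damping function can be illustrated with the published Launder–Sharma form of f_mu. The authors' own variants differ in detail, so this is only a representative sketch of how such damping suppresses eddy viscosity in near-laminar bulk regions:

```python
import math

C_MU = 0.09  # standard k-epsilon model constant

def turbulent_viscosity(rho, nu, k, eps):
    """Eddy viscosity with a Launder-Sharma-style low-Re damping function.
    f_mu -> 1 in fully turbulent regions (large Re_t) and falls sharply as
    Re_t -> 0, preventing spurious turbulence accumulation in slow bulk
    regions such as the tundish interior."""
    re_t = k * k / (nu * eps)                      # turbulence Reynolds number
    f_mu = math.exp(-3.4 / (1.0 + re_t / 50.0) ** 2)
    return rho * C_MU * f_mu * k * k / eps
```

At high Re_t the damped viscosity approaches the standard k–ε value rho·C_mu·k²/ε, while at Re_t of order 10 it is reduced by roughly an order of magnitude.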
Abstract:
In this paper, the framework is described for the modelling of granular material by employing Computational Fluid Dynamics (CFD). This is achieved through the use and implementation, in the continuum theory, of constitutive relations which are derived in a granular dynamics framework and parametrise the particle interactions that occur at the micro-scale level. The simulation of a process often met in bulk solids handling industrial plants involving granular matter is presented: filling of a flat-bottomed bin with a binary material mixture through pneumatic conveying, emptying of the bin in core flow mode, and pneumatic conveying of the material coming out of the bin. The results of the presented simulation demonstrate the capability of the numerical model to represent successfully key granular processes (i.e. segregation and degradation), the prediction of which is of great importance in the process engineering industry.
Abstract:
Most lead bullion is refined by pyrometallurgical methods - this involves a series of processes that remove the antimony (softening), silver (Parkes process), zinc (vacuum dezincing) and, if need be, bismuth (Betterton-Kroll process). The first step, softening, removes the antimony, arsenic and tin by air oxidation in a furnace or by the Harris process. Next, in the Parkes process, zinc is added to the melt to remove the silver and gold. Insoluble zinc, silver and gold compounds are skimmed off from the melt surface. Excess zinc added during desilvering is removed from the lead bullion using one of three methods: vacuum dezincing, chlorine dezincing, or Harris dezincing. The present study concentrates on the vacuum dezincing process for lead refining. The main aims of the research are to develop mathematical models, using Computational Fluid Dynamics (CFD) and a Surface Averaged Model (SAM), to predict the process behaviour under various operating conditions, thus providing detailed information about the process and insight into its reaction to changes of key operating parameters. Finally, the model will be used to optimise the process in terms of initial feed concentration, temperature, vacuum height, cooling rate, etc.
Abstract:
This paper will discuss Computational Fluid Dynamics (CFD) results from an investigation into the accuracy of several turbulence models to predict air cooling for electronic packages and systems. New transitional turbulence models will also be proposed, with emphasis on hybrid techniques that use the k-ε model at an appropriate distance away from the wall and suitable models, with wall functions, in near-wall regions. A major proportion of the heat emitted from electronic packages can be extracted by air cooling. This flow of air throughout an electronic system and the heat extracted are highly dependent on the nature of turbulence present in the flow. The use of CFD for such investigations is fast becoming a powerful and almost essential tool for the design, development and optimization of engineering applications. However, turbulence models remain a key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed together with the wall functions implemented. In order to resolve the abrupt fluctuations experienced by the turbulent energy and other parameters in near-wall regions and shear layers, a particularly fine computational mesh is necessary, which inevitably increases the computer storage and run-time requirements. The PHYSICA Finite Volume code was used for this investigation. With the exception of the k-ε and k-ω models, which are available as standard within PHYSICA, all other turbulence models mentioned were implemented via the source code by the authors. The LVEL, LVEL CAP, Wolfshtein, k-ε, k-ω, SST and kε/kl models are described and compared with experimental data.
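The hybrid idea above - a simple treatment in the wall layer and the k-ε model farther out - is in the spirit of the classic two-layer law-of-the-wall switch, sketched below. The constants and crossover point are standard textbook values, not the LVEL or SST formulations from the paper:

```python
import math

KAPPA = 0.41   # von Karman constant
E = 9.8        # smooth-wall log-law constant

def u_plus(y_plus):
    """Two-layer near-wall velocity profile: linear viscous sublayer below
    the crossover y+, logarithmic law of the wall above it - the kind of
    switch that hybrid near-wall treatments with wall functions make."""
    y_cross = 11.5  # approximate intersection of the two profiles
    if y_plus < y_cross:
        return y_plus                       # viscous sublayer: u+ = y+
    return math.log(E * y_plus) / KAPPA     # log law: u+ = ln(E y+) / kappa
```

The crossover is chosen where the two branches intersect, so the profile stays nearly continuous while avoiding the fine mesh a full low-Re resolution of the sublayer would require.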
Abstract:
A major percentage of the heat emitted from electronic packages can be extracted by air cooling, whether by means of natural or forced convection. This flow of air throughout an electronic system and the heat extracted are highly dependent on the nature of turbulence present in the flow field. This paper will discuss results from an investigation into the accuracy of turbulence models to predict air cooling for electronic packages and systems.
Abstract:
Parallel processing techniques have been used in the past to provide high performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a Local Area Network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs: for example, a problem composed of ~100,000 cells ran 9.3 times faster on a network of 12 800MHz PCs than on a single 800MHz PC. It was also found that a network of eight 3.2GHz Pentium 4 PCs ran 7.04 times faster than a single 3.2GHz Pentium computer. A dynamic load balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the impact of the parallel processing task on other computer users on the network was minimized.
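The quoted speedups correspond to parallel efficiencies of 9.3/12 ≈ 78% and 7.04/8 = 88%. A minimal sketch of that arithmetic follows, together with a naive speed-proportional cell allocation of the kind a heterogeneous load-balancing scheme might use as a starting point. The function names are hypothetical and the clock-speed proxy is an assumption, not the article's actual scheme:

```python
def efficiency(speedup, nprocs):
    """Parallel efficiency: fraction of ideal linear speedup achieved."""
    return speedup / nprocs

def allocate_cells(total_cells, cpu_speeds):
    """Static allocation sketch for a heterogeneous network: give each PC
    a cell count proportional to its clock speed (a crude throughput
    proxy), fixing up any rounding remainder on the last machine."""
    total_speed = sum(cpu_speeds)
    cells = [round(total_cells * s / total_speed) for s in cpu_speeds]
    cells[-1] += total_cells - sum(cells)  # keep the total exact
    return cells
```

On a mixed network of 800MHz and 3.2GHz machines this hands each fast PC four times the cells of a slow one, so all machines finish a sweep at roughly the same time, which is the goal the article's dynamic scheme pursues adaptively.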