915 results for Microscopic simulation models
Abstract:
The electronics industry and the problems associated with the cooling of microelectronic equipment are developing rapidly. Thermal engineers now find it necessary to consider the complex area of equipment cooling at some level. This continually growing industry also faces heightened pressure from consumers to provide electronic product miniaturization, which in itself increases the demand for accurate thermal management predictions to assure product reliability. Computational fluid dynamics (CFD) is considered a powerful and almost essential tool for the design, development and optimization of engineering applications. CFD is now widely used within the electronics packaging design community to thermally characterize the performance of both the electronic component and the system environment. This paper discusses CFD results for a large variety of investigated turbulence models. Comparison against experimental data illustrates the predictive accuracy of currently used models and highlights the growing demand for greater mathematical modelling accuracy with regard to thermal characterization. Finally, a newly formulated low-Reynolds-number (i.e. transitional) turbulence model is proposed, with emphasis on hybrid techniques.
Abstract:
In this paper a methodology for the application of computer simulation to the evacuation certification of aircraft is proposed. The methodology involves the use of computer simulation, historic certification data, component testing and full-scale certification trials. It sets out a protocol for how computer simulation should be undertaken in a certification environment and draws on experience from both the marine and building industries. Along with the protocol, a phased introduction of computer models to certification is suggested. Given the sceptical nature of the aviation community regarding any change to certification methodology, the first step would involve the use of computer simulation in conjunction with full-scale testing. The computer model would be used to reproduce a probability distribution of likely aircraft performance under current certification conditions, and several other, more challenging scenarios could also be developed. The combination of full-scale trial, computer simulation and, if necessary, component testing would provide better insight into the actual performance capabilities of the aircraft by generating a performance probability distribution, or performance envelope, rather than a single datum. Once further confidence in the technique is established, the second step would involve only computer simulation and component testing; this would be contemplated only after sufficient experience and confidence in the use of computer models has been developed. The third step in the adoption of computer simulation for certification would involve the introduction of several scenarios based on, for example, exit availability informed by accident analysis. The final step would be the introduction of more realistic accident scenarios into the certification process. This would require the continued development of aircraft evacuation modelling technology to include additional behavioural features common in real accident scenarios.
Abstract:
This paper reports on research work undertaken for the European Commission funded study GMA2/2000/32039, Very Large Transport Aircraft (VLTA) Emergency Requirements Research Evacuation Study (VERRES). A particular focus was on evacuation issues, with a detailed study of evacuation performance using computer models undertaken as part of Work Package 2. This paper describes that work and investigates, using computer simulation, the use of internal stairs during evacuation.
Proposed methodology for the use of computer simulation to enhance aircraft evacuation certification
Abstract:
In this paper a methodology for the application of computer simulation to the evacuation certification of aircraft is suggested. It involves the use of computer simulation, historic certification data, component testing, and full-scale certification trials. The methodology sets out a framework for how computer simulation should be undertaken in a certification environment and draws on experience from both the marine and building industries. In addition, a phased introduction of computer models to certification is suggested. The first step would involve the use of computer simulation in conjunction with full-scale testing. The combination of full-scale trial, computer simulation and, if necessary, component testing provides better insight into aircraft evacuation performance capabilities by generating a performance probability distribution rather than a single datum. Once further confidence in the technique is established, the requirement for the full-scale demonstration could be dropped. The second step in the adoption of computer simulation for certification involves the introduction of several scenarios based on, for example, exit availability informed by accident analysis. The final step would be the introduction of more realistic accident scenarios. This would require the continued development of aircraft evacuation modeling technology to include additional behavioral features common in real accident scenarios.
Abstract:
Problems in the preservation of the quality of granular material products are complex and arise from a series of sources during transport and storage. In either designing a new plant or, more likely, analysing problems that give rise to product quality degradation in existing operations, practical measurement and simulation tools and technologies are required to support the process engineer. These technologies are needed both to identify the source of such problems and then to design them out. As part of a major research programme on quality in particulate manufacturing, computational models have been developed for segregation in silos, degradation in pneumatic conveyors, and the development of caking during storage, which use, where possible, micro-mechanical relationships to characterize the behaviour of granular materials. The objective of the work presented here is to demonstrate the use of these computational models of unit processes in the analysis of large-scale processes involving the handling of granular materials. This paper presents a set of simulations of a complete large-scale granular materials handling operation, involving the discharge of the material from a silo, its transport through a dilute-phase pneumatic conveyor, and its storage in a big bag under varying environmental temperature and humidity conditions. Conclusions are drawn on the capability of the computational models to represent key granular processes, including particle size segregation, degradation, and moisture migration caking.
Abstract:
In all but the most sterile environments, bacteria will reside in fluid being transported through conduits, and some of these will attach and grow as biofilms on the conduit walls. The concentration and diversity of bacteria in the fluid at the point of delivery will be a mix of those present when it entered the conduit and those that have become entrained into the flow due to seeding from biofilms. Examples include fluids moving through conduits such as drinking water pipe networks, endotracheal tubes, catheters and ventilation systems. Here we present two probabilistic models to describe changes in the composition of bulk fluid microbial communities as they are transported through a conduit whilst exposed to biofilm communities. The first (discrete) model simulates absolute numbers of individual cells, whereas the other (continuous) model simulates the relative abundance of taxa in the bulk fluid. The discrete model is founded on a birth-death process whereby the community changes one individual at a time and the number of cells in the system can vary. The continuous model is a stochastic differential equation derived from the discrete model and can also accommodate changes in the carrying capacity of the bulk fluid. These models provide a novel Lagrangian framework with which to investigate and predict the dynamics of migrating microbial communities. In this paper we compare the two models, discuss their merits and possible applications, and present simulation results in the context of drinking water distribution systems. Our results provide novel insight into the effects of stochastic dynamics on the composition of non-stationary microbial communities exposed to biofilms, and provide a new avenue for modelling microbial dynamics in systems where fluids are being transported.
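The birth-death idea described above can be sketched in a few lines: each event removes one random individual from the bulk fluid and replaces it either by reproduction within the bulk or by a cell seeded from the biofilm. This is a minimal neutral-model sketch, not the paper's actual formulation; the function name, the `immigration_prob` parameter and the fixed community size are all illustrative assumptions.

```python
import random

def simulate_bulk_community(bulk, biofilm, immigration_prob, n_events, seed=0):
    """Neutral birth-death sketch: each event kills one random individual in
    the bulk fluid and replaces it with a draw from the bulk itself or, with
    probability `immigration_prob`, a cell seeded from the biofilm community.
    Community size is held constant here (fixed carrying capacity); the
    paper's discrete model also allows the cell count to vary."""
    rng = random.Random(seed)
    bulk = list(bulk)
    for _ in range(n_events):
        i = rng.randrange(len(bulk))  # death: a uniformly random individual
        source = biofilm if rng.random() < immigration_prob else bulk
        bulk[i] = rng.choice(source)  # birth: replacement from chosen source
    return bulk
```

Tracking the relative abundance of each taxon over many such runs would approximate the continuous (stochastic differential equation) variant the abstract mentions.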
Abstract:
Presented is a study that expands the body of knowledge on the effect of in-cycle speed fluctuations on the performance of small engines. It uses the engine and drivetrain models developed previously by Callahan et al. (1) to examine a variety of engines. The predicted performance changes due to drivetrain effects are shown in each case, and conclusions are drawn from those results. The single-cylinder, high-performance four-stroke engine showed significant changes in predicted performance compared to the prediction with zero speed fluctuation in the model. Measured speed fluctuations from a firing Yamaha YZ426 engine were applied to the simulation, in addition to data from a simple free mass model; both methods predicted similar changes in performance. The multiple-cylinder, high-performance two-stroke engine also showed significant changes in performance depending on the firing configuration. With both engines, the change in performance diminished with increasing mean engine speed. The low-output, single-cylinder two-stroke engine simulation showed only a negligible change in performance, even with high-amplitude speed fluctuations; this was expected because the torque versus engine speed characteristic of the engine was so flat. The cross-charged, multi-cylinder two-stroke engine also showed only a negligible change in performance. In this case, the relatively high inertia of the rotating assembly, combined with the smoothing of torque pulsations by multiple cylinder firing events within each revolution, reduced the amplitude of the speed fluctuation itself.
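The "simple free mass model" mentioned above amounts to integrating the instantaneous torque acting on a lone rotating inertia: the crank speed rises and falls with each torque pulse, and a flatter torque trace or a larger inertia yields smaller in-cycle speed fluctuation. The sketch below is a generic illustration of that idea under assumed names and units, not the authors' implementation.

```python
def free_mass_speed(torque_trace, inertia, omega0, dt):
    """Integrate a free-mass model of crank speed fluctuation:
    d(omega)/dt = T / I for a lone rotating inertia, stepped explicitly.
    `torque_trace` is net torque (N*m) sampled every `dt` seconds,
    `inertia` the rotating assembly inertia (kg*m^2), `omega0` the
    initial speed (rad/s). All names are illustrative."""
    omega = omega0
    trace = [omega]
    for torque in torque_trace:
        omega += (torque / inertia) * dt   # speed responds to each pulse
        trace.append(omega)
    return trace
```

Feeding such a speed trace back into an engine simulation, instead of assuming constant speed, is the drivetrain effect the study quantifies.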
Abstract:
Models and software products have been developed for the modelling, simulation and prediction of different correlations in materials science, including: (1) the correlation between processing parameters and properties in titanium alloys and γ-titanium aluminides; (2) time–temperature–transformation (TTT) diagrams for titanium alloys; (3) corrosion resistance of titanium alloys; (4) surface hardness and microhardness profiles of nitrocarburised layers; (5) fatigue stress-life (S–N) diagrams for Ti–6Al–4V alloys. The programs are based on trained artificial neural networks. For each particular case an appropriate combination of inputs and outputs is chosen, and very good model performance is achieved. Graphical user interfaces (GUIs) are created for easy use of the models; in addition, interactive text versions are developed. The models are combined and integrated into a software package built in a modular fashion. The software products are available in versions for different platforms, including Windows 95/98/2000/NT, UNIX and Apple Macintosh. A description of the software products is given to demonstrate that they are convenient and powerful tools for practical applications in solving various problems in materials science. Examples of the optimisation of alloy compositions, processing parameters and working conditions are illustrated. An option for use of the software in a materials selection procedure is described.
Abstract:
The eng-genes concept involves the use of fundamental known system functions as activation functions in a neural model to create a 'grey-box' neural network. One of the main issues in eng-genes modelling is to produce a parsimonious model given a model construction criterion. The challenges are that (1) the eng-genes model is in most cases a heterogeneous network consisting of more than one type of nonlinear basis function, and each basis function may have a different set of parameters to be optimised; and (2) the number of hidden nodes has to be chosen based on a model selection criterion. This is a mixed-integer hard problem, and this paper investigates the use of a forward selection algorithm to optimise both the network structure and the parameters of the system-derived activation functions. Results are included from case studies performed on a simulated continuously stirred tank reactor process and on actual data from a pH neutralisation plant. The resulting eng-genes networks demonstrate superior simulation performance and transparency over a range of network sizes when compared to conventional neural models.
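The structure-selection half of a forward selection algorithm can be sketched as a greedy loop: at each step, add whichever candidate basis-function output most reduces the least-squares residual of the current model. This is a minimal sketch of the generic technique only; the paper's algorithm additionally optimises the continuous parameters inside each system-derived activation function, which is omitted here, and all names are illustrative.

```python
import numpy as np

def forward_select(candidates, y, max_terms):
    """Greedy forward selection over a fixed candidate pool.
    `candidates` is an (n_samples, n_candidates) matrix whose columns are
    the outputs of candidate basis functions evaluated on the data; at each
    step the column giving the largest drop in residual sum of squares is
    appended to the model. Returns the chosen column indices in order."""
    X = np.empty((len(y), 0))          # design matrix of selected terms
    chosen = []
    for _ in range(max_terms):
        best = None
        for j in range(candidates.shape[1]):
            if j in chosen:
                continue
            Xj = np.column_stack([X, candidates[:, j]])
            coef, *_ = np.linalg.lstsq(Xj, y, rcond=None)
            rss = np.sum((y - Xj @ coef) ** 2)
            if best is None or rss < best[0]:
                best = (rss, j, Xj)
        _, j, X = best                  # commit the best candidate
        chosen.append(j)
    return chosen
```

Stopping when an information criterion (rather than a fixed `max_terms`) stops improving is the usual way such loops decide the number of hidden nodes.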
Abstract:
We employ a quantum mechanical bond order potential in an atomistic simulation of channeled flow. We show that the original hypothesis, that channeled flow is achieved by a cooperative deployment of slip and twinning, is correct: first because a twin is able to "protect" a 60° ordinary dislocation from becoming sessile, and second because the two processes are found to be activated by Peierls stresses of similar magnitude. In addition we give an explicit demonstration of the lateral growth of a twin, again at a similar level of stress. Thus these simultaneous processes are shown to be capable of channeling deformation into the observed state of plane strain in so-called "A"-oriented mechanical testing of titanium aluminide superalloy.
Abstract:
Generation of hardware architectures directly from dataflow representations is increasingly being considered as research moves toward system-level design methodologies. Creation of networks of IP cores to implement actor functionality is a common approach to the problem, but the memory sub-systems produced using these techniques are often inefficiently utilised. This paper explores some of the issues in memory organisation and access that arise when developing systems from these high-level representations. Using a template matching design study, challenges such as modelling memory reuse and minimising buffer requirements are examined, yielding designs with significantly lower memory requirements and fewer costly off-chip memory accesses.
Abstract:
This paper presents a practical algorithm for the simulation of interactive deformation in a 3D polygonal mesh model. The algorithm combines the conventional simulation of deformation using a spring-mass-damping model, solved by explicit numerical integration, with a set of heuristics describing certain features of the transient behaviour, to increase the speed and stability of the solution. In particular, this algorithm was designed for the simulation of synthetic environments where it is necessary to model realistically, in real time, the effect on non-rigid surfaces of being touched, pushed, pulled or squashed. Such objects can be solid or hollow, and have plastic, elastic or fabric-like properties. The algorithm is presented in an integrated form, including collision detection and adaptive refinement, so that it may be used in a self-contained way as part of a simulation loop together with human interface devices that capture data and render a realistic stereoscopic image in real time. The algorithm is designed to be used with polygonal mesh models representing complex topology, such as the human anatomy in a virtual-surgery training simulator. The paper evaluates the model behaviour qualitatively and then concludes with some examples of the use of the algorithm.
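The core of a spring-mass-damping model solved by explicit integration is simple enough to sketch: each link between two masses exerts a spring force proportional to its stretch plus a damping force proportional to the relative velocity, and positions are advanced with an explicit (semi-implicit Euler) step. The 1-D chain below illustrates only this numerical core, not the paper's heuristics, collision detection or adaptive refinement; all parameter names are illustrative.

```python
def step_mass_spring(pos, vel, rest, k, c, mass, dt):
    """One explicit integration step for a 1-D chain of equal masses joined
    by spring-damper links of rest length `rest`, stiffness `k` and damping
    `c`. Velocities are updated from forces first, then positions from the
    new velocities (semi-implicit Euler), which is the cheap explicit scheme
    such real-time deformation models typically rely on."""
    n = len(pos)
    force = [0.0] * n
    for i in range(n - 1):                      # each spring-damper link
        stretch = (pos[i + 1] - pos[i]) - rest
        rel_vel = vel[i + 1] - vel[i]
        f = k * stretch + c * rel_vel           # spring + damping force
        force[i] += f
        force[i + 1] -= f
    for i in range(n):
        vel[i] += force[i] / mass * dt
        pos[i] += vel[i] * dt
    return pos, vel
```

Explicit schemes like this are fast per step but only conditionally stable, which is one motivation for the stabilising heuristics the paper adds.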
Abstract:
The identification of nonlinear dynamic systems using radial basis function (RBF) neural models is studied in this paper. Given a model selection criterion, the main objective is to effectively and efficiently build a parsimonious, compact neural model that generalizes well over unseen data. This is achieved by simultaneous model structure selection and optimization of the parameters over the continuous parameter space. It is a mixed-integer hard problem, and a unified analytic framework is proposed to enable an effective and efficient two-stage mixed discrete-continuous identification procedure. This novel framework combines the advantages of an iterative discrete two-stage subset selection technique for model structure determination with calculus-based continuous optimization of the model parameters. Computational complexity analysis and simulation studies confirm the efficacy of the proposed algorithm.
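The split between discrete and continuous sub-problems described above is easy to see in an RBF model: once the centres (the discrete structure) are fixed, the output weights are linear in the parameters and follow from a least-squares solve. The sketch below shows only that continuous half for Gaussian basis functions, with centre selection fixed for brevity; function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def fit_rbf(x, y, centres, width):
    """Fit the linear output weights of a Gaussian RBF model by least
    squares. `x` and `y` are 1-D training data, `centres` the (fixed) basis
    centres, `width` the common Gaussian width. Returns the weights and the
    model's predictions on `x`. Choosing which centres to include is the
    discrete structure-selection problem handled separately."""
    # design matrix: one Gaussian basis column per centre
    Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w, Phi @ w
```

Wrapping a search over candidate centres around a solver like this is what makes the joint problem mixed-integer, and what the two-stage framework is designed to tame.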
Abstract:
Nano- and meso-scale simulation of chemical ordering kinetics in nano-layered L1₀-AB binary intermetallics was performed. At the nano (atomistic) scale, a Monte Carlo (MC) technique with a vacancy mechanism of atomic migration, implemented with diverse models for the system energetics, was used. The meso-scale microstructure evolution was, in turn, simulated by means of an MC procedure applied to a system built of meso-scale voxels ordered in particular L1₀ variants. The voxels were free to change L1₀ variant and interacted with antiphase-boundary energies evaluated within the nano-scale simulations. The study addressed FePt thin layers, considered as a material for ultra-high-density magnetic storage media, and revealed metastability of the L1₀ c-variant superstructure with monoatomic planes parallel to the (001)-oriented layer surface and off-plane easy magnetization. The layers, originally perfectly ordered in the c-variant, showed discontinuous precipitation of a- and b-L1₀-variant domains running in parallel with homogeneous disordering (i.e. generation of antisite defects). The domains nucleated heterogeneously on the free monoatomic Fe surface of the layer, grew inwards into its volume and relaxed towards an equilibrium microstructure of the system.
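The meso-scale procedure described above, in which voxels change ordering variant subject to antiphase-boundary energies, is a Metropolis-type Monte Carlo scheme. The toy below is a drastically simplified analogue on a 1-D ring with three variants and a single boundary energy per unlike neighbour pair; every specific (geometry, energies, acceptance rule details) is an illustrative assumption, not the paper's model.

```python
import math
import random

def metropolis_sweep(spins, J, T, rng):
    """One Metropolis Monte Carlo sweep over a 1-D ring of 'voxels', each in
    one of three ordering variants (0, 1, 2). A trial move proposes a new
    variant for a random voxel; the energy change counts an antiphase-
    boundary penalty `J` per unlike nearest neighbour, and the move is
    accepted with the standard Metropolis probability at temperature `T`."""
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        new = rng.randrange(3)
        left, right = spins[(i - 1) % n], spins[(i + 1) % n]
        e_old = sum(J for nb in (left, right) if nb != spins[i])
        e_new = sum(J for nb in (left, right) if nb != new)
        d_e = e_new - e_old
        if d_e <= 0 or rng.random() < math.exp(-d_e / T):
            spins[i] = new                      # accept the variant change
    return spins
```

Repeated sweeps at low `T` coarsen like-variant domains, the 1-D caricature of the domain growth and relaxation the simulations report.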
Abstract:
Polypropylene (PP), a semi-crystalline material, is typically solid-phase thermoformed at temperatures associated with crystalline melting, generally in the 150 to 160 °C range. In this very narrow thermoforming window the mechanical properties of the material decline rapidly with increasing temperature, and these large changes in properties make polypropylene one of the more difficult materials to process by thermoforming. Measurement of the deformation behaviour of a material under processing conditions is particularly important for accurate numerical modelling of thermoforming processes. This paper presents the findings of a study into the physical behaviour of industrial thermoforming grades of polypropylene. Practical tests were performed using custom-built materials testing machines and thermoforming equipment at Queen's University Belfast. Numerical simulations of these processes were constructed to replicate thermoforming conditions using industry-standard finite element analysis software, namely ABAQUS, together with custom-built user material model subroutines. Several variant constitutive models, including a range of phenomenological, rheological and blended formulations, were used to represent the behaviour of the polypropylene materials during processing. The paper discusses approaches to modelling industrial plug-assisted thermoforming operations using finite element analysis techniques and the range of material models constructed and investigated, and directly compares practical results to numerical predictions. It culminates in a discussion of the learning points from using finite element methods to simulate the plug-assisted thermoforming of polypropylene, which presents complex contact, thermal, friction and material modelling challenges. The paper makes recommendations as to the relative importance of these inputs, in general terms, with regard to correlation with experimentally gathered data, and as to the approaches to be taken to secure simulation predictions of improved accuracy.