22 results for Finite Volume Methods
at Aston University Research Archive
Abstract:
This thesis reports the development of a reliable method for the prediction of response to electromagnetically induced vibration in large electric machines. The machines of primary interest are DC ship-propulsion motors, but much of the work reported has broader significance. The investigation has involved work in five principal areas. (1) The development and use of dynamic substructuring methods. (2) The development of special elements to represent individual machine components. (3) Laboratory-scale investigations to establish empirical values for properties which affect machine vibration levels. (4) Experiments on machines on the factory test-bed to provide data for correlation with prediction. (5) Reasoning with regard to the effect of various design features. The limiting factor in producing good vibration models of machines is the time required for an analysis. Dynamic substructuring methods were adopted early in the project to maximise the efficiency of the analysis. A review of existing substructure-representation and composite-structure assembly methods includes comments on which are most suitable for this application. Three appendices to the main volume present methods developed by the author to accelerate analyses. Despite significant advances in this area, the limiting factor in machine analyses is still time. The representation of individual machine components was addressed as another means by which the time required for an analysis could be reduced. This has resulted in the development of special elements which are more efficient than their finite-element counterparts. The laboratory-scale experiments reported were undertaken to establish empirical values for the properties of three distinct features: lamination stacks, bolted-flange joints in rings and cylinders, and the shimmed pole-yoke joint. These are central to the preparation of an accurate machine model.
The theoretical methods are tested numerically and correlated with tests on two machines (running and static). A system has been devised with which the general electromagnetic forcing may be split into its most fundamental components. This is used to draw some conclusions about the probable effects of various design features.
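One of the simplest substructure-representation ideas of the kind reviewed in the thesis is static (Guyan) condensation, which reduces a model onto a chosen set of master degrees of freedom. The sketch below is illustrative only; the four-DOF spring chain and the choice of masters are invented, not taken from the thesis.

```python
import numpy as np

def guyan_reduce(K, masters):
    """Statically condense stiffness matrix K onto the master DOFs."""
    slaves = [i for i in range(K.shape[0]) if i not in masters]
    Kmm = K[np.ix_(masters, masters)]
    Kms = K[np.ix_(masters, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    Kss = K[np.ix_(slaves, slaves)]
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# 4-DOF spring chain with unit stiffnesses, grounded at the left end
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
Kr = guyan_reduce(K, [0, 3])

# For static loads applied only at master DOFs, the reduction is exact
f = np.array([1.0, 0.5])
u_red = np.linalg.solve(Kr, f)
u_full = np.linalg.solve(K, np.array([1.0, 0.0, 0.0, 0.5]))
```

For static loading on the masters the condensed model reproduces the full model exactly; for dynamics it is only approximate, which is why the richer substructure representations the thesis reviews exist.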
Abstract:
This thesis demonstrates that the use of finite elements need not be confined to space alone, but that they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered to be a practical alternative to more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time, and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and also when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an Appendix.
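The "conventional methods" that temporal finite elements are benchmarked against are direct time-integration schemes. As an illustration of such a conventional baseline (not the thesis's own element formulation), here is a minimal Newmark average-acceleration solver for a single-degree-of-freedom system; all parameter values are hypothetical.

```python
import math

def newmark_sdof(m, c, k, f, u0, v0, dt, n, beta=0.25, gamma=0.5):
    """Newmark time integration (average acceleration) for m*u'' + c*u' + k*u = f(t)."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n):
        t1 = (i + 1) * dt
        rhs = (f(t1)
               + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u1 = rhs / keff                                   # implicit displacement update
        a1 = (u1 - u - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
        v = v + dt * ((1 - gamma) * a + gamma * a1)
        u, a = u1, a1
    return u, v

# Undamped oscillator with natural period 1 s: after one period u returns to u0
u, v = newmark_sdof(m=1.0, c=0.0, k=(2 * math.pi)**2,
                    f=lambda t: 0.0, u0=1.0, v0=0.0, dt=1e-3, n=1000)
```

The average-acceleration variant (beta = 1/4, gamma = 1/2) is unconditionally stable for linear systems, which is the property against which the accuracy and stability study in the thesis would compare.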
Abstract:
Background/aims - To determine which biometric parameters provide optimum predictive power for ocular volume. Methods - Sixty-seven adult subjects were scanned with a Siemens 3-T MRI scanner. Mean spherical error (MSE) (D) was measured with a Shin-Nippon autorefractor and a Zeiss IOLMaster used to measure (mm) axial length (AL), anterior chamber depth (ACD) and corneal radius (CR). Total ocular volume (TOV) was calculated from T2-weighted MRIs (voxel size 1.0 mm3) using an automatic voxel counting and shading algorithm. Each MR slice was subsequently edited manually in the axial, sagittal and coronal plane, the latter enabling location of the posterior pole of the crystalline lens and partitioning of TOV into anterior (AV) and posterior volume (PV) regions. Results - Mean values (±SD) for MSE (D), AL (mm), ACD (mm) and CR (mm) were −2.62±3.83, 24.51±1.47, 3.55±0.34 and 7.75±0.28, respectively. Mean values (±SD) for TOV, AV and PV (mm3) were 8168.21±1141.86, 1099.40±139.24 and 7068.82±1134.05, respectively. TOV showed significant correlation with MSE, AL, PV (all p<0.001), CR (p=0.043) and ACD (p=0.024). Bar CR, the correlations were shown to be wholly attributable to variation in PV. Multiple linear regression indicated that the combination of AL and CR provided optimum R2 values of 79.4% for TOV. Conclusion - Clinically useful estimations of ocular volume can be obtained from measurement of AL and CR.
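The reported fit of total ocular volume from axial length and corneal radius is an ordinary multiple linear regression. A minimal sketch follows; the data are synthetic (drawn using the means and SDs quoted in the abstract) and the ground-truth coefficients are invented for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 67
AL = rng.normal(24.51, 1.47, n)   # axial length (mm), moments from the abstract
CR = rng.normal(7.75, 0.28, n)    # corneal radius (mm), moments from the abstract
# Hypothetical linear ground truth plus noise, for illustration only
TOV = 500.0 * AL - 800.0 * CR + 1500.0 + rng.normal(0.0, 50.0, n)

# Least-squares fit TOV ~ 1 + AL + CR
X = np.column_stack([np.ones(n), AL, CR])
coef, *_ = np.linalg.lstsq(X, TOV, rcond=None)
pred = X @ coef
R2 = 1 - np.sum((TOV - pred)**2) / np.sum((TOV - TOV.mean())**2)
```

With real data the same two lines of algebra yield the R² of 79.4% quoted in the abstract.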
A CFD approach on the effect of particle size on char entrainment in bubbling fluidised bed reactors
Abstract:
The fluid–particle interaction inside a 41.7 mg s⁻¹ fluidised bed reactor is modelled. Three char particles of sizes 500 µm, 250 µm, and 100 µm are injected into the fluidised bed and the momentum transport from the fluidising gas and fluidised sand is modelled. Due to the fluidising conditions and reactor design, the char particles will either be entrained from the reactor or remain inside the bubbling bed. Particle size is the factor that differentiates the motion of the particles inside the reactor and their entrainment out of it. A three-dimensional simulation has been performed with a completely revised momentum transport model for bubble three-phase flow according to the literature, as an extension to the commercial finite volume code FLUENT 6.2.
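The size dependence of entrainment can be rationalised with the standard terminal-velocity heuristic: a particle whose terminal velocity is below the gas velocity tends to be carried out of the reactor. The sketch below uses the Schiller-Naumann drag correlation; the char density and hot-gas properties are assumed values for illustration, not taken from the thesis, and this is not the thesis's CFD model.

```python
import math

def terminal_velocity(d, rho_p, rho_g=0.45, mu_g=3.5e-5, g=9.81):
    """Terminal velocity of a sphere via the Schiller-Naumann drag correlation."""
    v = 1e-3
    for _ in range(200):  # fixed-point iteration on the force balance
        Re = max(rho_g * v * d / mu_g, 1e-12)
        Cd = 24.0 / Re * (1 + 0.15 * Re**0.687) if Re < 1000 else 0.44
        v = math.sqrt(4 * g * d * (rho_p - rho_g) / (3 * Cd * rho_g))
    return v

# Char density and gas properties are assumptions; the three sizes are from the abstract
vts = [terminal_velocity(d, rho_p=300.0) for d in (100e-6, 250e-6, 500e-6)]
```

The monotonic rise of terminal velocity with diameter is why, at a fixed fluidising velocity, the smallest particles are entrained most readily.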
Abstract:
The thesis presents an experimentally validated modelling study of the flow of combustion air in an industrial radiant tube burner (RTB). The RTB is used typically in industrial heat-treating furnaces. The work has been initiated because of the need for improvements in burner lifetime and performance, which are related to the fluid mechanics of the combusting flow, and a fundamental understanding of this is therefore necessary. To achieve this, a detailed three-dimensional Computational Fluid Dynamics (CFD) model has been used, validated with experimental air flow, temperature and flue gas measurements. Initially, the work programme is presented, together with the theory behind RTB design and operation and the theory behind swirling flows and methane combustion. NOx reduction techniques are discussed and numerical modelling of combusting flows is detailed in this section. The importance of turbulence, radiation and combustion modelling is highlighted, as well as the numerical schemes that incorporate discretization, finite volume theory and convergence. The study first focuses on the combustion air flow and its delivery to the combustion zone. An isothermal computational model was developed to allow the examination of the flow characteristics as it enters the burner and progresses through the various sections prior to the discharge face in the combustion area. Important features identified include the air recuperator swirler coil, the step ring, the primary/secondary air splitting flame tube and the fuel nozzle. It was revealed that the effectiveness of the air recuperator swirler is significantly compromised by the need for a generous assembly tolerance. Also, a substantial circumferential flow maldistribution is introduced by the swirler, but this is effectively removed by the positioning of a ring constriction in the downstream passage.
Computations using the k-ε turbulence model show good agreement with experimentally measured velocity profiles in the combustion zone and validated the modelling strategy prior to the combustion study. Reasonable mesh independence was obtained with 200,000 nodes. Agreement was poorer with the RNG k-ε and Reynolds Stress models. The study continues to address the combustion process itself and the heat transfer process internal to the RTB. A series of combustion and radiation model configurations were developed, and the optimum combination of the Eddy Dissipation (ED) combustion model and the Discrete Transfer (DT) radiation model was used successfully to validate a burner experimental test. The previously cold-flow-validated k-ε turbulence model was used and reasonable mesh independence was obtained with 300,000 nodes. The combination showed good agreement with temperature measurements in the inner and outer walls of the burner, as well as with flue gas composition measured at the exhaust. The inner tube wall temperature predictions matched the experimental measurements at the majority of the thermocouple locations, highlighting a small flame bias to one side, although the model slightly over-predicts the temperatures towards the downstream end of the inner tube. NOx emissions were initially over-predicted; however, the use of a combustion flame temperature limiting subroutine allowed convergence to the experimental value of 451 ppmv. With the validated model, the effectiveness of certain RTB features identified previously is analysed, and an analysis of the energy transfers throughout the burner is presented, to identify the dominant mechanisms in each region. The optimum turbulence-combustion-radiation model selection was then the baseline for further model development. One of these models, an eccentrically positioned flame tube model, highlights the failure mode of the RTB during long-term operation.
Other models were developed to address NOx reduction and improvement of the flame profile in the burner combustion zone. These included a modified fuel nozzle design, with 12 circular-section fuel ports, which demonstrates a longer and more symmetric flame, although with limited success in NOx reduction. In addition, a zero-bypass swirler coil model was developed that highlights the effect of the stronger swirling combustion flow. Reduced-diameter and 20 mm forward-displaced flame tube models show limited success in NOx reduction, although the latter demonstrated improvements in the discharge face heat distribution and in the flame symmetry. Finally, Flue Gas Recirculation (FGR) modelling attempts indicate the difficulty of applying this NOx reduction technique in the Wellman RTB. Recommendations for further work include design mitigations for the fuel nozzle, and further burner modelling is suggested to improve computational validation. The introduction of fuel staging is proposed, as well as a modification of the inner tube to enhance the effect of FGR.
Abstract:
Recent developments in aerostatic thrust bearings have included: (a) the porous aerostatic thrust bearing containing a porous pad and (b) the inherently compensated compliant surface aerostatic thrust bearing containing a thin elastomer layer. Both these developments have been reported to improve the bearing load capacity compared to conventional aerostatic thrust bearings with rigid surfaces. This development is carried one stage further in a porous and compliant aerostatic thrust bearing incorporating both a porous pad and an opposing compliant surface. The thin elastomer layer forming the compliant surface is bonded to a rigid backing and is of a soft rubber-like material. Such a bearing is studied experimentally and theoretically under steady-state operating conditions. A mathematical model is presented to predict the bearing performance; it incorporates a simplified solution of the elasticity equations for deflections of the compliant surface. Account is also taken of deflections in the porous pad due to the pressure difference across its thickness. The lubrication equations for flow in the porous pad and bearing clearance are solved by numerical finite difference methods. An iteration procedure is used to couple deflections of the compliant surface and porous pad with solutions to the lubrication equations. Experimental results and theoretically predicted bearing performance are in good agreement. However, these results show that the performance of the porous and compliant aerostatic thrust bearing is lower than that of a porous aerostatic thrust bearing with a rigid surface in place of the compliant surface. This is attributed to the recess formed in the bearing clearance by deflections of the compliant surface and its effect on flow through the porous pad.
Abstract:
An initial review of the subject emphasises the need for improved fuel efficiency in vehicles and the possible role of aluminium in reducing weight. The problems of formability generally in manufacture, and of aluminium in particular, are discussed in the light of published data. A range of thirteen commercially available sheet aluminium alloys has been compared with respect to mechanical properties as these affect forming processes and behaviour in service. Four alloys were selected for detailed comparison. The formability and strength of these were investigated in terms of underlying mechanisms of deformation as well as the microstructural characteristics of the alloys, including texture, particle dispersion, grain size and composition. In overall terms, good combinations of strength and ductility are achievable with alloys of the 2xxx and 6xxx series. Some specific alloys are notably better than others. The strength of formed components is affected by paint baking in the final stages of manufacture. Generally, alloys of the 6xxx family are strengthened while 2xxx and 5xxx become weaker. Some anomalous behaviour exists, however. Work hardening of these alloys appears to show rather abrupt decreases over certain strain ranges, which is probably responsible for the relatively low strains at which both diffuse and local necking occur. Using data obtained from extended-range tensile tests, the strain distribution in more complex shapes can be successfully modelled using finite element methods. Sheet failure during forming occurs by abrupt shear fracture in many instances. This condition is favoured by states of biaxial tension, surface defects in the form of fine scratches, and certain types of crystallographic texture. The measured limit strains of the materials can be understood on the basis of attainment of a critical shear stress for fracture.
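The link between work hardening and the strain at which diffuse necking begins is classically expressed by the Considère criterion: necking starts once the hardening rate dσ/dε falls to the flow stress σ. A sketch with a hypothetical power-law hardening fit (the K and n values are invented, not measured data from the thesis):

```python
import numpy as np

def necking_strain(K_coef, n_exp, eps=np.linspace(1e-4, 0.5, 5000)):
    """Diffuse necking onset for sigma = K*eps^n: first strain where d(sigma)/d(eps) <= sigma."""
    sigma = K_coef * eps**n_exp
    dsde = np.gradient(sigma, eps)        # numerical hardening rate
    idx = np.argmax(dsde <= sigma)        # first index where Considere is met
    return eps[idx]

# For pure power-law hardening the criterion gives eps_neck = n exactly
e = necking_strain(K_coef=400.0, n_exp=0.2)
```

An abrupt drop in hardening rate over some strain range, as the abstract reports for these alloys, pulls this crossing point to lower strain, which is exactly the early-necking behaviour observed.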
Abstract:
The paper presents a 3-dimensional simulation of the effect of particle shape on char entrainment in a bubbling fluidised bed reactor. Three char particles of 350 μm side length but of different shapes (cube, sphere, and tetrahedron) are injected into the fluidised bed and the momentum transport from the fluidising gas and fluidised sand is modelled. Due to the fluidising conditions, reactor design and particle shape the char particles will either be entrained from the reactor or remain inside the bubbling bed. The sphericity of the particles is the factor that differentiates the particle motion inside the reactor and their efficient entrainment out of it. The simulation has been performed with a completely revised momentum transport model for bubble three-phase flow, taking into account the sphericity factors, and has been applied as an extension to the commercial finite volume code FLUENT 6.3.
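Sphericity enters such models through the drag law. A widely used sphericity-dependent correlation is that of Haider and Levenspiel (1989), sketched below; whether the paper uses exactly this form is not stated in the abstract, and the coefficients here are quoted from memory of that correlation.

```python
import math

def drag_coefficient(Re, phi):
    """Haider-Levenspiel drag coefficient for a particle of sphericity phi (0 < phi <= 1)."""
    A = math.exp(2.3288 - 6.4581 * phi + 2.4486 * phi**2)
    B = 0.0964 + 0.5565 * phi
    C = math.exp(4.905 - 13.8944 * phi + 18.4222 * phi**2 - 10.2599 * phi**3)
    D = math.exp(1.4681 + 12.2584 * phi - 20.7322 * phi**2 + 15.8855 * phi**3)
    return 24.0 / Re * (1.0 + A * Re**B) + C / (1.0 + D / Re)

# Sphericity: sphere = 1.0; cube ~ 0.806; regular tetrahedron ~ 0.671
cds = [drag_coefficient(Re=10.0, phi=p) for p in (1.0, 0.806, 0.671)]
```

Lower sphericity gives a higher drag coefficient at the same Reynolds number, which is why the sphere, cube and tetrahedron of equal nominal size behave differently in the bed.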
Abstract:
The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
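The O(n³) cost of the posterior mean comes from factorising the n-by-n covariance matrix. A minimal sketch of that computation follows; the RBF kernel, lengthscale, noise level and sine data are all hypothetical choices for illustration.

```python
import numpy as np

def rbf(A, B, ell=0.1):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ell**2)

def posterior_mean(X, y, Xs, noise_var=1e-2):
    # m(x*) = k(x*, X) [K + sigma^2 I]^{-1} y  -- the O(n^3) step is the solve
    K = rbf(X, X) + noise_var * np.eye(len(X))
    return rbf(Xs, X) @ np.linalg.solve(K, y)

X = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * X)
m = posterior_mean(X, y, np.array([0.25, 0.5]))
```

The optimal m-dimensional linear model described in the abstract corresponds to truncating the eigendecomposition of this covariance to its leading m eigenfunctions, exactly as PCA truncates a covariance matrix.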
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite K case and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
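Shannon's coding bound referred to above can be made concrete for a binary symmetric channel: a rate R = K/C code can be decoded reliably only if R does not exceed the channel capacity 1 − H₂(p). A small sketch (the K, C and flip-probability values are hypothetical):

```python
import math

def binary_entropy(p):
    """H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def shannon_limit_rate(p):
    """Capacity of the binary symmetric channel with flip probability p."""
    return 1.0 - binary_entropy(p)

# A hypothetical K = 4, C = 8 Sourlas-type construction has rate K/C = 0.5;
# reliable decoding is only possible below capacity
p = 0.05
R = 4 / 8
achievable = R <= shannon_limit_rate(p)
```

The result quoted in the abstract is that the K → ∞ construction attains this bound exactly, while finite-K codes fall short of it.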
Abstract:
The aim of this letter is to demonstrate that complete removal of spectral aliasing occurring due to finite numerical bandwidth used in the split-step Fourier simulations of nonlinear interactions of optical waves can be achieved by enlarging each dimension of the spectral domain by a factor (n+1)/2, where n is the number of interacting waves. Alternatively, when using low-pass filtering for dealiasing this amounts to the need for filtering a 2/(n+1) fraction of each spectral dimension.
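The rule can be checked numerically for a cubic (n = 3) interaction: computing u³ pseudo-spectrally on a grid enlarged by the factor (n + 1)/2 = 2 reproduces, on the retained modes, the result of a much larger alias-free grid, while the unpadded computation does not. The discretisation details below (grid sizes, random spectrum) are illustrative, not from the letter.

```python
import numpy as np

def product_spectrum(U, power=3, n_waves=3):
    """Spectrum of u**power via zero padding by the factor (n_waves + 1) / 2."""
    N = len(U)
    M = int(N * (n_waves + 1) // 2)
    Up = np.zeros(M, dtype=complex)
    Up[:N // 2] = U[:N // 2]            # non-negative frequencies
    Up[-(N - N // 2):] = U[N // 2:]     # negative frequencies
    u = np.fft.ifft(Up) * (M / N)       # physical field on the enlarged grid
    W = np.fft.fft(u**power) * (N / M)  # back to the original normalisation
    return np.concatenate([W[:N // 2], W[-(N - N // 2):]])

rng = np.random.default_rng(1)
N = 8
U = rng.normal(size=N) + 1j * rng.normal(size=N)
U[N // 2] = 0.0                               # drop the Nyquist mode
cubic = product_spectrum(U)                   # factor-2 padding, per the (n+1)/2 rule
reference = product_spectrum(U, n_waves=7)    # factor-4 padding, certainly alias-free
aliased = product_spectrum(U, n_waves=1)      # no padding: aliasing corrupts the band
```

With factor-2 padding every aliased product mode folds to a wavenumber outside the retained band and is removed by the final truncation, which is precisely the letter's claim.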
Abstract:
The modelling of mechanical structures using finite element analysis has become an indispensable stage in the design of new components and products. Once the theoretical design has been optimised a prototype may be constructed and tested. What can the engineer do if the measured and theoretically predicted vibration characteristics of the structure are significantly different? This thesis considers the problems of changing the parameters of the finite element model to improve the correlation between a physical structure and its mathematical model. Two new methods are introduced to perform the systematic parameter updating. The first uses the measured modal model to derive the parameter values with the minimum variance. The user must provide estimates for the variance of the theoretical parameter values and the measured data. Previous authors using similar methods have assumed that the estimated parameters and measured modal properties are statistically independent. This will generally be the case during the first iteration but will not be the case subsequently. The second method updates the parameters directly from the frequency response functions. The order of the finite element model of the structure is reduced as a function of the unknown parameters. A method related to a weighted equation error algorithm is used to update the parameters. After each iteration the weighting changes so that on convergence the output error is minimised. The suggested methods are extensively tested using simulated data. An H frame is then used to demonstrate the algorithms on a physical structure.
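The general idea behind both methods, iteratively adjusting model parameters so that predicted modal data match measurements, can be sketched in miniature. The example below is a plain sensitivity-based Gauss-Newton update on the eigenvalues of a hypothetical two-DOF spring-mass chain with synthetic "measured" data; it is not the thesis's minimum-variance or frequency-response formulation.

```python
import numpy as np

def eigvals_chain(k1, k2, m=1.0):
    """Eigenvalues of a fixed-free 2-DOF spring-mass chain with unit masses."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.sort(np.linalg.eigvalsh(K) / m)

# "Measured" eigenvalues generated from a true stiffness k2 = 1.3 (synthetic data)
k1_true, k2_true = 1.0, 1.3
lam_meas = eigvals_chain(k1_true, k2_true)

# Update k2 from an initial guess by Gauss-Newton on the eigenvalue residual
k2 = 1.0
for _ in range(20):
    lam = eigvals_chain(k1_true, k2)
    d = 1e-6
    S = (eigvals_chain(k1_true, k2 + d) - lam) / d   # finite-difference sensitivity
    r = lam_meas - lam
    k2 += float(S @ r / (S @ S))                      # least-squares step
```

The statistical subtlety the thesis addresses is visible even here: after the first step the updated parameter is itself a function of the measured data, so parameter and measurement can no longer be treated as statistically independent.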
The transformational implementation of JSD process specifications via finite automata representation
Abstract:
Conventional structured methods of software engineering are often based on the use of functional decomposition coupled with the Waterfall development process model. This approach is argued to be inadequate for coping with the evolutionary nature of large software systems. Alternative development paradigms, including the operational paradigm and the transformational paradigm, have been proposed to address the inadequacies of this conventional view of software development, and these are reviewed. JSD is presented as an example of an operational approach to software engineering, and is contrasted with other well documented examples. The thesis shows how aspects of JSD can be characterised with reference to formal language theory and automata theory. In particular, it is noted that Jackson structure diagrams are equivalent to regular expressions and can be thought of as specifying corresponding finite automata. The thesis discusses the automatic transformation of structure diagrams into finite automata using an algorithm adapted from compiler theory, and then extends the technique to deal with areas of JSD which are not strictly formalisable in terms of regular languages. In particular, an elegant and novel method for dealing with so-called recognition (or parsing) difficulties is described. Various applications of the extended technique are described. They include a new method of automatically implementing the dismemberment transformation; an efficient way of implementing inversion in languages lacking a goto-statement; and a new in-the-large implementation strategy.
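The equivalence between structure diagrams and regular expressions can be illustrated by compiling a Jackson-style tree of sequence, selection and iteration nodes into a regular expression, and hence into a finite automaton. The sketch below uses Python's `re` engine as the automaton back end; the tuple encoding of diagrams and the example diagram are invented for illustration.

```python
import re

def compile_structure(node):
    """Compile a Jackson-style structure tree to a regular expression.
    Nodes: ('seq', ...), ('sel', ...), ('iter', child), or a leaf string."""
    if isinstance(node, str):
        return re.escape(node)
    kind, *children = node
    parts = [compile_structure(c) for c in children]
    if kind == 'seq':
        return ''.join(parts)                       # sequence -> concatenation
    if kind == 'sel':
        return '(?:' + '|'.join(parts) + ')'        # selection -> alternation
    if kind == 'iter':
        return '(?:' + parts[0] + ')*'              # iteration -> Kleene star
    raise ValueError(kind)

# A file is a header followed by zero or more records, each a credit or a debit
diagram = ('seq', 'H', ('iter', ('sel', 'C', 'D')))
pattern = re.compile(compile_structure(diagram) + r'\Z')
```

The three Jackson constructs map one-to-one onto the three regular operators, which is exactly the observation the thesis builds its transformation on.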
Abstract:
The present dissertation is concerned with the determination of the magnetic field distribution in magnetic electron lenses by means of the finite element method. In the differential form of this method, a Poisson-type equation is solved by numerical methods over a finite boundary. Previous methods of adapting this procedure to the requirements of digital computers have restricted its use to computers of extremely large core size. It is shown that by reformulating the boundary conditions, a considerable reduction in core store can be achieved for a given accuracy of field distribution. The magnetic field distribution of a lens may also be calculated by the integral form of the finite element method. This eliminates the boundary problems mentioned but introduces other difficulties. After a careful analysis of both methods it has proved possible to combine the advantages of both in a new approach to the problem, which may be called the 'differential-integral' finite element method. The application of this method to the determination of the magnetic field distribution of some new types of magnetic lenses is described. In the course of the work, considerable re-programming of standard programs was necessary in order to reduce the core store requirements to a minimum.
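The differential form of the method reduces to assembling and solving a linear system, which is where the core-store pressure described above arises. A minimal one-dimensional analogue (linear elements for -u'' = f with homogeneous Dirichlet conditions, standing in for the 2-D Poisson-type lens problem) shows the assembly-and-solve structure; everything here is illustrative.

```python
import numpy as np

def fem_poisson_1d(f, L=1.0, n_el=50):
    """Linear finite elements for -u'' = f(x) on (0, L), u(0) = u(L) = 0."""
    n = n_el + 1
    x = np.linspace(0.0, L, n)
    h = L / n_el
    K = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n_el):
        i, j = e, e + 1
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h    # element stiffness
        K[np.ix_([i, j], [i, j])] += ke
        fm = f(0.5 * (x[i] + x[j]))                      # midpoint load lumping
        b[i] += fm * h / 2
        b[j] += fm * h / 2
    K[0, :] = K[-1, :] = 0.0                             # impose Dirichlet rows
    K[0, 0] = K[-1, -1] = 1.0
    b[0] = b[-1] = 0.0
    return x, np.linalg.solve(K, b)

# f = 1 gives the parabola u(x) = x(1 - x)/2
x, u = fem_poisson_1d(lambda x: 1.0)
```

Because the assembled matrix is sparse and banded, storing only its band, or reformulating the boundary conditions as the dissertation does, is what brings the core-store requirement down.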
Abstract:
Particle impacts are of fundamental importance in many areas and there has been a renewed interest in research on particle impact problems. A comprehensive investigation of particle impact problems, using finite element (FE) methods, is presented in this thesis. The capability of FE procedures for modelling particle impacts is demonstrated by excellent agreement between FE analysis results and previous theoretical, experimental and numerical results. For normal impacts of elastic particles, it is found that the energy loss due to stress wave propagation is negligible if the wave can reflect more than three times during the impact, for which Hertz theory provides a good prediction of impact behaviour provided that the contact deformation is sufficiently small. For normal impacts of plastic particles, the energy loss due to stress wave propagation is also generally negligible, so that the energy loss is mainly due to plastic deformation. Finite-deformation plastic impact is addressed in this thesis, so that plastic impacts can be categorised into elastic-plastic impact and finite-deformation plastic impact. Criteria for the onset of finite-deformation plastic impacts are proposed in terms of impact velocity and material properties. It is found that the coefficient of restitution depends mainly upon the ratio of impact velocity to yield velocity, Vni/Vy0, for elastic-plastic impacts, but is proportional to [(Vni/Vy0)·(Y/E*)]^(-1/2), where Y/E* is the representative yield strain, for finite-deformation plastic impacts. A theoretical model for elastic-plastic impacts is also developed and compares favourably with FEA and previous experimental results. The effect of work hardening is also investigated.
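The quoted scaling has a simple quantitative consequence: at fixed material properties, e ∝ [(Vni/Vy0)·(Y/E*)]^(-1/2), so quadrupling the impact velocity halves the coefficient of restitution. The prefactor is not given in the abstract, so only the ratio is computed here.

```python
def restitution_ratio(v1, v2):
    """Ratio e(v2)/e(v1) implied by e ~ [(V/Vy0) * (Y/E*)]**(-0.5) at fixed material."""
    return (v2 / v1) ** -0.5

r = restitution_ratio(1.0, 4.0)   # quadrupling the impact velocity
```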