891 results for Explicit Finite Element Modelling
Resumo:
Three-dimensional (3D) composites are strong contenders for structural applications in the aerospace, aircraft and automotive industries, where multidirectional thermal and mechanical stresses exist. The presence of reinforcement along the thickness direction in 3D composites increases the through-thickness stiffness and strength properties. 3D preforms can be manufactured with numerous complex architecture variations to meet the needs of specific applications. For hot-structure applications, Carbon-Carbon (C-C) composites are generally used, and their property variation with temperature is essential for carrying out the design of hot structures. The thermomechanical behaviour of 3D composites is not fully understood or reported. The present study deals with a methodology for finding the thermomechanical properties of 3D woven, 3D 4-axis braided and 3D 5-axis braided composites through analytical modelling of Representative Unit Cells (RUCs), based on the constitutive equations for 3D composites. High-temperature unidirectional (UD) Carbon-Carbon material properties have been evaluated using analytical methods, viz. the Composite Cylinder Assemblage model and the Method of Cells, based on experiments carried out on Carbon-Carbon fabric composite over a temperature range of 300 K to 2800 K. These properties have been used for evaluating the 3D composite properties. From among the existing solution sequences for 3D composites, the "3D Composite Strength Model" has been identified as the most suitable method. Software has been developed in MATLAB for the generation of the material properties of the RUCs of 3D composites. Correlation of the analytically determined properties with test results available in the literature has been established. Parametric studies on the variation of all the thermomechanical constants for different 3D preforms of Carbon-Carbon material have been carried out, and selection criteria have been formulated for their application to hot structures. A procedure for the structural design of hot structures made of 3D Carbon-Carbon composites has been established through numerical investigations on a Nosecap. Nonlinear transient thermal and nonlinear transient thermo-structural analyses of the Nosecap have been carried out using the finite element software NASTRAN. Failure indices have been established for the identified preforms; identification of a suitable 3D composite based on parametric studies of the strength properties, and recommendation of this material for the Nosecap of the RLV based on structural performance, have been carried out in this study. Based on the 3D failure theory, the best preform for the Nosecap has been identified as the 4-axis 15° braided composite.
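The unidirectional ply properties that unit-cell models of this kind consume are commonly estimated first by simple micromechanics. As a minimal sketch of that step (Voigt/Reuss rule-of-mixtures bounds with invented fibre and matrix moduli, not the thesis's Carbon-Carbon data, and far simpler than the Composite Cylinder Assemblage model):

```python
# Rule-of-mixtures estimates for a unidirectional ply -- the kind of input a
# Representative Unit Cell model consumes. The fibre/matrix values below are
# illustrative placeholders, not the Carbon-Carbon data of the study.

def ud_ply_moduli(Ef, Em, Vf):
    """Longitudinal (Voigt) and transverse (Reuss) moduli of a UD ply."""
    Vm = 1.0 - Vf
    E1 = Ef * Vf + Em * Vm            # parallel loading: fibres dominate
    E2 = 1.0 / (Vf / Ef + Vm / Em)    # series loading: matrix dominates
    return E1, E2

E1, E2 = ud_ply_moduli(Ef=230e9, Em=10e9, Vf=0.6)
print(round(E1 / 1e9, 1), round(E2 / 1e9, 1))   # 142.0 23.5 (GPa)
```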
Resumo:
Gabion-faced retaining walls are essentially semi-rigid structures that can generally accommodate large lateral and vertical movements without excessive structural distress. Because of this inherent feature, they offer technical and economical advantages over conventional concrete gravity retaining walls. Although they can be constructed either as gravity type or reinforced-soil type, this work mainly deals with gabion-faced reinforced earth walls, as they are more suitable for larger heights. The main focus of the present investigation was the development of a viable plane-strain two-dimensional nonlinear finite element analysis code that can predict the stress-strain behaviour of gabion-faced retaining walls, both gravity type and reinforced-soil type. The gabion facing, backfill soil, in-situ soil and foundation soil were modelled using 2D four-noded isoparametric quadrilateral elements. The confinement provided by the gabion boxes was converted into an induced apparent cohesion as per the membrane correction theory proposed by Henkel and Gilbert (1952). The mesh reinforcement was modelled using 2D two-noded linear truss elements. The interactions between the soil and the mesh reinforcement, as well as between the facing and the backfill, were modelled using 2D four-noded zero-thickness line interface elements (Desai et al., 1974), incorporating the nonlinear hyperbolic formulation for the tangential shear stiffness. The well-known hyperbolic formulation of Duncan and Chang (1970) was used for modelling the nonlinearity of the soil matrix. Failure of the soil matrix, the gabion facing and the interfaces was modelled using the Mohr-Coulomb failure criterion. The construction stages were also modelled. Experimental investigations were conducted on small-scale model walls (both in the field and in the laboratory) to suggest an alternative fill material for gabion-faced retaining walls.
The same were also used to validate the finite element programme developed as part of the study. The studies were conducted using different types of gabion fill materials. The variation was achieved by placing coarse aggregate and quarry dust in different proportions as layers one above the other, or by mixing them together in the required proportions. The deformation of the wall face was measured and the behaviour of the walls with varying fill materials was analysed. It was seen that 25% of the fill material in gabions can be replaced by a soft material (any locally available material) without affecting the deformation behaviour to a large extent. In circumstances where some deformation can be tolerated, even up to 50% replacement with soft material is possible. The developed finite element code was validated using the experimental test results and other published results. Encouraged by the close comparison between theory and experiment, an extensive and systematic parametric study was conducted in order to gain a closer understanding of the behaviour of the system. Geometric as well as material parameters were varied to understand their effect on the behaviour of the walls. The final phase of the study consisted of developing a simplified method for the design of gabion-faced retaining walls. The design was based on the limit state method, considering both stability and deformation criteria. The design parameters were selected for the system and converted to dimensionless parameters; the procedure for fixing the dimensions of the wall was thus simplified by eliminating the conventional trial-and-error procedure. Handy design charts were developed which should serve as a hands-on tool for design engineers at site. Economic studies were also conducted to demonstrate the cost-effectiveness of the structures with respect to conventional RCC gravity walls, and cost prediction models and cost breakdown ratios were proposed.
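The hyperbolic soil model used above has a closed-form tangent modulus. A minimal sketch of the standard Duncan and Chang (1970) expression, with illustrative parameter values rather than those calibrated in the study:

```python
import math

# Duncan-Chang (1970) hyperbolic tangent modulus for a soil element.
# All parameter values below are illustrative, not the thesis calibration.

def tangent_modulus(s1, s3, c, phi, K, n, Rf, Pa=101.325):
    """E_t for a stress state (s1, s3); stresses and cohesion in kPa, phi in degrees."""
    sin_p = math.sin(math.radians(phi))
    cos_p = math.cos(math.radians(phi))
    # stress level: mobilised deviator stress over the Mohr-Coulomb failure value
    dev_fail = (2 * c * cos_p + 2 * s3 * sin_p) / (1 - sin_p)
    SL = (s1 - s3) / dev_fail
    Ei = K * Pa * (s3 / Pa) ** n          # initial modulus grows with confinement
    return (1 - Rf * SL) ** 2 * Ei

Et_low = tangent_modulus(s1=150, s3=100, c=5, phi=35, K=300, n=0.5, Rf=0.9)
Et_high = tangent_modulus(s1=300, s3=100, c=5, phi=35, K=300, n=0.5, Rf=0.9)
print(Et_high < Et_low)   # stiffness degrades as shear stress is mobilised: True
```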
The studies as a whole are expected to contribute substantially to the understanding of the actual behaviour of gabion-faced retaining wall systems, with particular reference to lateral deformations.
Resumo:
The material behaviour of fibre-free and steel-fibre-reinforced reinforced concrete under biaxial compression-tension loading was investigated experimentally and theoretically. The experimental investigations built on numerous earlier tests on fibre-free reinforced concrete panels, carried out to determine the material behaviour of cracked reinforced concrete in the plane stress state. Those investigations established that transverse tensile loading reduces the biaxial compressive strength. Taking these findings into account, reinforced concrete panels were fabricated from steel-fibre-reinforced concrete in order to improve the material properties of the concrete. The material models known from the literature for concrete and reinforced concrete in the uncracked and cracked states were reviewed and critically examined with respect to the material properties of concrete and reinforced concrete determined in the past under proportional as well as non-proportional external loading. Steel fibres were added to the fresh concrete; this reduced the loss of strength and material stiffness due to crack formation, which damages the composite material concrete. It was observed that the compressive-strength reduction factor, and in particular the compressive strain associated with the maximum sustainable cylinder compressive strength, is better controlled by the addition of steel fibres. The experimental investigations were carried out on six fibre-free and seven steel-fibre-reinforced reinforced concrete panels under compression-tension loading, to determine the behaviour of cracked fibre-free and steel-fibre-reinforced reinforced concrete. The material properties of concrete, steel-fibre-reinforced concrete and reinforced concrete in the cracked state determined from the author's own tests are presented and discussed.
During crack formation in the quasi-brittle material concrete and in steel-fibre-reinforced concrete, a decrease of the modulus of elasticity was observed in addition to plastic flow. The reduction of the sustainable strength and of the associated strain cannot be captured by the classical flow theory of plasticity without modifying the hardening law. Constitutive relations based on elasto-plastic material models were proposed for fibre-free as well as steel-fibre-reinforced concrete. In addition, the present work formulates an elasto-plastic constitutive relation for concrete and steel-fibre-reinforced concrete in the cracked state. The formulated material models were used, by means of the modular nonlinear finite element program DIANA, for numerical investigations of selected experimentally tested surface structures, such as panel-, plate- and shell-type structures made of fibre-free as well as steel-fibre-reinforced concrete. Through a modified effective stress-strain relation for the hardening model, the developed elasto-plastic model captured not only plastic flow but also the degradation of the elastic moduli due to micro- and macro-cracking in the principal tension-principal compression regime. The numerical investigations of the load-deformation behaviour of panel-, plate- and shell-type structures made of fibre-free and steel-fibre-reinforced reinforced concrete showed good agreement with the experimental results.
Resumo:
Investigating the dynamic aeroelastic stability behaviour of aircraft requires highly complex computational models that must reproduce the essential elasto-mechanical and unsteady aerodynamic properties of the structure. In building such models, simplifications and idealisations must be made within the finite element method and the aerodynamic theory, and their effects on the simulation result have to be assessed. On the other hand, the structural-dynamic characteristics can be identified by ground vibration testing, whose results contain measurement uncertainties. For a robust flutter investigation, the identified uncertainties in all process steps must be bounded conservatively by specifying lower and upper limits, in order to ensure sufficient flutter stability for all flight conditions. To this end, the present work develops a computational method that combines classical flutter analysis with the methods of fuzzy and interval arithmetic. The flutter equations of motion are formulated as a parameter-dependent nonlinear eigenvalue problem. The change of the complex eigensolution due to a varying influence parameter is tracked by numerical continuation, starting from the nominal solution and using a modified Newton iteration algorithm. The result is the computed aeroelastic damping and frequency curves as functions of airspeed, together with their uncertainty bands.
Resumo:
Within density functional theory, orbital functionals such as B3LYP have been developed. These can be evaluated self-consistently with the optimized effective potential (OEP) method. While the OEP could previously be computed exactly only in the 1D case, Kümmel and Perdew developed a method in which the OEP problem can be solved self-consistently via a differential equation. In this work, a finite-element multigrid method is used to solve the resulting equations and thereby compute energies, densities and ionisation energies for atoms and diatomic molecules. Exact exchange is used as the orbital functional, but the program is easily extended to any functional. For the Be atom, 8th-order FEM allowed the total energy to be computed about two orders of magnitude more accurately than with the finite-difference code of Makmal et al. For the eigenvalues and the properties of the atoms N and Ne, the accuracy of other numerical methods was matched. The computation time grew linearly with the number of grid points, as expected. Despite rather slow SCF convergence, for the molecule LiH the accuracy achieved matched FD, and for HF was two to three orders of magnitude better than with basis-set methods. This shows that benchmark calculations can be performed in this way; owing to the fast convergence with the number of points and the low computational cost, they should also be extendable to heavier systems.
Resumo:
In this paper we consider the problem of time-harmonic acoustic scattering in two dimensions by convex polygons. Standard boundary or finite element methods for acoustic scattering problems have a computational cost that grows at least linearly as a function of the frequency of the incident wave. Here we present a novel Galerkin boundary element method, which uses an approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh, with smaller elements closer to the corners of the polygon. We prove that the best approximation from the approximation space requires a number of degrees of freedom to achieve a prescribed level of accuracy that grows only logarithmically as a function of the frequency. Numerical results demonstrate the same logarithmic dependence on the frequency for the Galerkin method solution. Our boundary element method is a discretization of a well-known second kind combined-layer-potential integral equation. We provide a proof that this equation and its adjoint are well-posed and equivalent to the boundary value problem in a Sobolev space setting for general Lipschitz domains.
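The graded-mesh idea is simple to illustrate: an algebraically graded mesh clusters nodes toward an endpoint, as done near the polygon corners above. A sketch with an assumed grading exponent (illustrative only; the paper's actual grading follows from its error analysis):

```python
import numpy as np

# A graded mesh on [0, 1] with algebraic clustering toward the endpoint 0,
# mimicking the refinement toward a polygon corner. The exponent q is an
# illustrative choice, not the one used in the paper.

def graded_mesh(n, q=3.0):
    """n+1 nodes x_j = (j/n)^q: small elements near 0, large near 1."""
    return (np.arange(n + 1) / n) ** q

x = graded_mesh(8)
h = np.diff(x)
print(np.all(h[:-1] < h[1:]))   # element sizes grow away from the corner: True
```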
Resumo:
The and RT0 finite element schemes are among the most promising low order elements for use in unstructured mesh marine and lake models. They are both free of spurious elevation modes, have good dispersive properties and have a relatively low computational cost. In this paper, we derive both finite element schemes in the same unified framework and discuss their respective qualities in terms of conservation, consistency, propagation factor and convergence rate. We also highlight the impact that the local variables placement can have on the model solution. The main conclusion that we can draw is that the choice between elements is highly application dependent. We suggest that the element is better suited to purely hydrodynamical applications while the RT0 element might perform better for hydrological applications that require scalar transport calculations.
Resumo:
Simulations of the global atmosphere for weather and climate forecasting require fast and accurate solutions, and so operational models use high-order finite differences on regular structured grids. This precludes the use of local refinement; techniques allowing local refinement are either expensive (e.g. high-order finite element techniques) or have reduced accuracy at changes in resolution (e.g. unstructured finite volume with linear differencing). We present solutions of the shallow-water equations for westerly flow over a mid-latitude mountain from a finite-volume model written using OpenFOAM. A second/third-order accurate differencing scheme is applied on arbitrarily unstructured meshes made up of various shapes and refinement patterns. The results are as accurate as those of equivalent-resolution spectral methods. Using lower-order differencing reduces accuracy at a refinement pattern, which allows errors from refinement of the mountain to accumulate and reduces the global accuracy over a 15-day simulation. We have therefore introduced a scheme which fits a 2D cubic polynomial approximately on a stencil around each cell. Using this scheme means that refinement of the mountain improves the accuracy after a 15-day simulation. This is a more severe test of local mesh refinement for global simulations than has previously been presented, but a realistic one if these techniques are to be used operationally. These efficient, high-order schemes may make it possible for local mesh refinement to be used by weather and climate forecast models.
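The core of such a scheme is a least-squares cubic fit over a stencil of cell values, evaluated where a flux is needed. A sketch of that step (stencil layout and test function are our own illustrations, not the OpenFOAM implementation):

```python
import numpy as np

# Least-squares fit of a 2D cubic polynomial over a stencil of cell centres,
# evaluated at a face point. The stencil and test function are illustrative.

def cubic_fit_weights(pts, target):
    """Weights w such that f(target) ~ sum(w_i * f(pts_i)) via a 2D cubic fit."""
    def basis(x, y):
        return np.array([1, x, y, x * x, x * y, y * y,
                         x**3, x * x * y, x * y * y, y**3])
    A = np.array([basis(x, y) for x, y in pts])   # 16 points, 10 cubic monomials
    b = basis(*target)
    # least-squares coefficients via pseudoinverse, contracted with b
    return b @ np.linalg.pinv(A)

# 4x4 stencil of cell centres around a face point at the origin
pts = [(i - 1.5, j - 1.5) for i in range(4) for j in range(4)]
w = cubic_fit_weights(pts, (0.0, 0.0))

f = lambda x, y: 2 + x - 3 * y + 0.5 * x**2 * y - y**3   # an exact cubic
exact = f(0.0, 0.0)
approx = sum(wi * f(x, y) for wi, (x, y) in zip(w, pts))
print(abs(approx - exact))   # cubics are reproduced to round-off
```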
Resumo:
The P-1-P-1 finite element pair is known to allow the existence of spurious pressure (surface elevation) modes for the shallow water equations and to be unstable for mixed formulations. We show that this behavior is strongly influenced by the strong or the weak enforcement of the impermeability boundary conditions. A numerical analysis of the Stommel model is performed for both P-1-P-1 and P-1(NC)-P-1 mixed formulations. Steady and transient test cases are considered. We observe that the P-1-P-1 element exhibits stable discrete solutions with weak boundary conditions or with fully unstructured meshes.
Resumo:
A scale-invariant moving finite element method is proposed for the adaptive solution of nonlinear partial differential equations. The mesh movement is based on a finite element discretisation of a scale-invariant conservation principle incorporating a monitor function, while the time discretisation of the resulting system of ordinary differential equations is carried out using a scale-invariant time-stepping which yields uniform local accuracy in time. The accuracy and reliability of the algorithm are successfully tested against exact self-similar solutions where available, and otherwise against a state-of-the-art h-refinement scheme for solutions of a two-dimensional porous medium equation problem with a moving boundary. The monitor functions used are the dependent variable and a monitor related to the surface area of the solution manifold.
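The monitor-function idea can be illustrated in one dimension by equidistribution: mesh nodes are placed so that each interval carries equal monitor mass, concentrating resolution where the solution varies fastest. A toy sketch (the monitor below is an invented example, not one of the paper's):

```python
import numpy as np

# 1D equidistribution of a monitor function. Each mesh interval receives equal
# "monitor mass", so nodes cluster where the monitor is large. The monitor
# function here is an invented illustration.

def equidistribute(x_fine, monitor, n_nodes):
    M = monitor(x_fine)
    # cumulative monitor integral (trapezium rule), normalised to [0, 1]
    c = np.concatenate(([0.0], np.cumsum(0.5 * (M[1:] + M[:-1])
                                         * np.diff(x_fine))))
    c /= c[-1]
    # invert: place nodes at equal increments of the cumulative integral
    return np.interp(np.linspace(0.0, 1.0, n_nodes), c, x_fine)

x_fine = np.linspace(0.0, 1.0, 2001)
monitor = lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)  # feature at x = 0.5
mesh = equidistribute(x_fine, monitor, 21)

spacing_mid = mesh[11] - mesh[10]    # near the steep feature
spacing_edge = mesh[1] - mesh[0]     # in the flat region
print(spacing_mid < spacing_edge)    # nodes cluster at the feature: True
```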
Resumo:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry: spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation, less than say 1 m high, yet typically most of a floodplain may be covered in such vegetation. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying; this obviates the need to calibrate a global floodplain friction coefficient. It is not clear at present whether the method is useful, but it is worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data.
We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed, as will the related problem of how best to merge historic river cross-section data with a LiDAR DTM. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes; however, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m-wide embankment within a raster grid model with 15 m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment, but how could a 5 m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
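The height-dependent friction idea above can be sketched as a mapping from a LiDAR vegetation-height raster to a spatially varying Manning's n field. The mapping function and its constants are hypothetical illustrations, not a calibration used by the authors or the EA:

```python
import numpy as np

# Hypothetical mapping from LiDAR vegetation height (m) to Manning's n.
# The functional form and constants are invented for illustration only.

def manning_n_from_height(veg_height, n_bare=0.03, k=0.04, n_max=0.15):
    """Spatially varying Manning's n: grows with vegetation height, capped at n_max."""
    return np.minimum(n_bare + k * np.sqrt(veg_height), n_max)

heights = np.array([0.0, 0.2, 1.0, 8.0])   # bare soil, grass, crop, hedge
n = manning_n_from_height(heights)
print(n)   # friction rises with vegetation height and saturates at n_max
```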
Resumo:
The implications of whether new surfaces in cutting are formed just by plastic flow past the tool or by some fracturelike separation process involving significant surface work, are discussed. Oblique metalcutting is investigated using the ideas contained in a new algebraic model for the orthogonal machining of metals (Atkins, A. G., 2003, "Modeling Metalcutting Using Modern Ductile Fracture Mechanics: Quantitative Explanations for Some Longstanding Problems," Int. J. Mech. Sci., 45, pp. 373–396) in which significant surface work (ductile fracture toughnesses) is incorporated. The model is able to predict explicit material-dependent primary shear plane angles and provides explanations for a variety of well-known effects in cutting, such as the reduction of at small uncut chip thicknesses; the quasilinear plots of cutting force versus depth of cut; the existence of a positive force intercept in such plots; why, in the size-effect regime of machining, anomalously high values of yield stress are determined; and why finite element method simulations of cutting have to employ a "separation criterion" at the tool tip. Predictions from the new analysis for oblique cutting (including an investigation of Stabler's rule for the relation between the chip flow velocity angle C and the angle of blade inclination i) compare consistently and favorably with experimental results.
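The quasilinear force-depth plots and their positive intercept follow from adding a surface-work (toughness) term to the plastic shear work. A simplified sketch, with rake-face friction neglected and illustrative values, so this is not the full Atkins (2003) model:

```python
import math

# Simplified Atkins-type cutting force: plastic shear work plus a surface-work
# (fracture toughness) term R. Rake-face friction is neglected and the numbers
# are illustrative -- a sketch, not the complete 2003 model.

def cutting_force(t, w, tau_y, R, phi, alpha):
    """F_c for uncut chip thickness t (m) and width w (m); phi, alpha in radians."""
    # shear strain from classical shear-plane kinematics
    gamma = math.cos(alpha) / (math.sin(phi) * math.cos(phi - alpha))
    return w * (tau_y * gamma * t + R)    # linear in t, intercept w*R > 0

phi, alpha = math.radians(25), math.radians(5)
F1 = cutting_force(t=0.1e-3, w=2e-3, tau_y=400e6, R=20e3, phi=phi, alpha=alpha)
F0 = cutting_force(t=0.0,    w=2e-3, tau_y=400e6, R=20e3, phi=phi, alpha=alpha)
print(F0 > 0)   # positive force intercept at zero depth of cut, set by R
```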
Resumo:
Several previous studies have attempted to assess the sublimation depth-scales of ice particles falling from cloud into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2–3 times larger than radar observations; similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observables) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity; this can then be used to test hypotheses as to why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; the particle density may be wrong; the particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further using a one-dimensional explicit microphysics model, which probes the sensitivity of ice sublimation to key atmospheric variables and can ingest sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice cloud altitudes is not sufficient to maintain the sharp drop in humidity that is observed in the sublimation zone.
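Why the humidity profile sets the sublimation depth-scale can be seen in a toy column model. The growth law here is reduced to mass loss proportional to capacitance times subsaturation, and the constants, fall speed and humidity profiles are invented for illustration; this is not MetUM physics:

```python
import numpy as np

# Toy 1D column model of a sublimating ice particle falling below cloud base.
# dm/dt = -k * C(m) * subsaturation, with C ~ m^(1/3) standing in for the
# capacitance. All constants and profiles are invented for illustration.

def sublimation_depth(rh_profile, z, m0=1e-8, k=2e-6, v=1.0):
    """Distance fallen below cloud base before the particle sublimates away."""
    m = m0
    for i in range(1, len(z)):
        dt = (z[i - 1] - z[i]) / v                 # time to fall one level
        C = m ** (1.0 / 3.0)                        # capacitance ~ particle size
        m -= k * C * max(0.0, 1.0 - rh_profile[i]) * dt
        if m <= 0.0:
            return z[0] - z[i]
    return z[0] - z[-1]

z = np.linspace(2000.0, 0.0, 4001)                              # cloud base at 2 km
sharp = np.where(z < 2000.0, 0.3, 1.0)                          # sharp drop in RH
smooth = np.clip(1.0 - (2000.0 - z) / 4000.0, 0.3, 1.0)         # gradual drying

# a sharp humidity drop gives a much shallower sublimation zone
print(sublimation_depth(sharp, z) < sublimation_depth(smooth, z))   # True
```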
Resumo:
The transfer of hillslope water to and through the riparian zone forms a research area of importance in hydrological investigations. Numerical modelling schemes offer a way to visualise and quantify first-order controls on catchment runoff response and mixing. We use a two-dimensional finite element model to assess the link between model setup decisions (e.g. zero-flux boundary definitions, soil algorithm choice) and the consequent hydrological process behaviour. A detailed understanding of the consequences of model configuration is required in order to produce reliable estimates of state variables. We demonstrate that model configuration decisions can effectively determine the presence or absence of particular hillslope flow processes, and the magnitude and direction of flux at the hillslope-riparian interface. If these consequences are not fully explored for any given scheme and application, the resulting process inference may well be misleading.