67 results for Coupled Finite Element Track Model
Abstract:
A coupled ocean–atmosphere general circulation model is used to investigate the modulation of El Niño–Southern Oscillation (ENSO) variability due to a weakened Atlantic thermohaline circulation (THC). The THC weakening is induced by freshwater perturbations in the North Atlantic, and leads to a well-known sea surface temperature dipole and a southward shift of the intertropical convergence zone (ITCZ) in the tropical Atlantic. Through atmospheric teleconnections and local coupled air–sea feedbacks, a meridionally asymmetric mean-state change is generated in the eastern equatorial Pacific, corresponding to a weakened annual cycle, and westerly anomalies develop over the central Pacific. The westerly anomalies are associated with anomalous warming of SST, causing an eastward extension of the west Pacific warm pool, particularly in August–February, and enhanced precipitation. These and other changes in the mean state lead in turn to an eastward shift of the zonal wind anomalies associated with El Niño events, and a significant increase in ENSO variability. In response to a 1-Sv (1 Sv ≡ 10⁶ m³ s⁻¹) freshwater input in the North Atlantic, the THC slows down rapidly, weakening by 86% over years 50–100. The standard deviation of the Niño-3 index increases by 36% during the first 100 years of the simulation relative to the control simulation. Further analysis indicates that the weakened THC leads not only to stronger ENSO variability but also to a stronger asymmetry between El Niño and La Niña events. This study suggests a role for an atmospheric bridge that rapidly conveys the influence of the Atlantic Ocean to the tropical Pacific, and indicates that fluctuations of the THC can not only mediate the global mean climate but also modulate interannual variability. The results may contribute to understanding both the multidecadal variability of ENSO activity during the twentieth century and longer-timescale variability of ENSO, as suggested by some paleoclimate records.
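As a rough illustration of the Niño-3 diagnostic quoted above, the sketch below computes the standard deviation of area-averaged SST anomalies over the Niño-3 box (5°S–5°N, 150°W–90°W). The array layout and variable names are hypothetical, not those of the model output described in the abstract.

```python
# Minimal sketch: Niño-3 index and its standard deviation from monthly SST.
# Assumes a hypothetical NumPy array sst[time, lat, lon] on a regular grid
# whose record starts in January; names and layout are illustrative only.
import numpy as np

def nino3_std(sst, lats, lons):
    """Standard deviation of the Niño-3 index (SST anomaly averaged over
    5S-5N, 150W-90W), with the mean seasonal cycle removed."""
    lat_mask = (lats >= -5.0) & (lats <= 5.0)
    lon_mask = (lons >= 210.0) & (lons <= 270.0)   # 150W-90W in 0-360 longitudes
    box = sst[:, lat_mask][:, :, lon_mask]

    # Area weighting by cos(latitude).
    w = np.cos(np.deg2rad(lats[lat_mask]))[None, :, None]
    index = (box * w).sum(axis=(1, 2)) / (w * np.ones_like(box)).sum(axis=(1, 2))

    # Remove the monthly climatology to obtain anomalies.
    clim = np.array([index[m::12].mean() for m in range(12)])
    anom = index - clim[np.arange(len(index)) % 12]
    return anom.std()
```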
Abstract:
The flow dynamics of crystal-rich high-viscosity magma is likely to be strongly influenced by viscous and latent heat release. Viscous heating is observed to play an important role in the dynamics of fluids with temperature-dependent viscosities. The growth of microlite crystals and the accompanying release of latent heat should play a similar role in raising fluid temperatures. Earlier models of viscous heating in magmas have shown the potential for unstable (thermal runaway) flow as described by a Gruntfest number, using an Arrhenius temperature dependence for the viscosity, but have not considered crystal growth or latent heating. We present a theoretical model for magma flow in an axisymmetric conduit and consider both heating effects using Finite Element Method techniques. We consider a constant mass flux in a 1-D infinitesimal conduit segment with isothermal and adiabatic boundary conditions and Newtonian and non-Newtonian magma flow properties. We find that the growth of crystals acts to stabilize the flow field and make the magma less likely to experience a thermal runaway. The additional heating influences crystal growth and can counteract supercooling from degassing-induced crystallization and drive the residual melt composition back towards the liquidus temperature. We illustrate the models with results generated using parameters appropriate for the andesite lava dome-forming eruption at Soufriere Hills Volcano, Montserrat. These results emphasize the radial variability of the magma. Both viscous and latent heating effects are shown to be capable of playing a significant role in the eruption dynamics of Soufriere Hills Volcano. Latent heating is a factor in the top two kilometres of the conduit and may be responsible for relatively short-term (days) transients. Viscous heating is less restricted spatially, but because thermal runaway requires periods of hundreds of days to be achieved, the process is likely to be interrupted. Our models show that thermal evolution of the conduit walls could lead to an increase in the effective diameter of flow and an increase in flux at constant magma pressure.
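The interplay described above between viscous dissipation and latent heat from crystal growth can be sketched with an Arrhenius viscosity law and a simple 0-D energy balance. This is an illustrative sketch only; the parameter values are placeholders, not those used for the Soufriere Hills simulations.

```python
# Illustrative sketch (not the authors' model): Arrhenius viscosity,
# viscous dissipation, and a latent-heat source from crystal growth in a
# 0-D energy balance. All parameter values are placeholders.
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1

def viscosity(T, mu_ref=1e6, E_a=2.0e5, T_ref=1100.0):
    """Arrhenius temperature dependence: mu = mu_ref * exp[E_a/R (1/T - 1/T_ref)]."""
    return mu_ref * np.exp(E_a / R * (1.0 / T - 1.0 / T_ref))

def dT_dt(T, strain_rate=1e-3, rho=2400.0, cp=1200.0, L=3.5e5, dphi_dt=1e-7):
    """Rate of temperature change from viscous dissipation (mu * gamma_dot^2)
    plus latent heat L released at crystallisation rate dphi/dt."""
    viscous = viscosity(T) * strain_rate**2          # W m^-3
    latent = rho * L * dphi_dt                       # W m^-3
    return (viscous + latent) / (rho * cp)           # K s^-1
```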
Abstract:
In this paper we consider the problem of time-harmonic acoustic scattering in two dimensions by convex polygons. Standard boundary or finite element methods for acoustic scattering problems have a computational cost that grows at least linearly as a function of the frequency of the incident wave. Here we present a novel Galerkin boundary element method, which uses an approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh, with smaller elements closer to the corners of the polygon. We prove that the best approximation from the approximation space requires a number of degrees of freedom to achieve a prescribed level of accuracy that grows only logarithmically as a function of the frequency. Numerical results demonstrate the same logarithmic dependence on the frequency for the Galerkin method solution. Our boundary element method is a discretization of a well-known second kind combined-layer-potential integral equation. We provide a proof that this equation and its adjoint are well-posed and equivalent to the boundary value problem in a Sobolev space setting for general Lipschitz domains.
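One practical ingredient of such a method is the mesh grading towards the corners of the polygon. The sketch below shows one standard way to generate a geometrically graded set of points on a single side, with the smallest elements at the two corners; the grading ratio and element count are illustrative, not taken from the paper.

```python
# Minimal sketch of a mesh graded towards both endpoints of a side [0, L],
# with smaller elements near the corners. Grading parameters are illustrative.
import numpy as np

def graded_side(L=1.0, n=8, sigma=0.15):
    """Return mesh points on [0, L] clustered geometrically towards 0 and L.
    sigma < 1 is the ratio between successive element sizes."""
    t = sigma ** np.arange(n, -1, -1)          # sigma^n, ..., sigma, 1
    left = 0.5 * L * t                         # points in (0, L/2], finest near 0
    left = np.concatenate(([0.0], left))
    right = L - left[::-1]                     # mirror towards the other corner
    return np.unique(np.concatenate((left, right)))

print(graded_side())
```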
Abstract:
The P1NC–P1 and RT0 finite element schemes are among the most promising low-order elements for use in unstructured-mesh marine and lake models. They are both free of spurious elevation modes, have good dispersive properties and have a relatively low computational cost. In this paper, we derive both finite element schemes in the same unified framework and discuss their respective qualities in terms of conservation, consistency, propagation factor and convergence rate. We also highlight the impact that the placement of the local variables can have on the model solution. The main conclusion that we can draw is that the choice between elements is highly application dependent. We suggest that the P1NC–P1 element is better suited to purely hydrodynamical applications, while the RT0 element might perform better for hydrological applications that require scalar transport calculations.
Abstract:
A numerical algorithm for the biharmonic equation in domains with piecewise smooth boundaries is presented. It is intended for problems describing the Stokes flow in the situations where one has corners or cusps formed by parts of the domain boundary and, due to the nature of the boundary conditions on these parts of the boundary, these regions have a global effect on the shape of the whole domain and hence have to be resolved with sufficient accuracy. The algorithm combines the boundary integral equation method for the main part of the flow domain and the finite-element method which is used to resolve the corner/cusp regions. Two parts of the solution are matched along a numerical ‘internal interface’ or, as a variant, two interfaces, and they are determined simultaneously by inverting a combined matrix in the course of iterations. The algorithm is illustrated by considering the flow configuration of ‘curtain coating’, a flow where a sheet of liquid impinges onto a moving solid substrate, which is particularly sensitive to what happens in the corner region formed, physically, by the free surface and the solid boundary. The ‘moving contact line problem’ is addressed in the framework of an earlier developed interface formation model which treats the dynamic contact angle as part of the solution, as opposed to it being a prescribed function of the contact line speed, as in the so-called ‘slip models’. Keywords: Dynamic contact angle; finite elements; free surface flows; hybrid numerical technique; Stokes equations.
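At the linear-algebra level, matching the boundary-integral and finite-element parts along the internal interface leads to a single combined system in which the two discretisations are coupled through interface blocks. The sketch below shows only that block structure; the matrices are random placeholders, not the actual discretised operators.

```python
# Generic sketch of coupling two discretisations (a boundary-integral part and
# a finite-element part) through interface blocks and solving the combined
# matrix at once. The blocks are random placeholders, not the discretised
# Stokes/biharmonic operators of the paper.
import numpy as np

n_bie, n_fem, n_int = 40, 60, 10          # unknowns: BIE part, FEM part, interface
rng = np.random.default_rng(0)

A_bie = np.eye(n_bie) + 0.01 * rng.standard_normal((n_bie, n_bie))
A_fem = np.eye(n_fem) + 0.01 * rng.standard_normal((n_fem, n_fem))
C_bi = 0.01 * rng.standard_normal((n_bie, n_int))    # coupling to the interface
C_fi = 0.01 * rng.standard_normal((n_fem, n_int))
B_ib = 0.01 * rng.standard_normal((n_int, n_bie))    # interface matching rows
B_if = 0.01 * rng.standard_normal((n_int, n_fem))
M_ii = np.eye(n_int)

# Both subdomain solutions and the interface unknowns are found simultaneously.
K = np.block([
    [A_bie,                     np.zeros((n_bie, n_fem)), C_bi],
    [np.zeros((n_fem, n_bie)),  A_fem,                    C_fi],
    [B_ib,                      B_if,                     M_ii],
])
rhs = rng.standard_normal(n_bie + n_fem + n_int)
solution = np.linalg.solve(K, rhs)
print(solution.shape)
```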
Abstract:
The entropy budget of the coupled atmosphere–ocean general circulation model HadCM3 is calculated. Estimates of the different entropy sources and sinks of the climate system are obtained directly from the diabatic heating terms, and an approximate estimate of the planetary entropy production is also provided. The rate of material entropy production of the climate system is found to be ∼50 mW m⁻² K⁻¹, a value intermediate in the range 30–70 mW m⁻² K⁻¹ previously reported from different models. The largest part of this is due to sensible and latent heat transport (∼38 mW m⁻² K⁻¹). Another 13 mW m⁻² K⁻¹ is due to dissipation of kinetic energy in the atmosphere by friction and Reynolds stresses. Numerical entropy production in the atmosphere dynamical core is found to be about 0.7 mW m⁻² K⁻¹. The material entropy production within the ocean due to turbulent mixing is ∼1 mW m⁻² K⁻¹, a very small contribution to the material entropy production of the climate system. The rate of change of entropy of the model climate system is about 1 mW m⁻² K⁻¹ or less, which is comparable with the typical size of the fluctuations of the entropy sources due to interannual variability, and represents a more accurate closure of the budget than achieved by previous analyses. Results are similar for FAMOUS, which has a lower spatial resolution but a similar formulation to HadCM3, while more substantial differences are found with respect to other models, suggesting that the formulation of the model has an important influence on the climate entropy budget. Since this is the first diagnosis of the entropy budget in a climate model of the type and complexity used for projection of twenty-first century climate change, it would be valuable if similar analyses were carried out for other such models.
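The material entropy production quoted above is, in essence, an area-weighted sum of diabatic heating rates divided by the temperatures at which the heating occurs. The sketch below shows that bookkeeping for hypothetical gridded fields; the names and shapes are illustrative, not HadCM3 diagnostics.

```python
# Illustrative bookkeeping for material entropy production (mW m^-2 K^-1):
# sum of diabatic heating terms Q divided by the local temperature T,
# expressed per unit area of the planet. Field names are hypothetical.
import numpy as np

def entropy_production(Q, T, cell_area):
    """Q: diabatic heating per cell [W], including both positive (heating)
    and negative (cooling) terms; T: temperature at which the heating
    occurs [K]; cell_area: horizontal cell area [m^2].
    Returns entropy production per unit area [mW m^-2 K^-1]."""
    total = np.sum(Q / T)                     # W K^-1
    return 1e3 * total / np.sum(cell_area)
```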
Abstract:
A scale-invariant moving finite element method is proposed for the adaptive solution of nonlinear partial differential equations. The mesh movement is based on a finite element discretisation of a scale-invariant conservation principle incorporating a monitor function, while the time discretisation of the resulting system of ordinary differential equations is carried out using a scale-invariant time-stepping which yields uniform local accuracy in time. The accuracy and reliability of the algorithm are successfully tested against exact self-similar solutions where available, and otherwise against a state-of-the-art h-refinement scheme for solutions of a two-dimensional porous medium equation problem with a moving boundary. The monitor functions used are the dependent variable and a monitor related to the surface area of the solution manifold.
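In one dimension the monitor-function idea reduces to equidistribution: mesh points are moved so that each cell carries an equal share of the integral of the monitor. The sketch below implements that generic equidistribution step, not the scale-invariant conservation-based formulation of the paper.

```python
# Generic 1-D equidistribution of a monitor function M(x): relocate mesh
# points so that each cell carries an equal share of the integral of M.
# Illustrates the monitor-function idea only.
import numpy as np

def equidistribute(x, M):
    """x: current mesh nodes (ascending), M: monitor values at those nodes.
    Returns new nodes with an equal integral of M per cell."""
    # Cumulative integral of M by the trapezoidal rule.
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))))
    # Target: equally spaced values of the cumulative integral.
    targets = np.linspace(0.0, cum[-1], len(x))
    # Invert the cumulative integral by interpolation.
    return np.interp(targets, cum, x)

x = np.linspace(0.0, 1.0, 21)
M = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # monitor peaked at x = 0.5
print(equidistribute(x, M))                        # nodes cluster near the peak
```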
Abstract:
This paper presents the major characteristics of the Institut Pierre Simon Laplace (IPSL) coupled ocean–atmosphere general circulation model. The model components and the coupling methodology are described, as well as the main characteristics of the climatology and interannual variability. Results from the standard version, used for IPCC climate projections and for intercomparison projects such as the Paleoclimate Modeling Intercomparison Project (PMIP 2), are compared to those from a version with higher atmospheric resolution. A focus on the North Atlantic and on the tropics is used to address the impact of the atmospheric resolution on processes and feedbacks. In the North Atlantic, the resolution change leads to an improved representation of the storm tracks and the North Atlantic Oscillation. The better representation of the wind structure increases the northward salt transport, the deep-water formation and the Atlantic meridional overturning circulation. In the tropics, the ocean–atmosphere dynamical coupling, or Bjerknes feedback, improves with resolution. The amplitude of ENSO (El Niño–Southern Oscillation) consequently increases, as the damping processes are left unchanged.
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than, say, 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it is worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that, for example, hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5m-wide embankment within a raster grid model with a 15m cell size, the maximum local height of the embankment could be assigned to each cell covering the embankment, but how could a 5m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
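The friction-mapping idea described above, assigning a spatially varying friction coefficient from local vegetation height rather than calibrating a single global value, amounts to a simple lookup at each mesh node or grid cell. The height classes and Manning's n values in the sketch below are illustrative placeholders only.

```python
# Sketch of mapping LiDAR vegetation height at each mesh node to a spatially
# varying friction coefficient (Manning's n). Class boundaries and n values
# are illustrative, not calibrated values.
import numpy as np

def manning_from_vegetation(height_m):
    """height_m: vegetation height [m] sampled at each mesh node."""
    n = np.empty_like(height_m, dtype=float)
    n[height_m < 0.1] = 0.025                          # bare ground / very short grass
    n[(height_m >= 0.1) & (height_m < 1.0)] = 0.035    # short vegetation
    n[(height_m >= 1.0) & (height_m < 5.0)] = 0.07     # hedges, shrubs
    n[height_m >= 5.0] = 0.12                          # trees
    return n

heights = np.array([0.05, 0.4, 2.0, 12.0])
print(manning_from_vegetation(heights))
```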
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography, together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model's finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model's finite element mesh to reflect floodplain features such as hedges and trees that have different frictional properties to their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher-level processing includes a channel repair mechanism.
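The low-level stage of the channel-extraction technique rests on multi-scale edge detection applied to the LiDAR DEM. The sketch below illustrates only that stage, combining Gaussian gradient magnitudes over several scales; the scales and threshold are illustrative, and the anti-parallel edge association and knowledge-based repair stages are not reproduced.

```python
# Minimal sketch of the low-level stage only: multi-scale edge strength on a
# LiDAR DEM via Gaussian gradient magnitudes, combined by taking the maximum
# response over scales. Scales and threshold are illustrative.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def channel_edge_mask(dem, scales=(1.0, 2.0, 4.0), threshold=0.5):
    """dem: 2-D array of LiDAR elevations. Returns a boolean edge mask."""
    responses = [gaussian_gradient_magnitude(dem, sigma=s) for s in scales]
    edge_strength = np.max(np.stack(responses), axis=0)
    return edge_strength > threshold * edge_strength.max()
```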
Abstract:
The implications of whether new surfaces in cutting are formed just by plastic flow past the tool or by some fracture-like separation process involving significant surface work are discussed. Oblique metalcutting is investigated using the ideas contained in a new algebraic model for the orthogonal machining of metals (Atkins, A. G., 2003, "Modeling Metalcutting Using Modern Ductile Fracture Mechanics: Quantitative Explanations for Some Longstanding Problems," Int. J. Mech. Sci., 45, pp. 373–396) in which significant surface work (ductile fracture toughnesses) is incorporated. The model is able to predict explicit material-dependent primary shear plane angles and provides explanations for a variety of well-known effects in cutting, such as the reduction of the primary shear plane angle at small uncut chip thicknesses; the quasilinear plots of cutting force versus depth of cut; the existence of a positive force intercept in such plots; why, in the size-effect regime of machining, anomalously high values of yield stress are determined; and why finite element method simulations of cutting have to employ a "separation criterion" at the tool tip. Predictions from the new analysis for oblique cutting (including an investigation of Stabler's rule for the relation between the chip flow velocity angle C and the angle of blade inclination i) compare consistently and favorably with experimental results.
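The quasilinear force-depth plots and the positive force intercept mentioned above follow from a work balance in which plastic work scales with the cut area while surface (fracture) work scales with the width of cut. A simplified, friction-free version of that balance is sketched below; the numerical values are placeholders and the friction correction of the full model is omitted.

```python
# Simplified, friction-free work balance for cutting (illustrative of the
# quasilinear force-depth plot and positive intercept; the full model's
# friction correction is omitted):
#   Fc * V = tau_y * gamma * (t * w * V) + R * (w * V)
# so Fc(t) = tau_y * gamma * w * t + R * w, linear in t with intercept R*w
# set by the fracture toughness R.
import numpy as np

tau_y = 400e6      # Pa, shear yield stress (placeholder)
gamma = 2.5        # shear strain in the primary shear zone (placeholder)
R = 20e3           # J m^-2, specific work of surface separation (placeholder)
w = 2e-3           # m, width of cut

t = np.linspace(10e-6, 300e-6, 5)          # uncut chip thickness [m]
Fc = tau_y * gamma * w * t + R * w         # cutting force [N]
print(np.c_[t * 1e6, Fc])                  # thickness [um] vs force [N]
```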
Abstract:
Samples of Norway spruce wood were impregnated with a water-soluble melamine formaldehyde resin by using short-term vacuum treatment and long-term immersion, respectively. By means of Fourier transform infrared (FTIR) spectroscopy and UV microspectrophotometry, it was shown that only diffusion during long-term immersion leads to sufficient penetration of melamine resin into the wood structure, the flow of liquids in Norway spruce wood during vacuum treatment being greatly hindered by aspirated pits. After an immersion in aqueous melamine resin solution for 3 days, the resin had penetrated to a depth > 4 mm, which, after polymerization of the resin, resulted in an improvement of hardness comparable to the hardwood beech. A finite element model describing the effect of increasing depth of modification on hardness demonstrated that under the test conditions chosen for this study, a minimum impregnation depth of 2 mm is necessary to achieve an optimum increase in hardness.
Abstract:
This paper presents the results of quasi-static and dynamic testing of a glass fiber-reinforced polyester leaf suspension for rail freight vehicles, named Euroleaf. The principal elements of the suspension's design and manufacturing process are initially summarized. Comparisons between quasi-static tests and finite element predictions are then presented. The Euroleaf suspension has been mounted on a tipper wagon and tested dynamically at tare and full load on a purpose-built shaker rig. A shaker rig dynamic testing methodology has been pioneered for rail vehicles, which follows closely road vehicle suspension dynamic testing methodology. The use and evaluation of this methodology have demonstrated that the Euroleaf suspension is dynamically much softer than steel suspensions even though it is statically much stiffer. As a consequence, the dynamic loading of the suspension under laden conditions is reduced, compared to the most advanced steel leaf suspension, over shaker rig track tests.
Abstract:
In this study, the authors evaluate the El Niño–Southern Oscillation (ENSO)–Asian monsoon interaction in a version of the Hadley Centre coupled ocean–atmosphere general circulation model (CGCM) known as HadCM3. The main focus is on two evolving anomalous anticyclones: one located over the south Indian Ocean (SIO) and the other over the western North Pacific (WNP). These two anomalous anticyclones are closely related to the developing and decaying phases of ENSO and play a crucial role in linking the Asian monsoon to ENSO. It is found that HadCM3 simulates well the main features of the evolution of both anomalous anticyclones and the related SST dipoles, in association with the different phases of the ENSO cycle. Using the simulated results, the authors examine the relationship between the WNP/SIO anomalous anticyclones and the ENSO cycle, in particular the biennial component of the relationship. It is found that a strong El Niño event tends to decay more rapidly and is much more likely to become a La Niña event in the subsequent winter. The twin anomalous anticyclones in the western Pacific in the summer of a decaying El Niño are crucial for the transition from an El Niño into a La Niña. El Niño (La Niña) events, especially the strong ones, significantly strengthen the correspondence between the SIO anticyclonic (cyclonic) anomaly in the preceding autumn and the WNP anticyclonic (cyclonic) anomaly in the subsequent spring, and favor the persistence of the WNP anomaly from spring to summer. The present results suggest that both El Niño (La Niña) and the SIO/WNP anticyclonic (cyclonic) anomalies are closely tied to the tropospheric biennial oscillation (TBO). In addition, variability in the East Asian summer monsoon, which is dominated by internal atmospheric variability, seems to be responsible for the appearance of the WNP anticyclonic anomaly through an upper-tropospheric meridional teleconnection pattern over the western and central Pacific.
Abstract:
Reaction Injection Moulding is a technology that enables the rapid production of complex plastic parts directly from a mixture of two reactive materials of low viscosity. The reactants are mixed in specific quantities and injected into a mould. This process allows large, complex parts to be produced without the need for high clamping pressures. This chapter explores the simulation of the complex processes involved in reaction injection moulding. The reaction processes mean that the dynamics of the material in the mould are in constant evolution, and an effective model that takes full account of these changing dynamics is introduced and incorporated into finite element procedures, which are able to provide a complete simulation of the cycle of mould filling and subsequent curing.
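The constantly evolving dynamics of the reacting material are commonly represented by coupling a cure-kinetics rate equation to a cure-dependent viscosity. The sketch below integrates one common empirical form (Kamal-type kinetics with a gel-point viscosity law) purely as an illustration; the coefficients are placeholders, not values from the chapter.

```python
# Illustrative coupling of cure kinetics and viscosity for a reactive resin
# (a common empirical form, not the specific model of the chapter):
#   d(alpha)/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n   (Kamal-type kinetics)
#   mu(alpha)   = mu0 * (alpha_gel / (alpha_gel - alpha))**c
# All coefficients are placeholders.
import numpy as np

def cure_rate(alpha, k1=0.02, k2=0.5, m=0.8, n=1.5):
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

def viscosity(alpha, mu0=0.5, alpha_gel=0.8, c=2.0):
    # Viscosity diverges as the degree of cure approaches the gel point.
    return mu0 * (alpha_gel / (alpha_gel - alpha))**c if alpha < alpha_gel else np.inf

# Explicit Euler integration of the degree of cure during filling/curing.
dt, alpha = 0.1, 0.0
for step in range(600):                      # 60 s of cure, 0.1 s steps
    alpha = min(alpha + dt * cure_rate(alpha), 0.999)
    if step % 100 == 0:
        print(f"t = {step*dt:5.1f} s  alpha = {alpha:.3f}  mu = {viscosity(alpha):.2e} Pa s")
```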