918 results for Coupled Finite Element Track Model
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can provide dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR yields a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the Environment Agency (EA), which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than, say, 1 m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture; typically, most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it merits further testing. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data.
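The local-minima filtering described above can be sketched in a few lines; the synthetic surface, window size, and use of a plain minimum filter are illustrative assumptions, not the EA's actual processing chain:

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Hypothetical 1 m resolution DSM patch: flat ground at 10 m elevation
# with a 7 m x 7 m block of 3 m-tall vegetation returns on top.
dsm = np.full((20, 20), 10.0)
dsm[5:12, 5:12] += 3.0

# "Look for local minima, then interpolate": approximated here by a
# minimum filter whose window is wider than the vegetation object.
# Objects wider than the window (buildings, embankments) would survive,
# which is exactly the misclassification problem noted in the abstract.
dtm = minimum_filter(dsm, size=9)

# The DSM-minus-DTM residual is the vegetation height map.
veg_height = dsm - dtm
```

With a 9 × 9 window the 7-pixel-wide block is removed entirely, so the recovered ground is flat and the residual equals the vegetation height; widening the block beyond the window would leave it embedded in the "ground" surface, mimicking the building and embankment problem.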
We are attempting to use digital map data (MasterMap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem, how best to merge historic river cross-section data with a LiDAR DTM, will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that, e.g., hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes; however, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit the data redundancy arising from the high resolution of the LiDAR compared with that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m-wide embankment within a raster grid model with a 15 m cell size, the maximum embankment height could be assigned locally to each cell covering the embankment, but how could a 5 m-wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
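The cell-size mismatch discussed above can be made concrete with a toy raster; the sizes and heights are the abstract's example figures, while the aggregation rules are illustrative:

```python
import numpy as np

# 1 m LiDAR grid, 30 m x 30 m: flat floodplain at 0 m crossed by a
# 5 m-wide, 2 m-high embankment running north-south.
lidar = np.zeros((30, 30))
lidar[:, 12:17] = 2.0

# Aggregate to a 15 m raster model grid with two candidate rules.
c = 15
blocks = lidar.reshape(30 // c, c, 30 // c, c)
coarse_max = blocks.max(axis=(1, 3))    # preserves the blocking crest
coarse_mean = blocks.mean(axis=(1, 3))  # smears the feature away
```

Taking the maximum keeps the embankment's full 2 m crest in every covering cell, as the abstract suggests, while the mean smears it below 0.5 m. Neither rule helps with a 5 m-wide ditch: a maximum would erase its depth entirely, which is the open question the abstract poses.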
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model's finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model's finite element mesh to reflect floodplain features such as hedges and trees that have different frictional properties from their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections.
The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher-level processing includes a channel repair mechanism.
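As a toy illustration of the low-level step, assuming a synthetic 1-D elevation profile and simple gradient thresholds (the paper works on 2-D LiDAR images with multi-scale edge detection, so this shows only the anti-parallel pairing idea):

```python
import numpy as np

# Synthetic inter-tidal profile: a flat at 0 m with one 1.5 m-deep channel.
z = np.zeros(50)
z[20:26] -= 1.5

# Low level: detect edges as strong elevation gradients.
g = np.gradient(z)
down = np.where(g < -0.3)[0]   # bank falling into the channel
up = np.where(g > 0.3)[0]      # bank rising out of it

# Associate adjacent anti-parallel edges: a falling edge followed by a
# rising edge within a plausible channel width forms a channel fragment.
fragments = [(d, u) for d in down for u in up if 0 < u - d <= 10]
```

Each surviving (down, up) pair brackets a candidate channel cross-section; a higher-level stage would then merge fragments into a connected network and repair gaps using domain knowledge.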
Abstract:
The implications of whether new surfaces in cutting are formed just by plastic flow past the tool or by some fracture-like separation process involving significant surface work are discussed. Oblique metal cutting is investigated using the ideas contained in a new algebraic model for the orthogonal machining of metals (Atkins, A. G., 2003, "Modeling Metalcutting Using Modern Ductile Fracture Mechanics: Quantitative Explanations for Some Longstanding Problems," Int. J. Mech. Sci., 45, pp. 373–396) in which significant surface work (ductile fracture toughness) is incorporated. The model is able to predict explicit material-dependent primary shear plane angles and provides explanations for a variety of well-known effects in cutting, such as the reduction of the primary shear plane angle at small uncut chip thicknesses; the quasilinear plots of cutting force versus depth of cut; the existence of a positive force intercept in such plots; why, in the size-effect regime of machining, anomalously high values of yield stress are determined; and why finite element method simulations of cutting have to employ a "separation criterion" at the tool tip. Predictions from the new analysis for oblique cutting (including an investigation of Stabler's rule for the relation between the chip flow velocity angle ηC and the angle of blade inclination i) compare consistently and favorably with experimental results.
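The quasilinear force plot and positive intercept mentioned above can be sketched schematically. The numbers below are invented for illustration, and the expression is only a schematic decomposition (a plastic shear term proportional to uncut chip thickness plus a thickness-independent fracture term), not Atkins' actual algebra:

```python
import numpy as np

# Illustrative (not measured) values, SI units.
tau_y = 400e6   # shear yield stress, Pa
gamma = 2.5     # shear strain in the primary shear zone
w = 2e-3        # width of cut, m
R = 20e3        # specific fracture toughness, J/m^2

# Schematic cutting force: shear work scales with chip thickness t,
# surface (fracture) work R*w does not -- hence the positive intercept.
t = np.linspace(20e-6, 200e-6, 10)
Fc = tau_y * gamma * w * t + R * w

slope, intercept = np.polyfit(t, Fc, 1)
```

A straight-line fit recovers a positive intercept equal to R·w, the signature attributed to ductile fracture toughness; in the size-effect regime, ignoring this intercept when back-calculating stress from force inflates the apparent yield stress.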
Abstract:
Samples of Norway spruce wood were impregnated with a water-soluble melamine formaldehyde resin using short-term vacuum treatment and long-term immersion, respectively. By means of Fourier transform infrared (FTIR) spectroscopy and UV microspectrophotometry, it was shown that only diffusion during long-term immersion leads to sufficient penetration of melamine resin into the wood structure, the flow of liquids in Norway spruce wood during vacuum treatment being greatly hindered by aspirated pits. After immersion in aqueous melamine resin solution for 3 days, the resin had penetrated to a depth > 4 mm, which, after polymerization of the resin, resulted in an improvement in hardness comparable to that of beech hardwood. A finite element model describing the effect of increasing depth of modification on hardness demonstrated that, under the test conditions chosen for this study, a minimum impregnation depth of 2 mm is necessary to achieve an optimum increase in hardness. © 2004 Wiley Periodicals, Inc.
Abstract:
This paper presents the results of quasi-static and dynamic testing of a glass fiber-reinforced polyester leaf suspension for rail freight vehicles, named Euroleaf. The principal elements of the suspension's design and manufacturing process are first summarized. Comparisons between quasi-static tests and finite element predictions are then presented. The Euroleaf suspension has been mounted on a tipper wagon and tested dynamically at tare and full load on a purpose-built shaker rig. A shaker-rig dynamic testing methodology has been pioneered for rail vehicles which closely follows the dynamic testing methodology used for road vehicle suspensions. The use and evaluation of this methodology have demonstrated that the Euroleaf suspension is dynamically much softer than steel suspensions even though it is statically much stiffer. As a consequence, the dynamic loading of the suspension under laden conditions is reduced compared with the most advanced steel leaf suspension over shaker-rig track tests.
Abstract:
In this study, the authors evaluate the El Niño–Southern Oscillation (ENSO)–Asian monsoon interaction in a version of the Hadley Centre coupled ocean–atmosphere general circulation model (CGCM) known as HadCM3. The main focus is on two evolving anomalous anticyclones: one located over the south Indian Ocean (SIO) and the other over the western North Pacific (WNP). These two anomalous anticyclones are closely related to the developing and decaying phases of the ENSO and play a crucial role in linking the Asian monsoon to ENSO. It is found that HadCM3 simulates well the main features of the evolution of both anomalous anticyclones and the related SST dipoles, in association with the different phases of the ENSO cycle. By using the simulated results, the authors examine the relationship between the WNP/SIO anomalous anticyclones and the ENSO cycle, in particular the biennial component of the relationship. It is found that a strong El Niño event tends to be followed by a more rapid decay and is much more likely to become a La Niña event in the subsequent winter. The twin anomalous anticyclones in the western Pacific in the summer of a decaying El Niño are crucial for the transition from an El Niño into a La Niña. The El Niño (La Niña) events, especially the strong ones, significantly strengthen the correspondence between the SIO anticyclonic (cyclonic) anomaly in the preceding autumn and the WNP anticyclonic (cyclonic) anomaly in the subsequent spring, and favor the persistence of the WNP anomaly from spring to summer. The present results suggest that both El Niño (La Niña) and the SIO/WNP anticyclonic (cyclonic) anomalies are closely tied to the tropospheric biennial oscillation (TBO).
In addition, variability in the East Asian summer monsoon, which is dominated by the internal atmospheric variability, seems to be responsible for the appearance of the WNP anticyclonic anomaly through an upper-tropospheric meridional teleconnection pattern over the western and central Pacific.
Abstract:
Reaction Injection Moulding is a technology that enables the rapid production of complex plastic parts directly from a mixture of two reactive materials of low viscosity. The reactants are mixed in specific quantities and injected into a mould. This process allows large complex parts to be produced without the need for high clamping pressures. This chapter explores the simulation of the complex processes involved in reaction injection moulding. The reaction processes mean that the dynamics of the material in the mould are in constant evolution, and an effective model which takes full account of these changing dynamics is introduced and incorporated into finite element procedures, which are able to provide a complete simulation of the cycle of mould filling and subsequent curing.
Abstract:
A mesoscale meteorological model (FOOT3DK) is coupled with a gas exchange model to simulate surface fluxes of CO2 and H2O under field conditions. The gas exchange model consists of a C3 single-leaf photosynthesis sub-model and an extended big-leaf (sun/shade) sub-model that divides the canopy into sunlit and shaded fractions. Simulated CO2 fluxes of the stand-alone version of the gas exchange model correspond well to eddy-covariance measurements at a test site in a rural area in the west of Germany. The coupled FOOT3DK/gas exchange model is validated for the diurnal cycle at individual grid points, and delivers fluxes that are realistic in both order of magnitude and general daily course. Compared to the Jarvis-based big-leaf scheme, simulations of latent heat fluxes with a photosynthesis-based scheme for stomatal conductance are more realistic. As expected, flux averages are strongly influenced by the underlying land cover. While the simulated net ecosystem exchange is highly correlated with leaf area index, this correlation is much weaker for the latent heat flux. Photosynthetic CO2 uptake is associated with transpirational water loss via the stomata, and the resulting opposing surface fluxes of CO2 and H2O are reproduced with the model approach. Over vegetated surfaces it is shown that coupling a photosynthesis-based gas exchange model with the land-surface scheme of a mesoscale model results in more realistic simulated latent heat fluxes.
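The sunlit/shaded split used by such sun/shade big-leaf schemes follows a standard beam-extinction argument (e.g. after de Pury and Farquhar); the extinction coefficient below is an assumed illustrative value, not a FOOT3DK parameter:

```python
import numpy as np

kb = 0.5   # illustrative direct-beam extinction coefficient

# Sunlit leaf area saturates at 1/kb as the canopy deepens:
#   LAI_sun = (1 - exp(-kb * LAI)) / kb,   LAI_shade = LAI - LAI_sun
LAI = np.linspace(0.5, 6.0, 12)
lai_sun = (1.0 - np.exp(-kb * LAI)) / kb
lai_shade = LAI - lai_sun
```

In a sparse canopy nearly all leaves are sunlit, while in a dense one the shaded fraction dominates; this is why the two fractions are driven by separate photosynthesis sub-models before their fluxes are summed.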
Abstract:
This paper proposes a new reconstruction method for diffuse optical tomography using reduced-order models of light transport in tissue. The models, which directly map optical tissue parameters to optical flux measurements at the detector locations, are derived based on data generated by numerical simulation of a reference model. The reconstruction algorithm based on the reduced-order models is a few orders of magnitude faster than the one based on a finite element approximation on a fine mesh incorporating a priori anatomical information acquired by magnetic resonance imaging. We demonstrate the accuracy and speed of the approach using a phantom experiment and through numerical simulation of brain activation in a rat's head. The applicability of the approach for real-time monitoring of brain hemodynamics is demonstrated through a hypercapnic experiment. We show that our results agree with the expected physiological changes and with results of a similar experimental study. However, by using our approach, a three-dimensional tomographic reconstruction can be performed in ∼3 s per time point instead of the 1 to 2 h it takes when using the conventional finite element modeling approach.
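The offline/online split behind such reduced-order reconstruction can be sketched with a stand-in forward model; here a random linear-in-log map replaces the fine-mesh finite element light-transport solve, and all sizes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "reference model": 3 optical parameters -> 6 detector fluxes.
# In the paper this role is played by a fine-mesh finite element solver.
W = rng.normal(size=(6, 3))

def reference_model(mu):
    return np.exp(-W @ mu)   # flux decays with absorption

# Offline (slow, done once): sample the reference model and fit a
# reduced map from parameters to log-fluxes.
M = rng.uniform(0.1, 1.0, size=(200, 3))
Y = np.log(np.array([reference_model(m) for m in M]))
coef, *_ = np.linalg.lstsq(M, Y, rcond=None)     # Y ~= M @ coef

# Online (fast, per time point): invert the tiny reduced map instead of
# re-running the reference model.
mu_true = np.array([0.3, 0.6, 0.2])
y_obs = np.log(reference_model(mu_true))
mu_rec, *_ = np.linalg.lstsq(coef.T, y_obs, rcond=None)
```

Because the expensive simulations all happen offline, each online reconstruction reduces to a small least-squares solve, which is the source of the seconds-versus-hours speed-up reported in the abstract.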
Abstract:
The Ultra Weak Variational Formulation (UWVF) is a powerful numerical method for the approximation of acoustic, elastic and electromagnetic waves in the time-harmonic regime. The use of Trefftz-type basis functions incorporates the known wave-like behaviour of the solution in the discrete space, allowing large reductions in the required number of degrees of freedom for a given accuracy, when compared to standard finite element methods. However, the UWVF is not well suited to the accurate approximation of singular sources in the interior of the computational domain. We propose an adjustment to the UWVF for seismic imaging applications, which we call the Source Extraction UWVF. Differing fields are solved for in subdomains around the source and matched on the inter-domain boundaries. Numerical results are presented for a domain of constant wavenumber and for a domain of varying sound speed in a model used for seismic imaging.
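The Trefftz idea of building wave behaviour into the basis can be demonstrated with a plane-wave least-squares fit; the wavenumber, direction count, and sample points below are arbitrary choices for illustration, not a UWVF implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5.0                                   # wavenumber
n_dirs = 16                               # plane-wave directions per element

# Trefftz basis: exact Helmholtz solutions exp(i k d_j . x).
angles = 2.0 * np.pi * np.arange(n_dirs) / n_dirs
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

pts = rng.uniform(-0.5, 0.5, size=(200, 2))   # sample points in an "element"
B = np.exp(1j * k * pts @ dirs.T)

# Target: a Helmholtz solution travelling in a direction NOT in the basis.
th = 0.3
u = np.exp(1j * k * pts @ np.array([np.cos(th), np.sin(th)]))

c, *_ = np.linalg.lstsq(B, u, rcond=None)
rel_err = np.linalg.norm(B @ c - u) / np.linalg.norm(u)
```

A handful of plane waves reproduces the off-basis wave to small relative error on this element, using far fewer unknowns than a polynomial basis would need at the same wavenumber; this is the degrees-of-freedom saving the abstract describes.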
Abstract:
When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to a difference in the ranking of importance of the different model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) on a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such a MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
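One of the sensitivity measures such a multi-method analysis can combine is the variance-based first-order index. A minimal brute-force estimate on a toy two-factor model follows; the model is an invented stand-in, not ESTEL-2D, and the fuzzy performance function is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    # Toy stand-in: output depends strongly on x1, weakly on x2.
    return 4.0 * x1 + 0.5 * x2

def first_order_index(which, n_outer=500, n_inner=500):
    # S_i = Var(E[Y | X_i]) / Var(Y), estimated by conditioning on
    # one factor at a time while sampling the other.
    cond_means = []
    for _ in range(n_outer):
        fixed = rng.uniform()
        free = rng.uniform(size=n_inner)
        y = model(fixed, free) if which == 0 else model(free, fixed)
        cond_means.append(y.mean())
    x1, x2 = rng.uniform(size=20000), rng.uniform(size=20000)
    return np.var(cond_means) / np.var(model(x1, x2))

S1, S2 = first_order_index(0), first_order_index(1)
```

Different global measures (variance-based, screening, regression-based) can rank the same factors differently, which is precisely why the abstract combines several of them into a multi-method analysis.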
Abstract:
Incomplete understanding of three aspects of the climate system—equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing—and the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century1,2. Explorations of these uncertainties have so far relied on scaling approaches3,4, large ensembles of simplified climate models1,2, or small ensembles of complex coupled atmosphere–ocean general circulation models5,6 which under-represent uncertainties in key climate system properties derived from independent sources7–9. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report10, but extends towards larger warming than observed in ensembles of opportunity5 typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range ‘no mitigation’ scenario for greenhouse-gas emissions.
Abstract:
This paper details a strategy for modifying the source code of a complex model so that the model may be used in a data assimilation context, and gives the standards for implementing a data assimilation code to use such a model. The strategy relies on keeping the model separate from any data assimilation code, and coupling the two through the use of Message Passing Interface (MPI) functionality. This strategy limits the changes necessary to the model and as such is rapid to program, at the expense of ultimate performance. The implementation technique is applied in different models with state dimension up to 2.7 × 10^8. The overheads added by using this implementation strategy in a coupled ocean-atmosphere climate model are shown to be an order of magnitude smaller than the addition of correlated stochastic random errors necessary for some nonlinear data assimilation techniques.
Abstract:
The current study evaluated the influence of two endodontic post systems, and of the elastic modulus and film thickness of the resin cement, on stress distribution in a maxillary central incisor (MCI) restored with direct resin composite, using finite element analysis (FEA). A three-dimensional model of an MCI with a coronal fracture and its supporting structures was constructed. A static chewing pressure of 2.16 N/mm² was applied to two areas on the palatal surface of the composite restoration. Zirconia ceramic (ZC) and glass fiber (GF) posts were considered. The stress distribution was analyzed in the post, dentin and cement layer when ZC and GF posts were fixed in the root canal using resin cements of different elastic moduli (7.0 and 18.6 GPa) and different layer thicknesses (70 and 200 μm). The post material had a significant influence on stress distribution, with lower stress concentration when using the GF post. The higher-elastic-modulus cement created higher stress levels within itself. Cement thickness did not significantly change the stress distribution.