Abstract:
The influence of a large meridional submarine ridge on the decay of Agulhas rings is investigated with one- and two-layer setups of the isopycnic primitive-equation ocean model MICOM. In the single-layer case we show that the SSH decay of the ring is governed primarily by bottom friction and secondarily by the radiation of Rossby waves. When a topographic ridge is present, the effect of the ridge on SSH decay and on loss of tracer from the ring is negligible. However, the barotropic ring cannot pass the ridge due to energy and vorticity constraints. In the case of a two-layer ring the initial SSH decay is governed by a mixed barotropic–baroclinic instability of the ring. Again, radiation of barotropic Rossby waves is present. When the ring passes the topographic ridge, it shows a small but significant stagnation of SSH decay, agreeing with satellite altimetry observations. This is found to be due to a reduction of the growth rate of the m = 2 instability, to conversions of kinetic energy to the upper layer, and to a decrease in Rossby-wave radiation. The energy transfer is related to the fact that coherent structures in the lower layer cannot pass the steep ridge due to energy constraints. Furthermore, the loss of tracer from the ring through filamentation is less than for a ring moving over a flat bottom, related to a decrease in propagation speed of the ring. We conclude that ridges like the Walvis Ridge tend to stabilize a multi-layer ring and reduce its decay.
Abstract:
The entropy budget of the coupled atmosphere–ocean general circulation model HadCM3 is calculated. Estimates of the different entropy sources and sinks of the climate system are obtained directly from the diabatic heating terms, and an approximate estimate of the planetary entropy production is also provided. The rate of material entropy production of the climate system is found to be ∼50 mW m⁻² K⁻¹, a value intermediate in the range 30–70 mW m⁻² K⁻¹ previously reported from different models. The largest part of this is due to sensible and latent heat transport (∼38 mW m⁻² K⁻¹). Another 13 mW m⁻² K⁻¹ is due to dissipation of kinetic energy in the atmosphere by friction and Reynolds stresses. Numerical entropy production in the atmosphere dynamical core is found to be about 0.7 mW m⁻² K⁻¹. The material entropy production within the ocean due to turbulent mixing is ∼1 mW m⁻² K⁻¹, a very small contribution to the material entropy production of the climate system. The rate of change of entropy of the model climate system is about 1 mW m⁻² K⁻¹ or less, which is comparable with the typical size of the fluctuations of the entropy sources due to interannual variability, and represents a more accurate closure of the budget than achieved by previous analyses. Results are similar for FAMOUS, which has a lower spatial resolution but a similar formulation to HadCM3, while more substantial differences are found with respect to other models, suggesting that the formulation of the model has an important influence on the climate entropy budget. Since this is the first diagnosis of the entropy budget in a climate model of the type and complexity used for projection of twenty-first century climate change, it would be valuable if similar analyses were carried out for other such models.
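As a quick consistency check, the material entropy production components quoted in this abstract can be summed directly; a minimal arithmetic sketch (the grouping of components is an assumption for illustration, not the paper's own breakdown):

```python
# Material entropy production components quoted above, in mW m^-2 K^-1.
components = {
    "sensible and latent heat transport": 38.0,
    "frictional dissipation of kinetic energy": 13.0,
    "ocean turbulent mixing": 1.0,
}

total = sum(components.values())
# The sum (~52) is consistent with the stated total of ~50, given that
# each component is itself an approximate (~) value.
```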
Abstract:
This study presents a numerical method to derive the Darcy-Weisbach friction coefficient for overland flow under partial inundation of surface roughness. To better account for the variable influence of roughness with varying levels of emergence, we model the flow over a network which evolves as the free surface rises. This network is constructed using a numerical height map, based on surface roughness data, and a discrete-geometry skeletonization algorithm. By applying a hydraulic model to the flows through this network, local heads, velocities, and Froude and Reynolds numbers over the surface can be estimated. These quantities enable us to analyze the flow and ultimately to derive a bulk friction factor for flow over the entire surface which takes into account local variations in flow quantities. Results demonstrate that although the flow is laminar, head losses are chiefly inertial because of local flow disturbances. The results also emphasize that for conditions of partial inundation, flow resistance varies nonmonotonically but does generally increase with progressive roughness inundation.
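The bulk friction factor described above follows from the Darcy-Weisbach relation; a minimal sketch of the two dimensionless quantities involved (function names, the use of hydraulic radius, and the water viscosity are assumptions for illustration, not taken from the paper):

```python
def darcy_weisbach_f(hydraulic_radius, head_loss, reach_length, velocity, g=9.81):
    """Bulk Darcy-Weisbach friction factor f = 8 g R S_f / V^2,
    with friction slope S_f = head_loss / reach_length."""
    s_f = head_loss / reach_length
    return 8.0 * g * hydraulic_radius * s_f / velocity ** 2


def reynolds_number(velocity, hydraulic_radius, nu=1.0e-6):
    """Reynolds number Re = V R / nu; nu ~ 1e-6 m^2/s for water near 20 degC."""
    return velocity * hydraulic_radius / nu
```

For the shallow, partially inundated flows considered, Re computed this way typically falls in the laminar range even though, as the abstract notes, head losses are dominated by local inertial disturbances rather than viscous friction.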
Abstract:
A life cycle of the Madden–Julian oscillation (MJO) was constructed, based on 21 years of outgoing long-wave radiation data. Regression maps of NCEP–NCAR reanalysis data for the northern winter show statistically significant upper-tropospheric equatorial wave patterns linked to the tropical convection anomalies, and extratropical wave patterns over the North Pacific, North America, the Atlantic, the Southern Ocean and South America. To assess the cause of the circulation anomalies, a global primitive-equation model was initialized with the observed three-dimensional (3D) winter climatological mean flow and forced with a time-dependent heat source derived from the observed MJO anomalies. A model MJO cycle was constructed from the global response to the heating, and both the tropical and extratropical circulation anomalies generally matched the observations well. The equatorial wave patterns are established in a few days, while it takes approximately two weeks for the extratropical patterns to appear. The model response is robust and insensitive to realistic changes in damping and basic state. The model tropical anomalies are consistent with a forced equatorial Rossby–Kelvin wave response to the tropical MJO heating, although it is shifted westward by approximately 20° longitude relative to observations. This may be due to a lack of damping processes (cumulus friction) in the regions of convective heating. Once this shift is accounted for, the extratropical response is consistent with theories of Rossby wave forcing and dispersion on the climatological flow, and the pattern correlation between the observed and modelled extratropical flow is up to 0.85. The observed tropical and extratropical wave patterns account for a significant fraction of the intraseasonal circulation variance, and this reproducibility as a response to tropical MJO convection has implications for global medium-range weather prediction. Copyright © 2004 Royal Meteorological Society
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than say 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It’s not clear at present if the method is useful, but it’s worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. 
We are attempting to use digital map data (Mastermap structured topography data) to help to distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: e.g. for a 5m-wide embankment within a raster grid model with 15m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
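The idea of assigning spatially varying friction from local vegetation height can be sketched as a simple lookup; the height thresholds and Manning's n values below are illustrative assumptions only, not the calibrated values used in the work described:

```python
def manning_n_from_vegetation(height_m):
    """Map local vegetation height (m) to a hypothetical Manning's n.
    Thresholds and n values are illustrative only."""
    if height_m < 0.1:
        return 0.030   # bare soil or very short grass
    elif height_m < 1.0:
        return 0.060   # short vegetation, detected via local height texture
    else:
        return 0.100   # hedges and trees


def friction_map(node_heights):
    """Per-node friction map from (node_id, vegetation_height) pairs,
    one value per finite element mesh node."""
    return {node: manning_n_from_vegetation(h) for node, h in node_heights}
```

A lookup of this kind replaces a single calibrated global floodplain friction coefficient with one value per node, which is the motivation stated above for measuring short-vegetation height at all.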
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project in conjunction with Bristol University is aiming to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model’s finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model’s finite element mesh to reflect floodplain features such as hedges and trees having different frictional properties to their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods, by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. 
The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low level algorithms first extract channel fragments based mainly on image properties then a high level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher level processing includes a channel repair mechanism.
Abstract:
Improvements in the resolution of satellite imagery have enabled extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means for calibrating and validating flood inundation models; however, the uncertainty in these observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine an outline of a flood event in 2006 on the River Dee, North Wales, UK, using a 12.5m ERS-1 image. Points at approximately 100m intervals along this outline are selected, and the water surface elevation recorded as the LiDAR DEM elevation at each point. Approximating the water surface as a plane between the gauged upstream and downstream water elevations, the water surface elevations at points along this flooded extent are compared with their 'expected' values. The errors between the two are roughly normally distributed; however, when plotted against coordinates they show obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing the errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed-data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set will be selected. For each simulation a T-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this T-test are selected based on their error, and the selection criteria enable evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error.
The identification of significant error (RMSE = 0.8m) between the approximate expected and the actual observed elevations from the remotely sensed data emphasises the limitations of using these data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
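The comparison between the planar 'expected' surface and the observed waterline elevations can be sketched as follows (the function names and the linear interpolation between gauges are assumptions for illustration):

```python
import math

def planar_surface(x, x_up, x_down, z_up, z_down):
    """Planar (linearly interpolated) water surface elevation at chainage x,
    between the gauged upstream and downstream water elevations."""
    t = (x - x_up) / (x_down - x_up)
    return z_up + t * (z_down - z_up)


def rmse(observed, expected):
    """Root-mean-square error between observed and expected elevations."""
    n = len(observed)
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(observed, expected)) / n)
```

An RMSE of the order of the 0.8m reported above would then motivate weighting or excluding shoreline points by their error, rather than treating all extracted elevations deterministically.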
Abstract:
Satellite-observed data for flood events have been used to calibrate and validate flood inundation models, providing valuable information on the spatial extent of the flood. Improvements in the resolution of this satellite imagery have enabled indirect remote sensing of water levels by using an underlying LiDAR DEM to extract the water surface elevation at the flood margin. Further to comparison of the spatial extent, this now allows for direct comparison between modelled and observed water surface elevations. Using a 12.5m ERS-1 image of a flood event in 2006 on the River Dee, North Wales, UK, both of these data types are extracted and each is assessed for its value in the calibration of flood inundation models. A LiDAR-guided snake algorithm is used to extract an outline of the flood from the satellite image. From the extracted outline a binary grid of wet/dry cells is created at the same resolution as the model; using this, the spatial extent of the modelled and observed flood can be compared using a measure of fit between the two binary patterns of flooding. Water heights are extracted using points at intervals of approximately 100m along the extracted outline, and Student's t-test is used to compare modelled and observed water surface elevations. A LISFLOOD-FP model of the catchment is set up using LiDAR topographic data resampled to the 12.5m resolution of the satellite image, and calibration of the friction parameter in the model is undertaken using each of the two approaches. Comparison between the two approaches highlights the sensitivity of the spatial measure of fit to uncertainty in the observed data and the potential drawbacks of using the spatial extent when parts of the flood are contained by the topography.
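A common measure of fit between binary wet/dry flood patterns divides the area flooded in both model and observation by the area flooded in either; a minimal sketch under that assumption (the paper's exact formulation may differ):

```python
def extent_fit(model_wet, observed_wet):
    """F = |M intersect O| / |M union O|, over sets of wet-cell indices.
    F = 1 for a perfect match; F -> 0 as the two patterns diverge."""
    m, o = set(model_wet), set(observed_wet)
    return len(m & o) / len(m | o)
```

A measure of this form penalises both under-prediction and over-prediction of the flooded area, but, as noted above, it can become insensitive to the friction parameter where topography alone contains the flood.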
Abstract:
This article analyses the counter-terrorist operations carried out by Captain (later Major General) Orde Wingate in Palestine in 1938, and considers whether these might inform current operations. Wingate's Special Night Squads were formed from British soldiers and Jewish police specifically to counter terrorist and sabotage attacks. Their approach escalated from interdicting terrorist gangs to pre-emptive attacks on suspected terrorist sanctuaries to reprisal attacks after terrorist atrocities. They continued the British practice of using irregular units in counter-insurgency, which was sustained into the postwar era and contributed to the evolution of British Special Forces. Wingate's methods proved effective in pacifying terrorist-infested areas and could be applied again, but only in the face of 'friction' arising from changes in cultural attitudes since the 1930s, and from the political-strategic context of post-2001 counter-insurgent and counter-terrorist operations. In some cases, however, public opinion might not preclude the use of some of Wingate's techniques.
Abstract:
The existence of inertial steady currents that separate from a coast and meander afterward is investigated. By integrating the zonal momentum equation over a suitable area, it is shown that retroflecting currents cannot be steady in a reduced gravity or in a barotropic model of the ocean. Even friction cannot negate this conclusion. Previous literature on this subject, notably the discrepancy between several articles by Nof and Pichevin on the unsteadiness of retroflecting currents and steady solutions presented in other papers, is critically discussed. For more general separating current systems, a local analysis of the zonal momentum balance shows that given a coastal current with a specific zonal momentum structure, an inertial, steady, separating current is unlikely, and the only analytical solution provided in the literature is shown to be inconsistent. In a basin-wide view of these separating current systems, a scaling analysis reveals that steady separation is impossible when the interior flow is nondissipative (e.g., linear Sverdrup-like). These findings point to the possibility that a large part of the variability in the world’s oceans is due to the separation process rather than to instability of a free jet.
Abstract:
Examination of conditional instability of the second kind (CISK) and wind-induced surface heat exchange (WISHE), two proposed mechanisms for tropical cyclone and polar low intensification, suggests that the sensitivity of the intensification rate of these disturbances to surface properties, such as surface friction and moisture supply, will be different for the two mechanisms. These sensitivities were examined by perturbing the surface characteristics in a numerical model with explicit convection. The intensification rate was found to have a strong positive dependence on the heat and moisture transfer coefficients, while remaining largely insensitive to the frictional drag coefficient. CISK does not predict the observed dependence of vortex intensification rate on the heat and moisture transfer coefficients, nor the insensitivity to the frictional drag coefficient since it anticipates that intensification rate is controlled by frictional convergence in the boundary layer. Since neither conditional instability nor boundary moisture content showed any significant sensitivity to the transfer coefficients, this is true of CISK using both the convective closures of Ooyama and of Charney and Eliassen. In comparison, the WISHE intensification mechanism does predict the observed increase in intensification rate with heat and moisture transfer coefficients, while not anticipating a direct influence from surface friction.
Abstract:
The structure and size of the eyes generated in numerically simulated tropical cyclones and polar lows have been studied. A primitive-equation numerical model simulated systems in which the structures of the eyes formed were consistent with available observations. Whilst the tropical cyclone eyes generated were usually rapidly rotating, it appeared impossible for an eye formed in a system with a polar environment to develop this type of structure. The polar low eyes were found to be unable to warm through the subsidence of air with high values of potential temperature, as the environment was approximately statically neutral. Factors affecting the size of the eye were investigated through a series of controlled experiments. In mature tropical cyclone systems the size of the eye was insensitive to small changes in initial conditions, surface friction and latent and sensible heating from the ocean. In contrast, the eye size was strongly dependent on these parameters in the mature polar lows. Consistent with the findings, a mechanism is proposed in which the size of the eye in simulated polar lows is controlled by the strength of subsidence within it.
Abstract:
The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m² range rather than the few J/m² of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of specific surface work are not surprising in terms of ductile fracture mechanics, where kJ/m² values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional ‘plasticity and friction only’ analyses seem to have no quantitative explanation are now given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level, as the uncut chip thickness is increased, is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics.
The toughness/strength ratio of a given material will change with rate, temperature, and thermomechanical treatment and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
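The role of the surface-work intercept can be illustrated with the classic Ernst–Merchant shear-plane model plus an additive toughness term; this is a simplified sketch (in the generalised analysis described above, φ itself becomes material-dependent, which is not captured here):

```python
import math

def merchant_shear_angle(beta, alpha):
    """Classic Ernst-Merchant shear-plane angle phi = pi/4 - (beta - alpha)/2,
    with friction angle beta and tool rake angle alpha (radians)."""
    return math.pi / 4 - (beta - alpha) / 2


def cutting_force(t, w, tau_y, beta, alpha, toughness):
    """Cutting force: Merchant plasticity/friction term for uncut chip
    thickness t, width w and shear yield stress tau_y, plus a surface-work
    term R*w, which gives the positive intercept in F_c vs depth-of-cut plots."""
    phi = merchant_shear_angle(beta, alpha)
    plastic = w * t * tau_y * math.cos(beta - alpha) / (
        math.sin(phi) * math.cos(phi + beta - alpha))
    return plastic + toughness * w
```

Fitting measured forces with the plasticity term alone, while ignoring the R*w intercept, inflates the apparent shear yield stress at small uncut chip thickness, which is the 'size effect' referred to above.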
Abstract:
An exploratory model for cutting is presented which incorporates fracture toughness as well as the commonly considered effects of plasticity and friction. The periodic load fluctuations seen in cutting force dynamometer tests are predicted, and considerations of chatter and surface finish follow. A non-dimensional group is put forward to classify different regimes of material response to machining. It leads to tentative explanations for the difficulties of cutting materials such as ceramics and brittle polymers, and also relates to the formation of discontinuous chips. Experiments on a range of solids with widely varying toughness/strength ratios generally agree with the analysis.
Abstract:
A review is given of the mechanics of cutting, ranging from the slicing of thin floppy offcuts (where there is negligible elasticity and no permanent deformation of the offcut) to the machining of ductile metals (where there is severe permanent distortion of the offcut/chip). Materials scientists employ the former conditions to determine the fracture toughness of ‘soft’ solids such as biological materials and foodstuffs. In contrast, traditional analyses of metalcutting are based on plasticity and friction only, and do not incorporate toughness. The machining theories are inadequate in a number of ways but a recent paper has shown that when ductile work of fracture is included many, if not all, of the shortcomings are removed. Support for the new analysis is given by examination of FEM simulations of metalcutting which reveal that a ‘separation criterion’ has to be employed at the tool tip. Some consideration shows that the separation criteria are versions of void-initiation-growth-and-coalescence models employed in ductile fracture mechanics. The new analysis shows that cutting forces for ductile materials depend upon the fracture toughness as well as plasticity and friction, and reveals a simple way of determining both toughness and flow stress from cutting experiments. Examples are given for a wide range of materials including metals, polymers and wood, and comparison is made with the same properties independently determined using conventional testpieces. Because cutting can be steady state, a new way is presented for simultaneously measuring toughness and flow stress at controlled speeds and strain rates.
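The 'simple way of determining both toughness and flow stress from cutting experiments' mentioned above amounts to a straight-line fit of cutting force per unit width against depth of cut; a hedged sketch (the conversion of the slope to a flow stress depends on the shear-plane geometry, which is not resolved here):

```python
def fit_intercept_slope(depths, forces_per_width):
    """Least-squares fit of F_c/w = slope * t + intercept.
    The intercept approximates the fracture toughness R (J/m^2); the slope
    relates to the flow stress through the shear-plane geometry."""
    n = len(depths)
    mx = sum(depths) / n
    my = sum(forces_per_width) / n
    sxx = sum((x - mx) ** 2 for x in depths)
    sxy = sum((x - mx) * (y - my) for x, y in zip(depths, forces_per_width))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope
```

Because cutting can be run as a steady-state test at controlled speed, a fit of this form is what allows toughness and flow stress to be extracted simultaneously, as the abstract describes.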