84 results for Empirical Flow Models
in CentAUR: Central Archive University of Reading - UK
Abstract:
A significant challenge in the prediction of climate change impacts on ecosystems and biodiversity is quantifying the sources of uncertainty that emerge within and between different models. Statistical species niche models have grown in popularity, yet no single best technique has been identified, reflecting differing performance in different situations. Our aim was to quantify uncertainties associated with the application of two complementary modelling techniques. Generalised linear mixed models (GLMM) and generalised additive mixed models (GAMM) were used to model the realised niche of ombrotrophic Sphagnum species in British peatlands. These models were then used to predict changes in Sphagnum cover between 2020 and 2050 based on projections of climate change and atmospheric deposition of nitrogen and sulphur. Over 90% of the variation in the GLMM predictions was due to niche model parameter uncertainty, dropping to 14% for the GAMM. After covarying out other factors, average variation in predicted values of Sphagnum cover across UK peatlands was the next largest source of variation (8% for the GLMM and 86% for the GAMM). The better performance of the GAMM needs to be weighed against its tendency to overfit the training data. While our niche models are only a first approximation, we used them to undertake a preliminary evaluation of the relative importance of climate change and nitrogen and sulphur deposition and the geographic locations of the largest expected changes in Sphagnum cover. Predicted changes in cover were all small (generally <1% in an average 4 m² unit area) but also highly uncertain. Peatlands expected to be most affected by climate change in combination with atmospheric pollution were Dartmoor, the Brecon Beacons and the western Lake District.
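The variance partition described above can be illustrated with a short sketch. The Python snippet below is not the authors' code: it uses plain GLM and spline-based stand-ins rather than true mixed models, and the predictor names `temp` and `ndep` are invented. It fits both model types and splits prediction variance into a parameter-uncertainty component and a between-site component by sampling parameters from their estimated covariance.

```python
# A minimal sketch of the variance-partitioning idea, under assumed data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "temp": rng.normal(8, 2, n),    # hypothetical mean temperature
    "ndep": rng.normal(15, 5, n),   # hypothetical N deposition
})
df["cover"] = 20 + 1.5 * df["temp"] - 0.4 * df["ndep"] + rng.normal(0, 3, n)

glm = smf.ols("cover ~ temp + ndep", data=df).fit()
gam_like = smf.ols("cover ~ bs(temp, df=4) + bs(ndep, df=4)", data=df).fit()

def variance_partition(res, n_draws=2000):
    """Split prediction variance: parameter uncertainty vs between-site spread."""
    beta = rng.multivariate_normal(np.asarray(res.params),
                                   np.asarray(res.cov_params()), n_draws)
    X = res.model.exog                  # fitted design matrix
    preds = beta @ X.T                  # draws x sites
    v_param = preds.var(axis=0).mean()  # mean over sites of parameter variance
    v_sites = preds.mean(axis=0).var()  # spread of mean prediction across sites
    total = v_param + v_sites
    return v_param / total, v_sites / total

for name, res in [("GLM", glm), ("GAM-like", gam_like)]:
    p, s = variance_partition(res)
    print(f"{name}: parameter uncertainty {p:.0%}, between-site spread {s:.0%}")
```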
Abstract:
Real estate depreciation continues to be a critical issue for investors and the appraisal profession in the UK in the 1990s. Depreciation-sensitive cash flow models have been developed, but there is a real need to develop further empirical methodologies to determine rental depreciation rates for input into these models. Although building quality has been found to be an important explanatory variable in depreciation, it is very difficult to incorporate it into such models or to analyse it retrospectively. It is essential to examine previous depreciation research from real estate and economics in the USA and UK to understand the issues in constructing a valid and pragmatic way of calculating rental depreciation. Distinguishing between 'depreciation' and 'obsolescence' is important, and the pattern of depreciation in any study can be influenced by such factors as the type (longitudinal or cross-sectional) and timing of the study, and the market state. Longitudinal studies can analyse change more directly than cross-sectional studies. Any methodology for calculating rental depreciation rates should be formulated in the context of such issues as 'censored sample bias', 'lemons' and 'filtering', which have been highlighted in key US literature from the field of economic depreciation. Property depreciation studies in the UK have tended to overlook this literature, however. Although data limitations and constraints reduce the ability of empirical property depreciation work in the UK to consider these issues fully, 'averaging' techniques and ordinary least squares (OLS) regression can both provide a consistent way of calculating rental depreciation rates within a 'cohort' framework.
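As an illustration of the two estimation routes mentioned, the sketch below computes a rental depreciation rate both by OLS regression of log rent on age within a cohort and by a simple averaging comparison of new and old stock. The data are synthetic and the column names `age` and `rent` are hypothetical.

```python
# A minimal sketch of 'averaging' vs OLS depreciation estimates, assumed data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
cohort = pd.DataFrame({"age": rng.integers(0, 30, 200)})
# synthetic rents declining ~1.5% a year relative to a new-build benchmark
cohort["rent"] = 100 * np.exp(-0.015 * cohort["age"]) * rng.lognormal(0, 0.05, 200)

# OLS route: the slope of log(rent) on age is the annual depreciation rate
ols = smf.ols("np.log(rent) ~ age", data=cohort).fit()
print(f"OLS depreciation rate: {-ols.params['age']:.2%} per year")

# Averaging route: compare mean rents of new and old stock, annualised
young = cohort[cohort.age <= 5]
old = cohort[cohort.age >= 25]
gap_years = old["age"].mean() - young["age"].mean()
rate = 1 - (old["rent"].mean() / young["rent"].mean()) ** (1 / gap_years)
print(f"Averaging estimate: {rate:.2%} per year")
```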
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project in conjunction with Bristol University is aiming to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model’s finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model’s finite element mesh to reflect floodplain features such as hedges and trees having different frictional properties to their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods, by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low level algorithms first extract channel fragments based mainly on image properties then a high level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher level processing includes a channel repair mechanism.
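The segmentation step of the first project can be caricatured in a few lines. The sketch below is not the ESSC segmenter: it assumes a single surface raster and uses a local minimum filter as a crude ground estimate, taking the residual as vegetation height and mapping it to an illustrative friction proxy.

```python
# A crude sketch of splitting a LiDAR surface into ground and vegetation.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
dsm = np.cumsum(rng.normal(0, 0.02, (200, 200)), axis=0)  # synthetic terrain
dsm[60:80, 60:140] += 8.0                                 # a block of 'trees'

ground = ndimage.minimum_filter(dsm, size=25)     # local minima ~ bare earth
ground = ndimage.uniform_filter(ground, size=25)  # smooth the staircase effect
veg_height = np.clip(dsm - ground, 0, None)

# an invented friction proxy per raster cell, e.g. for a mesh-node lookup
manning_n = 0.03 + 0.02 * np.tanh(veg_height / 2.0)
print(f"max vegetation height {veg_height.max():.1f} m, "
      f"max friction proxy {manning_n.max():.3f}")
```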
Abstract:
Heat waves are expected to increase in frequency and magnitude with climate change. The first part of a study to produce projections of the effect of future climate change on heat-related mortality is presented. Separate city-specific empirical statistical models that quantify significant relationships between summer daily maximum temperature (Tmax) and daily heat-related deaths are constructed from historical data for six cities: Boston, Budapest, Dallas, Lisbon, London, and Sydney. ‘Threshold temperatures’ above which heat-related deaths begin to occur are identified. The results demonstrate significantly lower thresholds in ‘cooler’ cities exhibiting lower mean summer temperatures than in ‘warmer’ cities exhibiting higher mean summer temperatures. Analysis of individual ‘heat waves’ illustrates that a greater proportion of mortality is due to mortality displacement in cities with less sensitive temperature–mortality relationships than in those with more sensitive relationships, and that mortality displacement is no longer a feature more than 12 days after the end of the heat wave. Validation techniques through residual and correlation analyses of modelled and observed values and comparisons with other studies indicate that the observed temperature–mortality relationships are represented well by each of the models. The models can therefore be used with confidence to examine future heat-related deaths under various climate change scenarios for the respective cities (presented in Part 2).
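A threshold model of this kind is straightforward to fit by profiling. The sketch below, on synthetic data rather than the paper's city series, estimates the threshold temperature by scanning candidate values and keeping the one that minimises the residual sum of squares of a 'hockey-stick' regression.

```python
# A minimal sketch of threshold estimation for a temperature-mortality model.
import numpy as np

rng = np.random.default_rng(3)
tmax = rng.uniform(15, 40, 500)
thresh_true, slope = 27.0, 2.5
deaths = 10 + slope * np.clip(tmax - thresh_true, 0, None) + rng.normal(0, 2, 500)

def rss_at(threshold):
    # deaths are flat below the threshold and rise linearly above it
    X = np.column_stack([np.ones_like(tmax),
                         np.clip(tmax - threshold, 0, None)])
    _, rss, *_ = np.linalg.lstsq(X, deaths, rcond=None)
    return rss[0]

grid = np.arange(20.0, 35.0, 0.1)
best = grid[int(np.argmin([rss_at(t) for t in grid]))]
print(f"estimated threshold: {best:.1f} C (true value {thresh_true})")
```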
Abstract:
An important test of the quality of a computational model is its ability to reproduce standard test cases or benchmarks. For steady open-channel flow based on the Saint Venant equations some benchmarks exist for simple geometries from the work of Bresse, Bakhmeteff and Chow but these are tabulated in the form of standard integrals. This paper provides benchmark solutions for a wider range of cases, which may have a nonprismatic cross section, nonuniform bed slope, and transitions between subcritical and supercritical flow. This makes it possible to assess the underlying quality of computational algorithms in more difficult cases, including those with hydraulic jumps. Several new test cases are given in detail and the performance of a commercial steady flow package is evaluated against two of them. The test cases may also be used as benchmarks for both steady flow models and unsteady flow models in the steady limit.
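For readers wanting a concrete starting point, the sketch below integrates the steady gradually-varied-flow form of the Saint Venant equations, dh/dx = (S0 − Sf) / (1 − Fr²), for a prismatic rectangular channel with Manning friction. Geometry and roughness values are illustrative; the published benchmarks cover harder, nonprismatic cases.

```python
# A minimal sketch of a steady gradually-varied-flow computation.
from scipy.integrate import solve_ivp

g, Q, b, n_man, S0 = 9.81, 20.0, 5.0, 0.03, 1e-3  # SI units, illustrative

def dh_dx(x, h):
    h = h[0]
    A, P = b * h, b + 2 * h                       # flow area, wetted perimeter
    R = A / P                                     # hydraulic radius
    Sf = (n_man * Q / (A * R ** (2 / 3))) ** 2    # Manning friction slope
    Fr2 = Q ** 2 * b / (g * A ** 3)               # Froude number squared
    return [(S0 - Sf) / (1 - Fr2)]

# subcritical flow: integrate upstream from a 3 m downstream control depth
sol = solve_ivp(dh_dx, [0.0, -2000.0], [3.0], max_step=10.0)
print(f"depth 2 km upstream: {sol.y[0, -1]:.3f} m")
```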
Abstract:
Rapid rates of urbanization have resulted in increased concern about the urban environment. Among these concerns, wind and thermal comfort levels for pedestrians have attracted research interest. In this regard, the urban wind environment is seen as a crucial component that can lead to improved thermal comfort levels for the pedestrian population. High-rise buildings in modern urban settings cause high levels of turbulence that render discomfort to pedestrians. Additionally, a higher frequency of high-rise buildings in a particular region acts as a shield against wind flow to the lower buildings beyond them, resulting in higher levels of discomfort to users or residents. Studies on developing wind flow models using Computational Fluid Dynamics (CFD) simulations have revealed that improvements in interval-to-height ratios can result in improved wind flow within the simulation grid. However, the high value of, and demand for, land in urban areas renders expansion an impractical solution. Nonetheless, innovative use of architectural concepts can be envisaged to improve pedestrian comfort levels through improved wind permeability. This paper assesses the possibility of through-building gaps as a solution for improving pedestrian comfort levels.
Abstract:
Simulation models are widely employed to make probability forecasts of future conditions on seasonal to annual lead times. Added value in such forecasts is reflected in the information they add, either to purely empirical statistical models or to simpler simulation models. An evaluation of seasonal probability forecasts from the Development of a European Multimodel Ensemble system for seasonal to inTERannual prediction (DEMETER) and ENSEMBLES multi-model ensemble experiments is presented. Two particular regions are considered: Nino3.4 in the Pacific and the Main Development Region in the Atlantic; these regions were chosen before any spatial distribution of skill was examined. The ENSEMBLES models are found to have skill against the climatological distribution on seasonal time-scales. For models in ENSEMBLES that have a clearly defined predecessor model in DEMETER, the improvement from DEMETER to ENSEMBLES is discussed. Due to the long lead times of the forecasts and the evolution of observation technology, the forecast-outcome archive for seasonal forecast evaluation is small; arguably, evaluation data for seasonal forecasting will always be precious. Issues of information contamination from in-sample evaluation are discussed and impacts (both positive and negative) of variations in cross-validation protocol are demonstrated. Other difficulties due to the small forecast-outcome archive are identified. The claim that the multi-model ensemble provides a ‘better’ probability forecast than the best single model is examined and challenged. Significant forecast information beyond the climatological distribution is also demonstrated in a persistence probability forecast. The ENSEMBLES probability forecasts add significantly more information to empirical probability forecasts on seasonal time-scales than on decadal scales. Current operational forecasts might be enhanced by melding information from both simulation models and empirical models. Simulation models based on physical principles are sometimes expected, in principle, to outperform empirical models; direct comparison of their forecast skill provides information on progress toward that goal.
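The notion of 'adding information' to the climatological distribution can be made concrete with a scoring sketch. The snippet below, on synthetic forecasts and outcomes, uses the ignorance (logarithmic) score and reports skill as the mean score difference from climatology in bits; the deliberately small archive mirrors the evaluation constraint discussed above.

```python
# A minimal sketch of probability-forecast skill relative to climatology.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 40                                    # a deliberately small archive
truth = rng.normal(0.0, 1.0, n)           # climatology: standard normal
fc_mean = 0.6 * truth + rng.normal(0, 0.5, n)   # an informative forecast

# ignorance score: negative log2 of the density assigned to the outcome
ign_model = -np.log2(stats.norm.pdf(truth, loc=fc_mean, scale=0.8))
ign_clim = -np.log2(stats.norm.pdf(truth, loc=0.0, scale=1.0))
print(f"skill vs climatology: {np.mean(ign_clim - ign_model):.2f} bits/forecast")
```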
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) full physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space-Weather Modeling (CISM) which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square-error and correlation coefficients, indicate that the empirical coronal-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
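The contrast between point-by-point metrics and timing-offset errors can be demonstrated simply. In the sketch below (synthetic series with recurrent structure, not CISM output), a modelled series identical to the observations but shifted in time scores poorly point-by-point, while a lagged cross-correlation scan reveals that the structure itself is well captured.

```python
# A minimal sketch: timing offsets dominate point-by-point error measures.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(1000)
observed = 400 + 150 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 20, 1000)
modelled = np.roll(observed, 20)   # same structure, shifted by a timing error

mse = np.mean((observed - modelled) ** 2)
corr = np.corrcoef(observed, modelled)[0, 1]
print(f"MSE {mse:.0f}, zero-lag correlation {corr:.2f}")

# scan lags: a sharp peak off zero lag flags a timing offset, not bad physics
lags = range(-30, 31)
best = max(lags, key=lambda k: np.corrcoef(observed, np.roll(modelled, k))[0, 1])
print(f"best-match lag: {best} samples")
```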
Abstract:
The implications of polar cap expansions, contractions and movements for empirical models of high-latitude plasma convection are examined. Some of these models have been generated by directly averaging flow measurements from large numbers of satellite passes or radar scans; others have employed more complex means to combine data taken at different times into large-scale patterns of flow. In all cases, the models have implicitly adopted the assumption that the polar cap is in steady state: they have all characterized the ionospheric flow in terms of the prevailing conditions (e.g. the interplanetary magnetic field and/or some index of terrestrial magnetic activity) without allowance for their history. On long enough time scales, the polar cap is indeed in steady state but on time scales shorter than a few hours it is not and can oscillate in size and position. As a result, the method used to combine the data can influence the nature of the convection reversal boundary and the transpolar voltage in the derived model. This paper discusses a variety of effects due to time-dependence in relation to some ionospheric convection models which are widely applied. The effects are shown to be varied and to depend upon the procedure adopted to compile the model.
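The consequence of averaging over a moving boundary can be shown in a few lines. The sketch below uses an idealised flow profile, not a real convection model: snapshots of a sharp flow reversal whose latitude oscillates are averaged, and the averaged reversal comes out much broader than any instantaneous one.

```python
# A minimal sketch of how averaging smears an oscillating convection reversal.
import numpy as np

lat = np.linspace(60.0, 90.0, 301)

def flow(boundary):
    # idealised zonal flow with a sharp reversal at the polar-cap boundary
    return np.tanh((lat - boundary) / 0.5)

# boundary latitude oscillating by +/- 3 degrees over the sampling period
boundaries = 75.0 + 3.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 50))
averaged = np.mean([flow(b) for b in boundaries], axis=0)

def reversal_width(v):
    # latitudinal extent over which the flow is near its reversal
    return np.ptp(lat[np.abs(v) < 0.5])

print(f"instantaneous reversal width: {reversal_width(flow(75.0)):.1f} deg")
print(f"averaged reversal width:      {reversal_width(averaged):.1f} deg")
```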
Phosphorus dynamics and export in streams draining micro-catchments: Development of empirical models
Abstract:
Annual total phosphorus (TP) export data from 108 European micro-catchments were analyzed against descriptive catchment data on climate (runoff), soil types, catchment size, and land use. The best possible empirical model developed included runoff, proportion of agricultural land and catchment size as explanatory variables but with a low explanation of the variance in the dataset (R² = 0.37). Improved country specific empirical models could be developed in some cases. The best example was from Norway where an analysis of TP-export data from 12 predominantly agricultural micro-catchments revealed a relationship explaining 96% of the variance in TP-export. The explanatory variables were in this case soil-P status (P-AL), proportion of organic soil, and the export of suspended sediment. Another example is from Denmark where an empirical model was established for the basic annual average TP-export from 24 catchments with percentage sandy soils, percentage organic soils, runoff, and application of phosphorus in fertilizer and animal manure as explanatory variables (R² = 0.97).
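A regression of the form described is easy to reproduce in outline. The sketch below fabricates a 108-catchment dataset with runoff, proportion of agricultural land and catchment size as predictors (all units and coefficients invented) and fits the pooled empirical model.

```python
# A minimal sketch of the pooled TP-export regression, on fabricated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 108
df = pd.DataFrame({
    "runoff": rng.uniform(100, 1200, n),   # mm/yr, illustrative
    "agri": rng.uniform(0, 100, n),        # % agricultural land
    "area": rng.lognormal(1, 1, n),        # km^2
})
df["tp_export"] = (0.05 + 0.0004 * df.runoff + 0.004 * df.agri
                   - 0.01 * np.log(df.area) + rng.normal(0, 0.25, n))

model = smf.ols("tp_export ~ runoff + agri + np.log(area)", data=df).fit()
print(f"pooled model R^2 = {model.rsquared:.2f}")
```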
Abstract:
Across Europe, elevated phosphorus (P) concentrations in lowland rivers have made them particularly susceptible to eutrophication. This is compounded in southern and central UK by increasing pressures on water resources, which may be further enhanced by the potential effects of climate change. The EU Water Framework Directive requires an integrated approach to water resources management at the catchment scale and highlights the need for modelling tools that can distinguish relative contributions from multiple nutrient sources and are consistent with the information content of the available data. Two such models are introduced and evaluated within a stochastic framework using daily flow and total phosphorus concentrations recorded in a clay catchment typical of many areas of the lowland UK. Both models disaggregate empirical annual load estimates, derived from land use data, as a function of surface/near surface runoff, generated using a simple conceptual rainfall-runoff model. Estimates of the daily load from agricultural land, together with those from baseflow and point sources, feed into an in-stream routing algorithm. The first model assumes constant concentrations in runoff via surface/near surface pathways and incorporates an additional P store in the river-bed sediments, depleted above a critical discharge, to explicitly simulate resuspension. The second model, which is simpler, simulates P concentrations as a function of surface/near surface runoff, thus emphasising the influence of non-point source loads during flow peaks and mixing of baseflow and point sources during low flows. The temporal consistency of parameter estimates and thus the suitability of each approach is assessed dynamically following a new approach based on Monte-Carlo analysis.
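The disaggregation idea in the second model can be sketched directly: spread an empirical annual non-point load over days in proportion to surface/near-surface runoff, then add steady baseflow and point-source loads. All series and rates below are invented for illustration.

```python
# A minimal sketch of runoff-weighted disaggregation of an annual P load.
import numpy as np

rng = np.random.default_rng(7)
days = 365
runoff = rng.gamma(0.6, 1.5, days)       # mm/day, flashy synthetic series
annual_nps_load = 1200.0                 # kg TP/yr, from land use (assumed)

nps_daily = annual_nps_load * runoff / runoff.sum()  # runoff-weighted split
baseflow_load = np.full(days, 0.3)       # kg/day, assumed constant
point_load = np.full(days, 1.1)          # kg/day, e.g. a treatment works

total = nps_daily + baseflow_load + point_load
print(f"peak-day load {total.max():.1f} kg, non-point share "
      f"{nps_daily.sum() / total.sum():.0%} of the annual total")
```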
Abstract:
A comparison of the models of Vitti et al. (2000, J. Anim. Sci. 78, 2706-2712) and Fernandez (1995c, Livest. Prod. Sci. 41, 255-261) was carried out using two data sets on growing pigs as input. The two models compared were based on similar basic principles, although their aims and calculations differed. The Vitti model employs the rate:state formalism and describes phosphorus (P) flow between four pools representing P content in gut, blood, bone and soft tissue in growing goats. The Fernandez model describes flow and fractional recirculation between P pools in gut, blood and bone in growing pigs. The results from both models showed similar trends for P absorption from gut to blood and net retention in bone with increasing P intake, with the exception of the 65 kg results from Data Set 2 calculated using the Fernandez model. Endogenous loss from blood back to gut increased faster with increasing P intake in the Fernandez than in the Vitti model for Data Set 1. However, for Data Set 2, endogenous loss increased with increasing P intake using the Vitti model, but decreased when calculated using the Fernandez model. Incorporation of P into bone was not influenced by intake in the Fernandez model, while in the Vitti model there was an increasing trend. The Fernandez model produced a pattern of decreasing resorption in bone with increasing P intake for one of the data sets, which was not observed when using the Vitti model. The pigs maintained their P homeostasis in blood by regulation of P excretion in urine.
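A rate:state pool model in the spirit of the Vitti model can be sketched as a small ODE system: each flux is a rate constant times its source pool. All rate constants and initial pool sizes below are invented for illustration, not the published parameter values.

```python
# A minimal sketch of a four-pool rate:state phosphorus model, assumed rates.
import numpy as np
from scipy.integrate import solve_ivp

intake = 8.0  # g P/day entering the gut pool (illustrative)

def dPdt(t, y):
    gut, blood, bone, soft = y
    absorb = 0.60 * gut     # gut -> blood (absorption)
    endo = 0.05 * blood     # blood -> gut (endogenous loss)
    to_bone = 0.20 * blood  # blood -> bone (incorporation)
    resorb = 0.02 * bone    # bone -> blood (resorption)
    to_soft = 0.10 * blood  # blood -> soft tissue
    from_soft = 0.08 * soft # soft tissue -> blood
    faecal = 0.40 * gut     # unabsorbed P excreted in faeces
    urine = 0.03 * blood    # urinary excretion regulating homeostasis
    return [intake + endo - absorb - faecal,
            absorb + resorb + from_soft - endo - to_bone - to_soft - urine,
            to_bone - resorb,
            to_soft - from_soft]

sol = solve_ivp(dPdt, [0, 120], [2.0, 1.0, 50.0, 5.0], max_step=0.5)
print("pools after 120 days (g P):", np.round(sol.y[:, -1], 2))
```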