Abstract:
Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Quantifying model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework, is presented. The framework is applied to the Lugg catchment (1,077 km²), a tributary of the River Wye on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total) observations, available for a limited number of reaches, are used to assess the uncertainty and sensitivity of 44 model parameters identified as most important for discharge and phosphorus predictions. This study demonstrates that assuming spatially homogeneous parameters (with spatial heterogeneity represented only through fractional land-use areas) can achieve better model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets that simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there are insufficient data to support their many parameters.
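The GLUE step described above can be sketched compactly. The following is a minimal illustration, not the study's actual setup: `run_model` stands in for an INCA-P simulation and `sample_params` for draws from the 44 parameter priors, both hypothetical placeholders; only the Nash-Sutcliffe measure and the behavioural threshold of 0.3 come from the abstract.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency E: 1 is a perfect fit; E <= 0 means the
    model is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_screen(run_model, sample_params, obs, n_runs=10000, threshold=0.3, rng=None):
    """GLUE screening: Monte Carlo sample parameter sets and keep those
    whose simulation exceeds the behavioural likelihood threshold."""
    rng = np.random.default_rng(rng)
    behavioural = []
    for _ in range(n_runs):
        theta = sample_params(rng)   # draw one parameter set from the priors
        sim = run_model(theta)       # placeholder for an INCA-P run
        e = nash_sutcliffe(obs, sim)
        if e > threshold:            # behavioural cut-off used in the abstract
            behavioural.append((theta, e))
    return behavioural               # likelihoods later weight prediction bounds
```

In GLUE the retained likelihoods are normalised and used to weight the behavioural simulations into prediction quantiles; an empty behavioural set, as reported here for TRP, is itself the diagnostic result.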
Abstract:
Climate change in the 21st century is projected to intensify the global hydrological cycle, but there is substantial uncertainty in how this will affect freshwater availability. A relatively overlooked aspect of this uncertainty is how different methods of estimating potential evapotranspiration (PET) respond to a changing climate. Here we investigate the global response of six PET methods to a 2 °C rise in global mean temperature. All methods suggest an increase in PET under a warming climate; however, differences of over 100% in the PET climate-change signal are found between methods. Analysis of a precipitation/PET aridity index and of regional water surplus indicates that, for certain regions and GCMs, the choice of PET method can determine the direction of projected changes in future water resources. Method dependence of the PET climate-change signal is therefore an important source of uncertainty in projections of future freshwater availability.
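The method dependence of the PET signal is easy to reproduce with simple temperature-based formulations. The sketch below compares two standard methods (Hamon and Oudin; these particular methods and the forcing values are illustrative assumptions, not necessarily among the paper's six) under a +2 °C perturbation, and shows how the aridity index P/PET shifts even when precipitation is held fixed.

```python
import math

def pet_hamon(t_c, daylength_h):
    """Hamon PET (mm/day) from air temperature (degC) and daylength (h)."""
    es = 0.611 * math.exp(17.27 * t_c / (t_c + 237.3))  # sat. vapour pressure, kPa
    return 29.8 * daylength_h * es / (t_c + 273.2)

def pet_oudin(t_c, re_mj):
    """Oudin PET (mm/day) from temperature and extraterrestrial
    radiation re_mj (MJ m-2 day-1)."""
    return (re_mj / 2.45) * (t_c + 5.0) / 100.0 if t_c > -5.0 else 0.0

t0, dt = 15.0, 2.0           # baseline temperature and the +2 degC scenario
daylength, re = 14.0, 35.0   # illustrative mid-latitude summer forcing (assumed)

for name, fn, arg in [("Hamon", pet_hamon, daylength), ("Oudin", pet_oudin, re)]:
    p0, p1 = fn(t0, arg), fn(t0 + dt, arg)
    print(f"{name}: {p0:.2f} -> {p1:.2f} mm/day ({100 * (p1 / p0 - 1):+.1f}%)")

# The aridity index P/PET falls under warming even with unchanged precipitation
precip = 2.0  # mm/day, assumed
print(f"aridity index: {precip / pet_hamon(t0, daylength):.2f} -> "
      f"{precip / pet_hamon(t0 + dt, daylength):.2f}")
```

Even these two closely related temperature-based methods give different percentage responses to the same warming; across six methods, including radiation- and combination-based ones, the paper finds such differences can exceed 100% and change the direction of projected water resources.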
Abstract:
In the UK, the recycling of sewage sludge to land is expected to double by 2006, but the security of this route is threatened by environmental concerns and health scares. Strategic investment is needed to ensure sustainable and secure sludge-recycling outlets. At present, the security of this landbank for sludge recycling is determined by legislation on nutrient applications to land rather than on potentially toxic elements (PTEs), especially the environmental risk linked to soil phosphorus (P) saturation. We believe that not all land carries an equal risk of delivering applied nutrients to receiving waters. We are currently investigating whether nutrient loss can be minimised by applying sludge to land outside Critical Source Areas (CSAs), regardless of soil P Index status. Research is under way to develop a predictive, spatially sensitive, semi-distributed model of critical thresholds for sludge application that goes beyond traditional "end-of-pipe" or "edge-of-field" modelling to include hydrological flow paths and delivery mechanisms from non-point sources to receiving waters at the catchment scale.
Abstract:
A new dynamic water quality model, Q2, has recently been developed, capable of simulating large branched river systems. This paper describes the application of a generalized sensitivity analysis (GSA) to Q2 for single reaches of the River Thames in southern England. Focusing on the simulation of dissolved oxygen (DO), since this may be regarded as a proxy for the overall health of a river, the GSA is used to identify the key parameters controlling model behavior and to provide a probabilistic procedure for model calibration. It is shown that, in the River Thames at least, once approximate parameter values have been estimated it is more important to obtain high-quality forcing functions than to refine the parameter estimates further. Furthermore, there is a need to ensure reasonable simulation of a range of water quality determinands, since focusing only on DO increases predictive uncertainty in the DO simulations. Although the Q2 model has been applied here to the River Thames, it has broad utility for evaluating other river systems in Europe and around the world.
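In most water quality applications, a GSA of this kind follows the regional sensitivity analysis of Hornberger and Spear: Monte Carlo runs are split into behavioural and non-behavioural sets and each parameter's marginal distributions are compared across the split. A minimal sketch follows; the scores, threshold, and use of the Kolmogorov-Smirnov distance are generic choices, not details taken from the Q2 paper.

```python
import numpy as np
from scipy.stats import ks_2samp

def regional_sensitivity(params, scores, threshold):
    """Hornberger-Spear style generalized sensitivity analysis.

    params    : (n_runs, n_params) Monte Carlo parameter sets
    scores    : (n_runs,) goodness of fit of each run (e.g. fit to observed DO)
    threshold : score above which a run counts as behavioural

    Parameters whose behavioural and non-behavioural marginal distributions
    differ most (largest KS distance) are the ones controlling model behavior.
    """
    behavioural = scores >= threshold
    ks = [ks_2samp(params[behavioural, j], params[~behavioural, j]).statistic
          for j in range(params.shape[1])]
    return np.argsort(ks)[::-1], ks   # parameter ranking, most influential first
```

The abstract's finding that forcing quality matters more than further parameter refinement is consistent with most parameters showing small KS distances once approximate values are fixed.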
Abstract:
A multi-scale framework for decision support is presented that uses a combination of experiments, models, communication, education and decision-support tools to arrive at a realistic strategy to minimise diffuse pollution. Effective partnerships between researchers and stakeholders play a key part in the successful implementation of this strategy. The Decision Support Matrix (DSM) is introduced as a set of visualisations that can be used at all scales, both to inform decision making and as a communication tool in stakeholder workshops. A demonstration farm is presented and one of its fields is taken as a case study. Hydrological and nutrient flow-path models are used for event-based simulation (TOPCAT), catchment-scale modelling (INCA) and field-scale flow visualisation (TopManage). One of the DSMs, the Phosphorus Export Risk Matrix (PERM), is discussed in detail. The PERM was developed iteratively as a point of discussion in stakeholder workshops, as a decision-support and education tool. The resulting interactive PERM contains a set of questions and proposed remediation measures that reflect both expert and local knowledge. Education and visualisation tools such as GIS, risk indicators, TopManage and the PERM are found to be invaluable in communicating improved farming practice to stakeholders.
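As a rough illustration of the matrix idea, the sketch below encodes a two-axis risk lookup. The axes, classes and remediation texts are invented for illustration; the published PERM's questions and measures were derived from expert and local knowledge in the workshops, not from this sketch.

```python
LEVELS = ["low", "medium", "high"]

# Illustrative remediation advice keyed by overall risk class (hypothetical)
REMEDIATION = {
    "low": "continue current practice; keep monitoring the soil P index",
    "medium": "avoid applications in wet periods; maintain buffer strips",
    "high": "avoid spreading here; fence watercourses; review stocking density",
}

def perm_lookup(source_risk, transport_risk):
    """Combine a nutrient-source class with a transport/connectivity class.

    Taking the worse of the two axes is a conservative screening choice:
    P export needs both a source and a pathway, but a high score on either
    axis flags the field for attention."""
    overall = LEVELS[max(LEVELS.index(source_risk), LEVELS.index(transport_risk))]
    return overall, REMEDIATION[overall]

print(perm_lookup("medium", "high"))
# ('high', 'avoid spreading here; fence watercourses; review stocking density')
```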
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe. Implementing these policies involves large expenditures, and it is reasonable for policymakers to ask what degree of certainty can be attached to the underlying critical load and exceedance estimates. This paper reviews the literature on studies that attempt to estimate the uncertainty attached to critical loads. Critical load models and uncertainty analysis are briefly outlined. Most studies have used some form of Monte Carlo analysis to investigate how uncertainties in the input parameters propagate through to uncertainties in critical loads. Though the input parameters are often poorly known, the critical load uncertainties are typically surprisingly small, because of a "compensation of errors" mechanism. These results depend on the quality of the uncertainty estimates for the input parameters, and a "pedigree" classification for these is proposed. Sensitivity analysis shows that some input parameters influence critical load uncertainty more than others, but there have not been enough studies to form a general picture. Methods for dealing with spatial variation are briefly discussed. Applying alternative models to the same site, or modifying existing models, can lead to widely differing critical loads, indicating that research into the underlying science needs to continue.
Abstract:
This paper reports an uncertainty analysis of critical loads for acid deposition at a site in southern England, using the Steady State Mass Balance model. The uncertainty bounds, distribution type and correlation structure for each of the 18 input parameters were considered explicitly, and the overall uncertainty was estimated by Monte Carlo methods. Estimates of deposition uncertainty were made from measured data and an atmospheric dispersion model, so the uncertainty in exceedance could also be calculated. The uncertainties of the calculated critical loads were generally much lower than those of the input parameters due to a "compensation of errors" mechanism: coefficients of variation ranged from 13% for CLmax(N) to 37% for CL(A). With 1990 deposition, the probability that the critical load was exceeded was > 0.99; reducing this probability to 0.50 requires a 63% reduction in deposition, and to 0.05, an 82% reduction. With 1997 deposition, which was lower than in 1990, exceedance probabilities declined and uncertainties in exceedance narrowed, as deposition uncertainty had less effect. The parameters contributing most to the uncertainty in critical loads were weathering rates, base cation uptake rates, and the choice of critical chemical value, indicating possible research priorities. However, the different critical load parameters were to some extent sensitive to different input parameters. The application of such probabilistic results to environmental regulation is discussed.
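A minimal Monte Carlo propagation in the spirit of the analysis, with placeholder distributions: the three inputs and their parameters below are invented stand-ins (the study propagated 18 explicitly specified inputs through the full Steady State Mass Balance model), but the outputs, a coefficient of variation for the critical load and an exceedance probability against uncertain deposition, mirror the quantities reported.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# Placeholder input distributions in keq ha-1 yr-1 (values are illustrative,
# not the paper's calibrated inputs)
bc_w = rng.lognormal(mean=np.log(0.8), sigma=0.3, size=n)  # base cation weathering
bc_u = rng.normal(0.2, 0.05, size=n)                       # base cation uptake
acid_le_crit = rng.normal(0.3, 0.1, size=n)                # allowed acid leaching

# Simplified stand-in for the mass balance critical load of acidity;
# the full SSMB includes further deposition and leaching terms
cl_a = bc_w - bc_u + acid_le_crit

cov = cl_a.std() / cl_a.mean()
print(f"CL(A): mean {cl_a.mean():.2f} keq/ha/yr, CoV {100 * cov:.0f}%")

# Exceedance probability: compare against an uncertain deposition estimate
deposition = rng.normal(1.5, 0.3, size=n)
print(f"P(critical load exceeded) = {np.mean(deposition > cl_a):.2f}")
```

The "compensation of errors" effect appears naturally here: for independent inputs, standard deviations combine in quadrature while means combine linearly, so the relative spread of the critical load is typically below that of the widest input.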
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe and elsewhere. They are assessed by several elaborate and ingenious models, each of which requires many parameters and has to be applied on a spatially distributed basis. Often the values of the input parameters are poorly known, calling into question the validity of the calculated critical loads. This paper attempts to quantify the uncertainty in critical loads due to this "parameter uncertainty", using examples from the UK. Models used for calculating critical loads for deposition of acidity and nitrogen in forest and heathland ecosystems were tested at four contrasting sites. Uncertainty was assessed by Monte Carlo methods. Each input parameter or variable was assigned a value, range and distribution in as objective a fashion as possible. Each model was run 5000 times at each site using parameters sampled from these input distributions, and output distributions of various critical load parameters were calculated. The results were surprising: confidence limits of the calculated critical loads were typically considerably narrower than those of most of the input parameters, perhaps because of a "compensation of errors" mechanism. The range of possible critical load values at a given site is nevertheless rather wide, and the tails of the distributions are typically long. The deposition reductions required for a high level of confidence that the critical load is not exceeded are thus likely to be large, and the implication for pollutant regulation is that requiring a high probability of non-exceedance is likely to carry high costs. The relative contribution of the input variables to critical load uncertainty varied from site to site: any input variable could be important, so it was not possible to identify particular variables as targets for research into narrowing uncertainties. Sites with a number of good measurements of input parameters had lower uncertainties, so in situ measurement could be a valuable way of reducing critical load uncertainty at particularly valuable or disputed sites. From a restricted number of samples, uncertainties in heathland critical loads appear comparable to those of coniferous forest, and nutrient nitrogen critical loads to those of acidity. It was important to include correlations between input variables in the Monte Carlo analysis, but the choice of statistical distribution type was of lesser importance. Overall, the analysis provided objective support for the continued use of critical loads in policy development.
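The abstract notes that including correlations between inputs mattered. One standard way to induce such correlations while preserving each input's marginal distribution is a Gaussian copula; the sketch below is a generic illustration (the particular marginals, the 0.6 correlation, and the variable names are assumptions, not the paper's setup), using the paper's 5000-sample run size.

```python
import numpy as np
from scipy.stats import norm, lognorm, uniform

def correlated_samples(n, corr, marginals, seed=None):
    """Gaussian-copula sampling: draw correlated standard normals, map them
    to uniforms with the normal CDF, then invert each marginal. Marginal
    distributions are preserved; an approximate rank correlation is induced."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(marginals)), corr, size=n)
    u = norm.cdf(z)
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

# Two inputs assumed positively correlated (r = 0.6), e.g. weathering rate
# and base cation deposition; distributions are placeholders
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
marginals = [lognorm(s=0.3, scale=0.8), uniform(loc=0.1, scale=0.4)]

x = correlated_samples(5000, corr, marginals, seed=1)  # 5000 runs, as in the study
print(np.corrcoef(x, rowvar=False))
```

Ignoring a positive correlation between inputs that enter the critical load with the same sign understates the output variance, and with opposite signs overstates it, which is why the correlation structure affects the width of the critical load distribution.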
Abstract:
Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and lidar observe narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation that also provides an analytical estimate of the sampling error. The method is applied to test the sensitivity of this error to sampling characteristics such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations from the spaceborne lidar of the NASA LITE mission. Results suggest that the variance of the cloud fraction estimate is greatest for medium cloud cover and least when conditions are mostly cloudy or clear; however, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals: shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with an increasing number of transects. A sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction must therefore take into account the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and lidar missions such as NASA's CALIPSO and CloudSat.
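A stripped-down version of the idea can be written with a beta-binomial posterior. The sketch below is a simplification under an independence assumption, with an ad hoc effective-sample-size adjustment; the paper's analytical estimator properly accounts for spatial correlation through the cloud and clear-interval length distributions, which the placeholder `n_eff` only gestures at.

```python
def cloud_fraction_posterior(k, n, n_eff=None):
    """Posterior mean and standard deviation of cloud fraction, given k cloudy
    pixels out of n along the observed transects, with a uniform Beta(1,1)
    prior. Neighbouring pixels are spatially correlated, so n overstates the
    information content; n_eff (roughly, transect length divided by a
    decorrelation length set by the mean cloud and clear-interval lengths)
    can be supplied instead. A simplified stand-in for the paper's estimator."""
    if n_eff is not None:              # rescale counts to effective samples
        k, n = k * n_eff / n, n_eff
    a, b = k + 1.0, n - k + 1.0        # Beta posterior parameters
    mean = a / (a + b)
    sd = (a * b / ((a + b) ** 2 * (a + b + 1.0))) ** 0.5
    return mean, sd

# Same observed fraction; longer clouds/gaps mean fewer independent samples
print(cloud_fraction_posterior(500, 1000))            # pixels treated as independent
print(cloud_fraction_posterior(500, 1000, n_eff=50))  # ~50 independent segments
```

This reproduces two of the abstract's qualitative findings: the posterior spread peaks at 50% cloud fraction, and shorter cloud and clear-interval lengths (more independent segments per transect) shrink it.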