Abstract:
This paper investigates the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models.
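The experiment described above, propagating input uncertainty through an appraisal model and summarising the spread of its outputs, can be sketched with a small Monte Carlo loop. The residual-valuation formula and the input distributions below are hypothetical placeholders, not the scheme or variance assumptions used in the paper.

```python
import random
import statistics

def simple_residual(gdv, cost_ratio, profit_ratio):
    """Simplistic one-line residual model: land value = GDV - costs - profit."""
    return gdv * (1.0 - cost_ratio - profit_ratio)

def mc_output_stats(model, samplers, n=20_000, seed=42):
    """Draw inputs from their (subjective) distributions, run the model,
    and summarise the spread of the outputs."""
    rng = random.Random(seed)
    outputs = [model(*(draw(rng) for draw in samplers)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical input uncertainty for a scheme with ~10m gross development value
samplers = [
    lambda r: r.gauss(10_000_000, 1_000_000),  # gross development value
    lambda r: r.uniform(0.55, 0.65),           # all-in cost ratio
    lambda r: r.uniform(0.15, 0.20),           # developer's profit ratio
]
mean_value, value_sd = mc_output_stats(simple_residual, samplers)
```

Repeating the run with a more disaggregated model (separate build costs, fees and finance) and comparing the resulting output standard deviations mirrors the paper's test of whether model complexity materially changes output variance.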
Abstract:
The evaluation of the quality and usefulness of climate modeling systems is dependent upon an assessment of both the limited predictability of the climate system and the uncertainties stemming from model formulation. In this study a methodology is presented that is suited to assess the performance of a regional climate model (RCM), based on its ability to represent the natural interannual variability on monthly and seasonal timescales. The methodology involves carrying out multiyear ensemble simulations (to assess the predictability bounds within which the model can be evaluated against observations) and multiyear sensitivity experiments using different model formulations (to assess the model uncertainty). As an example application, experiments driven by assimilated lateral boundary conditions and sea surface temperatures from the ECMWF Reanalysis Project (ERA-15, 1979–1993) were conducted. While the ensemble experiment demonstrates that the predictability of the regional climate varies strongly between different seasons and regions, being weakest during the summer and over continental regions, important sensitivities of the modeling system to parameterization choices are uncovered. In particular, compensating mechanisms related to the long-term representation of the water cycle are revealed, in which summer dry and hot conditions at the surface, resulting from insufficient evaporation, can persist despite insufficient net solar radiation (a result of unrealistic cloud-radiative feedbacks).
Abstract:
There is large uncertainty about the magnitude of warming and how rainfall patterns will change in response to any given scenario of future changes in atmospheric composition and land use. The models used for future climate projections were developed and calibrated using climate observations from the past 40 years. The geologic record of environmental responses to climate changes provides a unique opportunity to test model performance outside this limited climate range. Evaluation of model simulations against palaeodata shows that models reproduce the direction and large-scale patterns of past changes in climate, but tend to underestimate the magnitude of regional changes. As part of the effort to reduce model-related uncertainty and produce more reliable estimates of twenty-first century climate, the Palaeoclimate Modelling Intercomparison Project is systematically applying palaeoevaluation techniques to simulations of the past run with the models used to make future projections. This evaluation will provide assessments of model performance, including whether a model is sufficiently sensitive to changes in atmospheric composition, as well as providing estimates of the strength of biosphere and other feedbacks that could amplify the model response to these changes and modify the characteristics of climate variability.
Abstract:
In order to evaluate the future potential benefits of emission regulation on regional air quality, while taking into account the effects of climate change, off-line air quality projection simulations are driven using weather forcing taken from regional climate models. These regional models are themselves driven by simulations carried out using global climate models (GCMs) and economic scenarios. Uncertainties and biases in climate models introduce an additional “climate modeling” source of uncertainty that adds to all other types of uncertainty in air quality modeling for policy evaluation. In this article we evaluate the changes in air quality-related weather variables induced by replacing reanalysis-forced regional climate simulations with GCM-forced ones. As an example we use GCM simulations carried out in the framework of the ERA-Interim programme and of the CMIP5 project using the Institut Pierre-Simon Laplace climate model (IPSLcm), driving regional simulations performed in the framework of the EURO-CORDEX programme. In summer, we found compensating deficiencies acting on photochemistry in the GCM-driven weather: a positive bias in short-wave radiation, a negative bias in wind speed, too many stagnant episodes, and a negative temperature bias. In winter, air quality is mostly driven by dispersion, and we could not identify significant differences in either wind or planetary boundary layer height statistics between GCM-driven and reanalysis-driven regional simulations. However, precipitation appears largely overestimated in GCM-driven simulations, which could significantly affect the simulation of aerosol concentrations. The identification of these biases will help in interpreting the results of future air quality simulations using these data. Despite these biases, we conclude that the identified differences should not lead to major difficulties in using GCM-driven regional climate simulations for air quality projections.
Abstract:
An urban energy and water balance model is presented which uses a small number of commonly measured meteorological variables and information about the surface cover. Rates of evaporation-interception for a single layer with multiple surface types (paved, buildings, coniferous trees and/or shrubs, deciduous trees and/or shrubs, irrigated grass, non-irrigated grass and water) are calculated. Below each surface type, except water, there is a single soil layer. At each time step the moisture state of each surface is calculated. Horizontal water movements at the surface and in the soil are incorporated. Particular attention is given to the surface conductance used to model evaporation and its parameters. The model is tested against direct flux measurements carried out over a number of years in Vancouver, Canada and Los Angeles, USA. At all measurement sites the model is able to simulate the net all-wave radiation and turbulent sensible and latent heat well (RMSE = 25–47 W m−2, 30–64 and 20–56 W m−2, respectively). The model reproduces the diurnal cycle of the turbulent fluxes but typically underestimates latent heat flux and overestimates sensible heat flux during the daytime. The model tracks measured surface wetness and simulates the variations in soil moisture content. It is able to respond correctly to short-term events as well as annual changes. The largest uncertainty relates to the determination of surface conductance. The model has the potential to be used for multiple applications; for example, to predict effects of regulation on urban water use, landscaping and planning scenarios, or to assess climate mitigation strategies.
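The skill scores quoted above (e.g. RMSE of 25–47 W m−2 for net all-wave radiation) come from comparing modelled fluxes against direct measurements. A minimal sketch of that comparison, with made-up flux values standing in for the model output and tower observations:

```python
import math

def rmse(modelled, observed):
    """Root-mean-square error between paired model output and observations."""
    if len(modelled) != len(observed) or not modelled:
        raise ValueError("series must be non-empty and of equal length")
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(modelled, observed))
                     / len(modelled))

# Hypothetical hourly net all-wave radiation (W m^-2): model vs. flux tower
q_star_model = [410.0, 380.0, 120.0, -35.0]
q_star_obs = [400.0, 395.0, 140.0, -30.0]
error = rmse(q_star_model, q_star_obs)
```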
Abstract:
Earth system models are increasing in complexity and incorporating more processes than their predecessors, making them important tools for studying the global carbon cycle. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes, with coupled climate-carbon cycle models that represent land-use change simulating total land carbon stores by 2100 that vary by as much as 600 Pg C given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous model evaluation methodologies. Here we assess the state-of-the-art with respect to evaluation of Earth system models, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeo data and (ii) metrics for evaluation, and discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute towards the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but it is also a challenge, as more knowledge about data uncertainties is required in order to determine robust evaluation methodologies that move the field of ESM evaluation from a "beauty contest" towards the development of useful constraints on model behaviour.
Abstract:
Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.
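One of the strategies named above, emergent constraints, exploits an across-ensemble relationship between an observable quantity and an uncertain model response, evaluated at the real-world observation. A minimal least-squares sketch; the ensemble values below are invented for illustration, not taken from any model intercomparison:

```python
def emergent_constraint(observable, response, obs_value):
    """Fit a straight line through the model ensemble's (observable, response)
    pairs and read off the response implied by the observed value."""
    n = len(observable)
    mx = sum(observable) / n
    my = sum(response) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(observable, response))
             / sum((x - mx) ** 2 for x in observable))
    intercept = my - slope * mx
    return intercept + slope * obs_value

# Hypothetical ensemble: (present-day observable, projected change) per model
observable = [1.0, 1.5, 2.0, 2.5, 3.0]
response = [2.1, 3.0, 4.2, 4.9, 6.1]
constrained = emergent_constraint(observable, response, obs_value=1.8)
```

Because the constraint is only as good as the across-model relationship and the observational uncertainty, it complements rather than replaces the process-level evaluation of individual models discussed in the abstract.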
Abstract:
The correlation between the coronal source flux F_{S} and the total solar irradiance I_{TS} is re-evaluated in the light of an additional 5 years' data from the rising phase of solar cycle 23 and also by using cosmic ray fluxes detected at Earth. Tests on monthly averages show that the correlation with F_{S} deduced from the interplanetary magnetic field (correlation coefficient, r = 0.62) is highly significant (99.999%), but that there is insufficient data for the higher correlation with annual means (r = 0.80) to be considered significant. Anti-correlations between I_{TS} and cosmic ray fluxes are found in monthly data for all stations and geomagnetic rigidity cut-offs (r ranging from −0.63 to −0.74) and these have significance levels between 85% and 98%. In all cases, the fit is poorest for the earliest data (i.e., prior to 1982). Excluding these data improves the anti-correlation with cosmic rays to r = −0.93 for one-year running means. Both the interplanetary magnetic field data and the cosmic ray fluxes indicate that the total solar irradiance lags behind the open solar flux with a delay that is estimated to have an optimum value of 2.8 months (and is within the uncertainty range 0.8–8.0 months at the 90% level).
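The lag estimate in the final sentence amounts to scanning correlation coefficients over candidate delays and picking the strongest. A stdlib-only sketch of that delay search on synthetic monthly series (the sinusoidal data are illustrative, not the solar records):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def best_lag(driver, response, max_lag):
    """Find the delay (in samples) at which the response correlates most
    strongly with the driver, as when irradiance lags the open solar flux."""
    best_l, best_r = 0, pearson(driver, response)
    for lag in range(1, max_lag + 1):
        r = pearson(driver[:len(driver) - lag], response[lag:])
        if abs(r) > abs(best_r):
            best_l, best_r = lag, r
    return best_l, best_r

# Synthetic monthly driver and a response delayed by three months
driver = [math.sin(0.3 * t) for t in range(120)]
response = [math.sin(0.3 * (t - 3)) for t in range(120)]
lag, r = best_lag(driver, response, max_lag=12)
```

In practice one would also bootstrap or otherwise resample to attach an uncertainty range to the optimum lag, as the paper's 0.8–8.0 month interval does.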
Abstract:
Assessments concerning the effects of climate change, water resource availability and water deprivation in West Africa have not frequently considered the positive contribution to be derived from collecting and reusing water for domestic purposes. Where the originating water is taken from a clean water source and has been used the first time for washing or bathing, this water is commonly called “greywater”. Greywater is a prolific resource that is generated wherever people live. Treated greywater can be used for domestic cleaning, for flushing toilets where appropriate, for washing cars, sometimes for watering kitchen gardens, and for clothes washing prior to rinsing. Therefore, a large theoretical potential exists to increase total water resource availability if greywater were to be widely reused. Locally treated greywater reduces the distribution network requirement, lowers construction effort and cost and, wherever possible, minimises the associated carbon footprint. Such locally treated greywater offers significant practical opportunities for increasing the total available water resources at a local level. The reuse of treated greywater is one important action that will help to mitigate the declining availability of clean water supplies in some areas, and the mitigation expected to be required in future aligns well with WHO/UNICEF (2012) aspirations. The evaluation of potential opportunities for prioritising greywater systems to support water reuse takes into account the availability of water resources, water use indicators and published estimates in order to understand typical patterns of water demand. The approach supports the acquisition of knowledge about local conditions, enabling capacity building for greywater reuse, an understanding of the systems most likely to encourage greywater reuse, and practices and future actions to stimulate greywater infrastructure planning, design and implementation.
Although reuse might be considered to increase the uncertainty of achieving a specified quality of the water supply, robust methods and technologies are available for local treatment. Resource strategies for greywater reuse have the potential to consistently improve water efficiency and availability in water impoverished and water stressed regions of Ghana and West Africa. Untreated greywater is referred to as “greywater”; treated greywater is referred to as “treated greywater” in this paper.
Abstract:
This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we only explore the most accessible version, rejection-ABC. Rejection-ABC involves running models a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although ABC is well established in some fields, whether it will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. The parameter values accepted by ABC produced slightly better fits than the literature values did. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the seven unnarrowed parameters, ABC revealed that three were correlated with other parameters, while the remaining four were not estimable given the data available. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version which removed the earthworm’s movement and much of the energy budget. We show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs.
We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM’s structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
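The rejection-ABC recipe described above, drawing parameters from their priors, simulating, and keeping the runs closest to the data, fits in a few lines. The one-parameter toy model below is purely illustrative, not the earthworm IBM:

```python
import random
import statistics

def rejection_abc(simulate, priors, observed, n_sims=5000,
                  accept_frac=0.01, seed=0):
    """Rejection ABC: sample parameter vectors from their priors, run the
    model, and retain the fraction of runs closest to the observation."""
    rng = random.Random(seed)
    runs = []
    for _ in range(n_sims):
        theta = [draw(rng) for draw in priors]
        distance = abs(simulate(theta, rng) - observed)
        runs.append((distance, theta))
    runs.sort(key=lambda run: run[0])
    n_keep = max(1, int(n_sims * accept_frac))
    return [theta for _, theta in runs[:n_keep]]  # approximate posterior sample

# Toy model: one growth-rate-like parameter plus observation noise
posterior = rejection_abc(
    simulate=lambda theta, rng: theta[0] + rng.gauss(0.0, 0.1),
    priors=[lambda r: r.uniform(0.0, 10.0)],
    observed=5.0,
)
posterior_mean = statistics.mean(theta[0] for theta in posterior)
```

Narrowing of a parameter shows up as a posterior interval much tighter than its prior; parameters the data cannot constrain keep near-uniform posteriors, as the paper found for four of the fourteen.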
Abstract:
Structural differences among models account for much of the uncertainty in projected climate changes, at least until the mid-twenty-first century. Recent observations encompass too limited a range of climate variability to provide a robust test of the ability to simulate climate changes. Past climate changes provide a unique opportunity for out-of-sample evaluation of model performance. Palaeo-evaluation has shown that the large-scale changes seen in twenty-first-century projections, including enhanced land–sea temperature contrast, latitudinal amplification, changes in temperature seasonality and scaling of precipitation with temperature, are likely to be realistic. Although models generally simulate changes in large-scale circulation sufficiently well to shift regional climates in the right direction, they often do not predict the correct magnitude of these changes. Differences in performance are only weakly related to modern-day biases or climate sensitivity, and more sophisticated models are not better at simulating climate changes. Although models correctly capture the broad patterns of climate change, improvements are required to produce reliable regional projections.
Abstract:
Quantitative palaeoclimate reconstructions are widely used to evaluate climate model performance. Here, as part of an effort to provide such a data set for Australia, we examine the impact of analytical decisions and sampling assumptions on modern-analogue reconstructions using a continent-wide pollen data set. There is a high degree of correlation between temperature variables in the modern climate of Australia, but there is sufficient orthogonality in the variations of precipitation, summer and winter temperature and plant-available moisture to allow independent reconstructions of these four variables to be made. The method of analogue selection does not affect the reconstructions, although bootstrap resampling provides a more reliable technique for obtaining robust measures of uncertainty. The number of analogues used affects the quality of the reconstructions: the most robust reconstructions are obtained using 5 analogues. The quality of reconstructions based on post-1850 CE pollen samples differs little from that of reconstructions using samples from between 1450 and 1849 CE, showing that post-settlement European modification of vegetation has no impact on the fidelity of the reconstructions, although it substantially increases the availability of potential analogues. Reconstructions based on core top samples are more realistic than those using surface samples, but restricting the data set to core top samples would substantially reduce the number of available analogues and therefore increase the uncertainty of the reconstructions. Spatial and/or temporal averaging of pollen assemblages prior to analysis negatively affects the subsequent reconstructions for some variables and increases the associated uncertainties. In addition, the quality of the reconstructions is affected by the degree of spatial smoothing of the original climate data, with the best reconstructions obtained using climate data from a 0.5° resolution grid, which corresponds to the typical size of the pollen catchment.
This study provides a methodology that can be used to provide reliable palaeoclimate reconstructions for Australia, which will fill in a major gap in the data sets used to evaluate climate models.
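The modern-analogue step itself, a dissimilarity metric, the 5 closest analogues, and bootstrap uncertainty, can be sketched briefly. The squared-chord distance is a common choice for pollen data, but the metric, the toy two-taxon training set and the 90% interval below are illustrative assumptions, not the paper's exact configuration:

```python
import math
import random

def squared_chord(p, q):
    """Dissimilarity between two pollen assemblages given as proportions."""
    return sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

def analogue_reconstruct(fossil, modern, climate, k=5):
    """Reconstructed climate = mean climate of the k most similar
    modern pollen assemblages."""
    ranked = sorted(range(len(modern)),
                    key=lambda i: squared_chord(fossil, modern[i]))
    return sum(climate[i] for i in ranked[:k]) / k

def bootstrap_ci(fossil, modern, climate, k=5, n_boot=500, seed=1):
    """Bootstrap-resample the modern training set to obtain a rough
    90% uncertainty interval on the reconstruction."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        sample = [rng.choice(range(len(modern))) for _ in modern]
        estimates.append(analogue_reconstruct(
            fossil, [modern[i] for i in sample],
            [climate[i] for i in sample], k))
    estimates.sort()
    return estimates[int(0.05 * n_boot)], estimates[int(0.95 * n_boot) - 1]

# Toy training set: two pollen taxa whose balance tracks temperature
modern = [[t / 10, 1 - t / 10] for t in range(11)]
temperature = list(range(11))  # hypothetical, in °C
estimate = analogue_reconstruct([0.5, 0.5], modern, temperature, k=5)
low, high = bootstrap_ci([0.5, 0.5], modern, temperature)
```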
Abstract:
Despite the importance of dust aerosol in the Earth system, state-of-the-art models show a large spread in North African dust emission. This study presents a systematic evaluation of dust-emitting winds in 30 years of the historical model simulation with the UK Met Office Earth-system model HadGEM2-ES for the Coupled Model Intercomparison Project Phase 5. Isolating the effect of winds on dust emission and using an automated detection for nocturnal low-level jets (NLLJs) allow an in-depth evaluation of the model performance for dust emission from a meteorological perspective. The findings highlight that NLLJs are a key driver for dust emission in HadGEM2-ES in terms of occurrence frequency and strength. The annually and spatially averaged occurrence frequency of NLLJs is similar in HadGEM2-ES and ERA-Interim from the European Centre for Medium-Range Weather Forecasts. Compared to ERA-Interim, a stronger pressure ridge over northern Africa in winter and the southward displaced heat low in summer result in differences in location and strength of NLLJs. In particular, the larger geostrophic winds associated with the stronger ridge have a strengthening effect on NLLJs over parts of West Africa in winter. Stronger NLLJs in summer may instead result from the artificial increase of the mixing coefficient under stable stratification, an increase that is weaker in HadGEM2-ES. NLLJs in the Bodélé Depression are affected by stronger synoptic-scale pressure gradients in HadGEM2-ES. Wintertime geostrophic winds can even be so strong that the associated vertical wind shear prevents the formation of NLLJs. These results call for further model improvements in the synoptic-scale dynamics and the physical parametrization of the nocturnal stable boundary layer to better represent dust-emitting processes in the atmospheric model. The new approach could be used for identifying systematic behavior in other models with respect to meteorological processes for dust emission.
This would help to improve dust emission simulations and contribute to decreasing the currently large uncertainty in climate change projections with respect to dust aerosol.
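The study's automated NLLJ detection is not specified in the abstract; a plausible minimal criterion, a wind-speed maximum in the lowest levels that clearly exceeds the speeds above it, might look like the sketch below. All thresholds are invented for illustration and are not the study's actual detection parameters.

```python
def detect_nllj(speeds, heights, max_core_height=500.0,
                min_core_speed=8.0, min_falloff=0.3):
    """Flag a nocturnal low-level jet in a single wind profile.

    speeds: wind speed (m/s) per level; heights: level height (m AGL).
    Returns the jet core height if one is detected, otherwise None."""
    core = max(range(len(speeds)), key=lambda i: speeds[i])
    if heights[core] > max_core_height or speeds[core] < min_core_speed:
        return None
    above = [s for s, h in zip(speeds, heights) if h > heights[core]]
    if not above:
        return None
    # Require a clear fractional decrease in speed above the core
    if (speeds[core] - min(above)) / speeds[core] < min_falloff:
        return None
    return heights[core]

# Hypothetical early-morning profile with a jet core near 300 m
heights = [50.0, 150.0, 300.0, 600.0, 1000.0, 1500.0]
speeds = [4.0, 9.0, 12.0, 7.0, 6.5, 8.0]
core_height = detect_nllj(speeds, heights)
```

Applying such a detector level-by-level to model and reanalysis profiles yields the occurrence-frequency and strength statistics the evaluation compares.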
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association study (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to correct the bias in genetic variance component estimation and to gain power in QTL mapping in terms of precision. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole genome models were shown to have good performance in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of mean, which validated the idea of variance-controlling genes. The work in this thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
Abstract:
Due to the increase in water demand and hydropower energy, it is becoming more important to operate hydraulic structures efficiently while sustaining multiple demands. In particular, companies, governmental agencies and consultancies require effective, practical integrated tools and decision support frameworks to operate reservoirs, cascades of run-of-river plants and related elements such as canals, by merging hydrological and reservoir simulation/optimization models with various numerical weather predictions, radar and satellite data. Model performance is highly dependent on the streamflow forecast, its associated uncertainty and how that uncertainty is considered in decision making. Whereas deterministic weather predictions and their corresponding streamflow forecasts restrict the manager to a single deterministic trajectory, probabilistic forecasts can be a key solution by including uncertainty in flow forecast scenarios for dam operation. The objective of this study is to compare deterministic and probabilistic streamflow forecasts on an earlier developed basin/reservoir model for short-term reservoir management. The study is applied to the Yuvacık Reservoir and its upstream basin, the main water supply of Kocaeli City in northwestern Turkey. The reservoir is a typical example because of its limited capacity, downstream channel restrictions and high snowmelt potential. Mesoscale Model 5 and Ensemble Prediction System data are used as the main inputs, and the flow forecasts are made for the year 2012 using HEC-HMS. A rule-based hydrometeorological reservoir simulation model is built with HEC-ResSim and integrated with the forecasts. Since the EPS-based hydrological model produces a large number of equally probable scenarios, it indicates how uncertainty spreads into the future and thus provides the operator with risk ranges for spillway discharges and reservoir level, in contrast with the deterministic approach.
The framework is fully data-driven, practical and useful to the profession, and the knowledge gained can be transferred to other similar reservoir systems.
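The risk ranges mentioned, bounds on spillway discharge and reservoir level derived from equally probable EPS scenarios, reduce to percentile bands over the ensemble. A sketch with invented numbers; the 10–90% band is an assumed choice, not necessarily the study's:

```python
def risk_range(ensemble, lower=0.10, upper=0.90):
    """Percentile band over equally probable ensemble forecast outcomes,
    e.g. simulated spillway discharges (m^3/s) or reservoir levels (m)."""
    values = sorted(ensemble)
    n = len(values)
    def nearest_rank(p):
        # nearest-rank percentile on equally weighted members
        return values[min(n - 1, int(p * n))]
    return nearest_rank(lower), nearest_rank(upper)

# Hypothetical ensemble of peak spillway discharges from 50 EPS members
peak_discharges = [20.0 + 1.5 * m for m in range(50)]
low_q, high_q = risk_range(peak_discharges)
```

The width of this band, compared against the single deterministic trajectory, is precisely the extra operational risk information the probabilistic approach provides.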