974 results for Orion DBMS, Database, Uncertainty, Uncertain values, Benchmark
Abstract:
Projections of climate change impacts on crop yields are inherently uncertain[1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate[2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models[1,3] are difficult[4]. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models, and better quantified through the use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
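The headline finding, that crop models contribute more projection variance than downscaled GCMs, rests on a variance decomposition over a factorial ensemble. Below is a minimal sketch of that kind of ANOVA-style partitioning; the impact values and ensemble sizes are synthetic and purely illustrative, not the study's data or protocol.

```python
# Sketch: partition ensemble variance in projected yield impacts between
# crop models and GCMs via a two-way ANOVA-style decomposition.
# Hypothetical layout: impacts[i, j] = yield change simulated by crop
# model i when driven by downscaled GCM j.
import numpy as np

rng = np.random.default_rng(0)
impacts = rng.normal(-5.0, 3.0, size=(27, 16))  # e.g. 27 crop models x 16 GCMs

grand = impacts.mean()
crop_effect = impacts.mean(axis=1) - grand      # main effect of each crop model
gcm_effect = impacts.mean(axis=0) - grand       # main effect of each GCM

var_crop = np.var(crop_effect)
var_gcm = np.var(gcm_effect)
var_total = np.var(impacts)

print(f"crop-model share of variance: {var_crop / var_total:.2f}")
print(f"GCM share of variance:        {var_gcm / var_total:.2f}")
```

The two shares need not sum to one; the remainder is interaction and residual variability.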
Abstract:
Plant traits – the morphological, anatomical, physiological, biochemical and phenological characteristics of plants and their organs – determine how primary producers respond to environmental factors, affect other trophic levels, influence ecosystem processes and services and provide a link from species richness to ecosystem functional diversity. Trait data thus represent the raw material for a wide range of research from evolutionary biology, community and functional ecology to biogeography. Here we present the global database initiative named TRY, which has united a wide range of the plant trait research community worldwide and gained an unprecedented buy-in of trait data: so far 93 trait databases have been contributed. The data repository currently contains almost three million trait entries for 69 000 out of the world's 300 000 plant species, with a focus on 52 groups of traits characterizing the vegetative and regeneration stages of the plant life cycle, including growth, dispersal, establishment and persistence. A first data analysis shows that most plant traits are approximately log-normally distributed, with widely differing ranges of variation across traits. Most trait variation is between species (interspecific), but significant intraspecific variation is also documented, up to 40% of the overall variation. Plant functional types (PFTs), as commonly used in vegetation models, capture a substantial fraction of the observed variation – but for several traits most variation occurs within PFTs, up to 75% of the overall variation. In the context of vegetation models these traits would better be represented by state variables rather than fixed parameter values. The improved availability of plant trait data in the unified global database is expected to support a paradigm shift from species to trait-based ecology, offer new opportunities for synthetic plant trait research and enable a more realistic and empirically grounded representation of terrestrial vegetation in Earth system models.
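The inter- versus intraspecific split reported above follows from a one-way variance decomposition on a (typically log-transformed) trait. A hedged sketch follows; the species labels, values and column names are invented and are not the TRY schema or data.

```python
# Sketch: split the variance of a log-transformed plant trait into
# between-species (interspecific) and within-species (intraspecific)
# components, via the law of total variance.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "species": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "trait": np.exp(rng.normal(1.0, 0.4, 15)),   # traits are ~log-normal
})
df["log_trait"] = np.log(df["trait"])

total_var = df["log_trait"].var(ddof=0)
species_mean = df.groupby("species")["log_trait"].transform("mean")
within_var = ((df["log_trait"] - species_mean) ** 2).mean()
between_var = total_var - within_var            # interspecific component

print(f"intraspecific share of variation: {within_var / total_var:.0%}")
```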
Abstract:
Geomagnetic activity has long been known to exhibit approximately 27 day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27 day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires the performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27 day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27 day persistence is no longer a good approximation and skill scores suggest persistence is out-performed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focusses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the “best” forecast model must be specifically tailored to its intended use.
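A 27 day persistence forecast and its point-by-point assessment are easy to state precisely. The sketch below builds a synthetic hourly solar wind speed series, shifts it by one solar rotation, and scores it with correlation and an MSE-based skill score against a climatological-mean reference; the series and numbers are illustrative only.

```python
# Sketch: 27-day persistence forecast of solar wind speed, scored with
# correlation and a mean-square-error skill score.
import numpy as np

PERIOD_H = 27 * 24                        # one solar rotation, in hours
rng = np.random.default_rng(2)
t = np.arange(5 * PERIOD_H)
observed = 450 + 80 * np.sin(2 * np.pi * t / PERIOD_H) + rng.normal(0, 30, t.size)

persistence = observed[:-PERIOD_H]        # the value 27 days earlier
target = observed[PERIOD_H:]

corr = np.corrcoef(persistence, target)[0, 1]
mse = np.mean((persistence - target) ** 2)
mse_clim = np.mean((target.mean() - target) ** 2)  # climatological reference
print(f"r = {corr:.2f}, skill = {1 - mse / mse_clim:.2f}")
```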
Abstract:
In this paper we provide an alternative explanation for why illegal immigration can exhibit substantial fluctuation. We develop a model economy in which migrants make decisions in the face of uncertain border enforcement and lump-sum transfers from the host country. The uncertainty is extrinsic in nature, a sunspot, and arises as a result of ambiguity regarding the commodity price of money. Migrants are restricted from participating in state-contingent insurance markets in the host country, whereas host country natives are not. Volatility in migration flows stems from two distinct sources: the tension between transfers inducing migration and enforcement discouraging it and secondly the existence of a sunspot. Finally, we examine the impact of a change in tax/transfer policies by the government on migration.
Abstract:
The semi-distributed, dynamic INCA-N model was used to simulate the behaviour of dissolved inorganic nitrogen (DIN) in two Finnish research catchments. Parameter sensitivity and model structural uncertainty were analysed using generalized sensitivity analysis. The Mustajoki catchment is a forested upstream catchment, while the Savijoki catchment represents intensively cultivated lowlands. In general, there were more influential parameters in Savijoki than in Mustajoki. Model results were sensitive to N-transformation rates, vegetation dynamics, and soil and river hydrology. Values of the sensitive parameters were based on long-term measurements covering both warm and cold years. The highest measured DIN concentrations fell between the minimum and maximum values estimated during the uncertainty analysis. The lowest measured concentrations fell outside these bounds, suggesting that some retention processes may be missing from the current model structure. The lowest concentrations occurred mainly during low-flow periods, so the effects on total loads were small.
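Generalized (regional) sensitivity analysis of the kind applied here can be sketched compactly: sample parameters, classify each run as behavioural or not against an acceptance criterion, and compare the marginal parameter distributions of the two groups. The toy model and criterion below are stand-ins, not the INCA-N equations.

```python
# Sketch of generalized sensitivity analysis: parameters whose
# behavioural and non-behavioural marginals differ most (largest KS
# statistic) are the most influential.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
n = 2000
params = rng.uniform(0.0, 1.0, size=(n, 3))     # 3 hypothetical parameters

def toy_model(p):                               # placeholder, not INCA-N
    return 2.0 * p[0] + 0.1 * p[1] + rng.normal(0, 0.2)

sim = np.array([toy_model(p) for p in params])
behavioural = np.abs(sim - 1.0) < 0.3           # acceptance criterion

for i in range(params.shape[1]):
    stat, _ = ks_2samp(params[behavioural, i], params[~behavioural, i])
    print(f"parameter {i}: KS = {stat:.2f}")    # larger => more influential
```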
Abstract:
A number of studies have addressed the relationship between intra-personal uncertainty and inter-personal disagreement about the future values of economic variables such as output growth and inflation using the Survey of Professional Forecasters (SPF). By making use of the SPF respondents' probability forecasts of declines in output, we are able to construct a quarterly series of output growth uncertainty to supplement the annual series that are often used in such analyses. We also consider the relationship between disagreement and uncertainty for probability forecasts of declines in output.
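A common decomposition in this literature treats aggregate uncertainty as the average individual forecast variance and disagreement as the variance of individual mean forecasts, both computed from respondents' histogram forecasts. The sketch below illustrates that decomposition; the bins and probabilities are invented, not SPF data.

```python
# Sketch: uncertainty vs. disagreement from density (histogram) forecasts.
import numpy as np

bin_mids = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])   # growth bins, %
probs = np.array([                                       # 3 respondents
    [0.05, 0.10, 0.20, 0.40, 0.20, 0.05],
    [0.00, 0.05, 0.15, 0.50, 0.25, 0.05],
    [0.10, 0.20, 0.30, 0.30, 0.10, 0.00],
])

means = probs @ bin_mids                   # each respondent's mean forecast
variances = probs @ bin_mids**2 - means**2 # each respondent's variance

uncertainty = variances.mean()             # mean individual variance
disagreement = means.var()                 # cross-forecaster variance
print(f"uncertainty={uncertainty:.2f}, disagreement={disagreement:.2f}")
```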
Abstract:
We consider the impact of data revisions on the forecast performance of a SETAR regime-switching model of U.S. output growth. The impact of data uncertainty in real-time forecasting will affect a model's forecast performance via the effect on the model parameter estimates as well as via the forecast being conditioned on data measured with error. We find that benchmark revisions do affect the performance of the non-linear model of the growth rate, and that the performance relative to a linear comparator deteriorates in real-time compared to a pseudo out-of-sample forecasting exercise.
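For readers unfamiliar with the model class: a SETAR specification switches AR dynamics according to a threshold on the lagged variable, so the forecast depends on which regime the conditioning data place you in, which is exactly where data revisions bite. A minimal two-regime sketch follows; the coefficients and threshold are invented, not the paper's estimates.

```python
# Sketch: a two-regime SETAR(1) one-step forecast of output growth.
def setar_forecast(y_last: float,
                   threshold: float = 0.0,
                   low=(0.2, 0.9),    # (intercept, AR coeff) below threshold
                   high=(0.6, 0.3)) -> float:
    """Forecast next-period growth given last observed growth."""
    c, phi = low if y_last <= threshold else high
    return c + phi * y_last

print(setar_forecast(-0.5))   # recession regime
print(setar_forecast(1.2))    # expansion regime
```

With real-time data, y_last is a first-release figure; a later benchmark revision can move it across the threshold and switch the forecast regime entirely.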
Abstract:
The incorporation of numerical weather predictions (NWP) into a flood warning system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient as it involves considerable non-predictable uncertainties and can lead to a high number of false or missed warnings. Weather forecasts that combine multiple NWPs from various weather centres, applied to catchment hydrology, can provide significantly improved early flood warning. The availability of global ensemble weather prediction systems through the ‘THORPEX Interactive Grand Global Ensemble’ (TIGGE) offers a new opportunity for the development of state-of-the-art early flood forecasting systems. This paper presents a case study using the TIGGE database for flood warning on a meso-scale catchment (4062 km²) located in the Midlands region of England. For the first time, a research attempt is made to set up a coupled atmospheric-hydrologic-hydraulic cascade system driven by the TIGGE ensemble forecasts. A probabilistic discharge and flood inundation forecast is provided as the end product to study the potential benefits of using the TIGGE database. The study shows that precipitation input uncertainties dominate and propagate through the cascade chain. The current NWPs fall short of representing the spatial precipitation variability on such a comparatively small catchment, which indicates a need to improve NWP resolution and/or disaggregation techniques to narrow the spatial gap between meteorology and hydrology. The spread of discharge forecasts varies from centre to centre, but it is generally large and implies a significant level of uncertainty. Nevertheless, the results show the TIGGE database is a promising tool for forecasting flood inundation, comparable with forecasts driven by rain-gauge observations.
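The probabilistic end product described above reduces, at its simplest, to threshold-exceedance fractions across ensemble members. A minimal sketch with synthetic members (the ensemble size echoes a typical centre's, but every value is made up):

```python
# Sketch: turn an ensemble of discharge forecasts into exceedance
# probabilities of a flood warning threshold at each lead time.
import numpy as np

rng = np.random.default_rng(4)
n_members, n_leads = 51, 10
discharge = rng.gamma(shape=4.0, scale=20.0, size=(n_members, n_leads))

THRESHOLD = 120.0                                # warning level, m^3/s
p_exceed = (discharge > THRESHOLD).mean(axis=0)  # member fraction above it
for lead, p in enumerate(p_exceed, start=1):
    print(f"day {lead}: P(Q > {THRESHOLD:.0f}) = {p:.2f}")
```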
Abstract:
This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we only explore the most accessible version, rejection-ABC. Rejection-ABC involves running models a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although well-established in some fields, whether ABC will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. ABC’s accepted values produced slightly better fits than literature values do. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the seven unnarrowed parameters, ABC revealed that three were correlated with other parameters, while the remaining four were found to be not estimable given the data available. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version which removed the earthworm’s movement and much of the energy budget. We are able to show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs. We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM’s structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
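Rejection-ABC as described is short enough to sketch in full: draw parameters from their priors, simulate, and keep the draws whose simulated summaries lie nearest the observations. The stand-in model and summary statistics below replace the 14-parameter earthworm IBM purely for illustration.

```python
# Sketch of the rejection-ABC loop: prior draws -> simulate -> keep the
# closest fraction as the approximate posterior.
import numpy as np

rng = np.random.default_rng(5)
observed = np.array([3.0, 5.5])               # e.g. growth, cocoon summaries

def simulate(theta):                          # placeholder for the IBM
    return np.array([theta[0] + theta[1], 2.0 * theta[1]]) + rng.normal(0, 0.1, 2)

n_draws, keep = 20_000, 500
priors = rng.uniform([0, 0], [10, 10], size=(n_draws, 2))
dist = np.array([np.linalg.norm(simulate(th) - observed) for th in priors])

posterior = priors[np.argsort(dist)[:keep]]   # accepted draws
print(posterior.mean(axis=0), posterior.std(axis=0))
```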
Abstract:
Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contains errors, quantifying how uncertainties in the data affect a model’s output is important. To do so, a spatial distribution of possible land cover values is required to propagate through the model’s simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Also, computer models often require land cover proportions at sites larger than the original map scale as inputs, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution as implied by a Bayesian analysis that combines spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has been applied previously to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possesses only small biases, with the largest belonging to non-vegetated surfaces. In vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
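The sampling scheme can be sketched in a simplified, non-spatial form: for each mapped class at a site, draw a Dirichlet realisation of the true-class probabilities implied by the confusion matrix, then resample the pixel counts. The matrix and counts below are hypothetical, and the full method's spatial component is omitted.

```python
# Sketch: Monte Carlo realisations of land cover proportions at a site,
# with uncertainty induced by the confusion matrix.
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical confusion matrix: rows = mapped class, cols = true class.
confusion = np.array([[80,  5, 15],
                      [10, 70, 20],
                      [ 5, 10, 85]], dtype=float)

mapped_counts = np.array([120, 40, 60])   # pixels per mapped class at a site

n_real = 1000
realisations = np.zeros((n_real, 3))
for r in range(n_real):
    for m, n_pix in enumerate(mapped_counts):
        p_true = rng.dirichlet(confusion[m] + 1.0)  # posterior over true class
        realisations[r] += rng.multinomial(n_pix, p_true)
realisations /= mapped_counts.sum()       # convert counts to proportions

print(realisations.mean(axis=0), realisations.std(axis=0))
```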
Abstract:
Random effect models have been widely applied in many fields of research. However, models with uncertain design matrices for random effects have received little attention. In some applications with such problems, an expectation method has been used for simplicity, but this method discards the extra information carried by the uncertainty in the design matrix. A closed-form solution for this problem is generally difficult to attain. We therefore propose a two-step algorithm for estimating the parameters, especially the variance components in the model. The implementation is based on Monte Carlo approximation and a Newton-Raphson-based EM algorithm. As an example, a simulated genetics dataset was analyzed. The results showed that the proportion of the total variance explained by the random effects was accurately estimated, whereas it was highly underestimated by the expectation method. By introducing heuristic search and optimization methods, the algorithm could be developed further to infer the 'model-based' best design matrix and the corresponding best estimates.
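The shortcoming of the expectation method can be seen in a few lines: plugging E[Z] into the marginal covariance understates the random-effect contribution relative to averaging over sampled design matrices. Everything below (the Bernoulli model for Z, the variance components) is a made-up illustration, not the paper's algorithm.

```python
# Sketch: marginal covariance V = sigma2_u * Z Z' + sigma2_e * I under an
# uncertain 0/1 design matrix Z, comparing plug-in E[Z] vs Monte Carlo.
import numpy as np

rng = np.random.default_rng(7)
n, q = 50, 4
sigma2_u, sigma2_e = 2.0, 1.0
p_entry = 0.3                                  # hypothetical P(Z_ij = 1)

# Expectation method: plug in E[Z]
Z_bar = np.full((n, q), p_entry)
V_expect = sigma2_u * Z_bar @ Z_bar.T + sigma2_e * np.eye(n)

# Monte Carlo: average Z Z' over sampled design matrices
acc = np.zeros((n, n))
for _ in range(500):
    Z = rng.binomial(1, p_entry, size=(n, q)).astype(float)
    acc += Z @ Z.T
V_mc = sigma2_u * acc / 500 + sigma2_e * np.eye(n)

# The plug-in trace is smaller: the expectation method understates the
# share of variance attributable to the random effects.
print(np.trace(V_expect) / n, np.trace(V_mc) / n)
```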
Abstract:
The reliable evaluation of flood forecasts is a crucial problem for assessing flood risk and the consequent damages. Different hydrological models (distributed, semi-distributed or lumped) have been proposed to deal with this issue. The choice of the proper model structure has been investigated by many authors and is one of the main sources of uncertainty in the correct evaluation of the outflow hydrograph. In addition, the recent increase in data availability makes it possible to update hydrological models in response to real-time observations. For these reasons, the aim of this work is to evaluate the effect of different structures of a semi-distributed hydrological model on the assimilation of distributed uncertain discharge observations. The study was applied to the Bacchiglione catchment, located in Italy. The first methodological step was to divide the basin into different sub-basins according to topographic characteristics. Secondly, two different structures of the semi-distributed hydrological model were implemented to estimate the outflow hydrograph. Then, synthetic observations of uncertain discharge values were generated, as a function of the observed and simulated flow at the basin outlet, and assimilated into the semi-distributed models using a Kalman Filter. Finally, different spatial patterns of sensor locations were assumed to update the model state in response to the uncertain discharge observations. The results of this work point out that, overall, the assimilation of uncertain observations can improve hydrologic model performance. In particular, it was found that the model structure is an important factor, and one that is difficult to characterize, since it can induce different forecasts of outflow discharge. This study is partly supported by the FP7 EU Project WeSenseIt.
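The assimilation step itself is a standard Kalman update. A scalar sketch of correcting a modelled discharge state with one uncertain observation (all numbers invented; the study itself works with a semi-distributed state vector):

```python
# Sketch: scalar Kalman filter update of a discharge state.
def kalman_update(x_prior: float, p_prior: float,
                  obs: float, obs_var: float):
    """Return posterior state and variance after assimilating one observation."""
    k = p_prior / (p_prior + obs_var)        # Kalman gain
    x_post = x_prior + k * (obs - x_prior)   # pull state toward the observation
    p_post = (1.0 - k) * p_prior             # reduced posterior variance
    return x_post, p_post

x, p = kalman_update(x_prior=85.0, p_prior=25.0, obs=100.0, obs_var=16.0)
print(f"updated discharge: {x:.1f} m^3/s (variance {p:.1f})")
```

The noisier the observation (larger obs_var), the smaller the gain and the less the model state moves, which is how uncertain discharge observations are weighted in the update.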
Abstract:
A procedure for characterizing the global uncertainty of a rainfall-runoff simulation model using grey numbers is presented. With the grey numbers technique the uncertainty is characterized by an interval: once the parameters of the rainfall-runoff model have been properly defined as grey numbers, grey mathematics and functions make it possible to obtain simulated discharges in the form of grey numbers whose envelope defines a band representing the vagueness/uncertainty associated with the simulated variable. The grey numbers representing the model parameters are estimated so that the band obtained from the envelope of simulated grey discharges includes an assigned percentage of observed discharge values while remaining as narrow as possible. The approach is applied to a real case study, highlighting that a rigorous application of the procedure for direct simulation through the rainfall-runoff model with grey parameters involves long computational times. However, these times can be significantly reduced using a simplified computing procedure with minimal approximations in the quantification of the grey numbers representing the simulated discharges. Relying on this simplified procedure, the conceptual grey rainfall-runoff model is calibrated and the uncertainty bands obtained downstream of both the calibration and the validation processes are compared with those obtained using a well-established approach for characterizing uncertainty, the GLUE approach. The results of the comparison show that the proposed approach may represent a valid tool for characterizing the global uncertainty associated with the output of a rainfall-runoff simulation model.
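The core mechanism, propagating interval-valued (grey) parameters through the model so the simulated discharge emerges as a band, can be sketched with elementary interval arithmetic. The runoff relation below is a toy, not the paper's conceptual model.

```python
# Sketch: a grey (interval) parameter propagated through a toy runoff
# relation, yielding a grey discharge whose bounds form the uncertainty band.

def grey_mul(a, b):
    """Interval multiplication: take the extremes of all endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

runoff_coeff = (0.30, 0.45)      # grey parameter [lower, upper]
rainfall = (12.0, 12.0)          # observed input (crisp, zero-width interval)
discharge = grey_mul(runoff_coeff, rainfall)
print(f"grey discharge band: [{discharge[0]:.1f}, {discharge[1]:.1f}] mm")
```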
Abstract:
An underwater gas pipeline is the portion of a pipeline that crosses a river beneath its bottom. Underwater gas pipelines are subject to increasing dangers as time goes by. An accident at an underwater gas pipeline can lead to technological and environmental disaster on the scale of an entire region. Therefore, timely troubleshooting of all underwater gas pipelines in order to prevent any potential accidents will remain a pressing task for the industry. The most important aspect of resolving this challenge is the quality of the automated system in question. The industry currently has no automated system that fully meets the needs of the experts working in the field maintaining underwater gas pipelines. Principal Aim of this Research: This work aims to develop a new automated monitoring system that would simplify the process of evaluating the technical condition and support decision making on planning, preventive maintenance and repair work on underwater gas pipelines. Objectives: creation of a shared model for a new automated system via IDEF3; development of a new database system to store all information about underwater gas pipelines; development of a new application that works with database servers and provides an explanation of the results obtained from the server; calculation of MTBF values for specified pipelines based on quantitative data obtained from tests of this system. Conclusion: The new automated system PodvodGazExpert has been developed for timely and qualitative determination of the physical condition of underwater gas pipelines. The mathematical analysis underlying this new automated system uses the principal component analysis method. The process of determining the physical condition of an underwater gas pipeline with this new automated system increases the MTBF by a factor of 8.18 over the existing system used in the industry today.
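The abstract names principal component analysis as the system's mathematical basis. As a hedged illustration of that kind of analysis (synthetic diagnostic features, nothing from PodvodGazExpert itself):

```python
# Sketch: PCA compressing many diagnostic measurements per pipeline
# segment into a few condition indicators, via SVD of the centred data.
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 6))            # 200 segments x 6 diagnostic features
Xc = X - X.mean(axis=0)                  # centre each feature

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance explained per component
scores = Xc @ Vt[:2].T                   # first two condition indicators
print("variance explained:", np.round(explained[:2], 2))
```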
Abstract:
In this paper we apply the theory of decision making with expected utility and non-additive priors to the choice of an optimal portfolio. This theory describes the behavior of a rational agent who is averse to pure 'uncertainty' (as well as, possibly, to 'risk'). We study the agent's optimal allocation of wealth between a safe and an uncertain asset. We show that there is a range of prices at which the agent neither buys nor sells short the uncertain asset. In contrast, the standard theory of expected utility predicts that there is exactly one such price. We also provide a definition of an increase in uncertainty aversion and show that it causes the range of prices to increase.
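The no-trade price range has a crisp computational form under the maxmin (multiple-priors) representation often used for uncertainty-averse, Choquet-style preferences: the agent buys only below the smallest expected payoff over its priors and sells short only above the largest. The payoffs and prior set below are invented for illustration.

```python
# Sketch: no-trade price interval for an uncertain asset under a set of
# priors, for a risk-neutral uncertainty-averse agent.
import numpy as np

payoffs = np.array([80.0, 120.0])            # asset payoff in two states
priors = [np.array([0.3, 0.7]),              # set of priors expressing
          np.array([0.5, 0.5]),              # ambiguity about the states
          np.array([0.6, 0.4])]

expected = [p @ payoffs for p in priors]
buy_below, sell_above = min(expected), max(expected)
print(f"no-trade price range: [{buy_below:.1f}, {sell_above:.1f}]")
```

Enlarging the set of priors, one natural reading of greater uncertainty aversion, widens the interval, matching the comparative-static result stated in the abstract.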