833 results for Probabilistic methodology
Abstract:
The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns of consistency among these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.
Abstract:
Many urban surface energy balance models now exist. These vary in complexity from simple schemes that represent the city as a concrete slab, to those which incorporate detailed representations of momentum and energy fluxes distributed within the atmospheric boundary layer. While many of these schemes have been evaluated against observations, with some models even compared with the same data sets, such evaluations have not been undertaken in a controlled manner to enable direct comparison. For other types of climate model, for instance the Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS) experiments (Henderson-Sellers et al., 1993), such controlled comparisons have been shown to provide important insights into both the mechanics of the models and the physics of the real world. This paper describes the progress that has been made to date on a systematic and controlled comparison of urban surface schemes. The models to be considered, and their key attributes, are described, along with the methodology to be used for the evaluation.
Abstract:
Hydrological ensemble prediction systems (HEPS) have in recent years been increasingly used for the operational forecasting of floods by European hydrometeorological agencies. The most obvious advantage of HEPS is that more of the uncertainty in the modelling system can be assessed. In addition, ensemble prediction systems generally have better skill than deterministic systems, both in terms of mean forecast performance and in the potential forecasting of extreme events. Research efforts have so far mostly been devoted to the improvement of the physical and technical aspects of the model systems, such as increased resolution in time and space and better description of physical processes. Developments like these are certainly needed; however, in this paper we argue that there are other areas of HEPS that need urgent attention. This was also the outcome of a group exercise and a survey conducted among operational forecasters within the European Flood Awareness System (EFAS) to identify the top priorities for improving their own system. These priorities turned out to span a range of areas, the most popular being verification and assessment of past forecast performance, a multi-model approach to hydrological modelling, increased forecast skill in the medium range (>3 days), and more focus on education and training in the interpretation of forecasts. In light of limited resources, we suggest a simple model to classify the identified priorities in terms of their cost and complexity in order to decide in which order to tackle them. This model is then used to create an action plan of short-, medium- and long-term research priorities, with the ultimate goal of optimally improving EFAS in particular and spurring the development of operational HEPS in general.
Abstract:
We analyse by simulation the impact of model-selection strategies (sometimes called pre-testing) on forecast performance in both constant- and non-constant-parameter processes. Restricted, unrestricted and selected models are compared when either of the first two might generate the data. We find little evidence that strategies such as general-to-specific induce significant over-fitting, or thereby cause forecast-failure rejection rates to greatly exceed nominal sizes. Parameter non-constancies put a premium on correct specification, but in general, model-selection effects appear to be relatively small, and progressive research is able to detect the mis-specifications.
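The following minimal Monte Carlo sketch illustrates the kind of pre-test experiment described above, under an assumed two-regressor data-generating process; the variable names and the simple t-ratio selection rule are illustrative, not the authors' setup.

```python
# Minimal sketch (not the authors' code): compare one-step forecast RMSE of a
# restricted model, an unrestricted model, and a pre-test-selected model when
# either of the first two generates the data. DGP: y = b1*x1 + b2*x2 + e.
import numpy as np

rng = np.random.default_rng(0)

def simulate(b2, n=100, reps=2000):
    errs = {"restricted": [], "unrestricted": [], "selected": []}
    for _ in range(reps):
        x1, x2 = rng.normal(size=(2, n + 1))
        y = 1.0 * x1 + b2 * x2 + rng.normal(size=n + 1)
        X_u = np.column_stack([x1[:n], x2[:n]])   # unrestricted regressors
        X_r = x1[:n, None]                        # restricted: x2 dropped
        bu, *_ = np.linalg.lstsq(X_u, y[:n], rcond=None)
        br, *_ = np.linalg.lstsq(X_r, y[:n], rcond=None)
        # crude pre-test: keep x2 only if its t-ratio exceeds ~1.96
        resid = y[:n] - X_u @ bu
        s2 = resid @ resid / (n - 2)
        se_b2 = np.sqrt(s2 * np.linalg.inv(X_u.T @ X_u)[1, 1])
        use_unrestricted = abs(bu[1] / se_b2) > 1.96
        f_u = bu[0] * x1[n] + bu[1] * x2[n]
        f_r = br[0] * x1[n]
        f_s = f_u if use_unrestricted else f_r
        for name, f in [("restricted", f_r), ("unrestricted", f_u), ("selected", f_s)]:
            errs[name].append(y[n] - f)
    return {k: round(float(np.sqrt(np.mean(np.square(v)))), 3) for k, v in errs.items()}

print("b2 = 0   :", simulate(b2=0.0))   # restricted model generates the data
print("b2 = 0.5 :", simulate(b2=0.5))   # unrestricted model generates the data
```

In such experiments the selected model typically forecasts almost as well as whichever pure model happens to be true, which is the sense in which pre-testing costs appear small.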
Abstract:
Climate model ensembles are widely heralded for their potential to quantify uncertainties and generate probabilistic climate projections. However, such technical improvements to modeling science will do little to deliver on their ultimate promise of improving climate policymaking and adaptation unless the insights they generate can be effectively communicated to decision makers. While some of these communicative challenges are unique to climate ensembles, others are common to hydrometeorological modeling more generally, and to the tensions arising between the imperatives for saliency, robustness, and richness in risk communication. The paper reviews emerging approaches to visualizing and communicating climate ensembles and compares them to the more established and thoroughly evaluated communication methods used in the numerical weather prediction domains of day-to-day weather forecasting (in particular probabilities of precipitation), hurricane and flood warning, and seasonal forecasting. This comparative analysis informs recommendations on best practice for climate modelers, as well as prompting some further thoughts on key research challenges to improve the future communication of climate change uncertainties.
Abstract:
In probabilistic decision tasks, an expected value (EV) of a choice is calculated, and after the choice has been made, this can be updated based on a temporal difference (TD) prediction error between the EV and the reward magnitude (RM) obtained. The EV is measured as the probability of obtaining a reward × RM. To understand the contribution of different brain areas to these decision-making processes, functional magnetic resonance imaging activations related to EV versus RM (or outcome) were measured in a probabilistic decision task. Activations in the medial orbitofrontal cortex were correlated both with RM and with EV, and were confirmed in a conjunction analysis to extend toward the pregenual cingulate cortex. From these representations, TD reward prediction errors could be produced. Activations in areas that receive from the orbitofrontal cortex, including the ventral striatum, midbrain, and inferior frontal gyrus, were correlated with the TD error. Activations in the anterior insula were correlated negatively with EV, occurring when low reward outcomes were expected, and also with the uncertainty of the reward, implicating this region in basic and crucial decision-making parameters, low expected outcomes, and uncertainty.
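A short worked sketch of the two quantities defined above may help: EV is the reward probability multiplied by the reward magnitude, and the TD prediction error is the obtained outcome minus that EV. The numbers are illustrative only, not values from the study.

```python
# Worked sketch of EV and TD prediction error as defined in the abstract.
def expected_value(p_reward: float, magnitude: float) -> float:
    """EV = probability of obtaining the reward x reward magnitude (RM)."""
    return p_reward * magnitude

def td_error(outcome: float, ev: float) -> float:
    """Positive when the outcome is better than expected, negative when worse."""
    return outcome - ev

ev = expected_value(p_reward=0.3, magnitude=10.0)   # EV = 3.0
print(td_error(outcome=10.0, ev=ev))                # reward delivered: +7.0
print(td_error(outcome=0.0, ev=ev))                 # no reward: -3.0
```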
Abstract:
Anticoagulant rodenticides have been known for over half a century as an effective and safe method of rodent control. However, anticoagulant resistance, first discovered in 1958, poses a serious problem for their long-term use. Laboratory tests provide the main method for identifying the different types of anticoagulant resistance, quantifying the magnitude of their effects and helping us to choose the best pest control strategy. The most important tests are the lethal feeding period (LFP) and blood clotting response (BCR) tests. These tests can now be used to quantify the likely effect of resistance on treatment outcome by providing an estimate of the 'resistance factor'. In 2004 the gene responsible for anticoagulant resistance (VKORC1) was identified and sequenced. As a result, a new molecular resistance-testing methodology has been developed and a number of resistance mutations have been identified, particularly in Norway rats and house mice. Three mutations of the VKORC1 gene in Norway rats identified to date confer a degree of resistance to bromadiolone and difenacoum sufficient to affect treatment outcome in the field.
Abstract:
Although over a hundred thermal indices can be used for assessing thermal health hazards, many ignore the human heat budget, physiology and clothing. The Universal Thermal Climate Index (UTCI) addresses these shortcomings by using an advanced thermo-physiological model. This paper assesses the potential of using the UTCI for forecasting thermal health hazards. Traditionally, such hazard forecasting has had two further limitations: it has been narrowly focused on a particular region or nation and has relied on the use of single 'deterministic' forecasts. Here, the UTCI is computed on a global scale, which is essential for international health-hazard warnings and disaster preparedness, and it is provided as a probabilistic forecast. It is shown that probabilistic UTCI forecasts are superior in skill to deterministic forecasts and that, despite global variations, the UTCI forecast is skilful for lead times up to 10 days. The paper also demonstrates the utility of probabilistic UTCI forecasts using the example of the 2010 heat wave in Russia.
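As an illustration of why an ensemble-based probabilistic forecast can out-score a single deterministic run, the sketch below compares Brier scores for a threshold-exceedance forecast on synthetic data; the 32 degree threshold, error spreads and sample sizes are assumptions for illustration, not values or code from the study.

```python
# Illustrative sketch: Brier score of an ensemble-derived exceedance probability
# versus a single deterministic member treated as a 0/1 forecast. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_days, n_members, threshold = 1000, 50, 32.0      # e.g. UTCI above ~32 C (heat stress)

truth = rng.normal(28.0, 4.0, n_days)                       # "observed" UTCI
ensemble = truth[:, None] + rng.normal(0.0, 3.0, (n_days, n_members))
observed_event = (truth > threshold).astype(float)

p_prob = (ensemble > threshold).mean(axis=1)                # probabilistic forecast
p_det = (ensemble[:, 0] > threshold).astype(float)          # single deterministic run

def brier(p):
    """Mean squared difference between forecast probability and observed event."""
    return float(np.mean((p - observed_event) ** 2))

print(f"Brier score, probabilistic: {brier(p_prob):.3f}")
print(f"Brier score, deterministic: {brier(p_det):.3f}")    # typically larger (worse)
```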
Abstract:
Windstorms are a main feature of the European climate and exert strong socioeconomic impacts. Considerable effort has been made in developing and enhancing models to simulate the intensification of windstorms, resulting footprints, and associated impacts. Simulated wind or gust speeds usually differ from observations, as regional climate models have biases and cannot capture all local effects. An approach to adjust regional climate model (RCM) simulations of wind and wind gust toward observations is introduced. For this purpose, 100 windstorms are selected and observations of 173 (111) test sites of the German Weather Service are considered for wind (gust) speed. Theoretical Weibull distributions are fitted to observed and simulated wind and gust speeds, and the distribution parameters of the observations are interpolated onto the RCM computational grid. A probability mapping approach is applied to relate the distributions and to correct the modeled footprints. The results are achieved not only for single test sites but for an area-wide regular grid. The approach is validated using root-mean-square errors on event and site basis, documenting that the method is generally able to adjust the RCM output toward observations. For gust speeds, an improvement on 88 of 100 events and at about 64% of the test sites is reached. For wind, 99 of 100 events and ~84% of sites are improved. This gives confidence in the potential of the introduced approach for many applications, in particular those considering wind data.
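A minimal sketch of the probability-mapping step described above follows, assuming two-parameter Weibull fits and synthetic gust data; it is not the authors' implementation, and in the paper the observed distribution parameters are additionally interpolated onto the RCM grid before the mapping is applied.

```python
# Sketch of Weibull-based probability (quantile) mapping: each simulated gust
# speed is mapped through its quantile in the fitted "model" distribution onto
# the fitted "observed" distribution. Data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
observed = stats.weibull_min.rvs(2.0, scale=12.0, size=500, random_state=rng)   # gusts at a site
simulated = stats.weibull_min.rvs(1.7, scale=15.0, size=500, random_state=rng)  # biased RCM output

# Fit two-parameter Weibull distributions (location fixed at zero).
c_obs, _, s_obs = stats.weibull_min.fit(observed, floc=0.0)
c_sim, _, s_sim = stats.weibull_min.fit(simulated, floc=0.0)

def probability_map(x):
    """Map simulated gust speeds onto the observed distribution via their quantiles."""
    q = stats.weibull_min.cdf(x, c_sim, scale=s_sim)
    return stats.weibull_min.ppf(q, c_obs, scale=s_obs)

corrected = probability_map(simulated)
print("mean gust  obs / raw / corrected:",
      observed.mean().round(1), simulated.mean().round(1), corrected.mean().round(1))
```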
Abstract:
This paper makes a theoretical case for using these two systems approaches together. The theoretical and methodological assumptions of system dynamics (SD) and soft systems methodology (SSM) are briefly described and a partial critique is presented. SSM generates and represents diverse perspectives on a problem situation and addresses the socio-political elements of an intervention. However, it is weak in ensuring 'dynamic coherence': consistency between the intuitive behaviour resulting from proposed changes and behaviour deduced from ideas on causal structure. Conversely, SD examines causal structures and dynamic behaviours. However, whilst emphasising the need for a clear issue focus, it has little theory for generating and representing diverse issues. Also, there is no theory for facilitating sensitivity to socio-political elements. A synthesis of the two, called 'Holon Dynamics', is proposed. After an SSM intervention, a second stage continues the socio-political analysis and also operates within a new perspective which values dynamic coherence of the mental construct, the holon, which is capable of expressing the proposed changes. A model of this holon is constructed using SD and the changes are thus rendered 'systemically desirable' in the additional sense that dynamic consistency has been confirmed. The paper closes with reflections on the proposal, and the need for theoretical consistency when mixing tools is emphasised.
Abstract:
This article reviews the experiences of a practising business consultancy division. It discusses the reasons for the failure of the traditional, expert consultancy approach and states the requirements for a more suitable consultancy methodology. An approach called ‘Modelling as Learning’ is introduced, its three defining aspects being: client ownership of all analytical work performed, consultant acting as facilitator and sensitivity to soft issues within and surrounding a problem. The goal of such an approach is set as the acceleration of the client's learning about the business. The tools that are used within this methodological framework are discussed and some case studies of the methodology are presented. It is argued that a learning experience was necessary before arriving at the new methodology but that it is now a valuable and significant component of the division's work.
Abstract:
We present a new parameterisation that relates surface mass balance (SMB: the sum of surface accumulation and surface ablation) to changes in surface elevation of the Greenland ice sheet (GrIS) for the MAR (Modèle Atmosphérique Régional: Fettweis, 2007) regional climate model. The motivation is to dynamically adjust SMB as the GrIS evolves, allowing us to force ice sheet models with SMB simulated by MAR while incorporating the SMB–elevation feedback, without the substantial technical challenges of coupling ice sheet and climate models. This also allows us to assess the effect of elevation feedback uncertainty on the GrIS contribution to sea level, using multiple global climate and ice sheet models, without the need for additional, expensive MAR simulations. We estimate this relationship separately below and above the equilibrium line altitude (ELA, separating negative and positive SMB) and for regions north and south of 77° N, from a set of MAR simulations in which we alter the ice sheet surface elevation. These give four "SMB lapse rates", gradients that relate SMB changes to elevation changes. We assess uncertainties within a Bayesian framework, estimating probability distributions for each gradient from which we present best estimates and credibility intervals (CI) that bound 95% of the probability. Below the ELA our gradient estimates are mostly positive, because SMB usually increases with elevation: 0.56 (95% CI: −0.22 to 1.33) kg m−3 a−1 for the north, and 1.91 (1.03 to 2.61) kg m−3 a−1 for the south. Above the ELA, the gradients are much smaller in magnitude: 0.09 (−0.03 to 0.23) kg m−3 a−1 in the north, and 0.07 (−0.07 to 0.59) kg m−3 a−1 in the south, because SMB can either increase or decrease in response to increased elevation. Our statistically founded approach allows us to make probabilistic assessments for the effect of elevation feedback uncertainty on sea level projections (Edwards et al., 2014).
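The sketch below shows how the four best-estimate SMB lapse rates quoted above could be applied to adjust SMB for a change in surface elevation; the function, dictionary keys and the 100 m example are illustrative assumptions, and the paper's regional masks and Bayesian sampling of the gradient distributions are not reproduced here.

```python
# Illustrative application of the quoted SMB lapse rates.
# Units: gradients in kg m-3 a-1, elevation change in m, dSMB in kg m-2 a-1.
BEST_ESTIMATE = {
    # (region, zone relative to the equilibrium line altitude) -> SMB lapse rate
    ("north", "below_ELA"): 0.56,
    ("south", "below_ELA"): 1.91,
    ("north", "above_ELA"): 0.09,
    ("south", "above_ELA"): 0.07,
}

def smb_correction(d_elevation_m: float, region: str, zone: str) -> float:
    """Change in SMB (kg m-2 a-1) implied by a change in surface elevation (m)."""
    return BEST_ESTIMATE[(region, zone)] * d_elevation_m

# A 100 m surface lowering in the southern ablation zone (below the ELA) lowers
# SMB by ~191 kg m-2 a-1, reinforcing further thinning: the SMB-elevation feedback.
print(smb_correction(-100.0, "south", "below_ELA"))
```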
Abstract:
The WFDEI meteorological forcing data set has been generated using the same methodology as the widely used WATCH Forcing Data (WFD) by making use of the ERA-Interim reanalysis data. We discuss the specifics of how changes in the reanalysis and processing have led to improvement over the WFD. We attribute improvements in precipitation and wind speed to the latest reanalysis basis data and improved downward shortwave fluxes to the changes in the aerosol corrections. Covering 1979–2012, the WFDEI will allow more thorough comparisons of hydrological and Earth System model outputs with hydrologically and phenologically relevant satellite products than using the WFD.
Abstract:
There is an ongoing debate on the environmental effects of genetically modified crops, to which this paper aims to contribute. First, data on the environmental impacts of genetically modified (GM) and conventional crops are collected from peer-reviewed journals; second, an analysis is conducted in order to examine which crop type is less harmful to the environment. Published data on environmental impacts are measured using an array of indicators, and their analysis requires normalisation and aggregation. Drawing on the composite indicators literature, this paper builds composite indicators to measure the impact of GM and conventional crops in three dimensions: (1) non-target key species richness, (2) pesticide use, and (3) aggregated environmental impact. The comparison of the three composite indicators for both crop types allows us to establish not only a ranking showing which crop type is preferable for the environment but also the probability that one crop type outperforms the other from an environmental perspective. Results show that GM crops tend to cause lower environmental impacts than conventional crops for the analysed indicators.
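A brief sketch of the composite-indicator idea described above: individual impact indicators are normalised, aggregated into a composite score per crop type, and resampling yields the probability that one crop type outperforms the other. The numbers, min-max normalisation, equal weighting and bootstrap step are assumptions for illustration, not the study's data or exact method.

```python
# Illustrative composite-indicator construction and outperformance probability.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-study indicator values (rows = studies, columns = indicators;
# lower values = less environmental impact).
gm = np.array([[0.4, 0.3, 0.5], [0.6, 0.2, 0.4], [0.5, 0.4, 0.3]])
conv = np.array([[0.7, 0.6, 0.8], [0.5, 0.7, 0.6], [0.8, 0.5, 0.7]])

def composite(x, lo, hi):
    """Min-max normalise each indicator column, then aggregate by equal-weight mean."""
    return ((x - lo) / (hi - lo)).mean(axis=1)

lo = np.minimum(gm.min(axis=0), conv.min(axis=0))
hi = np.maximum(gm.max(axis=0), conv.max(axis=0))
gm_scores, conv_scores = composite(gm, lo, hi), composite(conv, lo, hi)

# Bootstrap the probability that GM crops score lower (less impact) than conventional.
draws = 10_000
p = np.mean(rng.choice(gm_scores, draws) < rng.choice(conv_scores, draws))
print(f"P(GM composite impact < conventional) ~ {p:.2f}")
```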