70 results for Timed and Probabilistic Automata


Relevance: 30.00%

Abstract:

At the most recent session of the Conference of the Parties (COP19) in Warsaw (November 2013), the Warsaw international mechanism for loss and damage associated with climate change impacts was established under the United Nations Framework Convention on Climate Change (UNFCCC). The mechanism aims to promote the implementation of approaches to address loss and damage associated with the adverse effects of climate change. Specifically, it aims to enhance understanding of risk management approaches to address loss and damage. Understanding risks associated with impacts of highly predictable (slow onset) events like sea-level rise is relatively straightforward, whereas assessing the effects of climate change on extreme weather events and their impacts is much more difficult. However, extreme weather events are a significant cause of loss of life and livelihoods, particularly in vulnerable countries and communities in Africa. The emerging science of probabilistic event attribution is relevant here, as it provides scientific evidence on the contribution of anthropogenic climate change to changes in the risk of extreme events. It thus provides the opportunity to explore scientifically backed assessments of the human influence on such events. However, different ways of framing attribution questions can lead to very different assessments of the change in risk. Here we explain the methods, and the implications, of different approaches to attributing extreme weather events, with a focus on Africa. Crucially, we demonstrate that defining the most appropriate attribution question to ask is not a science decision but needs to be made in dialogue with those stakeholders who will use the answers.
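
The abstract's central object, the change in risk of an extreme event, is usually summarised by the probability ratio and the fraction of attributable risk (FAR). Below is a minimal sketch of that calculation, assuming two model ensembles (one with anthropogenic forcing, one counterfactual) and an event defined by threshold exceedance; all numbers are illustrative and not from the paper. Changing `threshold`, i.e. the event definition, changes both summaries, which is exactly the framing sensitivity the abstract emphasises.

```python
import numpy as np

def attribution_metrics(factual, counterfactual, threshold):
    """Probability ratio (PR) and fraction of attributable risk (FAR)
    from exceedance frequencies in two climate-model ensembles."""
    p1 = np.mean(np.asarray(factual) >= threshold)         # P(event | with anthropogenic forcing)
    p0 = np.mean(np.asarray(counterfactual) >= threshold)  # P(event | counterfactual climate)
    return p1 / p0, 1.0 - p0 / p1

# Illustrative ensembles only (hypothetical heat-index values, not data from the paper).
rng = np.random.default_rng(0)
factual = rng.normal(1.0, 1.0, 1000)         # climate with forcing: shifted distribution
counterfactual = rng.normal(0.0, 1.0, 1000)  # "world that might have been"
pr, far = attribution_metrics(factual, counterfactual, threshold=2.0)
print(f"probability ratio: {pr:.2f}, FAR: {far:.2f}")
```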

Relevance: 30.00%

Abstract:

Simulation models are widely employed to make probability forecasts of future conditions on seasonal to annual lead times. Added value in such forecasts is reflected in the information they add, either to purely empirical statistical models or to simpler simulation models. An evaluation of seasonal probability forecasts from the Development of a European Multimodel Ensemble system for seasonal to inTERannual prediction (DEMETER) and ENSEMBLES multi-model ensemble experiments is presented. Two particular regions are considered: Nino3.4 in the Pacific and the Main Development Region in the Atlantic; these regions were chosen before any spatial distribution of skill was examined. The ENSEMBLES models are found to have skill against the climatological distribution on seasonal time-scales. For models in ENSEMBLES that have a clearly defined predecessor model in DEMETER, the improvement from DEMETER to ENSEMBLES is discussed. Due to the long lead times of the forecasts and the evolution of observation technology, the forecast-outcome archive for seasonal forecast evaluation is small; arguably, evaluation data for seasonal forecasting will always be precious. Issues of information contamination from in-sample evaluation are discussed and impacts (both positive and negative) of variations in cross-validation protocol are demonstrated. Other difficulties due to the small forecast-outcome archive are identified. The claim that the multi-model ensemble provides a ‘better’ probability forecast than the best single model is examined and challenged. Significant forecast information beyond the climatological distribution is also demonstrated in a persistence probability forecast. The ENSEMBLES probability forecasts add significantly more information to empirical probability forecasts on seasonal time-scales than on decadal scales. Current operational forecasts might be enhanced by melding information from both simulation models and empirical models. Simulation models based on physical principles are sometimes expected, in principle, to outperform empirical models; direct comparison of their forecast skill provides information on progress toward that goal.
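
As a concrete illustration of "information added to the climatological distribution", the sketch below scores a single Gaussian forecast against a climatological benchmark using the ignorance (logarithmic) score. The ensemble values and climatology are invented, and this is not the DEMETER/ENSEMBLES evaluation protocol, which, as the abstract notes, must also handle cross-validation to avoid in-sample contamination.

```python
import numpy as np
from scipy.stats import norm

def ignorance(mu, sigma, outcome):
    """Ignorance score: negative log2 of the forecast density at the outcome."""
    return -np.log2(norm.pdf(outcome, loc=mu, scale=sigma))

# Invented ensemble and climatology for one hindcast of, say, Nino3.4 SST.
ensemble = np.array([26.1, 26.4, 25.9, 26.7, 26.3])
outcome = 26.5
clim_mu, clim_sigma = 25.8, 0.9

gain = (ignorance(clim_mu, clim_sigma, outcome)
        - ignorance(ensemble.mean(), ensemble.std(ddof=1), outcome))
print(f"bits gained over climatology: {gain:.2f}")  # positive => forecast adds information
```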

Relevance: 30.00%

Abstract:

Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from that created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
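
For readers unfamiliar with Tikhonov regularization in two dimensions, the following is a generic sketch of a 2DTR solve, assuming the downscaling problem min_x ||Ax - b||^2 + γ||Lx||^2, where A block-averages the fine grid to the coarse one and L is a discrete 2D Laplacian enforcing smoothness. The field sizes and γ are made up, and the paper's actual procedure (simulating random subgrid patterns, with γ tied to the semivariogram range) is richer than this deterministic illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def tikhonov_downscale(coarse, factor, gamma):
    """Solve min_x ||A x - b||^2 + gamma ||L x||^2 for a fine-grid field x
    whose block means (A x) match the coarse observations b."""
    nyc, nxc = coarse.shape
    ny, nx = nyc * factor, nxc * factor

    # A: each coarse cell is the mean of its factor-by-factor block of fine pixels.
    rows, cols = [], []
    for i in range(nyc):
        for j in range(nxc):
            for di in range(factor):
                for dj in range(factor):
                    rows.append(i * nxc + j)
                    cols.append((i * factor + di) * nx + (j * factor + dj))
    A = sp.csr_matrix((np.full(len(rows), 1.0 / factor**2), (rows, cols)),
                      shape=(nyc * nxc, ny * nx))

    # L: five-point Laplacian on the fine grid (the smoothness operator).
    d2 = lambda n: sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    L = sp.kronsum(d2(nx), d2(ny)).tocsr()

    x = spsolve((A.T @ A + gamma * (L.T @ L)).tocsc(), A.T @ coarse.ravel())
    return x.reshape(ny, nx)

# Illustrative: downscale a made-up 4x4 "coarse NDVI" field to 16x16.
coarse = np.random.default_rng(1).uniform(0.2, 0.8, (4, 4))
fine = tikhonov_downscale(coarse, factor=4, gamma=0.1)
```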

Relevance: 30.00%

Abstract:

The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), Proper Linear (PL) score, and I. J. Good's logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and is sensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counterintuitive evaluations by CRPS. Benchmark forecasts from empirical models like Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than internal comparison of systems based on similar physical simulation models with each other. It is shown that a forecast system based on HadCM3 outperforms Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based upon HadGEM2, and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average “distance” between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
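
The locality point can be made concrete: the ignorance score depends only on the forecast density at the verifying outcome, while CRPS integrates over the whole forecast distribution. The sketch below constructs two Gaussian forecasts with identical density at the outcome, so ignorance cannot distinguish them, yet their CRPS values differ. The closed-form Gaussian CRPS is the standard expression of Gneiting et al. (2005); the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def ignorance(mu, sigma, y):
    """Local score: depends only on the forecast density at the outcome y."""
    return -np.log2(norm.pdf(y, mu, sigma))

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS for a Gaussian forecast (Gneiting et al. 2005)."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# Two forecasts engineered to have the same density at the outcome y = 0:
y = 0.0
print(ignorance(1.0, 1.0, y), crps_gaussian(1.0, 1.0, y))                  # N(1, 1)
print(ignorance(0.0, np.exp(0.5), y), crps_gaussian(0.0, np.exp(0.5), y))  # N(0, e^0.5)
# Equal ignorance, different CRPS: CRPS rewards mass far from the outcome too.
```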

Relevance: 30.00%

Abstract:

Using lessons from idealised predictability experiments, we discuss some issues and perspectives on the design of operational seasonal to inter-annual Arctic sea-ice prediction systems. We first review the opportunities to use a hierarchy of different types of experiment to learn about the predictability of Arctic climate. We also examine key issues for ensemble system design, such as measuring skill, the role of ensemble size, and the generation of ensemble members. When assessing the potential skill of a set of prediction experiments, using more than one metric is essential, as different choices can significantly alter conclusions about the presence or lack of skill. We find that increasing both the number of hindcasts and the ensemble size is important for reliably assessing the correlation and expected error in forecasts. For other metrics, such as dispersion, increasing ensemble size is most important. Probabilistic measures of skill can also provide useful information about the reliability of forecasts. In addition, various methods for generating the different ensemble members are tested. The range of techniques can produce surprisingly different ensemble spread characteristics. The lessons learnt should help inform the design of future operational prediction systems.
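
To illustrate why both the number of hindcasts and the ensemble size matter when estimating correlation skill, here is a toy signal-plus-noise experiment (entirely synthetic, not the paper's sea-ice hindcasts): a predictable signal is shared between "truth" and ensemble members, each member adds independent noise, and the estimated correlation is examined as both sample sizes grow.

```python
import numpy as np

rng = np.random.default_rng(42)

def correlation_skill(n_hindcasts, n_members, signal=0.5, noise=1.0):
    """Correlation of the ensemble mean with 'truth' in a toy
    signal-plus-noise model of a hindcast set."""
    s = rng.normal(0.0, signal, n_hindcasts)         # shared predictable signal
    truth = s + rng.normal(0.0, noise, n_hindcasts)  # observed outcome
    members = s[:, None] + rng.normal(0.0, noise, (n_hindcasts, n_members))
    return np.corrcoef(truth, members.mean(axis=1))[0, 1]

# Estimates are noisy with few hindcasts and few members, and stabilise as both grow.
for n_h, n_m in [(10, 5), (10, 50), (100, 5), (100, 50)]:
    print(n_h, n_m, round(correlation_skill(n_h, n_m), 2))
```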

Relevance: 30.00%

Abstract:

The present study investigates the parsing of pre-nominal relative clauses (RCs) in children for the first time with a real-time methodology that reveals moment-to-moment processing patterns as the sentence unfolds. A self-paced listening experiment with Turkish-speaking children (aged 5–8) and adults showed that both groups display a sign of processing cost in both subject and object RCs, at different points in the flow of the utterance, when integrating cues that are uninformative (i.e., ambiguous in function) and that are structurally and probabilistically unexpected. Both groups show processing facilitation as soon as the morphosyntactic dependencies are completed, and both parse the unbounded dependencies rapidly using the morphosyntactic cues rather than waiting for the clause-final filler. These findings show that five-year-old children resemble adults in processing morphosyntactic cues incrementally and in forming expectations about the rest of the utterance on the basis of the probabilistic model of their language.

Relevance: 30.00%

Abstract:

An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but previous studies suggest that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, the lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures from both peak flood wrack mark data and aerial photography captured post-peak. These measures are used in weighting the parameter space to produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method assessing the binary pattern of flooding, and a method developed in this paper to assess predictions of water surface elevation. This study finds that the water surface elevation method has both a better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
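
The weighting step described in the abstract can be sketched generically: given a set of behavioural simulations and performance-based weights (in the paper, derived from wrack marks and post-peak aerial photography), the per-cell inundation probability is the weighted fraction of simulations that flood the cell. The code below is a minimal GLUE-style illustration with invented maps and weights, not the paper's LISFLOOD-FP setup.

```python
import numpy as np

def flood_probability(sim_maps, weights):
    """Per-cell probability of inundation from weighted behavioural simulations.

    sim_maps: (n_runs, ny, nx) boolean flood extents, one per parameter set
    weights:  (n_runs,) non-negative performance-based weights; normalised here
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, sim_maps.astype(float), axes=1)  # (ny, nx) values in [0, 1]

# Illustrative: three runs on a 2x2 domain with made-up weights.
sims = np.array([[[1, 0], [1, 1]],
                 [[1, 0], [0, 1]],
                 [[1, 1], [1, 1]]], dtype=bool)
print(flood_probability(sims, weights=[0.5, 0.3, 0.2]))
```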

Relevance: 30.00%

Abstract:

In 2013 the Warsaw International Mechanism (WIM) for loss and damage (L&D) associated with climate change impacts was established under the United Nations Framework Convention on Climate Change (UNFCCC). For scientists, L&D raises questions around the extent to which such impacts can be attributed to anthropogenic climate change, which may generate complex results and be controversial in the policy arena. This is particularly true in the case of probabilistic event attribution (PEA) science, a new and rapidly evolving field that assesses whether changes in the probabilities of extreme events are attributable to GHG emissions. If the potential applications of PEA are to be considered responsibly, dialogue between scientists and policy makers is fundamental. Two key questions are considered here through a literature review and key stakeholder interviews with representatives from the science and policy sectors underpinning L&D. These provided the opportunity for in-depth insights into stakeholders' views on, firstly, how much is known and understood about PEA by those associated with the L&D debate and, secondly, how PEA might inform L&D and wider climate policy. Results show debate within the climate science community, and limited understanding among other stakeholders, around the sense in which extreme events can be attributed to climate change. However, stakeholders do identify and discuss potential uses for PEA in the WIM and wider policy, but it remains difficult to explore precise applications given the ambiguity surrounding L&D. This implies a need for stakeholders to develop a greater understanding of alternative conceptions of L&D and of the role of science, and also to identify how PEA can best be used to support policy and address associated challenges.

Relevance: 30.00%

Abstract:

Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using their meteorological output to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in recent years in, for example, modelling capabilities, data, and resources have made it possible to produce global-scale flood forecasting systems. In this paper, the current state of operational large-scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large-scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium range and to disseminate forecasts and, in some cases, early warning products in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.

Relevance: 30.00%

Abstract:

Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is twofold, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management, and navigation). However, the richness of the information is also a source of challenges for operational uses, due in part to the difficulty of transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called “How much are you prepared to pay for a forecast?”. The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydrometeorology. The aim of this game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants' willingness-to-pay for a forecast, the results of the game show that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers.
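
The willingness-to-pay idea maps naturally onto the classic cost-loss decision model: a user protects at cost C whenever the forecast probability of the event exceeds C/L, and the forecast's value is the saving relative to the best fixed strategy based on climatology. The sketch below implements that textbook framing with invented, reliable-by-construction forecasts; it is not the game's actual payoff structure.

```python
import numpy as np

def willingness_to_pay(probs, events, cost, loss):
    """Cost-loss sketch: the average saving a forecast offers over the best
    fixed (climatology-based) strategy, i.e. the most a rational user
    should pay per forecast occasion."""
    probs = np.asarray(probs, dtype=float)
    events = np.asarray(events, dtype=float)
    baseline = min(cost, events.mean() * loss)  # always protect vs. never protect
    expense = np.where(probs > cost / loss, cost, events * loss).mean()
    return baseline - expense

# Illustrative, reliable-by-construction forecasts (not data from the game):
rng = np.random.default_rng(7)
p = rng.uniform(0, 1, 500)       # forecast probabilities of flooding
y = rng.uniform(0, 1, 500) < p   # outcomes drawn to match the stated probabilities
print(f"max price per forecast: {willingness_to_pay(p, y, cost=1.0, loss=5.0):.2f}")
```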