187 results for Hazard prediction
Abstract:
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Climate change does not result in an increase in flood hazard everywhere: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20-45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5-30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty that needs to be taken into account in local adaptation studies.
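The 30-y return level indicator described above is typically estimated by fitting an extreme-value distribution to annual maxima. A minimal sketch, assuming daily flow series and a standard GEV fit (the paper does not specify its fitting method; variable names here are illustrative):

```python
# Hedged sketch: estimating a 30-year return level from annual maxima of
# 5-day mean flows, in the spirit of the indicator described above.
import numpy as np
from scipy.stats import genextreme

def annual_max_5day(daily_flow, years):
    """Annual maxima of the 5-day running-mean flow (alignment approximate)."""
    smooth = np.convolve(daily_flow, np.ones(5) / 5, mode="valid")
    yrs = np.asarray(years)[:len(smooth)]
    return np.array([smooth[yrs == y].max() for y in np.unique(yrs)])

def return_level(annual_maxima, T=30):
    """T-year return level: the flow exceeded with probability 1/T per year."""
    c, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.isf(1.0 / T, c, loc, scale)

# The change in hazard can then be expressed either as the change in this
# level, or as the future return period of the historical 30-y level:
# new_T = 1 / genextreme.sf(hist_level, *genextreme.fit(future_maxima))
```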
Abstract:
The more information that is available, and the more predictable events are, the better forecasts ought to be. In this paper, forecasts by bookmakers, prediction markets and tipsters are evaluated for a range of events with varying degrees of predictability and information availability. All three types of forecast represent different structures of information processing and as such would be expected to perform differently. By and large, events that are more predictable, and for which more information is available, do tend to be forecast better.
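One common way to compare such forecast sources, sketched here as an illustration rather than the paper's exact method, is to convert odds to implied probabilities and score them against outcomes with a proper scoring rule such as the Brier score:

```python
# Illustrative sketch: decimal bookmaker odds -> implied probabilities
# (normalising away the bookmaker's margin), scored with the Brier score.
import numpy as np

def implied_probs(decimal_odds):
    """Inverse odds, renormalised so the overround is removed."""
    raw = 1.0 / np.asarray(decimal_odds, dtype=float)
    return raw / raw.sum()

def brier(probs, outcome_index):
    """Multi-category Brier score; lower is better."""
    target = np.zeros_like(probs)
    target[outcome_index] = 1.0
    return np.sum((probs - target) ** 2)

# e.g. home/draw/away odds of 2.1, 3.4, 3.9, with a home win observed:
print(brier(implied_probs([2.1, 3.4, 3.9]), 0))
```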
Abstract:
The inclusion of the direct and indirect radiative effects of aerosols in high-resolution global numerical weather prediction (NWP) models is being increasingly recognised as important for the improved accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations is evaluated, including three-dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and, finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as large dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing long-wave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propagate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high-latitude clean-air regions. This leads to improved temperature and height forecasts in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short-range forecasts. However, the indirect aerosol effect leads to a strengthening of the low-level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented, with the large dust loading in the aerosol climatology enhancing the heat low over West Africa and weakening the AEJ. This study highlights the importance of including a more realistic treatment of aerosol–cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes.
Abstract:
Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim, ERA-Interim) and a satellite rainfall product (the Climate Prediction Center morphing method, CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
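The four-module chain lends itself to a simple schematic. The sketch below uses deliberately toy placeholder functions (not the authors' model) just to show how the synthetic event set flows through hazard, vulnerability and loss stages to an expected annual loss:

```python
# Schematic of a four-module catastrophe model chain; every function is a
# toy stand-in so the sketch runs, not a real model component.
import random

def stochastic_rainfall():               # module 1: synthetic event generator
    return random.expovariate(1 / 50)    # event rainfall depth (mm), toy

def flood_depth(rain_mm):                # module 2: hydrology/hydraulic hazard
    return max(0.0, (rain_mm - 60) / 100)  # flood depth (m), toy threshold

def damage_fraction(depth_m):            # module 3: vulnerability curve
    return min(1.0, depth_m / 2)

def event_loss(depth_m, exposure=1e9):   # module 4: financial loss
    return damage_fraction(depth_m) * exposure

# Expected loss over a large synthetic set of unobserved severe events,
# mirroring how the abstract describes catastrophe models being used.
losses = [event_loss(flood_depth(stochastic_rainfall())) for _ in range(100_000)]
print(sum(losses) / len(losses))
```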
Abstract:
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study it is shown that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are ‘toughest to beat’ and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark that has most utility for EFAS, and that best avoids naïve skill across different hydrological situations, is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts, and on which benchmarks are most useful for skill discrimination and the avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can trust their skill evaluation and have confidence that their forecasts are indeed better.
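The CRPS-based comparison at the heart of this experiment can be written compactly. A minimal sketch of the empirical ensemble CRPS and the skill score relative to a benchmark (variable names are illustrative; EFAS' operational implementation will differ):

```python
# Empirical CRPS of an ensemble and a benchmark-relative skill score:
# CRPSS = 1 - CRPS_system / CRPS_benchmark, positive when the system wins.
import numpy as np

def crps_ensemble(members, obs):
    """Sample CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members X."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

def crpss(system_members, benchmark_members, obs):
    """Skill of the system relative to the chosen benchmark forecast."""
    return 1.0 - crps_ensemble(system_members, obs) / crps_ensemble(benchmark_members, obs)
```

An easily beaten benchmark (such as a long-term average) inflates this score, which is the "naïve skill" the study warns against; a tough benchmark like meteorological persistency gives a more honest number.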
Abstract:
The assessment of chess players is an increasingly attractive opportunity and an unfortunate necessity. The chess community needs to limit potential reputational damage by inhibiting cheating and unjustified accusations of cheating: there has been a recent rise in both. A number of counter-intuitive discoveries have been made by benchmarking the intrinsic merit of players’ moves: these call for further investigation. Is Capablanca actually, objectively the most accurate World Champion? Has Elo rating inflation not taken place? Stimulated by FIDE/ACP, we revisit the fundamentals of the subject to advance a framework suitable for improved standards of computational experiment and more precise results. Other domains look to chess as the demonstrator of good practice, including the rating of professionals making high-value decisions under pressure, personnel evaluation by Multichoice Assessment and the organization of crowd-sourcing in citizen science projects. The ‘3P’ themes of performance, prediction and profiling pervade all these domains.
Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations be made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme performs a simplified radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find that a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
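The incremental time-stepping logic reduces to: heating rate = last full calculation + (current thin-band calculation - thin-band calculation at the time of the full call). A schematic sketch of that call pattern, with trivial stand-in functions so it runs (the real radiation code is of course far more involved):

```python
# Schematic of 'incremental time-stepping': full radiation calls are made
# every full_every steps; in between, the heating rate is updated with the
# change in a cheap optically thin calculation. Assumes the first call is
# at step 0 (a full calculation).
def make_heating_rate(full_calc, thin_calc, full_every=6):
    cache = {}
    def heating_rate(step, state):
        if step % full_every == 0:
            cache["full"] = full_calc(state)    # expensive: all k-terms
            cache["thin0"] = thin_calc(state)   # cheap: 1-2 thin k-terms
        # increment captures cloud-field changes since the last full call
        return cache["full"] + (thin_calc(state) - cache["thin0"])
    return heating_rate

# toy usage with stand-in calculations:
hr = make_heating_rate(lambda s: 2.0 * s, lambda s: 0.5 * s)
print([hr(step, 1.0 + 0.1 * step) for step in range(8)])
```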
Abstract:
There has been a significant increase in the skill and resolution of numerical weather prediction models (NWPs) in recent decades, extending the time scales of useful weather predictions. The land-surface models (LSMs) of NWPs are often employed in hydrological applications, which raises the question of how hydrologically representative LSMs really are. In this paper, precipitation (P), evaporation (E) and runoff (R) from the European Centre for Medium-Range Weather Forecasts (ECMWF) global models were evaluated against observational products. The forecasts differ substantially from observed data for key hydrological variables. In addition, imbalanced surface water budgets, mostly caused by data assimilation, were found on both global (P-E) and basin scales (P-E-R), with the latter being more important. Modeled surface fluxes should be used with care in hydrological applications and further improvement in LSMs in terms of process descriptions, resolution and estimation of uncertainties is needed to accurately describe the land-surface water budgets.
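The budget checks mentioned above follow from conservation: over a long mean, basin precipitation minus evaporation minus runoff (minus storage change) should vanish. A minimal sketch of that diagnostic, assuming basin-mean flux series in consistent units (names illustrative):

```python
# Basin-scale water budget residual; a persistent non-zero value diagnoses
# an imbalanced budget such as the assimilation-induced one reported above.
import numpy as np

def budget_residual(P, E, R, dS=None):
    """Mean of P - E - R - dS over the series (e.g. mm/day)."""
    P, E, R = (np.asarray(a, dtype=float) for a in (P, E, R))
    dS = np.zeros_like(P) if dS is None else np.asarray(dS, dtype=float)
    return np.mean(P - E - R - dS)
```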
Abstract:
Weather, climate, water and related environmental conditions, including air quality, all have profound effects on cities. A growing importance is being attached to understanding and predicting atmospheric conditions and their interactions with other components of the Earth System in cities, at multiple scales. We highlight the need for: (1) development of high-resolution coupled environmental prediction models that include realistic city-specific processes, boundary conditions and fluxes; (2) enhanced observational systems to support (force, constrain, evaluate) these models to provide high quality forecasts for new urban services; (3) provision of meteorological and related environmental variables to aid protection of human health and the environment; (4) new targeted and customized delivery platforms using modern communication techniques, developed with users to ensure that services, advice and warnings result in appropriate action; and (5) development of new skill and capacity to make best use of technologies to deliver new services in complex, challenging and evolving city environments. We highlight the importance of a coordinated and strategic approach that draws on, but does not replicate, past work to maximize benefits to stakeholders.
Abstract:
Methods to explicitly represent uncertainties in weather and climate models have been developed and refined over the past decade, and have reduced biases and improved forecast skill when implemented in the atmospheric component of models. These methods have not yet been applied to the land surface component of models. Since the land surface is strongly coupled to the atmospheric state at certain times and in certain places (such as the European summer of 2003), improvements in the representation of land surface uncertainty may potentially lead to improvements in atmospheric forecasts for such events. Here we analyse seasonal retrospective forecasts for 1981–2012 performed with the European Centre for Medium-Range Weather Forecasts’ (ECMWF) coupled ensemble forecast model. We consider two methods of incorporating uncertainty into the land surface model (H-TESSEL): stochastic perturbation of tendencies, and static perturbation of key soil parameters. We find that the perturbed parameter approach considerably improves the forecast of extreme air temperature for summer 2003, through better representation of negative soil moisture anomalies and upward sensible heat flux. Averaged across all the reforecasts, the perturbed parameter experiment shows relatively little impact on the mean bias, suggesting that perturbations of at least this magnitude can be applied to the land surface without any degradation of model climate. There is also little impact on skill averaged across all reforecasts, and some evidence of overdispersion for soil moisture. The stochastic tendency experiments show a large overdispersion for the soil temperature fields, indicating that the perturbation here is too strong. There is also some indication that the forecast of the 2003 warm event is improved for the stochastic experiments; however, the improvement is not as large as that observed for the perturbed parameter experiment.
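The two perturbation strategies differ only in when the random factors are drawn. An illustrative sketch (not H-TESSEL's actual implementation): perturbed-parameter members draw one multiplicative factor per soil parameter and hold it fixed for the whole forecast, while stochastic tendency perturbation redraws factors every model step:

```python
# Illustrative contrast between static perturbed parameters and stochastic
# tendency perturbation; magnitudes (sigma) are arbitrary choices here.
import numpy as np

rng = np.random.default_rng(0)

def perturbed_parameters(params, n_members, sigma=0.2):
    """One static log-normal factor per parameter per ensemble member."""
    factors = rng.lognormal(mean=0.0, sigma=sigma, size=(n_members, len(params)))
    return factors * np.asarray(params, dtype=float)

def perturb_tendency(tendency, sigma=0.1):
    """Stochastic tendency perturbation, redrawn on every model step."""
    return tendency * rng.lognormal(0.0, sigma, size=np.shape(tendency))
```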
Abstract:
The role and function of a given protein is dependent on its structure. In recent years, however, numerous studies have highlighted the importance of unstructured, or disordered, regions in governing a protein’s function. Disordered proteins have been found to play important roles in pivotal cellular functions, such as DNA binding and signalling cascades. Studying proteins with extended disordered regions is often problematic as they can be challenging to express, purify and crystallise. This means that interpretable experimental data on protein disorder are hard to generate. As a result, predictive computational tools have been developed with the aim of predicting the level and location of disorder within a protein. Currently, over 60 prediction servers exist, utilizing different methods for classifying disorder and different training sets. Here we review several well-performing, publicly available prediction methods, comparing their application and discussing how disorder prediction servers can be used to aid the experimental solution of protein structure. The use of disorder prediction methods allows us to adopt a more targeted approach to experimental studies by accurately identifying the boundaries of ordered protein domains so that they may be investigated separately, thereby increasing the likelihood of their successful experimental solution.
Abstract:
Using lessons from idealised predictability experiments, we discuss some issues and perspectives on the design of operational seasonal to inter-annual Arctic sea-ice prediction systems. We first review the opportunities to use a hierarchy of different types of experiment to learn about the predictability of Arctic climate. We also examine key issues for ensemble system design, such as measuring skill, the role of ensemble size, and the generation of ensemble members. When assessing the potential skill of a set of prediction experiments, using more than one metric is essential as different choices can significantly alter conclusions about the presence or lack of skill. We find that increasing both the number of hindcasts and the ensemble size is important for reliably assessing the correlation and expected error in forecasts. For other metrics, such as dispersion, increasing ensemble size is most important. Probabilistic measures of skill can also provide useful information about the reliability of forecasts. In addition, various methods for generating the different ensemble members are tested. The range of techniques can produce surprisingly different ensemble spread characteristics. The lessons learnt should help inform the design of future operational prediction systems.
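The point about metrics can be made concrete: correlation, RMSE of the ensemble mean, and ensemble spread can each tell a different story about the same hindcast set (a well-dispersed system has spread close to that RMSE). A minimal sketch with illustrative names:

```python
# Several skill metrics over a hindcast set; conclusions about "skill"
# can differ depending on which of these is consulted.
import numpy as np

def skill_metrics(ens, obs):
    """ens: (n_hindcasts, n_members) forecasts; obs: (n_hindcasts,)."""
    ens, obs = np.asarray(ens, dtype=float), np.asarray(obs, dtype=float)
    mean = ens.mean(axis=1)
    corr = np.corrcoef(mean, obs)[0, 1]                 # anomaly correlation
    rmse = np.sqrt(np.mean((mean - obs) ** 2))          # ensemble-mean error
    spread = np.sqrt(ens.var(axis=1, ddof=1).mean())    # ensemble dispersion
    return {"corr": corr, "rmse": rmse, "spread/rmse": spread / rmse}
```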