167 results for Flood Mapping
Abstract:
Operational medium-range flood forecasting systems are increasingly adopting ensembles of numerical weather predictions (NWP), known as ensemble prediction systems (EPS), to drive their predictions. We review the scientific drivers of this shift towards 'ensemble flood forecasting' and discuss several of the questions surrounding best practice in using EPS in flood forecasting systems. We also review the literature evidence of the 'added value' of flood forecasts based on EPS and point to remaining key challenges in using EPS successfully.
Abstract:
Floods are a major threat to human existence and historically have both caused the collapse of civilizations and forced the emergence of new cultures. The physical processes of flooding are complex. Increased population, climate variability, changes in catchment and channel management, modified land use and land cover, and natural change of floodplains and river channels all lead to changes in flood dynamics and, as a direct or indirect consequence, in the social welfare of humans. Section 5.08.1 explores the risks and benefits brought about by floods and reviews the responses of floods and floodplains to climate and land-use change. Section 5.08.2 reviews the existing modeling tools, and the top-down and bottom-up modeling frameworks that are used to assess impacts on future floods. Section 5.08.3 discusses changing flood risk and socioeconomic vulnerability based on current trends in emerging or developing countries and presents an alternative paradigm as a pathway to resilience. Section 5.08.4 concludes the chapter by stating a portfolio of integrated concepts, measures, and avant-garde thinking that would be required to sustainably manage future flood risk.
Abstract:
Hydrological ensemble prediction systems (HEPS) have in recent years been increasingly used for the operational forecasting of floods by European hydrometeorological agencies. The most obvious advantage of HEPS is that more of the uncertainty in the modelling system can be assessed. In addition, ensemble prediction systems generally have better skill than deterministic systems, both in terms of mean forecast performance and in the potential forecasting of extreme events. Research efforts have so far mostly been devoted to the improvement of the physical and technical aspects of the model systems, such as increased resolution in time and space and better description of physical processes. Developments like these are certainly needed; however, in this paper we argue that there are other areas of HEPS that need urgent attention. This was also the result of a group exercise and a survey conducted among operational forecasters within the European Flood Awareness System (EFAS) to identify the top priorities for improvement of their own system. These priorities turned out to span a range of areas, the most popular being verification and assessment of past forecast performance, a multi-model approach to hydrological modelling, increased forecast skill at the medium range (>3 days), and more focus on education and training in the interpretation of forecasts. In light of limited resources, we suggest a simple model that classifies the identified priorities in terms of their cost and complexity to decide in which order to tackle them. This model is then used to create an action plan of short-, medium- and long-term research priorities, with the ultimate goal of optimally improving EFAS in particular and spurring the development of operational HEPS in general.
Abstract:
As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. identifying the underlying model behaviour for a number of different purposes, such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller-magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F⟨2⟩) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
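The overprediction bias described above can be seen directly from the definition of the Critical Success Index for binary flood extents, F = A/(A + B + C), where A is the number of cells wet in both model and observation, B the cells wet only in the model, and C the cells wet only in the observation. The sketch below (illustrative only; the toy arrays and variable names are not from the paper) shows that for the same number of misclassified cells, an overpredicting model scores higher than an underpredicting one:

```python
import numpy as np

def critical_success_index(obs, sim):
    """F = A / (A + B + C): A = hits (wet in both), B = false alarms
    (wet only in the model), C = misses (wet only in the observation)."""
    obs = np.asarray(obs, dtype=bool)
    sim = np.asarray(sim, dtype=bool)
    a = np.sum(obs & sim)    # correctly predicted wet cells
    b = np.sum(~obs & sim)   # overpredicted wet cells
    c = np.sum(obs & ~sim)   # underpredicted (missed) wet cells
    return a / (a + b + c)

# Toy 1-D transect: 6 observed wet cells out of 10
obs   = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
over  = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0], dtype=bool)  # 2 cells too wide
under = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)  # 2 cells too narrow

print(critical_success_index(obs, over))   # 6/8  = 0.75
print(critical_success_index(obs, under))  # 4/6  ≈ 0.67
```

Both models misclassify exactly two cells, yet the overpredicting model scores 0.75 against 0.67 for the underpredicting one, because false alarms enlarge the denominator by less than misses reduce the numerator's share.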
Abstract:
Flood simulation models and hazard maps are only as good as the underlying data against which they are calibrated and tested. However, extreme flood events are by definition rare, so observational data of flood inundation extent are limited in both quality and quantity. The relative importance of these observational uncertainties has increased now that computing power and accurate lidar scans make it possible to run high-resolution 2D models to simulate floods in urban areas. However, the value of these simulations is limited by the uncertainty in the true extent of the flood. This paper addresses that challenge by analyzing a point dataset of maximum water extent from a flood event on the River Eden at Carlisle, United Kingdom, in January 2005. The observation dataset is based on a collection of wrack and water marks from two postevent surveys. A smoothing algorithm for identifying, quantifying, and reducing localized inconsistencies in the dataset is proposed and evaluated, with positive results. The proposed smoothing algorithm can be applied to improve the assessment of flood inundation models and the determination of risk zones on the floodplain.
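The kind of localized inconsistency the paper targets arises when one surveyed wrack mark implies a water level far from that of its spatial neighbours. As a generic illustration of such a local-consistency screen (this is not the paper's algorithm; the function, its parameters `k` and `tol`, and the toy data are all assumptions for the sketch), one could flag marks that deviate from the median elevation of their nearest neighbours:

```python
import numpy as np

def flag_local_outliers(x, y, z, k=5, tol=0.5):
    """Flag water-mark elevations z (m) at positions (x, y) that deviate
    from the median of their k nearest neighbours by more than tol metres.
    A generic local-consistency screen, not the cited paper's algorithm."""
    pts = np.column_stack([x, y])
    flags = np.zeros(len(z), dtype=bool)
    for i in range(len(z)):
        d = np.hypot(pts[:, 0] - pts[i, 0], pts[:, 1] - pts[i, 1])
        nbrs = np.argsort(d)[1:k + 1]          # k nearest marks, excluding self
        flags[i] = abs(z[i] - np.median(z[nbrs])) > tol
    return flags

# Ten marks along a reach at ~12 m, one planted 2 m too high
x = np.arange(10.0)
y = np.zeros(10)
z = np.full(10, 12.0)
z[4] = 14.0
flags = flag_local_outliers(x, y, z)
print(np.flatnonzero(flags))   # only the planted outlier is flagged
```

Flagged marks could then be down-weighted or removed before comparing model output against the observed extent.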
Abstract:
A holistic perspective on changing rainfall-driven flood risk is provided for the late 20th and early 21st centuries. Economic losses from floods have greatly increased, principally driven by the expanding exposure of assets at risk. It has not been possible to attribute rain-generated peak streamflow trends to anthropogenic climate change over the past several decades. Projected increases in the frequency and intensity of heavy rainfall, based on climate models, should contribute to increases in precipitation-generated local flooding (e.g. flash flooding and urban flooding). This article assesses the literature included in the IPCC SREX report and new literature published since, and includes an assessment of changes in flood risk in seven of the regions considered in the recent IPCC SREX report: Africa, Asia, Central and South America, Europe, North America, Oceania and Polar regions. Taking newer publications into account, this article is consistent with the recent IPCC SREX assessment finding that the impacts of climate change on flood characteristics are highly sensitive to the detailed nature of those changes, and that presently we have only low confidence in numerical projections of changes in flood magnitude or frequency resulting from climate change.
Abstract:
The Cévennes–Vivarais Mediterranean Hydrometeorological Observatory (OHM-CV) is a research initiative aimed at improving the understanding and modeling of the Mediterranean intense rain events that frequently result in devastating flash floods in southern France. A primary objective is to bring together the skills of meteorologists and hydrologists, modelers and instrumentalists, researchers and practitioners, to cope with these rather unpredictable events. In line with previously published flash-flood monographs, the present paper aims at documenting the 8–9 September 2002 catastrophic event, which resulted in 24 casualties and economic damage evaluated at 1.2 billion euros (i.e., about 1 billion U.S. dollars) in the Gard region, France. A description of the synoptic meteorological situation is first given and shows that no particular precursor indicated the imminence of such an extreme event. Then, radar and rain gauge analyses are used to assess the magnitude of the rain event, which was particularly remarkable for its spatial extent, with rain amounts greater than 200 mm in 24 h over 5500 km2. The maximum values of 600–700 mm observed locally are among the highest daily records in the region. The preliminary results of the postevent hydrological investigation show that the hydrologic response of the upstream watersheds of the Gard and Vidourle Rivers is consistent with the marked space–time structure of the rain event. It is noteworthy that peak specific discharges were very high over most of the affected areas (5–10 m3 s−1 km−2) and locally reached extraordinary values of more than 20 m3 s−1 km−2. A preliminary analysis indicates contrasting hydrological behaviors that seem to be related to geomorphological factors, notably the influence of karst in part of the region. Finally, an overview of the ongoing meteorological and hydrological research projects devoted to this case study within the OHM-CV is presented.
Abstract:
This paper presents an assessment of the implications of climate change for global river flood risk. It is based on the estimation of flood frequency relationships at a grid resolution of 0.5 × 0.5°, using a global hydrological model with climate scenarios derived from 21 climate models, together with projections of future population. Four indicators of the flood hazard are calculated: change in the magnitude and return period of flood peaks, flood-prone population and cropland exposed to substantial change in flood frequency, and a generalised measure of regional flood risk based on combining frequency curves with generic flood damage functions. Under one climate model, emissions and socioeconomic scenario (HadCM3 and SRES A1b), in 2050 the current 100-year flood would occur at least twice as frequently across 40 % of the globe, approximately 450 million flood-prone people and 430 thousand km2 of flood-prone cropland would be exposed to a doubling of flood frequency, and global flood risk would increase by approximately 187 % over the risk in 2050 in the absence of climate change. There is strong regional variability (most adverse impacts would be in Asia), and considerable variability between climate models. In 2050, the range in increased exposure across 21 climate models under SRES A1b is 31–450 million people and 59–430 thousand km2 of cropland, and the change in risk varies between −9 and +376 %. The paper presents impacts by region, and also presents relationships between change in global mean surface temperature and impacts on the global flood hazard. There are a number of caveats with the analysis: it is based on one global hydrological model only, the climate scenarios are constructed using pattern-scaling, and the precise impacts are sensitive to some of the assumptions in the definition and application.
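The statement that "the current 100-year flood would occur at least twice as frequently" can be illustrated with a small return-period calculation. The sketch below assumes a Gumbel flood frequency distribution and entirely hypothetical parameters for one grid cell (the paper's actual model and parameters are not reproduced here); it shows how a shift in the distribution's location parameter changes the return period of today's 100-year flood:

```python
import math

def gumbel_quantile(T, mu, sigma):
    """Flood magnitude with return period T under Gumbel(mu, sigma)."""
    return mu - sigma * math.log(-math.log(1.0 - 1.0 / T))

def return_period(x, mu, sigma):
    """Return period of magnitude x under Gumbel(mu, sigma)."""
    cdf = math.exp(-math.exp(-(x - mu) / sigma))
    return 1.0 / (1.0 - cdf)

# Hypothetical present-day parameters for one grid cell (not from the paper)
mu_now, sigma = 100.0, 30.0
q100 = gumbel_quantile(100, mu_now, sigma)   # today's 100-year flood magnitude

# Assumed wetter future climate: location parameter shifted upward
mu_future = 120.0
T_future = return_period(q100, mu_future, sigma)
print(round(q100, 1), round(T_future, 1))    # today's Q100 recurs roughly every 52 yr
```

Under these illustrative numbers, a modest upward shift in the location parameter roughly halves the return period of the present-day 100-year flood, which is the kind of change the paper's first indicator quantifies cell by cell across the globe.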
Abstract:
Taxonomic free sorting (TFS) is a fast, reliable and new technique in sensory science. The method extends the typical free sorting task where stimuli are grouped according to similarities, by asking respondents to combine their groups two at a time to produce a hierarchy. Previously, TFS has been used for the visual assessment of packaging whereas this study extends the range of potential uses of the technique to incorporate full sensory analysis by the target consumer, which, when combined with hedonic liking scores, was used to generate a novel preference map. Furthermore, to fully evaluate the efficacy of using the sorting method, the technique was evaluated with a healthy older adult consumer group. Participants sorted eight products into groups and described their reason at each stage as they combined those groups, producing a consumer-specific vocabulary. This vocabulary was combined with hedonic data from a separate group of older adults, to give the external preference map. Taxonomic sorting is a simple, fast and effective method for use with older adults, and its combination with liking data can yield a preference map constructed entirely from target consumer data.