144 results for flood forecasting model
Abstract:
Effective disaster risk management relies on science-based solutions to close the gap between prevention and preparedness measures. The consultation on the United Nations post-2015 framework for disaster risk reduction highlights the need for cross-border early warning systems to strengthen the preparedness phases of disaster risk management, in order to save lives and property and reduce the overall impact of severe events. Continental and global scale flood forecasting systems provide vital early flood warning information to national and international civil protection authorities, who can use this information to make decisions on how to prepare for upcoming floods. Here the potential monetary benefits of early flood warnings are estimated based on the forecasts of the continental-scale European Flood Awareness System (EFAS) using existing flood damage cost information and calculations of potential avoided flood damages. The benefits are of the order of 400 Euro for every 1 Euro invested. A sensitivity analysis is performed in order to test the uncertainty in the method and develop an envelope of potential monetary benefits of EFAS warnings. The results provide clear evidence that there is likely a substantial monetary benefit in this cross-border continental-scale flood early warning system. This supports the wider drive to implement early warning systems at the continental or global scale to improve our resilience to natural hazards.
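The headline figure above is a ratio of avoided flood damages to the cost of running the warning system. A minimal sketch of that kind of benefit-cost calculation, using entirely hypothetical damage, cost, and warning-performance numbers (not EFAS values):

```python
# Sketch of a benefit-cost calculation for early flood warnings.
# All numbers below are illustrative placeholders, not EFAS figures.

def avoided_damage(potential_damage, reduction_frac, hit_rate):
    """Expected damage avoided when a warning is issued and acted on."""
    return potential_damage * reduction_frac * hit_rate

# two hypothetical flood events (damages in euros)
events = [
    {"damage": 120e6, "reduction": 0.10, "hit_rate": 0.7},
    {"damage": 45e6,  "reduction": 0.10, "hit_rate": 0.7},
]
total_benefit = sum(
    avoided_damage(e["damage"], e["reduction"], e["hit_rate"]) for e in events
)
annual_cost = 1.0e6  # hypothetical system operating cost (euros)
ratio = total_benefit / annual_cost
print(f"benefit-cost ratio ~ {ratio:.2f} : 1")
```

A sensitivity analysis of the kind the abstract describes would vary the reduction fraction and hit rate to build an envelope of plausible ratios.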
Abstract:
The incorporation of numerical weather predictions (NWP) into a flood warning system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient, as it involves considerable non-predictable uncertainties and can lead to a high number of false or missed warnings. Weather forecasts from multiple NWPs produced by various weather centres, applied to catchment hydrology, can provide significantly improved early flood warning. The availability of global ensemble weather prediction systems through the ‘THORPEX Interactive Grand Global Ensemble’ (TIGGE) offers a new opportunity for the development of state-of-the-art early flood forecasting systems. This paper presents a case study using the TIGGE database for flood warning on a meso-scale catchment (4062 km²) located in the Midlands region of England. For the first time, a research attempt is made to set up a coupled atmospheric-hydrologic-hydraulic cascade system driven by the TIGGE ensemble forecasts. A probabilistic discharge and flood inundation forecast is provided as the end product to study the potential benefits of using the TIGGE database. The study shows that precipitation input uncertainties dominate and propagate through the cascade chain. The current NWPs fall short of representing the spatial precipitation variability on such a comparatively small catchment, which indicates a need to improve NWP resolution and/or disaggregation techniques to narrow the spatial gap between meteorology and hydrology. The spread of discharge forecasts varies from centre to centre, but it is generally large and implies a significant level of uncertainty. Nevertheless, the results show the TIGGE database is a promising tool for forecasting flood inundation, comparable with forecasts driven by rain-gauge observations.
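A toy illustration of the probabilistic end product described here: given peak-discharge values from ensemble members (synthetic numbers, not TIGGE output), compute the ensemble mean, spread, and the probability of exceeding a warning threshold:

```python
# Toy probabilistic discharge forecast from a multi-member ensemble.
# Values are synthetic; a real system would drive a hydrological model
# with each TIGGE ensemble member and threshold the resulting discharges.
import statistics

# hypothetical peak-discharge forecasts (m^3/s) from 8 ensemble members
members = [310.0, 295.0, 350.0, 410.0, 280.0, 330.0, 300.0, 370.0]
threshold = 340.0  # hypothetical warning threshold (m^3/s)

mean = statistics.mean(members)
spread = statistics.stdev(members)  # ensemble spread as sample std. dev.
p_exceed = sum(m > threshold for m in members) / len(members)
print(mean, spread, p_exceed)
```

A large spread relative to the mean, as the abstract reports, signals a significant level of forecast uncertainty.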
Abstract:
The Convective Storm Initiation Project (CSIP) is an international project to understand precisely where, when, and how convective clouds form and develop into showers in the mainly maritime environment of southern England. A major aim of CSIP is to compare the results of the very high resolution Met Office weather forecasting model with detailed observations of the early stages of convective clouds and to use the newly gained understanding to improve the predictions of the model. A large array of ground-based instruments plus two instrumented aircraft, from the U.K. National Centre for Atmospheric Science (NCAS) and the German Institute for Meteorology and Climate Research (IMK), Karlsruhe, were deployed in southern England, over an area centered on the meteorological radars at Chilbolton, during the summers of 2004 and 2005. In addition to a variety of ground-based remote-sensing instruments, numerous rawin-sondes were released at one- to two-hourly intervals from six closely spaced sites. The Met Office weather radar network and Meteosat satellite imagery were used to provide context for the observations made by the instruments deployed during CSIP. This article presents an overview of the CSIP field campaign and examples from CSIP of the types of convective initiation phenomena that are typical in the United Kingdom. It shows the way in which certain kinds of observational data are able to reveal these phenomena and gives an explanation of how the analyses of data from the field campaign will be used in the development of an improved very high resolution NWP model for operational use.
Abstract:
For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory. Copyright © 2005 John Wiley & Sons, Ltd.
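Tikhonov regularization combined with the singular value decomposition can be sketched on a toy linear system. The "observability matrix" below is an arbitrary random matrix, not one of the paper's simplified models; the filter factors damp the contribution of small singular values, trading noise amplification against bias:

```python
# Sketch: SVD of a toy observability matrix and Tikhonov-regularised
# reconstruction of an initial state from noisy observations.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 5))        # toy observability matrix (20 obs, 5 states)
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = H @ x_true + 0.1 * rng.standard_normal(20)  # noisy observations

U, s, Vt = np.linalg.svd(H, full_matrices=False)
alpha = 0.1                              # regularisation parameter
filt = s / (s**2 + alpha**2)             # Tikhonov filter factors
x_hat = Vt.T @ (filt * (U.T @ y))        # regularised state estimate
print(np.round(x_hat, 2))
```

Choosing `alpha` against the observational noise level is the signal-to-noise trade-off that the abstract says Tikhonov regularization theory is used to estimate.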
Abstract:
The validity of convective parametrization breaks down at the resolution of mesoscale models, and the success of parametrized versus explicit treatments of convection is likely to depend on the large-scale environment. In this paper we examine the hypothesis that a key feature determining the sensitivity to the environment is whether the forcing of convection is sufficiently homogeneous and slowly varying that the convection can be considered to be in equilibrium. Two case studies of mesoscale convective systems over the UK, one where equilibrium conditions are expected and one where equilibrium is unlikely, are simulated using a mesoscale forecasting model. The time evolution of area-average convective available potential energy and the time evolution and magnitude of the timescale of convective adjustment are consistent with the hypothesis of equilibrium for case 1 and non-equilibrium for case 2. For each case, three experiments are performed with different partitionings between parametrized and explicit convection: fully parametrized convection, fully explicit convection and a simulation with significant amounts of both. In the equilibrium case, bulk properties of the convection such as area-integrated rain rates are insensitive to the treatment of convection. However, the detailed structure of the precipitation field changes; the simulation with parametrized convection behaves well and produces a smooth field that follows the forcing region, and the simulation with explicit convection has a small number of localized intense regions of precipitation that track with the mid-level flow. For the non-equilibrium case, bulk properties of the convection such as area-integrated rain rates are sensitive to the treatment of convection. The simulation with explicit convection behaves similarly to the equilibrium case with a few localized precipitation regions.
In contrast, the cumulus parametrization fails dramatically and develops intense propagating bows of precipitation that were not observed. The simulations with both parametrized and explicit convection follow the pattern seen in the other experiments, with a transition over the duration of the run from parametrized to explicit precipitation. The impact of convection on the large-scale flow, as measured by upper-level wind and potential-vorticity perturbations, is very sensitive to the partitioning of convection for both cases. © Royal Meteorological Society, 2006. Contributions by P. A. Clark and M. E. B. Gray are Crown Copyright.
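The convective adjustment timescale used to distinguish the two regimes is, in essence, the time convection needs to consume the available CAPE. A back-of-envelope version, with purely illustrative numbers:

```python
# Back-of-envelope convective adjustment timescale: the time for
# convection to remove the available CAPE at its current removal rate.
# Numbers are illustrative, not from either case study.
cape = 800.0       # J/kg, area-average CAPE
dcape_dt = 0.2     # J/kg/s, rate of CAPE removal by convection
tau_seconds = cape / dcape_dt
tau_hours = tau_seconds / 3600.0
print(f"adjustment timescale ~ {tau_hours:.1f} h")
# A timescale short compared with the timescale of the large-scale
# forcing suggests equilibrium convection; a long one suggests
# non-equilibrium, as in case 2 above.
```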
Abstract:
Moist convection is well known to be generally more intense over continental than maritime regions, with larger updraft velocities, graupel, and lightning production. This study explores the transition from maritime to continental convection by comparing the trends in Tropical Rainfall Measuring Mission (TRMM) radar and microwave (37 and 85 GHz) observations over islands of increasing size to those simulated by a cloud-resolving model. The observed storms were essentially maritime over islands of <100 km² and continental over islands >10 000 km², with a gradual transition in between. Equivalent radar and microwave quantities were simulated from cloud-resolving runs of the Weather Research and Forecasting model via offline radiation codes. The model configuration was idealized, with islands represented by regions of uniform surface heat flux without orography, using a range of initial sounding conditions without strong horizontal winds or aerosols. Simulated storm strength varied with initial sounding, as expected, but also increased sharply with island size in a manner similar to observations. Stronger simulated storms were associated with higher concentrations of large hydrometeors. Although biases varied with different ice microphysical schemes, the trend was similar for all three schemes tested and was also seen in 2D and 3D model configurations. The successful reproduction of the trend with such idealized forcing supports previous suggestions that mesoscale variation in surface heating—rather than any difference in humidity, aerosol, or other aspects of the atmospheric state—is the main reason that convection is more intense over continents and large islands than over oceans. Some dynamical storm aspects, notably the peak rainfall and minimum surface pressure low, were more sensitive to surface forcing than to the atmospheric sounding or ice scheme.
Large hydrometeor concentrations and simulated microwave and radar signatures, however, were at least as sensitive to initial humidity levels as to surface forcing and were more sensitive to the ice scheme. Issues with running the TRMM simulator on 2D simulations are discussed, but they appear to be less serious than sensitivities to model microphysics, which were similar in 2D and 3D. This supports the further use of 2D simulations to economically explore modeling uncertainties.
Abstract:
In situ high resolution aircraft measurements of cloud microphysical properties were made in coordination with ground based remote sensing observations of a line of small cumulus clouds, using radar and lidar, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict that too few particles will be active as ice nuclei to account for ice particle concentrations at the observed, near cloud top, temperatures (−7.5 °C). The role of mineral dust particles, consistent with concentrations observed near the surface, acting as high temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L−1) could be produced by secondary ice particle production, provided the observed small amount of primary ice (about 0.01 L−1) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop process (HM) is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was the process of collision and coalescence of rain drops, which once formed fell rapidly through the cloud, collecting ice particles that caused them to freeze, instantly forming large rimed particles.
The broadening of the droplet size distribution by collision-coalescence was, therefore, a vital step in this process, as it was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these model simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold, and this, combined with the assumption of continual replenishment of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared to the observations and parcel modelling.
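The scale of the secondary-ice multiplication implied by the abstract's own numbers (from ~0.01 L⁻¹ primary ice to up to 100 L⁻¹ observed) can be made explicit:

```python
# Multiplication factor implied by the reported ice concentrations:
# primary nucleation alone supplies ~0.01 ice particles per litre,
# while up to ~100 per litre were observed.
primary = 0.01    # L^-1, primary ice concentration
observed = 100.0  # L^-1, peak observed ice concentration
multiplication = observed / primary
print(f"required multiplication factor ~ {multiplication:.0e}")
```

A four-order-of-magnitude enhancement is the gap that the Hallett-Mossop rime-splintering mechanism is invoked to close.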
Abstract:
The parameterization of surface heat-flux variability in urban areas relies on adequate representation of surface characteristics. Given the horizontal resolutions (e.g. ≈0.1–1 km) currently used in numerical weather prediction (NWP) models, properties of the urban surface (e.g. vegetated/built surfaces, street-canyon geometries) often have large spatial variability. Here, a new approach based on Urban Zones to characterize Energy partitioning (UZE) is tested within a NWP model (Weather Research and Forecasting model; WRF v3.2.1) for Greater London. The urban land-surface scheme is the Noah/Single-Layer Urban Canopy Model (SLUCM). Detailed surface information (horizontal resolution 1 km) in central London shows that the UZE offers better characterization of surface properties and their variability compared to default WRF-SLUCM input parameters. In situ observations of the surface energy fluxes and near-surface meteorological variables are used to select the radiation and turbulence parameterization schemes and to evaluate the land-surface scheme.
Abstract:
The impact of 1973–2005 land use–land cover (LULC) changes on near-surface air temperatures during four recent summer extreme heat events (EHEs) is investigated for the arid Phoenix, Arizona, metropolitan area using the Weather Research and Forecasting Model (WRF) in conjunction with the Noah Urban Canopy Model. WRF simulations were carried out for each EHE using LULC for the years 1973, 1985, 1998, and 2005. Comparison of measured near-surface air temperatures and wind speeds for 18 surface stations in the region shows a good agreement between observed and simulated data for all simulation periods. The results indicate consistent significant contributions of urban development and accompanying LULC changes to extreme temperatures for the four EHEs. Simulations suggest new urban developments caused an intensification and expansion of the area experiencing extreme temperatures but mainly influenced nighttime temperatures with an increase of up to 10 K. Nighttime temperatures in the existing urban core showed changes of up to 2 K with the ongoing LULC changes. Daytime temperatures were not significantly affected where urban development replaced desert land (increase by 1 K); however, maximum temperatures increased by 2–4 K when irrigated agricultural land was converted to suburban development. According to the model simulations, urban landscaping irrigation contributed to cooling by 0.5–1 K in maximum daytime as well as minimum nighttime 2-m air temperatures in most parts of the urban region. Furthermore, urban development led to a reduction of the already relatively weak nighttime winds and therefore a reduction in advection of cooler air into the city.
Abstract:
The CWRF is developed as a climate extension of the Weather Research and Forecasting model (WRF) by incorporating numerous improvements in the representation of physical processes and integration of external (top, surface, lateral) forcings that are crucial to climate scales, including interactions between land, atmosphere, and ocean; convection and microphysics; cloud, aerosol, and radiation; and system consistency throughout all process modules. This extension inherits all WRF functionalities for numerical weather prediction while enhancing the capability for climate modeling. As such, CWRF can be applied seamlessly to weather forecast and climate prediction. The CWRF is built with a comprehensive ensemble of alternative parameterization schemes for each of the key physical processes, including surface (land, ocean), planetary boundary layer, cumulus (deep, shallow), microphysics, cloud, aerosol, and radiation, and their interactions. This facilitates the use of an optimized physics ensemble approach to improve weather or climate prediction along with a reliable uncertainty estimate. The CWRF also emphasizes the societal service capability to provide impact-relevant information by coupling with detailed models of terrestrial hydrology, coastal ocean, crop growth, air quality, and a recently expanded interactive water quality and ecosystem model. This study provides a general CWRF description and basic skill evaluation based on a continuous integration for the period 1979–2009 as compared with that of WRF, using a 30-km grid spacing over a domain that includes the contiguous United States plus southern Canada and northern Mexico. In addition to advantages of greater application capability, CWRF improves performance in radiation and terrestrial hydrology over WRF and other regional models. Precipitation simulation, however, remains a challenge for all of the tested models.
Abstract:
This paper examines the lead–lag relationship between the FTSE 100 index and index futures price employing a number of time series models. Using 10-min observations from June 1996–1997, it is found that lagged changes in the futures price can help to predict changes in the spot price. The best forecasting model is of the error correction type, allowing for the theoretical difference between spot and futures prices according to the cost of carry relationship. This predictive ability is in turn utilised to derive a trading strategy which is tested under real-world conditions to search for systematic profitable trading opportunities. It is revealed that although the model forecasts produce significantly higher returns than a passive benchmark, the model was unable to outperform the benchmark after allowing for transaction costs.
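The cost-of-carry relationship referenced here prices the future off the spot index. A sketch, with purely illustrative inputs, of the fair value and the mispricing term that an error-correction model would use as its equilibrium-correction variable:

```python
# Cost-of-carry fair value for an index future, and the mispricing
# (basis) term an error-correction model uses as its equilibrium-
# correction variable. All input values are illustrative.
import math

def fair_futures_price(spot, r, d, tau):
    """F = S * exp((r - d) * tau): continuously compounded carry."""
    return spot * math.exp((r - d) * tau)

spot = 4000.0   # hypothetical index level
r = 0.06        # risk-free rate (annualised)
d = 0.035       # dividend yield (annualised)
tau = 0.25      # years to expiry

fair = fair_futures_price(spot, r, d, tau)
observed = 4030.0                 # hypothetical quoted futures price
mispricing = observed - fair      # deviation from cost-of-carry equilibrium
print(round(fair, 2), round(mispricing, 2))
```

In an error-correction specification, changes in the spot price are regressed on lagged futures changes plus this lagged mispricing term, which pulls prices back toward the cost-of-carry equilibrium.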
Abstract:
The Weather Research and Forecasting model was applied to analyze variations in the planetary boundary layer (PBL) structure over Southeast England including central and suburban London. The parameterizations and predictive skills of two nonlocal mixing PBL schemes, YSU and ACM2, and two local mixing PBL schemes, MYJ and MYNN2, were evaluated over a variety of stability conditions, with model predictions at a 3 km grid spacing. The PBL height predictions, which are critical for scaling turbulence and diffusion in meteorological and air quality models, show significant intra-scheme variance (> 20%), and the reasons are presented. ACM2 diagnoses the PBL height thermodynamically using the bulk Richardson number method, which leads to a good agreement with the lidar data for both unstable and stable conditions. The modeled vertical profiles in the PBL, such as wind speed, turbulent kinetic energy (TKE), and heat flux, exhibit large spreads across the PBL schemes. The TKE predicted by MYJ was found to be too small and showed much less diurnal variation as compared with observations over London. MYNN2 produces better TKE predictions at low levels than MYJ, but its turbulent length scale increases with height in the upper part of the strongly convective PBL, where it should decrease. The local PBL schemes considerably underestimate the entrainment heat fluxes for convective cases. The nonlocal PBL schemes exhibit stronger mixing in the mean wind fields under convective conditions than the local PBL schemes and agree better with large-eddy simulation (LES) studies.
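The bulk Richardson number diagnosis used by ACM2 can be sketched as follows: scan upward through the profile and take the first level where the bulk Richardson number exceeds a critical value. The profile values and the critical value of 0.25 are assumptions for illustration; operational schemes differ in details:

```python
# Sketch of PBL-height diagnosis via the bulk Richardson number:
# take the first level where Ri_b exceeds a critical value (0.25 here).
# The profile is synthetic, not from the London case study.
G = 9.81  # gravitational acceleration (m/s^2)

def bulk_richardson(theta_v0, theta_v, z, u, v):
    """Bulk Richardson number between the surface layer and height z."""
    return G * (theta_v - theta_v0) * z / (theta_v0 * max(u * u + v * v, 1e-6))

# synthetic profiles: height (m), virtual potential temperature (K), winds (m/s)
z_lev   = [50.0, 150.0, 300.0, 500.0, 800.0, 1200.0]
theta_v = [300.0, 300.1, 300.2, 300.4, 301.5, 303.0]
u_lev   = [3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
v_lev   = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0]

ri_crit = 0.25  # assumed critical bulk Richardson number
pbl_height = z_lev[-1]
for zi, th, ui, vi in zip(z_lev, theta_v, u_lev, v_lev):
    if bulk_richardson(theta_v[0], th, zi, ui, vi) > ri_crit:
        pbl_height = zi
        break
print(pbl_height)
```

With this synthetic profile the stable capping layer between 500 m and 800 m pushes the bulk Richardson number past the critical value, so the diagnosed PBL top is 800 m.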
Abstract:
This paper describes a fast and reliable method for redistributing a computational mesh in three dimensions which can generate a complex three-dimensional mesh without any problems due to mesh tangling. The method relies on a three-dimensional implementation of the parabolic Monge–Ampère (PMA) technique, for finding an optimally transported mesh. The method for implementing PMA is described in detail and applied to both static and dynamic mesh redistribution problems, studying both the convergence and the computational cost of the algorithm. The algorithm is applied to a series of problems of increasing complexity. In particular very regular meshes are generated to resolve real meteorological features (derived from a weather forecasting model covering the UK area) in grids with over 2×10⁷ degrees of freedom. The PMA method computes these grids in times commensurate with those required for operational weather forecasting.
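A one-dimensional analogue of monitor-driven mesh redistribution helps fix ideas: equidistribute a monitor function so every cell carries equal "mass", concentrating nodes where the monitor is large. The PMA method solves the multi-dimensional generalisation of this via a Monge–Ampère equation; the toy below just inverts the cumulative monitor integral:

```python
# 1D analogue of monitor-function mesh redistribution: place nodes so
# the integral of the monitor function is equal across cells.
import numpy as np

def redistribute(n_cells, monitor, x_lo=0.0, x_hi=1.0, n_fine=10_000):
    """Equidistribute `monitor` over [x_lo, x_hi] into n_cells cells."""
    xf = np.linspace(x_lo, x_hi, n_fine)
    m = monitor(xf)
    # cumulative monitor integral via the trapezoidal rule
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(xf))))
    targets = np.linspace(0.0, cum[-1], n_cells + 1)
    # invert the cumulative integral to get the new node positions
    return np.interp(targets, cum, xf)

# monitor concentrating resolution near a "front" at x = 0.5
mesh = redistribute(8, lambda x: 1.0 + 20.0 * np.exp(-200.0 * (x - 0.5) ** 2))
print(np.round(mesh, 3))
```

The resulting mesh clusters nodes tightly around x = 0.5 while remaining monotone, i.e. free of the tangling the abstract mentions.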
Abstract:
Flood forecasting increasingly relies on numerical weather prediction forecasts to achieve longer lead times. One of the key difficulties emerging in constructing a decision framework for these flood forecasts is what to do when consecutive forecasts are so different that they lead to different conclusions regarding the issuing of warnings or the triggering of other action. In this opinion paper we explore some of the issues surrounding such forecast inconsistency (also known as "jumpiness", "turning points", "continuity" or the number of "swings"): we define forecast inconsistency, discuss the reasons why forecasts might be inconsistent, and consider how we should analyse inconsistency, what we should do about it, how we should communicate it, and whether it is a totally undesirable property. The property of consistency is increasingly emerging as a hot topic in many forecasting environments.
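One simple way to quantify the jumpiness discussed above, sketched with illustrative numbers: count how often successive forecasts for the same event flip across the warning threshold.

```python
# A simple forecast-inconsistency ("jumpiness") measure: the number of
# times consecutive forecasts for one event flip across a warning
# threshold. Discharge values below are illustrative.
def flip_count(forecasts, threshold):
    """Count warn/no-warn changes between consecutive forecasts."""
    warnings = [f >= threshold for f in forecasts]
    return sum(a != b for a, b in zip(warnings, warnings[1:]))

# successive forecasts of peak discharge (m^3/s) for one flood event
runs = [320.0, 410.0, 300.0, 390.0, 405.0, 415.0]
print(flip_count(runs, threshold=350.0))
```

Here the forecast sequence crosses the threshold three times before settling, exactly the kind of behaviour that complicates a warn/no-warn decision framework.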
Abstract:
Mathematical relationships between Scoring Parameters can be used in Economic Scoring Formulas (ESF) in tendering to distribute the score among bidders in the economic part of a proposal. Each contracting authority must set an ESF when publishing tender specifications, and the strategy of each bidder will differ depending on the ESF selected and its weight in the overall proposal scoring. This paper introduces the various mathematical relationships and density distributions that describe and inter-relate not only the main Scoring Parameters but also the main Forecasting Parameters in any capped tender (one whose price is upper-limited). Forecasting Parameters, as variables that can be known in advance before the deadline of a tender is reached, together with Scoring Parameters constitute the basis of a future Bid Tender Forecasting Model.
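One common family of Economic Scoring Formulas for capped tenders is a linear rule that awards the maximum score to the lowest bid and scales other bids by their discount relative to the cap. This specific formula is an illustration, not necessarily one of those analysed in the paper:

```python
# Illustrative linear Economic Scoring Formula (ESF) for a capped tender:
# the score falls linearly from max_score at the best (lowest) bid to 0
# at the cap price. This is one common ESF family, used here as a sketch.
def linear_score(bid, best_bid, cap_price, max_score=100.0):
    """Linear score between the best bid (max_score) and the cap (0)."""
    if cap_price == best_bid:
        return max_score
    return max_score * (cap_price - bid) / (cap_price - best_bid)

cap = 1_000_000.0  # hypothetical capped (maximum) tender price
bids = [820_000.0, 870_000.0, 950_000.0]
best = min(bids)
scores = [round(linear_score(b, best, cap), 1) for b in bids]
print(scores)
```

Under such a rule a bidder's optimal discount depends on the expected distribution of competitors' bids, which is where the Forecasting Parameters discussed above come in.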