Effects of temporal resolution of input precipitation on the performance of hydrological forecasting
Abstract:
Flood prediction systems rely on good-quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data scored better than the DP data when evaluated against the forecast for lead times up to 4 days. However, this advantage did not carry over directly to the hydrological modelling, where the models gave similar scores for simulated runoff with the two data sets. The flood forecasting study showed that both data sets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialisation of hydrological models can improve flood forecasting.
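The hit rates and false alarm rates (FAR) quoted above are standard contingency-table verification scores. As a minimal sketch of how such scores are computed for threshold-exceedance flood events (this is not the study's actual code; the function name, threshold and synthetic series are illustrative, and "false alarm rate" is shown in both of its common definitions):

```python
import numpy as np

def flood_event_scores(forecast, observed, threshold):
    """Hit rate and false-alarm scores for threshold-exceedance flood events.

    forecast, observed: 1-D arrays of simulated and observed discharge.
    threshold: discharge above which a flood event is declared.
    """
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)            # event forecast and observed
    false_alarms = np.sum(f & ~o)   # event forecast, none observed
    misses = np.sum(~f & o)         # event observed, none forecast
    correct_neg = np.sum(~f & ~o)   # no event forecast or observed

    hit_rate = hits / (hits + misses)                   # probability of detection
    far_ratio = false_alarms / (hits + false_alarms)    # false-alarm ratio
    pofd = false_alarms / (false_alarms + correct_neg)  # probability of false detection
    return hit_rate, far_ratio, pofd

# Synthetic usage only: compare a simulated discharge series against
# observations with an invented 200 m3/s flood threshold.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, size=1000)
sim = obs + rng.normal(0.0, 20.0, size=1000)
print(flood_event_scores(sim, obs, threshold=200.0))
```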
Abstract:
The dependence of the annual mean tropical precipitation on horizontal resolution is investigated in the atmospheric version of the Hadley Centre General Environment Model (HadGEM1). Reducing the grid spacing from about 350 km to 110 km improves the precipitation distribution in most of the tropics. In particular, characteristic dry biases over South and Southeast Asia including the Maritime Continent as well as wet biases over the western tropical oceans are reduced. The annual-mean precipitation bias is reduced by about one third over the Maritime Continent and the neighbouring ocean basins associated with it via the Walker circulation. Sensitivity experiments show that much of the improvement with resolution in the Maritime Continent region is due to the specification of better resolved surface boundary conditions (land fraction, soil and vegetation parameters) at the higher resolution. It is shown that in particular the formulation of the coastal tiling scheme may cause resolution sensitivity of the mean simulated climate. The improvement in the tropical mean precipitation in this region is not primarily associated with the better representation of orography at the higher resolution, nor with changes in the eddy transport of moisture. Sizeable sensitivity to changes in the surface fields may be one of the reasons for the large variation of the mean tropical precipitation distribution seen across climate models.
Abstract:
Flooding is a particular hazard in urban areas worldwide due to the increased risks to life and property in these regions. Synthetic Aperture Radar (SAR) sensors are often used to image flooding because of their all-weather day-night capability, and now possess sufficient resolution to image urban flooding. The flood extents extracted from the images may be used for flood relief management and improved urban flood inundation modelling. A difficulty with using SAR for urban flood detection is that, due to its side-looking nature, substantial areas of urban ground surface may not be visible to the SAR due to radar layover and shadow caused by buildings and taller vegetation. This paper investigates whether urban flooding can be detected in layover regions (where flooding may not normally be apparent) using double scattering between the (possibly flooded) ground surface and the walls of adjacent buildings. The method estimates double scattering strengths using a SAR image in conjunction with a high resolution LiDAR (Light Detection and Ranging) height map of the urban area. A SAR simulator is applied to the LiDAR data to generate maps of layover and shadow, and estimate the positions of double scattering curves in the SAR image. Observations of double scattering strengths were compared to the predictions from an electromagnetic scattering model, for both the case of a single image containing flooding, and a change detection case in which the flooded image was compared to an un-flooded image of the same area acquired with the same radar parameters. The method proved successful in detecting double scattering due to flooding in the single-image case, for which flooded double scattering curves were detected with 100% classification accuracy (albeit using a small sample set) and un-flooded curves with 91% classification accuracy. The same measures of success were achieved using change detection between flooded and un-flooded images. Depending on the particular flooding situation, the method could lead to improved detection of flooding in urban areas.
Abstract:
Considerable effort is presently being devoted to producing high-resolution sea surface temperature (SST) analyses, with a goal of spatial grid resolutions as fine as 1 km. Because grid resolution is not the same as feature resolution, a method is needed to objectively determine the resolution capability and accuracy of SST analysis products. Ocean model SST fields are used in this study as simulated “true” SST data and subsampled based on actual infrared and microwave satellite data coverage. The subsampled data are used to simulate sampling errors due to missing data. Two different SST analyses are considered and run using both the full and the subsampled model SST fields, with and without additional noise. The results are compared as a function of the spatial scales of variability using wavenumber auto- and cross-spectral analysis. The spectral variance at high wavenumbers (smallest wavelengths) is shown to be attenuated relative to the true SST because of smoothing that is inherent to both analysis procedures. Comparisons of the two analyses (both having roughly the same grid size) show important differences. One analysis tends to reproduce small-scale features more accurately when the high-resolution data coverage is good but produces more spurious small-scale noise when the high-resolution data coverage is poor. Analysis procedures can thus generate small-scale features with and without data, but the small-scale features in an SST analysis may be just noise when high-resolution data are sparse. Users must therefore be skeptical of high-resolution SST products, especially in regions where high-resolution (~5 km) infrared satellite data are limited because of cloud cover.
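The wavenumber spectral comparison described above can be illustrated with a minimal sketch: compute one-dimensional power spectra of the “true” and analysed fields along grid rows and take their ratio, which falls below 1 at high wavenumbers where the analysis smooths out small-scale variance. The smoothing kernel, grid spacing and synthetic fields below are illustrative assumptions, not the study's method.

```python
import numpy as np

def wavenumber_spectrum(field, dx):
    """Mean 1-D power spectrum over the rows of a 2-D field.

    field: 2-D array whose rows are treated as transects (e.g. SST along
    latitude lines); dx: grid spacing in km.
    Returns wavenumbers (cycles/km) and the row-averaged spectral power.
    """
    field = field - field.mean(axis=1, keepdims=True)  # remove row means
    n = field.shape[1]
    spec = np.abs(np.fft.rfft(field, axis=1)) ** 2 / n
    k = np.fft.rfftfreq(n, d=dx)
    return k, spec.mean(axis=0)

# Illustrative: smoothing attenuates the analysis relative to the "true"
# field, which shows up as a spectral ratio < 1 at high wavenumbers.
rng = np.random.default_rng(1)
truth = rng.standard_normal((64, 256))
kernel = np.ones(5) / 5.0  # simple 5-point running mean as a stand-in
analysis = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, truth)

k, p_true = wavenumber_spectrum(truth, dx=5.0)
_, p_anal = wavenumber_spectrum(analysis, dx=5.0)
print((p_anal / p_true)[-5:])  # well below 1 at the smallest resolved scales
```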
Abstract:
We perform simulations of several convective events over the southern UK with the Met Office Unified Model (UM) at horizontal grid lengths ranging from 1.5 km to 200 m. Comparing the simulated storms on these days with observations from the Met Office rainfall radar network allows us to apply a statistical approach to evaluate the properties and evolution of the simulated storms over a range of conditions. Here we present results comparing storm morphology in the model and in reality, which show that the simulated storms become smaller as grid length decreases and that the grid length that best fits the observations changes with the size of the observed cells. We investigate the sensitivity of storm morphology in the model to the mixing length used in the subgrid turbulence scheme. As the subgrid mixing length is decreased, the number of small storms with high area-averaged rain rates increases. We show that by changing the mixing length we can produce a lower-resolution simulation with morphologies similar to those of a higher-resolution simulation.
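Storm-morphology statistics of this kind typically start by identifying contiguous rain cells above a rate threshold and measuring their areas. A minimal sketch, assuming 2-D rain-rate fields on a common grid (the 4 mm/h threshold and all names are illustrative, not the study's actual procedure):

```python
import numpy as np
from scipy import ndimage

def cell_areas(rain_rate, dx_km, thresh=4.0):
    """Areas (km^2) of contiguous rain cells at or above a rate threshold.

    rain_rate: 2-D precipitation-rate field (mm/h), e.g. model output or
    radar-derived rates regridded to a common grid; dx_km: grid spacing.
    """
    mask = rain_rate >= thresh
    labels, n_cells = ndimage.label(mask)                  # connected cells
    counts = ndimage.sum(mask, labels, index=np.arange(1, n_cells + 1))
    return np.asarray(counts) * dx_km ** 2                 # pixels -> area

# Synthetic usage: the cell-size distributions of two such fields can
# then be compared, as in the model-versus-radar statistics above.
rng = np.random.default_rng(3)
field = ndimage.gaussian_filter(rng.standard_normal((200, 200)), 4) * 10
print(np.sort(cell_areas(field, dx_km=1.5))[-5:])  # five largest cells
```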
Evaluation of high-resolution surface wind products at global and regional scales
Abstract:
High-resolution surface wind fields covering the global ocean, estimated from remotely sensed wind data and ECMWF wind analyses, have been available since 2005 with a spatial resolution of 0.25 degrees in longitude and latitude and a temporal resolution of 6 h. Their quality is investigated through comparisons with surface wind vectors from 190 buoys moored in various oceanic basins, from research vessels, and from QuikSCAT scatterometer data taken during 2005–2006. The NCEP/NCAR and NCDC blended wind products are also considered. The comparisons, performed during January–December 2005, show that speeds and directions compare well with in-situ observations, including those from moored buoys and ships, as well as with the remotely sensed data. The root-mean-squared differences of wind speed and direction for the new blended wind data are lower than 2 m/s and 30 degrees, respectively. These values are similar to those estimated in comparisons of hourly buoy measurements with QuikSCAT near-real-time retrievals. At the global scale, the new products compare well with the wind speed and wind vector components observed by QuikSCAT. No significant dependence on QuikSCAT wind speed or on the oceanic region considered is evident.
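The root-mean-squared (RMS) differences of wind speed and direction quoted above require care with direction, which is a circular quantity. A minimal sketch of one way to compute them from wind components (illustrative only; not the evaluation code used for these products):

```python
import numpy as np

def rms_speed_and_direction_diff(u1, v1, u2, v2):
    """RMS differences in speed (m/s) and direction (degrees) between
    two sets of wind vectors given as zonal/meridional components."""
    rms_speed = np.sqrt(np.mean((np.hypot(u1, v1) - np.hypot(u2, v2)) ** 2))

    # Direction as an angle clockwise from north; the chosen convention
    # cancels in the difference, which is wrapped to [-180, 180) so that
    # 350 deg vs 10 deg counts as a 20 deg difference, not 340 deg.
    dir1 = np.degrees(np.arctan2(u1, v1))
    dir2 = np.degrees(np.arctan2(u2, v2))
    ddir = (dir1 - dir2 + 180.0) % 360.0 - 180.0
    rms_dir = np.sqrt(np.mean(ddir ** 2))
    return rms_speed, rms_dir

# Synthetic usage, checked against the 2 m/s and 30 degree thresholds
# quoted in the abstract (all numbers invented for illustration).
rng = np.random.default_rng(2)
u_b, v_b = rng.normal(0, 5, 500), rng.normal(0, 5, 500)
u_a, v_a = u_b + rng.normal(0, 1, 500), v_b + rng.normal(0, 1, 500)
print(rms_speed_and_direction_diff(u_b, v_b, u_a, v_a))
```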
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to look at the small-scale dynamical processes for specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks for the optimum configuration to obtain a distribution and intensity of the precipitation field that match observations. It uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions provided by the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initialised at varying times before the peak precipitation is observed, to test the importance of the initialisation and boundary conditions and of how long the simulation can be run. The results are verified against rain-gauge data and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded data sets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
Abstract:
In plankton ecology, it is a fundamental question how such a large number of competing phytoplankton species coexist in marine ecosystems under a seemingly limited variety of resources. This evergreen question was first posed by Hutchinson [Hutchinson, G.E., 1961. The paradox of the plankton. Am. Nat. 95, 137–145] as ‘the paradox of the plankton’. Starting from Hutchinson (1961), over more than four decades several investigators have put forward a variety of mechanisms for the extreme diversity of phytoplankton species. In this article, within the bounds of our knowledge, we review the literature on the proposed solutions and give a brief overview of the mechanisms put forward so far. The mechanisms we discuss mainly include spatial and temporal heterogeneity in the physical and biological environment; externally imposed or self-generated spatial segregation; horizontal mesoscale ocean turbulence characterized by coherent vortices; oscillation and chaos generated by several internal and external causes; stable coexistence and compensatory dynamics under fluctuating temperature in resource competition; and, finally, the role of toxin-producing phytoplankton in maintaining the coexistence and biodiversity of the overall plankton population, which we have proposed recently. We find that, although the different mechanisms proposed so far are potentially applicable to specific ecosystems, a universally accepted theory for explaining plankton diversity in natural waters remains an unachieved goal.
Abstract:
Data assimilation (DA) systems are evolving to meet the demands of convection-permitting models in the field of weather forecasting. On 19 April 2013 a special interest group meeting of the Royal Meteorological Society brought together UK researchers looking at different aspects of the data assimilation problem at high resolution, from theory to applications, and researchers creating our future high resolution observational networks. The meeting was chaired by Dr Sarah Dance of the University of Reading and Dr Cristina Charlton-Perez from the MetOffice@Reading. The purpose of the meeting was to help define the current state of high resolution data assimilation in the UK. The workshop assembled three main types of scientists: observational network specialists, operational numerical weather prediction researchers and those developing the fundamental mathematical theory behind data assimilation and the underlying models. These three working areas are intrinsically linked; therefore, a holistic view must be taken when discussing the potential to make advances in high resolution data assimilation.
Abstract:
This study assesses the influence of the El Niño–Southern Oscillation (ENSO) on global tropical cyclone activity using a 150-yr-long integration with a high-resolution coupled atmosphere–ocean general circulation model [the High-Resolution Global Environmental Model (HiGEM); with N144 resolution: ~90 km in the atmosphere and ~40 km in the ocean]. Tropical cyclone activity is compared to an atmosphere-only simulation using the atmospheric component of HiGEM (HiGAM). Observations of tropical cyclones in the International Best Track Archive for Climate Stewardship (IBTrACS) and tropical cyclones identified in the Interim ECMWF Re-Analysis (ERA-Interim) are used to validate the models. Composite anomalies of tropical cyclone activity in El Niño and La Niña years are used. HiGEM is able to capture the shift in tropical cyclone locations in response to ENSO in the Pacific and Indian Oceans. However, HiGEM does not capture the expected ENSO–tropical cyclone teleconnection in the North Atlantic. HiGAM shows more skill in simulating the global ENSO–tropical cyclone teleconnection; however, variability in the Pacific is too pronounced. HiGAM is able to capture the ENSO–tropical cyclone teleconnection in the North Atlantic more accurately than HiGEM. An investigation into the large-scale environmental conditions known to influence tropical cyclone activity is used to further understand the response of tropical cyclone activity to ENSO in the North Atlantic and western North Pacific. The vertical wind shear response over the Caribbean is not captured in HiGEM compared to HiGAM and ERA-Interim. Biases in the mean ascent at 500 hPa in HiGEM remain in HiGAM over the western North Pacific; however, a more realistic low-level vorticity in HiGAM results in a more accurate ENSO–tropical cyclone teleconnection.
Abstract:
This paper outlines the results of a programme of radiocarbon dating and Bayesian modelling relating to an Early Bronze Age barrow cemetery at Over, Cambridgeshire. In total, 43 dates were obtained, enabling the first high-resolution independent chronology (relating to both burial and architectural events) to be constructed for a site of this kind. The results suggest that the three main turf-mound barrows were probably constructed and used successively rather than simultaneously, that the shift from inhumation to cremation seen on the site was not a straightforward progression, and that the four main ‘types’ of cremation burial in evidence were used throughout the life of the site. Overall, variability in burial practice appears to have been a key feature of the site. The paper also considers the light that this fine-grained chronology can shed on recent, much wider discussions of memory and time within Early Bronze Age barrows.
Abstract:
The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project, using PRACE (Partnership for Advanced Computing in Europe) resources, constructed and ran an ensemble of atmosphere-only global climate model simulations using the Met Office Unified Model GA3 configuration. Each simulation is 27 years in length, for both the present climate and an end-of-century future climate, at resolutions of N96 (130 km), N216 (60 km) and N512 (25 km), in order to study the impact of model resolution on high-impact climate features such as tropical cyclones. Increased model resolution is found to improve the simulated frequency of explicitly tracked tropical cyclones, and correlations of interannual variability in the North Atlantic and North West Pacific lie between 0.6 and 0.75. Improvements in the deficit of genesis in the eastern North Atlantic as resolution increases appear to be related to the representation of African Easterly Waves and the African Easterly Jet. However, the intensity of the modelled tropical cyclones, as measured by 10 m wind speed, remains weak, and there is no indication of convergence over this range of resolutions. In the future climate ensemble, there is a 50% reduction in the frequency of Southern Hemisphere tropical cyclones, while in the Northern Hemisphere there is a reduction in the North Atlantic and a shift in the Pacific, with peak intensities becoming more common in the Central Pacific. There is also a change in tropical cyclone intensities, with the future climate having fewer weak storms and proportionally more strong storms.
Abstract:
The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) fields in two types of experiments, using climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group who ran that model, using their own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, the intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.
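A tracking scheme of the kind referred to above typically locates candidate storm centres (for example, low-level vorticity maxima above a threshold) in each output time step and links nearby centres between steps. The sketch below is a deliberately minimal illustration of that idea, not any of the groups' actual schemes; the threshold, distance criterion and nearest-neighbour linking are simplifying assumptions (real trackers also enforce unique matches, warm-core and lifetime criteria).

```python
import numpy as np

def track_vorticity_maxima(vort, lats, lons, thresh=5e-5, max_step_deg=5.0):
    """Minimal feature tracker: link vorticity maxima between time steps.

    vort: array (time, lat, lon) of low-level relative vorticity (1/s).
    Candidate centres are interior grid points exceeding `thresh` that are
    local maxima; centres in consecutive steps closer than `max_step_deg`
    (approximate degrees, longitude scaled by cos(lat)) are joined.
    """
    tracks = []
    prev = []  # (track index, lat, lon) tuples from the previous step
    for t in range(vort.shape[0]):
        field = vort[t]
        core = field[1:-1, 1:-1]
        ismax = (
            (core > thresh)
            & (core >= field[:-2, 1:-1]) & (core >= field[2:, 1:-1])
            & (core >= field[1:-1, :-2]) & (core >= field[1:-1, 2:])
        )
        ii, jj = np.nonzero(ismax)
        current = []
        for lat, lon in ((lats[i + 1], lons[j + 1]) for i, j in zip(ii, jj)):
            # attach to the nearest previous centre, else start a new track
            best, best_d = None, max_step_deg
            for k, plat, plon in prev:
                d = np.hypot(lat - plat, (lon - plon) * np.cos(np.radians(lat)))
                if d < best_d:
                    best, best_d = k, d
            if best is None:
                tracks.append([])
                best = len(tracks) - 1
            tracks[best].append((t, lat, lon))
            current.append((best, lat, lon))
        prev = current
    return tracks

# Synthetic usage: a single vorticity maximum drifting westward is
# recovered as one track of six (time, lat, lon) points.
ny, nx, nt = 40, 80, 6
lats, lons = np.linspace(0, 39, ny), np.linspace(0, 79, nx)
vort = np.zeros((nt, ny, nx))
for t in range(nt):
    vort[t, 20, 60 - 3 * t] = 1e-4
print(track_vorticity_maxima(vort, lats, lons))
```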
Abstract:
The extent of the surface area that is sunlit is critical for radiative energy exchanges and therefore for a wide range of applications that require urban land surface models (ULSMs), ranging from human comfort to weather forecasting. Here a computationally demanding shadow-casting algorithm is used to assess the capability of a simple single-layer urban canopy model, which assumes an infinitely long rotating canyon (ILC), to reproduce sunlit areas on roofs and roads over central London. Results indicate that the sunlit road areas are well represented but somewhat smaller using an ILC, while sunlit roof areas are consistently larger, especially for dense urban areas. The largest deviations from real-world sunlit areas are found for roofs during mornings and evenings. There are also indications that sunlit fractions on walls are overestimated using an ILC during mornings and evenings. The implications of these errors depend on the application targeted. For example, (independent of albedo) ULSMs used in numerical weather prediction applying an ILC representation of the urban form will overestimate outgoing shortwave radiation from roofs, owing to the overestimation of the sunlit fraction of the roofs. Complications of deriving height-to-width ratios from real-world data are also discussed.
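For the ILC geometry, the sunlit fraction of the road follows from simple canyon geometry: the wall shadow cast on the floor has a relative width of (h/w)·tan(solar zenith)·|sin(azimuth difference)|, capped at 1. The sketch below implements that textbook relation as an illustration; it is not necessarily the exact formulation used in the study.

```python
import numpy as np

def sunlit_road_fraction(h_over_w, zenith_deg, sun_az_deg, canyon_az_deg):
    """Sunlit fraction of the road in an infinitely long urban canyon.

    h_over_w: building height / canyon width ratio.
    zenith_deg: solar zenith angle.
    sun_az_deg, canyon_az_deg: solar azimuth and canyon axis orientation.
    The wall shadow on the floor has relative width
    (h/w) * tan(zenith) * |sin(azimuth difference)|, capped at 1.
    """
    dphi = np.radians(sun_az_deg - canyon_az_deg)
    shadow = h_over_w * np.tan(np.radians(zenith_deg)) * np.abs(np.sin(dphi))
    return 1.0 - np.minimum(shadow, 1.0)

# Illustrative: a dense canyon (h/w = 1) at low sun (zenith 70 deg) has a
# fully shaded road unless the sun is nearly aligned with the canyon axis.
for az in (0.0, 30.0, 90.0):
    print(az, sunlit_road_fraction(1.0, 70.0, az, 0.0))
```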