790 results for Datasets
Effects of temporal resolution of input precipitation on the performance of hydrological forecasting
Abstract:
Flood prediction systems rely on good-quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the period 1 October 2006–31 December 2009. The HP and DP datasets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data scored better than the DP data when evaluated against the forecasts for lead times up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two datasets. The flood forecasting study showed that both datasets gave similar hit rates, whereas the HP dataset gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialisation of hydrological models can improve flood forecasting.
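The forecast skill quoted here reduces to a 2×2 contingency table of event exceedances. Below is a minimal sketch of the hit rate and false alarm ratio for threshold-defined flood events; the function name, the event definition, and the choice of FAR convention are illustrative, not taken from the paper.

```python
import numpy as np

def contingency_scores(obs, fcst, threshold):
    """Hit rate and false alarm ratio for threshold exceedances.

    obs, fcst : arrays of observed and forecast discharge (or rainfall).
    threshold : value above which a flood event is declared.
    Note: 'FAR' is sometimes defined instead against non-events, as
    false_alarms / (false_alarms + correct_negatives).
    """
    obs_event = np.asarray(obs) >= threshold
    fcst_event = np.asarray(fcst) >= threshold
    hits = np.sum(obs_event & fcst_event)
    misses = np.sum(obs_event & ~fcst_event)
    false_alarms = np.sum(~obs_event & fcst_event)
    hit_rate = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    return hit_rate, far
```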
Abstract:
We assess the robustness of previous findings on the determinants of terrorism. Using extreme bound analysis and the three most comprehensive terrorism datasets, and focusing on the three most commonly analyzed aspects of terrorist activity, i.e., location, victim, and perpetrator, we re-assess the effect of 65 proposed correlates. Evaluating around 13.4 million regressions, we find 18 variables to be robustly associated with the number of incidents occurring in a given country-year, 15 variables with attacks against citizens from a particular country in a given year, and six variables with attacks perpetrated by citizens of a particular country in a given year.
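Extreme bound analysis, as used here, re-estimates the regression of interest over many combinations of control variables and asks whether the focus coefficient keeps its sign across the extreme bounds. A hedged sketch of the idea follows; the variable names, the OLS specification, and the choice of three controls per model are illustrative, and the study itself evaluates millions of such regressions with its own specification.

```python
from itertools import combinations
import numpy as np
import statsmodels.api as sm

def extreme_bounds(y, X, focus, controls, k=3):
    """Leamer-style extreme bound analysis for one focus variable.

    y        : outcome (e.g. incident counts per country-year)
    X        : pandas DataFrame of candidate regressors
    focus    : name of the variable whose robustness is tested
    controls : names of the other candidate correlates
    Re-estimates OLS with the focus regressor plus every k-subset of
    controls; 'robust' if the extreme bounds (min/max of beta +/- 2*se
    across all models) share a sign.
    """
    lo, hi = np.inf, -np.inf
    for subset in combinations(controls, k):
        cols = [focus] + list(subset)
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        b, se = fit.params[focus], fit.bse[focus]
        lo, hi = min(lo, b - 2 * se), max(hi, b + 2 * se)
    return lo, hi, (lo > 0) or (hi < 0)  # True if bounds share a sign
```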
Abstract:
This chapter applies rigorous statistical analysis to existing datasets of medieval exchange rates quoted in merchants' letters sent from Barcelona, Bruges and Venice between 1380 and 1410, which survive in the archive of Francesco di Marco Datini of Prato. First, it tests the exchange rates for stationarity. Second, it uses regression analysis to examine the seasonality of exchange rates at the three financial centres and compares the results against contemporary descriptions by the merchant Giovanni di Antonio da Uzzano. Third, it tests for structural breaks in the exchange rate series.
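The first two steps named here map onto standard time-series tools: an augmented Dickey-Fuller test for stationarity and a regression on monthly dummies for seasonality. A minimal sketch, under the assumption that the exchange rate quotations sit in a pandas Series with a DatetimeIndex (names and setup are illustrative, not the chapter's actual code):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.tsa.stattools import adfuller

def seasonality_tests(rates):
    """rates: pandas Series of exchange rate quotations, DatetimeIndex."""
    # Augmented Dickey-Fuller test; H0 is a unit root (non-stationarity).
    adf_stat, p_value = adfuller(rates.dropna())[:2]
    # Seasonality via monthly dummies: rate ~ C(month).
    df = pd.DataFrame({"rate": rates.values, "month": rates.index.month})
    seasonal = smf.ols("rate ~ C(month)", data=df).fit()
    return p_value, seasonal.params
```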
Abstract:
This paper introduces and evaluates DryMOD, a dynamic water balance model of the key hydrological processes in drylands, based on free, public-domain datasets. The rainfall model of DryMOD makes optimal use of spatially disaggregated Tropical Rainfall Measuring Mission (TRMM) datasets to simulate hourly rainfall intensities at a spatial resolution of 1 km. Regional-scale applications of the model in seasonal catchments in Tunisia and Senegal characterize runoff and soil moisture distribution and dynamics in response to varying rainfall data inputs and soil properties. The results highlight the need for hourly rainfall simulation and for correcting TRMM 3B42 rainfall intensities for the fractional cover of rainfall (FCR). Without FCR correction and disaggregation to 1 km, TRMM 3B42-based rainfall intensities are too low to generate surface runoff and to induce substantial changes to soil moisture storage. The outcomes from the sensitivity analysis show that topsoil porosity is the most important soil property for simulation of runoff and soil moisture. Thus, we demonstrate the benefit of hydrological investigations at a scale for which reliable information on soil profile characteristics exists and which is sufficiently fine to account for their heterogeneity. Where such information is available, application of DryMOD can assist in the spatial and temporal planning of water harvesting according to runoff-generating areas and the runoff ratio, as well as in the optimization of agricultural activities based on realistic representation of soil moisture conditions.
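The fractional-cover correction can be illustrated simply: a grid-average TRMM rate spread over only part of a cell implies a higher conditional intensity over the raining fraction. The sketch below shows that principle only; it is not DryMOD's actual disaggregation scheme, and the function name is hypothetical.

```python
import numpy as np

def fcr_correct(grid_rate_mm_h, raining_fraction):
    """Convert a grid-average rain rate into the conditional intensity
    over the raining part of the cell.

    grid_rate_mm_h   : grid-box mean rain rate (mm/h), e.g. TRMM 3B42
    raining_fraction : fraction of the cell actually raining (0-1)
    """
    frac = np.clip(raining_fraction, 1e-3, 1.0)  # guard against divide-by-zero
    return np.asarray(grid_rate_mm_h) / frac
```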
Abstract:
Observations of Earth from space have been made for over 40 years and have contributed to advances in many aspects of climate science. However, attempts to exploit this wealth of data are often hampered by a lack of homogeneity and continuity and by insufficient understanding of the products and their uncertainties. There is, therefore, a need to reassess and reprocess satellite datasets to maximize their usefulness for climate science. The European Space Agency has responded to this need by establishing the Climate Change Initiative (CCI). The CCI will create new climate data records for (currently) 13 essential climate variables (ECVs) and make these open and easily accessible to all. Each ECV project works closely with users to produce time series from the available satellite observations relevant to users' needs. A climate modeling users' group provides a climate system perspective and a forum to bring the data and modeling communities together. This paper presents the CCI program. It outlines the program's benefits and presents approaches and challenges for each ECV project, covering clouds, aerosols, ozone, greenhouse gases, sea surface temperature, ocean color, sea level, sea ice, land cover, fire, glaciers, soil moisture, and ice sheets. It also discusses how the CCI approach may contribute to defining and shaping future developments in Earth observation for climate science.
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production, seasonal net ecosystem production, vegetation cover, composition and height, fire regime, and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
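The comparison of a model's score against the observation-mean and a bootstrap "random" model can be sketched with a simple normalised metric. The normalised mean error (NME) and resampling null below are illustrative stand-ins for the paper's specifically designed metrics, with hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(0)

def nme(obs, sim):
    """Normalised mean error: mean |sim - obs| scaled by the spread of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean(np.abs(sim - obs)) / np.mean(np.abs(obs - obs.mean()))

def benchmark(obs, sim, n_boot=1000):
    """Score a model against the observation-mean and a bootstrap null."""
    score = nme(obs, sim)
    # Predicting the observed mean everywhere gives NME = 1 by construction.
    mean_score = nme(obs, np.full_like(np.asarray(obs, float), np.mean(obs)))
    # 'Random' model: observations resampled with replacement.
    random_scores = [nme(obs, rng.choice(obs, size=len(obs)))
                     for _ in range(n_boot)]
    return score, mean_score, float(np.mean(random_scores))
```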
Abstract:
Tagging supports the retrieval and categorization of online content, depending on users' tag choices. A number of models of tagging behaviour have been proposed to identify factors considered to affect taggers, such as users' tagging history. In this paper, we use Semiotic Analysis and Activity Theory to study the effect the system designer has on tagging behaviour. The framework we use shows the components that comprise the tagging system and how they interact to direct tagging behaviour. We analysed two collaborative tagging systems, CiteULike and Delicious, by applying our framework to their components. Using datasets from both systems, we found that 35% of CiteULike users did not provide tags, compared to only 0.1% of Delicious users. This was directly linked to the type of tools used by the system designer to support tagging.
Abstract:
The Bollène-2002 Experiment was aimed at developing the use of a radar volume-scanning strategy for conducting radar rainfall estimations in the mountainous regions of France. A developmental radar processing system, called Traitements Régionalisés et Adaptatifs de Données Radar pour l’Hydrologie (Regionalized and Adaptive Radar Data Processing for Hydrological Applications), has been built and several algorithms were specifically produced as part of this project. These algorithms include 1) a clutter identification technique based on the pulse-to-pulse variability of reflectivity Z for noncoherent radar, 2) a coupled procedure for determining a rain partition between convective and widespread rainfall R and the associated normalized vertical profiles of reflectivity, and 3) a method for calculating reflectivity at ground level from reflectivities measured aloft. Several radar processing strategies, including nonadaptive, time-adaptive, and space–time-adaptive variants, have been implemented to assess the performance of these new algorithms. Reference rainfall data were derived from a careful analysis of rain gauge datasets furnished by the Cévennes–Vivarais Mediterranean Hydrometeorological Observatory. The assessment criteria for five intense and long-lasting Mediterranean rain events have proven that good quantitative precipitation estimates can be obtained from radar data alone within 100-km range by using well-sited, well-maintained radar systems and sophisticated, physically based data-processing systems. The basic requirements entail performing accurate electronic calibration and stability verification, determining the radar detection domain, achieving efficient clutter elimination, and capturing the vertical structure(s) of reflectivity for the target event. Radar performance was shown to depend on type of rainfall, with better results obtained with deep convective rain systems (Nash coefficients of roughly 0.90 for point radar–rain gauge comparisons at the event time step), as opposed to shallow convective and frontal rain systems (Nash coefficients in the 0.6–0.8 range). In comparison with time-adaptive strategies, the space–time-adaptive strategy yields a very significant reduction in the radar–rain gauge bias while the level of scatter remains basically unchanged. Because the Z–R relationships have not been optimized in this study, results are attributed to an improved processing of spatial variations in the vertical profile of reflectivity. The two main recommendations for future work consist of adapting the rain separation method for radar network operations and documenting Z–R relationships conditional on rainfall type.
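The "Nash coefficients" quoted for the point radar-rain gauge comparisons are Nash-Sutcliffe efficiencies, which can be computed for paired point values as below (the function name is illustrative):

```python
import numpy as np

def nash_sutcliffe(gauge, radar):
    """Nash-Sutcliffe efficiency of radar rainfall against gauge rainfall.

    1.0 is a perfect match; 0.0 means no more skill than always
    predicting the gauge mean; negative values mean less skill.
    """
    gauge = np.asarray(gauge, float)
    radar = np.asarray(radar, float)
    return 1.0 - np.sum((radar - gauge) ** 2) / np.sum((gauge - gauge.mean()) ** 2)
```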
Abstract:
Purpose – Price indices for commercial real estate markets are difficult to construct because assets are heterogeneous, spatially dispersed and infrequently traded. Appraisal-based indices are one response to these problems, but may understate volatility or fail to capture turning points in a timely manner. This paper estimates "transaction linked indices" for major European markets to see whether these offer a different perspective on market performance.
Design/methodology/approach – The assessed value method is used to construct the indices. This has recently been applied to commercial real estate datasets in the USA and UK. The underlying data comprise appraisals and sale prices for assets monitored by Investment Property Databank (IPD). The indices are compared to appraisal-based series for the countries concerned for Q4 2001 to Q4 2012.
Findings – Transaction linked indices show stronger growth and sharper declines over the course of the cycle, but they do not notably lead their appraisal-based counterparts. They are typically two to four times more volatile.
Research limitations/implications – Only country-level indicators can be constructed in many cases owing to low trading volumes in the period studied, and this same issue prevented sample selection bias from being analysed in depth.
Originality/value – Discussion of the utility of transaction-based price indicators is extended to European commercial real estate markets. The indicators offer alternative estimates of real estate market volatility that may be useful in asset allocation and risk modelling, including in a regulatory context.
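In outline, the assessed value method regresses log transaction price on the prior appraisal plus time dummies and rebuilds the index from the time-dummy coefficients. The sketch below is a stylised illustration of that outline; the column names are hypothetical and IPD's production methodology involves considerably more than this.

```python
import numpy as np
import statsmodels.formula.api as smf

def transaction_linked_index(sales):
    """Stylised assessed-value regression for a transaction linked index.

    sales : DataFrame with columns 'price' (sale price), 'appraisal'
            (prior appraised value) and 'quarter' (sale period label).
    """
    sales = sales.assign(lp=np.log(sales["price"]),
                         la=np.log(sales["appraisal"]))
    # Log price on log appraisal plus quarter dummies.
    fit = smf.ols("lp ~ la + C(quarter)", data=sales).fit()
    dummies = fit.params.filter(like="C(quarter)")
    # Index level per quarter, relative to the omitted base quarter.
    return np.exp(dummies)
```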
Abstract:
Global hydrographic and air–sea freshwater flux datasets are used to investigate ocean salinity changes over 1950–2010 in relation to surface freshwater flux. On multi-decadal timescales, surface salinity increases (decreases) in evaporation (precipitation) dominated regions, the Atlantic–Pacific salinity contrast increases, and the upper thermocline salinity maximum increases while the salinity minimum of intermediate waters decreases. Potential trends in E–P are examined for 1950–2010 (using two reanalyses) and 1979–2010 (using four reanalyses and two blended products). Large differences in the 1950–2010 E–P trend patterns are evident in several regions, particularly the North Atlantic. For 1979–2010 some coherency in the spatial change patterns is evident but there is still a large spread in trend magnitude and sign between the six E–P products. However, a robust pattern of increased E–P in the southern hemisphere subtropical gyres is seen in all products. There is also some evidence in the tropical Pacific for a link between the spatial change patterns of salinity and E–P associated with ENSO. The water cycle amplification rate over specific regions is subsequently inferred from the observed 3-D salinity change field using a salt conservation equation in variable isopycnal volumes, implicitly accounting for the migration of isopycnal surfaces. Inferred global changes of E–P over 1950–2010 amount to an increase of 1 ± 0.6 % in net evaporation across the subtropics and an increase of 4.2 ± 2 % in net precipitation across subpolar latitudes. Amplification rates are approximately doubled over 1979–2010, consistent with accelerated broad-scale warming but also coincident with much improved salinity sampling over the latter period.
Abstract:
This paper presents a neuroscience-inspired, information-theoretic approach to motion segmentation. Robust motion segmentation is a fundamental first stage in many surveillance tasks. As an alternative to widely adopted individual segmentation approaches, which are challenged in different ways by imagery exhibiting a wide range of environmental variation and irrelevant motion, this paper presents a new biologically inspired approach which computes the multivariate mutual information between multiple complementary motion segmentation outputs. Evaluation across a range of datasets and against competing segmentation methods demonstrates robust performance.
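One common formulation of multivariate mutual information for three sources is the interaction information, built from joint entropies. A sketch for three binary segmentation masks follows; the paper's exact formulation and estimator may differ, and the function names are illustrative.

```python
import numpy as np

def entropy(*masks):
    """Shannon entropy (bits) of the joint distribution of binary masks."""
    stacked = np.stack([np.asarray(m).ravel().astype(int) for m in masks])
    # Encode each pixel's combination of mask values as one joint symbol.
    codes = np.ravel_multi_index(stacked, (2,) * len(masks))
    p = np.bincount(codes, minlength=2 ** len(masks)) / codes.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def multivariate_mi(x, y, z):
    """Interaction information I(X;Y;Z) of three segmentation masks."""
    return (entropy(x) + entropy(y) + entropy(z)
            - entropy(x, y) - entropy(x, z) - entropy(y, z)
            + entropy(x, y, z))
```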
Abstract:
By the mid-1930s the major Hollywood studios had developed extensive networks of distribution subsidiaries across five continents. This article focuses on the operation of American film distributors in Australia – one of Hollywood's largest foreign markets. Drawing on two unique primary datasets, the article compares and investigates film distribution in Sydney's first-run and suburban-run markets. It finds that the subsidiaries of US film companies faced a greater liability of foreignness in the city centre market than in the suburban one. Our data support the argument that film audiences in local or suburban cinema markets were more receptive to Hollywood entertainment than those in metropolitan centres.
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to look at the small-scale dynamical processes for specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks at the optimum configuration for obtaining a precipitation field whose distribution and intensity match observations. This study uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions from the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initialised at varying times before the peak precipitation is observed to test the importance of the initialisation and boundary conditions, and how long the simulation can be run for. The results are verified against rain gauge data and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
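The verification difficulty with non-gridded observations comes from having to sample the model field at the gauge locations before intensities can be compared. A minimal nearest-grid-point matching sketch (the array layout and names are assumptions, not the study's verification code):

```python
import numpy as np

def match_to_gauges(model_precip, grid_lats, grid_lons, gauges):
    """Sample a model precipitation field at rain gauge locations.

    model_precip : 2-D array (lat, lon) of accumulated precipitation
    grid_lats    : 1-D array of grid latitudes
    grid_lons    : 1-D array of grid longitudes
    gauges       : list of (lat, lon, observed_mm) tuples
    """
    pairs = []
    for lat, lon, obs in gauges:
        i = np.abs(grid_lats - lat).argmin()  # nearest grid row
        j = np.abs(grid_lons - lon).argmin()  # nearest grid column
        pairs.append((model_precip[i, j], obs))
    model, obs = map(np.array, zip(*pairs))
    bias = model.mean() - obs.mean()  # positive: model too wet at the gauges
    return bias, model, obs
```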
Abstract:
Surface temperature is a key aspect of weather and climate, but the term may refer to different quantities that play interconnected roles and are observed by different means. In a community-based activity in June 2012, the EarthTemp Network brought together 55 researchers from five continents to improve the interaction between scientific communities who focus on surface temperature in particular domains, to exploit the strengths of different observing systems and to better meet the needs of different communities. The workshop identified key needs for progress towards meeting scientific and societal requirements for surface temperature understanding and information, which are presented in this community paper. A "whole-Earth" perspective is required with more integrated, collaborative approaches to observing and understanding Earth's various surface temperatures. It is necessary to build understanding of the relationships between different surface temperatures, where presently inadequate, and undertake large-scale systematic intercomparisons. Datasets need to be easier to obtain and exploit for a wide constituency of users, with the differences and complementarities communicated in readily understood terms, and realistic and consistent uncertainty information provided. Steps were also recommended to curate and make available data that are presently inaccessible, develop new observing systems and build capacities to accelerate progress in the accuracy and usability of surface temperature datasets.
Abstract:
African societies are dependent on rainfall for agricultural and other water-dependent activities, yet rainfall is extremely variable in both space and time, and recurring water shocks, such as drought, can have considerable social and economic impacts. To help improve our knowledge of the rainfall climate, we have constructed a 30-year (1983–2012), temporally consistent rainfall dataset for Africa known as TARCAT (TAMSAT African Rainfall Climatology And Time-series) using archived Meteosat thermal infra-red (TIR) imagery, calibrated against rain gauge records collated from numerous African agencies. TARCAT has been produced at 10-day (dekad) scale at a spatial resolution of 0.0375°. An intercomparison of TARCAT from 1983 to 2010 with six long-term precipitation datasets indicates that TARCAT replicates the spatial and seasonal rainfall patterns and interannual variability well, with correlation coefficients of 0.85 and 0.70 for the interannual variability of Africa-wide mean monthly rainfall against the Climate Research Unit (CRU) and Global Precipitation Climatology Centre (GPCC) gridded-gauge analyses, respectively. The design of the algorithm for drought monitoring leads TARCAT to underestimate the Africa-wide mean annual rainfall, on average by 0.37 mm day⁻¹ (21%) compared to the other datasets. As the TARCAT rainfall estimates are historically calibrated across large climatically homogeneous regions, the data can provide users with robust estimates of climate-related risk, even in regions where gauge records are inconsistent in time.
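The intercomparison statistics quoted here reduce to a correlation of interannual series and a mean bias between datasets. A small sketch of how such numbers are computed (the input arrangement, one Africa-wide mean value per year per dataset, is an assumption):

```python
import numpy as np

def compare_rainfall(tarcat, reference):
    """Interannual correlation and mean bias between two rainfall series.

    tarcat, reference : arrays of Africa-wide mean rainfall (mm/day),
    one value per year, aligned in time.
    """
    t = np.asarray(tarcat, float)
    r = np.asarray(reference, float)
    corr = np.corrcoef(t, r)[0, 1]
    bias = t.mean() - r.mean()        # mm/day; negative = underestimate
    pct = 100.0 * bias / r.mean()     # bias as a percentage of the reference
    return corr, bias, pct
```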