71 results for aggregated multicast


Relevance: 10.00%

Publisher:

Abstract:

Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications, from terrestrial vegetation monitoring to climate change modeling. This has led to substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship, with a stronger fit for the spatially aggregated NDVI products (R2 = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R2 = 0.5064). This relationship serves as a reference for evaluating differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions.
The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
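The two variogram diagnostics named above — the sill and a length-scale metric — can be illustrated with a minimal empirical-variogram sketch. Everything below is our own illustration: the synthetic NDVI transect, the 95%-of-sill rule used as a crude length-scale proxy, and all parameter values are assumptions, not the paper's method.

```python
import numpy as np

def empirical_variogram(values, max_lag):
    """Empirical semivariogram of a 1-D transect:
    gamma(h) = 0.5 * mean((z(x+h) - z(x))^2) for each integer lag h."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2)
                      for h in lags])
    return lags, gamma

def sill_and_length_scale(lags, gamma, frac=0.95):
    """Sill ~ plateau of gamma (tail average); length scale ~ first lag
    whose semivariance reaches frac * sill (a rough proxy only)."""
    sill = gamma[-max(1, len(gamma) // 4):].mean()
    reach = lags[np.argmax(gamma >= frac * sill)]
    return sill, reach

# Synthetic NDVI-like transect with ~20-pixel correlation length (illustrative)
rng = np.random.default_rng(0)
noise = rng.normal(size=2000)
kernel = np.exp(-np.arange(-60, 61) ** 2 / (2 * 20.0 ** 2))
ndvi = np.convolve(noise, kernel / kernel.sum(), mode="same")

lags, gamma = empirical_variogram(ndvi, max_lag=100)
sill, length_scale = sill_and_length_scale(lags, gamma)
```

Coarsening the transect (averaging blocks of pixels) and recomputing the sill would show the decay of spatial variability with pixel size that the study quantifies.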

Relevance: 10.00%

Publisher:

Abstract:

Flood prediction systems rely on good quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the time period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data set scored better than the DP data set when evaluated against the forecast for lead times up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two datasets. The flood forecasting study showed that both datasets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialisation of hydrological models can improve flood forecasting.
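The hit rate and false alarm ratio used in the flood case study reduce to a 2x2 contingency table of forecast versus observed threshold exceedances. A minimal sketch — the 6-hourly values and the event threshold are made up for illustration, not taken from the study:

```python
import numpy as np

def hit_rate_and_far(observed, forecast, threshold):
    """Contingency-table scores for exceedance events:
    hit rate = hits / (hits + misses)
    FAR      = false alarms / (false alarms + hits)  (false alarm ratio)"""
    obs_event = np.asarray(observed) >= threshold
    fc_event = np.asarray(forecast) >= threshold
    hits = np.sum(obs_event & fc_event)
    misses = np.sum(obs_event & ~fc_event)
    false_alarms = np.sum(~obs_event & fc_event)
    hit_rate = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (false_alarms + hits) if false_alarms + hits else np.nan
    return hit_rate, far

# toy 6-hourly precipitation series (mm) and a flood-relevant threshold
obs = np.array([0.0, 2.0, 12.0, 1.0, 15.0, 0.5, 9.0, 0.0])
fc = np.array([0.5, 3.0, 11.0, 8.0, 4.0, 0.2, 10.0, 0.1])
hr, far = hit_rate_and_far(obs, fc, threshold=8.0)
```

Here two of the three observed events are hit and one forecast event is a false alarm, giving a hit rate of 2/3 and a FAR of 1/3.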

Relevance: 10.00%

Publisher:

Abstract:

Providing probabilistic forecasts using Ensemble Prediction Systems has become increasingly popular in both the meteorological and hydrological communities. Compared to conventional deterministic forecasts, probabilistic forecasts may provide more reliable forecasts from a few hours to a number of days ahead, and hence are regarded as better tools for taking uncertainties into consideration and hedging against weather risks. It is essential to evaluate the performance of raw ensemble forecasts and their potential value in forecasting extreme hydro-meteorological events. This study evaluates ECMWF’s medium-range ensemble forecasts of precipitation over the period 2008/01/01-2012/09/30 on a selected mid-latitude large-scale river basin, the Huai river basin (ca. 270,000 km²) in central-east China. The evaluation unit is the sub-basin, so that forecast performance is considered in a hydrologically relevant way. The study finds that forecast performance varies with sub-basin properties, between flooding and non-flooding seasons, and with the forecast properties of aggregated time steps and lead times. Although the study does not evaluate any hydrological applications of the ensemble precipitation forecasts, its results have direct implications for hydrological forecasting should these ensemble precipitation forecasts be employed.

Relevance: 10.00%

Publisher:

Abstract:

The domestic (residential) sector accounts for 30% of the world’s energy consumption and hence plays a substantial role in energy management and CO2 emissions reduction efforts. Energy models have generally been developed to mitigate the impact of climate change and for the sustainable management and planning of energy resources. Although there are different models and model categories, they are generally categorised into top-down and bottom-up. Significantly, top-down models are based on aggregated data while bottom-up models are based on disaggregated data. These approaches create fundamental differences which have been the centre of debate since the 1970s, and which have led to noticeable discrepancies in results, prompting authors to argue that the models are of a more complementary than a substituting nature. As a result, emerging methods suggest the need either to integrate the two models (bottom-up and top-down), to combine aspects of two bottom-up models, or to upgrade top-down models to compensate for their documented limitations. Diverse schools of thought argue in favour of these integrations – currently known as hybrid models. In this paper, the complexities of identifying country-specific and/or generic domestic energy models and their applications in different countries are critically reviewed. The review makes evident that most of these methods have been adapted and used in the ‘western world’ with practically no such applications in Africa.

Relevance: 10.00%

Publisher:

Abstract:

Tropical deep convection exhibits a variety of levels of aggregation over a wide range of scales. Based on a multisatellite analysis, the present study shows at the mesoscale that different levels of aggregation are statistically associated with differing large-scale atmospheric states, despite similar convective intensity and large-scale forcings. The more aggregated the convection, the drier and less cloudy the atmosphere, the stronger the outgoing longwave radiation, and the lower the planetary albedo. This suggests that mesoscale convective aggregation has the potential to affect couplings between moisture and convection and between convection, radiation, and large-scale ascent. In so doing, aggregation may play a role in phenomena such as “hot spots” or the Madden-Julian Oscillation. These findings support the need for the representation of mesoscale organization in cumulus parameterizations; most parameterizations used in current climate models lack any such representation. The ability of a cloud system-resolving model to reproduce observed relationships suggests that such models may be useful to guide attempts at parameterizations of convective aggregation.

Relevance: 10.00%

Publisher:

Abstract:

Airborne dust affects the Earth's energy balance — an impact that is measured in terms of the implied change in net radiation (or radiative forcing, in W m⁻²) at the top of the atmosphere. There remains considerable uncertainty in the magnitude and sign of direct forcing by airborne dust under current climate. Much of this uncertainty stems from simplified assumptions about mineral dust-particle size, composition and shape, which are applied in remote sensing retrievals of dust characteristics and dust-cycle models. Improved estimates of direct radiative forcing by dust will require improved characterization of the spatial variability in particle characteristics to provide reliable information on dust optical properties. This includes constraints on: (1) particle-size distribution, including discrimination of particle subpopulations and quantification of the amount of dust in the sub-10 µm to <0.1 µm mass fraction; (2) particle composition, specifically the abundance of iron oxides, and whether particles consist of single or multi-mineral grains; (3) particle shape, including degree of sphericity and surface roughness, as a function of size and mineralogy; and (4) the degree to which dust particles are aggregated together. The use of techniques that measure the size, composition and shape of individual particles will provide a better basis for optical modelling.

Relevance: 10.00%

Publisher:

Abstract:

Much UK research and market practice on portfolio strategy and performance benchmarking relies on a sector‐geography subdivision of properties. Prior tests of the appropriateness of such divisions have generally relied on aggregated or hypothetical return data. However, the results found in aggregate may not hold when individual buildings are considered. This paper makes use of a dataset of individual UK property returns. A series of multivariate exploratory statistical techniques are utilised to test whether the return behaviour of individual properties conforms to their a priori grouping. The results suggest strongly that neither standard sector nor regional classifications provide a clear demarcation of individual building performance. This has important implications for both portfolio strategy and performance measurement and benchmarking. However, there do appear to be size and yield effects that help explain return behaviour at the property level.

Relevance: 10.00%

Publisher:

Abstract:

Energy storage is a potential alternative to conventional network reinforcement of the low voltage (LV) distribution network to ensure the grid’s infrastructure remains within its operating constraints. This paper presents a study on the control of such storage devices, owned by distribution network operators. A deterministic model predictive control (MPC) controller and a stochastic receding horizon controller (SRHC) are presented, where the objective is to achieve the greatest peak reduction in demand, for a given storage device specification, taking into account the high level of uncertainty in the prediction of LV demand. The algorithms presented in this paper are compared to a standard set-point controller and benchmarked against a control algorithm with a perfect forecast. A specific case study, using storage on the LV network, is presented, and the results of each algorithm are compared. A comprehensive analysis is then carried out simulating a large number of LV networks of varying numbers of households. The results show that the performance of each algorithm is dependent on the number of aggregated households. However, on a typical aggregation, the novel SRHC algorithm presented in this paper is shown to outperform each of the comparable storage control techniques.
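The deterministic-MPC idea — repeatedly solving a peak-minimisation problem over the remaining horizon and applying only the first storage action — can be sketched as a small linear program. Everything here is an illustrative assumption, not the paper's controller: the demand profile, the storage sizing, and the shrinking-horizon LP formulation are ours.

```python
import numpy as np
from scipy.optimize import linprog

def mpc_step(demand_forecast, soc, capacity, p_max):
    """One deterministic MPC step: choose battery power u over the remaining
    horizon to minimise the peak of (demand + u), return only the first
    action.  Positive u = charging, negative u = discharging."""
    H = len(demand_forecast)
    c = np.r_[np.zeros(H), 1.0]                  # minimise the peak variable p
    A, b = [], []
    for t in range(H):                           # demand_t + u_t <= p
        row = np.zeros(H + 1)
        row[t], row[H] = 1.0, -1.0
        A.append(row)
        b.append(-demand_forecast[t])
    lower = np.tril(np.ones((H, H)))             # cumulative energy bounds:
    A += list(np.c_[lower, np.zeros(H)])         #   soc + cumsum(u) <= capacity
    b += [capacity - soc] * H
    A += list(np.c_[-lower, np.zeros(H)])        #   soc + cumsum(u) >= 0
    b += [soc] * H
    bounds = [(-p_max, p_max)] * H + [(0, None)]
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
    return res.x[0]

# toy demand profile with an evening peak (illustrative numbers only)
demand = np.array([1.0, 1.2, 1.1, 2.5, 3.0, 2.8, 1.5, 1.0])
soc, capacity, p_max = 0.0, 2.0, 1.0
net = []
for t in range(len(demand)):
    u = mpc_step(demand[t:], soc, capacity, p_max)
    soc += u
    net.append(demand[t] + u)
```

With a perfect forecast this receding-horizon loop recovers the full-horizon optimum (here the peak drops from 3.0 to 2.1); the stochastic SRHC variant in the paper additionally has to handle forecast uncertainty.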

Relevance: 10.00%

Publisher:

Abstract:

The Distribution Network Operators’ (DNOs’) role is becoming more difficult as electric vehicles and electric heating penetrate the network, increasing demand. As a result, it becomes harder for the distribution network’s infrastructure to remain within its operating constraints. Energy storage is a potential alternative to conventional network reinforcement such as upgrading cables and transformers. The research presented in this paper shows that, due to the volatile nature of the LV network, the control approach used for energy storage has a significant impact on performance. This paper presents and compares control methodologies for energy storage where the objective is to achieve the greatest possible peak demand reduction across the day from a pre-specified storage device. The results presented show the benefits and detriments of specific types of control on a storage device connected to a single phase of an LV network, using aggregated demand profiles based on real smart meter data from individual homes. The research demonstrates an important relationship between how predictable an aggregation is and the best control methodology required to achieve the objective.

Relevance: 10.00%

Publisher:

Abstract:

This paper introduces an ontology-based knowledge model for knowledge management. This model can facilitate knowledge discovery, providing users with insight for decision making. The users requiring the insight normally play different roles, with different requirements, in an organisation. To meet these requirements, insights are created from purposely aggregated transnational data, which involves a semantic data integration process. In this paper, we present a knowledge management system capable of representing knowledge requirements in a domain context and enabling semantic data integration through ontology modelling. The knowledge domain context of the United Bible Societies is used to illustrate the features of the knowledge management capabilities.

Relevance: 10.00%

Publisher:

Abstract:

Sustainable Intensification (SI) of agriculture has recently received widespread political attention, both in the UK and internationally. The concept recognises the need to simultaneously raise yields, increase input use efficiency and reduce the negative environmental impacts of farming systems, in order to secure future food production and to use the limited resources for agriculture sustainably. The objective of this paper is to outline a policy-making tool to assess SI at the farm level. Based on the method introduced by Kuosmanen and Kortelainen (2005), we use an adapted Data Envelopment Analysis (DEA) to consider the substitution possibilities between economic value and environmental pressures generated by farming systems in an aggregated index of Eco-Efficiency. Farm-level data, specifically from General Cropping Farms (GCFs) in the East Anglian River Basin Catchment (EARBC), UK, were used as the basis for this analysis. The assignment of weights to environmental pressures through linear programming techniques, when optimising the relative Eco-Efficiency score, allows the identification of appropriate production technologies and practices (integrated pest management, conservation farming, precision agriculture, etc.) for each farm and therefore indicates specific improvements that can be undertaken towards SI. Results are used to suggest strategies for the integration of farming practices and environmental policies in the framework of SI of agriculture. Paths for improving the index of Eco-Efficiency, and therefore reducing environmental pressures, are also outlined.
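The weighting of environmental pressures via linear programming can be sketched with the standard ratio-DEA transformation in the spirit of Kuosmanen and Kortelainen (2005). The farm data below are invented and the formulation is our reading of the generic eco-efficiency LP, not the paper's exact model: for each farm, weights w >= 0 minimise the weighted pressure w·z_i subject to w·z_j >= v_j for every farm j, so no farm's ratio v_j / (w·z_j) can exceed 1.

```python
import numpy as np
from scipy.optimize import linprog

def eco_efficiency(values, pressures):
    """DEA-style eco-efficiency scores: value / weighted environmental
    pressure, with farm-specific weights chosen by an LP so that
    score_i = v_i / (w* . z_i) <= 1 for all farms."""
    values = np.asarray(values, float)
    Z = np.asarray(pressures, float)          # rows: farms, cols: pressures
    scores = []
    for i in range(len(values)):
        res = linprog(c=Z[i],                 # minimise w . z_i
                      A_ub=-Z, b_ub=-values,  # enforce Z w >= v (no ratio > 1)
                      bounds=[(0, None)] * Z.shape[1])
        scores.append(values[i] / res.fun)
    return np.array(scores)

# toy data: 4 farms, 2 environmental pressures (illustrative numbers only,
# e.g. nutrient surplus and pesticide use)
v = [100.0, 80.0, 60.0, 90.0]                 # economic value per farm
z = [[10.0, 5.0],
     [9.0, 6.0],
     [8.0, 8.0],
     [12.0, 4.0]]
scores = eco_efficiency(v, z)
```

Farms with a score of 1 lie on the eco-efficiency frontier; scores below 1 indicate the proportional pressure reduction achievable by moving to frontier practices.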

Relevance: 10.00%

Publisher:

Abstract:

During the last decades, several windstorm series hit Europe leading to large aggregated losses. Such storm series are examples of serial clustering of extreme cyclones, presenting a considerable risk for the insurance industry. Clustering of events and return periods of storm series for Germany are quantified based on potential losses using empirical models. Two reanalysis data sets and observations from German weather stations are considered for 30 winters. Histograms of events exceeding selected return levels (1-, 2- and 5-year) are derived. Return periods of historical storm series are estimated based on the Poisson and the negative binomial distributions. Over 4000 years of general circulation model (GCM) simulations forced with current climate conditions are analysed to provide a better assessment of historical return periods. Estimates differ between distributions, for example 40 to 65 years for the 1990 series. For such infrequent series, estimates obtained with the Poisson distribution clearly deviate from empirical data. The negative binomial distribution provides better estimates, even though a sensitivity to return level and data set is identified. The consideration of GCM data permits a substantial reduction of uncertainties. The present results support the importance of explicitly considering clustering of losses for an adequate risk assessment in economic applications.
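The contrast between Poisson and negative binomial return-period estimates can be sketched with method-of-moments fits to seasonal event counts. The 30-winter count record below is invented (overdispersed, i.e. clustered) and the fitting choices are our assumptions, not the paper's procedure:

```python
import numpy as np
from scipy import stats

def return_period(counts, k):
    """Return period (in winters) of seeing >= k loss events in one winter,
    under Poisson and negative binomial method-of-moments fits."""
    counts = np.asarray(counts, float)
    mean, var = counts.mean(), counts.var(ddof=1)
    # Poisson: single parameter lambda = mean
    p_pois = stats.poisson.sf(k - 1, mean)
    # Negative binomial (needs var > mean, i.e. overdispersed counts):
    # mean = n(1-p)/p, var = n(1-p)/p^2  ->  p = mean/var, n = mean*p/(1-p)
    p = mean / var
    n = mean * p / (1 - p)
    p_nbin = stats.nbinom.sf(k - 1, n, p)
    return 1.0 / p_pois, 1.0 / p_nbin

# toy record: storms exceeding a loss threshold in each of 30 winters
counts = [0, 0, 1, 0, 3, 0, 0, 2, 0, 0, 0, 4, 1, 0, 0,
          0, 2, 0, 0, 0, 5, 0, 1, 0, 0, 3, 0, 0, 0, 1]
rp_pois, rp_nbin = return_period(counts, k=4)
```

Because the counts are overdispersed, the negative binomial assigns a much shorter return period to a 4-event winter than the Poisson does — the same qualitative effect the paper reports for infrequent storm series.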

Relevance: 10.00%

Publisher:

Abstract:

With a rapidly increasing fraction of electricity generation being sourced from wind, extreme wind power generation events such as prolonged periods of low (or high) generation and ramps in generation, are a growing concern for the efficient and secure operation of national power systems. As extreme events occur infrequently, long and reliable meteorological records are required to accurately estimate their characteristics. Recent publications have begun to investigate the use of global meteorological “reanalysis” data sets for power system applications, many of which focus on long-term average statistics such as monthly-mean generation. Here we demonstrate that reanalysis data can also be used to estimate the frequency of relatively short-lived extreme events (including ramping on sub-daily time scales). Verification against 328 surface observation stations across the United Kingdom suggests that near-surface wind variability over spatiotemporal scales greater than around 300 km and 6 h can be faithfully reproduced using reanalysis, with no need for costly dynamical downscaling. A case study is presented in which a state-of-the-art, 33 year reanalysis data set (MERRA, from NASA-GMAO), is used to construct an hourly time series of nationally-aggregated wind power generation in Great Britain (GB), assuming a fixed, modern distribution of wind farms. The resultant generation estimates are highly correlated with recorded data from National Grid in the recent period, both for instantaneous hourly values and for variability over time intervals greater than around 6 h. This 33 year time series is then used to quantify the frequency with which different extreme GB-wide wind power generation events occur, as well as their seasonal and inter-annual variability. 
Several novel insights into the nature of extreme wind power generation events are described, including (i) that the number of prolonged low or high generation events is well approximated by a Poisson-like random process, and (ii) that whilst in general there is large seasonal variability, the magnitude of the most extreme ramps is similar in both summer and winter. An up-to-date version of the GB case study data as well as the underlying model are freely available for download from our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/.
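Counting "prolonged low generation" events of the kind described above amounts to run-length encoding of a thresholded time series. A minimal sketch — the hourly capacity factors, threshold and minimum duration are illustrative assumptions, not values from the study:

```python
import numpy as np

def prolonged_events(capacity_factor, threshold, min_hours):
    """Count events where generation stays below `threshold` for at least
    `min_hours` consecutive hours (run-length encoding of a boolean mask)."""
    below = np.asarray(capacity_factor) < threshold
    # pad with zeros so every run has both a rising and a falling edge
    edges = np.flatnonzero(np.diff(np.r_[0, below.astype(int), 0]))
    starts, ends = edges[::2], edges[1::2]    # paired run boundaries
    durations = ends - starts
    return int(np.sum(durations >= min_hours))

# toy hourly capacity-factor series (fractions of installed capacity)
cf = np.array([0.5, 0.05, 0.04, 0.03, 0.6, 0.02,
               0.7, 0.01, 0.02, 0.03, 0.04, 0.5])
n = prolonged_events(cf, threshold=0.1, min_hours=3)
```

Applied to a multi-decade reanalysis-derived generation series, the annual totals of such counts are what one would compare against a Poisson process.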

Relevance: 10.00%

Publisher:

Abstract:

There is an ongoing debate on the environmental effects of genetically modified crops, to which this paper aims to contribute. First, data on the environmental impacts of genetically modified (GM) and conventional crops are collected from peer-reviewed journals, and second, an analysis is conducted to examine which crop type is less harmful for the environment. Published data on environmental impacts are measured using an array of indicators, and their analysis requires normalisation and aggregation. Drawing on the composite indicators literature, this paper builds composite indicators to measure the impact of GM and conventional crops in three dimensions: (1) non-target key species richness, (2) pesticide use, and (3) aggregated environmental impact. The comparison between the three composite indicators for both crop types allows us to establish not only a ranking to elucidate which crop type is preferable for the environment, but also the probability that one crop type outperforms the other from an environmental perspective. Results show that GM crops tend to cause lower environmental impacts than conventional crops for the analysed indicators.
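The normalisation-and-aggregation step behind a composite indicator can be sketched with a min-max scheme and weighted averaging. The crop-system data, weights and benefit/cost orientation below are invented for illustration — the paper's actual indicators and weighting are not reproduced here:

```python
import numpy as np

def composite_indicator(X, weights, benefit):
    """Min-max normalise each indicator, then aggregate with weights.

    X: rows = crop systems, cols = indicators.  benefit[j] = True if higher
    raw values are better; otherwise the normalised score is inverted so
    that 1 is always 'best'."""
    X = np.asarray(X, float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    norm = (X - lo) / (hi - lo)
    norm = np.where(benefit, norm, 1.0 - norm)
    w = np.asarray(weights, float)
    return norm @ (w / w.sum())

# toy example: 3 crop systems x 3 indicators
# (species richness: higher is better; pesticide use and aggregated
# environmental impact: lower is better)
X = [[12.0, 2.0, 30.0],
     [9.0, 5.0, 55.0],
     [10.0, 4.0, 40.0]]
scores = composite_indicator(X, weights=[1, 1, 1],
                             benefit=np.array([True, False, False]))
```

Ranking systems by these scores — and resampling the underlying data — is one simple way to turn indicator tables into the outperformance probabilities the paper discusses.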

Relevance: 10.00%

Publisher:

Abstract:

Existing methods of dive analysis, developed for fully aquatic animals, tend to focus on the frequency of behaviors rather than transitions between them. They therefore do not account for the variability of behavior of semiaquatic animals and the switching between terrestrial and aquatic environments. This is the first study to use hidden Markov models (HMM) to divide dives of a semiaquatic animal into clusters and thus identify the environmental predictors of transition between behavioral modes. We used 18 existing data sets of the dives of 14 American mink (Neovison vison) fitted with time-depth recorders in lowland England. Using HMM, we identified 3 behavioral states (1, temporal cluster of dives; 2, more loosely aggregated diving within aquatic activity; and 3, terminal dive of a cluster or a single, isolated dive). Based on the higher than expected proportion of dives in State 1, we conclude that mink tend to dive in clusters. We found no relationship between temperature and the proportion of dives in each state or between temperature and the rate of transition between states, meaning that in our study area, mink are apparently not adopting different diving strategies at different temperatures. Transition analysis between states has shown that there is no correlation between ambient temperature and the likelihood of mink switching from one state to another, that is, changing foraging modes. The variables provided good discrimination and grouped well into consistent states, indicating promise for further application of HMM and other state transition analyses in studies of semiaquatic animals.
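Decoding a dive sequence into hidden behavioral states can be illustrated with a minimal Viterbi decoder for a discrete-emission HMM. The three states below loosely mirror the paper's labels, but the transition matrix, emission probabilities and observation symbols (short vs. long post-dive surface intervals) are entirely hypothetical:

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state path for a discrete-emission HMM.

    obs: observation symbols; log_pi: initial log probs; log_A[i, j]:
    transition i -> j; log_B[i, k]: emitting symbol k in state i."""
    n_states = len(log_pi)
    T = len(obs)
    delta = np.empty((T, n_states))          # best log prob ending in state j
    psi = np.zeros((T, n_states), dtype=int) # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# hypothetical 3-state, 2-symbol setup: states 0 = clustered diving,
# 1 = loose aggregation, 2 = terminal/isolated dive;
# symbols: 0 = short post-dive surface interval, 1 = long interval
pi = np.array([0.6, 0.3, 0.1])
A = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.9, 0.1],    # clustered dives -> mostly short intervals
              [0.5, 0.5],
              [0.1, 0.9]])   # isolated dives -> mostly long intervals
obs = np.array([0, 0, 0, 1, 0, 0, 1, 1])
path = viterbi(obs, np.log(pi), np.log(A), np.log(B))
```

With these numbers the decoder labels the early run of short intervals as clustered diving and the trailing long intervals as terminal/isolated dives; in practice the parameters would be estimated from data (e.g. by Baum-Welch) rather than fixed by hand.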