911 results for aggregated multicast


Relevance:

10.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against the state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.

Relevance:

10.00%

Publisher:

Abstract:

Low density lipoprotein (LDL) has recently been shown to be oxidised by iron within the lysosomes of macrophages, and this is a novel potential mechanism for LDL oxidation in atherosclerosis. Our aim was to characterise the chemical and physical changes induced in LDL by iron at lysosomal pH and to investigate the effects of iron chelators and α-tocopherol on this process. LDL was oxidised by iron at pH 4.5 and 37°C and its oxidation monitored by spectrophotometry and HPLC. LDL was oxidised effectively by FeSO4 (5-50 µM) and became highly aggregated at pH 4.5, but not at pH 7.4. Cholesteryl esters decreased and, after a pronounced lag, 7-ketocholesterol increased greatly. Total hydroperoxides (measured by tri-iodide assay) increased up to 24 h and then decreased only slowly. The lipid composition after 12 h at pH 4.5 and 37°C was similar to that of LDL oxidised by copper at pH 7.4 and 4°C, i.e. rich in hydroperoxides but low in oxysterols. Previously oxidised LDL aggregated rapidly and spontaneously at pH 4.5, but not at pH 7.4. Ferrous iron was much more effective than ferric iron at oxidising LDL when added after the oxidation was already underway. The iron chelators diethylenetriaminepentaacetic acid and, to a lesser extent, desferrioxamine inhibited LDL oxidation when added during its initial stages, but were unable to prevent LDL aggregating after it had been partially oxidised. Surprisingly, desferrioxamine increased the rate of LDL modification when added late in the oxidation process. α-Tocopherol enrichment of LDL initially increased the oxidation of LDL, but inhibited it later. The presence of oxidised and highly aggregated lipid within lysosomes has the potential to perturb the function of these organelles and to promote atherosclerosis.

Relevance:

10.00%

Publisher:

Abstract:

The issue of diversification in direct real estate investment portfolios has been widely studied in academic and practitioner literature. Most work, however, has been done using either partially aggregated data or data for small samples of individual properties. This paper reports results from tests of both risk reduction and diversification that use the records of 10,000+ UK properties tracked by Investment Property Databank. It provides, for the first time, robust estimates of the diversification gains attainable given the returns, risks and cross‐correlations across the individual properties available to fund managers. The results quantify the number of assets and amount of money needed to construct both ‘balanced’ and ‘specialist’ property portfolios by direct investment. Target numbers will vary according to the objectives of investors and the degree to which tracking error is tolerated. The top‐level results are consistent with previous work, showing that a large measure of risk reduction can be achieved with portfolios of 30–50 properties, but full diversification of specific risk can only be achieved in very large portfolios. However, the paper extends previous work by demonstrating on a single, large dataset the implications of different methods of calculating risk reduction, and also by showing more disaggregated results relevant to the construction of specialist, sector‐focussed funds.
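
To illustrate the risk-reduction mechanism the abstract quantifies, the following is a minimal sketch (not the paper's methodology and not IPD data) of how the volatility of an equally weighted portfolio falls as more properties are added, while a common systematic component keeps full diversification of specific risk out of reach. All return figures are synthetic and purely illustrative.

import numpy as np

def equal_weight_portfolio_risk(returns):
    """Volatility (standard deviation) of an equally weighted portfolio.
    `returns` has shape (n_periods, n_assets)."""
    n_assets = returns.shape[1]
    weights = np.full(n_assets, 1.0 / n_assets)
    cov = np.atleast_2d(np.cov(returns, rowvar=False))
    return float(np.sqrt(weights @ cov @ weights))

# Illustrative universe: a common market factor plus idiosyncratic noise,
# standing in for individual property total returns (synthetic, not IPD data).
rng = np.random.default_rng(0)
n_periods, n_universe = 120, 500
market = rng.normal(0.0, 0.02, size=(n_periods, 1))
universe = market + rng.normal(0.0, 0.06, size=(n_periods, n_universe))

for n in (1, 10, 30, 50, 200):
    picks = rng.choice(n_universe, size=n, replace=False)
    print(f"{n:4d} properties -> portfolio volatility {equal_weight_portfolio_risk(universe[:, picks]):.4f}")

A run of this sketch shows most of the volatility reduction arriving within the first few tens of assets, consistent with the 30-50 property finding, with only slow further gains thereafter because the systematic component cannot be diversified away.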

Relevance:

10.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against the state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
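
The abstract does not spell out the Epidemic K-Means protocol; the sketch below only illustrates the general idea of gossip (epidemic) averaging applied to K-Means: each node computes per-centroid sums and counts from its local data and repeatedly averages these statistics with randomly chosen peers, so every node's estimate approaches what a centralised algorithm would compute on the aggregated data. The function names, the push-pull averaging step, and the use of a single shared centroid set (rather than one copy per node) are simplifications for illustration, not the paper's algorithm.

import numpy as np

def local_kmeans_stats(data, centroids):
    """One local assignment step: per-centroid coordinate sums and point counts over this node's data."""
    k, dim = centroids.shape
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    labels = np.argmin(((data[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
    for j in range(k):
        members = data[labels == j]
        if len(members):
            sums[j] = members.sum(axis=0)
            counts[j] = len(members)
    return sums, counts

def gossip_kmeans(node_data, k=3, rounds=30, gossip_exchanges=5, seed=0):
    """Decentralised K-Means sketch: nodes keep (sum, count) statistics and repeatedly
    average them with random peers, so the ratio sum/count at any node approximates
    the global centroid update without any global communication."""
    rng = np.random.default_rng(seed)
    n_nodes = len(node_data)
    centroids = node_data[0][rng.choice(len(node_data[0]), k, replace=False)].copy()
    for _ in range(rounds):
        stats = [local_kmeans_stats(d, centroids) for d in node_data]
        sums = np.stack([s for s, _ in stats])
        counts = np.stack([c for _, c in stats])
        for _ in range(gossip_exchanges):            # push-pull epidemic averaging of the statistics
            a, b = rng.choice(n_nodes, 2, replace=False)
            sums[[a, b]] = (sums[a] + sums[b]) / 2.0
            counts[[a, b]] = (counts[a] + counts[b]) / 2.0
        node = rng.integers(n_nodes)                 # any node's estimate approximates the global mean
        centroids = sums[node] / np.maximum(counts[node], 1e-12)[:, None]
    return centroids

rng = np.random.default_rng(1)
blobs = [rng.normal(c, 0.3, size=(200, 2)) for c in ([0, 0], [3, 3], [0, 4])]
node_data = np.array_split(rng.permutation(np.vstack(blobs)), 10)   # data scattered over 10 nodes
print(gossip_kmeans(node_data))

In a genuinely decentralised deployment each node would hold its own centroid copy and exchange messages asynchronously; the averaging of (sum, count) pairs is the part that removes the need for a global reduction.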

Relevance:

10.00%

Publisher:

Abstract:

Treponema have been implicated recently in the pathogenesis of digital dermatitis (DID) and contagious ovine digital dermatitis (CODD), which are infectious diseases of bovine and ovine foot tissues, respectively. Previous analyses of treponemal 16S rDNA sequences, PCR-amplified directly from DID or CODD lesions, have suggested relatedness of animal Treponema to some human oral Treponema species isolated from periodontal tissues. In this study a range of adhesion and virulence-related properties of three animal Treponema isolates have been compared with representative human oral strains of Treponema denticola and Treponema vincentii. In adhesion assays using biotinylated treponemal cells, T. denticola cells bound in consistently higher numbers to fibronectin, laminin, collagen type 1, gelatin, keratin and lactoferrin than did T. vincentii or animal Treponema isolates. However, animal DID strains adhered to fibrinogen at equivalent or greater levels than T. denticola. All Treponema strains bound to the amino-terminal heparin I/fibrin I domain of fibronectin. 16S rDNA sequence analyses placed ovine strain UB1090 and bovine strain UB1467 within a cluster that was phylogenetically related to T. vincentii, while ovine strain UB1466 appeared more closely related to T. denticola. These observations correlated with phenotypic properties. Thus, T. denticola ATCC 35405, GM-1, and Treponema UB1466 had similar outer-membrane protein profiles, produced chymotrypsin-like protease (CTLP), trypsin-like protease and high levels of proline iminopeptidase, and co-aggregated with the human oral bacteria Porphyromonas gingivalis and Streptococcus crista. Conversely, T. vincentii ATCC 35580, D2A-2, and animal strains UB1090 and UB1467 did not express CTLP or trypsin-like protease and did not co-aggregate with P. gingivalis or S. crista. Taken collectively, these results suggest that human oral-related Treponema have broad host specificity and that similar control or preventive strategies might be developed for human and animal Treponema-associated infections.

Relevance:

10.00%

Publisher:

Abstract:

Bangladesh Rural Advancement Committee (BRAC), a non-governmental organisation (NGO), runs a large number of non-formal primary schools in Bangladesh which target out-of-school children from poor families. These schools are well known for their effectiveness in closing the gender gap in primary school enrolment. On the other hand, registered non-government secondary madrasas (or Islamic schools) today enrol one girl for every boy student. In this article, we document a positive spillover effect of BRAC schools on female secondary enrolment in registered madrasas. Drawing upon school enrolment data aggregated at the region level, we first show that regions that had more registered madrasas experienced greater secondary female enrolment growth during 1999–2003, holding the number of secular secondary schools constant. In this context we test the impact of BRAC-run primary schools on female enrolment in registered madrasas. We deal with the potential endogeneity of BRAC school placement using an instrumental variable approach. Controlling for factors such as local-level poverty, road access and distance from major cities, we show that regions with a greater presence of BRAC schools have higher female enrolment growth in secondary madrasas. The effect is much larger than that on secondary schools.
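
As a hedged illustration of the instrumental variable step mentioned above (not the paper's actual specification, instrument, or data), the following two-stage least squares sketch regresses a placeholder endogenous "BRAC presence" variable on a synthetic instrument plus controls, then uses the fitted values in the outcome regression. Variable names and coefficients are illustrative only.

import numpy as np

def two_stage_least_squares(y, x_endog, z, controls):
    """Minimal 2SLS: regress the endogenous regressor on instrument(s) and controls,
    then regress the outcome on the fitted values and the same controls."""
    n = len(y)
    exog = np.hstack([np.ones((n, 1)), controls])
    # First stage: endogenous regressor ~ instruments + controls
    first_stage = np.hstack([exog, z])
    fitted = first_stage @ np.linalg.lstsq(first_stage, x_endog, rcond=None)[0]
    # Second stage: outcome ~ fitted endogenous regressor + controls
    second_stage = np.hstack([exog, fitted])
    return np.linalg.lstsq(second_stage, y, rcond=None)[0]   # last coefficient = IV estimate

rng = np.random.default_rng(0)
n = 500
controls = rng.normal(size=(n, 2))               # stand-ins for local poverty and road access (illustrative)
z = rng.normal(size=(n, 1))                      # placeholder instrument for BRAC school placement
confound = rng.normal(size=(n, 1))               # unobserved factor driving both placement and enrolment
brac_presence = 0.8 * z + confound + rng.normal(scale=0.5, size=(n, 1))
enrol_growth = (0.5 * brac_presence + controls @ np.array([[0.3], [0.2]])
                + 2.0 * confound + rng.normal(scale=0.5, size=(n, 1)))
print("IV estimate of the BRAC effect:", two_stage_least_squares(enrol_growth, brac_presence, z, controls)[-1, 0])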

Relevance:

10.00%

Publisher:

Abstract:

Wine production is strongly affected by weather and climate and thus highly vulnerable to climate change. In Portugal, viticulture and wine production are an important economic activity. In the present study, current bioclimatic zoning in Portugal (1950–2000) and its projected changes under future climate conditions (2041–2070) are assessed through the analysis of an aggregated, categorized bioclimatic index (CatI) at a very high spatial resolution (near 1 km). CatI incorporates the most relevant bioclimatic characteristics of a given region, thus allowing the direct comparison between different regions. Future viticultural zoning is achieved using data from 13 climate model transient experiments following the A1B emission scenario. These data are downscaled using a two-step method of spatial pattern downscaling. This downscaling approach allows characterizing mesoclimatic influences on viticulture throughout Portugal. Results for the recent past depict the current spatial variability of Portuguese viticultural regions. Under future climate conditions, the current viticultural zoning is projected to undergo significant changes, which may represent important challenges for the Portuguese winemaking sector. The changes are quite robust across the different climate models. A lower bioclimatic diversity is also projected, resulting from a more homogeneous warm and dry climate in most of the wine regions. This will lead to changes in varietal suitability and wine characteristics of each region.

Relevance:

10.00%

Publisher:

Abstract:

It is widely accepted that some of the most accurate Value-at-Risk (VaR) estimates are based on an appropriately specified GARCH process. But when the forecast horizon is greater than the frequency of the GARCH model, such predictions have typically required time-consuming simulations of the aggregated returns distributions. This paper shows that fast, quasi-analytic GARCH VaR calculations can be based on new formulae for the first four moments of aggregated GARCH returns. Our extensive empirical study compares the Cornish–Fisher expansion with the Johnson SU distribution for fitting distributions to analytic moments of normal and Student t, symmetric and asymmetric (GJR) GARCH processes to returns data on different financial assets, for the purpose of deriving accurate GARCH VaR forecasts over multiple horizons and significance levels.
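
The quasi-analytic step can be illustrated with the Cornish–Fisher expansion: once the first four moments of the h-day aggregated return distribution are available (the paper derives these analytically for the GARCH processes), the VaR follows from an adjusted normal quantile. The moment values below are placeholders, not results from the paper.

from scipy.stats import norm

def cornish_fisher_var(mean, variance, skewness, excess_kurtosis, alpha=0.01):
    """VaR at level alpha from the first four moments of the aggregated return
    distribution, using the Cornish-Fisher adjusted quantile."""
    z = norm.ppf(alpha)
    z_cf = (z
            + (z**2 - 1) * skewness / 6
            + (z**3 - 3 * z) * excess_kurtosis / 24
            - (2 * z**3 - 5 * z) * skewness**2 / 36)
    return -(mean + variance**0.5 * z_cf)

# Placeholder moments for a 10-day aggregated GARCH return distribution (illustrative values)
print(cornish_fisher_var(mean=0.001, variance=0.0009, skewness=-0.6, excess_kurtosis=1.8, alpha=0.01))

The Johnson SU alternative compared in the paper instead fits a four-parameter distribution to the same analytic moments and reads the quantile from that fitted distribution.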

Relevance:

10.00%

Publisher:

Abstract:

Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship, with a stronger fit for the spatially aggregated NDVI products (R² = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R² = 0.5064). This relationship serves as a reference for evaluating the differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions. The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
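
As a sketch of the variogram quantities used above, the following computes an isotropic empirical semivariogram for a gridded NDVI-like field and fits an exponential model to recover a sill and a length scale. The synthetic field, the lag range and the exponential model choice are illustrative assumptions, not the study's processing chain.

import numpy as np
from scipy.ndimage import uniform_filter
from scipy.optimize import curve_fit

def empirical_variogram(field, max_lag):
    """Isotropic empirical semivariogram along rows and columns:
    gamma(h) = 0.5 * mean[(z(x) - z(x + h))^2] for integer pixel lags h."""
    lags = np.arange(1, max_lag + 1)
    gammas = []
    for h in lags:
        d_rows = field[h:, :] - field[:-h, :]
        d_cols = field[:, h:] - field[:, :-h]
        gammas.append(0.5 * np.mean(np.concatenate([d_rows.ravel(), d_cols.ravel()]) ** 2))
    return lags, np.array(gammas)

def exponential_model(h, sill, length_scale):
    # Exponential variogram model: approaches the sill at roughly 3 * length_scale
    return sill * (1.0 - np.exp(-h / length_scale))

# Illustrative spatially correlated NDVI-like field (synthetic, not an EOS validation site)
rng = np.random.default_rng(0)
field = uniform_filter(rng.normal(size=(200, 200)), size=15) + rng.normal(scale=0.01, size=(200, 200))

lags, gammas = empirical_variogram(field, max_lag=40)
(sill, length_scale), _ = curve_fit(exponential_model, lags, gammas, p0=[gammas.max(), 10.0])
print(f"sill = {sill:.5f}, mean length scale ~ {length_scale:.1f} pixels")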

Relevance:

10.00%

Publisher:

Abstract:

Flood prediction systems rely on good quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the time period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data scored better than the DP data when evaluated against the forecasts for lead times up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two datasets. The flood forecasting study showed that both datasets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initiation of hydrological models can improve flood forecasting.
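
The hit rate and FAR quoted above come from a standard 2x2 contingency table of forecast versus observed threshold exceedances. The minimal sketch below computes them from binary flood/no-flood flags; the abstract does not state whether FAR denotes the false-alarm ratio or the false-alarm rate, so both are returned, and the flag series are illustrative only.

import numpy as np

def contingency_scores(forecast_exceeds, observed_exceeds):
    """Hit rate and false-alarm measures from binary threshold-exceedance series."""
    f = np.asarray(forecast_exceeds, dtype=bool)
    o = np.asarray(observed_exceeds, dtype=bool)
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    correct_negatives = np.sum(~f & ~o)
    hit_rate = hits / max(hits + misses, 1)
    false_alarm_ratio = false_alarms / max(hits + false_alarms, 1)
    false_alarm_rate = false_alarms / max(false_alarms + correct_negatives, 1)
    return hit_rate, false_alarm_ratio, false_alarm_rate

# Illustrative exceedance flags (1 = discharge above a flood threshold)
forecast = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
observed = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]
print(contingency_scores(forecast, observed))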

Relevance:

10.00%

Publisher:

Abstract:

Providing probabilistic forecasts using Ensemble Prediction Systems has become increasingly popular in both the meteorological and hydrological communities. Compared to conventional deterministic forecasts, probabilistic forecasts may provide more reliable predictions from a few hours to a number of days ahead, and hence are regarded as better tools for taking uncertainties into consideration and hedging against weather risks. It is essential to evaluate the performance of raw ensemble forecasts and their potential value in forecasting extreme hydro-meteorological events. This study evaluates ECMWF's medium-range ensemble forecasts of precipitation over the period 2008/01/01–2012/09/30 on a selected mid-latitude large-scale river basin, the Huai river basin (ca. 270,000 km²) in central-east China. The evaluation unit is the sub-basin, in order to consider forecast performance in a hydrologically relevant way. The study finds that forecast performance varies with sub-basin properties, between flooding and non-flooding seasons, and with the forecast properties of aggregation time step and lead time. Although the study does not evaluate any hydrological applications of the ensemble precipitation forecasts, its results have direct implications for hydrological forecasting should these ensemble precipitation forecasts be employed in hydrology.
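
The abstract does not name the verification metrics used; as one hedged illustration, the sketch below computes the Brier score for exceedance of a precipitation threshold from sub-basin-mean ensemble forecasts. The 51-member size mirrors ECMWF's ensemble, but all data, the threshold and the aggregation are synthetic placeholders.

import numpy as np

def brier_score(ensemble_precip, observed_precip, threshold):
    """Brier score for ensemble forecasts of threshold exceedance.
    ensemble_precip: (n_forecasts, n_members) precipitation aggregated over a sub-basin
    and a multi-step window; observed_precip: (n_forecasts,)."""
    prob = np.mean(ensemble_precip > threshold, axis=1)    # exceedance probability from the ensemble
    outcome = (observed_precip > threshold).astype(float)
    return float(np.mean((prob - outcome) ** 2))

rng = np.random.default_rng(0)
n_forecasts, n_members = 400, 51
truth = rng.gamma(shape=2.0, scale=5.0, size=n_forecasts)   # illustrative basin-mean precipitation (mm)
ensemble = truth[:, None] * rng.lognormal(mean=0.0, sigma=0.4, size=(n_forecasts, n_members))
print(brier_score(ensemble, truth, threshold=20.0))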

Relevance:

10.00%

Publisher:

Abstract:

The domestic (residential) sector accounts for 30% of the world's energy consumption and hence plays a substantial role in energy management and CO2 emissions reduction efforts. Energy models have generally been developed to mitigate the impact of climate change and for the sustainable management and planning of energy resources. Although there are different models and model categories, they are generally categorised into top-down and bottom-up. Significantly, top-down models are based on aggregated data while bottom-up models are based on disaggregated data. These approaches create fundamental differences which have been the centre of debate since the 1970s. These differences have led to noticeable discrepancies in results, which has led authors to argue that the models are complementary rather than substituting in nature. As a result, emerging methods suggest the need either to integrate the two model types (bottom-up and top-down), to combine aspects of two bottom-up models, or to upgrade top-down models to compensate for the documented limitations. Diverse schools of thought argue in favour of these integrations – currently known as hybrid models. In this paper, the complexities of identifying country-specific and/or generic domestic energy models and their applications in different countries have been critically reviewed. Predominantly, the review makes evident that most of these methods have been adapted and used in the ‘western world’, with practically no such applications in Africa.

Relevance:

10.00%

Publisher:

Abstract:

Tropical deep convection exhibits a variety of levels of aggregation over a wide range of scales. Based on a multisatellite analysis, the present study shows at the mesoscale that different levels of aggregation are statistically associated with differing large-scale atmospheric states, despite similar convective intensity and large-scale forcings. The more aggregated the convection, the drier and less cloudy the atmosphere, the stronger the outgoing longwave radiation, and the lower the planetary albedo. This suggests that mesoscale convective aggregation has the potential to affect couplings between moisture and convection and between convection, radiation, and large-scale ascent. In so doing, aggregation may play a role in phenomena such as “hot spots” or the Madden-Julian Oscillation. These findings support the need for the representation of mesoscale organization in cumulus parameterizations; most parameterizations used in current climate models lack any such representation. The ability of a cloud system-resolving model to reproduce observed relationships suggests that such models may be useful to guide attempts at parameterizations of convective aggregation.

Relevance:

10.00%

Publisher:

Abstract:

Airborne dust affects the Earth's energy balance — an impact that is measured in terms of the implied change in net radiation (or radiative forcing, in W m⁻²) at the top of the atmosphere. There remains considerable uncertainty in the magnitude and sign of direct forcing by airborne dust under the current climate. Much of this uncertainty stems from simplified assumptions about mineral dust-particle size, composition and shape, which are applied in remote sensing retrievals of dust characteristics and in dust-cycle models. Improved estimates of direct radiative forcing by dust will require improved characterization of the spatial variability in particle characteristics to provide reliable information on dust optical properties. This includes constraints on: (1) particle-size distribution, including discrimination of particle subpopulations and quantification of the amount of dust in the sub-10 µm to <0.1 µm mass fraction; (2) particle composition, specifically the abundance of iron oxides, and whether particles consist of single or multi-mineral grains; (3) particle shape, including degree of sphericity and surface roughness, as a function of size and mineralogy; and (4) the degree to which dust particles are aggregated together. The use of techniques that measure the size, composition and shape of individual particles will provide a better basis for optical modelling.

Relevance:

10.00%

Publisher:

Abstract:

Much UK research and market practice on portfolio strategy and performance benchmarking relies on a sector‐geography subdivision of properties. Prior tests of the appropriateness of such divisions have generally relied on aggregated or hypothetical return data. However, the results found in aggregate may not hold when individual buildings are considered. This paper makes use of a dataset of individual UK property returns. A series of multivariate exploratory statistical techniques are utilised to test whether the return behaviour of individual properties conforms to their a priori grouping. The results suggest strongly that neither standard sector nor regional classifications provide a clear demarcation of individual building performance. This has important implications for both portfolio strategy and performance measurement and benchmarking. However, there do appear to be size and yield effects that help explain return behaviour at the property level.
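
The multivariate exploratory techniques are not named in the abstract; one hedged way to illustrate the kind of test described is to cluster individual property return series and compare the resulting groups with the a priori sector labels using the adjusted Rand index, where a value near zero means the a priori grouping does not demarcate return behaviour. The synthetic returns, sector structure and clustering choice below are illustrative assumptions, not the paper's method or data.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Illustrative annual return series for individual properties in three a priori sectors:
# a weak common sector component swamped by property-specific noise (synthetic data).
n_per_sector, n_years = 100, 15
sector_factor = rng.normal(scale=0.03, size=(3, n_years))
returns = np.vstack([
    0.06 + sector_factor[s] + rng.normal(scale=0.08, size=(n_per_sector, n_years))
    for s in range(3)
])
a_priori_labels = np.repeat(np.arange(3), n_per_sector)

# Cluster the individual return series and compare with the a priori sector grouping.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(returns)
print("adjusted Rand index:", adjusted_rand_score(a_priori_labels, clusters))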