910 results for content distribution networks


Relevance: 30.00%

Publisher:

Abstract:

This article presents the results of a study of two sediment cores collected from environmentally distinct zones of CES. The accumulation status of five toxic metals, Cadmium (Cd), Chromium (Cr), Cobalt (Co), Copper (Cu) and Lead (Pb), was analyzed. In addition, texture and CHNS were determined to characterize the composition of the sediment. The Enrichment Factor (EF) and the Anthropogenic Factor (AF) were used to differentiate the typical metal sources. Metal enrichment in the cores revealed a heavy load in the northern region (NS1) compared with the southern zone (SS1). The elevated metal content in core NS1 reflects industrial input. Statistical analyses were employed to identify the origin of the metals in the sediment samples. Principal Component Analysis (PCA) distinguishes the two zones by their metal accumulation capacity: highest at NS1 and lowest at SS1. Correlation analysis revealed a significant positive relationship only in core NS1, consistent with intensified industrial pollution.
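The abstract does not give the normalization used for EF; a commonly used form, assuming normalization to a conservative reference element X (often Al or Fe), is:

\[ \mathrm{EF} = \frac{(C_{\text{metal}}/C_{X})_{\text{sample}}}{(C_{\text{metal}}/C_{X})_{\text{background}}} \]

EF values close to 1 are usually read as a crustal (natural) origin, while markedly higher values indicate anthropogenic enrichment.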

Relevance: 30.00%

Publisher:

Abstract:

Monitoring a distribution network implies working with a huge amount of data coming from the different elements that interact in the network. This paper presents a visualization tool that simplifies the task of searching the database for useful information applicable to fault management or preventive maintenance of the network.

Relevance: 30.00%

Publisher:

Abstract:

Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and their potential for use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information effectively into disaster-management processes. The type and value of information shared should be assessed to determine the benefits and issues, with credibility and reliability as known concerns. Mapping the tweets in relation to the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK storms and floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classed into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when the analysed tweets are observed against the timeline, illustrating a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted in each month with the timeline suggests that users tweet more as an event heightens and persists; furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just in the response phase, to potentially improve future policies and activities.

Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulty of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses but convey similar information. The analysis also showed that the low similarity in the clauses was largely due to differences in semantics, elaboration and functionality between SNS. This paper therefore proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and five recommendations are made to begin facilitating this, based on the findings of the investigation.
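As background to the similarity measure above, the Jaccard coefficient compares two sets by the size of their intersection relative to their union. A minimal Python sketch, using hypothetical clause labels in place of the policy attributes actually analysed:

```python
def jaccard(a, b):
    """Jaccard similarity coefficient: |A & B| / |A | B| (1.0 if both sets are empty)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical clause labels for two social networking sites (illustrative only).
sns_a = {"data collection", "third-party sharing", "cookies", "retention"}
sns_b = {"data collection", "cookies", "advertising"}

print(jaccard(sns_a, sns_b))  # 0.4 -> low clause overlap
```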

Relevance: 30.00%

Publisher:

Abstract:

Understanding the factors involved in the internationalization of SMEs in Colombia entails a complex structure of foundations, strategies, theories, models and organizational methodology, within the dynamic context of the commercial and financial world. Accordingly, an analysis was carried out of the theories and models of internationalization and of the factors reflected in them that affect the development of small and medium-sized enterprises; this was compared with SMEs in Colombia and supported by statistical data from government entities and databases.

Relevance: 30.00%

Publisher:

Abstract:

When allocating a resource, geographical and infrastructural constraints have to be taken into account. We study the problem of distributing a resource through a network from sources endowed with the resource to citizens with claims. A link between a source and an agent depicts the possibility of a transfer from the source to the agent. Given the supplies at each source, the claims of citizens, and the network, the question is how to allocate the available resources among the citizens. We consider a simple allocation problem that is free of network constraints, where the total amount can be freely distributed. The simple allocation problem is a claims problem where the total amount of claims is greater than what is available. We focus on consistent and resource monotonic rules in claims problems that satisfy equal treatment of equals. We call these rules fairness principles and we extend fairness principles to allocation rules on networks. We require that for each pair of citizens in the network, the extension is robust with respect to the fairness principle. We call this condition pairwise robustness with respect to the fairness principle. We provide an algorithm and show that each fairness principle has a unique extension which is pairwise robust with respect to the fairness principle. We give applications of the algorithm for three fairness principles: egalitarianism, proportionality and equal sacrifice.
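For intuition about the underlying claims problem (before network constraints are imposed), here is a minimal sketch of the proportional rule, one of the three fairness principles mentioned, with made-up claims and endowment:

```python
def proportional_rule(endowment, claims):
    """Proportional rule for a claims problem: each claimant receives a share of the
    endowment proportional to their claim (assumes total claims exceed the endowment)."""
    total = sum(claims.values())
    return {agent: endowment * c / total for agent, c in claims.items()}

# Hypothetical citizens and claims; the endowment of 42 is smaller than the total claim of 60.
claims = {"a": 30, "b": 20, "c": 10}
print(proportional_rule(42, claims))  # {'a': 21.0, 'b': 14.0, 'c': 7.0}
```

The network extension studied in the paper additionally has to respect which sources can serve which citizens; this sketch ignores that constraint.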

Relevance: 30.00%

Publisher:

Abstract:

The thesis aims to understand entrepreneurial processes that try to create businesses or products with a high degree of complexity. This complexity comes from the fact that such products or initiatives can only be viable with the concurrence of a large number of heterogeneous actors (public, private, from different regions, etc.) interacting in a relational context. A case with these characteristics is the Camí dels Bons Homes. The thesis analyzes the evolution of the relational network from the point of view of its structure and of the content of its ties. The results show and explain the observed changes in the network structure and in the content of the ties. This analysis of tie content contributes a new systematization and operationalization of ties' content. Moreover, the analysis takes into account negative ties, an issue less discussed in the literature.

Relevance: 30.00%

Publisher:

Abstract:

The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using PC is compared with QoS parameters in buffer-less environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well, and this estimate is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to establish the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to handle cell contention adequately. Note that propagation delay is a limit that cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation, giving sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj), and an expression for evaluating CLRj is presented. We conclude that, by combining the ECA method with cut-off mechanisms, utilisation of ECA in real-time CAC environments as a single-level scheme is always possible.
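The exact formulation is not given in the abstract; the general idea behind formula-based convolution for buffer-less CAC can be sketched as follows, assuming independent on/off sources, each described by a peak rate and an activity probability: the per-source rate distributions are convolved into an aggregate-rate distribution, and the congestion probability is the mass above the link capacity.

```python
import numpy as np

def congestion_probability(sources, capacity):
    """Convolve per-source rate distributions (on/off sources) and return P(aggregate rate > capacity).
    Each source is (peak_rate, p_on); rates are taken as integer units for simplicity."""
    agg = np.array([1.0])  # distribution of the aggregate rate, starting with P(rate = 0) = 1
    for peak, p_on in sources:
        pmf = np.zeros(peak + 1)
        pmf[0] = 1.0 - p_on   # source silent
        pmf[peak] = p_on      # source emitting at its peak rate
        agg = np.convolve(agg, pmf)
    return agg[capacity + 1:].sum()

# Hypothetical traffic mix: ten sources of peak rate 3 with activity 0.2, five of peak rate 5 with activity 0.1.
mix = [(3, 0.2)] * 10 + [(5, 0.1)] * 5
print(congestion_probability(mix, capacity=12))
```

The ECA described above would replace this source-by-source convolution with one multinomial term per traffic class before multi-convolving the partial results.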

Relevance: 30.00%

Publisher:

Abstract:

Since the advent of the internet in everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music and e-books have steadily been lowered for most computer users, so that almost everyone with internet access can join the online communities that produce, consume and, of course, share media artefacts. Along with this trend, the violation of personal data privacy and copyright has increased, with illegal file sharing rampant across many online communities, particularly for certain music genres and amongst younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owners with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property; however, these are currently based only on uni-modal approaches. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined for evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. Accordingly, we outline the available modalities being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online distribution environment for both legal and illegal distribution of media products.
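The abstract does not describe a concrete architecture; purely to illustrate the multi-modal idea, the toy sketch below derives one binary fingerprint per modality from a feature vector, concatenates them, and compares items by Hamming distance. The feature vectors are placeholders standing in for real audio/video analysis.

```python
import numpy as np

def binary_fingerprint(features):
    """Toy perceptual-style fingerprint: 1 where a feature exceeds the vector's mean, else 0."""
    features = np.asarray(features, dtype=float)
    return (features > features.mean()).astype(np.uint8)

def multimodal_fingerprint(modalities):
    """Concatenate per-modality fingerprints (e.g. audio, video features) into one code."""
    return np.concatenate([binary_fingerprint(f) for f in modalities])

def hamming_distance(fp_a, fp_b):
    return int(np.count_nonzero(fp_a != fp_b))

# Placeholder feature vectors standing in for real audio/video analysis of two media items.
item_a = [np.array([0.1, 0.9, 0.4, 0.7]), np.array([3.0, 1.0, 2.5])]
item_b = [np.array([0.1, 0.8, 0.5, 0.7]), np.array([2.9, 1.1, 2.4])]
print(hamming_distance(multimodal_fingerprint(item_a), multimodal_fingerprint(item_b)))
```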

Relevance: 30.00%

Publisher:

Abstract:

SMPS and DMS500 analysers were used to measure particulate size distributions in the exhaust of a fully annular aero gas turbine engine at two operating conditions, to compare the instruments and analyse sources of discrepancy. A number of different dilution ratio values were utilised for the comparative analysis, and a Dekati hot diluter operating at a temperature of 623 K was also used to remove volatile PM prior to measurement. Additional work focused on observing the effect of varying the sample line temperatures to ascertain their impact. Explanations are offered for most of the trends observed, although a new, repeatable event identified in the range from 417 K to 423 K, where there was a three-order-of-magnitude increase in the nucleation mode of the sample, requires further study.

Relevance: 30.00%

Publisher:

Abstract:

Aquatic sediments often remove hydrophobic contaminants from fresh waters. The subsequent distribution and concentration of contaminants in bed sediments determine their effect on benthic organisms and the risk of re-entry into the water and/or leaching to groundwater. This study examines the transport of simazine and lindane in aquatic bed sediments with the aim of understanding the processes that determine their depth distribution. Experiments in flume channels (water flow of 10 cm s⁻¹) determined the persistence of the compounds in the absence of sediment with (a) de-ionised water and (b) a solution that had been in contact with river sediment. In further experiments with river bed sediments under light and dark conditions, measurements were made of the concentration of the compounds in the overlying water and of the development of bacterial/algal biofilms and bioturbation activity. At the end of the experiments, concentrations in sediments and associated pore waters were determined in sections of the sediment at 1 mm resolution down to 5 mm and then at 10 mm resolution to 50 mm depth, and these distributions were analysed using a sorption-diffusion-degradation model. The fine resolution of the depth profile permitted the detection of a maximum in the concentration of the compounds in the pore water near the surface, whereas concentrations in the sediment increased to a maximum at the surface itself. Experimental distribution coefficients determined from the pore water and sediment concentrations indicated a gradient with depth that was partly explained by an increase in organic matter content and specific surface area of the solids near the interface. The modelling showed that degradation of lindane within the sediment was necessary to explain the concentration profiles, with the optimum agreement between the measured and theoretical profiles obtained with differential degradation in the oxic and anoxic zones. The compounds penetrated to a depth of 40-50 mm over a period of 42 days. (C) 2004 Society of Chemical Industry.
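The model equations are not stated in the abstract; a common form of a one-dimensional sorption-diffusion-degradation model for the pore-water concentration C(z, t), assuming linear equilibrium sorption and first-order degradation, is:

\[ R\,\frac{\partial C}{\partial t} = D_s\,\frac{\partial^2 C}{\partial z^2} - kC, \qquad R = 1 + \frac{\rho_b K_d}{\theta} \]

where D_s is the effective diffusion coefficient in the sediment, ρ_b the bulk density, K_d the distribution coefficient, θ the porosity and k the degradation rate; the differential degradation in oxic and anoxic zones mentioned above corresponds to letting k vary with depth.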

Relevance: 30.00%

Publisher:

Abstract:

Particle size distribution (psd) is one of the most important features of the soil because it affects many of its other properties, and it determines how soil should be managed. To understand the properties of chalk soil, psd analyses should be based on the original material (including carbonates), and not just the acid-resistant fraction. Laser-based methods rather than traditional sedimentation methods are being used increasingly to determine particle size, to reduce the cost of analysis. We give an overview of both approaches and the problems associated with them for analyzing the psd of chalk soil. In particular, we show that it is not appropriate to use the widely adopted 8 µm boundary between the clay and silt size fractions for samples determined by laser to estimate proportions of these size fractions that are equivalent to those based on sedimentation. We present data from field and national-scale surveys of soil derived from chalk in England. Results from both types of survey showed that laser methods tend to over-estimate the clay-size fraction compared to sedimentation for the 8 µm clay/silt boundary, and we suggest reasons for this. For soil derived from chalk, either the sedimentation methods need to be modified or it would be more appropriate to use a 4 µm threshold as an interim solution for laser methods. Correlations between the proportions of sand- and clay-sized fractions and other properties, such as organic matter and volumetric water content, were the opposite of what one would expect for soil dominated by silicate minerals. For water content, this appeared to be due to the predominance of porous chalk fragments in the sand-sized fraction rather than quartz grains, and the abundance of fine (<2 µm) calcite crystals rather than phyllosilicates in the clay-sized fraction. This was confirmed by scanning electron microscope (SEM) analyses. "Of all the rocks with which I am acquainted, there is none whose formation seems to tax the ingenuity of theorists so severely, as the chalk, in whatever respect we may think fit to consider it." Thomas Allan, FRS Edinburgh 1823, Transactions of the Royal Society of Edinburgh. (C) 2009 Natural Environment Research Council (NERC). Published by Elsevier B.V. All rights reserved.
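As general background (not from the paper), sedimentation methods rest on Stokes' law for the settling velocity of a sphere:

\[ v = \frac{(\rho_s - \rho_f)\, g\, d^2}{18\,\mu} \]

where v is the settling velocity, ρ_s and ρ_f the particle and fluid densities, g the gravitational acceleration, d the equivalent spherical diameter and μ the dynamic viscosity of the fluid; timed sampling at a fixed depth converts settling velocity into size thresholds such as the conventional 2 µm clay limit.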

Relevance: 30.00%

Publisher:

Abstract:

The purpose of this study was to test the hypothesis that soil water content would vary spatially with distance from a tree row and that the effect would differ according to tree species. A field study was conducted on a kaolinitic Oxisol in the sub-humid highlands of western Kenya to compare soil water distribution and dynamics in a maize monoculture with those under maize (Zea mays L.) intercropped with a 3-year-old tree row of Grevillea robusta A. Cunn. ex R. Br. (grevillea) and a hedgerow of Senna spectabilis DC. (senna). Soil water content was measured at weekly intervals during one cropping season using a neutron probe. Measurements were made from 20 cm to a depth of 225 cm at distances of 75, 150, 300 and 525 cm from the tree rows. The amount of water stored was greater under the sole maize crop than under the agroforestry systems, especially the grevillea-maize system. Stored soil water in the grevillea-maize system increased with increasing distance from the tree row, but in the senna-maize system it decreased between 75 and 300 cm from the hedgerow. Soil water content increased least and more slowly early in the season in the grevillea-maize system, and drying was also evident as the frequency of rain declined. Soil water content at the end of the cropping season was similar to that at the start of the season in the grevillea-maize system, but about 50 and 80 mm greater in the senna-maize and sole maize systems, respectively. The seasonal water balance showed there was 140 mm of drainage from the sole maize system. A similar amount was lost from the agroforestry systems (about 160 mm in the grevillea-maize system and 145 mm in the senna-maize system) through drainage or tree uptake. The possible benefits of reduced soil evaporation and crop transpiration close to a tree row were not evident in the grevillea-maize system, but appeared to greatly compensate for water uptake losses in the senna-maize system. Grevillea, managed as a tree row, reduced stored soil water to a greater extent than senna, managed as a hedgerow.
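The balance equation used is not given in the abstract; a seasonal soil water balance of the usual form, assuming negligible runoff, is:

\[ D = P - ET - \Delta S \]

where D is drainage (plus any uptake from below the measured profile), P rainfall, ET evapotranspiration and ΔS the change in stored soil water over the season; the drainage figures quoted above would be residuals of a balance of this kind.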

Relevance: 30.00%

Publisher:

Abstract:

Sediments play a fundamental role in the behaviour of contaminants in aquatic systems. Various processes in sediments, e.g. adsorption-desorption, oxidation-reduction, ion exchange or biological activity, can cause accumulation or release of metals and anions from the bottom of reservoirs, and have recently been studied in Polish waters [1-3]. Sediment samples from layer A (1-6 cm depth, in direct contact with the bottom water), layer B (7-12 cm depth, in moderate contact) and layer C (below 12 cm depth, in theory an inactive layer) were collected in September 2007 from six sites representing different types of hydrological conditions along the Dobczyce Reservoir (Fig. 1). Water depths at the sampling points varied from 3.5 to 21 m. We have focused on studying the distribution and accumulation of several heavy metals (Cr, Pb, Cd, Cu and Zn) in the sediments. Surface, bottom and pore water (extracted from sediments by centrifugation) samples were also collected. Possible relationships between the heavy-metal distribution in the sediments and the sediment characteristics (mineralogy, organic matter), as well as the Fe, Mn and Ca content of the sediments, have been studied. The O2 concentrations in the water samples were also measured. Heavy metals in the sediments ranged from 19.0 to 226.3 mg/kg of dry mass (ppm). The results show considerable variation in heavy-metal concentrations between the six stations, but not between the individual layers (A, B, C). These variations are related to the mineralogy and chemical composition of the sediments and their pore waters.

Relevance: 30.00%

Publisher:

Abstract:

The objective was to determine the concentration of total selenium (Se) and the proportion of total Se present as selenomethionine (SeMet) and selenocysteine (SeCys), as well as meat quality in terms of oxidative stability, in post mortem tissues of lambs offered diets with an increasing dose rate of selenium-enriched yeast (SY) or sodium selenite (SS). Fifty lambs were offered, for a period of 112 d, a total mixed ration which had been supplemented either with SY (0, 0.11, 0.21 or 0.31 mg/kg DM, to give total Se contents of 0.19, 0.3, 0.4 and 0.5 mg Se/kg DM for treatments T1, T2, T3 and T4, respectively) or with SS (0.11 mg/kg DM, to give 0.3 mg Se/kg DM total Se [T5]). At enrolment and at 28, 56, 84 and 112 d following enrolment, blood samples were taken for determination of Se and Se species, as well as glutathione peroxidase (GSH-Px) activity. At the end of the study lambs were euthanased and samples of heart, liver, kidney and skeletal muscle were retained for determination of Se and Se species. Tissue GSH-Px activity and thiobarbituric acid reactive substances (TBARS) were determined in Longissimus Thoracis. The incorporation into the diet of ascending concentrations of Se as SY increased whole-blood total Se, the proportion of total Se present as SeMet, and erythrocyte GSH-Px activity. Comparable doses of SS supplementation did not result in significant differences in these parameters. With the exception of kidney tissue, all other tissues showed a dose-dependent response to increasing concentrations of dietary SY, such that total Se and SeMet increased. The selenium content of Psoas Major was higher in animals fed SY when compared to a similar dose of SS, indicating improvements in Se availability and retention. There were no significant treatment effects on the meat quality assessments GSH-Px and TBARS, reflecting the lack of difference in the proportion of total Se present as SeCys. However, oxidative stability improved marginally with ascending tissue Se content, providing an indication of a linear dose response whereby TBARS improved with ascending SY inclusion.

Relevance: 30.00%

Publisher:

Abstract:

The existence of endgame databases challenges us to extract higher-grade information and knowledge from their basic data content. Chess players, for example, would like simple and usable endgame theories, if such a holy grail exists; endgame experts would like to provide such insights and to be inspired by computers to do so. Here, we investigate the use of artificial neural networks (NNs) to mine these databases, and we report on a first use of NNs on KPK. The results encourage us to suggest further work on chess applications of neural networks and other data-mining techniques.
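The abstract does not describe the network or the position encoding; below is a minimal sketch of the general idea, using scikit-learn, a made-up feature encoding (square indices of the three pieces plus side to move) and randomly generated placeholder labels where the real endgame-database results would go.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder KPK-style positions: white king, white pawn, black king squares (0-63).
# In a real experiment the label (win/draw) would come from the KPK endgame database, not from rng.
X = rng.integers(0, 64, size=(2000, 3))
X = np.column_stack([X, rng.integers(0, 2, size=2000)])  # side to move
y = rng.integers(0, 2, size=2000)                         # 1 = win for White, 0 = draw (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # ~0.5 on random labels; meaningful only with real data
```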