948 results for DISTRIBUTION NETWORKS
Abstract:
The use of various IP-based services is constantly increasing while users are becoming ever more mobile. For this reason the IP protocol will inevitably also make its way into mobile networks. This thesis studies the problems that mobility introduces to IP multicasting and simulates them using the Network Simulator. The main emphasis is on the problem caused by the multicast group join delay. This problem is simulated in order to determine how the delay, the arrival rate of mobile users into the service, and the timer settings of the Scalable Reliable Multicast (SRM) protocol affect the number of repair request packets and, through that, the number of retransmissions performed. To study the influence of the different parameters, simulation results with varied parameters are presented using CDF curves. Based on the results, the most significant factors for the number of retransmission requests are the protocol timer values and the desired level of service, while the effect of the delay remains minor. Finally, the suitability of the SRM protocol for mobile networks is examined and alternatives for improving its operation are discussed.
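As a rough illustration of the SRM request-timer behaviour discussed above, the following Monte Carlo sketch shows how spreading the timers (larger C1/C2 constants) suppresses duplicate repair requests. All parameter values and the delay model are assumptions; this is not the thesis's Network Simulator setup.

```python
# Minimal sketch of SRM-style repair-request suppression: each receiver that detects a
# loss schedules a request at t_i = d_i * (C1 + C2 * U[0,1]) and cancels it if another
# request reaches it first. Distances and propagation delays are illustrative guesses.
import random

def repair_requests(n_receivers, c1, c2, seed=0):
    rng = random.Random(seed)
    d = [rng.uniform(0.01, 0.10) for _ in range(n_receivers)]      # est. distance to source (s)
    t = [d[i] * (c1 + c2 * rng.random()) for i in range(n_receivers)]
    first = min(t)                                                  # earliest request sent
    sent = 0
    for i in range(n_receivers):
        reach = first + rng.uniform(0.005, 0.05)                    # when that request is heard
        if t[i] <= reach:                                           # timer fired before suppression
            sent += 1
    return sent

for c1, c2 in [(0.5, 0.5), (1.0, 1.0), (2.0, 2.0)]:
    counts = [repair_requests(50, c1, c2, seed=s) for s in range(200)]
    print(f"C1={c1}, C2={c2}: mean repair requests = {sum(counts) / len(counts):.1f}")
```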
Abstract:
We develop an analytical approach to the susceptible-infected-susceptible epidemic model that allows us to unravel the true origin of the absence of an epidemic threshold in heterogeneous networks. We find that a delicate balance between the number of high degree nodes in the network and the topological distance between them dictates the existence or absence of such a threshold. In particular, small-world random networks with a degree distribution decaying slower than an exponential have a vanishing epidemic threshold in the thermodynamic limit.
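For orientation, the standard heterogeneous mean-field estimate of the SIS threshold, a textbook reference point rather than this paper's refined analysis, reads:

```latex
% Heterogeneous mean-field reference point (not this paper's derivation):
\lambda_c \;=\; \frac{\langle k \rangle}{\langle k^2 \rangle},
\qquad \text{so } \lambda_c \to 0 \text{ whenever } \langle k^2 \rangle \text{ diverges in the thermodynamic limit.}
```

The abstract's claim is stronger: on small-world networks the threshold can vanish even for degree distributions that decay more slowly than an exponential, because of the balance between the number of high-degree nodes and the topological distances separating them.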
Abstract:
Animal societies rely on interactions between group members to effectively communicate and coordinate their actions. To date, the transmission properties of interaction networks formed by direct physical contacts have been extensively studied for many animal societies and in all cases found to inhibit spreading. Such direct interactions do not, however, represent the only viable pathways. When spreading agents can persist in the environment, indirect transmission via 'same-place, different-time' spatial coincidences becomes possible. Previous studies have neglected these indirect pathways and their role in transmission. Here, we use rock ant colonies, a model social species whose flat nest geometry, coupled with individually tagged workers, allowed us to build temporally and spatially explicit interaction networks in which edges represent either direct physical contacts or indirect spatial coincidences. We show how the addition of indirect pathways allows the network to enhance or inhibit the spreading of different types of agent. This dual-functionality arises from an interplay between the interaction-strength distribution generated by the ants' movement and environmental decay characteristics of the spreading agent. These findings offer a general mechanism for understanding how interaction patterns might be tuned in animal societies to control the simultaneous transmission of harmful and beneficial agents.
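A toy model of the dual direct/indirect transmission idea can make the interplay concrete: indirect "same place, different time" edges are weakened by the spreading agent's environmental decay. Edge construction and all rates below are assumptions for illustration, not the study's temporally explicit ant-colony networks.

```python
# Illustrative spreading over a network whose edges are either direct contacts (lag = 0)
# or indirect spatial coincidences (lag > 0), down-weighted by the agent's decay rate.
import math, random

def spread(edges, decay_rate, p_contact=0.3, steps=20, seed=1):
    """edges: list of (u, v, lag); lag is the time gap of the spatial coincidence."""
    rng = random.Random(seed)
    nodes = set()
    for u, v, _ in edges:
        nodes.update((u, v))
    infected = {min(nodes)}                              # seed a single individual
    for _ in range(steps):
        new = set()
        for u, v, lag in edges:
            weight = math.exp(-decay_rate * lag)         # indirect links weakened by decay
            if (u in infected) != (v in infected) and rng.random() < p_contact * weight:
                new.update((u, v))
        infected |= new
    return len(infected)

rng = random.Random(2)
edges = [(i, (i + 1) % 30, rng.choice([0, 0, 5, 20])) for i in range(30)]
for rate in (0.0, 0.1, 1.0):                             # persistent vs fast-decaying agent
    print(f"decay rate {rate}: reached {spread(edges, rate)} of 30 individuals")
```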
Abstract:
The origin of Spanish regional economic divergence can be traced back at least to the seventeenth century, although it only took its full shape during industrialisation. Historians have often included uneven regional infrastructure endowments among the factors that explain divergence among Spanish regions, although no systematic analysis of the spatial distribution of Spanish infrastructure and its determinants has been carried out so far. This paper aims at filling that gap by describing the regional distribution of the main Spanish transport infrastructure between the middle of the nineteenth century and the Civil War. In addition, it estimates a panel data model to investigate the main reasons behind the differences among the Spanish regional endowments of railways and roads during that period. The outcomes of that analysis indicate that both institutional factors and the physical characteristics of each area had a strong influence on the distribution of transport infrastructure among the Spanish regions.
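The abstract does not spell out the panel specification; a generic fixed-effects form of the kind such studies typically estimate, with purely illustrative symbols, would be:

```latex
% Generic fixed-effects panel sketch (symbols are assumptions, not the paper's variables):
\mathrm{Infrastructure}_{it} = \alpha_i + \gamma_t
    + \beta_1\,\mathrm{Institutions}_{it}
    + \beta_2\,\mathrm{Geography}_{i}
    + \varepsilon_{it}
% i indexes regions, t indexes years; \alpha_i and \gamma_t are region and year effects.
```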
Abstract:
We investigate how correlations between the diversity of the connectivity of networks and the dynamics at their nodes affect the macroscopic behavior. In particular, we study the synchronization transition of coupled stochastic phase oscillators that represent the node dynamics. Crucially in our work, the variability in the number of connections of the nodes is correlated with the width of the frequency distribution of the oscillators. By numerical simulations on Erdős-Rényi networks, where the frequencies of the oscillators are Gaussian distributed, we make the counterintuitive observation that an increase in the strength of the correlation is accompanied by an increase in the critical coupling strength for the onset of synchronization. We further observe that the critical coupling can depend solely on the average number of connections or even completely lose its dependence on the network connectivity. Only beyond this state does a weighted mean-field approximation break down. If noise is present, the correlations have to be stronger to yield similar observations.
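A minimal numerical sketch of the setup described above (Erdős-Rényi coupling, Gaussian frequencies whose spread is tied to node degree, noisy phase dynamics, and the usual order parameter as the synchronization measure); all parameter values are assumptions, not the paper's.

```python
# Noisy Kuramoto-type oscillators on an Erdős-Rényi graph with degree-correlated
# frequency widths; r close to 1 indicates synchronization.
import numpy as np

rng = np.random.default_rng(0)
N, p, K, dt, steps, noise = 200, 0.05, 1.5, 0.01, 4000, 0.1
A = (rng.random((N, N)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T                           # symmetric ER adjacency matrix
deg = A.sum(axis=1)

rho = 0.8                                                # degree-frequency correlation strength
sigma = 1.0 + rho * (deg - deg.mean()) / (deg.std() + 1e-9)
omega = rng.normal(0.0, np.clip(sigma, 0.1, None))       # wider frequency spread on hubs

theta = rng.uniform(0, 2 * np.pi, N)
for _ in range(steps):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling) + np.sqrt(dt) * noise * rng.normal(size=N)

r = np.abs(np.exp(1j * theta).mean())                    # order parameter in [0, 1]
print(f"order parameter r = {r:.2f}")
```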
Abstract:
The maintenance of electric distribution networks is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation the maintenance practices of distribution system operators are analysed, and a theory for scheduling maintenance activities and reinvestments in distribution components is developed. The scheduling is based on the deterioration of components and the increasing failure rates caused by ageing. A dynamic programming algorithm is used to solve the maintenance problem that arises from the increasing failure rates of the network. Other drivers of network maintenance, such as environmental and regulatory reasons, are outside the scope of this thesis, as are tree trimming of line corridors and major disturbances of the network. For the optimisation, four dynamic programming models are presented and tested. The programs are implemented in the VBA language. Two different kinds of test networks are used for the testing. Because electric distribution system operators prefer to operate on larger component groups, optimal timing for component groups is also analysed. A maintenance software package is created to apply the presented theories in practice, and an overview of the program is given.
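A compact dynamic-programming sketch of the keep-or-replace decision under an ageing failure rate follows. The cost figures and the failure-rate model are illustrative assumptions, not the dissertation's VBA models.

```python
# Backward-induction DP: choose, year by year, whether to keep or replace a component
# whose failure rate grows with age, minimizing expected failure plus reinvestment cost.
def optimal_policy(horizon, max_age, replace_cost, failure_cost,
                   failure_rate=lambda age: min(1.0, 0.02 * 1.2 ** age)):
    # value[t][age] = minimum expected cost from year t onward, given component age
    value = [[0.0] * (max_age + 1) for _ in range(horizon + 1)]
    action = [[None] * (max_age + 1) for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for age in range(max_age + 1):
            keep = failure_rate(age) * failure_cost + value[t + 1][min(age + 1, max_age)]
            replace = replace_cost + failure_rate(0) * failure_cost + value[t + 1][1]
            value[t][age], action[t][age] = min((keep, "keep"), (replace, "replace"))
    return value, action

values, policy = optimal_policy(horizon=20, max_age=40, replace_cost=10.0, failure_cost=50.0)
print("decision for a 15-year-old component now:", policy[0][15])
```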
Abstract:
Production and generation of electrical power is evolving towards more environmentally friendly technologies and schemes. Pushed by the increasing cost of fossil fuels, the operational cost of producing electrical power with them, and their environmental effects, such as pollution and global warming, renewable energy sources gain constant impulse in the global energy economy. In consequence, the introduction of distributed energy sources has brought a new complexity to electrical networks. In the new concept of smart grids and decentralized power generation, control, protection and measurement are also distributed, requiring, among other things, a new communication scheme so that these functions can operate with each other in balance and improve performance. In this research, an analysis of different communication technologies (power line communication, Ethernet over unshielded twisted pair (UTP), optical fiber, Wi-Fi, Wi-MAX, and Long Term Evolution) and their respective characteristics is carried out, with the objective of pointing out strengths and weaknesses from different points of view (technical, economical, deployment, etc.) in order to establish a richer context in which a decision on the communication approach can be made depending on the specific application scenario of a new smart grid deployment. As a result, a description of possible optimal deployment solutions for communication is given, considering different technology options, and important considerations are pointed out for some of the possible network implementation scenarios.
Abstract:
In 1997, Paul Gilroy was able to write: "I have been asking myself, whatever happened to breakdancing" (21), a form of vernacular dance associated with urban youth that emerged in the 1970s. However, in the last decade, breakdancing has experienced a massive renaissance in movies (You Got Served), commercials ("Gotta Have My Pops!") and documentaries (the acclaimed Freshest Kids). In this thesis, I explore the historical development of global b-boy/b-girl culture through a qualitative study involving dancers and their modes of communication. Widespread circulation of breakdancing images peaked in the mid-1980s, and subsequently b-boy/b-girl culture largely disappeared from the mediated landscape. The dance did not reemerge into the mainstream of North American popular culture until the late 1990s. I argue that the development of major transnational networks between b-boys and b-girls during the 1990s was a key factor in the return of 'b-boying/b-girling' (known formerly as breakdancing). Street dancers toured, traveled and competed internationally throughout this decade. They also began to create 'underground' video documentaries and travel video 'magazines.' These video artefacts circulated extensively around the globe through alternative distribution channels (including the backpacks of traveling dancers). I argue that underground video artefacts helped to produce 'imagined affinities' between dancers in various nations. Imagined affinities are identifications expressed by a cultural producer who shares an embodied activity with other practitioners through either mediated texts or travels through new places. These 'imagined affinities' helped to sustain b-boy/b-girl culture by generating visual/audio representations of popularity for the dance movement across geographical regions.
Abstract:
Traditional work on organized crime suggests that an individual's status determines his individual success. Alternative research on the networks of criminal organizations and on criminal achievement indicates that rank matters less than is generally believed and that measures of strategic network positioning are more likely to determine criminal success. This thesis studies variations in criminal earnings within the Hells Angels' illicit drug distribution organization. Its objective is to identify, using self-reported accounting data, the elements driving these differences in criminal success as a function of an individual's more strategic or more vulnerable positioning within his network. The results reveal average volumes of money transacted that are much higher than what is generally reported. The distribution of this capital is largely unequal. The disparity of opportunities tied to criminal association is also reflected in the polarization between strongly privileged individuals and the others, whose positioning capacity is poor. Crossing positions with earnings inequality shows that an individual's positioning within his network is a better predictor of criminal achievement than any other contextual or rank variable. Last and most importantly, in contradiction with the literature, reaching a high hierarchical rank appears to harm criminal success: the results show that this status reduces access to credit, reduces drug quantities per transaction and increases the unit price of the drug.
Abstract:
Many practical problems arising in logistics can be modelled as vehicle routing problems. In general, this family of problems involves the design of routes, starting and ending at a depot, that are used to distribute goods to a number of geographically dispersed customers while minimizing the costs associated with the routes. Depending on the type of problem, one or several depots may be present. Vehicle routing problems are among the most difficult combinatorial problems to solve. In this thesis we study a combinatorial optimization problem, belonging to the class of vehicle routing problems, that arises in the context of transportation networks. We introduce a new problem mainly inspired by the activities of collecting milk from production farms and redistributing the collected product to processing plants in the province of Québec. Two variants of this problem are considered. The first aims at designing a tactical routing plan for the milk collection-redistribution problem over a given horizon, assuming that the production level over the horizon is fixed. The second variant aims at providing a more precise plan by taking into account potential variations in the production level that may occur during the horizon considered. In the first part of this thesis, we describe an exact algorithm for the first variant of the problem, which is characterized by time windows, several depots, and a heterogeneous fleet of vehicles, and whose objective is to minimize the routing cost. To this end, the problem is modelled as a multi-attribute vehicle routing problem. The exact algorithm is based on column generation and involves an elementary shortest path algorithm with resource constraints. In the second part, we design an exact algorithm to solve the second variant of the problem. To this end, the problem is modelled as a multi-period vehicle routing problem that explicitly takes into account potential variations of the production level over a given horizon. New strategies are proposed to solve the elementary shortest path problem with resource constraints, which in this case has a particular structure due to the multi-period characteristic of the general problem. To solve instances of realistic size within reasonable computation times, a heuristic solution approach is required. The third part proposes an adaptive large neighbourhood search algorithm in which many new exploration and exploitation strategies are proposed to improve the performance of the algorithm in terms of solution quality and computation time.
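As an illustration of the adaptive large neighbourhood search idea used in the third part, here is a generic destroy/repair skeleton with adaptive operator weights and a toy routing example. It is not the thesis's algorithm or its operators; all names and parameters are assumptions.

```python
# Generic ALNS skeleton: operators are chosen with weights that adapt to how often each
# operator improves the incumbent; acceptance follows a simulated-annealing criterion.
import math, random

def alns(initial, cost, destroyers, repairers, iters=2000, seed=0):
    rng = random.Random(seed)
    best = current = initial
    w_d, w_r, temp = [1.0] * len(destroyers), [1.0] * len(repairers), 1.0
    for _ in range(iters):
        di = rng.choices(range(len(destroyers)), weights=w_d)[0]
        ri = rng.choices(range(len(repairers)), weights=w_r)[0]
        candidate = repairers[ri](destroyers[di](current, rng), rng)
        delta = cost(candidate) - cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = candidate
        score = 0.1
        if cost(candidate) < cost(best):
            best, score = candidate, 1.0              # reward operators that improve the best
        w_d[di] = 0.9 * w_d[di] + score
        w_r[ri] = 0.9 * w_r[ri] + score
        temp *= 0.999
    return best

# Toy usage: shorten a random tour over points in the plane.
rng0 = random.Random(7)
pts = [(rng0.random(), rng0.random()) for _ in range(30)]
def cost(tour):
    return sum(math.dist(pts[tour[k]], pts[tour[(k + 1) % len(tour)]]) for k in range(len(tour)))
def destroy(tour, rng):
    keep = tour[:]
    removed = [keep.pop(rng.randrange(len(keep))) for _ in range(5)]
    return keep, removed
def repair(partial, rng):
    tour, removed = partial
    for c in removed:                                 # cheapest-insertion repair
        pos = min(range(len(tour) + 1), key=lambda p: cost(tour[:p] + [c] + tour[p:]))
        tour = tour[:pos] + [c] + tour[pos:]
    return tour
print(round(cost(alns(list(range(30)), cost, [destroy], [repair])), 3))
```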
Abstract:
The theme of the thesis is centred around one important aspect of wireless sensor networks: energy efficiency. The limited energy source of the sensor nodes calls for the design of energy-efficient routing protocols, and protocol designs should try to minimize the number of communications among the nodes to save energy. Cluster-based techniques have been found to be energy-efficient: clusters are formed, data from the nodes in each cluster are collected by a cluster head, and then forwarded to the base station. An appropriate cluster head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong its lifetime. In this work two such schemes are developed for static wireless sensor networks. The first scheme addresses the energy wasted when clusters are rebuilt with the involvement of all the nodes. A tree-based scheme is presented that alleviates this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, and the scheme is compared with an existing cluster-based scheme; the simulation study confirms the energy savings. The second scheme concentrates on building load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach that utilises neighbour-node information in the cluster head selection process is proposed. The number of nodes joining a cluster is restricted so as to obtain equal-sized, optimum clusters. Multi-hop communication among the cluster heads is also introduced to reduce energy consumption. The simulation study shows that the scheme results in balanced clusters and that the network achieves a reduction in energy consumption. The main conclusion of the study is that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy efficiency. Cluster-based protocols have been extended from static to mobile scenarios by various authors, but none of the proposals addresses cluster head election appropriately in view of mobility. An elegant scheme for electing cluster heads is presented to meet the challenge of maintaining cluster durability when all the nodes in the network are moving; the scheme has been simulated and compared with a similar approach. The proliferation of sensor networks provides users with large sets of sensor information to use in various applications. Sensor network programming is inherently difficult for various reasons, so there must be an elegant way to collect the data gathered by sensor networks without worrying about the underlying structure of the network. The final work presented addresses a way to collect data from a sensor network and present it to users in a flexible way. A service-oriented architecture based application is built and the data collection task is exposed as a web service, enabling the composition of sensor data from different sensor networks to build interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
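A sketch of the voting-based cluster-head idea with capped cluster sizes follows. The node data, thresholds and election details are made up for illustration; this is not the protocol evaluated in the thesis.

```python
# Each node votes for the neighbour with the highest residual energy; the most-voted
# nodes become cluster heads, and cluster sizes are capped to keep clusters balanced.
import collections, random

def elect_heads(neighbours, energy, max_cluster_size):
    votes = collections.Counter()
    for node, nbrs in neighbours.items():
        if nbrs:
            votes[max(nbrs, key=lambda n: energy[n])] += 1        # vote for strongest neighbour
    heads, assignment = [], {}
    for head, _ in votes.most_common():
        if head in assignment:                                    # already absorbed elsewhere
            continue
        members = [n for n in neighbours[head]
                   if n not in assignment and n not in heads][:max_cluster_size]
        if members:
            heads.append(head)
            for m in members:
                assignment[m] = head
    return heads, assignment

rng = random.Random(3)
nodes = list(range(20))
neighbours = {n: [m for m in nodes if m != n and rng.random() < 0.3] for n in nodes}
energy = {n: rng.uniform(0.2, 1.0) for n in nodes}
heads, assignment = elect_heads(neighbours, energy, max_cluster_size=5)
print("cluster heads:", heads)
```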
Abstract:
Monitoring a distribution network implies working with a huge amount of data coming from the different elements that interact in the network. This paper presents a visualization tool that simplifies the task of searching the database for useful information applicable to fault management or preventive maintenance of the network.
Abstract:
When allocating a resource, geographical and infrastructural constraints have to be taken into account. We study the problem of distributing a resource through a network from sources endowed with the resource to citizens with claims. A link between a source and an agent depicts the possibility of a transfer from the source to the agent. Given the supplies at each source, the claims of citizens, and the network, the question is how to allocate the available resources among the citizens. We consider a simple allocation problem that is free of network constraints, where the total amount can be freely distributed. The simple allocation problem is a claims problem where the total amount of claims is greater than what is available. We focus on consistent and resource monotonic rules in claims problems that satisfy equal treatment of equals. We call these rules fairness principles and we extend fairness principles to allocation rules on networks. We require that for each pair of citizens in the network, the extension is robust with respect to the fairness principle. We call this condition pairwise robustness with respect to the fairness principle. We provide an algorithm and show that each fairness principle has a unique extension which is pairwise robust with respect to the fairness principle. We give applications of the algorithm for three fairness principles: egalitarianism, proportionality and equal sacrifice.
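Two of the fairness principles named above, written out for the simple (network-free) claims problem where the total claim exceeds the endowment. The paper's contribution is the pairwise-robust extension of such rules to networks, which this sketch does not attempt.

```python
# Proportional rule and the constrained-equal-awards (egalitarian-style) rule for a
# simple claims problem: claims sum to more than the available endowment.
def proportional(claims, endowment):
    total = sum(claims)
    return [endowment * c / total for c in claims]

def constrained_equal_awards(claims, endowment, tol=1e-9):
    # everyone gets min(claim, lam), with lam chosen so the endowment is exactly exhausted
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) > endowment:
            hi = lam
        else:
            lo = lam
    return [min(c, lo) for c in claims]

claims, endowment = [10.0, 30.0, 60.0], 60.0
print(proportional(claims, endowment))              # [6.0, 18.0, 36.0]
print(constrained_equal_awards(claims, endowment))  # approximately [10.0, 25.0, 25.0]
```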
Abstract:
The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher layer applications. Window flow control mechanisms of traditional packet switched networks are not well suited to real time services at the speeds envisaged for future networks. In this work, the use of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for CAC in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well; furthermore, this estimate is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and investigated to explain the deviation between the proposed method and the simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to delimit the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay cannot be neglected for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation, giving sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In the ECA, traffic is grouped into classes with identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each traffic class is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj), and an expression for evaluating CLRj is presented. We conclude that, by combining the ECA method with cut-off mechanisms, using the ECA in real-time CAC environments as a single-level scheme is always possible.
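A numerical sketch of the convolution idea for bufferless CAC follows: identical on/off sources in a class are aggregated in one step (the binomial special case of the multinomial trick), the per-class distributions are convolved, and the probability of congestion is read off the aggregate rate distribution. All traffic figures are assumptions, and the ECA's cut-off mechanisms are not reproduced.

```python
# Convolve the rate distributions of independent on/off sources and read off the
# probability that the aggregate rate exceeds the link capacity (bufferless model).
from math import comb

def onoff_class_distribution(n_sources, peak_rate, p_on):
    """Rate distribution of n identical on/off sources (binomial per-class state)."""
    return {k * peak_rate: comb(n_sources, k) * p_on**k * (1 - p_on)**(n_sources - k)
            for k in range(n_sources + 1)}

def convolve(dist_a, dist_b):
    out = {}
    for ra, pa in dist_a.items():
        for rb, pb in dist_b.items():
            out[ra + rb] = out.get(ra + rb, 0.0) + pa * pb
    return out

def probability_of_congestion(dist, capacity):
    return sum(p for rate, p in dist.items() if rate > capacity)

# Two traffic classes sharing a 155 Mbit/s link (figures are illustrative assumptions).
agg = convolve(onoff_class_distribution(40, peak_rate=2.0, p_on=0.3),
               onoff_class_distribution(10, peak_rate=10.0, p_on=0.2))
print(f"P(congestion) = {probability_of_congestion(agg, capacity=155.0):.2e}")
```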
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measurements to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index that is correlated with its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, there is little comparison between the Hilbert curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. The Hilbert curve, Sammon's mapping and Principal Component Analysis have been used to generate a 1-D space with locality-preserving properties. This work provides empirical evidence to support the use of the Hilbert curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2-D network model and with a realistic network topology model with the typical power-law distribution of node connectivity found in the Internet. Nearest-neighbour analysis confirms the Hilbert curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results for the realistic network model show that there is scope for improvement, and better techniques for preserving locality information are required.
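A small sketch of the scalar-index idea: quantise a 2-D landmark latency vector onto a grid and map it to a single Hilbert-curve index, so nearby vectors tend to receive nearby identifiers. The grid size and latency range are assumptions; Sammon's mapping and PCA, also evaluated in the work, are not reproduced here.

```python
# Map 2-D grid coordinates to their index along a Hilbert curve (standard bitwise
# construction for an n x n grid, n a power of two), then derive a peer identifier
# from a quantised landmark latency vector.
def xy2d(n, x, y):
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate the quadrant so the curve stays continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def peer_id(latencies_ms, grid=256, max_latency=500.0):
    cells = [min(grid - 1, int(grid * t / max_latency)) for t in latencies_ms]
    return xy2d(grid, cells[0], cells[1])

# Similar latency vectors should tend to map to nearby identifiers.
print(peer_id([40.0, 120.0]), peer_id([42.0, 118.0]), peer_id([300.0, 20.0]))
```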