887 results for Dynamic networks
Abstract:
In distributed networks, it is often useful for the nodes to be aware of dense subgraphs; e.g., in otherwise sparse graphs such as the World Wide Web or social networks, a dense subgraph might reveal community clusters or dense regions over which good communication infrastructure could be maintained. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs that the member nodes are aware of. The only knowledge that the nodes need is the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. We consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. Communication is by broadcast only and follows the CONGEST model. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node knows whether or not it is part of a particular dense subgraph. We give algorithms that (2 + ε)-approximate the densest subgraph and (3 + ε)-approximate the at-least-k-densest subgraph (for a given parameter k). Our algorithms work for a wide range of parameter values and run in O(D log n) time. Further, a special case of our results also gives the first fully decentralized approximation algorithms for the densest and at-least-k-densest subgraph problems on static distributed graphs. © 2012 Springer-Verlag.
Abstract:
In distributed networks, some groups of nodes may have more inter-connections, perhaps due to their larger bandwidth availability or communication requirements. In many scenarios it may be useful for the nodes to know if they form part of a dense subgraph; e.g., such a dense subgraph could form a high-bandwidth backbone for the network. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs that the member nodes are aware of. The only knowledge that the nodes need is the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. We consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. Communication is by broadcast only and follows the CONGEST model in the sense that only messages of O(log n) size are permitted, where n is the number of nodes in the network. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node knows whether or not it is part of a particular dense subgraph. We approximate both the densest subgraph, i.e., the subgraph of the highest density in the network, and the at-least-k-densest subgraph (for a given parameter k), i.e., the densest subgraph of size at least k. We give a (2 + ε)-approximation algorithm for the densest subgraph problem. The at-least-k-densest subgraph problem is known to be NP-hard in the general centralized setting, where the best known algorithm gives a 2-approximation; we present an algorithm that maintains a (3 + ε)-approximation in our distributed, dynamic setting. Our algorithms run in O(D log n) time. © 2012 Authors.
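The (2 + ε)-approximation mentioned in the two abstracts above is typically obtained by a peeling argument: repeatedly discard all nodes whose degree falls below roughly twice the current density and remember the densest intermediate subgraph. The following is a minimal centralized sketch of that standard batch-peeling idea (with density defined as |E(S)|/|S|); it is not the papers' distributed CONGEST protocol, and the function name and interface are illustrative.

```python
def densest_subgraph_peel(adj, eps=0.1):
    """adj: dict mapping node -> set of neighbours (simple undirected graph).
    Returns (density, node_set); the density is within a (2 + 2*eps) factor
    of the optimum."""
    nodes = set(adj)
    best_density, best_set = 0.0, set()
    while nodes:
        # degrees and edge count restricted to the surviving nodes
        deg = {v: sum(1 for u in adj[v] if u in nodes) for v in nodes}
        edges = sum(deg.values()) / 2
        density = edges / len(nodes)
        if density > best_density:
            best_density, best_set = density, set(nodes)
        # batch-remove every node of degree at most 2(1 + eps) * density;
        # a constant fraction of nodes disappears in each round, so the
        # loop runs only O(log n / eps) times
        threshold = 2 * (1 + eps) * density
        doomed = {v for v in nodes if deg[v] <= threshold}
        nodes -= doomed
    return best_density, best_set

# tiny usage example: a 4-clique with a pendant path attached
graph = {
    1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5},
    5: {4, 6}, 6: {5},
}
print(densest_subgraph_peel(graph))   # the 4-clique has density 6/4 = 1.5
```

The batch removal (rather than removing one minimum-degree node at a time) is what keeps the number of rounds logarithmic, which is the feature that makes the idea amenable to a distributed setting.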
Abstract:
We study the fundamental Byzantine leader election problem in dynamic networks where the topology can change from round to round and nodes can also experience heavy churn (i.e., nodes can join and leave the network continuously over time). We assume the full information model, where the Byzantine nodes have complete knowledge about the entire state of the network at every round (including random choices made by all the nodes), have unbounded computational power, and can deviate arbitrarily from the protocol. The churn is controlled by an adversary that has complete knowledge and control over which nodes join and leave and at what times, may rewire the topology in every round, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is an $O(\log^3 n)$ round algorithm that achieves Byzantine leader election in the presence of up to $O(n^{1/2 - \epsilon})$ Byzantine nodes (for a small constant $\epsilon > 0$) and a churn of up to $O(\sqrt{n}/\mathrm{polylog}(n))$ nodes per round (where $n$ is the stable network size). The algorithm elects a leader with probability at least $1-n^{-\Omega(1)}$ and guarantees that it is an honest node with probability at least $1-n^{-\Omega(1)}$; assuming the algorithm succeeds, the leader's identity will be known to a $1-o(1)$ fraction of the honest nodes. Our algorithm is fully distributed, lightweight, and simple to implement. It is also scalable, as it runs in polylogarithmic (in $n$) time and requires nodes to send and receive messages of only polylogarithmic size per round. To the best of our knowledge, our algorithm is the first scalable solution for Byzantine leader election in a dynamic network with a high rate of churn; our protocol can also be used to solve Byzantine agreement in a straightforward way. We also show how to implement an (almost-everywhere) public coin with constant bias in a dynamic network with Byzantine nodes and provide a mechanism for enabling honest nodes to store information reliably in the network, which might be of independent interest.
Abstract:
A discrete-time random process is described which can generate bursty sequences of events. A Bernoulli process, where the probability of an event occurring at time t is given by a fixed probability x, is modified to include a memory effect whereby the event probability is increased proportionally to the number of events that occurred within a given amount of time preceding t. For small values of x the inter-event time distribution follows a power law with exponent −2−x. We consider a dynamic network where each node forms and breaks connections according to this process. The value of x for each node is drawn from a fitness distribution ρ(x); we find exact solutions for the expectation of the degree distribution for a variety of possible fitness distributions, both with and without the memory effect. This work can potentially lead to methods for uncovering hidden fitness distributions from fast-changing temporal network data, such as online social communications and fMRI scans.
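As a concrete illustration of the process described above, the sketch below simulates one node's event sequence: a baseline Bernoulli probability x plus a term proportional to the number of events in a preceding window. The window length and proportionality constant (`memory`, `boost`) are illustrative assumptions, since the abstract states the dependence only qualitatively.

```python
import random

def bursty_sequence(x, memory=10, boost=0.05, steps=10_000, seed=0):
    """Simulate the memory-modified Bernoulli process: at each step the
    event probability is the baseline fitness x plus `boost` times the
    number of events in the preceding `memory` steps (illustrative
    parameter choices, not taken from the paper)."""
    rng = random.Random(seed)
    events = []
    for t in range(steps):
        recent = sum(1 for s in events if t - memory <= s < t)
        p = min(1.0, x + boost * recent)
        if rng.random() < p:
            events.append(t)
    return events

# inter-event times; for small x their distribution should be heavy-tailed
times = bursty_sequence(x=0.01)
gaps = [b - a for a, b in zip(times, times[1:])]
```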
Abstract:
In many scientific fields, the analysis of complex networks has led to numerous recent discoveries: in this thesis we applied this approach to human language, in particular written language, where words do not interact at random. We first presented measures capable of extracting important topological structures from linguistic networks (Degree, Strength, Entropy, ...) and examined the software used to represent and visualize the graphs (Gephi). We then analyzed the different statistical properties of the same text in several of its forms (shuffled, without stopwords, and without low-frequency words): our database contains five books by five authors who lived in the nineteenth century. Finally, we showed how certain measures are important for distinguishing a real text from its modified versions, and why the Degree distribution of a normal text and of a shuffled one follow the same trend. These results may prove useful in the increasingly active analysis of linguistic phenomena such as authorship attribution and the recognition of shuffled texts.
Abstract:
Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees, with high probability, the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to $O(n/\mathrm{polylog}(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\mathrm{polylog}(n))$ overhead for topology maintenance: only polylogarithmic (in $n$) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient needed for the design of efficient fully-distributed algorithms for fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
Abstract:
The increasing diffusion of wireless-enabled portable devices is pushing toward the design of novel service scenarios, promoting temporary and opportunistic interactions in infrastructure-less environments. Mobile Ad Hoc Networks (MANET) are the general model of these highly dynamic networks, which can be specialized, depending on application cases, into more specific and refined models such as Vehicular Ad Hoc Networks and Wireless Sensor Networks. Two interesting deployment cases are of increasing relevance: resource diffusion among users equipped with portable devices, such as laptops, smart phones or PDAs in crowded areas (termed dense MANET), and dissemination/indexing of monitoring information collected in Vehicular Sensor Networks. The extreme dynamicity of these scenarios calls for novel distributed protocols and services facilitating application development. To this aim we have designed middleware solutions supporting these challenging tasks. REDMAN manages, retrieves, and disseminates replicas of software resources in dense MANET; it implements novel lightweight protocols to maintain a desired replication degree despite participants' mobility, and to efficiently perform resource retrieval. REDMAN exploits the high-density assumption to achieve scalability and limited network overhead. Sensed-data gathering and distributed indexing in Vehicular Networks raise similar issues: we propose a specific middleware support, called MobEyes, that exploits node mobility to opportunistically diffuse data summaries among neighbor vehicles. MobEyes creates a low-cost opportunistic distributed index to query the distributed storage and to determine the location of needed information. Extensive validation and testing of REDMAN and MobEyes prove the effectiveness of our original solutions in limiting communication overhead while maintaining the required accuracy of replication degree and indexing completeness, and demonstrate the feasibility of the middleware approach.
Abstract:
Pathogens and pests of stored grains move through complex dynamic networks linking fields, farms, and bulk storage facilities. Human transport and other forms of dispersal link the components of this network. A network model for pathogen and pest movement through stored grain systems is a first step toward new sampling and mitigation strategies that utilize information about the network structure. An understanding of network structure can be applied to identifying the key network components for pathogen or pest movement through the system. For example, it may be useful to identify a network node, such as a local grain storage facility, through which grain from a large number of fields will be accumulated and move through the network. This node may be particularly important for sampling and mitigation. In some cases more detailed information about network structure can identify key nodes that link two large sections of the network, such that management at the key nodes will greatly reduce the risk of spread between the two sections. In addition to the spread of particular species of pathogens and pests, we also evaluate the spread of problematic subpopulations, such as subpopulations with pesticide resistance. We present an analysis of stored grain pathogen and pest networks for Australia and the United States.
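As a hedged illustration of how standard network measures can flag the kind of key nodes discussed above (this is not the authors' model or data; every node name and edge below is hypothetical), betweenness centrality highlights accumulation points such as local elevators, while articulation points mark nodes whose removal separates two sections of the network.

```python
import networkx as nx

# hypothetical field -> local elevator -> regional terminal links
G = nx.Graph()
G.add_edges_from([
    ("field_1", "elevator_A"), ("field_2", "elevator_A"),
    ("field_3", "elevator_A"), ("elevator_A", "terminal_X"),
    ("field_4", "elevator_B"), ("field_5", "elevator_B"),
    ("elevator_B", "terminal_X"),
])

# accumulation points through which much grain (and hence risk) passes
ranking = nx.betweenness_centrality(G)

# cut vertices linking otherwise separate sections; managing them limits
# spread between those sections
cut_nodes = list(nx.articulation_points(G))

print(sorted(ranking, key=ranking.get, reverse=True)[:3])
print(cut_nodes)
```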
Abstract:
Many modern networks are reconfigurable, in the sense that the topology of the network can be changed by the nodes in the network. For example, peer-to-peer, wireless, and ad-hoc networks are reconfigurable. More generally, many social networks, such as a company's organizational chart; infrastructure networks, such as an airline's transportation network; and biological networks, such as the human brain, are also reconfigurable. Modern reconfigurable networks have a complexity unprecedented in the history of engineering, resembling a dynamic and evolving living animal more than a structure of steel designed from a blueprint. Unfortunately, our mathematical and algorithmic tools have not yet developed enough to handle this complexity and fully exploit the flexibility of these networks. We believe that it is no longer possible to build networks that are scalable and never have node failures. Instead, these networks should be able to admit small, and perhaps periodic, failures and still recover, much as skin heals from a cut. This process, in which the network recovers itself by maintaining key invariants in response to attack by a powerful adversary, is what we call self-healing. Here, we present several fast and provably good distributed algorithms for self-healing in reconfigurable dynamic networks. Each of these algorithms has different properties and a different set of guarantees and limitations. We also discuss future directions and theoretical questions we would like to answer.
Abstract:
In this thesis we study the properties of two large dynamic networks: the competition network of advertisers on the Google and Bing search engines, and the dynamic network of friend relationships among avatars in the massively multiplayer online game (MMOG) Planetside 2. We are particularly interested in removal patterns in these networks. Our main finding is that in both networks the nodes most commonly removed are minor, near-isolated nodes. We also investigate the process of merging two large networks, using data captured during the merger of servers of Planetside 2. We found that the original network structures do not really merge; rather, they are gradually replaced by newcomers not associated with the original structures. In the final part of the thesis we investigate the concept of motifs in the Barabási-Albert random graph and establish some bounds on the number of motifs in this graph.
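For intuition on the motif question, the following sketch (with illustrative parameters, not the thesis' experiments) generates a Barabási-Albert graph with networkx and counts one simple motif, the triangle, whose empirical count can then be compared against analytic bounds.

```python
import networkx as nx

n, m = 10_000, 3                      # illustrative graph size and attachment parameter
G = nx.barabasi_albert_graph(n, m, seed=42)

# each triangle is counted once at each of its three vertices
triangles = sum(nx.triangles(G).values()) // 3
print(f"{triangles} triangles in a BA({n}, {m}) graph")
```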
Abstract:
Traditionally, applications and tools supporting collaborative computing have been designed only with personal computers in mind and support a limited range of computing and network platforms. These applications are therefore not well equipped to deal with network heterogeneity and, in particular, do not cope well with dynamic network topologies. Progress in this area must be made if we are to fulfil the needs of users and support the diversity, mobility, and portability that are likely to characterise group work in the future. This paper describes a groupware platform called Coco that is designed to support collaboration in a heterogeneous network environment. The work demonstrates that progress in the development of generic supporting groupware is achievable, even in the context of heterogeneous and dynamic networks. It also demonstrates the progress made in the development of an underlying communications infrastructure, building on peer-to-peer concepts and topologies to improve scalability and robustness.
Abstract:
Driven by a range of modern applications that includes telecommunications, e-business and on-line social interaction, recent ideas in complex networks can be extended to the case of time-varying connectivity. Here we propose a general framework for modelling and simulating such dynamic networks, and we explain how the long-time behaviour may reveal important information about the mechanisms underlying the evolution.
Abstract:
We propose a new algorithm for summarizing properties of large-scale time-evolving networks. This type of data, recording connections that come and go over time, is being generated in many modern applications, including telecommunications and on-line human social behavior. The algorithm computes a dynamic measure of how well pairs of nodes can communicate by taking account of routes through the network that respect the arrow of time. We take the conventional approach of downweighting for length (messages become corrupted as they are passed along) and add the novel feature of downweighting for age (messages go out of date). This allows us to generalize widely used Katz-style centrality measures that have proved popular in network science to the case of dynamic networks sampled at non-uniform points in time. We illustrate the new approach on synthetic and real data.
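One plausible way to realize a measure of this kind is a running Katz-style matrix iteration in which each snapshot contributes a resolvent (I − aA)⁻¹ (downweighting for walk length) and the accumulated summary decays by e^(−bΔt) between non-uniformly spaced observations (downweighting for age). The sketch below is only an illustration of that idea: the parameter names and the exact update rule are assumptions, not necessarily the authors' precise iteration, and a must be below the reciprocal of each snapshot's spectral radius for the resolvent series to converge.

```python
import numpy as np

def dynamic_katz(snapshots, times, a=0.1, b=0.05):
    """snapshots: list of n x n symmetric 0/1 adjacency arrays, one per
    observation time in `times` (non-uniform spacing is allowed).
    Returns per-node broadcast and receive scores."""
    n = snapshots[0].shape[0]
    I = np.eye(n)
    S = np.zeros((n, n))
    prev_t = times[0]
    for A, t in zip(snapshots, times):
        S = S * np.exp(-b * (t - prev_t))        # age: messages go out of date
        # length: each extra hop within this snapshot costs a factor a
        S = (I + S) @ np.linalg.inv(I - a * A) - I
        prev_t = t
    # row sums rank nodes as broadcasters, column sums as receivers
    return S.sum(axis=1), S.sum(axis=0)
```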
Abstract:
Part 2: Behaviour and Coordination
Abstract:
Manual calibration of large and dynamic networks of cameras is labour intensive and time consuming. This is a strong motivator for the development of automatic calibration methods. Automatic calibration relies on the ability to find correspondences between multiple views of the same scene. If the cameras are sparsely placed, this can be a very difficult task. This PhD project focuses on the further development of uncalibrated wide baseline matching techniques.