120 results for Distributed virtualization
Abstract:
The ability to distribute quantum entanglement is a prerequisite for many fundamental tests of quantum theory and numerous quantum information protocols. Two distant parties can increase the amount of entanglement between them by means of quantum communication encoded in a carrier that is sent from one party to the other. Intriguingly, entanglement can be increased even when the exchanged carrier is not entangled with the parties. However, in light of the defining property of entanglement stating that it cannot increase under classical communication, the carrier must be quantum. Here we show that, in general, the increase of relative entropy of entanglement between two remote parties is bounded by the amount of nonclassical correlations of the carrier with the parties as quantified by the relative entropy of discord. We study implications of this bound, provide new examples of entanglement distribution via unentangled states, and put further limits on this phenomenon.
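The central claim can be written as a single inequality. The following is a hedged sketch in LaTeX notation, assuming the usual tripartite setup in which Alice holds A, Bob holds B, and the carrier C is sent from Alice to Bob; the partition labels are assumptions, not quotations from the abstract:

% Sketch only: gain in relative entropy of entanglement when the carrier C
% moves from Alice's side to Bob's side, bounded by the relative entropy of
% discord of C with the parties (partition labels assumed).
\[
  E_{A|BC}(\rho) \;-\; E_{AC|B}(\rho) \;\le\; D_{C|AB}(\rho)
\]

Here E denotes the relative entropy of entanglement across the indicated bipartition and D_{C|AB} the relative entropy of discord of the carrier with the parties, so the entanglement gained by transmitting C is bounded by the carrier's nonclassical correlations.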
Abstract:
This paper discusses methods of using the Internet as a communications medium between distributed generator sites to provide new forms of loss-of-mains protection. An analysis of the quality of the communications channels between several nodes on the network was carried out experimentally. It is shown that Internet connections in urban environments are already capable of providing real-time power system protection, whilst rural Internet connections are borderline suitable and cannot yet be recommended as a primary method of protection. Two strategies for providing loss-of-mains protection over Internet protocol are considered: broadcast of a reference frequency or phasor representing the utility, and an Internet-based inter-tripping scheme.
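As an illustration of the first strategy, here is a minimal Python sketch that trips on a sustained mismatch between the locally measured frequency and a utility reference frequency broadcast over the Internet. All names, thresholds, and the fail-safe policy below are illustrative assumptions, not values or logic taken from the paper.

# Hedged sketch of the reference-frequency broadcast strategy: a distributed
# generator site compares its locally measured frequency against a utility
# reference received over the Internet and trips if they diverge for long enough.

FREQ_TOLERANCE_HZ = 0.5      # assumed deviation threshold
TRIP_AFTER_SAMPLES = 10      # assumed persistence requirement (debounce)

def loss_of_mains_detected(local_freqs, reference_freqs,
                           tol=FREQ_TOLERANCE_HZ, persistence=TRIP_AFTER_SAMPLES):
    """Return True if the local frequency diverges from the broadcast utility
    reference for `persistence` consecutive samples (illustrative logic only)."""
    consecutive = 0
    for local, reference in zip(local_freqs, reference_freqs):
        if reference is None:
            consecutive += 1          # lost channel treated as suspect (assumed policy)
        elif abs(local - reference) > tol:
            consecutive += 1
        else:
            consecutive = 0
        if consecutive >= persistence:
            return True
    return False

# Example: an island forms and the local frequency drifts away from the utility.
reference = [50.0] * 20
local = [50.0] * 5 + [50.8] * 15
print(loss_of_mains_detected(local, reference))  # True

A real deployment would act on phasor measurements with timestamps and would have to decide how to behave when the Internet channel itself fails, which is exactly the channel-quality question the paper examines.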
Abstract:
We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds, where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges.
We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than O(log Delta) times its original diameter, where Delta is the maximum degree of the network initially. We note that for many peer-to-peer systems, Delta is polylogarithmic, so the diameter increase would be an O(log log n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round, and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.
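The following Python sketch (using networkx) illustrates the flavour of such a repair, though it is not the paper's algorithm: when a node is deleted, its surviving neighbours are joined by a balanced binary tree of new edges, so each neighbour gains at most 3 edges (one parent and two children) and routes through the deleted node take only an O(log Delta) detour.

# Hedged sketch, not the paper's algorithm: replace a deleted node by a
# balanced binary tree of new edges over its former neighbours.
import networkx as nx

def heal_deletion(graph, deleted_node):
    """Wire the deleted node's neighbours into a heap-shaped binary tree."""
    neighbours = list(graph.neighbors(deleted_node))
    graph.remove_node(deleted_node)
    # Neighbour i is connected to neighbours 2i+1 and 2i+2 (heap indexing),
    # giving a tree of depth O(log Delta) and at most 3 new edges per node.
    for i, u in enumerate(neighbours):
        for j in (2 * i + 1, 2 * i + 2):
            if j < len(neighbours):
                graph.add_edge(u, neighbours[j])
    return graph

# Example: a star centre of degree 8 is deleted; its leaves stay within
# roughly 2*log2(8) hops of each other afterwards.
g = nx.star_graph(8)     # node 0 is the centre
heal_deletion(g, 0)
print(nx.diameter(g))    # prints 5 for this example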
Abstract:
We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick "repairs," which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been ℓ in the graph formed by considering only the adversarial insertions (not the adversarial deletions) will be at distance at most ℓ log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network. © Springer-Verlag 2012.
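The abstract states guarantees rather than the repair mechanism itself, so the following Python sketch only expresses those two guarantees as checkable invariants against a hypothetical insertion-only graph; the log base and the handling of disconnected pairs are assumptions.

# Hedged sketch: check the stated Forgiving Graph guarantees against the graph
# formed by adversarial insertions only ("insert_only") and the graph after
# deletions and repairs ("actual"). Illustrative only, not the data structure.
import math
import networkx as nx

def check_forgiving_graph_invariants(insert_only, actual):
    n = insert_only.number_of_nodes()     # total vertices seen so far
    log_n = math.log2(max(n, 2))          # base assumed; abstract says only "log n"
    # Degree guarantee: at most 3d, where d is the degree with insertions only.
    for v in actual.nodes:
        assert actual.degree(v) <= 3 * insert_only.degree(v)
    # Stretch guarantee: pairwise distances grow by at most a log n factor.
    for u in actual.nodes:
        for w in actual.nodes:
            if u == w or not nx.has_path(actual, u, w) or not nx.has_path(insert_only, u, w):
                continue
            d_ideal = nx.shortest_path_length(insert_only, u, w)
            d_actual = nx.shortest_path_length(actual, u, w)
            assert d_actual <= d_ideal * log_n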
Distributed Switch-and-Stay Combining in Cognitive Relay Networks under Spectrum Sharing Constraints