18 results for CCF
at Boston University Digital Common
Abstract:
Numerous problems can be modeled as traffic through a network in which constraints regulate flow. Vehicular road travel, computer networks, and cloud-based resource distribution, among others, all have natural representations in this manner. As these networks grow in size and/or complexity, analysis and certification of their safety invariants become increasingly costly. The NetSketch formalism introduces a lightweight verification framework that allows for greater scalability than traditional analysis methods. The NetSketch tool was developed to provide the power of this formalism through an easy-to-use and intuitive user interface.
Abstract:
A secure sketch (defined by Dodis et al.) is an algorithm that on an input w produces an output s such that w can be reconstructed given its noisy version w' and s. Security is defined in terms of two parameters m and m̃: if w comes from a distribution of entropy m, then a secure sketch guarantees that the distribution of w conditioned on s has entropy m̃, where λ = m − m̃ is called the entropy loss. In this note we show that the entropy loss of any secure sketch (or, more generally, any randomized algorithm) on any distribution is no more than it is on the uniform distribution.
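A worked restatement of the claim, in the average min-entropy notation of Dodis et al. (the input length n and the uniform distribution U_n over n-bit strings are notational assumptions not spelled out in the abstract): the entropy loss measured on the uniform distribution bounds the loss on every distribution W.

    \widetilde{H}_\infty\bigl(W \mid \mathrm{SS}(W)\bigr) \;\ge\; H_\infty(W) - \lambda,
    \qquad \text{where } \lambda \;=\; n - \widetilde{H}_\infty\bigl(U_n \mid \mathrm{SS}(U_n)\bigr).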
Abstract:
We define and construct efficient depth universal and almost size universal quantum circuits. Such circuits can be viewed as general purpose simulators for central classes of quantum circuits and can be used to capture the computational power of the circuit class being simulated. For depth we construct universal circuits whose depth is the same order as the circuits being simulated. For size, there is a log factor blow-up in the universal circuits constructed here. We prove that this construction is nearly optimal. Our results apply to a number of well-studied quantum circuit classes.
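Read as asymptotic bounds, the guarantees amount to the following, for a universal circuit U simulating any circuit C from the class in question (a hedged paraphrase; the abstract only says "a log factor blow-up", so taking the logarithm's argument to be the simulated size is an assumption):

    \mathrm{depth}(U) \;=\; O\bigl(\mathrm{depth}(C)\bigr),
    \qquad
    \mathrm{size}(U) \;=\; O\bigl(\mathrm{size}(C) \cdot \log \mathrm{size}(C)\bigr).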
Abstract:
We consider a Delay Tolerant Network (DTN) whose users (nodes) are connected by an underlying Mobile Ad hoc Network (MANET) substrate. Users can declaratively express high-level policy constraints on how "content" should be routed. For example, content may be diverted through an intermediary DTN node for the purposes of preprocessing, authentication, etc. To support such capability, we implement Predicate Routing [7], where high-level constraints of DTN nodes are mapped into low-level routing predicates at the MANET level. Our testbed uses a Linux system architecture and leverages User Mode Linux [2] to emulate every node running a DTN Reference Implementation code [5]. In our initial prototype, we use the Ad hoc On-Demand Distance Vector (AODV) MANET routing protocol. We use the network simulator ns-2 (ns-emulation version) to simulate the mobility and wireless connectivity of both DTN and MANET nodes. We present preliminary throughput results that demonstrate the efficient and correct operation of propagating routing predicates and, as a side effect, the performance benefit of content re-routing that dynamically (on-demand) breaks the underlying end-to-end TCP connection into shorter-length TCP connections.
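To make the mapping from high-level policy to low-level predicate concrete, here is a minimal Python sketch; the node names, content types, and the policy/routing functions are hypothetical illustrations and are not taken from the testbed or the DTN Reference Implementation.

    # Minimal sketch (not the testbed's actual code): mapping a high-level DTN
    # policy constraint to a low-level MANET routing predicate.
    from dataclasses import dataclass

    @dataclass
    class Bundle:
        src: str           # originating DTN node
        dst: str           # destination DTN node
        content_type: str  # e.g. "image", "text"

    # High-level policy: divert all "image" content through an intermediary
    # DTN node "M" for preprocessing before it reaches its destination.
    def policy_next_dtn_hop(bundle: Bundle) -> str:
        if bundle.content_type == "image" and bundle.dst != "M":
            return "M"             # diverted through the intermediary
        return bundle.dst          # otherwise route directly

    # Low-level predicate installed at the MANET layer: given the chosen DTN hop,
    # pick the MANET next hop from a (hypothetical) AODV-derived routing table.
    aodv_routes = {"M": "n3", "D": "n7"}   # DTN node -> MANET next hop

    def manet_next_hop(bundle: Bundle) -> str:
        return aodv_routes[policy_next_dtn_hop(bundle)]

    print(manet_next_hop(Bundle(src="S", dst="D", content_type="image")))  # n3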
Abstract:
This position paper outlines a new network architecture, i.e., a style of construction that identifies the objects and how they relate. We do not specify particular protocol implementations or specific interfaces and policies. After all, it should be possible to change protocols in an architecture without changing the architecture. Rather, we outline the repeating patterns and structures, and how the proposed model would cope with the challenges faced by today's Internet (and that of the future). Our new architecture is based on the following principle: Application processes communicate via a distributed inter-process communication (IPC) facility. The application processes that make up this facility provide a protocol that implements an IPC mechanism, and a protocol for managing distributed IPC (routing, security, and other management tasks). Existing implementation strategies, algorithms, and protocols can be cast and used within our proposed new structure.
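A toy sketch of the recursive IPC structure described above, assuming an illustrative three-layer stack; the class and layer names are invented for illustration and are not part of the proposed architecture's specification.

    # Illustrative only: a distributed IPC facility (layer) provides communication
    # to the processes above it and is itself built from IPC provided by a lower layer.
    class IPCFacility:
        def __init__(self, name, lower=None):
            self.name = name
            self.lower = lower          # the IPC facility this layer is built on

        def send(self, src, dst, payload):
            # Data-transfer protocol of this layer (trivially modeled here).
            print(f"[{self.name}] {src} -> {dst}")
            if self.lower is not None:
                # Recursion: this layer's traffic is carried as payload of the
                # layer below, between the processes that implement this layer.
                self.lower.send(f"{self.name}:{src}", f"{self.name}:{dst}", payload)

    link_layer = IPCFacility("link-DIF")
    net_layer = IPCFacility("network-DIF", lower=link_layer)
    app_layer = IPCFacility("app-DIF", lower=net_layer)
    app_layer.send("client", "server", b"hello")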
Abstract:
In this paper we introduce a theory of policy routing dynamics based on fundamental axioms of routing update mechanisms. We develop a dynamic policy routing model (DPR) that extends the static formalism of the stable paths problem (introduced by Griffin et al.) with discrete synchronous time. DPR captures the propagation of path changes in any dynamic network irrespective of its time-varying topology. We introduce several novel structures, such as causation chains, dispute fences, and policy digraphs, that model different aspects of routing dynamics and provide insight into how these dynamics manifest in a network. We demonstrate the practicality of the theoretical foundation provided by DPR on two fundamental problems: routing dynamics minimization and policy conflict detection. The dynamics minimization problem utilizes policy digraphs, which capture the dependencies in routing policies irrespective of underlying topology dynamics, to solve a graph optimization problem. This optimization problem explicitly minimizes the number of routing update messages in a dynamic network by optimally changing the path preferences of a minimal subset of nodes. The conflict detection problem, on the other hand, utilizes a theoretical result of DPR whereby the root cause of a causation cycle (i.e., a cycle of routing update messages) can be precisely inferred as either a transient route flap or a dispute wheel (i.e., a policy conflict). Using this result we develop SafetyPulse, a token-based distributed algorithm to detect policy conflicts in a dynamic network. SafetyPulse is privacy preserving, computationally efficient, and provably correct.
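As a rough illustration of how cyclic dependencies among path preferences can be surfaced (a coarse stand-in for the paper's causation-cycle and dispute-wheel machinery, on a made-up three-node instance):

    # Toy example: each node's most preferred path to destination "d" goes through
    # another node, giving the dependency edges of a (tiny) policy digraph.
    best_path = {1: (2, "d"), 2: (3, "d"), 3: (1, "d")}
    deps = {n: p[0] for n, p in best_path.items() if p[0] != "d"}

    def find_cycle(deps):
        # Follow each node's dependency until we revisit a node or reach "d".
        for start in deps:
            seen, cur = [], start
            while cur in deps and cur not in seen:
                seen.append(cur)
                cur = deps[cur]
            if cur in seen:
                return seen[seen.index(cur):]
        return None

    print(find_cycle(deps))   # [1, 2, 3]: a cyclic dependency, hinting at a conflict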
Abstract:
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
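The following toy best-response loop sketches one simplified flavor of such a game; the proportional cost-sharing rule, capacities, and demands are assumptions chosen for illustration, not the paper's definitions.

    # Toy collocation game: each player places its demand on one unit-capacity
    # resource, pays the fixed resource cost split in proportion to its demand,
    # and players take turns moving to the cheapest feasible resource.
    RESOURCE_COST = 1.0
    CAPACITY = 1.0
    demands = {"a": 0.5, "b": 0.3, "c": 0.4, "d": 0.6}

    def cost(player, resource, assign):
        load = sum(demands[p] for p, r in assign.items() if r == resource)
        if assign[player] != resource:
            load += demands[player]
        if load > CAPACITY:
            return float("inf")                        # infeasible placement
        return RESOURCE_COST * demands[player] / load  # proportional cost share

    assign = {p: i for i, p in enumerate(demands)}     # start on separate resources
    changed = True
    while changed:
        changed = False
        for p in demands:
            best = min(range(len(demands)), key=lambda r: cost(p, r, assign))
            if cost(p, best, assign) < cost(p, assign[p], assign) - 1e-12:
                assign[p] = best
                changed = True
    print(assign)   # an assignment from which no player wants to deviate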
Abstract:
The initial phase in a content distribution (file sharing) scenario is a delicate phase due to the lack of global knowledge and the dynamics of the overlay. An unwise distribution of the pieces in this phase can cause delays in reaching steady state, thus increasing file download times. We devise a scheduling algorithm at the seed (the source peer with the full content), based on a proportional-fair approach, and we implement it on a real file sharing client [1]. In dynamic overlays, our solution improves the average download time of a standard BitTorrent-like protocol by up to 25%.
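A hand-wavy sketch of what a proportional-fair choice at the seed could look like (my own simplification under assumed inputs, not the scheduler specified in the paper):

    # Sketch: serve the requesting peer with the highest ratio of achievable upload
    # rate to smoothed rate already received, and give it its rarest missing piece.
    from collections import Counter

    def pf_pick(requests, achievable, avg_served, piece_counts):
        # requests: peer -> set of missing piece ids
        # achievable: peer -> currently achievable upload rate to that peer
        # avg_served: peer -> exponentially averaged rate served so far
        # piece_counts: Counter of how many peers already hold each piece
        peer = max(requests, key=lambda p: achievable[p] / max(avg_served[p], 1e-9))
        piece = min(requests[peer], key=lambda i: piece_counts[i])
        return peer, piece

    requests = {"p1": {0, 1}, "p2": {1, 2}}
    achievable = {"p1": 100.0, "p2": 80.0}
    avg_served = {"p1": 50.0, "p2": 10.0}
    piece_counts = Counter({0: 3, 1: 1, 2: 2})
    print(pf_pick(requests, achievable, avg_served, piece_counts))  # ('p2', 1)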
Abstract:
We consider a Delay Tolerant Network (DTN) whose users (nodes) are connected by an underlying Mobile Ad hoc Network (MANET) substrate. Users can declaratively express high-level policy constraints on how “content” should be routed. For example, content can be directed through an intermediary DTN node for the purposes of preprocessing, authentication, etc., or content from a malicious MANET node can be dropped. To support such content routing at the DTN level, we implement Predicate Routing [1], where high-level constraints of DTN nodes are mapped into low-level routing predicates within the MANET nodes. Our testbed [2] uses a Linux system architecture with User Mode Linux [3] to emulate every DTN node running a DTN Reference Implementation code [4]. In our initial architecture prototype, we use the Ad hoc On-Demand Distance Vector (AODV) routing protocol at the MANET level. We use the network simulator ns-2 (ns-emulation version) to simulate the wireless connectivity of both DTN and MANET nodes. Preliminary results show the efficient and correct operation of propagating routing predicates. For the application of content re-routing through an intermediary, the results also demonstrate, as a side effect, the performance benefit of dynamically (on-demand) breaking the underlying end-to-end TCP connections into shorter-length TCP connections.
Abstract:
We introduce the Dynamic Policy Routing (DPR) model that captures the propagation of route updates under arbitrary changes in topology or path preferences. DPR introduces the notion of causation chains where the route flap at one node causes a flap at the next node along the chain. Using DPR, we model the Gao-Rexford (economic) guidelines that guarantee the safety (i.e., convergence) of policy routing. We establish three principles of safe policy routing dynamics. The non-interference principle provides insight into which ASes can directly induce route changes in one another. The single cycle principle and the multi-tiered cycle principle provide insight into how cycles of routing updates can manifest in any network. We develop INTERFERENCEBEAT, a distributed algorithm that propagates a small token along causation chains to check adherence to these principles. To enhance the diagnosis power of INTERFERENCEBEAT, we model four violations of the Gao-Rexford guidelines (e.g., transiting between peers) and characterize the resulting dynamics.
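For concreteness, the export guideline whose violation "transiting between peers" exemplifies can be sketched as a one-line check (the relationship labels are illustrative, not taken from the paper's model):

    # Gao-Rexford export guideline: routes learned from a peer or a provider may be
    # exported only to customers; routes learned from a customer may go to anyone.
    def export_allowed(learned_from: str, export_to: str) -> bool:
        # learned_from / export_to are one of "customer", "peer", "provider"
        if learned_from == "customer":
            return True
        return export_to == "customer"

    print(export_allowed("peer", "peer"))       # False: transit between peers
    print(export_allowed("customer", "peer"))   # True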
Abstract:
We revisit the problem of connection management for reliable transport. At one extreme, a pure soft-state (SS) approach (as in Delta-t [9]) safely removes the state of a connection at the sender and receiver once the state timers expire, without the need for explicit removal messages, and new connections are established without an explicit handshaking phase. In contrast, a hybrid hard-state/soft-state (HS+SS) approach (as in TCP) uses both explicit handshaking and timer-based management of the connection’s state. In this paper, we consider the worst-case scenario of reliable single-message communication, and develop a common analytical model that can be instantiated to capture either the SS approach or the HS+SS approach. We compare the two approaches in terms of goodput, message overhead, and state overhead. We also use simulations to compare against other approaches, and evaluate them in terms of correctness (with respect to data loss and duplication) and robustness to bad network conditions (high message loss rate and variable channel delays). Our results show that the SS approach is more robust and has lower message overhead. On the other hand, SS requires more memory to keep connection states, which reduces goodput. Given that memory is getting bigger and cheaper, SS presents the best choice over bandwidth-constrained, error-prone networks.
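A bare-bones sketch of the soft-state idea, where per-connection receiver state is purged by a timer rather than by explicit teardown; the lifetime constant and the duplicate-filtering detail are assumptions for illustration, not the paper's analytical model.

    # Sketch: the receiver keeps per-connection state only long enough to filter
    # duplicates, and purges it when a timer expires (no explicit close message).
    import time

    STATE_LIFETIME = 2.0       # seconds; would be derived from Delta-t-style bounds

    class SoftStateReceiver:
        def __init__(self):
            self.conns = {}    # conn_id -> (last_seq_seen, expiry_time)

        def deliver(self, conn_id, seq, payload):
            now = time.monotonic()
            # Purge expired connection state (no explicit teardown needed).
            self.conns = {c: v for c, v in self.conns.items() if v[1] > now}
            last = self.conns.get(conn_id, (-1, 0.0))[0]
            self.conns[conn_id] = (max(last, seq), now + STATE_LIFETIME)
            if seq <= last:
                return None            # duplicate suppressed
            return payload             # delivered once per sequence number

    r = SoftStateReceiver()
    print(r.deliver("c1", 0, b"msg"))  # b'msg'
    print(r.deliver("c1", 0, b"msg"))  # None (duplicate)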
Abstract:
As the Internet has evolved and grown, an increasing number of nodes (hosts or autonomous systems) have become multihomed, i.e., a node is connected to more than one network. Mobility can be viewed as a special case of multihoming: as a node moves, it unsubscribes from one network and subscribes to another, which is akin to one interface becoming inactive and another active. The current Internet architecture has been facing significant challenges in effectively dealing with multihoming (and consequently mobility). The Recursive INternet Architecture (RINA) [1] was recently proposed as a clean-slate solution to the current problems of the Internet. In this paper, we perform an average-case cost analysis to compare the multihoming/mobility support of RINA against that of other approaches such as LISP and MobileIP. We also validate our analysis using trace-driven simulation.
Abstract:
The TCP/IP architecture was originally designed without taking security measures into consideration. Over the years, it has been subjected to many attacks, which has led to many patches to counter them. Our investigations into the fundamental principles of networking have shown that carefully following an abstract model of Interprocess Communication (IPC) addresses many problems [1]. Guided by this IPC principle, we designed a clean-slate Recursive INternet Architecture (RINA) [2]. In this paper, we show how, without the aid of cryptographic techniques, the bare-bones architecture of RINA can resist most of the security attacks faced by TCP/IP. We also show how hard it is for an intruder to compromise RINA. Then, we show how RINA inherently supports security policies on a more manageable, on-demand basis, in contrast to the rigid, piecemeal approach of TCP/IP.
Abstract:
We propose Trade & Cap (T&C), an economics-inspired mechanism that incentivizes users to voluntarily coordinate their consumption of the bandwidth of a shared resource (e.g., a DSLAM link) so as to converge on what they perceive to be an equitable allocation, while ensuring efficient resource utilization. Under T&C, rather than acting as an arbiter, an Internet Service Provider (ISP) acts as an enforcer of what the community of rational users sharing the resource decides is a fair allocation of that resource. Our T&C mechanism proceeds in two phases. In the first, software agents acting on behalf of users engage in a strategic trading game in which each user agent selfishly chooses bandwidth slots to reserve in support of primary, interactive network usage activities. In the second phase, each user is allowed to acquire additional bandwidth slots in support of a presumed open-ended need for fluid bandwidth, catering to secondary applications. The acquisition of this fluid bandwidth is subject to the remaining "buying power" of each user and to prevalent "market prices" – both of which are determined by the results of the trading phase and a desirable aggregate cap on link utilization. We present analytical results that establish the underpinnings of our T&C mechanism, including game-theoretic results pertaining to the trading phase, and pricing of fluid bandwidth allocation pertaining to the capping phase. Using real network traces, we present extensive experimental results that demonstrate the benefits of our scheme, which we also show to be practical by highlighting the salient features of an efficient implementation architecture.
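A simplified numerical sketch of the second (fluid-bandwidth) phase as described above, with made-up numbers and a single market-clearing price; this illustrates the idea only and is not the pricing scheme derived in the paper.

    # Leftover "buying power" from the trading phase buys fluid bandwidth at a price
    # chosen so the total allocation meets the aggregate cap on link utilization.
    link_capacity = 100.0
    aggregate_cap = 0.8                      # desired utilization cap
    reserved = 30.0                          # slots reserved in the trading phase
    buying_power = {"u1": 6.0, "u2": 3.0, "u3": 1.0}

    fluid_budget = aggregate_cap * link_capacity - reserved      # 50.0
    price = sum(buying_power.values()) / fluid_budget            # 0.2 per unit
    fluid_alloc = {u: bp / price for u, bp in buying_power.items()}

    print(price)         # 0.2
    print(fluid_alloc)   # {'u1': 30.0, 'u2': 15.0, 'u3': 5.0} -> sums to the budget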
Abstract:
NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system while retaining sufficient information about it to carry out future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis. The compositional analysis is based on a strongly-typed Domain-Specific Language (DSL) for describing and reasoning about constrained-flow networks at various levels of sketchiness along with invariants that need to be enforced thereupon. In this paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user-interface when used in "sketch mode", and prove its soundness relative to appropriately-defined notions of validity. In a companion paper [6], we overview NetSketch, highlight its salient features, and illustrate how it could be used in two applications: the management/shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications).
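Purely as an illustration of the compositional, type-based flavor (the actual DSL is defined in the paper), a component's flow bounds can be treated as an interval type that composition must respect:

    # Toy illustration, not the NetSketch DSL: each component carries an interval
    # "type" bounding the flow it accepts and the flow it emits; composition
    # type-checks by interval containment instead of re-analyzing the whole system.
    from dataclasses import dataclass

    @dataclass
    class Component:
        accepts: tuple   # (lo, hi) flow the component can safely take in
        emits: tuple     # (lo, hi) flow it may produce on its output

    def compose(a: Component, b: Component) -> Component:
        lo, hi = a.emits
        if not (b.accepts[0] <= lo and hi <= b.accepts[1]):
            raise TypeError("unsafe composition: a's output range exceeds b's input range")
        return Component(accepts=a.accepts, emits=b.emits)

    shaper = Component(accepts=(0, 100), emits=(0, 40))
    queue  = Component(accepts=(0, 50),  emits=(0, 50))
    print(compose(shaper, queue))   # safe: shaper never emits more than queue accepts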