Abstract:
Commonly, research work in routing for delay tolerant networks (DTN) assumes that node encounters are predestined, in the sense that they are the result of unknown, exogenous processes that control the mobility of these nodes. In this paper, we argue that for many applications such an assumption is too restrictive: while the spatio-temporal coordinates of the start and end points of a node's journey are determined by exogenous processes, the specific path that a node may take in space-time, and hence the set of nodes it may encounter, could be controlled so as to improve the performance of DTN routing. To that end, we consider a setting in which each mobile node is governed by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged for DTN message delivery purposes. We define the Mobility Coordination Problem (MCP) for DTNs as follows: given a set of nodes, each with its own schedule, and a set of messages to be exchanged between these nodes, devise a set of node encounters that minimizes message delivery delays while satisfying all node schedules. The MCP for DTNs is general enough that it allows us to model and evaluate some of the existing DTN schemes, including data mules and message ferries. In this paper, we show that MCP for DTNs is NP-hard and propose two detour-based approaches to solve the problem. The first (DMD) is a centralized heuristic that leverages knowledge of the message workload to suggest specific detours that optimize message delivery. The second (DNE) is a distributed heuristic that is oblivious to the message workload, and which selects detours so as to maximize node encounters. We evaluate the performance of these detour-based approaches using extensive simulations based on synthetic workloads as well as real schedules obtained from taxi logs in a major metropolitan area. Our evaluation shows that our centralized, workload-aware DMD approach yields the best performance in terms of message delay and delivery success ratio, and that our distributed, workload-oblivious DNE approach yields favorable performance when compared to approaches that require the use of data mules and message ferries.
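The feasibility test at the heart of such detour-based heuristics can be made concrete. Below is a minimal, hypothetical sketch (Python); the schedule representation, speed model, and all names are ours for exposition, not the paper's DMD/DNE: a detour to a proposed meeting point is acceptable only if the extra travel time it adds fits within the minimum slack of the node's remaining deadlines.

```python
# Illustrative sketch of a slack-based detour feasibility test; the schedule
# format, speed model, and function names are assumptions for exposition.
import math

def travel(a, b, speed=1.0):
    """Euclidean travel time between locations a and b."""
    return math.dist(a, b) / speed

def arrival_times(pos, t, stops):
    """Baseline arrival time at each scheduled stop (location, deadline)."""
    times = []
    for loc, _deadline in stops:
        t += travel(pos, loc)
        times.append(t)
        pos = loc
    return times

def detour_feasible(pos, t, stops, meeting_point):
    """A detour through meeting_point (before the next stop) is feasible iff
    the extra travel time fits within the minimum slack over all deadlines."""
    next_loc = stops[0][0]
    extra = travel(pos, meeting_point) + travel(meeting_point, next_loc) \
            - travel(pos, next_loc)
    slack = min(deadline - arr
                for (_loc, deadline), arr in zip(stops, arrival_times(pos, t, stops)))
    return extra <= slack

# A node at (0, 0) at time 0 must reach (10, 0) by t = 14 (slack = 4):
stops = [((10.0, 0.0), 14.0)]
print(detour_feasible((0.0, 0.0), 0.0, stops, (5.0, 3.0)))  # True  (extra ~1.66)
print(detour_feasible((0.0, 0.0), 0.0, stops, (5.0, 8.0)))  # False (extra ~8.87)
```

A workload-aware heuristic in the spirit of DMD would then rank feasible detours by the message delay they save, while a workload-oblivious one in the spirit of DNE would rank them by the node encounters they create.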
Abstract:
We consider a Delay Tolerant Network (DTN) whose users (nodes) are connected by an underlying Mobile Ad hoc Network (MANET) substrate. Users can declaratively express high-level policy constraints on how "content" should be routed. For example, content may be diverted through an intermediary DTN node for the purposes of preprocessing, authentication, etc. To support such capability, we implement Predicate Routing [7], where high-level constraints of DTN nodes are mapped into low-level routing predicates at the MANET level. Our testbed uses a Linux system architecture and leverages User Mode Linux [2] to emulate every node running the DTN Reference Implementation code [5]. In our initial prototype, we use the Ad hoc On-Demand Distance Vector (AODV) MANET routing protocol. We use the network simulator ns-2 (ns-emulation version) to simulate the mobility and wireless connectivity of both DTN and MANET nodes. We present preliminary throughput results that demonstrate the efficient and correct operation of propagating routing predicates and, as a side effect, the performance benefit of content re-routing that dynamically (on-demand) breaks the underlying end-to-end TCP connection into shorter-length TCP connections.
Abstract:
A foundational issue underlying many overlay network applications, ranging from routing to peer-to-peer file sharing, is that of connectivity management, i.e., folding new arrivals into an existing overlay, and rewiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist, a distributed overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using extensive measurements of paths between nodes, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we use a multiplayer peer-to-peer game to demonstrate the value of Egoist to end-user applications. This technical report supersedes BUCS-TR-2007-013.
Abstract:
We present an online distributed algorithm, the Causation Logging Algorithm (CLA), in which Autonomous Systems (ASes) in the Internet individually report route oscillations/flaps they experience to a central Internet Routing Registry (IRR). The IRR aggregates these reports and may observe what we call causation chains, where each node on the chain caused a route flap at the next node along the chain. A chain may also contain a causation cycle. The type of an observed causation chain/cycle allows the IRR to infer the underlying policy routing configuration (i.e., the system of economic relationships and constraints on route/path preferences). Our algorithm is based on a formal policy routing model that captures the propagation dynamics of route flaps under arbitrary changes in topology or path preferences. We derive invariant properties of causation chains/cycles for ASes that conform to economic relationships based on the popular Gao-Rexford model. The Gao-Rexford model is known to be safe in the sense that the system always converges to a stable set of paths under static conditions. Our CLA algorithm recovers the type/properties of an observed causation chain in the underlying system and determines whether it conforms to the safe Gao-Rexford economic model. Causes of nonconformity can be diagnosed by comparing the properties of the observed causation chains with those predicted from different variants of the Gao-Rexford model.
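As a toy illustration of the aggregation step, the sketch below (Python; the report format is an assumption for exposition, not the paper's specification) stitches individual flap reports into causation chains, flagging an AS that reappears on the current chain as a causation cycle.

```python
# Hypothetical IRR-side aggregation: each report says which upstream AS's
# flap caused this AS to flap (None marks a root cause).
from collections import defaultdict

reports = [("AS2", "AS1"), ("AS3", "AS2"), ("AS4", "AS3"), ("AS1", None)]

caused = defaultdict(list)            # upstream AS -> ASes it caused to flap
for node, cause in reports:
    if cause is not None:
        caused[cause].append(node)

def chains_from(root, path=()):
    """Enumerate causation chains from a root cause; a node that reappears
    on the current path marks a causation cycle."""
    path = path + (root,)
    if root in path[:-1]:             # causation cycle detected
        yield path
        return
    children = caused.get(root, [])
    if not children:
        yield path
        return
    for child in children:
        yield from chains_from(child, path)

for root in (n for n, c in reports if c is None):
    for chain in chains_from(root):
        print(" -> ".join(chain))     # AS1 -> AS2 -> AS3 -> AS4
```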
Abstract:
Localization is an essential feature for many mobile wireless applications. Data collected by applications such as environmental monitoring, package tracking, or position tracking has no meaning without knowing where it was collected. Other applications use location information as a building block, for example geographic routing protocols, data dissemination protocols, and location-based services such as sensing coverage. Existing localization techniques trade off several factors, such as the need for special hardware, the level of accuracy, and the computational power required. In this paper, we present an algorithm that extracts location constraints from connectivity information. Our solution, which requires no special hardware and only a small number of landmark nodes, uses two types of location constraints. Spatial constraints derive estimated locations by observing which nodes are within communication range of each other. Temporal constraints refine the areas computed by the spatial constraints, using properties of time and space extracted from a contact trace. The intuition behind the temporal constraints is to limit the possible locations of a node using its previous and future locations. To quantify the improvement obtained by refining the nodes' estimated areas with temporal information, we performed simulations using synthetic and real contact traces. The results show this improvement and also the difficulties of using real traces.
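A minimal sketch of the two constraint types on a discretized field (Python; the range, speed, and grid values are made-up parameters for illustration):

```python
# Spatial constraints: intersect the communication disks of heard landmarks.
# Temporal constraints: prune cells unreachable from the previous estimate
# given a maximum speed. All constants here are illustrative assumptions.
import math

RANGE = 10.0   # assumed communication range
V_MAX = 2.0    # assumed maximum node speed

def spatial_area(grid, landmarks_heard):
    """Cells within communication range of every landmark the node heard."""
    return {c for c in grid
            if all(math.dist(c, l) <= RANGE for l in landmarks_heard)}

def temporal_refine(area, prev_area, dt):
    """Keep only cells reachable from some previously feasible cell in dt."""
    reach = V_MAX * dt
    return {c for c in area
            if any(math.dist(c, p) <= reach for p in prev_area)}

grid = [(x, y) for x in range(20) for y in range(20)]
area_t1 = spatial_area(grid, [(3, 3), (12, 3)])          # heard two landmarks
area_t2 = temporal_refine(spatial_area(grid, [(12, 3)]), # heard one landmark,
                          area_t1, dt=1.0)               # refined by history
print(len(area_t1), len(area_t2))    # the temporal pass shrinks the estimate
```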
Abstract:
Overlay networks have been used to add and enhance functionality for end-users without requiring modifications to the Internet core mechanisms. Overlay networks have been used for a variety of popular applications, including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols by utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection for overlay protocol design and service provisioning, respectively. This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework, the notion of a user's "best response" wiring strategy is formalized as a k-median problem on an asymmetric distance and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such an extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies. To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST's neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads. This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications where each of n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to its performance on top of existing overlays.
In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users. To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol that migrates servers based on end-user demand and only local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
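One way to write down the "best response" formalization described above (the notation here is ours, not necessarily the thesis's): a selfish node $u$ with an out-degree budget of $k$ links, preference weight $p_u(v)$ for destination $v$, and asymmetric pairwise distances $d_{xy}$ solves

$$
\min_{\substack{S_u \subseteq V \setminus \{u\} \\ |S_u| = k}} \;\; \sum_{v \neq u} p_u(v) \, \min_{w \in S_u} \left( d_{uw} + d_{wv} \right),
$$

i.e., it selects the $k$ "medians" $S_u$ that minimize its preference-weighted distance to all destinations, routing each destination through its best chosen neighbor; this is a k-median instance on an asymmetric distance.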
Abstract:
The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owners of these devices. These challenges preclude using centralized control and preclude considering services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs) and Message Delivery Applications (MDAs). In the context of FMAs, this thesis presents two techniques that are well-suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally, and thus reduce communication overheads. This thesis then recognizes node mobility as a resource to be leveraged, and in that respect proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of the supported applications. The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) can be used effectively for local resource management. Second, judicious leverage and coordination of node mobility can lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to real-world huge graphs in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing offline, for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the current approach, which selects landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
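Concretely, on an undirected graph the landmark embeddings yield the standard triangle-inequality bounds used for estimation: for a landmark set $L$,

$$
\max_{\ell \in L} \big| d(u,\ell) - d(\ell,v) \big| \;\le\; d(u,v) \;\le\; \min_{\ell \in L} \big( d(u,\ell) + d(\ell,v) \big),
$$

and the upper bound is commonly returned as the estimate; it is exact whenever some landmark lies on a shortest u-v path, which is why central landmarks tend to work well.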
Abstract:
This paper proposes the use of in-network caches (which we call Angels) to reduce the Minimum Distribution Time (MDT) of a file from a seeder (a node that possesses the file) to a set of leechers (nodes interested in downloading the file). An Angel is not a leecher in the sense that it is not interested in receiving the entire file; rather, it is interested in minimizing the MDT to all leechers, and as such uses its storage and up/down-link capacity to cache and forward parts of the file to other peers. We extend the analytical results of Kumar and Ross [1] to account for the presence of angels by deriving a new lower bound for the MDT. We show that this newly derived lower bound is tight by proposing a distribution strategy under the assumptions of a fluid model. We then present a GroupTree heuristic that addresses the impracticalities of the fluid model. We evaluate our designs through simulations which show that our GroupTree heuristic outperforms other heuristics, that it scales well as the number of leechers increases, and that it closely approaches the optimal theoretical bounds.
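For context, the fluid-model baseline being extended is the Kumar-Ross lower bound: for a file of size $F$, a seeder with upload capacity $u_s$, and $n$ leechers with upload capacities $u_i$ and download capacities $d_i$,

$$
T^{*} \;\ge\; \max\left\{ \frac{F}{u_s},\; \frac{F}{\min_i d_i},\; \frac{nF}{u_s + \sum_{i=1}^{n} u_i} \right\},
$$

whose three terms capture the seeder's upload bottleneck, the slowest leecher's download bottleneck, and the aggregate upload capacity of the system. The paper derives an analogous, tighter bound once the storage and up/down-link capacity contributed by angels enter the system (its exact form is given in the paper).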
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper, we focus on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
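A minimal sketch of the index/query pipeline under the simplest strategy mentioned, with central nodes approximated here by highest degree (the graph, parameters, and names are illustrative):

```python
# Landmark-based distance estimation on an unweighted, undirected graph:
# precompute BFS distances from a few high-degree landmarks, then answer
# queries with the triangle-inequality upper bound min_l d(u,l) + d(l,v).
import networkx as nx

def build_index(G, num_landmarks):
    landmarks = sorted(G.nodes, key=G.degree, reverse=True)[:num_landmarks]
    # one BFS per landmark: distances from that landmark to every node
    return {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}

def estimate(index, u, v):
    """Upper-bound estimate of d(u, v) from the landmark embeddings."""
    return min(d[u] + d[v] for d in index.values())

G = nx.barabasi_albert_graph(1000, 3, seed=1)   # a stand-in test graph
index = build_index(G, num_landmarks=16)
print(estimate(index, 0, 999), nx.shortest_path_length(G, 0, 999))
```

The estimate never undershoots the true distance and matches it exactly when some landmark sits on a shortest path between the queried pair.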
Abstract:
Controlling the mobility pattern of mobile nodes (e.g., robots) to monitor a given field is a well-studied problem in sensor networks. In this setup, absolute control over the nodes' mobility is assumed. Apart from the physical ones, no other constraints are imposed on planning the mobility of these nodes. In this paper, we address a more general version of the problem. Specifically, we consider a setting in which the mobility of each node is externally constrained by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged to achieve a specific coverage distribution of a field. Such a distribution defines the relative importance of different field locations. We define the Constrained Mobility Coordination problem for Preferential Coverage (CMC-PC) as follows: given a field with a desired monitoring distribution, and a number of nodes n, each with its own schedule, we need to coordinate the mobility of the nodes in order to achieve the following two goals: 1) satisfy the schedules of all nodes, and 2) attain the required coverage of the given field. We show that the CMC-PC problem is NP-complete (by reduction from the Hamiltonian Cycle problem). We then propose TFM, a distributed heuristic to achieve field coverage that is as close as possible to the required coverage distribution. We validate the premise of TFM using extensive simulations, as well as taxi logs from a major metropolitan area. We compare TFM to the random mobility strategy; the latter provides a lower bound on performance. Our results show that TFM is very successful in matching the required field coverage distribution, and that it provides at least a two-fold improvement in query success ratio for queries that follow the target coverage distribution of the field.
Abstract:
We consider a Delay Tolerant Network (DTN) whose users (nodes) are connected by an underlying Mobile Ad hoc Network (MANET) substrate. Users can declaratively express high-level policy constraints on how "content" should be routed. For example, content can be directed through an intermediary DTN node for the purposes of preprocessing, authentication, etc., or content from a malicious MANET node can be dropped. To support such content routing at the DTN level, we implement Predicate Routing [1], where high-level constraints of DTN nodes are mapped into low-level routing predicates within the MANET nodes. Our testbed [2] uses a Linux system architecture with User Mode Linux [3] to emulate every DTN node running the DTN Reference Implementation code [4]. In our initial architecture prototype, we use the Ad hoc On-Demand Distance Vector (AODV) routing protocol at the MANET level. We use the network simulator ns-2 (ns-emulation version) to simulate the wireless connectivity of both DTN and MANET nodes. Preliminary results show the efficient and correct operation of propagating routing predicates. For the application of content re-routing through an intermediary, the results also demonstrate, as a side effect, the performance benefit of content re-routing that dynamically (on-demand) breaks the underlying end-to-end TCP connections into shorter-length TCP connections.
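As a toy illustration of the mapping from a high-level constraint to low-level predicates (the representation below is an assumption for exposition, not the testbed's actual format): a policy "route content from S to D through M" compiles into one forwarding predicate at the source and one at the intermediary.

```python
# Hypothetical compilation of a declarative DTN policy into per-node
# MANET-level forwarding predicates (node, destination, next hop).
policy = {"src": "S", "dst": "D", "via": "M"}   # route S -> D through M

def compile_predicates(p):
    return [
        (p["src"], p["dst"], p["via"]),   # at S: bundles for D go to M
        (p["via"], p["dst"], p["dst"]),   # at M: forward them on to D
    ]

for node, dst, nxt in compile_predicates(policy):
    print(f"at {node}: dst={dst} -> next_hop={nxt}")
```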
Abstract:
We introduce the Dynamic Policy Routing (DPR) model, which captures the propagation of route updates under arbitrary changes in topology or path preferences. DPR introduces the notion of causation chains, where a route flap at one node causes a flap at the next node along the chain. Using DPR, we model the Gao-Rexford (economic) guidelines that guarantee the safety (i.e., convergence) of policy routing. We establish three principles of safe policy routing dynamics. The non-interference principle provides insight into which ASes can directly induce route changes in one another. The single-cycle principle and the multi-tiered-cycle principle provide insight into how cycles of routing updates can manifest in any network. We develop INTERFERENCEBEAT, a distributed algorithm that propagates a small token along causation chains to check adherence to these principles. To enhance the diagnostic power of INTERFERENCEBEAT, we model four violations of the Gao-Rexford guidelines (e.g., transiting between peers) and characterize the resulting dynamics.
Abstract:
Establishing correspondences among object instances remains challenging in multi-camera surveillance systems, especially when the cameras' fields of view are non-overlapping. Spatiotemporal constraints can help in solving the correspondence problem, but still leave a wide margin of uncertainty. One way to reduce this uncertainty is to use appearance information about the moving objects in the site. In this paper, we present the preliminary results of a new method that can capture salient appearance characteristics at each camera node in the network. A Latent Dirichlet Allocation (LDA) model is created and maintained at each node in the camera network. Each object is encoded in terms of the LDA bag-of-words model for appearance. The encoded appearance is then used to establish probable matches across cameras. Preliminary experiments are conducted on a dataset of 20 individuals, and a comparison against Madden's I-MCHR is reported.
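A hypothetical sketch of the matching step (the topic vectors and similarity measure below are stand-ins; the paper's actual encoding and matching criterion may differ): once each object is encoded as an LDA topic distribution at its camera node, candidate correspondences can be ranked by a distance between distributions, e.g., Hellinger distance.

```python
# Match objects across two camera nodes by comparing their LDA topic
# distributions; distributions and names here are illustrative only.
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

camera_a = {"person1": np.array([0.70, 0.20, 0.10]),
            "person2": np.array([0.10, 0.10, 0.80])}
camera_b = {"track17": np.array([0.65, 0.25, 0.10]),
            "track42": np.array([0.05, 0.15, 0.80])}

for name, p in camera_a.items():
    best = min(camera_b, key=lambda t: hellinger(p, camera_b[t]))
    print(name, "->", best)   # person1 -> track17, person2 -> track42
```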
Abstract:
As the Internet has evolved and grown, an increasing number of nodes (hosts or autonomous systems) have become multihomed, i.e., a node is connected to more than one network. Mobility can be viewed as a special case of multihoming: as a node moves, it unsubscribes from one network and subscribes to another, which is akin to one interface becoming inactive and another becoming active. The current Internet architecture has been facing significant challenges in effectively dealing with multihoming (and consequently mobility). The Recursive INternet Architecture (RINA) [1] was recently proposed as a clean-slate solution to the current problems of the Internet. In this paper, we perform an average-case cost analysis to compare the multihoming/mobility support of RINA against that of other approaches such as LISP and MobileIP. We also validate our analysis using trace-driven simulation.