4 results for Data-communication in Boston University Digital Common
Abstract:
Recent advances in processor speeds, mobile communications, and battery life have enabled computers to evolve from completely wired to completely mobile. In the most extreme case, all nodes are mobile and communication takes place at available opportunities – using both traditional communication infrastructure and the mobility of intermediate nodes. These are mobile opportunistic networks. Data communication in such networks is a difficult problem because of the dynamic underlying topology, the scarcity of network resources, and the lack of global information. Establishing end-to-end routes in such networks is usually not feasible. Instead, a store-and-carry forwarding paradigm is better suited for such networks. This dissertation describes and analyzes algorithms for forwarding messages in such networks.

In order to design effective forwarding algorithms for mobile opportunistic networks, we first build an understanding of the set of all paths between nodes, which represents the opportunities available to any forwarding algorithm. Relying on real measurements, we enumerate paths between nodes and uncover what we refer to as the path explosion effect: the number of paths between a randomly selected pair of nodes increases exponentially with time. We draw from the theory of epidemics to model and explain the path explosion effect. This is the first contribution of the thesis, and it is a key observation that underlies the subsequent results.

Our second contribution is the study of forwarding algorithms. For this, we rely on trace-driven simulations of different algorithms that span a range of design dimensions. We compare the performance (success rate and average delay) of these algorithms and make the surprising observation that most of the algorithms we consider have roughly similar performance. We explain this result in light of the path explosion phenomenon.

While the performance of most algorithms we studied was roughly the same, the algorithms differed in terms of cost. This prompted us to focus on designing algorithms with the explicit intent of reducing cost. For this, we cast the problem of forwarding as an optimal stopping problem. Our third main contribution is the design of strategies based on optimal stopping principles, which we refer to as delegation schemes. Our analysis shows that using a delegation scheme reduces cost over naive forwarding by a factor of O(√N), where N is the number of nodes in the network. We further validate this result on real traces, where the observed cost reduction is even greater.

Our results so far rest on a key assumption: unbounded buffers on nodes. Next, we relax this assumption, so that the problem shifts to one of prioritizing messages for transmission and dropping. Our fourth contribution is the study of message prioritization schemes combined with forwarding. Our main result is that higher performance is achieved by assigning higher priority to younger messages in the network. We again interpret this result in light of the path explosion effect.
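As an illustration of the delegation idea described above, the following is a minimal Python sketch, not the dissertation's code: the Message and Node classes, the per-destination quality metric, and the on_encounter hook are hypothetical names chosen for illustration. Each message carries the highest forwarding quality it has seen so far, and a copy is handed over only when an encountered node exceeds that threshold, which is the rule behind the O(√N) cost reduction mentioned above.

    # Minimal sketch of a delegation forwarding rule (illustrative only).
    # Each message carries a threshold: the best forwarding "quality" seen so far.
    # A copy is handed over only when an encountered node beats that threshold.

    from dataclasses import dataclass, field

    @dataclass
    class Message:
        msg_id: int
        destination: int
        threshold: float = 0.0   # highest quality observed so far for this message

    @dataclass
    class Node:
        node_id: int
        quality: dict = field(default_factory=dict)  # per-destination quality metric
        buffer: list = field(default_factory=list)   # messages carried by this node

        def on_encounter(self, peer: "Node") -> None:
            """Apply the delegation rule to every carried message."""
            for msg in self.buffer:
                peer_q = peer.quality.get(msg.destination, 0.0)
                if peer_q > msg.threshold:
                    # Raise the threshold on our own copy, and delegate a copy
                    # (with the same raised threshold) to the better-placed peer.
                    msg.threshold = peer_q
                    if all(m.msg_id != msg.msg_id for m in peer.buffer):
                        peer.buffer.append(Message(msg.msg_id, msg.destination, peer_q))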
Abstract:
We consider the general problem of synchronizing the data on two devices using a minimum amount of communication, a core infrastructural requirement for a large variety of distributed systems. Our approach considers the interactive synchronization of prioritized data, where, for example, certain information is more time-sensitive than other information. We propose and analyze a new scheme for efficient priority-based synchronization, which promises benefits over conventional synchronization.
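One way such priority-based synchronization could proceed is sketched below in Python. This is an assumption-laden illustration, not the paper's protocol: it reconciles data one priority class at a time, highest priority first, comparing per-class digests so that time-sensitive items are exchanged before the rest (the digest function and the priority-keyed dictionaries are hypothetical).

    # Illustrative sketch: reconcile data between two hosts one priority class at
    # a time, most urgent first, exchanging a set digest per class so that
    # time-sensitive items are synchronized early.

    import hashlib

    def digest(items):
        """Order-independent digest of a set of byte strings."""
        return hashlib.sha256(b"".join(sorted(items))).hexdigest()

    def prioritized_sync(local: dict, remote: dict):
        """local/remote map priority level -> set of items; lower level = more urgent."""
        for level in sorted(set(local) | set(remote)):
            a, b = local.get(level, set()), remote.get(level, set())
            if digest(a) == digest(b):
                continue                       # this class is already in sync
            missing_here, missing_there = b - a, a - b
            local[level] = a | missing_here    # in a real system these differences
            remote[level] = b | missing_there  # would be transferred over the network
        return local, remote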
Abstract:
Current low-level networking abstractions on modern operating systems are commonly implemented in the kernel to provide sufficient performance for general-purpose applications. However, it is desirable for high-performance applications to have more control over the networking subsystem to support optimizations for their specific needs. One approach is to allow networking services to be implemented at user level. Unfortunately, this typically incurs costs due to scheduling overheads and unnecessary data copying via the kernel. In this paper, we describe a method to implement efficient application-specific network service extensions at user level that removes the cost of scheduling and provides protected access to lower-level system abstractions. We present a networking implementation that, with minor modifications to the Linux kernel, passes data between "sandboxed" extensions and the Ethernet device without copying or processing in the kernel. Using this mechanism, we put a customizable networking stack into a user-level sandbox and show how it can be used to efficiently process and forward data via proxies, or intermediate hosts, in the communication path of high-performance data streams. Unlike other user-level networking implementations, our method imposes no special hardware requirements to avoid unnecessary data copies. Results show that our networking stack implementation achieves a substantial increase in throughput over comparable user-space methods.
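The toy Python sketch below illustrates only the general shape of such a kernel-bypass path: a descriptor ring shared between the device side and a sandboxed user-level stack, with a proxy loop that forwards frames without passing payloads through the kernel. The PacketRing and proxy_loop names, and the in-memory queue standing in for a shared ring, are illustrative assumptions rather than the paper's mechanism.

    # Conceptual sketch of the shared-ring idea behind kernel-bypass forwarding:
    # the device/kernel side deposits packet descriptors into a ring that a
    # sandboxed user-level stack polls, so payloads are not copied via the kernel.

    from collections import deque

    class PacketRing:
        """Single-producer / single-consumer descriptor queue (stand-in for a shared ring)."""
        def __init__(self):
            self.ring = deque()

        def post(self, descriptor):          # device/kernel side: enqueue a descriptor
            self.ring.append(descriptor)

        def poll(self):                      # user-level stack side: dequeue if available
            return self.ring.popleft() if self.ring else None

    def proxy_loop(rx: PacketRing, tx: PacketRing, rewrite):
        """Sandboxed extension: pull frames from rx, transform them, push to tx."""
        while True:
            desc = rx.poll()
            if desc is None:
                break                        # in practice: block or spin-wait for more work
            tx.post(rewrite(desc))           # forward without kernel-side processing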
Abstract:
Personal communication devices are increasingly being equipped with sensors that can passively collect information from their surroundings – information that could be stored in fairly small local caches. We envision a system in which users of such devices use their collective sensing, storage, and communication resources to query the state of (possibly remote) neighborhoods. The goal of such a system is to achieve the highest query success ratio using the least communication overhead (power). We show that the use of Data Centric Storage (DCS), or directed placement, is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose amorphous placement, in which sensory samples are cached locally and informed exchanges of cached samples are used to diffuse the sensory data throughout the whole network. In handling queries, the local cache is searched first for potential answers. If unsuccessful, the query is forwarded to one or more direct neighbors for answers. This technique leverages node mobility and caching capabilities to avoid the multi-hop communication overhead of directed placement. Using a simplified mobility model, we provide analytical lower and upper bounds on the ability of amorphous placement to achieve uniform field coverage in one and two dimensions. We show that combining informed shuffling of cached samples upon an encounter between two nodes with the querying of direct neighbors can lead to significant performance improvements. For instance, under realistic mobility models, our simulation experiments show that amorphous placement achieves a 10% to 40% better query answering ratio at a 25% to 35% savings in consumed power over directed placement.
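A rough Python sketch of the amorphous placement behavior described above follows: a node answers a query from its local cache if possible, otherwise asks its direct neighbors, and on an encounter shuffles cached samples with its peer so that coverage diffuses through the network. The MobileNode class, the region-keyed cache, and the simple union-then-evict exchange are simplifying assumptions for illustration, not the paper's exact policy.

    # Sketch of amorphous placement: local-cache lookup, one-hop neighbor queries,
    # and sample shuffling on encounters. Cache policy and data model are assumed.

    import random

    class MobileNode:
        def __init__(self, node_id, cache_size=64):
            self.node_id = node_id
            self.cache_size = cache_size
            self.cache = {}                  # region -> most recently sensed sample

        def sense(self, region, sample):
            self.cache[region] = sample
            self._evict()

        def query(self, region, neighbours):
            if region in self.cache:         # local hit: no communication cost
                return self.cache[region]
            for n in neighbours:             # one-hop forwarding to direct neighbours
                if region in n.cache:
                    return n.cache[region]
            return None                      # query fails

        def shuffle(self, peer):
            """Exchange on encounter: each side adopts samples for regions it lacks."""
            for region, sample in list(peer.cache.items()):
                self.cache.setdefault(region, sample)
            for region, sample in list(self.cache.items()):
                peer.cache.setdefault(region, sample)
            self._evict()
            peer._evict()

        def _evict(self):
            while len(self.cache) > self.cache_size:
                self.cache.pop(random.choice(list(self.cache)))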