18 results for Bitrate overhead

in Boston University Digital Common


Relevance:

10.00%

Publisher:

Abstract:

Communication and synchronization stand as the dual bottlenecks in the performance of parallel systems, and especially those that attempt to alleviate the programming burden by incurring overhead in these two domains. We formulate the notions of communicable memory and lazy barriers to help achieve efficient communication and synchronization. These concepts are developed in the context of BSPk, a toolkit library for programming networks of workstations (and other distributed memory architectures in general) based on the Bulk Synchronous Parallel (BSP) model. BSPk emphasizes efficiency in communication by minimizing local memory-to-memory copying, and in barrier synchronization by not forcing a process to wait unless it needs remote data. Both the message passing (MP) and distributed shared memory (DSM) programming styles are supported in BSPk. MP helps processes efficiently exchange short-lived unnamed data values, when the identity of either the sender or receiver is known to the other party. By contrast, DSM supports communication between processes that may be mutually anonymous, so long as they can agree on variable names in which to store shared temporary or long-lived data.
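
The lazy-barrier idea can be sketched in a few lines: a process blocks only at the moment it dereferences a remote value, not at the superstep boundary. The toy class below is ours for illustration and is not the BSPk API.

```python
import threading

class LazyCell:
    """A shared cell whose reader blocks only when it actually needs the
    remote value, rather than at a global barrier (a toy rendering of
    the lazy-barrier idea; not the BSPk interface)."""
    def __init__(self):
        self._ready = threading.Event()
        self._value = None

    def put(self, value):
        self._value = value
        self._ready.set()          # producer signals availability

    def get(self):
        self._ready.wait()         # consumer blocks here, lazily
        return self._value

cell = LazyCell()
threading.Thread(target=lambda: cell.put(42)).start()
# The consumer proceeds with local work and synchronizes only on use:
print(cell.get())                  # 42, once the producer has written
```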

Relevance:

10.00%

Publisher:

Abstract:

Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
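
The heavy-tailed workload can be made concrete with a small sampler. A Pareto law is a standard model for such file-size distributions; the parameters below are illustrative, not the values fitted in the paper.

```python
import random

def sample_file_size(alpha=1.1, x_min=1_000):
    """Inverse-CDF sampling from a Pareto(alpha, x_min) distribution:
    most draws land near x_min, but rare draws are enormous."""
    return x_min / random.random() ** (1.0 / alpha)

sizes = sorted(sample_file_size() for _ in range(100_000))
share = sum(sizes[-1_000:]) / sum(sizes)
print(f"largest 1% of files carry {share:.0%} of all bytes")
```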

Relevance:

10.00%

Publisher:

Abstract:

Content providers often consider the costs of security to be greater than the losses they might incur without it; many view "casual piracy" as their main concern. Our goal is to provide a low cost defense against such attacks while maintaining rigorous security guarantees. Our defense is integrated with and leverages fast forward error correcting codes, such as Tornado codes, which are widely used to facilitate reliable delivery of rich content. We tune one such family of codes (while preserving their original desirable properties) to guarantee that none of the original content can be recovered whenever a key subset of encoded packets is missing. Ultimately we encrypt only these key codewords (only 4% of all transmissions), making the security overhead negligible.
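
At transmission time, the "encrypt only the key codewords" idea reduces to ciphering a small fraction of the encoded packets. The sketch below shows that step only; which packets count as "key" depends on the tuned code structure, which is not modeled here, and the toy XOR cipher stands in for a real one such as AES.

```python
import hashlib, os

def toy_stream_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (toy cipher for this
    sketch only; a real system would use a vetted cipher)."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

packets = [os.urandom(64) for _ in range(100)]  # stand-ins for encoded packets
key = os.urandom(16)
KEY_FRACTION = 0.04                             # the paper's ~4% figure
n_key = max(1, int(len(packets) * KEY_FRACTION))
# Encrypt only the designated key packets; here we arbitrarily take the
# first n_key, whereas the real selection follows the tuned code.
sent = [toy_stream_cipher(p, key) if i < n_key else p
        for i, p in enumerate(packets)]
print(f"encrypted {n_key} of {len(packets)} packets")
```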

Relevance:

10.00%

Publisher:

Abstract:

MPLS (Multi-Protocol Label Switching) has recently emerged to facilitate the engineering of network traffic. This can be achieved by directing packet flows over paths that satisfy multiple requirements. MPLS has been regarded as an enhancement to traditional IP routing, which has the following problems: (1) all packets with the same IP destination address have to follow the same path through the network; and (2) paths have often been computed based on static and single link metrics. These problems may cause traffic concentration, and thus degradation in quality of service. In this paper, we investigate by simulations a range of routing solutions and examine the tradeoff between scalability and performance. At one extreme, IP packet routing using dynamic link metrics provides a stateless solution but may lead to routing oscillations. At the other extreme, we consider a recently proposed Profile-based Routing (PBR), which uses knowledge of potential ingress-egress pairs as well as the traffic profile among them. Minimum Interference Routing (MIRA) is another recently proposed MPLS-based scheme, which only exploits knowledge of potential ingress-egress pairs but not their traffic profile. MIRA and the more conventional widest-shortest path (WSP) routing represent alternative MPLS-based approaches on the spectrum of routing solutions. We compare these solutions in terms of utility and bandwidth acceptance ratio, as well as their scalability (routing state and computational overhead) and load-balancing capability. While WSP is the simplest of the per-flow algorithms we consider, its performance is close to that of dynamic per-packet routing, without the potential instabilities of dynamic routing.
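
For concreteness, widest-shortest path admits a compact lexicographic Dijkstra: minimize hop count first, then maximize the bottleneck of available bandwidth along the path. This is a sketch of the WSP rule, not the simulation code used in the paper.

```python
import heapq

def widest_shortest_path(graph, src, dst):
    """WSP: among minimum-hop paths, pick the one with the largest
    bottleneck bandwidth. `graph` maps node -> {neighbor: bandwidth}."""
    # Priority: fewer hops first, then wider bottleneck (negated for the heap).
    heap = [(0, float("-inf"), src, [src])]   # (hops, -width, node, path)
    best = {}
    while heap:
        hops, nwidth, node, path = heapq.heappop(heap)
        if node == dst:
            return path, -nwidth
        if best.get(node, (float("inf"), 0)) <= (hops, nwidth):
            continue                           # already expanded with a better label
        best[node] = (hops, nwidth)
        for nbr, bw in graph[node].items():
            width = min(-nwidth, bw)           # bottleneck along the extended path
            heapq.heappush(heap, (hops + 1, -width, nbr, path + [nbr]))
    return None, 0.0

g = {"a": {"b": 10, "c": 4}, "b": {"d": 2}, "c": {"d": 8}, "d": {}}
print(widest_shortest_path(g, "a", "d"))  # two 2-hop paths; picks a-c-d, width 4
```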

Relevance:

10.00%

Publisher:

Abstract:

Distributed hash tables have recently become a useful building block for a variety of distributed applications. However, current schemes based upon consistent hashing require both considerable implementation complexity and substantial storage overhead to achieve desired load balancing goals. We argue in this paper that these goals can be achieved more simply and more cost-effectively. First, we suggest the direct application of the "power of two choices" paradigm, whereby an item is stored at the less loaded of two (or more) random alternatives. We then consider how associating a small constant number of hash values with a key can naturally be extended to support other load balancing methods, including load-stealing or load-shedding schemes, as well as providing natural fault-tolerance mechanisms.
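
The first suggestion is easy to state in code: hash each key with two salted hash functions and place the item on the less loaded candidate node; lookups simply probe both locations. A minimal sketch, with names of our own choosing:

```python
import hashlib

def bucket(key: str, salt: int, n: int) -> int:
    """Map a key to one of n nodes with a salted hash."""
    return int(hashlib.sha256(f"{salt}:{key}".encode()).hexdigest(), 16) % n

def insert(key: str, loads: list) -> int:
    """Power of two choices: store the item on the less loaded of its
    two candidate nodes; a lookup would probe both candidates."""
    a, b = bucket(key, 0, len(loads)), bucket(key, 1, len(loads))
    target = a if loads[a] <= loads[b] else b
    loads[target] += 1
    return target

loads = [0] * 64
for i in range(10_000):
    insert(f"item-{i}", loads)
print("max load:", max(loads), "vs. mean:", sum(loads) / len(loads))
```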

Relevance:

10.00%

Publisher:

Abstract:

Traditionally, slotted communication protocols have employed guard times to delineate and align slots. These guard times may expand the slot duration significantly, especially when clocks are allowed to drift for longer periods to reduce clock synchronization overhead. Recently, a new class of lightweight protocols for statistical estimation in wireless sensor networks has been proposed. This new class requires very short transmission durations (jam signals), so the traditional approach of using guard times would impose significant overhead. We propose a new, more efficient algorithm to align slots. Based on geometrical properties of space, we prove that our approach bounds the slot duration by only a constant factor of what is needed. Furthermore, we show by simulation that this bound is loose and an even smaller slot duration suffices, making our approach even more efficient.
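
To see why guard times hurt short transmissions, consider the classical sizing rule: each slot must absorb the worst-case offset between two clocks that have drifted since the last synchronization. The numbers below are illustrative only.

```python
def guard_time_slot(tx_time, drift_ppm, resync_interval):
    """Classical guard-time sizing: pad the slot on both sides by the
    worst-case offset two clocks can accumulate between resyncs."""
    max_offset = 2 * drift_ppm * 1e-6 * resync_interval  # two drifting clocks
    return tx_time + 2 * max_offset                      # guard on each side

# A 1 ms jam signal with 50 ppm clocks resynchronized every 10 s:
print(f"{guard_time_slot(1e-3, 50, 10.0) * 1e3:.1f} ms slot")  # 3.0 ms, a 3x expansion
```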

Relevance:

10.00%

Publisher:

Abstract:

A foundational issue underlying many overlay network applications ranging from routing to P2P file sharing is that of connectivity management, i.e., folding new arrivals into an existing overlay, and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist, a prototype overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using measurements on PlanetLab and trace-based simulations, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal, but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we discuss some of the potential benefits Egoist may offer to applications.
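
Selfish neighbor selection can be pictured as a best-response step: given everyone else's routes, a node picks the k neighbors that minimize its average cost to all destinations. The brute-force sketch below illustrates the game-theoretic primitive; it is not Egoist's implementation, and all names are ours.

```python
import itertools

def best_response(node, k, delay, routes):
    """One best-response step of selfish neighbor selection: `node`
    picks the k neighbors minimizing its total routing cost, given the
    current routing costs `routes[u][d]` of every other node u."""
    others = [v for v in range(len(delay)) if v != node]
    def cost(neighbors):
        return sum(min(delay[node][u] + routes[u][d] for u in neighbors)
                   for d in others)
    return min(itertools.combinations(others, k), key=cost)

# 4 nodes; routes[u][d] = u's current cost to reach d (0 on the diagonal).
delay  = [[0, 3, 1, 9], [3, 0, 2, 4], [1, 2, 0, 6], [9, 4, 6, 0]]
routes = [row[:] for row in delay]         # assume direct routing so far
print(best_response(0, 2, delay, routes))  # picks nodes 1 and 2
```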

Relevance:

10.00%

Publisher:

Abstract:

We consider a mobile sensor network monitoring a spatio-temporal field. Given limited cache sizes at the sensor nodes, the goal is to develop a distributed cache management algorithm to efficiently answer queries with a known probability distribution over the spatial dimension. First, we propose a novel distributed information theoretic approach in which the nodes locally update their caches based on full knowledge of the space-time distribution of the monitored phenomenon. At each time instant, local decisions are made at the mobile nodes concerning which samples to keep and whether or not a new sample should be acquired at the current location. These decisions account for minimizing an entropic utility function that captures the average amount of uncertainty in queries given the probability distribution of query locations. Second, we propose a different correlation-based technique, which only requires knowledge of the second-order statistics, thus relaxing the stringent constraint of having a priori knowledge of the query distribution, while significantly reducing the computational overhead. It is shown that the proposed approaches considerably improve the average field estimation error by maintaining efficient cache content. It is further shown that the correlation-based technique is robust to model mismatch in case of imperfect knowledge of the underlying generative correlation structure.
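
The correlation-based variant can be sketched with Gaussian conditioning: given the field's second-order statistics, the posterior variance at a query location measures the residual uncertainty a cache leaves, and a node would keep the candidate cache contents minimizing its query-weighted average. The squared-exponential correlation below is our stand-in assumption, not the paper's model.

```python
import numpy as np

def expected_query_variance(cache_xy, query_xy, query_probs, ell=1.0):
    """Expected posterior variance of a unit-variance Gaussian field at
    the query locations, conditioned on the cached sample locations."""
    def corr(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * ell ** 2))
    K = corr(cache_xy, cache_xy) + 1e-9 * np.eye(len(cache_xy))
    k = corr(cache_xy, query_xy)                      # cache x queries
    # Posterior variance per query: 1 - k^T K^{-1} k (diagonal entries).
    post_var = 1.0 - np.einsum("iq,iq->q", np.linalg.solve(K, k), k)
    return float(post_var @ query_probs)

cache   = np.array([[0.0, 0.0], [1.0, 1.0]])
queries = np.array([[0.1, 0.0], [2.0, 2.0]])
print(expected_query_variance(cache, queries, np.array([0.7, 0.3])))
```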

Relevance:

10.00%

Publisher:

Abstract:

A foundational issue underlying many overlay network applications ranging from routing to peer-to-peer file sharing is that of connectivity management, i.e., folding new arrivals into an existing overlay, and rewiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist, a distributed overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using extensive measurements of paths between nodes, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal, but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we use a multiplayer peer-to-peer game to demonstrate the value of Egoist to end-user applications. This technical report supersedes BUCS-TR-2007-013.

Relevance:

10.00%

Publisher:

Abstract:

We revisit the problem of connection management for reliable transport. At one extreme, a pure soft-state (SS) approach (as in Delta-t [9]) safely removes the state of a connection at the sender and receiver once the state timers expire, without the need for explicit removal messages, and new connections are established without an explicit handshaking phase. On the other hand, a hybrid hard-state/soft-state (HS+SS) approach (as in TCP) uses both explicit handshaking as well as timer-based management of the connection's state. In this paper, we consider the worst-case scenario of reliable single-message communication, and develop a common analytical model that can be instantiated to capture either the SS approach or the HS+SS approach. We compare the two approaches in terms of goodput, message overhead, and state overhead. We also use simulations to compare against other approaches, and evaluate them in terms of correctness (with respect to data loss and duplication) and robustness to bad network conditions (high message loss rate and variable channel delays). Our results show that the SS approach is more robust and has lower message overhead. On the other hand, SS requires more memory to keep connection states, which reduces goodput. Given that memory is becoming larger and cheaper, SS presents the best choice over bandwidth-constrained, error-prone networks.
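
A pure soft-state table is short to sketch: records appear on first use and vanish when their timer lapses, with no handshake or teardown messages on the wire. A minimal illustration; the names and the sequencing rule are ours, not Delta-t's.

```python
import time

class SoftStateTable:
    """Soft-state connection records: installed on first use, silently
    discarded once the timer expires; no explicit setup or teardown."""
    def __init__(self, lifetime=30.0):
        self.lifetime = lifetime
        self.records = {}               # conn_id -> (last_seen, next_seq)

    def on_segment(self, conn_id, seq):
        now = time.monotonic()
        last, expected = self.records.get(conn_id, (now, 0))
        if now - last > self.lifetime:  # timer expired: treat as fresh
            expected = 0
        accept = (seq == expected)      # duplicates/old segments rejected
        if accept:
            self.records[conn_id] = (now, expected + 1)
        return accept

t = SoftStateTable()
print(t.on_segment("c1", 0))  # True: new connection, no handshake needed
print(t.on_segment("c1", 0))  # False: duplicate caught via retained state
```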

Relevance:

10.00%

Publisher:

Abstract:

Routing protocols for ad-hoc networks assume that the nodes forming the network are either under a single authority, or that they altruistically forward data for other nodes with no expectation of a return. These assumptions are unrealistic since, in ad-hoc networks, nodes are likely to be autonomous and rational (selfish), and thus unwilling to help unless they have an incentive to do so. Providing such incentives is an important aspect that should be considered when designing ad-hoc routing protocols. In this paper, we propose a dynamic, decentralized routing protocol for ad-hoc networks that provides incentives in the form of payments to intermediate nodes used to forward data for others. In our Constrained Selfish Routing (CSR) protocol, game-theoretic approaches are used to calculate payments (incentives) that ensure both the truthfulness of participating nodes and the fairness of the CSR protocol. We show through simulations that CSR is an energy efficient protocol and that it provides lower communication overhead in the best and average cases compared to existing approaches.
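
Truthful payments of this flavor are typically computed VCG-style: an intermediate node is paid the cost the route would incur without it, minus the other nodes' cost with it, which makes truthful cost declaration a dominant strategy. The sketch below shows that textbook rule; it is offered in the spirit of CSR's incentives, not as its exact payment formula.

```python
import heapq

def path_cost(adj, declared, src, dst, skip=None):
    """Cheapest src->dst route cost, where cost is the sum of declared
    forwarding costs of intermediate nodes; `skip` excludes one node."""
    heap, done = [(0, src)], set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if node in done:
            continue
        done.add(node)
        for nbr in adj[node]:
            if nbr == skip or nbr in done:
                continue
            heapq.heappush(heap, (cost + (declared[nbr] if nbr != dst else 0), nbr))
    return float("inf")

def vcg_payment(adj, declared, src, dst, node):
    """Pay `node` on the winning route its marginal value: the best cost
    without it, minus the other nodes' share of the best cost with it."""
    with_node = path_cost(adj, declared, src, dst)
    without = path_cost(adj, declared, src, dst, skip=node)
    return without - (with_node - declared[node])

adj = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
declared = {"s": 0, "a": 2, "b": 5, "t": 0}
print(vcg_payment(adj, declared, "s", "t", "a"))  # 5: payment exceeds a's cost of 2
```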

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present Slack Stealing Job Admission Control (SSJAC), a methodology for scheduling periodic firm-deadline tasks with variable resource requirements, subject to controllable Quality of Service (QoS) constraints. In a system that uses Rate Monotonic Scheduling (RMS), SSJAC augments the slack stealing algorithm of Thuel et al. with an admission control policy to manage the variability in the resource requirements of the periodic tasks. This enables SSJAC to take advantage of the 31% of utilization that RMS cannot use, as well as any utilization unclaimed by jobs that are not admitted into the system. Using SSJAC, each task in the system is assigned a resource utilization threshold that guarantees the minimal acceptable QoS for that task (expressed as an upper bound on the rate of missed deadlines). Job admission control is used to ensure that (1) only those jobs that will complete by their deadlines are admitted, and (2) tasks do not interfere with each other; thus a job can monopolize only the slack in the system, not the time guaranteed to jobs of other tasks. We have evaluated SSJAC against RMS and Statistical RMS (SRMS). Ignoring overhead issues, SSJAC consistently provides better performance than RMS in overload, and, in certain conditions, better performance than SRMS. In addition, to evaluate the optimality of SSJAC in an absolute sense, we have characterized the performance of SSJAC by comparing it to an inefficient, yet optimal scheduler for task sets with harmonic periods.
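
The admission test can be paraphrased in a few lines: a released job is admitted only when the processor time free before its deadline, including stealable slack, covers its cost. This simplifies SSJAC's actual equations, and the numbers are illustrative.

```python
def admit(job_cost, deadline, now, guaranteed_busy, slack_available):
    """Slack-stealing admission test (simplified): admit a job only if
    the slack left before its deadline, after all time guaranteed to
    other tasks, covers its cost. All quantities in milliseconds."""
    free_time = (deadline - now) - guaranteed_busy
    return job_cost <= min(free_time, slack_available)

print(admit(job_cost=3, deadline=20, now=0, guaranteed_busy=15, slack_available=4))  # True
print(admit(job_cost=6, deadline=20, now=0, guaranteed_busy=15, slack_available=4))  # False
```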

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling periodic tasks with highly variable execution times and statistical QoS requirements. Similar to RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test for SRMS ensures that, using SRMS' scheduling algorithms, it is possible for a given periodic task set to share a given resource (e.g., a processor, communication medium, switching device, etc.) in such a way that such sharing does not result in the violation of any of the periodic tasks' QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows for fixed priority scheduling that keeps the tasks' value (or importance) independent of their periods. Second, it allows for job admission control, which allows the rejection of jobs that are not guaranteed to finish by their deadlines as soon as they are released, thus enabling the system to take necessary compensating actions. Also, admission control allows the preservation of resources, since no time is spent on jobs that will miss their deadlines anyway. Third, SRMS integrates reservation-based and best-effort resource scheduling seamlessly. Reservation-based scheduling ensures the delivery of the minimal requested QoS; best-effort scheduling ensures that unused, reserved bandwidth is not wasted, but rather used to improve QoS further. Fourth, SRMS allows a system to deal gracefully with overload conditions by ensuring a fair deterioration in QoS across all tasks, as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has a constant overhead, in the sense that the complexity of the scheduler does not depend on the number of tasks in the system. We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g., RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we have compared it to an inefficient, yet optimal scheduler for task sets with harmonic periods.
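
The admission-control feature can be sketched as per-task allowance accounting: a job is admitted at release only if the task's remaining reserved allowance covers its demand, and the allowance is replenished each budget period. This is a simplification of the published algorithm, not a faithful reimplementation.

```python
class SrmsTask:
    """Per-task budget accounting in the spirit of SRMS: each task holds
    a reserved execution allowance that is refilled periodically; jobs
    exceeding the remaining allowance are rejected at release time."""
    def __init__(self, allowance):
        self.allowance = allowance     # reserved time per budget period
        self.remaining = allowance

    def new_budget_period(self):
        self.remaining = self.allowance

    def admit(self, demand):
        if demand <= self.remaining:   # guaranteed to finish: admit
            self.remaining -= demand
            return True
        return False                   # reject at release; free the resource

task = SrmsTask(allowance=10)
print([task.admit(d) for d in (4, 5, 3)])  # [True, True, False]
task.new_budget_period()
print(task.admit(3))                       # True again
```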

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host, namely the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to see if it has to be redirected or not. Load information is maintained using periodic broadcasts amongst the cluster hosts.
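
The redirection decision itself is a one-liner in spirit: new connections (SYN packets) go to the cluster host with the fewest established connections, while packets of established flows are left alone. Names and structure below are illustrative, not the DPR code.

```python
def choose_host(packet_is_syn, local_host, conn_counts):
    """Least-connections redirection: rewrite only connection-opening
    packets toward the least loaded host; established flows stay put."""
    if not packet_is_syn:
        return local_host                     # existing connection: no rewrite
    return min(conn_counts, key=conn_counts.get)

# Load table refreshed by the periodic broadcasts among cluster hosts:
conn_counts = {"host-a": 42, "host-b": 17, "host-c": 29}
print(choose_host(True, "host-a", conn_counts))   # host-b
print(choose_host(False, "host-a", conn_counts))  # host-a
```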

Relevance:

10.00%

Publisher:

Abstract:

An increasing number of applications, such as distributed interactive simulation, live auctions, distributed games and collaborative systems, require the network to provide a reliable multicast service. This service enables one sender to reliably transmit data to multiple receivers. Reliability is traditionally achieved by having receivers send negative acknowledgments (NACKs) to request from the sender the retransmission of lost (or missing) data packets. However, this Automatic Repeat reQuest (ARQ) approach results in the well-known NACK implosion problem at the sender. Many reliable multicast protocols have been recently proposed to reduce NACK implosion. But the message overhead due to NACK requests remains significant. Another approach, based on Forward Error Correction (FEC), requires the sender to encode additional redundant information so that a receiver can independently recover from losses. However, due to the lack of feedback from receivers, it is impossible for the sender to determine how much redundancy is needed. In this paper, we propose a new reliable multicast protocol, called ARM (Adaptive Reliable Multicast). Our protocol integrates ARQ and FEC techniques. The objectives of ARM are to (1) reduce the message overhead due to NACK requests, (2) reduce the amount of data transmission, and (3) reduce the time it takes for all receivers to receive the data intact (without loss). During data transmission, the sender periodically informs the receivers of the number of packets that are yet to be transmitted. Based on this information, each receiver predicts whether this amount is enough to recover its losses. Only if it is not enough does the receiver request the sender to encode additional redundant packets. Using ns simulations, we show the superiority of our hybrid ARQ-FEC protocol over the well-known Scalable Reliable Multicast (SRM) protocol.
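
The receiver's prediction reduces to a comparison: with an (n, k) erasure code the data is decodable from any k packets, so a NACK is warranted only when the sender's announced remainder cannot plug the receiver's losses. A sketch, with variable names of our own:

```python
def should_request_repair(k, received, pending_announced):
    """ARM-style receiver check: with an (n, k) erasure code, any k
    packets suffice to decode. NACK only if the sender's announced
    remaining transmissions cannot cover what this receiver still
    needs, suppressing most feedback traffic."""
    still_needed = max(0, k - received)
    return pending_announced < still_needed

print(should_request_repair(k=20, received=17, pending_announced=5))  # False: stay quiet
print(should_request_repair(k=20, received=12, pending_announced=5))  # True: send a NACK
```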