675 results for Congestion


Relevance: 10.00%

Abstract:

The current congestion-oriented design of TCP hinders its ability to perform well in hybrid wireless/wired networks. We propose a new improvement on TCP NewReno (NewReno-FF) using a new loss labeling technique to discriminate wireless from congestion losses. The proposed technique is based on the estimation of the average and variance of the round-trip time using a filter called the Flip Flop filter that is augmented with history information. We show the comparative performance of TCP NewReno, NewReno-FF, and TCP Westwood through extensive simulations. We study the fundamental gains and limits using TCP NewReno with varying Loss Labeling accuracy (NewReno-LL) as a benchmark. Lastly, our investigation opens up important research directions. First, there is a need for a finer-grained classification of losses (even within congestion and wireless losses) for TCP in heterogeneous networks. Second, it is essential to develop an appropriate control strategy for recovery after the correct classification of a packet loss.
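
The abstract does not give the Flip Flop filter's gains or the exact labeling rule, so the following is only a minimal sketch: it assumes an estimator that switches between an agile and a stable EWMA for the RTT mean and variance, and labels a loss as congestion-induced when the current RTT approaches the upper control limit. All constants and names (agile_gain, stable_gain, k) are illustrative assumptions.

# Hypothetical sketch of a flip-flop RTT estimator for loss labeling.
# Gains, the control-limit width k, and the labeling rule are assumptions,
# not constants taken from the paper.
class FlipFlopRTTEstimator:
    def __init__(self, agile_gain=0.5, stable_gain=0.01, k=3.0):
        self.agile_gain = agile_gain    # reacts quickly to genuine RTT shifts
        self.stable_gain = stable_gain  # smooths transient noise
        self.k = k                      # control-limit width in standard deviations
        self.mean = None
        self.var = 0.0

    def update(self, rtt_sample):
        """Feed one RTT sample, flip-flopping between the agile and stable gains."""
        if self.mean is None:
            self.mean = rtt_sample
            return
        deviation = abs(rtt_sample - self.mean)
        limit = self.k * self.var ** 0.5
        # Use the agile filter while samples stay inside the control limits,
        # fall back to the stable filter when they do not.
        gain = self.agile_gain if (self.var == 0.0 or deviation <= limit) else self.stable_gain
        self.var = (1 - gain) * self.var + gain * deviation ** 2
        self.mean = (1 - gain) * self.mean + gain * rtt_sample

    def label_loss(self, current_rtt):
        """Label a loss as congestion-induced if the RTT sits near the upper control limit."""
        if self.mean is None:
            return "congestion"  # conservative default before any estimate exists
        return "congestion" if current_rtt >= self.mean + self.k * self.var ** 0.5 else "wireless"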

Relevance: 10.00%

Abstract:

We present a transport protocol whose goal is to reduce power consumption without compromising the delivery requirements of applications. To meet its goal of energy efficiency, our transport protocol (1) contains mechanisms to balance end-to-end vs. local retransmissions; (2) minimizes acknowledgment traffic using receiver-regulated rate-based flow control combined with selective acknowledgements and in-network caching of packets; and (3) aggressively seeks to avoid any congestion-based packet loss. Within a recently developed ultra-low-power multi-hop wireless network system, extensive simulations and experimental results demonstrate that our transport protocol meets its goal of preserving the energy efficiency of the underlying network.

Relevance: 10.00%

Abstract:

A number of recent studies have pointed out that TCP's performance over ATM networks tends to suffer, especially under congestion and switch buffer limitations. Switch-level enhancements and link-level flow control have been proposed to improve TCP's performance in ATM networks. Selective Cell Discard (SCD) and Early Packet Discard (EPD) ensure that partial packets are discarded from the network "as early as possible", thus reducing wasted bandwidth. While such techniques improve the achievable throughput, their effectiveness tends to degrade in multi-hop networks. In this paper, we introduce Lazy Packet Discard (LPD), an AAL-level enhancement that improves effective throughput, reduces response time, and minimizes wasted bandwidth for TCP/IP over ATM. In contrast to the SCD and EPD policies, LPD delays as much as possible the removal from the network of cells belonging to a partially communicated packet. We outline the implementation of LPD and show, through analysis and simulations, the performance advantage of TCP/LPD compared to plain TCP and TCP/EPD.
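
The abstract does not spell out LPD's mechanism, so the sketch below only contrasts the EPD baseline with a placeholder "lazy" victim-selection rule that keeps cells of partially forwarded packets in the network as long as possible; the function names and data structures are illustrative assumptions.

# Illustrative comparison of cell-discard decisions at a congested ATM switch.
# epd_should_accept_packet reflects the standard EPD idea; lazy_victim is only a
# placeholder for "defer discarding cells of partially communicated packets".
def epd_should_accept_packet(buffer_occupancy, epd_threshold):
    """EPD: once occupancy crosses the threshold, reject whole new packets up front."""
    return buffer_occupancy <= epd_threshold

def lazy_victim(candidate_packets):
    """When cells must be dropped, prefer packets with no cells forwarded yet, so
    packets already partially in the network keep their remaining cells longest."""
    return min(candidate_packets, key=lambda pkt: pkt["cells_forwarded"])

# Example: the packet that has not started transiting is sacrificed first.
print(lazy_victim([{"id": 1, "cells_forwarded": 12}, {"id": 2, "cells_forwarded": 0}]))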

Relevance: 10.00%

Abstract:

Version 1.1 of the Hypertext Transfer Protocol (HTTP) was principally developed as a means of reducing both document transfer latency and network traffic. The rationale for the performance enhancements in HTTP/1.1 is based on the assumption that the network is the bottleneck in Web transactions. In practice, however, the Web server can be the primary source of document transfer latency. In this paper, we characterize and compare the performance of HTTP/1.0 and HTTP/1.1 in terms of throughput at the server and transfer latency at the client. Our approach is based on considering a broader set of bottlenecks in an HTTP transfer; we examine how bottlenecks in the network, the CPU, and the disk system affect the relative performance of HTTP/1.0 versus HTTP/1.1. We show that the network demands under HTTP/1.1 are somewhat lower than under HTTP/1.0, and we quantify those differences in terms of packets transferred, server congestion window size, and data bytes per packet. We show that when the CPU is the bottleneck, there is relatively little difference in performance between HTTP/1.0 and HTTP/1.1. Surprisingly, we show that when the disk system is the bottleneck, performance using HTTP/1.1 can be much worse than with HTTP/1.0. Based on these observations, we suggest a connection management policy for HTTP/1.1 that can improve throughput, decrease latency, and keep network traffic low when the disk system is the bottleneck.

Relevance: 10.00%

Abstract:

SomeCast is a novel paradigm for the reliable multicast of real-time data to a large set of receivers over the Internet. SomeCast is receiver-initiated and thus scalable in the number of receivers, in the diverse characteristics of paths between senders and receivers (e.g. maximum bandwidth and round-trip time), and in the dynamic conditions of such paths (e.g. congestion-induced delays and losses). SomeCast enables receivers to dynamically adjust the rate at which they receive multicast information to enable the satisfaction of real-time QoS constraints (e.g. rate, deadlines, or jitter). This is done by enabling a receiver to join SOME number of concurrent multiCAST sessions, whereby each session delivers a portion of an encoding of the real-time data. By adjusting the number of such sessions dynamically, client-specific QoS constraints can be met independently. The SomeCast paradigm can be thought of as a generalization of the AnyCast (e.g. Dynamic Server Selection) and ManyCast (e.g. Digital Fountain) paradigms, which have been proposed in the literature to address issues of scalability of UniCast and MultiCast environments, respectively. In this paper we overview the SomeCast paradigm, describe an instance of a SomeCast protocol, and present simulation results that quantify the significant advantages gained from adopting such a protocol for the reliable multicast of data to a diverse set of receivers subject to real-time QoS constraints.
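
A minimal sketch of the receiver-side adaptation loop implied above, assuming each multicast session carries an equal-rate share of an erasure-coded stream; the join/leave primitives, rates, and thresholds are illustrative assumptions rather than details from the paper.

# Receiver-driven adjustment of the number of joined multicast sessions.
def adjust_sessions(joined, available, measured_rate, required_rate, session_rate):
    """Join or leave encoded sessions so the aggregate rate meets the QoS target."""
    if measured_rate < required_rate and len(joined) < len(available):
        # Falling short of the rate/deadline constraint: subscribe to one more session.
        joined.append(available[len(joined)])
    elif measured_rate - session_rate >= required_rate and joined:
        # Comfortably above target: drop a session to relieve congested paths.
        joined.pop()
    return joined

# Example: a receiver needing 900 kb/s, currently getting 600 kb/s from two 300 kb/s sessions.
print(adjust_sessions([0, 1], [0, 1, 2, 3], measured_rate=600, required_rate=900, session_rate=300))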

Relevance: 10.00%

Abstract:

The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction between short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random Early Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
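
A rough sketch of the class-based isolation idea, under the assumption that access routers classify flows by bytes sent so far and core routers keep one queue per class; the 100 KB cutoff and data structures are illustrative, not values from the paper.

from collections import defaultdict, deque

SHORT_FLOW_CUTOFF = 100 * 1024   # bytes a flow may send before being treated as "long" (assumed)

bytes_sent = defaultdict(int)                          # access-router per-flow state
class_queues = {"short": deque(), "long": deque()}     # core-router per-class state

def mark_packet(flow_id, packet_len):
    """Access router: classify the packet by how much its flow has already sent."""
    bytes_sent[flow_id] += packet_len
    return "short" if bytes_sent[flow_id] <= SHORT_FLOW_CUTOFF else "long"

def enqueue_at_core(packet, flow_class):
    """Core router: per-class (not per-flow) queueing isolates short flows from long ones."""
    class_queues[flow_class].append(packet)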

Relevance: 10.00%

Abstract:

Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When two or more such connections share a common endpoint, there is an opportunity to correlate the end-to-end measurements made by these protocols to better diagnose and control the use of shared resources. We develop packet probing techniques to determine whether a pair of connections experiences shared congestion. Correct, efficient diagnoses could enable new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that the conditional (Bayesian) probing approach we employ provides superior accuracy, converges faster, and tolerates a wider range of network conditions than recently proposed memoryless (Markovian) probing approaches for exploiting this opportunity.
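
As a minimal sketch of the conditional (Bayesian) flavour of the approach, the snippet below updates the posterior probability that two connections share a congested resource from paired probe outcomes; the outcome model and likelihood values are assumptions, since the abstract does not give the probing design.

# One Bayes step per probing interval: did probes on both connections see loss/high delay?
def update_posterior(prior_shared, both_lossy, p_both_if_shared=0.6, p_both_if_indep=0.2):
    if both_lossy:
        like_shared, like_indep = p_both_if_shared, p_both_if_indep
    else:
        like_shared, like_indep = 1 - p_both_if_shared, 1 - p_both_if_indep
    evidence = like_shared * prior_shared + like_indep * (1 - prior_shared)
    return like_shared * prior_shared / evidence

# Usage: start from an uninformative prior and fold in successive probe observations.
posterior = 0.5
for both_lossy in [True, True, False, True]:   # hypothetical probe outcomes
    posterior = update_posterior(posterior, both_lossy)
diagnosis = "shared congestion" if posterior > 0.9 else "undecided"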

Relevance: 10.00%

Abstract:

Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When a number of such connections share a common endpoint, that endpoint has the opportunity to correlate these end-to-end measurements to better diagnose and control the use of shared resources. A valuable characterization of such shared resources is the "loss topology". From the perspective of a server with concurrent connections to multiple clients, the loss topology is a logical tree rooted at the server in which edges represent lossy paths between a pair of internal network nodes. We develop an end-to-end unicast packet probing technique and an associated analytical framework to: (1) infer loss topologies, (2) identify loss rates of links in an existing loss topology, and (3) augment a topology to incorporate the arrival of a new connection. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that our approach is robust in terms of its accuracy and convergence over a wide range of network conditions.

Relevance: 10.00%

Abstract:

Traditional approaches to receiver-driven layered multicast have advocated the benefits of cumulative layering, which can enable coarse-grained congestion control that complies with TCP-friendliness equations over large time scales. In this paper, we quantify the costs and benefits of using non-cumulative layering and present a new, scalable multicast congestion control scheme which provides a fine-grained approximation to the behavior of TCP additive increase/multiplicative decrease (AIMD). In contrast to the conventional wisdom, we demonstrate that fine-grained rate adjustment can be achieved with only modest increases in the number of layers and aggregate bandwidth consumption, while using only a small constant number of control messages to perform either additive increase or multiplicative decrease.
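
To illustrate how non-cumulative layers can track AIMD at fine granularity, the sketch below assumes layer rates that are powers of two of a base unit, so any integer rate maps to a small subset of layers; this allocation is an assumption made for illustration, not the paper's actual scheme.

def layers_for_rate(rate_units):
    """Return the set of layer indices whose (power-of-two) rates sum to the target rate."""
    return {i for i in range(rate_units.bit_length()) if (rate_units >> i) & 1}

def additive_increase(rate_units):
    return rate_units + 1            # join/leave only the layers that differ in the binary representation

def multiplicative_decrease(rate_units):
    return max(rate_units // 2, 1)   # halve the rate, TCP-style, with a small constant number of changes

# Example: moving from 11 to 12 rate units changes the subscription
# from layers {0, 1, 3} to layers {2, 3} -- a handful of control messages.
print(layers_for_rate(11), layers_for_rate(additive_increase(11)))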

Relevance: 10.00%

Abstract:

Body Sensor Network (BSN) technology is seeing a rapid emergence in application areas such as health, fitness and sports monitoring. Current BSN wireless sensors typically operate on a single frequency band (e.g. utilizing the IEEE 802.15.4 standard that operates at 2.45 GHz), employing a single radio transceiver for wireless communications. This allows a simple wireless architecture to be realized with low cost and power consumption. However, network congestion/failure can create potential issues in terms of reliability of data transfer, quality of service (QoS) and data throughput for the sensor. These issues can be especially critical in healthcare monitoring applications where data availability and integrity are crucial. The addition of more than one radio has the potential to address some of the above issues. For example, multi-radio implementations can allow access to more than one network, providing increased coverage and data processing as well as improved interoperability between networks. A small number of multi-radio wireless sensor solutions exist at present but require the use of more than one radio transceiver device to achieve multi-band operation. This paper presents the design of a novel prototype multi-radio hardware platform that uses a single radio transceiver. The proposed design allows multi-band operation in the 433/868 MHz ISM bands and this, together with its low complexity and small form factor, makes it suitable for a wide range of BSN applications.

Relevance: 10.00%

Abstract:

In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data traffic transiting through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, similar to the positive charges in electrostatics; the destinations are sinks of information, similar to negative charges; and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

In one application of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme changes the permittivity coefficient to a higher value in places of the network where nodes have high residual energy, and sets it to a low value in places where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes.

Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in how to define the regions of attraction and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations.

Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
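
One possible reading of the electrostatics analogy described above, written out explicitly; the exact quadratic form, notation and boundary conditions are assumptions, since the abstract does not state them:

\[
\min_{\mathbf{D}} \; C \;=\; \int_{A} \frac{1}{2\,\epsilon(\mathbf{x})}\,\lVert \mathbf{D}(\mathbf{x})\rVert^{2}\, dA
\quad \text{subject to} \quad \nabla \cdot \mathbf{D}(\mathbf{x}) \;=\; \rho(\mathbf{x}),
\]
\[
\mathbf{D}(\mathbf{x}) \;=\; -\,\epsilon(\mathbf{x})\,\nabla\phi(\mathbf{x}),
\qquad
\nabla \cdot \bigl(\epsilon(\mathbf{x})\,\nabla\phi(\mathbf{x})\bigr) \;=\; -\,\rho(\mathbf{x}),
\]

where \(\mathbf{D}\) is the data-flow vector field, \(\rho\) is the information generation density (positive at sensors, negative at destinations), \(\phi\) is the potential field mentioned above, and \(\epsilon(\mathbf{x})\) is the permittivity-like coefficient that the energy-aware routing scheme raises where residual energy is high.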
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. Such a test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates.

In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find the signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We demonstrate the efficacy of our methods through mathematical analysis and extensive simulation experiments.
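
The CDMA analogy can be made concrete with a small numerical sketch: each router perturbs the aggregate with a drop-rate signature drawn from an orthogonal code and recovers its own share of the aggregate's rate response by correlating against that signature. The code length, responsiveness values, and noise model below are illustrative assumptions.

import numpy as np

def walsh_codes(n):
    """Generate n orthogonal +/-1 signatures (n must be a power of two)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh_codes(4)                        # signatures for 4 routers testing simultaneously
true_responsiveness = [0.8, 0.3, 0.0, 0.5]    # hypothetical per-aggregate responsiveness values
chips = codes.shape[1]

# The observed rate change is modelled as the superposition of the responses to all
# routers' perturbations plus noise; orthogonality lets each router read out its own term.
observed = sum(r * codes[i] for i, r in enumerate(true_responsiveness))
observed = observed + np.random.normal(0, 0.05, chips)

for i in range(4):
    estimate = np.dot(observed, codes[i]) / chips   # despread with router i's signature
    print(f"router {i}: estimated responsiveness ~ {estimate:.2f}")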

Relevance: 10.00%

Abstract:

Does environmental regulation impair international competitiveness of pollution-intensive industries to the extent that they relocate to countries with less stringent regulation, turning those countries into "pollution havens"? We test this hypothesis using panel data on outward foreign direct investment (FDI) flows of various industries in the German manufacturing sector and account for several econometric issues that have been ignored in previous studies. Most importantly, we demonstrate that externalities associated with FDI agglomeration can bias estimates away from finding a pollution haven effect if omitted from the analysis. We include the stock of inward FDI as a proxy for agglomeration and employ a GMM estimator to control for endogenous time-varying determinants of FDI flows. Furthermore, we propose a difference estimator based on the least polluting industry to break the possible correlation between environmental regulatory stringency and unobservable attributes of FDI recipients in the cross-section. When accounting for these issues we find robust evidence of a pollution haven effect for the chemical industry. © 2008 Springer Science+Business Media B.V.

Relevance: 10.00%

Abstract:

BACKGROUND: Small laboratory fish share many anatomical and histological characteristics with other vertebrates, yet can be maintained in large numbers at low cost for lifetime studies. Here we characterize biomarkers associated with normal aging in the Japanese medaka (Oryzias latipes), a species that has been widely used in toxicology studies and has potential utility as a model organism for experimental aging research. PRINCIPAL FINDINGS: The median lifespan of medaka was approximately 22 months under laboratory conditions. We performed quantitative histological analysis of tissues from age-grouped individuals representing young adults (6 months old), mature adults (16 months old), and adults that had survived beyond the median lifespan (24 months). Livers of 24-month old individuals showed extensive morphologic changes, including spongiosis hepatis, steatosis, ballooning degeneration, inflammation, and nuclear pyknosis. There were also phagolysosomes, vacuoles, and residual bodies in parenchymal cells and congestion of sinusoidal vessels. Livers of aged individuals were characterized by increases in lipofuscin deposits and in the number of TUNEL-positive apoptotic cells. Some of these degenerative characteristics were seen, to a lesser extent, in the livers of 16-month old individuals, but not in 6-month old individuals. The basal layer of the dermis showed an age-dependent decline in the number of dividing cells and an increase in senescence-associated β-galactosidase. The hearts of aged individuals were characterized by fibrosis and lipofuscin deposition. There was also a loss of pigmented cells from the retinal epithelium. By contrast, age-associated changes were not apparent in skeletal muscle, the ocular lens, or the brain. SIGNIFICANCE: The results provide a set of markers that can be used to trace the process of normal tissue aging in medaka and to evaluate the effect of environmental stressors.

Relevance: 10.00%

Abstract:

We conduct the first empirical investigation of common-pool resource users' dynamic and strategic behavior at the micro level using real-world data. Fishermen's strategies in a fully dynamic game account for latent resource dynamics and other players' actions, revealing the profit structure of the fishery. We compare the fishermen's actual and socially optimal exploitation paths under a time-specific vessel allocation policy and find a sizable dynamic externality. Individual fishermen respond to other users by exerting effort above the optimal level early in the season. Congestion is costly instantaneously but is beneficial in the long run because it partially offsets dynamic inefficiencies.

Relevance: 10.00%

Abstract:

Traffic policing and bandwidth management strategies at the User Network Interface (UNI) of an ATM network are investigated by simulation. The network is assumed to transport real-time (RT) traffic such as voice and video as well as non-real-time (non-RT) data traffic. The proposed policing function, called the super leaky bucket (S-LB), is based on the leaky bucket (LB), but handles the three types of traffic differently according to their quality of service (QoS) requirements. Separate queues are maintained for RT and non-RT traffic. They are normally served alternately, but if the number of queued RT cells exceeds a threshold, the RT queue receives non-pre-emptive priority. Further growth of the RT queue causes low-priority cells to be discarded. Non-RT cells are buffered and their sources are throttled back during periods of congestion. The simulations clearly demonstrate the advantages of the proposed strategy in providing improved levels of service (delay, jitter and loss) for all types of traffic.
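
The service discipline described above can be summarized in a short sketch, assuming two queue-length thresholds and a placeholder throttling signal; the threshold values and the feedback mechanism are illustrative assumptions, not parameters from the paper.

from collections import deque

rt_queue, nrt_queue = deque(), deque()
PRIORITY_THRESHOLD = 50   # RT backlog beyond which RT gets non-pre-emptive priority (assumed)
DISCARD_THRESHOLD = 80    # RT backlog beyond which low-priority cells are dropped (assumed)

def throttle_non_rt_sources():
    pass                  # placeholder: feedback signal telling non-RT sources to back off

def enqueue(cell):
    """Admit a cell, discarding low-priority cells and throttling data sources under congestion."""
    if len(rt_queue) > DISCARD_THRESHOLD and cell.get("low_priority"):
        return False
    (rt_queue if cell["real_time"] else nrt_queue).append(cell)
    if len(rt_queue) > PRIORITY_THRESHOLD:
        throttle_non_rt_sources()
    return True

def next_cell(last_served_rt):
    """Serve RT and non-RT queues alternately; give RT priority when its backlog is high."""
    if len(rt_queue) > PRIORITY_THRESHOLD:
        return rt_queue.popleft(), True
    if last_served_rt and nrt_queue:
        return nrt_queue.popleft(), False
    if rt_queue:
        return rt_queue.popleft(), True
    if nrt_queue:
        return nrt_queue.popleft(), False
    return None, last_served_rt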