891 results for TCP-friendliness


Relevance: 70.00%

Abstract:

The increasing diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. These control rules are parameterized so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. In this paper, we define a new spectrum of window-based congestion control algorithms that are TCP-friendly as well as TCP-compatible under RED. In contrast to previous memoryless controls, our algorithms utilize history information in their control rules. Our proposed algorithms have two salient features: (1) they enable a wider region of TCP-friendliness, and thus more flexibility in trading off smoothness, aggressiveness, and responsiveness; and (2) they ensure faster convergence to fairness under a wide range of system conditions. We demonstrate analytically and through extensive ns simulations the steady-state and transient behaviors of several instances of this new spectrum of algorithms. In particular, SIMD is one instance in which the congestion window is increased super-linearly with the time elapsed since the detection of the last loss. Compared to recently proposed TCP-friendly AIMD and binomial algorithms, we demonstrate the superiority of SIMD in: (1) adapting to sudden increases in available bandwidth, while maintaining competitive smoothness and responsiveness; and (2) rapidly converging to fairness and efficiency.
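
For intuition about the super-linear increase, here is a minimal per-RTT sketch; the alpha value, the square-root increase form, and the small growth floor are illustrative assumptions rather than the paper's exact TCP-friendly parameterization of SIMD.

    # Minimal per-RTT sketch (illustrative parameters, not the paper's exact
    # TCP-friendly parameterization) contrasting linear AIMD growth with a
    # SIMD-like window that grows quadratically in the time since the last loss.

    def evolve(increase, w0=20.0, rtts=30):
        """Window trajectory, one value per RTT, after a loss cut the window to w0."""
        w, traj = w0, [w0]
        for _ in range(rtts):
            w = increase(w, w0)
            traj.append(round(w, 1))
        return traj

    def aimd_increase(w, w0, alpha=1.0):
        return w + alpha                               # +alpha packets per RTT (linear)

    def simd_like_increase(w, w0, alpha=1.0):
        # dw/dt proportional to sqrt(w - w0) integrates to growth quadratic in
        # time; the small floor lets growth restart from w == w0.
        return w + max(alpha * (w - w0) ** 0.5, 0.5)

    print("AIMD:     ", evolve(aimd_increase))
    print("SIMD-like:", evolve(simd_like_increase))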

Relevance: 70.00%

Abstract:

The increasing diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The control rules are parameterized so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. This paper presents a comprehensive study of a new spectrum of window-based congestion controls, which are TCP-friendly as well as TCP-compatible under RED. Our controls utilize history information in their control rules. By doing so, they improve the transient behavior, compared to recently proposed slowly-responsive congestion controls such as general AIMD and binomial controls. Our controls can achieve better tradeoffs among smoothness, aggressiveness, and responsiveness, and they can achieve faster convergence. We demonstrate analytically and through extensive ns simulations the steady-state and transient behavior of several instances of this new spectrum.
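
As background for the TCP-friendliness constraint mentioned above, the following standard relations (derived from the simple loss-rate-only throughput model, not results specific to this paper) apply to a general AIMD control that adds alpha packets per RTT and multiplies its window by beta on each loss:

    % Steady-state TCP throughput for segment size S, round-trip time R, loss rate p
    T_{\mathrm{TCP}} \approx \frac{S}{R}\sqrt{\frac{3}{2p}}
    % General AIMD(\alpha,\beta) throughput under the same model
    T_{\mathrm{AIMD}} \approx \frac{S}{R}\sqrt{\frac{\alpha(1+\beta)}{2(1-\beta)}\cdot\frac{1}{p}}
    % Equating the two gives the TCP-friendliness condition
    \alpha = \frac{3(1-\beta)}{1+\beta}

For example, beta = 7/8 yields alpha = 0.2, the smooth TCP-friendly AIMD pair commonly used as a baseline in this line of work.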

Relevance: 60.00%

Abstract:

The majority of Internet traffic uses the Transmission Control Protocol (TCP) as the transport-level protocol. TCP provides a reliable, ordered byte stream to applications. However, applications such as live video streaming place an emphasis on timeliness over reliability, and a smooth sending rate can be preferable to sharp changes in the sending rate. For these applications TCP is not necessarily suitable. Rate control attempts to address the demands of such applications. An important design requirement for any rate control mechanism is TCP-friendliness: it should not negatively impact TCP performance, since TCP is still the dominant protocol. Rate control mechanisms fall into two classes: window-based mechanisms and rate-based mechanisms. Window-based mechanisms increase their sending rate after the successful transfer of a window of packets, similar to TCP, and typically decrease their sending rate sharply after a packet loss. Rate-based mechanisms control their sending rate in some other way. A large subset of rate-based mechanisms is equation-based: a control equation provides the allowed sending rate. These rate-based solutions typically react more slowly to both packet losses and increases in available bandwidth, making their sending rate smoother than that of window-based solutions. This report surveys rate control mechanisms and discusses their relative strengths and weaknesses. A section is dedicated to enhancements for wireless environments. The report also covers bandwidth estimation, which is divided into capacity estimation and available bandwidth estimation. We describe techniques that enable the calculation of a fair sending rate and that can be used to build novel rate control mechanisms.
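
As a concrete example of a control equation, the sketch below implements the TCP throughput equation used by equation-based schemes such as TFRC (RFC 5348); the defaults b = 1 and t_RTO = 4*RTT are common choices, and the example numbers are purely illustrative.

    from math import sqrt

    def tfrc_allowed_rate(s, rtt, p, b=1, t_rto=None):
        """TCP throughput equation used by equation-based rate control such as TFRC (RFC 5348).

        s     -- segment size in bytes
        rtt   -- round-trip time in seconds
        p     -- loss event rate (0 < p <= 1)
        b     -- packets acknowledged per ACK (commonly 1)
        t_rto -- retransmission timeout, commonly approximated as 4 * rtt
        Returns the allowed sending rate in bytes per second.
        """
        if t_rto is None:
            t_rto = 4 * rtt
        denom = rtt * sqrt(2 * b * p / 3) \
            + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
        return s / denom

    # Illustrative numbers: 1460-byte segments, 100 ms RTT, 1% loss event rate
    print(round(tfrc_allowed_rate(1460, 0.1, 0.01)))  # roughly 164000 bytes/s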

Relevance: 60.00%

Abstract:

We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
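
The abstract does not spell out Balia's control laws, so the sketch below instead shows the general shape of a coupled MP-TCP window increase using the LIA rule of RFC 6356 (windows in packets for brevity); it is meant only to illustrate how per-subflow increases are scaled by aggregate state so the multipath flow stays TCP-friendly on a shared bottleneck.

    # Hedged sketch of a *coupled* MP-TCP window increase in the style of LIA
    # (RFC 6356). Balia's exact rules are in the thesis, not this abstract.

    def lia_alpha(subflows):
        """subflows: list of (cwnd, rtt) pairs, cwnd in packets, rtt in seconds."""
        cwnd_total = sum(c for c, _ in subflows)
        best = max(c / (r * r) for c, r in subflows)
        return cwnd_total * best / sum(c / r for c, r in subflows) ** 2

    def increase_on_ack(i, subflows):
        """Window increment for subflow i after one ACK (uncoupled TCP would add 1/cwnd_i)."""
        cwnd_i, _ = subflows[i]
        cwnd_total = sum(c for c, _ in subflows)
        return min(lia_alpha(subflows) / cwnd_total, 1.0 / cwnd_i)

    # Two subflows with different RTTs: the coupled increase is smaller than
    # what two independent TCP flows would use.
    flows = [(10.0, 0.05), (20.0, 0.20)]
    print([round(increase_on_ack(i, flows), 4) for i in range(len(flows))])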

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm that is based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
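
The OPF decomposition itself is beyond the abstract, but the structural idea that each ADMM subproblem can be solved in closed form, with no inner iterative solver, can be illustrated on a toy problem; the problem, names, and numbers below are illustrative assumptions, not the thesis's algorithm.

    # Generic scaled-form ADMM iteration for min f(x) + g(z) subject to x = z,
    # shown on box-constrained least squares: both updates are closed-form.
    import numpy as np

    def admm_box_ls(A, b, lo, hi, rho=1.0, iters=200):
        n = A.shape[1]
        x = z = u = np.zeros(n)
        x_solve = np.linalg.inv(A.T @ A + rho * np.eye(n))  # factor once
        Atb = A.T @ b
        for _ in range(iters):
            x = x_solve @ (Atb + rho * (z - u))  # x-update: closed-form quadratic
            z = np.clip(x + u, lo, hi)           # z-update: projection onto the box
            u = u + x - z                        # dual (scaled multiplier) update
        return z

    A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(admm_box_ls(A, b, lo=0.0, hi=0.4))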

Relevance: 60.00%

Abstract:

A significant impediment to deployment of multicast services is the daunting technical complexity of developing, testing and validating congestion control protocols fit for wide-area deployment. Protocols such as pgmcc and TFMCC have recently made considerable progress on the single rate case, i.e. where one dynamic reception rate is maintained for all receivers in the session. However, these protocols have limited applicability, since scaling to session sizes beyond tens of participants necessitates the use of multiple rate protocols. Unfortunately, while existing multiple rate protocols exhibit better scalability, they are both less mature than single rate protocols and suffer from high complexity. We propose a new approach to multiple rate congestion control that leverages proven single rate congestion control methods by orchestrating an ensemble of independently controlled single rate sessions. We describe SMCC, a new multiple rate equation-based congestion control algorithm for layered multicast sessions that employs TFMCC as the primary underlying control mechanism for each layer. SMCC combines the benefits of TFMCC (smooth rate control, equation-based TCP friendliness) with the scalability and flexibility of multiple rates to provide a sound multiple rate multicast congestion control policy.
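
As a simplified illustration of the ensemble-of-single-rate-sessions idea, a receiver can map its equation-based target rate onto cumulative layer subscriptions as sketched below; SMCC's actual join/leave machinery is more careful than this, and the layer sizes are purely illustrative.

    # Map a TFMCC-style target rate onto cumulative layer subscriptions.
    # cum_caps[k] is the aggregate rate when subscribed to layers 0..k.

    def cumulative_level(target_rate, cum_caps):
        """Highest cumulative level whose aggregate rate fits within target_rate."""
        level = 0
        for k, cap in enumerate(cum_caps):
            if target_rate >= cap:
                level = k
        return level

    cum_caps = [64, 192, 448, 960]      # kbit/s when subscribed to layers 0..k
    for rate in (50, 200, 800, 2000):   # per-receiver target rates from the control equation
        print(rate, "kbit/s -> subscribe to layers 0..%d" % cumulative_level(rate, cum_caps))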

Relevance: 60.00%

Abstract:

Traditional approaches to receiver-driven layered multicast have advocated the benefits of cumulative layering, which can enable coarse-grained congestion control that complies with TCP-friendliness equations over large time scales. In this paper, we quantify the costs and benefits of using non-cumulative layering and present a new, scalable multicast congestion control scheme which provides a fine-grained approximation to the behavior of TCP additive increase/multiplicative decrease (AIMD). In contrast to the conventional wisdom, we demonstrate that fine-grained rate adjustment can be achieved with only modest increases in the number of layers and aggregate bandwidth consumption, while using only a small constant number of control messages to perform either additive increase or multiplicative decrease.
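
To see why fine-grained rate control with non-cumulative layers is non-trivial, consider the naive scheme sketched below (an illustrative assumption, not the paper's construction): layers sized in powers of two of a base rate reach any multiple of the base rate with few layers, but a single additive-increase step can flip many subscriptions at a binary carry, which is exactly the control-message cost the paper's scheme keeps to a small constant.

    def layers_for(k, num_layers=5):
        """Layer indices whose power-of-two sizes sum to k base units."""
        return {i for i in range(num_layers) if (k >> i) & 1}

    for k in (5, 6, 7, 8):  # additive increase: k -> k + 1 base units
        before, after = layers_for(k), layers_for(k + 1)
        print(f"{k} -> {k + 1}: join {sorted(after - before)}, leave {sorted(before - after)}")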

Relevance: 20.00%

Abstract:

TCP is the dominant protocol for reliable communication over the Internet. It provides flow, congestion, and error control mechanisms designed for reliable wired networks. Its congestion control mechanism is not well suited to wireless links, where data corruption and loss rates are higher. The physical links are transparent to TCP, which attributes all packet losses to congestion and responds by reducing its transmission rate. This wastes the already limited bandwidth available on wireless links. There is therefore little point in increasing the bandwidth of wireless links while the available bandwidth is not optimally utilized. This paper proposes a hybrid scheme called TCP Detection and Recovery (TCP-DR) that distinguishes congestion-, corruption-, and mobility-related losses and instructs the sending host to take the appropriate action. As a result, link utilization remains close to optimal when losses are due to a high bit error rate or to mobility.
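
The abstract does not describe how TCP-DR actually detects the loss cause, so the following is a purely hypothetical sketch of the kind of classification step such a scheme needs; the signals used here (link-layer error reports, handoff indication, RTT trend) are assumptions for illustration only.

    from enum import Enum

    class LossCause(Enum):
        CONGESTION = 1   # back off: shrink the congestion window
        CORRUPTION = 2   # retransmit without reducing the sending rate
        MOBILITY = 3     # pause briefly and resume once the handoff completes

    def classify_loss(rtt_trend_rising, link_error_reported, handoff_in_progress):
        if handoff_in_progress:
            return LossCause.MOBILITY
        if link_error_reported and not rtt_trend_rising:
            return LossCause.CORRUPTION
        return LossCause.CONGESTION

    print(classify_loss(rtt_trend_rising=False, link_error_reported=True,
                        handoff_in_progress=False))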

Relevance: 20.00%

Abstract:

Human mesenchymal stem cells (hMSCs) possess great therapeutic potential for the treatment of bone disease and fracture non-union. Too often, however, in vitro evidence alone of the interaction between hMSCs and the biomaterial of choice is used as justification for continued development of the material into the clinic. Clearly, for hMSC-based regenerative medicine to be successful for the treatment of orthopaedic trauma, it is crucial to transplant hMSCs with a suitable carrier that facilitates their survival, optimal proliferation and osteogenic differentiation in vitro and in vivo. This motivated us to evaluate the use of polycaprolactone-20% tricalcium phosphate (PCL-TCP) scaffolds produced by fused deposition modeling for the delivery of hMSCs. When hMSCs were cultured on the PCL-TCP scaffolds and imaged by a combination of phase contrast, scanning electron and confocal laser microscopy, we observed five distinct stages of colonization over a 21-day period that were characterized by cell attachment, spreading, cellular bridging, the formation of a dense cellular mass and the accumulation of a mineralized extracellular matrix when induced with osteogenic stimulants. Having established that PCL-TCP scaffolds are able to support hMSC proliferation and osteogenic differentiation, we next tested the in vivo efficacy of hMSC-loaded PCL-TCP scaffolds in nude rat critical-sized femoral defects. We found that fluorescently labeled hMSCs survived in the defect site for up to 3 weeks post-transplantation. However, only 50% of the femoral defects treated with hMSCs responded favorably, as determined by new bone volume. As such, we show that verification of hMSC viability and differentiation in vitro is not sufficient to predict the efficacy of transplanted stem cells in consistently promoting bone formation in orthotopic defects in vivo.

Relevance: 20.00%

Abstract:

Smart matrices are required in bone tissue-engineered grafts that provide an optimal environment for cells and retain osteo-inductive factors for sustained biological activity. We hypothesized that a slow-degrading heparin-incorporated hyaluronan (HA) hydrogel can preserve BMP-2, while an arterio–venous (A–V) loop can support axial vascularization to provide nutrition for a bioartificial bone graft. HA was evaluated for osteoblast growth and BMP-2 release. Porous PLDLLA–TCP–PCL scaffolds were produced by rapid prototyping technology and applied in vivo along with HA-hydrogel, loaded with either primary osteoblasts or BMP-2. A microsurgically created A–V loop was placed around the scaffold, encased in an isolation chamber in Lewis rats. HA-hydrogel supported growth of osteoblasts over 8 weeks and allowed sustained release of BMP-2 over 35 days. The A–V loop provided an angiogenic stimulus with the formation of vascularized tissue in the scaffolds. Bone-specific genes were detected by real-time RT-PCR after 8 weeks. However, no significant amount of bone was observed histologically. The heterotopic isolation chamber in combination with absent biomechanical stimulation might explain the insufficient bone formation despite adequate expression of bone-related genes. Optimization of the interplay of osteogenic cells and osteo-inductive factors might eventually generate sufficient amounts of axially vascularized bone grafts for reconstructive surgery.

Relevance: 20.00%

Abstract:

The repair of bone defects that result from periodontal diseases remains a clinical challenge for periodontal therapy. β-tricalcium phosphate (β-TCP) ceramics are biodegradable inorganic bone substitutes with inorganic components that are similar to those of bone. Demineralized bone matrix (DBM) is an acid-extracted organic matrix derived from bone sources that consists of the collagen and matrix proteins of bone. A few studies have documented the effects of DBM on the proliferation and osteogenic differentiation of human periodontal ligament cells (hPDLCs). The aim of the present study was to investigate the effects of inorganic and organic elements of bone on the proliferation and osteogenic differentiation of hPDLCs using three-dimensional porous β-TCP ceramics and DBM with or without osteogenic inducers. Primary hPDLCs were isolated from human periodontal ligaments. The proliferation of the hPDLCs on the scaffolds in the growth culture medium was examined using a Cell Counting Kit-8 (CCK-8) and scanning electron microscopy (SEM). Alkaline phosphatase (ALP) activity and the osteogenic differentiation of the hPDLCs cultured on the β-TCP ceramics and DBM were examined in both the growth culture medium and osteogenic culture medium. Specific osteogenic differentiation markers were examined using reverse transcription-quantitative polymerase chain reaction (RT-qPCR). SEM images revealed that the cells on the β-TCP were spindle-shaped and much more spread out compared with the cells on the DBM surfaces. There were no significant differences observed in cell proliferation between the β-TCP ceramics and the DBM scaffolds. Compared with the cells that were cultured on β-TCP ceramics, the ALP activity, as well as the Runx2 and osteocalcin (OCN) mRNA levels, in the hPDLCs cultured on DBM were significantly enhanced both in the growth culture medium and the osteogenic culture medium. The organic elements of bone may exhibit greater osteogenic differentiation effects on hPDLCs than the inorganic elements.

Relevance: 20.00%

Abstract:

The network scenario is that of an infrastructure IEEE 802.11 WLAN with a single AP with which several stations (STAs) are associated. The AP has a finite-size buffer for storing packets. In this scenario, we consider TCP-controlled upload and download file transfers between the STAs and a server on the wireline LAN (e.g., 100 Mbps Ethernet) to which the AP is connected. In such a situation, it is known (see, for example, [3], [9]) that, because of packet loss due to finite buffers at the AP, upload file transfers obtain larger throughputs than download transfers. We provide an analytical model for estimating the upload and download throughputs as a function of the buffer size at the AP. We provide models for the undelayed and delayed ACK cases, for a TCP that performs loss recovery only by timeout, and also for TCP Reno.