991 results for Congestion control


Relevance:

60.00%

Publisher:

Abstract:

Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas, including transportation systems, wired and wireless communications, and a range of Internet applications. As transportation and communication networks become increasingly complex, the ever-increasing demand for congestion control, higher traffic capacity, quality of service, robustness, and reduced energy consumption requires new tools and methods that can balance these conflicting requirements. The new methodology should serve both to provide a better understanding of the properties of networking systems at the macroscopic level and to support the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they have been developed specifically to deal with nonlinear large-scale systems. This review presents an overview of tools and methods developed within the statistical physics community that can be readily applied to emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, all of which have direct relevance to network routing, file and frequency distribution, the exploration of network structure and vulnerability, and various other practical networking applications. © 2013 IOP Publishing Ltd.

Relevance:

60.00%

Publisher:

Abstract:

In this study, we present various approaches that apply Artificial Neural Networks to network resource management and Internet congestion control. Through a training process, neural networks can determine nonlinear relationships in a data set by associating input patterns with their corresponding outputs. The application of these networks to Traffic Engineering can therefore help achieve its general objective: "intelligent" agents or systems capable of adapting data flow according to available resources. In this article, we analyze the opportunity and feasibility of applying Artificial Neural Networks to a number of tasks related to Traffic Engineering. In the opening sections, we present the basics of each of these disciplines, which belong to Artificial Intelligence and Computer Networks, respectively.
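
As a toy illustration of the idea, not the article's implementation: the sketch below trains a small feed-forward network (plain numpy, manual backpropagation) to learn a nonlinear mapping from link observations to a normalized sending rate. The features, the synthetic "ideal rate" target, and the architecture are all invented for illustration.

```python
import numpy as np

# Toy data: map (link utilization, queueing delay) to a normalized sending
# rate. The "ideal" target below is invented purely for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))        # features: [utilization, delay]
y = (1.0 - X[:, :1]) * (1.0 - 0.5 * X[:, 1:])   # hypothetical desired rate

W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)   # output layer weights

for _ in range(2000):                  # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)           # hidden activations
    out = h @ W2 + b2                  # predicted sending rate
    err = out - y                      # mean-squared-error gradient seed
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)   # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g                   # in-place update, learning rate 0.5

probe = np.array([[0.9, 0.8]])         # heavily loaded link
rate = np.tanh(probe @ W1 + b1) @ W2 + b2
print(f"suggested normalized rate: {rate.item():.2f}")
```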

Relevance:

60.00%

Publisher:

Abstract:

In recent years, the Internet has grown exponentially and become more complex. This increased complexity potentially introduces more network-level instability. For any end-to-end Internet connection, however, maintaining the connection's throughput and reliability at a certain level is very important, because it directly affects the connection's normal operation. A challenging research task, therefore, is to improve a connection's performance by optimizing its throughput and reliability. This dissertation proposed an efficient and reliable transport-layer protocol called concurrent TCP (cTCP), an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end fault tolerance. The proposed cTCP protocol can aggregate the bandwidth of multiple paths by supporting concurrent data transfer (CDT) on a single connection, where concurrent data transfer is defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's RTT (round-trip time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily includes an RTT-based load distribution and path management scheme, used to optimize connection throughput and reliability, along with an RTT-based congestion control and retransmission policy. Experimental results show that, under different network conditions, the RTT-based CDT mechanism achieves good CDT performance. Finally, a CWND-based CDT mechanism, which uses a path's CWND (congestion window) to optimize CDT performance, was introduced. This mechanism primarily includes: a CWND-based load allocation scheme, which assigns data to paths in proportion to their CWND to achieve aggregate bandwidth; CWND-based path management, used to optimize connection fault tolerance; and a congestion control and retransmission management policy, which handles each path separately, similar to regular TCP. The corresponding experimental results show that this mechanism achieves near-optimal CDT performance under different network conditions.
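
A minimal sketch of the CWND-based load allocation idea described above: segments are assigned to paths in proportion to each path's congestion window. The `Path` structure, the proportional rule, and the remainder handling are assumptions for illustration, not the dissertation's actual scheme.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    cwnd: int   # congestion window, in segments
    rtt: float  # smoothed RTT in ms (the RTT-based variant would key on this)

def cwnd_based_allocation(paths, total_segments):
    """Assign segments to paths in proportion to their congestion windows,
    approximating CWND-based load allocation."""
    total_cwnd = sum(p.cwnd for p in paths)
    shares = {p.name: p.cwnd * total_segments // total_cwnd for p in paths}
    # Hand any integer-rounding remainder to the path with the largest window.
    leftover = total_segments - sum(shares.values())
    shares[max(paths, key=lambda p: p.cwnd).name] += leftover
    return shares

paths = [Path("wifi", cwnd=20, rtt=30.0), Path("lte", cwnd=10, rtt=60.0)]
print(cwnd_based_allocation(paths, total_segments=90))  # {'wifi': 60, 'lte': 30}
```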

Relevance:

60.00%

Publisher:

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios; 2) flexibility, for testing new protocols or applications in diverse settings; and 3) interoperability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three dimensions. First, we present SVEET, a system that enables interoperability between real and simulated hosts. To increase the scalability of the networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments, and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale, high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation in which a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On one hand, the simulation system benefits from incorporating traffic metadata from the real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system that accurately represents the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
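
A minimal sketch of the time-dilation idea behind SVEET, assuming a single time dilation factor (TDF) that scales real elapsed time into virtual time; the class and its interface are hypothetical, not SVEET's API.

```python
import time

class DilatedClock:
    """Sketch of time dilation: real elapsed time is divided by a dilation
    factor (TDF), so with TDF = 10 ten real seconds correspond to one
    virtual second, giving the simulator ten times more wall-clock time
    to process each virtual second while real hosts, run against this
    clock, perceive a consistent virtual timeline."""
    def __init__(self, tdf: float):
        self.tdf = tdf
        self.start = time.monotonic()

    def virtual_now(self) -> float:
        return (time.monotonic() - self.start) / self.tdf

clock = DilatedClock(tdf=10.0)
time.sleep(0.5)
print(f"real 0.5 s elapsed -> virtual {clock.virtual_now():.3f} s")
```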

Relevance:

60.00%

Publisher:

Abstract:

Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size, and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. In network simulation, we can distinguish between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require the same accuracy. The background traffic nevertheless has a significant impact on the foreground traffic: it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three aspects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing two of its unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which uses analytical models to represent TCP congestion control behavior. This model outperforms existing traffic models in that it correctly captures the overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable the models are, they mainly focus on generating traffic on a single link, which cannot be easily extended to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation in network studies.
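
The abstract does not specify which analytical model RTCP builds on; as a hedged stand-in, the sketch below computes steady-state TCP throughput from loss rate and RTT using the well-known Mathis et al. approximation.

```python
from math import sqrt

def tcp_rate_mathis(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Steady-state TCP throughput (bytes/s) from the Mathis et al.
    approximation: rate ~ (MSS / RTT) * sqrt(3 / (2p)). This is a standard
    analytical stand-in, not necessarily the model RTCP actually uses."""
    return (mss_bytes / rtt_s) * sqrt(3.0 / (2.0 * loss_rate))

# Example: 1460-byte segments, 50 ms RTT, 1% loss -> roughly 2.9 Mbit/s.
rate = tcp_rate_mathis(1460, 0.050, 0.01)
print(f"approximate throughput: {rate * 8 / 1e6:.1f} Mbit/s")
```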

Relevance:

60.00%

Publisher:

Abstract:

Interactive applications do not require more bandwidth to go faster; they require less latency. Unfortunately, the current design of transport protocols such as TCP limits possible latency reductions. In this paper, we evaluate and compare different loss recovery enhancements that fight tail-loss latency. The two recently proposed mechanisms "RTO Restart" (RTOR) and "Tail Loss Probe" (TLP) are considered, as well as a new mechanism that applies the logic of RTOR to the TLP timer management (TLPR). The results show that the relative performance of RTOR and TLP when tail loss occurs is scenario-dependent, with TLP having potentially larger gains. The TLPR mechanism reaps the benefits of both approaches and shows the best performance in most scenarios.
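
A rough sketch of the timer arithmetic being compared, assuming the commonly cited 2 x SRTT rule of thumb for the TLP probe timeout; the exact constants and edge cases in the paper may differ.

```python
def rto_restart(rto: float, now: float, earliest_outstanding_sent: float) -> float:
    """RTOR: rearm the retransmission timer so it fires RTO seconds after the
    earliest outstanding segment was sent, not RTO seconds after the last ACK."""
    return max(rto - (now - earliest_outstanding_sent), 0.0)

def tlp_timeout(srtt: float) -> float:
    """TLP: schedule a loss probe well before the conservative RTO would fire.
    2 * SRTT is the commonly cited rule of thumb, assumed here."""
    return 2.0 * srtt

def tlpr_timeout(srtt: float, now: float, earliest_outstanding_sent: float) -> float:
    """TLPR sketch: apply the RTOR offset logic to the TLP timer."""
    return max(tlp_timeout(srtt) - (now - earliest_outstanding_sent), 0.0)

# Example: SRTT 40 ms, RTO 200 ms, last segment sent 30 ms ago.
print(rto_restart(0.200, 1.030, 1.000))   # 0.17 -> timer fires 170 ms from now
print(tlpr_timeout(0.040, 1.030, 1.000))  # 0.05 -> probe sent 50 ms from now
```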

Relevance:

60.00%

Publisher:

Abstract:

Dissertation (Master's), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2016.

Relevance:

30.00%

Publisher:

Abstract:

For the last two decades, heart disease has been the highest single cause of death for the human population. With an alarming number of patients requiring heart transplants, and donations unable to satisfy the demand, treatment looks to mechanical alternatives. Rotary Ventricular Assist Devices (VADs) are miniature pumps that can be implanted alongside the heart to assist its pumping function. These constant-flow devices are smaller, more efficient, and promise a longer operational life than more traditional pulsatile VADs. The development of rotary VADs has focused on single pumps assisting only the left ventricle to supply blood to the body. In many patients, however, failure of both ventricles demands that an additional pulsatile device be used to support the failing right ventricle. This condition renders them hospital-bound while they wait for an unlikely heart donation. Reported attempts to use two rotary pumps to support both ventricles concurrently have warned of inherent haemodynamic instability: poor balancing of the pumps' flow rates quickly leads to vascular congestion, increasing the risk of oedema and of ventricular 'suckdown' occluding the inlet to the pump. This thesis introduces a novel Bi-Ventricular Assist Device (BiVAD) configuration in which the pump outputs are passively balanced by vascular pressure. The BiVAD consists of two rotary pumps straddling a mechanical passive controller. Fluctuations in vascular pressure induce small deflections within both pumps, adjusting their outputs so that they maintain arterial pressure. To optimise the passive controller's interaction with the circulation, its dynamic response is tuned with a spring-mass-damper arrangement. This two-part study presents a comprehensive assessment of the prototype's 'viability' as a support device, considered in terms of its sensitivity to pathogenic haemodynamics and the ability of the passive response to maintain healthy circulation. The first part of the study is an experimental investigation in which a prototype device was designed, built, and tested in a pulsatile mock circulation loop. The BiVAD was subjected to a range of haemodynamic imbalances as well as a dynamic analysis to assess the functionality of the mechanical damper. The second part introduces the development of a numerical program to simulate human circulation supported by the passively controlled BiVAD. Both investigations showed that the prototype was able to mimic the native baroreceptor response. Simulating hypertension, poor flow balancing, and subsequent ventricular failure during BiVAD support allowed the passive controller's response to be assessed. Triggered by the resulting pressure imbalance, the controller responded by passively adjusting the VAD outputs to maintain healthy arterial pressures. This baroreceptor-like response demonstrated the inherent stability of the auto-regulating BiVAD prototype. Simulating pulmonary hypertension in the more observable numerical model, however, revealed a serious issue with the passive response: the subsequent decrease in venous return into the left heart went unnoticed by the passive controller. Meanwhile, the coupled nature of the passive response not only decreased RVAD output to reduce pulmonary arterial pressure, but also increased LVAD output. Consequently, the LVAD increased fluid evacuation from the left ventricle (LV) and so actually accelerated the onset of LV collapse.
It was concluded that despite the inherently stable baroreceptor-like response of the passive controller, its lack of sensitivity to venous return made it unviable in its present configuration. The study revealed a number of other important findings. Perhaps the most significant was that the reduced pulse experienced during constant-flow support unbalanced the ratio of effective resistances of the two vascular circuits; even during steady rotary support, therefore, the resulting ventricular volume imbalance increased the likelihood of suckdown. Additionally, mechanical damping of the passive controller's response successfully filtered out pressure fluctuations from residual ventricular function. Finally, the importance of recognising inertial contributions to blood flow in the atria and ventricles in a numerical simulation was highlighted. This thesis documents the first attempt to create a fully auto-regulated rotary cardiac assist device. The initial results encourage development of an inlet configuration sensitive to low flow, such as collapsible inlet cannulae. Combining this with the existing baroreceptor-like response of the passive controller would render a highly stable, passively controlled BiVAD configuration. The prototype controller's passive interaction with the vasculature is a significant step towards a highly stable new generation of artificial heart.
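
As a hedged illustration of the passive controller's damped response (not the thesis's model), the sketch below integrates a second-order spring-mass-damper system driven by a step pressure imbalance; all parameter values are invented.

```python
# Minimal sketch: the passive controller modelled as a damped second-order
# system, m*x'' + c*x' + k*x = A*dP(t), where x is the mechanism's deflection
# (shifting output between the two pumps) and dP is the vascular pressure
# imbalance acting on effective area A. All values are invented.
m, c, k, A = 0.05, 2.0, 40.0, 1e-4   # kg, N*s/m, N/m, m^2 (damping ratio ~0.7)
dt, T = 1e-4, 2.0                    # time step and horizon, seconds
x, v = 0.0, 0.0                      # deflection (m) and velocity (m/s)
for step in range(int(T / dt)):
    t = step * dt
    dP = 2000.0 if t > 0.5 else 0.0  # step pressure imbalance (Pa) at t = 0.5 s
    a = (A * dP - c * v - k * x) / m # Newton's second law
    v += a * dt                      # semi-implicit Euler integration
    x += v * dt
print(f"steady-state deflection ~ {x * 1000:.2f} mm "
      f"(analytic A*dP/k = {A * 2000 / k * 1000:.2f} mm)")
```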

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a rate-based flow control scheme based upon per-VC virtual queuing is proposed for the Available Bit Rate (ABR) service in ATM. In this scheme, each VC in a shared buffer is assigned a virtual queue, which is a counter. To achieve a specific kind of fairness, an appropriate scheduler is applied to the virtual queues, and each VC's bottleneck rate (fair share) is derived from its virtual cell departure rate. This approach to deriving a VC's fair share is simple and accurate. By controlling each VC with respect to its virtual queue and the queue build-up in the shared buffer, the scheme avoids network congestion. The principle of the control scheme is first illustrated by max–min flow control, realised by scheduling the virtual queues in round-robin order; further application of the scheme is demonstrated through the achievement of weighted fairness via weighted round-robin scheduling. Simulation results show that, with simple computation, the proposed scheme achieves the desired fairness exactly and controls network congestion effectively.
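
A minimal sketch of the scheme's core loop under stated assumptions: per-VC virtual queues are plain counters, a round-robin scheduler drains one virtual cell per visit, and each VC's fair share emerges as its virtual departure rate. The slot structure and arrival patterns are invented for illustration.

```python
from collections import deque

def virtual_round_robin(arrivals, link_cells_per_slot):
    """Per-VC virtual queuing sketch: each VC's virtual queue is a counter;
    round-robin service of the virtual queues yields each VC's fair share
    as its virtual departure count."""
    vq = {vc: 0 for vc in arrivals}          # virtual queues (counters)
    departed = {vc: 0 for vc in arrivals}    # virtual departures per VC
    order = deque(vq)
    slots = len(next(iter(arrivals.values())))
    for t in range(slots):
        for vc in arrivals:
            vq[vc] += arrivals[vc][t]        # virtual arrivals this slot
        for _ in range(link_cells_per_slot): # link serves N cells per slot
            for _ in range(len(order)):      # find the next backlogged VC
                order.rotate(-1)
                vc = order[0]
                if vq[vc] > 0:
                    vq[vc] -= 1
                    departed[vc] += 1
                    break
    return departed

# Two greedy VCs and one light VC share a 4-cell/slot link for 100 slots;
# departures approximate the max-min split: vc3 is capped by its own demand
# while vc1 and vc2 split the remaining capacity evenly.
arrivals = {"vc1": [4] * 100, "vc2": [4] * 100, "vc3": [1] * 100}
print(virtual_round_robin(arrivals, link_cells_per_slot=4))
```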

Relevance:

30.00%

Publisher:

Abstract:

A high-performance, low-complexity rate-based flow control algorithm that can avoid congestion and achieve fairness is important to the ATM available bit rate service. The explicit rate allocation algorithm proposed by Kalampoukas et al. is designed to achieve max–min fairness in ATM networks. It has several attractive features, such as a fixed computational complexity of O(1) and guaranteed convergence to max–min fairness. In this paper, certain drawbacks of the algorithm, such as the severe overload of an outgoing link during the transient period and the non-conforming use of the current cell rate field in a resource management cell, are identified and analysed, and a new algorithm that overcomes these drawbacks is proposed. The proposed algorithm also simplifies the rate computation. Compared with Kalampoukas's algorithm, it performs better in terms of congestion avoidance and smoothness of rate allocation.
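
For reference, the max–min fairness target that both algorithms converge to can be computed offline by water-filling; the sketch below is that textbook computation, not the O(1) incremental algorithm analysed in the paper.

```python
def max_min_shares(capacity: float, demands: dict) -> dict:
    """Water-filling computation of max-min fair rates: repeatedly grant
    bottlenecked VCs (demand below the current fair level) their full
    demand, then split the residual capacity evenly among the rest."""
    shares, remaining, cap = {}, dict(demands), capacity
    while remaining:
        fair = cap / len(remaining)
        bottlenecked = {vc: d for vc, d in remaining.items() if d <= fair}
        if not bottlenecked:                     # everyone is rate-limited
            shares.update({vc: fair for vc in remaining})
            return shares
        for vc, d in bottlenecked.items():       # satisfy small demands fully
            shares[vc] = d
            cap -= d
            del remaining[vc]
    return shares

print(max_min_shares(100.0, {"vc1": 10.0, "vc2": 60.0, "vc3": 60.0}))
# -> {'vc1': 10.0, 'vc2': 45.0, 'vc3': 45.0}
```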

Relevance:

30.00%

Publisher:

Abstract:

The IEEE Wireless LAN standard has been a true success story, enabling convenient, efficient, and low-cost access to broadband networks for both private and professional use. However, the increasing density and uncoordinated operation of wireless access points, combined with constantly growing traffic demands, have started hurting the users' quality of experience. Moreover, the emerging ubiquity of wireless access has placed it at the center of attention for network attacks, which not only raises users' concerns about security but also indirectly affects connection quality due to proactive measures against security attacks. In this work, we introduce an integrated solution to the congestion avoidance and attack mitigation problems through cooperation among wireless access points. The proposed solution implements a Partially Observable Markov Decision Process (POMDP) as an intelligent distributed control system. By successfully differentiating resource-hampering attacks from overload cases, the control system takes an appropriate action in each detected anomaly case without disturbing the quality of service for end users. The proposed solution is fully implemented on a small-scale testbed, on which we present our observations and demonstrate the effectiveness of the system in detecting and alleviating both attack and congestion situations.
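
A hedged sketch of the belief-tracking step at the heart of a POMDP controller: a probability distribution over hidden states (congested versus under attack) is updated with Bayes' rule as anomaly observations arrive. The states, observations, and probabilities are invented, and the state-transition step of a full POMDP filter is omitted for brevity.

```python
# States, observations, and probabilities are invented for illustration.
STATES = ("normal", "congested", "under_attack")

# P(observation | state); each state's column sums to 1.
OBS_MODEL = {
    "high_collisions":  {"normal": 0.05, "congested": 0.60, "under_attack": 0.30},
    "malformed_frames": {"normal": 0.02, "congested": 0.08, "under_attack": 0.60},
    "nominal":          {"normal": 0.93, "congested": 0.32, "under_attack": 0.10},
}

def update_belief(belief, obs):
    """One Bayes step: posterior(s) is proportional to P(obs | s) * prior(s)."""
    post = {s: OBS_MODEL[obs][s] * belief[s] for s in STATES}
    total = sum(post.values())
    return {s: p / total for s, p in post.items()}

belief = {s: 1.0 / 3.0 for s in STATES}          # uniform prior
for obs in ("high_collisions", "malformed_frames"):
    belief = update_belief(belief, obs)
print({s: round(p, 3) for s, p in belief.items()})
# Belief mass shifts to "under_attack", so the controller would pick a
# mitigation action rather than the congestion-avoidance action that a
# plain overload would warrant.
```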

Relevance:

30.00%

Publisher:

Abstract:

Wireless networked control systems (WNCSs) have been widely used in manufacturing and industrial processing over the last few years. They provide real-time control with a unique characteristic: periodic traffic. These systems have time-critical requirements. Due to current wireless mechanisms, WNCS performance suffers from long time-varying delays, packet dropout, and inefficient channel utilization. Current wirelessly networked applications like WNCSs are designed on a layered-architecture basis, and the features of this layered architecture constrain the performance of these demanding applications. Numerous efforts have attempted to use cross-layer design (CLD) approaches to improve the performance of various networked applications. However, the existing research rarely considers large-scale networks and congested network conditions in WNCSs, and there is a lack of discussion on how to apply CLD approaches in WNCSs. This thesis proposes a cross-layer design methodology to address the timeliness of periodic traffic and to improve the efficiency of channel utilization in WNCSs. The design of the proposed CLD centers on measuring the underlying network condition, classifying the network state, and adjusting the sampling period between sensors and controllers; this period adjustment is able to maintain the minimum allowable sampling period while maximizing control performance. Extensive simulations are conducted using the network simulator NS-2 to evaluate the performance of the proposed CLD. The comparative studies cover communications with and without the proposed CLD. The results show that the proposed CLD is capable of fulfilling the timeliness requirement under congested network conditions, and also improves channel utilization efficiency and the proportion of effective data in WNCSs.
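
A minimal sketch of the adaptation loop described above, under invented thresholds and step sizes: the measured network condition is classified, and the sampling period is lengthened under congestion and shortened when the channel is idle, within allowable bounds.

```python
def adjust_sampling_period(period_ms, delay_ms, loss_rate,
                           min_period_ms=10.0, max_period_ms=200.0):
    """Sketch of the cross-layer adaptation loop: classify the measured
    network condition, then lengthen the sensor-to-controller sampling
    period under congestion (fewer packets on the channel) and shorten it
    when the channel is idle (better control performance). The thresholds
    and multiplicative steps are invented for illustration."""
    if delay_ms > 50.0 or loss_rate > 0.05:      # classified as congested
        period_ms = min(period_ms * 1.5, max_period_ms)
    elif delay_ms < 10.0 and loss_rate < 0.01:   # classified as idle
        period_ms = max(period_ms / 1.2, min_period_ms)
    return period_ms                             # unchanged if "normal"

period = 20.0
for delay, loss in [(60.0, 0.02), (70.0, 0.08), (5.0, 0.0)]:
    period = adjust_sampling_period(period, delay, loss)
    print(f"delay={delay} ms, loss={loss:.2f} -> period={period:.1f} ms")
```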