952 results for PACKET-SWITCHED NETWORK


Relevance:

30.00%

Publisher:

Abstract:

The development and deployment of distributed network-aware applications and services over the Internet require the ability to compile and maintain a model of the underlying network resources with respect to (one or more) characteristic properties of interest. To be manageable, such models must be compact, and must enable a representation of properties along temporal, spatial, and measurement resolution dimensions. In this paper, we propose a general framework for the construction of such metric-induced models using end-to-end measurements. We instantiate our approach using one such property, packet loss rates, and present an analytical framework for the characterization of Internet loss topologies. From the perspective of a server, the loss topology is a logical tree rooted at the server with clients at its leaves, in which edges represent lossy paths between a pair of internal network nodes. We show how end-to-end unicast packet probing techniques could be used to (1) infer a loss topology and (2) identify the loss rates of links in an existing loss topology. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling, and mirror site selection. We report on simulation, implementation, and Internet deployment results that show the effectiveness of our approach and its robustness in terms of accuracy and convergence over a wide range of network conditions.
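As a concrete illustration of the tree-based inference idea, the sketch below estimates the pass rate of the shared (root) link of a two-leaf loss tree from the marginal and joint reception frequencies observed at the two clients, assuming independent Bernoulli losses per logical link. This is a minimal, hypothetical example of the underlying estimator, not the paper's unicast packet-pair probing procedure; all names are illustrative.

```python
import random

def simulate_probes(a0, b1, b2, n=100_000, seed=1):
    """Simulate n probes over a two-leaf loss tree.

    a0: pass rate of the shared link (server -> branch point)
    b1, b2: pass rates of the two leaf links (branch point -> client 1/2)
    Returns per-probe reception outcomes for the two clients.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        shared_ok = rng.random() < a0          # a loss on the shared link hits both clients
        r1 = shared_ok and rng.random() < b1   # independent loss on leaf link 1
        r2 = shared_ok and rng.random() < b2   # independent loss on leaf link 2
        outcomes.append((r1, r2))
    return outcomes

def estimate_shared_pass_rate(outcomes):
    """Classic tree estimator: A0 ~ (g1 * g2) / g12, where g1 and g2 are the
    marginal reception rates and g12 is the joint reception rate."""
    n = len(outcomes)
    g1 = sum(r1 for r1, _ in outcomes) / n
    g2 = sum(r2 for _, r2 in outcomes) / n
    g12 = sum(r1 and r2 for r1, r2 in outcomes) / n
    return (g1 * g2) / g12

outcomes = simulate_probes(a0=0.95, b1=0.98, b2=0.90)
print(f"estimated shared-link pass rate: {estimate_shared_pass_rate(outcomes):.3f}")  # ~0.95
```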

Relevance:

30.00%

Publisher:

Abstract:

Internet streaming applications are adversely affected by network conditions such as high packet loss rates and long delays. This paper aims at mitigating such effects by leveraging the availability of client-side caching proxies. We present a novel caching architecture (and associated cache management algorithms) that turns edge caches into accelerators of streaming media delivery. A salient feature of our caching algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers. The caching algorithms we propose are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Using realistic models of Internet bandwidth (derived from proxy cache logs and measured over real Internet paths), we have conducted extensive simulations to evaluate the performance of various cache management alternatives. Our experiments demonstrate that network-aware caching algorithms can significantly reduce service delay and improve overall stream quality. They also show that partial caching is particularly effective when bandwidth variability is not very high.
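The intuition behind partial caching and joint cache/origin delivery can be made concrete with a small calculation: the edge cache needs to hold a prefix just large enough that the remainder of the stream can be fetched from the origin at the available bandwidth without stalling the client. The helper below is a minimal sketch of that calculation under a constant-bandwidth assumption; it is not the paper's cache-management algorithm and the function name is hypothetical.

```python
def min_cached_prefix_seconds(duration_s, bitrate_kbps, origin_bw_kbps):
    """Smallest prefix (in seconds of playback) an edge cache must hold so the
    rest of the stream can be fetched from the origin at origin_bw_kbps
    without the client stalling. Illustration of the partial-caching intuition
    only, assuming constant bandwidth and bit rate."""
    if origin_bw_kbps >= bitrate_kbps:
        return 0.0                       # the origin path alone can sustain the stream
    deficit = 1.0 - origin_bw_kbps / bitrate_kbps
    return duration_s * deficit          # cached prefix covers the bandwidth shortfall

# Example: a 300 s, 1 Mb/s clip over a 750 kb/s origin path needs ~75 s cached.
print(min_cached_prefix_seconds(300, 1000, 750))
```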

Relevance:

30.00%

Publisher:

Abstract:

A number of recent studies have pointed out that TCP's performance over ATM networks tends to suffer, especially under congestion and switch buffer limitations. Switch-level enhancements and link-level flow control have been proposed to improve TCP's performance in ATM networks. Selective Cell Discard (SCD) and Early Packet Discard (EPD) ensure that partial packets are discarded from the network "as early as possible", thus reducing wasted bandwidth. While such techniques improve the achievable throughput, their effectiveness tends to degrade in multi-hop networks. In this paper, we introduce Lazy Packet Discard (LPD), an AAL-level enhancement that improves effective throughput, reduces response time, and minimizes wasted bandwidth for TCP/IP over ATM. In contrast to the SCD and EPD policies, LPD delays as much as possible the removal from the network of cells belonging to a partially communicated packet. We outline the implementation of LPD and show, through analysis and simulations, the performance advantage of TCP/LPD over plain TCP and TCP/EPD.
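For background, the EPD policy contrasted here can be expressed as a simple admission test on the arrival of a packet's first cell, as in the sketch below; LPD itself is only characterized at the level the abstract gives, so the comment merely restates its contrasting behaviour. Names and the threshold value are illustrative assumptions.

```python
def epd_admit_packet(queue_len_cells, buffer_cells, packet_cells, threshold=0.8):
    """Early Packet Discard: when a packet's first cell arrives and the switch
    buffer occupancy is above a threshold, drop the *entire* packet so no
    bandwidth is wasted carrying cells of a packet that cannot be delivered
    whole. (Lazy Packet Discard, as described in the abstract, takes the
    opposite stance for packets already partially forwarded: their remaining
    cells are removed from the network as late as possible.)"""
    if queue_len_cells >= threshold * buffer_cells:
        return False                                   # discard the whole packet up front
    return queue_len_cells + packet_cells <= buffer_cells
```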

Relevance:

30.00%

Publisher:

Abstract:

Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When a number of such connections share a common endpoint, that endpoint has the opportunity to correlate these end-to-end measurements to better diagnose and control the use of shared resources. A valuable characterization of such shared resources is the "loss topology". From the perspective of a server with concurrent connections to multiple clients, the loss topology is a logical tree rooted at the server in which edges represent lossy paths between a pair of internal network nodes. We develop an end-to-end unicast packet probing technique and an associated analytical framework to: (1) infer loss topologies, (2) identify loss rates of links in an existing loss topology, and (3) augment a topology to incorporate the arrival of a new connection. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that our approach is robust in terms of its accuracy and convergence over a wide range of network conditions.
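Once a loss topology and its per-link loss rates are known, each client's end-to-end loss rate follows from the product of link pass rates along its root-to-leaf path, assuming independent losses. The helper below is a minimal sketch of that composition step with a hypothetical tree representation; it is not the inference procedure itself.

```python
def end_to_end_loss(tree, link_loss, client):
    """Combine per-link loss rates along the root-to-leaf path of `client`.

    tree: dict mapping each node to its parent (the root maps to None)
    link_loss: dict mapping node -> loss rate of the link from its parent
    Assumes independent losses, so the path pass rate is the product of
    the link pass rates."""
    pass_rate = 1.0
    node = client
    while tree[node] is not None:
        pass_rate *= 1.0 - link_loss[node]
        node = tree[node]
    return 1.0 - pass_rate

# Toy topology: server -> branch -> {client_a, client_b}
tree = {"server": None, "branch": "server", "client_a": "branch", "client_b": "branch"}
link_loss = {"branch": 0.05, "client_a": 0.02, "client_b": 0.10}
print(f"client_a end-to-end loss: {end_to_end_loss(tree, link_loss, 'client_a'):.3f}")
```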

Relevance:

30.00%

Publisher:

Abstract:

Rachit Agarwal, Rafael V. Martinez-Catala, Sean Harte, Cedric Segard, Brendan O'Flynn, "Modeling Power in Multi-functionality Sensor Network Applications," Proceedings of the Second International Conference on Sensor Technologies and Applications (SENSORCOMM 2008), pp. 507-512, August 25-31, 2008, Cap Esterel, France.

Relevance:

30.00%

Publisher:

Abstract:

A full hardware implementation of a Weighted Fair Queuing (WFQ) packet scheduler is proposed. The circuit architecture presented has been implemented using Altera Stratix II FPGA technology, utilizing RLDII and QDRII memory components. The circuit can provide fine-granularity Quality of Service (QoS) support at a line throughput rate of 12.8 Gb/s in its current implementation. The authors suggest that, owing to the flexible and scalable modular circuit design approach used, the current circuit architecture can be targeted for a full ASIC implementation to deliver 50 Gb/s throughput. The circuit itself comprises three main components: a WFQ algorithm computation circuit, a tag/time-stamp sort and retrieval circuit, and a high-throughput shared buffer. The circuit targets the support of emerging wireline and wireless network nodes that focus on Service Level Agreements (SLAs) and Quality of Experience.
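The scheduling discipline the circuit implements can be summarized in software by the classic finish-tag computation of fair queuing: each arriving packet is stamped with a virtual finish time and the packet with the smallest tag is transmitted first. The sketch below is a minimal software model of that computation only, assuming a simplified virtual-clock update; it reflects none of the FPGA architecture, memory organization, or sorting circuit described in the abstract.

```python
import heapq

class WFQScheduler:
    """Minimal software model of WFQ tag computation: each arriving packet gets
    a virtual finish time and the packet with the smallest finish tag is sent
    first. Only illustrates the algorithm the circuit computes in hardware."""

    def __init__(self):
        self.virtual_time = 0.0
        self.last_finish = {}   # flow id -> finish tag of its last queued packet
        self.queue = []         # min-heap of (finish_tag, seq, flow, length)
        self.seq = 0

    def enqueue(self, flow, length_bytes, weight):
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + length_bytes / weight        # higher weight -> smaller tag
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, length_bytes))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, length = heapq.heappop(self.queue)
        self.virtual_time = finish                    # simplified virtual-clock update
        return flow, length

sched = WFQScheduler()
sched.enqueue("voice", 200, weight=4)
sched.enqueue("data", 1500, weight=1)
print(sched.dequeue())        # ('voice', 200): smallest finish tag is served first
```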

Relevance:

30.00%

Publisher:

Abstract:

The requirement to provide multimedia services with QoS support in mobile networks has led to the standardization and deployment of high-speed data access technologies such as the High Speed Downlink Packet Access (HSDPA) system. HSDPA improves downlink packet data and multimedia services support in WCDMA-based cellular networks. As is the trend in emerging wireless access technologies, HSDPA supports end-user multi-class sessions comprising parallel flows with diverse Quality of Service (QoS) requirements, such as real-time (RT) voice or video streaming concurrent with non-real-time (NRT) data services transmitted to the same user, with differentiated queuing at the radio link interface. Hence, in this paper we present and evaluate novel radio link buffer management schemes for QoS control of multimedia traffic comprising concurrent RT and NRT flows in the same HSDPA end-user session. The new buffer management schemes, Enhanced Time Space Priority (E-TSP) and Dynamic Time Space Priority (D-TSP), are designed to improve radio link and network resource utilization as well as to optimize end-to-end QoS performance of both RT and NRT flows in the end-user session. Both schemes are based on a Time-Space Priority (TSP) queuing system, which provides joint delay and loss differentiation between the flows by queuing the (partially) loss-tolerant RT flow packets with higher transmission priority but with restricted access to the buffer space, whilst allowing the delay-tolerant NRT flow unlimited access to the buffer space but with lower transmission priority. Experiments by means of extensive system-level HSDPA simulations demonstrate that, with the proposed TSP-based radio link buffer management schemes, significant end-to-end QoS performance gains accrue to end-user traffic with simultaneous RT and NRT flows, in addition to improved resource utilization in the radio access network.
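The Time-Space Priority discipline described above can be sketched as a queue that gives RT packets transmission (time) priority while capping the buffer space they may occupy, and gives NRT packets the remaining space at lower transmission priority. The following is a minimal illustration under assumed threshold values; it is not the E-TSP or D-TSP parameterization evaluated in the paper.

```python
from collections import deque

class TimeSpacePriorityQueue:
    """Sketch of the TSP discipline described in the abstract: real-time (RT)
    packets are served first (time priority) but may occupy at most rt_limit
    buffer slots (space restriction); non-real-time (NRT) packets may use the
    rest of the buffer but are served only when no RT packet is waiting.
    Threshold values are illustrative, not the E-TSP/D-TSP parameterization."""

    def __init__(self, buffer_size=100, rt_limit=20):
        self.buffer_size, self.rt_limit = buffer_size, rt_limit
        self.rt, self.nrt = deque(), deque()

    def enqueue(self, packet, is_rt):
        total = len(self.rt) + len(self.nrt)
        if is_rt:
            if len(self.rt) < self.rt_limit and total < self.buffer_size:
                self.rt.append(packet)
                return True
            return False                      # RT packet dropped (loss-tolerant flow)
        if total < self.buffer_size:
            self.nrt.append(packet)           # NRT admitted while any space remains
            return True
        return False

    def dequeue(self):
        if self.rt:
            return self.rt.popleft()          # RT always transmitted first
        return self.nrt.popleft() if self.nrt else None
```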

Relevance:

30.00%

Publisher:

Abstract:

An overview of research on reconfigurable architectures for network processing applications within the Institute of Electronics, Communications and Information Technology (ECIT) is presented. Three key network processing topics, namely node throughput, Quality of Service (QoS) and security, are examined, where custom reconfigurability allows network nodes to adapt to fluctuating network traffic and customer demands. Various architectural possibilities have been investigated in order to explore the options and trade-offs available when using reconfigurability for packet/frame processing, packet scheduling and data encryption/decryption. This research has shown that there is no common approach that can be applied; rather, the methodologies used and the cost-benefit of incorporating reconfigurability depend on the function considered, reconfigurability being well suited to encryption/decryption, for example, but not to packet/frame processing. © 2005 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

Interesting wireless networking scenarios exist in which network services must be guaranteed in a dynamic fashion for some priority users. For example, in disaster recovery, team members need to be able to quickly block other users in order to gain sole use of the radio channel. As it is not always feasible to physically switch off other users, we propose a new approach, termed selective packet destruction (SPD), to ensure service for priority users. A testbed for SPD has been created, based on the Rice University Wireless Open-Access Research Platform, and has been used to examine the feasibility of our approach. Results from the testbed demonstrate the feasibility of SPD and show how a balance between performance and acknowledgement destruction rate can be achieved: a 90% reduction in TCP and UDP traffic is achieved for a 75% MAC ACK destruction rate.
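The core SPD decision, as described here, reduces to selectively destroying MAC-layer acknowledgements of non-priority users at a configurable rate. The snippet below is a hypothetical illustration of that decision only (the 75% figure is the operating point reported in the abstract); it does not represent the WARP testbed implementation.

```python
import random

def should_destroy_ack(sender, priority_users, destruction_rate=0.75, rng=random):
    """Decide whether to jam an overheard MAC ACK. ACKs belonging to priority
    users are always left intact; other users' ACKs are destroyed with the
    configured probability (the abstract reports ~90% TCP/UDP throughput
    reduction at a 75% destruction rate). Illustration only."""
    if sender in priority_users:
        return False
    return rng.random() < destruction_rate
```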

Relevance:

30.00%

Publisher:

Abstract:

Network management tools must be able to monitor and analyze traffic flowing through network systems. According to the OpenFlow protocol applied in Software-Defined Networking (SDN), packets are classified into flows, which are looked up in flow tables. Further actions, such as packet forwarding, modification, and redirection to a group table, are then taken according to the lookup results. A novel hardware solution for SDN-enabled packet classification is presented in this paper. The proposed scheme is based on a label-based search method, achieving high flexibility in memory usage. The implemented hardware architecture provides optimal lookup performance by configuring the search algorithm and by performing fast incremental updates as programmed by the software controller.
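For reference, the flow-table abstraction that the proposed hardware accelerates can be modelled in software as a priority-ordered match over header fields, as in the sketch below. Field names, entry structure, and the miss action are illustrative assumptions; the label-based search method itself is not modelled.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    priority: int
    match: dict      # header field -> required value (missing field = wildcard)
    action: str      # e.g. "forward:2", "drop", "group:5"

def lookup(flow_table, packet_headers):
    """Return the action of the highest-priority entry whose match fields all
    agree with the packet's headers (software model of the table lookup the
    proposed hardware performs)."""
    for entry in sorted(flow_table, key=lambda e: -e.priority):
        if all(packet_headers.get(f) == v for f, v in entry.match.items()):
            return entry.action
    return "send_to_controller"      # table miss

table = [
    FlowEntry(200, {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "forward:2"),
    FlowEntry(100, {"ip_dst": "10.0.0.5"}, "forward:3"),
]
print(lookup(table, {"ip_dst": "10.0.0.5", "tcp_dst": 443}))   # forward:3
```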

Relevance:

30.00%

Publisher:

Abstract:

Recent trends, such as Software-Defined Networking (SDN), introduce programmability to the network, with the opportunity to dynamically route traffic based on flow descriptions. Packet header lookup is the first phase in this process. In this paper, we demonstrate improved header lookup and flow rule update speeds over conventional lookup algorithms. This is achieved by performing individual packet header field searches and combining the search results. We propose that the algorithm for each individual field search be selected according to the application requirements. Improving network processing performance with our configurable solution directly supports the programmability promised by SDN.
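The decomposition idea can be illustrated as follows: each header field is searched independently (by whatever algorithm suits it), each search yields the set of rules compatible with that field, and the per-field sets are intersected, with the highest-priority surviving rule winning. The sketch below shows only this combination step with hypothetical data structures; the individual field-search algorithms are abstracted away.

```python
def classify(per_field_index, packet_headers, rule_priority):
    """Decomposed packet classification: each field is looked up on its own,
    the per-field matching-rule sets are intersected, and the highest-priority
    surviving rule wins. A sketch of the combination step only."""
    matching = None
    for field, index in per_field_index.items():
        rules = index.get(packet_headers.get(field), set())
        matching = rules if matching is None else matching & rules
        if not matching:
            return None                      # early exit: no rule can match
    return max(matching, key=lambda r: rule_priority[r])

# Toy example: rule ids matched per field value (wildcards omitted for brevity).
per_field_index = {
    "ip_dst": {"10.0.0.5": {1, 2}, "10.0.0.6": {3}},
    "tcp_dst": {80: {1}, 443: {2, 3}},
}
rule_priority = {1: 300, 2: 200, 3: 100}
print(classify(per_field_index, {"ip_dst": "10.0.0.5", "tcp_dst": 443}, rule_priority))  # 2
```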

Relevance:

30.00%

Publisher:

Abstract:

The last decade has seen exponential growth in wireless communication networks, both in service penetration rates and in the deployment of new infrastructure across the globe. It is now well established that this trend will not only continue but strengthen, driven by the expected convergence between mobile wireless networks and the provision of broadband services over the fixed Internet, in an evolution towards an integrated architecture based on IP services and applications. For this reason, mobile wireless communications will play a fundamental role in the development of the information society in the medium and long term.

The strategy followed in the design and implementation of current-generation cellular networks (2G and 3G) was to stratify the protocol architecture into a modular structure of self-contained layers, where each layer of the model is responsible for implementing a set of functions. In this model, communication takes place only between adjacent layers, through pre-established communication primitives. This architecture makes it easier to implement and introduce new functionality in the network. However, the fact that the lower layers of the protocol stack do not use information made available by the upper layers, and vice versa, leads to a degradation in system performance. This issue is particularly important when multiple-antenna (MIMO) systems are deployed. Multiple-antenna systems introduce an additional degree of freedom in radio resource allocation: the spatial domain. Unlike resources allocated in the time and frequency domains, radio resources mapped onto the spatial domain cannot be assumed to be completely orthogonal, owing to the interference that results from several terminals transmitting on the same channel and/or time slots but on different spatial beams. Consequently, making information about the state of the radio resources available to the upper layers of the protocol stack is of fundamental importance in meeting the required quality-of-service criteria. Efficient radio resource management calls for low-complexity packet scheduling algorithms that set the users' priority levels for access to those resources based on information supplied both by the lower and by the upper layers of the model. This new communication paradigm, known as cross-layer design, maximizes the data-carrying capacity of the mobile radio channel while satisfying the quality-of-service requirements derived from the application layer.

The IEEE 802.16e standard, known as Mobile WiMAX, was drawn up to meet the specifications associated with fourth-generation cellular systems. Its scalable architecture, low deployment cost and high data rates yield efficient data multiplexing and low packet transmission delays, which are fundamental attributes for the provision of broadband services. Likewise, the packet-switched communication inherent to its medium access layer is fully compatible with the quality-of-service demands of those applications. Mobile WiMAX therefore appears to satisfy the demanding requirements of fourth-generation mobile networks. This thesis investigates, designs and implements packet scheduling algorithms aimed at the efficient management of the pool of radio resources in the time, frequency and spatial domains of cellular networks, using networks based on the IEEE 802.16e standard as the practical case study. The proposed algorithms combine metrics drawn from the physical layer with the quality-of-service requirements of the upper layers, in line with the cross-layer network architecture paradigm. The performance of these algorithms is analysed through simulations carried out with a system-level simulator, on a platform that implements the physical and medium access layers of the IEEE 802.16e standard.
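As an illustration of the cross-layer idea pursued in the thesis, a scheduler can rank users on each radio resource unit by a metric that multiplies a physical-layer term (instantaneous versus average achievable rate) by an upper-layer urgency term (head-of-line delay relative to the flow's delay budget). The sketch below uses a generic M-LWDF-style metric for this purpose; it is not one of the thesis's proposed algorithms, and all parameter values are hypothetical.

```python
import math

def cross_layer_metric(achievable_rate_bps, avg_rate_bps, hol_delay_s,
                       delay_budget_s, max_delay_violation=0.05):
    """Generic cross-layer scheduling metric (M-LWDF style): weight the
    PHY-layer opportunity (instantaneous vs. average rate) by the urgency of
    the flow's head-of-line packet (delay relative to its QoS budget).
    Illustrative only; not the thesis's schedulers."""
    urgency_weight = -math.log(max_delay_violation) / delay_budget_s
    channel_gain = achievable_rate_bps / max(avg_rate_bps, 1.0)
    return urgency_weight * hol_delay_s * channel_gain

def pick_user(users):
    """Schedule the user with the largest metric on the current resource unit."""
    return max(users, key=lambda u: cross_layer_metric(**users[u]))

users = {
    "voip":  dict(achievable_rate_bps=2e6,  avg_rate_bps=1e6, hol_delay_s=0.06, delay_budget_s=0.10),
    "video": dict(achievable_rate_bps=8e6,  avg_rate_bps=6e6, hol_delay_s=0.02, delay_budget_s=0.15),
    "ftp":   dict(achievable_rate_bps=12e6, avg_rate_bps=9e6, hol_delay_s=0.50, delay_budget_s=1.00),
}
print(pick_user(users))   # the urgent, well-positioned voip flow wins this resource unit
```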

Relevance:

30.00%

Publisher:

Abstract:

This work stems from the interest in replacing optical network nodes that are largely based on electronics with nodes based on optical technology. Optical technology is expected to enable higher bit rates in the network, greater transparency and greater efficiency through new switching paradigms. In line with this vision, the MZI-SOA, a hybrid-integrated semiconductor device, was used to implement the optical signal processing functions required in next-generation optical network nodes. New optical networks employ advanced, phase-managed modulation formats, so the impact of these formats on the performance of the MZI-SOA in wavelength and format conversion was studied experimentally and by simulation under various operating conditions, and usage rules for optimum operation were derived. The impact of the signal pulse shape on device performance was also studied. The MZI-SOA was then used to implement bit-level and packet-level temporal functions. The operation of a converter from wavelength-division multiplexing to optical time-division multiplexing was investigated experimentally and by simulation, and that of a packet compressor and decompressor by simulation; for the latter, operation with the MZI-SOA based on semiconductor optical amplifiers with quantum-well and quantum-dot geometries was investigated. A time-slot interchanger was also demonstrated experimentally; it exploits the MZI-SOA as a wavelength converter and uses a bank of optical delay lines to introduce a selectable delay into the signal. Finally, the impact of crosstalk in optical networks was studied analytically, experimentally and by simulation in several situations. An analytical performance-evaluation model was extended to cover signals that are distorted and affected by crosstalk. The case of strongly filtered signals affected by crosstalk was studied, and it was shown that, to determine the resulting penalties correctly, both effects must be considered jointly rather than separately. The crosstalk-limited scalability of a time-slot interchanger based on the MZI-SOA operating as a space switch was studied, and it was also shown that signals strongly affected by nonlinearities can cause higher crosstalk penalties than signals unaffected by nonlinearities. This work demonstrated that the MZI-SOA, acting as a fundamental building block, enables the construction of several relevant optical circuits, and its performance was analysed from the component level up to the system level. Taking into account the advantages and disadvantages of the MZI-SOA and recent developments in other technologies, research topics were suggested with a view to evolving towards next-generation optical networks.
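The wavelength-conversion behaviour of the MZI-SOA is commonly summarized by the interferometer transfer function below, in which the input data signal modulates the gain and phase of the SOA in one arm and thereby switches a CW probe between the output ports. It is quoted here in its standard textbook form as background, not as an equation taken from this thesis:

\[
P_{\mathrm{out}}(t) \;=\; \frac{P_{\mathrm{CW}}}{4}\,
\Bigl[\,G_{1}(t) + G_{2}(t) - 2\sqrt{G_{1}(t)\,G_{2}(t)}\,\cos\Delta\phi(t)\Bigr]
\]

where \(P_{\mathrm{CW}}\) is the probe power, \(G_{1}\) and \(G_{2}\) are the SOA gains in the two arms, and \(\Delta\phi(t)\) is the phase difference induced by cross-phase modulation.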

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and by the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear problem, suitable for being solved by means of a widespread commercial package. Results of the proposed optimization method are compared with those of another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to the compensation of an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
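Schematically, the objective described above (energy-loss savings plus deferred-investment savings minus capacitor costs) can be written as below; the symbols are illustrative and do not reproduce the paper's notation, and the linearized power-flow and voltage constraints are omitted:

\[
\max \;\; k_{E}\,\Delta E^{\text{losses}} \;+\; k_{D}\,\Delta C^{\text{deferred}}
\;-\; \sum_{i \in \mathcal{N}} \bigl( c^{\text{fix}}\, x_{i} + c^{\text{var}}\, Q_{i} \bigr)
\quad \text{s.t.} \quad Q_{i} \le x_{i}\, Q^{\max}, \;\; x_{i} \in \{0,1\},
\]

where \(x_{i}\) indicates whether node \(i\) is compensated and \(Q_{i}\) is the capacitor rating installed there.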

Relevance:

30.00%

Publisher:

Abstract:

IP-based networks still do not offer the degree of reliability required by new multimedia services; achieving such reliability will be crucial to the success or failure of the next Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we call the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of using the NPD to enhance current QoS routing algorithms so that they offer a certain degree of protection.
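The abstract introduces the FSD and FID components of the NPD but does not reproduce their formulas, so the snippet below only shows one assumed, explicitly hypothetical way such components might be combined into a single protection degree; it is not the paper's formulation.

```python
def network_protection_degree(fsd, fid, alpha=0.5):
    """Hypothetical combination of the two NPD components described in the
    abstract: FSD (a priori failure probability of the protected paths) and
    FID (a posteriori impact of a failure, e.g. normalised packet loss and
    restoration time). The weighted form below is an assumption for
    illustration only; the paper's actual formulation is not reproduced."""
    assert 0.0 <= fsd <= 1.0 and 0.0 <= fid <= 1.0
    return alpha * fsd + (1.0 - alpha) * fid
```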