873 results for Wide area networks (Computer networks)


Relevance: 100.00%

Publisher:

Abstract:

We provide a survey of some of our recent results ([9], [13], [4], [6], [7]) on the analytical performance modeling of IEEE 802.11 wireless local area networks (WLANs). We first present extensions of the decoupling approach of Bianchi ([1]) to the saturation analysis of IEEE 802.11e networks with multiple traffic classes. We have found that even when analysing WLANs with unsaturated nodes, the following state-dependent service model works well: when a certain set of nodes is nonempty, their channel attempt behaviour is obtained from the corresponding fixed-point analysis of the saturated system. We will present our experiences in using this approximation to model multimedia traffic over an IEEE 802.11e network using the enhanced distributed channel access (EDCA) mechanism. We have found that we can model TCP-controlled file transfers, VoIP packet telephony, and streaming video in the IEEE 802.11e setting by this simple approximation.
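
A rough illustration of the decoupling idea above, not the multi-class 802.11e analysis of the cited papers: the sketch below solves Bianchi's single-class saturation fixed point by bisection. The window parameters W and m are illustrative values we assume, not taken from the papers.

def attempt_prob(p, W, m):
    # Attempt probability of a saturated node given conditional collision probability p
    return 2.0 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def saturation_fixed_point(n, W=16, m=6):
    # Solve p = 1 - (1 - attempt_prob(p))**(n - 1) by bisection
    lo, hi = 1e-9, 0.999999
    for _ in range(200):
        p = 0.5 * (lo + hi)
        if abs(p - 0.5) < 1e-12:
            p += 1e-9                      # sidestep the removable 0/0 at p = 1/2
        tau = attempt_prob(p, W, m)
        # a tagged transmission collides if any of the other n - 1 nodes also attempts
        if 1 - (1 - tau) ** (n - 1) > p:
            lo = p
        else:
            hi = p
    return tau, p

print(saturation_fixed_point(n=10))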

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we consider an N × N non-blocking, space-division ATM switch with input cell queueing. At each input, the cell arrival process comprises geometrically distributed bursts of consecutive cells for the various outputs. Motivated by the fact that some input links may be connected to metropolitan area networks, and others directly to B-ISDN terminals, we study the situation where there are two classes of inputs with different values of mean burst length. We show that when inputs contend for an output, giving priority to an input with the smaller expected burst length yields a saturation throughput larger than if the reverse priority is given. Further, giving priority to less bursty traffic can give better throughput than if all the inputs carried this less bursty traffic. We derive the asymptotic (as N → ∞) saturation throughputs for each priority class.
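
A toy saturation simulation along these lines (a simplification we assume for illustration, not the paper's analytical model): every input always has a head-of-line cell, bursts of consecutive cells for one output have geometric length, and when several head-of-line cells contend for an output the cell from the class with the smaller mean burst length wins.

import random

def geometric(rng, mean):
    # Geometric burst length with the given mean (success probability 1/mean)
    k = 1
    while rng.random() > 1.0 / mean:
        k += 1
    return k

def simulate(N=32, mean_burst=(2.0, 16.0), slots=50000, seed=1):
    rng  = random.Random(seed)
    cls  = [i % 2 for i in range(N)]                 # half the inputs in each class
    dest = [rng.randrange(N) for _ in range(N)]      # output of the head-of-line cell
    rem  = [geometric(rng, mean_burst[c]) for c in cls]
    served = 0
    for _ in range(slots):
        contenders = {}
        for i in range(N):
            contenders.setdefault(dest[i], []).append(i)
        for out, inputs in contenders.items():
            # class 0 (smaller mean burst) wins contention; ties broken at random
            w = min(inputs, key=lambda i: (cls[i], rng.random()))
            served += 1
            rem[w] -= 1
            if rem[w] == 0:                          # burst finished: start a new one
                dest[w] = rng.randrange(N)
                rem[w]  = geometric(rng, mean_burst[cls[w]])
    return served / (N * slots)                      # saturation throughput per input

print(simulate())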

Relevance: 100.00%

Publisher:

Abstract:

Fiber-optic CDMA (FO-CDMA) technology is well suited for high-speed local area networks (LANs) owing to its salient features. In this paper, we model the wavelength/time multiple-pulses-per-row (W/T MPR) FO-CDMA network channel as a Z channel. We compare the performance of the W/T MPR code with and without a hard limiter and show that significant performance improvement can be achieved by using hard limiters in the receivers. In broadcast channels, multiple-access interference (MAI) is the dominant source of noise; hence the performance analysis is carried out considering only MAI, and other receiver noise sources are neglected.
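
Since MAI in such an on-off keyed system can only turn a transmitted "0" into a received "1", and never the reverse, the channel is modeled as a Z channel. Schematically (with \epsilon a generic crossover probability, not the paper's exact expression for the W/T MPR code):

    P(Y = 1 | X = 0) = \epsilon,    P(Y = 0 | X = 1) = 0,
    P_b = \epsilon / 2  for equiprobable data bits,

where \epsilon is the probability that accumulated interference pushes a transmitted "0" above the decision threshold; the hard limiter lowers \epsilon by clipping strong interference at the receiver.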

Relevance: 100.00%

Publisher:

Abstract:

We consider the asymptotics of the invariant measure for the process of spatial distribution of N coupled Markov chains in the limit of a large number of chains. Each chain reflects the stochastic evolution of one particle. The chains are coupled through the dependence of transition rates on the spatial distribution of particles in the various states. Our model is a caricature of medium access interactions in wireless local area networks. Our model is also applicable to the study of the spread of epidemics in a network. The limiting process satisfies a deterministic ordinary differential equation called the McKean-Vlasov equation. When this differential equation has a unique globally asymptotically stable equilibrium, the spatial distribution converges weakly to this equilibrium. Using a control-theoretic approach, we examine the question of a large deviation from this equilibrium.
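
In standard mean-field notation (our paraphrase of the limit described above): if \mu_N(t) denotes the empirical distribution of the N particles over the finite state space and Q(\mu) is the transition rate matrix in force when the distribution is \mu, then as N grows \mu_N converges to the solution of the McKean-Vlasov equation

    \dot{\mu}(t) = \mu(t) \, Q(\mu(t)), \qquad \mu(0) = \nu,

with \mu(t) treated as a row vector; a globally asymptotically stable equilibrium \mu^* satisfies \mu^* Q(\mu^*) = 0.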

Relevance: 100.00%

Publisher:

Abstract:

Multi-packet reception (MPR) promises significant throughput gains in wireless local area networks (WLANs) by allowing nodes to transmit even in the presence of ongoing transmissions in the medium. However, the medium access control (MAC) layer must now be redesigned to facilitate, rather than discourage, these overlapping transmissions. We investigate asynchronous MPR MAC protocols, which successfully accomplish this by controlling the node behavior based on the number of ongoing transmissions in the channel. The protocols use the backoff timer mechanism of the distributed coordination function, which makes them practically appealing. We first highlight a unique problem of acknowledgment delays, which arises in asynchronous MPR, and investigate a solution that modifies the medium access rules to reduce these delays and increase system throughput in the single-receiver scenario. We develop a general renewal-theoretic fixed-point analysis that leads to expressions for the saturation throughput, packet dropping probability, and average head-of-line packet delay. We also model and analyze the practical scenario in which nodes may incorrectly estimate the number of ongoing transmissions.
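
The renewal-theoretic structure of such an analysis can be summarized schematically (with generic cycle definitions we assume here, not the paper's exact ones) by the renewal-reward form of the saturation throughput:

    \Theta = E[payload bits successfully delivered in one renewal cycle] / E[duration of one renewal cycle],

where cycles are delimited by regeneration instants of the channel (for example, instants at which the medium becomes idle again), and the expectations are evaluated at the solution of the fixed-point equations for the per-node attempt rates.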

Relevance: 100.00%

Publisher:

Abstract:

In this paper, the architecture of an optical matrix-vector multiplier (MVM) is simulated. The optical design can be made compact by the use of GRIN lenses for the optical fan-in. The intended application area was in storage area networks (SANs), but the concept can also be applied to neural networks. © 2011 Allerton Press, Inc.
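
The operation the optical fan-in realizes is an ordinary matrix-vector product; a minimal electronic reference for checking a simulated optical MVM (matrix size and values below are purely illustrative) is:

import numpy as np

# Reference matrix-vector product: y[i] = sum_j M[i, j] * x[j].
# In the optical MVM, each y[i] is the fanned-in (summed) light modulated by
# row i of M and by the input vector x.
M = np.random.rand(8, 8)   # illustrative 8 x 8 weight matrix
x = np.random.rand(8)      # illustrative input vector
y = M @ x
print(y)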

Relevance: 100.00%

Publisher:

Abstract:

This dissertation analyzes network-based localization methods, with emphasis on radio-frequency signature correlation methods (DCM - Database Correlation Methods). Network-based methods require no modification of the mobile stations (MS) and are therefore able to estimate the position of legacy MS, i.e., handsets without specific positioning support. This characteristic, combined with the high availability and accuracy of DCM techniques, makes them viable candidates for several location-based applications, and in particular for locating calls to emergency numbers - police, civil defense, fire department, etc. - originated from cellular phones. Two techniques for reducing the average time to produce a position estimate are formulated: deterministic filtering and optimized search using genetic algorithms. A modification is made to the evaluation functions used in DCM methods, inserting a factor that represents the inaccuracy intrinsic to the signal-level measurements performed by the MS. The proposed modifications are evaluated experimentally in second- and third-generation cellular networks in urban and suburban environments, as well as in wireless local area networks in an indoor environment. The feasibility of using correlation databases (CDB - Correlation Database) built from propagation modeling is analyzed, along with the effect of calibrating empirical propagation models on the accuracy of DCM methods. One of the proposed DCM methods, using a calibrated CDB, outperformed several other DCM methods published in the literature, achieving in urban areas the accuracy required of network-based methods by the FCC (Federal Communications Commission) regulations for the E911 (Enhanced 911) service.
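
A minimal sketch of the core DCM step, under our own simplifying assumptions (plain Euclidean matching; the dissertation's evaluation function additionally includes a factor for the intrinsic inaccuracy of the MS signal-level measurements): each database entry holds a georeferenced RSS fingerprint, and the position estimate is the location whose fingerprint best matches the measured vector.

import math

# Hypothetical correlation database (CDB): (x, y) position -> RSS fingerprint,
# one received-signal-strength value per base station, in dBm (values illustrative).
cdb = {
    (100.0, 200.0): [-71.0, -85.0, -90.0],
    (120.0, 205.0): [-75.0, -80.0, -95.0],
    (140.0, 230.0): [-82.0, -78.0, -88.0],
}

def locate(measured_rss, cdb):
    # Basic DCM matching: return the CDB location whose fingerprint has the
    # smallest Euclidean distance to the measured RSS vector.
    def dist(fp):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(measured_rss, fp)))
    return min(cdb, key=lambda pos: dist(cdb[pos]))

print(locate([-73.0, -83.0, -91.0], cdb))   # -> (100.0, 200.0)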

Relevance: 100.00%

Publisher:

Abstract:

A combination of multilevel coding schemes and simple two-channel wavelength division multiplexing (WDM) at 1300 and 1550 nm was used to transmit an aggregate of 10 Gbit/s over 300 m of multimode fiber typical of that installed in current local area networks (LANs). It was shown that this technique could be a simple solution for achieving 10 Gigabit Ethernet links over installed multimode fiber building backbones.
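
As a rough capacity check (the split below is our assumption; the paper may partition the aggregate differently): two wavelengths carrying 5 Gbit/s each give the 10 Gbit/s aggregate, and a four-level code (2 bits per symbol) on each wavelength then requires only a 2.5 GBd symbol rate, which is what relaxes the bandwidth demanded of the installed multimode fiber.

    10 Gbit/s aggregate = 2 wavelengths x 5 Gbit/s per wavelength
    5 Gbit/s at 2 bits/symbol -> 2.5 GBd per wavelength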

Relevance: 100.00%

Publisher:

Abstract:

Effective data communications between the project site and the decision-making office can be critical for the success of a construction project. It allows convenient access to centrally stored information and allows centrally located decision makers to remotely monitor the site and collect data in real time. However, high-bandwidth, flexible data communication networks, such as wired local area networks, can often be time-consuming and costly to deploy for such purposes, especially when project sites (dams, highways, etc.) are located in rural, undeveloped areas where networking infrastructure is not available. In such construction sites, wireless networking could reliably link the construction site and the decision-making office. This paper presents a case study on long-distance, site-to-office wireless data communications. The purpose was to investigate the capability of wireless technology in exchanging construction data in a fast and efficient manner and in allowing site personnel to interact and share knowledge and data with the office staff. This study took place at the University of Michigan’s campus, where performance, reliability, and cost/benefit tests were performed. The indoor and outdoor tests performed demonstrated the suitability of this technology for office-site data communications and exposed the need for more research to further improve the reliability and data handling of this technology.

Relevance: 100.00%

Publisher:

Abstract:

The demand for high-speed optical links within local-area networks and storage-area networks continues to grow rapidly, with standards under development that demand single-wavelength solutions at data rates of 30 Gb/s and beyond. Robust low-cost schemes are required, with a particular emphasis on multimode-fibre links using optical transceivers based on vertical-cavity surface-emitting lasers. © 2012 IEEE.

Relevance: 100.00%

Publisher:

Abstract:

Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales, i.e., self-similarity. In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose size is drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment, i.e., one with bounded resources and coupling among traffic sources competing for resources, the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependency structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similar character: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degradation of performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.
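
A small sketch of the traffic-generation side of such experiments (parameters are illustrative, not those used in the paper): file sizes drawn from a Pareto distribution with tail index alpha between 1 and 2 have a finite mean but infinite variance, and for heavy-tailed ON/OFF sources the aggregate traffic is asymptotically self-similar with Hurst parameter roughly H = (3 - alpha) / 2.

import random

def pareto_file_size(rng, alpha=1.2, k=1000):
    # Pareto-distributed size: P(X > x) = (k / x)**alpha for x >= k.
    # With 1 < alpha < 2 the mean is finite (k * alpha / (alpha - 1)) but the
    # variance is infinite, so occasional huge files dominate the workload.
    return k / (rng.random() ** (1.0 / alpha))

rng = random.Random(0)
sizes = [pareto_file_size(rng) for _ in range(100000)]
print(sum(sizes) / len(sizes))   # sample mean; theoretical mean is 6000 here
print(max(sizes))                # a handful of very large transfers dominate
# With alpha = 1.2, aggregated ON/OFF traffic built from such transfers is
# expected to show a Hurst parameter near (3 - 1.2) / 2 = 0.9.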

Relevance: 100.00%

Publisher:

Abstract:

A significant impediment to deployment of multicast services is the daunting technical complexity of developing, testing and validating congestion control protocols fit for wide-area deployment. Protocols such as pgmcc and TFMCC have recently made considerable progress on the single rate case, i.e. where one dynamic reception rate is maintained for all receivers in the session. However, these protocols have limited applicability, since scaling to session sizes beyond tens of participants necessitates the use of multiple rate protocols. Unfortunately, while existing multiple rate protocols exhibit better scalability, they are both less mature than single rate protocols and suffer from high complexity. We propose a new approach to multiple rate congestion control that leverages proven single rate congestion control methods by orchestrating an ensemble of independently controlled single rate sessions. We describe SMCC, a new multiple rate equation-based congestion control algorithm for layered multicast sessions that employs TFMCC as the primary underlying control mechanism for each layer. SMCC combines the benefits of TFMCC (smooth rate control, equation-based TCP friendliness) with the scalability and flexibility of multiple rates to provide a sound multiple rate multicast congestion control policy.
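
For context, TFMCC (and hence each layer in SMCC) paces the sending rate with the TCP throughput equation also used by TFRC; in its common form with one packet acknowledged per ACK (s the packet size, R the round-trip time, p the loss event rate, t_RTO approximately 4R), the target rate is

    X = s / ( R * \sqrt{2p/3} + t_RTO * (3 * \sqrt{3p/8}) * p * (1 + 32 p^2) ),

and in single rate TFMCC the sender follows the rate reported by the current limiting (slowest) receiver.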

Relevance: 100.00%

Publisher:

Abstract:

The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources goes down significantly after 2 from the perspective of interface, node, link, and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links, and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
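
The marginal-utility computation itself is straightforward to express; a sketch under our own assumptions (hypothetical path data, not the paper's 8-source measurement set) counts how many previously unseen links each additional source contributes:

# paths_by_source[s] is a list of traceroute-style paths (tuples of node IDs)
# observed from source s; the data below is purely illustrative.
paths_by_source = {
    "src1": [("a", "b", "c"), ("a", "b", "d")],
    "src2": [("e", "b", "c"), ("a", "b", "c")],
    "src3": [("e", "b", "d")],
}

def marginal_link_utility(paths_by_source):
    seen, gains = set(), []
    for src, paths in paths_by_source.items():
        links = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
        gains.append((src, len(links - seen)))   # links this source adds
        seen |= links
    return gains

print(marginal_link_utility(paths_by_source))
# -> [('src1', 3), ('src2', 1), ('src3', 0)]: diminishing returns as sources are added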