89 results for network performance

at Indian Institute of Science - Bangalore - India


Relevance:

70.00%

Publisher:

Abstract:

In this paper, an optical code-division multiple-access (O-CDMA) packet network, which offers inherent security in access networks, is considered. Two types of random access protocols are proposed for packet transmission: in protocol 1, all distinct codes are used, and in protocol 2, distinct codes as well as shifted versions of all these codes are used. O-CDMA network performance is analyzed using one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of using 2-D codes instead of 1-D codes is a reduction in the errors due to multiple-access interference among different users. Both a correlation receiver and a chip-level receiver are considered in the analysis. Using an analytical model, we compute and compare the packet-success probability and throughput for OOC and SPR codes in an O-CDMA network; the analysis shows improved performance with SPR codes compared to OOC codes.
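
To make the packet-success computation concrete, the following Python sketch uses a standard, simplified set of assumptions: 1-D OOCs with unit cross-correlation, a threshold (correlation) receiver, the common chip-asynchronous hit-probability approximation q = w^2/(2F), and independent bit errors within a packet. The parameters K (simultaneous users), w (code weight), F (code length) and L (packet length in bits) are illustrative; the paper's protocol-level analysis and the 2-D SPR case are not reproduced here.

import math

def hit_prob(w, F):
    # Probability that one interfering OOC user causes a "hit" at the desired
    # user's correlator (chip-asynchronous approximation, unit cross-correlation).
    return w * w / (2.0 * F)

def bit_error_prob(K, w, F, threshold=None):
    # Threshold (correlation) receiver: an error can occur only when at least
    # `threshold` of the K-1 interferers hit the desired codeword; the factor 1/2
    # reflects that only transmitted zeros can be corrupted in on-off keying.
    th = w if threshold is None else threshold
    q = hit_prob(w, F)
    return 0.5 * sum(math.comb(K - 1, i) * q ** i * (1 - q) ** (K - 1 - i)
                     for i in range(th, K))

def packet_success_prob(K, w, F, L):
    # A packet succeeds if all L bits are received correctly (independence assumed).
    return (1.0 - bit_error_prob(K, w, F)) ** L

print(packet_success_prob(K=10, w=4, F=1000, L=1024))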

Relevance:

70.00%

Publisher:

Abstract:

In the context of wireless sensor networks, we are motivated by the design of a tree network spanning a set of source nodes that generate packets, a set of additional relay nodes that only forward packets from the sources, and a data sink. We assume that the paths from the sources to the sink have bounded hop count, that the nodes use the IEEE 802.15.4 CSMA/CA for medium access control, and that there are no hidden terminals. In this setting, starting with a set of simple fixed-point equations, we derive explicit conditions on the packet generation rates at the sources under which the tree network approximately provides certain quality of service (QoS), such as end-to-end delivery probability and mean delay. The structure of our conditions provides insight into the dependence of the network performance on the arrival rate vector and the topological properties of the tree network. Our numerical experiments suggest that our approximations are able to capture a significant part of the QoS-aware throughput region (of a tree network), which is adequate for many sensor network applications. Furthermore, for the special case of equal arrival rates, default backoff parameters, and a range of target QoS values, we show that among all path-length-bounded trees (spanning a given set of sources and the data sink) that meet the conditions derived in the paper, a shortest path tree achieves the maximum throughput. (C) 2015 Elsevier B.V. All rights reserved.
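
As a small illustration of the final observation, the Python sketch below builds a shortest path tree rooted at the data sink by breadth-first search and checks the hop-count bound. The topology, the set of sources and the hop bound are illustrative assumptions; the rate conditions derived from the fixed-point equations are not reproduced here.

from collections import deque

def shortest_path_tree(adj, sink, sources, hop_bound):
    # BFS from the sink assigns every reachable node a minimum-hop parent,
    # so the parent pointers form a shortest path tree rooted at the sink.
    parent, dist = {sink: None}, {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v], parent[v] = dist[u] + 1, u
                queue.append(v)
    # Reject the tree if any source violates the hop-count bound.
    if any(s not in dist or dist[s] > hop_bound for s in sources):
        return None
    return parent

# Toy topology: node 0 is the sink, nodes 3 and 4 are sources, 1 and 2 are relays.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}
print(shortest_path_tree(adj, sink=0, sources=[3, 4], hop_bound=2))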

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a new strategy for load distribution in a single-level tree network whose processors may or may not be equipped with front-ends. The load is distributed in more than one installment, in an optimal manner, to minimize the processing time. This is a departure from, and an improvement over, earlier studies in which the load is distributed in only one installment. Recursive equations for the general case, and their closed-form solutions for a special case in which the network has identical processors and identical links, are derived. An asymptotic analysis of the network performance with respect to the number of processors and the number of installments is carried out. The results are also discussed in terms of practical issues such as the trade-off between the number of processors and the number of installments.
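
For reference, the following Python sketch computes the optimal load fractions for the classical single-installment baseline on a single-level tree with a front-end-equipped root and m identical children. The notation (w and z for processor and link parameters, Tcp and Tcm for unit computation and communication times) and the front-end assumption are illustrative; the paper's multi-installment recursion is not reproduced here.

def single_installment_fractions(m, w, z, Tcp, Tcm):
    # Single-level tree, root with a front-end, m identical children, ONE installment.
    # The optimal distribution makes all processors stop computing at the same instant:
    #   alpha_i * w * Tcp = alpha_{i+1} * (z * Tcm + w * Tcp),  i = 1, ..., m-1
    #   alpha_0 * w * Tcp = alpha_1 * (z * Tcm + w * Tcp)
    r = (w * Tcp) / (z * Tcm + w * Tcp)              # alpha_{i+1} = r * alpha_i
    unnorm = [1.0 / r] + [r ** i for i in range(m)]  # alpha_0, alpha_1, ..., alpha_m
    total = sum(unnorm)
    return [a / total for a in unnorm]

# Four identical children with unit speeds: the fractions decrease geometrically.
print(single_installment_fractions(4, w=1.0, z=1.0, Tcp=1.0, Tcm=1.0))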

Relevance:

60.00%

Publisher:

Abstract:

In this paper, an optical code-division multiple-access (O-CDMA) packet network, which offers inherent security in access networks, is considered. Two types of random access protocols are proposed for packet transmission: in protocol 1, all distinct codes are used, and in protocol 2, distinct codes as well as shifted versions of all these codes are used. O-CDMA network performance is analyzed using one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of using 2-D codes instead of 1-D codes is a reduction in the errors due to multiple-access interference among different users. SPR codes are chosen because they have nearly ideal correlation properties. A correlation receiver is considered in the analysis. Using an analytical model, we compare the performance of OOC and SPR codes in O-CDMA networks, computing the packet-success probability and throughput for both types of codes. The analysis shows improved performance with SPR codes compared to OOC codes.

Relevance:

60.00%

Publisher:

Abstract:

In this paper, an optical code-division multiple-access (O-CDMA) packet network is considered. Two types of random access protocols are proposed for packet transmission: in protocol 1, all distinct codes are used, and in protocol 2, distinct codes as well as shifted versions of all these codes are used. O-CDMA network performance is analyzed using one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of using 2-D codes instead of 1-D codes is a reduction in the errors due to multiple-access interference among different users. A correlation receiver is considered in the analysis. Using an analytical model, we compute and compare the packet-success probability for 1-D and 2-D codes in an O-CDMA network; the analysis shows improved performance with 2-D codes compared to 1-D codes.

Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of finding optimal energy sharing policies that maximize the network performance of a system comprising multiple sensor nodes and a single energy harvesting (EH) source. Sensor nodes periodically sense the random field and generate data, which is stored in the corresponding data queues. The EH source harnesses energy from ambient energy sources, and the generated energy is stored in an energy buffer. Sensor nodes receive energy for data transmission from the EH source, which has to share the stored energy efficiently among the nodes to minimize the long-run average delay in data transmission. We formulate the problem of energy sharing between the nodes in the framework of average-cost infinite-horizon Markov decision processes (MDPs). We develop efficient energy sharing algorithms, namely Q-learning algorithms with exploration mechanisms based on the epsilon-greedy method as well as the upper confidence bound (UCB). We extend these algorithms by incorporating state and action space aggregation to tackle the state-action space explosion in the MDP. We also develop a cross-entropy-based method that incorporates policy parameterization to find near-optimal energy sharing policies. Through simulations, we show that our algorithms yield energy sharing policies that outperform the heuristic greedy method.
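
The sketch below illustrates the flavour of the learning approach on a toy two-node version of the problem: a tabular Q-learning agent with epsilon-greedy exploration decides how to split the buffered energy between the nodes so as to keep the queue backlog (a delay proxy) small. The state/action/cost structure, the arrival and harvesting probabilities, and the use of a discounted rather than average cost are all illustrative assumptions; the UCB exploration, state-action aggregation and cross-entropy variants are not shown.

import random
from collections import defaultdict

N_NODES, E_MAX, Q_MAX = 2, 4, 5   # toy sizes: energy buffer and per-node data queues

def step(state, action, rng):
    # One slot: granted energy lets nodes transmit, then arrivals and harvesting occur.
    energy, queues = state[0], list(state[1:])
    for i in range(N_NODES):
        queues[i] -= min(action[i], queues[i])                                # transmissions
        queues[i] = min(Q_MAX, queues[i] + (1 if rng.random() < 0.4 else 0))  # arrivals
    energy = min(E_MAX, energy - sum(action) + (1 if rng.random() < 0.6 else 0))
    return (energy, *queues), sum(queues)        # cost = total backlog (delay proxy)

def actions(state):
    # All ways of granting at most the currently available energy to the two nodes.
    e = state[0]
    return [(a, b) for a in range(e + 1) for b in range(e + 1 - a)]

def q_learning(episodes=200, horizon=200, lr=0.1, gamma=0.98, eps=0.1, seed=0):
    rng, Q = random.Random(seed), defaultdict(float)
    for _ in range(episodes):
        s = (E_MAX, 0, 0)
        for _ in range(horizon):
            acts = actions(s)
            # epsilon-greedy exploration over the feasible energy allocations
            a = rng.choice(acts) if rng.random() < eps else min(acts, key=lambda x: Q[(s, x)])
            s2, cost = step(s, a, rng)
            best_next = min(Q[(s2, x)] for x in actions(s2))
            Q[(s, a)] += lr * (cost + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
s = (E_MAX, 3, 1)
print(min(actions(s), key=lambda x: Q[(s, x)]))   # learned allocation for an example state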

Relevance:

60.00%

Publisher:

Abstract:

Present-day networks use proactive networking to adapt to dynamic scenarios. The use of a cognition technique based on the Observe, Orient, Decide and Act (OODA) loop is proposed to construct proactive networks. Network performance degradation during knowledge acquisition and in the presence of malicious nodes is a known problem. A continuous-time dynamic neural network is considered to achieve cognition. The variance in the service rates of user nodes is used to detect malicious activity in heterogeneous networks. Improved malicious-node detection rates are demonstrated through the experimental results presented in this paper. (C) 2015 The Authors. Published by Elsevier B.V.
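
The following Python sketch shows only the service-rate-variance idea in isolation: nodes whose observed service-rate variance is an outlier relative to the other nodes are flagged. The observation window and the z-score threshold are illustrative assumptions; the continuous-time dynamic neural network used for cognition is not modelled here.

import statistics

def flag_suspect_nodes(service_rates, z_thresh=3.0):
    # service_rates: node_id -> list of service rates observed over a window.
    # A node is flagged when the variance of its rates lies more than
    # z_thresh standard deviations above the mean variance across nodes.
    variances = {n: statistics.pvariance(r) for n, r in service_rates.items()}
    vals = list(variances.values())
    mu, sigma = statistics.mean(vals), statistics.pstdev(vals)
    if sigma == 0:
        return []
    return [n for n, v in variances.items() if (v - mu) / sigma > z_thresh]

# Node "c" shows an erratic service rate compared with the others.
obs = {"a": [5.0, 5.1, 4.9, 5.0], "b": [4.8, 5.0, 5.2, 5.1], "c": [1.0, 9.0, 0.5, 8.5]}
print(flag_suspect_nodes(obs, z_thresh=1.0))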

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a power, latency and throughput trade-off study on NoCs obtained by varying microarchitectural (e.g. pipelining) and circuit-level (e.g. frequency and voltage) parameters. We change the pipelining depth, operating frequency and supply voltage for three example NoCs: a 16-node 2D torus, a tree network and a reduced 2D torus. We use an in-house NoC exploration framework capable of topology generation and comparison, using parameterized models of routers and links developed in SystemC. The framework utilizes interconnect power and delay models from a low-level modelling tool called Intacte [1]. We find that increased pipelining can actually reduce latency. We also find that there exists an optimal degree of pipelining which is the most energy efficient in terms of minimizing the energy-delay product.
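
The sketch below reproduces the methodology rather than the results: sweep supply voltage and pipeline depth under crude first-order delay and energy models, and pick the design point minimizing the energy-delay product. The delay and energy expressions, the threshold voltage, and the swept values are all toy assumptions and stand in for the SystemC models and the Intacte interconnect models used in the paper.

import itertools

V_TH = 0.35   # assumed threshold voltage (V) for the toy delay model

def cycle_time(vdd, depth):
    # Splitting the logic over `depth` pipeline stages shortens the critical path,
    # but every extra stage adds a fixed register (flop) overhead.
    logic = 1.0 / ((vdd - V_TH) ** 1.3)      # alpha-power-law style gate delay
    return logic / depth + 0.08 * logic

def latency_energy(vdd, depth, hops=4, flits=1000):
    t = cycle_time(vdd, depth)
    latency = hops * depth * t                # head-flit pipeline latency, no queueing
    energy = flits * hops * depth * vdd ** 2  # dynamic energy only, normalized capacitance
    return latency, energy

# Sweep voltage and pipelining depth; report the minimum energy-delay-product point.
best = min(itertools.product([0.7, 0.9, 1.1], range(1, 6)),
           key=lambda p: latency_energy(*p)[0] * latency_energy(*p)[1])
print("min-EDP design point (Vdd, pipeline depth):", best)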

Relevance:

40.00%

Publisher:

Abstract:

In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory in a way that lets protocol processing be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk-data handling time spent during IPC is reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show a very impressive gain of 12% to 21% in throughput for static file serving and a 1.6 to 4 times gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using UNIX APIs.
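
The sketch below illustrates, in ordinary user-space Python, the flavour of points (2) and (3): the bulk data lives exactly once in a shared-memory segment that plays the role of a unified cache, and only a small (segment name, offset, length) descriptor crosses the IPC boundary. It is not the NABC system-call interface itself; the payload, segment layout and process structure are illustrative.

from multiprocessing import Process, Queue, shared_memory

def sender(meta_q):
    # Receives only a tiny descriptor and attaches to the shared cache by name.
    name, off, length = meta_q.get()
    shm = shared_memory.SharedMemory(name=name)
    view = shm.buf[off:off + length]      # zero-copy view into the cached object
    print(bytes(view))                    # a real server would hand this to the socket layer
    del view
    shm.close()

if __name__ == "__main__":
    payload = b"<html>hello</html>"
    cache = shared_memory.SharedMemory(create=True, size=len(payload))
    cache.buf[:len(payload)] = payload    # bulk data is placed once in the shared cache
    q = Queue()
    worker = Process(target=sender, args=(q,))
    worker.start()
    q.put((cache.name, 0, len(payload)))  # only metadata is passed between processes
    worker.join()
    cache.close()
    cache.unlink()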

Relevance:

40.00%

Publisher:

Abstract:

Digest caches have been proposed as an effective method to speed up packet classification in network processors. In this paper, we show that the presence of a large number of small flows and a few large flows in the Internet has an adverse impact on the performance of these digest caches. In the Internet, a few large flows transfer a majority of the packets, whereas the contribution of several small flows to the total number of packets transferred is small. In such a scenario, the LRU cache replacement policy, which gives maximum priority to the most recently accessed digest, tends to evict digests belonging to the few large flows. We propose a new cache management algorithm called Saturating Priority (SP) which aims at improving the performance of digest caches in network processors by exploiting the disparity between the number of flows and the number of packets transferred. Our experimental results demonstrate that SP performs better than the widely used LRU cache replacement policy in size-constrained caches. Further, we characterize the misses experienced by flow identifiers in digest caches.
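
The effect described above can be observed with the small Python simulation below: a digest cache managed by LRU serves a trace in which many one-packet flows are interleaved with a few flows carrying many packets, and the hit rate achieved by the large flows is reported for several cache sizes. The trace composition and the cache sizes are illustrative assumptions, and the Saturating Priority algorithm itself is not implemented here.

import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.entries = capacity, OrderedDict()

    def access(self, key):
        # Returns True on a hit; on a miss the key is inserted, evicting the LRU entry.
        if key in self.entries:
            self.entries.move_to_end(key)
            return True
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)
        self.entries[key] = True
        return False

def large_flow_hit_rate(cache_size, n_small=20000, n_large=10, pkts_per_large=2000, seed=1):
    # Many one-packet flows interleaved with a few flows that carry many packets each.
    rng = random.Random(seed)
    trace = [("small", i) for i in range(n_small)]
    trace += [("large", i % n_large) for i in range(n_large * pkts_per_large)]
    rng.shuffle(trace)
    cache, hits, large_pkts = LRUCache(cache_size), 0, 0
    for flow_id in trace:
        hit = cache.access(flow_id)
        if flow_id[0] == "large":
            large_pkts += 1
            hits += hit
    return hits / large_pkts

for size in (4, 16, 64, 256):
    print("%4d entries -> large-flow hit rate %.3f" % (size, large_flow_hit_rate(size)))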

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes a Petri net model for a commercial network processor (the Intel IXP architecture), which is a multithreaded multiprocessor architecture. We consider and model three different applications, viz., IPv4 forwarding, network address translation and IP security, running on the IXP 2400/2850. A salient feature of the Petri net model is its ability to model the application, the architecture and their interaction in great detail. The model is validated using the Intel proprietary tool (SDK 3.51 for the IXP architecture) over a range of configurations. We conduct a detailed performance evaluation, identify the bottleneck resource, and propose and evaluate a few architectural extensions in detail.
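
For readers unfamiliar with the formalism, the toy Python sketch below plays the basic Petri net "token game" on a two-transition net in which incoming packets contend for a small pool of hardware threads. It is purely illustrative: the places, transitions and token counts are assumptions, and the paper's detailed model of the IXP 2400/2850 pipeline is not reproduced.

import random

class PetriNet:
    # A transition is enabled when every input place holds enough tokens;
    # firing it consumes the input tokens and deposits the output tokens.
    def __init__(self, marking):
        self.marking = dict(marking)        # place -> token count
        self.transitions = []               # (name, inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions.append((name, inputs, outputs))

    def enabled(self):
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) >= n for p, n in t[1].items())]

    def fire(self, t):
        for p, n in t[1].items():
            self.marking[p] -= n
        for p, n in t[2].items():
            self.marking[p] = self.marking.get(p, 0) + n

    def run(self, steps, seed=0):
        rng = random.Random(seed)
        for _ in range(steps):
            choices = self.enabled()
            if not choices:
                break
            self.fire(rng.choice(choices))

# Five packets contend for two hardware threads before being forwarded.
net = PetriNet({"pkt_in": 5, "threads": 2})
net.add_transition("start_processing", {"pkt_in": 1, "threads": 1}, {"busy": 1})
net.add_transition("finish_processing", {"busy": 1}, {"threads": 1, "pkt_out": 1})
net.run(50)
print(net.marking)   # all packets end up in pkt_out and both threads are released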

Relevance:

40.00%

Publisher:

Abstract:

This paper deals with the solution to the problem of multisensor data fusion for a single-target scenario as detected by an airborne track-while-scan radar. The details of a neural network implementation, various training algorithms based on standard backpropagation, and the results of training and testing the neural network are presented. The promising capabilities of the RPROP algorithm for multisensor data fusion over various parameters are shown in comparison with other adaptive techniques.
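
Since the abstract singles out RPROP, the sketch below shows the core of the resilient-backpropagation update rule (the iRprop- variant) on a toy quadratic objective: each weight keeps its own step size, which grows while the gradient keeps its sign and shrinks when the sign flips. The step-size constants and the toy objective are assumptions; the radar data-fusion network itself is not reproduced.

import numpy as np

def rprop_step(w, grad, state, eta_plus=1.2, eta_minus=0.5, d_min=1e-6, d_max=50.0):
    # One iRprop- update: only the SIGN of the gradient is used, with per-weight
    # step sizes adapted according to whether the gradient sign changed.
    delta = state.setdefault("delta", np.full_like(w, 0.1))
    prev = state.setdefault("prev_grad", np.zeros_like(w))
    same = prev * grad > 0
    flip = prev * grad < 0
    delta[same] = np.minimum(delta[same] * eta_plus, d_max)
    delta[flip] = np.maximum(delta[flip] * eta_minus, d_min)
    grad = grad.copy()
    grad[flip] = 0.0                     # iRprop-: skip the update right after a sign flip
    state["prev_grad"] = grad
    return w - np.sign(grad) * delta

# Toy objective f(w) = ||w||^2 with gradient 2w.
w, state = np.array([3.0, -2.0]), {}
for _ in range(60):
    w = rprop_step(w, 2 * w, state)
print(w)   # approaches the minimiser at [0, 0]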

Relevance:

40.00%

Publisher:

Abstract:

We study the performance of cognitive (secondary) users in a cognitive radio network, where the secondary users use a channel whenever the primary users are not using it. The usage of the channel by the primary users is modelled by an ON-OFF renewal process. The cognitive users may be transmitting data over TCP connections as well as voice traffic, with the voice traffic given priority over the data traffic. We theoretically compute the mean delay of TCP and voice packets as well as the mean throughput of the different TCP connections, and compare the theoretical results with simulations.
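
As a minimal companion to the channel model, the Python sketch below simulates the primary users' ON-OFF renewal process and estimates the long-run fraction of time the channel is left free for the secondary users. Exponentially distributed ON and OFF periods and the particular means are illustrative assumptions; the TCP and voice queueing analysis is not reproduced here.

import random

def fraction_channel_free(mean_on=1.0, mean_off=2.0, horizon=100000.0, seed=0):
    # Alternate ON (primary busy) and OFF (channel free) periods and accumulate
    # the total OFF time observed up to the simulation horizon.
    rng = random.Random(seed)
    t, free = 0.0, 0.0
    while t < horizon:
        on = rng.expovariate(1.0 / mean_on)
        off = rng.expovariate(1.0 / mean_off)
        free += min(off, max(0.0, horizon - t - on))
        t += on + off
    return free / horizon

print(fraction_channel_free())   # close to mean_off / (mean_on + mean_off) = 2/3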

Relevance:

40.00%

Publisher:

Abstract:

The performance analysis of an adaptive physical-layer network-coded two-way relaying scenario is presented, which employs two phases: a multiple-access (MA) phase and a broadcast (BC) phase. The deep channel-fade conditions that occur at the relay, referred to as singular fade states, fall into the following two classes: (i) removable and (ii) non-removable singular fade states. With every singular fade state, we associate an error probability that the relay transmits a wrong network-coded symbol during the BC phase. It is shown that adaptive network coding provides a coding gain over fixed network coding by making the error probabilities associated with the removable singular fade states that contribute to the average Symbol Error Rate (SER) fall as SNR^-2 instead of SNR^-1. A high-SNR upper bound on the average end-to-end SER for the adaptive network coding scheme is derived for a Rician fading scenario, and is found to be tight through simulations. Specifically, it is shown that for the adaptive network coding scheme, the probability that the relay node transmits a wrong network-coded symbol is upper-bounded by twice the average SER of a point-to-point fading channel at high SNR. Also, it is shown that in a Rician fading scenario, it suffices to remove the effect of only those singular fade states which contribute dominantly to the average SER.

Relevance:

40.00%

Publisher:

Abstract:

The analysis of modulation schemes for the physical-layer network-coded two-way relaying scenario is presented, which employs two phases: a multiple-access (MA) phase and a broadcast (BC) phase. Depending on the signal set used at the end nodes, the minimum distance of the effective constellation seen at the relay becomes zero for a finite number of channel fade states, referred to as the singular fade states. The singular fade states fall into the following two classes: (i) those which are caused by channel outage and whose harmful effect cannot be mitigated by adaptive network coding, called the non-removable singular fade states, and (ii) those which occur due to the choice of the signal set and whose harmful effects can be removed, called the removable singular fade states. In this paper, we derive an upper bound on the average end-to-end Symbol Error Rate (SER), with and without adaptive network coding at the relay, for a Rician fading scenario. It is shown that without adaptive network coding, at high Signal-to-Noise Ratio (SNR), the contribution to the end-to-end SER comes from the following error events, which fall as SNR^-1: the error events associated with the removable and non-removable singular fade states and the error event during the BC phase. In contrast, for the adaptive network coding scheme, the error events associated with the removable singular fade states fall as SNR^-2, thereby providing a coding gain over the case when adaptive network coding is not used. Also, it is shown that for a Rician fading channel, the error during the MA phase dominates over the error during the BC phase. Hence, adaptive network coding, which improves the performance during the MA phase, provides more gain in a Rician fading scenario than in a Rayleigh fading scenario. Furthermore, it is shown that for large Rician factors, among those removable singular fade states which have the same magnitude, those which have the least absolute value of the phase angle alone contribute dominantly to the end-to-end SER, and it is sufficient to remove the effect of only such singular fade states.
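
As a concrete illustration of the notion of singular fade states, the Python sketch below enumerates, directly from the zero-minimum-distance condition stated above, the fade coefficients at which two distinct input pairs produce the same point of the effective constellation seen at the relay during the MA phase. The 4-PSK signal set is an illustrative choice; the SER bounds and the adaptive network-coding maps of the paper are not reproduced.

import cmath
from itertools import product

def singular_fade_states(signal_set, tol=1e-9):
    # The relay observes x_A + h * x_B. Two distinct pairs (x_A, x_B) and
    # (x_A', x_B') collide exactly when h = -(x_A - x_A') / (x_B - x_B'),
    # with x_B != x_B'; these values of h are the singular fade states.
    states = []
    for xa, xa2, xb, xb2 in product(signal_set, repeat=4):
        if abs(xb - xb2) < tol:
            continue
        h = -(xa - xa2) / (xb - xb2)
        if not any(abs(h - s) < tol for s in states):
            states.append(h)
    return states

# 4-PSK used at both end nodes (illustrative signal set).
psk4 = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 4) for k in range(4)]
states = singular_fade_states(psk4)
print(len(states), "distinct singular fade states (including h = 0)")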