109 results for Multiport Network Model
Abstract:
The lifetime calculation of large, dense sensor networks with fixed energy resources and remaining residual energy has shown that, for a constant energy resource in a sensor network, the fault rate at the cluster head is network-size invariant when using the network layer with no MAC losses. Even after increasing the battery capacities in the nodes, the total lifetime does not increase beyond a maximum limit of 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic, and channel polling needs of sensor networks. Many MAC protocols have been proposed that control the channel polling of the new radios available for sensor nodes to communicate. This further reduces the communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding for a correlated data source at the single hop; (2a) estimating cluster head errors using the Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities omega_1 and omega_2 and expected error P*, the error rate is bounded by a maximum of P = 2P* for the single hop.
We study the effects of energy losses using cross-layer simulation of a large sensor network MAC setup, and the error rate that affects finding sufficient node densities for reliable multi-hop communications when node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior probability error bound is close to or higher than P >= 2P*.
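The single-hop bound quoted above, an expected error P* with a maximum error rate of P = 2P*, is the classical nearest-neighbour bound from Bayesian decision theory. As a minimal numerical sketch (the two equal-prior Gaussian class densities and their parameters below are illustrative assumptions, not taken from the paper), one can integrate both the Bayes error P* and the asymptotic nearest-neighbour error and check that the latter falls between P* and 2P*:

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_and_nn_error(mu1=0.0, mu2=2.0, sigma=1.0, lo=-8.0, hi=10.0, n=20000):
    """Numerically integrate the Bayes error P* and the asymptotic
    nearest-neighbour error for two equal-prior Gaussian classes."""
    dx = (hi - lo) / n
    p_star = 0.0   # integral of min(p1, p2): the Bayes error
    p_nn = 0.0     # integral of 2*eta*(1 - eta)*p: asymptotic NN error
    for i in range(n):
        x = lo + (i + 0.5) * dx
        p1 = 0.5 * gauss(x, mu1, sigma)   # weighted density of omega_1
        p2 = 0.5 * gauss(x, mu2, sigma)   # weighted density of omega_2
        p = p1 + p2
        eta = p1 / p                      # posterior P(omega_1 | x)
        p_star += min(p1, p2) * dx
        p_nn += 2 * eta * (1 - eta) * p * dx
    return p_star, p_nn

p_star, p_nn = bayes_and_nn_error()
```

For these example parameters, P* is about 0.159 (the tail of a unit Gaussian beyond one standard deviation), and the nearest-neighbour error lands between P* and 2P*, mirroring the bound stated in the abstract.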
Abstract:
Network processors today consist of multiple parallel processors (microengines) with support for multiple threads to exploit the packet-level parallelism inherent in network workloads. With such concurrency, packet ordering at the output of the network processor cannot be guaranteed. This paper studies the effect of concurrency in network processors on packet ordering. We use a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for an IPv4 forwarding application. Our study indicates that, in addition to the parallel processing in the network processor, the allocation scheme for the transmit buffer also adversely impacts packet ordering. In particular, our results reveal that this packet reordering results in a packet retransmission rate of up to 61%. We explore different transmit buffer allocation schemes, namely contiguous, strided, local, and global, which reduce the packet retransmission rate to 24%. We propose an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps. Further, packet sort outperforms the built-in packet ordering schemes in the IXP processor by up to 35%.
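The link between concurrent microengines and packet reordering can be illustrated with a toy queueing sketch (this is not the paper's validated Petri net model; the engine count, timings, and reordering metric below are all assumptions): packets arrive in order, each grabs whichever engine frees up first, and variable service times let later packets overtake earlier ones.

```python
import random

def reorder_fraction(n_packets=10000, n_engines=8, seed=1):
    """Toy model: in-order arrivals, each packet grabs the earliest-free
    engine, service times vary; count packets whose departure rank
    differs from their arrival rank."""
    rng = random.Random(seed)
    free_at = [0.0] * n_engines       # time at which each engine is free
    completions = []
    t = 0.0
    for pkt in range(n_packets):
        t += 0.1                      # fixed inter-arrival gap
        i = min(range(n_engines), key=lambda k: free_at[k])
        start = max(t, free_at[i])
        free_at[i] = start + rng.uniform(0.1, 1.0)   # variable service time
        completions.append((free_at[i], pkt))
    completions.sort()
    out_of_order = sum(1 for rank, (_, pkt) in enumerate(completions)
                       if pkt != rank)
    return out_of_order / n_packets

frac = reorder_fraction()
```

Even this crude model departs packets out of order for a substantial fraction of arrivals, which is why the paper's transmit buffer allocation and packet-sort schemes matter.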
Abstract:
We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated by it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single-cell operation is efficient for single-user decoding transceivers. Then, operating the dense ad hoc network described above as a single cell, we study the optimal hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Theta_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(Pbar_t) x Theta_opt, with d_opt scaling as Pbar_t^(1/eta), where Pbar_t is the available time-average transmit power and eta is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
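The scaling d_opt ~ Pbar_t^(1/eta) can be checked with a small numerical sketch under a simplified model (a normalised Shannon rate log2(1 + P/d^eta) stands in for the paper's contention and fading analysis, so all constants here are assumptions): maximising hop distance times rate over d, the optimal hop grows as the power raised to 1/eta, so scaling the power by 8 with eta = 3 should roughly double d_opt.

```python
import math

def transport_capacity(d, power, eta=3.0):
    """Bit-metres per second: hop distance times a normalised Shannon
    rate at received SNR power / d**eta (all constants set to 1)."""
    return d * math.log2(1 + power / d ** eta)

def optimal_hop(power, eta=3.0):
    """Coarse grid search for the transport-capacity-maximising hop."""
    grid = [0.01 * k for k in range(1, 2000)]
    return max(grid, key=lambda d: transport_capacity(d, power, eta))

d1 = optimal_hop(1.0)
d2 = optimal_hop(8.0)   # 8x the power: d_opt should grow by about 8**(1/3) = 2
```

The ratio d2/d1 coming out close to 2 reflects the fact that the optimal received SNR is independent of the power budget in this model, which is the mechanism behind the Pbar_t^(1/eta) scaling.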
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data. Further, we assume that users have contracts with the service provider specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. We obtain simple necessary and sufficient conditions on prices such that efficient and fair link sharing is possible. We illustrate the ideas using a simple example.
Abstract:
Workstation clusters equipped with high-performance interconnects having programmable network processors offer interesting opportunities to enhance the performance of parallel applications run on them. In this paper, we propose schemes in which certain application-level processing in parallel database query execution is performed on the network processor. We evaluate the performance of TPC-H queries executing on a high-end cluster where all tuple processing is done on the host processor, using a timed Petri net model, and find that tuple processing costs on the host processor dominate the execution time. These results are validated using a small cluster. We therefore propose four schemes in which certain tuple processing activity is offloaded to the network processor. The first two schemes offload the tuple splitting activity (the computation to identify the node on which to process a tuple), resulting in an execution time speedup of 1.09 relative to the base scheme, but with the I/O bus becoming the bottleneck resource. In the third scheme, in addition to offloading the tuple splitting activity, the disk and network interface are combined to avoid the I/O bus bottleneck, which results in speedups of up to 1.16, but with high host processor utilization. Our fourth scheme, in which the network processor also performs a part of the join operation along with the host processor, gives a speedup of 1.47 along with balanced system resource utilization. Further, we observe that the proposed schemes perform equally well even in a scaled architecture, i.e., when the number of processors is increased from 2 to 64.
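The "tuple splitting" computation offloaded in the first two schemes is essentially hash partitioning: compute, from a tuple's key, the node that will process it. A minimal sketch (the function names and the CRC-based hash are illustrative choices, not the paper's implementation):

```python
import zlib

def split_node(key, n_nodes):
    """Map a tuple's partitioning key to the cluster node that should
    process it -- the per-tuple 'splitting' computation."""
    return zlib.crc32(str(key).encode()) % n_nodes

def partition(tuples, key_index, n_nodes):
    """Group a batch of tuples by destination node, as would be done
    before forwarding them over the interconnect."""
    buckets = {n: [] for n in range(n_nodes)}
    for row in tuples:
        buckets[split_node(row[key_index], n_nodes)].append(row)
    return buckets

rows = [(i, "item%d" % i) for i in range(100)]
buckets = partition(rows, 0, 4)
```

The work is cheap per tuple but done for every tuple, which is why moving it off the host processor pays off once tuple rates are high.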
Abstract:
In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using enhanced distributed channel access (EDCA). We build upon fixed-point analysis and its performance insights. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns-2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
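The saturation attempt probabilities referred to above come from a fixed-point analysis in the style of Bianchi's saturation model. A hedged single-class sketch (the contention window W and backoff stage count m below are assumed values, and the paper itself uses a multi-class EDCA analysis rather than this simplified form):

```python
def saturation_attempt_probability(n, W=16, m=6, iters=2000):
    """Bianchi-style fixed point for n saturated nodes: tau is the
    per-slot attempt probability, p the conditional collision
    probability seen by a transmitting node."""
    tau = 0.1
    for _ in range(iters):
        p = 1 - (1 - tau) ** (n - 1)
        tau_new = (2 * (1 - 2 * p)) / ((1 - 2 * p) * (W + 1)
                                       + p * W * (1 - (2 * p) ** m))
        tau = 0.5 * tau + 0.5 * tau_new   # damped iteration
    return tau
```

As expected, the attempt probability falls as the number of contending nodes grows; the heuristic in the paper reuses such per-n values as state-dependent attempt probabilities.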
Abstract:
Protein structure networks are constructed for the identification of long-range signaling pathways in cysteinyl tRNA synthetase (CysRS). Molecular dynamics simulation trajectories of CysRS-ligand complexes were used to determine conformational ensembles in order to gain insight into the allosteric signaling paths. Communication paths between the anticodon binding region and the aminoacylation region have been identified. Extensive interaction between the helix bundle domain and the anticodon binding domain, resulting in structural rigidity in the presence of tRNA, has been detected. Based on the predicted model, six residues along the communication paths have been examined by mutations (single and double) and shown to mediate a coordinated coupling between anticodon recognition and activation of the amino acid at the active site. This study on CysRS clearly shows that specific key residues, which are involved in communication between distal sites in allosteric proteins but may be elusive in direct structure analysis, can be identified from the dynamics of protein structure networks.
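Communication paths in a protein structure network are typically shortest paths between residues in a contact graph. A minimal sketch with breadth-first search (the residue labels and contacts below are made up for illustration, not taken from CysRS):

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Breadth-first shortest path between two residues in a contact
    network given as {residue: [neighbouring residues]}."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                  # reconstruct the path
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

# Hypothetical miniature contact network (residue labels are made up):
net = {"A12": ["G45"], "G45": ["A12", "K80"],
       "K80": ["G45", "D210"], "D210": ["K80"]}
```

In practice the graph is built from dynamics-weighted contacts over the MD ensemble, but the path search itself reduces to this kind of traversal.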
Abstract:
In the present study, singular fractal functions (SFF) were used to generate stress-strain plots for quasi-brittle materials like concrete and cement mortar, and the stress-strain plot of cement mortar obtained using SFF was subsequently used for modeling the fracture process in concrete. The fracture surface of concrete is rough and irregular. It is affected by the concrete's microstructure, which is influenced by the water-cement ratio, the grade of cement, and the type of aggregate [1-4]. The macrostructural properties, such as the size and shape of the specimen, the initial notch length, and the rate of loading, also contribute to the shape of the fracture surface of concrete. It is known that concrete is a heterogeneous and quasi-brittle material containing micro-defects, and its mechanical properties relate strongly to the presence of micro-pores and micro-cracks [1-4]. The damage in concrete is believed to be mainly due to the initiation and development of micro-defects with irregular and fractal characteristics. However, repeated observations at various magnifications also reveal a variety of additional structures that fall between the `micro' and the `macro' and have not yet been described satisfactorily in a systematic manner [1-11,15-17]. The concept of singular fractal functions by Mosolov was used to generate stress-strain plots of cement concrete and cement mortar, and the stress-strain plot of cement mortar was subsequently used in a two-dimensional lattice model [28]. The two-dimensional lattice model was used to study concrete fracture by considering softening of the matrix (cement mortar). The results obtained from simulations with the lattice model show the softening behavior of concrete and agree fairly well with the experimental results. The number of fractured elements is compared with the acoustic emission (AE) hits.
The trend in the cumulative fractured beam elements in the lattice fracture simulation reasonably reflected the trend in the recorded AE measurements. In other words, the pattern in which AE hits were distributed around the notch has the same trend as that of the fractured elements around the notch, which supports the lattice model. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper proposes a Petri net model for a commercial network processor (the Intel IXP architecture), which is a multithreaded multiprocessor architecture. We consider and model three different applications, viz. IPv4 forwarding, network address translation, and IP security, running on the IXP 2400/2850. A salient feature of the Petri net model is its ability to model the application, the architecture, and their interaction in great detail. The model is validated using the Intel proprietary tool (SDK 3.51 for the IXP architecture) over a range of configurations. We conduct a detailed performance evaluation, identify the bottleneck resource, and propose and evaluate a few architectural extensions in detail.
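At its core a Petri net model is a token game: a transition fires when its input places hold enough tokens, consuming them and producing tokens on its output places. A minimal sketch (the places and transitions below are a hypothetical two-stage packet pipeline, far simpler than the validated IXP model):

```python
def enabled(marking, pre):
    """A transition is enabled when each input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical pipeline: rx buffer -> free microengine -> tx buffer
rx_to_me = ({"rx": 1, "me_free": 1}, {"me_busy": 1})
me_to_tx = ({"me_busy": 1}, {"tx": 1, "me_free": 1})

m = {"rx": 3, "me_free": 2, "me_busy": 0, "tx": 0}
while enabled(m, rx_to_me[0]) or enabled(m, me_to_tx[0]):
    if enabled(m, rx_to_me[0]):
        m = fire(m, *rx_to_me)
    else:
        m = fire(m, *me_to_tx)
```

All three packets make it from the rx place to the tx place while the two microengine tokens are repeatedly consumed and released; timed versions of the same mechanics underlie the performance numbers in the paper.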
Abstract:
This paper presents the design and implementation of a learning controller for Automatic Generation Control (AGC) in power systems, based on a reinforcement learning (RL) framework. In contrast to the recent RL scheme for AGC proposed by us, the present method permits handling power system variables such as the Area Control Error (ACE) and deviations from scheduled frequency and tie-line flows as continuous variables. (In the earlier scheme, these variables had to be quantized into finitely many levels.) The optimal control law is arrived at within the RL framework by making use of a Q-learning strategy. Since the state variables are continuous, we propose the use of Radial Basis Function (RBF) neural networks to compute the Q-values for a given input state. Since in this application we cannot provide training data appropriate for the standard supervised learning framework, a reinforcement learning algorithm is employed to train the RBF network. We also employ a novel exploration strategy, based on a Learning Automata algorithm, for generating training samples during Q-learning. The proposed scheme, in addition to being simple to implement, inherits all the attractive features of an RL scheme, such as model-independent design, flexibility in control objective specification, and robustness. Two implementations of the proposed approach are presented. Through simulation studies, the attractiveness of this approach is demonstrated.
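The RBF-based Q-value computation can be sketched as follows (the centres, width, learning rate, and scalar state are illustrative assumptions; the paper's controller takes ACE and frequency/tie-line deviations as the state): the continuous state is mapped to Gaussian features, Q is linear in those features, and each Q-learning step moves the weights along the TD error.

```python
import math

def rbf_features(state, centers, width=1.0):
    """Gaussian radial basis features for a continuous scalar state."""
    return [math.exp(-((state - c) / width) ** 2) for c in centers]

def q_value(weights, feats):
    return sum(w * f for w, f in zip(weights, feats))

def q_update(weights, feats, reward, q_next_max, q_now,
             alpha=0.1, gamma=0.95):
    """One Q-learning step: adjust weights along the TD error."""
    td_error = reward + gamma * q_next_max - q_now
    return [w + alpha * td_error * f for w, f in zip(weights, feats)]

centers = [-2.0, -1.0, 0.0, 1.0, 2.0]   # hypothetical centres over ACE
w = [0.0] * len(centers)
f = rbf_features(0.3, centers)
w = q_update(w, f, reward=1.0, q_next_max=0.0, q_now=q_value(w, f))
```

One such weight vector per action suffices; the RBF layer is what lets the controller handle ACE as a continuous variable instead of quantizing it.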
Abstract:
The development of a neural network based power system damping controller (PSDC) for a static VAr compensator (SVC), designed to enhance the damping characteristics of a power system network representing a part of the Electricity Generating Authority of Thailand (EGAT) system, is presented. The proposed stabilising controller scheme of the SVC consists of a neuro-identifier and a neuro-controller which have been developed based on a functional link network (FLN) model. A recursive online training algorithm has been utilised to train the two networks. The simulation results have been obtained under various operating conditions and disturbance cases to show that the proposed stabilising controller can provide better damping of the low frequency oscillations, as compared to the conventional controllers. The effectiveness of the proposed stabilising controller has also been compared with a conventional power system stabiliser provided in the generator excitation system.
Abstract:
This paper presents the development of a neural network based power system stabilizer (PSS) designed to enhance the damping characteristics of a practical power system network representing a part of the Electricity Generating Authority of Thailand (EGAT) system. The proposed PSS consists of a neuro-identifier and a neuro-controller which have been developed based on the functional link network (FLN) model. A recursive on-line training algorithm has been utilized to train the two neural networks. Simulation results have been obtained under various operating conditions and severe disturbance cases, which show that the proposed neuro-PSS can provide better damping of the local as well as inter-area modes of oscillation as compared to a conventional PSS.
Abstract:
The flux tube model offers a pictorial description of what happens during the deconfinement phase transition in QCD. The three-point vertices of a flux tube network lead to the formation of baryons upon hadronization. Therefore, correlations in the baryon number distribution at the last scattering surface are related to the preceding pattern of the flux tube vertices in the quark-gluon plasma, and provide a signature of the nearby deconfinement phase transition. I discuss the nature of the expected signal, and how to extract it from the experimental data for heavy ion collisions at RHIC and LHC.
Abstract:
Causal relationships between observed groundwater levels in a semi-arid sub-basin of the Kabini River basin (Karnataka state, India) are investigated in this study. A Vector Auto Regressive (VAR) model is used for this purpose. Its structure is built on an upstream/downstream interaction network based on observed hydro-physical properties. Exogenous climatic forcing is used as an input, based on cumulated rainfall departure. Optimal models are obtained through a trial-and-error approach and are used as a proxy of the dynamics to derive causal networks. This proves to be an interesting tool for analysing the causal relationships within the basin. The causal network reveals three main regions: the north-eastern part of the Gundal basin is closely coupled to the outlet dynamics; the north-western part is mainly controlled by the climatic forcing and only marginally linked to the outlet dynamics; finally, the upper part of the basin acts as a forcing on, rather than a coupling with, the lower part of the basin, allowing a separate analysis of this local behaviour. The analysis also reveals differential time scales at work inside the basin when comparing upstream-oriented with downstream-oriented causalities. In the upper part of the basin, time delays are close to 2 months in the upward direction and lower than 1 month in the downward direction. These time scales are likely to be good indicators of the hydraulic response time of the basin, a parameter usually difficult to estimate in practice. This suggests that, at the sub-basin scale, intra-annual time scales would be the more relevant scales for analysing or modelling tropical basin dynamics in the hard rock (granitic and gneissic) aquifers ubiquitous in south India. (c) 2012 Elsevier B.V. All rights reserved.
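A vector autoregressive structure of the kind used here can be sketched on two series, an upstream level x driving a downstream level y (the code below is a hand-solved VAR(1) least-squares fit on synthetic data; the coefficients and noise level are made up, not taken from the Gundal basin records):

```python
import random

def var1_fit(x, y):
    """Least-squares fit of y[t] = a*y[t-1] + b*x[t-1] (no intercept),
    solving the 2x2 normal equations by hand."""
    syy = sxx = sxy = sy1y = sx1y = 0.0
    for t in range(1, len(y)):
        y1, x1 = y[t - 1], x[t - 1]
        syy += y1 * y1
        sxx += x1 * x1
        sxy += x1 * y1
        sy1y += y1 * y[t]
        sx1y += x1 * y[t]
    det = syy * sxx - sxy * sxy
    a = (sy1y * sxx - sx1y * sxy) / det
    b = (syy * sx1y - sxy * sy1y) / det
    return a, b

# Synthetic data: downstream level y driven by upstream level x
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(500)]
y = [0.0]
for t in range(1, 500):
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.05 * rng.gauss(0, 1))
a, b = var1_fit(x, y)
```

A clearly non-zero cross coefficient b is the kind of evidence the causal network in the study is built from, with the lag of the significant coefficients giving the month-scale time delays reported above.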