244 results for TVA Network
Abstract:
In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory so that protocol processing can be done without data copies, (2) keeping a unified cache of the data in the kernel and ensuring safe access to it by the various processes and the kernel, and (3) passing only the necessary metadata between processes so that the time spent on bulk data handling during IPC is reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show a gain of 12% to 21% in throughput for static file serving and a 1.6x to 4x gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
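To make the third idea concrete, here is a minimal sketch, not the NABC implementation itself: only metadata (a hypothetical segment name, offset, and length) crosses the IPC channel while the bulk data sits in a single shared cache. Python's shared_memory and Pipe stand in for the paper's custom system calls, and the segment name "webcache" is invented.

```python
# Sketch: pass only metadata over IPC; the payload lives once in a shared cache segment.
from multiprocessing import Process, Pipe, shared_memory

def worker(conn):
    meta = conn.recv()                       # only metadata crosses the IPC channel
    shm = shared_memory.SharedMemory(name=meta["segment"])
    payload = bytes(shm.buf[meta["offset"]:meta["offset"] + meta["length"]])
    print("worker read", len(payload), "bytes without a bulk copy over IPC")
    shm.close()

if __name__ == "__main__":
    data = b"<html>cached response body</html>"
    shm = shared_memory.SharedMemory(create=True, size=len(data), name="webcache")
    shm.buf[:len(data)] = data               # the single in-memory copy of the content
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send({"segment": "webcache", "offset": 0, "length": len(data)})
    p.join()
    shm.close()
    shm.unlink()
```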
Abstract:
This paper presents the capability of neural networks as a computational tool for solving constrained optimization problems arising in routing algorithms for present-day communication networks. The application of neural networks to the optimum routing problem in packet-switched computer networks, where the goal is to minimize the average communication delay, is addressed. The effectiveness of the neural network is demonstrated by simulation results of a neural design that solves the shortest-path problem. The simulation model of the neural network is shown to be usable within an optimum routing algorithm known as the flow deviation algorithm. It is also shown that the model enables the routing algorithm to be implemented in real time and to adapt to changes in link costs and network topology. (C) 2002 Elsevier Science Ltd. All rights reserved.
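As a rough illustration of the flow deviation idea the neural design plugs into (not the neural formulation itself), one deviation step weights each link by its marginal delay and reroutes traffic along the resulting shortest path. The topology, capacities, and flows below are invented.

```python
# One flow-deviation style step: link weight = marginal M/M/1 delay, then shortest path.
import networkx as nx

G = nx.DiGraph()
links = {("A", "B"): (10.0, 6.0), ("B", "D"): (10.0, 6.0),   # (capacity, current flow)
         ("A", "C"): (8.0, 1.0),  ("C", "D"): (8.0, 1.0)}
for (u, v), (cap, flow) in links.items():
    # marginal delay of an M/M/1 link: d/df [f / (C - f)] = C / (C - f)^2
    G.add_edge(u, v, weight=cap / (cap - flow) ** 2)

path = nx.shortest_path(G, "A", "D", weight="weight")
print("deviate flow onto", path)             # least-marginal-delay path: A -> C -> D
```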
Abstract:
This paper presents a prototype of a fuzzy system for alleviating network overloads in the day-to-day operation of power systems. The control used for overload alleviation is real-power generation rescheduling. Generation Shift Sensitivity Factors (GSSF) are computed accurately using a more realistic operational load flow model. Overloading of lines and the sensitivity of the controlling variables are translated into fuzzy set notation to formulate the relation between line overloading and the controlling ability of generation rescheduling. A fuzzy rule-based system is formed to select the controllers, their direction of movement, and their step size. The overall sensitivity of line loading to each generator is also considered in selecting the controllers. Results obtained for network overload alleviation of two modified Indian power networks (24-bus and 82-bus) with line outage contingencies are presented for illustration.
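A toy sketch of the fuzzification step described above, with invented membership functions and a single min-rule; the paper's actual rule base, GSSF computation, and step-size selection are more elaborate.

```python
# Fuzzify line overload and generator sensitivity, then apply one min-rule.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

overload_pct = 18.0          # line loaded 18% above its rating (invented)
sensitivity = 0.6            # GSSF of a candidate generator w.r.t. this line (invented)

heavy_overload = tri(overload_pct, 10, 25, 40)
high_sensitivity = tri(sensitivity, 0.3, 0.7, 1.0)

# Rule: IF overload is heavy AND sensitivity is high THEN reschedule this generator strongly
reschedule_strength = min(heavy_overload, high_sensitivity)
print(f"rescheduling grade for this generator: {reschedule_strength:.2f}")
```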
Abstract:
Pyrrolysyl-tRNA synthetase (PylRS) is an atypical enzyme responsible for charging tRNA(Pyl) with pyrrolysine, despite lacking precise tRNA anticodon recognition. This dimeric protein exhibits allosteric regulation of function, like other tRNA synthetases. In this study we examine the paths of allosteric communication at the atomic level through energy-weighted networks of Desulfitobacterium hafniense PylRS (DhPylRS) and its complexes with tRNA(Pyl) and activated pyrrolysine. We performed molecular dynamics simulations of the structures of these complexes to obtain an ensemble conformation-population perspective. Weighted graph parameters relevant to identifying key players and ties in the context of social networks, such as edge/node betweenness, the closeness index, and the concept of funneling, are explored to identify key residues and interactions that lie on the shortest paths of communication in the structure networks of DhPylRS. Further, the changes in the status of important residues and connections and in the costs of communication due to ligand-induced perturbations are evaluated. The optimal, suboptimal, and preexisting paths are also investigated. Many of these parameters exhibit an enhanced asymmetry between the two subunits of the dimeric protein, especially in the pretransfer complex, leading us to conclude that the encoding of function goes beyond the sequence/structure of proteins. The local and global perturbations mediated by appropriate ligands, and their influence on the equilibrium ensemble of conformations, also have a significant role to play in the functioning of proteins. Taking a comprehensive view of these observations, we propose that the origin of many functional aspects (allostery and half-sites reactivity in the case of DhPylRS) lies in subtle rearrangements of interactions and dynamics at a global level.
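For readers unfamiliar with the graph quantities involved, here is a minimal sketch of weighted shortest paths, edge betweenness, and closeness on a toy residue-interaction network; the residue names and edge weights are invented and not taken from the DhPylRS simulations.

```python
# Toy energy-weighted residue network: smaller weight = stronger interaction (shorter "distance").
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("R61", "D76", 0.4), ("D76", "K104", 0.7), ("K104", "Y206", 0.5),
    ("R61", "Y206", 1.8), ("Y206", "N346", 0.6),
])

print(nx.shortest_path(G, "R61", "N346", weight="weight"))    # cheapest communication path
print(nx.edge_betweenness_centrality(G, weight="weight"))     # key "ties" in the network
print(nx.closeness_centrality(G, distance="weight"))          # closeness index per residue
```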
Abstract:
An entirely different approach for localisation of winding deformation based on terminal measurements is presented. Within the context of this study, winding deformation means a discrete and specific change externally imposed at a particular position on the winding. The proposed method is based on pre-computing and plotting the loci of a complex network function [e.g. the driving-point impedance (DPI)] at a selected frequency, for a meaningful range of values (increasing and decreasing) of each element of the ladder network that represents the winding. This locus diagram is called a nomogram. After a discrete change is introduced, the amplitude and phase of the DPI are measured. By plotting this single measurement on the nomogram, it is possible to estimate the location and identify the extent of the change. In contrast to the existing approach, the proposed method is fast, non-iterative, and yields reasonably good localisation. Experimental results for actual transformer windings (interleaved and continuous disc) are presented.
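A simplified sketch of how one nomogram locus could be traced, assuming a uniform 10-section ladder model with invented element values: the driving-point impedance at a fixed frequency is recomputed as a single shunt element is scaled up and down.

```python
# Trace a DPI locus by sweeping one shunt capacitance of a simple ladder model.
import numpy as np

f = 50e3                                     # selected measurement frequency (Hz)
w = 2 * np.pi * f
L, Cs, Cg = 1e-3, 50e-12, 500e-12            # per-section series L, series C, shunt C (assumed)

def dpi(scale, changed_section, n_sections=10):
    """DPI of the ladder seen from the line end, with one shunt capacitance scaled."""
    Z = None                                 # open-circuited far end
    for k in reversed(range(n_sections)):
        cg = Cg * (scale if k == changed_section else 1.0)
        Y = 1j * w * cg + (0.0 if Z is None else 1.0 / Z)   # shunt C plus rest of the ladder
        Zs = 1j * w * L / (1 - w**2 * L * Cs)               # series L in parallel with series C
        Z = Zs + 1.0 / Y
    return Z

for s in np.linspace(0.8, 1.2, 5):           # +/-20% change in section 3's shunt capacitance
    z = dpi(s, changed_section=3)
    print(f"scale {s:.2f}: |Z| = {abs(z):.1f} ohm, phase = {np.degrees(np.angle(z)):.1f} deg")
```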
Abstract:
In this paper, we outline an approach to the task of designing network codes in a non-multicast setting. Our approach makes use of the concept of interference alignment. As an example, we consider the distributed storage problem, in which the data is stored across the network in n nodes, a data collector can recover the data by connecting to any k of the n nodes, and, upon failure of a node, a new node can replicate the data stored in the failed node while minimizing the repair bandwidth.
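A toy numerical illustration of the any-k-of-n recovery property with (n, k) = (4, 2), using real-valued Vandermonde coding; the paper's interference-alignment construction and its repair-bandwidth guarantees are not reproduced here.

```python
# Recover k data symbols from any k of n stored coded symbols.
import numpy as np

k, n = 2, 4
data = np.array([3.0, 7.0])                                   # k data symbols
A = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)  # n x k encoding matrix
stored = A @ data                                             # one coded symbol per node

surviving = [1, 3]                                            # data collector reaches any k nodes
recovered = np.linalg.solve(A[surviving], stored[surviving])
print(recovered)                                              # [3. 7.] -- original data recovered
```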
Abstract:
Characterizing the functional connectivity between neurons is key for understanding brain function. We recorded spikes and local field potentials (LFPs) from multielectrode arrays implanted in monkey visual cortex to test the hypotheses that spikes generated outward-traveling LFP waves and the strength of functional connectivity depended on stimulus contrast, as described recently. These hypotheses were proposed based on the observation that the latency of the peak negativity of the spike-triggered LFP average (STA) increased with distance between the spike and LFP electrodes, and the magnitude of the STA negativity and the distance over which it was observed decreased with increasing stimulus contrast. Detailed analysis of the shape of the STA, however, revealed contributions from two distinct sources: a transient negativity in the LFP locked to the spike (~0 ms) that attenuated rapidly with distance, and a low-frequency rhythm with peak negativity ~25 ms after the spike that attenuated slowly with distance. The overall negative peak of the LFP, which combined both these components, shifted from ~0 to ~25 ms going from electrodes near the spike to electrodes far from the spike, giving an impression of a traveling wave, although the shift was fully explained by changing contributions from the two fixed components. The low-frequency rhythm was attenuated during stimulus presentations, decreasing the overall magnitude of the STA. These results highlight the importance of accounting for the network activity while using STAs to determine functional connectivity.
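For concreteness, a minimal sketch of the spike-triggered average computation underlying this analysis, run on synthetic data (the LFP, spike times, and window length are invented).

```python
# Spike-triggered average (STA): average LFP segments around each spike time.
import numpy as np

fs = 1000                                     # sampling rate (Hz), so 1 sample = 1 ms
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 4 * t) + 0.2 * np.random.randn(t.size)   # toy LFP trace
spike_samples = np.random.choice(np.arange(200, t.size - 200), size=300, replace=False)

win = 100                                     # +/- 100 ms window around each spike
segments = np.stack([lfp[s - win: s + win + 1] for s in spike_samples])
sta = segments.mean(axis=0)                   # the spike-triggered LFP average
lags_ms = np.arange(-win, win + 1)
print("latency of peak negativity:", lags_ms[np.argmin(sta)], "ms")
```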
Abstract:
An analog minimum-variance unbiased estimator (MVUE) over an asymmetric wireless sensor network is studied. Minimisation of the variance is cast as a constrained non-convex optimisation problem. An explicit algorithm that solves the problem is provided. The solution is obtained by decomposing the original problem into a finite number of convex optimisation problems with explicit solutions. These solutions are then pieced together by exploiting further structure in the objective function.
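As background for the estimator being optimised, a minimal sketch of the classical minimum-variance unbiased combination of independent unbiased sensor readings (inverse-variance weights summing to one); the paper's analog, asymmetric-network constraints and the resulting non-convex problem are not captured here, and the numbers are invented.

```python
# Classical MVUE fusion of unbiased readings: inverse-variance weighting.
import numpy as np

readings = np.array([5.1, 4.7, 5.4])          # unbiased estimates of the same scalar
variances = np.array([0.5, 0.2, 1.0])         # per-sensor noise variances

w = (1 / variances) / np.sum(1 / variances)   # unbiasedness: weights sum to one
estimate = w @ readings
var = 1 / np.sum(1 / variances)               # achieved (minimum) variance
print(estimate, var)
```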
Abstract:
The poor performance of TCP over multi-hop wireless networks is well known. In this paper we explore to what extent network coding can help to improve the throughput of TCP-controlled bulk transfers over a chain-topology multi-hop wireless network. The nodes use a CSMA/CA mechanism, such as IEEE 802.11's DCF, to perform distributed packet scheduling. The reverse-flowing TCP ACKs are XORed with the forward-flowing TCP data packets. We find that, without any modification to the MAC protocol, the gain from network coding is negligible. The inherent coordination problem of carrier-sensing-based random access in multi-hop wireless networks dominates the performance. We provide a theoretical analysis that yields a throughput bound with network coding. We then propose a distributed modification of the IEEE 802.11 DCF, based on tuning the back-off mechanism using a feedback approach. Simulation studies show that the proposed mechanism, when combined with network coding, improves the performance of a TCP session by more than 100%.
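A minimal sketch of the coding operation itself: the relay XORs a forward TCP data packet with a reverse TCP ACK into one transmission, and each endpoint recovers the packet intended for it by XORing with the packet it already knows. The payload contents below are placeholders.

```python
# XOR network coding of a data packet and an ACK at a relay node.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")   # pad shorter packet with zeros
    return bytes(x ^ y for x, y in zip(a, b))

data_pkt = b"TCP DATA seq=1000"              # flowing source -> sink
ack_pkt = b"TCP ACK  ack=1000"               # flowing sink -> source
coded = xor_bytes(data_pkt, ack_pkt)         # single coded transmission by the relay

assert xor_bytes(coded, ack_pkt).rstrip(b"\x00") == data_pkt   # sink recovers the data packet
assert xor_bytes(coded, data_pkt).rstrip(b"\x00") == ack_pkt   # source recovers the ACK
```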
Abstract:
Building flexible constraint length Viterbi decoders requires us to be able to realize de Bruijn networks of various sizes on the physically provided interconnection network. This paper considers the case when the physical network is itself a de Bruijn network and presents a scalable technique for realizing any n-node de Bruijn network on an N-node de Bruijn network, where n < N. The technique ensures that the length of the longest path realized on the network is minimized and that each physical connection is utilized to send only one data item, both of which are desirable in order to reduce the hardware complexity of the network and to obtain the best possible performance.
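For reference, a small sketch of the binary de Bruijn structure being realized: in an N-node network (N a power of two), node i has out-edges to (2i mod N) and (2i + 1 mod N). The paper's mapping of a smaller n-node de Bruijn graph onto these physical links, with minimum longest path and one data item per connection, is not reproduced here.

```python
# Out-neighbours in an N-node binary de Bruijn network.
def de_bruijn_successors(i, N):
    """Node i connects to (2i mod N) and (2i + 1 mod N)."""
    return (2 * i) % N, (2 * i + 1) % N

N = 8                                        # an 8-node physical de Bruijn network
for i in range(N):
    print(i, "->", de_bruijn_successors(i, N))
```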
Abstract:
Digest caches have been proposed as an effective method to speed up packet classification in network processors. In this paper, we show that the presence of a large number of small flows and a few large flows in the Internet has an adverse impact on the performance of these digest caches. In the Internet, a few large flows transfer a majority of the packets, whereas the contribution of several small flows to the total number of packets transferred is small. In such a scenario, the LRU cache replacement policy, which gives maximum priority to the most recently accessed digest, tends to evict digests belonging to the few large flows. We propose a new cache management algorithm called Saturating Priority (SP) which aims at improving the performance of digest caches in network processors by exploiting the disparity between the number of flows and the number of packets transferred. Our experimental results demonstrate that SP performs better than the widely used LRU cache replacement policy in size-constrained caches. Further, we characterize the misses experienced by flow identifiers in digest caches.
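A toy simulation of the traffic skew the abstract points to, not of the SP policy itself: a handful of heavy flows mixed into many one-off flows run through a small LRU digest cache, and the misses are split by flow class. All sizes and counts below are invented.

```python
# LRU digest cache under a mix of a few heavy flows and many light flows.
import random
from collections import OrderedDict

random.seed(0)
heavy = [f"H{i}" for i in range(4)]                       # a few large flows
light = [f"L{i}" for i in range(2000)]                    # many small flows
trace = ([random.choice(heavy) for _ in range(4000)] +    # heavy flows: many packets each
         [random.choice(light) for _ in range(12000)])    # light flows: a packet or two each
random.shuffle(trace)

capacity, cache = 16, OrderedDict()                       # small digest cache with LRU eviction
misses = {"heavy": 0, "light": 0}
for flow in trace:
    if flow in cache:
        cache.move_to_end(flow)                           # refresh on hit
    else:
        misses["heavy" if flow in heavy else "light"] += 1
        if len(cache) >= capacity:
            cache.popitem(last=False)                     # evict the least recently used digest
        cache[flow] = True

print("misses by flow class:", misses)
```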
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlation between genes that have similar temporal profiles. Often, the methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation by providing valuable insight into identifying time-sensitive interactions as well as permit studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications, beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
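A minimal sketch of the modelling idea only (an interaction strength treated as a discrete state evolving under Markov dynamics); the states, transition matrix, and time points below are invented, and NETGEM's actual prior structure and inference over expression data are not shown.

```python
# One edge's interaction strength as a discrete-state Markov chain over time.
import numpy as np

rng = np.random.default_rng(0)
states = np.array([-1, 0, 1])                # repressive / absent / active interaction
P = np.array([[0.80, 0.15, 0.05],            # transition matrix over the three states
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

w = 1                                        # start in the "absent" state (index 1)
trajectory = []
for t in range(10):                          # interaction strength across 10 time points
    w = rng.choice(3, p=P[w])
    trajectory.append(int(states[w]))
print(trajectory)
```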