310 results for Network coding
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlation between genes that have similar temporal profiles. Often, the methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation by providing valuable insight into identifying time-sensitive interactions as well as permit studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications, beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
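To make the modeling idea concrete, the following is a minimal, hypothetical Python sketch (not the NETGEM implementation) in which each edge's interaction strength evolves as a Markov chain over a few discrete states; the state set, transition matrix, and gene names are illustrative assumptions only.

```python
import numpy as np

# Hypothetical sketch only, not the NETGEM code: interaction strengths on
# network edges evolve over discrete time steps as a Markov chain over a
# small set of states, the kind of dynamics the model places priors over.
np.random.seed(0)

states = np.array([-1.0, 0.0, 1.0])        # assumed: repressive, inactive, activating
P = np.array([[0.80, 0.15, 0.05],          # assumed transition probabilities
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

def sample_strength_trajectory(T):
    """Sample one interaction-strength trajectory of length T."""
    idx = np.random.choice(len(states))
    traj = []
    for _ in range(T):
        traj.append(states[idx])
        idx = np.random.choice(len(states), p=P[idx])
    return np.array(traj)

# One trajectory per edge of a toy two-edge network.
for edge in [("geneA", "geneB"), ("geneB", "geneC")]:
    print(edge, sample_strength_trajectory(T=10))
```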
Abstract:
An optimal control law for a general nonlinear system can be obtained by solving the Hamilton-Jacobi-Bellman (HJB) equation. However, it is difficult to obtain an analytical solution of this equation even for a moderately complex system. In this paper, we propose a continuous-time single-network adaptive critic scheme for nonlinear control-affine systems, where the optimal cost-to-go function is approximated using a parametric positive semi-definite function. Unlike earlier approaches, a continuous-time weight update law is derived from the HJB equation. The stability of the system is analysed during the evolution of weights using Lyapunov theory. The effectiveness of the scheme is demonstrated through simulation examples.
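For reference, the standard infinite-horizon HJB equation for a control-affine system is sketched below in generic notation; the symbols f, g, Q and R are assumed here and are not taken from the abstract.

```latex
% Dynamics and cost (assumed generic forms):
\dot{x} = f(x) + g(x)\,u, \qquad
J = \int_0^{\infty} \bigl( Q(x) + u^{\top} R\, u \bigr)\, dt
% The optimal cost-to-go V^{*}(x) satisfies the HJB equation
0 = Q(x) + \nabla V^{*\top} f(x)
    - \tfrac{1}{4}\, \nabla V^{*\top} g(x) R^{-1} g(x)^{\top} \nabla V^{*},
% with the corresponding optimal control
u^{*}(x) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V^{*}(x).
```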
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 k samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering, and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals segmented over a period of 200 ms into an action in a trained cluster system in real time.
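As a rough sanity check on the stated data volume (assuming each 16-bit sample is stored as 2 bytes):

```latex
64 \ \text{electrodes} \times 20{,}000 \ \tfrac{\text{samples}}{\text{s}}
  \times 2 \ \tfrac{\text{bytes}}{\text{sample}}
  \approx 2.56 \ \text{MB/s} \approx 154 \ \text{MB/min},
```

so a few minutes of recording indeed reaches a few hundred megabytes.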
Abstract:
The prevalent virtualization technologies provide QoS support within the software layers of the virtual machine monitor (VMM) or the operating system of the virtual machine (VM). The QoS features are mostly provided as extensions to the existing software used for accessing the I/O device, as a result of which applications sharing the I/O device experience a loss of performance or usable bandwidth due to crosstalk effects. In this paper we examine the NIC sharing effects across VMs on a Xen virtualized server and present an alternate paradigm that improves the shared bandwidth and reduces the crosstalk effect on the VMs. We implement the proposed hardware-software changes in a layered queuing network (LQN) model and use simulation techniques to evaluate the architecture. We find that simple changes in the device architecture and associated system software lead to application throughput improvements of up to 60%. The architecture also enables finer QoS controls at the device level and increases the scalability of device sharing across multiple virtual machines. We find that the performance improvement derived using the LQN model is comparable to that reported by similar but real implementations.
Abstract:
Scalable Networks on Chips (NoCs) are needed to match the ever-increasing communication demands of large-scale Multi-Processor Systems-on-Chip (MPSoCs) for multimedia communication applications. The heterogeneous nature of application-specific on-chip cores, along with the specific communication requirements among the cores, calls for the design of application-specific NoCs for improved performance in terms of communication energy, latency, and throughput. In this work, we propose a methodology for the design of customized irregular networks-on-chip. The proposed method exploits a priori knowledge of the application's communication characteristics to generate an optimized network topology and corresponding routing tables.
Abstract:
Network processors today consist of multiple parallel processors (microengines) with support for multiple threads to exploit the packet-level parallelism inherent in network workloads. With such concurrency, packet ordering at the output of the network processor cannot be guaranteed. This paper studies the effect of concurrency in network processors on packet ordering. We use a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for an IPv4 forwarding application. Our study indicates that, in addition to the parallel processing in the network processor, the allocation scheme for the transmit buffer also adversely impacts packet ordering. In particular, our results reveal that this packet reordering results in a packet retransmission rate of up to 61%. We explore different transmit buffer allocation schemes, namely contiguous, strided, local, and global, which reduce the packet retransmission rate to 24%. We propose an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps. Further, packet sort outperforms the in-built packet ordering schemes in the IXP processor by up to 35%.
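The general idea behind a packet-sort stage can be illustrated with a sequence-number reorder buffer; the Python sketch below is illustrative only and is not the IXP 2400 mechanism evaluated in the paper.

```python
import heapq

# Illustrative sketch: hold packets that arrive out of order and release
# them strictly in sequence-number order (not the IXP 2400 implementation).
class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0          # next sequence number expected on the wire
        self.pending = []          # min-heap of (seq, packet) that arrived early

    def push(self, seq, packet):
        """Accept a possibly out-of-order packet; return packets now deliverable in order."""
        heapq.heappush(self.pending, (seq, packet))
        released = []
        while self.pending and self.pending[0][0] == self.next_seq:
            released.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return released

buf = ReorderBuffer()
for seq, pkt in [(1, "p1"), (0, "p0"), (3, "p3"), (2, "p2")]:
    print(seq, "->", buf.push(seq, pkt))   # p0/p1 released together, then p2/p3
```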
Abstract:
We consider a joint power control and transmission scheduling problem in wireless networks with average power constraints. While the capacity region of a wireless network is convex, characterizing this region is a hard problem. We formulate a network utility optimization problem involving time-sharing across different "transmission modes," where each mode corresponds to the set of power levels used in the network. The structure of the optimal solution is time-sharing across a small set of such modes. We use this structure to develop an efficient heuristic approach to finding a suboptimal solution through column generation iterations. This heuristic approach converges quickly in simulations and provides a tool for wireless network planning.
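A schematic form of the mode time-sharing formulation, in assumed notation (the utilities U_f, mode rate vectors r_m, per-node power vectors p_m, and time-share fractions alpha_m are placeholders rather than the paper's exact formulation):

```latex
\max_{x,\ \alpha \succeq 0} \ \sum_{f} U_f(x_f)
\quad \text{s.t.} \quad
x \preceq \sum_{m} \alpha_m r_m, \qquad
\sum_{m} \alpha_m p_m \preceq \bar{p}, \qquad
\sum_{m} \alpha_m = 1 .
```

Column generation keeps only a small working set of modes and adds a new mode (column) when it can improve the current solution, which matches the structural result that the optimum time-shares across a small set of modes.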
Abstract:
In this paper, we propose and analyze a novel idea of performing interference cancellation (IC) in a distributed/cooperative manner, with the motivation of providing a multiuser detection (MUD) benefit to nodes that have only single-user detection capability. In the proposed distributed interference cancellation (DIC) scheme, during phase 1 of transmission, an MUD-capable cooperating relay node estimates all the sender nodes' bits through multistage interference cancellation. These estimated bits are then sent by the relay node on orthogonal tones in phase 2 of transmission. The destination nodes receive these bit estimates and use them for interference estimation/cancellation, thus achieving the IC benefit in a distributed manner. For this DIC scheme, we analytically derive an exact expression for the bit error rate (BER) in a basic five-node network (two source-destination node pairs and a cooperating relay node) on AWGN channels. Analytical BER results are shown to match simulation results. For more general system scenarios, including more than two source-destination pairs and fading channels without and with space-time coding, we present simulation results to establish the potential for improved performance in the proposed distributed approach to interference cancellation. We also present a linear version of the proposed DIC scheme.
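The two-phase idea for the basic five-node case can be sketched as follows, in assumed notation (the channel gains h and noise terms n are placeholders, written with generic gains even though the analysis in the abstract is for AWGN channels):

```latex
% Phase 1: both sources transmit; the relay R and destination D_1 observe
y_R     = h_{1R}\, x_1 + h_{2R}\, x_2 + n_R, \qquad
y_{D_1} = h_{11}\, x_1 + h_{21}\, x_2 + n_{D_1}.
% The MUD-capable relay forms \hat{x}_1, \hat{x}_2 via multistage IC and
% forwards them on orthogonal tones in Phase 2; D_1 then cancels the
% interfering term before detecting its own signal:
\tilde{y}_{D_1} = y_{D_1} - h_{21}\, \hat{x}_2 .
```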
Abstract:
We consider the problem of distributed joint source-channel coding of correlated Gaussian sources over a Gaussian Multiple Access Channel (MAC). There may be side information at the encoders and/or at the decoder. First, we specialize a general result in [16] to obtain sufficient conditions for reliable transmission over a Gaussian MAC. This system does not satisfy source-channel separation. We then study and compare three joint source-channel coding schemes available in the literature.
Abstract:
We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that, for a dense collection of nodes confined to a small region, single-cell operation is efficient for single-user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path-loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
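In display form (C_transport below is an assumed label for the optimal transport capacity; the remaining symbols are as in the abstract):

```latex
C_{\text{transport}} = d_{\text{opt}}(\bar{P}_t)\, \Theta_{\text{opt}},
\qquad
d_{\text{opt}}(\bar{P}_t) \propto \bar{P}_t^{\,1/\eta}.
```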