972 results for Network constraints
Abstract:
In this article we study the problem of joint congestion control, routing, and MAC-layer scheduling in multi-hop wireless mesh networks whose nodes are subject to maximum energy expenditure rates. We model link contention in the wireless network using the contention graph, and we model the energy expenditure rate constraint of the nodes using the energy expenditure rate matrix. We formulate the problem as an aggregate utility maximization problem and apply duality theory to decompose it into two sub-problems: a network-layer routing and congestion control problem, and a MAC-layer scheduling problem. The source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of the path includes not only the prices of its links but also the prices associated with the nodes on the path. The MAC-layer scheduling of the links is carried out based on the prices of the links. We study the effects of the nodes' energy expenditure rate constraints on the optimal throughput of the network.
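A minimal sketch of the price-based dynamics this abstract describes, assuming a toy topology, log utilities, a fixed per-bit relay energy cost, and a hand-picked step size (none of which come from the paper): the source rate responds to the least-cost path, whose cost combines link prices and node energy prices, and the prices follow projected subgradient updates.

```python
# Sketch of price-based congestion control with link and node-energy prices.
# Topology, capacities, budgets, log utility, and step size are all invented.
paths = {"A": ["l1", "l2"], "B": ["l3"]}          # candidate paths (link lists)
path_nodes = {"A": ["n1"], "B": ["n2"]}           # relay nodes on each path
cap = {"l1": 1.0, "l2": 1.5, "l3": 0.8}           # link capacities
ebudget = {"n1": 0.5, "n2": 0.2}                  # max energy expenditure rates
e_per_bit = 0.4                                   # energy per unit of relayed flow
lam = {l: 0.1 for l in cap}                       # link prices
mu = {n: 0.1 for n in ebudget}                    # node prices
alpha = 0.05                                      # subgradient step size

for _ in range(2000):
    # Source side: pick the least-cost path and set the rate from log utility,
    # max log(x) - q*x  =>  x = 1/q.
    cost = {p: sum(lam[l] for l in paths[p])
            + sum(e_per_bit * mu[n] for n in path_nodes[p]) for p in paths}
    best = min(cost, key=cost.get)
    x = 1.0 / max(cost[best], 1e-6)
    # Price side: projected subgradient updates on link and node constraints.
    for l in cap:
        flow = x if l in paths[best] else 0.0
        lam[l] = max(0.0, lam[l] + alpha * (flow - cap[l]))
    for n in ebudget:
        spent = e_per_bit * x if n in path_nodes[best] else 0.0
        mu[n] = max(0.0, mu[n] + alpha * (spent - ebudget[n]))

print("rate %.3f on path %s" % (x, best))
```

At equilibrium the bottleneck link or energy budget on the chosen path carries a positive price, which is exactly the coupling between the two sub-problems that the decomposition exposes.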
Abstract:
We consider a joint power control and transmission scheduling problem in wireless networks with average power constraints. While the capacity region of a wireless network is convex, a characterization of this region is a hard problem. We formulate a network utility optimization problem involving time-sharing across different "transmission modes," where each mode corresponds to the set of power levels used in the network. The structure of the optimal solution is a time-sharing across a small set of such modes. We use this structure to develop an efficient heuristic approach to finding a suboptimal solution through column generation iterations. This heuristic approach converges quite fast in simulations, and provides a tool for wireless network planning.
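A toy column-generation loop in the spirit of the described heuristic, assuming a scalar "network" with one power level per mode, an invented rate function, and the constraint duals as exposed by SciPy's HiGHS solver; the paper's master problem and pricing step are richer than this sketch.

```python
# Column generation for time-sharing over transmission modes (toy version).
import numpy as np
from scipy.optimize import linprog

P_AVG = 1.0                             # average-power budget (assumed)

def rate(p):                            # assumed per-mode rate for power p
    return np.log2(1.0 + 4.0 * p)

modes = [0.0, 5.0]                      # initial columns: idle and max power
for _ in range(25):
    r = [rate(p) for p in modes]
    # Master LP: maximize sum(r*t) s.t. sum(p*t) <= P_AVG, sum(t) = 1, t >= 0.
    res = linprog([-x for x in r], A_ub=[modes], b_ub=[P_AVG],
                  A_eq=[[1.0] * len(modes)], b_eq=[1.0], bounds=(0, None))
    pi = res.ineqlin.marginals[0]       # dual of the average-power constraint
    sigma = res.eqlin.marginals[0]      # dual of the time-sharing constraint
    # Pricing: add the candidate mode with the best positive reduced cost.
    cand = np.linspace(0.0, 5.0, 501)
    red = rate(cand) + pi * cand + sigma
    j = int(np.argmax(red))
    if red[j] < 1e-9:
        break                           # no improving mode: current set optimal
    modes.append(float(cand[j]))

print("modes kept:", [round(p, 2) for p in modes], " value:", round(-res.fun, 4))
```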
Abstract:
In a dense multi-hop network of mobile nodes capable of adaptive power control, we consider the problem of finding the optimal hop distance that maximizes a certain throughput measure in bit-metres/sec, subject to average network power constraints. The mobility of each node is restricted to a circular area centered at its nominal location. We incorporate only the randomly varying path-loss characteristics of the channel gain due to the random motion of nodes, excluding any multi-path fading or shadowing effects. Computing the throughput metric in this scenario requires the probability density function of the random distance between points in two circles. Using numerical analysis we find that choosing the nearest node as the next hop is not always optimal: optimal throughput is also attained at non-trivial hop distances, depending on the available average network power.
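The density at the heart of the computation is easy to probe numerically; a Monte-Carlo sketch with an assumed circle radius, centre spacing, and sample count:

```python
# Empirical pdf of the distance between uniform points in two circles of
# radius a whose centres are d apart. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def uniform_in_circle(n, radius, centre):
    r = radius * np.sqrt(rng.random(n))      # sqrt gives uniform area density
    theta = 2.0 * np.pi * rng.random(n)
    return centre + np.column_stack((r * np.cos(theta), r * np.sin(theta)))

a, d, n = 1.0, 3.0, 200_000
p = uniform_in_circle(n, a, np.array([0.0, 0.0]))
q = uniform_in_circle(n, a, np.array([d, 0.0]))
dist = np.linalg.norm(p - q, axis=1)

hist, edges = np.histogram(dist, bins=100, density=True)   # empirical pdf
print("support ~ [%.2f, %.2f], mean %.3f, pdf peak near %.2f"
      % (dist.min(), dist.max(), dist.mean(), edges[np.argmax(hist)]))
```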
Abstract:
Video streaming applications have hitherto been supported by single-server systems. A major drawback of such a solution is the load it places on the server: limited bandwidth restricts the number of clients that can be supported simultaneously. The constraints of a single-server system can be overcome in video streaming by exploiting the abundant resources available in a distributed, networked system. We explore a P2P system for streaming video applications. In this paper we build a P2P streaming video (SVP2P) service in which multiple peers cooperate to serve video segments for new requests, thereby reducing server load and bandwidth used. Our simulation shows that the playback latency using SVP2P is roughly one quarter of the latency incurred when the server streams the video directly. Bandwidth consumed by control messages (overhead) is as low as 1.5% of the total data transferred. The most important observation is that the capacity of SVP2P grows dynamically.
Abstract:
In this paper, we study duty cycling and power management in a network of energy harvesting sensor (EHS) nodes. We consider a one-hop network, where K EHS nodes send data to a destination over a wireless fading channel. The goal is to find the optimum duty cycling and power scheduling across the nodes that maximizes the average sum data rate, subject to energy neutrality at each node. We adopt a two-stage approach to simplify the problem. In the inner stage, we solve the problem of optimal duty cycling of the nodes, subject to the short-term power constraint set by the outer stage. The outer stage sets the short-term power constraints on the inner stage to maximize the long-term expected sum data rate, subject to long-term energy neutrality at each node. Albeit suboptimal, our solutions turn out to have a surprisingly simple form: the duty cycle allotted to each node by the inner stage is simply the fractional allotted power of that node relative to the total allotted power. The sum power allotted is a clipped version of the sum harvested power across all the nodes. The average sum throughput thus ultimately depends only on the sum harvested power and its statistics. We illustrate the performance improvement offered by the proposed solution compared to other naive schemes via Monte-Carlo simulations.
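The reported allocation structure can be stated in a few lines; a sketch with assumed per-node harvested powers and an assumed clip level (in the paper both come out of the outer-stage optimization):

```python
# Structure of the two-stage solution: sum power is a clipped version of the
# sum harvested power, and each duty cycle is a fractional power share.
import numpy as np

harvested = np.array([0.8, 0.3, 1.4])   # per-node harvested power (assumed)
P_CLIP = 2.0                            # clip level from the outer stage (assumed)

p_sum = min(harvested.sum(), P_CLIP)    # clipped sum transmit power
alloc = harvested / harvested.sum() * p_sum   # per-node power (one simple choice)
duty = alloc / alloc.sum()              # duty cycle = fractional allotted power

print("sum power %.2f, duty cycles %s" % (p_sum, np.round(duty, 3)))
```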
Abstract:
Recently, Ebrahimi and Fragouli proposed an algorithm to construct scalar network codes using small fields (and vector network codes of small lengths) satisfying multicast constraints in a given single-source, acyclic network. The contribution of this paper is twofold. Primarily, we extend the scalar network coding algorithm of Ebrahimi and Fragouli (henceforth referred to as the EF algorithm) to block network-error correction. Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can thereby be prohibitive in large networks. We give an algorithm which, starting from a given network-error correcting code, can obtain another network code using a small field, with the same error-correcting capability as the original code. Our secondary contribution is to improve the EF algorithm itself. The major step in the EF algorithm is to find a least-degree irreducible polynomial which is coprime to another large-degree polynomial. We suggest an alternate method to compute this coprime polynomial, which is faster than the brute-force method in the work of Ebrahimi and Fragouli.
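For context, a baseline (brute-force-style) sketch of the step mentioned in the last sentences: finding a least-degree irreducible polynomial over GF(2) coprime to a given polynomial f, with polynomials encoded as Python ints (bit i holds the coefficient of x^i). The paper's alternate method computes this faster; only the task itself is illustrated here.

```python
# GF(2)[x] arithmetic on int-encoded polynomials.
def deg(p):
    return p.bit_length() - 1

def mod2(a, b):                         # remainder of a divided by b over GF(2)
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def gcd2(a, b):
    while b:
        a, b = b, mod2(a, b)
    return a

def irreducible(p):                     # trial division up to degree deg(p)//2
    return deg(p) >= 1 and all(mod2(p, q) != 0
                               for q in range(2, 1 << (deg(p) // 2 + 1)))

def least_coprime_irreducible(f):
    n = 1
    while True:                         # enumerate by increasing degree
        for p in range(1 << n, 1 << (n + 1)):
            if irreducible(p) and deg(gcd2(f, p)) == 0:
                return p
        n += 1

f = 0b1100                              # x^3 + x^2 = x^2 (x + 1), assumed example
print(bin(least_coprime_irreducible(f)))   # 0b111, i.e. x^2 + x + 1
```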
Abstract:
We consider precoding strategies at the secondary base station (SBS) in a cognitive radio network with interference constraints at the primary users (PUs). Precoding strategies at the SBS which satisfy interference constraints at the PUs in cognitive radio networks have not been adequately addressed in the literature so far. In this paper, we consider two scenarios: i) when the primary base station (PBS) data is not available at the SBS, and ii) when the PBS data is made available at the SBS. For the former case, we derive the optimum MMSE and Tomlinson-Harashima precoding (THP) matrix filters at the SBS which satisfy the interference constraints at the PUs. For the latter case, we propose a precoding scheme at the SBS which performs pre-cancellation of the PBS data, followed by THP on the pre-cancelled data. The optimum precoding matrix filters are computed through an iterative search. To illustrate the robustness of the proposed approach against imperfect CSI at the SBS, we then derive robust precoding filters under imperfect CSI for the latter case. Simulation results show that the proposed optimum precoders achieve good bit error performance at the secondary users while meeting the interference constraints at the PUs.
Abstract:
The design of modulation schemes for the physical-layer network-coded two-way relaying scenario is considered, with a protocol which employs two phases: a multiple access (MA) phase and a broadcast (BC) phase. It was observed by Koike-Akino et al. that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of MA interference which occurs at the relay during the MA phase, and that all these network coding maps should satisfy a requirement called the exclusive law. We show that every network coding map that satisfies the exclusive law is representable by a Latin Square and, conversely, that this relationship can be used to obtain the network coding maps satisfying the exclusive law. The channel fade states for which the minimum distance of the effective constellation at the relay becomes zero are referred to as the singular fade states. For M-PSK modulation (M any power of 2), it is shown that there are (M^2/4 - M/2 + 1)M singular fade states. Also, it is shown that the constraints which the network coding maps should satisfy so that the harmful effects of the singular fade states are removed can be viewed equivalently as partially filled Latin Squares (PFLS). The problem of finding all the required maps is reduced to finding a small set of maps for M-PSK constellations (M any power of 2), obtained by the completion of PFLS. Even though the completability of M x M PFLS using M symbols is an open problem, specific cases where such a completion is always possible are identified and explicit construction procedures are provided. Having obtained the network coding maps, the set of all possible channel realizations (the complex plane) is quantized into a finite number of regions, with a specific network coding map chosen in a particular region. It is shown that the complex plane can be partitioned into two regions: a region in which any network coding map satisfying the exclusive law gives the same best performance, and a region in which the choice of the network coding map affects the performance. When specialized for M = 4, the quantization thus obtained analytically is the same as the one obtained by Koike-Akino et al. using computer search for the 4-PSK signal set. Simulation results show that the proposed scheme performs better than the conventional exclusive-OR (XOR) network coding and in some cases outperforms the scheme proposed by Koike-Akino et al.
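The exclusive-law/Latin-square correspondence is mechanical to verify for any given map; a small check for the conventional XOR map with M = 4 (an arbitrary choice used only for illustration):

```python
# A relay map f(x_A, x_B) satisfies the exclusive law exactly when its
# M x M table is a Latin square: every row and column holds all M symbols.
M = 4
table = [[a ^ b for b in range(M)] for a in range(M)]   # XOR network-coding map

def is_latin_square(t):
    symbols = set(range(len(t)))
    rows_ok = all(set(row) == symbols for row in t)
    cols_ok = all({row[j] for row in t} == symbols for j in range(len(t)))
    return rows_ok and cols_ok

print(is_latin_square(table))   # True: the XOR map satisfies the exclusive law
```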
Abstract:
We use information theoretic achievable rate formulas for the multi-relay channel to study the problem of optimal placement of relay nodes along the straight line joining a source node and a destination node. The achievable rate formulas that we utilize are for full-duplex radios at the relays and decode-and-forward relaying. For the single relay case, and individual power constraints at the source node and the relay node, we provide explicit formulas for the optimal relay location and the optimal power allocation to the source-relay channel, for the exponential and the power-law path-loss channel models. For the multiple relay case, we consider exponential path-loss and a total power constraint over the source and the relays, and derive an optimization problem, the solution of which provides the optimal relay locations. Numerical results suggest that at low attenuation the relays are mostly clustered close to the source in order to be able to cooperate among themselves, whereas at high attenuation they are uniformly placed and work as repeaters. We also prove that a constant rate independent of the attenuation in the network can be achieved by placing a large enough number of relay nodes uniformly between the source and the destination, under the exponential path-loss model with total power constraint.
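A numerical sketch of the single-relay placement question, assuming the exponential path-loss model g(d) = exp(-rho*d) and the simplified full-duplex decode-and-forward rate min(C(g_sr*Ps), C(g_sd*Ps + g_rd*Pr)) with C(s) = log2(1+s), i.e. without coherent combining; the attenuation constant and the power grid are illustrative, and the paper's explicit formulas replace this grid search.

```python
# Grid search over relay position and power split for one relay on the
# unit-length source-destination line, under a total power constraint.
import numpy as np

rho, P_total = 2.0, 1.0
C = lambda s: np.log2(1.0 + s)          # Gaussian-channel rate
g = lambda d: np.exp(-rho * d)          # exponential path-loss gain

best = (-1.0, None)
for x in np.linspace(0.01, 0.99, 99):       # relay position on the S-D line
    for a in np.linspace(0.01, 0.99, 99):   # fraction of power at the source
        Ps, Pr = a * P_total, (1 - a) * P_total
        R = min(C(g(x) * Ps), C(g(1.0) * Ps + g(1.0 - x) * Pr))
        if R > best[0]:
            best = (R, (round(x, 2), round(a, 2)))

print("rate %.3f at (relay position, source power fraction) = %s" % best)
```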
Abstract:
Transmission investments are currently needed to meet increasing electricity demand, to address security-of-supply concerns, and to reach carbon-emissions targets. A key issue when assessing the benefits of an expanded grid is the valuation of the uncertain cash flows that result from the expansion. We propose a valuation model that accommodates both physical and economic uncertainties following the Real Options approach. It combines optimization techniques with Monte Carlo simulation. We illustrate the use of our model in a simplified two-node grid and assess the decision of whether or not to invest in a particular upgrade. The generation mix includes coal- and natural gas-fired stations that operate under carbon constraints. The underlying parameters are estimated from observed market data.
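A stylised sketch of the valuation mechanics, assuming a single geometric Brownian motion for the price spread the upgrade captures and invented cost figures; the paper couples the simulation with an optimization of grid operation and estimates its parameters from market data.

```python
# Monte-Carlo valuation of a grid upgrade under an uncertain price spread.
import numpy as np

rng = np.random.default_rng(1)
S0, drift, sigma, r = 20.0, 0.02, 0.25, 0.05   # spread level, drift, vol, discount
T, capacity, capex = 10, 100.0, 15_000.0       # years, transfer capacity, cost
n_paths = 50_000

z = rng.standard_normal((n_paths, T))
# Yearly-sampled geometric Brownian motion for the spread.
spread = S0 * np.exp(np.cumsum((drift - 0.5 * sigma**2) + sigma * z, axis=1))
years = np.arange(1, T + 1)
cash = capacity * spread                       # incremental annual cash flow
npv = (cash * np.exp(-r * years)).sum(axis=1) - capex

print("expected NPV of upgrading now: %.0f" % npv.mean())
# Perfect-foresight upper bound on the value of optionality.
print("E[max(NPV, 0)]: %.0f" % np.maximum(npv, 0.0).mean())
```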
Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The approaches developed so far, such as outer-product rules, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement of full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
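A minimal discrete-time sketch of a Winner-Take-All competition of the kind generalized here, using the classic mutual-inhibition (MAXNET-style) update rather than the thesis's Hopfield electronic model; the inhibition strength is an illustrative choice.

```python
# Mutual inhibition drives every unit except the largest to zero.
import numpy as np

def winner_take_all(inputs, eps=0.2, steps=200):
    # eps below 1/(number of competitors - 1) guarantees the largest unit wins.
    x = np.array(inputs, dtype=float)
    for _ in range(steps):
        # Each unit is inhibited by the summed activity of all the others.
        x = np.maximum(0.0, x - eps * (x.sum() - x))
        if np.count_nonzero(x) <= 1:
            break
    return x

print(winner_take_all([0.50, 0.52, 0.31]))   # only the largest unit stays active
```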
Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.
Abstract:
In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of the control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns, and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three fundamental modules: one regulating the co-activation of all muscles over the time span of the movement, and two others eliciting patterns of reciprocal activation operating in orthogonal directions.
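A schematic stand-in for the mapping the study learns, assuming a small recurrent network, synthetic data, and arbitrary sizes; the study's DRNN architecture and training procedure are not reproduced here.

```python
# Train a small RNN to map 7 EMG envelopes to 2-D wrist kinematics (toy data).
import torch
import torch.nn as nn

torch.manual_seed(0)
T, n_muscles, n_out = 200, 7, 2
emg = torch.rand(1, T, n_muscles)       # one synthetic trial of EMG envelopes
kin = torch.rand(1, T, n_out)           # matching synthetic wrist kinematics

rnn = nn.RNN(n_muscles, 32, batch_first=True)
readout = nn.Linear(32, n_out)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()),
                       lr=1e-2)

for epoch in range(200):
    h, _ = rnn(emg)                     # hidden states over the trial
    pred = readout(h)                   # predicted kinematics at each time step
    loss = nn.functional.mse_loss(pred, kin)
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))                      # training error on the synthetic trial
```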
Abstract:
In this paper, we studied range-based attacks on links in geographically constrained scale-free networks and found that there is a continuous switching of roles between short- and long-range attacks on links as the geographical constraint strength is tuned. Our results demonstrate that geography has a significant impact on network efficiency and security; thus one can adjust the geographical structure to optimize the robustness and efficiency of a network. We introduce a measure of the impact of links on the efficiency of the network, and suggest an effective attacking strategy.
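A sketch of this kind of link-impact measurement on a stand-in scale-free graph (without the geographical constraint): score each link by the relative drop in global efficiency when it is removed. The graph model and scoring details are assumptions for illustration.

```python
# Rank links by their impact on global efficiency (mean inverse shortest path).
import networkx as nx

G = nx.barabasi_albert_graph(50, 2, seed=0)   # stand-in scale-free network
base = nx.global_efficiency(G)

impact = {}
for u, v in list(G.edges()):
    G.remove_edge(u, v)
    impact[(u, v)] = (base - nx.global_efficiency(G)) / base
    G.add_edge(u, v)                          # restore the link

worst = max(impact, key=impact.get)
print("most critical link %s, efficiency loss %.1f%%"
      % (worst, 100 * impact[worst]))
```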
Abstract:
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described, complete implementations are presented, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
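A toy rendering of the paradigm's central idea, a device with unlabelled terminals that computes in whichever direction values are available, here for the single relation 9C = 5(F - 32); the cell and device machinery is invented for illustration and is far simpler than the languages the dissertation builds.

```python
# Local-value propagation: cells are wires, devices wake when a wire gets a value.
class Cell:
    def __init__(self):
        self.value, self.users = None, []
    def set(self, v):
        if self.value is None:
            self.value = v
            for device in self.users:
                device.propagate()       # wake devices attached to this wire

class CelsiusFahrenheit:                 # constraint device: 9*C = 5*(F - 32)
    def __init__(self, c, f):
        self.c, self.f = c, f
        c.users.append(self); f.users.append(self)
    def propagate(self):                 # compute whichever terminal is missing
        if self.c.value is not None and self.f.value is None:
            self.f.set(self.c.value * 9 / 5 + 32)
        elif self.f.value is not None and self.c.value is None:
            self.c.set((self.f.value - 32) * 5 / 9)

c, f = Cell(), Cell()
CelsiusFahrenheit(c, f)
f.set(212.0)                             # drive either terminal...
print(c.value)                           # ...and the other is deduced: 100.0
```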
Abstract:
BACKGROUND: In the current climate of high-throughput computational biology, the inference of a protein's function from related measurements, such as protein-protein interaction relations, has become a canonical task. Most existing technologies pursue this task as a classification problem, on a term-by-term basis, for each term in a database such as the Gene Ontology (GO) database, a popular rigorous vocabulary for biological functions. However, ontology structures are essentially hierarchies, with certain top-to-bottom annotation rules which protein function predictions should in principle follow. Currently, the most common approach to imposing these hierarchical constraints on network-based classifiers is to apply a transitive closure to the predictions.

RESULTS: We propose a probabilistic framework to integrate information in relational data, in the form of a protein-protein interaction network, and a hierarchically structured database of terms, in the form of the GO database, for the purpose of protein function prediction. At the heart of our framework is a factorization of local neighborhood information in the protein-protein interaction network across successive ancestral terms in the GO hierarchy. We introduce a classifier within this framework, with a computationally efficient implementation, that produces GO-term predictions that naturally obey a hierarchical 'true-path' consistency from root to leaves, without the need for further post-processing.

CONCLUSION: A cross-validation study, using data from the yeast Saccharomyces cerevisiae, shows our method offers substantial improvements over both standard 'guilt-by-association' (i.e., nearest-neighbor) and more refined Markov random field methods, whether in their original form or when post-processed to artificially impose 'true-path' consistency. Further analysis of the results indicates that these improvements are associated with increased predictive capabilities (i.e., increased positive predictive value), and that this increase is uniform across GO-term depth. Additional in silico validation on a collection of new annotations recently added to GO confirms the advantages suggested by the cross-validation study. Taken as a whole, our results show that a hierarchical approach to network-based protein function prediction, which exploits the ontological structure of protein annotation databases in a principled manner, can offer substantial advantages over the successive application of 'flat' network-based methods.
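For concreteness, a sketch of the 'true-path' rule on a made-up three-term chain: scores may not increase from root to leaf, so a post-hoc pass caps each term by its ancestors (the paper's classifier obeys the rule by construction instead of post-processing):

```python
# Enforce 'true-path' consistency: a term's score is capped by its ancestors'.
parents = {"binding": None, "protein_binding": "binding",
           "kinase_binding": "protein_binding"}          # toy GO-like chain
score = {"binding": 0.6, "protein_binding": 0.8, "kinase_binding": 0.3}

def consistent(score, parents):
    fixed = {}
    def cap(term):                     # score capped by every ancestor's score
        if term not in fixed:
            p = parents[term]
            fixed[term] = score[term] if p is None else min(score[term], cap(p))
        return fixed[term]
    return {t: cap(t) for t in parents}

print(consistent(score, parents))
# {'binding': 0.6, 'protein_binding': 0.6, 'kinase_binding': 0.3}
```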