875 results for Distribution network reconfiguration problem
Abstract:
We consider a joint power control and transmission scheduling problem in wireless networks with average power constraints. While the capacity region of a wireless network is convex, a characterization of this region is a hard problem. We formulate a network utility optimization problem involving time-sharing across different "transmission modes," where each mode corresponds to the set of power levels used in the network. The structure of the optimal solution is a time-sharing across a small set of such modes. We use this structure to develop an efficient heuristic approach to finding a suboptimal solution through column generation iterations. This heuristic approach converges quite fast in simulations, and provides a tool for wireless network planning.
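A minimal sketch of the column-generation idea described above, on a hypothetical two-link interference model (the channel gains, power levels, and sum-log utility below are illustrative, not from the paper): the restricted master problem optimises time-shares over the modes generated so far, and the pricing step searches the candidate power vectors for a mode that improves the gradient-weighted rate.
```python
# Toy column generation over "transmission modes" (sets of link power levels).
# Illustrative assumptions: 2 links, on/off power levels, Shannon-type rates
# with fixed cross gains, and a sum-log network utility.
import itertools
import numpy as np
from scipy.optimize import minimize

G = np.array([[1.0, 0.2],      # G[i, j]: gain from transmitter j to receiver i
              [0.3, 1.0]])
NOISE, LEVELS = 0.1, (0.0, 1.0)

def mode_rates(p):
    """Per-link rates when the power vector p is used simultaneously."""
    p = np.asarray(p)
    sig = np.diag(G) * p
    intf = G @ p - sig
    return np.log2(1.0 + sig / (NOISE + intf))

def solve_master(modes):
    """Restricted master: maximise sum(log(rate)) over time-shares alpha."""
    R = np.array([mode_rates(m) for m in modes])          # |modes| x 2
    neg_util = lambda a: -np.sum(np.log(a @ R + 1e-12))
    a0 = np.ones(len(modes)) / len(modes)
    res = minimize(neg_util, a0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * len(modes),
                   constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}])
    return res.x, res.x @ R

modes = [(1.0, 0.0)]                                      # initial column
for _ in range(10):
    alpha, rates = solve_master(modes)
    grad = 1.0 / np.maximum(rates, 1e-9)                  # utility gradient
    # Pricing: find the candidate mode with the largest gradient-weighted rate.
    best = max(itertools.product(LEVELS, repeat=2),
               key=lambda m: grad @ mode_rates(m))
    if grad @ mode_rates(best) <= grad @ rates + 1e-9 or best in modes:
        break                                             # no improving column
    modes.append(best)

print("modes:", modes)
print("time-shares:", np.round(alpha, 3))
```
In line with the structure noted in the abstract, only a small set of modes ends up with a positive time-share.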
Abstract:
We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter, while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximizes the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate ($\Theta_{\mathrm{opt}}$ bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network, when operating at the optimal hop length and power control. The optimal transport capacity is of the form $d_{\mathrm{opt}}(\bar{P}_t) \times \Theta_{\mathrm{opt}}$, with $d_{\mathrm{opt}}$ scaling as $\bar{P}_t^{1/\eta}$, where $\bar{P}_t$ is the available time-average transmit power and $\eta$ is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
Abstract:
We consider the classical problem of sequential detection of a change in a distribution (from hypothesis 0 to hypothesis 1), where the fusion centre receives vectors of periodic measurements, with the measurements being i.i.d. over time and across the vector components under each of the two hypotheses. In our problem, the sensor devices ("motes") that generate the measurements constitute an ad hoc wireless network. The motes contend using a random access protocol (such as CSMA/CA) to transmit their measurement packets to the fusion centre. The fusion centre waits for vectors of measurements to accumulate before taking decisions. We formulate the optimal detection problem, taking into account the network delay experienced by the vectors of measurements, and find that, under periodic sampling, the detection delay decouples into network delay and decision delay. We obtain a lower bound on the network delay and propose a censoring scheme, in which lagging sensors drop their delayed observations in order to mitigate network delay. We show that this scheme can achieve the lower bound, and we explore the approach via simulation. We also use numerical evaluation and simulation to study issues such as the optimal sampling rate for a given number of sensors, and the optimal number of sensors for a given measurement rate.
Abstract:
In this paper, we consider the problem of association of wireless stations (STAs) with an access network served by a wireless local area network (WLAN) and a 3G cellular network. There is a set of WLAN Access Points (APs) and a set of 3G Base Stations (BSs), and a number of STAs each of which needs to be associated with one of the APs or one of the BSs. We concentrate on downlink bulk elastic transfers. Each association provides each STA with a certain transfer rate. We evaluate an association on the basis of the sum log utility of the transfer rates and seek the utility maximizing association. We also obtain the optimal time scheduling of service from a 3G BS to the associated STAs. We propose a fast iterative heuristic algorithm to compute an association. Numerical results show that our algorithm converges in a few steps, yielding an association that is within 1% (in objective value) of the optimal (obtained through exhaustive search); in most cases the algorithm yields an optimal solution.
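As a rough illustration of this kind of iterative association heuristic (a generic single-station local-improvement search, not the authors' algorithm), assume each AP or BS serves its associated STAs by equal time-sharing, so an STA's rate is its standalone peak rate divided by the number of co-associated STAs; all rate values below are made up.
```python
# Toy local-search heuristic for STA association with sum-log utility.
# Illustrative assumptions: each AP/BS serves its associated STAs by equal
# time-sharing, so an STA attached to server s with n_s users gets
# peak[sta, s] / n_s.  The peak-rate matrix is made up.
import numpy as np

rng = np.random.default_rng(0)
n_sta, n_srv = 8, 3                                   # 8 stations, 3 servers (APs/BSs)
peak = rng.uniform(1.0, 10.0, size=(n_sta, n_srv))    # Mb/s if served alone

def utility(assoc):
    """Sum of log transfer rates under equal time-sharing at each server."""
    load = np.bincount(assoc, minlength=n_srv)
    rates = peak[np.arange(n_sta), assoc] / load[assoc]
    return np.sum(np.log(rates))

# Start from the best-peak-rate association, then make single-STA improving moves.
assoc = peak.argmax(axis=1).copy()
improved = True
while improved:
    improved = False
    for i in range(n_sta):
        best_s, best_u = assoc[i], utility(assoc)
        for s in range(n_srv):
            trial = assoc.copy(); trial[i] = s
            if utility(trial) > best_u + 1e-12:
                best_s, best_u = s, utility(trial)
        if best_s != assoc[i]:
            assoc[i] = best_s
            improved = True

print("association:", assoc, "log-utility:", round(utility(assoc), 3))
```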
Abstract:
In a dense multi-hop network of mobile nodes capable of applying adaptive power control, we consider the problem of finding the optimal hop distance that maximizes a certain throughput measure in bit-metres/sec, subject to average network power constraints. The mobility of each node is restricted to a circular region centered at its nominal location. We incorporate only the randomly varying path-loss characteristics of the channel gain due to the random motion of nodes, excluding any multi-path fading or shadowing effects. Computation of the throughput metric in such a scenario leads us to compute the probability density function of the random distance between points in two circles. Using numerical analysis, we find that choosing the nearest node as the next hop is not always optimal; optimal throughput is attained at non-trivial hop distances that depend on the available average network power.
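The throughput computation mentioned above involves the distribution of the distance between a point drawn in each of two circles; a quick Monte Carlo sketch (circle centres and radii below are arbitrary) is:
```python
# Monte Carlo sketch of the distance distribution between a point drawn
# uniformly in each of two circles (centres and radii are illustrative).
import numpy as np

rng = np.random.default_rng(1)

def uniform_in_circle(center, radius, n):
    """Draw n points uniformly inside a circle (radius via sqrt of a uniform)."""
    r = radius * np.sqrt(rng.uniform(size=n))
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return center + np.c_[r * np.cos(theta), r * np.sin(theta)]

n = 200_000
p = uniform_in_circle(np.array([0.0, 0.0]), 1.0, n)   # jitter region of node 1
q = uniform_in_circle(np.array([3.0, 0.0]), 1.0, n)   # jitter region of node 2
d = np.linalg.norm(p - q, axis=1)

hist, edges = np.histogram(d, bins=50, density=True)  # empirical pdf of hop distance
print("mean hop distance ≈", round(d.mean(), 3))
```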
Abstract:
In a typical sensor network scenario a goal is to monitor a spatio-temporal process through a number of inexpensive sensing nodes, the key parameter being the fidelity at which the process has to be estimated at distant locations. We study such a scenario in which multiple encoders transmit their correlated data at finite rates to a distant and common decoder. In particular, we derive inner and outer bounds on the rate region for the random field to be estimated with a given mean distortion.
Abstract:
Problems related to network coding for acyclic, instantaneous networks (where the edges of the acyclic graph representing the network are assumed to have zero delay) have been dealt with extensively in the recent past. The most prominent of these problems include (a) the existence of network codes that achieve the maximum rate of transmission, (b) efficient network code constructions, and (c) field size issues. In practice, however, networks have transmission delays. In network coding theory, such networks with transmission delays are generally abstracted by assuming that their edges have integer delays. Using enough memory at the nodes of an acyclic network with integer delays can effectively simulate instantaneous behavior, which is probably why work thus far has focused primarily on acyclic instantaneous networks. However, nulling the effect of the network delays is not always advantageous, as we show in this work. Essentially, we elaborate on issues ((a), (b) and (c) above) related to network coding for acyclic networks with integer delays, and show that using the delay network as is (without adding memory) turns out to be advantageous, disadvantageous or immaterial, depending on the topology of the network and the problem considered, i.e., (a), (b) or (c).
Abstract:
This paper deals with the solution to the problem of multisensor data fusion for a single-target scenario as detected by an airborne track-while-scan radar. The details of a neural network implementation, various training algorithms based on standard backpropagation, and the results of training and testing the neural network are presented. The promising capabilities of the RPROP algorithm for multisensor data fusion, over various parameters, are shown in comparison to other adaptive techniques.
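As a reminder of what the RPROP algorithm does, here is a minimal sketch of its sign-based, per-weight step-size adaptation (with the usual factors 1.2 and 0.5) on a toy quadratic objective; this is a simplified Rprop variant, not the paper's fusion network.
```python
# Minimal RPROP (resilient backpropagation) update rule on a toy objective.
# Each weight gets its own step size, adapted from the sign of successive
# gradients (grow by 1.2 on agreement, shrink by 0.5 on a sign change).
import numpy as np

def grad(w):                       # gradient of a toy quadratic loss
    return 2.0 * (w - np.array([1.0, -2.0, 0.5]))

w = np.zeros(3)
step = np.full(3, 0.1)             # per-weight step sizes
g_prev = np.zeros(3)
for _ in range(100):
    g = grad(w)
    sign_change = g * g_prev
    step = np.where(sign_change > 0, np.minimum(step * 1.2, 1.0),
           np.where(sign_change < 0, np.maximum(step * 0.5, 1e-6), step))
    w -= np.sign(g) * step         # move each weight by its own step size
    g_prev = g

print("weights after RPROP:", np.round(w, 3))   # approaches [1.0, -2.0, 0.5]
```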
Abstract:
The development of a neural network based power system damping controller (PSDC) for a static Var compensator (SVC), designed to enhance the damping characteristics of a power system network representing a part of the Electricity Generating Authority of Thailand (EGAT) system, is presented. The proposed stabilising controller scheme of the SVC consists of a neuro-identifier and a neuro-controller, both developed from a functional link network (FLN) model. A recursive online training algorithm has been utilised to train the two networks. Simulation results obtained under various operating conditions and disturbance cases show that the proposed stabilising controller provides better damping of the low-frequency oscillations than the conventional controllers. The effectiveness of the proposed stabilising controller has also been compared with a conventional power system stabiliser provided in the generator excitation system.
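For reference, a functional link network of the kind mentioned expands the input with a fixed set of basis functions (e.g. trigonometric terms) and trains only a linear output layer; a minimal online (LMS-style) identification sketch on a made-up scalar plant is shown below. It is not the paper's neuro-identifier or neuro-controller.
```python
# Sketch of a functional link network (FLN): trigonometric expansion of the
# input followed by a single linear layer trained online with an LMS update.
# The "plant" being identified is a made-up scalar nonlinear map.
import numpy as np

def expand(x):
    """Functional expansion of a scalar input."""
    return np.array([1.0, x, np.sin(np.pi * x), np.cos(np.pi * x),
                     np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

def plant(x):                       # unknown system to be identified
    return 0.6 * np.sin(np.pi * x) + 0.2 * np.cos(2 * np.pi * x) + 0.1 * x

rng = np.random.default_rng(2)
w = np.zeros(6)
mu = 0.05                           # LMS step size
for _ in range(5000):               # online (sample-by-sample) training
    x = rng.uniform(-1.0, 1.0)
    phi = expand(x)
    err = plant(x) - w @ phi
    w += mu * err * phi             # LMS weight update

x_test = 0.3
print("plant:", round(plant(x_test), 3), "FLN:", round(float(w @ expand(x_test)), 3))
```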
Abstract:
In recent years, electric power utilities have been installing distribution automation systems (DAS) for better management and control of distribution networks. The success of DAS largely depends on the availability of a reliable database at the control centre and thus requires an efficient state estimation (SE) solution technique. This paper presents an efficient and robust three-phase SE algorithm for application to radial distribution networks. The method exploits the radial nature of the network and uses a forward and backward propagation scheme to estimate the line flows, node voltages and loads at each node, based on the measured quantities. The SE cannot be executed without an adequate number of measurements; the extension of the method to network observability analysis and bad data detection is also discussed. The proposed method has been tested on several practical distribution networks of various voltage levels, including networks with a high R:X ratio of lines. The results for a typical network are presented for illustration purposes.
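The forward and backward propagation scheme referred to above is, at its core, a backward sweep that accumulates branch currents from the leaves toward the substation followed by a forward sweep that applies the voltage drops. A single-phase sketch on a made-up 4-bus radial feeder (the paper's method is three-phase state estimation from measurements; only the sweep mechanics are shown) is:
```python
# Single-phase backward/forward sweep on a toy 4-bus radial feeder.
# Line impedances and loads below are made up, per-unit values.
import numpy as np

parent = [-1, 0, 1, 1]                                   # bus k is fed from parent[k]
z = np.array([0, 0.02 + 0.04j, 0.03 + 0.05j, 0.025 + 0.045j])   # branch impedance to parent
s_load = np.array([0, 0.10 + 0.05j, 0.15 + 0.07j, 0.08 + 0.03j])  # complex bus loads

v = np.ones(4, dtype=complex)                            # flat start, slack bus 0 at 1.0 pu
for _ in range(20):                                      # sweep iterations
    i_inj = np.conj(s_load / v)                          # load current at each bus
    i_branch = i_inj.copy()
    for k in range(3, 0, -1):                            # backward sweep: accumulate currents
        i_branch[parent[k]] += i_branch[k]
    for k in range(1, 4):                                # forward sweep: apply voltage drops
        v[k] = v[parent[k]] - z[k] * i_branch[k]

print("bus voltage magnitudes (pu):", np.round(np.abs(v), 4))
```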
Abstract:
We consider a small extent sensor network for event detection, in which nodes periodically take samples and then contend over a random access network to transmit their measurement packets to the fusion center. We consider two procedures at the fusion center for processing the measurements. The Bayesian setting is assumed; that is, the fusion center has a prior distribution on the change time. In the first procedure, the decision algorithm at the fusion center is network-oblivious and makes a decision only when a complete vector of measurements taken at a sampling instant is available. In the second procedure, the decision algorithm at the fusion center is network-aware and processes measurements as they arrive, but in a time-causal order. In this case, the decision statistic depends on the network delays, whereas in the network-oblivious case it does not. This yields a Bayesian change-detection problem with a trade-off between the random network delay and the decision delay; that is, a higher sampling rate reduces the decision delay but increases the random access delay. Under periodic sampling, in the network-oblivious case, the structure of the optimal stopping rule is the same as that without the network, and the optimal change detection delay decouples into the network delay and the optimal decision delay without the network. In the network-aware case, the optimal stopping problem is analyzed as a partially observable Markov decision process, in which the states of the queues and delays in the network need to be maintained. A sufficient decision statistic is the network state together with the posterior probability of change having occurred, given the measurements received and the state of the network. The optimal regimes are studied using simulation.
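The posterior probability of change that enters the sufficient statistic above follows the standard Shiryaev recursion. A sketch for a single Gaussian measurement stream with a geometric change-time prior, ignoring network delay (i.e. only the decision-delay part, with made-up parameters), is:
```python
# Shiryaev-type posterior recursion for Bayesian change detection:
# p_k = P(change has occurred by sample k | observations so far),
# geometric(rho) prior on the change time, N(0,1) -> N(1,1) change in mean.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
rho, threshold, change_time = 0.05, 0.99, 40

p = 0.0                                      # posterior probability of change
for k in range(1, 200):
    x = rng.normal(1.0 if k >= change_time else 0.0, 1.0)   # measurement
    prior = p + (1 - p) * rho                # the change may occur at this step
    num = prior * norm.pdf(x, 1.0, 1.0)      # likelihood under "changed"
    den = num + (1 - prior) * norm.pdf(x, 0.0, 1.0)
    p = num / den                            # Bayes update
    if p >= threshold:                       # threshold stopping rule
        print(f"alarm at sample {k}, decision delay {k - change_time}")
        break
```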
Abstract:
Convergence of the vast sequence space of proteins into a highly restricted fold/conformational space suggests a simple yet unique underlying mechanism of protein folding that has been the subject of much debate in the last several decades. One of the major challenges related to the understanding of protein folding or in silico protein structure prediction is the discrimination of non-native structures/decoys from the native structure. Applications of knowledge-based potentials to attain this goal have been extensively reported in the literature. Scoring functions based on accessible surface area and amino acid neighbourhood considerations have also been used to discriminate decoys from native structures. In this article, we explore the potential of protein structure network (PSN) parameters to validate native proteins against a large number of decoy structures generated by diverse methods. We are guided by two principles: (a) PSNs capture local properties from a global perspective, and (b) inclusion of non-covalent interactions, at the all-atom level, including the side-chain atoms, in the network construction accommodates sequence-dependent features. Several network parameters, such as the size of the largest cluster, community size, and clustering coefficient, are evaluated and scored on the basis of the rank of the native structures and the Z-scores. The network analysis of decoy structures highlights the importance of the global properties contributing to the uniqueness of native structures. The analysis also shows that the network parameters can be used as metrics to identify native structures and filter out non-native structures/decoys in a large number of data sets; thus, they also have the potential to be used in the protein 'structure prediction' problem.
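A sketch of the kind of network parameters evaluated above (size of the largest cluster, clustering coefficient), computed here with networkx on a toy distance-cutoff contact graph over random 3-D points; actual PSNs are constructed from all-atom non-covalent interaction strengths, including side-chain atoms, which is not reproduced here.
```python
# Graph parameters of a toy "structure network": nodes are random 3-D points
# joined when closer than a cutoff (a crude stand-in for a residue contact graph).
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
coords = rng.uniform(0.0, 30.0, size=(150, 3))   # stand-in residue coordinates
cutoff = 6.5                                     # contact cutoff

G = nx.Graph()
G.add_nodes_from(range(len(coords)))
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        if np.linalg.norm(coords[i] - coords[j]) < cutoff:
            G.add_edge(i, j)

largest = max(nx.connected_components(G), key=len)   # size of the largest cluster
print("largest cluster size:", len(largest))
print("average clustering coefficient:", round(nx.average_clustering(G), 3))
```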
Abstract:
Niche differentiation has been proposed as an explanation for rarity in species assemblages. To test this hypothesis requires quantifying the ecological similarity of species. This similarity can potentially be estimated by using phylogenetic relatedness. In this study, we predicted that if niche differentiation does explain the co-occurrence of rare and common species, then rare species should contribute greatly to the overall community phylogenetic diversity (PD), abundance will have phylogenetic signal, and common and rare species will be phylogenetically dissimilar. We tested these predictions by developing a novel method that integrates species rank abundance distributions with phylogenetic trees and trend analyses, to examine the relative contribution of individual species to the overall community PD. We then supplement this approach with analyses of phylogenetic signal in abundances and measures of phylogenetic similarity within and between rare and common species groups. We applied this analytical approach to 15 long-term temperate and tropical forest dynamics plots from around the world. We show that the niche differentiation hypothesis is supported in six of the nine gap-dominated forests but is rejected in the six disturbance-dominated and three gap-dominated forests. We also show that the three metrics utilized in this study each provide unique but corroborating information regarding the phylogenetic distribution of rarity in communities.
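A toy illustration of one ingredient of the approach, the contribution of an individual species to community PD, computed as the branch length lost when that species is dropped from a small hand-built tree (the study additionally combines such contributions with rank-abundance distributions and trend analyses, which are not shown here).
```python
# Toy per-species contribution to phylogenetic diversity (PD): branch length
# lost when one species is removed from a small hand-built tree.
# Tree encoding: node -> (parent, branch length to parent); "root" has no entry.
tree = {
    "A": ("n1", 1.0), "B": ("n1", 1.5), "C": ("n2", 2.0), "D": ("n2", 0.5),
    "n1": ("root", 1.0), "n2": ("root", 3.0),
}
species = ["A", "B", "C", "D"]

def pd(tips):
    """Faith's PD: total branch length of the subtree spanning the given tips."""
    used = set()
    for tip in tips:
        node = tip
        while node in tree:                 # walk up to the root
            used.add(node)
            node = tree[node][0]
    return sum(tree[n][1] for n in used)

total = pd(species)
for sp in species:
    others = [s for s in species if s != sp]
    print(f"{sp}: PD contribution = {total - pd(others):.2f}")
```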
Abstract:
We consider a dense, ad hoc wireless network, confined to a small region. The wireless network is operated as a single cell, i.e., only one successful transmission is supported at a time. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organize into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first motivate that for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc wireless network (described above) as a single cell, we study the hop length and power control that maximizes the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate ($\Theta_{\mathrm{opt}}$ bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network, when operating at the optimal hop length and power control. The optimal transport capacity is of the form $d_{\mathrm{opt}}(\bar{P}_t) \times \Theta_{\mathrm{opt}}$, with $d_{\mathrm{opt}}$ scaling as $\bar{P}_t^{1/\eta}$, where $\bar{P}_t$ is the available time-average transmit power and $\eta$ is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterization of the optimal operating point. Simulation results are provided comparing the performance of the optimal strategy derived here with some simple strategies for operating the network.
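A small numeric illustration of the scaling law stated above: since $d_{\mathrm{opt}}$ scales as $\bar{P}_t^{1/\eta}$, the transport capacity $d_{\mathrm{opt}}(\bar{P}_t) \times \Theta_{\mathrm{opt}}$ inherits the same power scaling. The proportionality constant and the value of $\Theta_{\mathrm{opt}}$ below are arbitrary placeholders, not values from the paper.
```python
# Numeric illustration of the scaling d_opt ∝ P̄_t**(1/eta): doubling the power
# budget scales the optimal hop length, and hence the transport capacity
# d_opt * Theta_opt, by 2**(1/eta).  Constants are arbitrary placeholders.
eta = 3.0            # path-loss exponent
c = 10.0             # hypothetical proportionality constant (metres)
theta_opt = 2.0e6    # hypothetical intrinsic aggregate bit rate (bit/s)

for p_bar in (0.5, 1.0, 2.0):                    # average power budgets (W)
    d_opt = c * p_bar ** (1.0 / eta)             # optimal hop length (m)
    print(f"P_t={p_bar:4.1f} W  d_opt={d_opt:5.2f} m  "
          f"transport capacity={d_opt * theta_opt:.3e} bit-m/s")
```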
Abstract:
We address the problem of robust formant tracking in continuous speech in the presence of additive noise. We propose a new approach based on mixture modeling of the formant contours. Our approach consists of two main steps: (i) computation of a pyknogram based on multiband amplitude-modulation/frequency-modulation (AM/FM) decomposition of the input speech; and (ii) statistical modeling of the pyknogram using mixture models. We experiment with both the Gaussian mixture model (GMM) and Student's-t mixture model (tMM) and show that the latter is robust with respect to handling outliers in the pyknogram data, parameter selection, accuracy, and smoothness of the estimated formant contours. Experimental results on simulated data as well as noisy speech data show that the proposed tMM-based approach is also robust to additive noise. We present performance comparisons with a recently developed adaptive filterbank technique from the literature and with the classical Burg spectral estimator, which show that the proposed technique is more robust to noise.
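A sketch of the mixture-modelling step using scikit-learn's GaussianMixture on synthetic (time, frequency) points scattered around three formant-like tracks; the data are made up, and the paper's preferred Student's-t mixture would replace the Gaussian component density with a heavier-tailed one, which is not shown here.
```python
# Fit a 3-component GMM to synthetic "pyknogram-like" (time, frequency) points
# clustered around three formant-like frequencies.  scikit-learn has no
# Student's-t mixture, so only the GMM variant of the modelling step is shown.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
t = rng.uniform(0.0, 1.0, size=200)                   # frame times (s)
formants = np.array([500.0, 1500.0, 2500.0])          # rough formant centres (Hz)
f = np.concatenate([fc + 80.0 * rng.standard_normal(200) for fc in formants])
X = np.column_stack([np.tile(t, 3), f])               # (time, frequency) points

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
centres = sorted(gmm.means_[:, 1])                    # frequency coordinate of each component
print("estimated formant frequencies (Hz):", np.round(centres, 1))
```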