875 results for Distribution network reconfiguration problem
Abstract:
The recently developed single network adaptive critic (SNAC) design has been used in this study to design a power system stabiliser (PSS) for enhancing the small-signal stability of power systems over a wide range of operating conditions. PSS design is formulated as a discrete non-linear quadratic regulator problem. SNAC is then used to solve the resulting discrete-time optimal control problem. SNAC uses only a single critic neural network instead of the action-critic dual network architecture of typical adaptive critic designs. SNAC eliminates the iterative training loops between the action and critic networks and greatly simplifies the training procedure. The performance of the proposed PSS has been tested on a single-machine infinite-bus test system for various system and loading conditions. The proposed stabiliser, which is relatively easy to synthesise, consistently outperformed stabilisers based on conventional lead-lag and linear quadratic regulator designs.
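As a concrete illustration of the SNAC idea summarized above, the Python sketch below trains a single linear critic that maps the state x_k to the costate lambda_{k+1} by enforcing the discrete costate equation. It uses a generic linear-quadratic stand-in plant, not the paper's PSS model; the matrices A, B and weights Q, R are made-up values.

```python
import numpy as np

# Minimal SNAC sketch for a generic discrete-time LQ problem (illustrative only;
# the PSS plant model and network sizes of the paper are not reproduced here).
np.random.seed(0)

# Hypothetical linearised plant x_{k+1} = A x_k + B u_k and quadratic weights.
A = np.array([[1.0, 0.1], [-0.5, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

# Single critic: a linear map W approximating x_k -> costate lambda_{k+1}.
W = np.zeros((2, 2))

for it in range(200):
    X = np.random.uniform(-1, 1, size=(2, 500))   # sampled states x_k
    lam_next = W @ X                              # critic output lambda_{k+1}
    U = -Rinv @ B.T @ lam_next                    # optimal control from the costate
    X_next = A @ X + B @ U                        # propagate one step
    # Target costate from the discrete costate equation:
    # lambda_{k+1}^target = Q x_{k+1} + A^T lambda_{k+2}, with lambda_{k+2} = W x_{k+1}
    target = Q @ X_next + A.T @ (W @ X_next)
    # Least-squares update of the single critic (no separate action network needed).
    W_new = target @ X.T @ np.linalg.inv(X @ X.T)
    if np.max(np.abs(W_new - W)) < 1e-9:
        W = W_new
        break
    W = W_new

# Resulting state-feedback law u_k = -R^{-1} B^T W x_k
K = Rinv @ B.T @ W
print("critic-derived feedback gain:", K)
```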
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve the distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end-users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end-users, and fairness of the protocol towards end-users. On the one hand, this multifaceted analysis allows us to select the most suitable protocols from a set of available ones based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new optimality requirements that we should take into account, as well as new network properties that can no longer be neglected. In this thesis we have studied new protocols for resource-sharing problems such as the backoff protocol, defense mechanisms against Denial-of-Service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, in which optimization is applied to a backoff function of general form. This leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme to achieve fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities. For the former we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism to restrict resource access for misbehaving nodes. Finally, we study a reputation scheme for overlay and peer-to-peer networks, where the resource is not placed on a common station but spread across the network. The theoretical analysis suggests what behavior an end station will select under such a reputation mechanism.
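As a rough illustration of the "backoff function of general form" studied in the thesis, the following slotted-time simulation is a sketch only, not the thesis' analytical model: it lets one plug in an arbitrary backoff function b(k) and compare the resulting throughput. The node count, traffic assumptions and the two candidate functions are arbitrary choices for the example.

```python
import random

def simulate(backoff, n_nodes=10, n_packets=200, seed=1):
    """Slotted-time simulation of a generic backoff scheme.

    backoff(k) returns the contention-window size after k collisions; each node
    with a pending packet transmits when its wait counter reaches zero, and a
    slot with two or more transmissions is a collision."""
    rng = random.Random(seed)
    attempts = [0] * n_nodes                         # collisions of the current packet
    wait = [rng.randrange(backoff(0)) for _ in range(n_nodes)]
    sent = [0] * n_nodes
    slots = 0
    while sum(sent) < n_packets:
        slots += 1
        ready = [i for i in range(n_nodes) if wait[i] == 0]
        if len(ready) == 1:                          # successful transmission
            i = ready[0]
            sent[i] += 1
            attempts[i] = 0
            wait[i] = rng.randrange(backoff(0))
        elif len(ready) > 1:                         # collision: everyone involved backs off
            for i in ready:
                attempts[i] += 1
                wait[i] = rng.randrange(backoff(attempts[i]))
        for i in range(n_nodes):
            if i not in ready and wait[i] > 0:
                wait[i] -= 1
    return n_packets / slots                         # throughput in packets per slot

# Two candidate backoff functions of "general form": binary exponential vs. quadratic.
print("exponential:", simulate(lambda k: 2 ** min(k + 1, 10)))
print("quadratic  :", simulate(lambda k: (k + 1) ** 2 + 1))
```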
Location of concentrators in a computer communication network: a stochastic automaton search method
Abstract:
The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. The problem then becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in C(m, K) ways (m > K) or C(K, m) ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators.) Each possible assignment is assumed to represent a state of the variable-structure stochastic automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated; the probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. The exact locations of the concentrators are then determined by conducting a local gradient search within that state. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
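The toy sketch below follows the steps described in the abstract (regions, assignment states, per-state average costs, probabilities inversely proportional to average cost). The cost model is simplified to the sum of terminal-to-nearest-concentrator distances; the fixed concentrator cost, the capacity limit and the final local gradient search are omitted, so this is an illustration of the automaton search, not the paper's full model.

```python
import itertools
import random
import math

# Toy setting: the unit square split into K = 4 quadrant "regions", m = 2
# concentrators assigned to regions; each assignment is a state of a
# variable-structure stochastic automaton.
random.seed(7)

TERMINALS = [(random.random(), random.random()) for _ in range(30)]
REGIONS = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]   # lower-left corners, side 0.5
m, K = 2, 4

# States: all ways of putting m indistinguishable concentrators into K regions.
states = list(itertools.combinations_with_replacement(range(K), m))
prob = [1.0 / len(states)] * len(states)
avg_cost = [1.0] * len(states)
visits = [0] * len(states)

def sample_point(region):
    x0, y0 = REGIONS[region]
    return (x0 + 0.5 * random.random(), y0 + 0.5 * random.random())

def cost(concentrators):
    # Simplified cost: each terminal connects to its nearest concentrator.
    return sum(min(math.dist(t, c) for c in concentrators) for t in TERMINALS)

for step in range(3000):
    s = random.choices(range(len(states)), weights=prob)[0]   # visit a state
    pts = [sample_point(r) for r in states[s]]                # a 'point' inside it
    c = cost(pts)
    visits[s] += 1
    avg_cost[s] += (c - avg_cost[s]) / visits[s]              # running average cost
    inv = [1.0 / a for a in avg_cost]                         # prob ~ 1 / average cost
    total = sum(inv)
    prob = [v / total for v in inv]

best = max(range(len(states)), key=lambda s: prob[s])
print("converged regional assignment:", states[best], "prob:", round(prob[best], 3))
# A local (e.g. gradient) search inside the winning state would then pin down
# the exact concentrator coordinates.
```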
Abstract:
We study a scheduling problem in a wireless network where vehicles are used as store-and-forward relays, a situation that might arise, for example, in practical rural communication networks. A fixed source node wants to transfer a file to a fixed destination node located beyond its communication range. In the absence of any infrastructure connecting the two nodes, we consider the possibility of communication using vehicles passing by. Vehicles arrive at the source node at renewal instants and are known to travel towards the destination node with average speed v sampled from a given probability distribution. The source node communicates data packets (or fragments) of the file to the destination node using these vehicles as relays. We assume that the vehicles communicate with the source node and the destination node only, and hence every packet communication involves two hops. In this setup, we study the source node's sequential decision problem of transferring packets of the file to vehicles as they pass by, with the objective of minimizing delay in the network. We study both the finite file size case and the infinite file size case. In the finite file size case, we aim to minimize the expected file transfer delay, i.e. the expected value of the maximum of the packet sojourn times. In the infinite file size case, we study the average packet delay minimization problem as well as the optimal tradeoff achievable between the average queueing delay at the source node buffer and the average transit delay in the relay vehicle.
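A toy Monte-Carlo sketch of the finite-file-size objective (the expected maximum packet sojourn time) is given below. The arrival process, speed distribution, per-vehicle capacity and the naive fill-each-vehicle policy are all assumptions made for illustration; they do not correspond to the optimal policy studied in the paper.

```python
import random

def file_transfer_delay(n_packets=20, distance=10.0, mean_interarrival=1.0,
                        per_vehicle_capacity=5, seed=3):
    """Sojourn time of the slowest packet when vehicles arriving as a renewal
    process carry up to per_vehicle_capacity packets each from source to
    destination (two-hop store-and-forward).  Toy policy: FIFO, fill each vehicle."""
    rng = random.Random(seed)
    t, remaining, delays = 0.0, n_packets, []
    while remaining > 0:
        t += rng.expovariate(1.0 / mean_interarrival)   # next vehicle arrival
        speed = rng.uniform(5.0, 15.0)                  # sampled average speed v
        load = min(per_vehicle_capacity, remaining)
        for _ in range(load):
            # all packets are available at time 0: sojourn = wait at source + transit
            delays.append(t + distance / speed)
        remaining -= load
    return max(delays)

samples = [file_transfer_delay(seed=s) for s in range(2000)]
print("estimated expected file transfer delay:", sum(samples) / len(samples))
```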
Abstract:
This paper presents an Artificial Neural Network (ANN) approach for locating faults in distribution systems. Unlike traditional Fault Section Estimation methods, the proposed approach uses only limited measurements. Faults are located according to the impedances of their path using Feed Forward Neural Networks (FFNNs). Various practical situations in distribution systems are considered in the studies, such as protective devices placed only at the substation, limited measurements being available, various types of faults, viz. three-phase, line (a, b, c) to ground, line to line (a-b, b-c, c-a) and line to line to ground (a-b-g, b-c-g, c-a-g) faults, and a wide range of short-circuit levels at the substation. A typical IEEE 34-bus practical distribution system with unbalanced loads and with three- and single-phase laterals, and a 69-node test feeder with different configurations, are considered for the studies. The results presented show that the proposed fault-location approach gives close-to-accurate estimates of the fault location.
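A minimal sketch of the underlying idea, learning the map from the apparent impedance seen at the substation to the fault distance with a feed-forward network, is shown below on a synthetic single-feeder example. The line constants, fault resistances and network size are made-up values and do not represent the paper's IEEE 34-bus or 69-node test cases.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic single-feeder example: the fault distance d (km) determines the
# apparent impedance Z = d * z_line + R_fault seen from the substation; the
# FFNN learns the inverse map (R, X) -> d.  All constants are assumptions.
rng = np.random.default_rng(0)
z_line = 0.3 + 0.4j                      # ohm/km (assumed positive-sequence impedance)

d = rng.uniform(0.5, 20.0, 4000)         # true fault distances
r_fault = rng.uniform(0.0, 5.0, 4000)    # random fault resistance
Z = d * z_line + r_fault
X = np.column_stack([Z.real, Z.imag])    # measured features at the substation
y = d

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
net.fit(X[:3000], y[:3000])

err = np.abs(net.predict(X[3000:]) - y[3000:])
print("mean absolute location error (km):", err.mean().round(3))
```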
Abstract:
A neural network approach for solving the two-dimensional assignment problem is proposed. The design of the neural network is discussed and simulation results are presented. On the examples considered, the neural network obtains placements with 10-15% lower cost than the adjacent pairwise exchange method.
Abstract:
A new feature-based technique is introduced to solve the nonlinear forward problem (FP) of the electrical capacitance tomography with the target application of monitoring the metal fill profile in the lost foam casting process. The new technique is based on combining a linear solution to the FP and a correction factor (CF). The CF is estimated using an artificial neural network (ANN) trained using key features extracted from the metal distribution. The CF adjusts the linear solution of the FP to account for the nonlinear effects caused by the shielding effects of the metal. This approach shows promising results and avoids the curse of dimensionality by using features rather than the actual metal distribution to train the ANN. The ANN is trained using nine features extracted from the metal distributions as input. The expected sensor readings are generated using ANSYS software. The performance of the ANN for the training and testing data was satisfactory, with an average root-mean-square error of 2.2%.
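The sketch below illustrates only the structure "linear forward solution adjusted by an ANN-estimated correction factor". The sensitivity matrix, the feature set and the stand-in nonlinear model are invented for the example; they replace the ANSYS-generated data and the nine features used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Structural sketch: capacitances ~ CF(features(metal)) * (S @ metal), where S is a
# linearised sensitivity matrix and CF corrects for nonlinear shielding.
rng = np.random.default_rng(1)
n_pixels, n_meas, n_samples = 64, 28, 2000

S = rng.uniform(0.0, 1.0, (n_meas, n_pixels)) / n_pixels      # assumed sensitivity map

def features(metal):
    """A few geometric summaries of the metal distribution (stand-ins for the
    paper's nine features): fill fraction, centroid, spread."""
    idx = np.arange(n_pixels)
    fill = metal.mean()
    centroid = (idx * metal).sum() / (metal.sum() + 1e-9) / n_pixels
    spread = metal.std()
    return np.array([fill, centroid, spread])

def nonlinear_fp(metal):
    """Pretend 'true' forward problem: the linear part damped by a shielding term."""
    shielding = 1.0 / (1.0 + 3.0 * metal.mean())
    return (S @ metal) * shielding

metals = (rng.uniform(0, 1, (n_samples, n_pixels))
          > rng.uniform(0.2, 0.8, (n_samples, 1))).astype(float)
F = np.array([features(m) for m in metals])
CF_true = np.array([nonlinear_fp(m).sum() / ((S @ m).sum() + 1e-9) for m in metals])

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann.fit(F[:1500], CF_true[:1500])

# Corrected forward solution for a held-out distribution:
m = metals[1600]
approx = (S @ m) * ann.predict(features(m)[None, :])[0]
print("relative error vs. nonlinear model:",
      np.linalg.norm(approx - nonlinear_fp(m)) / np.linalg.norm(nonlinear_fp(m)))
```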
Abstract:
In this article, we present a novel application of a quantum clustering (QC) technique to objectively cluster the conformations sampled by molecular dynamics simulations performed on different ligand-bound structures of the protein. We further portray each conformational population in terms of dynamically stable network parameters which beautifully capture the ligand-induced variations in the ensemble in atomistic detail. The conformational populations thus identified by the QC method and verified by network parameters are evaluated for different ligand-bound states of the protein pyrrolysyl-tRNA synthetase (DhPylRS) from D. hafniense. The ligand/environment-induced redistribution of protein conformational ensembles forms the basis for understanding several important biological phenomena such as allostery and enzyme catalysis. The atomistic-level characterization of each population in the conformational ensemble in terms of the re-orchestrated networks of amino acids is a challenging problem, especially when the changes are minimal at the backbone level. Here we demonstrate that the QC method is sensitive to such subtle changes and is able to cluster MD snapshots which are similar at the side-chain interaction level. Although we have applied these methods on simulation trajectories of a modest time scale (20 ns each), we emphasize that our methodology provides a general approach towards an objective clustering of large-scale MD simulation data and may be applied to probe multistate equilibria at higher time scales, and to problems related to protein folding for any protein or protein-protein/RNA/DNA complex of interest with a known structure.
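For reference, the sketch below implements quantum clustering in its commonly used Gaussian-Parzen form (cluster centres found as minima of a quantum potential by gradient descent), applied to random 2-D points standing in for MD-derived coordinates. The details of the authors' QC variant and the network-parameter analysis are not reproduced here.

```python
import numpy as np

def qc_potential_grad(x, data, sigma):
    """Quantum-clustering potential V(x) (up to an additive constant) and its
    numerical gradient, for psi(x) = sum_i exp(-|x - x_i|^2 / (2 sigma^2)):
    V(x) ~ (1 / (2 sigma^2 psi)) * sum_i |x - x_i|^2 exp(-|x - x_i|^2 / (2 sigma^2))."""
    d2 = ((x - data) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    V = (d2 * w).sum() / (2 * sigma ** 2 * w.sum())
    grad = np.zeros_like(x)
    eps = 1e-4
    for k in range(len(x)):                       # finite differences are enough for a sketch
        xp = x.copy()
        xp[k] += eps
        dp = ((xp - data) ** 2).sum(axis=1)
        wp = np.exp(-dp / (2 * sigma ** 2))
        Vp = (dp * wp).sum() / (2 * sigma ** 2 * wp.sum())
        grad[k] = (Vp - V) / eps
    return V, grad

def quantum_cluster(data, sigma=0.5, steps=200, lr=0.1):
    """Move every point downhill on V; points settling near the same minimum
    of the potential are assigned to the same cluster."""
    pts = data.copy()
    for _ in range(steps):
        for i in range(len(pts)):
            _, g = qc_potential_grad(pts[i], data, sigma)
            pts[i] -= lr * g
    labels, out = {}, []
    for p in pts:                                 # group converged points on a coarse grid
        key = tuple(np.round(p / (2 * sigma)).astype(int))
        out.append(labels.setdefault(key, len(labels)))
    return out

# Stand-in data: two blobs playing the role of two conformational populations.
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
print(quantum_cluster(data))
```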
Abstract:
This paper is concerned with the optimal flow control of an ATM switching element in a broadband integrated services digital network. We model the switching element as a stochastic fluid flow system with a finite buffer, a constant output rate server, and a Gaussian process to characterize the input, which is a heterogeneous set of traffic sources. The fluid level should be maintained between two levels, namely b1 and b2, with b1
Abstract:
We consider the problem of secure communication in mobile Wireless Sensor Networks (WSNs). Achieving security in WSNs requires robust encryption and authentication standards among the sensor nodes. Severe resource constraints in typical wireless sensor nodes hinder them from achieving key agreement. Past studies have shown that many notable key management schemes do not work well in sensor networks because of the nodes' limited capacities. The idea of key predistribution is not feasible considering the fact that the network could scale to millions. We propose a novel algorithm that provides a robust and secure communication channel in WSNs. Our Double Encryption with Validation Time (DEV) key management protocol works on the basis of timed sessions within which a secure secret key remains valid. A mobile node is used to bootstrap and exchange secure keys among communicating pairs of nodes. Analysis and simulation results show that the DEV key management protocol performs better than the SEV scheme and other related work.
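The sketch below illustrates only the two ingredients named in the abstract, session keys that expire after a validation time and double (layered) encryption, using the third-party cryptography package. It is not the actual DEV message flow; the role of the mobile node is reduced to a single hypothetical bootstrap function.

```python
import time
from dataclasses import dataclass
from cryptography.fernet import Fernet

@dataclass
class Session:
    """A pair of layered keys plus the window in which they are valid."""
    inner: bytes
    outer: bytes
    expires_at: float

def bootstrap_session(validity_seconds=30.0):
    # In DEV a mobile node would distribute fresh keys to a communicating pair;
    # here the two key layers are simply generated locally.
    return Session(Fernet.generate_key(), Fernet.generate_key(),
                   time.time() + validity_seconds)

def double_encrypt(session, payload: bytes) -> bytes:
    if time.time() > session.expires_at:
        raise ValueError("session key expired; re-bootstrap via the mobile node")
    return Fernet(session.outer).encrypt(Fernet(session.inner).encrypt(payload))

def double_decrypt(session, token: bytes) -> bytes:
    if time.time() > session.expires_at:
        raise ValueError("session key expired; re-bootstrap via the mobile node")
    return Fernet(session.inner).decrypt(Fernet(session.outer).decrypt(token))

s = bootstrap_session()
msg = double_encrypt(s, b"sensor reading 42")
print(double_decrypt(s, msg))
```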
Abstract:
We study the problem of optimal sequential (''as-you-go'') deployment of wireless relay nodes, as a person walks along a line of random length (with a known distribution). The objective is to create an impromptu multihop wireless network for connecting a packet source to be placed at the end of the line with a sink node located at the starting point, to operate in the light traffic regime. In walking from the sink towards the source, at every step, measurements yield the transmit powers required to establish links to one or more previously placed nodes. Based on these measurements, at every step, a decision is made to place a relay node, the overall system objective being to minimize a linear combination of the expected sum power (or the expected maximum power) required to deliver a packet from the source to the sink node and the expected number of relay nodes deployed. For each of these two objectives, two different relay selection strategies are considered: (i) each relay communicates with the sink via its immediate previous relay, (ii) the communication path can skip some of the deployed relays. With appropriate modeling assumptions, we formulate each of these problems as a Markov decision process (MDP). We provide the optimal policy structures for all these cases, and provide illustrations of the policies and their performance, via numerical results, for some typical parameters.
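A stylized value-iteration sketch of the sum-power-plus-relay-count objective is given below. The geometric line length, the assumed power-versus-distance model with discrete shadowing, and the state discretization are illustrative simplifications, not the paper's model; under these assumptions the computed policy reduces to a gap-dependent threshold on the measured power, printed at the end.

```python
import numpy as np

# Stylised "as-you-go" relay placement: the line ends at each step w.p. p
# (geometric length); after stepping to gap r from the last placed node, the
# measured power needed to reach it is 0.05 * r**2 times a random shadowing
# factor; objective = E[sum of link powers] + xi * E[number of relays].
p, xi, R_MAX = 0.05, 2.0, 30
shadow_vals = np.array([0.5, 1.0, 2.0])          # assumed shadowing multipliers
shadow_probs = np.array([0.25, 0.5, 0.25])

def power_samples(r):
    return 0.05 * (r ** 2) * shadow_vals         # possible measured powers at gap r

V = np.zeros(R_MAX + 1)                          # V[r]: cost-to-go, about to step with gap r
for _ in range(2000):
    V_new = np.empty_like(V)
    for r in range(R_MAX + 1):
        nxt = min(r + 1, R_MAX)
        w = power_samples(nxt)
        place = w + xi + V[0]                    # place a relay at the new spot
        skip = np.full_like(w, V[nxt])           # keep walking with a larger gap
        if nxt == R_MAX:
            skip[:] = np.inf                     # force placement at the range limit
        V_new[r] = np.sum(shadow_probs * (p * w + (1 - p) * np.minimum(place, skip)))
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

# The resulting policy is a measurement threshold that grows with the current gap:
# standing at gap r with measured power w, place a relay iff w <= V[r] - xi - V[0].
for r in (2, 5, 10, 20):
    print(f"gap {r:2d}: place if measured power <= {V[r] - xi - V[0]:.2f}")
```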
Abstract:
In this paper, we consider the setting of the pattern maximum likelihood (PML) problem studied by Orlitsky et al. We present a well-motivated heuristic algorithm for deciding when the PML distribution of a given pattern is uniform. The algorithm is based on the concept of a "uniform threshold". This is a threshold at which the uniform distribution exhibits an interesting phase transition in the PML problem, going from being a local maximum to being a local minimum.
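The helper below is not the authors' heuristic; it only computes the probability of a pattern under a uniform distribution on k symbols, k(k-1)...(k-m+1)/k^n for a pattern of length n with m distinct values, and scans k for the best uniform support. Whether that best uniform distribution is a local maximum or a local minimum of the pattern likelihood over all distributions is the question the uniform-threshold test addresses.

```python
from math import prod

def pattern(seq):
    """Canonical pattern of a sequence: symbols renamed by order of first appearance."""
    index = {}
    return tuple(index.setdefault(s, len(index) + 1) for s in seq)

def uniform_pattern_prob(pat, k):
    """Probability of observing pat under i.i.d. uniform sampling from k symbols:
    k (k-1) ... (k-m+1) / k^n, with m distinct values and length n."""
    n, m = len(pat), len(set(pat))
    if k < m:
        return 0.0
    return prod(range(k - m + 1, k + 1)) / k ** n

pat = pattern("abracadabra")
best_k = max(range(1, 50), key=lambda k: uniform_pattern_prob(pat, k))
print("pattern:", pat)
print("best uniform support size:", best_k,
      "probability:", uniform_pattern_prob(pat, best_k))
```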
Abstract:
Low Voltage (LV) electricity distribution grid operations can be improved through a combination of new smart metering systems' capabilities based on real-time Power Line Communications (PLC) and LV grid topology mapping. This paper presents two novel contributions. The first is a new methodology developed for smart metering PLC network monitoring and analysis. It can be used to obtain relevant information from the grid, thus adding value to existing smart metering deployments and facilitating utility operational activities. The second contribution describes grid conditioning used to obtain the LV feeder and phase identification of all connected smart electric meters. Real-time availability of such information may help utilities with grid planning, fault location and more accurate point-of-supply management.
Abstract:
Observations of individual weight, duration of development and production of different stages of Tropodiaptomus incognitus are presented. The study is based on data gathered from Lake Chad in 1968.