978 results for distributed parameters


Relevance: 30.00%

Abstract:

This paper presents a new algorithm based on honey-bee mating optimization (HBMO) to estimate harmonic state variables in distribution networks that include distributed generators (DGs). The proposed algorithm estimates both the amplitude and the phase of each harmonic by minimizing the error between the values measured by phasor measurement units (PMUs) and the values computed from the estimated parameters during the estimation process. Simulation results on two distribution test systems demonstrate that the proposed distribution harmonic state estimation (DHSE) algorithm is considerably faster and more accurate than conventional algorithms such as weighted least squares (WLS), the genetic algorithm (GA), and tabu search (TS).
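
The abstract benchmarks the metaheuristic against weighted least squares (WLS). For reference, a minimal WLS estimator for a linearized harmonic measurement model is sketched below; the measurement matrix H, the weights, and the simulated PMU measurements z are hypothetical placeholders rather than values from the paper.

```python
import numpy as np

# Hypothetical linearized measurement model: z = H x + noise, where x stacks the
# real/imaginary parts of the harmonic state variables and z stacks PMU readings.
rng = np.random.default_rng(0)
n_states, n_meas = 6, 12
H = rng.normal(size=(n_meas, n_states))        # placeholder measurement matrix
x_true = rng.normal(size=n_states)             # "true" harmonic state (for the demo)
sigma = 0.05 * np.ones(n_meas)                 # assumed measurement standard deviations
z = H @ x_true + rng.normal(scale=sigma)       # noisy PMU measurements

# Weighted least squares: minimize (z - Hx)^T W (z - Hx) with W = diag(1/sigma^2).
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

print("estimation error:", np.linalg.norm(x_hat - x_true))
```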

Relevance: 30.00%

Abstract:

This paper presents a new algorithm, based on a hybrid of Particle Swarm Optimization (PSO) and Simulated Annealing (SA) and called PSO-SA, to estimate harmonic state variables in distribution networks. The proposed algorithm estimates both the amplitude and the phase of each harmonic current injection by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The algorithm can account for the uncertainty of the harmonic pseudo-measurements and the tolerance in the line impedances of the network, as well as the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main feature of the proposed PSO-SA algorithm is that PSO, augmented with a mutation function, quickly reaches the neighbourhood of the global optimum, after which the SA search locates the optimum itself. Simulation results on the IEEE 34-bus radial test network and a realistic 70-bus radial test network demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is considerably faster and more accurate than conventional algorithms such as Weighted Least Squares (WLS), the Genetic Algorithm (GA), the original PSO, and the Honey-Bee Mating Optimization (HBMO) algorithm.
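
The two-stage idea described here (global exploration by a mutation-augmented PSO, followed by local refinement with SA) can be sketched on a generic multimodal test function. The objective, swarm size, mutation rate and cooling schedule below are illustrative assumptions, not the paper's DHSE formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Placeholder multimodal objective standing in for the DHSE residual."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)  # Rastrigin

def pso_stage(dim=4, n_particles=30, iters=200, bounds=(-5.12, 5.12)):
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        # Mutation: occasionally re-seed a coordinate to keep exploring.
        mask = rng.random(x.shape) < 0.02
        x[mask] = rng.uniform(lo, hi, size=mask.sum())
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

def sa_stage(x0, iters=2000, step=0.1, t0=1.0):
    x, fx = x0.copy(), objective(x0)
    best, best_f = x.copy(), fx
    for i in range(iters):
        t = t0 * (1.0 - i / iters) + 1e-6           # simple cooling schedule
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = x.copy(), fx
    return best, best_f

coarse = pso_stage()                 # PSO gets near the global optimum
refined, value = sa_stage(coarse)    # SA refines from that starting point
print("refined solution:", refined, "objective:", value)
```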

Relevance: 30.00%

Abstract:

Rapidly increasing electricity demand and the capacity shortage of transmission and distribution facilities are the main driving forces for the growth of Distributed Generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selecting effective DG characteristics and DG parameters is a significant concern of distribution system planners seeking to obtain the maximum potential benefit from a DG unit. This paper addresses the improvement of the network voltage profile in distribution systems by installing a DG of suitable size at a suitable location. An analytical approach based on algebraic equations for uniformly distributed loads is developed to determine the optimal operation, size and location of the DG needed to achieve the required network voltage levels. The method is simple to use for conceptual design and analysis of distribution system expansion with a DG, and it is suitable for quick estimation of DG parameters (such as the optimal operating angle, size and location of a DG system) in a radial network. A practical network is used to verify the proposed technique, and test results are presented.
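
As a rough illustration of the sizing-and-siting problem (not the paper's analytical equations), the sketch below sweeps candidate DG locations and sizes on a toy radial feeder with uniformly distributed load, using a simple series voltage-drop approximation; the feeder impedances, loads and candidate sizes are assumed values.

```python
import numpy as np

# Toy radial feeder: uniformly distributed load served from the substation (bus 0).
# A brute-force sweep over DG location and size picks the combination that keeps
# the voltage profile closest to 1.0 p.u. under a simple series voltage-drop model.
n_bus = 10
r_x = 0.01 + 0.02j                   # assumed per-section impedance (p.u.)
load = np.full(n_bus, 0.03 + 0.01j)  # assumed uniformly distributed load per bus (p.u.)

def voltage_profile(dg_bus=None, dg_power=0.0):
    """Approximate bus voltage magnitudes by accumulating series drops along the feeder."""
    inj = -load.astype(complex)
    if dg_bus is not None:
        inj[dg_bus] += dg_power       # DG modelled as an active-power injection
    v = np.ones(n_bus + 1, dtype=complex)   # v[0] is the substation
    for i in range(n_bus):
        # Power flowing through section i feeds every bus downstream of it.
        s_section = -np.sum(inj[i:])
        v[i + 1] = v[i] - r_x * np.conj(s_section / v[i])
    return np.abs(v[1:])

best = None
for bus in range(n_bus):
    for p in np.linspace(0.0, 0.3, 31):       # candidate DG active power (p.u.)
        dev = np.max(np.abs(voltage_profile(bus, p) - 1.0))
        if best is None or dev < best[0]:
            best = (dev, bus, p)

print("best max |V - 1| = %.4f at bus %d with DG size %.2f p.u." % best)
```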

Relevance: 30.00%

Abstract:

Large arrays and networks of carbon nanotubes, both single- and multi-walled, feature many superior properties that offer excellent opportunities for a wide range of modern applications, from nanoelectronics, supercapacitors, photovoltaic cells, and energy storage and conversion devices to gas sensors, biosensors, and nanomechanical and biomedical devices. At present, arrays and networks of carbon nanotubes are mainly fabricated from pre-fabricated, separated nanotubes by solution-based techniques. However, the intrinsic structure of the nanotubes (mainly, the level of structural defects), on which the best performance in nanotube-based applications depends, is often damaged during array/network fabrication by the surfactants, chemicals, and sonication involved in the process. As a result, the performance of the functional devices may be significantly degraded. In contrast, directly synthesized nanotube arrays/networks can preclude the adverse effects of the solution-based process and largely preserve the excellent properties of the pristine nanotubes. Owing to their advantages of scalable production and precise positioning of the grown nanotubes, catalytic and catalyst-free chemical vapor deposition (CVD), as well as plasma-enhanced chemical vapor deposition (PECVD), are the most promising methods for the direct synthesis of the nanotubes.

Relevance: 30.00%

Abstract:

In vitro studies and mathematical models are now widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in developing and analysing mathematical models, far less progress has been made in understanding how to estimate model parameters from experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, namely the cell diffusivity, D, the cell proliferation rate, λ, and the cell-to-cell adhesion, q, in two experimental scenarios: with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred, with a small posterior coefficient of variation of approximately 2–12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean of D lies in the range 226–268 µm²h⁻¹ for the experimental period 0–24 h and 311–351 µm²h⁻¹ for 24–48 h, while the posterior mean of q lies in the ranges 0.23–0.39 and 0.32–0.61 for the same two periods, respectively. Furthermore, the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ.
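
The mechanics of rejection ABC, which the abstract builds on, can be sketched as follows; the "simulator", summary statistic, prior ranges and tolerance are hypothetical stand-ins for the paper's image-based colony model, included only to show how prior draws are accepted or rejected.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for the image-based colony experiment: the "simulator" and
# summary statistic below are placeholders, not the paper's lattice model.
theta_true = np.array([250.0, 0.05, 0.3])        # (D, lambda, q) used to fake "data"

def simulate_summary(theta, n_obs=50):
    D, lam, q = theta
    # Toy observable loosely tied to the parameters (a colony "spread" proxy).
    spread = np.sqrt(D) * (1.0 + lam) * (1.0 - 0.5 * q)
    return spread + rng.normal(scale=0.5, size=n_obs)

observed = simulate_summary(theta_true)

def abc_rejection(n_samples=200, eps=0.5):
    """Rejection ABC: keep prior draws whose simulated summaries lie close to the data."""
    accepted = []
    while len(accepted) < n_samples:
        # Vague priors over (D, lambda, q); the ranges are illustrative only.
        theta = np.array([rng.uniform(50, 500),
                          rng.uniform(0.0, 0.2),
                          rng.uniform(0.0, 1.0)])
        sim = simulate_summary(theta)
        distance = abs(sim.mean() - observed.mean())
        if distance < eps:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection()
for name, col in zip(("D", "lambda", "q"), posterior.T):
    print(f"{name}: posterior mean = {col.mean():.3f}, CV = {col.std() / col.mean():.2%}")
```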

Relevance: 30.00%

Abstract:

In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general-purpose distributed computing systems. ELISA uses estimated state information, based on the periodic exchange of exact state information between neighbouring nodes, to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load-transfer overheads by minimizing the frequency of status exchange and by restricting load transfer and status exchange to the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect-information algorithm and is superior to other load balancing schemes based on random sharing and the Ni-Hwang algorithm. A sensitivity analysis is also carried out to study the effect of various design parameters on the effectiveness of load balancing. Finally, the algorithm's performance is tested on large-dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison with other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general-purpose distributed computing systems.
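
A toy simulation of the central idea, scheduling on estimated rather than exact neighbour state, is sketched below; the buddy sets, transfer threshold and extrapolation rule are assumptions for illustration, not the published ELISA specification.

```python
import random

random.seed(3)

# Nodes exchange exact queue lengths only every STATUS_PERIOD steps; in between,
# each node extrapolates a buddy's load from the last reported queue length and an
# arrival-rate estimate (here, simply the known arrival probability).
N_NODES, STATUS_PERIOD, STEPS, THRESHOLD = 8, 5, 200, 2

class Node:
    def __init__(self, idx, arrival_prob):
        self.idx = idx
        self.arrival_prob = arrival_prob
        self.queue = 0
        self.reported = {}            # buddy id -> (queue_at_report, arrival estimate)

nodes = [Node(i, random.uniform(0.2, 0.9)) for i in range(N_NODES)]
buddies = {i: [(i - 1) % N_NODES, (i + 1) % N_NODES] for i in range(N_NODES)}

transfers = 0
for t in range(STEPS):
    for n in nodes:
        if random.random() < n.arrival_prob:     # job arrival
            n.queue += 1
        if n.queue > 0:                          # unit service rate
            n.queue -= 1
    if t % STATUS_PERIOD == 0:                   # periodic exact status exchange
        for n in nodes:
            for b in buddies[n.idx]:
                nodes[b].reported[n.idx] = (n.queue, n.arrival_prob)
    for n in nodes:                              # schedule on estimated buddy load
        estimates = {}
        for b in buddies[n.idx]:
            q0, rate = n.reported.get(b, (0, 0.5))
            elapsed = t % STATUS_PERIOD
            estimates[b] = max(0.0, q0 + (rate - 1.0) * elapsed)   # extrapolated load
        target = min(estimates, key=estimates.get)
        if n.queue > 0 and n.queue - estimates[target] > THRESHOLD:
            n.queue -= 1                         # transfer one job within the buddy set
            nodes[target].queue += 1
            transfers += 1

print("final queue lengths:", [n.queue for n in nodes], "transfers:", transfers)
```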

Relevance: 30.00%

Abstract:

Concurrency control (CC) algorithms are important in distributed database systems to ensure the consistency of the database. A number of such algorithms are available in the literature, and their performance evaluation has been recognized as important; however, only a few such studies have been carried out. This paper deals with the performance evaluation, through a detailed simulation study, of a CC algorithm proposed by Rosenkrantz et al. In doing so, the algorithm has been modified so that it can itself handle redundancy in the database. The influence of various system parameters and of the transaction profile on the response time and on the degree of conflict is considered. The entire study has been carried out using the programming language SIMULA on a DEC-1090 system.
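
The abstract does not spell out the algorithm itself; the Rosenkrantz et al. work is commonly associated with timestamp-ordered conflict resolution (the wait-die and wound-wait rules). Under that assumption, a minimal wound-wait rule is sketched below; the transaction and lock structures are illustrative only.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Transaction:
    tid: int
    timestamp: int                    # smaller timestamp = older transaction
    aborted: bool = False
    waiting_for: Optional[int] = None

locks: Dict[str, Transaction] = {}    # data item -> current lock holder

def request_lock(item: str, txn: Transaction) -> str:
    """Wound-wait: an older requester wounds (restarts) a younger holder;
    a younger requester waits for an older holder."""
    holder = locks.get(item)
    if holder is None or holder.aborted:
        locks[item] = txn
        return "granted"
    if txn.timestamp < holder.timestamp:      # requester is older: wound the holder
        holder.aborted = True
        locks[item] = txn
        return f"granted (wounded T{holder.tid})"
    txn.waiting_for = holder.tid              # requester is younger: it waits
    return f"waiting for T{holder.tid}"

t1, t2 = Transaction(1, timestamp=10), Transaction(2, timestamp=20)
print(request_lock("x", t2))   # granted
print(request_lock("x", t1))   # older T1 wounds T2 and takes the lock
print(request_lock("x", t2))   # T2 must now wait for the older T1
```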

Relevance: 30.00%

Abstract:

Conferencing systems in IP Multimedia (IM) networks are undergoing a restructuring that is expected to be completed in the near future. One of the changes introduced is the concept of floors and floor control in its current form, together with the matching entity roles. The Binary Floor Control Protocol (BFCP) is a new protocol to be exploited in distributed, tightly coupled conferencing services. The protocol defines the floor control server (FCS), which implements floor control and thereby governs access to shared resources. As the current trend is to distribute conferencing services, the locations of the different functional units play an important role in the development of the standards. The location of the floor control server has not yet been settled by the standardization bodies, and the debate continues over where to place it in relation to the media server that provides the conferencing service. The main objective of this thesis is to evaluate two distinct alternatives with respect to the Mp interface protocol between the respective nodes, as the floor-control aspects of this interface are currently under standardization. The thesis gives a concise introduction to the IMS network, the nodes of interest (including the floor control server), and conferencing. Knowledge of several protocols (BFCP, SDP, SIP and H.248) is important background for understanding the functional changes introduced in the Mp interface, so these protocols and their place in the overall picture are also introduced. The analysis of the impact of the floor control server on the Mp reference point is then carried out for both candidate locations, covering basic flows and a requirements analysis, including a limited implementation proposal for the supporting protocol parameters. The overall conclusion of the thesis is that, although both choices appear workable, neither location emerges as clearly the more suitable in the light of this work. The thesis therefore suggests keeping both options available, to be chosen between according to the circumstances and realized through consistent standardization. If the initial assumption that there is only one right place for the floor control server is retained, further work in related areas is needed to identify the most appropriate location.
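
The core role of the floor control server, serializing access to a shared resource by granting, queueing and releasing floors, can be illustrated with a toy server; the sketch deliberately ignores BFCP's binary encoding, transaction handling and error cases.

```python
from collections import deque

class FloorControlServer:
    """Toy floor control: one floor per shared resource with a FIFO pending queue.
    This mirrors the role of the FCS described above, not the BFCP wire protocol."""

    def __init__(self, floors):
        self.holders = {f: None for f in floors}      # floor -> current holder
        self.pending = {f: deque() for f in floors}   # floor -> queued requesters

    def request_floor(self, floor, participant):
        if self.holders[floor] is None:
            self.holders[floor] = participant
            return "GRANTED"
        self.pending[floor].append(participant)
        return "PENDING"

    def release_floor(self, floor, participant):
        if self.holders[floor] != participant:
            return "ERROR: not the holder"
        nxt = self.pending[floor].popleft() if self.pending[floor] else None
        self.holders[floor] = nxt
        return f"released; granted to {nxt}" if nxt else "released; floor idle"

fcs = FloorControlServer(["audio"])
print(fcs.request_floor("audio", "alice"))   # GRANTED
print(fcs.request_floor("audio", "bob"))     # PENDING
print(fcs.release_floor("audio", "alice"))   # released; granted to bob
```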

Relevance: 30.00%

Abstract:

The author presents adaptive control techniques for controlling the flow of real-time jobs from the peripheral processors (PPs) to the central processor (CP) of a distributed system with a star topology. He considers two classes of flow control mechanisms: (1) proportional control, where a certain proportion of the load offered to each PP is sent to the CP, and (2) threshold control, where there is a maximum rate at which each PP can send jobs to the CP. The problem is to obtain good algorithms for dynamically adjusting the control level at each PP in order to prevent overload of the CP, when the load offered by the PPs is unknown and varying. The author formulates the problem approximately as a standard system control problem in which the system has unknown parameters that are subject to change. Using well-known techniques (e.g., naive-feedback-controller and stochastic approximation techniques), he derives adaptive controls for the system control problem. He demonstrates the efficacy of these controls in the original problem by using the control algorithms in simulations of a queuing model of the CP and the load controls.
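
A minimal sketch of threshold control driven by a Robbins-Monro style stochastic-approximation update is given below; the load model, gains, target utilization and parameter values are assumptions for illustration, not the controls derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each peripheral processor (PP) adapts the maximum rate at which it forwards
# real-time jobs so that observed central-processor (CP) utilization tracks a target.
N_PP, CP_CAPACITY, TARGET_UTIL, STEPS = 4, 100.0, 0.8, 400

offered = rng.uniform(10.0, 60.0, size=N_PP)     # unknown, varying offered load per PP
theta = np.full(N_PP, 20.0)                      # per-PP threshold (maximum send rate)

for n in range(1, STEPS + 1):
    sent = np.minimum(offered, theta)            # threshold control at each PP
    noise = rng.normal(scale=0.02)               # noisy utilization measurement
    util = max(0.0, sent.sum() / CP_CAPACITY + noise)
    gain = max(0.05, 5.0 / n)                    # decreasing but floored step size
    # Robbins-Monro update: shrink thresholds when the CP runs above target.
    theta = np.maximum(0.0, theta - gain * (util - TARGET_UTIL))
    if n % 100 == 0:                             # offered load changes over time
        offered = rng.uniform(10.0, 60.0, size=N_PP)

print("final thresholds:", np.round(theta, 2))
print("final CP utilization: %.2f (target %.2f)" % (util, TARGET_UTIL))
```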

Relevance: 30.00%

Abstract:

Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. Also of interest is minimizing the bandwidth required to repair the system following a node failure. In a recent paper, Wu et al. characterize the tradeoff between the repair bandwidth and the amount of data stored per node, and they prove the existence of regenerating codes that achieve this tradeoff. In this paper, we introduce Exact Regenerating Codes, which are regenerating codes with the additional property of being able to duplicate the data stored at a failed node. Such codes require low processing and communication overheads, making the system practical and easy to maintain. An explicit construction of exact regenerating codes is provided for the minimum-bandwidth point on the storage-repair bandwidth tradeoff, relevant to distributed-mail-server applications. A subspace-based approach is provided and shown to yield necessary and sufficient conditions for a linear code to possess the exact regeneration property, as well as to prove the uniqueness of our construction. Also included in the paper is an explicit construction of regenerating codes at the minimum-storage point for parameters relevant to storage in peer-to-peer systems. This construction supports a variable number of nodes and can handle multiple simultaneous node failures. All constructions given in the paper are of low complexity, requiring a low field size in particular.
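
For reference, the two extreme points of the storage-repair-bandwidth tradeoff, the minimum-storage (MSR) and minimum-bandwidth (MBR) regenerating points, have closed-form per-node storage alpha and repair bandwidth gamma = d*beta; the snippet below evaluates them for arbitrary example parameters.

```python
from fractions import Fraction

def msr_point(B, k, d):
    """Minimum-storage regenerating point: smallest possible per-node storage alpha."""
    alpha = Fraction(B, k)
    gamma = Fraction(B * d, k * (d - k + 1))    # repair bandwidth gamma = d * beta
    return alpha, gamma

def mbr_point(B, k, d):
    """Minimum-bandwidth regenerating point: repair bandwidth equals per-node storage."""
    gamma = Fraction(2 * B * d, k * (2 * d - k + 1))
    return gamma, gamma                          # alpha = gamma at the MBR point

B, n, k, d = 12, 6, 3, 5                         # example file size and code parameters
for name, (alpha, gamma) in (("MSR", msr_point(B, k, d)), ("MBR", mbr_point(B, k, d))):
    print(f"{name}: storage per node alpha = {alpha}, repair bandwidth gamma = {gamma}")
```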

Relevance: 30.00%

Abstract:

A detailed characterization of interference power statistics in CDMA systems is of considerable practical and theoretical interest. Such a characterization for uplink inter-cell interference has been difficult because of transmit power control, randomness in the number of interfering mobile stations, and randomness in their locations. We develop a new method to model the uplink inter-cell interference power as a lognormal distribution, and show that it is an order of magnitude more accurate than the conventional Gaussian approximation, even when the average number of mobile stations per cell is relatively large, and that it also outperforms the moment-matched lognormal approximation considered in the literature. The proposed method determines the lognormal parameters by matching its moment generating function with a new approximation of the moment generating function of the inter-cell interference. The method is tractable and exploits the elegant theory of spatial Poisson processes. Using several numerical examples, the accuracy of the proposed method in modeling the probability distribution of inter-cell interference is verified for both small and large values of interference.
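
As a baseline illustration of the modeling problem (not the paper's MGF-matching method, which the abstract reports to be more accurate), the sketch below simulates aggregate interference from a spatial Poisson field of interferers and fits a moment-matched lognormal; the cell geometry, path-loss exponent and shadowing spread are assumed values, and power control is ignored.

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(5)

# Assumed scenario: interferers form a spatial Poisson process in an annulus around
# the victim base station, with power-law path loss and lognormal shadowing.
density, r_min, r_max = 3e-5, 200.0, 2000.0       # interferers per m^2, annulus radii (m)
path_loss_exp, shadow_db = 3.5, 8.0
area = np.pi * (r_max**2 - r_min**2)

def interference_sample():
    n = rng.poisson(density * area)                         # Poisson number of interferers
    r = np.sqrt(rng.uniform(r_min**2, r_max**2, size=n))    # uniform locations in the annulus
    shadow = 10.0 ** (rng.normal(0.0, shadow_db, size=n) / 10.0)
    return np.sum(shadow * r ** (-path_loss_exp))

samples = np.array([interference_sample() for _ in range(20000)])

# Moment matching: choose (mu, sigma^2) so the lognormal has the sample mean and variance.
m, v = samples.mean(), samples.var()
sigma2 = np.log(1.0 + v / m**2)
mu = np.log(m) - 0.5 * sigma2

# Compare an upper-tail probability of the fit against the empirical 95th percentile.
thr = float(np.quantile(samples, 0.95))
tail = 0.5 * (1.0 - erf((log(thr) - mu) / sqrt(2.0 * sigma2)))
print(f"empirical P(I > thr) = 0.050, moment-matched lognormal gives {tail:.3f}")
```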

Relevance: 30.00%

Abstract:

In the distributed storage setting that we consider, data is stored across n nodes in the network such that the data can be recovered by connecting to any subset of k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading beta units of data from each. Dimakis et al. show that the repair bandwidth d beta can be considerably reduced if each node stores slightly more than the minimum required and characterize the tradeoff between the amount of storage per node and the repair bandwidth. In the exact regeneration variation, unlike the functional regeneration, the replacement for a failed node is required to store data identical to that in the failed node. This greatly reduces the complexity of system maintenance. The main result of this paper is an explicit construction of codes for all values of the system parameters at one of the two most important and extreme points of the tradeoff - the Minimum Bandwidth Regenerating point, which performs optimal exact regeneration of any failed node. A second result is a non-existence proof showing that with one possible exception, no other point on the tradeoff can be achieved for exact regeneration.
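
The tradeoff referred to here follows from the cut-set bound B <= sum_{i=0}^{k-1} min(alpha, (d-i)*beta). A small check that the minimum-bandwidth point (where alpha = d*beta) meets this bound with equality is sketched below for arbitrary example parameters.

```python
from fractions import Fraction

def cut_set_capacity(alpha, beta, k, d):
    """Maximum file size allowed by the cut-set bound: sum_{i=0}^{k-1} min(alpha, (d-i)*beta)."""
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# Minimum Bandwidth Regenerating point for example parameters (n, k, d) = (6, 3, 5):
B, k, d = 12, 3, 5
beta = Fraction(2 * B, k * (2 * d - k + 1))    # per-helper download at the MBR point
alpha = d * beta                               # per-node storage equals repair bandwidth
assert cut_set_capacity(alpha, beta, k, d) == B
print(f"MBR point: alpha = {alpha}, d*beta = {d * beta}, supports B = {B}")
```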

Relevance: 30.00%

Abstract:

In the distributed storage setting introduced by Dimakis et al., B units of data are stored across n nodes in the network in such a way that the data can be recovered by connecting to any k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading at most beta units of data from each node. In this paper, we introduce a flexible framework in which the data can be recovered by connecting to any number of nodes as long as the total amount of data downloaded is at least B. Similarly, regeneration of a failed node is possible if the new node connects to the network using links whose individual capacity is bounded above by beta_max and whose sum capacity equals or exceeds a predetermined parameter gamma. In this flexible setting, we obtain the cut-set lower bound on the repair bandwidth, along with a constructive proof of the existence of codes meeting this bound for all values of the parameters. An explicit code construction is provided which is optimal in certain parameter regimes.

Relevance: 30.00%

Abstract:

The steady-state throughput performance of distributed applications deployed in switched networks in the presence of end-system bottlenecks is studied in this paper. The effect of various limitations at an end-system is modelled as an equivalent transmission capacity limitation. A class of distributed applications is characterised by a static traffic distribution matrix that determines the communication between the various components of the application. It is found that the uniqueness of the steady-state throughputs depends only on the traffic distribution matrix, and that some applications (e.g., broadcast applications) can yield non-unique values for the steady-state component throughputs. For a given switch capacity, with traffic distributions that yield fair, unique throughputs, the trade-off between the end-system capacity and the number of application components is brought out. With the proposed distributed rate control, it is shown that a unique solution can be obtained for certain traffic distributions for which this is otherwise impossible. Also, by proper selection of the rate control parameters, various throughput performance objectives can be realised.
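
A toy computation of the largest common ("fair") throughput permitted by a traffic distribution matrix, when each end-system's combined send and receive traffic is capped by its equivalent capacity, is sketched below; the matrix, capacities and the assumption of a common rate are illustrative simplifications, not the model analysed in the paper.

```python
import numpy as np

# Traffic distribution matrix T: T[i, j] is the fraction of component i's output
# destined for component j (rows sum to 1). All values below are illustrative only.
T = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])
capacity = np.array([10.0, 6.0, 8.0])   # assumed equivalent end-system capacities

# If every component sends at a common rate x, end-system j carries x outgoing units
# plus x * sum_i T[i, j] incoming units, so the largest feasible common rate is:
load_per_unit_rate = 1.0 + T.sum(axis=0)        # send + receive load per unit of x
x_fair = np.min(capacity / load_per_unit_rate)

print("load per unit rate at each end-system:", load_per_unit_rate)
print("largest common (fair) throughput:", round(float(x_fair), 3))
print("resulting end-system loads:", np.round(x_fair * load_per_unit_rate, 3))
```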

Relevance: 30.00%

Abstract:

In this paper, we propose a new fault-tolerant distributed deadlock detection algorithm which can handle the loss of any resource release message. It is based on a token-based distributed mutual exclusion algorithm. We have evaluated and compared the performance of the proposed algorithm with that of two other algorithms, belonging to two different classes, using simulation studies. The proposed algorithm is found to be more efficient than the other two in terms of the average number of messages per wait and the average deadlock duration in all situations, and it has comparable or better performance in terms of the other parameters.
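
For contrast with the distributed, token-based scheme proposed here, the sketch below shows the basic decision a deadlock detector must make, whether the wait-for graph contains a cycle, as a centralized DFS check over a hypothetical wait-for graph; it is a reference point only, not the paper's algorithm.

```python
# Baseline illustration: deadlock exists exactly when the global wait-for graph has a
# cycle. The paper's algorithm detects this in a distributed, fault-tolerant way; this
# centralized depth-first search over a hypothetical graph is only a reference point.

def find_deadlock(wait_for):
    """Return a cycle of processes in the wait-for graph, or None if there is none."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}
    stack = []

    def dfs(p):
        color[p] = GREY
        stack.append(p)
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GREY:           # back edge found: a cycle
                return stack[stack.index(q):] + [q]
            if color.get(q, WHITE) == WHITE:
                cycle = dfs(q)
                if cycle:
                    return cycle
        color[p] = BLACK
        stack.pop()
        return None

    for p in list(wait_for):
        if color[p] == WHITE:
            cycle = dfs(p)
            if cycle:
                return cycle
    return None

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a deadlock.
print(find_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))
print(find_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))
```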