357 results for Distributed Algorithm
Abstract:
This paper presents a comparative evaluation of distance relay characteristics for UHV and EHV transmission lines. Distance protection relay characteristics for the EHV and UHV systems are developed using an Electromagnetic Transients (EMT) program. The variation of the ideal trip boundaries for both systems is presented. Unlike a conventional distance protection relay, which uses a lumped-parameter line model, this paper uses the distributed-parameter model. The effect of the larger shunt susceptance on the trip boundaries is highlighted. The performance of a distance relay with ideal trip boundaries for EHV and UHV lines has been tested for various fault locations and fault resistances. The EMT program was developed using a distributed-parameter line model to simulate the test systems. The voltage and current phasors are computed from the signals using an improved full-cycle DFT algorithm taking 20 samples per cycle. Two practical transmission systems of the Indian power grid, namely a 765 kV UHV transmission line and the SREB 24-bus 400 kV EHV system, are used to test the performance of the proposed approach.
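As an illustration of the phasor-computation step, below is a minimal sketch of a standard full-cycle DFT phasor estimator at 20 samples per cycle; the paper's "improved" variant and its filtering details are not reproduced here.

```python
import numpy as np

def full_cycle_dft_phasor(samples):
    """Estimate the fundamental phasor from one cycle of samples.

    Standard full-cycle DFT: correlate the window with cosine/sine
    at the fundamental frequency. `samples` holds exactly N points
    spanning one fundamental cycle (N = 20 in the abstract).
    """
    N = len(samples)
    n = np.arange(N)
    # Fundamental-frequency correlation sums
    re = (2.0 / N) * np.sum(samples * np.cos(2 * np.pi * n / N))
    im = (2.0 / N) * np.sum(samples * np.sin(2 * np.pi * n / N))
    return complex(re, im)  # peak-value phasor convention

# Example: one cycle of a 100 V (peak) signal with a 30-degree phase
N = 20
t = np.arange(N) / N
v = 100.0 * np.cos(2 * np.pi * t + np.pi / 6)
ph = full_cycle_dft_phasor(v)
print(abs(ph), np.degrees(np.angle(ph)))  # ~100.0, ~30.0
```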
Abstract:
In this paper we propose a new algorithm for learning polyhedral classifiers, which we call Polyceptron. It is a Perceptron-like algorithm that updates the parameters only when the current classifier misclassifies a training point. We give both batch and online versions of the Polyceptron algorithm. Finally, we give experimental results to show the effectiveness of our approach.
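Since the abstract does not spell out the update rule, the sketch below shows one plausible mistake-driven, Perceptron-like update for a polyhedral (intersection-of-half-spaces) classifier; the choice of which half-space to update is an assumption for illustration, not the paper's Polyceptron.

```python
import numpy as np

def polyceptron_like_online(X, y, K=3, epochs=50, lr=1.0, seed=0):
    """Sketch: mistake-driven learning of a polyhedral classifier.

    The positive class is modeled as the intersection of K half-spaces
    w_k . x + b_k > 0. Parameters change only on a misclassification,
    in the spirit of the abstract; the specific rule is illustrative.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(K, X.shape[1])) * 0.01
    b = np.zeros(K)
    for _ in range(epochs):
        for x, label in zip(X, y):  # label in {+1, -1}
            scores = W @ x + b
            pred = 1 if np.all(scores > 0) else -1
            if pred == label:
                continue  # no update when correctly classified
            # Update only the half-space with the smallest score,
            # i.e. the one "most responsible" for the decision.
            k = int(np.argmin(scores))
            W[k] += lr * label * x
            b[k] += lr * label
    return W, b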
Abstract:
We propose an iterative algorithm to detect transient segments in audio signals. The short-time Fourier transform (STFT) is used to detect rapid local changes in the audio signal. The algorithm iterates two steps: (a) calculate a function of the STFT and (b) build a transient signal. A dynamic thresholding scheme is used to locate the potential positions of transients in the signal. The iterative procedure ensures that genuine transients are built up while localized spectral noise is suppressed using an energy criterion. The extracted transient signal is then compared to a ground-truth dataset. The algorithm performed well on two databases: on the EBU-SQAM database of monophonic sounds it achieved an F-measure of 90%, while on our database of polyphonic audio it achieved an F-measure of 91%. This technique is being used as a preprocessing step for a tempo analysis algorithm and a TSR (Transients + Sines + Residue) decomposition scheme.
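The sketch below illustrates the general idea of locating transient candidates from an STFT with a dynamic threshold. It is a single-pass simplification using positive spectral flux and a moving median, not the paper's iterative build-up with an energy criterion; all parameter values are assumptions.

```python
import numpy as np

def transient_candidates(x, fs, n_fft=1024, hop=256, med_win_s=0.1):
    """Return candidate transient times (seconds) via STFT flux.

    Rapid local spectral change is measured by half-wave-rectified
    frame-to-frame magnitude change; a moving median supplies the
    dynamic threshold.
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.maximum(np.diff(mag, axis=0), 0).sum(axis=1)
    half = max(1, int(med_win_s * fs / hop) // 2)
    times = []
    for i in range(len(flux)):
        lo, hi = max(0, i - half), min(len(flux), i + half + 1)
        local = flux[lo:hi]
        # Peak above a scaled local median counts as a candidate
        if flux[i] > 1.5 * np.median(local) and flux[i] == local.max():
            times.append(i * hop / fs)
    return times
```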
Abstract:
Effective conservation and management of natural resources requires up-to-date information on land cover (LC) types and their dynamics. LC dynamics are captured using multi-resolution remote sensing (RS) data with appropriate classification strategies. RS data combined with important environmental layers (either remotely acquired or derived from ground measurements) would, however, be more effective in addressing LC dynamics and associated changes. These ancillary layers provide additional information for delineating the decision boundaries of LC classes compared to conventional classification techniques. This communication examines whether ancillary and derived geographical layers such as vegetation index, temperature, digital elevation model (DEM), aspect, slope, and texture improve the classification accuracy of RS data. This has been implemented in three terrains of varying topography. The study should help in selecting appropriate ancillary data, depending on the terrain, for better classified information.
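A minimal sketch of the general approach: stack the ancillary layers with the spectral bands as per-pixel features and train any supervised classifier. The random forest and all array shapes here are illustrative assumptions; the abstract does not specify the classification strategy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_with_ancillary(spectral, ancillary, labels):
    """Per-pixel LC classification with ancillary layers as features.

    spectral:  (H, W, B) spectral bands
    ancillary: (H, W, A) derived layers (NDVI, DEM, slope, aspect, ...)
    labels:    (H, W) integer LC classes, 0 = unlabeled pixel
    """
    features = np.dstack([spectral, ancillary])      # (H, W, B + A)
    X = features.reshape(-1, features.shape[-1])
    y = labels.ravel()
    mask = y > 0                                     # train on labeled pixels
    clf = RandomForestClassifier(n_estimators=100).fit(X[mask], y[mask])
    return clf.predict(X).reshape(labels.shape)      # full LC map
```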
Abstract:
Structural health monitoring of existing infrastructure is currently an active field of research, where elaborate experimental programs and advanced analytical methods are used to identify the current state of health of critical structures. Change in static deflection as an indicator of damage is the simplest tool in a structural health monitoring scenario for bridges, yet it is the least exploited in damage identification strategies. In this paper, some simple and elegant equations based on the loss of symmetry due to damage are derived and presented for identifying damage in a bridge girder modeled as a simply supported beam, using changes in static deflections and dynamic parameters. A single contiguous, distributed damage, typical of reinforced or prestressed concrete structures, is assumed for the structure. The methodology is extended to both baseline-free and baseline-inclusive measurements. The measurement strategy involves applying loads only at two symmetric points, one at a time, and measuring deflections at those symmetric points as well as at the midspan of the beam. A laboratory-based experiment is used to validate the approach.
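A schematic of the baseline-free symmetry check described above, assuming deflections measured at the two symmetric load points and at midspan for each load case; the indicator and tolerance are illustrative stand-ins, not the paper's derived equations.

```python
def symmetry_damage_indicator(d_aa, d_bb, d_mid_a, d_mid_b, tol=1e-3):
    """Flag loss of symmetry in a simply supported beam.

    Loads are applied at x = a and x = L - a, one at a time.
    d_aa:    deflection at a under the load at a
    d_bb:    deflection at L - a under the load at L - a
    d_mid_a: midspan deflection under the load at a
    d_mid_b: midspan deflection under the load at L - a
    For an undamaged symmetric beam both pairs coincide; damage
    breaks the symmetry and drives the residuals above tol.
    """
    r1 = abs(d_aa - d_bb) / max(abs(d_aa), abs(d_bb))
    r2 = abs(d_mid_a - d_mid_b) / max(abs(d_mid_a), abs(d_mid_b))
    return (r1 > tol) or (r2 > tol), (r1, r2)
```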
Abstract:
Algorithms for adaptive mesh refinement using a residual error estimator are proposed for fluid flow problems in a finite volume framework. The residual error estimator, referred to as the R-parameter, is used to derive refinement and coarsening criteria for the adaptive algorithms. An adaptive strategy based on the R-parameter is proposed for continuous flows, while a hybrid adaptive algorithm employing a combination of error indicators and the R-parameter is developed for discontinuous flows. Numerical experiments for inviscid and viscous flows on different grid topologies demonstrate the effectiveness of the proposed algorithms on arbitrary polygonal grids.
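The flagging step of such an adaptive strategy might look like the sketch below, where a per-cell estimator (standing in for the R-parameter) drives refinement and coarsening decisions; the threshold rules are illustrative assumptions, not the paper's criteria.

```python
import numpy as np

def flag_cells(r_param, refine_frac=0.1, coarsen_frac=0.3):
    """Flag cells for refinement/coarsening from a per-cell estimator.

    r_param: (n_cells,) nonnegative error estimates.
    Cells well above the peak estimate are split; cells well below
    the mean are candidates for merging (mesh-validity constraints
    are assumed to be enforced elsewhere by the solver).
    """
    refine = r_param > refine_frac * r_param.max()
    coarsen = r_param < coarsen_frac * r_param.mean()
    return refine, coarsen

# Example: random estimates for a 10-cell mesh
refine, coarsen = flag_cells(np.random.default_rng(0).random(10))
```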
Abstract:
Wind power, as an alternative to fossil fuels, is plentiful, renewable, widely distributed, and clean, produces no greenhouse gas emissions during operation, and uses little land. In operation, the overall cost per unit of energy produced is similar to that of new coal and natural gas installations. However, the stochastic behaviour of wind speeds leads to a significant mismatch between wind energy production and electricity demand: wind generation is intermittent owing to the diurnal and seasonal patterns of wind behaviour. Both reactive power and voltage control are important under the varying operating conditions of a wind farm. To optimize reactive power flow and to keep voltages within limits, an optimization method is proposed in this paper. The proposed objective is minimization of the deviations of the load-bus voltages from their desired values (V_desired). The approach considers the reactive power limits of wind generators and coordinates the transformer taps. The algorithm has been tested under practically varying conditions simulated on a test system: a 50-bus real-life equivalent power network. The results show the efficiency of the proposed method.
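The objective described above can be written as a function of the control variables, as in the sketch below; `run_power_flow` is a placeholder for an actual load-flow solver, and all names are assumptions for illustration.

```python
import numpy as np

def voltage_deviation_objective(controls, run_power_flow, v_desired):
    """Sum of squared load-bus voltage deviations for given controls.

    controls:  vector of wind-generator Q setpoints (within limits)
               and transformer tap ratios
    run_power_flow: placeholder solver mapping controls to the
               resulting load-bus voltage magnitudes
    v_desired: desired voltage profile at the load buses
    """
    v_load = run_power_flow(controls)
    return float(np.sum((v_load - v_desired) ** 2))
```

Any constrained optimizer can then minimize this objective over the feasible box of Q limits and discrete tap positions.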
Abstract:
Several experimental studies have shown that fracture surfaces in brittle metallic glasses (MGs) generally exhibit nanoscale corrugations, which may be attributed to the nucleation and coalescence of nanovoids during crack propagation. Recent atomistic simulations suggest that this phenomenon is due to large spatial fluctuations in material properties in a brittle MG, which lead to void nucleation in regions of low atomic density and then catastrophic fracture through void coalescence. To explain this behavior, we propose a model of a heterogeneous solid containing a distribution of weak zones to represent a brittle MG. Plane-strain continuum finite element analysis of cavitation in such an elastic-plastic solid is performed, with the weak zones idealized as periodically distributed regions having lower yield strength than the background material. It is found that the presence of weak zones can significantly reduce the critical hydrostatic stress for the onset of cavitation, which is controlled uniquely by the local yield properties of these zones. Also, the presence of weak zones diminishes the sensitivity of the cavitation stress to the volume fraction of a preexisting void. These results provide plausible explanations for the observations reported in recent atomistic simulations of brittle MGs. An analytical solution for a composite, incompressible elastic-plastic solid with a weak inner core is used to investigate the effect of the volume fraction and yield strength of the core on the nature of cavitation bifurcation. It is shown that snap-cavitation may occur, giving rise to the sudden formation of voids of finite size, which does not happen in a homogeneous plastic solid.
Abstract:
Let X_1, ..., X_m be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X_1, ..., X_m]Γ, in which the (m × s) matrix Γ has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Körner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {X_i} induces a unique decomposition of the set of all linear combinations of the {X_i} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {X_i} and any W, the sum-rate of the nested-code approach is no larger than that under the Slepian-Wolf (SW) approach, in which W is computed by first recovering each of the {X_i}. For a large class of joint distributions and subspaces W, the nested-code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.
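In symbols, the computation target described above is the following (a restatement of the setting, with F_q the common alphabet):

```latex
% The receiver must losslessly compute the s linear combinations
\[
  [Z_1, \dots, Z_s] \;=\; [X_1, \dots, X_m]\,\Gamma,
  \qquad \Gamma \in \mathbb{F}_q^{m \times s},
  \quad \operatorname{rank}(\Gamma) = s,
\]
% equivalently, the s-dimensional subspace they span,
\[
  W \;=\; \langle Z_1, \dots, Z_s \rangle .
\]
```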
Abstract:
We address the problem of mining targeted association rules over multidimensional market-basket data. Here, each transaction has, in addition to the set of purchased items, ancillary dimension attributes associated with it. Based on these dimensions, transactions can be visualized as distributed over the cells of an n-dimensional cube. In this framework, a targeted association rule is of the form {X -> Y}_R, where R is a convex region in the cube and X -> Y is a traditional association rule within region R. We first describe the TOARM algorithm, based on classical techniques, for identifying targeted association rules. Then, we discuss the concepts of bottom-up aggregation and cubing, leading to the CellUnion technique. This approach is further extended, using notions of cube-count interleaving and credit-based pruning, to derive the IceCube algorithm. Our experiments demonstrate that IceCube consistently provides the best execution time performance, especially for large and complex data cubes.
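The naive semantics of a targeted rule {X -> Y}_R can be sketched as below: restrict the transactions to the region R and evaluate the classical rule inside it. TOARM, CellUnion, and IceCube speed this up with aggregation, cubing, and pruning; none of that machinery is shown here.

```python
def targeted_rule_stats(transactions, in_region, X, Y):
    """Support and confidence of X -> Y within region R.

    transactions: list of (dims, items) pairs, where dims is the
                  tuple of dimension attributes and items is a set
    in_region:    predicate dims -> bool defining the convex region R
    X, Y:         sets of items forming the rule X -> Y
    """
    region = [items for dims, items in transactions if in_region(dims)]
    n = len(region)
    n_x = sum(1 for items in region if X <= items)
    n_xy = sum(1 for items in region if X | Y <= items)
    support = n_xy / n if n else 0.0
    confidence = n_xy / n_x if n_x else 0.0
    return support, confidence

# Example: transactions with (age_band, city) dimensions
txns = [((2, "blr"), {"milk", "bread"}), ((2, "blr"), {"milk"}),
        ((5, "del"), {"milk", "bread"})]
print(targeted_rule_stats(txns, lambda d: d[1] == "blr",
                          {"milk"}, {"bread"}))
```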
Abstract:
The rapid growth in the field of data mining has led to the development of various methods for outlier detection. Though the detection of outliers has been well explored in the context of numerical data, dealing with categorical data is still evolving. In this paper, we propose a two-phase algorithm for detecting outliers in categorical data based on a novel definition of outliers. In the first phase, the algorithm obtains a clustering of the given data; this is followed by a ranking phase that determines the set of most likely outliers. The proposed algorithm is expected to perform better because it can identify different types of outliers, employing two independent ranking schemes based on attribute value frequencies and the inherent clustering structure of the given data. Unlike some existing methods, the computational complexity of this algorithm is not affected by the number of outliers to be detected. The efficacy of the algorithm is demonstrated through experiments on various public domain categorical data sets.
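A minimal sketch of the two-phase idea, assuming any categorical clusterer for phase one: rank points by two independent scores, the rarity of their attribute values (AVF-style) and the size of their cluster. The paper's actual scoring functions are not reproduced.

```python
from collections import Counter

def rank_categorical_outliers(data, cluster_of):
    """Rank points from most to least outlier-like.

    data:       list of tuples of categorical attribute values
    cluster_of: function mapping a point index to a cluster id
                (output of any categorical clustering algorithm)
    """
    n, m = len(data), len(data[0])
    # Per-attribute value frequencies (phase-two score #1)
    freq = [Counter(row[j] for row in data) for j in range(m)]
    # Cluster sizes from the phase-one clustering (score #2)
    sizes = Counter(cluster_of(i) for i in range(n))
    scores = []
    for i, row in enumerate(data):
        avf = sum(freq[j][row[j]] for j in range(m)) / m  # low = rare values
        csize = sizes[cluster_of(i)]                      # low = small cluster
        scores.append((avf, csize, i))
    return sorted(scores)  # smallest scores first = most likely outliers
```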
Abstract:
We present a novel multi-timescale Q-learning algorithm for average cost control in a Markov decision process subject to multiple inequality constraints. We formulate a relaxed version of this problem through the Lagrange multiplier method. Our algorithm differs from Q-learning in that it updates two parameters: a Q-value parameter and a policy parameter. The Q-value parameter is updated on a slower timescale than the policy parameter. Whereas Q-learning with function approximation can diverge in some cases, our algorithm is seen to be convergent as a result of this timescale separation. We show the results of experiments on a problem of constrained routing in a multistage queueing network. Our algorithm exhibits good performance, and the various inequality constraints are satisfied upon convergence of the algorithm.
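A skeleton of one step of such a scheme is sketched below, showing only the Lagrangian relaxation of the stage cost and the two step-size scales. The paper's policy-parameter update is omitted, and the relative-value (RVI) form of the average-cost Q update, the step-size exponents, and the multiplier ascent are all assumptions for illustration.

```python
import numpy as np

def constrained_q_step(Q, lam, s, a, cost, constraint_costs, budgets,
                       s_next, s_ref, actions, t):
    """One illustrative two-timescale update for the relaxed problem.

    Q:    table of Q-values, indexable as Q[state][action]
    lam:  vector of Lagrange multipliers, one per constraint
    s_ref: fixed reference state for the average-cost (RVI) offset
    """
    a_slow = 1.0 / (1 + t) ** 0.9   # slower timescale (Q-values)
    a_fast = 1.0 / (1 + t) ** 0.6   # faster timescale (multipliers)
    # Lagrangian stage cost: c + sum_k lam_k * (d_k - budget_k)
    lag_cost = cost + float(np.dot(lam, constraint_costs - budgets))
    # Relative-value Q-learning update for average cost
    offset = min(Q[s_ref][b] for b in actions)
    td = lag_cost - offset + min(Q[s_next][b] for b in actions) - Q[s][a]
    Q[s][a] += a_slow * td
    # Multipliers ascend on constraint violation, projected to >= 0
    lam[:] = np.maximum(0.0, lam + a_fast * (constraint_costs - budgets))
    return Q, lam
```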
Abstract:
Particle Swarm Optimization (PSO) is a parallel algorithm that spawns particles across a search space in search of an optimized solution. Though inherently parallel, it has distinct synchronization points that hamper attempts to create completely distributed versions of it. In this paper, we attempt to create a completely distributed peer-to-peer particle swarm optimization in a cluster of heterogeneous nodes. Since the original algorithm requires explicit synchronization points, we modified the algorithm in multiple ways to support a peer-to-peer system of nodes. We also modify certain aspects of the basic PSO algorithm and show how certain numerical problems can take advantage of this, thereby yielding fast convergence.
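For reference, here is a minimal sketch of the standard synchronous PSO that such work decentralizes; the global-best reduction at each iteration is the synchronization point the abstract refers to. Inertia and acceleration constants are typical textbook values, not the paper's.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with standard synchronous PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()   # global best: requires a
    for _ in range(iters):                 # swarm-wide synchronization
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(f(g))

# Example: minimize the sphere function in 5 dimensions
best, val = pso(lambda p: float(np.sum(p ** 2)), dim=5)
```

A peer-to-peer variant replaces the global-best reduction with each node's view of the best solution gossiped from its neighbors.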
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal timer mapping that maximizes the average probability of success for the practical scenario in which the number of nodes in the system is unknown and only its probability distribution is known. We show that the optimal mapping has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights into the optimality of some ad hoc approaches are also developed.
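A Monte Carlo sketch of the scheme's success event: each node maps its metric through a decreasing timer function and transmits on expiry, and selection succeeds only if no other timer expires within Δ of the best node's. The linear mapping below is a common ad hoc choice of the kind the paper benchmarks against, not its optimal discrete mapping.

```python
import numpy as np

def success_probability(n_nodes, T_max=1.0, delta=0.05,
                        trials=100_000, seed=0):
    """Estimate P(best node is selected) under a linear timer map."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        metrics = rng.random(n_nodes)        # i.i.d. metrics in [0, 1]
        timers = T_max * (1.0 - metrics)     # higher metric -> earlier expiry
        order = np.sort(timers)
        # Best node expires first; success iff the runner-up's timer
        # expires more than delta later
        if n_nodes == 1 or order[1] - order[0] > delta:
            ok += 1
    return ok / trials

print(success_probability(n_nodes=10))
```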