228 results for interchange algorithm
Abstract:
Based on trial interchanges, this paper develops three algorithms for the solution of the placement problem of logic modules in a circuit. A significant decrease in the computation time of such placement algorithms can be achieved by restricting the trial interchanges to only a subset of all the modules in a circuit. The three algorithms are simulated on a DEC 1090 system in Pascal, and their performance in terms of total wirelength and computation time is compared with the results obtained by Steinberg for the 34-module backboard wiring problem. Performance analysis of the first two algorithms reveals that algorithms based on pairwise trial interchanges (2-interchanges) achieve a desired placement faster than algorithms based on trial N-interchanges. The first two algorithms do not perform better than Steinberg's algorithm [1], whereas the third algorithm, based on trial pairwise interchange among unconnected pairs of modules (UPM) and connected pairs of modules (CPM), performs better than Steinberg's algorithm in terms of both total wirelength (TWL) and computation time.
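Illustrative sketch (not the paper's code): a greedy placement improver based on pairwise trial interchanges, in which a swap of two modules is kept only if it reduces total wirelength. The module names, net list and half-perimeter wirelength model below are hypothetical; restricting the candidate pairs (for example to connected or unconnected module pairs, as in the third algorithm) simply changes which combinations are tried.

```python
import itertools

def total_wirelength(placement, nets):
    """Half-perimeter bounding-box wirelength summed over all nets (a common proxy)."""
    total = 0
    for net in nets:
        xs = [placement[m][0] for m in net]
        ys = [placement[m][1] for m in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def pairwise_interchange(placement, nets, passes=10):
    """Greedy improvement by trial pairwise interchanges; a swap is kept only if it helps."""
    modules = list(placement)
    best = total_wirelength(placement, nets)
    for _ in range(passes):
        improved = False
        for a, b in itertools.combinations(modules, 2):
            placement[a], placement[b] = placement[b], placement[a]      # trial swap
            cost = total_wirelength(placement, nets)
            if cost < best:
                best, improved = cost, True                              # accept the swap
            else:
                placement[a], placement[b] = placement[b], placement[a]  # undo it
        if not improved:
            break
    return placement, best

if __name__ == "__main__":
    slots = [(x, y) for x in range(3) for y in range(3)]
    placement = {f"m{i}": slots[i] for i in range(9)}                    # hypothetical modules
    nets = [("m0", "m5"), ("m1", "m8", "m3"), ("m2", "m4"), ("m6", "m7")]
    _, wl = pairwise_interchange(placement, nets)
    print("final total wirelength:", wl)
```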
Abstract:
In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general purpose distributed computing systems. ELISA uses estimated state information based upon periodic exchange of exact state information between neighbouring nodes to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting the load transfer and status exchange within the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect information algorithm and is superior to other load balancing schemes based on the random sharing and Ni-Hwang algorithms. A sensitivity analysis to study the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large-dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison to other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general purpose distributed computing systems.
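A minimal sketch of the estimated-load idea (hypothetical parameters and topology; not ELISA itself): each node exchanges exact queue lengths with its buddy set only periodically, estimates buddy loads in between from the last exact values and a crude rate estimate, and transfers work only when its own queue exceeds the estimated buddy minimum by a threshold.

```python
import random

class Node:
    """Toy node that schedules using estimated buddy loads."""
    def __init__(self, name, buddies):
        self.name, self.buddies = name, buddies
        self.queue = 0             # jobs currently queued
        self.last_exact = {}       # buddy -> queue length at last status exchange
        self.rate = {}             # buddy -> estimated arrivals per step

    def exchange_status(self, nodes, period):
        """Periodic exact status exchange, restricted to the buddy set."""
        for b in self.buddies:
            prev = self.last_exact.get(b, nodes[b].queue)
            self.rate[b] = (nodes[b].queue - prev) / period   # crude per-step estimate
            self.last_exact[b] = nodes[b].queue

    def estimate(self, buddy, elapsed):
        return self.last_exact.get(buddy, 0) + self.rate.get(buddy, 0) * elapsed

    def schedule(self, nodes, elapsed, threshold=2):
        """Transfer one job to the buddy with the lowest estimated load, if worthwhile."""
        if not self.queue:
            return
        target = min(self.buddies, key=lambda b: self.estimate(b, elapsed))
        if self.queue - self.estimate(target, elapsed) > threshold:
            self.queue -= 1
            nodes[target].queue += 1

if __name__ == "__main__":
    random.seed(1)
    period = 5
    nodes = {n: Node(n, [b for b in "ABC" if b != n]) for n in "ABC"}
    for step in range(30):
        for n in nodes.values():
            n.queue += random.randint(0, 2)        # job arrivals
        if step % period == 0:                     # exact state exchange
            for n in nodes.values():
                n.exchange_status(nodes, period)
        for n in nodes.values():                   # scheduling on estimated state
            n.schedule(nodes, step % period)
        for n in nodes.values():
            n.queue = max(0, n.queue - 1)          # each node serves one job per step
    print({name: node.queue for name, node in nodes.items()})
```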
Abstract:
In this paper, we first recast the generalized symmetric eigenvalue problem, where the underlying matrix pencil consists of symmetric positive definite matrices, into an unconstrained minimization problem by constructing an appropriate cost function. We then extend it to the case of multiple eigenvectors using an inflation technique. Based on this asymptotic formulation, we derive a quasi-Newton-based adaptive algorithm for estimating the required generalized eigenvectors in the data case. The resulting algorithm is modular and parallel, and it is globally convergent with probability one. We also analyze the effect of inexact inflation on the convergence of this algorithm and that of inexact knowledge of one of the matrices (in the pencil) on the resulting eigenstructure. Simulation results demonstrate that the performance of this algorithm is almost identical to that of the rank-one updating algorithm of Karasalo. Further, the performance of the proposed algorithm has been found to remain stable even over 1 million updates without suffering from any error accumulation problems.
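For context, one standard way to recast the symmetric-definite pencil problem Aw = lambda*Bw as unconstrained minimization is through the generalized Rayleigh quotient; the sketch below (with hypothetical random matrices, and not necessarily the paper's exact cost function) simply verifies that the gradient of the quotient vanishes at a generalized eigenvector, which is what makes minimization-based (for example quasi-Newton) adaptive schemes workable, with inflation used to deflate already-found directions.

```python
import numpy as np
from scipy.linalg import eigh

def rayleigh(w, A, B):
    """One common unconstrained recast of A w = lambda B w (the paper's cost may differ)."""
    return (w @ A @ w) / (w @ B @ w)

def rayleigh_grad(w, A, B):
    den = w @ B @ w
    return (2.0 / den) * (A @ w - rayleigh(w, A, B) * B @ w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
    A = M @ M.T + 0.1 * np.eye(5)            # symmetric positive definite
    B = N @ N.T + 0.1 * np.eye(5)            # symmetric positive definite
    vals, vecs = eigh(A, b=B)                # reference generalized eigenpairs
    w = vecs[:, 0]                           # eigenvector of the smallest eigenvalue
    # The gradient of the quotient vanishes exactly at generalized eigenvectors, so a
    # descent/quasi-Newton scheme driven by this gradient converges to an eigenpair.
    print(np.allclose(rayleigh(w, A, B), vals[0]),
          np.linalg.norm(rayleigh_grad(w, A, B)))
```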
Abstract:
Recognizing similarities and deriving relationships among protein molecules is a fundamental requirement in present-day biology. Similarities can be present at various levels, which can be detected through comparison of protein sequences or their structural folds. In some cases, similarities that are obscure at these levels may be present merely in the substructures at their binding sites. Inferring functional similarities between protein molecules by comparing their binding sites is still largely exploratory and not yet a routine protocol. One of the main reasons for this is the limitation in the choice of appropriate analytical tools that can compare binding sites with high sensitivity. To benefit from the enormous amount of structural data that is being rapidly accumulated, it is essential to have high-throughput tools that enable large-scale binding site comparison. Results: Here we present a new algorithm, PocketMatch, for comparison of binding sites in a frame-invariant manner. Each binding site is represented by 90 lists of sorted distances capturing the shape and chemical nature of the site. The sorted arrays are then aligned using an incremental alignment method and scored to obtain PMScores for pairs of sites. A comprehensive sensitivity analysis and an extensive validation of the algorithm have been carried out. A comparison with other site-matching algorithms is also presented. Perturbation studies, in which the geometry of a given site was retained but the residue types were changed randomly, indicated that chance similarities were virtually non-existent. Our analysis also demonstrates that shape information alone is insufficient to discriminate between diverse binding sites, unless combined with the chemical nature of the amino acids. Conclusion: A new algorithm has been developed to compare binding sites in an accurate, efficient and high-throughput manner. Though the representation used is conceptually simplistic, we demonstrate that, along with the new alignment strategy used, it is sufficient to enable binding-site comparison with high sensitivity. A novel methodology has also been presented for validating the algorithm for accuracy and sensitivity with respect to geometry and chemical nature of the site. The method is also fast, taking about 1/250th of a second for one comparison on a single processor. A parallel version on BlueGene has also been implemented.
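A much-simplified stand-in for the sorted-distance alignment idea (the list keys, tolerance and normalisation below are illustrative, not PocketMatch's actual 90-list definition or its PMScore): two sites, each represented as sorted distance lists, are compared by greedily aligning matching distances. Because only sorted distance lists are compared, the score does not depend on the reference frame of either site.

```python
def align_sorted(d1, d2, tol=0.5):
    """Greedy incremental alignment of two sorted distance lists; returns matched count."""
    i = j = matches = 0
    d1, d2 = sorted(d1), sorted(d2)
    while i < len(d1) and j < len(d2):
        if abs(d1[i] - d2[j]) <= tol:
            matches += 1
            i += 1
            j += 1
        elif d1[i] < d2[j]:
            i += 1
        else:
            j += 1
    return matches

def site_score(site_a, site_b, tol=0.5):
    """Score two sites given as {list_id: [distances...]}; normalisation is illustrative."""
    total_matches = total_len = 0
    for key in set(site_a) | set(site_b):
        a, b = site_a.get(key, []), site_b.get(key, [])
        total_matches += align_sorted(a, b, tol)
        total_len += min(len(a), len(b))
    return total_matches / total_len if total_len else 0.0

if __name__ == "__main__":
    site1 = {"CA-CA": [3.8, 5.1, 6.4], "CB-CB": [4.0, 7.2]}   # hypothetical distance lists
    site2 = {"CA-CA": [3.9, 5.0, 9.9], "CB-CB": [4.1, 7.0]}
    print(round(site_score(site1, site2), 2))
```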
Abstract:
This paper presents a new approach for assessing power system voltage stability based on an artificial feed-forward neural network (FFNN). The approach uses real and reactive power, as well as voltage vectors for generators and load buses, to train the neural net (NN). The inputs to the NN are generated from offline training data with various simulated loading conditions using a conventional voltage stability algorithm based on the L-index. The performance of the trained NN is investigated on two systems under various voltage stability assessment conditions. The main advantage is that the proposed approach is fast, robust and accurate, and can be used online for predicting the L-indices of all the power system buses simultaneously. The method can also be used effectively to determine local and global stability margins for further improvement measures.
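A minimal sketch of the regression set-up (synthetic data and a hypothetical stability proxy, not the paper's training set, inputs or network size): a one-hidden-layer feed-forward net mapping bus quantities such as (P, Q, |V|) to an L-index-like scalar, trained by plain gradient descent on mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))                 # [P, Q, V] per sample (synthetic)
y = ((0.6 * X[:, 0] + 0.4 * X[:, 1]) * (1.2 - X[:, 2])).reshape(-1, 1)  # hypothetical proxy

n_hidden = 8
W1 = rng.standard_normal((3, n_hidden)) * 0.5
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X)                 # backward pass (MSE gradients)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final training MSE:", float((err ** 2).mean()))
```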
Abstract:
Algorithms for planning quasistatic attitude maneuvers based on the Jacobian of the forward kinematic mapping of fully-reversed (FR) sequences of rotations are proposed in this paper. An FR sequence of rotations is a series of finite rotations that consists of initial rotations about the axes of a body-fixed coordinate frame and subsequent rotations that undo these initial rotations. Unlike the Jacobian of conventional systems such as a robot manipulator, the Jacobian of the system manipulated through FR rotations is a null matrix at the identity, which leads to a total breakdown of the traditional Jacobian formulation. Therefore, the Jacobian algorithm is reformulated and implemented so as to synthesize an FR sequence for a desired rotational displacement. The Jacobian-based algorithm presented in this paper identifies particular six-rotation FR sequences that synthesize desired orientations. We developed the single-step and the multiple-step Jacobian methods to accomplish a given task using six-rotation FR sequences. The single-step Jacobian method identifies a specific FR sequence for a given desired orientation and the multiple-step Jacobian algorithm synthesizes physically feasible FR rotations on an optimal path. A comparison with existing algorithms verifies the fast convergence ability of the Jacobian-based algorithm. Unlike closed-form solutions to the inverse kinematics problem, the Jacobian-based algorithm determines the most efficient FR sequence that yields a desired rotational displacement through a simple and inexpensive numerical calculation. The procedure presented here is useful for those motion planning problems wherein the Jacobian is singular or null.
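A toy numerical illustration of the key difficulty (the FR sequence used here is a simplified variant in which the undoing rotations are applied about the same body-fixed axis labels, in the same order, with negated angles; the paper's six-rotation FR sequences are defined more carefully): the finite-difference Jacobian of the FR map is null at the identity, which is why the conventional Jacobian formulation breaks down and must be reworked.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about the x, y or z axis."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def fr_map(angles, axes=("x", "y", "z")):
    """Toy fully-reversed sequence: rotations about body-fixed x, y, z, then the negated
    angles about the same axis labels in the same order; first-order terms cancel."""
    R = np.eye(3)
    for ax, a in zip(axes, angles):
        R = R @ rot(ax, a)
    for ax, a in zip(axes, angles):
        R = R @ rot(ax, -a)
    return R

def orientation_vector(R):
    """Axis-angle-like coordinates of R (adequate for this illustration)."""
    return 0.5 * np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def numerical_jacobian(angles, eps=1e-6):
    J = np.zeros((3, len(angles)))
    f0 = orientation_vector(fr_map(angles))
    for k in range(len(angles)):
        pert = np.array(angles, dtype=float)
        pert[k] += eps
        J[:, k] = (orientation_vector(fr_map(pert)) - f0) / eps
    return J

if __name__ == "__main__":
    print(np.round(numerical_jacobian([0.0, 0.0, 0.0]), 4))   # null at the identity
    print(np.round(numerical_jacobian([0.3, -0.2, 0.5]), 3))  # non-trivial away from it
```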
Abstract:
This work deals with the formulation and implementation of an energy-momentum conserving algorithm for conducting the nonlinear transient analysis of structures, within the framework of stress-based hybrid elements. Hybrid elements, which are based on a two-field variational formulation, are much less susceptible to locking than conventional displacement-based elements within the static framework. We show that this advantage carries over to the transient case, so that not only are the solutions obtained more accurate, but they are obtained in fewer iterations. We demonstrate the efficacy of the algorithm on a wide range of problems such as ones involving dynamic buckling, complicated three-dimensional motions, et cetera.
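For a feel of what "energy-momentum conserving" means in the time-stepping sense, here is a minimal sketch for a linear spring-mass system using the implicit midpoint rule, which preserves the energy of this linear problem to round-off; the paper's contribution is constructing conserving schemes of this flavour for nonlinear transient problems discretized with stress-based hybrid elements (the parameters below are arbitrary, and this is not the paper's formulation).

```python
def midpoint_step(q, v, dt, k=4.0, m=1.0):
    """One implicit midpoint step for m*q'' = -k*q, solved in closed form:
    (v1 - v0)/dt = -(k/m)*(q0 + q1)/2 and (q1 - q0)/dt = (v0 + v1)/2."""
    a = k * dt / (2.0 * m)
    denom = 1.0 + a * dt / 2.0
    q_new = (q * (1.0 - a * dt / 2.0) + dt * v) / denom
    v_new = v - a * (q + q_new)
    return q_new, v_new

if __name__ == "__main__":
    k, m = 4.0, 1.0
    q, v, dt = 1.0, 0.0, 0.1
    e0 = 0.5 * m * v ** 2 + 0.5 * k * q ** 2
    for _ in range(1000):
        q, v = midpoint_step(q, v, dt, k, m)
    e1 = 0.5 * m * v ** 2 + 0.5 * k * q ** 2
    print(abs(e1 - e0))    # energy drift is at round-off level for this linear case
```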
Abstract:
A simple sequential thinning algorithm for peeling off pixels along contours is described. An adaptive algorithm, obtained by incorporating shape adaptivity into this sequential process, is also given. The distortions in the skeleton at right-angle and acute-angle corners are minimized in the adaptive algorithm. The asymmetry of the skeleton, which is characteristic of sequential algorithms and is due to the presence of T-corners in some even-thickness patterns, is eliminated. The performance (in terms of time requirements and shape preservation) is compared with that of a modern thinning algorithm.
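A bare-bones sequential contour-peeling pass is sketched below (illustrative; the deletion conditions are the classical neighbour-count and crossing-number tests, not the paper's adaptive rules). Pixels are deleted immediately as they are visited, which is exactly what produces the skeleton asymmetry the adaptive algorithm is designed to remove: on the solid block in the example, the skeleton ends up along one side of the pattern rather than through its middle.

```python
def neighbours(img, r, c):
    """8-neighbourhood in clockwise order, starting from the pixel above."""
    return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
            img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

def transitions(nb):
    """Number of 0 -> 1 transitions in the circular neighbour sequence."""
    return sum(nb[i] == 0 and nb[(i + 1) % 8] == 1 for i in range(8))

def peel_once(img):
    """One sequential pass: delete contour pixels whose removal preserves connectivity."""
    changed = False
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            if img[r][c] != 1:
                continue
            nb = neighbours(img, r, c)
            if 2 <= sum(nb) <= 6 and transitions(nb) == 1:
                img[r][c] = 0          # deleted immediately: sequential, not parallel
                changed = True
    return changed

def thin(img):
    while peel_once(img):
        pass
    return img

if __name__ == "__main__":
    img = [[0] * 8] + [[0] + [1] * 6 + [0] for _ in range(4)] + [[0] * 8]
    for row in thin(img):
        print("".join("#" if v else "." for v in row))
```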
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned by using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that will at least satisfy the simple constraints. This is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that is not yet assigned a value. These lower bounds play a vital role in the algorithm and are obtained efficiently by updating older lower bounds. The present algorithm also incorporates a check of whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
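An illustrative backtracking feasibility test in the same spirit (not the paper's procedure: the "simple" constraints here are just variable bounds and pairwise differences, and the graph-based lower-bound updating is reduced to a direct scan over the assigned variables):

```python
def feasible(bounds, simple, general, assignment=None):
    """Backtracking feasibility test for integer constraints (illustrative).
    bounds:  list of (lo, hi) per variable
    simple:  list of (i, j, c) meaning x_i >= x_j + c, used for pruning
    general: callables over a full assignment returning True/False"""
    if assignment is None:
        assignment = []
    k = len(assignment)
    if k == len(bounds):
        return all(g(assignment) for g in general)
    lo, hi = bounds[k]
    # tighten the lower bound of x_k from simple constraints whose right side is assigned
    for i, j, c in simple:
        if i == k and j < k:
            lo = max(lo, assignment[j] + c)
    for v in range(lo, hi + 1):
        trial = assignment + [v]
        # extend a partial vector only if it still satisfies every applicable simple constraint
        if all(trial[i] >= trial[j] + c
               for i, j, c in simple if i < len(trial) and j < len(trial)):
            if feasible(bounds, simple, general, trial):
                return True
    return False

if __name__ == "__main__":
    bounds = [(0, 5), (0, 5), (0, 5)]
    simple = [(1, 0, 2), (2, 1, 1)]                 # x1 >= x0 + 2, x2 >= x1 + 1
    general = [lambda x: x[0] + x[1] + x[2] <= 7]   # a non-simple linear constraint
    print(feasible(bounds, simple, general))        # True, e.g. x = (0, 2, 3)
```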
Abstract:
We consider the problem of tracking a maneuvering target in clutter. In such an environment, missed detections and false alarms make it impossible to decide, with certainty, the origin of received echoes. Processing radar returns in cluttered environments consists of three functions: 1) target detection and plot formation, 2) plot-to-track association, and 3) track updating. Present approaches have two inadequacies: 1) optimization of detection characteristics has not been considered, and 2) the features that can be used in the plot-to-track correlation process are restricted to a specific class. This paper presents a new approach to overcome these limitations. The approach facilitates tracking of a maneuvering target in clutter and improves tracking performance for weak targets.
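For reference, a classical gated nearest-neighbour plot-to-track association step (a baseline for function 2 above, not the paper's proposed approach; the gate value and positions are hypothetical):

```python
import math

def associate(plots, tracks, gate=3.0):
    """Gated nearest-neighbour plot-to-track association.
    plots/tracks: dicts of id -> (x, y) measured / predicted position."""
    pairs, used = [], set()
    for tid, tpos in tracks.items():
        best, best_d = None, gate
        for pid, ppos in plots.items():
            if pid in used:
                continue
            d = math.dist(tpos, ppos)
            if d < best_d:
                best, best_d = pid, d
        if best is not None:
            pairs.append((tid, best))   # plot assigned to track
            used.add(best)
        # a track with no plot inside its gate goes un-updated (missed detection)
    return pairs

if __name__ == "__main__":
    tracks = {"T1": (0.0, 0.0), "T2": (10.0, 10.0)}
    plots = {"P1": (0.5, -0.2), "P2": (9.4, 10.3), "P3": (25.0, 3.0)}  # P3: false alarm
    print(associate(plots, tracks))     # [('T1', 'P1'), ('T2', 'P2')]
```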
Abstract:
A modified least mean fourth (LMF) adaptive algorithm applicable to non-stationary signals is presented. The performance of the proposed algorithm is studied by simulation for non-stationarities in bandwidth, centre frequency and gain of a stochastic signal. These non-stationarities are in the form of linear, sinusoidal and jump variations of the parameters. The proposed LMF adaptation is found to have better parameter tracking capability than the LMS adaptation for the same speed of convergence.
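A minimal LMF system-identification sketch (step size, filter length and data are illustrative, not the paper's modified algorithm): the only structural change from LMS is that the weight update uses the cubed error, w <- w + mu*e^3*x instead of w <- w + mu*e*x.

```python
import numpy as np

def lmf_identify(x, d, taps=4, mu=1e-3):
    """Least-mean-fourth adaptive filter (cf. LMS, which would use mu * e * xn)."""
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]   # most recent samples first
        e = d[n] - w @ xn                  # a-priori error
        w += mu * (e ** 3) * xn            # gradient step on the fourth-power cost
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([0.6, -0.3, 0.2, 0.1])
    x = rng.standard_normal(20000)
    d = np.convolve(x, true_w)[: len(x)] + 0.01 * rng.standard_normal(len(x))
    w = lmf_identify(x, d)
    print(np.round(w, 2))                  # estimates approach true_w = [0.6, -0.3, 0.2, 0.1]
```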
Abstract:
An on-line algorithm is developed for the location of single cross-point faults in a PLA (FPLA). The main feature of the algorithm is the determination of a fault set corresponding to the response obtained for a failed test. For the apparently small number of faults in this set, all other tests are generated and a fault table is formed. Subsequently, an adaptive procedure is used to diagnose the fault. A functional equivalence test is carried out to determine the actual fault class if the adaptive testing results in a set of faults with identical tests. The large amount of computation time and storage required to determine, a priori, all the fault equivalence classes or to construct a fault dictionary is not needed here. A brief study of functional equivalence among the cross-point faults is also made.
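A toy illustration of the "fault set from a failed test" step (the two-product PLA, the missing-crosspoint-only fault model and the observed response below are all hypothetical): given the input vector of the failed test and the observed outputs, collect the single faults whose faulty response matches the observation.

```python
# Toy PLA: the AND plane gives each product term as {input index: required value};
# the OR plane lists which product terms feed each output.
AND_PLANE = {"p0": {0: 1, 1: 0}, "p1": {1: 1, 2: 1}}
OR_PLANE = {"z0": ["p0"], "z1": ["p0", "p1"]}

def evaluate(and_plane, or_plane, x):
    prods = {p: all(x[i] == v for i, v in lits.items()) for p, lits in and_plane.items()}
    return {z: any(prods[p] for p in plist) for z, plist in or_plane.items()}

def single_crosspoint_faults(and_plane):
    """Missing-device faults in the AND plane: one literal dropped from one product."""
    faults = []
    for p, lits in and_plane.items():
        for i in lits:
            faulty = {q: dict(l) for q, l in and_plane.items()}
            del faulty[p][i]
            faults.append((f"missing {p}/x{i}", faulty))
    return faults

def consistent_faults(x, observed):
    """Fault set: single faults whose response to the failed test matches the observation."""
    return [name for name, fplane in single_crosspoint_faults(AND_PLANE)
            if evaluate(fplane, OR_PLANE, x) == observed]

if __name__ == "__main__":
    test = (1, 1, 0)
    observed = dict(evaluate(AND_PLANE, OR_PLANE, test))
    observed["z1"] = True                 # suppose the test failed: z1 read as 1 instead of 0
    print(consistent_faults(test, observed))
```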
Abstract:
A new parallel algorithm for transforming an arithmetic infix expression into a parse tree is presented. The technique is based on a result due to Fischer (1980) which enables the construction of the parse tree by appropriately scanning the vector of precedence values associated with the elements of the expression. The algorithm presented here is suitable for execution on a shared-memory model of an SIMD machine with no read/write conflicts permitted. It uses O(n) processors and has a time complexity of O(log₂n), where n is the expression length. Parallel algorithms for generating code for an SIMD machine are also presented.
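A sequential sketch of the precedence-vector idea (the paper's contribution is carrying out this construction in parallel on an SIMD machine; the precedence weights here are illustrative): each operator receives its base precedence plus a large weight per enclosing parenthesis, and the parse tree is built by recursively splitting at the rightmost minimum-precedence operator.

```python
OPS = {"+": 1, "-": 1, "*": 2, "/": 2}

def precedence_vector(tokens):
    """Precedence value per token: operator precedence plus a weight for each enclosing
    parenthesis; operands and parentheses get None."""
    depth, prec = 0, []
    for t in tokens:
        if t == "(":
            depth += 1; prec.append(None)
        elif t == ")":
            depth -= 1; prec.append(None)
        elif t in OPS:
            prec.append(OPS[t] + 10 * depth)
        else:
            prec.append(None)           # operand
    return prec

def build_tree(tokens, prec, lo, hi):
    """Root = rightmost operator of minimum precedence in tokens[lo:hi] (left-associative)."""
    ops = [i for i in range(lo, hi) if prec[i] is not None]
    if not ops:
        return [t for t in tokens[lo:hi] if t not in "()"][0]
    root = min(reversed(ops), key=lambda i: prec[i])
    return (tokens[root],
            build_tree(tokens, prec, lo, root),
            build_tree(tokens, prec, root + 1, hi))

if __name__ == "__main__":
    tokens = list("a*(b+c)-d")
    print(build_tree(tokens, precedence_vector(tokens), 0, len(tokens)))
    # ('-', ('*', 'a', ('+', 'b', 'c')), 'd')
```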
Abstract:
Concurrency control (CC) algorithms are important in distributed database systems to ensure the consistency of the database. A number of such algorithms are available in the literature. The issue of performance evaluation of these algorithms has been recognized to be important, yet only a few studies have been carried out in this direction. This paper deals with the performance evaluation of a CC algorithm proposed by Rosenkrantz et al. through a detailed simulation study. In doing so, the algorithm has been modified so that it can, within itself, take care of redundancy in the database. The influences of various system parameters and of the transaction profile on the response time and on the degree of conflict are considered. The entire study has been carried out using the programming language SIMULA on a DEC-1090 system.
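For context, the system-level concurrency control of Rosenkrantz et al. is usually described in terms of timestamp-based wait-die and wound-wait conflict resolution; a minimal sketch of those two rules is given below (illustrative only; the simulated algorithm additionally has to handle redundant copies of the data).

```python
def resolve_conflict(requester_ts, holder_ts, scheme="wound-wait"):
    """Timestamp-based conflict resolution; a smaller timestamp means an older transaction."""
    if scheme == "wait-die":
        # Older requester waits; a younger requester is aborted ("dies") and restarts.
        return "wait" if requester_ts < holder_ts else "abort-requester"
    # wound-wait: an older requester wounds (aborts) the holder; a younger requester waits.
    return "abort-holder" if requester_ts < holder_ts else "wait"

if __name__ == "__main__":
    print(resolve_conflict(requester_ts=5, holder_ts=9))                     # abort-holder
    print(resolve_conflict(requester_ts=9, holder_ts=5, scheme="wait-die"))  # abort-requester
```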
Abstract:
This paper considers the applicability of the least mean fourth (LMF) power gradient adaptation criterion, with 'advantage', for signals associated with Gaussian noise when the associated noise power estimate is not known. The proposed method, as an adaptive spectral estimator, is found to provide performance superior to least mean square (LMS) adaptation for the same (or even lower) speed of convergence for signals having a sufficiently high signal-to-Gaussian-noise ratio. The results include a comparison of the performance of the LMS tapped delay line, LMF tapped delay line, LMS lattice and LMF lattice algorithms, with Burg's block data method as reference. Signals such as sinusoids in noise and stochastic signals such as EEG are considered in this study.