237 results for ripple algorithm
at Indian Institute of Science - Bangalore - India
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussions are couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
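As a rough illustration of the tree algorithm mentioned above, the sketch below builds a random geometric graph, forms a BFS tree rooted at an "operator station" node, and aggregates the maximum from the leaves toward the root. All names and parameters (node count, radius, root choice) are illustrative, not the paper's; in particular, the scheduled, interference-aware computation time analyzed in the paper is not modeled here. The one-message-per-tree-edge aggregation does, however, make the Θ(n) energy behaviour of tree-based max computation concrete.

```python
import math, random
from collections import defaultdict, deque

# Minimal sketch: max computation over a BFS tree of a random geometric graph.
# Parameters (n, radius, root placement) are illustrative, not from the paper.
def random_geometric_graph(n, radius, seed=0):
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = defaultdict(list)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return pts, adj

def tree_max(n, adj, root=0, values=None):
    values = values or [random.random() for _ in range(n)]
    # Build a BFS tree rooted at the operator station.
    parent = {root: None}
    order = []
    q = deque([root])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    # Aggregate the max from the leaves toward the root: one message per tree
    # edge, so the total number of transmissions grows linearly in the nodes reached.
    agg = {u: values[u] for u in order}
    for u in reversed(order[1:]):
        p = parent[u]
        agg[p] = max(agg[p], agg[u])
    return agg[root], len(order)   # max over the component reached from the root

pts, adj = random_geometric_graph(n=200, radius=0.12)
print(tree_max(200, adj))
```

Note that with a small radius the graph may be disconnected, so the returned maximum is over the component containing the root; the paper works in the regime where the network is connected with high probability.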
Abstract:
High-power voltage-source inverters (VSI) are often switched at low frequencies due to switching loss constraints. Numerous low-switching-frequency PWM techniques have been reported, which are quite successful in reducing the total harmonic distortion under open-loop conditions at such low operating frequencies. However, the line current still contains low-frequency components (though of reduced amplitudes), which are fed back to the current loop controller during closed-loop operation. Since the harmonic frequencies are quite low and are not much higher than the bandwidth of the current loop, they are amplified by the current controller, causing oscillations and instability. Hence, only the fundamental current should be fed back. Filtering out these harmonics from the measured current (before feeding back) leads to phase shift and attenuation of the fundamental component, while not eliminating the harmonics totally. This paper proposes a method for on-line extraction of the fundamental current in induction motor drives modulated with low-switching-frequency PWM. The proposed method is validated through simulations in MATLAB/Simulink. Further, the proposed algorithm is implemented on a Cyclone FPGA-based controller board. Experimental results are presented for an R-L load.
Abstract:
In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general purpose distributed computing systems. ELISA uses estimated state information, based upon periodic exchange of exact state information between neighbouring nodes, to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting the load transfer and status exchange to the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect-information algorithm and is superior to other load balancing schemes based on random sharing and the Ni-Hwang algorithm. A sensitivity analysis to study the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large-dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison to other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general purpose distributed computing systems.
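A hedged sketch of the estimation idea follows: exact queue lengths are exchanged only periodically, and in between each node extrapolates its buddies' loads from their (assumed known) arrival and service rates before deciding on a transfer. The buddy sets, rates, exchange period, and transfer rule below are placeholders, not ELISA's actual policy.

```python
import random

# Sketch in the spirit of ELISA: exact state is exchanged only every
# `exchange_period` steps; between exchanges, buddy loads are *estimated*.
class Node:
    def __init__(self, nid, arrival_rate, service_rate, buddies):
        self.nid = nid
        self.queue = 0
        self.arrival_rate = arrival_rate
        self.service_rate = service_rate
        self.buddies = buddies          # ids of the nodes in this node's buddy set
        self.last_reported = {}         # buddy id -> (queue length, report time)

def estimate(buddy, reported, now):
    q, t = reported
    # Extrapolate the last exact report using the buddy's known arrival/service rates.
    return max(0.0, q + (buddy.arrival_rate - buddy.service_rate) * (now - t))

def step(nodes, now, exchange_period=10):
    for node in nodes:                              # local arrivals and departures
        node.queue += random.random() < node.arrival_rate
        node.queue -= node.queue > 0 and random.random() < node.service_rate
    if now % exchange_period == 0:                  # periodic exchange of exact state
        for node in nodes:
            for b in node.buddies:
                node.last_reported[b] = (nodes[b].queue, now)
    for node in nodes:                              # transfer based on estimated state
        ests = {b: estimate(nodes[b], node.last_reported.get(b, (0, 0)), now)
                for b in node.buddies}
        target = min(ests, key=ests.get)
        if node.queue > ests[target] + 1:
            node.queue -= 1
            nodes[target].queue += 1

nodes = [Node(i, 0.4, 0.5, [(i - 1) % 4, (i + 1) % 4]) for i in range(4)]
for t in range(200):
    step(nodes, t)
print([n.queue for n in nodes])
```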
Abstract:
A pulsewidth modulation (PWM) technique is proposed for minimizing the rms torque ripple in inverter-fed induction motor drives subject to a given average switching frequency of the inverter. The proposed PWM technique is a combination of optimal continuous modulation and discontinuous modulation. The proposed technique is evaluated both theoretically and experimentally and is compared with well-known PWM techniques. It is shown that the proposed method reduces the rms torque ripple by about 30% at the rated speed of the motor drive, compared to conventional space vector PWM.
Abstract:
In this paper, we first recast the generalized symmetric eigenvalue problem, where the underlying matrix pencil consists of symmetric positive definite matrices, into an unconstrained minimization problem by constructing an appropriate cost function. We then extend it to the case of multiple eigenvectors using an inflation technique. Based on this asymptotic formulation, we derive a quasi-Newton-based adaptive algorithm for estimating the required generalized eigenvectors in the data case. The resulting algorithm is modular and parallel, and it is globally convergent with probability one. We also analyze the effect of inexact inflation on the convergence of this algorithm and that of inexact knowledge of one of the matrices (in the pencil) on the resulting eigenstructure. Simulation results demonstrate that the performance of this algorithm is almost identical to that of the rank-one updating algorithm of Karasalo. Further, the performance of the proposed algorithm has been found to remain stable even over 1 million updates without suffering from any error accumulation problems.
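To make the recasting idea concrete, here is a hedged NumPy sketch that finds the smallest generalized eigenpair of a symmetric positive definite pencil (A, B) by gradient descent on the classical Rayleigh quotient. The paper's specific cost function, its quasi-Newton update, the inflation step for additional eigenvectors, and the adaptive (streaming-data) setting are not reproduced; this only illustrates turning the eigenproblem into an unconstrained minimization.

```python
import numpy as np

# Hedged sketch: smallest generalized eigenpair of (A, B), A = A^T, B = B^T > 0,
# via gradient descent on the Rayleigh quotient r(w) = (w^T A w) / (w^T B w).
def generalized_eigvec(A, B, steps=5000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[0])
    for _ in range(steps):
        Aw, Bw = A @ w, B @ w
        r = (w @ Aw) / (w @ Bw)
        grad = 2.0 * (Aw - r * Bw) / (w @ Bw)   # gradient of the Rayleigh quotient
        w = w - lr * grad
        w = w / np.linalg.norm(w)
    r = (w @ A @ w) / (w @ B @ w)
    return r, w

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5)); A = M + M.T
N = rng.standard_normal((5, 5)); B = N @ N.T + 5 * np.eye(5)
r, w = generalized_eigvec(A, B)
# Compare with a direct solver: the smallest eigenvalue of B^{-1} A.
print(r, np.min(np.linalg.eigvals(np.linalg.solve(B, A)).real))
```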
Abstract:
Recognizing similarities and deriving relationships among protein molecules is a fundamental requirement in present-day biology. Similarities can be present at various levels, which can be detected through comparison of protein sequences or their structural folds. In some cases, similarities that are obscure at these levels may be present merely in the substructures at their binding sites. Inferring functional similarities between protein molecules by comparing their binding sites is still largely exploratory and not yet a routine protocol. One of the main reasons for this is the limitation in the choice of appropriate analytical tools that can compare binding sites with high sensitivity. To benefit from the enormous amount of structural data that is being rapidly accumulated, it is essential to have high-throughput tools that enable large-scale binding site comparison. Results: Here we present a new algorithm, PocketMatch, for comparison of binding sites in a frame-invariant manner. Each binding site is represented by 90 lists of sorted distances capturing the shape and chemical nature of the site. The sorted arrays are then aligned using an incremental alignment method and scored to obtain PMScores for pairs of sites. A comprehensive sensitivity analysis and an extensive validation of the algorithm have been carried out. A comparison with other site-matching algorithms is also presented. Perturbation studies, in which the geometry of a given site was retained but the residue types were changed randomly, indicated that chance similarities were virtually non-existent. Our analysis also demonstrates that shape information alone is insufficient to discriminate between diverse binding sites, unless combined with the chemical nature of amino acids. Conclusion: A new algorithm has been developed to compare binding sites in an accurate, efficient and high-throughput manner. Though the representation used is conceptually simplistic, we demonstrate that, along with the new alignment strategy used, it is sufficient to enable binding site comparison with high sensitivity. A novel methodology has also been presented for validating the algorithm for accuracy and sensitivity with respect to the geometry and chemical nature of the site. The method is also fast, taking about 1/250th of a second for one comparison on a single processor. A parallel version on BlueGene has also been implemented.
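The core representation is easy to illustrate. The hedged sketch below reduces a site to a single sorted list of pairwise distances and aligns two such lists within a tolerance to produce a crude similarity score; the actual PocketMatch keeps 90 lists grouped by residue chemical type and uses its own PMScore normalization, neither of which is reproduced here, and the coordinates and tolerance are made-up examples.

```python
import math
from itertools import combinations

# Hedged sketch of the sorted-distance idea: a site becomes a sorted list of
# pairwise distances, and two sites are compared by a merge-style incremental
# alignment of their lists within a distance tolerance.
def sorted_distances(coords):
    return sorted(math.dist(a, b) for a, b in combinations(coords, 2))

def align_score(d1, d2, tol=0.5):
    i = j = matched = 0
    while i < len(d1) and j < len(d2):
        if abs(d1[i] - d2[j]) <= tol:
            matched += 1; i += 1; j += 1
        elif d1[i] < d2[j]:
            i += 1
        else:
            j += 1
    return matched / max(len(d1), len(d2))   # fraction of matched distance pairs

site_a = [(0, 0, 0), (3.8, 0, 0), (0, 4.1, 0), (2.0, 2.0, 3.0)]
site_b = [(0.2, 0, 0), (3.9, 0.1, 0), (0, 4.0, 0.2), (2.1, 1.9, 3.1)]
print(align_score(sorted_distances(site_a), sorted_distances(site_b)))
```

The frame invariance noted in the abstract comes from using only pairwise distances, which do not change under rotation or translation of the site.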
Abstract:
This paper presents a new approach for assessing power system voltage stability based on an artificial feedforward neural network (FFNN). The approach uses real and reactive power, as well as voltage vectors for generators and load buses, to train the neural network (NN). The input features of the NN are generated from offline training data with various simulated loading conditions using a conventional voltage stability algorithm based on the L-index. The performance of the trained NN is investigated on two systems under various voltage stability assessment conditions. The main advantage is that the proposed approach is fast, robust, and accurate, and can be used online for predicting the L-indices of all the power system buses simultaneously. The method can also be effectively used to determine local and global stability margins for further improvement measures.
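For a sense of the workflow, the sketch below trains a small feedforward regressor to map per-bus (P, Q, |V|) features to L-index-like targets. Everything here is a placeholder: the data are synthetic stand-ins, the network size is arbitrary, and in the paper the targets come from a conventional L-index computation over simulated loading conditions rather than the toy formula used below.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch: feedforward net predicting one L-index-like value per load bus.
rng = np.random.default_rng(0)
n_samples, n_buses = 500, 6
P = rng.uniform(0.2, 1.5, (n_samples, n_buses))     # active power loading (p.u.)
Q = rng.uniform(0.1, 0.8, (n_samples, n_buses))     # reactive power loading (p.u.)
V = 1.05 - 0.05 * (P + Q) / 2 + rng.normal(0, 0.01, (n_samples, n_buses))
X = np.hstack([P, Q, V])
# Toy stand-in for L-index targets (heavier loading, lower voltage -> larger L).
y = np.clip((P + Q) / 3 * (1.1 - V), 0, 1)

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
model.fit(X[:400], y[:400])
print("test R^2:", model.score(X[400:], y[400:]))
```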
Abstract:
Algorithms for planning quasistatic attitude maneuvers based on the Jacobian of the forward kinematic mapping of fully-reversed (FR) sequences of rotations are proposed in this paper. An FR sequence of rotations is a series of finite rotations that consists of initial rotations about the axes of a body-fixed coordinate frame and subsequent rotations that undo these initial rotations. Unlike the Jacobian of conventional systems such as a robot manipulator, the Jacobian of the system manipulated through FR rotations is a null matrix at the identity, which leads to a total breakdown of the traditional Jacobian formulation. Therefore, the Jacobian algorithm is reformulated and implemented so as to synthesize an FR sequence for a desired rotational displacement. The Jacobian-based algorithm presented in this paper identifies particular six-rotation FR sequences that synthesize desired orientations. We developed the single-step and the multiple-step Jacobian methods to accomplish a given task using six-rotation FR sequences. The single-step Jacobian method identifies a specific FR sequence for a given desired orientation and the multiple-step Jacobian algorithm synthesizes physically feasible FR rotations on an optimal path. A comparison with existing algorithms verifies the fast convergence ability of the Jacobian-based algorithm. Unlike closed-form solutions to the inverse kinematics problem, the Jacobian-based algorithm determines the most efficient FR sequence that yields a desired rotational displacement through a simple and inexpensive numerical calculation. The procedure presented here is useful for those motion planning problems wherein the Jacobian is singular or null.
Abstract:
This work deals with the formulation and implementation of an energy-momentum conserving algorithm for conducting the nonlinear transient analysis of structures, within the framework of stress-based hybrid elements. Hybrid elements, which are based on a two-field variational formulation, are much less susceptible to locking than conventional displacement-based elements within the static framework. We show that this advantage carries over to the transient case, so that not only are the solutions obtained more accurate, but they are obtained in fewer iterations. We demonstrate the efficacy of the algorithm on a wide range of problems such as ones involving dynamic buckling, complicated three-dimensional motions, et cetera.
Abstract:
A simple sequential thinning algorithm for peeling off pixels along contours is described. An adaptive algorithm, obtained by incorporating shape adaptivity into this sequential process, is also given. The distortions in the skeleton at right-angle and acute-angle corners are minimized in the adaptive algorithm. The asymmetry of the skeleton, which is characteristic of sequential algorithms and is due to the presence of T-corners in some even-thickness patterns, is eliminated. The performance (in terms of time requirements and shape preservation) is compared with that of a modern thinning algorithm.
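The paper's own peeling and shape-adaptivity rules are not reproduced here; as a stand-in, the sketch below implements the classic Zhang-Suen thinning, which is named plainly and shown only to make the idea of iteratively removing contour pixels until a one-pixel-wide skeleton remains concrete. The test pattern is an arbitrary rectangular blob.

```python
import numpy as np

# Classic Zhang-Suen thinning (not the paper's algorithm), as an illustration
# of iterative contour peeling on a binary image.
def zhang_suen(img):
    img = img.astype(np.uint8).copy()
    def neighbours(r, c):   # p2..p9, clockwise starting from the pixel above
        return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r in range(1, img.shape[0] - 1):
                for c in range(1, img.shape[1] - 1):
                    if img[r, c] == 0:
                        continue
                    p = neighbours(r, c)
                    B = sum(p)                                    # nonzero neighbours
                    A = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= B <= 6 and A == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((r, c))
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((r, c))
            for r, c in to_delete:                                # peel this layer
                img[r, c] = 0
            changed = changed or bool(to_delete)
    return img

pattern = np.zeros((12, 12), dtype=np.uint8)
pattern[3:9, 2:10] = 1          # a thick rectangular blob to be thinned
print(zhang_suen(pattern))
```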
Abstract:
To detect errors in decision tables, one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned by using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that will at least satisfy the simple constraints. This is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that is not yet assigned a value. These lower bounds play a vital role in the algorithm and are obtained in an efficient manner by updating older lower bounds. Our present algorithm also incorporates an idea by which it can be checked whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
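A hedged sketch of the backtracking skeleton follows. The paper's weighted acyclic graph of simple constraints and its incremental lower-bound updates are not reproduced; the pruning here is a simpler stand-in that uses explicit per-variable bounds and rejects a partial assignment as soon as any constraint whose variables are all assigned is violated. The example constraints at the bottom are illustrative.

```python
# Hedged sketch of backtracking feasibility checking for integer variables.
def feasible(bounds, constraints, partial=()):
    """bounds: list of (lo, hi) per variable.
    constraints: list of (coeffs, rhs) meaning sum(coeffs[i] * x[i]) <= rhs."""
    k = len(partial)
    if k == len(bounds):
        return all(sum(c * x for c, x in zip(coeffs, partial)) <= rhs
                   for coeffs, rhs in constraints)
    lo, hi = bounds[k]
    for v in range(lo, hi + 1):
        cand = partial + (v,)
        ok = True
        for coeffs, rhs in constraints:
            # Check a constraint as soon as all the variables it uses are assigned.
            if all(c == 0 for c in coeffs[len(cand):]):
                if sum(c * x for c, x in zip(coeffs, cand)) > rhs:
                    ok = False
                    break
        if ok and feasible(bounds, constraints, cand):
            return True
    return False

# Example: x0 + x1 + x2 <= 6 and 2*x0 - x1 <= 1, with 0 <= xi <= 4.
print(feasible([(0, 4)] * 3, [((1, 1, 1), 6), ((2, -1, 0), 1)]))
```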
Abstract:
We consider the problem of tracking a maneuvering target in clutter. In such an environment, missed detections and false alarms make it impossible to decide, with certainty, the origin of received echoes. Processing radar returns in cluttered environments consists of three functions: 1) target detection and plot formation, 2) plot-to-track association, and 3) track updating. Two inadequacies of the present approaches are that 1) optimization of detection characteristics has not been considered, and 2) features that can be used in the plot-to-track correlation process are restricted to a specific class. This paper presents a new approach to overcome these limitations. This approach facilitates tracking of a maneuvering target in clutter and improves tracking performance for weak targets.
Abstract:
A modified least mean fourth (LMF) adaptive algorithm applicable to non-stationary signals is presented. The performance of the proposed algorithm is studied by simulation for non-stationarities in bandwidth, centre frequency and gain of a stochastic signal. These non-stationarities are in the form of linear, sinusoidal and jump variations of the parameters. The proposed LMF adaptation is found to have better parameter tracking capability than the LMS adaptation for the same speed of convergence.
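For readers unfamiliar with LMF, the standard update is w ← w + μ e³ x (steepest descent on E[e⁴]), compared with the LMS update w ← w + μ e x (steepest descent on E[e²]). The sketch below shows both on a stationary system-identification toy problem; the step sizes and FIR test system are arbitrary, and the paper's specific modification for non-stationary signals and its tracking experiments are not reproduced here.

```python
import numpy as np

# Hedged sketch: standard LMF vs LMS adaptive filters identifying an unknown FIR system.
def adapt(x, d, order=4, mu_lms=0.01, mu_lmf=0.01):
    w_lms = np.zeros(order)
    w_lmf = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]          # most recent samples first
        e_lms = d[n] - w_lms @ u
        e_lmf = d[n] - w_lmf @ u
        w_lms += mu_lms * e_lms * u               # LMS: gradient of e^2
        w_lmf += mu_lmf * e_lmf ** 3 * u          # LMF: gradient of e^4
    return w_lms, w_lmf

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.3, 0.2, 0.1])               # unknown FIR system to identify
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(adapt(x, d))                                # both should approach h
```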
Abstract:
An on-line algorithm is developed for the location of single cross-point faults in a PLA (FPLA). The main feature of the algorithm is the determination of a fault set corresponding to the response obtained for a failed test. For the apparently small number of faults in this set, all other tests are generated and a fault table is formed. Subsequently, an adaptive procedure is used to diagnose the fault. A functional equivalence test is carried out to determine the actual fault class if the adaptive testing results in a set of faults with identical tests. The large amount of computation time and storage required to determine, a priori, all the fault equivalence classes or to construct a fault dictionary is not needed here. A brief study of functional equivalence among the cross-point faults is also made.
Abstract:
A new parallel algorithm for transforming an arithmetic infix expression into a parse tree is presented. The technique is based on a result due to Fischer (1980) which enables the construction of the parse tree by appropriately scanning the vector of precedence values associated with the elements of the expression. The algorithm presented here is suitable for execution on a shared-memory model of an SIMD machine with no read/write conflicts permitted. It uses O(n) processors and has a time complexity of O(log² n), where n is the expression length. Parallel algorithms for generating code for an SIMD machine are also presented.
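For contrast with the parallel, precedence-vector-based construction described above (which is not reproduced here), the following is a hedged sketch of the familiar sequential shunting-yard construction of a parse tree from an infix token list; the operator set and tuple-based node representation are illustrative choices.

```python
# Sequential shunting-yard parse-tree construction (for contrast only; the
# paper's algorithm runs in O(log^2 n) time on O(n) SIMD processors).
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def parse_tree(tokens):
    operands, operators = [], []
    def reduce_top():
        op = operators.pop()
        right, left = operands.pop(), operands.pop()
        operands.append((op, left, right))        # parse-tree node as a tuple
    for tok in tokens:
        if tok == '(':
            operators.append(tok)
        elif tok == ')':
            while operators[-1] != '(':
                reduce_top()
            operators.pop()                       # discard '('
        elif tok in PREC:
            while operators and operators[-1] != '(' and PREC[operators[-1]] >= PREC[tok]:
                reduce_top()
            operators.append(tok)
        else:
            operands.append(tok)                  # operand leaf
    while operators:
        reduce_top()
    return operands[0]

print(parse_tree(['a', '+', 'b', '*', '(', 'c', '-', 'd', ')']))
# -> ('+', 'a', ('*', 'b', ('-', 'c', 'd')))
```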