117 results for PLC and SCADA programming
at Indian Institute of Science - Bangalore - India
Abstract:
This paper obtains a new, accurate model for sensitivity in power systems and uses it in conjunction with linear programming for the solution of load-shedding problems with minimum loss of load. For cases where the error in the sensitivity model increases, other linear programming and quadratic programming models have been developed, taking currents at load buses as variables rather than load powers. A weighted error criterion has been used to take the priority schedule into account; it can be either a linear or a quadratic function of the errors, and, depending upon the function, appropriate programming techniques are to be employed.
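As an illustrative sketch (not the paper's sensitivity model; loads, weights, and the relief requirement below are invented): the weighted minimum-shedding LP with one relief constraint and box bounds, i.e. minimize sum(w_i * s_i) subject to sum(s_i) >= deficit and 0 <= s_i <= load_i, is solved exactly by shedding the lowest-weight (lowest-priority) loads first.

```python
def min_weighted_shedding(loads, weights, deficit):
    """Per-bus shed amounts minimizing sum(w_i * s_i) for a total relief `deficit`."""
    shed = [0.0] * len(loads)
    remaining = deficit
    # Visit buses in order of increasing priority weight (cheapest to shed first).
    for i in sorted(range(len(loads)), key=lambda i: weights[i]):
        cut = min(loads[i], remaining)
        shed[i] = cut
        remaining -= cut
        if remaining <= 0:
            break
    return shed

# Three load buses; bus 1 has the lowest priority weight, so it is shed first.
print(min_weighted_shedding(loads=[50.0, 80.0, 30.0],
                            weights=[3.0, 1.0, 2.0],
                            deficit=60.0))  # -> [0.0, 60.0, 0.0]
```

With a quadratic error criterion this greedy shortcut no longer applies and a quadratic programming solver would be needed, as the abstract notes.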
Abstract:
This paper presents methodologies for incorporating phasor measurements into a conventional state estimator. The angle measurements obtained from Phasor Measurement Units are handled as angle-difference measurements rather than being incorporated directly. Handling them in this manner overcomes the problems arising from the choice of reference bus. Current measurements obtained from Phasor Measurement Units are treated as equivalent pseudo-voltage measurements at the neighboring buses. Two solution approaches, namely the normal-equations approach and the linear-programming approach, are presented to show how the Phasor Measurement Unit measurements can be handled. A comparative evaluation of both approaches is also presented. Test results on the IEEE 14-bus system are presented to validate both approaches.
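A minimal sketch of why angle differences sidestep the reference-bus problem (bus numbers and angle values below are invented, not taken from the paper's test system): raw PMU angles all shift by a constant when the reference changes, but pairwise differences are invariant, so they can be fed to any estimator regardless of its reference choice.

```python
def angle_differences(pmu_angles_deg, pairs):
    """Turn absolute PMU bus angles into reference-free difference measurements."""
    return [pmu_angles_deg[i] - pmu_angles_deg[j] for i, j in pairs]

angles = {1: 0.0, 4: -8.8, 7: -13.4}               # degrees, arbitrary reference
shifted = {b: a + 30.0 for b, a in angles.items()}  # same state, new reference bus
pairs = [(4, 1), (7, 4)]

print(angle_differences(angles, pairs))
print(angle_differences(shifted, pairs))  # identical: the reference shift cancels
```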
Abstract:
We analyze the spectral zero-crossing rate (SZCR) properties of transient signals and show that the SZCR contains accurate localization information about the transient. For a train of pulses containing transient events, the SZCR computed on a sliding-window basis is useful in locating the impulse locations accurately. We present the properties of the SZCR on standard stylized signal models and then show how it may be used to estimate the epochs in speech signals. We also present comparisons with some state-of-the-art techniques that are based on the group-delay function. Experiments on real speech show that the proposed SZCR technique is better than other group-delay-based epoch detectors. In the presence of noise, a comparison with the zero-frequency filtering (ZFF) technique and the Dynamic Programming Projected Phase-Slope Algorithm (DYPSA) showed that the performance of the SZCR technique is better than that of DYPSA but inferior to that of ZFF. For highpass-filtered speech, where ZFF performance suffers drastically, the identification rates of the SZCR are better than those of DYPSA.
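A hedged illustration of the localization property (a stylized single-impulse model, not the paper's speech pipeline): for a frame holding one impulse at offset d, the real part of its DFT is cos(2*pi*k*d/N), which changes sign roughly 2*d times across the spectrum, so counting spectral sign changes recovers the impulse position inside the window.

```python
import cmath

def spectral_zero_crossings(frame):
    """Count sign changes in the real part of the frame's DFT (O(N^2) direct form)."""
    N = len(frame)
    re = [sum(frame[n] * cmath.exp(-2j * cmath.pi * k * n / N).real
              for n in range(N)) for k in range(N)]
    signs = [1 if v >= 0 else -1 for v in re]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

N, d = 64, 10
frame = [0.0] * N
frame[d] = 1.0                          # single transient (impulse) at offset d
print(spectral_zero_crossings(frame))   # -> 20, i.e. 2*d: SZCR encodes the location
```

On a sliding window, this count rises and falls as the impulse moves through the frame, which is what makes the SZCR useful for epoch detection.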
Abstract:
The bearing capacity factor N-c for axially loaded piles in clays whose cohesion increases linearly with depth has been estimated numerically under the undrained (phi = 0) condition. The study follows the lower bound limit analysis in conjunction with finite elements and linear programming. A new formulation is proposed for solving an axisymmetric geotechnical stability problem. The variation of N-c with embedment ratio is obtained for several rates of increase of soil cohesion with depth; a special case is also examined in which the pile base is placed on a stiff clay stratum overlaid by a soft clay layer. It was noticed that the magnitude of N-c reaches an almost constant value for embedment ratios greater than unity. The roughness of the pile base and shaft affects the magnitude of N-c only marginally. The results obtained from the present study are found to compare quite well with different numerical solutions reported in the literature.
Abstract:
The vertical uplift resistance of two interfering rigid rough strip anchors embedded horizontally in sand at shallow depths has been examined. The analysis is performed by using an upper bound theorem of limit analysis in combination with finite elements and linear programming. It is specified that both anchors are loaded to failure simultaneously at the same magnitude of the failure load. For different clear spacings (S) between the anchors, the magnitude of the efficiency factor (xi(gamma)) is determined. On account of interference, the magnitude of xi(gamma) is found to reduce continuously with a decrease in the spacing between the anchors. The results from the numerical analysis were found to compare reasonably well with the available theoretical data from the literature.
Abstract:
Many novel computer architectures, like array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of such machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects have not been discussed in detail since no new problems are encountered there. The language DFL embodies the concepts of functional programming, but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
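A hedged sketch of the data-flow execution model the paper builds on (the interpreter and example graph are invented for illustration; this is not DFL itself): each operator fires as soon as all of its input operands are available, so data-independent operators could run in parallel.

```python
def run_dataflow(graph, inputs):
    """graph: node -> (function, operand names). Fires nodes as operands arrive."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        ready = [n for n, (_, args) in pending.items()
                 if all(a in values for a in args)]   # all operands present?
        if not ready:
            raise ValueError("deadlock: no node is ready to fire")
        for n in ready:                  # these firings are data-independent
            fn, args = pending.pop(n)
            values[n] = fn(*(values[a] for a in args))
    return values

# (a + b) * (a - b): the two inner operators have no mutual data dependency.
graph = {
    "sum":  (lambda x, y: x + y, ["a", "b"]),
    "diff": (lambda x, y: x - y, ["a", "b"]),
    "prod": (lambda x, y: x * y, ["sum", "diff"]),
}
print(run_dataflow(graph, {"a": 7, "b": 3})["prod"])  # -> 40
```

"sum" and "diff" become ready in the same pass, which is exactly the concurrency a data flow computer exploits.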
Abstract:
In this paper we show the applicability of Ant Colony Optimisation (ACO) techniques to the pattern classification problem that arises in tool wear monitoring. In an earlier study, artificial neural networks and genetic programming were successfully applied to the tool wear monitoring problem. ACO is a recent addition to evolutionary computation techniques that has gained attention for its ability to extract the underlying data relationships and express them in the form of simple rules. Rules are extracted for data classification using a training set of data points. These rules are then applied to the data in the testing/validation set to obtain the classification accuracy. A major attraction of ACO-based classification is the possibility of obtaining expert-system-like rules that can be applied directly by the user in his/her application. The classification accuracy obtained with the ACO-based approach is as good as that obtained with other biologically inspired techniques.
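The end product of such rule extraction is an ordered if-then rule list; the sketch below shows how an extracted list is applied to unseen samples. The features, thresholds, and class labels are invented for illustration and are not taken from the paper's tool-wear data.

```python
# Hypothetical rules of the expert-system-like form ACO-based classifiers emit.
rules = [
    ({"force": ("gt", 220.0), "vibration": ("gt", 0.8)}, "worn"),
    ({"force": ("le", 180.0)}, "fresh"),
]
DEFAULT = "partially_worn"

def classify(sample, rules, default=DEFAULT):
    """Return the class of the first rule whose conditions all hold."""
    for conditions, label in rules:
        ok = all((sample[f] > t) if op == "gt" else (sample[f] <= t)
                 for f, (op, t) in conditions.items())
        if ok:
            return label
    return default

print(classify({"force": 240.0, "vibration": 0.9}, rules))  # -> worn
print(classify({"force": 150.0, "vibration": 0.2}, rules))  # -> fresh
```

Classification accuracy on a validation set is then simply the fraction of samples for which `classify` matches the known label.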
Abstract:
By using the lower bound limit analysis in conjunction with finite elements and linear programming, the bearing capacity factors due to cohesion, surcharge and unit weight, respectively, have been computed for a circular footing with different values of phi. The recent axisymmetric formulation proposed by the authors under the phi = 0 condition, which is based on the concept that the magnitude of the hoop stress (sigma(theta)) remains closer to the least compressive normal stress (sigma(3)), is extended to a general c-phi soil. The computational results are found to compare quite well with the available numerical results from the literature. It is expected that the study will be useful for solving various axisymmetric geotechnical stability problems. Copyright (C) 2010 John Wiley & Sons, Ltd.
Abstract:
By incorporating the variation of peak soil friction angle (phi) with mean principal stress (sigma(m)), the effect of anchor width (B) on vertical uplift resistance of a strip anchor plate has been examined. The anchor was embedded horizontally in a granular medium. The analysis was performed using lower bound finite element limit analysis and linear programming. An iterative procedure, proposed recently by the authors, was implemented to incorporate the variation of phi with sigma(m). It is noted that for a given embedment ratio, with a decrease in anchor width (B), (i) the uplift factor (F-gamma) increases continuously and (ii) the average ultimate uplift pressure (q(u)) decreases quite significantly. The scale effect becomes more pronounced at greater embedment ratios.
Abstract:
We describe a compiler for the Flat Concurrent Prolog language on a message passing multiprocessor architecture. This compiler permits symbolic and declarative programming in the syntax of Guarded Horn Rules. The implementation has been verified and tested on the 64-node PARAM parallel computer developed by C-DAC (Centre for the Development of Advanced Computing, India). Flat Concurrent Prolog (FCP) is a logic programming language designed for concurrent programming and parallel execution. It is a process-oriented language, which embodies dataflow synchronization and guarded commands as its basic control mechanisms. An identical algorithm is executed on every processor in the network. We assume regular network topologies like mesh, ring, etc. Each node has a local memory. The algorithm comprises two important parts: reduction and communication. The most difficult task is to integrate the solutions of problems that arise in the implementation in a coherent and efficient manner. We have tested the efficacy of the compiler on various benchmark problems of the ICOT project that have been reported in the recent book by Evan Tick. These problems include Quicksort, 8-queens, and prime number generation. The results of the preliminary tests are favourable. We are currently examining issues like indexing and load balancing to further optimize our compiler.
Abstract:
Sixty-four sequences containing lectin domains with homologs of known three-dimensional structure were identified through a search of mycobacterial genomes. They appear to belong to the beta-prism II, the C-type, the Microcystis viridis (MV), and the beta-trefoil lectin folds. The first three always occur in conjunction with the LysM, the PI-PLC, and the beta-grasp domains, respectively, while mycobacterial beta-trefoil lectins are unaccompanied by any other domain. Thirty heparin-binding hemagglutinins (HBHA), already annotated, have also been included in the study although they have no homologs of known three-dimensional structure. The biological role of HBHA has been well characterized. A comparison between the sequences of the lectin from pathogenic and nonpathogenic mycobacteria provides insights into the carbohydrate binding region of the molecule, but the structure of the molecule is yet to be determined. A reasonable picture of the structural features of other mycobacterial proteins containing one or the other of the four lectin domains can be gleaned through the examination of homologous proteins, although the structure of none of them is available. Their biological role is also yet to be elucidated. The work presented here is among the first steps towards exploring the almost unexplored area of the structural biology of mycobacterial lectins. Proteins 2013. (c) 2012 Wiley Periodicals, Inc.
Abstract:
The stability of two long unsupported circular parallel tunnels aligned horizontally in fully cohesive and cohesive-frictional soils has been determined. An upper bound limit analysis in combination with finite elements and linear programming is employed to perform the analysis. For different clear spacings (S) between the tunnels, the stability of the tunnels is expressed in terms of a non-dimensional stability number (gamma(max)H/c), where H is the tunnel cover, c refers to the soil cohesion, and gamma(max) is the maximum unit weight of soil mass which the tunnels can bear without any collapse. The variation of the stability number with the tunnels' spacing has been established for different combinations of H/D, m and phi, where D refers to the diameter of each tunnel, phi is the internal friction angle of soil and m accounts for the rate at which the cohesion increases linearly with depth. The stability number reduces continuously with a decrease in the spacing between the tunnels. The optimum spacing (S-opt) between the two tunnels required to eliminate the interference effect increases with (i) an increase in H/D and (ii) a decrease in the values of both m and phi. The value of S-opt lies approximately in a range of 1.5D-3.5D with H/D = 1 and 7D-12D with H/D = 7. The results from the analysis compare reasonably well with the different solutions reported in the literature. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Space-vector-based pulse width modulation (PWM) for a voltage source inverter (VSI) offers flexibility in terms of different switching sequences. Numerical simulation is helpful to assess the performance of a PWM method before actual implementation. A quick-simulation tool to simulate a variety of space-vector-based PWM strategies for a two-level VSI-fed squirrel cage induction motor drive is presented. The simulator is developed using the C and Python programming languages, and also has a graphical user interface (GUI). With the prime focus being PWM strategies, the simulator is 40 times faster than MATLAB in terms of the actual time taken for a simulation. Simulation and experimental results are presented on a 5-hp ac motor drive.
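The core arithmetic any space-vector PWM simulator repeats every sub-cycle is the textbook dwell-time computation for the two active vectors and the zero vectors; the sketch below shows the standard formulas (this mirrors conventional SVPWM, not the paper's specific tool, and the drive ratings are invented).

```python
import math

def svpwm_dwell_times(v_ref, alpha, v_dc, t_s):
    """Active/zero-vector times for reference angle alpha (rad) inside a 60-deg sector."""
    k = math.sqrt(3) * t_s * v_ref / v_dc
    t1 = k * math.sin(math.pi / 3 - alpha)   # dwell time of the first active vector
    t2 = k * math.sin(alpha)                 # dwell time of the second active vector
    t0 = t_s - t1 - t2                       # remaining time shared by zero vectors
    return t1, t2, t0

# Mid-sector reference on a 400 V bus, 100 us sub-cycle.
t1, t2, t0 = svpwm_dwell_times(v_ref=200.0, alpha=math.pi / 6,
                               v_dc=400.0, t_s=100e-6)
print(t1, t2, t0)  # t1 == t2 at mid-sector; the three times sum to t_s
```

Different switching sequences, the flexibility the abstract refers to, amount to different orderings and splittings of these same dwell times within the sub-cycle.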
Abstract:
The 3-Dimensional Diffuse Optical Tomography (3-D DOT) image reconstruction algorithm is computationally complex, requiring excessive matrix computations, which hampers reconstruction in real time. In this paper, we present near-real-time 3-D DOT image reconstruction based on the Broyden approach for updating the Jacobian matrix. The Broyden method simplifies the algorithm by avoiding re-computation of the Jacobian matrix in each iteration. We have developed CPU and heterogeneous CPU/GPU code for 3-D DOT image reconstruction on the C and MATLAB programming platforms. We have used the Compute Unified Device Architecture (CUDA) programming framework and the CUDA linear algebra library (CULA) to utilize the massively parallel computational power of GPUs (NVIDIA Tesla K20c). The computation time achieved by the C-based implementation on a CPU/GPU system, for a 3-plane measurement and an FEM mesh of 19172 tetrahedral elements, is 806 milliseconds per iteration.
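A hedged sketch of the Broyden rank-one update that replaces Jacobian re-computation (shown on a toy 2x2 function, not on the DOT forward model): given a step dx and the observed change df in the residual, the Jacobian estimate is corrected so that it satisfies the secant condition J' dx = df.

```python
import numpy as np

def broyden_update(J, dx, df):
    """Rank-one Broyden correction: the minimal change to J with J' @ dx == df."""
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)

# Toy nonlinear map standing in for a forward model.
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2, v[0] * v[1]])
x_old, x_new = np.array([1.0, 2.0]), np.array([1.2, 1.9])
dx, df = x_new - x_old, f(x_new) - f(x_old)

J = np.array([[2.0, 4.0], [2.0, 1.0]])   # analytic Jacobian of f at x_old
J_upd = broyden_update(J, dx, df)
print(np.allclose(J_upd @ dx, df))       # secant condition holds exactly -> True
```

The update costs one outer product instead of a full Jacobian assembly, which is what yields the per-iteration savings the abstract reports.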
Abstract:
Plywood manufacture includes two fundamental stages. The first is to peel or separate logs into veneer sheets of different thicknesses. The second is to assemble veneer sheets into finished plywood products. At the first stage a decision must be made as to the number of different veneer thicknesses to be peeled and what these thicknesses should be. At the second stage, choices must be made as to how these veneers will be assembled into final products to meet certain constraints while minimizing wood loss. These decisions present a fundamental management dilemma. Costs of peeling, drying, storage, handling, etc. can be reduced by decreasing the number of veneer thicknesses peeled. However, a reduced set of thickness options may make it infeasible to produce the variety of products demanded by the market or increase wood loss by requiring less efficient selection of thicknesses for assembly. In this paper the joint problem of veneer choice and plywood construction is formulated as a nonlinear integer programming problem. A relatively simple optimal solution procedure is developed that exploits special problem structure. This procedure is examined on data from a British Columbia plywood mill. Restricted to the existing set of veneer thicknesses and plywood designs used by that mill, the procedure generated a solution that reduced wood loss by 79 percent, thereby increasing net revenue by 6.86 percent. Additional experiments were performed that examined the consequences of changing the number of veneer thicknesses used. Extensions are discussed that permit the consideration of more than one wood species.
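A toy version of the second-stage decision described above (thicknesses, target, and ply limits invented, not mill data): given a fixed set of veneer thicknesses, enumerate assemblies to find the one meeting a target panel thickness with minimum overshoot, i.e. minimum wood loss. The real problem is a nonlinear integer program over both stages; brute force only illustrates the objective.

```python
from itertools import combinations_with_replacement

def best_assembly(thicknesses, target, max_plies=7):
    """Cheapest-loss combination of plies whose total meets the target thickness."""
    best = None
    for n in range(3, max_plies + 1):            # plywood needs at least 3 plies
        for combo in combinations_with_replacement(thicknesses, n):
            total = sum(combo)
            if total >= target:                  # assembly must meet the spec
                loss = total - target            # excess thickness = wood loss
                if best is None or loss < best[0]:
                    best = (loss, combo)
    return best

loss, combo = best_assembly(thicknesses=[2.5, 3.2, 4.8], target=12.0)
print(combo, "overshoot:", round(loss, 2))
```

Shrinking the thickness set cuts peeling and handling costs but, as the abstract explains, can force combinations with larger overshoot, which is exactly the trade-off the joint formulation optimizes.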