989 results for signal-flow graphs
Abstract:
Block diagrams and signal-flow graphs are used to represent interconnected systems and to obtain their transfer functions. The reduction of signal-flow graphs is considered simpler than the reduction of block diagrams for systems with complex interrelationships. Signal-flow graph reduction can be carried out without graphical manipulation of diagrams, which makes it attractive for computational implementation. In this paper the authors propose a computational method for the direct reduction of signal-flow graphs. The method uses results, presented in this paper, on the calculation of literal determinants without symbolic mathematics tools. Cramer's rule is applied to solve a set of linear equations. A program in the MATLAB language for the reduction of signal-flow graphs with the proposed method is presented.
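As a rough illustration of the idea (not the authors' MATLAB program), the sketch below sets up the node equations x = A x + b u of a small, hypothetical three-node signal-flow graph and obtains the input-to-output transfer function numerically with Cramer's rule; the branch gains and node layout are invented for the example.

```python
import numpy as np

# Hypothetical 3-node signal-flow graph: node equations x = A @ x + b*u,
# where A[i, j] is the branch gain from node j to node i and u is the input.
A = np.array([[0.0, 0.0, 0.5],    # feedback branch from node 3 to node 1
              [2.0, 0.0, 0.0],    # branch from node 1 to node 2
              [0.0, 4.0, 0.0]])   # branch from node 2 to node 3
b = np.array([1.0, 0.0, 0.0])     # the input u feeds node 1

# Rearranged as (I - A) x = b*u.  By Cramer's rule, x_k = det(M_k) / det(I - A),
# where M_k is (I - A) with column k replaced by b.
M = np.eye(3) - A
delta = np.linalg.det(M)          # graph determinant (denominator of the transfer function)

M3 = M.copy()
M3[:, 2] = b                      # replace column 3 to solve for node 3 (the output)
transfer = np.linalg.det(M3) / delta

print(transfer)                   # same value as np.linalg.solve(M, b)[2]
```

For this invented graph the forward path gain is 2 * 4 = 8 and the single loop gain is 2 * 4 * 0.5 = 4, so the transfer function evaluates to 8 / (1 - 4), which the Cramer's-rule computation reproduces.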
Abstract:
This report describes research on flow graphs: labeled, directed, acyclic graphs that abstract representations used in a variety of Artificial Intelligence applications. Flow graphs may be derived from flow grammars much as strings may be derived from string grammars; this derivation process forms a useful model for the stepwise refinement processes used in programming and other engineering domains. The central result of this report is a parsing algorithm for flow graphs. Given a flow grammar and a flow graph, the algorithm determines whether the grammar generates the graph and, if so, finds all possible derivations for it. The author has implemented the algorithm in LISP. The intent of this report is to make flow-graph parsing available as an analytic tool for researchers in Artificial Intelligence. The report explores the intuitions behind the parsing algorithm, contains numerous extensive examples of its behavior, and provides some guidance for those who wish to customize the algorithm to their own uses.
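A minimal sketch of the stepwise-refinement reading of derivation (not the report's LISP parser): a flow graph is held as labeled nodes plus directed edges, and one hypothetical flow-grammar production refines a node into a two-node subgraph while preserving its incoming and outgoing edges. All node and production names are invented.

```python
# Minimal sketch: a flow graph as labeled nodes plus directed edges, and one
# hypothetical production that refines a single node into a small subgraph.
graph = {
    "nodes": {"n1": "read", "n2": "process", "n3": "write"},
    "edges": [("n1", "n2"), ("n2", "n3")],
}

def refine(g, node, new_nodes, new_edges, entry, exit_):
    """Replace `node` with the subgraph (new_nodes, new_edges); edges into the
    old node are redirected to `entry`, edges out of it now leave from `exit_`."""
    g["nodes"].pop(node)
    g["nodes"].update(new_nodes)
    redirected = []
    for src, dst in g["edges"]:
        if dst == node:
            redirected.append((src, entry))
        elif src == node:
            redirected.append((exit_, dst))
        else:
            redirected.append((src, dst))
    g["edges"] = redirected + new_edges

# Apply a production  process -> filter ; aggregate  (names are invented).
refine(graph, "n2",
       new_nodes={"n2a": "filter", "n2b": "aggregate"},
       new_edges=[("n2a", "n2b")],
       entry="n2a", exit_="n2b")

print(graph["nodes"])   # {'n1': 'read', 'n3': 'write', 'n2a': 'filter', 'n2b': 'aggregate'}
print(graph["edges"])   # [('n1', 'n2a'), ('n2b', 'n3'), ('n2a', 'n2b')]
```

Parsing, as described in the report, is the inverse problem: given the final graph and the grammar, recover such refinement steps.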
Abstract:
Rod signals in the mammalian retina are thought to reach ganglion cells over the circuit rod-->rod depolarizing bipolar cell-->AII amacrine cell-->cone bipolar cells-->ganglion cells. A possible alternative pathway involves gap junctions linking the rods and cones, the circuit being rod-->cone-->cone bipolar cells-->ganglion cells. It is not clear whether this second pathway indeed relays rod signals to ganglion cells. We studied signal flow in the isolated rabbit retina with a multielectrode array, which allows the activity of many identified ganglion cells to be observed simultaneously while the preparation is stimulated with light and/or exposed to drugs. When transmission between rods and rod depolarizing bipolar cells was blocked by the glutamate agonist 2-amino-4-phosphonobutyric acid (APB), rod input to all On-center and briskly responding Off-center ganglion cells was dramatically reduced as expected. Off responses persisted, however, in Off-center sluggish and On-Off direction-selective ganglion cells. Presumably these responses were generated by the alternative pathway involving rod-cone junctions. This APB-resistant pathway may carry the major rod input to Off-center sluggish and On-Off direction-selective ganglion cells.
Abstract:
Methods are presented for developing synthesizable FFT cores. These are based on a modular approach in which parameterized commutator and processor blocks are cascaded to implement the computations required in many important FFT signal flow graphs. In addition, it is shown how a digital serial data organization can be used to produce systems that offer 100% processor utilization along with reductions in storage requirements. The approach has been used to create generators for the automated synthesis of FFT cores that are portable across a broad range of silicon technologies. The resulting chip designs are competitive with ones created using manual methods but with significant reductions in design times.
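The commutator and processor blocks referred to here are hardware structures; as a purely software analogue of the underlying radix-2 signal flow graph, the sketch below builds an FFT recursively from butterfly operations and checks it against numpy.fft. The function name is ours, not part of the generators described in the abstract.

```python
import cmath
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two.
    Each recursion level corresponds to one butterfly stage of the flow graph."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t                             # butterfly: upper output
        out[k + n // 2] = even[k] - t                    # butterfly: lower output
    return out

x = np.random.rand(16)
assert np.allclose(fft_radix2(list(x)), np.fft.fft(x))
```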
Abstract:
Methods are presented for developing synthesizable FFT cores. These are based on a modular approach in which parameterizable blocks are cascaded to implement the computations required across a range of typical FFT signal flow graphs. The underlying architectural approach combines the use of a digital serial data organization with generic commutator blocks to produce systems that offer 100% processor utilization with lower storage requirements than previous designs. The approach has been used to create generators for the automated synthesis of FFT cores that are portable across a broad range of silicon technologies. The resulting chip designs are competitive with those created using manual methods but with significant reductions in design times.
Abstract:
Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security-critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach, we have implemented and tested it in an existing data flow analysis toolkit.
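As a toy illustration of the idea (not the paper's model library), the sketch below tracks which bits of a value are classified and shows how a masking expression such as `x & 0x0F` blocks the classified high bits, i.e. downgrades the result. The bit-set representation and operator rules are our own simplification.

```python
# Each value carries a set of classified bit positions ("taint").
# Bitwise AND with a constant mask can only keep bits that the mask keeps,
# so classified bits that the mask zeroes are blocked (downgraded).

def taint_and_const(taint, mask):
    """Taint of (x & mask) when x has classified bits `taint`."""
    return {b for b in taint if (mask >> b) & 1}

def taint_shift_right(taint, n):
    """Taint of (x >> n): classified bits move down; bits shifted out vanish."""
    return {b - n for b in taint if b >= n}

# Suppose the top nibble of an 8-bit byte is classified.
classified_bits = {4, 5, 6, 7}

print(taint_and_const(classified_bits, 0x0F))   # set()        -> fully downgraded
print(taint_and_const(classified_bits, 0x3F))   # {4, 5}       -> partially downgraded
print(taint_shift_right(classified_bits, 4))    # {0, 1, 2, 3} -> still classified
```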
Abstract:
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of these machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects are not discussed in detail since no new problems were encountered. The language DFL embodies the concepts of functional programming but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
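A minimal sketch of the execution model the abstract describes (not DFL itself): operators are nodes of a data flow graph and fire as soon as all of their input operands are available, so data-independent operations could run in parallel.

```python
import operator

# A tiny data flow graph for  result = (a + b) * (a - b):
# each node lists its operator and the names of its input operands.
graph = {
    "sum":    (operator.add, ["a", "b"]),
    "diff":   (operator.sub, ["a", "b"]),
    "result": (operator.mul, ["sum", "diff"]),
}

def run(graph, inputs):
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        # Fire every node whose operands are all available; in a data flow
        # machine, all nodes fired in one round could execute in parallel.
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]
        for name in ready:
            op, deps = pending.pop(name)
            values[name] = op(*(values[d] for d in deps))
    return values

print(run(graph, {"a": 7, "b": 3})["result"])   # (7+3)*(7-3) = 40
```

Here "sum" and "diff" fire in the same round because neither depends on the other, which is exactly the concurrency a data flow computer exploits.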
Abstract:
The transforms dealt with in this paper are defined in terms of transform kernels that are Kronecker products of two or more component kernels. The signal flow-graph for the computation of such a transform is obtained using the flow-graphs for the component transforms as building blocks.
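The claim that a Kronecker-product transform can be built from its component transforms can be checked numerically: applying the composite kernel A ⊗ B in one step agrees with applying the component kernels A and B along the two axes separately. The matrices below are random stand-ins, not any particular transform from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))        # component kernel 1
B = rng.standard_normal((4, 4))        # component kernel 2
X = rng.standard_normal((3, 4))        # input signal, arranged as a 3x4 array
x = X.reshape(-1)                      # the same signal as a length-12 vector (row-major)

full = np.kron(A, B) @ x               # apply the composite kernel A (x) B in one step
by_parts = (A @ X @ B.T).reshape(-1)   # apply the component kernels along each axis

assert np.allclose(full, by_parts)
```

The same identity is what lets the flow-graph of the composite transform be assembled stage by stage from the component flow-graphs.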
Abstract:
The idea of extracting knowledge in process mining is a descendant of data mining. Both mining disciplines emphasise data flow and relations among elements in the data. Unfortunately, challenges have been encountered when working with these data flows and relations. One of the challenges is that the representation of the data flow between a pair of elements or tasks is insufficiently simplified and formulated, as it considers only a one-to-one data flow relation. In this paper, we discuss how the effectiveness of knowledge representation can be extended in both disciplines. To this end, we introduce a new representation of data flow and dependency formulation using a flow graph. The flow graph resolves the inability to represent other relation types, such as many-to-one and one-to-many relations. As an experiment, a new evaluation framework is applied to the Teleclaim process to show how this method can provide more precise results when compared with other representations.
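A small, invented illustration of the point about relation types (not the paper's formulation): a flow graph whose edges join a set of source tasks to a set of target tasks, so many-to-one and one-to-many data flows can be stated directly rather than decomposed into one-to-one pairs.

```python
# Invented claim-handling example: each edge connects *sets* of tasks,
# so many-to-one and one-to-many relations are explicit.
flow_graph = [
    ({"register claim"}, {"check policy", "check damage"}),   # one-to-many split
    ({"check policy", "check damage"}, {"decide"}),           # many-to-one join
    ({"decide"}, {"pay", "reject"}),                          # one-to-many choice
]

def successors(task):
    """All tasks that may receive data from `task`."""
    out = set()
    for sources, targets in flow_graph:
        if task in sources:
            out |= targets
    return out

print(successors("register claim"))    # {'check policy', 'check damage'}
print(successors("check damage"))      # {'decide'}
```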
Abstract:
Branch divergence is a very common performance problem in GPGPU computing, in which the execution of diverging branches is serialized so that only one control flow path executes at a time. The existing hardware mechanism for reconverging threads using a stack causes duplicate execution of code for unstructured control flow graphs. The stack mechanism also cannot effectively utilize the available parallelism among diverging branches, and the amount of nested divergence allowed is limited by the depth of the branch divergence stack. In this paper we propose a simple and elegant transformation to handle all of the above-mentioned problems. The transformation converts an unstructured CFG to a structured CFG without duplicating user code, incurring only a linear increase in the number of basic blocks and instructions. Our solution linearizes the CFG using a predicate variable. This mechanism reconverges the divergent threads as early as possible and also reduces the depth of the reconvergence stack. The available parallelism in nested branches can be effectively extracted by scheduling the basic blocks to reduce the effect of stalls due to memory accesses. It can also increase the execution efficiency of nested loops with different trip counts for different threads. We implemented the proposed transformation at the PTX level using the Ocelot compiler infrastructure and evaluated it using various benchmarks to show that it can be effective in handling the performance problem due to divergence in unstructured CFGs.
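As a rough software analogue of linearization with a predicate variable (the abstract does not spell out the actual PTX-level pass), the sketch below executes an invented unstructured CFG as one structured loop: a single "next block" variable selects the block to run, so all paths traverse the same loop and reconverge at its exit.

```python
# Invented example: an unstructured CFG executed as one structured loop.
# A single predicate-like variable (`block`) selects the next basic block, so
# control flow becomes a linear chain of guarded blocks instead of arbitrary jumps.

def run_linearized_cfg(n):
    block = "entry"
    acc = 0
    while block is not None:
        if block == "entry":
            i = 0
            block = "loop" if n > 0 else "exit"
        elif block == "loop":
            acc += i
            i += 1
            # Two different successors in the original CFG.
            block = "loop" if i < n else "tail"
        elif block == "tail":
            acc *= 2
            block = "exit"
        elif block == "exit":
            block = None          # common reconvergence point for all paths
    return acc

print(run_linearized_cfg(5))      # 2 * (0+1+2+3+4) = 20
print(run_linearized_cfg(0))      # 0
```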
Abstract:
A significant increase in the installation of rooftop photovoltaic (PV) systems in the low-voltage (LV) residential distribution network has resulted in overvoltage problems. Moreover, increasing peak demand creates voltage dip problems and makes the voltage profile even worse. Utilizing the reactive power capability of the PV inverter (RCPVI) can improve the voltage profile to some extent, but the resistive characteristic (high R/X ratio) of the distribution network limits the effectiveness of reactive power for voltage support. Battery energy storage (BES), in contrast, can store the excess PV generation during times of high solar insolation and supply the stored energy back to the grid during peak demand. A coordinated algorithm is developed in this paper to use the reactive capability of the PV inverter and BES with droop control. The proposed algorithm is capable of addressing severe voltage violation problems using RCPVI and BES. A signal flow is also described in this work to ensure smooth communication between all the equipment. Finally, the developed algorithm is validated on a test distribution network.
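The abstract does not give the algorithm's equations; the sketch below is only a generic illustration of droop-style coordination with invented per-unit thresholds and gains: the PV inverter's reactive power follows a piecewise-linear volt-var droop first, and the BES absorbs or injects active power only if the voltage remains outside its band.

```python
def pv_inverter_q(v, q_max=0.4, v_lo=0.95, v_hi=1.05):
    """Piecewise-linear volt-var droop (per-unit): absorb vars when the voltage
    is high, inject vars when it is low, zero inside the deadband."""
    if v > v_hi:
        return -min(q_max, q_max * (v - v_hi) / 0.05)
    if v < v_lo:
        return min(q_max, q_max * (v_lo - v) / 0.05)
    return 0.0

def bes_power(v_after_q, p_max=0.5, v_lo=0.94, v_hi=1.06):
    """Droop-controlled BES: charge (negative injection) on overvoltage,
    discharge on undervoltage, used only if reactive support was not enough."""
    if v_after_q > v_hi:
        return -min(p_max, p_max * (v_after_q - v_hi) / 0.04)
    if v_after_q < v_lo:
        return min(p_max, p_max * (v_lo - v_after_q) / 0.04)
    return 0.0

# Example: a severe midday overvoltage of 1.08 p.u.
v = 1.08
q = pv_inverter_q(v)              # reactive support from the PV inverter first
v_after_q = v - 0.01              # assumed small improvement from Q alone (high R/X)
p = bes_power(v_after_q)          # BES charges to absorb the remaining excess
print(q, p)
```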
Abstract:
The use of delayed coefficient adaptation in the least mean square (LMS) algorithm has enabled the design of pipelined architectures for real-time transversal adaptive filtering. However, the convergence speed of this delayed LMS (DLMS) algorithm, when compared with that of the standard LMS algorithm, is degraded and worsens with increasing adaptation delay. Existing pipelined DLMS architectures have large adaptation delays and hence degraded convergence speed. In this paper, we first present a pipelined DLMS architecture with minimal adaptation delay for any given sampling rate. The architecture is synthesized by applying a number of function-preserving transformations to the signal flow graph representation of the DLMS algorithm. With the use of carry-save arithmetic, the pipelined architecture can support high sampling rates, limited only by the delay of a full adder and a 2-to-1 multiplexer. In the second part of this paper, we extend the synthesis methodology described in the first part to synthesize pipelined DLMS architectures whose power dissipation meets a specified budget. This low-power architecture exploits the parallelism in the DLMS algorithm to meet the required computational throughput. The architecture exhibits a novel tradeoff between algorithmic performance (convergence speed) and power dissipation. (C) 1999 Elsevier Science B.V. All rights reserved.
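The DLMS recursion itself is standard: the coefficient update uses an error and regressor that are D samples old, w[n+1] = w[n] + mu * e[n-D] * x[n-D]. The sketch below is a plain software model of that recursion (not the pipelined carry-save architecture) applied to a toy system-identification problem; the step size and delay are illustrative.

```python
import numpy as np

def dlms(x, d, num_taps=8, mu=0.02, delay=4):
    """Delayed LMS: the weight update uses the error/regressor from `delay` samples ago."""
    w = np.zeros(num_taps)
    x_pad = np.concatenate([np.zeros(num_taps - 1), x])
    errors = np.zeros(len(x))
    for n in range(len(x)):
        u = x_pad[n:n + num_taps][::-1]          # current input regressor
        errors[n] = d[n] - w @ u                 # filter output error
        m = n - delay                            # use a delayed error and regressor
        if m >= 0:
            u_old = x_pad[m:m + num_taps][::-1]
            w = w + mu * errors[m] * u_old
    return w, errors

rng = np.random.default_rng(1)
h = rng.standard_normal(8)                       # unknown system to identify
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]                   # desired signal = system output
w, errors = dlms(x, d)
print(np.abs(errors[-100:]).mean())              # small residual error after convergence
```

Increasing `delay` while keeping `mu` fixed slows (and can destabilize) convergence, which is the degradation the paper's minimal-delay architecture is designed to limit.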