172 results for computer programming
Abstract:
The ergodic or long-run average cost control problem for a partially observed finite-state Markov chain is studied via the associated fully observed separated control problem for the nonlinear filter. Dynamic programming equations for the latter are derived, leading to existence and characterization of optimal stationary policies.
Abstract:
Combining the philosophies of nonlinear model predictive control and approximate dynamic programming, this paper presents a new suboptimal control design technique named model predictive static programming (MPSP), which is applicable to finite-horizon nonlinear problems with terminal constraints. The technique is computationally efficient and hence can potentially be implemented online. Its effectiveness is demonstrated by designing an ascent-phase guidance scheme for a ballistic missile propelled by solid motors. A comparison study with a conventional gradient method shows that the MPSP solution is quite close to the optimal solution.
Abstract:
This paper presents the programming of an FPGA (Field Programmable Gate Array) to emulate the dynamics of DC machines. The FPGA allows high-speed, high-precision real-time simulation. The described design includes a block-diagram representation of the DC machine containing all the arithmetic and logical operations. The real-time simulation of the machine in the FPGA is controlled through user interfaces: a keypad, an on-line LCD display and a digital-to-analog converter. This approach allows different electrical machines to be emulated by changing the parameters. A separately excited DC machine has been implemented and experimental results are presented.
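For illustration, a minimal software model of the kind of DC-machine dynamics such an emulator integrates is sketched below; the function name, parameter values and forward-Euler scheme are assumptions for this sketch, and the paper realizes the arithmetic in FPGA hardware rather than in software.

```python
def dc_machine_step(i, w, v, t_load, dt=1e-4,
                    R=1.0, L=0.05, K=0.5, J=0.01, B=0.001):
    """One forward-Euler step of a separately excited DC machine model.
    Electrical: L di/dt = v - R*i - K*w ; mechanical: J dw/dt = K*i - B*w - t_load.
    All parameter values are illustrative placeholders."""
    di = (v - R * i - K * w) / L
    dw = (K * i - B * w - t_load) / J
    return i + dt * di, w + dt * dw

# Spin-up from rest at 100 V with no load (illustrative usage)
i, w = 0.0, 0.0
for _ in range(20000):
    i, w = dc_machine_step(i, w, v=100.0, t_load=0.0)
print(round(w, 1))  # approaches v*K/(K*K + R*B), about 199 rad/s
```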
Abstract:
Eklundh's (1972) algorithm for transposing a large matrix stored on an external device such as a disc has been programmed and tested. A simple description of the computer implementation is given in this note.
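As a point of reference, the pass structure of Eklundh's method can be sketched in memory as below; this is a hypothetical illustration assuming an n x n matrix with n a power of two, whereas the actual algorithm keeps only the two paired rows from disc in core during each pass.

```python
def eklundh_transpose(a):
    """In-place sketch of the pass structure of Eklundh's transposition method
    for an n x n matrix, n a power of two (here a list of lists in memory)."""
    n = len(a)
    assert n & (n - 1) == 0, "size must be a power of two"
    s = 1
    while s < n:
        for i in range(n):
            if i & s:                # rows with bit s set are the partners
                continue
            r = i + s                # partner row differs from i only in bit s
            for j in range(n):
                if j & s:            # columns with bit s set cross over
                    a[i][j], a[r][j - s] = a[r][j - s], a[i][j]
        s <<= 1
    return a

# 4 x 4 usage example: rows become columns after log2(4) = 2 passes
m = [[4 * i + j for j in range(4)] for i in range(4)]
print(eklundh_transpose(m))
```

Each of the log2(n) passes pairs rows whose indices differ in a single bit and exchanges the matching half of their elements, which is why only two rows need to be resident at a time.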
Abstract:
Two algorithms that improve upon the sequent-peak procedure for reservoir capacity calculation are presented. The first incorporates storage-dependent losses (like evaporation losses) exactly as the standard linear programming formulation does. The second extends the first so as to enable designing with less than maximum reliability even when allowable shortfall in any failure year is also specified. Together, the algorithms provide a more accurate, flexible and yet fast method of calculating the storage capacity requirement in preliminary screening and optimization models.
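For context, the baseline sequent-peak recursion that both algorithms build on can be written in a few lines, as shown below; the extensions for storage-dependent losses and for designing with sub-maximal reliability are not reproduced here.

```python
def sequent_peak(inflows, demands):
    """Standard sequent-peak procedure: the required capacity is the largest
    cumulative deficit K_t, with K_t = max(0, K_{t-1} + demand_t - inflow_t).
    Inputs are per-period inflow and demand volumes in consistent units."""
    k, capacity = 0.0, 0.0
    for q, d in zip(inflows, demands):
        k = max(0.0, k + d - q)
        capacity = max(capacity, k)
    return capacity
```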
Abstract:
Database management systems offer a very reliable and attractive data organization for fast and economical information storage and processing in diverse applications. It is all the more important that this information be easily accessible to users with varied backgrounds, professional as well as casual, through a suitable data sublanguage. The language adopted here, APPLE, is one such language for relational database systems: it is completely nonprocedural and well suited to users with minimal or no programming background. It is supported by an access path model which permits the user to formulate completely nonprocedural queries expressed solely in terms of attribute names. The data description language (DDL) and data manipulation language (DML) features of APPLE are also discussed. The underlying relational database has been implemented with the help of the DATATRIEVE-11 utility for record and domain definition, which is available on the PDP-11/35. The package is coded in Pascal and MACRO-11. Further, most of the limitations of the DATATRIEVE-11 utility have been eliminated in the interface package.
Abstract:
Concurrency control (CC) algorithms are important in distributed database systems to ensure consistency of the database. A number of such algorithms are available in the literature, and evaluating their performance has been recognized as important, yet only a few studies have addressed it. This paper presents a detailed simulation study of the performance of a CC algorithm proposed by Rosenkrantz et al. In doing so, the algorithm has been modified so that it can, within itself, take care of redundancy in the database. The influence of various system parameters and of the transaction profile on the response time and on the degree of conflict is examined. The entire study has been carried out using the programming language SIMULA on a DEC-1090 system.
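The algorithm of Rosenkrantz et al. resolves conflicts using transaction timestamps; as one illustration only, the basic wound-wait rule from that line of work is sketched below. This is not the modified, redundancy-aware version simulated in the paper, and the function name is hypothetical.

```python
def wound_wait(requester_ts, holder_ts):
    """Wound-wait conflict resolution: on a lock conflict, an older requester
    (smaller timestamp) 'wounds' (restarts) the younger holder, while a younger
    requester waits.  Transactions only ever wait for older ones, so the
    waits-for graph cannot contain a cycle and deadlock is impossible."""
    if requester_ts < holder_ts:
        return "wound holder"   # requester is older: abort/restart the holder
    return "wait"               # requester is younger: queue behind the holder
```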
Abstract:
Polytypes have been simulated, treating them as analogues of a one-dimensional spin-half Ising chain with competing short-range and infinite-range interactions. Short-range interactions are treated as random variables to approximate conditions of growth from melt as well as from vapour. Besides ordered polytypes up to 12R, short stretches of long-period polytypes (up to 33R) have been observed. Such long-period sequences could be of significance in the context of Frank's theory of polytypism. The form of short-range interactions employed in the study has been justified by carrying out model potential calculations.
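A bare-bones Metropolis simulation of the underlying model, a spin-half chain with random short-range couplings and a weak infinite-range term, might look like the sketch below; the couplings, temperature and sweep counts are illustrative assumptions rather than the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(n=200, sweeps=500, j_short=1.0, j_inf=0.05, beta=2.0):
    """Metropolis sampling of a 1-D spin-half chain with quenched random
    nearest-neighbour couplings J_i (mimicking growth conditions) and a weak
    infinite-range coupling acting through the total magnetization:
    E = -sum_i J_i s_i s_{i+1} - (j_inf / (2n)) * (sum_i s_i)^2  (illustrative)."""
    s = rng.choice([-1, 1], size=n)
    j = rng.normal(j_short, 0.5, size=n - 1)      # random short-range interactions
    for _ in range(sweeps * n):
        k = rng.integers(n)
        de = 0.0                                  # energy change of flipping spin k
        if k > 0:
            de += 2 * j[k - 1] * s[k] * s[k - 1]
        if k < n - 1:
            de += 2 * j[k] * s[k] * s[k + 1]
        de += 2 * (j_inf / n) * s[k] * (s.sum() - s[k])
        if de <= 0 or rng.random() < np.exp(-beta * de):
            s[k] = -s[k]
    return s   # runs of equal spins play the role of polytype stacking sequences
```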
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the CellBE, which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and on the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work partitioning between the CPU and the GPU, which provides solutions within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software-pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps onto the CPU only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings.
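As a toy illustration of the partitioning decision that the ILP captures, the exhaustive sketch below assigns each filter to the CPU or the GPU so as to minimize the bottleneck load plus the transfer cost on cut edges; the cost model, names and scale are assumptions, and the paper's formulation additionally models buffer layout transformations and is solved by an ILP solver rather than by enumeration.

```python
from itertools import product

def partition_filters(cpu_cost, gpu_cost, transfer_cost, edges):
    """Toy exhaustive work partitioning: assign each filter to CPU (0) or GPU (1)
    so that max(CPU load, GPU load) plus the transfer cost of stream edges that
    cross the partition is minimized.  Only feasible for a handful of filters."""
    n = len(cpu_cost)
    best = (float("inf"), None)
    for assign in product((0, 1), repeat=n):
        cpu_load = sum(c for c, a in zip(cpu_cost, assign) if a == 0)
        gpu_load = sum(c for c, a in zip(gpu_cost, assign) if a == 1)
        xfer = sum(transfer_cost[e] for e in edges if assign[e[0]] != assign[e[1]])
        cost = max(cpu_load, gpu_load) + xfer
        if cost < best[0]:
            best = (cost, assign)
    return best

# three filters with (cpu, gpu) costs and two stream edges (illustrative data)
print(partition_filters([4, 6, 2], [1, 1, 5], {(0, 1): 3, (1, 2): 1}, [(0, 1), (1, 2)]))
```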
Abstract:
Plywood manufacture includes two fundamental stages. The first is to peel or separate logs into veneer sheets of different thicknesses. The second is to assemble veneer sheets into finished plywood products. At the first stage a decision must be made as to the number of different veneer thicknesses to be peeled and what these thicknesses should be. At the second stage, choices must be made as to how these veneers will be assembled into final products to meet certain constraints while minimizing wood loss. These decisions present a fundamental management dilemma. Costs of peeling, drying, storage, handling, etc. can be reduced by decreasing the number of veneer thicknesses peeled. However, a reduced set of thickness options may make it infeasible to produce the variety of products demanded by the market or increase wood loss by requiring less efficient selection of thicknesses for assembly. In this paper the joint problem of veneer choice and plywood construction is formulated as a nonlinear integer programming problem. A relatively simple optimal solution procedure is developed that exploits special problem structure. This procedure is examined on data from a British Columbia plywood mill. Restricted to the existing set of veneer thicknesses and plywood designs used by that mill, the procedure generated a solution that reduced wood loss by 79 percent, thereby increasing net revenue by 6.86 percent. Additional experiments were performed that examined the consequences of changing the number of veneer thicknesses used. Extensions are discussed that permit the consideration of more than one wood species.
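A drastically simplified version of the joint decision can be enumerated directly, as sketched below: choose a small set of veneer thicknesses, then build each demanded panel thickness from those veneers, counting any overshoot as wood loss. The numbers, ply limits and absence of balance or symmetry constraints are illustrative assumptions, not the mill's data or the paper's formulation.

```python
from itertools import combinations, combinations_with_replacement

def best_veneer_set(candidates, products, n_thicknesses=3, max_plies=5):
    """Toy enumeration of the joint veneer-choice / plywood-construction problem:
    pick n_thicknesses of the candidate veneer thicknesses, then assemble each
    product from up to max_plies plies, treating overshoot as wood loss."""
    best = (float("inf"), None)
    for chosen in combinations(candidates, n_thicknesses):
        loss, feasible = 0.0, True
        for target in products:
            builds = [sum(c) for n in range(1, max_plies + 1)
                      for c in combinations_with_replacement(chosen, n)
                      if sum(c) >= target]
            if not builds:
                feasible = False
                break
            loss += min(builds) - target     # thinnest feasible build minus demand
        if feasible and loss < best[0]:
            best = (loss, chosen)
    return best

# candidate veneer thicknesses (mm) and demanded panel thicknesses (illustrative)
print(best_veneer_set([2.5, 3.0, 3.5, 4.0, 5.0], [9.0, 12.0, 15.5]))
```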
Abstract:
A hot billet in contact with relatively cold dies undergoes rapid cooling in the forging operation. This may give rise to unfilled cavities, poor surface finish and stalling of the press. A knowledge of billet-die temperatures as a function of time is therefore essential for process design. A computer code using the finite difference method is written to estimate such temperature histories and is validated by comparing the predicted cooling of an integral die-billet configuration with that obtained experimentally.
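A stripped-down version of such an estimate, a one-dimensional explicit finite-difference scheme over a hot billet half and a cold die half sharing an interface, is sketched below; the geometry, single shared diffusivity and insulated outer boundaries are simplifying assumptions for illustration only.

```python
import numpy as np

def cool_billet(n=50, steps=2000, alpha=1e-5, dx=1e-3, dt=0.01,
                t_billet=1200.0, t_die=200.0):
    """Explicit 1-D finite-difference sketch of billet cooling against a die.
    The grid spans a billet half (hot) and a die half (cold); all values are
    illustrative placeholders, not the paper's material data."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    t = np.full(2 * n, t_die)
    t[:n] = t_billet                    # left half: billet, right half: die
    for _ in range(steps):
        t[1:-1] += r * (t[2:] - 2 * t[1:-1] + t[:-2])   # interior conduction update
        t[0], t[-1] = t[1], t[-2]       # zero-gradient (insulated) outer boundaries
    return t

profile = cool_billet()
print(round(float(profile[50]), 1))     # temperature just on the die side of the interface
```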
Abstract:
Using the link-link incidence matrix to represent a simple-jointed kinematic chain, algebraic procedures have been developed to determine its structural characteristics, such as the type of freedom of the chain and the number of distinct mechanisms and driving mechanisms that can be derived from it. A computer program incorporating these graph-theory-based procedures has been applied successfully to the structural analysis of several typical chains.
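To illustrate the representation involved, the sketch below builds the link-link adjacency matrix of a six-link, seven-joint chain from its joint list and computes its characteristic polynomial as one possible structural fingerprint; this invariant is only an illustration and does not reproduce the paper's procedures for freedom type or mechanism counts.

```python
import numpy as np

def link_adjacency(n_links, joints):
    """Build the symmetric link-link adjacency matrix of a simple-jointed chain:
    entry (i, j) is 1 when links i and j share a joint, 0 otherwise."""
    a = np.zeros((n_links, n_links), dtype=int)
    for i, j in joints:
        a[i, j] = a[j, i] = 1
    return a

# a six-link chain with seven joints (illustrative example, links numbered 0-5)
chain = link_adjacency(6, [(0, 1), (1, 2), (2, 3), (3, 0), (2, 4), (4, 5), (5, 3)])
print(np.poly(chain).round(3))   # characteristic polynomial coefficients as a fingerprint
```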
Abstract:
It is shown that a leaky aquifer model can be used for well field analysis in hard rock areas, treating the upper weathered and clayey layers as a composite unconfined aquitard overlying a deeper fractured aquifer. Two long-duration pump test studies are reported in granitic and schist regions in the Vedavati river basin. The validity of simplifications in the analytical solution is verified by finite difference computations.
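For background, the standard Hantush leaky-aquifer well function that underlies this kind of analysis can be evaluated numerically as sketched below; the parameter values are illustrative placeholders and are unrelated to the reported Vedavati basin data.

```python
import numpy as np
from scipy.integrate import quad

def hantush_w(u, r_over_b):
    """Hantush well function W(u, r/B) = int_u^inf exp(-y - (r/B)^2 / (4y)) / y dy,
    giving drawdown in an aquifer overlain by a leaky aquitard."""
    integrand = lambda y: np.exp(-y - (r_over_b ** 2) / (4.0 * y)) / y
    value, _ = quad(integrand, u, np.inf)
    return value

def drawdown(q, t_trans, s_storage, r, t, b_leak):
    """Drawdown s = q / (4*pi*T) * W(u, r/B) with u = r^2 * S / (4*T*t)."""
    u = r ** 2 * s_storage / (4.0 * t_trans * t)
    return q / (4.0 * np.pi * t_trans) * hantush_w(u, r / b_leak)

# e.g. q = 500 m^3/day, T = 200 m^2/day, S = 1e-3, r = 50 m, t = 1 day, B = 500 m
print(round(drawdown(500.0, 200.0, 1e-3, 50.0, 1.0, 500.0), 3))
```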
Abstract:
An estimate of the irrigation potential over and above the existing utilization was made based on the groundwater potential in the Vedavati river basin. The estimate assumes crops and cropping patterns as per existing practice in the various taluks of the basin, and the irrigation potential was estimated taluk-wise from the available groundwater potential identified in the simulation study. It is estimated that an additional 84,100 hectares can be brought under irrigation from groundwater in the entire basin.