93 results for Special purpose operations
Abstract:
This paper presents the architecture of a fault-tolerant, special-purpose multi-microprocessor system for solving Partial Differential Equations (PDEs). The modular nature of the architecture allows the use of hundreds of Processing Elements (PEs) for high throughput. Its performance is evaluated by both analytical and simulation methods. The results indicate that the system can achieve high operation rates and is not sensitive to inter-processor communication delay.
Abstract:
This paper reviews computational reliability, computer algebra, stochastic stability and rotating frame turbulence (RFT) in the context of predicting the blade inplane mode stability, a mode which is at best weakly damped. Computational reliability can be built into routine Floquet analysis involving trim analysis and eigenanalysis, and a highly portable special purpose processor restricted to rotorcraft dynamics analysis is found to be more economical than a multipurpose processor. While the RFT effects are dominant in turbulence modeling, the finding that turbulence stabilizes the inplane mode is based on the assumption that turbulence is white noise.
Abstract:
Even research models of helicopter dynamics often lead to a large number of equations of motion with periodic coefficients, and Floquet theory is a widely used mathematical tool for their dynamic analysis. Presently, three approaches are used in generating the equations of motion: (1) general-purpose symbolic processors such as REDUCE and MACSYMA, (2) a special-purpose symbolic processor, DEHIM (Dynamic Equations for Helicopter Interpretive Models), and (3) completely numerical approaches. In this paper, comparative aspects of the first two, purely algebraic, approaches are studied by applying REDUCE and DEHIM to the same set of problems. These problems range from a linear model with one degree of freedom to a mildly non-linear multi-bladed rotor model with several degrees of freedom. Computational issues in applying Floquet theory are also studied, namely (1) the equilibrium solution for periodic forced response, together with the transition matrix for perturbations about that response, and (2) a small number of eigenvalues and eigenvectors of the unsymmetric transition matrix. The study showed the following: (1) compared to REDUCE, DEHIM is far more portable and economical, but it is also less user-friendly, particularly during the learning phase; (2) the problems of finding the periodic response and eigenvalues are well conditioned.
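The Floquet computation described above can be illustrated on a small scale. The sketch below uses the damped-free Mathieu equation as a stand-in for a rotor model (the equation, the parameter values, and all function names are our assumptions, not the paper's): it integrates the columns of the identity over one period to form the transition (monodromy) matrix, then judges stability by the moduli of its eigenvalues.

```python
import math

def mathieu_rhs(t, y, a, q):
    """State-space form of x'' + (a - 2*q*cos(2t)) x = 0."""
    return [y[1], -(a - 2.0 * q * math.cos(2.0 * t)) * y[0]]

def rk4_step(f, t, y, h, a, q):
    """One classical Runge-Kutta step for a 2-state system."""
    k1 = f(t, y, a, q)
    k2 = f(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)], a, q)
    k3 = f(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)], a, q)
    k4 = f(t + h, [y[i] + h * k3[i] for i in range(2)], a, q)
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def monodromy(a, q, steps=2000):
    """Propagate the unit vectors over one period T = pi to get the
    transition matrix for perturbations."""
    T = math.pi
    h = T / steps
    cols = []
    for y in ([1.0, 0.0], [0.0, 1.0]):
        t = 0.0
        for _ in range(steps):
            y = rk4_step(mathieu_rhs, t, y, h, a, q)
            t += h
        cols.append(y)
    # The integrated unit vectors are the columns of the matrix.
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

def floquet_moduli(phi):
    """Eigenvalue moduli of a 2x2 transition matrix via the quadratic formula."""
    tr = phi[0][0] + phi[1][1]
    det = phi[0][0] * phi[1][1] - phi[0][1] * phi[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        r = math.sqrt(disc)
        return sorted([abs((tr - r) / 2.0), abs((tr + r) / 2.0)])
    return [math.sqrt(det)] * 2  # complex pair: |lambda| = sqrt(det)

stable = floquet_moduli(monodromy(3.0, 0.2))    # between resonance tongues
unstable = floquet_moduli(monodromy(1.0, 0.2))  # inside the first tongue
```

A modulus exceeding one signals an unstable (growing) perturbation; for this trace-free system the determinant of the transition matrix stays at one (Liouville's theorem), which doubles as a numerical sanity check.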
Abstract:
This paper presents an introduction to neurocomputers and an overview of the history of neurocomputers. Direct implementation methods of neurocomputers using techniques from microelectronics and photonics are discussed. Emulation methods using special-purpose hardware are highlighted. The role of parallel computing systems for improved performance is introduced. Some commercially available neurocomputers and performance issues of such systems are also presented.
Abstract:
Fast content-addressable data access mechanisms have compelling applications in today's systems. Many of these exploit the powerful wildcard matching capabilities provided by ternary content addressable memories (TCAMs). For example, TCAM-based implementations of important data mining algorithms have been developed in recent years; these achieve an order of magnitude speedup over prevalent techniques. However, large hardware TCAMs are still prohibitively expensive in terms of power consumption and cost per bit. This has been a barrier to extending their exploitation beyond niche and special purpose systems. We propose an approach to overcome this barrier by extending the traditional virtual memory hierarchy to scale up the user-visible capacity of TCAMs while mitigating the power consumption overhead. By exploiting the notion of content locality (as opposed to spatial locality), we devise a novel combination of software and hardware techniques to provide an abstraction of a large virtual ternary content addressable space. In the long run, such abstractions enable applications to disassociate considerations of spatial locality and contiguity from the way data is referenced. If successful, ideas for making content addressability a first-class abstraction in computing systems can open up a radical shift in the way applications are optimized for memory locality, just as storage class memories are soon expected to shift applications away from the way in which they are typically optimized for disk access locality.
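The ternary matching semantics the abstract builds on can be sketched in software (a model of TCAM behaviour only, not the paper's proposed virtual hierarchy; the class and entry names are ours): each entry pairs a value with a mask whose zero bits act as wildcards, and a lookup returns the highest-priority matching entry.

```python
class SoftTCAM:
    """Software model of a ternary CAM: mask bits set to 0 are
    wildcards; lookup returns the first (highest-priority) match."""

    def __init__(self):
        self.entries = []  # (value, mask, tag), in priority order

    def add(self, value, mask, tag):
        self.entries.append((value, mask, tag))

    def lookup(self, key):
        for value, mask, tag in self.entries:
            if (key & mask) == (value & mask):
                return tag
        return None  # no entry matched

# Longest-prefix routing, a classic TCAM use: more specific entries first.
tcam = SoftTCAM()
tcam.add(0b10100000, 0b11110000, "net-1010")  # matches keys 1010xxxx
tcam.add(0b10000000, 0b11000000, "net-10")    # matches keys 10xxxxxx
tcam.add(0b00000000, 0b00000000, "default")   # all-wildcard fallback
```

In hardware all entries are compared in parallel in a single cycle, which is exactly why large TCAMs are power-hungry; this loop only mimics the result.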
Abstract:
There are several areas in the plywood industry where Operations Research techniques have greatly assisted in better decision-making. These have resulted in improved profits, reduced wood losses and better utilization of resources. Recognizing this, some plywood manufacturing firms in the developed countries have established separate Operations Research departments or divisions. In the face of limited raw-material resources, rising costs and a competitive environment, the benefits attributable to the use of these techniques are becoming more and more significant.
Abstract:
This article discusses the design and development of GRDB (General Purpose Relational Data Base System), which has been implemented on a DEC-1090 system in Pascal. GRDB is a general purpose database system designed to be completely independent of the nature of the data to be handled, since it is not tailored to the specific requirements of any particular enterprise. It can handle different types of data, such as variable length records and textual data. Apart from the usual database facilities such as data definition and data manipulation, GRDB supports a User Definition Language (UDL) and a security definition language. These facilities are provided through a SEQUEL-like General Purpose Query Language (GQL). GRDB provides protection facilities up to the relation level. The concept of a "security matrix" is used to provide database protection, a Unique IDentification number (UID) and password ensure user identification and authentication, and static integrity constraints ensure data integrity. Considerable effort has been made to improve the response time through indexing on the data files and query optimisation. GRDB is designed for interactive use, but provision has also been made for batch mode. A typical Air Force application (consisting of data about personnel, inventory control, and maintenance planning) has been used to test GRDB, and it has been found to perform satisfactorily.
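The "security matrix" idea mentioned above can be sketched as an access-control matrix mapping (user, relation) pairs to permitted operations. This is a minimal illustration of the concept only; the user names, relation names, and function are hypothetical, not GRDB's actual interface.

```python
# Hypothetical access-control matrix: (user, relation) -> allowed operations.
# Relation-level granularity matches the protection level described above.
security_matrix = {
    ("user_rao",  "PERSONNEL"): {"SELECT", "UPDATE"},
    ("user_rao",  "INVENTORY"): {"SELECT"},
    ("clerk_01",  "INVENTORY"): {"SELECT"},
}

def authorized(user, relation, operation):
    """Grant an operation only if the matrix explicitly allows it;
    unknown (user, relation) pairs default to no access."""
    return operation in security_matrix.get((user, relation), set())
```

A real system would consult such a matrix after authenticating the user (the UID/password step) and before executing each query.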
Abstract:
A special finite element (FASNEL) is developed for the analysis of a neat or misfit fastener in a two-dimensional metallic/composite (orthotropic) plate subjected to biaxial loading. The misfit fasteners may be of the interference or clearance type. These fasteners, which are common in engineering structures, cause stress concentrations and are potential sources of failure. Such cases of stress concentration present considerable numerical problems for analysis with conventional finite elements. In FASNEL the shape functions for displacements are derived from series stress function solutions satisfying the governing differential equation of the plate and some of the boundary conditions on the hole boundary. The region of the plate outside FASNEL is filled with CST or quadrilateral elements. When a plate with a fastener is gradually loaded, the fastener-plate interface exhibits a state of partial contact/separation above a certain load level. In a misfit fastener, the extent of contact/separation changes with applied load, leading to a nonlinear moving boundary problem, which FASNEL handles using an inverse formulation. At present the analysis is developed for a filled hole in a finite elastic plate with two axes of symmetry. Numerical studies are conducted on a smooth rigid fastener in a finite elastic plate subjected to uniaxial loading to demonstrate the capability of FASNEL.
Abstract:
This paper considers two special cases of bottleneck grouped assignment problems in which n jobs belong to m distinct categories (m < n). Solving these special problems through the available branch and bound algorithms results in a heavy computational burden. By sequentially identifying nonoptimal variables, this paper provides more efficient methods for those cases. Propositions leading to the algorithms are established, and numerical examples illustrate the respective algorithms.
Abstract:
This paper describes an algorithm to compute the union, intersection and difference of two polygons using a scan-grid approach. Basically, in this method, the screen is divided into cells and the algorithm is applied to each cell in turn. The output from all the cells is integrated to yield a representation of the output polygon. In most cells, no computation is required and thus the algorithm is a fast one. The algorithm has been implemented for polygons but can be extended to polyhedra as well. The algorithm is shown to take O(N) time in the average case where N is the total number of edges of the two input polygons.
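The cell-decomposition idea above can be illustrated with a toy rasterized version (our simplification, assuming a uniform grid and classification by cell centres; the paper's algorithm works on edges within each cell and produces an exact polygon, not a cell set).

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test for a point against a simple polygon
    given as a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def grid_boolean(poly_a, poly_b, op, width, height, cell=1.0):
    """Classify each grid cell by its centre and combine the two
    classifications with the requested boolean operation."""
    ops = {"union": lambda a, b: a or b,
           "intersection": lambda a, b: a and b,
           "difference": lambda a, b: a and not b}
    out = set()
    y = cell / 2
    while y < height:
        x = cell / 2
        while x < width:
            a = point_in_polygon(x, y, poly_a)
            b = point_in_polygon(x, y, poly_b)
            if ops[op](a, b):
                out.add((x, y))
            x += cell
        y += cell
    return out

# Two overlapping axis-aligned squares for demonstration.
square_a = [(0, 0), (4, 0), (4, 4), (0, 4)]
square_b = [(2, 2), (6, 2), (6, 6), (2, 6)]
```

The speed argument in the abstract comes from the fact that most cells lie entirely inside or outside both polygons, so per-cell work is trivial; only cells crossed by edges need real computation.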
Abstract:
Many novel computer architectures, such as array processors and multiprocessors, achieve high performance through the use of concurrency while exploiting variations of the von Neumann model of computation. The effective utilization of these machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects are not discussed in detail since no new problems are encountered. The language DFL embodies the concepts of functional programming, but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
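The execution model described above, firing operators as soon as their operands are available, can be sketched with a tiny interpreter (an illustration of the dataflow principle only, not DFL; the graph encoding is our assumption).

```python
import operator

# A dataflow graph: node name -> (operator, input node names).
# Nodes with no inputs produce constants. Execution order is driven
# purely by data availability, not by the order the nodes are written.
graph = {
    "a": (lambda: 3, ()),
    "b": (lambda: 4, ()),
    "sum": (operator.add, ("a", "b")),
    "prod": (operator.mul, ("a", "b")),
    "result": (operator.sub, ("prod", "sum")),
}

def execute(graph):
    """Repeatedly fire every node whose inputs are ready."""
    values = {}
    pending = dict(graph)
    while pending:
        fired = [n for n, (op, ins) in pending.items()
                 if all(i in values for i in ins)]
        # Nodes fired in the same round are data-independent: a data
        # flow machine could execute them concurrently.
        for n in fired:
            op, ins = pending.pop(n)
            values[n] = op(*(values[i] for i in ins))
    return values
```

Here `sum` and `prod` fire in the same round because neither depends on the other, which is exactly the concurrency a data flow computer exploits.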
Abstract:
Under certain specific assumptions it has been observed that the basic equations of magneto-elasticity in the case of plane deformation lead to a biharmonic equation, as in the classical plane theory of elasticity. The method of solving boundary value problems has been suitably modified, and a unified approach to solving such problems is suggested, with special reference to problems relating to thin infinite plates with a hole. Closed form expressions are obtained for the stresses due to a uniform magnetic field present in the plane of deformation of a thin infinite conducting plate with a circular hole, the plate being deformed by a tension acting parallel to the direction of the magnetic field.
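For reference, the biharmonic reduction mentioned above parallels classical plane elasticity: with an Airy-type stress function (written here as U, a symbol of our choosing, not necessarily the paper's notation), the governing equation in Cartesian coordinates reads

```latex
\nabla^{4} U
= \frac{\partial^{4} U}{\partial x^{4}}
+ 2\,\frac{\partial^{4} U}{\partial x^{2}\,\partial y^{2}}
+ \frac{\partial^{4} U}{\partial y^{4}}
= 0 .
```

This is why solution techniques from the classical plane theory (e.g. complex-variable methods for a plate with a circular hole) carry over once suitably modified.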
Abstract:
Abstract is not available.
Abstract:
Algorithms are described for the basic arithmetic operations and square rooting in a negative base. A new operation called polarization that reverses the sign of a number facilitates subtraction, using addition. Some special features of the negative-base arithmetic are also mentioned.
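The operations described above can be sketched for base -2 (negabinary). This is our reconstruction under the stated idea that polarization reverses a number's sign so that subtraction reduces to addition; the particular polarization trick used here (shift-and-add, since a left shift multiplies by -2) is one known realization, not necessarily the paper's.

```python
def to_neg2(n):
    """Digits of the integer n in base -2, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:        # force the remainder into {0, 1}
            n += 1
            r += 2
        digits.append(r)
    return digits

def from_neg2(digits):
    """Value of a least-significant-first digit list in base -2."""
    return sum(d * (-2) ** i for i, d in enumerate(digits))

def add_neg2(a, b):
    """Digit-wise addition in base -2; the carry takes values -1, 0, 1
    because the next position has a weight of opposite sign."""
    result, carry, i = [], 0, 0
    while i < max(len(a), len(b)) or carry != 0:
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        digit = s % 2
        carry = -((s - digit) // 2)
        result.append(digit)
        i += 1
    return result or [0]

def polarize(digits):
    """Sign reversal using only addition: a left shift multiplies by -2,
    so shift(x) + x = -2x + x = -x."""
    return add_neg2([0] + digits, digits)

def sub_neg2(a, b):
    """Subtraction via polarization: a - b = a + (-b)."""
    return add_neg2(a, polarize(b))
```

Note that every integer, positive or negative, has a representation with digits 0 and 1 only, so no separate sign bit is needed; that is the special feature that makes polarization a genuine arithmetic operation rather than a flag flip.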