106 results for Method of linear transformations


Relevance:

100.00%

Publisher:

Abstract:

A method is given for obtaining a nonnegative integral solution of a system of linear equations, if such a solution exists. The method rewrites the linear equations as an integer programming problem and then solves that problem using a combination of the artificial basis technique and a method of integer forms.
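
As an illustration only, not the artificial-basis / integer-forms procedure of the abstract: the same feasibility question can be posed as an integer program with a zero objective and handed to an off-the-shelf MILP solver. The toy system below is made up.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def nonnegative_integer_solution(A, b):
    """Return a nonnegative integer x with A @ x == b, or None if infeasible."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[1]
    res = milp(
        c=np.zeros(n),                          # pure feasibility: any objective will do
        constraints=LinearConstraint(A, b, b),  # equal lower/upper bounds encode A x = b
        integrality=np.ones(n),                 # every variable must take an integer value
        bounds=Bounds(0, np.inf),               # nonnegativity
    )
    return np.rint(res.x).astype(int) if res.success else None

# toy system: 3x + 2y = 12 and x + 2y = 8 has the nonnegative integer solution (2, 3)
print(nonnegative_integer_solution([[3, 2], [1, 2]], [12, 8]))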

Relevance:

100.00%

Publisher:

Abstract:

Bäcklund transformations relating the solutions of linear PDEs with variable coefficients to those of PDEs with constant coefficients are found, generalizing the study of Varley and Seymour [2]. Auto-Bäcklund transformations are also determined. To facilitate the generation of new solutions via Bäcklund transformations, explicit solutions of both classes of the PDEs just mentioned are found using invariance properties of these equations and other methods. Some of these solutions are new.
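
The abstract does not reproduce the transformations themselves; as a simple illustration of the kind of correspondence involved (a point change of variables, not one of the paper's Bäcklund transformations), a variable-coefficient linear PDE can be mapped to a constant-coefficient one:

\[
u_t = x^2 u_{xx} + x u_x, \qquad x = e^{s} \;\Longrightarrow\; u_t = u_{ss},
\]

since \(u_x = e^{-s}u_s\) and \(u_{xx} = e^{-2s}(u_{ss} - u_s)\); every solution of the constant-coefficient heat equation in \((s, t)\) therefore yields a solution of the variable-coefficient equation in \((x, t)\).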

Relevance:

100.00%

Publisher:

Abstract:

The minimum distance of a linear block code is one of the important parameters that indicate the error performance of the code. When the code rate is less than 1/2, efficient algorithms based on the concept of information sets are available for finding the minimum distance. When the code rate is greater than 1/2, only one information set is available and efficiency suffers. In this paper, we investigate and propose a novel algorithm to find the minimum distance of linear block codes with code rate greater than 1/2. We propose to reverse the roles of the information set and the parity set to obtain, in effect, another information set and thereby improve efficiency. For an (80, 45) linear block code, this method is 67.7 times faster than the minimum distance algorithm implemented in the MAGMA Computational Algebra System.
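
For context, a brute-force baseline, not the information-set algorithm the paper proposes, simply scans every nonzero codeword produced by a generator matrix G; it is only feasible for small dimension k, which is exactly the cost that information-set methods avoid. A sketch using the standard [7, 4] Hamming code as a test case:

import itertools
import numpy as np

def minimum_distance(G):
    """Minimum distance of the binary linear code generated by the k x n matrix G."""
    G = np.asarray(G) % 2
    k, n = G.shape
    best = n
    for m in itertools.product([0, 1], repeat=k):
        if any(m):
            weight = int(((np.array(m) @ G) % 2).sum())  # Hamming weight of the codeword
            best = min(best, weight)
    return best

# [7, 4] Hamming code; its minimum distance is 3
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
print(minimum_distance(G))  # -> 3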

Relevance:

100.00%

Publisher:

Abstract:

In many industrial casting processes, knowledge of the evolution of solid fraction during solidification is a key factor in determining process parameters such as cooling rate and stirring intensity, and in estimating the total solidification time. In the present work, a new method of estimating solid fraction is presented, based on calorimetric principles. In this method, the cooling curve data at each point in the melt, along with the thermal boundary conditions, are used to perform an energy balance in the mould, from which the solid fraction generated during any time interval can be estimated. This method is applied to a rheocasting process in which an Al-Si alloy (A356) is solidified by stirring in a cylindrical mould placed in the annulus of a linear electromagnetic stirrer. The metal in the mould is simultaneously cooled and stirred to produce a cylindrical billet with a non-dendritic globular microstructure. Temperature is measured at key locations in the mould to assess the various heat exchange processes prevalent in the mould and to monitor the solidification rate. The results obtained by the energy balance method are compared with those obtained by the conventional procedure of calculating solid fraction using the Scheil equation.
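
For reference, the conventional Scheil relation mentioned at the end of the abstract is commonly written (in textbook notation, which may differ from the authors', and assuming complete mixing in the liquid, no diffusion in the solid, and a linear liquidus) as

\[
f_s = 1 - \left( \frac{T_m - T}{T_m - T_L} \right)^{\frac{1}{k-1}},
\]

where \(f_s\) is the solid fraction at temperature \(T\), \(T_m\) the melting point of the pure solvent, \(T_L\) the alloy liquidus temperature, and \(k\) the equilibrium partition coefficient.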

Relevance:

100.00%

Publisher:

Abstract:

The paper proposes a study of symmetrical and related components based on the theory of linear vector spaces. Using the concept of equivalence, the transformation matrices of Clarke, Kimbark, Concordia, Boyajian and Koga are shown to be column equivalent to Fortescue's symmetrical-component transformation matrix. With a constraint on power, criteria are presented for the choice of bases for the voltage and current vector spaces. In particular, it is shown that, for power invariance, either the same orthonormal (self-reciprocal) basis must be chosen for both voltage and current vector spaces, or the basis of one must be chosen to be reciprocal to that of the other. The original α, β, 0 components of Clarke are modified to achieve power invariance. For machine analysis, it is shown that invariant transformations lead to reciprocal mutual inductances between the equivalent circuits. The relative merits of the various components are discussed.
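
As a standard illustration of the power-invariance condition stated above (not a reproduction of the paper's derivation), the symmetrical-component transformation becomes power invariant when normalized so that its matrix is unitary:

\[
\begin{bmatrix} V_0 \\ V_1 \\ V_2 \end{bmatrix}
= \frac{1}{\sqrt{3}}
\begin{bmatrix} 1 & 1 & 1 \\ 1 & a & a^2 \\ 1 & a^2 & a \end{bmatrix}
\begin{bmatrix} V_a \\ V_b \\ V_c \end{bmatrix},
\qquad a = e^{j2\pi/3}.
\]

With the \(1/\sqrt{3}\) factor the transformation matrix \(T\) satisfies \(T^{-1} = (T^{*})^{\mathsf T}\), i.e. the basis is orthonormal (self-reciprocal), so the complex power \(V_a I_a^* + V_b I_b^* + V_c I_c^*\) equals \(V_0 I_0^* + V_1 I_1^* + V_2 I_2^*\); the more common \(1/3\) normalization is not power invariant.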

Relevance:

100.00%

Publisher:

Abstract:

A Field Programmable Gate Array (FPGA) based hardware accelerator for multi-conductor parasitic capacitance extraction using the Method of Moments (MoM) is presented in this paper. Because of the prohibitive cost of solving the dense algebraic system formed by MoM, linear-complexity fast solver algorithms have been developed in the past to expedite the matrix-vector product computation in a Krylov subspace based iterative solver framework. However, as the number of conductors in a system increases, leading to a corresponding increase in the number of right-hand-side (RHS) vectors, the computational cost of the multiple matrix-vector products presents a time bottleneck, especially for ill-conditioned system matrices. In this work, an FPGA-based hardware implementation is proposed to parallelize the iterative matrix solution for multiple RHS vectors in a low-rank compression based fast solver scheme. The method is applied to accelerate electrostatic parasitic capacitance extraction of multiple conductors in a Ball Grid Array (BGA) package. Speed-ups of up to 13x for dense matrix-vector products and 12x for QR-compressed matrix-vector products over an equivalent software implementation on an Intel Core i5 processor are achieved using a Virtex-6 XC6VLX240T FPGA on Xilinx's ML605 board.
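
As a software-level illustration only of the multiple-RHS bottleneck the FPGA design targets (the matrix, its size, and the RHS count below are made up), grouping the per-RHS iterates into a block turns many matrix-vector products with the same matrix into a single matrix-matrix product, which is the operation the hardware parallelizes:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 2000))   # stand-in for the dense (or compressed) MoM matrix
X = rng.standard_normal((2000, 64))     # 64 RHS iterates, one column per conductor

# one matrix-vector product per RHS vector
Y_loop = np.column_stack([A @ X[:, j] for j in range(X.shape[1])])

# the same work expressed as a single blocked product over all RHS vectors
Y_block = A @ X

assert np.allclose(Y_loop, Y_block)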

Relevance:

100.00%

Publisher:

Abstract:

Affine transformations have proven to be very powerful for loop restructuring due to their ability to model a very wide range of transformations. A single multi-dimensional affine function can represent a long and complex sequence of simpler transformations. Existing affine transformation frameworks like the Pluto algorithm, which include a cost function for modern multicore architectures where coarse-grained parallelism and locality are crucial, consider only a subspace of transformations to avoid a combinatorial explosion in finding the transformations. The ensuing practical tradeoffs lead to the exclusion of certain useful transformations, in particular transformation compositions involving loop reversals and loop skewing by negative factors. In this paper, we propose an approach to address this limitation by modeling a much larger space of affine transformations in conjunction with the Pluto algorithm's cost function. We experimentally evaluate both the effect on compilation time and the performance of the generated code. The evaluation shows that our new framework, Pluto+, causes no degradation in performance on any of the Polybench benchmarks. For Lattice Boltzmann Method (LBM) codes with periodic boundary conditions, it provides a mean speedup of 1.33x over Pluto. We also show that Pluto+ does not increase compile times significantly. Experimental results on Polybench show that Pluto+ increases overall polyhedral source-to-source optimization time by only 15%. In the cases where it improves execution time significantly, it increases polyhedral optimization time by only 2.04x.
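
As a hedged illustration of the kind of composition the enlarged search space admits (this is not Pluto's implementation, and the kernel is a made-up dependence-free copy so that any schedule is legal), the affine schedule T(i, j) = (i, i - j) composes a reversal of the j loop with a skew by a negative factor, yet enumerates exactly the same iterations:

# original vs. affinely transformed loop nest for a dependence-free kernel
N, M = 4, 5
A = [[i * M + j for j in range(M)] for i in range(N)]

# original loop nest
B = [[0] * M for _ in range(N)]
for i in range(N):
    for j in range(M):
        B[i][j] = 2 * A[i][j]

# transformed nest under T(i, j) = (i, i - j): iterate (i, j2) with j = i - j2
C = [[0] * M for _ in range(N)]
for i in range(N):
    for j2 in range(i - (M - 1), i + 1):   # image of 0 <= j <= M-1 under j2 = i - j
        j = i - j2
        C[i][j] = 2 * A[i][j]

assert B == C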

Relevance:

100.00%

Publisher:

Abstract:

In this article, a Field Programmable Gate Array (FPGA)-based hardware accelerator for 3D electromagnetic extraction using the Method of Moments (MoM) is presented. As the number of nets or ports in a system increases, leading to a corresponding increase in the number of right-hand-side (RHS) vectors, the computational cost of the multiple matrix-vector products presents a time bottleneck in a linear-complexity fast solver framework. In this work, an FPGA-based hardware implementation is proposed, built around a two-level parallelization scheme: (i) matrix-level parallelization for a single RHS and (ii) pipelining for multiple RHS vectors. The method is applied to accelerate electrostatic parasitic capacitance extraction of multiple nets in a Ball Grid Array (BGA) package. The acceleration is shown to scale linearly with FPGA resources, and speed-ups of over 10x against an equivalent software implementation on a 2.4 GHz Intel Core i5 processor are achieved using a Virtex-6 XC6VLX240T FPGA on Xilinx's ML605 board, with the implemented design operating at a 200 MHz clock frequency. (c) 2016 Wiley Periodicals, Inc. Microwave Opt Technol Lett 58:776-783, 2016
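
A minimal software sketch of the first level of this scheme, matrix-level parallelism for a single RHS (the sizes and the four-way split below are arbitrary): the matrix rows are partitioned into blocks whose products with x are independent, which is what separate processing elements would compute concurrently, while the second level would pipeline successive RHS vectors through the same units.

import numpy as np

def blocked_matvec(A, x, n_blocks=4):
    """Row-partitioned A @ x; each row block is an independent unit of work."""
    row_blocks = np.array_split(A, n_blocks, axis=0)
    partials = [blk @ x for blk in row_blocks]   # these products can run in parallel
    return np.concatenate(partials)

rng = np.random.default_rng(1)
A = rng.standard_normal((1024, 1024))
x = rng.standard_normal(1024)
assert np.allclose(blocked_matvec(A, x), A @ x)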

Relevance:

100.00%

Publisher:

Abstract:

By using the method of characteristics, the effect of the footing-soil interface friction angle (delta) on the bearing capacity factor N-gamma was computed for a strip footing. The analysis was performed by employing a curved trapped wedge under the footing base; this wedge joins the footing base at a distance B-t from the footing edge. For a given footing width (B), the value of B-t increases continuously with a decrease in delta. For delta = 0, no trapped wedge exists below the footing base, that is, B-t/B = 0.5. On the contrary, with delta = phi, the point of emergence of the trapped wedge approaches the footing edge with an increase in phi. The magnitude of N-gamma increases substantially with an increase in delta/phi. The maximum depth of the plastic zone becomes greater for larger values of delta/phi. The results from the present analysis were found to compare well with those reported in the literature.

Relevance:

100.00%

Publisher:

Abstract:

Careful study of various aspects presented in the note reveals basic fallacies in the concept and the final conclusions. The Authors claim to have presented a new method of determining C-v. However, the note does not contain a new method. In fact, the proposed method is an attempt to generate settlement-versus-time data using only two values of (t, δ). The Authors have then used the rectangular hyperbola method to determine C-v from the predicted δ-t data. In this context, the title of the paper itself is misleading and questionable. The Authors have compared predicted C-v values with measured values, both of them being results of the rectangular hyperbola method.

Relevance:

100.00%

Publisher:

Abstract:

Arc discharge between graphite electrodes under a relatively high pressure of hydrogen yields, in the inner-wall region of the arc chamber, graphene flakes generally containing 2-4 layers. The graphene flakes so obtained have been characterized by X-ray diffraction, atomic force microscopy, transmission electron microscopy, and Raman spectroscopy. The method is eminently suited to doping graphene with boron and nitrogen by carrying out the arc discharge in the presence of diborane and pyridine, respectively.

Relevance:

100.00%

Publisher:

Abstract:

To detect errors in decision tables, one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that is not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The present algorithm also incorporates an idea by which it can be checked whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
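
A much-simplified sketch of the backtracking idea, omitting the paper's weighted acyclic-graph representation and incremental lower bounds (the constraint encoding and the example below are made up): variables are assigned integer values one at a time over finite candidate ranges, and a partial vector is extended only if every simple constraint whose variables are already assigned is satisfied.

def feasible(domains, simple_checks, full_checks, partial=()):
    """domains: list of iterables of candidate integer values, one per variable.
    simple_checks: list of (n_vars, predicate) applied once the first n_vars
    variables are assigned.  full_checks: predicates on a complete assignment."""
    k = len(partial)
    if k == len(domains):
        return partial if all(chk(partial) for chk in full_checks) else None
    for v in domains[k]:
        cand = partial + (v,)
        # prune: extend only if every simple constraint on assigned variables holds
        if all(chk(cand) for n, chk in simple_checks if n <= len(cand)):
            sol = feasible(domains, simple_checks, full_checks, cand)
            if sol is not None:
                return sol
    return None

# example: x0 <= x1 (simple); x0 + 2*x1 == 7 and x1 - x0 <= 3 (general) -> (1, 3)
print(feasible(
    [range(0, 8), range(0, 8)],
    simple_checks=[(2, lambda a: a[0] <= a[1])],
    full_checks=[lambda a: a[0] + 2 * a[1] == 7, lambda a: a[1] - a[0] <= 3],
))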

Relevance:

100.00%

Publisher:

Abstract:

The paper presents two new algorithms for the direct parallel solution of systems of linear equations. The algorithms employ a novel recursive doubling technique to obtain the solution of an nth-order system in n steps with no more than 2n(n - 1) processors. Comparing their performance with the Gaussian elimination algorithm (GE), we show that they are almost 100% faster than the latter. This speedup is achieved by dispensing with all of the computation involved in the back-substitution phase of GE. It is also shown that the new algorithms exhibit error characteristics superior to those of GE. An n(n + 1) systolic array structure is proposed for the implementation of the new algorithms. We show that complete solutions can be obtained through these single-phase solution methods in 5n - log2(n) - 4 computational steps, without the need for intermediate I/O operations.
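
The abstract does not reproduce the two algorithms, but the recursive-doubling principle they build on can be illustrated on the simpler problem of a first-order linear recurrence x_i = a_i*x_{i-1} + b_i, solved in O(log n) combining rounds by composing pairs of affine maps at doubling distances (a sketch under that assumption, with made-up coefficients):

def recursive_doubling(a, b, x0):
    """Solve x_i = a[i]*x_{i-1} + b[i] (with x_{-1} = x0) via recursive doubling."""
    n = len(a)
    A, B = list(a), list(b)       # (A[i], B[i]) composes the maps at positions (i-d, i]
    d = 1
    while d < n:
        A_new, B_new = A[:], B[:]
        for i in range(n - 1, d - 1, -1):    # these updates are independent ("parallel")
            A_new[i] = A[i] * A[i - d]
            B_new[i] = A[i] * B[i - d] + B[i]
        A, B, d = A_new, B_new, 2 * d
    return [A[i] * x0 + B[i] for i in range(n)]

# cross-check against the plain sequential recurrence
a = [2, 3, 1, 4, 2]
b = [1, 0, 5, 2, 3]
x, seq = 1, []
for ai, bi in zip(a, b):
    x = ai * x + bi
    seq.append(x)
assert recursive_doubling(a, b, 1) == seq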