991 results for general matrix-matrix multiplication


Relevance:

90.00%

Publisher:

Abstract:

Writing the hindered rotor (HR) partition function as the trace of ρ̂ = e^(−βĤ_hr), we approximate it by a sum of contributions from a set of points in position space. The contribution to the density matrix from each point is approximated by a local harmonic expansion around it. The highlight of this method is that it extends easily to multidimensional systems. The local harmonic expansion breaks down at low temperatures, so to calculate the partition function there we suggest a matrix multiplication procedure. The results obtained using these methods agree closely with the exact partition function over the whole temperature range. Our method bypasses the evaluation of eigenvalues and eigenfunctions and evaluates the density matrix for internal rotation directly. We also suggest a procedure to account for the antisymmetry of the total wavefunction within the same framework. (C) 2012 Elsevier B.V. All rights reserved.
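To make the low-temperature matrix multiplication procedure concrete, here is a minimal sketch (ours, not the authors' code) of the standard density-matrix squaring scheme on a 1D grid: each squaring doubles β via one matrix product, and Z is the trace. The potential, grid, and unit choices below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the low-temperature matrix-multiplication (squaring)
# scheme for a 1D partition function; hbar = m = 1 and the hindered-rotor-
# like potential V0*(1 - cos(3x))/2 are illustrative assumptions, and
# periodic wrap-around of the kinetic term is ignored for brevity.
V0 = 5.0
n, L = 256, 2 * np.pi                      # grid points, box length
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
V = 0.5 * V0 * (1.0 - np.cos(3 * x))

beta, k = 8.0, 12                          # target 1/kT, number of squarings
db = beta / 2**k                           # high-temperature step

# High-temperature (Trotter) approximation of rho(x, x'; db)
dxx = x[:, None] - x[None, :]
rho = (np.sqrt(1.0 / (2 * np.pi * db))
       * np.exp(-dxx**2 / (2 * db))
       * np.exp(-0.5 * db * (V[:, None] + V[None, :])))

# Squaring step: rho(2b; x, x') = integral dx'' rho(b; x, x'') rho(b; x'', x')
for _ in range(k):
    rho = dx * (rho @ rho)

Z = dx * np.trace(rho)                     # Z = Tr exp(-beta H)
print(f"Z(beta={beta}) ~ {Z:.6e}")
```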

Relevance:

90.00%

Publisher:

Abstract:

Acoustic modeling using mixtures of multivariate Gaussians is the prevalent approach for many speech processing problems. Computing likelihoods against a large set of Gaussians is required in many speech processing systems, and it is the computationally dominant phase for Large Vocabulary Continuous Speech Recognition (LVCSR) systems. We express the likelihood computation as a multiplication of matrices representing augmented feature vectors and Gaussian parameters. The computational gain of this approach over traditional methods comes from exploiting the structure of these matrices and an efficient implementation of their multiplication. In particular, we explore direct low-rank approximation of the Gaussian parameter matrix and indirect derivation of low-rank factors of the Gaussian parameter matrix by optimum approximation of the likelihood matrix. We show that both methods lead to similar speedups, but the latter has a far smaller impact on recognition accuracy. Experiments on the 1,138-word vocabulary RM1 task and the 6,224-word vocabulary TIMIT task using the Sphinx 3.7 system show that, for a typical case, the matrix multiplication based approach leads to an overall speedup of 46% on RM1 and 115% on TIMIT. Our low-rank approximation methods provide a way of trading off recognition accuracy for a further increase in computational performance, extending the overall speedups to 61% for RM1 and 119% for TIMIT, for an increase in word error rate (WER) from 3.2% to 3.5% on RM1 and no increase in WER on TIMIT. We also express the pairwise Euclidean distance computation phase of Dynamic Time Warping (DTW) in terms of matrix multiplication, leading to a saving in computational operations. In our experiments, an efficient implementation of matrix multiplication yields a speedup of 5.6 in computing the pairwise Euclidean distances and an overall speedup of up to 3.25 for DTW.
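As an illustration of the central trick, the following hedged NumPy sketch (ours, not the paper's code; sizes and values are arbitrary) computes diagonal-covariance Gaussian log-likelihoods for all frame/Gaussian pairs as one matrix product of augmented features [1, x, x²] against rearranged Gaussian parameters, and applies the analogous identity for the DTW distance phase.

```python
import numpy as np

# Hedged sketch (sizes and values are arbitrary): log-likelihoods of T frames
# against G diagonal-covariance Gaussians as one matrix product.
rng = np.random.default_rng(0)
T, G, D = 500, 1024, 39                    # frames, Gaussians, feature dim
X = rng.standard_normal((T, D))
mu = rng.standard_normal((G, D))
var = rng.uniform(0.5, 2.0, (G, D))

# log N(x; mu, var) = a + b.x + c.(x*x) with per-Gaussian constants:
a = -0.5 * (np.sum(np.log(2 * np.pi * var), 1) + np.sum(mu**2 / var, 1))
W = np.hstack([a[:, None], mu / var, -0.5 / var])   # (G, 2D+1) parameters

Z = np.hstack([np.ones((T, 1)), X, X**2])  # (T, 2D+1) augmented features
LL = Z @ W.T                               # (T, G) log-likelihood matrix

# Sanity check against the direct formula for one (frame, Gaussian) pair
ref = -0.5 * (np.sum(np.log(2 * np.pi * var[0]))
              + np.sum((X[0] - mu[0])**2 / var[0]))
assert np.isclose(LL[0, 0], ref)

# Same idea for the DTW distance phase: ||x - y||^2 = |x|^2 + |y|^2 - 2 x.y,
# so all pairwise squared Euclidean distances cost one matrix product.
Y = rng.standard_normal((300, D))
D2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
```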

Relevance:

90.00%

Publisher:

Abstract:

Coarse Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for Exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block Matrix Multiplication and block LU Decomposition algorithms in the context of a CGRA and obtain theoretical bounds on the communication requirements and memory sizes for a CE. Support for high performance custom computations common to NLA kernels is provided through Custom Function Units (CFUs) in the CEs. We present results justifying the merits of such CFUs.
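For reference, a minimal NumPy sketch (ours, with illustrative sizes) of the block matrix multiplication analysed in the paper; each b×b tile product corresponds to a unit of CE computation and NoC communication.

```python
import numpy as np

# Minimal sketch of block matrix multiplication: each (i, j) output tile
# accumulates b x b tile products, the unit of work and communication a
# compute element would handle. Sizes are illustrative assumptions.
def block_matmul(A, B, b):
    n = A.shape[0]                         # assume square, n divisible by b
    C = np.zeros((n, n))
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):       # one tile product per transfer
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
assert np.allclose(block_matmul(A, B, 32), A @ B)
```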

Relevance:

90.00%

Publisher:

Abstract:

A two-step digit-set-restricted modified signed-digit (MSD) adder based on symbolic substitution is presented. In the proposed addition algorithm, carry propagation is avoided by using reference digits to restrict the intermediate MSD carry and sum digits to {−1, 0} and {0, 1}, respectively. The algorithm requires only 12 minterms to generate the final result, and no complementarity operations for nonzero outputs are involved, which significantly reduces the system complexity. An optoelectronic shared content-addressable memory based on an incoherent correlator is used for experimental demonstration. (c) 2005 Society of Photo-Optical Instrumentation Engineers.
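For background, a small sketch (ours; the paper's specific reference-digit truth tables are not reproduced) of the MSD number system itself: radix-2 digits drawn from {−1, 0, 1}, whose redundancy is what allows intermediate digits to be restricted to {−1, 0} and {0, 1}.

```python
# Minimal sketch of the modified signed-digit (MSD) number system the adder
# operates on: radix-2 digits from {-1, 0, 1} ("-1" conventionally printed
# as 1 with an overbar). The paper's two-step reference-digit rules are not
# reproduced; this only illustrates the representation and its redundancy.
def msd_value(digits):
    """digits[0] is the most significant; each digit is -1, 0 or 1."""
    v = 0
    for d in digits:
        assert d in (-1, 0, 1)
        v = 2 * v + d
    return v

# Redundancy: the same integer has several MSD encodings, which is what
# makes carry-free, digit-set-restricted addition possible.
print(msd_value([1, 0, 1]))    # 5
print(msd_value([1, 1, -1]))   # 5 as well: 4 + 2 - 1
```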


Relevance:

90.00%

Publisher:

Abstract:

This work involves applied mathematics and parallel processing: its goal is to evaluate a parallel implementation strategy for finite difference algorithms that approximate the solution of evolutionary differential equations. The proposed alternative replaces the sequentially executed matrix-vector products with matrix-matrix multiplications accelerated by Strassen's method in parallel. The work develops tests to verify the computational gain of this parallelization strategy, since computational applications that employ the sequential strategy are characterized by long run times caused by the large volume of calculation. As a further alternative, we use the conventional parallel algorithm to solve explicit schemes for time-evolving partial differential equations. From the results obtained we observe the characteristics of each parallel strategy, with the main objective of reducing the computational effort expended.
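As a reference for the accelerated kernel, here is a minimal sequential Python sketch of Strassen's method (ours, not the thesis code), assuming square matrices whose size is a power of two; it replaces the eight recursive half-size products of the classical scheme with seven.

```python
import numpy as np

# Strassen's method: 7 recursive products instead of 8. The cutoff below
# is an illustrative assumption; real implementations tune it.
def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                       # fall back to the classical product
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)
```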

Relevance:

90.00%

Publisher:

Abstract:

Many-beam dynamical simulations and observations have been made for large-angle convergent-beam electron diffraction (LACBED) imaging of crystal defects such as stacking faults and dislocations. The simulations are based on a general matrix formulation of dynamical electron diffraction theory due to Peng and Whelan, and the results are compared with experimental LACBED images of stacking faults and dislocations in Si single crystals. Excellent agreement is achieved.
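To illustrate the matrix formulation in its simplest (two-beam) form, here is a hedged sketch (ours; the extinction distance and excitation error are illustrative assumptions, and the paper's many-beam case simply enlarges the matrix to all included reflections).

```python
import numpy as np
from scipy.linalg import expm

# Toy two-beam instance of the matrix formulation of dynamical electron
# diffraction: beam amplitudes evolve as phi(t) = expm(i*pi*M*t) phi(0).
xi_g = 60.0                                # extinction distance (nm), assumed
s = 0.01                                   # excitation error (1/nm), assumed
M = np.array([[0.0, 1.0 / xi_g],
              [1.0 / xi_g, 2.0 * s]])

t = 50.0                                   # crystal thickness (nm)
phi0 = np.array([1.0, 0.0])                # incident beam only at entrance
phi = expm(1j * np.pi * M * t) @ phi0

I_g = abs(phi[1])**2                       # diffracted-beam intensity
s_eff = np.hypot(s, 1.0 / xi_g)            # closed-form two-beam check
assert np.isclose(I_g, (np.sin(np.pi * t * s_eff) / (xi_g * s_eff))**2)
print(f"I_g(t={t} nm) = {I_g:.4f}")
```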

Relevance:

90.00%

Publisher:

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision making problems in two scenarios: in the first, the current state of the world, under which the decisions are to be made, is known in advance; in the second, it is unknown at the time of making decisions.

For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and uses such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches: the first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of a matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before.

For decision making under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
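As one plausible reading of the matrix-multiplication dominance check, here is a hedged sketch (ours, with an illustrative trade-off matrix): the trade-offs enlarge the Pareto ordering to a preference cone, and u is preferred to v exactly when W(u − v) ≥ 0 componentwise, where the rows of W are the cone's facet normals.

```python
import numpy as np

# Hedged sketch of cone-based dominance under trade-offs. This W is an
# illustrative assumption, not taken from the thesis: on top of Pareto
# dominance in 3 objectives, it encodes "1 unit of objective 2 is worth
# at least 2 units of objective 1".
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

def dominates(u, v):
    """True iff u is at least as preferred as v under the cone of W."""
    return bool(np.all(W @ (u - v) >= 0))

def prune(U):
    """Drop utility vectors dominated by some other member of U."""
    return np.array([u for i, u in enumerate(U)
                     if not any(dominates(v, u) for j, v in enumerate(U)
                                if j != i and not np.array_equal(u, v))])

U = np.array([[3.0, 1.0, 2.0], [2.0, 2.0, 2.0], [1.0, 3.0, 2.0]])
print(prune(U))   # only [1, 3, 2] survives under this trade-off
```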

Relevance:

90.00%

Publisher:

Abstract:

For modern FPGA, implementation of memory intensive processing applications such as high-end image and video processing systems necessitates manual design of complex multi-level memory hierarchies incorporating off-chip DDR and on-chip BRAM and LUT RAM. In fact, automated synthesis of multi-level memory hierarchies is an open problem facing high level synthesis technologies for FPGA devices. In this paper we describe the first automated solution to this problem. By exploiting a novel dataflow application modelling dialect, known as Valved Dataflow, we show for the first time not only how such architectures can be automatically derived, but also that the resulting implementations support real-time processing for current image processing application standards such as H.264. We demonstrate the viability of this approach by reporting the performance and cost of hierarchies automatically generated for Motion Estimation, Matrix Multiplication and Sobel Edge Detection applications on a Virtex-5 FPGA.

Relevance:

90.00%

Publisher:

Abstract:

Realising high performance image and signal processing applications on modern FPGA presents a challenging implementation problem due to the large data frames streaming through these systems. Specifically, to meet the high bandwidth and data storage demands of these applications, complex hierarchical memory architectures must be manually specified at the Register Transfer Level (RTL). Automated approaches which convert high-level operation descriptions, for instance in the form of C programs, to an FPGA architecture are unable to automatically realise such architectures. This paper presents a solution to this problem: a compiler which automatically derives such memory architectures from a C program. By transforming the input C program to a unique dataflow modelling dialect, known as Valved Dataflow (VDF), a mapping and synthesis approach developed for this dialect can be exploited to automatically create high performance image and video processing architectures. Memory intensive C kernels for Motion Estimation (CIF frames at 30 fps), Matrix Multiplication (128x128 @ 500 iter/sec) and Sobel Edge Detection (720p @ 30 fps), which are unrealisable by current state-of-the-art C-based synthesis tools, are automatically derived from a C description of the algorithm.

Relevance:

90.00%

Publisher:

Abstract:

Field programmable gate array devices boast abundant resources with which custom accelerator components for signal, image and data processing may be realised; however, realising high performance, low cost accelerators currently demands manual register transfer level design. Software-programmable ’soft’ processors have been proposed as a way to reduce this design burden, but they are unable to support performance and cost comparable to custom circuits. This paper proposes a new soft processing approach for FPGA which promises to overcome this barrier. A high performance, fine-grained streaming processor, known as a Streaming Accelerator Element, is proposed which realises accelerators as large scale custom multicore networks. By adopting a streaming execution approach with advanced program control and memory addressing capabilities, typical program inefficiencies can be almost completely eliminated to enable performance and cost unprecedented amongst software-programmable solutions. When used to realise accelerators for Fast Fourier Transform, Motion Estimation, Matrix Multiplication and Sobel Edge Detection, the proposed architecture is shown to enable real-time operation with performance and cost comparable to hand-crafted custom circuit accelerators, and up to two orders of magnitude beyond existing soft processors.

Relevance:

90.00%

Publisher:

Abstract:

Realising memory intensive applications such as image and video processing on FPGA requires the creation of complex, multi-level memory hierarchies to achieve real-time performance; however, commercial High Level Synthesis tools are unable to automatically derive such structures and hence cannot meet the demanding bandwidth and capacity constraints of these applications. Current approaches to solving this problem can derive only single-level memory structures or very deep, highly inefficient hierarchies, leading in either case to high implementation cost, low performance, or both. This paper presents an enhancement to an existing MC-HLS synthesis approach which solves this problem; it exploits and eliminates data duplication at multiple levels of the generated hierarchy, leading to a reduction in the number of levels and ultimately to higher performance, lower cost implementations. When applied to the synthesis of C-based Motion Estimation, Matrix Multiplication and Sobel Edge Detection applications, this enables reductions in Block RAM and Look Up Table (LUT) cost of up to 25%, whilst simultaneously increasing throughput.

Relevance:

90.00%

Publisher:

Abstract:

Software-programmable `soft' processors have shown tremendous potential for efficient realisation of high performance signal processing operations on Field Programmable Gate Array (FPGA), whilst lowering the design burden by avoiding the need to design fine-grained custom circuit architectures. However, the complex data access patterns, high memory bandwidth and computational requirements of sliding window applications, such as Motion Estimation (ME) and Matrix Multiplication (MM), lead to low performance, inefficient soft processor realisations. This paper resolves this issue, showing how adding support for block data addressing and accelerators for high performance loop execution achieves performance and resource efficiency over four times better than current best-in-class metrics. In addition, it demonstrates the first recorded real-time soft ME realisation for H.263 systems.
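As a software reference for the sliding-window workload discussed here (ours, not the paper's code; frame dimensions follow H.263 QCIF), full-search block matching re-reads heavily overlapping reference windows, which is exactly the access pattern block data addressing is meant to serve.

```python
import numpy as np

# Full-search block matching for motion estimation: every candidate motion
# vector re-reads an overlapping 16x16 window of the reference frame.
def full_search(cur_blk, ref, bx, by, r=8):
    """Best (dy, dx) for a 16x16 block by sum of absolute differences."""
    best, mv = np.inf, (0, 0)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            y, x = by + dy, bx + dx
            sad = np.abs(cur_blk - ref[y:y+16, x:x+16]).sum()
            if sad < best:
                best, mv = sad, (dy, dx)
    return mv, best

ref = np.random.rand(144, 176)             # QCIF luma frame (H.263-sized)
cur = np.random.rand(144, 176)
by, bx = 64, 80                            # block position, away from edges
mv, sad = full_search(cur[by:by+16, bx:bx+16], ref, bx, by)
```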

Relevance:

90.00%

Publisher:

Abstract:

Exam questions and solutions in PDF

Relevance:

90.00%

Publisher:

Abstract:

Exam questions and solutions in LaTeX. Diagrams for the questions are all together in the support.zip file, as .eps files.