Abstract:
A finite-difference scheme based on flux difference splitting is presented for the solution of the two-dimensional shallow-water equations of ideal fluid flow. A linearised problem, analogous to that of Riemann for gasdynamics, is defined, and a scheme, based on numerical characteristic decomposition, is presented for obtaining approximate solutions to the linearised problem. The method of upwind differencing is used for the resulting scalar problems, together with a flux limiter for obtaining a second-order scheme which avoids non-physical, spurious oscillations. An extension to the two-dimensional equations with source terms is included. The scheme is applied to a dam-break problem with cylindrical symmetry.
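The abstract does not give the scheme's formulas; as a minimal sketch of the scalar building block it describes, the following applies first-order upwind differencing to the linear advection equation u_t + a·u_x = 0 (a > 0), plus a minmod-limited second-order correction so the update stays free of spurious oscillations. The limiter choice (minmod) and array layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller-magnitude slope, and returns
    zero at extrema (where the two slopes disagree in sign)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(u, a, dt, dx):
    """One time step for u_t + a*u_x = 0 with a > 0:
    first-order upwind plus a minmod-limited second-order correction.
    Boundary cells are left unchanged for simplicity."""
    nu = a * dt / dx                       # Courant number (stable for 0 <= nu <= 1)
    du = np.diff(u)                        # du[i] = u[i+1] - u[i]
    s = minmod(du[1:], du[:-1])            # limited slope in each interior cell
    un = u.copy()
    # first-order upwind update on interior cells
    un[2:-1] = u[2:-1] - nu * (u[2:-1] - u[1:-2])
    # limited Lax-Wendroff-type correction (vanishes near discontinuities)
    un[2:-1] -= 0.5 * nu * (1.0 - nu) * (s[1:] - s[:-1])
    return un
```

Near a discontinuity the minmod slopes drop to zero, so the scheme reduces to monotone first-order upwinding there, which is what suppresses the non-physical oscillations the abstract mentions.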
Abstract:
An analysis of various arithmetic averaging procedures for approximate Riemann solvers is made with a specific emphasis on efficiency and a jump capturing property. The various alternatives discussed are intended for future work, as well as the more immediate problem of steady, supercritical free-surface flows. Numerical results are shown for two test problems.
Abstract:
A weak formulation of Roe's approximate Riemann solver is applied to the equations of ‘barotropic’ flow, including the shallow water equations, and it is shown that this leads to an approximate Riemann solver recently presented for such flows.
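The abstract states the result without formulas; for context, the standard Roe-averaged state for the one-dimensional shallow-water equations (depth h, velocity u, gravity g) takes the following form. This is the textbook Roe average, offered as a sketch rather than the paper's own derivation.

```python
import math

def roe_average(hL, uL, hR, uR, g=9.81):
    """Roe-averaged velocity and wave celerity for the 1-D shallow-water
    equations. The averaged Jacobian built from these satisfies the
    jump condition F(qR) - F(qL) = A_hat (qR - qL) exactly."""
    wL, wR = math.sqrt(hL), math.sqrt(hR)
    u_hat = (wL * uL + wR * uR) / (wL + wR)   # sqrt(h)-weighted velocity
    c_hat = math.sqrt(0.5 * g * (hL + hR))    # celerity from the mean depth
    return u_hat, c_hat                       # wave speeds: u_hat - c_hat, u_hat + c_hat
```

When the left and right states coincide, the average reduces to the exact state, so the linearised solver is consistent with the underlying equations.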
Abstract:
A Blueprint for Affective Computing: A sourcebook and manual is the very first attempt to ground affective computing within the disciplines of psychology, affective neuroscience, and philosophy. This book illustrates the contributions of each of these disciplines to the development of the ever-growing field of affective computing. In addition, it demonstrates practical examples of cross-fertilization between disciplines in order to highlight the need for integration of computer science, engineering and the affective sciences.
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper is motivated towards bridging this gap by proposing a swarm-array computing approach based on ‘Intelligent Agents’ to achieve autonomy for distributed parallel computing systems. In the proposed approach, a task to be executed on parallel computing cores is carried onto a computing core by carrier agents that can seamlessly transfer between processing cores in the event of a predicted failure. The cognitive capabilities of the carrier agents on a parallel processing core serve in achieving the self-ware objectives of autonomic computing, hence applying autonomic computing concepts for the benefit of parallel computing systems. The feasibility of the proposed approach is validated by simulation studies using a multi-agent simulator on an FPGA (Field-Programmable Gate Array) and experimental studies using MPI (Message Passing Interface) on a computer cluster. Preliminary results confirm that applying autonomic computing principles to parallel computing systems is beneficial.
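The paper's simulator is not described in detail here; the following is only a toy sketch of the carrier-agent idea it outlines: an agent holding a task migrates to a healthy core when failure of its current core is predicted. All class and method names are hypothetical.

```python
class CarrierAgent:
    """Toy model of a carrier agent: it holds a task and moves it
    away from a core whose failure has been predicted.
    (Illustrative only; names and structure are not from the paper.)"""

    def __init__(self, task, core):
        self.task = task
        self.core = core

    def on_failure_predicted(self, healthy_cores):
        """Transfer the task to the first available healthy core."""
        for core in healthy_cores:
            if core != self.core:
                self.core = core
                return self.core
        raise RuntimeError("no healthy core available")

# usage: core 3 is predicted to fail, so the agent hops to core 0
agent = CarrierAgent(task="reduce-chunk-0", core=3)
new_core = agent.on_failure_predicted(healthy_cores=[0, 1, 2])
```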
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts. However, the research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely ‘Intelligent Agents’. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and for successfully completing the task. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
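The abstract mentions a parallel reduction algorithm implemented with MPI but gives no listing; the sketch below shows the pairwise (binomial-tree) reduction pattern that collectives such as MPI_Reduce commonly use, written in plain Python (no MPI) so the combining rounds are visible. It is an illustration of the pattern, not the paper's implementation.

```python
def tree_reduce(values, op):
    """Pairwise (tree) reduction: O(log2 n) combining rounds, the same
    communication pattern as a binomial-tree MPI_Reduce. Each round,
    even-indexed 'ranks' combine with their right-hand neighbour."""
    vals = list(values)
    while len(vals) > 1:
        nxt = []
        for i in range(0, len(vals) - 1, 2):
            nxt.append(op(vals[i], vals[i + 1]))
        if len(vals) % 2:          # odd count: last value passes through
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]
```

With p processors the tree needs only ceil(log2 p) rounds instead of p - 1 sequential combines, which is why the pattern scales on a cluster.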
Abstract:
Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. Thus a user apparently gains the performance of a supercomputer by using the spare cycles of other workstations.
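The abstract's pattern (independent trials farmed out to workers, partial results combined) can be sketched as follows. The example estimates pi by dart-throwing rather than modelling particle growth, each "worker" gets its own RNG seed, and the dispatch loop is a sequential stand-in for sending work to cluster nodes; all of those are illustrative assumptions.

```python
import random

def worker(seed, n_trials):
    """One worker's share of the simulation: independent trials with
    its own RNG seed (here, uniform darts in the unit square)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def run_on_cluster(n_workers, trials_per_worker):
    """Split the trials across workers and combine the partial counts.
    (Sequential stand-in for dispatching to cluster machines.)"""
    total_hits = sum(worker(seed, trials_per_worker)
                     for seed in range(n_workers))
    return 4.0 * total_hits / (n_workers * trials_per_worker)
```

Because the trials are independent, the workers never need to communicate until the final combine, which is what makes this kind of simulation scale almost linearly across spare workstation cycles.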