981 results for Oceanographic computations
Abstract:
Fixed-node diffusion Monte Carlo computations are used to determine the ground state energy and electron density for jellium spheres with up to N = 106 electrons and background densities corresponding to the electron gas parameter 1 ≤ r_s ≤ 5.62. We analyze the density and size dependence of the surface energy, and we extrapolate our data to the thermodynamic limit. The results agree well with the predictions of density functional computations using the local density approximation. In the case of N = 20, we extend our computation to higher densities and identify a transition between atomic- and jelliumlike nodal structures occurring at the background density corresponding to r_s = 0.13. In this case the local density approximation is unable to reproduce the changes in the correlation energy due to the discontinuous transition in the ground state nodal structure. We discuss the relevance of our results for nonlocal approximations to density functional theory.
Abstract:
The accuracy and reliability of popular density functional approximations for the compounds giving rise to room-temperature ionic liquids have been assessed by computing the T = 0 K crystal structure of several 1-alkyl-3-methyl-imidazolium salts. Two prototypical exchange-correlation approximations have been considered, i.e., the local density approximation (LDA) and one gradient-corrected scheme [PBE-GGA, Phys. Rev. Lett. 77, 3865 (1996)]. Comparison with low-temperature x-ray diffraction data shows that the equilibrium volume predicted by either approximation is affected by large errors, nearly equal in magnitude (~10%) and of opposite sign. In both cases the error can be traced to a poor description of the intermolecular interactions, while the intramolecular structure is fairly well reproduced by both LDA and PBE-GGA. The PBE-GGA optimization of atomic positions within the experimental unit cell provides results in good agreement with the x-ray structure. The correct system volume can also be restored by supplementing PBE-GGA with empirical dispersion terms reproducing the r⁻⁶ attractive tail of the van der Waals interactions.
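The empirical dispersion correction mentioned in this abstract can be sketched in a few lines. The functional form below (a damped pairwise -C6/r^6 sum in the style of common DFT-D schemes) and all parameter values are assumptions chosen for illustration, not the parameters used in the paper:

```python
import math

def dispersion_energy(pairs, s6=0.75, d=20.0):
    """Empirical -C6/r^6 dispersion correction with a Fermi-type damping
    function that switches the term off at short range, where the
    functional already describes the interaction.
    `pairs` is a list of (r, c6, r_vdw) tuples: interatomic distance,
    pair C6 coefficient, and sum of van der Waals radii.
    All parameter values here are illustrative assumptions."""
    e = 0.0
    for r, c6, r_vdw in pairs:
        f_damp = 1.0 / (1.0 + math.exp(-d * (r / r_vdw - 1.0)))
        e -= s6 * c6 / r**6 * f_damp
    return e

# Hypothetical single pair: at large separation the damping factor -> 1
# and the correction approaches the bare -s6*C6/r^6 attractive tail.
print(dispersion_energy([(8.0, 40.0, 3.5)]))
```

At short range the damping factor suppresses the correction, so the term only restores the long-range attraction that LDA and PBE-GGA miss.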
Abstract:
A high-performance VLSI architecture to perform combined multiply-accumulate, divide, and square root operations is proposed. The circuit is highly regular, requires only minimal control, and can be pipelined right down to the bit level. The system can also be reconfigured on every cycle to perform one or more of these operations. The throughput rate for each operation is the same and is independent of the wordlength; this is achieved using redundant arithmetic. With current CMOS technology, throughput rates in excess of 80 million operations per second are expected.
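The reason redundant arithmetic makes throughput wordlength-independent can be illustrated in software with a carry-save representation: each addition stage computes every bit position independently, with no carry rippling across the word, and a single carry-propagating conversion happens only at the end. This is a software model of the general technique, not the paper's circuit:

```python
def carry_save_add(a, b, c):
    """Add three integers into a redundant (sum, carry) pair.
    Each bit position is computed independently -- no carry ripples
    across the word, which is what makes the per-stage delay (and
    hence throughput) independent of the wordlength."""
    s = a ^ b ^ c                          # bitwise sum without carries
    carry = (a & b | a & c | b & c) << 1   # carry bits, shifted one position
    return s, carry

def resolve(s, carry):
    """Final carry-propagating add that converts the redundant pair
    back to a conventional binary number."""
    return s + carry

s, c = carry_save_add(13, 7, 5)
print(resolve(s, c))  # 25
```

In hardware, long chains of such carry-free stages can be pipelined at the bit level; the carry-propagating conversion is paid once per result rather than once per stage.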
Abstract:
We describe an approach, based on annotations and refactoring, aimed at the joint exploitation of control (stream) and data parallelism in a skeleton-based parallel programming environment. Annotations drive an efficient implementation of a parallel computation. Refactoring is used to transform the associated skeleton tree into a more efficient, functionally equivalent skeleton tree. In most cases, cost models are used to drive the refactoring process. We show how sample use-case applications and kernels may be optimized, and discuss preliminary experiments with FastFlow assessing the theoretical results. © 2013 Springer-Verlag.
Abstract:
In this paper a model of grid computation that supports both heterogeneity and dynamicity is presented. The model presupposes that user sites contain software components awaiting execution on the grid. User sites and grid sites interact by means of managers which control dynamic behaviour. The orchestration language ORC [9,10] offers an abstract means of specifying operations for resource acquisition and execution monitoring while allowing for the possibility of non-responsive hardware. It is demonstrated that ORC is sufficiently expressive to model typical kinds of grid interactions.
Abstract:
As data analytics grows in importance, it is also quickly becoming one of the dominant application domains that require parallel processing. This paper investigates the applicability of OpenMP, the dominant shared-memory parallel programming model in high-performance computing, to the domain of data analytics. We contrast the performance and programmability of key data analytics benchmarks against Phoenix++, a state-of-the-art shared-memory map/reduce programming system. Our study shows that OpenMP outperforms the Phoenix++ system by a large margin for several benchmarks. In other cases, however, the programming model lacks support for this application domain.
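The map/reduce programming model that systems like Phoenix++ implement can be sketched with the classic word-count kernel. This is a minimal Python illustration of the model itself (the map step producing partial key/value counts, the reduce step merging them), not Phoenix++ or OpenMP code:

```python
from collections import Counter
from functools import reduce

def map_chunk(chunk):
    """Map phase: turn one input chunk into partial word counts.
    Each chunk is independent, so this step parallelizes trivially."""
    return Counter(chunk.split())

def merge_counts(acc, partial):
    """Reduce phase: merge partial counts key by key."""
    acc.update(partial)
    return acc

chunks = ["the quick brown fox", "the lazy dog", "the fox"]
partials = [map_chunk(c) for c in chunks]          # parallelizable map step
totals = reduce(merge_counts, partials, Counter())
print(totals["the"])  # 3
```

An OpenMP version of the same kernel would instead express the map step as a parallel loop over chunks with per-thread partial counters merged at the end, which is the programming-style contrast the paper studies.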
Abstract:
Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, as well as the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. Also, a comparison with loop perforation (a well-known compile-time approximation technique), shows that the proposed framework results in significantly higher quality for the same energy budget.
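The significance-driven trade-off this abstract describes can be sketched as a greedy planner: start from the cheapest all-approximate plan, then upgrade tasks to their exact versions in order of decreasing significance while the energy budget allows. The task tuples, cost numbers, and quality scoring below are invented for illustration and are not the paper's runtime or energy model:

```python
def plan(tasks, budget):
    """Each task is (significance, exact_energy, approx_energy).
    Returns the achieved quality: a task run exactly contributes its
    full significance, an approximate run contributes half (an
    assumed scoring rule for this sketch)."""
    spent = sum(t[2] for t in tasks)        # cost of the all-approximate plan
    if spent > budget:
        raise ValueError("budget below the cheapest feasible plan")
    quality = sum(t[0] for t in tasks) * 0.5
    # Upgrade the most significant tasks first, while energy remains.
    for sig, exact, approx in sorted(tasks, reverse=True):
        extra = exact - approx
        if spent + extra <= budget:
            spent += extra
            quality += 0.5 * sig            # exact run restores the full score
    return quality

tasks = [(1.0, 10.0, 2.0), (0.6, 8.0, 2.0), (0.2, 6.0, 1.0)]
print(plan(tasks, budget=16.0))
```

With a budget of 16, only the most significant task is upgraded to its exact version; a larger budget would progressively restore the others, which is the graceful quality/energy trade-off the framework targets.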
Abstract:
The Acoustic Oceanographic Buoy (AOB) telemetry system has been designed to meet acoustic rapid environmental assessment requirements. It uses a standard Institute of Electrical and Electronics Engineers (IEEE) 802.11 wireless local area network (WLAN) to integrate the air radio network (RaN), and a hydrophone array and acoustic source to integrate the underwater acoustic network (AcN). It offers advantages including local data storage, dedicated signal processing, and Global Positioning System (GPS) timing and localization. Thanks to its WLAN transceivers, the AOB can also be integrated with other similar systems to form a flexible network and perform on-line high-speed data transmissions. The AOB is a reusable system that requires little maintenance and, since it is designed to operate in free-drifting mode, can also work as a salt-water plug-and-play system at sea. The AOB is also suitable for performing distributed digital signal processing tasks thanks to its on-board digital signal processor.
Abstract:
Master's dissertation, Marine Biology, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015.