998 results for Program Compilation


Relevance:

30.00%

Publisher:

Abstract:

Field-Programmable Gate Arrays (FPGAs) are becoming increasingly important in embedded and high-performance computing systems. They allow performance levels close to the ones obtained with Application-Specific Integrated Circuits, while still keeping design and implementation flexibility. However, programming FPGAs efficiently requires the expertise of hardware developers in mastering hardware description languages (HDLs) such as VHDL or Verilog. Attempts to furnish a high-level compilation flow (e.g., from C programs) still have to address open issues before broadly efficient results can be obtained. Bearing in mind the resources available on an FPGA, we developed LALP (Language for Aggressive Loop Pipelining), a novel language for programming FPGA-based accelerators, together with its compilation framework, including mapping capabilities. The main ideas behind LALP are to provide a higher abstraction level than HDLs, to exploit the intrinsic parallelism of hardware resources, and to allow the programmer to control execution stages whenever the compiler techniques are unable to generate efficient implementations. These features are particularly useful for implementing loop pipelining, a well-regarded technique used to accelerate computations in several application domains. This paper describes LALP and shows how it can be used to achieve high-performance computing solutions.
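
The abstract does not reproduce LALP syntax, so the following C-style sketch is only a rough, hypothetical illustration of the loop pipelining idea it targets: the loop body is split into load, compute, and store stages that overlap across iterations. The three-stage split, function names, and array names are assumptions for illustration, not taken from the paper.

    #include <stddef.h>

    /* Sequential loop: each iteration loads, computes, and stores in order. */
    void square_add_sequential(const int *a, int *b, int c, size_t n) {
        for (size_t i = 0; i < n; i++) {
            int x = a[i];           /* stage 1: load    */
            int y = x * x + c;      /* stage 2: compute */
            b[i] = y;               /* stage 3: store   */
        }
    }

    /* Pipelined loop: in the steady state the store for iteration i-2, the
     * compute for i-1, and the load for i overlap; on an FPGA these stages
     * would run as concurrent hardware units rather than reordered software. */
    void square_add_pipelined(const int *a, int *b, int c, size_t n) {
        if (n < 2) { square_add_sequential(a, b, c, n); return; }
        int x = a[0];               /* prologue: fill the pipeline */
        int y = x * x + c;
        x = a[1];
        for (size_t i = 2; i < n; i++) {
            b[i - 2] = y;           /* store result of iteration i-2 */
            y = x * x + c;          /* compute result of iteration i-1 */
            x = a[i];               /* load input of iteration i */
        }
        b[n - 2] = y;               /* epilogue: drain the pipeline */
        y = x * x + c;
        b[n - 1] = y;
    }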

Relevance:

30.00%

Publisher:

Abstract:

The measurements were obtained during two North Sea-wide, star-shaped cruises in summer 1986 and winter 1987, which were performed to investigate circulation-induced transport and biologically induced pollutant transfer within the interdisciplinary research project "ZISCH - Zirkulation und Schadstoffumsatz in der Nordsee / Circulation and Contaminant Fluxes in the North Sea (1984-1989)". The inventory presents parameters measured on hydrodynamics, nutrient dynamics, ecosystem dynamics and pollutant dynamics in the pelagic and benthic realm. The research program had the objective of quantifying the fluxes of major budgets, especially contaminants, in the North Sea. In spring 1986, following the phytoplankton spring bloom, and in late winter 1987, at minimum primary production activity, the North Sea ecosystem was investigated on a station net covering the whole North Sea. The station net was shaped like a star. Sampling started in the centre, followed by the northwest section, and moved counterclockwise around the North Sea following the residual currents. By this strategy, a time series was measured in the central North Sea and more synoptic data sets were obtained in the individual sections. Generally, advection processes have to be considered when comparing the data from different stations. The entire sampling period lasted more than six weeks on each cruise. Thus, a time lag should be considered, especially when comparing the data from the eastern and the western part of the central and northern North Sea, where samples were taken at the beginning and at the end of the campaign. The ZISCH investigations represented a qualitatively and quantitatively new approach to North Sea research in several respects: (1) the first simultaneous blanket coverage of all important biological, chemical and physical parameters in the entire North Sea ecosystem; (2) the first simultaneous measurements of major contaminants (metals and organohaline compounds) in the different ecosystem compartments; (3) simultaneous determinations of atmospheric inputs of momentum, energy and matter as important ecosystem boundary conditions; (4) performance of the complex measurement program during two seasons, namely the spring plankton bloom and the subsequent winter period of minimal biological activity; and (5) support of data analysis and interpretation by oceanographic and meteorological numerical models on the same scales.

Relevance:

30.00%

Publisher:

Abstract:

We describe the current status of and provide performance results for a prototype compiler of Prolog to C, ciaocc. ciaocc is novel in that it is designed to accept different kinds of high-level information, typically obtained via an automatic analysis of the initial Prolog program and expressed in a standardized language of assertions. This information is used to optimize the resulting C code, which is then processed by an off-the-shelf C compiler. The basic translation process essentially mimics the unfolding of a bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. This is facilitated by a flexible design of the instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: predicates already written in C, data definitions, memory management routines and areas, etc., as well as mixing emulated bytecode with native code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.
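
To make the unfolding idea concrete, here is a hedged C sketch. The two-opcode instruction set, operand encoding, and function names are invented for illustration and do not correspond to ciaocc's actual instructions: the emulator dispatches on opcodes in a loop, while the compiled output inlines the handler bodies for one clause's known bytecode, which is essentially what unfolding the emulator with respect to a fixed bytecode produces.

    #include <stdint.h>

    /* Emulator view: a dispatch loop interprets one instruction at a time
     * (hypothetical two-instruction set, not ciaocc's). */
    enum op { GET_CONST, PROCEED };

    int emulate(const uint8_t *pc, intptr_t *reg) {
        for (;;) {
            switch (*pc++) {
            case GET_CONST:         /* unify register pc[0] with constant pc[1] */
                if (reg[pc[0]] != pc[1]) return 0;  /* unification fails */
                pc += 2;
                break;
            case PROCEED:           /* clause proved: succeed */
                return 1;
            default:                /* unknown opcode: fail */
                return 0;
            }
        }
    }

    /* Unfolded view: the emulator specialized to the fixed bytecode of one
     * clause, say p(42). The dispatch loop disappears and the operands are
     * hard-wired, leaving straight-line C for an off-the-shelf compiler. */
    int clause_p(intptr_t *reg) {
        if (reg[0] != 42) return 0; /* GET_CONST 0, 42 */
        return 1;                   /* PROCEED */
    }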

Relevance:

30.00%

Publisher:

Abstract:

Recent research into the implementation of logic programming languages has demonstrated that global program analysis can be used to speed up execution by an order of magnitude. However, currently such global program analysis requires the program to be analysed as a whole: separate compilation of modules is not supported. We describe and empirically evaluate a simple model for extending global program analysis to support separate compilation of modules. Importantly, our model supports context-sensitive program analysis and multi-variant specialization of procedures in the modules.
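
The paper's actual model is not detailed in the abstract; as a generic illustration of the underlying idea (analyze one module at a time against summaries of its imports, re-analyzing importers until all summaries stabilize), here is a minimal C worklist sketch. The integer Summary domain, table sizes, and toy analyzer are assumptions, and the sketch deliberately omits the context-sensitivity and multi-variant specialization the paper adds.

    #include <stdbool.h>

    #define MAX_MODULES 8

    typedef int Summary;                       /* abstract export description */

    static Summary summaries[MAX_MODULES];
    static bool    imports[MAX_MODULES][MAX_MODULES]; /* imports[m][i]: m imports i */

    /* Toy analyzer standing in for the real one: a module's summary grows
     * with its imports' summaries, over a finite domain so a fixpoint exists. */
    static Summary analyze_module(int m) {
        Summary s = 0;
        for (int i = 0; i < MAX_MODULES; i++)
            if (imports[m][i] && summaries[i] > s) s = summaries[i];
        return s < 3 ? s + 1 : 3;              /* height-bounded domain */
    }

    /* Re-analyze a module whenever a summary it depends on changes. */
    void analyze_program(int n) {
        bool dirty[MAX_MODULES];
        int pending = n;
        for (int m = 0; m < n; m++) dirty[m] = true;
        while (pending > 0)
            for (int m = 0; m < n; m++) {
                if (!dirty[m]) continue;
                dirty[m] = false; pending--;
                Summary s = analyze_module(m); /* reads imported summaries */
                if (s == summaries[m]) continue;   /* stable: nothing to do */
                summaries[m] = s;
                for (int d = 0; d < n; d++)        /* wake up the importers */
                    if (imports[d][m] && !dirty[d]) { dirty[d] = true; pending++; }
            }
    }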

Relevance:

30.00%

Publisher:

Abstract:

We describe the current status of and provide preliminary performance results for a compiler of Prolog to C. The compiler is novel in that it is designed to accept different kinds of high-level information (typically obtained via an analysis of the initial Prolog program and expressed in a standardized language of assertions) and use this information to optimize the resulting C code, which is then further processed by an off-the-shelf C compiler. The basic translation process essentially mimics an unfolding of a C-coded bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. Optimizations are then applied to this unfolded program. This is facilitated by a more flexible design of the bytecode instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: ancillary pieces of C code, data definitions, memory management routines and areas, etc., as well as mixing emulated bytecode with natively compiled code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.
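
As a hypothetical illustration of how such analysis information can pay off (the one-bit tagging scheme and helper macros below are assumptions for the sketch, not the system's actual term representation): without information, the emitted C must test type tags at run time, whereas if assertions guarantee small-integer arguments, the generated code can drop the tests and keep only the arithmetic.

    #include <stdint.h>

    /* Hypothetical tagged representation: low bit set marks a small integer. */
    #define IS_INT(v)    (((v) & 1) != 0)
    #define UNTAG_INT(v) ((v) >> 1)
    #define TAG_INT(n)   ((intptr_t)(((n) << 1) | 1))

    /* No analysis information: emitted code must test both tags at run time. */
    int add_checked(intptr_t x, intptr_t y, intptr_t *out) {
        if (!IS_INT(x) || !IS_INT(y)) return 0;   /* fail on non-integers */
        *out = TAG_INT(UNTAG_INT(x) + UNTAG_INT(y));
        return 1;
    }

    /* Assertions guarantee small-integer arguments: the tests are dropped
     * and the operation compiles down to plain machine arithmetic. */
    intptr_t add_unchecked(intptr_t x, intptr_t y) {
        return TAG_INT(UNTAG_INT(x) + UNTAG_INT(y));
    }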

Relevance:

30.00%

Publisher:

Abstract:

Ciao Prolog incorporates a module system which allows separate compilation and the sensible creation of standalone executables. We describe some of the main aspects of the Ciao modular compiler, ciaoc, which takes advantage of the characteristics of the Ciao Prolog module system to automatically perform separate and incremental compilation and to efficiently build small, standalone executables with competitive run-time performance. ciaoc can also detect statically a larger number of programming errors. We also present a generic code processing library for handling modular programs, which provides an important part of the functionality of ciaoc. This library allows the development of program analysis and transformation tools in a way that is to some extent orthogonal to the details of module system design, and it has been used in the implementation of ciaoc and other Ciao system tools. We also describe the different types of executables which can be generated by the Ciao compiler, which offer different tradeoffs between executable size, startup time, and portability, depending, among other factors, on the linking regime used (static, dynamic, lazy, etc.). Finally, we provide experimental data which illustrate these tradeoffs.

Relevance:

30.00%

Publisher:

Abstract:

The strength and geometry of the Atlantic meridional overturning circulation are tightly coupled to climate on glacial-interglacial and millennial timescales, but have proved difficult to reconstruct, particularly for the Last Glacial Maximum. Today, the return flow from the northern North Atlantic to lower latitudes associated with the Atlantic meridional overturning circulation reaches down to approximately 4,000 m. In contrast, during the Last Glacial Maximum this return flow is thought to have occurred primarily at shallower depths. Measurements of sedimentary 231Pa/230Th have been used to reconstruct the strength of circulation in the North Atlantic Ocean, but the effects of biogenic silica on 231Pa/230Th-based estimates remain controversial. Here we use measurements of 231Pa/230Th ratios and biogenic silica in Holocene-aged Atlantic sediments, together with simulations with a two-dimensional scavenging model, to demonstrate that the geometry and strength of the Atlantic meridional overturning circulation are the primary controls on 231Pa/230Th ratios in modern Atlantic sediments. For the glacial maximum, a simulation of Atlantic overturning with a shallow but vigorous circulation and bulk water transport at around 2,000 m depth best matched observed glacial Atlantic 231Pa/230Th values. We estimate that the transport of intermediate water during the Last Glacial Maximum was at least as strong as deep water transport today.

Relevance:

30.00%

Publisher:

Abstract:

Earlier ed., 1965, issued by U.S. Robert A. Taft Sanitary Engineering Center, Cincinnati.