Abstract:
Memory models for shared-memory concurrent programming languages typically guarantee sequential consistency (SC) semantics for data-race-free (DRF) programs, while providing very weak or no guarantees for non-DRF programs. In effect, programmers are expected to write only DRF programs, which are then executed with SC semantics. With this in mind, we propose a novel scalable solution for dataflow analysis of concurrent programs, which is proven sound for DRF programs under SC semantics. We use the synchronization structure of the program to propagate dataflow information among threads without having to consider all interleavings explicitly. Given a dataflow analysis that is sound for sequential programs and meets certain criteria, our technique automatically converts it into an analysis for concurrent programs.
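The core idea lends itself to a small illustration. The following is a minimal, hypothetical sketch in Python (not the authors' implementation): a may-reach analysis in which each thread's facts about which writes may reach a program point are propagated along synchronization edges (release to acquire), so interleavings never have to be enumerated.

    from collections import defaultdict

    # Each thread is a list of statements: 'write x' defines x, 'use x' reads x,
    # 'rel L' releases lock L, 'acq L' acquires lock L.
    threads = {
        "T1": ["write x", "rel L"],
        "T2": ["acq L", "use x"],
    }

    # Synchronization edges: (thread, index) of a release -> the matching acquire.
    sync_edges = [(("T1", 1), ("T2", 0))]

    def reaching_writes(threads, sync_edges):
        # facts[(thread, i)] = set of variables whose writes may reach that point.
        facts = defaultdict(set)
        changed = True
        while changed:
            changed = False
            for tid, stmts in threads.items():
                current = set(facts[(tid, 0)])
                for i, stmt in enumerate(stmts):
                    current |= facts[(tid, i)]
                    op, arg = stmt.split()
                    if op == "write":
                        current.add(arg)
                    if op == "rel":
                        # Publish the facts along the release -> acquire edge.
                        for src, dst in sync_edges:
                            if src == (tid, i) and not current <= facts[dst]:
                                facts[dst] |= current
                                changed = True
                    if not current <= facts[(tid, i + 1)]:
                        facts[(tid, i + 1)] |= current
                        changed = True
        return facts

    facts = reaching_writes(threads, sync_edges)
    print(sorted(facts[("T2", 1)]))   # ['x']: T1's write reaches T2's 'use x'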
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, that is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism, but they also have control-flow-dominated scalar regions that significantly affect execution time. Today's computer systems offer tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). An approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can therefore significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels; the problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map the identified kernels to either the CPU or the GPU so that kernel execution on the two devices proceeds synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependences across basic blocks, we propose a dataflow analysis and edge-splitting strategy. Our compiler thus automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation on a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X over native MATLAB execution for data-parallel benchmarks.
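As a rough illustration of the mapping step, the sketch below shows a greedy heuristic (hypothetical, not MEGHA's actual algorithm) that places each kernel on the CPU or GPU by comparing estimated execution times plus the data-transfer cost incurred when a predecessor kernel was placed on the other device; all kernel names and cost numbers are made up.

    kernels = {                        # name: (cpu_time_ms, gpu_time_ms)
        "k_scalar_loop": (2.0, 9.0),
        "k_matmul":      (50.0, 4.0),
        "k_stencil":     (30.0, 3.0),
        "k_reduce":      (5.0, 7.0),
    }
    edges = {                          # (producer, consumer): transfer cost in ms
        ("k_scalar_loop", "k_matmul"): 1.5,
        ("k_matmul", "k_stencil"):     2.0,
        ("k_stencil", "k_reduce"):     1.0,
    }
    topo_order = ["k_scalar_loop", "k_matmul", "k_stencil", "k_reduce"]

    def place(kernels, edges, topo_order):
        placement = {}
        for k in topo_order:
            cpu_t, gpu_t = kernels[k]
            cost = {"cpu": cpu_t, "gpu": gpu_t}
            for (prod, cons), xfer in edges.items():
                if cons == k and prod in placement:
                    # Pay a transfer if the producer sits on the other device.
                    for dev in ("cpu", "gpu"):
                        if placement[prod] != dev:
                            cost[dev] += xfer
            placement[k] = min(cost, key=cost.get)
        return placement

    print(place(kernels, edges, topo_order))
    # {'k_scalar_loop': 'cpu', 'k_matmul': 'gpu', 'k_stencil': 'gpu', 'k_reduce': 'cpu'}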
Abstract:
With the proliferation of chip multicores (CMPs) on desktops and embedded platforms, multi-threaded programs have become ubiquitous. Multiple threads may contend for shared resources, such as the on-chip shared cache and interconnects, depending on how they access them. Hence, we propose a tool, Thread Contention Predictor (TCP), to help quantify the number of threads sharing data and their sharing pattern. We demonstrate its use to predict a more profitable shared last-level on-chip cache (LLC) access policy on CMPs. Our cache configuration predictor is 2.2 times faster than cycle-accurate simulation. We also demonstrate its use for identifying hot data structures in a program that may cause performance degradation due to false data sharing. We fix the layout of such data structures and show up to 10% and 18% improvements in execution time and energy-delay product (EDP), respectively.
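The kind of analysis such a predictor performs can be sketched as follows (a hypothetical simplification, not the TCP tool itself): from a trace of (thread, address) accesses, count how many threads touch each cache line and flag lines where distinct threads touch disjoint addresses, which indicates likely false sharing that a layout fix (padding or realignment) would remove.

    from collections import defaultdict

    CACHE_LINE = 64          # bytes; assumed line size

    # Synthetic access trace: (thread id, byte address). Addresses are made up.
    trace = [
        (0, 0x1000), (1, 0x1008),    # same 64-byte line, different words
        (0, 0x1000), (1, 0x1008),
        (2, 0x2000), (3, 0x2000),    # same line, same word: true sharing
    ]

    def sharing_report(trace):
        by_line = defaultdict(lambda: defaultdict(set))   # line -> thread -> addresses
        for tid, addr in trace:
            by_line[addr // CACHE_LINE][tid].add(addr)
        for line, per_thread in sorted(by_line.items()):
            if len(per_thread) < 2:
                continue                                  # no inter-thread sharing
            addr_sets = list(per_thread.values())
            overlap = any(addr_sets[i] & addr_sets[j]
                          for i in range(len(addr_sets))
                          for j in range(i + 1, len(addr_sets)))
            kind = "true" if overlap else "false"         # false sharing -> pad/realign
            print(f"line 0x{line * CACHE_LINE:x}: {len(per_thread)} threads, {kind} sharing")

    sharing_report(trace)
    # line 0x1000: 2 threads, false sharing
    # line 0x2000: 2 threads, true sharing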
Abstract:
Large software systems are developed by composing multiple programs. If the programs manipulate and exchange complex data, such as network packets or files, it is essential to establish that they follow compatible data formats. Most of the complexity of data formats is associated with the headers. In this paper, we address compatibility of programs operating over headers of network packets, files, images, etc. As format specifications are rarely available, we infer the format associated with headers by a program as a set of guarded layouts. In terms of these formats, we define and check compatibility of (a) producer-consumer programs and (b) different versions of producer (or consumer) programs. A compatible producer-consumer pair is free of type mismatches and logical incompatibilities, such as the consumer rejecting valid outputs generated by the producer. A backward-compatible producer (resp. consumer) is guaranteed to be compatible with consumers (resp. producers) that were compatible with its older version. With our prototype tool, we identified 5 known bugs and 1 potential bug in (a) sender-receiver modules of Linux network drivers from 3 vendors and (b) different versions of a TIFF image library.
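To illustrate the notion of guarded layouts, here is a minimal, hypothetical sketch (not the paper's tool or inference algorithm): a header format is a set of (guard, layout) pairs, and a producer-consumer pair is flagged as incompatible if the producer can emit a guarded layout that the consumer either rejects or interprets with a different layout. The field names and offsets are invented for the example.

    # A guarded layout: (guard, layout), where the guard constrains a tag field
    # and the layout maps field names to (offset, size) in bytes.
    producer_format = [
        ({"version": 1}, {"version": (0, 2), "length": (2, 2), "payload": (4, 60)}),
        ({"version": 2}, {"version": (0, 2), "length": (2, 4), "payload": (6, 58)}),
    ]
    consumer_format = [
        ({"version": 1}, {"version": (0, 2), "length": (2, 2), "payload": (4, 60)}),
        # The consumer was never updated for version 2 headers.
    ]

    def compatible(producer, consumer):
        problems = []
        for guard, layout in producer:
            accepted = [c_layout for c_guard, c_layout in consumer if c_guard == guard]
            if not accepted:
                problems.append(f"consumer rejects headers with guard {guard}")
                continue
            if all(c_layout != layout for c_layout in accepted):
                problems.append(f"layout mismatch for guard {guard}")
        return problems

    for p in compatible(producer_format, consumer_format):
        print("incompatibility:", p)
    # incompatibility: consumer rejects headers with guard {'version': 2}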
Abstract:
We propose a new approach for producing precise constrained slices of programs in a language such as C. We build on a previous term-rewriting-based approach to this problem, which primarily targets loop-free fragments and is fully precise in that setting. We incorporate abstract interpretation into term-rewriting, using a given arbitrary abstract lattice, resulting in a novel technique for slicing loops whose precision is linked to the power of the given abstract lattice. We address pointers in a first-class manner, including when they are used within loops to traverse and update recursive data structures. Finally, we illustrate the comparative precision of our slices over those of previous approaches using representative examples.
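A toy sketch of how an abstract domain can sharpen a slice (hypothetical, and far simpler than the paper's term-rewriting technique): a backward slice of a small program in which constant propagation proves that a guarded assignment can never execute, so that assignment and its dependences are pruned from the slice.

    # Statement: (label, target, used_vars, guard, literal), where guard is None
    # or a predicate (var, op, constant) controlling whether the assignment runs.
    program = [
        ("s1", "n", set(),  None,           2),     # n = 2
        ("s2", "k", {"n"},  None,           None),  # k = n + 10
        ("s3", "k", {"u"},  ("n", ">", 5),  None),  # if n > 5: k = u
        ("s4", "r", {"k"},  None,           None),  # r = k   <- slicing criterion
    ]

    def const_env(program):
        """Trivial constant propagation: record unconditional literal assignments."""
        env = {}
        for _, tgt, uses, guard, literal in program:
            if literal is not None and not uses and guard is None:
                env[tgt] = literal
            elif tgt in env:
                env.pop(tgt)          # overwritten with a non-constant value
        return env

    def guard_definitely_false(guard, env):
        if guard is None:
            return False
        var, op, const = guard
        if var not in env or op != ">":
            return False              # unknown in the abstract domain: keep it
        return not (env[var] > const)

    def backward_slice(program, criterion_var):
        env = const_env(program)
        relevant, in_slice = {criterion_var}, set()
        for label, tgt, uses, guard, _ in reversed(program):
            if tgt not in relevant:
                continue
            if guard_definitely_false(guard, env):
                continue              # abstract interpretation rules this def out
            in_slice.add(label)
            if guard is None:
                relevant.discard(tgt) # unconditional definition kills the variable
            else:
                relevant.add(guard[0])
            relevant |= uses
        return in_slice

    print(sorted(backward_slice(program, "r")))   # ['s1', 's2', 's4'] -- s3 pruned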
Abstract:
Task-parallel languages are increasingly popular. Many of them provide expressive mechanisms for intertask synchronization. For example, OpenMP 4.0 will integrate data-driven execution semantics derived from the StarSs research language. Compared to the more restrictive data-parallel and fork-join concurrency models, the advanced features being introduced into task-parallel models enable improved scalability through load balancing, memory latency hiding, mitigation of the pressure on memory bandwidth, and, as a side effect, reduced power consumption. In this article, we develop a systematic approach to compile loop nests into concurrent, dynamically constructed graphs of dependent tasks. We propose a simple and effective heuristic that selects the most profitable parallelization idiom for every dependence type and communication pattern. This heuristic enables the extraction of interband parallelism (cross-barrier parallelism) in a number of numerical computations that range from linear algebra to structured grids and image processing. The proposed static analysis and code generation alleviate the burden of a full-blown dependence resolver that tracks the readiness of tasks at runtime. We evaluate our approach and algorithms in the PPCG compiler, targeting OpenStream, a representative dataflow task-parallel language with explicit intertask dependences and a lightweight runtime. Experimental results demonstrate the effectiveness of the approach.
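The target execution model can be illustrated with a small, hypothetical sketch (not the paper's compilation scheme): a blocked loop nest decomposed into tasks with explicit inter-task dependences, driven by a tiny ready-queue scheduler, so a block of the next outer iteration can start as soon as its producing blocks finish rather than after a global barrier.

    from collections import defaultdict, deque

    T, B = 3, 4                         # outer (time) iterations x blocks

    def deps(t, b):
        """Task (t, b) reads its own and neighbouring blocks from step t-1."""
        if t == 0:
            return []
        return [(t - 1, nb) for nb in (b - 1, b, b + 1) if 0 <= nb < B]

    # Build the dependence graph the compiler would emit.
    preds = {(t, b): deps(t, b) for t in range(T) for b in range(B)}
    succs = defaultdict(list)
    for task, ps in preds.items():
        for p in ps:
            succs[p].append(task)

    # Dataflow-style execution: a task becomes ready once all its inputs are done.
    remaining = {task: len(ps) for task, ps in preds.items()}
    ready = deque(task for task, n in remaining.items() if n == 0)
    completed, became_ready = [], {}
    while ready:
        task = ready.popleft()
        completed.append(task)          # "run" the block here
        for s in succs[task]:
            remaining[s] -= 1
            if remaining[s] == 0:
                became_ready[s] = len(completed)
                ready.append(s)

    print(became_ready[(1, 0)])   # 2: ready after just two step-0 blocks, no barrier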
Abstract:
Some relevant components of selection program theory and implementation are reviewed, including pedigree recording, genetic evaluation, balancing genetic gains against genetic diversity, and tactical integration of key issues. Lessons learned are briefly described, illustrating how existing methods and tools can be useful when launching a program in a novel species, while highlighting the importance of proper understanding and custom application according to the biology and environments of that species.
Abstract:
An early establishment of selective breeding programs for Atlantic salmon has been crucial for the success of developing efficient and sustainable salmon farming in Norway. A national selective breeding program was initiated by AKVAFORSK at the beginning of the 1970s, by collecting fertilized eggs from more than 40 Norwegian river populations. Several private selective breeding programs were also initiated in the 1970s and 1980s. While these private programs were initiated using individual selection (i.e. mass selection) to genetically improve growth, the national program was designed to gradually include all economically important traits in the breeding objective (i.e. growth, age at sexual maturation, disease resistance and quality traits) using a combined family and within-family selection strategy. Regardless of which selection strategy and program design is used, it is important to secure and maintain broad genetic variation in the breeding populations to maximize selection response. It has been documented that genetically improved salmon from the national selective breeding program grow twice as fast as wild Atlantic salmon and require 25 per cent less feed, while salmon from the private breeding programs show intermediate growth performance. As a result of efficient dissemination of genetically improved Atlantic salmon, the Norwegian salmon farming industry has reduced its feed costs by more than US$ 230 million per year. The national selective breeding program for Atlantic salmon was commercialized into a breeding company (AquaGen) in 1992. Five years later, several private companies and the AKVAFORSK Genetics Center (AFGC) established a second breeding company (SalmoBreed) using breeding candidates from one of the private breeding programs. These two breeding companies have similar products, but different strategies for how to organize the breeding program and disseminate the genetically improved seed to the Norwegian salmon industry. Greater competition has increased the need to document the genetic gain obtained from the different programs and to market the economic benefits of farming the genetically improved breeds. Both breeding companies have organized their dissemination to capture a sufficient share of the economic benefits in order to sustain and improve their breeding programs.
Abstract:
Fifteen cooperative fish rearing and planting programs for salmon and steelhead were active from July 1, 1995 through June 30, 1996. Across all programs, 134,213 steelhead trout (Oncorhynchus mykiss), 7,742,577 chinook salmon (O. tshawytscha), and 25,075 coho salmon (O. kisutch) were planted. (PDF contains 26 pages.)