1000 results for parallel benchmarks


Relevance:

100.00%

Publisher:

Abstract:

Computers of a non-dedicated cluster are often idle (users attend meetings or take lunch or coffee breaks) or lightly loaded (users carry out simple computations). These underutilized computers can be employed to execute parallel applications not only during weekends and at night but also during office hours. They must therefore be shared by parallel and sequential applications, which could improve overall execution performance. However, there is a lack of experimental studies showing the behavior and performance of parallel and sequential applications executing concurrently on clusters. Here we present the results of an experimental study into load-balancing-based scheduling of a mixture of parallel and sequential applications on a non-dedicated cluster.

Relevance:

100.00%

Publisher:

Abstract:

Dedicated clusters are commonly used for high-performance parallel processing, while computers of a non-dedicated cluster are often idle or lightly loaded. These underutilised computers can be employed to execute parallel applications, and must therefore be shared by parallel and sequential applications, which could improve overall execution performance. There is a lack of experimental studies showing the behaviour and performance of parallel and sequential applications executing concurrently on a non-dedicated cluster. We present the results of an experimental study into load balancing of a mixture of parallel and sequential applications on a non-dedicated cluster.

Relevance:

70.00%

Publisher:

Abstract:

Computers of a non-dedicated cluster are often idle (users attend meetings or take lunch or coffee breaks) or lightly loaded (users carry out simple computations to support problem-solving activities). These underutilised computers can be employed to execute parallel applications, and can thus be shared by parallel and sequential applications, which could improve overall execution performance. However, there is a lack of experimental studies showing the applications' performance and the system utilisation when parallel and sequential applications execute concurrently, or when multiple parallel applications execute concurrently, on a non-dedicated cluster. Here we present the results of an experimental study into load-balancing-based scheduling of mixtures of NAS Parallel Benchmarks and BYTE sequential applications on a very low-cost non-dedicated cluster. The study shows that the proposed sharing provides a performance boost, compared to executing the parallel load in isolation on a reduced number of computers, as well as better cluster utilisation. The results were used not only to validate other researchers' simulation-based results but also to support our research mission of widening the use of non-dedicated clusters. These promising results could encourage further studies to convince universities, business and industry, which require large amounts of computing resources, to run parallel applications on the non-dedicated clusters they already own.
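
A minimal sketch of the kind of decision a load-balancing-based scheduler has to make: given the current load averages reported by the cluster nodes, pick the least-loaded nodes to host the processes of an incoming parallel job. The node count, load figures and job size below are hypothetical; this is only an illustration, not the scheduler used in the study.

/* Illustrative only: choose the least-loaded nodes of a non-dedicated
 * cluster to host an incoming parallel job (hypothetical data). */
#include <stdio.h>
#include <stdlib.h>

struct node { int id; double load; };   /* 1-minute load average */

static int by_load(const void *a, const void *b)
{
    double d = ((const struct node *)a)->load - ((const struct node *)b)->load;
    return (d > 0) - (d < 0);
}

int main(void)
{
    struct node nodes[] = { {0, 0.05}, {1, 1.90}, {2, 0.30}, {3, 0.10} };
    int n = sizeof nodes / sizeof nodes[0];
    int wanted = 2;                      /* processes of the parallel job */

    qsort(nodes, n, sizeof nodes[0], by_load);
    printf("placing %d parallel processes on nodes:", wanted);
    for (int i = 0; i < wanted && i < n; i++)
        printf(" %d (load %.2f)", nodes[i].id, nodes[i].load);
    printf("\n");
    return 0;
}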

Relevance:

70.00%

Publisher:

Abstract:

We assert that companies can increase their profits and research institutions can improve their performance if inexpensive clusters and enterprise grids are exploited. In this paper, we support this claim by studying how programming environments, tools and middleware can be used to execute parallel and sequential applications, multiple parallel applications running simultaneously on a non-dedicated cluster, and parallel applications on an enterprise grid, and by showing that execution performance is improved. For this purpose, we characterise the execution environment and the parallel and sequential benchmark applications selected for, and used in, the experiments.

Relevance:

60.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, that is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism, but they also have control-flow-dominated scalar regions that affect the program's execution time. Today's computer systems offer tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input to identify data-parallel regions. We propose a set of compiler optimizations tailored to MATLAB. Our compiler identifies the data-parallel regions of the program and composes them into kernels; the problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map the identified kernels to either the CPU or the GPU so that kernel execution on the two devices proceeds synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Our compiler thus automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The compiler was implemented, and experimental evaluation on a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native MATLAB execution.
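
A toy illustration of the mapping decision described above (not the MEGHA heuristic itself): for each identified kernel, compare an estimated GPU execution time plus the data-transfer cost it would incur against the estimated CPU time, and place the kernel on the cheaper device. All kernel names and cost figures here are invented for the sketch.

/* Hypothetical greedy device-mapping sketch: place each kernel on the
 * CPU or the GPU depending on estimated compute + transfer cost. */
#include <stdio.h>

struct kernel {
    const char *name;
    double cpu_ms;       /* estimated CPU execution time    */
    double gpu_ms;       /* estimated GPU execution time    */
    double transfer_ms;  /* data movement if run on the GPU */
};

int main(void)
{
    struct kernel ks[] = {
        { "elementwise_add", 12.0, 0.8, 2.5 },
        { "scalar_loop",      1.5, 4.0, 3.0 },
        { "matmul_block",    40.0, 3.2, 5.0 },
    };
    for (int i = 0; i < 3; i++) {
        double gpu_total = ks[i].gpu_ms + ks[i].transfer_ms;
        const char *dev = gpu_total < ks[i].cpu_ms ? "GPU" : "CPU";
        printf("%-16s -> %s (cpu %.1f ms, gpu+xfer %.1f ms)\n",
               ks[i].name, dev, ks[i].cpu_ms, gpu_total);
    }
    return 0;
}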

Relevance:

60.00%

Publisher:

Abstract:

Traditional evaluation models for cluster area networks (cLAN) mainly consider factors such as latency, bandwidth, routing, congestion, and network topology. But are these factors sufficient to describe the communication behaviour of real applications on a cluster, or to give a good prediction of their performance on a cluster system? Extensive tests of the NAS Parallel Benchmarks (version 2.4) on the DeepComp 1800 cluster system showed that the communication performance of the cluster network can be severely affected by a particular communication pattern (the LU pattern). Further investigation showed that the factor affecting the LU pattern is independent of the factors listed above, such as latency, bandwidth, routing, congestion, and network topology. It is therefore necessary to re-examine the evaluation model for cluster networks and to add a new performance evaluation factor that reflects this newly observed phenomenon. The results of this re-examination also offer new ideas for the design of parallel algorithms on cluster systems and for optimizing the performance of real large-scale scientific computing applications.
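
As a rough illustration of the kind of communication the LU pattern involves, the sketch below times a pipelined chain of many small point-to-point MPI messages between neighbouring ranks, which is qualitatively how the NPB LU kernel communicates (frequent fine-grained nearest-neighbour sends). The message size and iteration count are arbitrary; this is not the benchmark itself.

/* Rough illustration of an LU-like pattern: many small pipelined
 * point-to-point messages between neighbouring ranks. Not NPB LU. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int iters = 1000;
    double buf[5] = {0};                 /* small, fine-grained message */
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank > 0)                    /* wait for upstream neighbour */
            MPI_Recv(buf, 5, MPI_DOUBLE, rank - 1, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if (rank < size - 1)             /* pass the wavefront on */
            MPI_Send(buf, 5, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("%d pipelined sweeps took %.3f s\n", iters, t1 - t0);
    MPI_Finalize();
    return 0;
}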

Relevance:

60.00%

Publisher:

Abstract:

对3个国产万亿次机群系统进行了NPB性能测试分析,重点研究大规模并行处理时(处理器数目达到上千个)的性能特点和趋势.分析了不同的处理器、互连网络等系统配置对NPB性能的影响,发现NPB的8个程序在3个万亿次机器上的性能特点和表现并不一致,表明国产高性能机群在设计上正在逐渐走出同质化的趋势,向多样化发展.进一步分析表明,目前NPB程序的可扩展性可以达到几百个处理器,但尚不能达到上千个处理器,NPB程序能发挥出的系统峰值的百分比仍然徘徊在10%左右,机群系统的并行可扩展性和应用程序对机器运算潜能的利用还需要进一步提高.对于处理器数目达到上千个的万亿次机群系统来说,对集合通信和细粒度通信能力的支持亟需提高.
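
The roughly 10% figure quoted above is simply the ratio of sustained to theoretical peak performance. A minimal worked example with made-up numbers:

/* Fraction of peak = sustained Gflop/s / theoretical peak Gflop/s.
 * The numbers below are invented for illustration only. */
#include <stdio.h>

int main(void)
{
    double peak_gflops      = 1000.0;   /* e.g. a 1-Tflop/s cluster      */
    double sustained_gflops =  105.0;   /* NPB-reported rate (made up)   */
    printf("fraction of peak: %.1f%%\n",
           100.0 * sustained_gflops / peak_gflops);
    return 0;
}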

Relevance:

60.00%

Publisher:

Abstract:

The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a directive-based programming standard and limited scalability have hampered its take-up. Recent advances in hardware and software technologies have improved both the performance of directive-based parallel programs and, with the introduction of OpenMP, their portability. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorise the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS Parallel Benchmarks and a number of real-world application codes. This demonstrates the toolkit's great potential for quickly parallelising serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.
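
To make the kind of output concrete, the fragment below shows a simple loop of the sort such a tool would annotate with an OpenMP work-sharing directive once its dependence analysis proves the iterations independent; the loop is a generic example, not taken from the NAS benchmarks or produced by the toolkit.

/* Generic example of a loop that interprocedural analysis can prove
 * independent and annotate with an OpenMP work-sharing directive. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }

    /* the kind of directive a parallelisation tool could insert */
    #pragma omp parallel for default(none) shared(a, b, c)
    for (int i = 0; i < N; i++)
        a[i] = b[i] + c[i];

    printf("a[N-1] = %f (threads available: %d)\n",
           a[N - 1], omp_get_max_threads());
    return 0;
}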

Relevance:

60.00%

Publisher:

Abstract:

Power capping is a fundamental method for reducing the energy consumption of a wide range of modern computing environments, from mobile embedded systems to datacentres. Unfortunately, maximising performance and system efficiency under static power caps remains challenging, while maximising performance under dynamic power caps has been largely unexplored. We present an adaptive power-capping method that reduces the power consumption and maximises the performance of heterogeneous SoCs for mobile and server platforms. Our technique combines power capping with coordinated DVFS, data partitioning and core allocation on a heterogeneous SoC with ARM processors and FPGA resources. We design our framework as a run-time system based on OpenMP and OpenCL to utilise the heterogeneous resources. We evaluate it with five data-parallel benchmarks on a Xilinx SoC that allows full voltage and frequency control. Our experiments show a significant performance boost of 30% under dynamic power caps with concurrent execution on the ARM cores and the FPGA, compared to a naive separate approach.
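
The framework itself is not reproduced here, but as a rough illustration of one of its ingredients, the sketch below clamps a core's maximum frequency through the Linux cpufreq sysfs interface when a (hypothetical) measured power value exceeds the cap. The power-reading function is a placeholder, the sysfs path may differ per platform, and the coordinated DVFS, data partitioning and core allocation described in the abstract involve far more than this.

/* Rough sketch of one ingredient of power capping: clamp a core's
 * cpufreq scaling_max_freq when measured power exceeds the cap.
 * The power reading is faked; on real hardware it would come from a
 * board sensor. Requires root; the sysfs path may vary by platform. */
#include <stdio.h>

static double read_power_watts(void)
{
    return 6.5;                           /* placeholder measurement */
}

static int set_max_freq_khz(int cpu, long khz)
{
    char path[128];
    snprintf(path, sizeof path,
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fprintf(f, "%ld\n", khz);
    return fclose(f);
}

int main(void)
{
    const double power_cap_watts = 5.0;   /* hypothetical cap */
    if (read_power_watts() > power_cap_watts)
        set_max_freq_khz(0, 800000);      /* throttle CPU0 to 800 MHz */
    else
        set_max_freq_khz(0, 1200000);     /* restore a higher limit   */
    return 0;
}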

Relevance:

60.00%

Publisher:

Abstract:

Multicore processors are widely used in today's computer systems, and multicore virtualization technology provides an elastic way to utilise them more efficiently. However, the Lock Holder Preemption (LHP) problem in virtualized multicore systems wastes significant CPU cycles, which hurts virtual machine (VM) performance and increases response latency; the more VMs the system consolidates, the worse the LHP problem becomes. In this paper, we propose an efficient consolidation-aware vCPU (CVS) scheduling scheme for multicore virtualization platforms. Based on the vCPU over-commitment rate, the CVS scheme adaptively selects one of three vCPU scheduling algorithms: co-scheduling, yield-to-head, or yield-to-tail. This is possible because vCPU scheduling decomposes into simple steps such as scheduling vCPUs simultaneously or inserting a vCPU into the run queue at the head or at the tail. The CVS scheme can effectively improve VM performance in low, middle, and high VM consolidation scenarios. Using real-life parallel benchmarks, our experimental results show that the proposed CVS scheme improves overall system performance while the optimization overhead remains low.
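
A schematic of the selection logic described above: depending on the vCPU over-commitment rate (runnable vCPUs per physical core), pick co-scheduling, yield-to-head or yield-to-tail. The thresholds and the policy-to-regime assignment below are invented placeholders, not the values used in the paper.

/* Schematic of choosing a vCPU scheduling algorithm from the
 * over-commitment rate; thresholds and mapping are placeholders. */
#include <stdio.h>

enum vcpu_policy { CO_SCHEDULING, YIELD_TO_HEAD, YIELD_TO_TAIL };

static enum vcpu_policy pick_policy(double overcommit_rate)
{
    if (overcommit_rate <= 1.0)      /* low consolidation    */
        return CO_SCHEDULING;        /* run sibling vCPUs together        */
    else if (overcommit_rate <= 2.0) /* medium consolidation */
        return YIELD_TO_HEAD;        /* reinsert a vCPU at the queue head */
    else                             /* high consolidation   */
        return YIELD_TO_TAIL;        /* reinsert a vCPU at the queue tail */
}

int main(void)
{
    double rates[] = { 0.8, 1.5, 3.0 };
    const char *names[] = { "co-scheduling", "yield-to-head", "yield-to-tail" };
    for (int i = 0; i < 3; i++)
        printf("over-commitment %.1f -> %s\n",
               rates[i], names[pick_policy(rates[i])]);
    return 0;
}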

Relevance:

40.00%

Publisher:

Abstract:

Although the individual PCs of a cluster are used by their owners to run sequential applications (local jobs), the cluster as a whole, or a subset of it, can also be employed to run parallel applications (cluster jobs), even during working hours. This implies that these computers have to be shared by parallel and sequential applications, which could improve execution performance and resource utilization. However, there is a lack of experimental studies showing the behavior and performance of parallel and sequential applications executing concurrently on a non-dedicated cluster; the results of such research would benefit the development of new global scheduling algorithms. We present the results of an experimental study into the scheduling of a mixture of parallel and sequential applications on a non-dedicated cluster. The aim of this study is to learn how the concurrent execution of a communication-intensive parallel application and sequential applications influences their execution performance and the utilization of the cluster.

Relevance:

30.00%

Publisher:

Abstract:

Precise pointer analysis is a problem of interest to both the compiler and the program-verification communities. Flow-sensitivity is an important dimension of pointer analysis that affects the precision of the final result, and scaling flow-sensitive pointer analysis to millions of lines of code is a major challenge. Recently, staged flow-sensitive pointer analysis has been proposed, which exploits a sparse representation of the program created by a staged analysis. In this paper we formulate staged flow-sensitive pointer analysis as a graph-rewriting problem. Graph rewriting has already been used for flow-insensitive analysis; however, formulating flow-sensitive pointer analysis as graph rewriting adds challenges due to the nature of flow-sensitivity. We implement our parallel algorithm using Intel Threading Building Blocks and demonstrate considerable scaling (up to 2.6x) with 8 threads on a set of 10 benchmarks. Compared to the sequential implementation of staged flow-sensitive analysis, a single-threaded execution of our implementation performs better on 8 of the benchmarks.
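
As background for readers unfamiliar with graph-based pointer analysis, the sketch below runs a plain flow-insensitive worklist propagation of points-to sets over the copy edges of a tiny constraint graph. It only shows the propagate-until-fixpoint structure that the graph-rewriting formulation parallelises; it is not the staged flow-sensitive algorithm or the TBB implementation from the paper, and the graph and sets are invented.

/* Simplified flow-insensitive worklist propagation over copy edges:
 * points-to sets are bit masks, an edge i->j means pts(j) includes pts(i).
 * Illustrates the fixpoint structure only; not the paper's algorithm. */
#include <stdio.h>

#define NODES 4

int main(void)
{
    int edge[NODES][NODES] = {0};                 /* copy constraints   */
    unsigned pts[NODES] = { 0x1, 0x2, 0x0, 0x0 }; /* initial points-to  */
    edge[0][2] = 1;     /* p = a */
    edge[1][2] = 1;     /* p = b */
    edge[2][3] = 1;     /* q = p */

    int work[64], head = 0, tail = 0;
    for (int i = 0; i < NODES; i++) work[tail++] = i;

    while (head < tail) {                 /* propagate to a fixpoint */
        int n = work[head++];
        for (int m = 0; m < NODES; m++) {
            if (!edge[n][m]) continue;
            unsigned before = pts[m];
            pts[m] |= pts[n];
            if (pts[m] != before && tail < 64)
                work[tail++] = m;         /* changed: reprocess m */
        }
    }

    for (int i = 0; i < NODES; i++)
        printf("pts(%d) = 0x%x\n", i, pts[i]);
    return 0;
}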

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the design, application, and evaluation of a user-friendly, flexible, scalable and inexpensive Advanced Educational Parallel (AdEPar) digital signal processing (DSP) system, based on TMS320C25 digital signal processors, for implementing DSP algorithms. The system will be used in the DSP laboratory by graduate students working on advanced topics such as developing parallel DSP algorithms; graduating senior students who have gained some experience in DSP can also use it. The DSP laboratory has proved to be a useful tool in the hands of the instructor for teaching the mathematically oriented topics of DSP that students often find difficult to grasp. The laboratory, with its assigned projects, has greatly improved the students' ability to understand such complex topics as the fast Fourier transform algorithm, linear and circular convolution, and the theory and design of infinite impulse response (IIR) and finite impulse response (FIR) filters. The user-friendly PC software support of the AdEPar system makes it easy for students to develop DSP programs. This paper presents the architecture of the AdEPar DSP system, explains interprocessor communication and PC-to-DSP communication, describes the parallel debugger kernels and the restrictions of the system, explains programming in AdEPar, and presents two benchmarks (a parallel FFT and DES) to show the system's performance.
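
One of the lab topics mentioned above, circular convolution, fits in a few lines; the sketch below computes y[n] = sum over m of x[m] * h[(n - m) mod N] for a small made-up pair of sequences. It is a host-side C illustration, not AdEPar or TMS320C25 code.

/* N-point circular convolution y[n] = sum_m x[m] * h[(n - m) mod N].
 * Host-side illustration with made-up data, not TMS320C25 code. */
#include <stdio.h>

#define N 4

int main(void)
{
    double x[N] = { 1, 2, 3, 4 };
    double h[N] = { 1, 0, 0, 1 };
    double y[N];

    for (int n = 0; n < N; n++) {
        y[n] = 0.0;
        for (int m = 0; m < N; m++)
            y[n] += x[m] * h[((n - m) % N + N) % N];  /* wrap the index */
    }

    for (int n = 0; n < N; n++)
        printf("y[%d] = %g\n", n, y[n]);
    return 0;
}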