97 results for Processors


Relevance:

10.00%

Publisher:

Abstract:

In the reinsurance market, the risks natural catastrophes pose to portfolios of properties must be quantified so that they can be priced and insurance offered. The analysis of such risks at a portfolio level requires a simulation of up to 800,000 trials with an average of 1000 catastrophic events per trial. This is sufficient to capture risk for a global multi-peril reinsurance portfolio covering perils including earthquake, hurricane, tornado, hail, severe thunderstorm, wind storm, storm surge and riverine flooding, and wildfire. Such simulations are both computation- and data-intensive, making the application of high-performance computing techniques desirable.

In this paper, we explore the design and implementation of portfolio risk analysis on both multi-core and many-core computing platforms. Given a portfolio of property catastrophe insurance treaties, key risk measures, such as probable maximum loss, are computed by taking both primary and secondary uncertainties into account. Primary uncertainty is associated with whether or not an event occurs in a simulated year, while secondary uncertainty captures the uncertainty in the level of loss due to the use of simplified physical models and limitations in the available data. A combination of fast lookup structures, multi-threading and careful hand-tuning of numerical operations is required to achieve good performance. Experimental results are reported for multi-core processors and for systems using NVIDIA graphics processing units (GPUs) and Intel Xeon Phi many-core accelerators.
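
As a rough illustration of the simulation structure described above (not the authors' engine), the C++ sketch below samples event occurrence and loss level in each simulated year and estimates a PML as a quantile of the annual loss distribution. The occurrence and severity distributions are invented placeholders, and an OpenMP pragma stands in for the paper's multi-threading.

```cpp
// Sketch of trial-level loss aggregation for probable maximum loss (PML).
// The distributions below are placeholders for a real event catalogue and
// secondary-uncertainty model.
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

// One simulated year: primary uncertainty decides which events occur,
// secondary uncertainty decides the loss level of each occurring event.
double simulate_trial(std::mt19937_64 &rng, std::size_t events_per_trial) {
    std::bernoulli_distribution occurs(0.5);                  // placeholder occurrence rate
    std::lognormal_distribution<double> severity(12.0, 1.5);  // placeholder loss level
    double year_loss = 0.0;
    for (std::size_t e = 0; e < events_per_trial; ++e)
        if (occurs(rng))
            year_loss += severity(rng);
    return year_loss;
}

// PML at a given quantile (e.g. 0.99 ~ 100-year return period) of annual losses.
double probable_maximum_loss(std::size_t trials, std::size_t events_per_trial,
                             double quantile) {
    std::vector<double> annual(trials);
#pragma omp parallel for // trials are independent, so the loop parallelises cleanly
    for (std::int64_t t = 0; t < static_cast<std::int64_t>(trials); ++t) {
        std::mt19937_64 rng(0xC0FFEE + static_cast<std::uint64_t>(t)); // per-trial stream
        annual[static_cast<std::size_t>(t)] = simulate_trial(rng, events_per_trial);
    }
    std::size_t k = static_cast<std::size_t>(quantile * (trials - 1));
    std::nth_element(annual.begin(), annual.begin() + static_cast<std::ptrdiff_t>(k),
                     annual.end());
    return annual[k]; // k-th order statistic approximates the quantile
}
```

A production engine would replace the placeholder distributions with per-event loss tables and the fast lookup structures the abstract mentions.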

Relevance:

10.00%

Publisher:

Abstract:

Field programmable gate array (FPGA) technology is a powerful platform for implementing computationally complex digital signal processing (DSP) systems. Applications that are multi-modal, however, are conventionally designed for worst-case conditions. In this paper, genetic sequencing techniques are applied to give a more sophisticated decomposition of the algorithmic variations, allowing a unified hardware architecture that gives a 10-25% area saving and a 15% power saving for a digital radar receiver.
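
The abstract does not give the algorithm, but the sequencing idea can be illustrated with a classic sequence-alignment primitive: treating each operating mode's datapath as a sequence of operators, a longest common subsequence (LCS) identifies operators the modes could share in a unified architecture. The operator names below are invented for illustration.

```cpp
// Toy illustration: LCS over two modes' operator sequences; the common
// subsequence marks operators a unified datapath could share.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> shared_ops(const std::vector<std::string> &a,
                                    const std::vector<std::string> &b) {
    // Standard dynamic-programming LCS table.
    std::vector<std::vector<int>> L(a.size() + 1, std::vector<int>(b.size() + 1, 0));
    for (std::size_t i = 1; i <= a.size(); ++i)
        for (std::size_t j = 1; j <= b.size(); ++j)
            L[i][j] = (a[i - 1] == b[j - 1]) ? L[i - 1][j - 1] + 1
                                             : std::max(L[i - 1][j], L[i][j - 1]);
    std::vector<std::string> lcs; // backtrack to recover the shared operators
    for (std::size_t i = a.size(), j = b.size(); i > 0 && j > 0;) {
        if (a[i - 1] == b[j - 1]) { lcs.push_back(a[i - 1]); --i; --j; }
        else if (L[i - 1][j] >= L[i][j - 1]) --i;
        else --j;
    }
    return {lcs.rbegin(), lcs.rend()};
}

int main() {
    // Two hypothetical receiver modes sharing an FFT stage and a complex multiply.
    auto common = shared_ops({"FFT", "CMUL", "IFFT", "MAG"},
                             {"FFT", "CMUL", "SCALE", "IFFT"});
    for (const auto &op : common) std::cout << op << ' '; // prints: FFT CMUL IFFT
}
```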

Relevance:

10.00%

Publisher:

Abstract:

Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, data centres are based on general-purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel coarse-grain and fine-grain programmable hardware accelerators. It will also build a high-level programming framework that allows end-users to seamlessly utilize these accelerators in heterogeneous computing systems through typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark). This programming framework will further allow hardware accelerators to be swapped in and out of the heterogeneous infrastructure, offering high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.
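
As a hedged sketch of what swapping accelerators in and out behind a high-level framework might look like (the interfaces below are invented, not VINEYARD's API), a kernel registry can dispatch to an accelerator implementation when one is loaded and fall back to software otherwise:

```cpp
// Hypothetical kernel registry: accelerators are "swapped in" under a name;
// callers always go through run() and never need to know which backend fired.
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Kernel = std::function<void(const std::vector<float>&, std::vector<float>&)>;

class AcceleratorRegistry {
    std::map<std::string, Kernel> impls_;
public:
    void swap_in(const std::string &name, Kernel k) { impls_[name] = std::move(k); }
    void swap_out(const std::string &name) { impls_.erase(name); }

    // Dispatch: use the accelerator if present, otherwise the CPU fallback.
    void run(const std::string &name, Kernel cpu_fallback,
             const std::vector<float> &in, std::vector<float> &out) {
        auto it = impls_.find(name);
        (it != impls_.end() ? it->second : cpu_fallback)(in, out);
    }
};

int main() {
    AcceleratorRegistry reg;
    std::vector<float> in{1, 2, 3}, out(3);
    auto scale_cpu = [](const std::vector<float> &x, std::vector<float> &y) {
        for (std::size_t i = 0; i < x.size(); ++i) y[i] = 2.0f * x[i];
    };
    reg.run("scale", scale_cpu, in, out); // no accelerator loaded: CPU path
    reg.swap_in("scale", scale_cpu);      // stand-in for loading an FPGA bitstream
    reg.run("scale", scale_cpu, in, out); // now dispatches to the registered kernel
}
```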

Relevance:

10.00%

Publisher:

Abstract:

Energy consumption is an important concern in modern multicore processors. The energy consumed by a multicore processor during the execution of an application can be minimized by tuning the hardware state through knobs such as frequency and voltage. The existing theoretical work on energy minimization using global DVFS (Dynamic Voltage and Frequency Scaling), despite being thorough, ignores the time and energy the CPU spends on memory accesses as well as the dynamic energy consumed by idle cores. This article presents an analytical energy-performance model for parallel workloads that accounts for the time and energy the CPU chip spends on memory accesses in addition to the time and energy spent on CPU instructions, and that also accounts for the dynamic energy consumed by idle cores. The existing work on global DVFS for parallel workloads shows that using a single frequency for the entire duration of a parallel application is not energy optimal, and that varying the frequency with the parallelism of the workload can save energy. We present an analytical framework around our energy-performance model to predict the global-DVFS operating frequencies (which depend on the amount of parallelism) that minimize overall CPU energy consumption. We show how the optimal frequencies in our model differ from those in a model that ignores memory accesses, and how the memory intensity of an application affects the optimal frequencies.
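
The abstract does not reproduce the model, but a minimal sketch of this kind of energy-performance model (with invented constants, not the article's formulation) makes the trade-off concrete: compute time scales as 1/f, memory-stall time does not, dynamic power grows roughly as f^3 when voltage tracks frequency, and idle cores still draw some dynamic power.

```cpp
// Minimal-sketch energy model: scan frequencies and pick the one that
// minimises total CPU energy. All constants are invented for illustration.
#include <cstdio>

struct Model {
    double W;         // CPU cycles of the parallel phase (per active core)
    double M;         // memory-stall time in seconds (does not scale with f)
    double k_dyn;     // dynamic-power coefficient: P_dyn = k_dyn * f^3
    double p_stat;    // static power per core, watts
    double idle_frac; // fraction of dynamic power an idle core still draws
    int active, idle; // core counts
};

double exec_time(const Model &m, double f) { return m.W / f + m.M; }

double energy(const Model &m, double f) {
    double p_dyn = m.k_dyn * f * f * f;
    double p_active = m.active * (p_dyn + m.p_stat);
    double p_idle = m.idle * (m.idle_frac * p_dyn + m.p_stat); // idle cores are not free
    return (p_active + p_idle) * exec_time(m, f);
}

int main() {
    Model m{2e9, 0.5, 1e-27, 0.5, 0.3, 8, 8}; // invented numbers
    double best_f = 0.0, best_e = 1e300;
    for (double f = 0.4e9; f <= 3.0e9; f += 0.05e9) // scan the allowed range
        if (double e = energy(m, f); e < best_e) { best_e = e; best_f = f; }
    std::printf("optimal f = %.2f GHz, energy = %.1f J\n", best_f / 1e9, best_e);
    // With M = 0 the model becomes memory-oblivious; because memory time does
    // not shrink with f while power grows ~f^3, higher memory intensity
    // pushes the optimal frequency lower.
}
```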

Relevance:

10.00%

Publisher:

Abstract:

Power capping is a fundamental method for reducing the energy consumption of a wide range of modern computing environments, from mobile embedded systems to datacentres. Unfortunately, maximising performance and system efficiency under static power caps remains challenging, while maximising performance under dynamic power caps has been largely unexplored. We present an adaptive power capping method that reduces the power consumption and maximises the performance of heterogeneous SoCs for mobile and server platforms. Our technique combines power capping with coordinated DVFS, data partitioning and core allocation on a heterogeneous SoC with ARM processors and FPGA resources. We design our framework as a run-time system based on OpenMP and OpenCL to utilise the heterogeneous resources, and evaluate it with five data-parallel benchmarks on a Xilinx SoC that allows full voltage and frequency control. Our experiments show a significant performance boost of 30% under dynamic power caps with concurrent execution on ARM and FPGA, compared to a naive approach that uses the two separately.
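
A minimal sketch of such a cap-aware control loop is shown below. The sensor and DVFS calls are hypothetical stubs, not the paper's runtime; the policy of lowering frequency and shifting the data partition toward the FPGA when the cap is exceeded is one plausible instance of the coordination the abstract describes.

```cpp
// Hedged sketch of a power-cap control loop: read power, then steer DVFS and
// the CPU/FPGA work split. The two functions below stand in for
// platform-specific mechanisms (e.g. hwmon sensors and cpufreq on Linux).
#include <algorithm>

double read_power_watts() { return 4.2; } // stub: replace with a real sensor read
void set_cpu_freq(double /*ghz*/) {}      // stub: replace with a real DVFS write

struct CapRuntime {
    double cap_w;            // power cap, which may change at run time
    double freq_ghz = 1.2;   // current CPU (ARM) frequency
    double fpga_share = 0.5; // fraction of each parallel loop offloaded to FPGA

    // Called periodically between data-parallel kernel launches.
    void control_step() {
        double p = read_power_watts();
        if (p > cap_w) {
            // Over the cap: drop CPU frequency and shift work toward the
            // more energy-efficient FPGA partition.
            freq_ghz = std::max(0.4, freq_ghz - 0.1);
            fpga_share = std::min(1.0, fpga_share + 0.05);
        } else if (p < 0.9 * cap_w) {
            // Headroom under the cap: convert spare power into speed.
            freq_ghz = std::min(1.5, freq_ghz + 0.1);
        }
        set_cpu_freq(freq_ghz);
        // The next kernel splits its iterations by fpga_share between the
        // OpenCL (FPGA) path and the OpenMP (ARM) path.
    }
};

int main() {
    CapRuntime rt{3.5}; // 3.5 W cap; frequency and partition use defaults
    for (int i = 0; i < 10; ++i) rt.control_step();
}
```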

Relevance:

10.00%

Publisher:

Abstract:

This talk explores how the runtime system and operating system can leverage metrics that express the significance and resilience of application components in order to reduce the energy footprint of parallel applications. In particular, we will explore how software can tolerate, and indeed exploit, higher error rates in future processor and memory technologies that may operate outside their safe margins.
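
As an invented illustration (not the speaker's system) of how a runtime might act on such significance metrics, the sketch below runs tasks in decreasing order of significance and sheds the least significant ones when the energy budget shrinks, so output quality degrades gracefully rather than failing:

```cpp
// Invented sketch of significance-driven execution: under a tight energy
// budget, keep the tasks that matter most to output quality and drop the
// rest (on real hardware, low-significance tasks could instead run with
// relaxed voltage or refresh margins and tolerate occasional errors).
#include <algorithm>
#include <functional>
#include <vector>

struct Task {
    std::function<double()> run; // computes this task's contribution
    double significance;         // how strongly the output depends on it
};

// budget_ratio in [0,1]: fraction of tasks the energy budget allows.
double execute(std::vector<Task> &tasks, double budget_ratio) {
    std::sort(tasks.begin(), tasks.end(), [](const Task &a, const Task &b) {
        return a.significance > b.significance; // most significant first
    });
    auto allowed = static_cast<std::size_t>(budget_ratio * tasks.size());
    double result = 0.0;
    for (std::size_t i = 0; i < tasks.size() && i < allowed; ++i)
        result += tasks[i].run();
    return result; // dropped tasks trade accuracy for energy
}
```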

Relevance:

10.00%

Publisher:

Abstract:

A growing body of research has highlighted the effects of changing climates on the occurrence and prevalence of toxigenic Aspergillus species producing aflatoxins. There is concern about the toxicological effects on human health and animal productivity of acute and chronic exposure, which may affect the future ability to provide safe and sufficient food globally. Considerable research has focused on detecting these toxins in agricultural products for human and animal consumption, based on the physicochemical and biochemical properties of the aflatoxin compounds. As improvements in food security continue, more regulations for acceptable levels of aflatoxins have arisen globally, the most stringent in Europe. These regulations are important for developing countries, where aflatoxin occurrence is high and significantly affects international trade and the economy. In developed countries, analytical approaches have become highly sophisticated, capable of attaining results with the high precision and accuracy required by regulatory laboratories. Regrettably, many countries affected by aflatoxin contamination do not have the resources for high-tech HPLC and MS instrumentation and require more affordable yet robust and equally accurate alternatives that can be used by producers, processors and traders in emerging economies. It is especially important that companies wishing to exploit the opportunities offered by lucrative but highly regulated markets in the developed world have access to analytical methods that ensure their exports meet their customers' quality and safety requirements.

This work evaluates the ToxiMet system as an alternative approach to UPLC–MS/MS for the detection and determination of aflatoxins relative to current European regulatory standards. Four commodities: rice grain, maize cracked and flour, peanut paste and dried distillers grains were analysed for natural aflatoxin contamination. For B1 and total aflatoxins determination the qualitative correlation, above or below the regulatory limit, was good for all commodities with the exception of the dried distillers grain samples for B1 for which no calibration existed. For B1 the quantitative R2 correlations were 0.92, 0.92, 0.88 (<250 μg/kg) and 0.7 for rice, maize, peanuts and dried distillers grain samples respectively whereas for total aflatoxins the quantitative correlation was 0.92, 0.94, 0.88 and 0.91. The ToxiMet system could be used as an alternative for aflatoxin analysis for current legislation but some consideration should be given to aflatoxin M1 regulatory levels for these commodities considering the high levels detected in this study especially for maize and peanuts