988 results for Multi-core
Abstract:
Running hydrodynamic models interactively allows both visual exploration and changes to the model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling also worked for the models typically used in consultancy projects involving large-scale simulations. This raises a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming and virtual globes (Autodesk 3ds Max, Second Life, Google Earth), which also focus on efficient interaction with 3D environments? In these domains high efficiency is usually achieved by the use of computer graphics algorithms such as surface simplification depending on the current view and distance to objects, and efficient caching of the aggregated representation of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modeling without significant changes to the model code, allowing model operation on both multi-core CPU personal computers and high-performance computer clusters.
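A minimal sketch, in C, of the distance-based surface simplification idea mentioned above: the level of detail at which a portion of the model's surface mesh is rendered (and cached) is chosen from its distance to the camera. The function and the doubling heuristic are illustrative assumptions, not taken from the visualisation tools discussed here.

    #include <math.h>
    #include <stdio.h>

    /* Pick a level of detail (0 = full resolution) from the distance between
     * the camera and an object. base_distance is the range within which the
     * full-resolution mesh is used; every doubling of the distance beyond it
     * drops one level -- a common heuristic in view-dependent rendering. */
    static int select_lod(const double cam[3], const double obj[3],
                          double base_distance, int max_level)
    {
        double dx = cam[0] - obj[0], dy = cam[1] - obj[1], dz = cam[2] - obj[2];
        double dist = sqrt(dx * dx + dy * dy + dz * dz);

        int level = 0;
        while (dist > base_distance && level < max_level) {
            dist *= 0.5;          /* one coarser level per doubling of distance */
            level++;
        }
        return level;
    }

    int main(void)
    {
        const double cam[3]       = { 0.0, 0.0, 0.0 };
        const double near_cell[3] = { 5.0, 0.0, 0.0 };
        const double far_cell[3]  = { 900.0, 0.0, 0.0 };
        printf("near cell -> LOD %d\n", select_lod(cam, near_cell, 10.0, 6));
        printf("far cell  -> LOD %d\n", select_lod(cam, far_cell, 10.0, 6));
        return 0;
    }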
Abstract:
Transactional memory (TM) is a new synchronization mechanism devised to simplify parallel programming, thereby helping programmers to unleash the power of current multicore processors. Although software implementations of TM (STM) have been extensively analyzed in terms of runtime performance, little attention has been paid to an equally important constraint faced by nearly all computer systems: energy consumption. In this work we conduct a comprehensive study of energy and runtime tradeoffs in software transactional memory systems. We characterize the behavior of three state-of-the-art lock-based STM algorithms, along with three different conflict resolution schemes. As a result of this characterization, we propose a DVFS-based technique that can be integrated into the resolution policies so as to improve the energy-delay product (EDP). Experimental results show that our DVFS-enhanced policies are indeed beneficial for applications with high contention levels. Improvements of up to 59% in EDP can be observed in this scenario, with an average EDP reduction of 16% across the STAMP workloads. © 2012 IEEE.
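The energy-delay product used as the figure of merit above is simply energy multiplied by execution time. Below is a hedged sketch, in C, of how a contention-resolution policy might hook a DVFS request into its abort path; the threshold and the frequency-scaling stub are illustrative assumptions, not the paper's actual interface.

    #include <stdio.h>

    /* Energy-delay product: the figure of merit the study optimizes. */
    static double edp(double energy_joules, double runtime_seconds)
    {
        return energy_joules * runtime_seconds;
    }

    /* Stand-in for a platform-specific DVFS request (e.g. via Linux cpufreq);
     * the real mechanism is platform dependent and not specified here. */
    static void request_lower_frequency(void)
    {
        printf("DVFS: requesting a lower core frequency\n");
    }

    /* Hypothetical hook called by a lock-based STM when a transaction aborts.
     * Under high contention, retrying at full speed mostly wastes energy, so
     * the policy slows the core down before the transaction is retried. */
    static void on_transaction_abort(double recent_abort_rate)
    {
        const double contention_threshold = 0.5;   /* assumed tuning knob */
        if (recent_abort_rate > contention_threshold)
            request_lower_frequency();
    }

    int main(void)
    {
        printf("EDP at 50 J and 2.0 s: %.1f J*s\n", edp(50.0, 2.0));
        on_transaction_abort(0.7);   /* 70% abort rate -> slow down */
        return 0;
    }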
Abstract:
Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processor technology, in order to deliver higher levels of processing power. The use of many-core technology has boosted the computing power provided by clusters of workstations or SMPs, providing large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. In order to guarantee the correct execution of a message-passing parallel application in a computing environment other than the one in which it was originally developed, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility of the proposed strategy and its effectiveness through the execution of benchmark parallel applications.
Abstract:
The accuracy and performance of current variational optical flow methods have increased considerably in recent years. The complexity of these techniques is high, and care has to be taken with the implementation. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation on the Middlebury benchmark database. We show that it is a competitive solution with respect to current methods in the literature. In order to increase the performance, we use a simple strategy to parallelize the execution on multi-core processors.
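For reference, an energy functional of the kind described here (brightness and gradient constancy data terms combined with a flow-driven smoothness term) is commonly written as below. This is a generic formulation under the usual robust penalizer Ψ; the weights and exact penalizer used in the paper may differ. Here w = (u, v) is the flow field, γ weights the gradient constancy term and α the smoothness term:

\[
E(u,v)=\int_{\Omega}\Psi\!\left(|I(\mathbf{x}+\mathbf{w})-I(\mathbf{x})|^{2}+\gamma\,|\nabla I(\mathbf{x}+\mathbf{w})-\nabla I(\mathbf{x})|^{2}\right)+\alpha\,\Psi\!\left(|\nabla u|^{2}+|\nabla v|^{2}\right)d\mathbf{x},\qquad \Psi(s^{2})=\sqrt{s^{2}+\varepsilon^{2}}.
\]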
Abstract:
The term "Brain Imaging" identi�es a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in others recent �fields, i.e. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, usage of classical solutions (e.g. f MRI, PET-CT) could be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons alternative low cost techniques are object of research, typically based on simple recording hardware and on intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where electric potential at the patient's scalp is recorded by high impedance electrodes. In EEG potentials are directly generated from neuronal activity, while in EIT by the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric �field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeo�ff between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover elaboration of data recorded requires usage of regularization techniques computationally intensive, which influences the application with heavy temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solution in reasonable times and address requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high-performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high-performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for encephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
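As background for the compressed triangular storage mentioned above, the conventional packed layout keeps only the nonzero triangle in a linear array; for a lower-triangular matrix held row by row (0-based indices, j ≤ i), the element L_{ij} lives at

\[
\operatorname{idx}(i,j)=\frac{i(i+1)}{2}+j,
\]

so an n × n triangular matrix occupies n(n+1)/2 entries instead of n². The formula shows only this standard packed baseline, not the GPU-friendly variant proposed in the thesis.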
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns the assignment of times and resources to a set of activities, to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-Flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDFGs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a constraint programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
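One common way to write the kind of modular precedence relation used in cyclic scheduling, and a plausible reading of the constraint introduced above (the thesis's exact model and filtering may differ): if activity i of iteration k must end before activity j of iteration k + δ_{ij} starts, with period λ, start times s and durations d, then

\[
s_j + \delta_{ij}\,\lambda \;\ge\; s_i + d_i .
\]

Since λ multiplies the integer distance δ_{ij}, the constraint is non-linear in the period, which is exactly why many classical approaches fix λ and test feasibility, whereas the approach above keeps λ as a decision variable inferred from the schedule.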
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at a detailed spatial resolution, which is very computationally intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal analysis flow we exploit the Intel Single-chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC's on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and SCC power consumption. Having the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data-sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs, with dynamic address remapping capability, is built and verified on real hardware.
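A minimal sketch, in C, of the per-bank refresh adaptation idea. The thresholds follow the common DRAM practice of doubling the refresh rate above 85 °C, and the per-bank temperatures are hypothetical inputs from a thermal model rather than the virtual platform's actual interface.

    #include <stdio.h>

    /* Return the refresh interval (microseconds) for one DRAM bank, given its
     * estimated temperature. Retention degrades with temperature, so hotter
     * banks are refreshed more often and cooler banks less often, saving power. */
    static double refresh_interval_us(double bank_temp_celsius)
    {
        if (bank_temp_celsius <= 85.0)
            return 7.8;    /* nominal tREFI for standard DDR devices */
        return 3.9;        /* extended temperature range: refresh twice as often */
    }

    int main(void)
    {
        /* Hypothetical per-bank temperatures from the 3D thermal model. */
        double bank_temp[4] = { 72.5, 84.0, 88.5, 91.0 };
        for (int b = 0; b < 4; b++)
            printf("bank %d: %.1f C -> tREFI = %.1f us\n",
                   b, bank_temp[b], refresh_interval_us(bank_temp[b]));
        return 0;
    }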
Abstract:
Multi-core processors are changing software development in every area of computing, because they offer higher performance at lower energy consumption. We therefore have the possibility of truly parallel computation, distributed among the different cores of the processor. OpenMP is certainly a standard for multithreaded programming: it aims to provide simple and clear directives for developing programs on shared-memory systems, while giving complete control over parallelization. In modern physics, computer simulations of systems with a high level of computational complexity are often used. We will optimize a piece of software that uses the DMRG (Density Matrix Renormalization Group) algorithm, an algorithm that makes it possible to study linear lattices of many-body systems, in order to speed up its computations by exploiting the processor cores as well as possible. To do this, the OpenMP API will be used, which allows us to parallelize the algorithm in a minimally invasive way, thus making its execution on multi-core architectures faster.
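A minimal example, in C, of the kind of minimally invasive OpenMP parallelization described above, applied to a dense matrix-vector product of the sort that dominates the inner iterations of a DMRG sweep. The routine and its shapes are illustrative, not taken from the actual code; compile with -fopenmp.

    /* y = H * x for a dense n x n matrix H stored row-major.
     * A single pragma distributes the outer loop over the available cores;
     * the sequential code is otherwise untouched. */
    void matvec(int n, const double *H, const double *x, double *y)
    {
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++)
                sum += H[i * n + j] * x[j];
            y[i] = sum;
        }
    }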
Abstract:
When it comes to control architectures for the Web, the event model is undoubtedly the most widespread and widely adopted one. Asynchrony and a high degree of user interaction are typical characteristics of Web applications, and an event-driven architecture, thanks to its characteristic control cycle known as the Event Loop, provides an abstraction that is simple yet expressive enough to satisfy these requirements. The growth of the Internet and of its associated technologies, together with recent advances in multi-core CPUs, has provided fertile ground for the development of ever more complex Web applications. This increase in complexity, however, has exposed some limits of the event model that are still not fully resolved today. This work proposes a different approach to this class of problems, one that overcomes the limits found in the event model by adopting a different architecture, born in the field of AI but now gaining popularity in general-purpose programming as well: the agent model. Agent architectures adopt a control cycle similar to the Event Loop of the event model, but with some profound differences: the Control Loop. The aim of this thesis is therefore to examine the two kinds of architecture in depth, highlighting their differences and showing what it means to approach the design and development of a Web application with different technologies based on different control cycles, pointing out the strengths and weaknesses of the two approaches.
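To make the comparison concrete, here is a skeletal event loop in C (a callback queue plus a dispatch cycle owned by the runtime) next to a schematic agent-style control loop, in which the cycle of sensing, deliberating and acting is owned by the agent itself. Both sketches are generic illustrations, not tied to any particular Web or agent framework.

    #include <stdio.h>

    #define QUEUE_SIZE 16

    /* An event is just a callback plus its argument. */
    typedef struct { void (*handler)(void *); void *arg; } Event;

    static Event queue[QUEUE_SIZE];
    static int head = 0, tail = 0;

    static int pop_event(Event *e)
    {
        if (head == tail) return 0;          /* queue empty */
        *e = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        return 1;
    }

    /* Event Loop: the runtime owns the cycle and the code only reacts. */
    static void event_loop(void)
    {
        Event e;
        while (pop_event(&e))
            e.handler(e.arg);                /* dispatch to registered callback */
    }

    /* Agent-style Control Loop (schematic): the agent owns the cycle and
     * decides what to do next instead of merely reacting to what arrives. */
    static void control_loop(int cycles)
    {
        for (int i = 0; i < cycles; i++)
            printf("agent cycle %d: sense -> deliberate -> act\n", i);
    }

    static void say_hello(void *arg) { printf("event: %s\n", (const char *)arg); }

    int main(void)
    {
        queue[tail++] = (Event){ say_hello, "user clicked" };
        event_loop();
        control_loop(2);
        return 0;
    }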
Abstract:
In software development, concurrency has always been seen as the way of the future. However, it has often been ignored because continuous hardware improvements allowed developers to keep writing sequential software without worrying about performance. In an era in which new hardware architectures feature multi-core processors, this is no longer possible. The goal of this thesis was to consider the Actor model as a valid alternative for developing mobile applications and, accordingly, to design, develop and release a new framework based on that model. The work therefore starts with an overview of Swift, the new programming language presented by Apple at WWDC 2014, analyzing in detail the mechanisms that enable concurrency. The Actor model is then described in terms of actors, properties, communication and synchronization. This is followed by an analysis of the main implementations of this model, including Scala, Erlang and Akka; the latter is the basis that inspired the design and development of the Actor Kit framework. The fourth chapter describes all the concepts, ideas and principles on which the Actor Kit framework was designed and developed. Finally, the last chapter presents the use of the framework in two common mobile-programming scenarios: 1) fetching data from a Web API and displaying it in the user interface; 2) acquiring data from the device's sensors. In conclusion, Actor Kit enables the design and development of applications following an entirely new approach in the mobile domain. A possible future development could be extending the framework with actors that wrap Apple's standard frameworks; precisely for this reason it will be released publicly, in the hope that other developers can evolve it and make it even more complete and performant.
Abstract:
During the last few years, portable multimedia terminals have evolved steadily, so that nowadays a wide range of devices with the ability to play multimedia content is easily available to everyone. Consequently, those multimedia terminals must have high-performance processors to play such content, because the coding and decoding tasks demand a high computational load. However, a powerful processor running at high frequencies implies higher battery consumption, and this issue has become one of the most important problems in the development cycle of a portable terminal. Power/energy consumption optimization on multimedia terminals has become one of the most significant research lines of the Electronic and Microelectronic Design Group (GDEM) of the Universidad Politécnica de Madrid. In particular, the group is researching how to reduce the user's Quality of Experience (QoE) in exchange for increased battery life. In order to reduce the QoE, a video coding standard that allows this operation is required. H.264/SVC allows reducing the QoE according to the needs/characteristics of the terminal. Specifically, a scalable video contains different versions of the original video embedded in a single video stream, and each of them can be decoded at different resolutions, frame rates and qualities (spatial, temporal and quality scalability, respectively). Once the video coding standard is selected, a multimedia player with support for scalable video is needed. Mplayer has been proposed as the multimedia player, whose characteristics (open source, enormous flexibility and a scalable video decoder called OpenSVC) make it the most suitable for the aims of this Master's Thesis. Lastly, the embedded system BeagleBoard, based on the multi-core processor OMAP3530, is the development platform used in this project. The multimedia terminal architecture is based on a commercial chip comprising a General Purpose Processor (GPP, ARM Cortex-A8) and a Digital Signal Processor (DSP, TMS320C64+™). Moreover, the OMAP3530 processor is able to modify its operating frequency and supply voltage dynamically in order to reduce the power consumption of the embedded system. The main goal of this Master's Thesis is therefore the integration of the multimedia player MPlayer, executed on the GPP, and the scalable video decoder OpenSVC, executed on the DSP, in order to distribute the computational load associated with the scalable video decoding task and to reduce the power consumption of the terminal. Once the integration is accomplished, the performance of the OpenSVC decoder executed on the DSP is measured using different combinations of scalability values. The results obtained are compared with scalable video decoding performed on the GPP, in order to show how poorly optimized this kind of architecture is for decoding tasks, in contrast to the DSP architecture.
Abstract:
Single-core capabilities have reached their maximum clock speed; new multi-core architectures provide an alternative way to tackle this issue. The design of decoding applications running on top of these multi-core platforms, and their optimization to exploit all the system's computational power, is crucial to obtain the best results. Since development at the integration level of printed circuit boards is increasingly difficult to optimize due to physical constraints and the inherent increase in power consumption, the development of multiprocessor architectures is becoming the new Holy Grail. In this sense, it is crucial to develop applications that can run on the new multi-core architectures and to find distributions that maximize the potential use of the system. Today most of the commercial electronic devices available in the market are built around embedded systems, and these devices have recently incorporated multi-core processors. Task management over multiple cores/processors is not a trivial issue, and good task/actor scheduling can yield significant improvements in terms of efficiency gains and also processor power consumption. Scheduling the data flows between the actors that implement an application aims to harness multi-core architectures for more types of applications, with an explicit expression of parallelism within the application. On the other hand, the recent development of the MPEG Reconfigurable Video Coding (RVC) standard allows the reconfiguration of video decoders. RVC is a flexible standard compatible with MPEG-developed codecs, making it the ideal tool to integrate into the new multimedia terminals to decode video sequences. With the new versions of the Open RVC-CAL Compiler (Orcc), a static mapping of the actors that implement the functionality of the application can be done once the application executable has been generated. This static mapping must be done for each of the different cores available on the target platform. An embedded system with a dual-core ARMv7 processor has been chosen. This platform allows us to run the desired tests, measure the improvement with respect to execution on a single core, and contrast both with a PC-based multiprocessor system.
Abstract:
Recently a new recipe for developing and deploying real-time systems has become increasingly adopted in the JET tokamak. Powered by the advent of x86 multi-core technology and the reliability of JET's well-established Real-Time Data Network (RTDN) to handle all real-time I/O, an official Linux vanilla kernel has been demonstrated to be able to provide real-time performance to user-space applications that are required to meet stringent timing constraints. In particular, a careful rearrangement of the Interrupt ReQuests' (IRQs) affinities, together with the kernel's CPU isolation mechanism, makes it possible to obtain either soft or hard real-time behavior depending on the synchronization mechanism adopted. Finally, the Multithreaded Application Real-Time executor (MARTe) framework is used for building applications particularly optimised for exploiting multi-core architectures. In the past year, four new systems based on this philosophy have been installed and are now part of JET's routine operation. The focus of the present work is on the configuration and interconnection of the ingredients that enable these new systems' real-time capability, and on the impact that JET's distributed real-time architecture has on system engineering requirements, such as algorithm testing and plant commissioning. Details are given about the common real-time configuration and development path of these systems, followed by a brief description of each system together with results regarding their real-time performance. A cycle-time jitter analysis of a user-space MARTe-based application synchronising over a network is also presented. The goal is to compare its deterministic performance while running on a vanilla and on a Messaging Realtime Grid (MRG) Linux kernel.
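A small illustration, in C, of the CPU-isolation half of the recipe: once some cores have been fenced off from the general scheduler (for example with the kernel's isolcpus boot parameter) and the IRQ affinities have been moved away from them, a real-time thread pins itself to one of the isolated cores. The core number below is an arbitrary example, not JET's actual configuration.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);   /* core 3 assumed to be isolated and IRQ-free */

        /* Pin the calling thread to the isolated core so that the only work
         * running there is this real-time task. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to isolated core 3\n");
        return 0;
    }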