920 results for Heterogeneous platforms
Abstract:
Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with a limited power supply, as they can execute specific tasks efficiently. Nonetheless, one of the main challenges on such platforms is optimising energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation on heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to cores so that dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states, trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption and the sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which has been shown in previous work to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
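For illustration only, a minimal Python sketch of the two-phase idea described above; the task/core parameters, energy model and the SLEEP_LEAKAGE_SAVING constant are invented placeholders, and this is not the paper's actual algorithm:

    # Hypothetical sketch: phase 1 places each task on the feasible core with the
    # lowest dynamic energy; phase 2 tries to vacate the least-loaded core so it
    # can enter a sleep state, trading extra dynamic energy for leakage savings.
    SLEEP_LEAKAGE_SAVING = 5.0  # invented: leakage energy saved by sleeping a core

    def allocate(tasks, cores):
        # tasks: list of {"util": {core: u}, "dyn": {core: e}} dicts (illustrative);
        # cores: {core: capacity}. Assumes the task set fits on the platform.
        load = {c: 0.0 for c in cores}
        placement = {}
        # Phase 1: greedy placement minimising dynamic energy.
        for i, t in enumerate(tasks):
            feasible = [c for c in cores if load[c] + t["util"][c] <= cores[c]]
            best = min(feasible, key=lambda c: t["dyn"][c])
            placement[i] = best
            load[best] += t["util"][best]
        # Phase 2: try to empty the least-loaded core so it can sleep.
        victim = min(cores, key=lambda c: load[c])
        new_load = dict(load)
        moves, extra = {}, 0.0
        for i, t in enumerate(tasks):
            if placement[i] != victim:
                continue
            alts = [c for c in cores
                    if c != victim and new_load[c] + t["util"][c] <= cores[c]]
            if not alts:
                return placement          # core cannot be emptied; keep phase 1
            dest = min(alts, key=lambda c: t["dyn"][c])
            moves[i] = dest
            new_load[dest] += t["util"][dest]
            extra += t["dyn"][dest] - t["dyn"][victim]
        if extra < SLEEP_LEAKAGE_SAVING:  # trade-off pays off: commit the moves
            placement.update(moves)
        return placement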
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a two-type heterogeneous multiprocessor platform, where a task may request at most one of |R| shared resources. There are m1 processors of type-1 and m2 processors of type-2. Tasks may migrate only when requesting or releasing resources. We present a new algorithm, FF-3C-vpr, which offers the following guarantee: if a task set can be scheduled to meet deadlines by an optimal task-assignment scheme that only allows tasks to migrate when requesting or releasing a resource, then FF-3C-vpr also meets deadlines if given processors 4 + 6*ceil(|R|/min(m1, m2)) times as fast. As far as we know, this is the first result for resource sharing on heterogeneous platforms with a provable performance guarantee.
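For concreteness, the speed factor in the guarantee can be computed directly from the formula in the abstract; the platform numbers below are an invented instance:

    from math import ceil

    def ff3c_vpr_speed_factor(num_resources, m1, m2):
        # Speed factor from the guarantee above: 4 + 6 * ceil(|R| / min(m1, m2))
        return 4 + 6 * ceil(num_resources / min(m1, m2))

    # Example: |R| = 3 shared resources, m1 = 2 type-1 and m2 = 4 type-2 processors
    print(ff3c_vpr_speed_factor(3, 2, 4))  # 4 + 6 * ceil(3/2) = 16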
Abstract:
The speed at which content spreads on a web platform is highly relevant for services where information must be kept up to date and delivered in real time. This Master's project presents an approach to a distributed system for collecting and disseminating results in real time across several platforms, namely mobile systems. In this context, real time is understood as a negligible delay between collection and dissemination, ignoring factors beyond the system's control, such as communication latency and processing time. The project builds on an existing architecture for processing and publishing sports results, which suffered from problems related to scalability, security, long result-delivery times and lack of integration with other platforms. Throughout this work we investigated factors that constrain the scalability of a web application, with emphasis on implementing a solution based on replication and horizontal scalability. We also sought to provide interoperability between heterogeneous systems and platforms, while maintaining high levels of performance and promoting the introduction of mobile platforms into the system. Among the existing approaches to real-time communication on a web platform, we adopted a WebSocket-based implementation, which eliminates the time wasted between the collection of information and its dissemination. This project describes the implementation of the data-collection API (Collector), the communication library for the Collector, the web application (Publisher) and its API, the communication library for the Publisher and, finally, the multi-platform mobile application. With these components in place, we evaluated the results obtained with the new architecture in order to assess the scalability and performance of the solution and its fit with the existing system.
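A minimal sketch of the Collector-to-Publisher push path over WebSocket, using the Python "websockets" library (assuming a recent version where the handler takes a single connection argument); the port, endpoint and message format are invented, and this is not the project's actual C#/PHP code:

    # Results are pushed to every connected client the moment they arrive,
    # so no polling delay sits between collection and dissemination.
    import asyncio, json
    import websockets

    CLIENTS = set()

    async def publisher(ws):
        # Each connected client (e.g. a mobile app) registers here.
        CLIENTS.add(ws)
        try:
            await ws.wait_closed()
        finally:
            CLIENTS.discard(ws)

    async def push_result(result):
        # Called by the Collector as soon as a result arrives.
        msg = json.dumps(result)
        await asyncio.gather(*(c.send(msg) for c in CLIENTS),
                             return_exceptions=True)

    async def main():
        async with websockets.serve(publisher, "0.0.0.0", 8765):
            await push_result({"match": 1, "score": "2-1"})  # illustrative event
            await asyncio.Future()  # run forever

    asyncio.run(main())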
Abstract:
Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering
Abstract:
The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto
Abstract:
In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources, such as data sets and analytical tools, are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables, in order to predict past and future climate-related changes in species distribution. The data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats and lack interoperability. The BDW system brings all these disparate units together so that the user can combine tools with little thought as to their availability, data formats and interoperability. The current Web Services-based Grid environment enables execution of BDW workflow tasks in remote nodes, but with a limited scope. The next step in the evolution of the BDW architecture is to enable workflow tasks to utilise computational resources available within and outside the BDW domain. We describe the present BDW architecture and its transition to a new framework which provides a distributed computational environment for mapping and executing workflows, in addition to bringing together heterogeneous resources and analytical tools.
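Purely as an illustration of the workflow idea (chaining remote services so the user need not care where each runs), the sketch below fetches occurrence data from one service and feeds it to a modelling service; the URLs, parameters and response fields are all invented:

    # Hypothetical two-step workflow: occurrence retrieval -> bioclimatic model.
    import requests

    occ = requests.get("https://example.org/bdw/occurrences",
                       params={"species": "Sturnus vulgaris"}).json()
    model = requests.post("https://example.org/bdw/bioclim",
                          json={"occurrences": occ,
                                "climate_vars": ["tmin", "tmax", "precip"]}).json()
    print(model["predicted_range"])  # hypothetical response field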
Abstract:
The proposed tool, known as WSPControl, enables remote monitoring of computers across the Internet using distributed applications. A Web Services architecture makes communication between these distributed applications possible across heterogeneous platforms, and also eliminates the need for additional network configuration, such as opening ports or setting up a proxy. The tool is divided into three modules (a hypothetical stand-in for the client module is sketched after this list):
• Client Interface: developed in C Sharp, it captures performance data from the monitored computer and connects to the Web Services to report this data.
• Web Services Interface: developed in PHP using the PHP SOAP library, it mediates the communication between the Internet and client applications.
• Internet Interface: developed in PHP, it reads and interprets the captured information and makes it available on the Internet.
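As promised above, a Python stand-in for the Client Interface (the real module is written in C Sharp): it samples local performance counters and reports them via an outbound HTTP request, which is why no inbound ports or proxy changes are needed; the endpoint URL and payload fields are invented:

    # Illustrative client: sample CPU/memory and push the data outbound.
    import time
    import psutil     # third-party: pip install psutil
    import requests   # third-party: pip install requests

    ENDPOINT = "https://example.org/wspcontrol/report"  # hypothetical web service

    while True:
        sample = {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "mem_percent": psutil.virtual_memory().percent,
        }
        requests.post(ENDPOINT, json=sample, timeout=10)
        time.sleep(60)  # report once a minute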
Abstract:
In this paper we advocate the Loop-of-stencil-reduce pattern as a way to simplify the parallel programming of heterogeneous platforms (multicore + GPUs). Loop-of-stencil-reduce is general enough to subsume map, reduce, map-reduce, stencil, stencil-reduce and, crucially, their usage in a loop. It transparently targets (by using OpenCL) combinations of CPU cores and GPUs, and it makes it possible to simplify the deployment of a single stencil computation kernel on different GPUs. The paper discusses the implementation of Loop-of-stencil-reduce within the FastFlow parallel framework, considering a simple iterative data-parallel application as a running example (Game of Life) and a highly effective parallel filter for visual data restoration to assess performance. Thanks to the high-level design of Loop-of-stencil-reduce, it was possible to run the filter seamlessly on a multicore machine, on multiple GPUs, and on both.
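A minimal sequential sketch of the pattern's structure, using Game of Life (the paper's running example); the real implementation lives inside FastFlow and offloads the stencil kernel to CPUs/GPUs via OpenCL, which this sketch does not attempt:

    # Loop-of-stencil-reduce: apply a stencil over the whole grid, reduce to a
    # global value, and use that reduction to decide whether to iterate again.
    import numpy as np

    def life_step(grid):
        # Stencil: each cell reads its 8 neighbours (toroidal wrap via np.roll).
        n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

    def loop_of_stencil_reduce(grid, max_iters=100):
        for _ in range(max_iters):
            new = life_step(grid)               # stencil phase
            changed = int(np.sum(new != grid))  # reduce phase: convergence test
            grid = new
            if changed == 0:                    # loop condition uses the reduction
                break
        return grid

    rng = np.random.default_rng(0)
    world = rng.integers(0, 2, (64, 64), dtype=np.uint8)
    print(loop_of_stencil_reduce(world).sum())  # live cells at fixpoint/cut-off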
Abstract:
In this paper, we develop a fast implementation of a hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems, which encompasses a wide variety of devices, from dense multicore systems from major manufacturers such as Intel or ARM to accelerators such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), the Intel Xeon Phi and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments have been conducted using simulated data and reveal considerable acceleration factors. Implementations that use the same descriptive language on different architectures are very important in order to properly assess the feasibility of using heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
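A sketch of the "same source on every device" idea using PyOpenCL; the kernel below is a trivial stand-in, not the HYCA computation, and the snippet simply runs it on every OpenCL device it can find:

    # One OpenCL kernel source, executed unchanged on each available device.
    import numpy as np
    import pyopencl as cl

    SRC = """
    __kernel void scale(__global float *x, const float a) {
        int i = get_global_id(0);
        x[i] *= a;
    }
    """

    for platform in cl.get_platforms():
        for dev in platform.get_devices():      # CPUs, GPUs, accelerators...
            ctx = cl.Context([dev])
            queue = cl.CommandQueue(ctx)
            prog = cl.Program(ctx, SRC).build()
            x = np.arange(8, dtype=np.float32)
            buf = cl.Buffer(ctx,
                            cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                            hostbuf=x)
            prog.scale(queue, x.shape, None, buf, np.float32(2.0))
            cl.enqueue_copy(queue, x, buf)
            print(dev.name, x[:4])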
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a heterogeneous multiprocessor platform. We use an algorithm proposed in [1] (we refer to it as LP-EE) from the state of the art for assigning tasks to a heterogeneous multiprocessor platform and (re-)prove its performance guarantee, but for a stronger adversary. We conjecture that if a task set can be scheduled to meet deadlines on a heterogeneous multiprocessor platform by an optimal task-assignment scheme that allows task migrations, then LP-EE meets deadlines as well, with no migrations, if given processors twice as fast. We illustrate this with an example.
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a heterogeneous multiprocessor platform. We consider a restricted case in which the maximum utilization of any task on any processor in the system is no greater than one. We use an algorithm proposed in [1] (we refer to it as LP-EE) from the state of the art for assigning tasks to a heterogeneous multiprocessor platform and (re-)prove its performance guarantee for this restricted case, but for a stronger adversary. We show that if a task set can be scheduled to meet deadlines on a heterogeneous multiprocessor platform by an optimal task-assignment scheme that allows task migrations, then LP-EE meets deadlines as well, with no migrations, if given processors twice as fast.
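A tiny numeric illustration, with invented numbers, of the restricted case's premise and of what "processors twice as fast" does to it: task i's utilization on processor type j is u[i][j] = C(i on j) / T(i), and doubling processor speed halves every entry:

    # Invented utilization matrix for 3 tasks on a 2-type platform.
    u = [[0.9, 0.3],   # task 0 on type-1, type-2
         [0.6, 0.8],
         [0.5, 0.4]]
    assert all(x <= 1.0 for row in u for x in row)  # restricted-case premise
    u_fast = [[x / 2 for x in row] for row in u]    # processors twice as fast
    print(u_fast)  # every utilization halved, leaving slack for non-migrative LP-EE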
Abstract:
DNA microarrays are one of the most widely used technologies for gene expression measurement. However, there are several distinct microarray platforms, from different manufacturers, each with its own measurement protocol, resulting in data that can hardly be compared or directly integrated. Data integration from multiple sources aims to improve the power of statistical tests, reducing the data dimensionality problem. The integration of heterogeneous DNA microarray platforms comprises a set of tasks that range from the re-annotation of the features used in gene expression measurement to data normalization and batch effect elimination. In this work, a complete methodology for gene expression data integration and application is proposed, comprising a transcript-based re-annotation process and several methods for batch effect attenuation. The integrated data are used to select the best feature set and learning algorithm for a brain tumor classification case study. The integration considers data from heterogeneous Agilent and Affymetrix platforms, collected from public gene expression databases such as The Cancer Genome Atlas and Gene Expression Omnibus.
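A minimal sketch of one simple batch-effect attenuation step (per-gene, per-batch standardization) with pandas; the dissertation's actual pipeline (transcript re-annotation, normalization, and more sophisticated correction methods) is considerably more involved, and the batch labels here are invented:

    # Z-score each gene within each batch so platform-level shifts are removed.
    import pandas as pd

    def standardize_per_batch(expr, batches):
        # expr: samples x genes DataFrame; batches: Series mapping each sample
        # to its batch (e.g. "agilent" vs "affymetrix"). Assumes no gene is
        # constant within a batch (std would be zero).
        def z(group):
            return (group - group.mean()) / group.std(ddof=0)
        return expr.groupby(batches).transform(z)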
Abstract:
MSc Dissertation in Computer Engineering
Abstract:
Presented at the Work-in-Progress Session of the IEEE Real-Time Systems Symposium (RTSS 2015), 1-4 December 2015, San Antonio, USA.
Abstract:
Electricity markets worldwide are complex and dynamic environments with very particular characteristics. These result from the restructuring of electricity markets and their evolution to regional and continental scales, along with the constant changes brought by the increasing need for an adequate integration of renewable energy sources. The rising complexity and unpredictability of electricity markets has increased the need for the intervening entities to foresee market behaviour. Market players and regulators are very interested in predicting the market's behaviour: market players need to understand market behaviour and operation in order to maximize their profits, while market regulators need to test new rules and detect market inefficiencies before they are implemented. The growing use of simulation tools has been driven by the need to understand those mechanisms and how the interactions of the involved players affect market outcomes. Multi-agent-based software is particularly well suited to analysing dynamic and adaptive systems with complex interactions among their constituents, such as electricity markets. Several modelling tools directed at the study of restructured wholesale electricity markets have emerged. Still, they share a common limitation: the lack of interoperability between the various systems to allow the exchange of information and knowledge, to test different market models and to allow market players from different systems to interact in common market environments. This dissertation proposes the development and implementation of ontologies for semantic interoperability between multi-agent simulation platforms in the scope of electricity markets. The added value provided to these platforms comes from enabling them to share their knowledge and market models with other agent societies, which provides the means for a real improvement in current electricity market studies and development. The proposed ontologies are implemented in MASCEM (Multi-Agent Simulator of Competitive Electricity Markets) and tested through the interaction between MASCEM agents and agents from other multi-agent-based simulators. The implementation of the proposed ontologies also required a complete restructuring of MASCEM's architecture and multi-agent model, which is likewise presented in this dissertation. The results achieved in the case studies make it possible to identify the advantages of the novel MASCEM architecture and, most importantly, the added value of using the proposed ontologies: they facilitate the integration of independent multi-agent simulators by providing a way for communications to be understood by heterogeneous agents from the various systems.
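Purely as an illustration of what a shared ontology buys (agents from different simulators naming the same market concepts the same way), the sketch below checks a bid message against an invented shared vocabulary; none of these terms come from the actual MASCEM ontologies:

    # A bid expressed against a shared (invented) vocabulary, so heterogeneous
    # agents interpret the same fields the same way.
    SHARED_VOCAB = {"Bid", "agentId", "price", "energyAmount", "period"}

    bid = {
        "@type": "Bid",        # concept from the shared electricity-market ontology
        "agentId": "seller-42",
        "price": 38.5,         # EUR/MWh
        "energyAmount": 10.0,  # MWh
        "period": 12,          # trading period of the day
    }
    assert bid["@type"] in SHARED_VOCAB
    assert set(bid) - {"@type"} <= SHARED_VOCAB  # message uses only shared terms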