852 results for PARALLEL WORKSTATIONS
Abstract:
The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, where the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived that properly exploits the graphics processing unit (GPU) architecture at low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral datasets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
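The key property exploited in this approach is that, under the linear mixture model, every pixel can be unmixed independently against the library. As a hedged illustration (not the paper's GPU code), the following CPU sketch unmixes each pixel with a non-negative least-squares solve; the library A and image Y are random stand-ins, and only the non-negativity constraint is enforced.

# Minimal CPU sketch of per-pixel, library-based unmixing under the
# non-negativity constraint (sum-to-one omitted for brevity).
# A: (bands x materials) spectral library, Y: (bands x pixels) hyperspectral data.
import numpy as np
from scipy.optimize import nnls

def unmix_nnls(A, Y):
    # Each pixel is solved independently, which is what makes a
    # pixel-by-pixel GPU mapping straightforward.
    X = np.zeros((A.shape[1], Y.shape[1]))
    for p in range(Y.shape[1]):
        X[:, p], _ = nnls(A, Y[:, p])
    return X

# Example with random data (stand-ins for a real library and image).
A = np.random.rand(200, 30)          # 200 bands, 30 library signatures
Y = A @ np.random.rand(30, 100)      # 100 mixed pixels
abundances = unmix_nnls(A, Y)

Because the loop body has no cross-pixel dependencies, mapping one pixel (or one block of pixels) per GPU thread is the natural low-level parallelization the abstract refers to.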
Abstract:
Many hyperspectral imagery applications require a response in real time or near-real time. To meet this requirement, this paper proposes a parallel unmixing method developed for graphics processing units (GPUs). The method is based on the vertex component analysis (VCA), a geometry-based method that is highly parallelizable. VCA is a very fast and accurate method that extracts endmember signatures from large hyperspectral datasets without using any a priori knowledge about the constituent spectra. Experimental results obtained for simulated and real hyperspectral datasets reveal considerable acceleration factors, up to 24 times.
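For readers unfamiliar with VCA, the core geometric idea is that the endmembers are the extreme points of the data simplex: at each step the data are projected onto a direction orthogonal to the subspace spanned by the endmembers already found, and the pixel with the largest projection is taken as the next endmember. The sketch below is a simplified, hedged illustration of that loop in plain NumPy (it omits the SNR-dependent dimensionality reduction of the full algorithm); all names are illustrative.

import numpy as np

def simple_vca(Y, p):
    # Y: (bands x pixels) hyperspectral data, p: number of endmembers to extract.
    # At each step, project the data onto a direction orthogonal to the
    # subspace spanned by the endmembers found so far and keep the pixel
    # with the largest absolute projection (the "extreme" pixel).
    bands, n = Y.shape
    E = np.zeros((bands, p))
    indices = []
    for i in range(p):
        if i == 0:
            P = np.eye(bands)
        else:
            Ei = E[:, :i]
            P = np.eye(bands) - Ei @ np.linalg.pinv(Ei)   # projector onto complement of span(Ei)
        w = P @ np.random.randn(bands)                    # random orthogonal direction
        proj = np.abs(w @ Y)
        k = int(np.argmax(proj))                          # extreme pixel index
        E[:, i] = Y[:, k]
        indices.append(k)
    return E, indices

The projection of all pixels onto the candidate direction is a single matrix-vector product, which is the step that parallelizes naturally on a GPU.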
Abstract:
In this paper, a new parallel method for sparse spectral unmixing of remotely sensed hyperspectral data on commodity graphics processing units (GPUs) is presented. A semi-supervised approach is adopted, which relies on the increasing availability of spectral libraries of materials measured on the ground instead of resorting to endmember extraction methods. The method is based on spectral unmixing by splitting and augmented Lagrangian (SUNSAL), which estimates the abundance fractions of the materials. The parallel method works in a pixel-by-pixel fashion and its implementation properly exploits the GPU architecture at low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for simulated and real hyperspectral datasets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
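SUNSAL belongs to the alternating direction / augmented Lagrangian family of solvers. As a hedged sketch of that idea (not the authors' implementation), the following code solves, for a single pixel y, the l1-regularized non-negative problem min_x 0.5*||Ax - y||^2 + lam*||x||_1 with x >= 0 by alternating a cached linear solve, a soft-thresholding step, and a dual update; lam, mu, and the iteration count are illustrative parameters.

import numpy as np

def sunsal_like_admm(A, y, lam=1e-3, mu=0.1, iters=200):
    # Hedged sketch of an ADMM / augmented-Lagrangian solver in the spirit
    # of SUNSAL: l1-regularized, non-negative least squares for one pixel y
    # against the library A.
    m = A.shape[1]
    AtA = A.T @ A
    Aty = A.T @ y
    inv = np.linalg.inv(AtA + mu * np.eye(m))   # cached factor, reused every iteration
    x = np.zeros(m); z = np.zeros(m); d = np.zeros(m)
    for _ in range(iters):
        x = inv @ (Aty + mu * (z - d))          # quadratic sub-problem
        v = x + d
        z = np.maximum(np.sign(v) * np.maximum(np.abs(v) - lam / mu, 0.0), 0.0)  # soft-threshold, then clip to z >= 0
        d = d + x - z                           # scaled dual (Lagrange multiplier) update
    return z

Since the expensive factor (AtA + mu*I)^-1 depends only on the library, it can be computed once and shared by all pixels, while the per-pixel iterations run independently, matching the pixel-by-pixel GPU mapping described in the abstract.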
Abstract:
This paper presents a methodology for multi-objective day-ahead energy resource scheduling for smart grids, considering intensive use of distributed generation and Vehicle-to-Grid (V2G). The main focus is the application of weighted Pareto to a multi-objective parallel particle swarm approach aiming to solve the dual-objective V2G scheduling problem: minimizing total operation costs and maximizing V2G income. A realistic mathematical formulation, considering the network constraints and the V2G charging and discharging efficiencies, is presented, and parallel computing is applied to the Pareto weights. AC power flow calculation is included in the metaheuristic approach so that the network constraints can be taken into account. A case study with a 33-bus distribution network and 1800 V2G resources is used to illustrate the performance of the proposed method.
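The phrase "parallel computing is applied to the Pareto weights" suggests that each weight turns the two objectives into a single scalarized objective and that the resulting single-objective problems are solved concurrently. The sketch below illustrates only that decomposition: a toy random search stands in for the paper's particle swarm optimizer, and cost() and income() are placeholder functions rather than the paper's network-constrained model.

# Hedged sketch of the "parallelize over Pareto weights" idea: each weight w
# scalarizes the two objectives (operation cost, V2G income), and the weighted
# problems are solved independently, one per worker process.
import numpy as np
from multiprocessing import Pool

def cost(x):   return float(np.sum(x**2))                 # placeholder operation cost
def income(x): return float(np.sum(np.sqrt(np.abs(x))))   # placeholder V2G income

def solve_for_weight(w, dim=10, evals=2000, seed=0):
    # Toy random search standing in for the particle swarm optimizer.
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(evals):
        x = rng.uniform(-1.0, 1.0, dim)                   # candidate schedule
        f = w * cost(x) - (1.0 - w) * income(x)           # weighted scalarization
        if f < best_f:
            best_x, best_f = x, f
    return w, best_f

if __name__ == "__main__":
    weights = np.linspace(0.0, 1.0, 11)                   # Pareto weights
    with Pool() as pool:
        front = pool.map(solve_for_weight, weights)       # one weighted problem per worker
    for w, f in front:
        print(f"w={w:.1f}  best weighted objective={f:.3f}")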
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
Nowadays, with the constant advances in industry, new areas are increasingly becoming a focus of attention for organizations. Motivated by the search for better conditions for its employees and by all the benefits that this type of intervention offers, in the short term and especially in the medium and long term, Grohe Portugal, more specifically its assembly department, considered it relevant to promote the application of ergonomics to its workstations. This dissertation therefore presents the work carried out with the organization, whose objective was to design and build an assembly line that took the following aspects into account:
• Ergonomics;
• Automation or semi-automation of operations;
• Simplification of operational aspects;
• More robust and easier-to-use supply systems;
• Simplification of setups;
• Definition of standardized dimensions for future projects.
The primary objective of the solutions found was to satisfy as many employees as possible, for which anthropometric data for the Portuguese population were used. To carry out and conclude this project, the work was broken down into several stages, among which the following stand out:
• Analysis and study of the assembly methods;
• Survey of all the components and operations that make up the manufacturing process of the various product families covered by the new projects;
• Definition and standardization of the structure of the new assembly lines;
• Study and definition of the placement of the components on the new line, as well as of the way they are supplied;
• 3D design of the assembly line using the SolidWorks software (DassaultSystemes, 2014);
• Final assembly of the line and follow-up of its start-up phase.
During the internship, a parallel project was also devised and implemented for the continuous maintenance and improvement of the assembly department; its objective, through "plant walks", is to detect, among other issues, missing identification of components or equipment, tool degradation, and leaks or spills on the lines. The overall outcome of the work was very positive: improvements were achieved in several quality indices, in supply times, and in the ergonomic conditions of the workstations that were modified, and these improvements were positively evaluated by the employees who work on those lines.
Abstract:
Euromicro Conference on Digital System Design (DSD 2015), Funchal, Portugal.
Abstract:
6th Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.
Abstract:
The 30th ACM/SIGAPP Symposium on Applied Computing (SAC 2015), Embedded Systems track. 13 to 17, Apr, 2015. Salamanca, Spain.
Abstract:
Distributed real-time systems such as automotive applications are becoming larger and more complex, thus requiring the use of more powerful hardware and software architectures. Furthermore, these distributed applications commonly have stringent real-time constraints, which implies that such applications would gain in flexibility if they were parallelized and distributed over the system. In this paper, we consider the problem of allocating fixed-priority fork-join parallel/distributed real-time tasks onto distributed multi-core nodes connected through a Flexible Time-Triggered Switched Ethernet network. We analyze the system requirements and present a set of formulations based on a constraint programming approach. Constraint programming allows us to express the relations between variables in the form of constraints. Our approach is guaranteed to find a feasible solution, if one exists, in contrast to other approaches based on heuristics. Furthermore, approaches based on constraint programming have been shown to obtain solutions for this type of formulation in reasonable time.
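As a hedged illustration of a constraint-programming allocation (not the paper's formulation, which also covers fixed priorities and the switched-Ethernet schedule), the sketch below assigns task segments to cores subject to a per-core utilization budget using Google OR-Tools CP-SAT; the utilizations, core count, and budget are placeholder data.

# Hedged sketch: allocate task segments to cores with CP-SAT so that no
# core's utilization budget is exceeded. Placeholder data, illustrative only.
from ortools.sat.python import cp_model

utils = [30, 20, 50, 40, 10, 25]       # segment utilizations (percent), placeholder data
cores = 3
budget = 100                           # per-core utilization budget (percent)

model = cp_model.CpModel()
# x[i][c] == 1 iff segment i is allocated to core c.
x = [[model.NewBoolVar(f"x_{i}_{c}") for c in range(cores)] for i in range(len(utils))]

for i in range(len(utils)):
    model.Add(sum(x[i]) == 1)          # each segment goes to exactly one core
for c in range(cores):
    model.Add(sum(utils[i] * x[i][c] for i in range(len(utils))) <= budget)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for i in range(len(utils)):
        core = next(c for c in range(cores) if solver.Value(x[i][c]))
        print(f"segment {i} -> core {core}")
else:
    print("no feasible allocation")    # the solver proves infeasibility, unlike a heuristic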
Abstract:
Article in Press, Corrected Proof
Abstract:
Paper/Poster presented at the Work in Progress Session, 28th GI/ITG International Conference on Architecture of Computing Systems (ARCS 2015). 24 to 26, Mar, 2015. Porto, Portugal.
Abstract:
Poster presented at the Work in Progress Session, 28th GI/ITG International Conference on Architecture of Computing Systems (ARCS 2015). 24 to 26, Mar, 2015. Porto, Portugal.
Abstract:
Presented at INForum - Simpósio de Informática (INFORUM 2015). 7 to 8, Sep, 2015. Portugal.
Abstract:
The recent technological advancements and market trends are causing an interesting phenomenon towards the convergence of the High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are being required by markets that need huge amounts of information to be processed within a bounded amount of time. On the other side, EC systems are increasingly concerned with providing higher performance in real time, challenging the performance capabilities of current architectures. The advent of next-generation many-core embedded platforms has the chance to intercept this converging need for predictable high performance, allowing HPC and EC applications to be executed on efficient and powerful heterogeneous architectures integrating general-purpose processors with many-core computing fabrics. To this end, it is of paramount importance to develop new techniques for exploiting the massively parallel computation capabilities of such platforms in a predictable way. P-SOCRATES will tackle this important challenge by merging leading research groups from the HPC and EC communities. The time-criticality and parallelisation challenges common to both areas will be addressed by proposing an integrated framework for executing workload-intensive applications with real-time requirements on top of next-generation commercial off-the-shelf (COTS) platforms based on many-core accelerated architectures. The project will investigate new HPC techniques that fulfil real-time requirements. The main sources of indeterminism will be identified, and efficient mapping and scheduling algorithms will be proposed, along with the associated timing and schedulability analysis, to guarantee the real-time and performance requirements of the applications.