749 results for Consortial Implementations


Relevance:

10.00%

Publisher:

Abstract:

Even though Software Transactional Memory (STM) is one of the most promising approaches to simplifying concurrent programming, current STM implementations incur significant overheads that render them impractical for many real-sized programs. The key insight of this work is that we do not need to use the same costly barriers for all the memory managed by a real-sized application: when only a small fraction of the memory is under contention, lightweight barriers may be used for the remainder. In this work, we propose a new solution based on adaptive object metadata (AOM) to promote the use of a fast path when accessing objects that are not under contention. We show that this approach makes the performance of an STM competitive with the best fine-grained lock-based approaches in some of the most challenging benchmarks.
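As an illustration only, the Python sketch below mimics the adaptive object metadata idea under stated assumptions: the names AdaptiveObject and INFLATE_THRESHOLD are ours, a bare version counter stands in for the compact metadata, and a lock stands in for the inflated metadata. It is not the paper's implementation, and Python's GIL sidesteps the real memory-model subtleties, so it only conveys the fast-path/slow-path structure.

```python
# Hypothetical sketch of the AOM idea, not the paper's implementation:
# objects start with compact metadata (a version counter read through a
# lightweight barrier) and are "inflated" to lock-based metadata only
# after repeated conflicts.
import threading

INFLATE_THRESHOLD = 3   # conflicts tolerated before leaving the fast path

class AdaptiveObject:
    def __init__(self, value):
        self.value = value
        self.version = 0      # compact metadata: a single version counter
        self.conflicts = 0
        self.lock = None      # full metadata, allocated only under contention

    def read(self):
        while True:
            lock = self.lock
            if lock is not None:             # slow path: conventional barrier
                with lock:
                    return self.value
            v0 = self.version                # fast path: lightweight barrier
            value = self.value
            if self.version == v0:           # no concurrent writer observed
                return value
            self.conflicts += 1
            if self.conflicts >= INFLATE_THRESHOLD:
                self.lock = threading.Lock() # inflate this object's metadata

    def write(self, value):
        lock = self.lock
        if lock is not None:
            with lock:
                self.value = value
        else:
            self.version += 1                # signal fast-path readers
            self.value = value
```

The point of the sketch is that uncontended objects pay only a version check, while the costly metadata is allocated per object and only on demand.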

Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfillment of the requirements for the degree of Master in Electrical and Computer Engineering.

Relevance:

10.00%

Publisher:

Abstract:

Computer Engineering (Engenharia Informática), specialization area in Architectures, Systems and Networks.

Relevance:

10.00%

Publisher:

Abstract:

The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations for its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which together largely reduce the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations were developed using the compute unified device architecture (CUDA) and the cuFFT library. In the P-HYCA-FAST and P-CHYCA-FAST implementations, this library is replaced by a fast iterative method that yields very significant speedups, making it possible to meet real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance, using two different GPU architectures by NVIDIA: 1) GeForce GTX 590 and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
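The numpy toy example below illustrates the property HYCA exploits, namely that spectra lie close to a low-dimensional subspace spanned by a few endmembers, so far fewer random measurements than spectral bands suffice. It is a generic CS sketch under the assumption of known endmember signatures, not the HYCA/CHYCA algorithms or their GPU implementations.

```python
# Toy compressive-sensing sketch: recover hyperspectral pixels from 10x
# fewer measurements than bands by exploiting the endmember subspace.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_end, n_pixels, n_meas = 200, 5, 1000, 20   # n_meas << n_bands

E = rng.random((n_bands, n_end))                # endmember signatures (assumed known)
A = rng.dirichlet(np.ones(n_end), n_pixels).T   # abundances: nonnegative, sum to one
X = E @ A                                       # clean pixels (bands x pixels)

Phi = rng.standard_normal((n_meas, n_bands))    # compressive measurement matrix
Y = Phi @ X                                     # only 20 measurements per pixel

# Because Phi @ E is a tall (20 x 5) matrix, the abundances, and hence the
# full spectra, can be recovered by plain least squares.
A_hat, *_ = np.linalg.lstsq(Phi @ E, Y, rcond=None)
X_hat = E @ A_hat
print("relative reconstruction error:",
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```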

Relevance:

10.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem that can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended to three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and the noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance.

Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of the purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data; the MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
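To make the constrained least-squares unmixing mentioned above ([20]) concrete, here is a minimal sketch that enforces the abundance nonnegativity and sum-to-one constraints by projected gradient descent; the function names and the projected-gradient choice are ours, not the chapter's.

```python
# Minimal fully constrained least squares (FCLS) sketch: solve
# min ||x - E a||^2 subject to a >= 0 and sum(a) = 1.
import numpy as np

def project_simplex(a):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(a) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(a - theta, 0.0)

def fcls(x, E, n_iter=500):
    """Estimate abundances for pixel spectrum x given signatures E."""
    n_end = E.shape[1]
    a = np.full(n_end, 1.0 / n_end)           # start at the simplex center
    step = 1.0 / np.linalg.norm(E.T @ E, 2)   # 1/L gradient step size
    for _ in range(n_iter):
        grad = E.T @ (E @ a - x)
        a = project_simplex(a - step * grad)  # keep iterates feasible
    return a

# Usage on synthetic data: the estimate should be close to a_true.
rng = np.random.default_rng(1)
E = rng.random((100, 4))                      # 100 bands, 4 endmembers
a_true = np.array([0.5, 0.3, 0.2, 0.0])
x = E @ a_true + 0.001 * rng.standard_normal(100)
print(fcls(x, E))
```

The same sum-to-one constraint that makes this projection necessary is what breaks the independence assumption of ICA and IFA discussed above.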

Relevance:

10.00%

Publisher:

Abstract:

This dissertation aimed to implement Lean Management methodologies and to evaluate their impact on the product development process. The approach consisted of a literature review and a survey of the state of the art to obtain the theoretical grounding needed to implement Lean methodologies. It proceeded with an assessment of the initial situation of the organization under study regarding product development activities, document management and operational practices, and support activities, carried out through surveys and experimental measurements. This knowledge made it possible to create a reference model for implementing Lean Management in this specific area of product development. Once implemented, the model was validated through practical experimentation and the collection of indicators. The implementation of this reference model introduced the foundations of Lean thinking in the Product and Systems Development Unit (DPS) of the INEGI organization, contributing to the creation of an environment of Respect for Humanity and Continuous Improvement. In this environment it was possible to obtain qualitative and quantitative gains in the several areas under study, contributing overall to an increase in the efficiency and effectiveness of the DPS. This increase in efficiency is expected to represent an increase in the organization's installed capacity, through an annual reduction of 2290 hours of waste (6.5% of the unit's total capacity) and a significant reduction in operational costs. Some of the improvements proposed in the course of this work, once their success had been verified, went beyond the unit under study and were applied across the organization. Qualitative gains were also obtained, such as the standardization of document management practices and the centralization and streamlining of information flows, which increased the quality of the services provided by reducing corrections and rework. Additionally, a new tool was developed that monitors the current state of projects in terms of percentage of execution (fulfillment of objectives), deadlines, and costs, and that estimates project completion dates, enabling project replanning and the timely detection of deviations. The tool also builds a history identifying the effort, in hours, associated with carrying out the activities and tasks of the various product development areas, and can thus be used to support the future budgeting of similar activities. During the project, mechanisms were also created to compute indicators of the technical competences and intrinsic motivations of the individual members of the DPS team. These indicators can be used by project managers when defining the composition of work teams, the performers of individual project tasks, and the recipients of training actions. With this information, it is expected that human potential will be better exploited and, as a consequence, that the performance and personal satisfaction of the organization's human resources will increase. This case study demonstrated that the potential for improving product development processes through Lean Management methodologies is very significant, and that these methodologies result in visible gains for the organization as well as for its members individually.

Relevance:

10.00%

Publisher:

Abstract:

Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering by the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance:

10.00%

Publisher:

Abstract:

The demand response concept has been gaining importance, and the success of several recent implementations makes the benefits of this resource unquestionable. This happens in a power systems operation environment that also features intensive use of distributed generation. However, more adequate approaches and models are needed to address the aggregation of small-size consumers and producers while taking these resources' goals into account. The present paper focuses on the management of demand response programs and distributed generation resources by a Virtual Power Player that aims to minimize its operation costs, taking consumption shifting constraints into account. The impact of consumption shifting on the distributed generation resources schedule is also considered. The methodology is applied to three scenarios based on 218 consumers and 4 types of distributed generation, over a time frame of 96 periods.
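A hedged toy version of this kind of cost minimization with consumption shifting is sketched below using scipy.optimize.linprog. The variable layout, numbers, and the uniform-redistribution rule for shifted demand are illustrative assumptions, far simpler than the paper's model of 218 consumers, 4 DG types, and 96 periods.

```python
# Toy VPP dispatch: minimize DG cost, allowing bounded demand shifting.
import numpy as np
from scipy.optimize import linprog

T = 4                                          # periods (the paper uses 96)
load = np.array([50.0, 80.0, 90.0, 60.0])      # baseline demand per period
dg_cost = np.array([0.06, 0.10, 0.12, 0.07])   # DG cost per unit and period
dg_cap = 100.0                                 # DG capacity per period
shift_max = 15.0                               # max demand shifted out of a period

# Variables x = [g_0..g_3, s_0..s_3]: DG dispatch g and demand s shifted
# away from each period, assumed served uniformly in the other periods.
c = np.concatenate([dg_cost, np.zeros(T)])     # objective: total DG cost

# Balance per period t: g[t] = load[t] - s[t] + sum_{u != t} s[u] / (T - 1)
A_eq = np.zeros((T, 2 * T))
for t in range(T):
    A_eq[t, t] = 1.0
    for u in range(T):
        A_eq[t, T + u] = 1.0 if u == t else -1.0 / (T - 1)
bounds = [(0.0, dg_cap)] * T + [(0.0, shift_max)] * T

res = linprog(c, A_eq=A_eq, b_eq=load, bounds=bounds)
print("DG schedule:", res.x[:T])   # shifting moves demand out of the
print("shifted out:", res.x[T:])   # expensive periods (t = 1 and t = 2)
```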

Relevance:

10.00%

Publisher:

Abstract:

The concept of demand response has been drawing attention to active participation in the economic operation of power systems, namely in the context of recent electricity markets and smart grid models and implementations. In these competitive contexts, aggregators are necessary to make the participation of small-size consumers and generation units possible. The methodology proposed in the present paper addresses demand shifting between periods, considering multi-period demand response events, with the focus on the impact in the subsequent periods. A Virtual Power Player operates the network, aggregating the available resources and minimizing the operation costs. The illustrative case study included is based on a scenario of 218 consumers and includes generation sources.
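The effect this paper focuses on can be sketched in a few lines: demand reduced during a multi-period event does not vanish but reappears ("payback") in the subsequent periods. The reduction share and recovery profile below are illustrative assumptions, not values from the paper.

```python
# Toy multi-period DR event: shifted consumption returns after the event.
import numpy as np

baseline = np.array([60.0, 70, 85, 90, 80, 65, 60, 55])  # demand per period
event = slice(2, 4)                    # DR event covers periods 2 and 3
reduction = 0.2                        # 20% of event demand is shifted away
payback = np.array([0.5, 0.3, 0.2])    # share recovered 1, 2, 3 periods later

shifted = reduction * baseline[event].sum()
net = baseline.copy()
net[event] -= reduction * baseline[event]
for k, share in enumerate(payback, start=1):
    net[event.stop - 1 + k] += share * shifted   # load returns after the event
print(net)                             # periods 4-6 now carry the payback
```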

Relevance:

10.00%

Publisher:

Abstract:

Work presented within the scope of the Master's programme in Computer Engineering (Mestrado em Engenharia Informática), as a partial requirement for obtaining the degree of Master in Computer Engineering.

Relevance:

10.00%

Publisher:

Abstract:

This document gives a detailed description of the modular integration of a script into the OsiriX software. The purpose of this script is to determine the central diameter of the aorta from a Computed Tomography scan. To that end, concepts related to digital image processing, associated technologies (e.g., the DICOM standard), and software development are covered. As a preliminary study, several medical image viewers, used for research or available commercially, are analyzed. Two distinct implementations of the plugin were produced: the first version invokes the processing script using the study file stored on disk; the second version passes the data through a shared memory block and uses the Java Native Interface framework. Finally, the whole process of affixing the CE marking to a class IIa medical device and obtaining the declaration of conformity from a Notified Body is demonstrated. The Mac OS X and Linux operating systems and the Java, Objective-C, and Python programming languages were used.
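Purely as an analogy for the second plugin version's data-passing strategy, the standard-library Python sketch below hands pixel data to a consumer through a shared memory block instead of a file on disk. The actual plugin uses Objective-C and the Java Native Interface; only the idea of avoiding the disk round-trip carries over.

```python
# Shared-memory hand-off of one CT slice between two sides of a pipeline.
import numpy as np
from multiprocessing import shared_memory

# "Host" side: place one synthetic 512x512 CT slice into shared memory.
ct_slice = np.random.default_rng(0).integers(0, 4096, (512, 512), dtype=np.int16)
shm = shared_memory.SharedMemory(create=True, size=ct_slice.nbytes)
np.ndarray(ct_slice.shape, ct_slice.dtype, buffer=shm.buf)[:] = ct_slice

# "Script" side: attach to the same block by name and process it in place,
# with no intermediate study file written to disk.
shm2 = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray((512, 512), np.int16, buffer=shm2.buf)
print("mean intensity:", view.mean())

shm2.close()
shm.close()
shm.unlink()
```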

Relevance:

10.00%

Publisher:

Abstract:

Dissertation submitted in fulfillment of the requirements for the degree of Master in Computer Engineering (Engenharia Informática).

Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfillment of the requirements for the degree of Master in Computational Logic.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation submitted in fulfillment of the requirements for the degree of Master in Computer Engineering (Engenharia Informática).

Relevance:

10.00%

Publisher:

Abstract:

Coarse-Grained Reconfigurable Architectures (CGRAs) are emerging as enabling platforms to meet the high performance demanded by modern applications (e.g., 4G, CDMA). Recently proposed CGRAs offer time-multiplexing and dynamic application parallelism to enhance device utilization and reduce energy consumption, at the cost of additional memory (up to 50% of the overall platform area). To reduce the memory overheads, novel CGRAs employ statistical compression, intermediate compact representation, or multicasting. Each compaction technique has different properties (i.e., compression ratio, decompression time, and decompression energy) and is best suited to a particular class of applications. However, existing research only deals with these methods separately; moreover, it analyzes only the compression ratio and does not evaluate the associated energy overheads. To tackle these issues, we propose a polymorphic compression architecture that interleaves these techniques in a single platform. The proposed architecture allows each application to take advantage of a separate compression/decompression hierarchy (consisting of various types and implementations of hardware/software decoders) tailored to its needs. Simulation results using different applications (FFT, matrix multiplication, and WLAN) reveal that the choice of compression hierarchy has a significant impact on compression ratio (up to 52%), decompression energy (up to 4 orders of magnitude), and configuration time (from 33 ns to 1.5 s) for the tested applications. Synthesis results reveal that introducing this adaptivity incurs a negligible additional overhead (1%) compared to the overall platform area.
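The per-application choice the abstract argues for can be illustrated with a small selection sketch: each technique trades compression ratio against decompression time and energy, so the preferred hierarchy depends on which cost dominates for the application. All numbers and weights below are hypothetical placeholders, not measurements from the paper.

```python
# Toy per-application selection of a compression/decompression hierarchy.
techniques = {
    #                    (ratio, decompress_time_us, decompress_energy_nJ)
    "statistical":       (0.52, 1500.0, 900.0),
    "intermediate_repr": (0.35,   40.0,  50.0),
    "multicasting":      (0.20,    0.033,  0.5),
}

def pick(weight_ratio, weight_time, weight_energy):
    """Pick the technique minimizing a weighted cost (lower is better)."""
    def cost(name):
        ratio, time_us, energy_nj = techniques[name]
        # Reward compression ratio, penalize decompression time and energy.
        return -weight_ratio * ratio + weight_time * time_us + weight_energy * energy_nj
    return min(techniques, key=cost)

print(pick(1.0, 1e-4, 1e-4))   # configuration-size-bound application
print(pick(0.1, 1e-2, 0.0))    # reconfiguration-latency-bound application
```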