55 results for Java Remote Method Invocation in Reposit
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GBs per flight. This high spectral resolution can be used for object detection and to discriminate among different objects based on their spectral characteristics. One of the main problems involved in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-consuming, which compromises their use in applications under onboard constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at low level, using shared memory and coalesced accesses to memory.
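For context, the linear mixing model assumed by SISAL (a standard formulation, stated here for reference) writes each observed pixel spectrum as a convex combination of endmember signatures:

$$ \mathbf{y}_i = \mathbf{M}\boldsymbol{\alpha}_i + \mathbf{n}_i, \qquad \boldsymbol{\alpha}_i \geq \mathbf{0}, \qquad \mathbf{1}^{T}\boldsymbol{\alpha}_i = 1, $$

where $\mathbf{y}_i$ is the $L$-band spectrum of pixel $i$, $\mathbf{M}$ is the $L \times p$ matrix whose columns are the $p$ endmember signatures, $\boldsymbol{\alpha}_i$ collects the abundance fractions, and $\mathbf{n}_i$ is noise. When no pure pixels exist, no column of $\mathbf{M}$ is directly observed in the data, which is precisely the regime SISAL targets.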
Abstract:
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral data sets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral data sets on two different GPU architectures by NVIDIA, the GeForce GTX 590 and the GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
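In generic compressive-sensing terms (a schematic statement; the actual measurement operator of P-HYCA is as defined in the paper), each pixel spectrum is acquired through a small number of coded projections:

$$ \mathbf{z}_i = \boldsymbol{\Phi}\,\mathbf{y}_i, \qquad \boldsymbol{\Phi} \in \mathbb{R}^{m \times L}, \quad m \ll L, $$

and reconstruction is possible because the data matrix admits a low-dimensional factorization $\mathbf{Y} \approx \mathbf{M}\mathbf{A}$ with only a few endmembers, which is exactly the combination of inter-band correlation and low endmember count mentioned above.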
Abstract:
One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at low level using the Compute Unified Device Architecture (CUDA). SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation works in a pixel-by-pixel fashion, using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.
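Schematically, and omitting the constraints detailed in the SISAL paper, the criterion optimized by SISAL can be written with $\mathbf{Q} = \mathbf{M}^{-1}$ as

$$ \min_{\mathbf{Q}} \; -\log\lvert\det\mathbf{Q}\rvert + \lambda \sum_{i,j} \max\bigl(-(\mathbf{Q}\mathbf{Y})_{ij},\, 0\bigr), $$

where the first term favors a minimum-volume simplex and the hinge term softly penalizes negative abundance estimates, which is what keeps the method robust when no pure pixels are present.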
Abstract:
The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, where the abundances' physical constraints are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral data sets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived to properly exploit the graphics processing unit (GPU) architecture at low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral data sets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
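Per pixel, the semisupervised problem is, in schematic form, a constrained least-squares regression against the library (the exact formulation follows the paper):

$$ \min_{\mathbf{x}_i} \; \tfrac{1}{2}\lVert \mathbf{A}\mathbf{x}_i - \mathbf{y}_i \rVert_2^2 \quad \text{s.t.} \quad \mathbf{x}_i \geq \mathbf{0}, \; \mathbf{1}^{T}\mathbf{x}_i = 1, $$

with $\mathbf{A}$ the spectral library and $\mathbf{x}_i$ the abundance vector of pixel $i$. Because these problems are independent across pixels, the pixel-by-pixel GPU mapping is natural.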
Abstract:
Electronic communications are increasingly the medium of choice for business between entities and for relations between citizens and the State (e-government). This diversity of transactions often involves sensitive information with potential legal value. In this context, electronic signatures are an important basis of trust, providing guarantees of integrity and authentication among the parties. Producing a digital signature yields not only the signature value itself but also a set of additional information about it, such as the signature algorithm, the validation certificate, or the time and place of production. In a heterogeneous scenario such as the one described above, a flexible and interoperable way of describing this kind of information becomes necessary. The XML language is an adequate way to represent a signature in this context, not only because of its structured nature, but mainly because it is text-based and widely supported. The XML Signature Syntax and Processing recommendation (or simply XML Signature) was the first step in representing signatures in XML. It defines syntax and processing rules to create, represent, and validate digital signatures. XML signatures can be applied to any type of digital content identifiable by a URI, either in the same XML document as the signature or at any other location. Furthermore, the same XML signature can cover several resources, even of different types (free text, images, XML, etc.). As electronic signatures gained relevance, it became evident that the XML Signature specification was not sufficient, namely because it gives no guarantees of long-term validity or non-repudiation. This situation was aggravated by the fact that the specification does not meet the requirements of European Union Directive 1999/93/EC, which establishes a Community-wide legal framework for electronic signatures. Following this EU directive, the XML Advanced Electronic Signatures (XAdES) specification was developed; it defines XML formats and processing rules for electronic signatures that are non-repudiable and whose validity can be verified over long periods of time, in conformity with the directive. This specification extends the XML Signature recommendation by defining new elements that contain additional information about the signature and the signed resources (qualifying properties). Since version 1.6, the Java platform includes a high-level API for XML digital signature services according to the XML Signature recommendation. However, there is no support for advanced signatures. This project aims to develop a Java library for the creation and validation of XAdES signatures, thus filling this gap in the platform. The developed library provides an interface with a high level of abstraction, so that the programmer does not have to deal directly with the XML structure of the signature nor with the details of the contents of the qualifying properties. Types are defined to represent the main concepts of the signature, namely the qualifying properties and the signed resources, with the structural aspects handled internally. In this work, the information that makes up a XAdES signature is divided into two groups: the first consists of characteristics of the signer and of the signature, such as the signing key and the signature's qualifying properties.
The second group consists of the signed resources and the corresponding qualifying properties. When a signer produces several signatures in a given context, the first group of characteristics will be similar across them. This invariant set of signature and signer characteristics was defined as a signature profile. The concept is extended to signature verification, in which case it encompasses the information to be used in that process, such as the root certificates the verifier trusts. From another perspective, a profile constitutes a configuration of the corresponding signature service. The design and implementation of the library are also based on the concept of service provider. A service provider is an entity that supplies certain information or services needed for the production and verification of signatures, namely: selection of the signing key/certificate, certificate validation, interaction with time-stamp servers, and XML generation. Instead of depending directly on the information in question, a profile (and, consequently, the corresponding operation) is configured with service providers that are invoked when necessary. For each type of service provider an interface is defined, and the corresponding implementations can be configured independently. The library includes implementations of all the service providers, some of which are used by default in the production and verification of signatures. Since the focus of the project is the XAdES specification, the processing and structure relative to the basic format are delegated internally to the Apache XML Security library, which provides an implementation of the XML Signature recommendation. To validate the library's behaviour, namely in terms of interoperability, a set of signatures produced by European Union Member States, as well as by another implementation of the XAdES specification, is verified, among other tests.
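The profile/provider design described above can be illustrated with a minimal Java sketch. All type and method names below are hypothetical, chosen for illustration only; they are not the actual API of the developed library.

```java
import java.security.PrivateKey;
import java.security.cert.X509Certificate;

// Hypothetical provider interface: supplies the signing key and certificate.
interface KeyingDataProvider {
    X509Certificate getSigningCertificate() throws Exception;
    PrivateKey getSigningKey() throws Exception;
}

// Hypothetical provider interface: obtains time-stamp tokens when a
// signature form requires them.
interface TimeStampProvider {
    byte[] getTimeStampToken(byte[] digest) throws Exception;
}

// A signature profile: the invariant signer/signature configuration.
// The profile is configured with providers instead of raw data.
final class XadesSigningProfile {
    private final KeyingDataProvider keying;
    private final TimeStampProvider timeStamps;

    XadesSigningProfile(KeyingDataProvider keying, TimeStampProvider timeStamps) {
        this.keying = keying;
        this.timeStamps = timeStamps;
    }

    // Creates the signing service configured by this profile.
    XadesSigner newSigner() {
        return new XadesSigner(keying, timeStamps);
    }
}

// The signing operation: invokes the providers only when needed and would
// delegate core XML Signature processing to a lower-level library
// (Apache XML Security, in the case described above).
final class XadesSigner {
    private final KeyingDataProvider keying;
    private final TimeStampProvider timeStamps;

    XadesSigner(KeyingDataProvider keying, TimeStampProvider timeStamps) {
        this.keying = keying;
        this.timeStamps = timeStamps;
    }

    void sign(org.w3c.dom.Element elementToSign) throws Exception {
        X509Certificate cert = keying.getSigningCertificate();
        // ... assemble qualifying properties for 'cert' and the signed
        // resources, compute the signature value with keying.getSigningKey(),
        // and request a token via timeStamps.getTimeStampToken(...) when the
        // target XAdES form requires one.
    }
}
```

The point of the design is that the profile captures the invariant configuration once, while each provider interface can be implemented and swapped independently (for example, a smart-card key provider versus a keystore-based one).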
Abstract:
One of the major problems preventing the spread of elections with the possibility of remote voting over electronic networks, also called Internet voting, is the use of unreliable client platforms, such as the voter's computer and the Internet infrastructure connecting it to the election server. A computer connected to the Internet is exposed to viruses, worms, Trojans, spyware, malware and other threats that can compromise the election's integrity. For instance, it is possible to write a virus that changes the voter's vote to a predetermined vote on election day. Another possible attack is the creation of a fake election web site where a malicious vote program manipulates the voter's vote (phishing/pharming attack). Such attacks may not disturb the election protocol and can therefore remain undetected by the election auditors. We propose the use of Code Voting to overcome the insecurity of the client platform. Code Voting consists of creating a secure channel for communicating the voter's vote between the voter and a trusted component attached to the voter's computer. Consequently, no one controlling the voter's computer can change his/her vote. The trusted component can then process the vote according to a cryptographic voting protocol to enable cryptographic verification at the server's side.
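A minimal Java sketch of the code-voting idea follows, for illustration only: real code sheets, their secure printing and out-of-band distribution, reply codes, and the cryptographic back end are substantially more involved.

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Illustrative code sheet: each candidate is assigned a random vote code
// that is meaningless to malware running on the voter's computer.
public class CodeSheet {
    private final Map<String, String> candidateToCode = new HashMap<>();
    private final Map<String, String> codeToCandidate = new HashMap<>();

    public CodeSheet(String[] candidates) {
        SecureRandom rng = new SecureRandom();
        for (String candidate : candidates) {
            // Collision handling omitted for brevity.
            String code = String.format("%06d", rng.nextInt(1_000_000));
            candidateToCode.put(candidate, code);
            codeToCandidate.put(code, candidate);
        }
    }

    // Printed on paper and handed to the voter out of band.
    public String codeFor(String candidate) {
        return candidateToCode.get(candidate);
    }

    // Held by the trusted side, which maps a submitted code back to a vote.
    public String candidateFor(String code) {
        return codeToCandidate.get(code);
    }

    public static void main(String[] args) {
        CodeSheet sheet = new CodeSheet(new String[] {"Alice", "Bob"});
        String submitted = sheet.codeFor("Alice"); // the voter types this code
        // The untrusted PC only ever sees the code, never the candidate.
        System.out.println("Vote recorded for: " + sheet.candidateFor(submitted));
    }
}
```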
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data sets, in which the geometric-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We resort instead to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
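In this statistical framework, the abundance vectors are modeled by a finite mixture of Dirichlet densities (shown schematically):

$$ p(\boldsymbol{\alpha}) = \sum_{k=1}^{K} \epsilon_k\, \mathcal{D}(\boldsymbol{\alpha} \mid \boldsymbol{\theta}_k), \qquad \epsilon_k \geq 0, \quad \sum_{k=1}^{K} \epsilon_k = 1, $$

where each Dirichlet mode has support on the probability simplex, so non-negativity and the constant-sum constraint hold by construction; the number of modes $K$ is the quantity selected by the MDL criterion.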
Abstract:
This paper addresses the problem of short-term hydro scheduling, particularly concerning head-dependent reservoirs in a competitive environment. We propose a new nonlinear optimization method that considers hydroelectric power generation as a function of both water discharge and head. Head-dependency is considered in short-term hydro scheduling in order to obtain more realistic and feasible results. The proposed method has been applied successfully to a case study based on one of the main Portuguese cascaded hydro systems, providing a higher profit at a negligible additional computation time in comparison with a linear optimization method that ignores head-dependency.
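A generic way to express head-dependency (illustrative; not necessarily the exact model of the paper) is to make the power output of plant $i$ in hour $t$ proportional to both discharge and head:

$$ p_{it} = \eta_i\, q_{it}\, h_{it}, $$

where $q_{it}$ is the water discharge, $h_{it}$ the head (which varies with reservoir storage), and $\eta_i$ an efficiency coefficient. Since $h_{it}$ depends on the storage decision variables, $p_{it}$ is nonlinear in the decisions, which is what the proposed method captures and a linear, fixed-head model ignores.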
Abstract:
Since collaborative networked organisations are usually formed by independent and heterogeneous entities, it is natural that each member holds its own set of values, and conflicts among partners might emerge because of some misalignment of values. In contrast, it is often stated in the literature that alignment between the value systems of members involved in collaborative processes is a prerequisite for successful co-working. As a result, the issue of core value alignment in collaborative networks has started to attract attention. However, methods to analyse such alignment are lacking, mainly because the concept of 'alignment' in this context is still ill-defined and shows a multifaceted nature. As a contribution to the area, this article introduces an approach based on causal models and graph theory for the analysis of core value alignment in collaborative networks. The potential application of the approach is then discussed in the context of virtual organisations' breeding environments.
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include chemistry, computational biology and bioinformatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application is terminated abruptly because of a failure (regardless of the cause being a hardware or software fault, lack of available resources, etc.), all of the work it has already performed is simply lost, and when the application is later re-initiated, it has to restart all its work from scratch, wasting resources and time, while also being prone to another failure and possibly delaying its completion with no deadline guarantees. Our proposed solution to address these issues is to incorporate mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible, as they become able to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times by extending a JVM (the Jikes Research Virtual Machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
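The paper's mechanism is implemented transparently inside the JVM (Jikes RVM), requiring no cooperation from the program. Purely as an application-level analogue of the concept, the following hypothetical Java sketch shows how periodic state serialization lets a restarted run resume from the last checkpoint instead of from scratch:

```java
import java.io.*;

// Application-level illustration of checkpointing (the paper's mechanism
// lives inside the JVM and needs no such cooperation from the program).
public class CheckpointDemo {
    static final String CHECKPOINT = "state.ckpt";

    // The long-running computation's state; must be serializable.
    static class State implements Serializable {
        long iteration;
        double partialResult;
    }

    public static void main(String[] args) throws Exception {
        State s = restore();
        for (; s.iteration < 1_000_000; s.iteration++) {
            if (s.iteration % 100_000 == 0) save(s);   // periodic checkpoint
            s.partialResult += Math.sqrt(s.iteration); // the "work"
        }
        System.out.println("result = " + s.partialResult);
    }

    static void save(State s) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(CHECKPOINT))) {
            out.writeObject(s);
        }
    }

    static State restore() {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(CHECKPOINT))) {
            return (State) in.readObject(); // resume after a failure
        } catch (Exception e) {
            return new State(); // first run: start from scratch
        }
    }
}
```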
Abstract:
This paper presents a direct power control (DPC) method for three-phase matrix converters operating as unified power flow controllers (UPFCs). Matrix converters (MCs) allow direct ac/ac power conversion without dc energy storage links; therefore, the MC-based UPFC (MC-UPFC) has reduced volume and cost, reduced capacitor power losses, and higher reliability. Theoretical principles of DPC based on sliding mode control techniques are established for an MC-UPFC dynamic model including the input filter. As a result, line active and reactive power, together with ac supply reactive power, can be directly controlled by selecting an appropriate matrix converter switching state, guaranteeing good steady-state and dynamic responses. Experimental results of DPC controllers for the MC-UPFC show decoupled active and reactive power control, zero steady-state tracking error, and fast response times. Compared to an MC-UPFC using active and reactive power linear controllers based on a modified Venturini high-frequency PWM modulator, the experimental results of the advanced DPC-MC guarantee faster responses without overshoot and no steady-state error, presenting no cross-coupling in dynamic and steady-state responses.
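The directly controlled quantities are the instantaneous active and reactive powers computed from the line voltages and currents; in dq coordinates these take the standard form (up to the scale factor fixed by the chosen transformation):

$$ p = v_d i_d + v_q i_q, \qquad q = v_q i_d - v_d i_q, $$

and the sliding-mode controllers select, at each switching instant, the matrix converter state whose output voltage vector drives the errors $p^* - p$ and $q^* - q$ toward zero.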
Abstract:
This study explores a large set of OC and EC measurements in PM10 and PM2.5 aerosol samples, undertaken with a long-term constant analytical methodology, to evaluate the capability of the OC/EC minimum ratio to represent the ratio between the OC and EC aerosol components resulting from fossil fuel combustion (OCff/ECff). The data set covers a wide geographical area in Europe, with a particular focus on Portugal, Spain and the United Kingdom, and includes a great variety of sites: urban (background, kerbside and tunnel), industrial, rural and remote. The highest minimum ratios were found in samples from remote and rural sites. Urban background sites have shown spatially and temporally consistent minimum ratios, of around 1.0 for PM10 and 0.7 for PM2.5. The consistency of results suggests that the method could be used as a tool to derive the ratio between OC and EC from fossil fuel combustion and consequently to differentiate OC from primary and secondary sources. To explore this capability, OC and EC measurements were performed in a busy roadway tunnel in central Lisbon. The OC/EC ratio, which reflected the composition of vehicle combustion emissions, was in the range of 0.3-0.4. Ratios of OC/EC in roadside increment air (roadside minus urban background) in Birmingham, UK also lie within the range 0.3-0.4. Additional measurements were performed under heavy traffic conditions at two double kerbside sites located in the centres of Lisbon and Madrid. The OC/EC minimum ratios observed at both sites were found to be between those of the tunnel and those of urban background air, suggesting that minimum values commonly obtained for this parameter in open urban atmospheres over-predict the direct emissions of OCff from road transport. Possible reasons for this discrepancy are explored. (C) 2011 Elsevier Ltd. All rights reserved.
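The practical use of the minimum ratio is the standard EC-tracer approach: taking (OC/EC)min to represent fossil fuel combustion emissions, the remaining organic carbon of a sample can be attributed to other (mainly secondary) sources:

$$ \mathrm{OC}_{\mathrm{sec}} = \mathrm{OC}_{\mathrm{total}} - \mathrm{EC} \times (\mathrm{OC}/\mathrm{EC})_{\min}. $$

With illustrative numbers, using the PM2.5 urban background minimum ratio of 0.7, a sample with OC = 5 µg m⁻³ and EC = 3 µg m⁻³ would give OCsec = 5 - 3 x 0.7 = 2.9 µg m⁻³.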
Abstract:
Master's degree in Nuclear Medicine.
Abstract:
The conventional methods used to evaluate chitin content in fungi, such as biochemical assessment of glucosamine release after acid hydrolysis or epifluorescence microscopy, are low-throughput, laborious and time-consuming, and cannot evaluate a large number of cells. We developed an efficient and fast flow cytometric assay, based on Calcofluor White staining, to measure chitin content in yeast cells. A staining index was defined whose value is directly related to the chitin amount while taking into consideration the different levels of autofluorescence. Twenty-two Candida spp. and four Cryptococcus neoformans clinical isolates with distinct susceptibility profiles to caspofungin were evaluated. The Candida albicans clinical isolate SC5314, and isogenic strains with deletions in chitin synthase 3 (chs3Δ/chs3Δ) and in genes encoding predicted Glycosyl Phosphatidyl Inositol (GPI)-anchored proteins (pga31Δ/Δ and pga62Δ/Δ), were used as controls. As expected, the wild-type strain displayed a significantly higher chitin content (P < 0.001) than chs3Δ/chs3Δ and pga31Δ/Δ, especially in the presence of caspofungin. Ca. parapsilosis, Ca. tropicalis, and Ca. albicans showed a higher cell wall chitin content. Although no relationship between chitin content and antifungal drug susceptibility phenotype was found, an association was established between the paradoxical growth effect in the presence of high caspofungin concentrations and the chitin content. This novel flow cytometry protocol proved to be a simple and reliable assay to estimate the cell wall chitin content of fungi.
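One common convention for such an index in flow cytometry (an assumption for illustration; the paper's exact definition may differ) normalizes the stained signal by the cells' autofluorescence:

$$ \mathrm{SI} = \frac{\mathrm{MFI}_{\mathrm{stained}} - \mathrm{MFI}_{\mathrm{unstained}}}{\mathrm{MFI}_{\mathrm{unstained}}}, $$

where MFI is the median fluorescence intensity, so strains with higher chitin content yield a higher SI independently of their baseline autofluorescence.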
Abstract:
In order to evaluate the capacity of laser scanning cytometry (LSC) to detect acid-fast bacilli directly in clinical samples, a comparison between Kinyoun-stained smears analyzed under light microscopy and propidium iodide-auramine-stained smears analyzed by LSC was performed. The results were compared with those of culture on BACTEC MGIT 960. LSC proved to be a new, reliable methodology for detecting mycobacteria.