924 results for Simulation and Real Experiment


Relevance:

100.00%

Publisher:

Abstract:

Many hyperspectral imagery applications require a response in real time or near-real time. To meet this requirement, this paper proposes a parallel unmixing method developed for graphics processing units (GPUs). The method is based on vertex component analysis (VCA), a geometry-based method that is highly parallelizable. VCA is a very fast and accurate method that extracts endmember signatures from large hyperspectral datasets without using any a priori knowledge about the constituent spectra. Experimental results obtained for simulated and real hyperspectral datasets reveal considerable acceleration factors, of up to 24 times.
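The core operation that makes VCA-style extraction so parallelizable can be sketched in a few lines: scoring every pixel against a test direction is a single matrix-vector product, one independent dot product per pixel. The sketch below runs in NumPy on the CPU; it does not reproduce the paper's CUDA implementation, and all sizes and data are illustrative.

```python
import numpy as np

# Hypothetical sketch of the data-parallel core of VCA-style endmember
# extraction: project every pixel spectrum onto a direction and take the
# extreme. One dot product per pixel maps naturally onto GPU hardware.
rng = np.random.default_rng(0)
bands, pixels = 224, 10_000          # e.g. an AVIRIS-sized band count
Y = rng.random((bands, pixels))      # observed spectra, one column per pixel
d = rng.standard_normal(bands)       # current projection direction

scores = d @ Y                       # one score per pixel, all independent
extreme = int(np.argmax(np.abs(scores)))  # candidate endmember index
```

On a GPU, the `d @ Y` product is the step that is distributed across threads; everything else is bookkeeping.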

Relevance:

100.00%

Publisher:

Abstract:

Given a hyperspectral image, determining the number of endmembers and the subspace where they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum-mean-squared-error-based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squared-error sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
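The eigenvalue-selection idea can be illustrated with a much-simplified sketch. This is not the paper's exact estimator: HySime estimates the noise statistics by multiple regression, whereas below the noise correlation is assumed known, and the selection rule is reduced to "keep directions whose signal power exceeds the projected noise power". All data are synthetic.

```python
import numpy as np

# Simplified sketch of eigenvalue-based subspace selection (noise
# statistics assumed known here, unlike in HySime proper).
rng = np.random.default_rng(1)
bands, pixels, k_true = 50, 5000, 3
M = rng.random((bands, k_true))                  # endmember signatures
A = rng.dirichlet(np.ones(k_true), pixels).T     # abundances, columns sum to one
noise = 0.01 * rng.standard_normal((bands, pixels))
Y = M @ A + noise

Ry = Y @ Y.T / pixels                    # observed correlation matrix
Rn = noise @ noise.T / pixels            # noise correlation (assumed known)
eigvals, E = np.linalg.eigh(Ry - Rn)     # estimated signal correlation
signal_power = eigvals[::-1]             # descending order
noise_power = np.diag(E.T @ Rn @ E)[::-1]
k_est = int(np.sum(signal_power > noise_power))  # estimated subspace dimension
```

With three true endmembers, the three signal eigenvalues dominate the per-direction noise power by several orders of magnitude, so the estimate recovers at least the true dimension.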

Relevance:

100.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
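The linear mixing model just described is easy to write down numerically: an observed pixel is an abundance-weighted combination of endmember spectra plus noise, with the abundances non-negative and summing to one. The sizes, spectra and noise level below are made up for illustration.

```python
import numpy as np

# Toy instance of the linear mixing model: x = M @ a + noise, where the
# columns of M are endmember signatures and a holds abundance fractions.
rng = np.random.default_rng(42)
bands, n_endmembers = 100, 3
M = rng.random((bands, n_endmembers))      # endmember signatures (columns)
a = np.array([0.6, 0.3, 0.1])              # abundance fractions of one pixel
assert a.min() >= 0 and np.isclose(a.sum(), 1.0)  # acquisition constraints

x = M @ a + 0.001 * rng.standard_normal(bands)    # observed mixed pixel
```

Unmixing is the inverse problem: recovering M and a (for every pixel) from observations like x.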
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some datasets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
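The PPI scoring step described above can be sketched directly: project every spectral vector onto many random skewers and count how often each pixel is an extreme. The sketch below omits the MNF preprocessing and uses made-up sizes and data.

```python
import numpy as np

# Sketch of PPI-style purity scoring: pixels that are extremes of many
# random projections accumulate high counts and are taken as purest.
rng = np.random.default_rng(11)
bands, pixels, n_skewers = 20, 500, 1000
Y = rng.random((bands, pixels))          # spectral vectors, one per column

counts = np.zeros(pixels, dtype=int)
for _ in range(n_skewers):
    skewer = rng.standard_normal(bands)  # random projection direction
    proj = skewer @ Y
    counts[np.argmax(proj)] += 1         # extreme in the positive direction
    counts[np.argmin(proj)] += 1         # extreme in the negative direction

purest = np.argsort(counts)[::-1][:10]   # ten highest-scoring pixel indices
```

Each skewer contributes exactly two extreme counts, so the total count equals twice the number of skewers.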
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module selects the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
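The iterative projection loop described above can be sketched as follows. This is a simplified reading of the idea, not the authors' exact VCA implementation (no subspace identification, no SNR-dependent projections), and the data are synthetic.

```python
import numpy as np

# Simplified VCA-style loop: at each step, pick a direction orthogonal to
# the endmembers found so far, project all pixels onto it, and take the
# extreme pixel as the next endmember.
rng = np.random.default_rng(7)
bands, pixels, p = 20, 2000, 4
M_true = rng.random((bands, p))              # true endmember signatures
A = rng.dirichlet(np.ones(p), pixels).T      # abundances, columns sum to one
Y = M_true @ A                               # noiseless linear mixtures

endmembers = []
E = np.zeros((bands, 0))                     # signatures found so far
for _ in range(p):
    f = rng.standard_normal(bands)
    if E.shape[1] > 0:
        f -= E @ np.linalg.pinv(E) @ f       # make f orthogonal to span(E)
    f /= np.linalg.norm(f)
    idx = int(np.argmax(np.abs(f @ Y)))      # extreme of the projection
    endmembers.append(idx)
    E = np.column_stack([E, Y[:, idx]])
```

Because each new direction is orthogonal to the signatures already selected, previously chosen pixels project to (numerically) zero and are never re-selected, so the loop returns p distinct candidates.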

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a new hyperspectral unmixing method called Dependent Component Analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA performance is illustrated using simulated and real data.

Relevance:

100.00%

Publisher:

Abstract:

Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image. This paper introduces a new unmixing method termed dependent component analysis (DECA). This method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, a new parallel method for sparse spectral unmixing of remotely sensed hyperspectral data on commodity graphics processing units (GPUs) is presented. A semi-supervised approach is adopted, which relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. The method is based on the spectral unmixing by splitting and augmented Lagrangian (SUNSAL) algorithm, which estimates the materials' abundance fractions. The parallel method works in a pixel-by-pixel fashion, and its implementation properly exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for simulated and real hyperspectral datasets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
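The pixel-by-pixel structure that makes this method GPU-friendly is easy to illustrate: every pixel solves an independent small problem against a fixed spectral library. SUNSAL itself is an ADMM-type solver; as a stand-in, plain non-negative least squares per pixel is used below, with a made-up library and noiseless sparse mixtures.

```python
import numpy as np
from scipy.optimize import nnls

# Semi-supervised sparse unmixing sketch: each column of Y is unmixed
# independently against a known library D (NNLS used as a stand-in for
# the SUNSAL solver). The per-pixel independence is what parallelizes.
rng = np.random.default_rng(3)
bands, lib_size, pixels = 30, 8, 50
D = rng.random((bands, lib_size))        # spectral library (assumed known)
A_true = np.zeros((lib_size, pixels))
A_true[:2, :] = rng.dirichlet(np.ones(2), pixels).T   # only 2 active members
Y = D @ A_true                           # noiseless observations

A_est = np.column_stack([nnls(D, Y[:, i])[0] for i in range(pixels)])
err = np.linalg.norm(A_est - A_true) / np.linalg.norm(A_true)
```

With noiseless data and a full-column-rank library, the non-negative solution coincides with the true sparse abundances, so the relative error is essentially zero.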

Relevance:

100.00%

Publisher:

Abstract:

Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra, called endmember signatures, and a set of corresponding abundance fractions for the respective spatial coverage. This paper introduces vertex component analysis (VCA), an unsupervised algorithm to unmix linear mixtures of hyperspectral data. VCA exploits the fact that endmembers occupy the vertices of a simplex, and assumes the presence of pure pixels in the data. VCA performance is illustrated using simulated and real data. VCA competes with state-of-the-art methods at a much lower computational complexity.

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
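The abundance prior assumed by DECA is easy to illustrate: draws from a mixture of Dirichlet densities satisfy non-negativity and the constant-sum constraint by construction. The mixture weights and Dirichlet parameters below are made up; this sketches the prior only, not DECA's GEM inference of the mixing matrix.

```python
import numpy as np

# Sample per-pixel abundance fractions from a two-mode Dirichlet mixture.
# Every draw is automatically non-negative and sums to one.
rng = np.random.default_rng(5)
pixels, k = 1000, 3
weights = np.array([0.7, 0.3])                       # mixture weights
alphas = np.array([[9.0, 3.0, 1.0], [1.0, 4.0, 8.0]])  # Dirichlet parameters

modes = rng.choice(2, size=pixels, p=weights)        # mode of each pixel
A = np.vstack([rng.dirichlet(alphas[m]) for m in modes])  # pixels x k
```

A single Dirichlet would force one "cloud" of abundances; the mixture lets different regions of the scene follow different abundance statistics.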

Relevance:

100.00%

Publisher:

Abstract:

Veterinary medicine is an increasingly prominent topic, given the need for training of professionals and the care required across the human/animal/environment dimensions. With the main objective of determining whether the training of pharmacy professionals is sufficient and succeeds in meeting the needs of all clients with animals, a review of previously published articles on the topic was carried out, followed by an online questionnaire to compare those findings with current practice. Topics addressed include: animal poisonings, regulation of this type of medicine, counselling on and consumption of veterinary medicines, public health, some complementary therapies and, finally, the training of pharmacy professionals. The results regarding the training of these professionals were not very satisfactory, although this may improve in the future, since the respondents showed interest in obtaining training in the area. An observational, cross-sectional, analytical study was conducted. The target population was Portuguese pharmacy professionals. Data were collected through an anonymous, confidential and voluntary online questionnaire answered by 400 professionals. The sample consists mostly of female respondents (75%), the most frequent age group being 23 to 25 years (41%). More than 70% of the professionals work in a place where veterinary medicines are sold, although only 21% have had training in the area. Notably, 69% had difficulty filling in the questionnaire, and 94% consider it important to obtain extra training in this area, particularly on topics such as medication, disease prevention and nutrition.

Relevance:

100.00%

Publisher:

Abstract:

Geocaching is a game, created by Groundspeak, that consists of hiding and finding geolocated objects known as geocaches. The search for geocaches is in fact an adventure that promotes new experiences, interaction between users, the discovery of new places in nature and games played in real time and real settings, among other things. There are geocaches spread across the whole world, and thousands of users are already registered in the game. Beyond being a pastime, Geocaching can serve as a digital marketing tool, both for Groundspeak itself and for different companies/institutions around the world, usually associated with the locations of the geocaches. Groundspeak is naturally the main beneficiary since, practically without investing in advertising, it has made the game increasingly popular. Its promotion is carried out essentially by the users themselves, whether through direct communication with non-users, through social networks and organised events, or through other companies that have developed applications with extra features that give users a better experience. The objective of this dissertation was to demonstrate how Geocaching can be used as a digital marketing tool. Initially, digital marketing and its tools were analysed, focusing on Geocaching and its worldwide scale, explaining the different types of caches and how they can be used as marketing tools. As a validation element, a wherigo (a type of geocache) was designed, developed and validated; it consists of a virtual game in which the player's progress depends on the tasks performed and on their geolocated movement.
The wherigo created within the project is a digital marketing vehicle promoting the Castle of Santa Maria da Feira in an interactive and fun way, through quizzes, challenges and fantasy. The game encourages players to explore the gardens surrounding the Castle as well as its interior, and also grants players a geocacher discount on the entrance ticket. The objectives initially proposed were fully met: the game is already available to be played by geocachers and has been rated very positively by them.

Relevance:

100.00%

Publisher:

Abstract:

Malaria is a public health problem that has been worsening, and there is a growing need for renewed control strategies, such as interrupting the sporogonic cycle. It is therefore essential to understand the anti-Plasmodium immune responses of Anopheles. It was previously demonstrated that inhibiting transglutaminases, enzymes that participate in various biological processes by catalysing the formation of covalent bonds between peptides, aggravates parasite infection in mosquitoes. The present work aims to characterise the transglutaminases AGAP009098 and AGAP009100 of Anopheles gambiae. The methods used were: sequencing of regions of the AGAP009098 and AGAP009100 genes; molecular cloning of fragments of the AGAP009098 coding region, using the plasmid vector pET-28a(+) and Escherichia coli as the expression system; and real-time PCR to analyse the relative expression of AGAP009098 and AGAP009100 across the different developmental stages. AGAP009098 is expressed ubiquitously, and AGAP009100 from the pupal stage onwards. These results suggest that AGAP009098 and AGAP009100 may play roles in relevant biological processes, for example in immune defence or in development. The recombinant peptides, obtained from the successful cloning of fragments of the AGAP009098 coding region, are an important tool for investigating the function of these TGases in the future.
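The abstract does not state which quantification model was used for the real-time PCR analysis of relative expression. As one common way such data are processed, the Livak 2^-ΔΔCt method is sketched below, with made-up Ct values; the gene names and numbers are purely illustrative.

```python
# Livak 2^-ddCt relative expression: normalize the target gene to a
# reference gene, then compare the sample against a calibrator sample.
# All Ct values below are invented for illustration.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene in a sample vs a calibrator sample."""
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_cal = ct_target_cal - ct_ref_cal       # same for the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_cal)     # assumes ~100% PCR efficiency

# Target amplifies two cycles earlier (relative to the reference) in the
# sample than in the calibrator, i.e. a four-fold higher expression.
fold = relative_expression(24.0, 18.0, 26.0, 18.0)   # -> 4.0
```

The exponent base of 2 assumes near-perfect doubling per cycle; efficiency-corrected variants replace it with the measured amplification efficiency.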

Relevance:

100.00%

Publisher:

Abstract:

The intensive use of distributed generation based on renewable resources increases the complexity of power systems management, particularly short-term scheduling. Demand response, storage units, and electric and plug-in hybrid vehicles also pose new challenges to short-term scheduling. However, these distributed energy resources can contribute significantly to making short-term scheduling more efficient and effective, improving power system reliability. This paper proposes a short-term scheduling methodology based on two distinct time horizons: hour-ahead scheduling and real-time scheduling, considering the point of view of an aggregator agent. In each scheduling process, it is necessary to update the generation and consumption operation and the status of the storage units and electric vehicles. Besides the new operating conditions, more accurate forecasts of wind generation and consumption become available, resulting from short-term and very short-term forecasting methods. In this paper, the aggregator has the main goal of maximizing its profits while fulfilling the contracts established with the aggregated and external players.

Relevance:

100.00%

Publisher:

Abstract:

Energy systems worldwide are complex and challenging environments. Multi-agent simulation platforms are increasing at a high rate, as they have proven to be a good option for studying many issues related to these systems, as well as the players that act in this domain. In this scope, the authors' research group has developed three multi-agent systems: MASCEM, which simulates electricity markets; ALBidS, which works as a decision support system for market players; and MASGriP, which simulates the internal operations of smart grids. To take better advantage of these systems, their integration is mandatory. For this reason, the development of an upper ontology is proposed, allowing easier cooperation and adequate communication between them. Additionally, the concepts and rules defined by this ontology can be expanded and complemented to meet the needs of other simulation and real systems in the same areas as the mentioned systems. Each system's particular ontology must be extended from this top-level ontology.

Relevance:

100.00%

Publisher:

Abstract:

Cyanobacteria deteriorate water quality and, because of their toxins, are responsible for emerging outbreaks and epidemics causing harmful diseases in humans and animals. Microcystin-LR (MCT) is one of the most relevant cyanotoxins and the most widely studied hepatotoxin. For safety purposes, the World Health Organization recommends a maximum value of 1 μg L−1 of MCT in drinking water. Therefore, there is great demand for remote, real-time sensing techniques to detect and quantify MCT. In this work, a Fabry–Pérot sensing probe based on an optical fibre tip coated with an MCT-selective thin film is presented. The membranes were developed by imprinting MCT in a sol–gel matrix that was applied over the tip of the fibre by dip coating. The imprinting effect was obtained by curing the sol–gel membrane, prepared with (3-aminopropyl)trimethoxysilane (APTMS), diphenyl-dimethoxysilane (DPDMS) and tetraethoxysilane (TEOS), in the presence of MCT. The imprinting effect was tested by preparing a similar membrane without the template. In general, the fibre Fabry–Pérot sensor with a molecularly imprinted polymer (MIP) showed a low thermal effect, thus avoiding the need for temperature control in field applications. It presented a linear response to MCT concentration within 0.3–1.4 μg L−1, with a sensitivity of −12.4 ± 0.7 nm L μg−1. The corresponding non-imprinted polymer (NIP) displayed linear behaviour over the same MCT concentration range, but with much lower sensitivity, −5.9 ± 0.2 nm L μg−1. The method shows excellent selectivity for MCT against other species co-existing with the analyte in environmental waters, and it was successfully applied to the determination of MCT in contaminated samples. The main advantages of the proposed optical sensor include high sensitivity and specificity, low cost, robustness, and easy preparation and preservation.
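The reported linear calibration can be used back-of-envelope: within the 0.3–1.4 μg L−1 range, a wavelength shift maps to concentration through the stated sensitivity of −12.4 nm L μg−1. The abstract gives no intercept, so the sketch below assumes shifts are measured relative to a hypothetical blank reading.

```python
# Toy conversion from wavelength shift to MCT concentration using the
# sensitivity reported for the MIP sensor. Shifts are assumed to be
# measured relative to a blank; validity is limited to the linear range.
SENSITIVITY_NM_PER_UG_L = -12.4        # nm per (ug/L), from the abstract
LINEAR_RANGE_UG_L = (0.3, 1.4)

def mct_concentration(shift_nm):
    """Estimate MCT concentration (ug/L) from a shift vs blank (nm)."""
    return shift_nm / SENSITIVITY_NM_PER_UG_L

c = mct_concentration(-12.4)           # a -12.4 nm shift -> 1.0 ug/L
```

Note the negative sensitivity: higher MCT concentration shifts the interference pattern toward shorter wavelengths.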

Relevance:

100.00%

Publisher:

Abstract:

This paper formulates a novel expression for entropy inspired by the properties of fractional calculus. The characteristics of the generalized fractional entropy are tested both on standard probability distributions and on real-world data series. The results reveal that tuning the fractional order allows a high sensitivity to the signal evolution, which is useful in describing the dynamics of complex systems. The concepts are also extended to relative distances and tested with several sets of data, confirming the soundness of the generalization.
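The abstract does not reproduce the paper's fractional entropy expression, so it is not re-derived here. As an illustrative stand-in, the well-known Rényi entropy below shows the general mechanism the abstract describes: a tunable order parameter changes how sensitive the entropy is to the shape of a distribution, with Shannon entropy recovered as the order approaches 1.

```python
import math

# Renyi entropy H_a(p) = log(sum_i p_i^a) / (1 - a), a != 1, used here
# only as a familiar example of an order-parameterized entropy family.
def renyi_entropy(p, alpha):
    assert abs(sum(p) - 1.0) < 1e-9 and alpha != 1
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

p = [0.5, 0.25, 0.25]
h_low = renyi_entropy(p, 0.5)    # low order: closer to max-entropy count
h_high = renyi_entropy(p, 2.0)   # high order: dominated by likely outcomes
```

For a fixed non-uniform distribution, the entropy decreases as the order grows, because higher orders weight the most probable outcomes more heavily.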