50 results for vertex operators
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
There exist striking analogies in the behaviour of eigenvalues of Hermitian compact operators, singular values of compact operators and invariant factors of homomorphisms of modules over principal ideal domains, namely diagonalization theorems, interlacing inequalities and Courant-Fischer type formulae. Carlson and Sa [D. Carlson and E.M. Sa, Generalized minimax and interlacing inequalities, Linear Multilinear Algebra 15 (1984) pp. 77-103.] introduced an abstract structure, the s-space, where they proved unified versions of these theorems in the finite-dimensional case. We show that this unification can be done using modular lattices with Goldie dimension, which have a natural structure of s-space in the finite-dimensional case, and extend the unification to the countable-dimensional case.
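For context, the classical finite-dimensional formula of this type reads as follows for a Hermitian matrix A with eigenvalues ordered decreasingly (a standard statement, not quoted from the paper; the notation is ours):

```latex
% Classical Courant-Fischer min-max formula for a Hermitian matrix A with
% eigenvalues \lambda_1 \ge \dots \ge \lambda_n; the s-space framework
% abstracts results of this shape.
\lambda_k(A) \;=\; \max_{\substack{V \subseteq \mathbb{C}^n \\ \dim V = k}}
\;\min_{\substack{x \in V \\ x \neq 0}}
\frac{\langle Ax, x\rangle}{\langle x, x\rangle},
\qquad k = 1, \dots, n.
```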
Abstract:
Chapter in Book Proceedings with Peer Review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
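As a hedged illustration of the geometry the abstract describes (not the authors' implementation; NumPy and all names here are assumptions), the iterative extreme-projection idea can be sketched as:

```python
import numpy as np

def vca_sketch(R, p, seed=0):
    """Illustrative endmember extraction in the spirit of VCA.
    R: (bands, pixels) matrix of spectral vectors; p: number of endmembers.
    At each step, project all pixels onto a random direction orthogonal to
    the endmembers found so far and keep the extreme pixel."""
    rng = np.random.default_rng(seed)
    bands, _ = R.shape
    E = np.zeros((bands, p))                    # endmember estimates (columns)
    for i in range(p):
        if i == 0:
            P = np.eye(bands)                   # nothing found yet
        else:
            A = E[:, :i]
            P = np.eye(bands) - A @ np.linalg.pinv(A)  # projector onto span(A)^perp
        w = P @ rng.standard_normal(bands)      # direction orthogonal to span(A)
        proj = w @ R                            # one projection value per pixel
        E[:, i] = R[:, int(np.argmax(np.abs(proj)))]   # extreme pixel -> endmember
    return E
```

Because the affine image of a simplex is again a simplex, the extremes of such projections land on simplex vertices, which is the fact the abstract states.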
Abstract:
International Conference with Peer Review. 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany.
Abstract:
Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool to unmix hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without the use of any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near-real time. Thus, to meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units. The impact of the proposed parallel implementation on the complexity and on the accuracy of VCA is examined using both simulated and real hyperspectral datasets.
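For illustration, the compute-heavy step that benefits from a GPU is the projection of every pixel onto a direction vector; a minimal sketch assuming CuPy is available (the paper's actual GPU implementation is not reproduced here):

```python
# A minimal sketch, assuming CuPy is installed and a CUDA device is available.
# The dominant cost in VCA is projecting every pixel onto a direction vector;
# on a GPU this is a single large matrix-vector product.
import cupy as cp

def project_pixels_gpu(R_host, w_host):
    """R_host: (bands, pixels) hyperspectral data; w_host: (bands,) direction."""
    R = cp.asarray(R_host)       # copy data to device memory
    w = cp.asarray(w_host)
    proj = w @ R                 # massively parallel dot products on the GPU
    return cp.asnumpy(proj)      # bring the (pixels,) result back to the host
```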
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
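In symbols, the linear mixing model underlying this geometry takes the following standard form (notation assumed here, not quoted from the chapter):

```latex
% Linear mixing model: an observed spectral vector x is a convex combination
% of the p endmember signatures m_i, weighted by abundances s_i, plus noise n.
\mathbf{x} \;=\; \sum_{i=1}^{p} s_i\,\mathbf{m}_i + \mathbf{n},
\qquad s_i \ge 0, \quad \sum_{i=1}^{p} s_i = 1.
```

The nonnegativity and sum-to-one constraints are what confine the noiseless observations to a simplex whose vertices are the endmember signatures, which is the geometric fact the MVT-type methods exploit.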
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme; the pixels with the highest scores are the purest ones (a minimal sketch of this skewer-projection idea follows this paragraph). The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
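A minimal sketch of the skewer-projection scoring just described, assuming NumPy; the function name and score bookkeeping are illustrative, not the original PPI code:

```python
import numpy as np

def ppi_scores(R, n_skewers=1000, seed=0):
    """R: (bands, pixels). Returns a per-pixel count of how often each pixel
    is an extreme of a projection onto a random skewer; high counts
    indicate the purest pixels."""
    rng = np.random.default_rng(seed)
    bands, pixels = R.shape
    scores = np.zeros(pixels, dtype=int)
    skewers = rng.standard_normal((n_skewers, bands))
    proj = skewers @ R                    # (n_skewers, pixels) projections
    for row in proj:
        scores[np.argmax(row)] += 1       # extreme at one end of this skewer
        scores[np.argmin(row)] += 1       # and at the other end
    return scores
```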
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory, consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet with a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
Abstract:
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from the information of its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If the required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structures. This statement implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
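For reference, the prediction step of the cubic (four-point, Deslauriers-Dubuc) interpolating subdivision scheme and the wavelet coefficient it defines can be written as follows (indexing convention assumed here, with coarse sample k sitting at fine position 2k):

```latex
% Four-point (cubic) interpolating subdivision: the value at a new fine-grid
% midpoint is predicted from the four nearest coarse samples,
\hat{f}_{2k+1} \;=\; \frac{-f_{k-1} + 9 f_k + 9 f_{k+1} - f_{k+2}}{16},
% and the interpolatory wavelet coefficient is the interpolation error:
d_k \;=\; f_{2k+1} - \hat{f}_{2k+1}.
```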
Abstract:
The electricity trading sector underwent a profound change after the liberalization of the electric sector, which led to the creation of several entities that manage the European electricity markets. As for Portugal and Spain, during this liberalization process an agreement also led them to create a joint market, an Iberian market (MIBEL). This market comprises two operators, one representing the Portuguese pole (OMIP) and the other the Spanish pole (OMEL). OMIP covers the forward, or futures, markets, typically offering contracts for traded energy lasting weeks, months, quarters, semesters, or even years. Daily, these contracts may mature in OMEL, which encompasses the day-ahead and intraday markets. Unlike OMIP, OMEL trades for the following day (day-ahead market) or for a given time of day (intraday market). The day-ahead market is the example used to build the interactive electricity market simulator. It comprises several users (players) who, through an HTML platform, invest in power plants, negotiate bids, and analyse the operation and results of this market. The game is then divided into three phases: 1. Investment phase; 2. Selling phase (bidding); 3. Market phase. In the investment phase, the player can acquire electricity generation units of six technology types: 1. Coal plant; 2. Combined-cycle plant; 3. Hydro plant; 4. Wind plant; 5. Solar plant; 6. Nuclear plant. As the rounds progress, the player can increase investment capacity by selling energy; the winner is the player with the highest balance at the end of the predefined number of rounds, or the first to reach the balance set as the limit by the game administrator. Pedagogically, this simulator is very interesting: besides getting to know the technologies involved and the advantages and disadvantages of renewable and fossil-fuel plants, the user also gains sensitivity to environmental issues, such as the increase in greenhouse gases and the ice melt resulting from the global warming these gases cause. Beyond the knowledge acquired about electric energy, the game also introduces the user to the workings of the electricity market, as well as the tactics that can be used to one's advantage in this type of market.
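For illustration only, a toy sketch of how a day-ahead market phase can clear bids by merit order (all names and numbers are hypothetical, not taken from the simulator):

```python
def clear_day_ahead(supply_bids, demand):
    """supply_bids: list of (price_eur_mwh, quantity_mwh) offers.
    Clears an inelastic demand against the merit order and returns the
    marginal (clearing) price and the accepted offers."""
    accepted, served = [], 0.0
    for price, qty in sorted(supply_bids):     # cheapest offers first
        if served >= demand:
            break
        take = min(qty, demand - served)
        accepted.append((price, take))
        served += take
    clearing_price = accepted[-1][0] if accepted else None
    return clearing_price, accepted

# Example: three plants bidding into a 150 MWh demand.
price, dispatch = clear_day_ahead([(65.0, 80), (20.0, 100), (90.0, 200)], 150)
# -> clearing price 65.0, dispatch [(20.0, 100), (65.0, 50)]
```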
Abstract:
This work proposes a real-time data integration solution in the context of public transport. With the growing number of alternatives offered to public transport users, it is important that users know all the alternatives, based on real-time information, so that they can make the choice that best fits their needs. On the other hand, public transport operators should be able to make all the desired information available with minimal effort or changes to the systems they already have in place. This work uses tools that provide a homogeneous view of the various heterogeneous data sources, this homogeneity being the point of integration between all data sources and the client applications.
Abstract:
This work essentially aims to show the influence of wind action on the design of certain structures, in this specific case, tubular telecommunication towers. The importance of a capable and advanced communications system to the sustained development of the modern world is well known. In this sense, the rapid advance of telecommunications technologies, especially over the last two decades, demanded an equally fast response in the deployment and proliferation of infrastructures supporting the equipment of these technologies. Thus, the tubular tower, among the various cross-sections in use, gained prominence in this field, becoming the most widely used support structure for telecommunications equipment, particularly in rural areas, where buildings tall enough to meet the needs of the licensed operators are scarce. A brief description of the most common tower typologies used to support telecommunications equipment is given. The collapse of a tubular telecommunications tower is described, in the form of a report with a photographic survey. Finally, following the incident mentioned above, a detailed structural analysis is carried out of the tubular tower installed in the position of the one previously installed.
Abstract:
The Tevatron has measured a discrepancy relative to the standard model prediction in the forward-backward asymmetry in top quark pair production. This asymmetry grows with the rapidity difference of the two top quarks. It also increases with the invariant mass of the tt̄ pair, reaching, for high invariant masses, 3.4 standard deviations above the next-to-leading order prediction for the charge asymmetry of QCD. However, perfect agreement between experiment and the standard model was found in both the total and the differential cross section of top quark pair production. As this result could be a sign of new physics, we have parametrized this new physics in terms of a complete set of dimension-six operators involving the top quark. We have then used a Markov chain Monte Carlo approach to find the set of parameters that best fits the data, using all available data regarding top quark pair production at the Tevatron. We have found that only a very small number of operators fit the data better than the standard model.
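A minimal sketch of the kind of Markov chain Monte Carlo exploration described, assuming a Gaussian likelihood so the chain samples exp(-χ²/2); the `chi2` callable and all names are hypothetical, not the authors' analysis code:

```python
import numpy as np

def metropolis(chi2, theta0, n_steps=10000, step=0.05, seed=0):
    """Random-walk Metropolis over Wilson-coefficient-like parameters.
    chi2(theta) should return the chi^2 of the model against the data;
    the chain then samples the posterior proportional to exp(-chi2/2)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current = chi2(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        candidate = chi2(proposal)
        # Accept with probability min(1, exp(-(candidate - current)/2)).
        if np.log(rng.random()) < -(candidate - current) / 2:
            theta, current = proposal, candidate
        chain.append(theta.copy())
    return np.array(chain)
```

For instance, `metropolis(chi2, np.zeros(n_ops))` would explore the coefficient space of n_ops operators starting from the standard model point.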
Abstract:
Dissertation of a scientific nature carried out to obtain the Master's degree in Informatics and Computer Engineering
Abstract:
Master's in Taxation
Abstract:
Master's in Accounting and Management of Financial Institutions
Abstract:
Modern networks, particularly cellular wireless heterogeneous networks, are generally analysed by taking several Key Performance Indicators (KPIs) into account, and a proper balance among them is required to guarantee a desired Quality of Service (QoS). A model that integrates a set of KPIs into a single one is presented, using a Cost Function that includes these KPIs and provides, for each network node, a single evaluation parameter as output, reflecting network conditions and the performance of common radio resource management strategies. The proposed model enables the implementation of different network management policies by weighting KPIs according to users' or operators' perspectives, allowing for better QoS. Results show that different policies can indeed be established, with different impacts on the network, e.g., with median values varying by a factor of more than two.
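A minimal sketch of how such a Cost Function might collapse several KPIs into one per-node parameter under different policies (the KPI names, normalization, and weights are assumptions for illustration, not the paper's model):

```python
def node_cost(kpis, weights, targets):
    """Collapse several KPIs into a single per-node evaluation parameter.
    kpis, targets: dicts of raw KPI values and their target (best) values;
    weights: policy-dependent importance of each KPI (summing to 1)."""
    cost = 0.0
    for name, value in kpis.items():
        normalized = value / targets[name]   # 1.0 means the KPI is on target
        cost += weights[name] * normalized
    return cost

# Two policies weighting the same KPIs differently.
kpis = {"blocking_prob": 0.02, "delay_ms": 40.0, "load": 0.7}
targets = {"blocking_prob": 0.01, "delay_ms": 50.0, "load": 0.8}
user_policy = {"blocking_prob": 0.5, "delay_ms": 0.4, "load": 0.1}
operator_policy = {"blocking_prob": 0.2, "delay_ms": 0.1, "load": 0.7}
print(node_cost(kpis, user_policy, targets), node_cost(kpis, operator_policy, targets))
```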