16 results for snail vectors
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
This paper presents a predictive optimal matrix converter controller for a flywheel energy storage system used as a Dynamic Voltage Restorer (DVR). The flywheel energy storage device is based on a steel seamless tube mounted as a vertical-axis flywheel to store kinetic energy. The motor/generator is a Permanent Magnet Synchronous Machine driven by the AC-AC matrix converter. The control method uses a discrete-time model of the converter system to predict the expected values of the input and output currents for all 27 possible vectors generated by the matrix converter. An optimal controller then minimizes the control errors using a weighted cost functional. The flywheel and its controller were tested as a DVR to mitigate voltage sags and swells. Simulation results show that the DVR is able to compensate the critical load voltage without delays, voltage undershoots or overshoots, overcoming the input/output coupling of matrix converters.
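The predictive search described above is, in essence, an exhaustive evaluation of the 27 switching vectors against a weighted quadratic cost. A minimal Python sketch follows; `predict` (the discrete-time converter model), the reference currents in `refs`, and the weight vector are assumed to be supplied by the surrounding control loop and are not specified in the abstract.

```python
import numpy as np

def select_optimal_vector(x_now, refs, vectors, predict, weights):
    """Exhaustive predictive search: for each of the 27 admissible matrix
    converter vectors, predict the next-step input/output currents with the
    discrete-time model and keep the vector minimizing the weighted
    quadratic cost. `predict` and `refs` are supplied by the control loop."""
    best_cost, best_vector = np.inf, None
    for v in vectors:
        i_in, i_out = predict(x_now, v)          # one-step-ahead prediction
        err = np.concatenate((refs["i_in"] - i_in, refs["i_out"] - i_out))
        cost = err @ (weights * err)             # weighted sum of squared errors
        if cost < best_cost:
            best_cost, best_vector = cost, v
    return best_vector
```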
Abstract:
This paper is an elaboration of the DECA algorithm [1] for blindly unmixing hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. Like DECA, the proposed method is tailored to highly mixed data, in which geometric approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework in which the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm adopted to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
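The first improvement, MDL-based selection of the number of Dirichlet modes, reduces to penalized-likelihood model comparison. A minimal sketch, assuming `log_likelihood(k)` and `n_params(k)` are returned by the GEM fit for each candidate order (these names are ours, not the paper's):

```python
import numpy as np

def select_num_modes(log_likelihood, n_params, n_samples, k_max=10):
    """MDL model-order selection:
    K* = argmin_K  -logL(K) + 0.5 * n_params(K) * log(N).
    `log_likelihood(k)` and `n_params(k)` are assumed to come from the
    GEM fit for each candidate number of Dirichlet modes k."""
    scores = [(-log_likelihood(k) + 0.5 * n_params(k) * np.log(n_samples), k)
              for k in range(1, k_max + 1)]
    return min(scores)[1]
```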
Abstract:
The pine tree plays an important role in the national ecology and economy. It suffers from a severe plague, known as pine wilt disease, caused by the pine wood nematode (PWN), a microscopic invertebrate worm measuring less than 1.5 mm in length. Transmission between trees is due to vectors, longhorn beetles known as pine sawyers. Pine timber producers are therefore required to apply heat treatment (HT) to eliminate the PWN and its vectors so that exported sawn wood complies with standard NP 4487. To keep national companies internationally competitive, the cost impact of HT must be minimized. The objective of this dissertation is a techno-economic study of implementing a cogeneration system capable of producing heat for the PWN treatment and, simultaneously, electricity to sell to the public grid. Revenue from electricity sales may contribute to minimizing HT costs. Since sawmill wood residues can be used as fuel, two cogeneration technologies were evaluated: a classic steam-turbine system (Rankine cycle) and an Organic Rankine Cycle (ORC) system, both allowing the burning of sawmill residues. For the economic evaluation, a technology/remuneration simulator was developed that performs calculations according to the thermal needs of each producer, the electrical power to be installed, and the economic indicators NPV, IRR and payback of the cogeneration installation. The simulator applies the new legislation framing the legal and remuneration regime of cogeneration (DL 23/2010), which considers two schemes, general and special. The methodology was applied to a real sawmill case, and the main results show that the presented solutions, steam turbine and ORC system, are not economically viable. Sensitivity analysis shows that one of the factors that most influences the economic viability of the project is the short operating time. One of the proposed solutions is the creation of a cogeneration plant shared by several timber producers. Another possible solution to the short utilization time would be to provide the heat-treatment service to other wooden-pallet producers that do not have their own kiln.
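The economic indicators computed by the simulator (NPV, IRR, payback) follow from standard discounted cash-flow formulas. A minimal sketch with purely illustrative figures; the cash-flow values below are hypothetical, not results from the dissertation:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] falls at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cash_flows) > 0 else (lo, mid)
    return (lo + hi) / 2

def payback(cash_flows):
    """First year in which the cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical figures: a 500 k-euro investment returning 60 k-euro/year for 15 years.
flows = [-500_000] + [60_000] * 15
print(npv(0.07, flows), irr(flows), payback(flows))
```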
Abstract:
The conjugation of antigens with ligands of pattern recognition receptors (PRR) is emerging as a promising strategy for the modulation of specific immunity. Here, we describe a new Escherichia coli system for the cloning and expression of heterologous antigens in fusion with the OprI lipoprotein, a TLR ligand from the Pseudomonas aeruginosa outer membrane (OM). Analysis of the OprI expressed by this system reveals a triacylated lipid moiety mainly composed of palmitic acid residues. By offering tight regulation of expression and allowing for antigen purification by metal affinity chromatography, the new system circumvents the major drawbacks of former versions. In addition, the anchoring of OprI to the OM of the host cell is further explored for the production of novel recombinant bacterial cell-wall-derived formulations (OM fragments and OM vesicles) with distinct potential for PRR activation. As an example, the African swine fever virus ORF A104R was cloned and the recombinant antigen was obtained in the three formulations. Overall, our results validate a new system suitable for the production of immunogenic formulations that can be used for the development of experimental vaccines and for studies on the modulation of acquired immunity.
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering - Energy branch
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering - Automation and Industrial Electronics branch
Abstract:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
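As a rough illustration of the two facts the algorithm exploits, here is a minimal sketch in the spirit of VCA, omitting the SNR-dependent dimensionality-reduction stage of the published algorithm; function and variable names are ours:

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Sketch in the spirit of VCA: Y is (bands x pixels), p the number of
    endmembers. At each step, draw a direction, make it orthogonal to the
    vertices already found, and keep the pixel that is extreme along it;
    the vertices of the data simplex are the endmembers."""
    rng = np.random.default_rng(seed)
    bands = Y.shape[0]
    E, idx = np.zeros((bands, p)), []
    for k in range(p):
        w = rng.standard_normal(bands)
        if k > 0:  # remove the components lying in span(E[:, :k])
            A = E[:, :k]
            w -= A @ np.linalg.lstsq(A, w, rcond=None)[0]
        j = int(np.argmax(np.abs(w @ Y)))  # most extreme pixel along w
        E[:, k] = Y[:, j]
        idx.append(j)
    return E, idx
```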
Abstract:
Final Master's Project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Report of the Final Master's Project for obtaining the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Final Internship Report presented to the Escola Superior de Dança for the degree of Master in Dance Teaching.
Abstract:
This paper deals with the computing simulation of the impact of fifth-harmonic content and grid voltage decrease on permanent magnet synchronous generator wind turbines. The power converter topologies considered in the simulations are the two-level and the three-level ones. Three-level converters are limited by voltage unbalance across the DC-link capacitors. In order to lessen this limitation, a new control strategy for the selection of the output voltage vectors is proposed. The control strategies considered in the simulations are based on proportional-integral and fractional-order controllers, respectively. Finally, a comparison between the results of the simulations with the two controller strategies is presented in order to show the main advantage of the proposed strategy.
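The abstract does not specify the proposed vector-selection strategy. As a hedged illustration of the underlying idea only, here is a sketch of redundant-state selection for capacitor-voltage balancing, assuming a neutral-point-clamped three-level topology in which each small vector has two redundant switch states with opposite neutral-point current:

```python
def select_redundant_state(v_c_upper, v_c_lower, states, np_current):
    """Pick between the two redundant switch states of a small voltage
    vector so that the neutral-point current rebalances the DC-link
    capacitors. `np_current(s)` returns the signed neutral-point current
    of state s; a positive value is assumed to discharge the upper
    capacitor (the sign convention is an assumption of this sketch)."""
    s0, s1 = states
    if v_c_upper > v_c_lower:       # upper capacitor over-charged
        return s0 if np_current(s0) > np_current(s1) else s1
    return s0 if np_current(s0) < np_current(s1) else s1
```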
Abstract:
Final Master's Project for obtaining the degree of Master in Maintenance Engineering
Abstract:
The localization of magma melting areas at the bottom of the lithosphere in extensional volcanic domains is poorly understood. Large polygenetic volcanoes of long duration and their associated magma chambers suggest that melting at depth may be focused at specific points within the mantle. To validate the hypothesis that the magma feeding a mafic crust comes from permanent localized crustal reservoirs, it is necessary to map the fossilized magma flow within the crustal planar intrusions. Using anisotropy of magnetic susceptibility (AMS), we obtain magmatic flow vectors from 34 alkaline basaltic dykes from the São Jorge, São Miguel and Santa Maria islands in the Azores Archipelago, a hot-spot-related triple junction. The dykes contain titanomagnetite showing a wide spectrum of solid solution, ranging from Ti-rich to Ti-poor compositions with vestiges of maghemitization. Most of the dykes exhibit a normal magnetic fabric. The orientation of the magnetic lineation k1 axis is more variable than that of the k3 axis, which is generally well grouped. The dykes of São Jorge and São Miguel show a predominance of subhorizontal magmatic flows. In Santa Maria the deduced flow pattern is less systematic, changing from subhorizontal in the southern part of the island to oblique in the north. These results suggest that the ascent of magma beneath the Azores islands occurs predominantly above localized melting sources, with the magma then collected within shallow magma chambers. According to this concept, dykes in the upper levels of the crust propagate laterally away from these magma chambers, thus feeding the lava flows observed at the surface.
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.
In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
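The orthogonal subspace projection detector described above has a closed form: project each pixel onto the orthogonal complement of the undesired signatures, then correlate with the signature of interest. A minimal numpy sketch, where the matrix shapes are assumptions of this illustration:

```python
import numpy as np

def osp_statistic(U, d, Y):
    """Orthogonal subspace projection sketch: U holds the undesired
    signatures (bands x q), d the signature of interest (bands,), and
    Y the observed pixels (bands x pixels). Each pixel is projected onto
    the subspace orthogonal to span(U) before matching against d."""
    P_perp = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)  # projector onto span(U)^perp
    return d @ P_perp @ Y  # one detection value per pixel
```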
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance.
Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data.
Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular; a sketch of the PCA step is given after this paragraph. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.
This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
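The PCA projection referenced above is standard; a minimal numpy sketch of that dimensionality-reduction step, where the array shapes are assumptions of this illustration:

```python
import numpy as np

def pca_reduce(Y, k):
    """Project (bands x pixels) hyperspectral data onto its top-k principal
    components, the usual preprocessing step before unmixing."""
    mu = Y.mean(axis=1, keepdims=True)
    Yc = Y - mu                                  # remove the mean spectrum
    C = Yc @ Yc.T / Yc.shape[1]                  # band covariance matrix
    _, vecs = np.linalg.eigh(C)                  # eigenvalues in ascending order
    Ek = vecs[:, ::-1][:, :k]                    # top-k eigenvectors
    return Ek.T @ Yc, Ek, mu                     # scores, basis, mean
```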
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Abstract:
In hyperspectral imagery a pixel typically consists of a mixture of the spectral signatures of reference substances, also called endmembers. Linear spectral mixture analysis, or linear unmixing, aims at estimating the number of endmembers, their spectral signatures, and their abundance fractions. This paper proposes a framework for hyperspectral unmixing. A blind method (SISAL) is used for the estimation of the unknown endmember signatures and their abundance fractions. This method solves a non-convex problem by a sequence of augmented Lagrangian optimizations, where the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The proposed framework simultaneously estimates the number of endmembers present in the hyperspectral image with an algorithm based on the minimum description length (MDL) principle. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed algorithm.