984 results for compressed sensing theory (CS)


Relevance:

20.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances, indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem that can be addressed, for example, by maximum likelihood estimation [19], constrained least squares [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest; the basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation method that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, under given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.

In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and the noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the performance of IFA.

Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].

We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, where the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
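As a minimal illustration of the generative model above (a Python/NumPy sketch with illustrative sizes and names, not the chapter's code), the following draws abundance fractions from a Dirichlet distribution, which enforces positivity and full additivity, and shows the dependence among sources that compromises ICA and IFA:

```python
import numpy as np

rng = np.random.default_rng(0)
L, p, n = 200, 3, 5000          # bands, endmembers, pixels (illustrative)

# Hypothetical nonnegative endmember signatures, one per column.
M = np.abs(rng.normal(size=(L, p)))

# Dirichlet abundances: nonnegative and summing to one in each pixel,
# hence statistically dependent sources.
S = rng.dirichlet(np.ones(p), size=n).T        # shape (p, n)

# Linear mixing model with additive system noise.
X = M @ S + 0.01 * rng.normal(size=(L, n))

# The constant-sum constraint induces negative correlations among the
# abundance fractions, violating the independence assumption of ICA/IFA.
print(np.corrcoef(S))
```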

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]; in this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10].

Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or the mixtures are intimate) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], the spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes for each skewer direction are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme; the pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules, including an exemplar selector, an adaptive learner, a demixer, a knowledge base or spectral library, and a spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data; the other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
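The iterative projection at the heart of the algorithm can be sketched as follows (a toy Python/NumPy version under the pure-pixel assumption; the actual VCA additionally uses an SNR-dependent choice of projection and operates on dimensionality-reduced data):

```python
import numpy as np

def vca_like(X, p, seed=0):
    """Toy endmember extraction in the spirit of VCA: repeatedly project
    the data onto a direction orthogonal to the subspace spanned by the
    endmembers found so far and keep the extreme pixel."""
    rng = np.random.default_rng(seed)
    L, n = X.shape
    E = np.zeros((L, p))                  # endmember estimates (columns)
    indices = []
    for k in range(p):
        # Projector onto the orthogonal complement of the found endmembers.
        P = np.eye(L) if k == 0 else np.eye(L) - E[:, :k] @ np.linalg.pinv(E[:, :k])
        w = P @ rng.normal(size=L)        # random direction in that complement
        j = int(np.argmax(np.abs(w @ X))) # extreme of the projection
        E[:, k] = X[:, j]
        indices.append(j)
    return E, indices
```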

Relevance:

20.00%

Publisher:

Abstract:

A correlation and predictive scheme for the viscosity and self-diffusivity of liquid dialkyl adipates is presented. The scheme is based on the kinetic theory for dense hard-sphere fluids, applied to the van der Waals model of a liquid, to predict the transport properties. A "universal" curve for a dimensionless viscosity of dialkyl adipates was obtained using recently published experimental viscosity and density data for compressed liquid dimethyl (DMA), dipropyl (DPA), and dibutyl (DBA) adipates. The experimental data are described by the correlation scheme with a root-mean-square deviation of ±0.34 %. The parameters describing the temperature dependence of the characteristic volume, V0, and the roughness parameter, Rη, for each adipate are well correlated with one single molecular parameter. Recently published experimental self-diffusion coefficients of the same set of liquid dialkyl adipates at atmospheric pressure were correlated using the characteristic volumes obtained from the viscosity data. The roughness factors, RD, are well correlated with the same single molecular parameter found for viscosity; the root-mean-square deviation of the data from the correlation is less than 1.07 %. Tests are presented to assess the capability of the correlation scheme to estimate the viscosity of compressed liquid diethyl adipate (DEA) over a range of temperatures and pressures, by comparison with literature data, and its self-diffusivity at atmospheric pressure over a range of temperatures. It is noteworthy that no DEA data were used to build the correlation scheme. The deviations between predicted and experimental data for viscosity and self-diffusivity do not exceed 2.0 % and 2.2 %, respectively, in both cases commensurate with the estimated experimental measurement uncertainty.
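For context, the dimensionless viscosity in hard-sphere schemes of this type is commonly defined as sketched below (an Assael–Dymond-type form quoted from the general literature; the exact constant and reduced-variable definitions used by the authors may differ):

```latex
% Reduced viscosity and universal curve (hard-sphere scheme, SI units):
% \eta viscosity, V molar volume, M molar mass, R gas constant, T temperature.
\eta^{*} = 6.035\times10^{8}\,\eta\,V^{2/3}\left(\frac{1}{MRT}\right)^{1/2},
\qquad
\log_{10}\frac{\eta^{*}}{R_{\eta}} = F\!\left(\frac{V}{V_{0}}\right)
```

Here F is the universal function of the reduced volume V/V0, and the roughness parameter Rη superimposes each fluid's data onto the single curve.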

Relevance:

20.00%

Publisher:

Abstract:

Electricity markets are complex environments, involving a large number of different entities with specific characteristics and objectives, making their decisions and interacting in a dynamic scene. Game theory has been widely used to support decisions in competitive environments, so its application to electricity markets can prove to be a high-potential tool. This paper proposes a new scenario analysis algorithm, which applies game theory to evaluate and preview different scenarios, providing players with the ability to react strategically so as to exhibit the behavior that best fits their objectives. The model includes forecasts of competitor players' actions, used to build models of their behavior and thereby define the most probable expected scenarios. Once the scenarios are defined, game theory is applied to support the choice of the action to be performed. Our use of game theory is intended to support one specific agent, not to achieve equilibrium in the market. MASCEM (Multi-Agent System for Competitive Electricity Markets) is a multi-agent electricity market simulator that models market players and simulates their operation in the market. The scenario analysis algorithm has been tested within MASCEM, and our experimental findings with a case study based on real data from the Iberian Electricity Market are presented and discussed.
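A minimal sketch of the decision step described above (plain Python with hypothetical bids, payoffs, and probabilities, not MASCEM code): enumerate the forecast competitor scenarios, evaluate each candidate action against them, and choose the action with the best expected payoff.

```python
# Forecast scenarios for a rival's bid price, with estimated probabilities.
scenarios = [({"rival_bid": 42.0}, 0.5),
             ({"rival_bid": 48.0}, 0.3),
             ({"rival_bid": 55.0}, 0.2)]

own_actions = [40.0, 45.0, 50.0]            # candidate own bids (EUR/MWh)

def payoff(own_bid, scenario, cost=35.0, quantity=10.0):
    """Toy pay-as-bid payoff: dispatched only when undercutting the rival."""
    return (own_bid - cost) * quantity if own_bid < scenario["rival_bid"] else 0.0

# Expected payoff of each action over the probabilistic scenarios:
# a best response of one specific agent to the forecast rival behaviour.
expected = {a: sum(p * payoff(a, s) for s, p in scenarios) for a in own_actions}
print(max(expected, key=expected.get), expected)
```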

Relevance:

20.00%

Publisher:

Abstract:

One of the main goals of science is to understand nature, i.e., to discover and explain how the world around us works. To that end, scientists need to collect data and monitor the environment. In particular, considering that about 70% of the Earth is covered by water, collecting water-characterisation parameters over large surfaces is a priority. Water conditions are monitored mainly by means of buoys. However, the buoys available on the market do not meet existing needs. This is one of the main reasons that led the Laboratório de Sistemas Autónomos (LSA) of the Instituto Superior de Engenharia do Porto to launch a project to develop a reconfigurable buoy with two operating modes: environmental monitoring and active regatta mark. The second mode is intended for autonomous sailboat regattas. The project started a year ago with a European Project Semester (EPS) [1] project, carried out by four international students, aimed at building the buoy structure and selecting the most suitable components for the measurement and control system. The architecture defined for this system is of the master-slave type and comprises a master control unit for telemetry and configuration and a slave control unit for measurement and data storage. The development of the project continued with two Belgian students who worked on communication and data storage. The present project, which continues the development of measurement and data storage on the slave control unit side, has the following objectives: (i) implement the communication protocol on the slave control unit; (ii) collect and store sensor data on the SD card in real time; (iii) provide data in due time; and (iv) retrieve data from the SD card offline. Previous contributions were studied and a survey of existing related projects was carried out. The development of the current project started with the communication protocol. This protocol, which was designed by the previous students, was a good starting point; nevertheless, it was updated and improved with new features. This last component was joint work with Laurens Allart, who worked on the telemetry and configuration subsystem during this semester. The protocol was implemented on the slave control unit side through a multithreaded structure. This structure receives the messages from the master unit, executes the requested actions, and sends back the result. The buoy is a multimode reconfigurable device that can be extended with new operating modes in the future. Unfortunately, it suffers from some limitations: it supports a maximum payload of 40 kg and has a deployment area limited by the maximum distance to the base station.
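A minimal sketch of the multithreaded structure described above (hypothetical command names in Python, not the project's firmware): a worker thread receives messages from the master unit, executes the requested action, and sends back the result.

```python
import queue
import threading

requests = queue.Queue()   # messages arriving from the master unit
replies = queue.Queue()    # results returned to the master unit

def handle(message):
    """Execute a requested action; the command set here is illustrative."""
    cmd = message.get("cmd")
    if cmd == "read_sensor":
        return {"ok": True, "value": 17.3}   # stub sensor reading
    if cmd == "store_sd":
        return {"ok": True}                  # stub SD-card write
    return {"ok": False, "error": "unknown command"}

def worker():
    while True:
        msg = requests.get()
        if msg is None:                      # shutdown sentinel
            break
        replies.put(handle(msg))

threading.Thread(target=worker, daemon=True).start()
requests.put({"cmd": "read_sensor"})
print(replies.get())                         # {'ok': True, 'value': 17.3}
requests.put(None)
```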

Relevance:

20.00%

Publisher:

Abstract:

Work presented within the scope of the Doctoral Programme in Informatics, as a partial requirement for obtaining the degree of Doctor in Informatics.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the degree of Master in Statistics and Information Management.

Relevance:

20.00%

Publisher:

Abstract:

Ecological Water Quality - Water Treatment and Reuse

Relevance:

20.00%

Publisher:

Abstract:

Cyanobacteria deteriorate water quality and, because of their toxins, are responsible for emerging outbreaks and epidemics causing harmful diseases in humans and animals. Microcystin-LR (MCT) is one of the most relevant cyanotoxins and the most widely studied hepatotoxin. For safety purposes, the World Health Organization recommends a maximum value of 1 μg L−1 of MCT in drinking water. There is therefore great demand for remote, real-time sensing techniques to detect and quantify MCT. In this work, a Fabry–Pérot sensing probe based on an optical fibre tip coated with an MCT-selective thin film is presented. The membranes were developed by imprinting MCT in a sol–gel matrix that was applied over the fibre tip by dip coating. The imprinting effect was obtained by curing the sol–gel membrane, prepared with (3-aminopropyl)trimethoxysilane (APTMS), diphenyl-dimethoxysilane (DPDMS), and tetraethoxysilane (TEOS), in the presence of MCT; it was tested by preparing a similar membrane without the template. In general, the fibre Fabry–Pérot sensor with a molecularly imprinted polymer (MIP) showed a low thermal effect, avoiding the need for temperature control in field applications. It presented a linear response to MCT concentration within 0.3–1.4 μg L−1, with a sensitivity of −12.4 ± 0.7 nm L μg−1. The corresponding non-imprinted polymer (NIP) displayed linear behaviour over the same MCT concentration range, but with a much lower sensitivity of −5.9 ± 0.2 nm L μg−1. The method shows excellent selectivity for MCT against other species co-existing with the analyte in environmental waters, and it was successfully applied to the determination of MCT in contaminated samples. The main advantages of the proposed optical sensor include high sensitivity and specificity, low cost, robustness, and easy preparation and preservation.
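A minimal sketch of how the reported calibration could be applied (Python; it simply inverts the linear response quoted above, using the −12.4 nm L μg−1 sensitivity and the 0.3–1.4 μg L−1 range; the blank shift is a hypothetical input):

```python
SLOPE = -12.4   # nm L ug^-1, the reported MIP sensor sensitivity

def mct_concentration(shift_nm, blank_shift_nm=0.0):
    """Estimate MCT concentration (ug/L) from a wavelength shift.
    Valid only inside the reported linear range, 0.3-1.4 ug/L."""
    c = (shift_nm - blank_shift_nm) / SLOPE
    if not 0.3 <= c <= 1.4:
        raise ValueError("outside the reported linear range")
    return c

# A -12.4 nm shift relative to the blank corresponds to ~1.0 ug/L.
print(mct_concentration(-12.4))
```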

Relevance:

20.00%

Publisher:

Abstract:

IEEE International Conference on Cyber Physical Systems, Networks and Applications (CPSNA'15), Hong Kong, China.

Relevance:

20.00%

Publisher:

Abstract:

As a result of the stressful conditions in aquaculture facilities, there is a high risk of bacterial infections among cultured fish. Chlortetracycline (CTC) is one of the antimicrobials used to address this problem: it is a broad-spectrum antibacterial active against a wide range of Gram-positive and Gram-negative bacteria. Numerous analytical methods for screening, identifying, and quantifying CTC in animal products have been developed over the years. An alternative and advantageous method should rely on expeditious and efficient procedures providing highly specific and sensitive measurements in food samples; ion-selective electrodes (ISEs) could meet these criteria. The only ISE reported in the literature for this purpose used traditional electro-active materials. A selectivity enhancement could, however, be achieved by improving analyte recognition with molecularly imprinted polymers (MIPs). Several MIP particles were synthesized and used as electro-active materials. ISEs based on methacrylic acid monomers showed the best analytical performance in terms of slope (62.5 and 68.6 mV/decade) and detection limit (4.1 × 10−5 and 5.5 × 10−5 mol L−1). The electrodes displayed good selectivity and are not affected by pH changes over the range 2.5 to 13. The sensors were successfully applied to the analysis of serum, urine, and fish samples.
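A minimal sketch of how such a slope is used in practice (Python, assuming the usual Nernstian calibration E = E0 + S·log10(a); the E0 value is hypothetical, for illustration only):

```python
import math

S = 62.5e-3    # V/decade, from the reported 62.5 mV/decade slope
E0 = 0.250     # hypothetical standard cell potential, in V

def potential(activity):
    """Nernstian electrode response: E = E0 + S * log10(a)."""
    return E0 + S * math.log10(activity)

def activity(measured_potential):
    """Invert the calibration to recover CTC activity from a reading."""
    return 10 ** ((measured_potential - E0) / S)

# One decade of activity changes the potential by one slope (62.5 mV).
print(potential(1e-4) - potential(1e-5))   # ~0.0625 V
print(activity(potential(5.5e-5)))         # recovers the detection limit
```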

Relevance:

20.00%

Publisher:

Abstract:

Poster at the 12th European Conference on Wireless Sensor Networks (EWSN 2015), 9–11 February 2015, Porto, Portugal, pp. 24–25.

Relevance:

20.00%

Publisher:

Abstract:

The trend towards more cooperative play and the increasing game dynamics in the RoboCup MSL league motivate the improvement of skills for ball passing and reception. Currently, the majority of MSL teams use ball handling devices with rollers to obtain more precise kicks, but this limits the capability to kick a moving ball without first stopping and grabbing it. This paper addresses the problem of receiving and kicking a fast-moving ball without having to grab it with a roller-based ball handling device. Here, the main difficulty is the high latency and low rate of the measurements of the ball sensing systems, which are based on vision or laser scanner sensors. Our robots use a geared leg coupled to a motor that acts simultaneously as the kicking device and as a low-level ball sensor. This paper proposes a new method to improve the ball sensing capability of the kicker by combining high-rate measurements of the motor torque and energy with the angular position of the kicker leg. The developed method endows the kicker device with an effective ball detection ability, validated in several game situations, such as intercepting a fast pass or chasing the ball when the relative speed between robot and ball is low. This can be used to optimize the kick instant or by the embedded kicker control system to absorb the ball energy.
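A minimal sketch of the detection idea (Python with assumed thresholds, not the team's firmware): flag ball contact when the motor torque rises sharply while the kicker leg is moving slowly.

```python
TORQUE_JUMP = 0.8     # N*m rise between consecutive samples (assumed)
MAX_LEG_SPEED = 2.0   # rad/s below which the leg counts as "slow" (assumed)

def detect_contact(samples):
    """samples: iterable of (torque_Nm, leg_speed_rad_s) at a high rate.
    Returns the index of the first sample that looks like ball contact."""
    prev = None
    for i, (torque, speed) in enumerate(samples):
        if prev is not None and torque - prev > TORQUE_JUMP \
                and abs(speed) < MAX_LEG_SPEED:
            return i
        prev = torque
    return None

# Quiet motion, then a torque spike when the ball hits the leg.
trace = [(0.1, 0.5), (0.2, 0.6), (1.4, 0.4), (1.5, 0.3)]
print(detect_contact(trace))   # -> 2
```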

Relevance:

20.00%

Publisher:

Abstract:

Etnográfica, 15 (2): 313-336