20 results for mixtures of gaussians
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data sets in which the geometry-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, in which the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
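As a point of reference for the model just described, here is a minimal numpy sketch of the linear mixing process with abundances drawn from a mixture of Dirichlet densities; all sizes, mode weights, and Dirichlet parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: L bands, p endmembers, N pixels (not from the paper).
L, p, N = 200, 3, 5000
M = rng.random((L, p))              # endmember signature matrix (one column each)

# Abundance prior: a two-mode mixture of Dirichlet densities, which keeps
# every abundance vector nonnegative and summing to one by construction.
weights = np.array([0.6, 0.4])                 # assumed mode weights
alphas = np.array([[8.0, 2.0, 2.0],            # mode 1: endmember 1 dominates
                   [2.0, 2.0, 8.0]])           # mode 2: endmember 3 dominates
modes = rng.choice(2, size=N, p=weights)
A = np.vstack([rng.dirichlet(alphas[k]) for k in modes])   # (N, p) abundances

# Observed spectra: linear mixing plus additive sensor noise.
Y = A @ M.T + 0.01 * rng.standard_normal((N, L))

assert np.allclose(A.sum(axis=1), 1.0) and (A >= 0).all()
```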
Abstract:
We investigate the effect of distinct bonding energies on the onset of criticality of low-functionality fluid mixtures. We focus on mixtures of particles with two and three patches, as this includes the mixture where "empty" fluids were originally reported. In addition to the number of patches, the species differ in the type of patches or bonding sites. For simplicity, we consider that the patches on each species are identical: one species has three patches of type A and the other has two patches of type B. We have found a rich phase behavior with closed miscibility gaps, liquid-liquid demixing, and negative azeotropes. Liquid-liquid demixing was found to pre-empt the "empty" fluid regime of these mixtures when the AB bonds are weaker than the AA or BB bonds. By contrast, mixtures in this class exhibit "empty" fluid behavior when the AB bonds are stronger than at least one of the other two. Mixtures with bonding energies ε_BB = ε_AB and ε_AA < ε_BB were found to exhibit an unusual negative azeotrope. © 2011 American Institute of Physics. [doi:10.1063/1.3561396]
Abstract:
This paper introduces a new unsupervised hyperspectral unmixing method conceived for linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by the purely geometrically based algorithms, is far away from the true simplex associated with the endmembers. The proposed method, an extension of our previous studies, resorts to the statistical framework. The abundance fraction prior is a mixture of Dirichlet densities, thus automatically enforcing the constraints on the abundance fractions imposed by the acquisition process, namely, nonnegativity and sum-to-one. A cyclic minimization algorithm is developed in which: 1) the number of Dirichlet modes is inferred based on the minimum description length principle; 2) a generalized expectation maximization algorithm is derived to infer the model parameters; and 3) a sequence of augmented Lagrangian-based optimizations is used to compute the signatures of the endmembers. Experiments on simulated and real data are presented to show the effectiveness of the proposed algorithm in unmixing problems beyond the reach of the geometrically based state-of-the-art competitors.
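A hedged sketch of the MDL-style model-order selection mentioned in point 1): for lack of a ready-made Dirichlet-mixture likelihood, this example scores Gaussian mixtures with scikit-learn's BIC, which coincides with a common two-part MDL criterion; the paper itself applies MDL to Dirichlet modes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_num_modes(X, k_max=8, seed=0):
    """Pick the number of mixture components minimizing a BIC/MDL-style score.

    The paper applies MDL to Dirichlet modes; Gaussian mixtures are used here
    only because scikit-learn provides their likelihood out of the box.
    """
    scores = []
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(X)
        scores.append(gm.bic(X))    # -2 log-likelihood + (#params) * log N
    return int(np.argmin(scores)) + 1

X = np.random.default_rng(1).standard_normal((1000, 2))   # toy data
print(select_num_modes(X))   # typically 1 for pure Gaussian noise
```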
Abstract:
We investigate the thermodynamics and percolation regimes of model binary mixtures of patchy colloidal particles. The particles of each species have three sites of two types, one of which promotes bonding of particles of the same species while the other promotes bonding of different species. We find up to four percolated structures at low temperatures and densities: two gels where only one species percolates, a mixed gel where particles of both species percolate but neither species percolates separately, and a bicontinuous gel where particles of both species percolate separately forming two interconnected networks. The competition between the entropy and the energy of bonding drives the stability of the different percolating structures. Appropriate mixtures exhibit one or more connectivity transitions between the mixed and bicontinuous gels, as the temperature and/or the composition changes.
Abstract:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio. In any case, there are always endmembers incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
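One way to make the mutual-information computation concrete: fit mixtures of Gaussians to the joint and marginal densities of two estimated sources and evaluate I(X;Y) = E[log p(x,y) - log p(x) - log p(y)] by Monte Carlo under the fitted joint. This is a plausible sketch of the idea, not the authors' exact estimator.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mi_via_mog(x, y, n_components=3, n_mc=20000, seed=0):
    """Estimate I(X;Y) = E[log p(x,y) - log p(x) - log p(y)] via MOG fits."""
    xy = np.column_stack([x, y])
    gm_xy = GaussianMixture(n_components, random_state=seed).fit(xy)
    gm_x = GaussianMixture(n_components, random_state=seed).fit(x.reshape(-1, 1))
    gm_y = GaussianMixture(n_components, random_state=seed).fit(y.reshape(-1, 1))
    s, _ = gm_xy.sample(n_mc)               # Monte Carlo under the fitted joint
    return np.mean(gm_xy.score_samples(s)
                   - gm_x.score_samples(s[:, :1])
                   - gm_y.score_samples(s[:, 1:]))

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
print(mi_via_mog(x, x + 0.1 * rng.standard_normal(5000)))   # strongly dependent
print(mi_via_mog(x, rng.standard_normal(5000)))             # near zero
```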
Abstract:
Chapter in book proceedings with peer review: First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
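Before moving to the second approach, here is a minimal numpy sketch of the orthogonal subspace projection operation described above, projecting each pixel onto the complement of the span of the undesired signatures; the matrix sizes and the toy data are assumptions for illustration.

```python
import numpy as np

def osp_detector(Y, d, U):
    """Orthogonal subspace projection: score each pixel for target signature d.

    Y : (N, L) observed pixels;  d : (L,) target signature;
    U : (L, q) matrix whose columns are the undesired signatures.
    """
    # P projects onto the orthogonal complement of span(U).
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
    return (Y @ P @ d) / (d @ P @ d)    # normalized detector output per pixel

# Toy usage with made-up signatures.
rng = np.random.default_rng(0)
L = 50
d, U = rng.random(L), rng.random((L, 3))
Y = np.outer(rng.random(200), d) + rng.random((200, 3)) @ U.T
print(osp_detector(Y, d, U)[:5])
```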
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
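To make the dimensionality-reduction step mentioned above concrete, here is a small SVD-based sketch in the spirit of the PCA/SVD projections cited; nothing in it is specific to the chapter's implementation.

```python
import numpy as np

def reduce_dim(Y, k):
    """Project (N, L) spectral vectors onto their k leading principal directions."""
    Yc = Y - Y.mean(axis=0)                        # remove the mean spectrum
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:k].T                           # (N, k) reduced representation

rng = np.random.default_rng(0)
Y = rng.random((1000, 4)) @ rng.random((4, 200))   # data on a 4-dim subspace
Z = reduce_dim(Y, 4)                               # retains the signal subspace
print(Z.shape)   # (1000, 4)
```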
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometry-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
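As a side note on the positivity and full-additivity constraints invoked above: their standard computational counterpart is Euclidean projection onto the probability simplex. A sketch of that projection follows; the chapter itself enforces the constraints through the Dirichlet prior rather than through an explicit projection.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]                           # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

a = project_simplex(np.array([0.8, 0.6, -0.3]))
print(a, a.sum())     # nonnegative entries summing to one
```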
Abstract:
We investigate the phase behaviour of 2D mixtures of bi-functional and three-functional patchy particles and 3D mixtures of bi-functional and tetra-functional patchy particles by means of Monte Carlo simulations and Wertheim theory. We start by computing the critical points of the pure systems and then we investigate how the critical parameters change upon lowering the temperature. We extend the successive umbrella sampling method to mixtures to make it possible to extract information about the phase behaviour of the system at a fixed temperature for the whole range of densities and compositions of interest. (C) 2013 AIP Publishing LLC.
Abstract:
We have generalized earlier work on anchoring of nematic liquid crystals by Sullivan, and Sluckin and Poniewierski, in order to study transitions which may occur in binary mixtures of nematic liquid crystals as a function of composition. Microscopic expressions have been obtained for the anchoring energy of (i) a liquid crystal in contact with a solid aligning surface; (ii) a liquid crystal in contact with an immiscible isotropic medium; (iii) a liquid crystal mixture in contact with a solid aligning surface. For (iii), possible phase diagrams of anchoring angle versus dopant concentration have been calculated using a simple liquid crystal model. These exhibit some interesting features including re-entrant conical anchoring, for what are believed to be realistic values of the molecular parameters. A way of relaxing the most drastic approximation implicit in the above approach is also briefly discussed.
Abstract:
The present study focuses on the characterization of ultrafine particles emitted during the welding of steel using Ar+CO2 mixtures, and intends to analyze which of the main process parameters may influence the emission itself. It was found that the amount of emitted ultrafine particles (measured by particle number and alveolar deposited surface area) clearly depends on the distance to the welding front and also on the main welding parameters, namely the current intensity and heat input of the welding process. The emission of airborne ultrafine particles seems to increase with the current intensity, as the fume formation rate does. When comparing the tested gas mixtures, higher emissions are observed for more oxidant mixtures, that is, mixtures with higher CO2 content, which result in higher arc stability. The latter mixtures originate higher concentrations of ultrafine particles (as measured by the number of particles per cm3 of air) and higher values of alveolar deposited surface area of particles, thus resulting in a more hazardous condition regarding workers' exposure. © 2014 Sociedade Portuguesa de Materiais (SPM). Published by Elsevier España, S.L. All rights reserved.
Abstract:
The reactions between 4'-phenyl-terpyridine (L) and nitrate, acetate or chloride Cu(II) salts led to the formation of [Cu(NO3)2L] (1), [Cu(OCOCH3)2L]·CH2Cl2 (2·CH2Cl2) and [CuCl2L]·[Cu(Cl)(μ-Cl)L]2 (3), respectively. Upon dissolving 1 in mixtures of DMSO-MeOH or EtOH-DMF, the compounds [Cu(H2O){OS(CH3)2}L](NO3)2 (4) and [Cu(HO)(CH3CH2OH)L](NO3) (5) were obtained, in this order. Reaction of 3 with AgSO3CF3 led to [CuCl(OSO2CF3)L] (6). The compounds were characterized by ESI-MS, IR, elemental analysis, electrochemical techniques and, for 2-6, also by single-crystal X-ray diffraction. By cyclic voltammetry, they undergo two single-electron irreversible reductions assigned to Cu(II) → Cu(I) and Cu(I) → Cu(0) and, for those of the same structural type, the reduction potential appears to correlate with the sum of the values of the Lever electrochemical E_L ligand parameter, which is reported for the first time for copper complexes. Complexes 1-6 in combination with TEMPO (2,2,6,6-tetramethylpiperidinyl-1-oxyl radical) can exhibit a high catalytic activity, under mild conditions and in alkaline aqueous solution, for the aerobic oxidation of benzylic alcohols. Molar yields up to 94% (based on the alcohol) with TON values up to 320 were achieved after 22 h.
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is linear unmixing. The key to linear unmixing is to find the set of reference substances, also called endmembers, that are representative of a given scene. This paper presents vertex component analysis (VCA), a new method to unmix linear mixtures of hyperspectral sources. The algorithm is unsupervised and exploits a simple geometric fact: endmembers are vertices of a simplex. The algorithm complexity, measured in floating-point operations, is O(n), where n is the sample size. The effectiveness of the proposed scheme is illustrated using simulated data.
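A heavily simplified sketch of the geometric idea behind VCA: repeatedly project the data onto a direction orthogonal to the subspace spanned by the endmembers found so far and take the extreme pixel as the next vertex. This omits the SNR-dependent projections and other details of the published algorithm.

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Pick p candidate endmembers from the (N, L) pixel matrix Y.

    Simplified vertex-finding loop; not the full VCA of the paper.
    """
    rng = np.random.default_rng(seed)
    N, L = Y.shape
    E = np.zeros((L, p))
    for i in range(p):
        # Direction orthogonal to the span of the endmembers found so far.
        f = rng.standard_normal(L)
        if i > 0:
            A = E[:, :i]
            f -= A @ np.linalg.lstsq(A, f, rcond=None)[0]
        E[:, i] = Y[np.argmax(np.abs(Y @ f))]    # extreme pixel along f
    return E

# Toy usage: mixtures of 3 made-up signatures with many near-pure pixels.
rng = np.random.default_rng(1)
M = rng.random((60, 3))
A = rng.dirichlet(np.ones(3) * 0.3, size=2000)
print(vca_like(A @ M.T, 3).shape)                # (60, 3)
```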
Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra, called endmember signatures, and a set of corresponding abundance fractions for the respective spatial coverage. This paper introduces vertex component analysis (VCA), an unsupervised algorithm to unmix linear mixtures of hyperspectral data. VCA exploits the fact that endmembers occupy the vertices of a simplex, and assumes the presence of pure pixels in the data. VCA performance is illustrated using simulated and real data. VCA competes with state-of-the-art methods at much lower computational complexity.
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation to recover independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not unmix all sources correctly. This conclusion is based on a study of the mutual information. Nevertheless, some sources might be well separated, mainly if the number of sources is large and the signal-to-noise ratio (SNR) is high.
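The dependence induced by the sum-to-one constraint is easy to exhibit numerically: abundance fractions drawn on the simplex are necessarily negatively correlated, which is precisely what breaks the ICA independence assumption. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(3), size=100000)   # abundances: a >= 0, sum(a) = 1

# Off-diagonal covariances are negative: if one fraction is large,
# the others must be small, so the sources cannot be independent.
print(np.cov(A.T).round(4))
```

For a symmetric Dirichlet with unit parameters and three sources, the off-diagonal covariances come out near -1/36 ≈ -0.028, negative as expected.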
Abstract:
Risk assessment considerations - The concept that "safe levels of exposure" for humans can be identified for individual chemicals is central to the risk assessment of compounds with known toxicological profiles. Selection of agents for combination chemotherapy regimens involves minimizing the overlap of mechanisms of action, antitumor activity and toxicity profiles. Although the toxicological profile and mechanism of action of each individual drug are well characterized, toxicological interactions between drugs are likely, but poorly established in the occupational exposure context. The synergistic nature of these interactions may help in understanding the adverse health effects observed in healthcare workers, where exposure situations are characterized by complex mixtures of chemical agents and the levels of the individual exposing agents are often not sufficiently high to explain the health complaints. However, if a substance is a genotoxic carcinogen, this would be the "lead effect"; normally, no OEL based on a NOEL would be derived, and the level would be set so low that other effects would be unlikely. Aim of the study - A recent research project developed in Portuguese hospitals characterized the occupational exposure to antineoplastic agents and the related health effects. The project aimed to assess the exposure of the different risk groups that handle antineoplastic agents in the hospital setting, namely during the preparation and administration of these drugs. Here, the results of a study developed in two hospitals in Lisbon are presented and discussed.