968 results for 4-component gaussian basis sets


Relevance:

100.00%

Publisher:

Abstract:

Magnetotactic bacteria intracellularly biomineralize magnetite of an ideal grain size for recording palaeomagnetic signals. However, bacterial magnetite has only been reported in a few pre-Quaternary records because progressive burial into anoxic diagenetic environments causes its dissolution. Deep-sea carbonate sequences provide optimal environments for preserving bacterial magnetite due to low rates of organic carbon burial and expanded pore-water redox zonations. Such sequences often do not become anoxic for tens to hundreds of metres below the seafloor. Nevertheless, the biogeochemical factors that control magnetotactic bacterial populations in such settings are not well known. We document the preservation of bacterial magnetite, which dominates the palaeomagnetic signal throughout Eocene pelagic carbonates from the southern Kerguelen Plateau, Southern Ocean. We provide evidence that iron fertilization, associated with increased aeolian dust flux, resulted in surface water eutrophication in the late Eocene that controlled bacterial magnetite abundance via export of organic carbon to the seafloor. Increased flux of aeolian iron-bearing phases also delivered iron to the seafloor, some of which became bioavailable through iron reduction. Our results suggest that magnetotactic bacterial populations in pelagic settings depend crucially on particulate iron and organic carbon delivery to the seafloor.

Relevance:

100.00%

Publisher:

Abstract:

We present a reformulation of the hairy-probe method for introducing electronic open boundaries that is appropriate for steady-state calculations involving nonorthogonal atomic basis sets. As a check on the correctness of the method we investigate a perfect atomic wire of Cu atoms and a perfect nonorthogonal chain of H atoms. For both atom chains we find that the conductance has a value of exactly one quantum unit and that this is rather insensitive to the strength of coupling of the probes to the system, provided values of the coupling are of the same order as the mean interlevel spacing of the system without probes. For the Cu atom chain we find in addition that away from the regions with probes attached, the potential in the wire is uniform, while within them it follows a predicted exponential variation with position. We then apply the method to an initial investigation of the suitability of graphene as a contact material for molecular electronics. We perform calculations on a carbon nanoribbon to determine the correct coupling strength of the probes to the graphene and obtain a conductance of about two quantum units corresponding to two bands crossing the Fermi surface. We then compute the current through a benzene molecule attached to two graphene contacts and find only a very weak current because of the disruption of the π conjugation by the covalent bond between the benzene and the graphene. In all cases we find that very strong or weak probe couplings suppress the current.

Relevance:

100.00%

Publisher:

Abstract:

An ab initio/RRKM study of the reaction mechanisms and product branching ratios of the neutral ethynyl (C2H) and cyano (CN) radicals with unsaturated hydrocarbons is performed. The reactions studied apply to cold environments such as planetary atmospheres (including Titan's), the interstellar medium (ISM), icy bodies, and molecular clouds. The reactions of C2H and CN additions to gaseous unsaturated hydrocarbons are an active area of study. NASA's Cassini/Huygens mission found a high concentration of C2H and CN from photolysis of ethyne (C2H2) and hydrogen cyanide (HCN), respectively, in the organic haze layers of the atmosphere of Titan. The reactions involved in the atmospheric chemistry of Titan lead to a vast array of larger, more complex intermediates and products and may also serve as a chemical model of Earth's primordial atmospheric conditions. The C2H and CN additions are rapid and exothermic, and often occur barrierlessly at various carbon sites of unsaturated hydrocarbons. The reaction mechanism is proposed on the basis of the resulting potential energy surface (PES), which includes all the possible intermediates and transition states that can occur, and all the products that lie on the surface. The B3LYP/6-311G(d,p) level of theory is employed to determine optimized electronic structures, moments of inertia, vibrational frequencies, and zero-point energies. These are followed by higher-level single-point CCSD(T)/cc-pVTZ calculations, including extrapolations to the complete basis set (CBS) limit for the reactants and products. A microcanonical RRKM study predicts single-collision (zero-pressure limit) rate constants for all reaction paths on the potential energy surface, which are then used to compute the branching ratios of the resulting products. These theoretical calculations are conducted either jointly or in parallel with experimental work to elucidate the chemical composition of Titan's atmosphere, the ISM, and cold celestial bodies.
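The microcanonical RRKM rate constant mentioned above has the standard form k(E) = σ N‡(E − E0) / (h ρ(E)), where N‡ is the sum of states of the transition state and ρ the density of states of the reactant. As a minimal illustration only (not the authors' code), the sketch below evaluates this expression with classical harmonic sums and densities of states; all frequencies and the threshold energy in the usage example are hypothetical placeholders.

```python
import math

H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e10   # speed of light, cm/s

def classical_sum_of_states(e, freqs_cm):
    """Classical harmonic sum of states N(E) for s oscillators (E in J)."""
    s = len(freqs_cm)
    prod_hv = math.prod(H * C * f for f in freqs_cm)
    return e**s / (math.factorial(s) * prod_hv)

def classical_density_of_states(e, freqs_cm):
    """Classical harmonic density of states rho(E), per J."""
    s = len(freqs_cm)
    prod_hv = math.prod(H * C * f for f in freqs_cm)
    return e**(s - 1) / (math.factorial(s - 1) * prod_hv)

def rrkm_rate(e, e0, ts_freqs_cm, reactant_freqs_cm, sigma=1):
    """Microcanonical RRKM rate k(E) = sigma * N#(E - E0) / (h * rho(E))."""
    if e <= e0:
        return 0.0  # below the barrier: no classical rate
    n_ts = classical_sum_of_states(e - e0, ts_freqs_cm)
    rho = classical_density_of_states(e, reactant_freqs_cm)
    return sigma * n_ts / (H * rho)

# Hypothetical example: 5 transition-state modes, 6 reactant modes.
k = rrkm_rate(5e-19, 1e-19, [500.0] * 5, [600.0] * 6)
```

In practice quantum state counts (e.g., Beyer–Swinehart direct counting) replace the classical expressions; the structure of the calculation is the same.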

Relevance:

50.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
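The linear mixing model just described can be sketched in a few lines: an observed pixel is the signature matrix times an abundance vector plus noise, with the abundances nonnegative and summing to one. The dimensions, signatures, and noise level below are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: L spectral bands, p endmembers.
L, p = 50, 3

# Endmember signature matrix M (L x p): each column is one endmember spectrum.
M = rng.uniform(0.1, 0.9, size=(L, p))

# Abundance fractions: nonnegative and summing to one (full additivity).
a = rng.dirichlet(np.ones(p))

# Observed mixed pixel: linear combination of signatures plus sensor noise.
noise = rng.normal(0.0, 0.001, size=L)
y = M @ a + noise

assert a.min() >= 0.0 and abs(a.sum() - 1.0) < 1e-12
```

Unmixing is the inverse problem: recover M (or its columns) and a from a set of observed pixels y.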
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
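The skewer-projection idea behind PPI (project every pixel onto many random directions and count how often each pixel is an extreme) can be sketched as follows. This is a minimal illustration, not the published implementation; the function name and all parameters are placeholders.

```python
import numpy as np

def ppi(pixels, n_skewers=1000, seed=0):
    """Pixel purity index sketch: count how often each pixel is an
    extreme when the data are projected onto random unit vectors."""
    rng = np.random.default_rng(seed)
    n, d = pixels.shape
    scores = np.zeros(n, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.normal(size=d)
        skewer /= np.linalg.norm(skewer)
        proj = pixels @ skewer
        scores[np.argmax(proj)] += 1  # extreme in the positive direction
        scores[np.argmin(proj)] += 1  # extreme in the negative direction
    return scores
```

On data generated by the linear mixing model with pure pixels present, the pure pixels sit at the simplex vertices and therefore accumulate the highest scores.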
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
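The projection iteration just described can be sketched as below: repeatedly project a random direction orthogonally to the span of the endmembers found so far, and take the pixel with the extreme projection as the next endmember. This is a deliberately simplified illustration of the idea, not the published VCA algorithm (which also handles noise and subspace identification); the function name is hypothetical.

```python
import numpy as np

def vca_sketch(pixels, p, seed=0):
    """Sketch of the VCA iteration: each new endmember is the extreme
    of the data projected onto a direction orthogonal to the span of
    the endmembers already found."""
    rng = np.random.default_rng(seed)
    n, d = pixels.shape
    chosen = []
    E = np.zeros((d, 0))  # endmember signatures found so far
    for _ in range(p):
        w = rng.normal(size=d)
        if E.shape[1] > 0:
            # Remove the component of w lying in span(E).
            coef, *_ = np.linalg.lstsq(E, w, rcond=None)
            w = w - E @ coef
        w /= np.linalg.norm(w)
        k = int(np.argmax(np.abs(pixels @ w)))
        chosen.append(k)
        E = np.column_stack([E, pixels[k]])
    return chosen
```

When pure pixels are present, the extremes of these projections are simplex vertices, so the iteration recovers the pure-pixel indices.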

Relevance:

50.00%

Publisher:

Abstract:

In contradiction to sexual selection theory, several studies have shown that although the expression of melanin-based ornaments is usually under strong genetic control and weakly sensitive to the environment and body condition, such ornaments can signal individual quality. Covariation between a melanin-based ornament and phenotypic quality may result from pleiotropic effects of genes involved in the production of melanin pigments. Two categories of genes responsible for variation in melanin production may be relevant, namely those that trigger melanin production (a yes-or-no response) and those that determine the amount of pigment produced. To investigate which of these two hypotheses is the most likely, I reanalysed data collected from barn owls (Tyto alba). The underparts of this bird vary from immaculate to heavily marked with black spots of varying size. Published cross-fostering experiments have shown that the proportion of the plumage surface covered with black spots, a composite eumelanin trait termed "plumage spottiness", in females positively covaries with offspring humoral immunocompetence, and negatively with offspring parasite resistance (i.e. the ability to reduce the fecundity of ectoparasites) and fluctuating asymmetry of wing feathers. However, it is unclear which component of plumage spottiness causes these relationships, namely genes responsible for variation in the number of spots or in spot diameter. The number of spots reflects variation in the expression of genes triggering the switch from no eumelanin production to production, whereas spot diameter reflects variation in the expression of genes determining the amount of eumelanin produced per spot. In the present study, multiple regression analyses, performed on the same data sets, showed that humoral immunocompetence, parasite resistance and wing fluctuating asymmetry of cross-fostered offspring covary with spot diameter measured in their genetic mother, but not with the number of spots.
This suggests that genes responsible for variation in the quantity of eumelanin produced per spot are responsible for covariation between a melanin ornament and individual attributes. In contrast, genes responsible for variation in number of black spots may not play a significant role. Covariation between a eumelanin female trait and offspring quality may therefore be due to an indirect effect of melanin production.

Relevance:

50.00%

Publisher:

Abstract:

Complete basis set and Gaussian-n methods were combined with Barone and Cossi's implementation of the polarizable conductor model (CPCM) continuum solvation methods to calculate pKa values for six carboxylic acids. Four different thermodynamic cycles were considered in this work. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol, to calculate pKa values with cycle 1. The complete basis set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. Thermodynamic cycles that include an explicit water in the cycle are not accurate when the free energy of solvation of a water molecule is used, but appear to become accurate when the experimental free energy of vaporization of water is used. This apparent improvement is an artifact of the standard state used in the calculation. Geometry relaxation in solution does not improve the results when using these latter cycles. The use of cycle 1 and the complete basis set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit. © 2001 John Wiley & Sons, Inc. Int J Quantum Chem, 2001
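A pKa computed from a thermodynamic cycle like cycle 1 reduces to pKa = ΔG_aq / (RT ln 10), with ΔG_aq for AH(aq) → A⁻(aq) + H⁺(aq) assembled from gas-phase free energies plus solvation free energies. The sketch below uses the H⁺ values quoted above; the acid/anion energies passed in are hypothetical placeholders, and the cycle layout is a generic textbook form, not necessarily the exact bookkeeping of the paper.

```python
import math

R = 1.987204e-3      # gas constant, kcal/(mol*K)
T = 298.15           # K
G_GAS_H = -6.28      # kcal/mol, gas-phase free energy of H+ (from the text)
DG_SOLV_H = -264.61  # kcal/mol, experimental solvation free energy of H+

def pka_cycle1(g_gas_ha, g_gas_a, dg_solv_ha, dg_solv_a):
    """pKa from a cycle-1-style thermodynamic cycle:
    dG_aq = [G_gas(A-) + G_gas(H+) - G_gas(AH)]
          + [dG_solv(A-) + dG_solv(H+) - dG_solv(AH)]
    pKa = dG_aq / (R * T * ln 10)
    All energies in kcal/mol."""
    dg_gas = g_gas_a + G_GAS_H - g_gas_ha
    dg_aq = dg_gas + dg_solv_a + DG_SOLV_H - dg_solv_ha
    return dg_aq / (R * T * math.log(10))
```

Note the scale: at 298.15 K, one pKa unit corresponds to only about 1.36 kcal/mol, which is why sub-0.5 kcal/mol accuracy in both legs of the cycle is needed for half-unit pKa accuracy.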

Relevance:

50.00%

Publisher:

Abstract:

Complete Basis Set and Gaussian-n methods were combined with CPCM continuum solvation methods to calculate pKa values for six carboxylic acids. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol to calculate pKa values with Cycle 1. The Complete Basis Set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. The use of Cycle 1 and the Complete Basis Set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit.

Relevance:

50.00%

Publisher:

Abstract:

The complete basis set methods CBS-4, CBS-QB3, and CBS-APNO, and the Gaussian methods G2 and G3 were used to calculate the gas phase energy differences between six different carboxylic acids and their respective anions. Two different continuum methods, SM5.42R and CPCM, were used to calculate the free energy differences of solvation for the acids and their anions. Relative pKa values were calculated for each acid using one of the acids as a reference point. The CBS-QB3 and CBS-APNO gas phase calculations, combined with the CPCM/HF/6-31+G(d)//HF/6-31G(d) or CPCM/HF/6-31+G(d)//HF/6-31+G(d) continuum solvation calculations on the lowest energy gas phase conformer, and with the conformationally averaged values, give results accurate to ½ pKa unit. © 2001 American Institute of Physics.

Relevance:

50.00%

Publisher:

Abstract:

Funding The International Primary Care Respiratory Group (IPCRG) provided funding for this research project as a UNLOCK group study, for which the funding was obtained through an unrestricted grant from Novartis AG, Basel, Switzerland. The latter funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Database access for the OPCRD was provided by the Respiratory Effectiveness Group (REG) and Research in Real Life; the OPCRD statistical analysis was funded by REG. The Bocholtz Study was funded by PICASSO for COPD, an initiative of Boehringer Ingelheim, Pfizer and the Caphri Research Institute, Maastricht University, The Netherlands.

Relevance:

50.00%

Publisher:

Abstract:

Interleukins 4 (IL-4) and 13 (IL-13) have been found previously to share receptor components on some cells, as revealed by receptor cross-competition studies. In the present study, the cloning is described of murine NR4, a previously unrecognized receptor identified on the basis of sequence similarity with members of the hemopoietin receptor family. mRNA encoding NR4 was found in a wide range of murine cells and tissues. By using transient expression in COS-7 cells, NR4 was found to encode the IL-13 receptor alpha chain, a low-affinity receptor capable of binding IL-13 but not IL-4 or interleukins 2, -7, -9, or -15. Stable expression of the IL-13 receptor alpha chain (NR4) in CTLL-2 cells resulted in the generation of high-affinity IL-13 receptors capable of transducing a proliferative signal in response to IL-13 and, moreover, led to competitive cross-reactivity in the binding of IL-4 and IL-13. These results suggest that the IL-13 receptor alpha chain (NR4) is the primary binding subunit of the IL-13 receptor and may also be a component of IL-4 receptors.

Relevance:

40.00%

Publisher:

Abstract:

A correlated many-body basis function is used to describe the 4He trimer and small helium clusters (4HeN) with N = 4-9. A realistic helium dimer potential is adopted. The ground state results for the 4He dimer and trimer are in close agreement with earlier findings, but no evidence is found for the existence of an Efimov state in the trimer for the actual 4He-4He interaction. However, on decreasing the potential strength, we calculate several excited states of the trimer which exhibit Efimov character. We also solve for excited-state energies of these clusters, which are in good agreement with the Monte Carlo hyperspherical description. (C) 2011 American Institute of Physics. [doi:10.1063/1.3583365]

Relevance:

40.00%

Publisher:

Abstract:

This article presents maximum likelihood estimators (MLEs) and log-likelihood ratio (LLR) tests for the eigenvalues and eigenvectors of Gaussian random symmetric matrices of arbitrary dimension, where the observations are independent repeated samples from one or two populations. These inference problems are relevant in the analysis of diffusion tensor imaging data and polarized cosmic background radiation data, where the observations are, respectively, 3 x 3 and 2 x 2 symmetric positive definite matrices. The parameter sets involved in the inference problems for eigenvalues and eigenvectors are subsets of Euclidean space that are either affine subspaces, embedded submanifolds that are invariant under orthogonal transformations or polyhedral convex cones. We show that for a class of sets that includes the ones considered in this paper, the MLEs of the mean parameter do not depend on the covariance parameters if and only if the covariance structure is orthogonally invariant. Closed-form expressions for the MLEs and the associated LLRs are derived for this covariance structure.
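For intuition about the result stated above: for a Gaussian model with an unrestricted mean, the MLE of the mean matrix is simply the sample average, and under an orthogonally invariant covariance structure, eigenvalue and eigenvector estimates then follow from the spectrum of that average. The simulation sketch below illustrates this simplest case only; it is not the paper's estimators for restricted parameter sets (affine subspaces, invariant submanifolds, polyhedral cones), and all dimensions and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetrize(a):
    """Project a square matrix onto the symmetric matrices."""
    return (a + a.T) / 2

# Hypothetical 3x3 population mean with known eigenvalues 1, 2, 3.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
mean = q @ np.diag([3.0, 2.0, 1.0]) @ q.T

# Repeated independent samples: mean plus symmetric Gaussian noise.
samples = [symmetrize(mean + 0.05 * rng.normal(size=(3, 3)))
           for _ in range(500)]

# With an unrestricted mean, the MLE of the mean matrix is the sample
# average; eigenvalue/eigenvector estimates come from its spectrum.
mle_mean = np.mean(samples, axis=0)
eigvals, eigvecs = np.linalg.eigh(mle_mean)
```

With 500 samples the estimated eigenvalues land close to the population values (1, 2, 3), illustrating the plug-in spectral estimates in the easiest setting.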

Relevance:

40.00%

Publisher:

Abstract:

Despite its importance to agriculture, the genetic basis of heterosis is still not well understood. The main competing hypotheses include dominance, overdominance, and epistasis. NC design III is an experimental design that has been used for estimating the average degree of dominance of quantitative trait loci (QTL) and also for studying heterosis. In this study, we first develop a multiple-interval mapping (MIM) model for design III that provides a platform to estimate the number, genomic positions, augmented additive and dominance effects, and epistatic interactions of QTL. The model can be used for parents with any generation of selfing. We apply the method to two data sets, one for maize and one for rice. Our results show that heterosis in maize is mainly due to dominant gene action, although overdominance of individual QTL could not be completely ruled out owing to the mapping resolution and limitations of NC design III. For rice, the estimated QTL dominance effects could not explain the observed heterosis. There is evidence that additive × additive epistatic effects of QTL could be the main cause of the heterosis in rice. The difference in the genetic basis of heterosis seems to be related to the open versus self pollination of the two species. The MIM model for NC design III is implemented in Windows QTL Cartographer, a freely distributed software package.

Relevance:

40.00%

Publisher:

Abstract:

The magnetic resonance imaging contrast agent Endorem™, a colloidal suspension of dextran-coated superparamagnetic iron oxide nanoparticles (mean diameter 5.5 nm), was characterized with several measurement techniques to determine the parameters of its most important physical and chemical properties. Each nanoparticle is assumed to consist of an Fe3O4 monodomain, and its oxidation to γ-Fe2O3 was observed to occur at 253.1 °C. Mössbauer spectroscopy showed superparamagnetic behavior of the magnetic nanoparticles. The magnetic resonance results show an increase of the relaxation times T1, T2, and T2* with decreasing concentration of iron oxide nanoparticles. The relaxation effects of SPION contrast agents are influenced by their local concentration as well as by the applied field strength and the environment in which these agents interact with the surrounding protons. The proton relaxation rates varied linearly with concentration. The measured values of the thermo-optic coefficient ∂n/∂T, thermal conductivity K, optical birefringence Δn0, nonlinear refractive index n2, nonlinear absorption β, and third-order nonlinear susceptibility |χ(3)| are also reported.

Relevance:

40.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
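The dependence induced by the sum-to-one constraint is easy to demonstrate numerically. The sketch below draws abundance fractions from a Dirichlet distribution (an assumption made here purely for illustration) and checks that distinct fractions are negatively correlated, which already rules out mutual independence and hence the core ICA assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Abundance fractions on the simplex: nonnegative, summing to one.
a = rng.dirichlet(np.ones(3), size=10000)

# The sum-to-one constraint forces statistical dependence: raising one
# fraction must lower the others, so the covariance between any two
# distinct fractions is negative and they cannot be independent.
cov = np.cov(a.T)
assert cov[0, 1] < 0 and cov[0, 2] < 0 and cov[1, 2] < 0
```

Independent sources would require all off-diagonal covariances to be zero, so this constraint alone is enough to break the ICA model for abundance fractions.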
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief resume of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.