17 results for SCALE MIXTURES OF SKEW-NORMAL DISTRIBUTIONS
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
This paper elaborates on the DECA algorithm [1] for blindly unmixing hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data sets, in which the geometry-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on the abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
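For reference, the linear mixing model and Dirichlet-mixture abundance prior described in this abstract can be written out explicitly; the notation below is chosen for illustration and is not quoted from the paper.

```latex
% Linear mixing model: pixel y_i mixes the p endmember signatures
% (columns of M), weighted by abundances a_i, under the acquisition
% constraints (non-negativity, constant sum):
\mathbf{y}_i = \mathbf{M}\,\boldsymbol{\alpha}_i + \mathbf{n}_i,
\qquad \alpha_{ik} \ge 0, \qquad \sum_{k=1}^{p} \alpha_{ik} = 1,
% with the abundance prior taken as a K-mode mixture of Dirichlet densities:
p(\boldsymbol{\alpha}_i) = \sum_{q=1}^{K} \epsilon_q\,
\mathcal{D}\!\left(\boldsymbol{\alpha}_i \mid \boldsymbol{\theta}_q\right),
\qquad \sum_{q=1}^{K} \epsilon_q = 1.
```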
Abstract:
We investigate the effect of distinct bonding energies on the onset of criticality of low-functionality fluid mixtures. We focus on mixtures of particles with two and three patches, as this includes the mixture where "empty" fluids were originally reported. In addition to the number of patches, the species differ in the type of patches or bonding sites. For simplicity, we consider that the patches on each species are identical: one species has three patches of type A and the other has two patches of type B. We have found a rich phase behavior with closed miscibility gaps, liquid-liquid demixing, and negative azeotropes. Liquid-liquid demixing was found to pre-empt the "empty" fluid regime of these mixtures when the AB bonds are weaker than the AA or BB bonds. By contrast, mixtures in this class exhibit "empty" fluid behavior when the AB bonds are stronger than at least one of the other two. Mixtures with bonding energies ε(BB) = ε(AB) and ε(AA) < ε(BB) were found to exhibit an unusual negative azeotrope. (C) 2011 American Institute of Physics. [doi:10.1063/1.3561396]
Abstract:
This paper introduces a new unsupervised hyperspectral unmixing method conceived for linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by purely geometry-based algorithms, is far away from the true simplex associated with the endmembers. The proposed method, an extension of our previous studies, resorts to a statistical framework. The abundance fraction prior is a mixture of Dirichlet densities, thus automatically enforcing the constraints on the abundance fractions imposed by the acquisition process, namely, nonnegativity and sum-to-one. A cyclic minimization algorithm is developed in which 1) the number of Dirichlet modes is inferred based on the minimum description length principle; 2) a generalized expectation maximization algorithm is derived to infer the model parameters; and 3) a sequence of augmented Lagrangian-based optimizations is used to compute the signatures of the endmembers. Experiments on simulated and real data are presented to show the effectiveness of the proposed algorithm in unmixing problems beyond the reach of the geometry-based state-of-the-art competitors.
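To make the "highly mixed" regime concrete, the sketch below generates synthetic pixels whose abundances are drawn from a two-mode Dirichlet mixture with large concentration parameters, so that no pure pixels occur (the regime where geometry-based methods fail); the signatures, mode weights, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

p, bands, n = 3, 50, 5000          # endmembers, spectral bands, pixels
M = rng.random((bands, p))         # illustrative endmember signatures

# Two-mode Dirichlet mixture over the abundance simplex; large
# concentration parameters keep abundances away from the vertices,
# i.e. no pure pixels occur in the data.
weights = [0.6, 0.4]
alphas = [np.array([8.0, 6.0, 7.0]), np.array([5.0, 9.0, 6.0])]

modes = rng.choice(len(weights), size=n, p=weights)
A = np.stack([rng.dirichlet(alphas[m]) for m in modes])  # n x p, rows sum to 1

Y = A @ M.T + 0.01 * rng.standard_normal((n, bands))     # noisy linear mixtures
print(A.min(), A.sum(axis=1)[:3])  # abundances positive, summing to one
```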
Abstract:
We investigate the thermodynamics and percolation regimes of model binary mixtures of patchy colloidal particles. The particles of each species have three sites of two types, one of which promotes bonding of particles of the same species while the other promotes bonding of different species. We find up to four percolated structures at low temperatures and densities: two gels where only one species percolates, a mixed gel where particles of both species percolate but neither species percolates separately, and a bicontinuous gel where particles of both species percolate separately forming two interconnected networks. The competition between the entropy and the energy of bonding drives the stability of the different percolating structures. Appropriate mixtures exhibit one or more connectivity transitions between the mixed and bicontinuous gels, as the temperature and/or the composition changes.
Abstract:
We investigate the phase behaviour of 2D mixtures of bifunctional and trifunctional patchy particles and 3D mixtures of bifunctional and tetrafunctional patchy particles by means of Monte Carlo simulations and Wertheim theory. We start by computing the critical points of the pure systems and then investigate how the critical parameters change upon lowering the temperature. We extend the successive umbrella sampling method to mixtures to make it possible to extract information about the phase behaviour of the system at a fixed temperature for the whole range of densities and compositions of interest. (C) 2013 AIP Publishing LLC.
Abstract:
We have generalized earlier work on anchoring of nematic liquid crystals by Sullivan, and Sluckin and Poniewierski, in order to study transitions which may occur in binary mixtures of nematic liquid crystals as a function of composition. Microscopic expressions have been obtained for the anchoring energy of (i) a liquid crystal in contact with a solid aligning surface; (ii) a liquid crystal in contact with an immiscible isotropic medium; (iii) a liquid crystal mixture in contact with a solid aligning surface. For (iii), possible phase diagrams of anchoring angle versus dopant concentration have been calculated using a simple liquid crystal model. These exhibit some interesting features including re-entrant conical anchoring, for what are believed to be realistic values of the molecular parameters. A way of relaxing the most drastic approximation implicit in the above approach is also briefly discussed.
Abstract:
We calculate the equilibrium thermodynamic properties, percolation threshold, and cluster distribution functions for a model of associating colloids, which consists of hard spherical particles having on their surfaces three short-ranged attractive sites (sticky spots) of two different types, A and B. The thermodynamic properties are calculated using Wertheim's perturbation theory of associating fluids. This also allows us to find the onset of self-assembly, which can be quantified by the maxima of the specific heat at constant volume. The percolation threshold is derived, under the no-loop assumption, for the correlated bond model: in all cases there are two percolated phases that become identical at a critical point, when one exists. Finally, the cluster size distributions are calculated by mapping the model onto an effective model, characterized by a state-dependent functionality f̄ and a unique bonding probability p̄. The mapping is based on the asymptotic limit of the cluster distribution functions of the generic model, and the effective parameters are defined through the requirement that the equilibrium cluster distributions of the true and effective models have the same number-averaged and weight-averaged sizes at all densities and temperatures. We also study the model numerically in the case where BB interactions are missing. In this limit, AB bonds either provide branching between A-chains (Y-junctions) if ε(AB)/ε(AA) is small, or drive the formation of a hyperbranched polymer if ε(AB)/ε(AA) is large. We find that the theoretical predictions describe the numerical data quite accurately, especially in the region where Y-junctions are present. There is fairly good agreement between theoretical and numerical results both for the thermodynamic (number of bonds and phase coexistence) and the connectivity properties of the model (cluster size distributions and percolation locus).
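For context, the no-loop (tree-like) percolation criterion invoked above reduces, for a single species of functionality f, to the classical Flory-Stockmayer condition; the effective single-species description with f̄ and p̄ is naturally read against this relation. This is the textbook result, not a formula quoted from the abstract.

```latex
% Flory--Stockmayer gelation condition for f-functional units
% bonding independently with probability p (no closed loops):
p_c = \frac{1}{f - 1},
% so the effective model above percolates when \bar{p} > 1/(\bar{f}-1).
```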
Abstract:
In a competitive electricity market with an environment of uncertainty, generation companies adopt strategies aimed at maximizing profit and minimizing risk. In this context, in order to develop an adequate risk management strategy, it is extremely important to take into account the different options for trading energy in a liberalized market, so as to support decision making in risk management. This work presents a model that evaluates the best strategy of an electricity producer trading in a competitive market, where there are two possible markets for energy transactions: the organized market (pool) and the bilateral contracts market. The producer tries to maximize its profits and minimize the corresponding risks by selecting the best balance between the two possible markets (pool and bilateral). The bilateral contracts market aims to adequately manage the risks inherent in the operation of short-term markets (the organized market) and to give the seller/buyer a real ability to choose the supplier with whom to trade. The model presented in this work provides an explicit characterization of risk with respect to the market agent's attitude towards risk, measured by the Value at Risk (VaR), referred to in this work as Profit-at-Risk (PaR). The price and volume risk factors are characterized by a mean value and a standard deviation, and are modeled by normal distributions. Numerical results are obtained using Monte Carlo simulation implemented in Matlab, applied to a producer that holds a diversified portfolio of generation technologies, over a time horizon of one year. This dissertation is organized as follows: Chapters 1, 2 and 3 describe the state of the art related to risk management in electricity trading. Chapter 4 describes the model developed and implemented, and also presents a case study in which the model is applied to evaluate a producer's trading risk. Chapter 5 presents the main conclusions.
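A minimal sketch of the Monte Carlo procedure described above (normal price and volume risk factors, a simulated profit distribution, Profit-at-Risk as a low quantile), written here in Python rather than the Matlab used in the dissertation; all parameter values and the pool/bilateral split are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Risk factors modeled as normals (means and stds are illustrative assumptions)
pool_price = rng.normal(50.0, 12.0, n_sim)       # EUR/MWh
volume = rng.normal(800_000.0, 60_000.0, n_sim)  # MWh over one year

bilateral_price = 48.0   # fixed bilateral contract price (assumption)
cost = 30.0              # average production cost, EUR/MWh (assumption)

def profit(bilateral_share):
    """Yearly profit for a given fraction of output sold bilaterally."""
    q_bil = bilateral_share * volume
    q_pool = (1.0 - bilateral_share) * volume
    return q_bil * (bilateral_price - cost) + q_pool * (pool_price - cost)

for share in (0.0, 0.5, 1.0):
    pr = profit(share)
    par = np.percentile(pr, 5)  # Profit-at-Risk at the 95% confidence level
    print(f"bilateral share {share:.1f}: "
          f"mean profit {pr.mean():.3e} EUR, PaR {par:.3e} EUR")
```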
The use of non-standard CT conversion ramps for Monte Carlo verification of 6 MV prostate IMRT plans
Abstract:
Monte Carlo (MC) dose calculation algorithms have been widely used to verify the accuracy of intensity-modulated radiotherapy (IMRT) dose distributions computed by conventional algorithms, due to their ability to precisely account for the effects of tissue inhomogeneities and multileaf collimator characteristics. The two approaches differ, however, in how dose is calculated and reported. Whereas dose from conventional methods is traditionally computed and reported as the water-equivalent dose (Dw), MC dose algorithms calculate and report dose to medium (Dm). In order to compare both methods consistently, the conversion of MC Dm into Dw is therefore necessary. This study aims to assess the effect of applying the conversion of MC-based Dm distributions to Dw for prostate IMRT plans generated for 6 MV photon beams. MC phantoms were created from the patient CT images using three different ramps to convert CT numbers into material and mass density: a conventional four-material ramp (CTCREATE) and two simplified CT conversion ramps: (1) air and water with variable densities and (2) air and water with unit density. MC simulations were performed using the BEAMnrc code for the treatment head simulation and the DOSXYZnrc code for the patient dose calculation. The conversion of Dm to Dw by scaling with the stopping power ratios of water to medium was also performed as a post-MC calculation step. The comparison of MC dose distributions calculated in conventional and simplified (water with variable densities) phantoms showed that the effect of material composition on dose-volume histograms (DVH) was less than 1% for soft tissue and about 2.5% near and inside bone structures. Comparison of the MC distributions computed in the two simplified water phantoms showed that the effect of material density on the DVH was less than 1% for all tissues. Additionally, MC dose distributions were compared with the predictions from an Eclipse treatment planning system (TPS), which employed a pencil beam convolution (PBC) algorithm with Modified Batho Power Law heterogeneity correction. Eclipse PBC and MC calculations (conventional and simplified phantoms) agreed well (<1%) for soft tissues. For femoral heads, differences up to 3% were observed between the DVH for Eclipse PBC and MC calculated in conventional phantoms. The use of the CT conversion ramp of water with variable densities for MC simulations showed no dose discrepancies beyond 0.5% with the PBC algorithm. Moreover, converting Dm to Dw using mass stopping power ratios resulted in a significant shift (up to 6%) in the DVH for the femoral heads compared to the Eclipse PBC DVH. Our results show that, for prostate IMRT plans delivered with 6 MV photon beams, no conversion of MC dose from medium to water using stopping power ratios is needed. In contrast, MC dose calculations using water with variable density may be a simple way to solve the problem found using the dose conversion method based on the stopping power ratio.
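The post-calculation conversion referred to above is commonly done with the spectrum-averaged ratio of unrestricted mass collision stopping powers of water to medium (the relation popularized by Siebers et al.); written out for reference:

```latex
% Medium-to-water dose conversion via the mean water-to-medium mass
% collision stopping-power ratio, averaged over the local electron
% fluence spectrum \Phi_E:
D_w = D_m\,\bar{s}_{w,\mathrm{med}},
\qquad
\bar{s}_{w,\mathrm{med}}
= \frac{\int \Phi_E\,\left(S/\rho\right)_{w}\,\mathrm{d}E}
       {\int \Phi_E\,\left(S/\rho\right)_{\mathrm{med}}\,\mathrm{d}E}.
```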
Abstract:
Micro-generation is the small-scale production of heat and/or electricity from a low-carbon source and can be a powerful driver for carbon reduction, behavior change, security of supply and economic value. The energy conversion technologies can include photovoltaic panels, micro combined heat and power, micro wind, heat pumps, solar thermal systems, fuel cells and micro hydro schemes. In this paper, a brief survey of the availability of the conversion equipment and the prices of micro wind turbines and photovoltaic systems is made, and a comparison between these two technologies is performed in terms of resource availability and costs. An analysis of the new legal framework published in Portugal is carried out to assess whether the incentives for individuals' investment in sustainable and local energy production are worthwhile from their point of view. An economic evaluation of these alternatives, accounting for the government's incentives, should lead, in most cases, to attractive rates of return on the investment. Apart from the attractiveness of the investment, there are other aspects that should be taken into account, namely the benefits that these choices bring to us all. The idea is that micro-generation will not only make a significant direct contribution to carbon reduction targets, it will also trigger a multiplier effect in behavior change by engaging hearts and minds, and providing more efficient use of energy by householders. The diversified profile of power generation by micro-generators, both in terms of location and timing, should reduce the impact of intermittency or plant failures, with significant gains for security of supply.
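As a sketch of the kind of economic evaluation the abstract alludes to, the following computes a net present value and simple payback for a hypothetical micro-generation investment; every figure (cost, yield, tariff, discount rate) is an invented assumption, not a value from the paper or from the Portuguese legal framework.

```python
# Simple discounted cash-flow check for a micro-generation investment.
# All figures are illustrative assumptions, not values from the paper.
capex = 12_000.0        # installed cost of a small PV system (EUR)
annual_yield = 4_500.0  # energy sold per year (kWh)
tariff = 0.40           # feed-in tariff (EUR/kWh), assumed
rate = 0.05             # discount rate
years = 15

cashflow = annual_yield * tariff
npv = -capex + sum(cashflow / (1 + rate) ** t for t in range(1, years + 1))
payback = capex / cashflow
print(f"NPV over {years} y: {npv:,.0f} EUR; simple payback: {payback:.1f} y")
```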
Abstract:
The emergence of smartphones with Wireless LAN (WiFi) network interfaces brought new challenges to application developers. The expected increase in user connectivity will raise users' expectations, for example regarding the performance of background applications. Unfortunately, the number and breadth of studies on the new patterns of user mobility and connectivity that result from the emergence of smartphones are still insufficient to support this claim. This paper contributes preliminary results from a large-scale study of the usage patterns of about 49000 devices and 31000 users who accessed at least one access point of the eduroam WiFi network on the campuses of the Lisbon Polytechnic Institute. Results confirm that the increasing number of smartphones has resulted in significant changes to the pattern of use, with impact on the amount of traffic and on users' connection time.
Abstract:
The present study is focused on the characterization of ultrafine particles emitted in the welding of steel using Ar+CO2 mixtures, and intends to analyze which main process parameters may influence the emission itself. It was found that the amount of emitted ultrafine particles (measured by particle number and alveolar deposited surface area) clearly depends on the distance to the welding front and also on the main welding parameters, namely the current intensity and the heat input of the welding process. The emission of airborne ultrafine particles seems to increase with the current intensity, as the fume formation rate does. When comparing the tested gas mixtures, higher emissions are observed for more oxidant mixtures, that is, mixtures with higher CO2 content, which result in higher arc stability. The latter mixtures originate higher concentrations of ultrafine particles (as measured by the number of particles per cm3 of air) and higher values of alveolar deposited surface area of particles, thus resulting in a more hazardous condition regarding workers' exposure. © 2014 Sociedade Portuguesa de Materiais (SPM). Published by Elsevier España, S.L. All rights reserved.
Abstract:
The reactions between 4'-phenyl-terpyridine (L) and nitrate, acetate or chloride Cu(II) salts led to the formation of [Cu(NO3)2L] (1), [Cu(OCOCH3)2L]·CH2Cl2 (2·CH2Cl2) and [CuCl2L]·[Cu(Cl)(μ-Cl)L]2 (3), respectively. Upon dissolving 1 in mixtures of DMSO-MeOH or EtOH-DMF, the compounds [Cu(H2O){OS(CH3)2}L](NO3)2 (4) and [Cu(HO)(CH3CH2OH)L](NO3) (5) were obtained, in this order. Reaction of 3 with AgSO3CF3 led to [CuCl(OSO2CF3)L] (6). The compounds were characterized by ESI-MS, IR, elemental analysis and electrochemical techniques and, for 2-6, also by single-crystal X-ray diffraction. By cyclic voltammetry, they undergo two single-electron irreversible reductions assigned to Cu(II) → Cu(I) and Cu(I) → Cu(0) and, for those of the same structural type, the reduction potential appears to correlate with the sum of the values of the Lever electrochemical ligand parameter EL, which is reported for the first time for copper complexes. Complexes 1-6 in combination with TEMPO (2,2,6,6-tetramethylpiperidinyl-1-oxyl radical) can exhibit a high catalytic activity, under mild conditions and in alkaline aqueous solution, for the aerobic oxidation of benzylic alcohols. Molar yields of up to 94% (based on the alcohol) with TON values of up to 320 were achieved after 22 h.
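The correlation noted above has the form of Lever's electrochemical parametrization, in which an observed redox potential varies linearly with the sum of the ligand EL values; the slope and intercept depend on the metal and redox couple. Shown here as the generic relation, with its applicability to these copper complexes being the paper's contribution:

```latex
% Lever's linear electrochemical parametrization:
E_{\mathrm{obs}} = S_M \sum_i E_L(\mathrm{ligand}_i) + I_M
% S_M, I_M: slope and intercept characteristic of the metal/redox couple.
```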
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is linear unmixing. The key to linear unmixing is to find the set of reference substances, also called endmembers, that are representative of a given scene. This paper presents the vertex component analysis (VCA), a new method to unmix linear mixtures of hyperspectral sources. The algorithm is unsupervised and exploits a simple geometric fact: endmembers are the vertices of a simplex. The algorithm complexity, measured in floating-point operations, is O(n), where n is the sample size. The effectiveness of the proposed scheme is illustrated using simulated data.
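The geometric fact the abstract exploits translates into a simple iteration: repeatedly project the data onto a direction orthogonal to the subspace spanned by the endmembers found so far, and take the pixel with the extreme projection as the next vertex. The sketch below captures only this core idea and omits the published algorithm's preprocessing (SNR estimation, dimensionality reduction); all names and data are illustrative.

```python
import numpy as np

def vca_core(Y, p, seed=0):
    """Stripped-down VCA-style endmember extraction (illustrative sketch).

    Y : bands x n matrix of (dimension-reduced) spectral vectors.
    p : number of endmembers to extract.
    Repeatedly projects the data onto a random direction orthogonal to
    the subspace spanned by the endmembers found so far; the pixel with
    the largest absolute projection is taken as the next vertex.
    """
    rng = np.random.default_rng(seed)
    bands, n = Y.shape
    E = np.zeros((bands, p))
    for k in range(p):
        w = rng.standard_normal(bands)
        if k > 0:  # remove the component lying in span(E[:, :k])
            U, _, _ = np.linalg.svd(E[:, :k], full_matrices=False)
            w = w - U @ (U.T @ w)
        proj = w @ Y                     # one pass over the pixels: O(n)
        E[:, k] = Y[:, np.argmax(np.abs(proj))]
    return E

# Illustrative use with random data:
Y = np.random.default_rng(1).random((50, 1000))  # 50 bands, 1000 pixels
print(vca_core(Y, 4).shape)                      # (50, 4)
```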
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures (see the sketch after the next paragraph). As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, under given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
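To illustrate the orthogonal subspace projection described earlier in this introduction, the sketch below builds the projector onto the orthogonal complement of the undesired signatures and applies it pixel-wise; the matrix shapes and random data are illustrative assumptions, not a reproduction of any referenced implementation.

```python
import numpy as np

def osp_detector(Y, U, d):
    """Orthogonal subspace projection (OSP) sketch.

    Y : bands x n matrix of pixel spectra.
    U : bands x k matrix of undesired endmember signatures.
    d : bands-long signature of interest.
    Projects every pixel onto the orthogonal complement of span(U),
    suppressing the undesired signatures, then correlates with d.
    """
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)  # I - U (U^T U)^{-1} U^T
    return d @ P @ Y                                # detector output per pixel

# Illustrative use with random data:
rng = np.random.default_rng(1)
Y = rng.random((100, 2000))   # 100 bands, 2000 pixels
U = rng.random((100, 3))      # 3 undesired signatures
d = rng.random(100)           # target signature
print(osp_detector(Y, U, d).shape)  # (2000,)
```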
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.