963 results for Weighted by Sum Assured
Abstract:
Master's degree in Actuarial Science
Abstract:
The molecular arrangement in organic thin films is crucial for their increasing technological applications. Here, we use vibrational spectroscopy by sum-frequency generation (SFG) to study the ordering of polyelectrolyte layers adsorbed on silica for all steps of layer-by-layer (LbL) self-assembly. In situ measurements during adsorption and rinsing showed that the adsorbed polymer has a disordered conformation and confirmed surface charge overcompensation upon polyelectrolyte adsorption by probing the interfacial electric field. In dry films, the polymer chains acquired a net orientational ordering, which was affected, however, by the adsorption of subsequent layers. Such a detailed characterization may allow the control of LbL film structure and functionality with unprecedented power.
Abstract:
Sum-Frequency Vibrational Spectroscopy (SFVS) has been used to investigate the effect of nitrogen-flow drying on the molecular ordering of Layer-by-Layer (LbL) films of poly(allylamine hydrochloride) (PAH) alternated with poly(styrene sulfonate) (PSS). We find that films dried by spontaneous water evaporation are more ordered and homogeneous than films dried by nitrogen flow. The latter are quite inhomogeneous and may have regions with highly disordered polymer conformation. We propose that drying by spontaneous water evaporation reduces the effect of drag by the drying front, while during nitrogen-flow drying the fast evaporation of water "freezes" the disordered conformation of adsorbed polyelectrolyte molecules. These findings are important for many applications of LbL films, since device performance usually depends on film morphology and its molecular structure.
Abstract:
Sum-frequency generation (SFG), an interface-specific spectroscopic technique, was used to characterize changes in the macromolecular structure of the cationic surfactant dodecyltrimethylammonium chloride (DTAC) at the silica/water interface over a pH range of 3 to 11. The experimental conditions were chosen to mimic the conditions most commonly encountered during enhanced oil recovery operations. In particular, silica was studied because it is one of the components of the mineral surfaces of sandstone reservoirs, and surfactant adsorption was studied at an ionic strength relevant to hydraulic fracturing fluids. The SFG spectra showed detectable peaks of increasing amplitude in the methylene and methyl stretching region as the pH was lowered to 3 or raised to 11, suggesting changes in the structure of the surfactant aggregates at the silica/water interface at DTAC concentrations above the critical micelle concentration. In addition, changes in SFG intensity were observed in the water spectrum as the DTAC concentration increased from 0.2 to 50 mM under acidic, neutral, and alkaline conditions. At pH 3, near the point of zero charge of the silica surface, the excess positive charge due to adsorption of the cationic surfactant creates an electrostatic field that orients the water molecules at the interface. At pH 7 and 11, values above the point of zero charge of the silica surface, the negative electrostatic field at the silica/water interface decreases by an order of magnitude upon surfactant adsorption, as a result of the compensation of the negative surface charge by the positive charge of DTAC. The SFG results were correlated with contact angle and interfacial tension measurements at pH 3, 7, and 11.
Abstract:
This paper develops a general framework for valuing a wide range of derivative securities. Rather than focusing on the stochastic process of the underlying security and developing an instantaneously riskless hedge portfolio, we focus on the terminal distribution of the underlying security. This enables the derivative security to be valued as the weighted sum of a number of component pieces. The component pieces are simply the different payoffs that the security generates in different states of the world, and they are weighted by the probability of the particular state of the world occurring. A full set of derivations is provided. To illustrate its use, the valuation framework is applied to plain-vanilla call and put options, as well as a range of derivatives including caps, floors, collars, supershares, and digital options.
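The core idea of the abstract, valuing a derivative as the discounted, probability-weighted sum of its state-contingent payoffs, can be sketched in a few lines. The discrete terminal distribution, strike, and discount factor below are illustrative assumptions, not the paper's specific parameterization.

    import numpy as np

    def state_price_value(payoff, terminal_prices, probabilities, discount_factor):
        """Value a derivative as the discounted, probability-weighted sum of payoffs."""
        payoffs = np.array([payoff(s) for s in terminal_prices])
        return discount_factor * np.sum(payoffs * probabilities)

    # Illustrative example: a plain-vanilla call and a digital option under an
    # assumed discrete risk-neutral terminal distribution (values chosen only for demonstration).
    terminal_prices = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
    probabilities   = np.array([0.10, 0.20, 0.40, 0.20, 0.10])   # sums to 1
    strike, discount_factor = 100.0, 0.95

    call_value = state_price_value(lambda s: max(s - strike, 0.0),
                                   terminal_prices, probabilities, discount_factor)
    digital_value = state_price_value(lambda s: 1.0 if s > strike else 0.0,
                                      terminal_prices, probabilities, discount_factor)
    print(call_value, digital_value)

The same routine values any payoff listed in the abstract (caps, floors, collars, supershares) simply by swapping the payoff function.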
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data in which geometry-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation-maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
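As a rough illustration of how an MDL criterion selects a model order, the sketch below scores candidate numbers of mixture modes from their fitted log-likelihoods. The penalty form and the hypothetical fit_mixture helper are generic assumptions, not DECA's exact formulation.

    import numpy as np

    def mdl_score(log_likelihood, n_free_params, n_samples):
        # Generic MDL/BIC-style description length: -log L + (k/2) log N.
        return -log_likelihood + 0.5 * n_free_params * np.log(n_samples)

    def select_n_modes(candidate_orders, fit_mixture, data):
        """Pick the number of Dirichlet modes minimizing the MDL score.

        fit_mixture(data, k) is assumed (hypothetical helper) to return
        (log_likelihood, n_free_params) for a k-mode mixture fitted, e.g., by GEM.
        """
        scores = {}
        for k in candidate_orders:
            log_l, n_params = fit_mixture(data, k)
            scores[k] = mdl_score(log_l, n_params, len(data))
        return min(scores, key=scores.get), scores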
Abstract:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be statistically independent, thus compromising the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve as the signature variability, the number of endmembers, and the signal-to-noise ratio increase. In any case, some endmembers are always incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
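A quick numerical check of the paper's central premise, that the constant-sum constraint makes abundance fractions statistically dependent, can be done by sampling fractions from a Dirichlet distribution (chosen here purely for illustration) and inspecting the correlations between components.

    import numpy as np

    rng = np.random.default_rng(0)

    # Abundance fractions for 3 endmembers: non-negative and summing to one.
    abundances = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=100000)

    # The sum-to-one constraint forces negative correlation between fractions,
    # so they cannot be statistically independent (violating the ICA assumption).
    print(np.corrcoef(abundances, rowvar=False))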
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (as in intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It assumes that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smallest convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented toward real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
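The extraction step described above, repeatedly projecting the data onto a direction orthogonal to the subspace spanned by the endmembers found so far and taking the extreme of the projection, can be sketched as follows. This is a simplified illustration of that idea under a pure-pixel assumption, not the full VCA implementation.

    import numpy as np

    def extract_endmembers(spectra, n_endmembers, seed=0):
        """spectra: (n_pixels, n_bands) array; returns indices of extreme pixels."""
        rng = np.random.default_rng(seed)
        n_pixels, n_bands = spectra.shape
        basis = np.zeros((n_bands, 0))
        indices = []
        for _ in range(n_endmembers):
            # Direction orthogonal to the subspace spanned by current endmembers.
            direction = rng.standard_normal(n_bands)
            if basis.shape[1] > 0:
                direction = direction - basis @ np.linalg.pinv(basis) @ direction
            direction /= np.linalg.norm(direction)
            # The extreme of the projection is taken as the next endmember.
            idx = int(np.argmax(np.abs(spectra @ direction)))
            indices.append(idx)
            basis = np.column_stack([basis, spectra[idx]])
        return indices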
Abstract:
Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image. This paper introduces a new unmixing method termed dependent component analysis (DECA). This method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
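The generative model DECA assumes, pixels formed as linear mixtures of endmember signatures with abundances drawn from a mixture of Dirichlet densities, can be simulated in a few lines. The endmember matrix, mode weights, Dirichlet parameters, and noise level below are arbitrary placeholders for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    n_pixels, n_bands, n_endmembers = 500, 50, 3
    endmembers = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))  # placeholder signatures

    # Mixture of two Dirichlet modes for the abundance fractions.
    mode_weights = [0.6, 0.4]
    dirichlet_params = [np.array([8.0, 1.0, 1.0]), np.array([1.0, 1.0, 8.0])]

    modes = rng.choice(len(mode_weights), size=n_pixels, p=mode_weights)
    abundances = np.stack([rng.dirichlet(dirichlet_params[m]) for m in modes])

    # Linear mixing model: each pixel is the abundance-weighted sum of signatures, plus noise.
    pixels = abundances @ endmembers.T + 0.001 * rng.standard_normal((n_pixels, n_bands))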
Abstract:
Agricultural activities exert different pressures on natural resources. In some areas this has led to soil deterioration that affects the sustainability of agricultural systems. Lists of indicators have been proposed to assess soil degradation; however, a robust methodological tool adapted to regional soil and climatic conditions is lacking. In addition, there is demand from farmers and institutions interested in guiding actions to preserve the soil. The objective of this project is to assess the physical, chemical and biological degradation of soils in agroecosystems of south-central Córdoba. To this end, we propose to develop a methodological tool consisting of a set of physical, chemical and biological indicators, with threshold values, integrated into degradation indices, to assist decision makers and farmers in decisions regarding soil degradation. The study area is an agricultural region of south-central Córdoba with more than 100 years of agriculture. The methodology begins with the characterization of land use and management systems, their classification, and the production of base maps of uses and management by means of remote sensing and surveys. Sampling sites will be selected through a semi-directed methodology using a GIS, ensuring at least one sampling point per mapping unit. Reference sites will be chosen as close as possible to a natural condition. The indicators to be evaluated come from lists proposed in previous work by the group, selected on the basis of international criteria and adapted to the soils of the region. Core and complementary indicators will be used. To obtain thresholds, values from the literature will be used on the one hand, and thresholds generated from the statistical distribution of each indicator in reference soils on the other. To standardize each indicator, a transformation function will be defined. Indicators will then be weighted by means of multivariate statistical analyses and integrated into physical, chemical and biological degradation indices, and into an overall degradation index. The approach will conclude with the development of two decision-support instruments: one at the regional scale, consisting of degradation maps based on environmental, land-use and management-system cartographic units, and one at the plot scale, which will report on the soil degradation of a particular plot in comparison with reference soils. Stakeholders will thus have robust tools for decision making regarding soil degradation at both regional and local scales.
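As a rough sketch of the integration step described above, standardizing each indicator with a transformation function and combining the results into a weighted index, the snippet below uses min-max scaling against reference thresholds and arbitrary weights. The indicator names, thresholds, and weights are illustrative placeholders, not the project's actual choices.

    def standardize(value, worst, best):
        """Map an indicator onto 0-1, where 1 is the reference (non-degraded) condition."""
        score = (value - worst) / (best - worst)
        return min(max(score, 0.0), 1.0)

    # Hypothetical indicators: (measured value, worst threshold, best/reference value).
    indicators = {
        "organic_carbon": (1.2, 0.5, 2.5),          # %
        "bulk_density": (1.45, 1.6, 1.1),           # g/cm3 (lower is better)
        "aggregate_stability": (55.0, 20.0, 90.0),  # %
    }
    weights = {"organic_carbon": 0.4, "bulk_density": 0.3, "aggregate_stability": 0.3}

    # Values near 1 indicate a condition close to the reference soils (less degraded);
    # a degradation index could be defined as 1 minus this value.
    soil_condition_index = sum(
        weights[name] * standardize(*values) for name, values in indicators.items()
    )
    print(round(soil_condition_index, 2))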
Abstract:
Explicitly correlated coupled-cluster calculations of intermolecular interaction energies for the S22 benchmark set of Jurecka, Sponer, Cerny, and Hobza (Phys. Chem. Chem. Phys. 2006, 8, 1985) are presented. Results obtained with the recently proposed CCSD(T)-F12a method and augmented double-zeta basis sets are found to be in very close agreement with basis-set-extrapolated conventional CCSD(T) results. Furthermore, we propose a dispersion-weighted MP2 (DW-MP2) approximation that combines the good accuracy of MP2 for complexes with predominantly electrostatic bonding and SCS-MP2 for dispersion-dominated ones. The MP2-F12 and SCS-MP2-F12 correlation energies are weighted by a switching function that depends on the relative HF and correlation contributions to the interaction energy. For the S22 set, this yields a mean absolute deviation of 0.2 kcal/mol from the CCSD(T)-F12a results. The method, which allows accurate results to be obtained at low cost, is also tested for a number of dimers that are not in the training set.
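The DW-MP2 combination can be illustrated schematically: the MP2 and SCS-MP2 correlation contributions are blended by a weight that switches on the ratio of the HF to the correlation part of the interaction energy. The sigmoid form and the parameters a and b below are placeholders for illustration; they are not the fitted switching function of the paper.

    import math

    def dw_mp2_interaction(e_hf, e_corr_mp2, e_corr_scs_mp2, a=1.0, b=2.0):
        """Schematic dispersion-weighted blend of MP2 and SCS-MP2 correlation energies.

        The weight w approaches 1 (plain MP2) for electrostatically dominated complexes,
        where the attractive HF contribution is large relative to the correlation part,
        and approaches 0 (SCS-MP2) for dispersion-dominated ones. a, b are illustrative.
        """
        ratio = e_hf / e_corr_mp2                      # relative HF vs. correlation contribution
        w = 1.0 / (1.0 + math.exp(-(a + b * ratio)))   # assumed sigmoid switching function
        return e_hf + w * e_corr_mp2 + (1.0 - w) * e_corr_scs_mp2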
Abstract:
We explore the ability of the recently established quasilocal density functional theory to describe the isoscalar giant monopole resonance. Within this theory we use the scaling approach and perform constrained calculations to obtain the cubic and inverse energy-weighted moments (sum rules) of the RPA strength. The meaning of the sum rule approach in this case is discussed. Numerical calculations are carried out using Gogny forces, and excellent agreement is found with HF+RPA results previously reported in the literature. The nuclear matter compression modulus predicted in our model lies in the range 210-230 MeV, which agrees with earlier findings. The information provided by the sum rule approach in the case of nuclei near the neutron drip line is also discussed.
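For reference, the moments (sum rules) of the RPA strength function for an excitation operator F, and the standard scaling and constrained estimates of the resonance energy built from them, take the textbook form below (a general summary, not specific to the quasilocal functional used here):

    m_k = \sum_n E_n^{\,k}\, \bigl|\langle n \,|\, F \,|\, 0 \rangle\bigr|^2 ,
    \qquad
    E_{\mathrm{scaling}} = \sqrt{\frac{m_3}{m_1}} ,
    \qquad
    E_{\mathrm{constrained}} = \sqrt{\frac{m_1}{m_{-1}}} .

Here the scaling approach provides the cubic moment m_3, while the constrained calculations provide the inverse energy-weighted moment m_{-1}.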
Abstract:
1. Species distribution models (SDMs) have become a standard tool in ecology and applied conservation biology. Modelling rare and threatened species is particularly important for conservation purposes. However, modelling rare species is difficult because the combination of few occurrences and many predictor variables easily leads to model overfitting. A new strategy using ensembles of small models was recently developed in an attempt to overcome this limitation of rare species modelling and has so far been tested successfully for only a single species. Here, we aim to test the approach more comprehensively on a large number of species, including a transferability assessment. 2. For each species, numerous small (here, bivariate) models were calibrated, evaluated and averaged to an ensemble weighted by AUC scores. These 'ensembles of small models' (ESMs) were compared to standard SDMs using three commonly used modelling techniques (GLM, GBM, Maxent) and their ensemble prediction. We tested 107 rare and under-sampled plant species of conservation concern in Switzerland. 3. We show that ESMs performed significantly better than standard SDMs. The rarer the species, the more pronounced the effects were. ESMs were also superior to standard SDMs and their ensemble when they were independently evaluated using a transferability assessment. 4. By averaging simple small models to an ensemble, ESMs avoid overfitting without losing explanatory power through reducing the number of predictor variables. They further improve the reliability of species distribution models, especially for rare species, and thus help to overcome limitations of modelling rare species.
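The core ESM idea, calibrating many bivariate models and averaging their predictions with AUC-based weights, can be sketched as follows. The use of logistic regression and scikit-learn's roc_auc_score, and the evaluation on the training data, are illustrative simplifications, not the GLM/GBM/Maxent setup and cross-validation of the study.

    from itertools import combinations

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def esm_predict(X_train, y_train, X_new):
        """Ensemble of small (bivariate) models, each weighted by its AUC score."""
        n_predictors = X_train.shape[1]
        weighted_predictions, weights = [], []
        for i, j in combinations(range(n_predictors), 2):
            model = LogisticRegression(max_iter=1000).fit(X_train[:, [i, j]], y_train)
            auc = roc_auc_score(y_train, model.predict_proba(X_train[:, [i, j]])[:, 1])
            weighted_predictions.append(auc * model.predict_proba(X_new[:, [i, j]])[:, 1])
            weights.append(auc)
        # AUC-weighted average of all bivariate model predictions.
        return np.sum(weighted_predictions, axis=0) / np.sum(weights)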