918 results for least squares method
Abstract:
A new LIBS quantitative analysis method based on analytical line adaptive selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme of adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentration results are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with methods based on partial least squares regression, artificial neural networks and standard support vector machines.
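The abstract's RVM regression step can be sketched with a related sparse Bayesian model. RVM itself is not in scikit-learn, so the sketch below uses ARDRegression (automatic relevance determination), which likewise yields sparse weights and a predictive standard deviation; the line intensities, weights, and concentrations are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
# Hypothetical stand-in data: intensities of 4 analytical lines for
# 23 training samples, plus known Cr concentrations.
X_train = rng.uniform(0.1, 1.0, size=(23, 4))
true_w = np.array([2.0, 0.0, 1.5, 0.0])      # sparse: only two lines matter
y_train = X_train @ true_w + rng.normal(0, 0.01, 23)

model = ARDRegression()
model.fit(X_train, y_train)

# Prediction with a standard deviation -> an interval, mirroring the
# paper's probabilistic confidence output.
y_pred, y_std = model.predict(X_train[:1], return_std=True)
```

ARDRegression prunes irrelevant inputs by driving their weight precisions to large values, which is the same mechanism RVM applies to kernel basis functions.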
Abstract:
The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for the identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant uses. After acquisition of the hyperspectral images, independent component analysis (ICA) was applied to extract the pure spectra and the distribution of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA-recovered spectra with the reference spectra using correlation coefficients, and by the presence of rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable curve resolution method, achieving performance equivalent to the multivariate curve resolution with alternating least squares (MCR-ALS) method. At low concentrations, MCR-ALS presents some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm⁻².
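As a rough illustration of the ICA step, scikit-learn's FastICA can decompose simulated mixed spectra into source contributions and a mixing matrix whose columns play the role of pure spectra; the spectra, fractions, and dimensions below are invented for the sketch and are not the study's data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
# Hypothetical data: 200 "pixels" observed over 50 Raman shifts,
# each a mixture of two pure spectra plus a little noise.
n_pixels, n_bands = 200, 50
S = np.abs(rng.standard_normal((2, n_bands)))   # two invented pure spectra
A = rng.uniform(0, 1, size=(n_pixels, 2))       # per-pixel mixing fractions
X = A @ S + 0.01 * rng.standard_normal((n_pixels, n_bands))

ica = FastICA(n_components=2, random_state=0)
fractions_est = ica.fit_transform(X)   # per-pixel source amplitudes
spectra_est = ica.mixing_.T            # recovered spectra (up to sign/scale)
```

ICA recovers sources only up to permutation, sign, and scale, which is why the study matches recovered spectra to references via correlation coefficients.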
Abstract:
X-ray fluorescence (XRF) is a fast, low-cost, nondestructive, and truly multielement analytical technique. The objectives of this study are to quantify the amount of Na⁺ and K⁺ in samples of table salt (refined, marine, and light) and to compare three different methodologies of quantification using XRF. A fundamental parameter method revealed difficulties in accurately quantifying lighter elements (Z < 22). A univariate methodology based on peak area calibration is an attractive alternative, even though additional steps of data manipulation might consume some time. Quantifications were performed with good correlations for both Na (r = 0.974) and K (r = 0.992). A partial least-squares (PLS) regression method with five latent variables was very fast. Na⁺ quantifications provided calibration errors lower than 16% and a correlation of 0.995. Of great concern was the observation of high Na⁺ levels in low-sodium salts. The presented application may be performed in a fast and multielement fashion, in accordance with Green Chemistry specifications.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Soil bulk density values are needed to convert organic carbon content to mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly and tedious. Near-infrared reflectance spectroscopy (NIRS) is a physically non-destructive, rapid, reproducible and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin, and conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that the modified partial least squares regression used on spectral data is an alternative method for soil bulk density predictions to the published pedotransfer functions tested in this study. The NIRS method presented the closest-to-zero accuracy error (-0.002 g cm⁻³) and the lowest prediction error (0.13 g cm⁻³), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Nevertheless, further research is required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density predictions, especially in environments such as the Amazon forest.
Abstract:
The application of laser-induced breakdown spectrometry (LIBS) aiming at the direct analysis of plant materials is a great challenge that still needs efforts for its development and validation. In this way, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data when compared to univariate regression developed with line emission intensities. In the present work, the performance of univariate and multivariate calibration, based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, these models showed a similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison to the univariate method. Data suggest that efforts dealing with sample presentation and fitness of standards for LIBS analysis must be made in order to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A novel flow-based strategy for implementing simultaneous determinations of different chemical species reacting with the same reagent(s) at different rates is proposed and applied to the spectrophotometric catalytic determination of iron and vanadium in Fe-V alloys. The method relies on the influence of Fe(II) and V(IV) on the rate of iodide oxidation by Cr(VI) under acidic conditions; the Jones reducing agent is therefore needed. Three different plugs of the sample are sequentially inserted into an acidic KI reagent carrier stream, and a confluent Cr(VI) solution is added downstream. Overlap between the inserted plugs leads to a complex sample zone with several regions of maximal and minimal absorbance values. Measurements performed on these regions reveal the different degrees of reaction development and tend to be more precise. Data are treated by multivariate calibration involving the PLS algorithm. The proposed system is very simple and rugged. Two latent variables accounted for ca. 95% of the analytical information, and the results are in agreement with ICP-OES. (C) 2010 Elsevier B.V. All rights reserved.
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performances at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and filter length parameters are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
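A minimal sketch of the affine projection update underlying the abstract, assuming a noiseless system-identification setup with colored input (where APA is known to outperform LMS); the filter length, projection order, and step size are illustrative choices, not the paper's.

```python
import numpy as np

def apa_update(w, U, d, mu=0.5, eps=1e-6):
    """One affine projection update.

    w : current filter weights, shape (L,)
    U : the K most recent regressor vectors, shape (K, L)
    d : the K corresponding desired samples, shape (K,)
    """
    e = d - U @ w                                   # a priori errors
    # Project the error through the regularized Gram matrix of the regressors.
    return w + mu * U.T @ np.linalg.solve(U @ U.T + eps * np.eye(len(d)), e)

# Identify a short FIR system from colored input.
rng = np.random.default_rng(3)
L, K, N = 4, 3, 2000
w_true = np.array([1.0, -0.5, 0.25, 0.1])
x = np.convolve(rng.standard_normal(N), [1, 0.8, 0.4])[:N]   # colored input
w = np.zeros(L)
for n in range(L + K, N):
    # Stack the K most recent length-L regressors (newest sample first).
    U = np.array([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
    d = U @ w_true
    w = apa_update(w, U, d)
```

With K = 1 the update collapses to normalized LMS; larger K whitens the update direction, which is the source of APA's faster convergence on colored inputs.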
Abstract:
Chlorpheniramine maleate (CLOR) enantiomers were quantified by ultraviolet spectroscopy and partial least squares regression. The CLOR enantiomers were prepared as inclusion complexes with beta-cyclodextrin and 1-butanol with mole fractions in the range from 50 to 100%. For the multivariate calibration, outliers were detected and excluded, and variable selection was performed by interval partial least squares and a genetic algorithm. Figures of merit showed accuracy results of 3.63 and 2.83% (S)-CLOR for the root mean square errors of calibration and prediction, respectively. The elliptical confidence region included the point corresponding to a slope of 1 and an intercept of 0. Precision and analytical sensitivity were 0.57 and 0.50% (S)-CLOR, respectively. The sensitivity, selectivity, adjustment, and signal-to-noise ratio were also determined. The model was validated by a paired t test against the results obtained by the high-performance liquid chromatography method proposed by the European Pharmacopoeia and by circular dichroism spectroscopy. The results showed there was no significant difference between the methods at the 95% confidence level, indicating that the proposed method can be used as an alternative to standard procedures for chiral analysis.
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
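A toy version of the augmentation idea described above, assuming random-walk coefficients and treating the time-varying constraint as an extra, effectively noise-free row appended to the observation equation of the Kalman filter; the sum-to-one constraint mimics the demand-analysis flavor, and all numbers are invented.

```python
import numpy as np

def kf_constrained(ys, xs, cs, qs, sigma_e=0.1, sigma_w=0.01):
    """Kalman filter for y_t = x_t' b_t + e_t with random-walk coefficients,
    where each measurement update also enforces the time-varying constraint
    c_t' b_t = q_t by appending it as a near-noise-free observation row."""
    k = xs.shape[1]
    b, P = np.zeros(k), 10.0 * np.eye(k)
    for y, x, c, q in zip(ys, xs, cs, qs):
        P = P + sigma_w**2 * np.eye(k)        # time update: b_t = b_{t-1} + w_t
        H = np.vstack([x, c])                 # augmented observation matrix
        z = np.array([y, q])                  # data point plus constraint value
        R = np.diag([sigma_e**2, 1e-12])      # ~zero variance on the constraint
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        b = b + K @ (z - H @ b)
        P = (np.eye(k) - K @ H) @ P
    return b

rng = np.random.default_rng(4)
n = 500
xs = rng.standard_normal((n, 2))
b_true = np.array([0.3, 0.7])                 # satisfies b1 + b2 = 1
ys = xs @ b_true + 0.1 * rng.standard_normal(n)
cs = np.ones((n, 2))                          # constraint row: [1, 1]
qs = np.ones(n)                               # constraint value: 1
b_hat = kf_constrained(ys, xs, cs, qs)
```

Because the constraint enters as an observation row, it can vary freely over time (different c_t and q_t each period), which is exactly what restricted least squares cannot accommodate.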
Abstract:
A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
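The moving least-squares approximation at the heart of the smoothing step can be sketched in one dimension: a locally weighted quadratic fit solved afresh at each query point. The Gaussian weight and support scale are illustrative choices; the demo exploits the reproduction property that any function inside the basis span (here x²) is recovered exactly, echoing the paper's exact reproduction of circles with a quadratic basis.

```python
import numpy as np

def mls_eval(x_nodes, f_nodes, x_eval, h=0.5):
    """Moving least-squares value at x_eval: weighted quadratic fit
    with weights centered on the evaluation point (support scale h)."""
    basis = lambda x: np.array([1.0, x, x * x])      # quadratic polynomial basis
    w = np.exp(-((x_nodes - x_eval) / h) ** 2)       # weights decay with distance
    P = np.array([basis(x) for x in x_nodes])        # (n, 3) basis matrix
    A = P.T @ (w[:, None] * P)                       # weighted normal equations
    rhs = P.T @ (w * f_nodes)
    return basis(x_eval) @ np.linalg.solve(A, rhs)

# A quadratic lies in the basis span, so MLS reproduces it exactly.
nodes = np.linspace(-1.0, 1.0, 11)
value = mls_eval(nodes, nodes**2, 0.37)
```

In the paper's setting the same machinery is applied to signed distance data on a surface, so the fitted function stays smooth even where the mesh is only piecewise linear.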
Abstract:
The international literature analyzing the determinants of related-party transactions is concentrated in the United Kingdom, the United States and Asia, with Brazil remaining a little-investigated setting. This research aims to investigate both the determinants of related-party contracts and the impact of these transactions on the performance of Brazilian companies. Recent studies investigating the determinants of related-party transactions (RPTs), as well as their impact on firm performance, have considered the two strands presented by Gordon, Henry and Palia (2004): (a) conflict of interest, which supports the view that RPTs are harmful to minority shareholders, implying expropriation of their wealth by the controlling (majority) shareholders; and (b) efficient transactions, which can be beneficial to companies, thereby serving their underlying economic objectives. This research follows the conflict-of-interest strand, based on agency theory and on the fact that the Brazilian setting is characterized by a concentrated ownership structure and by being an emerging country whose legal environment offers weak protection to minority shareholders. The research used an initial sample of 70 companies with shares listed on the BM&FBovespa, covering the period from 2010 to 2012. Related-party contracts were identified and quantified in two ways, following the methodology applied by Kohlbeck and Mayhew (2004; 2010) and Silveira, Prado and Sasso (2009). As the main determinants, proxies were investigated to capture the effects of corporate governance mechanisms and the legal environment, firm performance, deviations between control rights and cash-flow rights, and excess executive compensation.
Control variables were also added to isolate the firms' intrinsic characteristics. In the econometric analyses, models were estimated by Poisson, pooled cross-section (pooled OLS) and logit methods. Estimation was performed by ordinary least squares (OLS), and to increase the robustness of the econometric estimates, instrumental variables estimated by the generalized method of moments (GMM) were used. The evidence indicates that the investigated factors affect the various RPT measures of the analyzed companies differently. Related-party contracts were found, in general, to be harmful to companies, negatively affecting their performance, a performance that is improved by the presence of effective corporate governance mechanisms. The results on the impact of corporate governance measures and firms' intrinsic characteristics on performance are robust to the presence of endogeneity, based on the instrumental-variable regressions.
Abstract:
This dissertation was developed to investigate the effects of the installation and the characteristics of the Fiscal Council and the Audit Committee on the quality of accounting information in Brazil. The characteristics studied were the independence and the qualification of the members. The proxies for accounting information quality were value relevance, timeliness and conditional conservatism. The sample comprised Brazilian companies listed on the São Paulo Stock, Commodities and Futures Exchange (BM&FBovespa) with annual liquidity above 0.001, over the period from 2010 to 2013. Data were collected from the Comdinheiro database and from the companies' Reference Forms, available on the websites of the Brazilian Securities Commission (CVM) and the BM&FBovespa. The information-quality models were adapted to the methodological design and estimated by ordinary least squares (OLS) with robust standard errors clustered by firm. The results revealed effects of the installation of the analyzed bodies on the proxies for accounting information quality. The installation of the Fiscal Council positively affected the value relevance of equity, while the installation of the Audit Committee affected the value relevance of earnings. These results may indicate differences in the focus of these bodies: protecting the entity's assets for shareholders (Fiscal Council) versus ensuring more reliable numbers on managers' performance (Audit Committee). In parallel, the results for the permanent installation of the Fiscal Council suggested the strength of this body as a control mechanism, as opposed to installation only at shareholders' request. The implementation of the enhanced Fiscal Council (Conselho Fiscal Turbinado), in turn, proved ineffective in controlling the quality of accounting information. In the analysis of member characteristics, the independence of Audit Committee members affected the value relevance of earnings.
The independence of the Fiscal Council, in turn, affected the value relevance of equity and conditional conservatism (the timely recognition of economic losses). These associations were more significant when Fiscal Council members were independent of the controlling shareholders. In the analysis of member qualifications, positive evidence was found for the relationship between the value relevance of equity and a higher proportion of Fiscal Council members with qualifications in business fields (accounting, management and economics). Conditional conservatism was greater the more the qualifications of Fiscal Council members converged toward accounting. The results for the qualifications of Audit Committee members showed value relevance of earnings in the presence of at least one accountant and with a higher proportion of members qualified in accounting or business more broadly, being more significant the more the qualifications of Audit Committee members converged toward accounting.
Abstract:
The portfolio generating the iTraxx EUR index is modeled by coupled Markov chains. Each of the industries in the portfolio evolves according to its own Markov transition matrix. Using a variant of the method of moments, the model parameters are estimated from a Standard and Poor's data set. Swap spreads are evaluated by Monte-Carlo simulations. Along with an actuarially fair spread, a least-squares spread is considered.
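The Monte-Carlo flavor of the evaluation can be sketched by simulating rating paths from a single, invented Markov transition matrix; the actual model couples one chain per industry and prices real swap cash flows, which this sketch omits.

```python
import numpy as np

# Hypothetical 3-state rating chain: good, bad, default (absorbing).
T = np.array([[0.90, 0.08, 0.02],
              [0.15, 0.75, 0.10],
              [0.00, 0.00, 1.00]])

def default_probability(T, horizon, n_paths=10000, seed=5):
    """Monte-Carlo estimate of the probability that a chain started in
    state 0 reaches the default state (index 2) within `horizon` steps."""
    rng = np.random.default_rng(seed)
    defaults = 0
    for _ in range(n_paths):
        s = 0
        for _ in range(horizon):
            s = rng.choice(3, p=T[s])
            if s == 2:
                defaults += 1
                break
    return defaults / n_paths

p_mc = default_probability(T, horizon=5)
# Exact value for comparison: entry (0, 2) of T^5.
p_exact = np.linalg.matrix_power(T, 5)[0, 2]
```

Because the default state is absorbing, the exact hitting probability is available in closed form here; the Monte-Carlo route is what generalizes to coupled chains and path-dependent spread payoffs.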
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. 
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
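Under the linear mixing model discussed above, the fully constrained (nonnegative, sum-to-one) abundance estimate for a single pixel can be sketched with plain NNLS plus the common soft sum-to-one augmentation; the endmember signatures, band count, and fractions below are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(E, y, delta=1e3):
    """Fully constrained least-squares abundances for one pixel y, given an
    endmember matrix E (bands x endmembers). Nonnegativity comes from NNLS;
    the sum-to-one constraint is enforced softly by appending a heavily
    weighted row of ones to the system."""
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

# Two hypothetical endmember signatures over 5 bands, mixed 70/30.
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.6, 0.4],
              [0.3, 0.7],
              [0.1, 0.9]])
a_true = np.array([0.7, 0.3])
y = E @ a_true
a_hat = fcls_unmix(E, y)
```

The augmentation trades an exact equality constraint for a large quadratic penalty, which keeps the problem inside the plain NNLS solver; larger delta tightens the sum-to-one constraint.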