918 results for least-squares


Relevance:

60.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smallest convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
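As an illustration of the projection step just described, the following minimal Python sketch extracts endmember candidates in the VCA spirit: at each iteration a random direction is made orthogonal to the span of the endmembers found so far, all spectral vectors are projected onto it, and the extreme of the projection is taken as the next endmember. It assumes data already reduced to the signal subspace and is a didactic sketch, not the authors' reference implementation.

```python
import numpy as np

def vca_sketch(X, p, seed=0):
    """Minimal VCA-style endmember extraction (illustrative only).

    X : (d, n) matrix of spectral vectors, assumed already reduced to a
        p-dimensional signal subspace; p : number of endmembers.
    Returns the indices of the columns picked as endmember candidates.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    E = np.zeros((d, p))              # endmember signatures found so far
    idx = np.zeros(p, dtype=int)
    for i in range(p):
        # random direction, made orthogonal to span(E[:, :i])
        w = rng.standard_normal(d)
        if i > 0:
            Q, _ = np.linalg.qr(E[:, :i])   # orthonormal basis of the span
            w = w - Q @ (Q.T @ w)           # project that subspace out
        w /= np.linalg.norm(w)
        proj = w @ X                        # project every pixel onto w
        idx[i] = int(np.argmax(np.abs(proj)))  # extreme of the projection
        E[:, i] = X[:, idx[i]]
    return idx
```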

Relevance:

60.00%

Publisher:

Abstract:

The mathematical model of a real system provides knowledge of its dynamic behavior and is commonly used in engineering problems. Sometimes the parameters used by the model are unknown or imprecise. Aging and material wear are factors to take into account, since they may change the behavior of the real system, making a new estimation of its parameters necessary. To solve this problem, software developed by MathWorks, namely Matlab and Simulink, is used together with the Arduino platform, whose hardware is open source. From data obtained from the real system, curve fitting by the least-squares method is applied in order to bring the simulated model closer to the model of the real system. The developed system allows new parameter values to be obtained in a simple and effective way, with a view to a better approximation of the real system under study. The solution found is validated using different input signals applied to the system, and the results are compared with those of the newly obtained model. The performance of the solution is evaluated through the sum of squared errors between results obtained by simulation and results obtained experimentally from the real system.
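A minimal Python sketch of this identification loop, assuming a hypothetical first-order step-response model (the abstract does not specify the system, and the original work used Matlab/Simulink with an Arduino): measured data are fitted by least squares and the fit is scored by the sum of squared errors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical first-order step response: y(t) = K * (1 - exp(-t / tau)).
def step_response(t, K, tau):
    return K * (1.0 - np.exp(-t / tau))

# t, y_measured would come from the real system (e.g., logged via Arduino);
# synthetic noisy data stand in for them here.
t = np.linspace(0.0, 10.0, 200)
y_measured = step_response(t, 2.0, 1.5) + 0.05 * np.random.randn(t.size)

# Least-squares fit of the unknown parameters.
(K_hat, tau_hat), _ = curve_fit(step_response, t, y_measured, p0=[1.0, 1.0])

# Performance metric used in the abstract: sum of squared errors.
sse = np.sum((y_measured - step_response(t, K_hat, tau_hat)) ** 2)
print(f"K={K_hat:.3f}, tau={tau_hat:.3f}, SSE={sse:.4f}")
```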

Relevance:

60.00%

Publisher:

Abstract:

Submitted in partial fulfillment of the requirements for the Degree of PhD in Mathematics, in the Speciality of Statistics, at the Faculdade de Ciências e Tecnologia.

Relevance:

60.00%

Publisher:

Abstract:

Over the last decades, the ratings issued by credit rating agencies have grown in importance, becoming a decisive factor in investors' decision-making. Debt issuers are also largely affected by changes in the classifications assigned by these agencies. This research aims, on the one hand, to understand whether these agencies have the power to influence the evolution of public debt and what their role is in the financial market. On the other hand, it seeks to identify the determinants of Portuguese public debt and to carry out a percentile analysis with the goal of assigning it a rating. To analyse the factors that may influence public debt, the methodology used is a multiple linear regression estimated by Ordinary Least Squares (OLS), initially composed of eleven independent variables, with public debt as the dependent variable, for the period between 1996 and 2013. Several tests were performed on the initial model in order to find the most explanatory model possible. We were also able to identify an inverse relationship between the rating assigned by these agencies and the evolution of public debt, in the sense that in periods when the rating falls, debt growth is steeper. It was not, however, possible to assign a rating to the public debt through a percentile analysis.
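The estimation step described here is ordinary least squares on a multiple linear regression. A generic Python sketch follows; the eleven macroeconomic regressors of the study are not reproduced, so the design matrix below is a placeholder.

```python
import numpy as np

# Hypothetical design: n yearly observations (1996-2013 in the study),
# k regressors plus an intercept; y is public debt. Placeholder values.
rng = np.random.default_rng(1)
n, k = 18, 4
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
beta_true = np.array([60.0, 2.0, -1.5, 0.8, 0.0])
y = X @ beta_true + rng.standard_normal(n)

# OLS estimate beta_hat = (X'X)^{-1} X'y, via a numerically stable solver.
beta_hat, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print("OLS coefficients:", np.round(beta_hat, 3))
```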

Relevance:

60.00%

Publisher:

Abstract:

4th International Conference on Future Generation Communication Technologies (FGCT 2015), Luton, United Kingdom.

Relevance:

60.00%

Publisher:

Abstract:

In health-related research it is common to have multiple outcomes of interest in a single study. These outcomes are often analysed separately, ignoring the correlation between them. One would expect a multivariate approach to be a more efficient alternative to individual analyses of each outcome. Surprisingly, this is not always the case. In this article we discuss different settings of linear models and compare the multivariate and univariate approaches. We show that, for linear regression models, the estimates of the regression parameters associated with covariates shared across the outcomes are the same for the multivariate and univariate models, while for outcome-specific covariates the multivariate model performs better in terms of efficiency.
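A small numerical check of the shared-covariate claim, under the simplest setting where all outcomes share one design matrix: fitting the outcomes jointly and fitting them one at a time give identical point estimates. (The efficiency gain for outcome-specific covariates arises in seemingly-unrelated-regression settings, which this sketch does not cover.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # shared covariate
B = np.array([[1.0, -2.0], [0.5, 3.0]])                    # coeffs, 2 outcomes
Y = X @ B + rng.standard_normal((n, 2))

# Multivariate fit: all outcomes at once.
B_multi, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Univariate fits: one outcome at a time.
B_uni = np.column_stack([np.linalg.lstsq(X, Y[:, j], rcond=None)[0]
                         for j in range(Y.shape[1])])

print(np.allclose(B_multi, B_uni))   # True: identical point estimates
```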

Relevance:

60.00%

Publisher:

Abstract:

In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and to characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to the diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained suggest some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to handle anomalous values properly and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
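A sketch of weighted least-squares variogram fitting in Python, using the exponential model (the ν = 1/2 special case of the Matérn family, chosen to keep the sketch short) and Cressie-type weights; the full Matérn fit used in the study would replace gamma_model accordingly.

```python
import numpy as np
from scipy.optimize import least_squares

# Exponential model, the nu = 1/2 special case of the Matern family:
# gamma(h) = nugget + sill * (1 - exp(-h / range_)).
def gamma_model(h, nugget, sill, range_):
    return nugget + sill * (1.0 - np.exp(-h / range_))

def fit_variogram_wls(h, gamma_emp, n_pairs, p0=(0.01, 1.0, 100.0)):
    """Weighted least-squares fit of a variogram model.

    h         : lag distances of the sample variogram
    gamma_emp : empirical semivariances at those lags
    n_pairs   : number of point pairs per lag (Cressie-type weights)
    """
    def residuals(p):
        g = gamma_model(h, *p)
        w = np.sqrt(n_pairs) / np.maximum(g, 1e-12)  # weight ~ N(h)/gamma^2
        return w * (gamma_emp - g)
    fit = least_squares(residuals, p0, bounds=(0.0, np.inf))
    return fit.x  # nugget, sill, range
```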

Relevance:

60.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

60.00%

Publisher:

Abstract:

To determine whether the slope of a maximal bronchial challenge test (in which FEV1 falls by over 50%) could be extrapolated from a standard bronchial challenge test (in which FEV1 falls by up to 20%), 14 asthmatic children performed a single maximal bronchial challenge test with methacholine (dose range: 0.097–30.08 μmol) by the dosimeter method. Maximal dose-response curves were included according to the following criteria: (1) at least one more dose beyond a fall in FEV1 ≥ 20%; and (2) a maximal fall in FEV1 ≥ 50%. PD20 FEV1 was calculated, and the slopes of the early part of the dose-response curve (standard dose-response slopes) and of the entire curve (maximal dose-response slopes) were calculated by two methods: the two-point slope (DRR) and the least-squares slope (LSS), in %FEV1 · μmol⁻¹. Maximal dose-response slopes were compared with the corresponding standard dose-response slopes by a paired Student's t test after logarithmic transformation of the data; the goodness of fit of the LSS was also determined. Maximal dose-response slopes were significantly different (p < 0.0001) from those calculated on the early part of the curve: DRR20% (91.2 ± 2.7 %FEV1 · μmol⁻¹) was 2.88 times higher than DRR50% (31.6 ± 3.4 %FEV1 · μmol⁻¹), and LSS20% (89.1 ± 2.8 %FEV1 · μmol⁻¹) was 3.10 times higher than LSS50% (28.8 ± 1.5 %FEV1 · μmol⁻¹). The goodness of fit of LSS50% was significant in all cases, whereas LSS20% failed to reach significance in one. These results suggest that maximal dose-response slopes cannot be predicted from the data of standard bronchial challenge tests.
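For illustration, a Python sketch of the two slope estimates as they are commonly defined; the exact conventions used in the study may differ, so the through-the-origin forms below are assumptions.

```python
import numpy as np

def dose_response_slopes(dose, fev1_fall):
    """Two illustrative slope estimates for a dose-response curve.

    dose      : cumulative methacholine doses (μmol)
    fev1_fall : corresponding % fall in FEV1
    """
    # Two-point slope (DRR): slope of the line through the origin
    # and the final point of the curve.
    drr = fev1_fall[-1] / dose[-1]

    # Least-squares slope (LSS): slope of a straight line through
    # the origin fitted by ordinary least squares.
    lss = np.sum(dose * fev1_fall) / np.sum(dose ** 2)
    return drr, lss

dose = np.array([0.097, 0.39, 1.56, 6.25, 25.0])   # μmol (illustrative)
fall = np.array([2.0, 5.0, 12.0, 24.0, 52.0])      # % fall in FEV1
drr, lss = dose_response_slopes(dose, fall)
```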

Relevance:

60.00%

Publisher:

Abstract:

Project work presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management.

Relevance:

60.00%

Publisher:

Abstract:

The objective of this study is the development and validation of spectroscopic methods (NIR spectroscopy) that may replace the conventional chemical methods for the quantification of hydroxyl groups in alkyd resins. The alkyd resins studied in this work are normally used in two-component coating systems, in which their hydroxyl groups react with isocyanate prepolymers to form high-hardness coatings. For this reason, and because of process issues linked to the stoichiometry of the reaction in this application, the quantification of these groups is extremely important. The most common method for quantifying hydroxyl groups is the titration method. It is time-consuming, since each measurement involves an experimental procedure of about two hours, besides being very costly. The influences of temperature, heterogeneity, and cell filling level on spectrum acquisition were studied. The conclusions of these studies led to fixing an ideal dwell time of the cell inside the spectrophotometer chamber before measuring the spectrum. Moreover, it was concluded that, for standard batches, heterogeneity is not a significant variable. The cell filling level must be kept constant. The methods developed, based on the quality standard ISO 15063:2011, were built from Partial Least Squares Regression (PLS) algorithms, using a NIRVIS instrument (Büchi). Good linear regression coefficients were obtained for Resin A (R² > 0.9). The remaining results indicate the possibility of application to resins of the same type. This method provides results 8 times faster, with material costs that represent 1% of those of the standard method.
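A minimal Python sketch of a PLS calibration of the kind described, using scikit-learn rather than the NIRVIS vendor software; the spectra and hydroxyl reference values below are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# X: NIR spectra (samples x wavelengths); y: hydroxyl values from the
# reference titration method. Synthetic placeholder data for illustration.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 500))          # 40 resin samples, 500 wavelengths
y = 0.8 * X[:, 10] + 0.3 * X[:, 200] + 0.05 * rng.standard_normal(40)

pls = PLSRegression(n_components=5)         # number of latent variables
print(cross_val_score(pls, X, y, cv=5, scoring="r2"))  # cross-validated R^2
pls.fit(X, y)
y_pred = pls.predict(X)                     # predicted hydroxyl values
```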

Relevance:

60.00%

Publisher:

Abstract:

Nowadays, reducing energy consumption is one of the highest priorities and biggest challenges faced worldwide, in particular in the industrial sector. Given the increasing trend of consumption and the current economic crisis, identifying cost reductions in the most energy-intensive sectors has become one of the main concerns among companies and researchers. Particularly in industrial environments, energy consumption is affected by several factors, namely production factors (e.g., equipment), human factors (e.g., operator experience), and environmental factors (e.g., temperature), among others, which influence how energy is used across the plant. Therefore, several approaches for identifying the causes of consumption have been suggested and discussed. However, the existing methods only provide guidelines for energy consumption and have shown difficulties in explaining certain energy consumption patterns, due to the lack of structure to incorporate context influence, and hence are not able to track the causes of consumption down to the process level, where optimization measures can actually take place. This dissertation proposes a new approach to tackle this issue, based on the on-line estimation of context-based energy consumption models, which are able to map operating context to consumption patterns. Context identification is performed by regression tree algorithms. Energy consumption estimation is achieved by means of a multi-model architecture using multiple RLS (recursive least squares) algorithms, locally estimated for each operating context. Lastly, the proposed approach is applied to a real cement plant grinding circuit. Experimental results demonstrate the viability of the overall system, regarding both automatic context identification and energy consumption estimation.
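The per-context estimators described above are recursive least squares (RLS) models. A minimal sketch of one such estimator with exponential forgetting follows, with a hypothetical context dictionary standing in for the regression-tree router.

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting (one model
    per operating context in the multi-model scheme described above)."""
    def __init__(self, n_params, lam=0.99, delta=1e3):
        self.theta = np.zeros(n_params)      # parameter estimates
        self.P = delta * np.eye(n_params)    # inverse covariance (large init)
        self.lam = lam                       # forgetting factor

    def update(self, x, y):
        # x: regressor vector (e.g., production features), y: energy reading
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)         # gain vector
        err = y - x @ self.theta             # prediction error
        self.theta += k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

# One RLS model per context; a regression tree would route each sample
# to its context. Hypothetical context labels shown for illustration.
models = {ctx: RLS(n_params=3) for ctx in ("grinding", "idle")}
```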

Relevance:

60.00%

Publisher:

Abstract:

Geographic information systems give us the ability to analyze, produce, and edit geographic information. However, these systems fall short in the analysis and support of complex spatial problems. Therefore, when a spatial problem, like land use management, requires a multi-criteria perspective, multi-criteria decision analysis is incorporated into spatial decision support systems. The analytic hierarchy process (AHP) is one of many multi-criteria decision analysis methods that can be used to support these complex problems. Using its capabilities, we develop a spatial decision support system to help land use management. Land use management can involve a broad spectrum of spatial decision problems. The developed decision support system had to accept various formats and types of data as input, in raster or vector format, where vector data could be of polygon, line, or point type. The support system was designed to perform its analysis for the Zambezi River Valley in Mozambique, the study area. The possible solutions for the emerging problems had to cover the entire region. This required the system to process large sets of data and constantly adjust to the needs of new problems. The developed decision support system is able to process thousands of alternatives using the analytic hierarchy process and produce an output suitability map for the problems faced.
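At the core of the AHP is the derivation of criteria weights from a pairwise comparison matrix via its principal eigenvector. A short Python sketch with a hypothetical three-criteria example follows; the actual criteria for the Zambezi River Valley problem are not listed in the abstract.

```python
import numpy as np

def ahp_weights(A):
    """Criteria weights from a pairwise comparison matrix A (AHP).

    A[i, j] holds the judged importance of criterion i over j,
    with A[j, i] = 1 / A[i, j]. Weights are the normalized
    principal eigenvector of A.
    """
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()   # consistency-ratio check omitted for brevity

# Hypothetical criteria: e.g., slope, soil suitability, distance to rivers.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
print(np.round(ahp_weights(A), 3))
```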

Relevance:

60.00%

Publisher:

Abstract:

Madin-Darby Canine Kidney (MDCK) cell lines have been extensively evaluated for their potential as host cells for influenza vaccine production. Recent studies allowed the cultivation of these cells in a fully defined medium and in suspension. However, reaching high cell densities in animal cell cultures still remains a challenge. To address this shortcoming, a combined methodology informed by systems biology is reported here to study the impact of the cell environment on the flux distribution. An optimization of the medium composition is proposed for both a batch and a continuous system in order to reach higher cell densities. To obtain insight into the metabolic activity of these cells, a detailed metabolic model previously developed by Wahl et al. was used. The experimental data used in this work, from four cultivations of MDCK suspension cells grown under different conditions, came from the Max Planck Institute, Magdeburg, Germany. Classical metabolic flux analysis (MFA) was used to estimate the intracellular flux distribution of each cultivation and then combined with the partial least squares (PLS) method to establish a link between the estimated metabolic state and the cell environment. The MFA model was validated and its consistency checked. The resulting PLS model explained almost 70% of the variance present in the flux distribution. The medium optimization for the continuous and batch systems resulted in higher biomass growth rates than those obtained experimentally, 0.034 h⁻¹ and 0.030 h⁻¹, respectively, thus reducing the doubling time by almost 10 hours. Additionally, the optimal medium obtained for the continuous system contained almost no pyruvate. Overall, the proposed methodology seems effective, and both proposed medium optimizations seem promising for reaching high cell densities.
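The flux-estimation step referred to as classical MFA can be sketched as a least-squares solve of the steady-state balance S · v = 0, with the flux vector split into measured and unknown parts. The toy network below is a placeholder, not the MDCK model of Wahl et al.

```python
import numpy as np

# At (pseudo) steady state, S @ v = 0. Splitting v into measured fluxes
# v_m (e.g., uptake/secretion rates) and unknown intracellular fluxes v_u
# gives S_u @ v_u = -S_m @ v_m, solved here in the least-squares sense.
def mfa_estimate(S_u, S_m, v_m):
    rhs = -S_m @ v_m
    v_u, residual, rank, _ = np.linalg.lstsq(S_u, rhs, rcond=None)
    return v_u, residual   # residual: consistency check when overdetermined

# Toy stoichiometry: 4 metabolite balances, 3 unknown and 2 measured fluxes.
S_u = np.array([[1.0, -1.0,  0.0],
                [0.0,  1.0, -1.0],
                [0.0,  0.0,  1.0],
                [1.0,  0.0, -1.0]])
S_m = np.array([[-1.0, 0.0],
                [ 0.0, 0.0],
                [ 0.0, 1.0],
                [ 0.0, 0.0]])
v_m = np.array([1.2, 0.4])     # measured uptake / secretion rates
v_u, res = mfa_estimate(S_u, S_m, v_m)
```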