976 results for least weighted squares

Relevance:

20.00%

Publisher:

Abstract:

Lithium-ion batteries have been widely adopted in electric vehicles (EVs), and accurate state of charge (SOC) estimation is of paramount importance for the EV battery management system. Although a number of methods have been proposed, SOC estimation for lithium-ion batteries such as the LiFePO4 cell faces two key challenges: the flat open circuit voltage (OCV) vs. SOC relationship over some SOC ranges, and the hysteresis effect. To address these problems, an integrated approach for real-time model-based SOC estimation of lithium-ion batteries is proposed in this paper. First, an auto-regressive model is adopted to reproduce the battery terminal behaviour, combined with a non-linear complementary model to capture the hysteresis effect. The model parameters, both linear and non-linear, are optimized off-line using a hybrid optimization method that combines a meta-heuristic (the teaching-learning-based optimization method) with the least squares method. Second, using the trained model, two real-time model-based SOC estimation methods are presented: one based on a real-time battery OCV regression model obtained through the weighted recursive least squares method, and the other based on state estimation with the extended Kalman filter (EKF). To tackle the problem caused by the flat OCV-vs-SOC segments when the OCV-based SOC estimation method is adopted, a method combining coulomb counting with the OCV-based method is proposed. Finally, modelling and SOC estimation results are presented and analysed using data collected from a LiFePO4 battery cell. The results confirm the effectiveness of the proposed approach, in particular the joint-EKF method.
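The OCV regression step relies on weighted recursive least squares with a forgetting factor. As a hedged illustration (not the authors' implementation; the variable names, toy data and forgetting-factor value are assumptions), a minimal WRLS update can be sketched as:

```python
import numpy as np

def wrls_update(theta, P, phi, y, lam=0.98):
    """One weighted recursive least squares step with forgetting
    factor lam: phi is the regressor vector, y the new sample."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    theta = theta + k.ravel() * float(y - phi.T @ theta)
    P = (P - k @ phi.T @ P) / lam                # covariance update
    return theta, P

# Toy usage: track the line y = 2*x + 1 from streaming samples
theta, P = np.zeros(2), np.eye(2) * 1000.0
for x, y in [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]:
    theta, P = wrls_update(theta, P, np.array([x, 1.0]), y)
```

The large initial covariance plays the role of a weak prior; the forgetting factor discounts old samples so a slowly drifting OCV relationship can be tracked online.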

This paper proposes an efficient learning mechanism to build fuzzy rule-based systems through the construction of sparse least-squares support vector machines (LS-SVMs). In addition to significantly reduced computational complexity in model training, the resulting LS-SVM-based fuzzy system is sparser while offering satisfactory generalization over unseen data. It is well known that LS-SVMs have a computational advantage over conventional SVMs in model training; however, model sparseness is lost, which is their main drawback and remains an open problem. To tackle the non-sparseness issue, a new regression alternative to the Lagrangian solution of the LS-SVM is first presented. A novel efficient learning mechanism is then proposed to extract a sparse set of support vectors for generating fuzzy IF-THEN rules. The mechanism works in a stepwise subset-selection manner, with a forward expansion phase and a backward exclusion phase in each selection step. The implementation is computationally very efficient, thanks to a few key techniques that avoid matrix inverse operations and accelerate training; this efficiency is confirmed by a detailed computational complexity analysis. As a result, the proposed approach not only achieves sparseness in the resulting LS-SVM-based fuzzy systems but also significantly reduces the computational effort of model training. Three experimental examples demonstrate the effectiveness and efficiency of the proposed learning mechanism and the sparseness of the obtained LS-SVM-based fuzzy systems, in comparison with other SVM-based learning techniques.
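For context, training a plain (non-sparse) LS-SVM regressor amounts to solving a single linear system in the standard Suykens formulation, which is exactly where the lost sparseness comes from: every training point receives a nonzero Lagrange multiplier. A minimal sketch, assuming an RBF kernel and hypothetical parameter values:

```python
import numpy as np

def lssvm_train(X, y, gamma=100.0, sigma=1.0):
    """Train an LS-SVM regressor by solving the single bordered
    linear system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    No QP, but also no sparseness: every alpha_i is nonzero."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma           # regularised kernel
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0], X, sigma            # alpha, b, X, sigma

def lssvm_predict(model, Xq):
    alpha, b, X, sigma = model
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# Toy usage on a smooth 1-D target
X = np.linspace(0.0, 3.0, 30).reshape(-1, 1)
y = np.sin(X).ravel()
model = lssvm_train(X, y, gamma=100.0)
alpha, b = model[0], model[1]
pred = lssvm_predict(model, X)
```

By construction the solution satisfies sum(alpha) = 0 and K@alpha + b + alpha/gamma = y at the training points; the paper's contribution is to recover a sparse subset of these support vectors.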

Many AMS systems can measure 14C, 13C and 12C simultaneously, providing δ13C values that can be used for fractionation normalization without the need for offline 13C/12C measurements on isotope ratio mass spectrometers (IRMS). However, AMS δ13C values on our 0.5 MV NEC Compact Accelerator often differ from IRMS values on the same material by 4-5‰ or more. It has been postulated that AMS δ13C values account for potential graphitization- and machine-induced fractionation in addition to natural fractionation, but how much does this affect the 14C ages or F14C? We present an analysis of F14C as a linear least squares fit against AMS δ13C results for several of our secondary standards. While there are samples for which there is an obvious correlation between AMS δ13C and F14C, as quantified by the calculated probability of no correlation, we find that the trend lies within one standard deviation of the variance of our F14C measurements. Our laboratory produces both zinc- and hydrogen-reduced graphite, and we present results for each type. Additionally, we show the variance of our AMS δ13C measurements of our secondary standards.
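The fit described above is an ordinary linear least-squares regression of F14C on AMS δ13C, judged against the correlation strength. A generic sketch with synthetic numbers (not the laboratory's actual analysis code or data):

```python
import numpy as np

def fit_f14c_vs_d13c(d13c, f14c):
    """Least-squares line F14C = slope * d13C + intercept, plus
    the Pearson correlation and residual scatter used to judge
    whether an apparent trend is meaningful."""
    slope, intercept = np.polyfit(d13c, f14c, 1)
    r = np.corrcoef(d13c, f14c)[0, 1]
    resid = f14c - (slope * d13c + intercept)
    return slope, intercept, r, resid.std(ddof=2)

# Synthetic, exactly linear demo values (not measured data)
d13c = np.linspace(-30.0, -20.0, 11)
f14c = 1.0 + 0.002 * (d13c + 25.0)
slope, intercept, r, scatter = fit_f14c_vs_d13c(d13c, f14c)
```

Comparing the fitted trend against the residual scatter (here returned as the standard deviation with two fitted parameters removed) mirrors the paper's test of whether the δ13C trend exceeds the measurement variance.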

A forward and backward least angle regression (LAR) algorithm is proposed to construct the nonlinear autoregressive model with exogenous inputs (NARX) that is widely used to describe a large class of nonlinear dynamic systems. The main objective of this paper is to improve the model sparsity and generalization performance of the original forward LAR algorithm. This is achieved by introducing a replacement scheme built on an additional backward LAR stage. The backward stage replaces insignificant model terms selected by the forward LAR with more significant ones, leading to an improved model in terms of compactness and performance. A numerical example constructing four types of NARX models, namely polynomial models, radial basis function (RBF) networks, neuro-fuzzy networks and wavelet networks, is presented to illustrate the effectiveness of the proposed technique in comparison with some popular methods.
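The forward/backward idea can be illustrated with a simplified OLS-based stand-in (not the authors' LAR algorithm): greedily add dictionary terms in a forward stage, then try swapping each selected term for an unselected one whenever the swap lowers the residual sum of squares.

```python
import numpy as np

def forward_backward_select(D, y, n_terms):
    """Simplified stand-in for forward selection plus a backward
    replacement stage over the columns of term dictionary D."""
    def sse(cols):
        coef, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
        r = y - D[:, cols] @ coef
        return float(r @ r)
    selected = []
    for _ in range(n_terms):                       # forward stage
        rest = [j for j in range(D.shape[1]) if j not in selected]
        selected.append(min(rest, key=lambda j: sse(selected + [j])))
    improved = True
    while improved:                                # backward stage
        improved = False
        for i in range(len(selected)):
            rest = [j for j in range(D.shape[1]) if j not in selected]
            for j in rest:
                trial = selected[:i] + [j] + selected[i + 1:]
                if sse(trial) < sse(selected):     # swap if it helps
                    selected, improved = trial, True
                    break
    return sorted(selected)

rng = np.random.default_rng(0)
D = rng.normal(size=(100, 5))       # candidate NARX-term dictionary
y = 2.0 * D[:, 1] - D[:, 3]         # true model uses terms 1 and 3
chosen = forward_backward_select(D, y, n_terms=2)
```

The backward pass is what distinguishes this scheme from plain greedy selection: a term that looked significant early on can be replaced once better companions are in the model.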

In this paper, a multiloop robust control strategy is proposed based on H∞ control and a partial least squares (PLS) model (H∞_PLS) for multivariable chemical processes. It is developed especially for multivariable systems in ill-conditioned plants and for non-square systems. The advantage of PLS is that it extracts the strongest relationship between the input and output variables in the reduced space of the latent variable model rather than in the original space of the highly dimensional variables. Without conventional decouplers, the dynamic PLS framework automatically decomposes the MIMO process into multiple single-loop systems in the PLS subspace, so that controller design can be simplified. Since plant/model mismatch is almost inevitable in practical applications, the controllers are designed in the PLS latent subspace based on the H∞ mixed sensitivity problem to enhance the robustness of the control system. The feasibility and effectiveness of the proposed approach are illustrated by simulation results for a distillation column and a mixing tank process. Comparisons between H∞_PLS control and conventional individual control (either H∞ control or PLS control alone) are also made.
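The latent-variable extraction at the heart of the PLS decomposition can be sketched with a minimal PLS1 (single-output) NIPALS loop; this is a generic illustration of the projection idea, not the paper's dynamic PLS framework:

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal PLS1 NIPALS sketch: extract latent variables that
    capture input/output covariance, deflate, then assemble the
    regression coefficients; X and y are assumed mean-centred."""
    Xk, yk = X.copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)       # weight vector
        t = Xk @ w                      # latent scores
        p = Xk.T @ t / (t @ t)          # X loadings
        qk = float(yk @ t / (t @ t))    # y loading
        Xk = Xk - np.outer(t, p)        # deflate X
        yk = yk - qk * t                # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # coefficient vector

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X = X - X.mean(axis=0)                  # centre the inputs
y = X @ np.array([1.0, 2.0, 3.0])       # noiseless linear response
b = pls1_fit(X, y, n_comp=3)            # full rank: recovers OLS
```

Truncating to fewer components than the input rank is what gives PLS its noise rejection in ill-conditioned plants; with a full set of components the fit coincides with ordinary least squares.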

Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.
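The generative step described above can be sketched with Python's standard library alone, using `random.betavariate` for the edge weights. The data structures (a label prior given as a list, one mixture of `(weight, alpha, beta)` triples per label pair) and the fixed edge probability are illustrative assumptions, not the paper's learned model:

```python
import random

def sample_graph(n, labels, beta_mix, p_edge=0.3, seed=42):
    """Hypothetical sketch: draw discrete vertex labels, then for
    each sampled edge draw a weight from the Beta mixture keyed
    by the (unordered) label pair of its endpoints."""
    rng = random.Random(seed)
    vlab = [rng.choice(labels) for _ in range(n)]
    edges = {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                mix = beta_mix[frozenset((vlab[i], vlab[j]))]
                # pick a mixture component by its weight, then sample
                _, a, b = rng.choices(mix, weights=[m[0] for m in mix])[0]
                edges[(i, j)] = rng.betavariate(a, b)
    return vlab, edges

beta_mix = {
    frozenset({"A"}): [(1.0, 2.0, 5.0)],
    frozenset({"A", "B"}): [(0.5, 2.0, 5.0), (0.5, 5.0, 2.0)],
    frozenset({"B"}): [(1.0, 5.0, 2.0)],
}
vlab, edges = sample_graph(15, ["A", "B"], beta_mix)
```

In the paper the mixture parameters are learned from real networks by variational inference; here they are fixed by hand purely to show the sampling direction of the model.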

The aim of the study was to investigate the potential of a metabolomics platform to distinguish between pigs treated with ronidazole, dimetridazole and metronidazole and non-medicated animals (controls), at two withdrawal periods (days 0 and 5). Livers from each animal were biochemically profiled using UHPLC–QTof-MS in the ESI+ mode of acquisition. Several Orthogonal Partial Least Squares-Discriminant Analysis models were generated from the acquired mass spectrometry data, and these models discriminated between the control and treated groups. A total of 42 ions of interest explained the variation in ESI+. It was possible to identify 3 of the ions and to positively classify 4 of the ionic features, which can be used as potential biomarkers of illicit 5-nitroimidazole abuse. Further evidence of the toxic mechanisms of 5-nitroimidazole drugs has been revealed, which may be of substantial importance given that metronidazole is widely used in human medicine.

Many problems in artificial intelligence can be encoded as answer set programs (ASP) in which some rules are uncertain. ASP programs with incorrect rules may have erroneous conclusions, but due to the non-monotonic nature of ASP, omitting a correct rule may also lead to errors. To derive the most certain conclusions from an uncertain ASP program, we thus need to consider all situations in which some, none, or all of the least certain rules are omitted. This corresponds to treating some rules as optional and reasoning about which conclusions remain valid regardless of the inclusion of these optional rules. While a version of possibilistic ASP (PASP) based on this view has recently been introduced, no implementation is currently available. In this paper we propose a simulation of the main reasoning tasks in PASP using (disjunctive) ASP programs, allowing us to take advantage of state-of-the-art ASP solvers. Furthermore, we identify how several interesting AI problems can be naturally seen as special cases of the considered reasoning tasks, including cautious abductive reasoning and conformant planning. As such, the proposed simulation enables us to solve instances of the latter problem types that are more general than what current solvers can handle.

The UK’s transportation network is supported by critical geotechnical assets (cuttings/embankments/dams) that require sustainable, cost-effective management, while maintaining an appropriate service level to meet social, economic, and environmental needs. Recent effects of extreme weather on these geotechnical assets have highlighted their vulnerability to climate variations. We have assessed the potential of surface wave data to portray the climate-related variations in mechanical properties of a clay-filled railway embankment. Seismic data were acquired bimonthly from July 2013 to November 2014 along the crest of a heritage railway embankment in southwest England. For each acquisition, the collected data were first processed to obtain a set of Rayleigh-wave dispersion and attenuation curves, referenced to the same spatial locations. These data were then analyzed to identify a coherent trend in their spatial and temporal variability. The relevance of the observed temporal variations was also verified with respect to the experimental data uncertainties. Finally, the surface wave dispersion data sets were inverted to reconstruct a time-lapse model of S-wave velocity for the embankment structure, using a least-squares laterally constrained inversion scheme. A key point of the inversion process was constituted by the estimation of a suitable initial model and the selection of adequate levels of spatial regularization. The initial model and the strength of spatial smoothing were then kept constant throughout the processing of all available data sets to ensure homogeneity of the procedure and comparability among the obtained VS sections. A continuous and coherent temporal pattern of surface wave data, and consequently of the reconstructed VS models, was identified. This pattern is related to the seasonal distribution of precipitation and soil water content measured on site.
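The laterally constrained inversion minimizes data misfit plus a regularization term that penalizes differences between adjacent model cells, anchored to the initial model. A generic regularized least-squares step (not the authors' inversion code; the forward operator, difference operator and weighting are simplified assumptions) can be sketched as:

```python
import numpy as np

def constrained_lsq(G, d, m0, alpha=1.0):
    """Regularised least squares: minimise ||G m - d||^2 +
    alpha^2 ||L (m - m0)||^2, where L takes first differences
    between adjacent model cells and m0 is the initial model."""
    n = G.shape[1]
    L = (np.eye(n) - np.eye(n, k=1))[:-1]    # first-difference rows
    A = np.vstack([G, alpha * L])            # stacked system
    b = np.concatenate([d, alpha * (L @ m0)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

G = np.eye(5)                                # trivial forward operator
d = np.array([1.0, 1.0, 5.0, 1.0, 1.0])     # one anomalous datum
m0 = np.ones(5)
m = constrained_lsq(G, d, m0, alpha=100.0)
# strong smoothing spreads the anomaly: m is nearly constant
```

Keeping `m0` and `alpha` fixed across all time-lapse data sets, as the authors do, is what makes the recovered VS sections comparable from one acquisition to the next.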

The main objective of this thesis is to develop robust variogram estimators with good efficiency properties. The variogram is a fundamental tool in Geostatistics, since it models the dependence structure of the process under study and decisively influences the prediction of new observations. Traditional variogram estimation methods are not robust; that is, they are sensitive to small deviations from the model assumptions. This issue matters because the properties that motivate the use of such methods may not hold in the neighbourhood of the assumed model. The thesis begins with a review of the main concepts of Geostatistics and of traditional variogram estimation, followed by a summary of fundamental notions of statistical robustness. A new variogram estimation method, named the multiple-variogram estimator, is then presented. The method consists of four steps, in which robustness and efficiency criteria alternately prevail: from the initial sample, some pointwise variogram estimates are computed robustly; based on these pointwise estimates, the model parameters are estimated by the least squares method; the two previous steps are repeated, creating a set of multiple estimates of the variogram function; finally, the variogram estimate is defined as the median of the previously obtained estimates. In this way it is possible to obtain an estimator with good robustness properties and good efficiency for Gaussian processes. The research revealed that, when discrete estimates are used in the first stage of variogram estimation, there are situations in which parameter identifiability is not guaranteed. For the most common variogram models, it was possible to establish mildly restrictive conditions that guarantee uniqueness of the solution in variogram estimation.
Variogram estimation always assumes stationarity of the process mean. Since objective procedures to assess this condition are important, this work also proposes a test to validate that hypothesis. The test statistic is an MM-estimator whose distribution is unknown under the assumed dependence conditions; to approximate it, a version of the bootstrap method suited to dependent observations from spatial processes is presented. Finally, the multiple-variogram estimator is evaluated in terms of its practical application. The thesis includes a simulation study that confirms the established properties. In all cases analysed, the multiple-variogram estimator produced better results than the usual alternatives, both under the assumed distribution and under contaminated distributions.
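The four-step idea (robust pointwise estimates, a least-squares model fit, repetition, and a final median) can be sketched for a 1-D transect with an exponential variogram model. Everything here is an illustrative simplification of the thesis' estimator, not a reimplementation: the median-based pointwise estimate with the Gaussian rescaling constant 0.455, the bootstrap resampling used to generate multiple estimates, and the grid search over the range parameter are all assumptions.

```python
import numpy as np

def robust_variogram_fit(coords, values, lags, n_rep=20, seed=1):
    """Sketch: robust (median-based) pointwise variogram estimates
    on resampled data, a least-squares fit of gamma(h) =
    s * (1 - exp(-h / a)) to each, and the median of the fitted
    (s, a) pairs as the final estimate."""
    rng = np.random.default_rng(seed)
    n = len(values)
    mid = 0.5 * (lags[:-1] + lags[1:])
    fits = []
    for _ in range(n_rep):
        idx = rng.choice(n, n, replace=True)
        c, v = coords[idx], values[idx]
        h = np.abs(c[:, None] - c[None, :])
        g = 0.5 * (v[:, None] - v[None, :]) ** 2
        # robust pointwise estimate per lag bin: median of half
        # squared differences, rescaled by median(chi2_1) = 0.455
        pts = np.array([np.median(g[(h > lo) & (h <= hi)]) / 0.455
                        for lo, hi in zip(lags[:-1], lags[1:])])
        best, best_err = None, np.inf
        for a in np.linspace(0.1, 5.0, 50):    # grid over range a
            f = 1.0 - np.exp(-mid / a)
            s = float(f @ pts / (f @ f))       # closed-form sill
            err = float(np.sum((s * f - pts) ** 2))
            if err < best_err:
                best, best_err = (s, a), err
        fits.append(best)
    s_med = float(np.median([f[0] for f in fits]))
    a_med = float(np.median([f[1] for f in fits]))
    return s_med, a_med

coords = np.arange(200.0)                      # 1-D transect
values = np.random.default_rng(3).normal(size=200)   # white noise
lags = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
s_hat, a_hat = robust_variogram_fit(coords, values, lags)
```

For uncorrelated unit-variance data the fitted sill should sit near 1; the robustness comes from replacing the classical mean of squared differences with a rescaled median before any least-squares fitting takes place.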

Simultaneous Equations Models (SEM) are statistical models with a long tradition in Econometrics, since they allow a wide range of economic processes to be represented and studied. The most commonly used estimators for SEM result from applying the Least Squares Method or the Maximum Likelihood Method, neither of which is robust. In Maronna and Yohai (1997), the authors propose ways of "robustifying" these estimators. Another estimation method of interest for these models is the Generalized Method of Moments (GMM), which also leads to non-robust estimators. Estimators that lack robustness are highly inconvenient, since they can produce misleading results when the assumptions underlying the assumed model are violated. Robust estimators are of great value, particularly when the models under study are complex, as is the case with SEM. The main objective of this research was to find such estimators, and a robust estimator named GMMOGK, a robust version of the GMM estimator, was constructed. To assess the performance of the new estimator, a suitable simulation study was carried out and the estimator was also applied to a set of real data. The robust estimator performs well in the heteroscedastic models considered and, under those conditions, behaves better than the non-robust estimators used in the study. However, when the analysis is carried out for each equation separately, the specificity of each individual equation and the dependence structure of the system are two aspects that influence the estimator's performance, as happens with the usual estimators. To frame the research, the text includes a review of essential aspects of SEM, their role in Econometrics, and the main estimation methods, with particular emphasis on GMM, together with a short introduction to robust estimation.

The analysis of integer-valued time series has become an important research area in recent years, not only because of its application to count data arising from many fields of science, but also because it remains relatively unexplored, in contrast with the analysis of continuous-valued time series. A class that has gained particular prominence is that of models based on the binomial thinning operator, most notably the integer-valued autoregressive model of order p. This class is very broad, and this work aims to contribute to the statistical analysis of count processes belonging to it. The analysis is carried out from the point of view of predicting events, to which alarm mechanisms are associated, and also through the introduction of new models based on that operator. In many phenomena described by stochastic processes, the implementation of an alarm system can be essential for predicting the occurrence of a future event. This work addresses, from both the classical and Bayesian perspectives, optimal alarm systems for count processes whose parameters depend on covariates of interest and vary over time, specifically the non-negative integer-valued autoregressive model with stochastic coefficients, DSINAR(1). New models belonging to the class of binomial-thinning-based models are introduced through the proposal of the PINAR(1)T and SETINAR(2;1) models. The PINAR(1)T model has a periodic structure, with innovations given by a periodic sequence of independent Poisson random variables; its probabilistic properties, estimation methods and forecasting are studied in detail.
The SETINAR(2;1) model is an integer-valued autoregressive process defined by self-induced thresholds, whose innovations form a sequence of independent and identically distributed Poisson random variables. Its probabilistic properties and parameter estimation methods are studied. For each model introduced, simulation studies were carried out to compare the estimation methods used.
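The binomial thinning operator α∘X is a sum of X independent Bernoulli(α) trials, so a basic INAR(1) path with Poisson innovations can be simulated with the standard library alone. This is a generic sketch of the operator the thesis builds on, not of the PINAR(1)T or SETINAR(2;1) models themselves, which add periodicity, thresholds and covariates:

```python
import math
import random

def binomial_thinning(x, alpha, rng):
    """alpha o x: sum of x independent Bernoulli(alpha) trials."""
    return sum(rng.random() < alpha for _ in range(x))

def simulate_inar1(n, alpha, lam, seed=7):
    """Simulate X_t = alpha o X_{t-1} + e_t with Poisson(lam)
    innovations; the stationary mean is lam / (1 - alpha)."""
    rng = random.Random(seed)
    def poisson(l):
        # Knuth's algorithm: count uniform draws until the
        # running product falls below exp(-l)
        p, k, target = 1.0, 0, math.exp(-l)
        while p > target:
            k += 1
            p *= rng.random()
        return k - 1
    x, path = 0, []
    for _ in range(n):
        x = binomial_thinning(x, alpha, rng) + poisson(lam)
        path.append(x)
    return path

path = simulate_inar1(5000, alpha=0.5, lam=1.0)
# sample mean should sit near lam / (1 - alpha) = 2
```

Because thinning maps integers to integers, the simulated path stays in the non-negative integers, which is exactly what makes this operator the natural counterpart of multiplication in the continuous AR(1) model.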

The work reported in this thesis aimed at applying the methodology known as metabonomics to the detailed study of a particular type of beer and its quality control, based on the use of multivariate analysis (MVA) to extract meaningful information from given analytical data sets. In Chapter 1, a detailed description of beer is given, covering the brewing process, the main characteristics and typical composition of beer, beer stability, and the analytical techniques commonly used for beer analysis. The fundamentals of the analytical methods employed here, namely nuclear magnetic resonance (NMR) spectroscopy, gas chromatography-mass spectrometry (GC-MS) and mid-infrared (MIR) spectroscopy, together with a description of the metabonomics methodology, are briefly presented in Chapter 2. In Chapter 3, the application of high resolution NMR to characterize the chemical composition of a lager beer is described. The 1H NMR spectrum obtained by direct analysis of beer shows a high degree of complexity, confirming the great potential of NMR spectroscopy for detecting a wide variety of families of compounds in a single run. Spectral assignment was carried out by 2D NMR, resulting in the identification of about 40 compounds, including alcohols, amino acids, organic acids, nucleosides and sugars. In the second part of Chapter 3, the compositional variability of beer was assessed. For that purpose, metabonomics was applied to 1H NMR data (NMR/MVA) to evaluate variability between beers of the same brand (lager), produced nationally but differing in brewing site and date of production. Differences between brewing sites and/or dates were observed, reflecting compositional differences related to particular processing steps, including mashing, fermentation and maturation. Chapter 4 describes the quantification of organic acids in beer by NMR, using different quantitative methods that were developed and compared: direct integration of NMR signals (against an internal reference or an external electronic reference, the ERETIC method) and quantitative statistical methods based on partial least squares (PLS) regression. PLS1 regression models were built using different quantitative methods as reference: capillary electrophoresis with direct and indirect detection, and enzymatic assays. It was found that the NMR integration results generally agree with those obtained by the best-performing PLS models, although some overestimation for malic and pyruvic acids and an apparent underestimation for citric acid were observed. Finally, Chapter 5 describes metabonomic studies performed to better understand the forced aging process of beer (18 days at 45 °C). The aging of lager beer was followed by i) NMR, ii) GC-MS, and iii) MIR spectroscopy. MVA of each analytical data set revealed clear separation between different aging days for both the NMR and GC-MS data, enabling the identification of compounds closely related to the aging process: 5-hydroxymethylfurfural (5-HMF), organic acids, γ-aminobutyric acid (GABA), proline and the linear/branched dextrin ratio (NMR domain), and 5-HMF, furfural, diethyl succinate and phenylacetaldehyde (known aging markers) and, for the first time, 2,3-dihydro-3,5-dihydroxy-6-methyl-4(H)-pyran-4-one (DDMP) and maltoxazine (GC-MS domain). For MIR/MVA, no aging trend could be measured, the results reflecting the need for further experimental optimization. Correlation between the NMR and GC-MS data was performed by outer product analysis (OPA) and statistical heterospectroscopy (SHY), enabling the identification of further compounds highly related to the aging process (11 compounds, 5 of which remain unassigned). Correlation between sensory characteristics and the NMR and GC-MS data was also assessed through PLS1 regression models using the sensory response as reference. The results showed good relationships between analytical response and sensory response, particularly for the aromatic region of the NMR spectra and for the GC-MS data (r > 0.89). However, the prediction power of all the PLS1 regression models built was relatively low, possibly reflecting the low number of samples/tasters employed, an aspect to improve in future studies.