57 results for NONLINEAR PARABOLIC-SYSTEMS
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
This paper investigates two-stage stepwise identification for a class of nonlinear dynamic systems that can be described by linear-in-the-parameters models, where the model has to be built from a very large pool of candidate basis functions or model terms. The main objective is to improve the compactness of the model obtained by forward stepwise methods, while retaining the computational efficiency. The proposed algorithm first generates an initial model using a forward stepwise procedure. The significance of each selected term is then reviewed at the second stage and all insignificant ones are replaced, resulting in an optimised compact model with significantly improved performance. The main contribution of this paper is that these two stages are performed within a well-defined regression context, leading to significantly reduced computational complexity. The efficiency of the algorithm is confirmed by the computational complexity analysis, and its effectiveness is demonstrated by the simulation results.
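For illustration, a minimal Python sketch of the two-stage idea is given below: the forward stage greedily selects columns of a candidate regressor matrix, and the review stage swaps each selected term for any unselected candidate that strictly reduces the residual sum of squares. The repeated least-squares refits here stand in for the paper's recursive regression-context updates, so this is a sketch of the selection logic only, not the proposed algorithm.

```python
# Minimal sketch: forward stepwise selection followed by a review/replacement
# stage over a candidate regressor matrix PHI (columns = candidate terms).
import numpy as np

def residual_sse(PHI, y, idx):
    """Least-squares fit of y on the selected columns; returns the SSE."""
    theta, *_ = np.linalg.lstsq(PHI[:, idx], y, rcond=None)
    r = y - PHI[:, idx] @ theta
    return float(r @ r)

def two_stage_select(PHI, y, n_terms):
    candidates = range(PHI.shape[1])
    selected = []
    # Stage 1: forward stepwise selection by greedy SSE reduction.
    for _ in range(n_terms):
        best = min((c for c in candidates if c not in selected),
                   key=lambda c: residual_sse(PHI, y, selected + [c]))
        selected.append(best)
    # Stage 2: review each selected term and replace it with an unselected
    # candidate whenever the swap strictly reduces the SSE.
    improved = True
    while improved:
        improved = False
        for i, term in enumerate(selected):
            rest = selected[:i] + selected[i + 1:]
            base = residual_sse(PHI, y, rest + [term])
            pool = [c for c in candidates if c not in selected]
            best = min(pool, key=lambda c: residual_sse(PHI, y, rest + [c]),
                       default=term)
            if residual_sse(PHI, y, rest + [best]) < base - 1e-12:
                selected[i] = best
                improved = True
    return selected
```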
Abstract:
We consider the local order estimation of nonlinear autoregressive systems with exogenous inputs (NARX), which may have different local dimensions at different points. By minimizing the kernel-based local information criterion introduced in this paper, strongly consistent estimates of the local orders of the NARX system at the points of interest are obtained. A modification of the criterion and a simple procedure for searching for the minimum of the criterion are also discussed. The theoretical results derived here are tested by simulation examples.
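As a rough illustration (not the criterion of the paper), the sketch below scores candidate orders (p, q) at a point of interest with a kernel-weighted, BIC-like cost built from a local-linear fit; the kernel, penalty and bandwidth are assumptions made purely for the example.

```python
# Illustrative sketch: choose local NARX orders (p, q) at a point of interest
# by minimising a kernel-weighted, BIC-like cost (an assumed stand-in for the
# local information criterion of the paper).
import numpy as np

def narx_regressors(y, u, p, q):
    """Build lagged regressors [y(t-1..p), u(t-1..q)] and targets y(t)."""
    d = max(p, q)
    X = [np.r_[y[t - p:t][::-1], u[t - q:t][::-1]] for t in range(d, len(y))]
    return np.asarray(X), y[d:]

def local_criterion(y, u, p, q, y_star, u_star, bandwidth=0.5):
    X, target = narx_regressors(y, u, p, q)
    point = np.r_[y_star[:p], u_star[:q]]          # point of interest
    w = np.exp(-np.sum((X - point) ** 2, axis=1) / (2 * bandwidth ** 2))
    A = np.c_[np.ones(len(X)), X]                  # local-linear design
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * target, rcond=None)
    sse = np.sum(w * (target - A @ theta) ** 2) / np.sum(w)
    n_eff = np.sum(w)                              # effective local sample size
    return np.log(sse) + (p + q) * np.log(n_eff) / n_eff

def estimate_local_orders(y, u, y_star, u_star, max_p=3, max_q=3):
    grid = [(p, q) for p in range(1, max_p + 1) for q in range(1, max_q + 1)]
    return min(grid, key=lambda pq: local_criterion(y, u, pq[0], pq[1],
                                                    y_star, u_star))
```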
Abstract:
The identification of nonlinear dynamic systems using radial basis function (RBF) neural models is studied in this paper. Given a model selection criterion, the main objective is to effectively and efficiently build a parsimonious compact neural model that generalizes well over unseen data. This is achieved by simultaneous model structure selection and optimization of the parameters over the continuous parameter space. This is a hard mixed-integer problem, and a unified analytic framework is proposed to enable an effective and efficient two-stage mixed discrete-continuous identification procedure. This novel framework combines the advantages of an iterative discrete two-stage subset selection technique for model structure determination and the calculus-based continuous optimization of the model parameters. Computational complexity analysis and simulation studies confirm the efficacy of the proposed algorithm.
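A hedged sketch of the alternation idea follows: a discrete stage that revises the selected RBF centres and a continuous stage that tunes the shared kernel width. The initialisation, width optimiser and candidate pool used here are assumptions for illustration, not the framework proposed in the paper.

```python
# Hedged sketch: alternate a discrete stage (revise selected RBF centres) with
# a continuous stage (tune the shared Gaussian width); details are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix for inputs X (N x d) and given centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_sse(X, y, centres, width):
    PHI = rbf_design(X, centres, width)
    w, *_ = np.linalg.lstsq(PHI, y, rcond=None)
    r = y - PHI @ w
    return float(r @ r)

def two_stage_rbf(X, y, n_centres, width=1.0, n_sweeps=3):
    pool = list(range(len(X)))        # candidate centres = training inputs
    chosen = pool[:n_centres]         # crude initialisation
    for _ in range(n_sweeps):
        # Discrete stage: greedily revise each chosen centre.
        for i in range(n_centres):
            rest = chosen[:i] + chosen[i + 1:]
            cand = [c for c in pool if c not in rest]
            chosen[i] = min(cand, key=lambda c: fit_sse(X, y, X[rest + [c]], width))
        # Continuous stage: 1-D optimisation of the shared kernel width.
        res = minimize_scalar(lambda s: fit_sse(X, y, X[chosen], s),
                              bounds=(0.05, 10.0), method="bounded")
        width = float(res.x)
    return chosen, width
```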
Abstract:
The identification of nonlinear dynamic systems using linear-in-the-parameters models is studied. A fast recursive algorithm (FRA) is proposed to select the model structure and to estimate the model parameters. Unlike the orthogonal least squares (OLS) method, the FRA solves the least-squares problem recursively over the model order without requiring matrix decomposition. The computational complexity of both algorithms is analyzed, along with their numerical stability. The new method is shown to require much less computational effort and is also numerically more stable than OLS.
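The sketch below illustrates the general idea of solving least squares recursively over the model order: when a regressor column is appended, the inverse Gram matrix is updated by a Schur-complement step rather than recomputed by matrix decomposition. This is a generic recursion for illustration only, not the FRA itself.

```python
# Generic sketch of order-recursive least squares: appending a regressor
# column updates the inverse Gram matrix via a Schur complement, avoiding any
# fresh matrix decomposition (illustration only, not the FRA of the paper).
import numpy as np

class OrderRecursiveLS:
    def __init__(self, y):
        self.y = y
        self.PHI = None     # currently selected regressor columns
        self.Ginv = None    # inverse of PHI^T PHI, updated recursively

    def add_column(self, phi):
        phi = phi.reshape(-1, 1)
        if self.PHI is None:
            self.PHI = phi
            self.Ginv = np.array([[1.0 / float(phi.T @ phi)]])
            return
        b = self.PHI.T @ phi                       # cross terms with new column
        d = float(phi.T @ phi)
        u = self.Ginv @ b
        s = d - float(b.T @ u)                     # Schur complement (scalar)
        top = np.hstack([self.Ginv + (u @ u.T) / s, -u / s])
        bottom = np.hstack([-u.T / s, np.array([[1.0 / s]])])
        self.Ginv = np.vstack([top, bottom])
        self.PHI = np.hstack([self.PHI, phi])

    def parameters(self):
        """Least-squares parameter estimate for the current model order."""
        return self.Ginv @ (self.PHI.T @ self.y)
```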
Abstract:
According to Michael's selection theorem, any surjective continuous linear operator from one Fr\'echet space onto another has a continuous (not necessarily linear) right inverse. Using this theorem, Herzog and Lemmert proved that if $E$ is a Fr\'echet space and $T:E\to E$ is a continuous linear operator such that the Cauchy problem $\dot x=Tx$, $x(0)=x_0$ is solvable in $[0,1]$ for any $x_0\in E$, then for any $f\in C([0,1],E)$, there exists a continuous map $S:[0,1]\times E\to E$, $(t,x)\mapsto S_tx$ such that for any $x_0\in E$, the function $x(t)=S_tx_0$ is a solution of the Cauchy problem $\dot x(t)=Tx(t)+f(t)$, $x(0)=x_0$ (they call $S$ a fundamental system of solutions of the equation $\dot x=Tx+f$). We prove the same theorem, replacing "continuous" by "sequentially continuous", for locally convex spaces from a class which contains strict inductive limits of Fr\'echet spaces and strong duals of Fr\'echet--Schwartz spaces and is closed with respect to finite products and sequentially closed subspaces. The key point of the proof is an extension of the theorem on the existence of a sequentially continuous right inverse of any surjective sequentially continuous linear operator to some class of non-metrizable locally convex spaces.
Abstract:
This article discusses the identification of nonlinear dynamic systems using multi-layer perceptrons (MLPs). It focuses on both structure uncertainty and parameter uncertainty, which have been widely explored in the literature on nonlinear system identification. The main contribution is an integrated analytic framework for automated neural network structure selection, parameter identification and hysteresis network switching with guaranteed neural identification performance. First, an automated network structure selection procedure is proposed within a fixed time interval for a given network construction criterion. Then, a network parameter updating algorithm is proposed with guaranteed bounded identification error. To cope with structure uncertainty, a hysteresis strategy is proposed to enable neural identifier switching with guaranteed network performance throughout the switching process. Both theoretical analysis and a simulation example show the efficacy of the proposed method.
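The hysteresis element can be illustrated with a small sketch: the active identifier is abandoned only when another candidate beats it by a fixed margin, which avoids chattering between structures. The monitoring cost and margin used here are assumptions for illustration, not the paper's construction.

```python
# Illustrative hysteresis switching rule: the active identifier is replaced
# only when a rival's monitoring cost undercuts it by the margin h, which
# prevents chattering. The cost signal and margin are assumed for the example.
def hysteresis_switch(active, costs, h=0.1):
    best = min(costs, key=costs.get)
    return best if costs[best] + h < costs[active] else active

# Example: the small network is kept until the larger one is clearly better.
costs = {"small": 0.52, "large": 0.45}
print(hysteresis_switch("small", costs))   # -> "small"  (0.45 + 0.1 > 0.52)
costs["large"] = 0.38
print(hysteresis_switch("small", costs))   # -> "large"  (0.38 + 0.1 < 0.52)
```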
Abstract:
A forward and backward least angle regression (LAR) algorithm is proposed to construct the nonlinear autoregressive model with exogenous inputs (NARX) that is widely used to describe a large class of nonlinear dynamic systems. The main objective of this paper is to improve the model sparsity and generalization performance of the original forward LAR algorithm. This is achieved by introducing a replacement scheme using an additional backward LAR stage. The backward stage replaces insignificant model terms selected by forward LAR with more significant ones, leading to an improved model in terms of compactness and performance. A numerical example constructing four types of NARX models, namely polynomials, radial basis function (RBF) networks, neuro-fuzzy networks and wavelet networks, is presented to illustrate the effectiveness of the proposed technique in comparison with some popular methods.
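A hedged sketch of the forward-backward idea is given below, using scikit-learn's Lars estimator for the forward pass and a simple residual-based replacement check in place of the paper's backward LAR stage.

```python
# Hedged sketch: forward stage via scikit-learn's LAR, then a simple backward
# replacement pass based on the residual sum of squares (standing in for the
# backward LAR stage of the paper).
import numpy as np
from sklearn.linear_model import Lars

def sse(PHI, y, idx):
    theta, *_ = np.linalg.lstsq(PHI[:, idx], y, rcond=None)
    r = y - PHI[:, idx] @ theta
    return float(r @ r)

def forward_backward_lar(PHI, y, n_terms):
    # Forward stage: standard least angle regression over the candidate terms.
    selected = list(Lars(n_nonzero_coefs=n_terms).fit(PHI, y).active_)
    # Backward stage: swap any selected term for an unselected candidate
    # whenever the swap lowers the residual sum of squares.
    for i, term in enumerate(selected):
        rest = selected[:i] + selected[i + 1:]
        pool = [c for c in range(PHI.shape[1]) if c not in selected]
        best = min([term] + pool, key=lambda c: sse(PHI, y, rest + [c]))
        if sse(PHI, y, rest + [best]) < sse(PHI, y, rest + [term]):
            selected[i] = best
    return selected
```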