927 results for Recurrent Neural Networks


Relevance: 90.00%

Abstract:

A continuous forward algorithm (CFA) is proposed for nonlinear modelling and identification using radial basis function (RBF) neural networks. The problem considered here is simultaneous network construction and parameter optimization, which is well known to be a hard mixed-integer problem. The proposed algorithm performs these two tasks within an integrated analytic framework and offers two important advantages. First, model performance can be significantly improved through continuous parameter optimization. Second, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity. Computational complexity analysis and simulation results confirm the effectiveness of the approach.
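
The abstract does not reproduce the CFA itself, but the baseline it improves on is easy to sketch. Below is a minimal numpy illustration (the function names and the fixed Gaussian width are assumptions for the example, not the authors' formulation) of plain greedy forward construction of an RBF model; note that it explicitly builds and tests every candidate regressor at each step, which is precisely the memory and computation cost the CFA is designed to avoid:

```python
import numpy as np

def rbf_features(X, centres, width):
    # Gaussian RBF design matrix: phi[i, j] = exp(-||x_i - c_j||^2 / (2 width^2))
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def forward_rbf(X, y, n_centres, width=1.0):
    """Greedy forward construction: at each step, add the candidate centre
    (drawn from the training inputs) that most reduces the training error.
    Every candidate regressor is built and tested explicitly here -- the
    cost the CFA avoids."""
    chosen, candidates = [], list(range(len(X)))
    for _ in range(n_centres):
        best_err, best_j = np.inf, None
        for j in candidates:
            Phi = rbf_features(X, X[chosen + [j]], width)
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            err = np.mean((y - Phi @ w) ** 2)
            if err < best_err:
                best_err, best_j = err, j
        chosen.append(best_j)
        candidates.remove(best_j)
    Phi = rbf_features(X, X[chosen], width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return X[chosen], w

# toy usage: fit a noisy sine with six greedily chosen centres
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)
centres, w = forward_rbf(X, y, n_centres=6)
```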

Relevance: 90.00%

Abstract:

This paper describes the application of regularisation to the training of feedforward neural networks as a means of improving the quality of the solutions obtained. The basic principles of regularisation theory are outlined for both linear and nonlinear training, and then extended to cover a new hybrid training algorithm for feedforward neural networks recently proposed by the authors. The concept of functional regularisation is also introduced and discussed in relation to MLP and RBF networks. The tendency of the hybrid training algorithm, and of many linear optimisation strategies, to generate large-magnitude weight solutions when applied to ill-conditioned neural paradigms is illustrated graphically and reasoned analytically. While such weight solutions do not generally result in poor fits, it is argued that they can be subject to numerical instability and are therefore undesirable. Using an illustrative example it is shown that, as well as being beneficial from a generalisation perspective, regularisation also provides a means of controlling the magnitude of solutions.
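
As a concrete illustration of the magnitude-control point, consider a ridge-style (weight-decay) regularised least-squares solve for the linear output weights of a network. This is generic regularised least squares rather than the authors' hybrid algorithm, and the design matrix and regularisation values below are invented for the example:

```python
import numpy as np

def ridge_weights(Phi, y, lam):
    # regularised least squares: w = (Phi^T Phi + lam I)^{-1} Phi^T y
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

# deliberately ill-conditioned design: two nearly collinear columns
rng = np.random.default_rng(1)
x = rng.standard_normal(50)
Phi = np.column_stack([x, x + 1e-6 * rng.standard_normal(50)])
y = x + 0.1 * rng.standard_normal(50)

for lam in (0.0, 1e-3, 1e-1):
    w = ridge_weights(Phi, y, lam)
    fit = np.mean((y - Phi @ w) ** 2)
    print(f"lam={lam:g}  ||w||={np.linalg.norm(w):.3g}  mse={fit:.3g}")
```

On an ill-conditioned design like this, the unregularised solution typically has an enormous weight norm, while even a small regularisation parameter collapses it to a sensible scale with little loss of fit.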

Relevance: 90.00%

Abstract:

In the identification of complex dynamic systems using fuzzy neural networks, one of the main issues is the curse of dimensionality, which makes it difficult to retain a large number of system inputs or to consider a large number of fuzzy sets. Moreover, owing to correlations among candidate terms, not all possible network inputs or regression vectors are necessary; adding redundant ones simply increases model complexity and degrades generalisation performance. In this paper, the problem is addressed by first proposing a fast algorithm for the selection of network terms, and then introducing a refinement procedure to tackle the correlation issue. Simulation results show the efficacy of the method.
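
The abstract does not spell out either the fast selection algorithm or the refinement procedure, so the following sketch is only a generic stand-in for the idea: rank candidate terms by their correlation with the current residual, and skip any new term that is nearly collinear with one already kept (the correlation threshold is an assumed parameter):

```python
import numpy as np

def select_terms(Phi, y, n_terms, corr_thresh=0.95):
    """Rank candidate columns of Phi by |correlation| with the current
    residual; refine by skipping any term nearly collinear with one kept."""
    kept, resid = [], y - y.mean()
    for _ in range(n_terms):
        scores = np.abs([np.corrcoef(Phi[:, j], resid)[0, 1]
                         for j in range(Phi.shape[1])])
        for j in np.argsort(scores)[::-1]:
            if j in kept:
                continue
            if all(abs(np.corrcoef(Phi[:, j], Phi[:, k])[0, 1]) < corr_thresh
                   for k in kept):
                kept.append(j)
                break
        w, *_ = np.linalg.lstsq(Phi[:, kept], y, rcond=None)
        resid = y - Phi[:, kept] @ w
    return kept

# usage: kept = select_terms(Phi, y, n_terms=5), where Phi stacks all candidate terms
```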

Relevance: 90.00%

Abstract:

A significant part of the literature on input-output (IO) analysis is dedicated to the development and application of methodologies for forecasting and updating technology coefficients and multipliers. Prominent among these techniques is the RAS method, although more data-demanding econometric methods, as well as other less promising ones, have also been proposed. However, little interest has been expressed in applying more modern and often more innovative methods, such as neural networks, to IO analysis. This study constructs, proposes and applies a backpropagation neural network (BPN) to forecast IO technology coefficients and, subsequently, multipliers. The RAS method is also applied to the same set of UK IO tables, and the discussion of the results of both methods is accompanied by a comparative analysis. The results show that the BPN offers a valid alternative for IO technology forecasting, and many forecasts were more accurate under this method. Overall, however, the RAS method outperformed the BPN, although the difference is too small to be considered systematic, and there are further ways to improve the performance of the BPN.
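
The RAS method itself is standard biproportional scaling (iterative proportional fitting): alternately rescale the rows and columns of a base coefficient matrix until its margins match target row and column totals. A minimal numpy sketch, with a made-up matrix for illustration:

```python
import numpy as np

def ras(A0, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) updating: alternately rescale rows and columns
    of the base matrix until its margins match the target totals."""
    A = A0.astype(float).copy()
    for _ in range(max_iter):
        A *= (row_targets / A.sum(axis=1))[:, None]   # row scaling (R step)
        A *= (col_targets / A.sum(axis=0))[None, :]   # column scaling (S step)
        if np.allclose(A.sum(axis=1), row_targets, atol=tol):
            break
    return A

# made-up 2x2 base flows updated to new margins (grand totals must agree: 32 = 32)
A0 = np.array([[10.0, 5.0], [4.0, 8.0]])
A1 = ras(A0, row_targets=np.array([18.0, 14.0]),
         col_targets=np.array([17.0, 15.0]))
```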

Relevance: 90.00%

Abstract:

The origins of artificial neural networks are related to animal conditioning theory: both are forms of connectionist theory, which in turn derives from the empiricist philosophers' principle of association. The parallel between animal learning and neural nets suggests that interaction between them should benefit both sides.

Relevance: 90.00%

Abstract:

The majority of learning methods reported to date for Takagi-Sugeno-Kang (TSK) fuzzy neural models mainly focus on improving their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each rule consequent must match the system's local behaviour well when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguishes such models from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions, and hence to identify the corresponding consequent models that can be directly explained in terms of system behaviour, is a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are expressed as a function of the premise parameters, enabling a new integrated gradient-descent learning approach. A new Jacobian matrix is proposed, and efficiently computed, to achieve a more accurate approximation of the cost function within the second-order Levenberg-Marquardt optimization method. Several other interpretability issues of the fuzzy neural model are also discussed and integrated into the new learning approach. Numerical examples illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed algorithm, and the results are compared with those of some well-known methods.
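
For orientation, the structure being learned can be sketched as follows. This is a generic first-order TSK formulation with Gaussian memberships, not the authors' specific integrated Levenberg-Marquardt scheme, and all names are illustrative. The key point is that, with the premises fixed, the consequent parameters follow from a single least-squares solve; it is this dependency that the integrated approach exploits when differentiating the cost with respect to the premise parameters alone:

```python
import numpy as np

def tsk_output(x, centres, widths, theta):
    """First-order TSK: rule i fires with Gaussian strength; its consequent
    is a local linear model theta[i] @ [1, x]; outputs are blended by the
    normalised firing strengths."""
    fire = np.exp(-(((x - centres) ** 2) / (2 * widths ** 2)).sum(1))
    beta = fire / fire.sum()
    xa = np.concatenate(([1.0], x))
    return beta @ (theta @ xa)

def fit_consequents(X, y, centres, widths):
    """With the premise (centre/width) parameters fixed, the model output is
    linear in the consequent parameters, so they follow from one
    least-squares solve -- the dependency the integrated approach exploits."""
    n_rules, d = centres.shape
    rows = []
    for x in X:
        fire = np.exp(-(((x - centres) ** 2) / (2 * widths ** 2)).sum(1))
        beta = fire / fire.sum()
        rows.append(np.kron(beta, np.concatenate(([1.0], x))))
    theta_flat, *_ = np.linalg.lstsq(np.array(rows), y, rcond=None)
    return theta_flat.reshape(n_rules, d + 1)

# toy usage: three rules on a 1-D input
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (60, 1))
y = np.tanh(X[:, 0])
centres, widths = np.array([[-1.5], [0.0], [1.5]]), np.ones((3, 1))
theta = fit_consequents(X, y, centres, widths)
yhat = tsk_output(np.array([0.5]), centres, widths, theta)
```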

Relevance: 90.00%

Abstract:

One of the main purposes of building a battery model is monitoring and control during battery charging/discharging, as well as estimating key quantities such as the state of charge for electric vehicles. However, a model based on the electrochemical reactions within the battery is highly complex and difficult to compute using conventional approaches. Radial basis function (RBF) neural networks have been widely used to model complex systems for estimation and control purposes, but the optimization of both the linear and nonlinear parameters in the RBF model remains a key issue. A recently proposed meta-heuristic named Teaching-Learning-Based Optimization (TLBO) requires no algorithm-specific tuning parameters and performs well on nonlinear optimization problems. In this paper, a novel self-learning TLBO-based RBF model is proposed for modelling electric vehicle batteries. The approach has been applied to two battery testing data sets and compared with other RBF-based battery models; the training and validation results confirm the efficacy of the proposed method.
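
TLBO itself is a published population-based method with a teacher phase (pull learners toward the current best solution) and a learner phase (pairwise interaction), and it has no algorithm-specific tuning constants beyond population size and iteration count. A compact sketch on a toy cost function follows; the paper's self-learning extension and the coupling to the RBF battery model are not reproduced here:

```python
import numpy as np

def tlbo(cost, lb, ub, pop=20, iters=100, seed=0):
    """Teaching-Learning-Based Optimization: the teacher phase pulls the
    class toward the best learner; the learner phase lets random pairs
    learn from each other."""
    rng = np.random.default_rng(seed)
    d = len(lb)
    X = rng.uniform(lb, ub, (pop, d))
    f = np.apply_along_axis(cost, 1, X)
    for _ in range(iters):
        # teacher phase: move toward the teacher, away from the class mean
        teacher = X[f.argmin()]
        TF = rng.integers(1, 3)                      # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random((pop, d)) * (teacher - TF * X.mean(0)), lb, ub)
        fn = np.apply_along_axis(cost, 1, Xn)
        improved = fn < f
        X[improved], f[improved] = Xn[improved], fn[improved]
        # learner phase: step toward a better peer, away from a worse one
        for i in range(pop):
            j = int(rng.integers(pop))
            if j == i:
                continue
            step = X[i] - X[j] if f[i] < f[j] else X[j] - X[i]
            xi = np.clip(X[i] + rng.random(d) * step, lb, ub)
            fi = cost(xi)
            if fi < f[i]:
                X[i], f[i] = xi, fi
    return X[f.argmin()], f.min()

# toy usage: minimise a shifted sphere function in two dimensions
best_x, best_f = tlbo(lambda x: ((x - 1.0) ** 2).sum(),
                      lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0]))
```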

Relevance: 90.00%

Abstract:

A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be cast as a model selection problem, in which a compact model is selected from all candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches; however, they may produce suboptimal models and can become trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods in place of the fast recursive-based methods. In contrast to the TSFRA, a new simplified relationship between the forward and backward stages is derived, using the inherent orthogonality properties of least squares to avoid repetitive computations. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
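
The classic forward stage that both this work and the TSFRA build on is OLS selection by the error reduction ratio (ERR): orthogonalise each remaining candidate against the chosen basis and keep the one explaining the largest share of the output energy, ERR_k = (w_k^T y)^2 / ((w_k^T w_k)(y^T y)). A plain (non-recursive) sketch of that forward stage, without the paper's backward refinement or term-exchanging scheme:

```python
import numpy as np

def ols_err_forward(P, y, n_terms):
    """Forward OLS subset selection: orthogonalise each remaining candidate
    column of P against the chosen basis (Gram-Schmidt) and keep the one
    with the largest error reduction ratio."""
    yty = y @ y
    selected, Q = [], []
    remaining = list(range(P.shape[1]))
    for _ in range(n_terms):
        best_err, best_j, best_w = -1.0, None, None
        for j in remaining:
            w = P[:, j].astype(float).copy()
            for q in Q:                              # orthogonalise against basis
                w -= (q @ P[:, j]) / (q @ q) * q
            if w @ w < 1e-12:                        # numerically dependent term
                continue
            err = (w @ y) ** 2 / ((w @ w) * yty)     # ERR of candidate j
            if err > best_err:
                best_err, best_j, best_w = err, j, w
        if best_j is None:
            break
        selected.append(best_j)
        Q.append(best_w)
        remaining.remove(best_j)
    theta, *_ = np.linalg.lstsq(P[:, selected], y, rcond=None)
    return selected, theta
```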