929 results for fuzzy neural networks


Relevance: 90.00%

Abstract:

FUELCON is an expert system in nuclear engineering. Its task is optimized refueling design, which is crucial to keeping down operating costs at a plant. FUELCON proposes sets of alternative fuel-allocation configurations; the fuel is positioned in a grid representing the core of a reactor. The practitioner of in-core fuel management uses FUELCON to generate a reasonably good configuration for the situation at hand. The domain expert, on the other hand, resorts to the system to test heuristics and to discover new ones for the task described above. Expert use involves a manual phase of revising the ruleset, based on performance during previous iterations in the same session. This paper is concerned with a new phase: the design of a neural component to carry out the revision automatically. Such automated revision considers the previous performance of the system and uses it to adapt and learn better rules. The neural component is based on a particular schema for a symbolic-to-recurrent-analogue bridge, called NIPPL, and on reinforcement learning of neural networks for the adaptation.

Relevance: 90.00%

Abstract:

An electronic nose is being developed jointly by the University of Greenwich and the Institute of Intelligent Machines to detect the gases given off by an oil-filled transformer when it begins to break down. The gas sensors being used are very simple, consisting of a layer of tin oxide (SnO2) heated to approximately 640 K, whose conductivity varies with the gas concentrations. Some of the shortcomings of the commercially available gas sensors are being overcome by using an integrated array of gas sensors together with artificial neural networks, which can be 'taught' to recognize when the gas contains several components. At present, simulated results have achieved up to a 94% success rate in recognizing two-component gases; future work will investigate alternative neural network configurations to maintain this success rate with practical measurements.
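
To make the classification setup concrete, here is a minimal sketch of training a small neural network to flag which of two gases are present from a simulated sensor array. The sensor response model, network size, and use of scikit-learn are illustrative assumptions, not the project's actual setup.

```python
# Minimal sketch (not the authors' system): an MLP trained to identify which of
# two gases are present from a simulated array of SnO2-like sensors.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_sensors, n_samples = 8, 2000

# Hypothetical sensitivity of each sensor to each of the two gases.
sensitivity = rng.uniform(0.2, 1.0, size=(n_sensors, 2))

# Random concentrations; the label marks which gases exceed a threshold.
conc = rng.uniform(0.0, 1.0, size=(n_samples, 2))
labels = (conc > 0.3).astype(int)                         # two binary targets
responses = conc @ sensitivity.T                          # idealised conductivity change
responses += 0.05 * rng.standard_normal(responses.shape)  # sensor noise

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(responses[:1500], labels[:1500])
print("hold-out accuracy:", clf.score(responses[1500:], labels[1500:]))
```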

Relevance: 90.00%

Abstract:

This paper proposes a novel hybrid forward algorithm (HFA) for the construction of radial basis function (RBF) neural networks with tunable nodes. The main objective is to efficiently and effectively produce a parsimonious RBF neural network that generalizes well. In this study, this is achieved through simultaneous network structure determination and parameter optimization on the continuous parameter space. This is a mixed-integer hard problem, and the proposed HFA tackles it within an integrated analytic framework, leading to significantly improved network performance and reduced memory usage during network construction. The computational complexity analysis confirms the efficiency of the proposed algorithm, and the simulation results demonstrate its effectiveness.
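
The following is a rough sketch of the general idea of forward RBF construction with tunable nodes, not the HFA itself: Gaussian nodes are added one at a time, each candidate node's centre and width are tuned on the continuous parameter space (here with a Nelder-Mead search), and the linear weights are refitted at each step. All data and parameter choices below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rbf(x, c, w):
    # One-dimensional Gaussian basis function.
    return np.exp(-((x - c) ** 2) / (2.0 * w ** 2))

def forward_rbf(x, y, n_nodes=5, restarts=5):
    centres, widths = [], []
    for _ in range(n_nodes):
        def sse(p):
            # Fit linear weights with the candidate node included, return residual SSE.
            c, w = p[0], abs(p[1]) + 1e-6
            phi = np.column_stack([rbf(x, ci, wi)
                                   for ci, wi in zip(centres + [c], widths + [w])])
            theta = np.linalg.lstsq(phi, y, rcond=None)[0]
            r = y - phi @ theta
            return float(r @ r)
        # Tune the new node's centre and width with a few random restarts.
        starts = [(rng.uniform(x.min(), x.max()), 0.5) for _ in range(restarts)]
        best = min((minimize(sse, s, method="Nelder-Mead") for s in starts),
                   key=lambda res: res.fun)
        centres.append(best.x[0])
        widths.append(abs(best.x[1]) + 1e-6)
    phi = np.column_stack([rbf(x, c, w) for c, w in zip(centres, widths)])
    theta = np.linalg.lstsq(phi, y, rcond=None)[0]
    return centres, widths, theta

x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.05 * rng.standard_normal(200)
centres, widths, theta = forward_rbf(x, y)
print("selected centres:", np.round(centres, 2))
```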

Relevance: 90.00%

Abstract:

A continuous forward algorithm (CFA) is proposed for nonlinear modelling and identification using radial basis function (RBF) neural networks. The problem considered here is simultaneous network construction and parameter optimization, well known to be a mixed-integer hard one. The proposed algorithm performs these two tasks within an integrated analytic framework and offers two important advantages. First, the model performance can be significantly improved through continuous parameter optimization. Second, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity. Computational complexity analysis and simulation results confirm the effectiveness of the approach.
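
As a rough illustration of the memory argument only, not of the CFA itself, the snippet below contrasts pre-computing a full dictionary of fixed-grid candidate RBF regressors with building only the columns of the nodes actually kept; all sizes are made up.

```python
# Illustrative memory comparison, not the CFA algorithm.
import numpy as np

n_samples, n_candidates, n_selected = 5_000, 1_000, 10
x = np.random.rand(n_samples)

# Fixed-grid approach: the full candidate dictionary is held in memory during selection.
grid_centres = np.linspace(0, 1, n_candidates)
dictionary = np.exp(-(x[:, None] - grid_centres[None, :]) ** 2 / 0.02)
print("candidate dictionary:", dictionary.nbytes / 1e6, "MB")

# Continuous construction: only the selected columns ever exist.
kept_centres = np.random.rand(n_selected)   # stand-in for continuously optimised centres
model_cols = np.exp(-(x[:, None] - kept_centres[None, :]) ** 2 / 0.02)
print("selected model matrix:", model_cols.nbytes / 1e6, "MB")
```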

Relevance: 90.00%

Abstract:

This paper describes the application of regularisation to the training of feedforward neural networks as a means of improving the quality of the solutions obtained. The basic principles of regularisation theory are outlined for both linear and nonlinear training, and then extended to cover a new hybrid training algorithm for feedforward neural networks recently proposed by the authors. The concept of functional regularisation is also introduced and discussed in relation to MLP and RBF networks. The tendency of the hybrid training algorithm and many linear optimisation strategies to generate large-magnitude weight solutions when applied to ill-conditioned neural paradigms is illustrated graphically and reasoned analytically. While such weight solutions do not generally result in poor fits, it is argued that they could be subject to numerical instability and are therefore undesirable. Using an illustrative example, it is shown that, as well as being beneficial from a generalisation perspective, regularisation also provides a means of controlling the magnitude of solutions.
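
A generic ridge-regression example (not the authors' hybrid algorithm) illustrates the point about solution magnitude: on a nearly collinear design, unregularised least squares can return very large weights with little gain in fit, while a small regularisation term keeps the weight norm under control.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal((100, 2))
X = np.column_stack([z[:, 0], z[:, 0] + 1e-4 * z[:, 1]])   # nearly collinear columns
y = X @ np.array([1.0, 1.0]) + 0.01 * rng.standard_normal(100)

# Unregularised least squares solution.
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge (weight-decay) solution with a small penalty.
lam = 1e-3
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

for name, w in [("least squares", w_ls), ("regularised", w_ridge)]:
    print(f"{name:14s} |w| = {np.linalg.norm(w):8.2f}  "
          f"residual = {np.linalg.norm(y - X @ w):.4f}")
```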

Relevance: 90.00%

Abstract:

A significant part of the literature on input-output (IO) analysis is dedicated to the development and application of methodologies for forecasting and updating technology coefficients and multipliers. Prominent among such techniques is the RAS method, while more information-demanding econometric methods, as well as other less promising ones, have also been proposed. However, little interest has been expressed in the use of more modern and often more innovative methods, such as neural networks, in IO analysis in general. This study constructs, proposes, and applies a backpropagation neural network (BPN) for forecasting IO technology coefficients and, subsequently, multipliers. The RAS method is also applied to the same set of UK IO tables, and the discussion of the results of both methods is accompanied by a comparative analysis. The results show that the BPN offers a valid alternative for IO technology forecasting, and many forecasts were more accurate using this method. Overall, however, the RAS method outperformed the BPN, although the difference is too small to be systematic, and there are further ways to improve the performance of the BPN.
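
For reference, the RAS step referred to above is a biproportional scaling of a prior matrix so that it matches target row and column totals. The following sketch applies it to a made-up 3x3 flow matrix with illustrative margins; it is not the study's data or code.

```python
import numpy as np

def ras(A, row_targets, col_targets, iters=100, tol=1e-10):
    """Biproportional (RAS) adjustment of A to the given row and column totals."""
    A = A.astype(float).copy()
    for _ in range(iters):
        r = row_targets / A.sum(axis=1)   # row scaling factors
        A = np.diag(r) @ A
        s = col_targets / A.sum(axis=0)   # column scaling factors
        A = A @ np.diag(s)
        if np.allclose(A.sum(axis=1), row_targets, atol=tol):
            break
    return A

A0 = np.array([[10.0, 5.0, 2.0],
               [ 4.0, 8.0, 3.0],
               [ 1.0, 2.0, 6.0]])
u = np.array([20.0, 18.0, 11.0])   # target row sums (totals must match: 49)
v = np.array([17.0, 18.0, 14.0])   # target column sums (totals must match: 49)
print(ras(A0, u, v).round(2))
```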

Relevance: 90.00%

Abstract:

The origins of artificial neural networks are related to animal conditioning theory: both are forms of connectionist theory, which in turn derives from the empiricist philosophers' principle of association. The parallel between animal learning and neural nets suggests that interaction between them should benefit both sides.

Relevance: 90.00%

Abstract:

One of the main purposes of building a battery model is monitoring and control during battery charging/discharging, as well as estimating key factors of batteries, such as the state of charge, for electric vehicles. However, models based on the electrochemical reactions within the battery are highly complex and difficult to compute using conventional approaches. Radial basis function (RBF) neural networks have been widely used to model complex systems for estimation and control purposes, but the optimization of both the linear and nonlinear parameters in the RBF model remains a key issue. A recently proposed meta-heuristic algorithm named Teaching-Learning-Based Optimization (TLBO) requires no preset algorithm-specific parameters and performs well in nonlinear optimization. In this paper, a novel self-learning TLBO-based RBF model is proposed for modelling electric vehicle batteries. The modelling approach has been applied to two battery testing data sets and compared with some other RBF-based battery models; the training and validation results confirm the efficacy of the proposed method.
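
A bare-bones sketch of the TLBO loop (teacher phase followed by learner phase) is given below. The objective here is a simple stand-in for the RBF battery model's fitting error, and the population size, bounds, and other settings are assumptions rather than the paper's choices.

```python
import numpy as np

def tlbo(objective, bounds, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    f = np.array([objective(x) for x in X])
    for _ in range(iters):
        mean = X.mean(axis=0)
        teacher = X[f.argmin()]
        for i in range(pop):
            # Teacher phase: move towards the current best learner.
            tf = rng.integers(1, 3)                 # teaching factor, 1 or 2
            cand = np.clip(X[i] + rng.random(len(lo)) * (teacher - tf * mean), lo, hi)
            fc = objective(cand)
            if fc < f[i]:
                X[i], f[i] = cand, fc
            # Learner phase: interact with a randomly chosen peer.
            j = rng.integers(pop)
            if j != i:
                d = X[i] - X[j] if f[i] < f[j] else X[j] - X[i]
                cand = np.clip(X[i] + rng.random(len(lo)) * d, lo, hi)
                fc = objective(cand)
                if fc < f[i]:
                    X[i], f[i] = cand, fc
    return X[f.argmin()], f.min()

# Stand-in objective: sum of squares; in the battery application this would be
# the RBF network's prediction error on the measured charge/discharge data.
best, err = tlbo(lambda x: float(np.sum(x ** 2)),
                 bounds=(np.full(4, -5.0), np.full(4, 5.0)))
print(best, err)
```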

Relevance: 90.00%

Abstract:

A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be transformed into a model selection problem in which a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches; however, they may produce only suboptimal models and can become trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations, using the inherent orthogonality properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, based on the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
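
The forward stage of classical orthogonal least squares selection with the error reduction ratio (ERR) can be sketched as follows. The backward refinement and term-exchanging scheme described above are omitted, and the synthetic data are purely illustrative.

```python
import numpy as np

def forward_err_selection(P, y, n_terms):
    """Select n_terms columns of the candidate matrix P by largest ERR."""
    selected, Q = [], []
    sigma = float(y @ y)
    for _ in range(n_terms):
        best_err, best_k, best_q = -1.0, None, None
        for k in range(P.shape[1]):
            if k in selected:
                continue
            q = P[:, k].copy()
            for qj in Q:                              # orthogonalise against chosen terms
                q -= (qj @ P[:, k]) / (qj @ qj) * qj
            if q @ q < 1e-12:
                continue
            err = (q @ y) ** 2 / ((q @ q) * sigma)    # error reduction ratio
            if err > best_err:
                best_err, best_k, best_q = err, k, q
        selected.append(best_k)
        Q.append(best_q)
    return selected

rng = np.random.default_rng(0)
P = rng.standard_normal((200, 30))                    # candidate regressors (e.g. RBF nodes)
y = 2.0 * P[:, 3] - 1.5 * P[:, 17] + 0.1 * rng.standard_normal(200)
print(forward_err_selection(P, y, n_terms=2))         # expected to pick columns 3 and 17
```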