1000 results for polycentric networks
Abstract:
In this paper, we show how a set of recently derived theoretical results for recurrent neural networks can be applied to the production of an internal model control system for a nonlinear plant. The results include determination of the relative order of a recurrent neural network and invertibility of such a network. A closed loop controller is produced without the need to retrain the neural network plant model. Stability of the closed-loop controller is also demonstrated.
Abstract:
Presents a technique for incorporating a priori knowledge from a state space system into a neural network training algorithm. The training algorithm considered is that of chemotaxis, and the networks being trained are recurrent neural networks. Incorporation of the a priori knowledge ensures that the resultant network has behaviour similar to the system it is modelling.
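Chemotaxis training, as referenced above, searches weight space by random perturbation: a trial perturbation of the weights is kept only if it lowers the error. A minimal sketch, assuming a toy single-layer tanh network (the model, step size, and iteration count are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W, X):
    # Hypothetical toy model: y = tanh(X W)
    return np.tanh(X @ W)

def chemotaxis_train(W, X, Y, steps=2000, sigma=0.05):
    """Chemotaxis: perturb all weights with Gaussian noise and keep
    the perturbation only if the mean squared error decreases."""
    best_err = np.mean((forward(W, X) - Y) ** 2)
    for _ in range(steps):
        trial = W + sigma * rng.standard_normal(W.shape)
        err = np.mean((forward(trial, X) - Y) ** 2)
        if err < best_err:
            W, best_err = trial, err
    return W, best_err
```

Because the update rule only ever accepts improvements, the same loop applies unchanged to recurrent networks, where gradients are harder to compute.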
Abstract:
A neural network was used to map three PID operating regions for a two-input two-output steam generator system. The network was used in stand-alone feedforward operation to control the whole operating range of the process, after being trained on the PID controllers corresponding to each control region. The network inputs are the plant error signals, their integral, their derivative, and a 4-error delay train.
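The input layout described above (error, integral, derivative, plus delayed errors) can be sketched as a feature-vector builder for one error channel; the function name, sampling assumptions, and delay layout are assumptions based on the abstract, not the paper's implementation:

```python
import numpy as np

def pid_features(errors, dt=1.0, n_delays=4):
    """Build the network input vector from an error history:
    current error, its integral, its derivative, and a 4-sample
    delay train of past errors (most recent first)."""
    e = np.asarray(errors, dtype=float)
    integral = np.sum(e) * dt
    derivative = (e[-1] - e[-2]) / dt if len(e) > 1 else 0.0
    if len(e) > n_delays:
        delays = e[-1 - n_delays:-1][::-1]
    else:
        delays = np.zeros(n_delays)
    return np.concatenate(([e[-1]], [integral], [derivative], delays))
```

For the two-input two-output plant in the abstract, one such vector per error channel would be concatenated to form the full network input.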
Abstract:
Recurrent neural networks can be used for both the identification and control of nonlinear systems. This paper takes a previously derived set of theoretical results about recurrent neural networks and applies them to the task of providing internal model control for a nonlinear plant. Using the theoretical results, we show how an inverse controller can be produced from a neural network model of the plant, without the need to train an additional network to perform the inverse control.
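The internal model control structure referred to above runs the plant model in parallel with the plant and feeds the plant-model mismatch back to an inverse-model controller. A minimal scalar-gain sketch of that loop (the gains and step count are hypothetical; the papers use neural network models rather than static gains):

```python
def simulate_imc(plant_gain=2.0, model_gain=2.0, r=1.0, steps=5):
    """Internal model control, scalar sketch: the controller is the
    model inverse, and the feedback signal is the mismatch between
    plant output and internal model output."""
    d = 0.0                       # plant-model mismatch estimate
    y = 0.0
    for _ in range(steps):
        u = (r - d) / model_gain  # inverse-model controller
        y = plant_gain * u        # plant output
        y_m = model_gain * u      # internal model output
        d = y - y_m               # feed the mismatch back
    return y
```

With a perfect model the mismatch is zero and the output reaches the setpoint immediately; with model error, the fed-back mismatch drives the output toward the setpoint over successive steps.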
Abstract:
This paper presents the initial research carried out into a new neural network called the multilayer radial basis function network (MRBF). The network extends the radial basis function (RBF) network in a similar way to that in which the multilayer perceptron extends the perceptron. It is hoped that connecting RBFs in a layered fashion yields an increase in ability equivalent to that gained by using MLPs instead of single perceptrons. The results of a practical comparison between individual RBFs and MRBFs are also given.
Abstract:
A dynamic recurrent neural network (DRNN) is used to input/output linearize a control affine system in the globally linearizing control (GLC) structure. The network is trained as part of a closed loop that involves a PI controller; the goal is to use the network, as dynamic feedback, to cancel the nonlinear terms of the plant. The stability of the configuration is guaranteed if the network and the plant are asymptotically stable and the linearizing input is bounded.
Abstract:
The main limitation of linearization theory that prevents its application in practical problems is the need for an exact knowledge of the plant. This requirement is eliminated and it is shown that a multilayer network can synthesise the state feedback coefficients that linearize a nonlinear control affine plant. The stability of the linearizing closed loop can be guaranteed if the autonomous plant is asymptotically stable and the state feedback is bounded.
Abstract:
This paper uses techniques from control theory in the analysis of trained recurrent neural networks. Differential geometry is used as a framework, which allows the concept of relative order to be applied to neural networks. Any system possessing finite relative order has a left-inverse. Any recurrent network with finite relative order also has an inverse, which is shown to be a recurrent network.
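Relative order can be illustrated symbolically: for a control affine system ẋ = f(x) + g(x)u with output y = h(x), it is the smallest r for which the input appears in the r-th output derivative, i.e. L_g L_f^(r-1) h ≠ 0. A small check on a hypothetical pendulum-like system (this example system is not from the paper):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

# Illustrative control-affine system: xdot = f(x) + g(x) u, y = h(x)
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1

def lie(field, scalar):
    """Lie derivative of a scalar function along a vector field."""
    return (sp.Matrix([scalar]).jacobian(x) * field)[0]

Lgh = lie(g, h)          # 0: input absent from the first derivative
Lfh = lie(f, h)          # x2
LgLfh = lie(g, Lfh)      # 1, nonzero, so the relative order is 2
```

Since L_g L_f h is nonzero everywhere, this system has finite relative order and hence, by the result quoted above, a left-inverse.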