94 results for Recurrent Neural Networks

in CentAUR: Central Archive, University of Reading - UK


Relevance: 100.00%

Abstract:

This paper brings together two areas of research that have received considerable attention in recent years, namely feedback linearization and neural networks. A proposition that guarantees the Input/Output (I/O) linearization of nonlinear control affine systems with Dynamic Recurrent Neural Networks (DRNNs) is formulated and proved. The proposition and the linearization procedure are illustrated with the simulation of a single-link manipulator.
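The DRNN dynamics underlying this kind of result can be made concrete with a short numerical sketch. The code below assumes the standard Hopfield-type form ẋ = −αx + W·tanh(x) + B·u(t) and Euler integration; the paper's exact network model may differ, and the weights here are purely illustrative.

```python
import numpy as np

def simulate_drnn(W, B, u, x0, alpha=1.0, dt=0.01, steps=500):
    """Euler-integrate a Hopfield-type DRNN: x' = -alpha*x + W*tanh(x) + B*u(t).
    (Assumed standard form; the paper's exact model may differ.)"""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(steps):
        xdot = -alpha * x + W @ np.tanh(x) + B @ u(k * dt)
        x = x + dt * xdot
        traj.append(x.copy())
    return np.array(traj)

# Illustrative two-state network driven by a constant input.
W = np.array([[0.0, -0.5], [0.5, 0.0]])
B = np.array([[1.0], [0.0]])
traj = simulate_drnn(W, B, u=lambda t: np.array([1.0]), x0=[0.0, 0.0])
```

With the leak term −αx dominating the bounded tanh coupling, the state trajectory stays bounded, which is the property linearization schemes of this kind rely on.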

Relevance: 100.00%

Abstract:

This work provides a framework for the approximation of a dynamic system of the form ẋ = f(x) + g(x)u by a dynamic recurrent neural network. This extends previous work in which the approximate realisation of autonomous dynamic systems was proven. Given certain conditions, the first p output units of a dynamic n-dimensional neural model approximate, to a desired degree of accuracy, a p-dimensional dynamic system with n > p. The neural architecture studied is then successfully implemented in a nonlinear multivariable system identification case study.
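The control-affine form ẋ = f(x) + g(x)u can be illustrated with a short Euler simulation. The damped pendulum below is a hypothetical stand-in for the plant (not from the paper), with the first p = 1 of the n = 2 states read out as the output, mirroring the n > p setting described above.

```python
import numpy as np

def simulate_affine(f, g, u, x0, dt=0.001, steps=2000):
    """Euler simulation of a control-affine system x' = f(x) + g(x)*u(t)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(steps):
        x = x + dt * (f(x) + g(x) * u(k * dt))
        traj.append(x.copy())
    return np.array(traj)

# Damped pendulum as an illustrative control-affine plant:
# f(x) = [x2, -sin(x1) - 0.5*x2], g(x) = [0, 1].
f = lambda x: np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])
g = lambda x: np.array([0.0, 1.0])
traj = simulate_affine(f, g, u=lambda t: 0.2, x0=[0.1, 0.0])

p = 1                  # output dimension, with p < n = 2
y = traj[:, :p]        # the p-dimensional trajectory a DRNN would approximate
```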

Relevance: 100.00%

Abstract:

In this paper, we show how a set of recently derived theoretical results for recurrent neural networks can be applied to the production of an internal model control system for a nonlinear plant. The results include determination of the relative order of a recurrent neural network and the invertibility of such a network. A closed-loop controller is produced without the need to retrain the neural network plant model. Stability of the closed-loop controller is also demonstrated.

Relevance: 100.00%

Abstract:

Presents a technique for incorporating a priori knowledge from a state-space system into a neural network training algorithm. The training algorithm considered is chemotaxis, and the networks being trained are recurrent neural networks. Incorporating the a priori knowledge ensures that the resulting network behaves similarly to the system it is modelling.
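Chemotaxis training itself is a simple derivative-free search: perturb all weights along a random direction, keep the move (and reuse the direction) while the error drops, and draw a fresh direction otherwise. A minimal sketch on a toy curve-fitting problem follows; the two-parameter model and target data are illustrative, not from the paper.

```python
import numpy as np

def chemotaxis_train(loss, w0, sigma=0.1, iters=2000, seed=0):
    """Chemotaxis: step the weight vector along a random direction; keep the
    step (and the direction) on improvement, otherwise pick a new direction."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    best = loss(w)
    direction = rng.normal(size=w.shape)
    for _ in range(iters):
        cand = w + sigma * direction
        cur = loss(cand)
        if cur < best:
            w, best = cand, cur                    # successful move: keep going
        else:
            direction = rng.normal(size=w.shape)   # failed move: new direction
    return w, best

# Toy target: fit y = w0*tanh(w1*x) to samples of 2*tanh(0.5*x).
xs = np.linspace(-2, 2, 21)
ys = 2.0 * np.tanh(0.5 * xs)
loss = lambda w: float(np.mean((w[0] * np.tanh(w[1] * xs) - ys) ** 2))
w, final = chemotaxis_train(loss, w0=[0.5, 0.5])
```

Because the algorithm only ever accepts improving steps, it needs no gradients, which is what makes it easy to combine with the a priori structural constraints described above.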

Relevance: 100.00%

Abstract:

Recurrent neural networks can be used for both the identification and control of nonlinear systems. This paper takes a previously derived set of theoretical results about recurrent neural networks and applies them to the task of providing internal model control for a nonlinear plant. Using the theoretical results, we show how an inverse controller can be produced from a neural network model of the plant, without the need to train an additional network to perform the inverse control.

Relevance: 100.00%

Abstract:

A dynamic recurrent neural network (DRNN) is used to input/output linearize a control affine system in the globally linearizing control (GLC) structure. The network is trained as part of a closed loop that involves a PI controller; the goal is to use the network, as a dynamic feedback, to cancel the nonlinear terms of the plant. The stability of the configuration is guaranteed if the network and the plant are asymptotically stable and the linearizing input is bounded.
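In the GLC structure, the linearizing feedback is meant to make the plant look linear to the outer loop. Assuming the ideal case where the linearized dynamics reduce to a pure integrator (an idealization for illustration; the DRNN's job is to make the real plant behave this way), the surrounding PI loop can be sketched as:

```python
import numpy as np

def pi_control(kp, ki, ref, dt=0.01, steps=1000):
    """PI loop around an ideally linearized plant (here, a pure integrator
    y' = v). Illustrative stand-in for the outer loop of the GLC structure."""
    y, integ = 0.0, 0.0
    ys = []
    for _ in range(steps):
        e = ref - y
        integ += e * dt
        v = kp * e + ki * integ   # PI control law
        y += v * dt               # integrator plant
        ys.append(y)
    return np.array(ys)

# Gains chosen so the closed loop is critically damped: s^2 + 2s + 1.
ys = pi_control(kp=2.0, ki=1.0, ref=1.0)
```

With these gains the output settles at the reference, which is the behaviour the outer PI loop can deliver only as long as the DRNN's cancellation of the nonlinear terms holds.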

Relevance: 100.00%

Abstract:

Dynamic neural networks (DNNs), which are also known as recurrent neural networks, are often used for nonlinear system identification. The main contribution of this letter is the introduction of an efficient parameterization of a class of DNNs. Having to adjust fewer parameters simplifies the training problem and leads to more parsimonious models. The parameterization is based on approximation theory dealing with the ability of a class of DNNs to approximate finite trajectories of nonautonomous systems. The use of the proposed parameterization is illustrated through a numerical example, using data from a nonlinear model of a magnetic levitation system.

Relevance: 100.00%

Abstract:

This paper illustrates how internal model control of nonlinear processes can be achieved with recurrent neural networks, e.g. fully connected Hopfield networks. It is shown, using results developed by Kambhampati et al. (1995), that once a recurrent network model of a nonlinear system has been produced, a controller can be constructed that consists of the inverse of the network model and a filter. Thus, the network providing control for the nonlinear system requires no training beyond that needed to model the nonlinear system. Stability and other issues of importance for nonlinear control systems are also discussed.
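The internal model control structure described here, a plant model run in parallel with the plant plus a model inverse cascaded with a filter, can be sketched for the linear first-order special case. The plant, model, and filter below are illustrative (the paper uses recurrent network models and their inverses in these roles):

```python
import numpy as np

def imc_step_response(a=0.9, b=0.5, lam=0.5, ref=1.0, steps=200):
    """Discrete IMC for a first-order plant y[k+1] = a*y[k] + b*u[k].
    The internal model matches the plant exactly; the controller is the
    exact model inverse cascaded with a first-order filter (parameter lam)."""
    y = ym = f = 0.0
    ys = []
    for _ in range(steps):
        e = ref - (y - ym)            # feed back only the model mismatch
        f = lam * f + (1 - lam) * e   # robustness filter
        u = (f - a * ym) / b          # model inverse: pick u so ym tracks f
        y = a * y + b * u             # plant
        ym = a * ym + b * u           # internal model
        ys.append(y)
    return np.array(ys)

ys = imc_step_response()
```

With a perfect model the mismatch signal is zero and the loop reduces to the filtered inverse driving the plant, which is why no additional controller training is needed once the model exists.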

Relevance: 100.00%

Abstract:

Two approaches are presented to calculate the weights for a Dynamic Recurrent Neural Network (DRNN) in order to identify the input-output dynamics of a class of nonlinear systems. The number of states of the identified network is constrained to be the same as the number of states of the plant.

Relevance: 100.00%

Abstract:

This paper uses techniques from control theory in the analysis of trained recurrent neural networks. Differential geometry is used as a framework, which allows the concept of relative order to be applied to neural networks. Any system possessing finite relative order has a left-inverse. Any recurrent network with finite relative order also has an inverse, which is shown to be a recurrent network.
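For the linear special case x' = Ax + Bu, y = Cx, relative order has a particularly simple characterization: it is the smallest k with C·A^(k−1)·B ≠ 0, i.e. the number of times the output must be differentiated before the input appears. A sketch of that computation (the linear analogue of the Lie-derivative definition used for nonlinear systems and recurrent networks):

```python
import numpy as np

def relative_order(A, B, C, tol=1e-12):
    """Relative order of the linear system x' = Ax + Bu, y = Cx:
    the smallest k with C A^(k-1) B != 0, or None if no finite order exists."""
    n = A.shape[0]
    M = np.array(B, dtype=float)
    for k in range(1, n + 1):
        if np.any(np.abs(C @ M) > tol):
            return k
        M = A @ M                      # advance to C A^k B for the next test
    return None

# Double integrator: y must be differentiated twice before u appears.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
r = relative_order(A, B, C)
```

A finite relative order is exactly the condition under which a left-inverse exists, which is the property the paper exploits for recurrent networks.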

Relevance: 100.00%

Abstract:

The existence of endgame databases challenges us to extract higher-grade information and knowledge from their basic data content. Chess players, for example, would like simple and usable endgame theories, if such a holy grail exists; endgame experts would like to provide such insights and be inspired by computers to do so. Here, we investigate the use of artificial neural networks (NNs) to mine these databases and we report on a first use of NNs on KPK. The results encourage us to suggest further work on chess applications of neural networks and other data-mining techniques.

Relevance: 100.00%

Abstract:

The human electroencephalogram (EEG) is globally characterized by a 1/f power spectrum superimposed with certain peaks, whereby the "alpha peak" in a frequency range of 8-14 Hz is the most prominent one for relaxed states of wakefulness. We present simulations of a minimal dynamical network model of leaky integrator neurons attached to the nodes of an evolving directed and weighted random graph (an Erdos-Renyi graph). We derive a model of the dendritic field potential (DFP) for the neurons leading to a simulated EEG that describes the global activity of the network. Depending on the network size, we find an oscillatory transition of the simulated EEG when the network reaches a critical connectivity. This transition, indicated by a suitably defined order parameter, is reflected by a sudden change of the network's topology when super-cycles are formed from merging isolated loops. After the oscillatory transition, the power spectra of simulated EEG time series exhibit a 1/f continuum superimposed with certain peaks. (c) 2007 Elsevier B.V. All rights reserved.
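A stripped-down version of such a simulation — leaky integrator units coupled through a directed Erdős–Rényi graph, with the simulated EEG taken as the mean unit activity — can be sketched as below. This is a simplification of the paper's dendritic field potential model; the network size, connection probability, and noise level are illustrative.

```python
import numpy as np

def simulate_network_eeg(n=50, p=0.1, dt=0.1, steps=2000, seed=1):
    """Leaky integrator units on a directed Erdos-Renyi graph; the simulated
    'EEG' is the mean activity (a simplification of the paper's DFP model)."""
    rng = np.random.default_rng(seed)
    # Weighted adjacency matrix: each edge present with probability p.
    W = (rng.random((n, n)) < p) * rng.normal(0.0, 1.0 / np.sqrt(n * p), (n, n))
    np.fill_diagonal(W, 0.0)
    x = rng.normal(0.0, 0.1, n)
    eeg = np.empty(steps)
    for t in range(steps):
        x = x + dt * (-x + W @ np.tanh(x) + rng.normal(0.0, 0.05, n))
        eeg[t] = x.mean()
    return eeg

eeg = simulate_network_eeg()
# Power spectrum of the simulated EEG (where the paper inspects the
# 1/f continuum and its superimposed peaks).
freqs = np.fft.rfftfreq(len(eeg), d=0.1)
power = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
```

Sweeping the connection probability p in such a sketch is the analogue of the paper's growing-connectivity experiment, where the oscillatory transition appears at a critical connectivity.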

Relevance: 100.00%

Abstract:

More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Stochastic and deterministic stability criteria for macrostates rely on macro-level contexts, which make them sensitive to differences between different macro-levels.