18 results for RECURRENT MALIGNANT GLIOMA
in CentAUR: Central Archive University of Reading - UK
Abstract:
Boron-containing complexes with the potential to accumulate irreversibly in melanoma cells have been prepared by reaction of amino acids with 9-methoxy-9-borabicyclo[3.3.1]nonane. The ability of each complex to act as a substrate for tyrosinase has been probed by oximetry. Increased uptake of the lead candidate in a tyrosinase-rich cell line, compared with a tyrosinase-absent cell line, is reported, with results correlating well with those for a drug currently approved for BNCT.
Abstract:
This paper illustrates how internal model control of nonlinear processes can be achieved with recurrent neural networks, e.g. fully connected Hopfield networks. Using results developed by Kambhampati et al. (1995), it is shown that once a recurrent network model of a nonlinear system has been produced, a controller can be constructed that consists of the inverse of the network model together with a filter. Thus the network providing control for the nonlinear system requires no further training once it has been trained to model the system. Stability and other issues of importance for nonlinear control systems are also discussed.
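The internal-model-control structure described above can be sketched with a toy scalar example. Everything here (the cubic plant, the exact analytic inverse, the first-order filter) is an illustrative assumption standing in for the Hopfield-network model and network inverse of the paper:

```python
# Toy internal-model-control (IMC) loop for a scalar nonlinear plant.
# Sketch only: plant y = u**3 and its analytic inverse stand in for the
# recurrent-network model and the network inverse discussed above.

def plant(u):
    return u ** 3          # the "true" static nonlinear plant

def model(u):
    return u ** 3          # perfect internal model of the plant

def inverse_model(v):
    # signed cube root: the exact inverse of the model
    return abs(v) ** (1.0 / 3.0) * (1.0 if v >= 0 else -1.0)

def imc_step(r, y_prev, u_prev, alpha=0.5):
    """One IMC update: filter the reference signal corrected by the
    estimated model mismatch, then invert the model to get the input."""
    d_hat = y_prev - model(u_prev)                  # estimated mismatch
    v = r - d_hat                                   # corrected reference
    v_f = alpha * v + (1 - alpha) * model(u_prev)   # first-order filter
    return inverse_model(v_f)

# Drive the output toward the setpoint r = 8 (so u should approach 2).
r, u, y = 8.0, 0.0, 0.0
for _ in range(50):
    u = imc_step(r, y, u)
    y = plant(u)
print(round(y, 3))  # settles at the setpoint: 8.0
```

With a perfect model the mismatch estimate `d_hat` stays at zero and the filtered loop converges geometrically to the setpoint; the filter constant `alpha` trades convergence speed against robustness.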
Abstract:
This paper brings together two areas of research that have received considerable attention in recent years, namely feedback linearization and neural networks. A proposition that guarantees the Input/Output (I/O) linearization of nonlinear control affine systems with Dynamic Recurrent Neural Networks (DRNNs) is formulated and proved. The proposition and the linearization procedure are illustrated with the simulation of a single-link manipulator.
Abstract:
Differential geometry is used to investigate the structure of neural-network-based control systems. The key aspect is relative order—an invariant property of dynamic systems. Finite relative order allows the specification of a minimal architecture for a recurrent network. Any system with finite relative order has a left inverse. It is shown that a recurrent network with finite relative order has a local inverse that is also a recurrent network with the same weights. The results have implications for the use of recurrent networks in the inverse-model-based control of nonlinear systems.
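As a reminder of the invariant involved (standard definition, not taken from the paper): for a single-input single-output control-affine system, the relative order r is the smallest number of times the output must be differentiated before the input appears explicitly. In Lie-derivative notation:

```latex
% SISO control-affine system \dot{x} = f(x) + g(x)u, \quad y = h(x).
% The relative order r is defined by
\begin{aligned}
  L_g L_f^{\,k} h(x)   &= 0, \qquad 0 \le k < r - 1,\\
  L_g L_f^{\,r-1} h(x) &\neq 0,
\end{aligned}
% so that y^{(r)} = L_f^{\,r} h(x) + L_g L_f^{\,r-1} h(x)\,u
% is the first derivative of y in which u appears.
```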
Abstract:
A dynamic recurrent neural network (DRNN), which can be viewed as a generalisation of the Hopfield neural network, is proposed to identify and control a class of control affine systems. In this approach, the identified network is used within a differential geometric control framework to synthesise a state feedback that cancels the nonlinear terms of the plant, yielding a linear plant that can then be controlled using a standard PID controller.
Abstract:
The last decade has seen the re-emergence of artificial neural networks as an alternative to traditional modelling techniques for the control of nonlinear systems. Numerous control schemes have been proposed and shown to work in simulation. However, very few analyses have been made of how these networks work. The authors show that a receding horizon control strategy based on a class of recurrent networks can stabilise nonlinear systems.
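A receding-horizon scheme of this general shape can be sketched in a few lines. This toy replaces both the recurrent-network model and the optimiser with assumptions: a scalar plant ẋ = sin(x) + u, a brute-force search over constant candidate inputs, and a quadratic stage cost:

```python
# Toy receding-horizon (model-predictive) controller for a scalar
# nonlinear system. Illustrative sketch only: a grid search over
# constant inputs replaces the optimisation used in practice.
import numpy as np

def f(x, u, dt=0.1):
    """Discretised plant x_dot = sin(x) + u (unstable at the origin)."""
    return x + dt * (np.sin(x) + u)

def rhc_control(x0, horizon=5, candidates=np.linspace(-3, 3, 61), rho=0.01):
    """Pick the constant input minimising a finite-horizon quadratic cost."""
    best_u, best_cost = 0.0, np.inf
    for u in candidates:
        x, cost = x0, 0.0
        for _ in range(horizon):
            x = f(x, u)
            cost += x ** 2 + rho * u ** 2   # penalise state and effort
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Apply only the first move of each optimised sequence, then re-optimise.
x = 1.0
for _ in range(100):
    x = f(x, rhc_control(x))
# x has been driven close to the unstable equilibrium at the origin
```

The defining feature is that only the first element of each optimised input sequence is applied before the horizon is shifted and the optimisation repeated, which is what gives the strategy its closed-loop, feedback character.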
Abstract:
This work provides a framework for the approximation of a dynamic system of the form ẋ = f(x) + g(x)u by a dynamic recurrent neural network. This extends previous work in which approximate realisation of autonomous dynamic systems was proven. Under certain conditions, the first p output units of an n-dimensional dynamic neural model approximate, to any desired accuracy, a p-dimensional dynamic system with n > p. The neural architecture studied is then successfully applied in a nonlinear multivariable system identification case study.
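A Hopfield-type DRNN commonly used for approximation results of this kind has the form below; the notation (state z, weight matrix W, input matrix B, sigmoid σ, time constant τ) is assumed for illustration, not quoted from the paper:

```latex
% n-dimensional dynamic recurrent network driven by input u:
\dot{z} = -\frac{1}{\tau}\, z + W \sigma(z) + B u,
\qquad \hat{y} = (z_1, \dots, z_p)^{\mathsf{T}},
% with the first p units z_1, \dots, z_p tracking the state of the
% p-dimensional target system \dot{x} = f(x) + g(x)u, \; n > p.
```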
Abstract:
In this paper, we show how a set of recently derived theoretical results for recurrent neural networks can be applied to the production of an internal model control system for a nonlinear plant. The results include determination of the relative order of a recurrent neural network and invertibility of such a network. A closed loop controller is produced without the need to retrain the neural network plant model. Stability of the closed-loop controller is also demonstrated.
Abstract:
Presents a technique for incorporating a priori knowledge from a state space system into a neural network training algorithm. The training algorithm considered is chemotaxis, and the networks being trained are recurrent neural networks. Incorporating the a priori knowledge ensures that the resulting network behaves similarly to the system it models.
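Chemotaxis training, in its basic form, is a random-perturbation search that keeps a step only when it lowers the error, continuing in a successful direction and redrawing a new one on failure. A minimal sketch, with a toy linear model standing in for the recurrent network and all names chosen for illustration:

```python
# Chemotaxis-style training: derivative-free random search that accepts
# only error-reducing weight perturbations. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)

def chemotaxis(loss, w, steps=200, sigma=0.1):
    best = loss(w)
    d = rng.normal(size=w.shape)            # current search direction
    for _ in range(steps):
        trial = w + sigma * d
        trial_loss = loss(trial)
        if trial_loss < best:
            w, best = trial, trial_loss     # keep moving this way
        else:
            d = rng.normal(size=w.shape)    # draw a fresh direction
    return w

# Fit the weights of a tiny linear model y = X @ w to synthetic data
# (a toy stand-in for training a recurrent network).
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
loss = lambda w: np.mean((X @ w - y) ** 2)

w = chemotaxis(loss, np.zeros(3))
```

Because only improving steps are ever accepted, the error is non-increasing by construction, which is also what makes the algorithm a convenient host for a priori constraints: any trial violating the known state-space behaviour can simply be rejected like a worsening step.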
Abstract:
Recurrent neural networks can be used for both the identification and control of nonlinear systems. This paper takes a previously derived set of theoretical results about recurrent neural networks and applies them to the task of providing internal model control for a nonlinear plant. Using the theoretical results, we show how an inverse controller can be produced from a neural network model of the plant, without the need to train an additional network to perform the inverse control.
Abstract:
A dynamic recurrent neural network (DRNN) is used to input/output linearize a control affine system in the globally linearizing control (GLC) structure. The network is trained as part of a closed loop that involves a PI controller; the goal is to use the network, as a dynamic feedback, to cancel the nonlinear terms of the plant. The stability of the configuration is guaranteed if the network and the plant are asymptotically stable and the linearizing input is bounded.
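The cancellation the network is trained to perform corresponds, in the exact differential-geometric setting, to the standard globally linearizing feedback (textbook form, assumed notation):

```latex
% For \dot{x} = f(x) + g(x)u, \; y = h(x) with relative order r,
% the linearizing input
u = \frac{v - L_f^{\,r} h(x)}{L_g L_f^{\,r-1} h(x)}
% reduces the input-output map to y^{(r)} = v, a chain of r
% integrators that the outer PI loop can then regulate.
```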
Abstract:
Two approaches are presented to calculate the weights for a Dynamic Recurrent Neural Network (DRNN) in order to identify the input-output dynamics of a class of nonlinear systems. The number of states of the identified network is constrained to be the same as the number of states of the plant.
Abstract:
This paper uses techniques from control theory in the analysis of trained recurrent neural networks. Differential geometry is used as a framework, which allows the concept of relative order to be applied to neural networks. Any system possessing finite relative order has a left-inverse. Any recurrent network with finite relative order also has an inverse, which is shown to be a recurrent network.