876 results for ADAPTIVE NEURAL NETWORKS
Abstract:
More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Both the stochastic and the deterministic stability criteria for macrostates rely on macro-level contexts, which makes them sensitive to differences between macro-levels.
Abstract:
In this paper we present initial results from using an artificial neural network to predict the onset of Parkinson's disease tremors in a human subject. Data for the network were obtained from implanted deep brain electrodes. A tuned artificial neural network was shown to be able to identify the pattern of tremor onset from these real-time recordings.
Abstract:
In this paper we consider the possibility of using an artificial neural network to accurately identify the onset of Parkinson's disease tremors in human subjects. Data for the network are obtained by means of electrodes implanted deep in the human brain. The results presented were obtained from a practical study (i.e. real, not simulated, data) but should be regarded as initial trials to be discussed further. It can be seen that a tuned artificial neural network can act as an extremely effective predictor in these circumstances.
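Neither of the two tremor-prediction abstracts above specifies the network architecture or the signal preprocessing; purely as an illustrative sketch, a feed-forward classifier over fixed-length windows of the electrode signal could be set up as below. The window length, layer sizes, and the synthetic data are assumptions, not details from the papers.

```python
# Hypothetical sketch: tremor-onset detection from windowed electrode recordings.
# Window length, labels, network size, and data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs = 1000                      # assumed sampling rate (Hz)
win = fs // 2                  # 0.5 s windows of the electrode signal

# Placeholder data: each row is one window; label 1 = tremor onset present.
X = rng.standard_normal((200, win))
y = rng.integers(0, 2, size=200)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```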
Abstract:
Dynamic neural networks (DNNs), also known as recurrent neural networks, are often used for nonlinear system identification. The main contribution of this letter is the introduction of an efficient parameterization of a class of DNNs. Having fewer parameters to adjust simplifies the training problem and leads to more parsimonious models. The parameterization is based on approximation theory dealing with the ability of a class of DNNs to approximate finite trajectories of nonautonomous systems. The use of the proposed parameterization is illustrated through a numerical example, using data from a nonlinear model of a magnetic levitation system.
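The letter's specific parameterization is not given in the abstract; the sketch below shows only a generic discrete-time recurrent model of the kind used for nonlinear system identification. The matrices, dimensions, and excitation signal are assumptions.

```python
# Generic discrete-time dynamic (recurrent) network for system identification.
# The efficient parameterization proposed in the letter is NOT reproduced here;
# matrices A, B, C and their sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_in, n_out = 4, 1, 1
A = 0.5 * rng.standard_normal((n_state, n_state))
B = rng.standard_normal((n_state, n_in))
C = rng.standard_normal((n_out, n_state))

def simulate(u_seq):
    """Run the recurrent model over an input trajectory u_seq (T x n_in)."""
    x = np.zeros(n_state)
    ys = []
    for u in u_seq:
        x = np.tanh(A @ x + B @ u)   # nonlinear state update
        ys.append(C @ x)             # linear read-out
    return np.array(ys)

u = rng.standard_normal((50, n_in))  # placeholder excitation signal
print(simulate(u).shape)
```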
Abstract:
In this paper, we study a class of neural networks with recent-history distributed delays. A sufficient condition is derived for the global exponential periodicity of the proposed neural networks; it has the advantage of assuming neither the differentiability nor the monotonicity of each neuron's activation function, nor the symmetry of the feedback matrix or the delayed feedback matrix. The criterion is shown to be valid by applying it to an illustrative system.
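The abstract does not state the model equations; a typical neural network with recent-history distributed delays, to which periodicity criteria of this kind are usually applied, has the general form (the symbols here are generic, not taken from the paper):

\[
\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t))
+ \sum_{j=1}^{n} b_{ij} \int_{t-\tau}^{t} k_{ij}(t-s)\, f_j(x_j(s))\, ds + I_i(t),
\qquad i = 1, \dots, n,
\]

where $c_i > 0$ are self-decay rates, $a_{ij}$ and $b_{ij}$ are the feedback and delayed-feedback weights, $f_j$ are the activation functions, $k_{ij}$ are delay kernels over the recent history $[t-\tau, t]$, and $I_i(t)$ are periodic external inputs.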
Abstract:
The possibility of using a radial basis function neural network (RBFNN) to accurately recognise and predict the onset of Parkinson's disease tremors in human subjects is discussed in this paper. The data for training the RBFNN are obtained from deep brain electrodes implanted in a Parkinson's disease patient's brain. The effectiveness of an RBFNN is initially demonstrated by a real case study.
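The abstract does not give the network parameters, but the defining computation of a radial basis function network is standard; a minimal sketch with assumed centers, widths, and weights is:

```python
# Minimal radial basis function network: Gaussian basis functions plus a
# linear output layer. Centers, widths, and weights are illustrative values,
# not parameters from the study.
import numpy as np

def rbf_network(x, centers, width, weights):
    """Evaluate an RBFNN with Gaussian units on input vector x."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distance to each center
    phi = np.exp(-d2 / (2.0 * width ** 2))    # Gaussian hidden activations
    return phi @ weights                      # linear combination at the output

rng = np.random.default_rng(2)
centers = rng.standard_normal((10, 3))   # 10 hidden units, 3-dimensional input
weights = rng.standard_normal(10)
print(rbf_network(np.zeros(3), centers, width=1.0, weights=weights))
```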
Abstract:
This article looks at the use of cultured neural networks as the decision-making mechanism of a control system. In this case biological neurons are grown and trained to act as an artificial intelligence engine. Such research has immediate medical implications as well as enormous potential in computing and robotics. An experimental system involving closed-loop control of a mobile robot by a culture of neurons has been successfully created and is described here. This article gives a brief overview of the problem area and ongoing research. Questions are asked as to where this will lead in the future.
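The article describes the closed loop only at a high level; purely as an assumed illustration, the loop (read culture activity from the electrode array, map it to wheel commands, feed sensor readings back as stimulation) might be structured as follows. Every function, channel count, and threshold here is a placeholder, not the actual system.

```python
# Hypothetical structure of the closed loop described in the article:
# electrode-array activity -> motor command; sensor reading -> stimulation.
# All functions and thresholds are placeholders.

def activity_to_motor(firing_rates):
    """Map per-channel firing rates to (left, right) wheel speeds."""
    left = sum(firing_rates[: len(firing_rates) // 2])
    right = sum(firing_rates[len(firing_rates) // 2 :])
    return left, right

def sensor_to_stimulation(sonar_distance, threshold=0.3):
    """Stimulate the culture more strongly as an obstacle gets closer."""
    return max(0.0, threshold - sonar_distance)

# One illustrative iteration of the loop with made-up readings.
rates = [5, 2, 7, 1, 9, 3]        # spikes/s per recording channel (placeholder)
print(activity_to_motor(rates))
print(sensor_to_stimulation(0.1))
```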
Abstract:
The problem of identifying a nonlinear dynamic system is considered, and a two-layer neural network is used for its solution. Systems disturbed by unmeasurable noise are considered, where the disturbance is known to be a random piecewise-polynomial process. Absorption polynomials and nonquadratic loss functions are used to reduce the effect of this disturbance on the estimates of the optimal memory of the neural-network model.
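The abstract names its ingredients (a two-layer network, a nonquadratic loss to suppress the disturbance) without specifying them; as a hedged sketch, a two-layer model evaluated with a Huber-type loss, one common nonquadratic choice, could look like this. The absorption-polynomial preprocessing is not reproduced, and all shapes and the delta parameter are assumptions.

```python
# Illustrative two-layer network and a nonquadratic (Huber-type) loss.
# Not the paper's construction; shapes and delta are assumptions.
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.standard_normal((8, 3)), np.zeros(8)
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)

def two_layer(x):
    """Forward pass: hidden tanh layer followed by a linear output."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def huber(residual, delta=1.0):
    """Quadratic near zero, linear in the tails; dampens large disturbances."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

x, y = rng.standard_normal(3), np.array([0.5])
print(huber(two_layer(x) - y))
```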
Abstract:
This paper brings together two areas of research that have received considerable attention in recent years, namely feedback linearization and neural networks. A proposition that guarantees the input/output (I/O) linearization of nonlinear control-affine systems with dynamic recurrent neural networks (DRNNs) is formulated and proved. The proposition and the linearization procedure are illustrated with the simulation of a single-link manipulator.
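The proposition itself is not reproduced in the abstract; for reference, the standard I/O feedback linearization of a single-input control-affine system $\dot{x} = f(x) + g(x)u$, $y = h(x)$ with relative degree $r$ (a textbook result, not the paper's DRNN-specific construction) uses the control law

\[
u = \frac{v - L_f^{\,r} h(x)}{L_g L_f^{\,r-1} h(x)},
\]

where $L_f h$ denotes the Lie derivative of $h$ along $f$. This renders the input-output map a chain of $r$ integrators, $y^{(r)} = v$; in the DRNN setting, $f$ and $g$ are provided by the trained network's dynamics.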
Abstract:
A multi-layered architecture of self-organizing neural networks is being developed as part of an intelligent alarm processor to analyse a stream of power grid fault messages and provide a suggested diagnosis of the fault location. Feedback concerning the accuracy of the diagnosis is provided by an object-oriented grid simulator which acts as an external supervisor to the learning system. The utilization of artificial neural networks within this environment should result in a powerful generic alarm processor which will not require extensive training by a human expert to produce accurate results.
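The abstract does not specify the self-organizing architecture; a minimal sketch of a single self-organizing-map training step on encoded alarm-message vectors is shown below. The message encoding, the 4x4 grid, and the learning rate are assumptions.

```python
# One training step of a small self-organizing map over encoded fault-alarm
# vectors. Encoding, grid size, and learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(4)
grid_w, grid_h, dim = 4, 4, 12           # 4x4 map, 12-dimensional alarm encoding
weights = rng.standard_normal((grid_w * grid_h, dim))

def som_step(x, weights, lr=0.1, sigma=1.0):
    """Move the best-matching unit and its neighbours toward the input x."""
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))        # winner index
    coords = np.array([[i, j] for i in range(grid_w) for j in range(grid_h)])
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)           # grid distances
    h = np.exp(-d2 / (2 * sigma ** 2))                         # neighbourhood
    return weights + lr * h[:, None] * (x - weights)

alarm_vec = rng.standard_normal(dim)      # placeholder encoded alarm burst
weights = som_step(alarm_vec, weights)
print(weights.shape)
```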