785 results for Multi layer perceptron backpropagation neural network
Abstract:
This review attempts to provide an insightful perspective on the role of time within neural network models and the use of neural networks for problems involving time. The most commonly used neural network models are defined and explained, with mention of important technical issues but without great detail. The relationship between recurrent and feedforward networks is emphasised, along with the distinctions in their practical and theoretical abilities. Some practical examples are discussed to illustrate the major issues in applying neural networks to data with various types of temporal structure, and finally some highlights of current research on the more difficult types of problems are presented.
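To make the contrast between feedforward and recurrent handling of temporal structure concrete, here is a minimal NumPy sketch. All layer sizes, weights, and the toy sequence are illustrative assumptions, not taken from the reviewed paper: a feedforward net sees time only through a fixed sliding window of past inputs, while a simple recurrent net carries a hidden state that can, in principle, retain information over longer spans.

```python
# Minimal sketch (illustrative assumptions, not from the reviewed paper):
# two common ways of giving a network access to temporal structure.
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_hid = 50, 1, 8
x = rng.standard_normal((T, n_in))          # a toy input sequence

# Feedforward view: time enters only through a fixed window of past samples.
window = 5
W1 = rng.standard_normal((n_hid, window * n_in)) * 0.3
for t in range(window, T):
    ff_hidden = np.tanh(W1 @ x[t - window:t].ravel())

# Recurrent view: the hidden state is updated at every step, so information
# can in principle persist beyond any fixed window of past inputs.
W_in = rng.standard_normal((n_hid, n_in)) * 0.3
W_rec = rng.standard_normal((n_hid, n_hid)) * 0.3
h = np.zeros(n_hid)
for t in range(T):
    h = np.tanh(W_in @ x[t] + W_rec @ h)
```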
Abstract:
Attractor properties of a popular discrete-time neural network model are illustrated through numerical simulations. The most complex dynamics is found to occur within particular ranges of parameters controlling the symmetry and magnitude of the weight matrix. A small network model is observed to produce fixed points, limit cycles, mode-locking, the Ruelle-Takens route to chaos, and the period-doubling route to chaos. Training algorithms for tuning this dynamical behaviour are discussed. Training can be an easy or difficult task, depending on whether the problem requires the use of temporal information distributed over long time intervals. Such problems require training algorithms which can handle hidden nodes. The most prominent of these algorithms, backpropagation through time, solves the temporal credit assignment problem in a way which can work only if the relevant information is distributed locally in time. The Moving Targets algorithm works for the more general case, but is computationally intensive and prone to local minima.
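As a rough illustration of how the symmetry and magnitude of the weight matrix shape the attractors of such a discrete-time model, the following sketch iterates x_{t+1} = tanh(g * W * x_t) for symmetric and antisymmetric weight matrices at several gains, and measures whether the trajectory settles to a fixed point. The network size, gain values, and the spread measure are assumptions chosen for illustration, not the specific model or parameters studied in the abstract.

```python
# Minimal sketch (illustrative assumptions only): iterate a small
# discrete-time network x_{t+1} = tanh(gain * W @ x_t) and inspect its
# long-run behaviour. Symmetric weights typically settle to fixed points
# or simple cycles; asymmetric weights at large gain can produce richer
# oscillatory or irregular trajectories.
import numpy as np

def simulate(W, gain, steps=2000, discard=1500, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(W.shape[0]) * 0.1
    trace = []
    for t in range(steps):
        x = np.tanh(gain * W @ x)
        if t >= discard:                     # keep only the long-run behaviour
            trace.append(x.copy())
    return np.array(trace)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
W_sym = (A + A.T) / 2                        # symmetric weight matrix
W_asym = (A - A.T) / 2                       # antisymmetric weight matrix

for name, W in [("symmetric", W_sym), ("antisymmetric", W_asym)]:
    for gain in (0.5, 2.0, 6.0):
        tail = simulate(W, gain)
        spread = np.ptp(tail, axis=0).max()  # ~0 indicates a fixed point
        print(f"{name:14s} gain={gain:4.1f}  attractor spread={spread:.4f}")
```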