On-line learning with adaptive back-propagation in two-layer networks


Author(s): West, Ansgar H.L.; Saad, David
Date(s)

01/09/1997

Abstract

An adaptive back-propagation algorithm parameterized by an inverse temperature 1/T is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, we analyse these learning algorithms in both the symmetric and the convergence phase, for finite learning rates, in the case of uncorrelated teachers of similar but arbitrary length T. These analyses show that adaptive back-propagation generally results in faster training than gradient descent: it breaks the symmetry between hidden units more efficiently and converges faster to optimal generalization.
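The record itself reproduces no equations, so the following Python sketch is only meant to make the setting concrete: one on-line weight update for a soft committee machine student, assuming the adaptive modification acts through the inverse temperature beta = 1/T rescaling the hidden-unit activations inside the derivative term, so that beta = 1 recovers standard gradient descent. The function names, the erf activation, and this exact placement of beta are illustrative assumptions, not details quoted from the abstract.

import numpy as np
from scipy.special import erf

def g(x):
    # Hidden-unit activation commonly used in these statistical mechanics analyses.
    return erf(x / np.sqrt(2.0))

def g_prime(x):
    # Derivative of g: sqrt(2/pi) * exp(-x^2 / 2).
    return np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x**2)

def adaptive_bp_step(J, xi, zeta, eta, beta):
    # One on-line update of the student weights J (K hidden units x N inputs)
    # on example (xi, zeta); beta = 1/T, with beta = 1 giving plain gradient descent.
    x = J @ xi                                   # hidden-unit activations x_i = J_i . xi
    sigma = g(x).sum()                           # student output of the soft committee machine
    delta = (zeta - sigma) * g_prime(beta * x)   # temperature-modulated error signals
    return J + (eta / xi.shape[0]) * np.outer(delta, xi)

Under this reading, beta > 1 narrows the Gaussian factor in g_prime, concentrating updates on hidden units with small activations; that is one mechanism by which the symmetry between hidden units could be broken faster, consistent with the abstract's claim.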

Format

application/pdf

Identifier

http://eprints.aston.ac.uk/1209/1/NCRG_97_019.pdf

West, Ansgar H.L. and Saad, David (1997). On-line learning with adaptive back-propagation in two-layer networks. Physical Review E, 56 (3), pp. 3426-3445.

Relation

http://eprints.aston.ac.uk/1209/

Type

Article

PeerReviewed