Learning with regularizers in multilayer neural networks


Author(s): Saad, David; Rattray, Magnus
Date(s)

01/02/1998

Abstract

We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network with an arbitrary number of hidden units, whose outputs may be corrupted by Gaussian noise. We examine the effect of weight-decay regularization on the dynamical evolution of the order parameters and the generalization error in various phases of the learning process, in both noiseless and noisy scenarios.
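The scenario described in the abstract can be illustrated by a direct simulation: a student network trained on-line, one fresh example per step, on labels produced by a fixed teacher with additive Gaussian output noise, with an L2 weight-decay term added to the gradient update. The sketch below assumes an erf-type hidden-unit activation and soft-committee-style architectures (both common in this line of work but not stated in the abstract); all sizes, constants, and names are illustrative, and the paper's actual analysis proceeds via order-parameter dynamics rather than simulation.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

# Illustrative sizes and constants (not taken from the paper)
N, K, M = 500, 3, 3                 # input dimension, student / teacher hidden units
eta, gamma, sigma = 0.5, 0.1, 0.1   # learning rate, weight-decay strength, output-noise std


def g(x):
    """Hidden-unit activation (assumed erf-type)."""
    return erf(x / np.sqrt(2.0))


def dg(x):
    """Derivative of g."""
    return np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x * x)


B = rng.normal(size=(M, N)) / np.sqrt(N)   # fixed teacher weights
J = rng.normal(size=(K, N)) / np.sqrt(N)   # student weights, random initial conditions

for step in range(50 * N):                 # on-line learning: one fresh example per step
    xi = rng.normal(size=N)                            # randomly drawn input vector
    label = g(B @ xi).sum() + sigma * rng.normal()     # teacher output corrupted by Gaussian noise
    h = J @ xi                                         # student hidden-unit local fields
    delta = label - g(h).sum()                         # output error on this example
    # Gradient step on the squared error plus a weight-decay (L2) regularizer
    J += (eta / N) * (delta * np.outer(dg(h), xi) - gamma * J)
```

Setting gamma to zero recovers plain on-line gradient descent; varying it shows how weight decay alters the learning dynamics that the paper analyses through the order parameters and the generalization error.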

Format

application/pdf

Identifier

http://eprints.aston.ac.uk/1218/1/NCRG_98_001.pdf

Saad, David and Rattray, Magnus (1998). Learning with regularizers in multilayer neural networks. Physical Review E, 57 (2), pp. 2170-2176.

Relation

http://eprints.aston.ac.uk/1218/

Type

Article

PeerReviewed