Improving neural network training solutions using regularisation
Date(s) | 2001
Abstract | This paper describes the application of regularisation to the training of feedforward neural networks as a means of improving the quality of the solutions obtained. The basic principles of regularisation theory are outlined for both linear and nonlinear training and then extended to cover a new hybrid training algorithm for feedforward neural networks recently proposed by the authors. The concept of functional regularisation is also introduced and discussed in relation to multilayer perceptron (MLP) and radial basis function (RBF) networks. The tendency of the hybrid training algorithm and many linear optimisation strategies to generate large-magnitude weight solutions when applied to ill-conditioned neural paradigms is illustrated graphically and explained analytically. While such weight solutions do not generally result in poor fits, it is argued that they could be subject to numerical instability and are therefore undesirable. Using an illustrative example, it is shown that, as well as being beneficial from a generalisation perspective, regularisation also provides a means for controlling the magnitude of solutions. © 2001 Elsevier Science B.V. All rights reserved.
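The abstract's central claim, that regularisation controls the magnitude of weight solutions on ill-conditioned problems, can be sketched concretely. Below is a minimal Python/NumPy sketch assuming a plain Tikhonov (ridge) penalty on the linear output-layer weights of an RBF network; it is not the authors' hybrid algorithm, whose details the abstract does not give, and the toy data, centre spacing, width, and `lam` values are illustrative assumptions.

```python
# Minimal sketch: Tikhonov (ridge) regularisation of the linear output-layer
# weights of an RBF network. On an ill-conditioned design matrix the (nearly)
# unregularised least-squares solution has very large weights; a modest
# penalty lam shrinks them while barely degrading the fit.
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy problem: noisy samples of a smooth target function.
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

# Gaussian RBF design matrix; closely spaced, wide centres make it
# ill-conditioned, mimicking the "ill-conditioned neural paradigms"
# discussed in the abstract.
centres = np.linspace(0.0, 1.0, 25)
width = 0.3
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))
print(f"cond(Phi) = {np.linalg.cond(Phi):.2e}")

def ridge_weights(Phi, y, lam):
    """Solve the regularised normal equations (Phi'Phi + lam*I) w = Phi'y."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)

# lam ~ 0 stands in for the unregularised solution (kept tiny so the solve
# remains well defined); increasing lam controls the weight norm.
for lam in (1e-12, 1e-6, 1e-2):
    w = ridge_weights(Phi, y, lam)
    rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
    print(f"lam = {lam:g}:  ||w|| = {np.linalg.norm(w):12.3e},  RMSE = {rmse:.4f}")
```

Increasing `lam` trades a small rise in training error for a sharp drop in the weight norm, which is exactly the trade-off between fit quality and numerical stability that the abstract argues regularisation mediates.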
Identifier | http://www.scopus.com/inward/record.url?scp=0035100447&partnerID=8YFLogxK
Language(s) | eng
Rights | info:eu-repo/semantics/restrictedAccess
Source | McLoone, S. & Irwin, G. 2001, 'Improving neural network training solutions using regularisation', Neurocomputing, vol. 37, pp. 71-90.
Keywords | #/dk/atira/pure/subjectarea/asjc/1700/1702 #Artificial Intelligence #/dk/atira/pure/subjectarea/asjc/2800/2804 #Cellular and Molecular Neuroscience
Type | article