Bayesian invariant measurements of generalisation for discrete distributions


Author(s): Zhu, Huaiyu; Rohwer, Richard
Date(s)

31/08/1995

Abstract

Neural network learning rules can be viewed as statistical estimators. They should be studied within a Bayesian framework even when they are not Bayesian estimators. Generalisation should be measured by the divergence between the true distribution and the estimated distribution. Information divergences are invariant measurements of the divergence between two distributions. The posterior average information divergence is used to measure the generalisation ability of a network. The optimal estimators for multinomial distributions with Dirichlet priors are studied in detail. This confirms that the definition is compatible with intuition. The results also show that many commonly used methods can be placed within this unified framework by assuming special priors and special divergences.
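The Dirichlet-multinomial setting described in the abstract can be illustrated concretely. The following is a minimal Python sketch, not code from the report: it shows the standard posterior-mean estimator for a multinomial distribution under a Dirichlet prior, with generalisation scored by the Kullback-Leibler divergence (one member of the family of information divergences). All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def posterior_mean_estimate(counts, alpha):
    """Posterior mean of a multinomial under a Dirichlet(alpha) prior.

    After observing counts n, the posterior is Dirichlet(alpha + n),
    whose mean is (alpha + n) / sum(alpha + n).
    """
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts + alpha).sum()

def kl_divergence(p, q):
    """KL(p || q), an information divergence between two distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute zero
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Illustrative example: a true distribution, a sample of counts from it,
# and a uniform Dirichlet(1, 1, 1) prior.
true_p = np.array([0.5, 0.3, 0.2])
rng = np.random.default_rng(0)
counts = rng.multinomial(30, true_p)
estimate = posterior_mean_estimate(counts, alpha=np.ones(3))
print(kl_divergence(true_p, estimate))
```

Varying `alpha` recovers familiar special cases: `alpha = 0` gives the maximum-likelihood (relative-frequency) estimator, while `alpha = 1` gives Laplace smoothing, consistent with the abstract's claim that common methods correspond to particular prior choices.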

Format

application/pdf

Identifier

http://eprints.aston.ac.uk/505/1/NCRG_95_003.pdf

Zhu, Huaiyu and Rohwer, Richard (1995). Bayesian invariant measurements of generalisation for discrete distributions. Technical Report. Aston University, Birmingham, UK. (Unpublished)

Publisher

Aston University

Relation

http://eprints.aston.ac.uk/505/

Type

Monograph

NonPeerReviewed