Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks


Author(s): Jaakkola, Tommi S.; Saul, Lawrence K.; Jordan, Michael I.
Date(s)

20/10/2004

09/02/1996

Abstract

Sigmoid-type belief networks, a class of probabilistic neural networks, provide a natural framework for compactly representing probabilistic information in a variety of unsupervised and supervised learning problems. Often the parameters used in these networks need to be learned from examples. Unfortunately, estimating the parameters via exact probabilistic calculations (i.e., the EM algorithm) is intractable even for networks with fairly small numbers of hidden units. We propose to avoid the infeasibility of the E step by bounding likelihoods instead of computing them exactly. We introduce extended and complementary representations for these networks and show that the estimation of the network parameters can be made fast (reduced to quadratic optimization) by performing the estimation in either of the alternative domains. The complementary networks can be used for continuous density estimation as well.
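The idea of replacing an intractable likelihood with a tractable lower bound can be illustrated with a small, hypothetical sketch. The Python snippet below is not the paper's algorithm: the network sizes, the parameter names W, b, c, mu, and the Monte Carlo estimator are illustrative assumptions. It estimates a generic variational lower bound on log p(v) for a tiny sigmoid belief network with one hidden layer, using a factorized Bernoulli approximation q(h); the paper itself derives analytic bounds that reduce the E step to quadratic optimization.

```python
# Illustrative sketch only (assumed shapes and parameter names, not the
# authors' method): Monte Carlo estimate of the variational lower bound
#   E_q[log p(v, h) - log q(h)] <= log p(v)
# for a sigmoid belief network with one hidden layer and a factorized
# Bernoulli approximation q(h) with means mu.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_bernoulli(x, p, eps=1e-12):
    """Elementwise log Bernoulli(x | p)."""
    p = np.clip(p, eps, 1.0 - eps)
    return x * np.log(p) + (1.0 - x) * np.log(1.0 - p)

def lower_bound_estimate(v, W, b, c, mu, n_samples=1000):
    """Average of log p(v, h) - log q(h) over samples h ~ q."""
    total = 0.0
    for _ in range(n_samples):
        h = (rng.random(mu.shape) < mu).astype(float)        # sample hidden units from q
        log_p_h = log_bernoulli(h, sigmoid(b)).sum()          # prior over hidden units
        log_p_v = log_bernoulli(v, sigmoid(W @ h + c)).sum()  # conditional over visible units
        log_q_h = log_bernoulli(h, mu).sum()                  # variational distribution
        total += log_p_h + log_p_v - log_q_h
    return total / n_samples

# Tiny network: 3 hidden units, 4 visible units (arbitrary toy values).
H, V = 3, 4
W = rng.normal(scale=0.5, size=(V, H))
b = rng.normal(scale=0.5, size=H)
c = rng.normal(scale=0.5, size=V)
v = np.array([1.0, 0.0, 1.0, 1.0])
mu = np.full(H, 0.5)                                          # variational parameters

print("estimated lower bound on log p(v):", lower_bound_estimate(v, W, b, c, mu))
```

In this sketch the bound is estimated by sampling; tightening it with respect to mu would play the role of the (otherwise intractable) E step, which is the step the paper makes fast by bounding likelihoods analytically.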

Format

7 p.

197474 bytes

292170 bytes

application/postscript

application/pdf

Identifier

AIM-1560

CBCL-129

http://hdl.handle.net/1721.1/7189

Language(s)

en_US

Relation

AIM-1560

CBCL-129

Keywords #AI #MIT #Artificial Intelligence #Belief networks #Probabilistic networks #EM algorithm #Density estimation #Likelihood bounds