Factorial Hidden Markov Models


Author(s): Ghahramani, Zoubin; Jordan, Michael I.
Date(s)

20/10/2004

20/10/2004

09/02/1996

Abstract

We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
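The combinatorial intractability mentioned above can be made concrete: a factorial HMM distributes the hidden state over M independent chains with K values each, which is equivalent to a flat HMM over K^M joint states whose transition matrix is the Kronecker product of the per-chain transition matrices. The following toy sketch (illustrative only, not the authors' code; NumPy assumed) shows this blow-up:

```python
import numpy as np

# Toy illustration: M chains with K states each behave like one flat HMM
# over K**M joint states, with the joint transition matrix given by the
# Kronecker product of the per-chain transition matrices. The exact E-step
# must work in this K**M space, which is what makes it intractable; the
# mean field approximation instead keeps a factorized posterior with only
# M*K numbers per time step.

rng = np.random.default_rng(0)

M, K = 3, 2                                # three binary chains
chains = []
for _ in range(M):
    A = rng.random((K, K))
    A /= A.sum(axis=1, keepdims=True)      # row-stochastic per-chain transitions
    chains.append(A)

# Flat transition matrix over the joint state space.
A_joint = chains[0]
for A in chains[1:]:
    A_joint = np.kron(A_joint, A)

print(A_joint.shape)                       # (8, 8), i.e. (K**M, K**M)
print(np.allclose(A_joint.sum(axis=1), 1.0))  # still row-stochastic
```

For K = 10 and M = 10 chains, the joint space already has 10^10 states, while the factorized mean-field representation needs only 100 numbers per time step.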

Format

7 p.

198365 bytes

244196 bytes

application/postscript

application/pdf

Identifier

AIM-1561

CBCL-130

http://hdl.handle.net/1721.1/7188

Language(s)

en_US

Relation

AIM-1561

CBCL-130

Keywords #AI #MIT #Artificial Intelligence #Hidden Markov Models #Neural networks #Time series #Mean field theory #Gibbs sampling #Factorial #Learning algorithms #Machine learning