Learning from Incomplete Data


Author(s): Ghahramani, Zoubin; Jordan, Michael I.
Date(s)

20/10/2004

24/01/1995

Abstract

Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
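The abstract's core idea, fitting a mixture model by EM where the E-step both assigns responsibilities and fills in expected values for missing features, can be illustrated with a minimal sketch. This is not the authors' code: it assumes diagonal-covariance Gaussian components (so the conditional mean of a missing feature given a component is simply that component's mean), marks missing entries with `np.nan`, and the function name `em_mixture_missing` is made up for this example.

```python
import numpy as np

def em_mixture_missing(X, K, n_iter=50, seed=0):
    """Sketch of EM for a diagonal Gaussian mixture with missing features.

    X : (N, D) array with np.nan marking missing entries.
    K : number of mixture components.
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    obs = ~np.isnan(X)                      # observation mask
    Xz = np.nan_to_num(X)                   # zeros where missing (masked out below)
    mu = rng.normal(size=(K, D))            # component means
    var = np.ones((K, D))                   # diagonal variances
    pi = np.full(K, 1.0 / K)                # mixing weights

    for _ in range(n_iter):
        # E-step: responsibilities use only each point's *observed* dimensions
        logr = np.zeros((N, K))
        for k in range(K):
            diff2 = np.where(obs, (Xz - mu[k]) ** 2, 0.0)
            logp = -0.5 * np.sum(obs * np.log(2 * np.pi * var[k])
                                 + diff2 / var[k], axis=1)
            logr[:, k] = np.log(pi[k]) + logp
        logr -= logr.max(axis=1, keepdims=True)   # stabilize before exp
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: missing entries are replaced by their conditional
        # expectation under component k (here just mu[k], by diagonality)
        Nk = r.sum(axis=0)
        pi = Nk / N
        for k in range(K):
            Xk = np.where(obs, Xz, mu[k])          # E[x | observed part, k]
            mu[k] = (r[:, k] @ Xk) / Nk[k]
            # missing dims also contribute their conditional variance
            dev2 = (Xk - mu[k]) ** 2 + np.where(obs, 0.0, var[k])
            var[k] = (r[:, k] @ dev2) / Nk[k] + 1e-6
    return pi, mu, var, r
```

The two appeals to EM mentioned in the abstract appear here together: the responsibilities `r` handle the mixture-component estimation, while the conditional expectations `Xk` and the extra variance term handle the missing data within the same iteration.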

Format

11 p.

388268 bytes

515095 bytes

application/postscript

application/pdf

Identifier

AIM-1509

CBCL-108

http://hdl.handle.net/1721.1/7202

Language(s)

en_US

Relation

AIM-1509

CBCL-108

Keywords #AI #MIT #Artificial Intelligence #missing data #mixture models #statistical learning #EM algorithm #maximum likelihood #neural networks