Learning Effective and Interpretable Semantic Models using Non-Negative Sparse Embedding


Author(s): Murphy, Brian; Talukdar, Partha Pratim; Mitchell, Tom
Date(s)

01/12/2012

Abstract

In this paper, we introduce an application of matrix factorization to produce corpus-derived, distributional models of semantics that demonstrate cognitive plausibility. We find that word representations learned by Non-Negative Sparse Embedding (NNSE), a variant of matrix factorization, are sparse, effective, and highly interpretable. To the best of our knowledge, this is the first approach that yields semantic representations of words satisfying all three of these desirable properties. Through extensive experimental evaluations on multiple real-world tasks and datasets, we demonstrate the superiority of semantic models learned by NNSE over other state-of-the-art baselines.
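The abstract characterizes NNSE as a sparse, non-negative variant of matrix factorization. As a minimal sketch (not the authors' implementation), an NNSE-style objective, min over A, D of ||X - AD||^2 + alpha * ||A||_1 with A >= 0 and unit-norm dictionary atoms, can be approximated with scikit-learn's DictionaryLearning; the random input matrix standing in for a word-by-context co-occurrence matrix is an assumption for illustration.

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
# Stand-in for a word-by-context co-occurrence matrix (e.g. PPMI-weighted);
# real corpus statistics would replace this synthetic data.
X = np.abs(rng.randn(100, 50))

# Approximates min ||X - A D||^2 + alpha * ||A||_1  subject to A >= 0,
# with dictionary atoms (rows of D) normalized to unit norm by default.
nnse = DictionaryLearning(
    n_components=20,             # number of latent semantic dimensions
    alpha=1.0,                   # L1 penalty encouraging sparse codes
    fit_algorithm="cd",          # coordinate descent supports positivity
    transform_algorithm="lasso_cd",
    positive_code=True,          # non-negativity constraint on word codes
    random_state=0,
)
A = nnse.fit_transform(X)        # sparse, non-negative word representations
D = nnse.components_             # learned basis directions

print(A.shape, D.shape)                   # (100, 20) (20, 50)
print("code sparsity:", np.mean(A == 0))  # fraction of exactly-zero entries
# Each latent dimension can be inspected by listing the words (rows of A)
# with the highest loading on it, which is what makes such codes interpretable.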

Format

application/pdf

Identifier

http://pure.qub.ac.uk/portal/en/publications/learning-effective-and-interpretable-semantic-models-using-nonnegative-sparse-embedding(774536f9-0731-42ea-9c80-c3ee6dcccad9).html

http://pure.qub.ac.uk/ws/files/5488402/C12_1118.pdf

Language(s)

eng

Publisher

Association for Computational Linguistics

Rights

info:eu-repo/semantics/openAccess

Source

Murphy, B., Talukdar, P. P. & Mitchell, T. 2012, 'Learning Effective and Interpretable Semantic Models using Non-Negative Sparse Embedding', in Proceedings of the International Conference on Computational Linguistics (COLING 2012), Mumbai, India, Association for Computational Linguistics, pp. 1933-1949.

Keywords: distributional semantics, interpretability, neuro-semantics, sparse coding, vector-space models, word embeddings
Type

contributionToPeriodical