Learning Effective and Interpretable Semantic Models using Non-Negative Sparse Embedding
Date(s) | 01/12/2012
Abstract | In this paper, we introduce an application of matrix factorization to produce corpus-derived, distributional models of semantics that demonstrate cognitive plausibility. We find that word representations learned by Non-Negative Sparse Embedding (NNSE), a variant of matrix factorization, are sparse, effective, and highly interpretable. To the best of our knowledge, this is the first approach which yields semantic representations of words satisfying these three desirable properties. Through extensive experimental evaluations on multiple real-world tasks and datasets, we demonstrate the superiority of semantic models learned by NNSE over other state-of-the-art baselines.
Format | application/pdf
Identifier |
Language(s) | eng
Publisher | Association for Computational Linguistics
Rights | info:eu-repo/semantics/openAccess
Source | Murphy, B., Talukdar, P. P. & Mitchell, T. 2012, Learning Effective and Interpretable Semantic Models using Non-Negative Sparse Embedding. In International Conference on Computational Linguistics (COLING 2012), Mumbai, India. Association for Computational Linguistics, pp. 1933-1949.
Keywords | distributional semantics, interpretability, neuro-semantics, sparse coding, vector-space models, word embeddings
Type | contributionToPeriodical
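
The abstract above describes NNSE as a matrix-factorization variant: a word-by-feature co-occurrence matrix X is factored into sparse, non-negative word codes A and a dense basis D, roughly minimizing the reconstruction error plus an L1 penalty on A, subject to A >= 0. As a rough illustration only (not the authors' implementation), the following Python sketch approximates that objective with scikit-learn's DictionaryLearning; the random data, matrix sizes, and hyperparameters are hypothetical stand-ins.

```python
# A minimal NNSE-style sketch, assuming a word-by-feature co-occurrence
# matrix X (rows = words, columns = corpus context features).
# DictionaryLearning stands in for the paper's own solver; the data and
# hyperparameters below are illustrative, not the authors' settings.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.random((200, 100))        # hypothetical 200 words x 100 context features

nnse = DictionaryLearning(
    n_components=20,              # latent semantic dimensions per word
    alpha=1.0,                    # L1 penalty: drives the word codes to be sparse
    fit_algorithm="cd",           # coordinate descent supports the positivity constraint
    transform_algorithm="lasso_cd",
    transform_alpha=1.0,
    positive_code=True,           # non-negativity on the learned word codes
    max_iter=200,
    random_state=0,
)
A = nnse.fit_transform(X)         # A: sparse, non-negative word embeddings
D = nnse.components_              # D: dense basis (latent dimension x feature)

# Sparsity of the representation and, in the spirit of the paper's
# interpretability claim, the top-loading words for one latent dimension.
print("active fraction:", (A > 0).mean())
print("top words for dim 0:", np.argsort(A[:, 0])[::-1][:5])
```

Interpretability in the paper's sense comes from inspecting each latent dimension's top-loading words, which tend to form coherent semantic categories.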