Word Segmentation for Chinese Wikipedia Using N-Gram Mutual Information


Author(s): Tang, Ling-Xiang; Geva, Shlomo; Xu, Yue; Trotman, Andrew
Date(s)

04/12/2009

Abstract

In this paper, we propose an unsupervised segmentation approach, named "n-gram mutual information", or NGMI, which is used to segment Chinese documents into n-character words or phrases, using language statistics drawn from the Chinese Wikipedia corpus. The approach alleviates the tremendous effort required in preparing and maintaining manually segmented Chinese text for training purposes, and in manually maintaining ever-expanding lexicons. Previously, mutual information was used to achieve automated segmentation into 2-character words. NGMI extends this approach to handle longer n-character words. Experiments with heterogeneous documents from the Chinese Wikipedia collection show good results.
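To illustrate the classic mutual-information baseline that NGMI builds on, the sketch below segments text by placing a word boundary wherever the pointwise mutual information of two adjacent characters falls below a threshold. This is a minimal illustration of bigram MI segmentation, not the authors' full NGMI method (which handles longer n-grams and boundary confidence); the function names, the toy threshold, and the Latin-letter toy corpus are all assumptions for demonstration.

```python
import math
from collections import Counter

def train_stats(corpus):
    """Count unigram and adjacent-bigram frequencies from raw text
    (no pre-segmentation needed -- the approach is unsupervised)."""
    uni, bi = Counter(), Counter()
    for text in corpus:
        uni.update(text)
        bi.update(text[i:i + 2] for i in range(len(text) - 1))
    return uni, bi

def mutual_information(pair, uni, bi, n_uni, n_bi):
    """Pointwise MI of two adjacent characters: log2(p(xy) / (p(x) p(y)))."""
    if bi[pair] == 0:
        return float("-inf")  # unseen pair: treat as a certain boundary
    p_xy = bi[pair] / n_bi
    p_x = uni[pair[0]] / n_uni
    p_y = uni[pair[1]] / n_uni
    return math.log2(p_xy / (p_x * p_y))

def segment(text, uni, bi, threshold=1.0):
    """Insert a word boundary wherever adjacent-character MI drops
    below the threshold; characters between boundaries form words."""
    n_uni, n_bi = sum(uni.values()), sum(bi.values())
    words, start = [], 0
    for i in range(len(text) - 1):
        if mutual_information(text[i:i + 2], uni, bi, n_uni, n_bi) < threshold:
            words.append(text[start:i + 1])
            start = i + 1
    words.append(text[start:])
    return words

# Toy usage: "AB" and "CD" co-occur strongly, so they stay together,
# while the never-seen pair "BC" becomes a boundary.
corpus = ["ABABAB", "CDCDCD"]
uni, bi = train_stats(corpus)
print(segment("ABCD", uni, bi))  # → ['AB', 'CD']
```

The bigram-only formulation above can only commit to 2-character cohesion; the paper's NGMI generalizes the statistic to longer n-grams so that multi-character words are scored as units rather than as chains of pairwise associations.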

Format

application/pdf

Identifier

http://eprints.qut.edu.au/29367/

Publisher

School of Information Technologies, University of Sydney

Relation

http://eprints.qut.edu.au/29367/1/c29367.pdf

http://es.csiro.au/adcs2009/proceedings/

Tang, Ling-Xiang, Geva, Shlomo, Xu, Yue, & Trotman, Andrew (2009) Word Segmentation for Chinese Wikipedia Using N-Gram Mutual Information. In Proceedings of the Fourteenth Australasian Document Computing Symposium, School of Information Technologies, University of Sydney, University of New South Wales, Sydney, pp. 82-89.

Rights

Copyright 2009 The authors.

Source

Faculty of Science and Technology; School of Information Technology

Keywords #080107 Natural Language Processing #Chinese word segmentation #mutual information #n-gram mutual information #boundary confidence
Type

Conference Paper