7 results for language model

in Chinese Academy of Sciences Institutional Repositories Grid Portal


Relevance:

100.00%

Publisher:

Abstract:

ACM SIGIR; ACM SIGWEB

Relevance:

70.00%

Publisher:

Abstract:

Abstract. Latent Dirichlet Allocation (LDA) is a document-level language model. In general, LDA employs a symmetric Dirichlet distribution as the prior of the topic-word distributions to implement model smoothing. In this paper, we propose a data-driven smoothing strategy in which probability mass is allocated from smoothing data to latent variables by the intrinsic inference procedure of LDA. In this way, the arbitrariness of choosing the latent variables' priors for the multi-level graphical model is overcome. Following this data-driven strategy, two concrete methods, Laplacian smoothing and Jelinek-Mercer smoothing, are applied to the LDA model. Evaluations on different text categorization collections show that data-driven smoothing can significantly improve performance on both balanced and unbalanced corpora.
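To make the Laplacian smoothing the abstract mentions concrete, here is a minimal sketch of additive smoothing applied to a topic-word count matrix. This is an illustration of the general technique only, not the paper's data-driven variant; the function name, matrix, and pseudo-count value are assumptions for the example.

```python
# Sketch: additive (Laplace) smoothing of topic-word counts.
# Adding a pseudo-count alpha to every cell ensures no word has
# zero probability under any topic.

def laplace_smooth(topic_word_counts, alpha=1.0):
    """Convert raw topic-word counts into smoothed probabilities.

    topic_word_counts: list of rows, counts[k][w] = count of word w in topic k
    alpha: pseudo-count added to every (topic, word) cell
    """
    smoothed = []
    for counts in topic_word_counts:
        total = sum(counts) + alpha * len(counts)
        smoothed.append([(c + alpha) / total for c in counts])
    return smoothed

# Toy example: two topics over a three-word vocabulary.
probs = laplace_smooth([[4, 0, 1], [0, 2, 2]], alpha=1.0)
# Each row sums to 1, and the unseen words get small nonzero mass.
```

The paper's contribution, per the abstract, is to let LDA's own inference redistribute this smoothing mass rather than fixing a symmetric prior by hand.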

Relevance:

60.00%

Publisher:

Abstract:

Tibetan language models are a fundamental and core technology of Tibetan information processing. Researching and developing a Tibetan statistical language model with strong descriptive power for the Tibetan language has important practical significance and value for all application areas of Tibetan information processing, such as machine translation, Tibetan speech recognition, Tibetan input methods, Tibetan spelling correction, and Tibetan information retrieval. Building a Tibetan language model is key foundational work for Tibetan information processing and a necessary step toward Tibetan informatization.

This thesis first studies automatic Tibetan word segmentation and implements a maximum-matching segmentation scheme based on Tibetan case particles. It then studies statistical language model construction, data smoothing, and related techniques, and implements a Tibetan statistical language model system with three main modules: word frequency counting, model training, and model evaluation. To address the data sparsity problem, several smoothing methods are implemented, including Witten-Bell smoothing, absolute discounting, Kneser-Ney smoothing, and modified Kneser-Ney smoothing.

For the experiments, a Tibetan corpus of a certain scale is collected, organized, and preprocessed; the segmentation program then segments the Tibetan text, which is split into a training corpus and a test corpus. A Tibetan statistical language model is trained on the training corpus using the various smoothing methods, evaluated against the test corpus, and the strengths and weaknesses of the different smoothing methods are compared.
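The training-and-smoothing pipeline described above can be sketched with a tiny bigram model using linear interpolation, the simplest member of the smoothing families the thesis evaluates. This is a generic illustration, not the thesis's system; the toy corpus and interpolation weight are assumptions.

```python
# Sketch: bigram language model with linear-interpolation smoothing.
# P(w | w_prev) = lam * P_ML(w | w_prev) + (1 - lam) * P_ML(w),
# so unseen bigrams still receive probability mass from the unigram model.
from collections import Counter

def train_bigram(tokens):
    """Count unigrams and bigrams from a token sequence."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def interp_prob(w_prev, w, unigrams, bigrams, total, lam=0.7):
    """Interpolated bigram probability of w following w_prev."""
    p_uni = unigrams[w] / total
    p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
    return lam * p_bi + (1 - lam) * p_uni

# Toy corpus standing in for segmented Tibetan text.
tokens = "a b a b a c".split()
unigrams, bigrams = train_bigram(tokens)
p = interp_prob("a", "b", unigrams, bigrams, total=len(tokens))
```

Witten-Bell and Kneser-Ney refine this idea by choosing the interpolation weights from the counts themselves rather than fixing a single lambda, which is why the thesis compares them on held-out test data.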

Relevance:

60.00%

Publisher: