944 results for Statistical language models
Abstract:
Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account cases where a term is explicitly negated. Medical data, by its nature, contains a high frequency of negated terms - e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this is likely to be confined to domains such as medical data where negation is prevalent.
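As a rough illustration of the kind of negation handling the abstract above motivates, the following sketch marks terms that fall inside the scope of a negation trigger before indexing. The trigger list, window size, and NOT_ prefix are illustrative assumptions in the spirit of window-based negation detectors such as NegEx, not the authors' implementation (Python):

import re

# Illustrative trigger list and scope window; real detectors such as NegEx
# use richer trigger sets and scope-termination rules.
NEGATION_TRIGGERS = {"no", "not", "without", "denies", "denied"}
WINDOW = 5  # tokens after a trigger treated as negated

def mark_negations(text):
    """Prefix tokens inside a negation scope with NOT_ so that negated
    occurrences can be indexed separately from affirmative ones."""
    tokens = re.findall(r"[a-z]+", text.lower())
    out, scope = [], 0
    for tok in tokens:
        if tok in NEGATION_TRIGGERS:
            scope = WINDOW
            out.append(tok)
        elif scope > 0:
            out.append("NOT_" + tok)
            scope -= 1
        else:
            out.append(tok)
    return out

print(mark_negations("review of systems showed no chest pain or shortness of breath"))
# ['review', 'of', 'systems', 'showed', 'no', 'NOT_chest', 'NOT_pain', ...]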
Abstract:
Performance comparisons between file signatures and inverted files for text retrieval have previously shown several significant shortcomings of file signatures relative to inverted files. The inverted file approach underpins most state-of-the-art search engine algorithms, such as language and probabilistic models. It has been widely accepted that traditional file signatures are inferior alternatives to inverted files. This paper describes TopSig, a new approach to the construction of file signatures. Many advances in semantic hashing and dimensionality reduction have been made in recent times, but until now these had not been linked to general-purpose, signature-file-based search engines. This paper introduces a different signature file approach that builds upon and extends these recent advances. We demonstrate significant improvements in the performance of signature-file-based indexing and retrieval, performance that is comparable to that of state-of-the-art inverted-file-based systems, including language models and BM25. These findings suggest that file signatures offer a viable alternative to inverted files in suitable settings and position the file signatures model in the class of vector space retrieval models.
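A minimal sketch of the general signature-file idea: one bit signature per document is generated by superimposing hashed term vectors, and documents are ranked by Hamming distance to the query signature. This is a generic illustration of random-projection signatures, not TopSig's actual construction (Python):

import zlib
import numpy as np

def signature(text, bits=64):
    """Superimpose a deterministic random +/-1 vector per term; keep the signs."""
    acc = np.zeros(bits)
    for term in text.lower().split():
        rng = np.random.default_rng(zlib.crc32(term.encode()))  # stable per-term seed
        acc += rng.choice([-1.0, 1.0], size=bits)
    return acc > 0

def hamming(a, b):
    return int(np.count_nonzero(a != b))

docs = ["language models for retrieval", "inverted file search engines"]
sigs = [signature(d) for d in docs]
query_sig = signature("retrieval with language models")
print(min(range(len(docs)), key=lambda i: hamming(sigs[i], query_sig)))  # index of the nearest document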
Abstract:
This paper details the participation of the Australian e-Health Research Centre (AEHRC) in the ShARe/CLEF 2013 eHealth Evaluation Lab - Task 3. This task aims to evaluate the use of information retrieval (IR) systems to aid consumers (e.g. patients and their relatives) in seeking health advice on the Web. Our submissions to the ShARe/CLEF challenge are based on language models generated from the web corpus provided by the organisers. Our baseline system is a standard Dirichlet-smoothed language model. We enhance the baseline by identifying and correcting spelling mistakes in queries, as well as expanding acronyms using AEHRC's Medtex medical text analysis platform. We then consider the readability and the authoritativeness of web pages to further enhance the quality of the document ranking. Measures of readability are integrated into the language models used for retrieval via prior probabilities. Prior probabilities are also used to encode authoritativeness information derived from a list of top-100 consumer health websites. Empirical results show that correcting spelling mistakes and expanding acronyms found in queries significantly improves the effectiveness of the language model baseline. Readability priors seem to increase retrieval effectiveness for graded relevance at early ranks (nDCG@5, but not precision), but no improvements are found at later ranks and when considering binary relevance. The authoritativeness prior does not appear to provide retrieval gains over the baseline: this is likely because of the small overlap between the websites in the corpus and those in the top-100 consumer-health websites we acquired.
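A minimal sketch of the baseline described above: Dirichlet-smoothed query likelihood with a document prior (e.g. a readability score) folded in as a log prior. The toy corpus statistics and mu = 2000 are illustrative defaults, not the paper's exact configuration (Python):

import math
from collections import Counter

MU = 2000  # Dirichlet smoothing parameter; a common default, assumed here

def score(query_terms, doc_terms, collection_tf, collection_len, log_prior=0.0):
    """Rank score: log p(d) + sum over query terms of log p(w|d),
    where p(w|d) = (tf(w,d) + MU * p(w|C)) / (|d| + MU)."""
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    s = log_prior  # e.g. log of a readability or authoritativeness prior
    for w in query_terms:
        if collection_tf[w] == 0:
            continue  # skip out-of-vocabulary terms
        p_wc = collection_tf[w] / collection_len  # background model p(w|C)
        s += math.log((tf[w] + MU * p_wc) / (dl + MU))
    return s

corpus = [["health", "advice", "for", "patients"], ["chest", "pain", "advice"]]
ctf = Counter(w for d in corpus for w in d)
clen = sum(len(d) for d in corpus)
print(score(["chest", "pain"], corpus[1], ctf, clen, log_prior=math.log(0.8)))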
Abstract:
Recent advances in neural language models have contributed new methods for learning distributed vector representations of words (also called word embeddings). Two such methods are the continuous bag-of-words model and the skip-gram model. These methods have been shown to produce embeddings that capture higher-order relationships between words and that are highly effective in natural language processing tasks involving word similarity and word analogy. Despite these promising results, there has been little analysis of the use of these word embeddings for retrieval. Motivated by this observation, in this paper we set out to determine how these word embeddings can be used within a retrieval model and what the benefit might be. To this end, we use neural word embeddings within the well-known translation language model for information retrieval. This language model captures implicit semantic relations between the words in queries and those in relevant documents, thus producing more accurate estimations of document relevance. The word embeddings used to estimate neural language models produce translations that differ from previous translation language model approaches, and these differences deliver improvements in retrieval effectiveness. The models are robust to choices made in building word embeddings; even more so, our results show that the embeddings do not even need to be produced from the same corpus used for retrieval.
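A minimal sketch of the translation language model with embedding-derived translation probabilities: p(q|d) = sum over document words w of p_t(q|w) p(w|d), with p_t obtained by softmax-normalising cosine similarities. The embedding dictionary emb is assumed to be given, e.g. trained with CBOW or skip-gram (Python):

import numpy as np
from collections import Counter

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def translation_probs(q, doc_vocab, emb):
    """p_t(q|w) for each document word w, from embedding similarity."""
    sims = np.array([cosine(emb[q], emb[w]) for w in doc_vocab])
    e = np.exp(sims)
    return dict(zip(doc_vocab, e / e.sum()))

def tlm_term_prob(q, doc_terms, emb):
    """p(q|d) = sum_w p_t(q|w) * p_ml(w|d); assumes all words are in emb."""
    tf = Counter(doc_terms)
    vocab = list(tf)
    pt = translation_probs(q, vocab, emb)
    return sum(pt[w] * tf[w] / len(doc_terms) for w in vocab)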
Assessment of insect occurrence in boreal forests based on satellite imagery and field measurements.
Abstract:
The presence/absence data of twenty-seven forest insect taxa (e.g. Retinia resinella, Formica spp., Pissodes spp., several scolytids) and recorded environmental variation were used to investigate the applicability of modelling insect occurrence based on satellite imagery. The sampling was based on 1800 sample plots (25 m by 25 m) placed along the sides of 30 equilateral triangles (side 1 km) in a fragmented forest area (approximately 100 km²) in Evo, southern Finland. The triangles were overlaid on land use maps interpreted from satellite images (Landsat TM 30 m multispectral scanner imagery, 1991) and digitized geological maps. Insect occurrence was explained using either environmental variables measured in the field or those interpreted from the land use and geological maps. The fit of logistic regression models varied between species, possibly because some species may be associated with the characteristics of single trees while others are associated with stand characteristics. The occurrence of at least certain insect species, especially those associated with Scots pine, could be assessed relatively accurately and indirectly on the basis of satellite imagery and geological maps. Models based on both remotely sensed and geological data better predicted the distribution of forest insects, except in the case of Xylechinus pilosus, Dryocoetes sp. and Trypodendron lineatum, where the differences were relatively small and in favour of the models based on field measurements. The number of species was related to habitat compartment size and distance from the habitat edge calculated from the land use maps, but the logistic regressions suggested that, at the present scale, other environmental variables generally masked the effect of these variables on species occurrence.
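A minimal sketch of the modelling setup in the abstract above: a logistic regression of presence/absence on plot-level predictors, with fit compared between field-measured and map-derived variables via AUC. The variable names and data are placeholders (Python, scikit-learn):

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# X_field: field-measured variables per plot; X_map: variables interpreted
# from satellite land use and geological maps; y: presence/absence (0/1).
def fit_and_score(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# print(fit_and_score(X_field, y), fit_and_score(X_map, y))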
Abstract:
The factors affecting the strategic decisions of non-industrial, private forest landowners (hereafter NIPF landowners) in management planning are studied. A genetic algorithm is used to induce a set of rules predicting the potential cut implied by the landowners' choices of preferred timber management strategies. The rules are based on variables describing the characteristics of the landowners and their forest holdings. The predictive ability of the genetic algorithm is compared to that of linear regression analysis using identical data sets. The data are cross-validated seven times, applying both genetic algorithm and regression analyses, in order to examine the data sensitivity and robustness of the generated models. The optimal rule set derived from the genetic algorithm analyses included the following variables: mean initial volume, the landowner's positive price expectations for the next eight years, the landowner being classified as a farmer, and a preference for the recreational use of the forest property. When tested with previously unseen test data, the optimal rule set resulted in a relative root mean square error of 0.40. In the regression analyses, the optimal regression equation consisted of the following variables: mean initial volume, proportion of forestry income, intention to cut extensively in the future, and positive price expectations for the next two years. The R² of the optimal regression equation was 0.34 and the relative root mean square error obtained from the test data was 0.38. In both models, mean initial volume and positive stumpage price expectations entered as significant predictors of the potential cut under the preferred timber management strategy. When tested with the complete data set of 201 observations, both the optimal rule set and the optimal regression model achieved the same level of accuracy.
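The abstract reports relative root mean square errors of 0.40 and 0.38. One common definition normalises the RMSE by the mean of the observed values; a small sketch under that assumption follows (the paper's exact normalisation may differ):

import numpy as np

def relative_rmse(y_true, y_pred):
    """RMSE divided by the mean of the observed values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)) / y_true.mean())

print(relative_rmse([100, 200, 300], [120, 170, 330]))  # about 0.14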
Abstract:
In this article, we aim at reducing the error rate of the online Tamil symbol recognition system by employing multiple experts to reevaluate certain decisions of the primary support vector machine classifier. Motivated by the relatively high percentage of occurrence of base consonants in the script, a reevaluation technique has been proposed to correct any ambiguities arising in the base consonants. Secondly, a dynamic time-warping method is proposed to automatically extract the discriminative regions for each set of confused characters. Class-specific features derived from these regions aid in reducing the degree of confusion. Thirdly, statistics of specific features are proposed for resolving any confusions in vowel modifiers. The reevaluation approaches are tested on two databases (a) the isolated Tamil symbols in the IWFHR test set, and (b) the symbols segmented from a set of 10,000 Tamil words. The recognition rate of the isolated test symbols of the IWFHR database improves by 1.9 %. For the word database, the incorporation of the reevaluation step improves the symbol recognition rate by 3.5 % (from 88.4 to 91.9 %). This, in turn, boosts the word recognition rate by 11.9 % (from 65.0 to 76.9 %). The reduction in the word error rate has been achieved using a generic approach, without the incorporation of language models.
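A minimal sketch of the dynamic time warping at the heart of the region-extraction step above; this is the standard DTW recursion over two feature-vector sequences, not the authors' full procedure for locating discriminative regions (Python):

import numpy as np

def dtw(a, b):
    """Classic DTW distance between two sequences of feature vectors."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local point distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Two toy pen traces ((x, y) points) of differing lengths:
print(dtw([[0, 0], [1, 1], [2, 2]], [[0, 0], [1, 1], [1.5, 1.6], [2, 2]]))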
Abstract:
In this work, we describe a system that recognises open-vocabulary, isolated, online handwritten Tamil words, and we extend it to recognise a paragraph of writing. We explain in detail each step involved in the process: segmentation, preprocessing, feature extraction, classification and bigram-based post-processing. On our database of 45,000 handwritten words obtained through a tablet PC, we have obtained symbol-level accuracies of 78.5% and 85.3% without and with post-processing using symbol-level language models, respectively. Word-level accuracies for the same are 40.1% and 59.6%. A line- and word-level segmentation strategy is proposed, which gives promising results of 100% line segmentation and 98.1% word segmentation accuracy on our initial trials of 40 handwritten paragraphs. The two modules have been combined to obtain a full-fledged page recognition system for online handwritten Tamil data. To the knowledge of the authors, this is the first attempt at recognition of open-vocabulary, online handwritten paragraphs in any Indian language.
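A minimal sketch of bigram-based post-processing: per-position classifier scores are rescored with a symbol bigram model using Viterbi decoding. The symbols and probabilities below are illustrative, and the bigram function is assumed to return a nonzero probability for every pair (Python):

import math

def viterbi_rescore(hypotheses, bigram, lm_weight=0.5):
    """hypotheses: one {symbol: classifier_prob} dict per position.
    bigram(prev, cur) -> p(cur|prev). Returns the best symbol sequence."""
    best = {s: (math.log(p), [s]) for s, p in hypotheses[0].items()}
    for position in hypotheses[1:]:
        nxt = {}
        for s, p in position.items():
            score, path = max(
                (lp + math.log(p) + lm_weight * math.log(bigram(prev, s)), path)
                for prev, (lp, path) in best.items()
            )
            nxt[s] = (score, path + [s])
        best = nxt
    return max(best.values())[1]

# Toy example: two positions; the bigram model prefers "ka" after "ta".
hyps = [{"ta": 0.6, "pa": 0.4}, {"ka": 0.5, "ha": 0.5}]
print(viterbi_rescore(hyps, lambda p, c: 0.7 if (p, c) == ("ta", "ka") else 0.3))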
Abstract:
This paper discusses the Cambridge University HTK (CU-HTK) system for the automatic transcription of conversational telephone speech. A detailed discussion of the most important techniques in front-end processing, acoustic modeling and model training, and language and pronunciation modeling is presented. These include the use of conversation-side-based cepstral normalization, vocal tract length normalization, heteroscedastic linear discriminant analysis for feature projection, minimum phone error training and speaker adaptive training, lattice-based model adaptation, confusion network based decoding and confidence score estimation, pronunciation selection, language model interpolation, and class-based language models. The transcription system developed for participation in the 2002 NIST Rich Transcription evaluations of English conversational telephone speech data is presented in detail. In this evaluation the CU-HTK system gave an overall word error rate of 23.9%, which was the best performance by a statistically significant margin. Further details on the derivation of faster systems with moderate performance degradation are discussed in the context of the 2002 CU-HTK 10 × RT conversational speech transcription system. © 2005 IEEE.
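A minimal sketch of linear language model interpolation as mentioned above: component models are combined with non-negative weights that sum to one, typically tuned on held-out data. The component models and weights here are placeholders (Python):

def interpolate(models, weights):
    """Return an interpolated estimator p(w|h) = sum_i w_i * p_i(w|h)."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    return lambda word, hist: sum(
        lam * m(word, hist) for lam, m in zip(weights, models)
    )

# Toy components: a word-based and a class-based estimate of p(w|h).
p_word = lambda w, h: 0.02
p_class = lambda w, h: 0.05
p = interpolate([p_word, p_class], [0.7, 0.3])
print(p("hello", ("good",)))  # 0.7*0.02 + 0.3*0.05 = 0.029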
Abstract:
The article analyzes the legal regime of Euskara in the education system of the Autonomous Community of the Basque Country (CAPV). In the CAPV, the legislation recognizes the right to choose the language of study during the educational cycle, and students are separated into different classrooms based on their language preference. This system of separation (of language models) has made it possible to make great strides, although its implementation also reveals aspects which, from the perspective of a pluralistic Basque society on its way towards greater social, political and language integration, call for further reflection. The general model for language planning in the CAPV was fashioned in the eighties as a model characterized by the guarantee of spaces of language freedom, and the educational system was charged with making the learning of the region's autochthonous language more widespread. At this point, we already have a fair degree of evidence on which to base an analysis of the system of language models, and we are in a position to conclude that perhaps the educational system was given too heavy a burden. Official studies on the language performance of Basque schoolchildren show (in a way that is now fully verified) that not all students who finish their mandatory period of schooling achieve the level of knowledge of Euskara required by the regulations. Faced with this reality, it becomes necessary to articulate some alternative to the current configuration of the system of language models, one that will make it possible in the future to have a Basque society that is linguistically more integrated, thereby avoiding having knowledge or lack of knowledge of one of the official languages become a barrier between two communities. Many sides have urged a reconsideration of the system of language models; the Basque Parliament itself has requested that the Department of Education design a new system. This article analyzes the legal foundations on which the current system is built and explores the potential avenues for legal cooperation that would make it possible to move towards a new system aimed at guaranteeing higher rates of bilingualism, one sufficiently flexible to respond to and accommodate the different sociolinguistic realities of the region.
Abstract:
The first chapter of this thesis deals with automating data gathering for single cell microfluidic tests. The programs developed saved significant amounts of time with no loss in accuracy. The technology from this chapter was applied to experiments in both Chapters 4 and 5.
The second chapter describes the use of statistical learning to predict whether an anti-angiogenic drug (Bevacizumab) would successfully treat a glioblastoma multiforme tumor. This was done by first measuring protein levels in 92 blood samples using the DNA-encoded antibody library platform, which allowed 35 different proteins to be measured per sample with sensitivity comparable to ELISA. Two statistical learning models were developed to predict whether the treatment would succeed. The first, logistic regression, predicted with 85% accuracy and an AUC of 0.901 using a five-protein panel. These five proteins were statistically significant predictors and gave insight into the mechanism behind anti-angiogenic success or failure. The second model, an ensemble of logistic regression, kNN, and random forest, predicted with a slightly higher accuracy of 87%.
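A minimal sketch of an ensemble of the kind described above (logistic regression, kNN, and random forest combined by soft voting), using scikit-learn. The feature matrix X (protein levels per sample) and labels y (treatment outcome) are assumed, and the hyperparameters are illustrative:

from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average the predicted class probabilities
)
# X: (n_samples, n_proteins) panel measurements; y: success/failure labels.
# print(cross_val_score(ensemble, X, y, cv=5).mean())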
The third chapter details the development of a photocleavable conjugate that multiplexes cell surface detection in microfluidic devices. The method successfully detected streptavidin on coated beads with a 92% positive predictive rate. Furthermore, chambers with 0, 1, 2, and 3+ beads were statistically distinguishable. The method was then used to detect CD3 on Jurkat T cells, yielding a positive predictive rate of 49% and a false positive rate of 0%.
The fourth chapter discusses the use of T cell polyfunctionality measurements to predict whether a patient will respond to adoptive T cell transfer therapy. In 15 patients, we measured 10 proteins from individual T cells (~300 cells per patient). The polyfunctional strength index was calculated and then correlated with the patient's progression-free survival (PFS) time. 52 other parameters measured in the single cell test were also correlated with PFS. No statistically significant correlate has been determined, however, and more data are necessary to reach a conclusion.
Finally, the fifth chapter discusses the interactions between T cells and how they affect protein secretion. It was observed that T cells in direct contact selectively enhance their protein secretion, in some cases by over 5-fold. This occurred for Granzyme B, Perforin, CCL4, TNF-α, and IFN-γ; IL-10 was shown to decrease slightly upon contact. This phenomenon held true for T cells from all patients tested (n=8). Using single cell data, the theoretical protein secretion frequency was calculated for two cells and then compared to the observed rate of secretion both for two cells not in contact and for two cells in contact. In over 90% of cases, the theoretical protein secretion rate matched that of two cells not in contact.
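A small worked sketch of the comparison described above, assuming the theoretical two-cell frequency treats the cells as statistically independent: if a single cell secretes a given protein with probability p, then at least one of two independent cells secretes it with probability 1 - (1 - p)^2 (Python):

def expected_two_cell_rate(p_single):
    """Probability that at least one of two independent cells secretes."""
    return 1 - (1 - p_single) ** 2

# A 30% single-cell secretion rate predicts a 51% two-cell rate; observed
# rates for cells in contact exceeding this would indicate enhancement.
print(expected_two_cell_rate(0.30))  # 0.51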
Abstract:
Statistical dialogue models have required a large number of dialogues to optimise the dialogue policy, relying on the use of a simulated user. This results in a mismatch between training and live conditions, and in significant development costs for the simulator, thereby mitigating many of the claimed benefits of such models. Recent work on Gaussian process reinforcement learning has shown that learning can be substantially accelerated. This paper reports on an experiment to learn a policy for a real-world task directly from human interaction, using rewards provided by users. It shows that a usable policy can be learnt in just a few hundred dialogues without needing a user simulator, using a learning strategy that reduces the risk of taking bad actions. The paper also investigates adaptation behaviour when the system continues learning for several thousand dialogues, and highlights the need for robustness to noisy rewards. © 2011 IEEE.
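A highly simplified sketch of the idea behind Gaussian process reinforcement learning for dialogue policies: a GP is fitted to observed (state, action) features and user-supplied rewards, and actions are chosen optimistically from the posterior mean plus an uncertainty bonus, which helps manage the risk of bad actions. This stand-in uses plain GP regression rather than the GP-SARSA algorithm of the paper, and all data below are placeholders (Python, scikit-learn):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def choose_action(gp, state_feats, action_feats, beta=1.0):
    """Pick the action maximising posterior mean + beta * posterior std."""
    X = np.array([np.concatenate([state_feats, a]) for a in action_feats])
    mean, std = gp.predict(X, return_std=True)
    return int(np.argmax(mean + beta * std))

# Fit on logged (state+action features, reward) pairs, then act:
X_hist = np.random.rand(50, 6)   # placeholder interaction log
r_hist = np.random.rand(50)      # placeholder user rewards
gp = GaussianProcessRegressor().fit(X_hist, r_hist)
print(choose_action(gp, np.zeros(3), [np.zeros(3), np.ones(3)]))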
Abstract:
The diversity of non-domestic buildings at the urban scale poses a number of difficulties for developing models for large-scale analysis of the stock. This research proposes a probabilistic, engineering-based, bottom-up model to address these issues. In a recent study we classified London's non-domestic buildings based on the service they provide, such as offices, retail premises, and schools, and proposed the creation of one probabilistic representational model per building type. This paper investigates techniques for the development of such models. The representational model is a statistical surrogate of a dynamic energy simulation (ES) model. We first identify the main parameters affecting energy consumption in a particular building sector/type using sampling-based global sensitivity analysis methods, and then generate statistical surrogate models of the dynamic ES model within the dominant model parameters. Given a sample of actual energy consumption for that sector, we use the surrogate model to infer the distribution of model parameters by inverse analysis. The inferred distributions of input parameters are able to quantify the relative benefits of alternative energy saving measures across an entire building sector, with the requisite quantification of uncertainties. Secondary school buildings are used to illustrate the application of this probabilistic method. © 2012 Elsevier B.V. All rights reserved.
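A minimal sketch of the surrogate-modelling step described above: sample candidate parameters, run the energy simulation (mocked here by a simple function), fit a regression surrogate, and read standardized coefficients as a crude sensitivity measure. The parameter names, ranges, and toy simulator are assumptions, not the paper's model (Python):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
# Illustrative dominant parameters: wall U-value, infiltration (ach), internal gains.
X = rng.uniform([0.2, 0.1, 5.0], [2.0, 1.0, 25.0], size=(n, 3))

def energy_sim(u, ach, gains):
    """Stand-in for the dynamic ES model (toy linear relationship plus noise)."""
    return 80 * u + 40 * ach - 1.5 * gains + rng.normal(0, 2)

y = np.array([energy_sim(*x) for x in X])
surrogate = LinearRegression().fit(X, y)
std_coefs = surrogate.coef_ * X.std(axis=0) / y.std()  # standardized sensitivities
print(dict(zip(["U-value", "infiltration", "gains"], std_coefs.round(2))))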
Abstract:
The diversity of non-domestic buildings at the urban scale poses a number of difficulties for developing building stock models. This research proposes a probabilistic, engineering-based, bottom-up stock model to address these issues. School buildings are used to illustrate the application of this probabilistic method. Two sampling-based global sensitivity methods are used to identify the key factors affecting building energy performance. The sensitivity analysis methods can also produce statistical regression models for inverse analysis, which are used to estimate input information for building stock energy models. The effects of different energy saving measures are analysed by changing these building stock input distributions.
Abstract:
Tibetan language models are a fundamental and core technology of Tibetan information processing. Researching and developing Tibetan statistical language models with a strong capacity to describe the Tibetan language has important practical significance and value for all application areas of Tibetan information processing, such as machine translation, Tibetan speech recognition, Tibetan input methods, Tibetan text proofreading and Tibetan information retrieval. Building a Tibetan language model is a key foundational task in Tibetan information processing and a necessary step towards the informatisation of Tibetan.
This thesis first studies Tibetan automatic word segmentation and implements a maximum-matching segmentation scheme for Tibetan based on case particles. It then studies techniques for statistical language model construction and data smoothing, and implements a Tibetan statistical language model system consisting of three main modules: word frequency counting, model training and model evaluation. To address the data sparsity problem, several smoothing methods are implemented, including Witten-Bell smoothing, absolute discounting, Kneser-Ney smoothing and modified Kneser-Ney smoothing.
For the experiments, a Tibetan corpus of moderate size was collected, organised and preprocessed; the segmentation program was used to segment the Tibetan text, which was then divided into a training corpus and a test corpus. A Tibetan statistical language model was trained on the training corpus with the various smoothing methods and evaluated on the test corpus, comparing the relative merits of the different smoothing methods.
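A minimal sketch of interpolated Kneser-Ney bigram smoothing, one of the methods implemented in the thesis; the discount D = 0.75 is a common default, and the context word is assumed to occur in the training data (Python):

from collections import Counter

def kneser_ney_bigram(tokens, D=0.75):
    """Return p(word|prev) under interpolated Kneser-Ney smoothing."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    context_counts = Counter(tokens[:-1])
    continuation = Counter(w for (_, w) in bigrams)   # distinct left contexts per word
    followers = Counter(p for (p, _) in bigrams)      # distinct continuations per context
    n_bigram_types = len(bigrams)

    def p(word, prev):
        p_cont = continuation[word] / n_bigram_types        # continuation probability
        lam = D * followers[prev] / context_counts[prev]    # back-off weight
        return max(bigrams[(prev, word)] - D, 0) / context_counts[prev] + lam * p_cont
    return p

p = kneser_ney_bigram("the cat sat on the mat the cat ran".split())
print(round(p("cat", "the"), 3))  # 0.488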