Abstract:
Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents BAGEL, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that BAGEL can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data. © 2010 Association for Computational Linguistics.
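A minimal sketch of the overgenerate-and-rank paradigm (a) that the abstract contrasts with BAGEL; the templates, slots, and toy bigram model are invented for illustration and are not the paper's components:

```python
import math

def handcrafted_generator(slots):
    """Hypothetical rule-based generator: enumerate a few templates."""
    templates = [
        "{name} serves {food} food.",
        "{name} is a restaurant offering {food} cuisine.",
        "there is a {food} restaurant called {name}.",
    ]
    return [t.format(**slots) for t in templates]

def lm_score(utterance, bigram_probs):
    """Log-probability of an utterance under a toy bigram LM."""
    words = utterance.lower().rstrip(".").split()
    return sum(math.log(bigram_probs.get((w1, w2), 1e-6))
               for w1, w2 in zip(words, words[1:]))

slots = {"name": "Bagel House", "food": "Italian"}
candidates = handcrafted_generator(slots)
# A real ranker would use an LM trained on in-domain data.
toy_bigrams = {("is", "a"): 0.1, ("a", "restaurant"): 0.05}
best = max(candidates, key=lambda u: lm_score(u, toy_bigrams))
print(best)
```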
Abstract:
State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple subsystems developed at different sites. Cross system adaptation can be used as an alternative to direct hypothesis level combination schemes such as ROVER. The standard approach involves only cross adapting acoustic models. To fully exploit the complementary features among sub-systems, language model (LM) cross adaptation techniques can be used. In this paper, previous research on multi-level n-gram LM cross adaptation is extended to further include the cross adaptation of neural network LMs. Using this improved LM cross adaptation framework, significant error rate reductions of 4.0%-7.1% relative were obtained over acoustic model only cross adaptation when combining a range of Chinese LVCSR sub-systems used in the 2010 and 2011 DARPA GALE evaluations. Copyright © 2011 ISCA.
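One standard ingredient of LM cross adaptation is re-estimating interpolation weights on the word hypotheses produced by another sub-system. A minimal sketch, assuming per-token component LM probabilities are already available; the EM update shown is the textbook one for mixture weights, not necessarily the paper's exact recipe:

```python
def em_interpolation_weights(component_probs, iters=20):
    """component_probs[i][t] = P_i(w_t | h_t) on the cross-system
    hypotheses; returns interpolation weights that (locally)
    minimise perplexity on that text."""
    n_comp = len(component_probs)
    n_tok = len(component_probs[0])
    lam = [1.0 / n_comp] * n_comp
    for _ in range(iters):
        post_sums = [0.0] * n_comp
        for t in range(n_tok):
            # Mixture probability of token t under current weights.
            mix = sum(lam[i] * component_probs[i][t] for i in range(n_comp))
            # Accumulate posterior responsibility of each component.
            for i in range(n_comp):
                post_sums[i] += lam[i] * component_probs[i][t] / mix
        lam = [s / n_tok for s in post_sums]  # M-step: average posteriors
    return lam

# Toy example: two component LMs scored on four hypothesised tokens.
p_ngram = [0.01, 0.2, 0.05, 0.1]
p_nnlm = [0.02, 0.1, 0.08, 0.3]
print(em_interpolation_weights([p_ngram, p_nnlm]))
```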
Abstract:
Language models (LMs) are often constructed by building multiple individual component models that are combined using context independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required. Several approaches are described in this paper. The first is based on MAP estimation, where interpolation weights of lower order contexts are used as smoothing priors. The second uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion towards corpus size. A range of schemes to combine weight information obtained from training data and test data hypotheses are also proposed to improve robustness during context dependent LM adaptation. In addition, a minimum Bayes' risk (MBR) based discriminative training scheme is proposed, along with an efficient weighted finite state transducer (WFST) decoding algorithm for context dependent interpolation. The proposed techniques were evaluated on a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
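A plausible formalisation of the context dependent interpolation and its MAP-smoothed weights (notation ours, not the paper's):

\[
P(w_t \mid h_t) = \sum_{m=1}^{M} \lambda_m(h_t)\, P_m(w_t \mid h_t),
\qquad \sum_{m=1}^{M} \lambda_m(h) = 1,
\]

where the weights \(\lambda_m(h)\) depend on the previous word context \(h\). Under MAP estimation, the weights of the lower order (back-off) context \(\tilde{h}\) act as a smoothing prior:

\[
\hat{\lambda}_m(h) = \frac{c_m(h) + \tau\, \lambda_m(\tilde{h})}{\sum_{m'} c_{m'}(h) + \tau},
\]

where \(c_m(h)\) is the expected count of component \(m\) being selected in context \(h\) and \(\tau\) controls the prior strength.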
Abstract:
Differential growth of thin elastic bodies furnishes a surprisingly simple explanation of the complex and intriguing shapes of many biological systems, such as plant leaves and organs. Similarly, inelastic strains induced by thermal effects or active materials in layered plates are extensively used to control the curvature of thin engineering structures. Such behaviour inspires us to distinguish and to compare two possible modes of differential growth not normally compared to each other, in order to reveal the full range of out-of-plane shapes of an initially flat disk. The first growth mode, frequently employed by engineers, is characterised by direct bending strains through the thickness; the second mode, mainly apparent in biological systems, is driven by extensional strains of the middle surface. When each mode is considered separately, it is shown that buckling is common to both modes, leading to bistable shapes: growth from bending strains results in a double-curvature limit at buckling, followed by almost developable deformation in which the Gaussian curvature at buckling is conserved; during extensional growth, out-of-plane distortions occur only when the buckling condition is reached, and the Gaussian curvature continues to increase. When both growth modes are present, it is shown that, generally, larger displacements are obtained under in-plane growth when the disk is relatively thick and growth strains are small, and vice versa. It is also shown that shapes can be mono-, bi-, tri- or neutrally stable, depending on the growth strain levels and the material properties. Furthermore, it is shown that certain combinations of growth modes result in a free, or natural, response in which the doubly curved shape of the disk exactly matches the imposed strains. Such diverse behaviour, in general, may help to realise more effective actuation schemes for engineering structures. © 2012 Elsevier Ltd. All rights reserved.
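The geometric fact underlying the contrast between the two modes can be stated compactly. The Gaussian curvature of the mid-surface is

\[
K = \kappa_1 \kappa_2,
\]

and by Gauss's Theorema Egregium \(K\) is determined by the mid-surface metric alone. Bending-strain growth leaves the metric nearly unchanged after buckling, so the post-buckled deformation is almost developable with \(K\) conserved; extensional growth changes the metric directly, so \(K\) continues to increase, as described above.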
Abstract:
Mandarin Chinese is based on characters which are syllabic in nature and morphological in meaning. All spoken languages have syllabiotactic rules which govern the construction of syllables and their allowed sequences. These constraints are not as restrictive as those learned from word sequences, but they can provide additional useful linguistic information. Hence, it is possible to improve speech recognition performance by appropriately combining these two types of constraints. For the Chinese language considered in this paper, character level language models (LMs) can be used as a first level approximation to allowed syllable sequences. To test this idea, word and character level n-gram LMs were trained on 2.8 billion words (equivalent to 4.3 billion characters) of texts from a wide collection of text sources. Both hypothesis and model based combination techniques were investigated to combine word and character level LMs. Significant character error rate reductions of up to 7.3% relative were obtained on a state-of-the-art Mandarin Chinese broadcast audio recognition task using an adapted history dependent multi-level LM that performs a log-linear combination of character and word level LMs. This supports the hypothesis that character or syllable sequence models are useful for improving Mandarin speech recognition performance.
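The log-linear combination of word and character level LMs referred to above can be written, in our notation rather than the paper's, as

\[
P(w_t \mid h_t) \;\propto\; P_{\text{wd}}(w_t \mid h_t)^{\lambda}\; P_{\text{ch}}\big(c(w_t) \mid h_t\big)^{1-\lambda},
\]

where \(c(w_t)\) is the character sequence of word \(w_t\), the history \(h_t\) is tracked at both levels, and the product must be renormalised over the vocabulary; in the adapted history dependent variant, \(\lambda\) additionally depends on \(h_t\).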
Abstract:
Current commercial dialogue systems typically use hand-crafted grammars for Spoken Language Understanding (SLU) operating on the top one or two hypotheses output by the speech recogniser. These systems are expensive to develop and they suffer from significant degradation in performance when faced with recognition errors. This paper presents a robust method for SLU based on features extracted from the full posterior distribution of recognition hypotheses encoded in the form of word confusion networks. Following [1], the system uses SVM classifiers operating on n-gram features, trained on unaligned input/output pairs. Performance is evaluated on both an off-line corpus and on-line in a live user trial. It is shown that a statistical discriminative approach to SLU operating on the full posterior ASR output distribution can substantially improve performance both in terms of accuracy and overall dialogue reward. Furthermore, additional gains can be obtained by incorporating features from the previous system output. © 2012 IEEE.
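A minimal sketch of the kind of feature extraction the abstract describes: expected n-gram counts under the confusion network posteriors, fed to an SVM. The toy networks, the semantic labels, and the use of scikit-learn are illustrative assumptions, not the paper's setup:

```python
from itertools import product
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def cn_ngram_features(confnet, n=2):
    """confnet: list of slots, each a list of (word, posterior) pairs.
    Returns expected n-gram counts under the posterior distribution."""
    feats = {}
    for i in range(len(confnet) - n + 1):
        for path in product(*confnet[i:i + n]):
            ngram = " ".join(word for word, _ in path)
            prob = 1.0
            for _, posterior in path:
                prob *= posterior  # independence assumption across slots
            feats[ngram] = feats.get(ngram, 0.0) + prob
    return feats

# Toy confusion networks (one per utterance), each with a semantic label.
cns = [
    [[("cheap", 0.7), ("chip", 0.3)], [("restaurant", 1.0)]],
    [[("expensive", 0.9), ("pensive", 0.1)], [("hotel", 1.0)]],
]
labels = ["price=cheap", "price=expensive"]

vec = DictVectorizer()
X = vec.fit_transform([cn_ngram_features(cn) for cn in cns])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```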
Abstract:
State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple sub-systems that may even be developed at different sites. Cross system adaptation, in which model adaptation is performed using the outputs from another sub-system, can be used as an alternative to hypothesis level combination schemes such as ROVER. Normally cross adaptation is only performed on the acoustic models. However, there are many other levels in LVCSR systems' modelling hierarchy where complementary features may be exploited, for example, the sub-word and the word level, to further improve cross adaptation based system combination. It is thus interesting to also cross adapt language models (LMs) to capture these additional useful features. In this paper cross adaptation is applied to three forms of language models: a multi-level LM that models both syllable and word sequences, a word level neural network LM, and the linear combination of the two. Significant error rate reductions of 4.0-7.1% relative were obtained over ROVER and acoustic model only cross adaptation when combining a range of Chinese LVCSR sub-systems used in the 2010 and 2011 DARPA GALE evaluations. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
In natural languages multiple word sequences can represent the same underlying meaning. Modelling only the observed surface word sequence can result in poor context coverage, for example, when using n-gram language models (LMs). To handle this issue, this paper presents a novel form of language model, the paraphrastic LM. A phrase level transduction model that is statistically learned from standard text data is used to generate paraphrase variants. LM probabilities are then estimated by maximizing their marginal probability. Significant error rate reductions of 0.5%-0.6% absolute were obtained on a state-of-the-art conversational telephone speech recognition task using a paraphrastic multi-level LM modelling both word and phrase sequences.
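One way to read the marginalisation described above, in our notation: with \(W'\) ranging over paraphrase variants generated by the phrase level transduction model,

\[
P(W) \;=\; \sum_{W'} P(W \mid W')\, P(W'),
\]

so that the n-gram statistics of the paraphrastic LM are collected from the whole space of paraphrase variants rather than from the observed surface sequences alone.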
Abstract:
In natural languages multiple word sequences can represent the same underlying meaning. Modelling only the observed surface word sequence can result in poor context coverage, for example, when using n-gram language models (LMs). To handle this issue, paraphrastic LMs were proposed in previous research and successfully applied to a US English conversational telephone speech transcription task. In order to exploit the complementary characteristics of paraphrastic LMs and neural network LMs (NNLMs), their combination is investigated in this paper. To investigate the generalization ability of paraphrastic LMs to other languages, experiments are conducted on a Mandarin Chinese broadcast speech transcription task. Using a paraphrastic multi-level LM modelling both word and phrase sequences, significant error rate reductions of 0.9% absolute (9% relative) and 0.5% absolute (5% relative) were obtained over the baseline n-gram and NNLM systems respectively, after combination with word and phrase level NNLMs. © 2013 IEEE.
Abstract:
Studies have attributed several functions to the Eaf family, including tumor suppression and eye development. Given the potential association between cancer and development, we set forth to explore Eaf1 and Eaf2/U19 activity in vertebrate embryogenesis, using zebrafish. In situ hybridization revealed similar eaf1 and eaf2/u19 expression patterns. Morpholino-mediated knockdown of either eaf1 or eaf2/u19 expression produced similar morphological changes that could be reversed by ectopic expression of target or reciprocal-target mRNA. However, combining Eaf1 and Eaf2/U19 (Eafs) morpholinos increased the severity of defects, suggesting that Eaf1 and Eaf2/U19 share only partial functional redundancy. The Eafs knockdown phenotype resembled that of embryos with defects in convergence and extension movements. Indeed, knockdown caused expression pattern changes for convergence and extension movement markers, whereas cell tracing experiments using Kaede mRNA showed a correlation between Eafs knockdown and cell migration defects. Cardiac and pancreatic differentiation markers revealed that Eafs knockdown also disrupted midline convergence of heart and pancreatic organ precursors. Noncanonical Wnt signaling plays a key role in both convergence and extension movements and midline convergence of organ precursors. We found that Eaf1 and Eaf2/U19 maintained expression levels of wnt11 and wnt5. Moreover, wnt11 or wnt5 mRNA partially rescued the convergence and extension movement defects occurring in eafs morphants. Wnt11 and Wnt5 converge on rhoA, so, not surprisingly, rhoA mRNA rescued the defects more effectively than either wnt11 or wnt5 mRNA alone. However, ectopic expression of wnt11 and wnt5 did not affect eaf1 and eaf2/u19 expression. These data indicate that eaf1 and eaf2/u19 act upstream of noncanonical Wnt signaling to mediate convergence and extension movements.
Abstract:
This paper presents the beginnings of an automatic statistician, focusing on regression problems. Our system explores an open-ended space of statistical models to discover a good explanation of a data set, and then produces a detailed report with figures and natural-language text. Our approach treats unknown regression functions nonparametrically using Gaussian processes, which has two important consequences. First, Gaussian processes can model functions in terms of high-level properties (e.g. smoothness, trends, periodicity, changepoints). Taken together with the compositional structure of our language of models, this allows us to automatically describe functions in simple terms. Second, the use of flexible nonparametric models and a rich language for composing them in an open-ended manner also results in state-of-the-art extrapolation performance evaluated over 13 real time series data sets from various domains. Copyright © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
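A minimal sketch of the compositional-kernel idea with off-the-shelf tools; the kernel below is hand-picked (trend plus periodicity plus noise), whereas the system described above searches such a space automatically, and scikit-learn is our stand-in, not the paper's implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 80)[:, None]
y = 0.5 * X.ravel() + np.sin(2.0 * np.pi * X.ravel()) + 0.1 * rng.standard_normal(80)

# Composite kernel: smooth trend (RBF) + periodic component + noise,
# expressing high-level structure as a sum of base kernels.
kernel = (RBF(length_scale=5.0)
          + ExpSineSquared(length_scale=1.0, periodicity=1.0)
          + WhiteKernel(noise_level=0.01))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.linspace(10.0, 12.0, 20)[:, None]  # extrapolation region
mean, std = gp.predict(X_new, return_std=True)
print(gp.kernel_)  # fitted hyperparameters of the composite structure
```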