986 results for sistemi integrati, CAT tools, machine translation
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use come from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithms on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than batch approaches and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs.
We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves prediction quality similar to methods that use all of the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
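The cost-accuracy trade-off this abstract describes can be illustrated with a minimal sketch of a sequential read-or-commit policy (hypothetical names and toy data; the dissertation's actual policies are learned with imitation and reinforcement learning, not a fixed threshold):

```python
# Sketch of a sequential policy: read input pieces one at a time and
# commit to a prediction once a confidence estimate crosses a threshold.

def run_policy(tokens, confidence, threshold):
    """Consume tokens incrementally; stop when confidence(prefix) >= threshold.

    confidence: callable mapping the prefix seen so far to a score in [0, 1].
    Returns (cost, prefix): cost is the fraction of the input consumed.
    """
    prefix = []
    for tok in tokens:
        prefix.append(tok)
        if confidence(prefix) >= threshold:
            break
    return len(prefix) / len(tokens), prefix

# Toy confidence that grows with the amount of input seen.
conf = lambda prefix: len(prefix) / 4

cost, prefix = run_policy(list("abcdefgh"), conf, threshold=0.75)
# cost == 3/8: the policy committed after reading 3 of 8 tokens.
```

Sweeping the threshold traces out the cost-accuracy (Pareto) curve the abstract mentions: lower thresholds commit earlier, trading accuracy for cost.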
Abstract:
Project work presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Specialised Translation and Interpreting, under the supervision of Alberto Couto.
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for the Master's degree in Informatics Engineering.
Abstract:
This report covers a 400-hour internship forming part of the non-taught component of the Master's in Translation (English specialisation) at the Faculdade de Ciências Sociais e Humanas. The main purpose of the internship report was to describe the various projects carried out during the internship, focusing on the tools used to complete the translation projects. In this way the student was able to explore the tools, reach a conclusion about the workflow each tool supports, and offer a critical, personal assessment of the CAT tools she worked with.
Abstract:
Post-editing, defined here as the revision of output generated exclusively by machine translation, has been gaining ever greater prominence in the translation world. It affects clients, translators and companies, and therefore deserves a place in translation scholarship, where it can be studied and discussed. It raises questions mainly concerning time and quality, and it is an area in which much research remains to be done. This report analyses chiefly a post-editing project carried out during a curricular internship, addressing theory and practice, as the title indicates, in an introductory manner.
Abstract:
Internship report for the Master's in Translation and Multilingual Communication.
Abstract:
This article describes a methodology for building WordNets based on the machine translation of a sense-disambiguated English corpus. The corpus we use consists of the semantically tagged glosses of WN 3.0 together with the SemCor corpus. The precision results are comparable to those obtained with methods based on bilingual dictionaries for the same languages. The methodology described is being used, in combination with other strategies, in the creation of the Spanish and Catalan WordNets 3.0.
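The projection idea behind this methodology can be sketched as follows (a toy illustration with hypothetical data; the paper's actual pipeline machine-translates the sense-tagged WN 3.0 glosses and SemCor): each sense-tagged English word is translated, and the translation is registered as a candidate lemma for the same synset in the target-language WordNet.

```python
# Sketch: build target-language WordNet entries by translating a
# sense-disambiguated English corpus and keeping the synset labels.

def project_senses(tagged_tokens, translate):
    """tagged_tokens: iterable of (english_word, synset_id) pairs.
    translate: callable mapping an English word to a target-language word.
    Returns {synset_id: set of candidate target-language lemmas}.
    """
    wordnet = {}
    for word, synset in tagged_tokens:
        wordnet.setdefault(synset, set()).add(translate(word))
    return wordnet

# Toy English->Spanish lexicon standing in for an MT system.
lexicon = {"dog": "perro", "house": "casa", "home": "casa"}
corpus = [("dog", "dog.n.01"), ("house", "house.n.01"), ("home", "house.n.01")]
candidates = project_senses(corpus, lexicon.get)
# candidates["house.n.01"] == {"casa"}: two English lemmas of the same
# synset converge on one Spanish candidate.
```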
Abstract:
The objective of PANACEA is to build a factory of language resources (LRs) that automates the stages involved in the acquisition, production, updating and maintenance of the LRs required by MT systems and by other applications based on language technologies, and that simplifies the handling of intellectual property rights. This automation will significantly cut down the cost, time and human effort involved. These reductions in cost and time are the only way to guarantee the continuous supply of LRs that MT and other language technologies will demand in a multilingual Europe.
Abstract:
The objective of the PANACEA ICT-2007.2.2 EU project is to build a platform that automates the stages involved in the acquisition, production, updating and maintenance of the large language resources required by, among others, MT systems. The development of a Corpus Acquisition Component (CAC) for extracting monolingual and bilingual data from the web is one of the most innovative building blocks of PANACEA. The CAC, which is the first stage in the PANACEA pipeline for building Language Resources, adopts an efficient and distributed methodology to crawl for web documents with rich textual content in specific languages and predefined domains. The CAC includes modules that can acquire parallel data from sites with in-domain content available in more than one language. In order to extrinsically evaluate the CAC methodology, we have conducted several experiments that used crawled parallel corpora for the identification and extraction of parallel sentences using sentence alignment. The corpora were then successfully used for domain adaptation of Machine Translation Systems.
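The parallel-sentence identification step mentioned above can be illustrated with a much-simplified length-based filter in the spirit of classic length-based sentence aligners (a sketch only, with toy data; the CAC relies on full sentence-alignment tooling, not this heuristic):

```python
# Simplified sketch: keep candidate parallel sentence pairs whose
# character-length ratio is plausible, as length-based aligners do.

def length_ratio_pairs(src_sents, tgt_sents, max_ratio=1.5):
    """Pair sentences position by position; keep pairs whose longer/shorter
    character-length ratio does not exceed max_ratio."""
    pairs = []
    for s, t in zip(src_sents, tgt_sents):
        longer, shorter = max(len(s), len(t)), min(len(s), len(t))
        if shorter > 0 and longer / shorter <= max_ratio:
            pairs.append((s, t))
    return pairs

src = ["The patent expired.", "Short."]
tgt = ["La patente expiró.", "Una frase bastante más larga."]
kept = length_ratio_pairs(src, tgt)
# Only the first pair survives: the second differs too much in length.
```

Real aligners additionally allow 1-to-2 and 2-to-1 matches and combine length with lexical evidence; this sketch shows only the filtering intuition.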
Abstract:
This paper presents a novel efficiency-based evaluation of sentence and word aligners. This assessment is critical for their reliable use in industrial scenarios. The evaluation shows that the resources required by aligners differ rather broadly. Subsequently, we establish limitation mechanisms for a set of aligners deployed as web services. These results, paired with the quality expected from the aligners, allow providers to choose the most appropriate aligner according to the task at hand.
Abstract:
Extensible Dependency Grammar (XDG; Debusmann, 2007) is a flexible, modular dependency grammar framework in which sentence analyses consist of multigraphs and processing takes the form of constraint satisfaction. This paper shows how XDG lends itself to grammar-driven machine translation and introduces the machinery necessary for synchronous XDG. Since the approach relies on a shared semantics, it resembles interlingua MT. It differs in that there are no separate analysis and generation phases. Rather, translation consists of the simultaneous analysis and generation of a single source-target sentence.
Abstract:
There are a number of morphological analysers for Polish. Most of these, however, are non-free resources. What is more, different analysers employ different tagsets and tokenisation strategies. This situation calls for a simple and universal framework to join different sources of morphological information, including the existing resources as well as user-provided dictionaries. We present such a configurable framework, which allows users to write simple configuration files that define tokenisation strategies and the behaviour of morphological analysers, including simple tagset conversion.
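The tagset-conversion part of such a framework can be sketched as a small mapping table applied to an analyser's output (hypothetical tag names chosen for illustration; the framework's real configuration files also govern tokenisation and analyser behaviour):

```python
# Sketch: convert one analyser's tagset to a common tagset via a mapping
# table, passing unknown tags through as "X".

TAGSET_MAP = {
    "subst": "NOUN",  # one analyser's noun tag -> common tag
    "adj": "ADJ",
    "fin": "VERB",    # finite verb form
}

def convert(analyses, tag_map):
    """analyses: list of (form, tag) pairs from a morphological analyser."""
    return [(form, tag_map.get(tag, "X")) for form, tag in analyses]

converted = convert([("kot", "subst"), ("śpi", "fin")], TAGSET_MAP)
# converted == [("kot", "NOUN"), ("śpi", "VERB")]
```

In the framework described, such mappings would live in the user-editable configuration files rather than in code, so new analysers and tagsets can be joined without reprogramming.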
Abstract:
We describe a series of experiments in which we start with English-to-French and English-to-Japanese versions of an open-source rule-based speech translation system for a medical domain, and bootstrap corresponding statistical systems. Comparative evaluation reveals that the rule-based systems are still significantly better than the statistical ones, despite the fact that considerable effort was invested in tuning both the recognition and translation components; moreover, a hybrid system only marginally improved recall at the cost of a loss in precision. The results suggest that rule-based architectures may still be preferable to statistical ones for safety-critical speech translation tasks.