998 results for Translation memory
Abstract:
This study set out to analyse the impact of using a translation memory (TM) and of post-editing (PE) raw machine translation output on perceived difficulty and on the time needed to produce a high-quality final text. The experiment involved six native Italian-speaking students from the Master's degree programme in Specialised Translation at the University of Bologna (Forlì campus). Participants were divided into three pairs, each of which was assigned an excerpt from a press release in English. In each pair, one participant was asked to translate the text into Italian using the TM within SDL Trados Studio 2011, while the other was asked to fully post-edit into Italian the raw output produced by Google Translate. In cases where the TM or the raw output contained no (correct) translations, participants were allowed to consult the Internet. Using Think-Aloud Protocols (TAPs), they were asked to verbalise their thoughts while performing the tasks. This made it possible to identify the translation problems they encountered and the cases in which the TM and the raw output supplied correct solutions; it was also possible to observe the translation strategies employed, and participants were then asked to rate their difficulty in retrospective interviews. The time taken by each participant was also measured. The data on perceived difficulty and on time were related to the number of correct solutions supplied by the TM and by the raw output, respectively. Using the TM was found to yield greater time savings and, unlike PE, to reduce perceived difficulty. This study aims to help future professional translators choose technological tools that allow them to save time and resources.
Abstract:
Following the internationalization of contemporary higher education, academic institutions based in non-English-speaking countries are increasingly urged to produce content in English in order to address prospective international students and personnel, as well as to increase their attractiveness. The demand for English translations in the institutional academic domain is consequently growing at a rate that exceeds the capacity of the translation profession. Resources for assisting non-native authors and translators in the production of appropriate texts in L2 are therefore required to help academic institutions and professionals streamline their translation workload. Such resources include: (i) parallel corpora to train machine translation systems and multilingual authoring tools; and (ii) translation memories for computer-aided translation tools. The purpose of this study is to create and evaluate reference resources of the kinds mentioned in (i) and (ii) through the automatic sentence alignment of a large set of Italian and English as a Lingua Franca (ELF) institutional academic texts given as equivalent but not necessarily parallel (i.e. translated). In this framework, a set of alignment algorithms and tools is examined in order to identify the most profitable one(s) in terms of accuracy and time- and cost-effectiveness. To determine the text pairs to align, a sample is selected according to document length similarity (in characters) and subsequently evaluated in terms of degree of noisiness/parallelism, alignment accuracy and content leverageability. The results of these analyses serve as the basis for the creation of an aligned bilingual corpus of academic course descriptions, which is then used to create a translation memory in TMX format.
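To make the length-based pair selection and TMX steps concrete, the following is a minimal Python sketch under stated assumptions: the file names, the 0.8 length-similarity threshold and the single hard-coded segment pair are illustrative, and the sketch stands in for whatever sentence aligner is actually chosen.

```python
# Sketch: select plausibly parallel document pairs by character-length
# similarity, then serialize aligned segments as a TMX 1.4 translation memory.
import xml.etree.ElementTree as ET

def length_similarity(text_a: str, text_b: str) -> float:
    """Ratio of the shorter character count to the longer one (1.0 = same length)."""
    la, lb = len(text_a), len(text_b)
    return min(la, lb) / max(la, lb) if max(la, lb) else 0.0

def write_tmx(pairs, path, src_lang="it", tgt_lang="en"):
    """Write (source, target) segment pairs as a TMX file."""
    tmx = ET.Element("tmx", {"version": "1.4"})
    ET.SubElement(tmx, "header", {
        "creationtool": "aligner-sketch", "creationtoolversion": "0.1",
        "segtype": "sentence", "o-tmf": "none", "adminlang": "en",
        "srclang": src_lang, "datatype": "plaintext"})
    body = ET.SubElement(tmx, "body")
    for src, tgt in pairs:
        tu = ET.SubElement(body, "tu")
        for lang, seg in ((src_lang, src), (tgt_lang, tgt)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = seg
    ET.ElementTree(tmx).write(path, encoding="utf-8", xml_declaration=True)

# Hypothetical course-description files; keep the pair only if the lengths
# are similar enough for the documents to be plausibly parallel.
it_doc = open("course_it.txt", encoding="utf-8").read()
en_doc = open("course_en.txt", encoding="utf-8").read()
if length_similarity(it_doc, en_doc) >= 0.8:
    aligned = [("Descrizione del corso.", "Course description.")]  # stand-in for real aligner output
    write_tmx(aligned, "courses.tmx")
```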
Abstract:
With the constant development of translation technologies and their importance for the labour market, computers and CAT tools have become indispensable instruments for the translator. Given the growing concern with optimising the translation process, translators need to build their own reliable reference sources, whether from the Web, from other translators, or from their own previous work. Translation memories can be acquired from a variety of sources. Their main advantage is that the same content never has to be translated a second time, which yields considerable time savings as well as terminological and phraseological consistency. One of the methods used to build translation memories is the alignment of parallel texts, so that the contents of those texts can be incorporated into the memories. This report set out to clarify some questions related to the two subjects in its title, addressing the advantages and disadvantages of document alignment for the creation of translation memories and the implications of their use for the client who commissions the translation, for the company that handles the project, and for the translator.
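As a toy illustration of the alignment step mentioned above, the sketch below implements a simplified length-based sentence aligner in the spirit of Gale and Church: dynamic programming over 1:1, 1:0 and 0:1 operations, with the character-length difference as the matching cost. The cost model and the skip penalty are simplifying assumptions; real aligners use probabilistic costs and also handle 2:1 and 1:2 merges.

```python
# Toy length-based sentence aligner (dynamic programming, 1:1/1:0/0:1 only).
def align(src_sents, tgt_sents, skip_penalty=50):
    n, m = len(src_sents), len(tgt_sents)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:  # 1:1 match, cheap when lengths are similar
                c = cost[i][j] + abs(len(src_sents[i]) - len(tgt_sents[j]))
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j, "match")
            if i < n:  # 1:0 — source sentence left unaligned
                c = cost[i][j] + skip_penalty
                if c < cost[i + 1][j]:
                    cost[i + 1][j], back[i + 1][j] = c, (i, j, "skip_src")
            if j < m:  # 0:1 — target sentence left unaligned
                c = cost[i][j] + skip_penalty
                if c < cost[i][j + 1]:
                    cost[i][j + 1], back[i][j + 1] = c, (i, j, "skip_tgt")
    pairs, i, j = [], n, m
    while back[i][j]:  # backtrace from the final cell, collecting 1:1 matches
        pi, pj, op = back[i][j]
        if op == "match":
            pairs.append((src_sents[pi], tgt_sents[pj]))
        i, j = pi, pj
    return list(reversed(pairs))

print(align(["Olá.", "Obrigado pela ajuda."], ["Hello.", "Thanks for the help."]))
```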
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper analyzes the growing adoption of translation tools by the contemporary translator working for markets such as the localization industry. The fast turnaround pace of translation of electronic texts ends up conditioning the employment of translators on their ability to use the resources provided by tools such as translation memory systems efficiently. These systems, as envisioned in their early conception, would allow users to increase productivity and, simultaneously, standardize their terminological production. In an attempt to go beyond predominantly descriptive approaches to these tools, the paper examines some of the theoretical assumptions underpinning the use of translation memories. From this perspective, the translator's involvement with the work in progress is analyzed, particularly when the translator is part of a larger process of production and distribution of information by electronic means and for diverse audiences. Finally, the consequences of employing these tools are considered, notably for the relationships between translator and translation and between translator and client, as well as for the scope of responsibility of the translator engaged in producing partially automated translations.
Abstract:
This dissertation is part of the Language Toolkit project, a collaboration between the School of Foreign Languages and Literature, Interpreting and Translation of the University of Bologna, Forlì campus, and the Chamber of Commerce of Forlì-Cesena. The project aims to create an exchange between translation students and companies that want to pursue a process of internationalization. The purpose of this dissertation is to demonstrate the benefits that translation systems can bring to businesses. In particular, it consists of the translation into English of documents supplied by the Italian company Technologica S.r.l. and the creation of linguistic resources that can be integrated into computer-assisted translation (CAT) software in order to optimize the translation process. The latter is treated as a priority over the actual translation products (the target texts), since the analysis conducted on the source texts highlighted that the company could streamline and optimize its English-language communication through open-source CAT tools such as OmegaT. The work consists of five chapters. The first introduces the Language Toolkit project, the company (Technologica S.r.l.) and its products. The second chapter offers some considerations on technical translation, its features and some misconceptions about it; the difference between technical and scientific translation is then clarified, and an overview is given of translation aids such as those used for computer-assisted translation, machine translation, termbases and translation memories. The third chapter contains the analysis and categorization of the texts commissioned by Technologica S.r.l. The fourth chapter describes the translation process, with particular attention to terminology extraction and the creation of a bilingual glossary based on a specialized corpus. The glossary was integrated into the OmegaT software to facilitate the translation process both for the present task and for future applications. The memory derived from the translation is a sort of hybrid resource between a translation memory and a glossary, which proved to be the most appropriate format given the specific nature of the texts to be translated. Finally, chapter five draws conclusions about the importance of language training within a company, the potential of translation aids, and the benefits they would bring to a company wishing to internationalize.
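As an illustration of the terminology-extraction step described in the fourth chapter, here is a minimal sketch of "weirdness"-style candidate extraction, ranking words by how much more frequent they are in a specialized corpus than in general language. The corpus file names are hypothetical, and a real pipeline would add part-of-speech filtering and multi-word term handling.

```python
# Rank single-word term candidates by domain-vs-general relative frequency.
from collections import Counter
import re

def tokens(text):
    """Lowercase word tokens, including Italian accented letters."""
    return re.findall(r"[a-zàèéìòù]+", text.lower())

def weirdness(domain_text, general_text, min_count=3):
    dom, gen = Counter(tokens(domain_text)), Counter(tokens(general_text))
    dom_total, gen_total = sum(dom.values()), sum(gen.values())
    scores = {}
    for word, count in dom.items():
        if count < min_count:
            continue
        dom_rel = count / dom_total
        gen_rel = (gen[word] + 1) / (gen_total + 1)  # add-one smoothing for unseen words
        scores[word] = dom_rel / gen_rel
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical file names: the company's documents vs. a general-language corpus.
domain = open("technologica_docs.txt", encoding="utf-8").read()
general = open("general_italian.txt", encoding="utf-8").read()
for term, score in weirdness(domain, general)[:20]:
    print(f"{term}\t{score:.1f}")  # top candidates for the bilingual glossary
```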
Abstract:
The aim of this dissertation is to provide an adequate translation from English into Italian of a section of the European Commission's website concerning an environmental policy tool whose aim is to reduce the EU's greenhouse gas emissions, the Emissions Trading System. The main reason behind this choice was the intention to combine a personal interest in the domain of sustainable development with the desire to delve deeper into the different aspects involved in the localisation process. I was also able to combine these two with my interest in the universe of the European Union. I therefore worked on the particular language of this supranational organisation and, for this reason, had the opportunity to experience a very stimulating work placement at the Directorate-General for Translation in Brussels. However, the choice of the text was personal and the translation is not intended for publication. The work is divided into six chapters. In the first chapter the text is contextualised within the framework of the EU and its legislation on multilingualism, which has consequences for the languages used by the drafters of official documents and by translators. The text originates from those documents, but it needs to be adapted to different receivers. The second chapter investigates the process of website localisation. The third chapter offers an analysis of the source text and of the prospective target text. The fourth chapter describes the resources created and used for the translation: a comparison is made between the resources of the translation service of the European Commission and those created specifically for this project, namely a translation memory, exploited through a CAT tool, and two corpora. The fifth chapter contains the actual translation, side by side with the source text, while the sixth provides a commentary on the translation strategies.
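To illustrate how a CAT tool exploits a translation memory of the kind described above, here is a minimal fuzzy-match lookup sketch. difflib's SequenceMatcher stands in for the similarity metrics real CAT tools use, and the TM entry and the 0.75 threshold are invented for the example.

```python
# For each new source segment, retrieve the stored TM entry with the highest
# fuzzy-match score, as a CAT tool would before offering a suggestion.
from difflib import SequenceMatcher

tm = {
    "The EU Emissions Trading System reduces greenhouse gas emissions.":
        "Il sistema di scambio di quote di emissione dell'UE riduce le emissioni di gas serra.",
}

def best_match(segment, memory, threshold=0.75):
    """Return (score, source, target) for the closest TM entry above threshold, else None."""
    scored = ((SequenceMatcher(None, segment, src).ratio(), src, tgt)
              for src, tgt in memory.items())
    best = max(scored, default=None)
    return best if best and best[0] >= threshold else None

hit = best_match("The EU Emissions Trading System cuts greenhouse gas emissions.", tm)
if hit:
    score, src, tgt = hit
    print(f"{score:.0%} match:\n  {src}\n  -> {tgt}")
```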
Abstract:
There has been significant debate about the value of screening for dementia and the need for early diagnosis. Options include gene testing, early risk assessment, screening, case finding, and review when a patient or carer identifies symptoms. This paper is not focused on these early approaches to identifying people with dementia. It focuses on the period when a patient or carer has recognised that there are some memory problems and is seeking assistance with a diagnosis or an explanation of the memory loss.
Abstract:
This paper focuses on an efficient user-level method for the deployment of application-specific extensions, using commodity operating systems and hardware. A sandboxing technique is described that supports multiple extensions within a shared virtual address space. Applications can register sandboxed code with the system, so that it may be executed in the context of any process. Such code may be used to implement generic routines and handlers for a class of applications, or system service extensions that complement the functionality of the core kernel. Using our approach, application-specific extensions can be written like conventional user-level code, utilizing libraries and system calls, with the advantage that they may be executed without the traditional costs of scheduling and context-switching between process-level protection domains. No special hardware support such as segmentation or tagged translation look-aside buffers (TLBs) is required. Instead, our "user-level sandboxing" mechanism requires only page-based virtual memory support, given that sandboxed extensions are either written by a trusted source or are guaranteed to be memory-safe (e.g., using type-safe languages). Using a fast method of upcalls, we show how our mechanism provides significant performance improvements over traditional methods of invoking user-level services. As an application of our approach, we have implemented a user-level network subsystem that avoids data copying via the kernel and, in many cases, yields far greater network throughput than kernel-level approaches.
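The performance argument in this abstract (avoiding scheduling and context-switching costs) can be illustrated, very loosely, with the toy comparison below: invoking a handler in-process versus paying a cross-process round trip per request. This is not the paper's upcall mechanism, only a sketch of the cost gap the design exploits; the handler and request count are invented.

```python
# Compare in-process handler dispatch with per-request cross-process round trips.
import time
from multiprocessing import Pipe, Process

def extension(x):   # stands in for a registered, sandboxed handler
    return x + 1

def server(conn):   # the same logic served from a separate protection domain
    while True:
        x = conn.recv()
        if x is None:
            break
        conn.send(x + 1)

if __name__ == "__main__":
    N = 10_000
    t0 = time.perf_counter()
    for i in range(N):
        extension(i)            # in-process dispatch: no scheduling, no copies
    in_proc = time.perf_counter() - t0

    parent, child = Pipe()
    p = Process(target=server, args=(child,))
    p.start()
    t0 = time.perf_counter()
    for i in range(N):
        parent.send(i)          # cross-domain round trip per request
        parent.recv()
    cross = time.perf_counter() - t0
    parent.send(None)
    p.join()
    print(f"in-process: {in_proc:.3f}s  cross-process: {cross:.3f}s")
```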
Abstract:
The proposed model, called the combinatorial and competitive spatio-temporal memory or CCSTM, provides an elegant solution to the general problem of having to store and recall spatio-temporal patterns in which states or sequences of states can recur in various contexts. For example, Fig. 1 shows two state sequences that share a common subsequence, C and D. The CCSTM assumes that any state has a distributed representation as a collection of features. Each feature has an associated competitive module (CM) containing K cells. On any given occurrence of a particular feature, A, exactly one of the cells in CM_A will be chosen to represent it. It is the particular set of cells active on the previous time step that determines which cells are chosen to represent instances of their associated features on the current time step. If we assume that typically S features are active in any state, then any state has K^S different neural representations. This huge space of possible neural representations of any state is what underlies the model's ability to store and recall numerous context-sensitive state sequences. The purpose of this paper is simply to describe this mechanism.
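A minimal sketch of the selection rule just described: each active feature's competitive module (CM) holds K cells, and the cell chosen to represent the feature is determined by the set of cells active on the previous time step. Hashing the previous cell set is an illustrative stand-in for the model's learned competitive dynamics.

```python
# Context-sensitive cell selection: one of K cells per feature, chosen as a
# function of the previous step's active cells.
import hashlib

K = 8  # cells per competitive module

def choose_cell(feature, prev_cells):
    """Pick one of K cells in CM_feature, conditioned on the previous active cells."""
    context = feature + "|" + ",".join(sorted(prev_cells))
    digest = hashlib.sha256(context.encode()).digest()
    return f"{feature}:{digest[0] % K}"

def step(active_features, prev_cells):
    """One time step: every active feature activates exactly one cell in its CM."""
    return {choose_cell(f, prev_cells) for f in active_features}

# The shared subsequence C, D receives different cell-level codes in each
# context, so the two sequences do not interfere when recalled.
for seq in (["A", "C", "D", "E"], ["B", "C", "D", "F"]):
    cells = set()
    for state in seq:
        cells = step({state}, cells)
        print(state, sorted(cells))
    print()
```

With S features active per state, the same rule yields the K^S distinct codes per state mentioned in the abstract.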
Abstract:
Memories are encoded in the brain through the unique configurations of vast neural networks. Every connection in these circuits can be modified. These lasting changes take place at the synapses through de novo protein synthesis and generate what are known as memory traces. Several lines of evidence indicate that, in certain forms of long-term synaptic plasticity, this synthesis occurs in the dendrites near the activated synapses rather than in the cell body. However, the mechanisms that regulate this protein translation remain unclear. The initiation phase of translation is a rate-limiting and highly regulated step which, according to many researchers, is the main target of translational control mechanisms in long-term synaptic plasticity. The present research project disproves this hypothesis in one form of synaptic plasticity, metabotropic glutamate receptor-dependent long-term depression (mGluR-LTD). Using electrophysiological recordings of cultured hippocampal neurons combined with pharmacological inhibitors, we show that translational control involves the elongation and termination steps rather than initiation. Moreover, using RNA knockdown strategies, we demonstrate that the mRNA-binding protein Staufen 2 plays a decisive role in mGluR-LTD induced in culture. Taken together, the results of this study support a model of local translational control that is independent of initiation.