819 results for terminologia finanziaria, variazione linguistica, analisi corpus-based (financial terminology, linguistic variation, corpus-based analysis)


Relevance:

100.00%

Publisher:

Abstract:

The present essay aims at observing possible tendencies of normalization by the translator Irene Matthews in the English translation of As mulheres de Tijucopapo, by Marilene Felinto. The methodology employed is that of corpus-based translation studies (as proposed by BAKER, 1993, 1995, 1996, 2000; SCOTT's study of normalization, 1998; and CAMARGO's research, 2005, 2007) and of corpus linguistics (BERBER SARDINHA, 2003, 2004). The investigation was carried out by means of a combination of semi-manual and computerized analyses using the software WordSmith Tools. Based on Scott (1998), we analyzed the translation of five words considered to be preferred by the author, as well as their co-text, in relation to three features of normalization. The final results obtained in this study show that the translator Irene Matthews tends to use strategies that may be identified as features of normalization.
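
The frequency and co-text analysis described above was carried out with WordSmith Tools; purely as a rough illustration of the same kind of query, the Python sketch below counts occurrences of chosen words and prints simple concordance (KWIC) lines. The file names, word list and window size are placeholders, not the study's actual parameters.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokenizer (a simplification of WordSmith's tokenization)."""
    return re.findall(r"[a-záéíóúâêôãõç'-]+", text.lower())

def kwic(tokens, node, window=5):
    """Yield simple keyword-in-context lines for every occurrence of `node`."""
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            yield f"{left:>40}  [{tok}]  {right}"

# Hypothetical input: the source text and its translation read from plain-text files.
source_tokens = tokenize(open("mulheres_tijucopapo_pt.txt", encoding="utf-8").read())
target_tokens = tokenize(open("mulheres_tijucopapo_en.txt", encoding="utf-8").read())

# Frequency of five illustrative (not the study's actual) author-preferred words.
preferred = ["mãe", "medo", "coração", "chuva", "caminho"]
freq = Counter(source_tokens)
for word in preferred:
    print(word, freq[word])

# Concordance lines for one of them in the source text.
for line in kwic(source_tokens, "medo"):
    print(line)
```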

Relevance:

100.00%

Publisher:

Abstract:

This paper aims at observing a particular case of an author's and self-translator's style in the pair of works Viva o Povo Brasileiro and An Invincible Memory. Our investigation takes as its theoretical starting point Corpus-Based Translation Studies (Baker, 1993, 1995, 1996, 2000; Camargo, 2005, 2007) and works on cultural domains (Nida, 1945; Aubert, 1981, 2006). The results showed that a great part of the cultural markers may be classified under the material, social, and ideological cultural domains, which reflects the context of the source text. It was also possible to observe that normalization features tend to reveal the conscious or unconscious use of fluency strategies by the self-translator, making the translated text easier to read.

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

The construction and use of multimedia corpora has long been advocated in the literature as one of the expected future application fields of Corpus Linguistics. This research project represents a pioneering experience aimed at applying a data-driven methodology to the study of AVT, similarly to what has been done in the last few decades in the macro-field of Translation Studies. The research was based on the experience of Forlixt 1, the Forlì Corpus of Screen Translation, developed at the University of Bologna's Department of Interdisciplinary Studies in Translation, Languages and Culture. Indeed, in order to quantify strategies of linguistic transfer in an AV product, we need to take into consideration not only the linguistic aspect of such a product but all the meaning-making resources deployed in the filmic text. Since one major benefit of Forlixt 1 is the combination of audiovisual and textual data, this corpus allows the user to access primary data for scientific investigation, and thus no longer to rely on pre-processed material such as traditional annotated transcriptions.

Based on this rationale, the first chapter of the thesis sets out to illustrate the state of the art of research in the disciplinary fields involved. The primary objective was to underline the main repercussions on multimedia texts resulting from the interaction of a double support, audio and video, and, accordingly, on the procedures, means and methods adopted in their translation. By drawing on previous research in semiotics and film studies, the relevant codes at work in the visual and acoustic channels were outlined. Subsequently, we concentrated on the analysis of the verbal component and on the peculiar characteristics of filmic orality as opposed to spontaneous dialogic production. In the second part, an overview of the main AVT modalities was presented (dubbing, voice-over, interlinguistic and intralinguistic subtitling, audio description, etc.) in order to define the different technologies, processes and professional qualifications that this umbrella term presently includes. The second chapter focuses diachronically on the contribution of various theories to the application of Corpus Linguistics' methods and tools to the field of Translation Studies (i.e. Descriptive Translation Studies, Polysystem Theory). In particular, we discussed how the use of corpora can help reduce the gap existing between qualitative and quantitative approaches. Subsequently, we reviewed the tools traditionally employed by Corpus Linguistics for the construction of traditional "written language" corpora, to assess whether and how they can be adapted to meet the needs of multimedia corpora. In particular, we reviewed existing speech and spoken corpora, as well as multimedia corpora specifically designed to investigate translation.

The third chapter reviews the main steps in the development of Forlixt 1, from a technical (IT design principles, data query functions) and methodological point of view, laying down extensive scientific foundations for the annotation methods adopted, which presently encompass categories of a pragmatic, sociolinguistic, linguacultural and semiotic nature. Finally, we described the main query tools (free search, guided search, advanced search and combined search) and the main intended uses of the database from a pedagogical perspective.

The fourth chapter lists the specific compilation criteria adopted, as well as statistics for the two sub-corpora, presenting data broken down by language pair (French-Italian and German-Italian) and genre (cinema comedies, television soap operas and crime series). Next, we concentrated on the discussion of the results obtained from the analysis of summary tables reporting the frequency of the categories applied to the French-Italian sub-corpus. Detailed observation of the distribution of categories identified in the original and dubbed corpus allowed us to empirically confirm some of the theories put forward in the literature, notably concerning the nature of the filmic text, the dubbing process and the features of Italian dubbed language. This was possible by looking into some of the most problematic aspects, such as the rendering of sociolinguistic variation. The corpus equally allowed us to consider so far neglected aspects, such as pragmatic, prosodic, kinetic, facial and semiotic elements, and their combination. At the end of this first exploration, some specific observations concerning possible macrotranslation trends were made for each type of sub-genre considered (cinematic and TV genres).

On the grounds of this first quantitative investigation, the fifth chapter set out to examine the data further by applying ad hoc models of analysis. Given the virtually infinite number of combinations of the categories adopted, and of the latter with searchable textual units, three qualitative and quantitative methods were designed, each of which concentrated on a particular translation dimension of the filmic text. The first was the cultural dimension, which focused on the rendering of selected cultural references and on the investigation of recurrent translation choices and strategies, justified on the basis of the occurrence of specific clusters of categories. The second analysis was conducted on the linguistic dimension, exploring the occurrence of phrasal verbs in the Italian dubbed corpus and ascertaining the influence of possible semiotic traits, such as gestures and facial expressions, on the adoption of related translation strategies. Finally, the main aim of the third study was to verify whether, under which circumstances, and through which modality graphic and iconic elements were translated into Italian from an original corpus of German and French films. After reviewing the main translation techniques at work, an exhaustive account of the possible causes of their non-translation was equally provided.

By way of conclusion, the discussion of the results obtained from the distribution of annotation categories across the French-Italian corpus, as well as the application of specific models of analysis, allowed us to underline possible advantages and drawbacks of adopting a corpus-based approach to AVT studies. Even though possible updates and improvements were proposed to help solve some of the problems identified, it is argued that the added value of Forlixt 1 lies ultimately in having created a valuable instrument that allows researchers to carry out empirically sound contrastive studies which may usefully be replicated on different language pairs and several types of multimedia texts. Furthermore, multimedia corpora can also play a crucial role in L2 and translation teaching, two areas in which their use still lacks systematic investigation.
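
The frequency tables mentioned in the fourth chapter are produced by Forlixt 1's own query tools; purely as an illustration of the kind of tabulation involved, the following pandas sketch cross-tabulates hypothetical annotation records by category and language pair. The column names and category labels are invented for the example and do not reflect Forlixt 1's actual schema.

```python
import pandas as pd

# Hypothetical export of annotated segments: one row per annotated unit.
records = pd.DataFrame([
    {"pair": "FR-IT", "genre": "comedy", "category": "cultural_reference"},
    {"pair": "FR-IT", "genre": "comedy", "category": "sociolinguistic_variation"},
    {"pair": "FR-IT", "genre": "crime",  "category": "gesture"},
    {"pair": "DE-IT", "genre": "soap",   "category": "cultural_reference"},
    {"pair": "DE-IT", "genre": "soap",   "category": "sociolinguistic_variation"},
])

# Absolute frequency of each category, broken down by language pair.
print(pd.crosstab(records["category"], records["pair"]))

# Relative frequencies within each pair, for comparison across sub-corpora of different sizes.
print(pd.crosstab(records["category"], records["pair"], normalize="columns").round(2))
```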

Relevance:

100.00%

Publisher:

Abstract:

The aim of this thesis is to apply multilevel regression models in the context of household surveys. The hierarchical structure in this type of data is characterized by many small groups. In recent years, comparative and multilevel analyses in the field of perceived health have grown in number. The purpose of this thesis is to develop a multilevel analysis with three levels of hierarchy for the Physical Component Summary outcome in order to: evaluate the magnitude of the within- and between-group variance at each level (individual, household and municipality); explore which covariates affect perceived physical health at each level; compare model-based and design-based approaches in order to establish the informativeness of the sampling design; and estimate a quantile regression for hierarchical data. The target population is Italian residents aged 18 years and older. Our study shows a high degree of homogeneity among level-1 units belonging to the same group, with an intraclass correlation of 27% in a level-2 null model. Almost all of the variance is explained by level-1 covariates. In fact, in our model the explanatory variables with the greatest impact on the outcome are disability, inability to work, age and chronic diseases (18 pathologies). An additional analysis was performed using a novel procedure, the Linear Quantile Mixed Model, applied here as a multilevel linear quantile regression. This gives us the possibility to describe the conditional distribution of the response more generally, through the estimation of its quantiles, while accounting for the dependence among the observations. This represented a great advantage of our models with respect to classic multilevel regression. The median regression with random effects proved to be more efficient than the mean regression in representing the central tendency of the outcome. A more detailed analysis of the conditional distribution of the response at other quantiles highlighted a differential effect of some covariates along the distribution.
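
As a rough illustration of the variance decomposition behind the 27% intraclass correlation reported above, the sketch below fits a two-level null (intercept-only) mixed model with statsmodels and computes the ICC as the between-group share of the total variance. The input file, column names, and the restriction to two levels are assumptions made for the example; the thesis itself works with a three-level structure (individual, household, municipality).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per respondent, with the Physical Component
# Summary (pcs) score and a household identifier as the grouping factor.
df = pd.read_csv("household_survey.csv")   # assumed columns: pcs, household_id

# Two-level null model: pcs ~ 1 with a random intercept per household.
null_model = smf.mixedlm("pcs ~ 1", data=df, groups=df["household_id"])
result = null_model.fit()

between_var = float(result.cov_re.iloc[0, 0])   # variance of the random intercepts
within_var = result.scale                       # residual (level-1) variance

# Intraclass correlation: share of total variance lying between households.
icc = between_var / (between_var + within_var)
print(f"ICC = {icc:.2%}")
```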

Relevance:

100.00%

Publisher:

Abstract:

Contacts between languages have always led to mutual influence. Today, the position of authority of the English language affects Italian in many ways, especially in the scientific and technical fields. When new studies conceived in the English-speaking world reach the Italian public, we are faced not only with the translation of texts, but most importantly with the rendition of theoretical constructs that do not always have a suitable rendering in the target language. That is why we often find anglicisms in Italian texts. This work aims to show their frequency in a specific field, underlining how and when they are used, and when they are preferred to the corresponding Italian word. This dissertation looks at a sample of essays from the specialised magazine "Lavoro Sociale", published by Edizioni Centro Studi Erickson, searching for borrowings from English and discussing their use in order to formulate hypotheses on the reasons for this phenomenon, against the wider background of translation studies and research on translation universals. What I am most interested in is understanding the similarities and differences in the use of anglicisms by authors of Italian texts and by translators from English into Italian, so that I can identify the main dynamics and tendencies. The paper has four parts. Chapter 1 briefly explains the theoretical background of translation studies and introduces and discusses the notion of translation universals. After that, the research methodology and the theoretical background on linguistic borrowings (especially anglicisms) in Italian are summarized. Chapter 2 presents the study, explaining the organisation of the material, the methodology used and the object of interest. Chapter 3 is the core of the dissertation, as it contains the qualitative and quantitative data taken from the texts and the examination of the dynamics of the use of anglicisms. Finally, Chapter 4 compares the conclusions drawn from the previous chapter with the opinions of authors, translators and proof-readers, whom I asked to answer a questionnaire written specifically to investigate the mechanisms and choices behind their use of anglicisms.
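
As a toy illustration of the kind of quantitative step such a study involves, the sketch below counts candidate anglicisms in an Italian text by matching tokens against a small hand-made wordlist. Both the wordlist and the input file are placeholders; the dissertation's actual identification of borrowings is carried out manually on the "Lavoro Sociale" sample.

```python
import re
from collections import Counter

# Hand-made, purely illustrative list of anglicisms common in Italian social-work prose.
ANGLICISMS = {"welfare", "counseling", "empowerment", "caregiver", "burnout", "team"}

def tokenize(text):
    return re.findall(r"[a-zàèéìòù]+", text.lower())

tokens = tokenize(open("lavoro_sociale_sample.txt", encoding="utf-8").read())
hits = Counter(t for t in tokens if t in ANGLICISMS)

total = len(tokens)
for word, count in hits.most_common():
    print(f"{word:12s} {count:4d}  ({count / total * 1000:.2f} per 1,000 tokens)")
```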

Relevance:

100.00%

Publisher:

Abstract:

The aim of the present research, a catalogue and study of the grammars of Italian intended for Spanish speakers in the eighteenth and nineteenth centuries, falls within the grammaticographical macro-sector of linguistic historiography, in which the study of grammars of languages addressed to native speakers and to foreign speakers, with the resulting crossings and transfers between grammatical traditions, is of significant interest, as shown by: (i) the doctoral theses defended in the last fifteen years; (ii) the research projects directed and coordinated by prestigious scholars in the field; (iii) the conferences organised to highlight and share the main developments in grammaticographical studies; and (iv) the publications arising from the three previous points. The study has two central parts: the first (chapters 2 and 3) is the catalogue and study of the nineteen grammars that make up the corpus, on the basis of eight descriptive areas (1. cataloguing information, 2. author, 3. publisher, 4. hyperstructure, 5. peritextual, grammatical and didactic elements, 6. variety of texts and their didactic sequence, 7. characterisation, sources and influences, and 8. location); the second (chapter 4) is an overall grammaticographical study of the most relevant data from the descriptive areas used in the first two parts. In this way, we provide an overall picture of (i) the chronology of the works and of their editions and reprints; (ii) the nationality, profession, religious condition, etc. of the authors; (iii) the geography of editions and publishers; (iv) the hyperstructural description of the works; (v) the structure of the peritextual elements; (vi) the grammatical parts and the elements that compose them; (vii) the verb: definitions and verbal paradigm; (viii) the didactic elements; (ix) the lines of grammatical description; and (x) the location of the grammars in Spanish libraries.

Relevance:

100.00%

Publisher:

Abstract:

The present dissertation aims at simulating the construction of lexicographic layouts for an Italian combinatory dictionary based on real linguistic data, extracted from corpora by using computational methods. This work is based on the assumption that the intuition of the native speaker, or of the lexicographer who manually extracts and classifies all the relevant data, is not adequate to provide sufficient information on the meaning and use of words. Therefore, a study of the real use of language is required, and this is particularly true for dictionaries that record the combinatory behaviour of words, where the task of the lexicographer is to identify the typical combinations in which a word occurs. This study is conducted in the framework of the CombiNet project, aimed at studying Italian word combinations and at building an online, corpus-based combinatory lexicographic resource for the Italian language. This work is divided into three chapters. Chapter 1 describes the criteria considered for the classification of word combinations according to the work of Ježek (2011). Chapter 1 also contains a brief comparison between the most important Italian combinatory dictionaries and the BBI Dictionary of Word Combinations, in order to describe how word combinations are treated in these lexicographic resources. Chapter 2 describes the main computational methods used for the extraction of word combinations from corpora, taking into account the advantages and disadvantages of each. Chapter 3 mainly focuses on the practical work carried out in the framework of the CombiNet project, with reference to the tools and resources used (EXTra, LexIt and the "La Repubblica" corpus). Finally, the data extracted and the lexicographic layout of the lemmas to be included in the combinatory dictionary are discussed, namely for the words "acqua" (water), "braccio" (arm) and "colpo" (blow, shot, stroke).
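
The extraction tools actually used in the project are EXTra and LexIt; as a generic, hedged illustration of corpus-based combination extraction, the following sketch uses NLTK's collocation finder to rank bigrams containing a target lemma by pointwise mutual information. The input file, node word and frequency threshold are placeholders.

```python
import re
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Hypothetical plain-text slice of an Italian corpus (e.g. newspaper text).
text = open("corpus_it.txt", encoding="utf-8").read().lower()
tokens = re.findall(r"[a-zàèéìòù]+", text)

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(5)                      # ignore very rare pairs

# Keep only bigrams that contain the node word we are interested in.
finder.apply_ngram_filter(lambda w1, w2: "acqua" not in (w1, w2))

# Candidate combinations for "acqua", ranked by pointwise mutual information.
for bigram in finder.nbest(measures.pmi, 20):
    print(" ".join(bigram))
```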

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a corpus-based analysis of the humanizing metaphor and argues that constitutive metaphor in science and technology may be highly metaphorical and active. The study, grounded in Lakoff's Theory of Metaphor and in Langacker's relational networks, consists of two phases: firstly, Earth Science metaphorical terms were extracted from databases and dictionaries and then contextualized, by means of the "Wordsmith" tool, in a digitized corpus created to establish their productivity. Secondly, the terms were classified to disclose the main conceptual metaphors underlying them; then, the mappings and the relational networks of the metaphor were described. The results confirm the systematicity and productivity of the metaphor in this field, show evidence that the metaphoricity of scientific terms is gradable, and support the claim that Earth Science metaphors are created not only in terms of concrete salient properties and attributes, but also on the basis of abstract anthropocentric projections.

Relevance:

100.00%

Publisher:

Abstract:

This paper surveys some of the fundamental problems in natural language (NL) understanding (syntax, semantics, pragmatics, and discourse) and the current approaches to solving them. Some recent developments in NL processing include increased emphasis on corpus-based rather than example- or intuition-based work, attempts to measure the coverage and effectiveness of NL systems, dealing with discourse and dialogue phenomena, and attempts to use both analytic and stochastic knowledge. Critical areas for the future include grammars that are appropriate to processing large amounts of real language; automatic (or at least semi-automatic) methods for deriving models of syntax, semantics, and pragmatics; self-adapting systems; and integration with speech processing. Of particular importance are techniques that can be tuned to such requirements as full versus partial understanding and spoken language versus text. Portability (the ease with which one can configure an NL system for a particular application) is one of the largest barriers to application of this technology.

Relevance:

100.00%

Publisher:

Abstract:

The field of natural language processing (NLP) has seen a dramatic shift in both research direction and methodology in the past several years. In the past, most work in computational linguistics tended to focus on purely symbolic methods. Recently, more and more work is shifting toward hybrid methods that combine new empirical corpus-based methods, including the use of probabilistic and information-theoretic techniques, with traditional symbolic methods. This work is made possible by the recent availability of linguistic databases that add rich linguistic annotation to corpora of natural language text. Already, these methods have led to a dramatic improvement in the performance of a variety of NLP systems with similar improvement likely in the coming years. This paper focuses on these trends, surveying in particular three areas of recent progress: part-of-speech tagging, stochastic parsing, and lexical semantics.
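
Part-of-speech tagging, the first of the three areas surveyed above, is easy to demonstrate with off-the-shelf tools; the sketch below uses NLTK's pretrained tagger purely as an illustration. The tagger and the example sentence are illustrative choices, not material from the paper.

```python
import nltk

# One-time downloads of the tokenizer and tagger models.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Corpus-based methods have improved part-of-speech tagging dramatically."
tokens = nltk.word_tokenize(sentence)

# Each token is paired with a Penn Treebank tag, e.g. ('methods', 'NNS').
for token, tag in nltk.pos_tag(tokens):
    print(f"{token:15s} {tag}")
```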

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the automatic process of building a dependency-annotated corpus based on Ancora constituent structures. The Ancora corpus already has a dependency-structure information layer, but the new annotated data adopts a purely syntactic orientation and thus offers a new resource to the linguistic research community. The paper details the process of re-annotating the corpus, the linguistic criteria used and the results obtained.
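
The conversion criteria applied to Ancora are the paper's own; as a generic, simplified illustration of how constituent trees can be turned into dependencies, the sketch below percolates heads up a toy tree with a hand-made head-rule table and attaches the remaining children to the head. The tree, labels and rules are invented for the example.

```python
# A constituent node is (label, children); a leaf is just a token string.
TOY_TREE = ("S",
            [("NP", ["El", "niño"]),
             ("VP", [("V", ["lee"]), ("NP", ["un", "libro"])])])

# Toy head rules: which child label supplies the head of each phrase type.
HEAD_RULES = {"S": "VP", "VP": "V", "NP": None, "V": None}  # None = fall back to rightmost child

def head_token(node):
    """Return the lexical head of a node by percolating heads downwards."""
    if isinstance(node, str):
        return node
    label, children = node
    wanted = HEAD_RULES.get(label)
    for child in children:
        if isinstance(child, tuple) and child[0] == wanted:
            return head_token(child)
    return head_token(children[-1])          # fallback: rightmost child

def dependencies(node, deps=None):
    """Collect (dependent, head) pairs: non-head children depend on the node's head."""
    if deps is None:
        deps = []
    if isinstance(node, str):
        return deps
    head = head_token(node)
    _, children = node
    for child in children:
        child_head = head_token(child)
        if child_head != head:
            deps.append((child_head, head))
        dependencies(child, deps)
    return deps

print(dependencies(TOY_TREE))
# -> [('niño', 'lee'), ('El', 'niño'), ('libro', 'lee'), ('un', 'libro')]
```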

Relevance:

100.00%

Publisher:

Abstract:

The study of neology is inseparable from the study of language change and, therefore, from diachrony. Our aim here is to describe the process of semantic change undergone by the verb esmar, the inherited form of Latin *adaestimare and the counterpart of the learned form estimar. This research draws on textual corpora and on other materials excerpted manually. On this material, we have applied an analysis of subjectification and of the inferences proposed by the Invited Inferencing Theory of Semantic Change (IITSC).

Relevance:

100.00%

Publisher:

Abstract:

The goal of this study is to determine if various measures of contraction rate are regionally patterned in written Standard American English. In order to answer this question, this study employs a corpus-based approach to data collection and a statistical approach to data analysis. Based on a spatial autocorrelation analysis of the values of eleven measures of contraction across a 25 million word corpus of letters to the editor representing the language of 200 cities from across the contiguous United States, two primary regional patterns were identified: easterners tend to produce relatively few standard contractions (not contraction, verb contraction) compared to westerners, and northeasterners tend to produce relatively few non-standard contractions (to contraction, non-standard not contraction) compared to southeasterners. These findings demonstrate that regional linguistic variation exists in written Standard American English and that regional linguistic variation is more common than is generally assumed.
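
The spatial autocorrelation analysis referred to above can be reproduced in outline with the PySAL stack; the sketch below computes Moran's I for one hypothetical contraction-rate measure over city coordinates. The file name, column names and the choice of a k-nearest-neighbour weights matrix are assumptions made for the example, not the study's actual settings.

```python
import numpy as np
import pandas as pd
from libpysal.weights import KNN
from esda.moran import Moran

# Hypothetical city-level table: one row per city with longitude, latitude and a
# contraction rate (e.g. "n't" forms per 1,000 negation contexts).
cities = pd.read_csv("city_contraction_rates.csv")   # assumed columns: lon, lat, not_contraction_rate

coords = np.column_stack([cities["lon"], cities["lat"]])
w = KNN.from_array(coords, k=5)        # spatial weights: 5 nearest neighbouring cities
w.transform = "r"                      # row-standardise the weights

mi = Moran(cities["not_contraction_rate"].to_numpy(), w, permutations=999)
print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.4f}")
```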

Relevance:

100.00%

Publisher:

Abstract:

The present thesis is located within the framework of descriptive translation studies and critical discourse analysis. Modern translation studies have increasingly taken into account the complexities of power relations and ideological management involved in the production of translations. Paradoxically, persuasive political discourse has not been much touched upon, except for studies following functional (e.g. Schäffner 2002) or systemic-linguistic approaches (e.g. Calzada Pérez 2001). By taking 11 English translations of Hitler’s Mein Kampf as prime examples, the thesis aims to contribute to a better understanding of the translation of politically sensitive texts. Actors involved in political discourse are usually more concerned with the emotional appeal of their message than they are with its factual content. When such political discourse becomes the locus of translation, it may equally be crafted rhetorically, being used as a tool to persuade. It is thus the purpose of the thesis to describe subtle ‘persuasion strategies’ in institutionally translated political discourse. The subject of the analysis is an illustrative corpus of four full-text translations, two abridgements, and five extract translations of Mein Kampf. Methodologically, the thesis pursues a top-down approach. It begins by delineating sociocultural and situative-agentive conditions as causal factors impinging on the individual translations. Such interactive and interpersonal factors determined textual choices. The overall textual analysis consists of an interrelated corpus-driven and corpus-based approach. It demonstrates how corpus software can be fruitfully harnessed to discern ‘ideological significations’ in the translated texts. Altogether, the thesis investigates how translational decision-makers attempted to position the source text author and his narrative in line with overall rhetorical purposes.
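
Corpus software of the kind mentioned above typically surfaces 'ideological significations' through keyword (keyness) analysis; as a hedged illustration of that generic technique, not of the thesis's actual procedure, the sketch below scores words in a target text against a reference corpus with the log-likelihood (G²) statistic. The file names are placeholders.

```python
import math
import re
from collections import Counter

def word_counts(path):
    text = open(path, encoding="utf-8").read().lower()
    return Counter(re.findall(r"[a-z']+", text))

def log_likelihood(a, b, n1, n2):
    """Dunning's G2 for a word occurring a times in corpus 1 (size n1) and b times in corpus 2 (size n2)."""
    e1 = n1 * (a + b) / (n1 + n2)
    e2 = n2 * (a + b) / (n1 + n2)
    g2 = 0.0
    if a:
        g2 += a * math.log(a / e1)
    if b:
        g2 += b * math.log(b / e2)
    return 2 * g2

target = word_counts("translation_target.txt")      # e.g. one translation of the source text
reference = word_counts("reference_corpus.txt")     # e.g. a general English reference corpus
n_target, n_ref = sum(target.values()), sum(reference.values())

# Rank the target text's vocabulary by keyness against the reference corpus.
keywords = sorted(
    ((w, log_likelihood(target[w], reference.get(w, 0), n_target, n_ref)) for w in target),
    key=lambda item: item[1],
    reverse=True,
)
for word, g2 in keywords[:25]:
    print(f"{word:15s} {g2:8.1f}")
```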