5 results for English as an additional language
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The aim of this dissertation is to show, by means of a concrete case study, the power of contrastive analysis in predicting the errors a language learner will make. First, it describes what language transfer is and why it matters for second language acquisition. Second, it offers a brief account of the history and development of contrastive analysis. Third, the focus moves to an analysis of the errors typically made by language learners. Finally, the dissertation turns to the concrete case study of a Russian learner of English: after an analysis of the errors the student is likely to make, a recorded conversation is examined.
Abstract:
The translation of allusions has long posed a problem for translators, in a trend that has seen translation studies shift towards a more culture-oriented perspective. Dr. Ritva Leppihalme defines an “allusion” as a culture-bound element that is expected to convey a meaning beyond the mere words used and that can be translated accurately only with knowledge of both the source and target culture. Allusions in comedy, and more specifically allusive jokes, pose an additional challenge: failing to translate them satisfactorily can lead to unfunny and puzzling results that completely miss the original comedic value of the allusion. For the purposes of this dissertation, an experiment modelled on the one conducted by Dr. Ritva Leppihalme was carried out: a focus group of eight people from different socio-demographic groups was asked to discuss three comedic scenes, translated into Italian, each containing an allusive joke, from three American sitcoms: Community, The Office, and Superstore. The purpose of this research was to identify the most effective strategies, according to the average Italian viewer, for translating into Italian allusive jokes rooted in American culture and the English language. Participants were asked to state whether they understood the translated joke and, if they did, to rate how funny they found it, and then to discuss among themselves possible reasons for their responses and possible alternative solutions. The results seem to indicate that the best course of action involves choices that stray from a literal translation of the words used, replacing items that require a deeper knowledge of the source culture to be understood, and hence to be funny, with items more familiar to the target culture. The worst solutions seem to be those that focus on a literal translation of the words used without considering the cultural and situational context of the allusion.
Abstract:
This thesis proposes a translation from Persian into Italian and English of the ancient Persian epic Shahname, literally “The Book of Kings,” by Ferdosi, completed in the 11th century CE. The translation proposed, however, is not based on Ferdosi’s original, which is written entirely in verse, but on an edited, shorter, and simplified prose version by Mohamad Hosseini, first published in 2013. Nonetheless, Hosseini included in his version some of the verses from the original poem in order to show the value and beauty of Ferdosi’s writing. Many translations of Ferdosi’s book have been made into English, but only one into Italian, by Italo Pizzi, published in eight volumes, all in verse, in 1886. This thesis analyses and discusses the choices made in the two translations presented, into English and Italian. My project is not only to propose translations of Hosseini’s version, but also to introduce the reader to Persian culture, to the life of the most famous Iranian epic writer, Ferdosi, and to his masterpiece, the Shahname.
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards for how computational research should be conducted and published. From Euclid’s reasoning and Galileo’s experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists, which requires the complete and open exchange of data, procedures, and materials. The idea of “replication by other scientists” applied to computations is more commonly known as “reproducible research”. In this context, the journal “EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems” had the original idea of letting scientists submit, together with the article, the computational materials (software, data, etc.) that were used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform, independently of the operating system chosen, to confirm or invalidate it, and especially to allow its reuse to produce new results. This procedure is of little help, however, without a minimum of methodological support: raw data sets and software are difficult to exploit without the logic that guided their use or production. This led us to conclude that, in addition to the data sets and the software, one more element must be provided: the workflow that ties all of them together.
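As a purely illustrative sketch (not part of the thesis; the file names and processing steps below are hypothetical), a short Python script can show what such a workflow element might look like: it runs the computational steps of a paper in order and records a checksum of each artifact, so that the data, the software, and the logic connecting them travel together with the article.

```python
# Illustrative sketch only: a minimal workflow runner that makes the chain
# "raw data -> software -> results" explicit and re-runnable by reviewers.
# All script and file names are hypothetical placeholders.
import hashlib
import json
import subprocess
from pathlib import Path

STEPS = [
    # (command to run, artifact the step is expected to produce)
    (["python", "preprocess.py", "data/raw.csv", "data/clean.csv"], "data/clean.csv"),
    (["python", "simulate.py", "data/clean.csv", "results/metrics.json"], "results/metrics.json"),
    (["python", "make_figures.py", "results/metrics.json", "results/figure1.png"], "results/figure1.png"),
]

def checksum(path: str) -> str:
    """Record a SHA-256 hash so each artifact can be verified after a re-run."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def main() -> None:
    Path("results").mkdir(exist_ok=True)
    provenance = []
    for command, output in STEPS:
        subprocess.run(command, check=True)  # fail fast if any step breaks
        provenance.append({"command": command, "output": output,
                           "sha256": checksum(output)})
    # The provenance file is what another scientist compares against their own re-run.
    Path("results/provenance.json").write_text(json.dumps(provenance, indent=2))

if __name__ == "__main__":
    main()
```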
Abstract:
State-of-the-art NLP systems generally assume that the underlying models can be trained on vast datasets. However, especially in multilingual contexts, datasets are often scarce, so more research is needed in this area. This thesis investigates the benefits of introducing an additional training step when fine-tuning NLP models, named Intermediate Training, which can be exploited to augment the data used for the training phase. In the Intermediate Training step, models are trained on NLP tasks that are not strictly related to the target task, in order to verify whether they can leverage the knowledge learned from those tasks. Furthermore, to better analyse the synergies between different categories of NLP tasks, the experiments were also extended to Multi-Task Training, in which the model is trained on multiple tasks at the same time.
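As a purely illustrative sketch (not the thesis code; the encoder, tasks, dimensions, and data below are invented), the following PyTorch snippet shows the two-phase idea behind Intermediate Training: a shared encoder is first trained on an auxiliary task and its weights are then reused when fine-tuning on the scarce target task.

```python
# Illustrative sketch of Intermediate Training with a shared encoder.
# Everything here (model sizes, tasks, random data) is a stand-in for illustration.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stands in for a pretrained language model
intermediate_head = nn.Linear(64, 3)                     # head for a hypothetical auxiliary task (3 classes)
target_head = nn.Linear(64, 2)                           # head for the actual target task (2 classes)

def train(head: nn.Module, inputs: torch.Tensor, labels: torch.Tensor, epochs: int = 5) -> None:
    """Generic training loop reused for both phases; encoder weights are updated each time."""
    params = list(encoder.parameters()) + list(head.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(head(encoder(inputs)), labels)
        loss.backward()
        optimizer.step()

# Phase 1: Intermediate Training on an auxiliary task that is not the target task.
x_aux, y_aux = torch.randn(256, 128), torch.randint(0, 3, (256,))
train(intermediate_head, x_aux, y_aux)

# Phase 2: fine-tune the same encoder on the scarce target-task data,
# hoping it carries over knowledge acquired in phase 1.
x_tgt, y_tgt = torch.randn(32, 128), torch.randint(0, 2, (32,))
train(target_head, x_tgt, y_tgt)
```

In a Multi-Task Training variant of this sketch, both heads would instead be trained in the same loop, with the losses of the two tasks combined at each step.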