3 results for Jurisprudence - Law and language
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
J. M. Coetzee's Foe is not only a post-colonial novel but also a re-writing of a classic, and its main themes are language, authorship, power, and identity. Moreover, Foe is narrated by a woman, although it was written by a male, Nobel Prize-winning South African author. The aim of my thesis is to focus on the question of authorship and the role of language in Foe. Without any claim to being exhaustive, in the first section I will examine selected extracts from Coetzee's book in order to provide an analysis of the novel. These quotations will mainly be its metalinguistic passages and will be analysed in the “theory” sections of my work, drawing on literary theory and on previous studies of the novel. Among others, I will cover themes such as the relationship between speech and writing, the connection between writing, history, and memory, the role of silence and alternative ways of communicating, and the relationship between literary authority and truth. These arguments will form the foundation for my second section, in which I will attempt to shed light on the importance of the novel from a linguistic point of view, while always keeping an eye on the implications this has for authorship. Although it is less politically charged than Coetzee's previous works, Foe is above all a “journey of discovery” into the world of language and authorship. Indeed, it becomes a warning for anyone immersed in the ocean of language: while everyone naturally tends to trust speech and writing as the only media through which one can get closer to the truth, authority is never a synonym for reliability, and language is a system of communication behind which structures of power, misconceptions, lies, and treacherous tides easily hide.
Abstract:
The aim of this work is to develop a prototype of an e-learning environment that can foster Content and Language Integrated Learning (CLIL) for students enrolled in an aircraft maintenance training program, which allows them to obtain a license valid in all EU member states. Background research is conducted to retrace the evolution of the field of educational technology, analyzing different learning theories – behaviorism, cognitivism, and (socio-)constructivism – and reflecting on how technology and its use in educational contexts have changed over time. Particular attention is given to technologies that have been used and proved effective in Computer Assisted Language Learning (CALL). Based on the background research and on the students’ learning objectives, i.e. learning highly specialized content and aeronautical technical English, a bilingual approach is chosen, three main tools are identified – a hypertextbook, an exercise creation activity, and a discussion forum – and the learning management system Moodle is chosen as the delivery medium. The hypertextbook is based on the English-language technical textbook that students already use. In order to foster text comprehension, the hypertextbook is enriched with hyperlinks and tooltips. Hyperlinks redirect students to webpages containing additional information in both English and Italian, while tooltips show the Italian equivalents of English technical terms. The exercise creation activity and the discussion forum foster interaction and collaboration among students, in line with socio-constructivist principles. In the exercise creation activity, students collaboratively create a workbook, which allows them to analyze and master the contents of the hypertextbook in depth and, at the same time, to create a learning tool that can help them, as well as future students, to enhance learning. In the discussion forum, students can discuss their individual issues, whether related to the content, to English, or to the e-learning environment, helping one another and offering instructors suggestions on how to improve both the hypertextbook and the workbook based on their needs.
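The abstract describes the tooltip mechanism only in general terms. As a rough, hypothetical illustration (not taken from the thesis itself), the Python sketch below wraps English technical terms found in a page of the hypertextbook in HTML spans whose title attribute carries the Italian equivalent, which browsers render as a hover tooltip; the glossary entries and function names are assumptions made for the example.

import re

# Hypothetical English-Italian glossary entries (illustrative only,
# not taken from the thesis or its textbook).
GLOSSARY = {
    "landing gear": "carrello di atterraggio",
    "fuselage": "fusoliera",
    "rudder": "timone di direzione",
}

def add_tooltips(html_text: str, glossary: dict) -> str:
    """Wrap each glossary term in a <span> whose title attribute holds the
    Italian equivalent, so the browser shows it as a tooltip on hover."""
    for term, translation in glossary.items():
        pattern = re.compile(rf"\b{re.escape(term)}\b", flags=re.IGNORECASE)
        html_text = pattern.sub(
            lambda m, t=translation: f'<span class="term" title="{t}">{m.group(0)}</span>',
            html_text,
        )
    return html_text

if __name__ == "__main__":
    page = "The landing gear retracts into the fuselage after take-off."
    print(add_tooltips(page, GLOSSARY))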
Abstract:
Natural Language Processing (NLP) has seen tremendous improvements over the last few years. Transformer architectures have achieved impressive results in almost every NLP task, such as Text Classification, Machine Translation, and Language Generation. As time went by, transformers continued to improve thanks to larger corpora and bigger networks, reaching hundreds of billions of parameters. Training and deploying such large models has become prohibitively expensive, to the point that only large high-tech companies can afford to train them. Therefore, a great deal of research has been dedicated to reducing model size. In this thesis, we investigate the effects of Vocabulary Transfer and Knowledge Distillation for compressing large Language Models. The goal is to combine these two methodologies to further compress models without a significant loss of performance. In particular, we designed different combination strategies and conducted a series of experiments on different vertical domains (medical, legal, news) and downstream tasks (Text Classification and Named Entity Recognition). Four different methods involving Vocabulary Transfer (VIPI), with and without a Masked Language Modelling (MLM) step and with and without Knowledge Distillation, are compared against a baseline that assigns random vectors to new elements of the vocabulary. Results indicate that VIPI effectively transfers information from the original vocabulary and that MLM is beneficial. We also note that vocabulary transfer and knowledge distillation are orthogonal to one another and may be applied jointly. Applying knowledge distillation first and vocabulary transfer afterwards is recommended. Finally, model performance under vocabulary transfer does not always follow a consistent trend as the vocabulary size is reduced. Hence, the vocabulary size should be selected empirically by evaluating on the downstream task, similarly to hyperparameter tuning.
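The abstract summarizes the approach only at a high level. The Python/PyTorch sketch below illustrates, under stated assumptions, the two building blocks it combines: an embedding initialization that contrasts the random-vector baseline with a simple sub-token averaging transfer rule (used here merely as a stand-in, not as the actual VIPI procedure), and a standard soft-target knowledge-distillation loss. All function names, the tokenizer interface, and hyperparameter values are assumptions for illustration.

import torch
import torch.nn.functional as F

def init_new_embeddings(old_embeddings, old_vocab, new_vocab, old_tokenizer, transfer=True):
    """Build an embedding matrix for a reduced vocabulary.

    transfer=False reproduces the random baseline mentioned in the abstract;
    transfer=True averages the old embeddings of the sub-tokens that the old
    tokenizer produces for a new token (a simple transfer heuristic, not
    necessarily the VIPI algorithm). Assumes a tokenizer with a .tokenize()
    method, as in HuggingFace-style tokenizers."""
    dim = old_embeddings.size(1)
    new_matrix = torch.empty(len(new_vocab), dim).normal_(std=0.02)  # random init
    for token, new_id in new_vocab.items():
        if token in old_vocab:
            # Token exists in both vocabularies: copy its old embedding.
            new_matrix[new_id] = old_embeddings[old_vocab[token]]
        elif transfer:
            # New token: average the embeddings of its old sub-tokens.
            pieces = old_tokenizer.tokenize(token)
            ids = [old_vocab[p] for p in pieces if p in old_vocab]
            if ids:
                new_matrix[new_id] = old_embeddings[ids].mean(dim=0)
    return new_matrix

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Combine hard-label cross-entropy with soft-target KL divergence."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd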