9 results for Language in Science
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Through an analysis of the American TV show Game of Thrones, this dissertation will focus on the linguistic issues concerning the adaptation from the books to television, the power of language over the audience, and the creation of two languages, with all the linguistic and cultural implications related to this phenomenon.
Abstract:
J. M. Coetzee's Foe is not only a post-colonial novel but also a rewriting of a classic, and its main themes are language, authorship, power and identity. Moreover, Foe is narrated by a woman, while written by a male, Nobel Prize-winning South African author. The aim of my thesis is to focus on the question of authorship and the role of language in Foe. Without any claim to be exhaustive, in the first section I will examine selected extracts of Coetzee's book in order to provide an analysis of the novel. These quotations will mainly be drawn from its metalinguistic passages and will be analysed in the “theory” sections of my work, relying on literary theory and on previous studies of the novel. Among other themes, I will cover the relationship between speech and writing, the connection between writing, history and memory, the role of silence and alternative ways of communicating, and the relationship between literary authority and truth. These arguments will be the foundation for my second section, in which I will attempt to shed light on the importance of the novel from a linguistic point of view, while always keeping an eye on the implications this has for authorship. While it is true that Foe is less politically charged than Coetzee's previous works, it is above all a “journey of discovery” into the world of language and authorship. In fact, it becomes a warning for anyone immersed in the ocean of language: while everyone naturally tends to trust speech and writing as the only means through which one can get closer to the truth, authority is never a synonym for reliability, and language is a system of communication behind which structures of power, misconceptions, lies, and treacherous tides easily hide.
Abstract:
Open innovation (OI) is a complex concept, with many facets and dimensions of analysis. This work adopts a micro-foundational approach (Abell et al., 2008) in order to study the individual in relation to this construct. Specifically, the objective is to study the behaviours that individuals adopt in order to pursue certain "social outcomes" (Open Innovation practices implemented at the organisational level). The study of behaviours is complemented by the proposal of a set of KPIs designed to measure them dynamically. A fundamental driver is the focus on the scientific context: specifically, the collaborative behaviours enacted by scientists along the innovation process are studied (Beck et al., 2022). The study is based on a meta-analysis and on ethnographic interviews. The analysis was conducted on Scopus and Scholar, where a broad set of keywords was entered (e.g. ”Open innovation & Measures/scales”, "Open innovation in Science (OIS)", “University-Industry collaboration”, …). Relevant articles proposing scales for measuring different constructs related to OI were extracted (Antons et al., 2017; Boardman and Corley, 2008). The items were extracted from the scales and converted into behaviours relating to the practice of OIS, based on the framework proposed by Beck et al. (2022). 48 unique behaviours were obtained from 8 different scales; they were clustered in order to obtain 10 clusters of homogeneous behaviours. The clustering was carried out starting from a similarity matrix created by 5 experts and processed with the Ucinet software. The resulting clusters were the basis for generating a set of 24 KPIs, derived from the behaviours and distributed across the clusters. A further 5 indicators were defined on the basis of interviews with 13 participants, including researchers and professors; these last KPIs derive from the values that drive researchers, which emerged as insights.
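As a purely illustrative sketch of the clustering step described above (the thesis itself used the Ucinet software on a 48-behaviour similarity matrix built by 5 experts), grouping behaviours from an expert similarity matrix could be reproduced with standard hierarchical clustering; the behaviour labels, similarity values and cluster count below are hypothetical.

```python
# Illustrative only: a hypothetical expert similarity matrix over a handful of
# collaborative behaviours, grouped by average-linkage hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

behaviours = ["share data", "co-author papers", "joint patenting", "attend industry events"]

# Symmetric similarity matrix (1 = judged very similar by experts, 0 = unrelated).
similarity = np.array([
    [1.0, 0.8, 0.2, 0.3],
    [0.8, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.6],
    [0.3, 0.2, 0.6, 1.0],
])

# Convert similarity to distance and cluster into two groups.
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")

for behaviour, cluster_id in zip(behaviours, labels):
    print(f"cluster {cluster_id}: {behaviour}")
```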
Abstract:
The aim of my dissertation is to analyze how selected elements of language are addressed in two contemporary dystopias, Feed by M. T. Anderson (2002) and Super Sad True Love Story by Gary Shteyngart (2010). I chose these two novels because language plays a key role in both of them: both are primarily focused on the pervasiveness of technology and on how the use and abuse of technology affects language in all its forms. In particular, I examine four key aspects of language: books, literacy, diary writing, and oral language. In order to analyze how these elements of language are dealt with in Feed and Super Sad True Love Story, I consider how the same aspects are presented in a sample of classical dystopias selected as benchmarks: We by Yevgeny Zamyatin (1921), Brave New World by Aldous Huxley (1932), Animal Farm (1945) and Nineteen Eighty-Four (1949) by George Orwell, Fahrenheit 451 by Ray Bradbury (1952), and The Handmaid's Tale by Margaret Atwood (1986). In this way, I look at how language, books, literacy, and diaries are dealt with in Anderson’s Feed and in Shteyngart’s Super Sad True Love Story, both in comparison with the classical dystopias and with one another. This allows for an analysis of the similarities, as well as the differences, between the two novels. The comparative analysis also takes into account the fact that the two contemporary dystopias have different target audiences: one is for young adults (Feed), whereas the other is for adults (Super Sad True Love Story). Consequently, I also consider whether this difference in target readership leads to further differences in how language is dealt with. Preliminary findings indicate that, despite their different target audiences, the linguistic elements considered are addressed in the two novels in similar ways.
Abstract:
Without a doubt, one of the biggest changes that affected twentieth-century art was the introduction of words into paintings and, in more recent years, into installations. For centuries, if words were part of a visual composition, they functioned as a reference; strictly speaking, they were used as a guide to a better understanding of the subject represented. With the developments of the twentieth century, words became a very important part of the visual composition, and sometimes embodied the composition itself. On this topic, the American art critic and collector Russell Bowman wrote an interesting article called Words and Images: A Persistent Paradox, in which he examines American and European art of the twentieth century in almost its entirety, dividing it into six “categories of intention”. These categories are not based on the art-historical timeline, but on the role that language played for specific artists or movements. Taking inspiration from Bowman's article, this paper is structured in three chapters: words in juxtaposition and free association, words as a means of exploring language structures, and words as a means of conveying political and personal messages. The purpose of this paper is therefore to reflect on the role of language in contemporary art and on the way it has changed from artist to artist.
Abstract:
Smooth intercultural communication involves very complex tasks, especially when the participants' cultural and linguistic backgrounds differ widely, as is the case for native speakers of Italian and Japanese. A further difficulty in such a context lies in the use of a foreign language that the speakers have not fully mastered, which is the case for intermediate Italian learners of Japanese. The aim of this study is therefore to identify the linguistic difficulties common among Italian learners of Japanese as a foreign language and to further examine the consequences of inaccurate pragmalinguistic delivery in actual conversations. To this end, a series of linguistic aspects selected on the basis of the author's experience has been taken into consideration. Some aspects are expected to be difficult to master because of linguistic differences between Italian and Japanese, while others may be difficult due to their connection to the specific Japanese cultural context. The present study consists of six parts. The Introduction presents the state of the art on the research topic and defines the purpose of this research. Chapter 1 outlines the linguistic aspects of the Japanese language investigated in the study, focusing on the following topics: the writing system, phonology, loan words, numbers, ellipsis, levels of speech and honorifics. Chapter 2 presents an overview of the teaching of Japanese as a foreign language in the university setting in Italy. Chapter 3 describes the first phase of the research, i.e. an online survey aimed at identifying the most problematic linguistic aspects. Chapter 4 presents the second phase of this study: a series of oral interactions between Japanese and Italian native speakers, conversing exclusively in Japanese, focusing on the management of misunderstandings with the use of actual linguistic data. The Conclusion outlines the results and possible future developments.
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success both in science and in business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an extensive comparison of two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view within the broad field of deep learning, and are well suited to understanding and pointing out the strengths and weaknesses of each. CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
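For orientation, the sketch below shows the general shape of the kind of convolutional network used as the supervised baseline in such comparisons; it is not the architecture evaluated in the thesis, and the layer sizes, the 32x32 RGB input and the 10-class output are assumptions for illustration.

```python
# A minimal convolutional network for object recognition, sketched in PyTorch.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Forward pass on a dummy batch of four 32x32 RGB images.
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```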
Abstract:
The aim of this dissertation is to provide a translation from English into Italian of a specialised scientific article published in the Cambridge Working Papers in Economics series. In this text, the authors estimate the economic consequences of the earthquake that hit the Abruzzo region in 2009. An extract of this translation will be published as part of conference proceedings. The main reason behind this choice is a personal interest in specialised translation in the economic domain. Moreover, the subject of the article is of particular interest to the Italian readership. The aim of this study is to show how a non-specialised translator can tackle such a highly specialised translation with the use of appropriate terminology resources and the collaboration of field experts. The translation could be of help to other Italian linguists looking for translated material in this particular domain, where English seems to be the dominant language. In order to ensure consistent terminology and adequate style, the document has been translated with the use of different resources, such as dictionaries, glossaries and specialised corpora. I also contacted field experts and the authors of the text. The collaboration with the authors proved to be an invaluable resource, yet one to be carefully managed. This work is divided into five chapters. The first deals with domain-specific sublanguages. The second gives an overview of corpus linguistics and describes the corpora designed for the translation. The third provides an analysis of the article, focusing on syntactical, lexical and structural features, while the fourth presents the translation side by side with the source text. The fifth comments on the main difficulties encountered in the translation and the strategies used, as well as on the relationship with the authors and their review of the published text. Appendix I contains the English–Italian econometric glossary.
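As a rough illustration of how a specialised corpus can support this kind of terminology work (not the actual corpora or tools built for the thesis), a translator might run a simple keyword-in-context search over plain-text corpus files; the directory name and example term below are hypothetical.

```python
# Minimal keyword-in-context (KWIC) search over a folder of plain-text corpus files.
import re
from pathlib import Path

def kwic(corpus_dir: str, term: str, window: int = 5):
    """Yield (left context, term, right context) for each occurrence of term."""
    tokenizer = re.compile(r"\w+|\S", re.UNICODE)
    for path in Path(corpus_dir).glob("*.txt"):
        tokens = tokenizer.findall(path.read_text(encoding="utf-8").lower())
        for i, tok in enumerate(tokens):
            if tok == term.lower():
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                yield left, tok, right

# Example: inspect how a candidate term is used in a hypothetical Italian economics corpus.
for left, term, right in kwic("corpus_economia_it", "stima"):
    print(f"{left:>40} [{term}] {right}")
```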