2 results for Education, Language and Literature | Education, Curriculum and Instruction
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The aim of my dissertation is to analyze how selected elements of language are addressed in two contemporary dystopias, Feed by M. T. Anderson (2002) and Super Sad True Love Story by Gary Shteyngart (2010). I chose these two novels because language plays a key role in both of them: both are primarily focused on the pervasiveness of technology, and on how the use and abuse of technology affects language in all its forms. In particular, I examine four key aspects of language: books, literacy, diary writing, and oral language. In order to analyze how these elements of language are dealt with in Feed and Super Sad True Love Story, I consider how the same aspects of language are presented in a sample of classical dystopias selected as benchmarks: We by Yevgeny Zamyatin (1921), Brave New World by Aldous Huxley (1932), Animal Farm (1945) and Nineteen Eighty-Four (1949) by George Orwell, Fahrenheit 451 by Ray Bradbury (1952), and The Handmaid's Tale by Margaret Atwood (1986). In this way, I look at how language, books, literacy, and diaries are dealt with in Anderson's Feed and in Shteyngart's Super Sad True Love Story, both in comparison with the classical dystopias and with one another. This allows for an analysis of the similarities, as well as the differences, between the two novels. The comparative analysis also takes into account the fact that the two contemporary dystopias have different target audiences: one is for young adults (Feed), whereas the other is for adults (Super Sad True Love Story). Consequently, I also consider whether this difference in target readership is reflected in how language is dealt with. Preliminary findings indicate that, despite their different target audiences, the linguistic elements considered are addressed in the two novels in similar ways.
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a lot of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists of modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning an ontology from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, thereby speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as Named Entity Recognition, Entity Resolution, and Taxonomy and Relation Extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability to logically understand the structure of discourse [7].
In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
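To make the frame idea concrete, the pipeline the abstract describes (recognizing entities in text and linking them through meaningful relations into frame-like structures) can be caricatured in a few lines of Python. This is a minimal sketch under loud assumptions: the `Frame` class, the regex pattern, and the relation vocabulary below are purely illustrative inventions, not the thesis's actual system, which relies on deep parsing rather than surface patterns.

```python
import re
from dataclasses import dataclass

# Illustrative frame structure (hypothetical): a participant entity pair
# linked by a relation, in the spirit of Fillmore's Semantic Frames.
@dataclass(frozen=True)
class Frame:
    subject: str
    relation: str
    obj: str

# Naive surface pattern: "<Entity> <relation verb> <Entity>", where an
# entity is approximated as a single capitalized word. A real system
# would use NER and syntactic/semantic parsing instead.
PATTERN = re.compile(r"\b([A-Z][a-z]+)\s+(founded|acquired|employs)\s+([A-Z][a-z]+)\b")

def extract_frames(text: str) -> list[Frame]:
    """Extract (subject, relation, object) frames in one regex pass."""
    return [Frame(s, r, o) for s, r, o in PATTERN.findall(text)]

frames = extract_frames("Alice founded Acme. Later, Acme acquired Globex.")
print(frames)
# Two frames: (Alice, founded, Acme) and (Acme, acquired, Globex)
```

Each extracted frame is a candidate triple for populating an ontology schema; the hard part the thesis addresses is doing this accurately and at web scale, where closed-vocabulary patterns like the one above break down.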