943 results for Textual complexity for Romanian language
Abstract:
This paper introduces a novel, in-depth approach to analyzing the differences in writing style between two famous Romanian orators, based on automated textual complexity indices for the Romanian language. The considered authors are: (a) Mihai Eminescu, Romania’s national poet and a remarkable journalist of his time, and (b) Ion C. Brătianu, one of the most important Romanian politicians of the middle of the 19th century. Both orators shared a journalistic interest: the desire to spread the word about political issues in Romania via the printing press, the most important public voice at that time. In addition, both authors exhibit distinctive writing styles, and our aim is to explore these differences through our ReaderBench framework, which computes a wide range of lexical and semantic textual complexity indices for Romanian and other languages. The corpus contains two collections of speeches, one per orator, covering the period 1857–1880. The results of this study highlight the lexical and cohesive textual complexity indices that best reflect the differences in writing style, in particular measures relying on Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) semantic models.
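As an illustration of what the simplest of such automated indices look like, the following sketch computes two basic lexical complexity measures (type-token ratio and mean sentence length). The function name and tokenization are hypothetical choices for this example, not the ReaderBench implementation:

```python
import re

def lexical_indices(text):
    """Two illustrative lexical complexity indices (not ReaderBench's):
    type-token ratio (vocabulary diversity) and mean sentence length
    in words, using naive regex tokenization."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text.lower())
    return {
        "type_token_ratio": len(set(words)) / len(words),
        "mean_sentence_length": len(words) / len(sentences),
    }
```

Frameworks such as ReaderBench layer many more indices (syntactic, semantic, discourse-level) on top of surface measures like these.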
Abstract:
The goal of this article is to reveal the computational structure of modern principle-and-parameter (Chomskian) linguistic theories: what computational problems do these informal theories pose, and what is the underlying structure of those computations? To do this, I analyze the computational complexity of human language comprehension: what linguistic representation is assigned to a given sound? This problem is factored into smaller, interrelated (but independently statable) problems. For example, in order to understand a given sound, the listener must assign a phonetic form to the sound; determine the morphemes that compose the words in the sound; and calculate the linguistic antecedent of every pronoun in the utterance. I prove that these and other subproblems are all NP-hard, and that language comprehension is itself PSPACE-hard.
Abstract:
Monographic issue with the title: 'The debate on language acquisitions: constructivism versus innatism'. Abstract based on that of the publication.
Abstract:
Using online knowledge communities (OKCs) as informal learning environments raises the question of how likely they are to integrate newcomers as peripheral participants. Previous research has identified surface characteristics of the OKC dialog as integrativity predictors. Yet, little is known about the role of dialogic textual complexity. This contribution proposes a comprehensive approach based on previously validated textual complexity indices and applies it to predict OKC integrativity. The dialog analysis of N = 14 blogger communities with a total of 1937 participants identified three main components of textual complexity: dialog participation, structure and cohesion. Of these, dialog cohesion was higher in integrative OKCs, thus significantly predicting OKC integrativity. This result adds to previous OKC research by uncovering the depth of OKC discourse. For educational practice, the study suggests a way of empowering learners by automatically assessing the integrativity of OKCs in which they may attempt to participate and access community knowledge.
Abstract:
Social media tools are increasingly popular in Computer Supported Collaborative Learning, and the analysis of students' contributions on these tools is an emerging research direction. Previous studies have mainly focused on examining quantitative behavior indicators on social media tools. In contrast, the approach proposed in this paper relies on the actual content analysis of each student's contributions in a learning environment. More specifically, in this study, textual complexity analysis is applied to investigate how students' writing style on social media tools can be used to predict their academic performance and their learning style. Multiple textual complexity indices are used for analyzing the blog and microblog posts of 27 students engaged in a project-based learning activity. The preliminary results of this pilot study are encouraging, with several indices being predictive of student grades and/or learning styles.
Abstract:
Methods from statistical physics, such as those involving complex networks, have been increasingly used in the quantitative analysis of linguistic phenomena. In this paper, we represented pieces of text with different levels of simplification in co-occurrence networks and found that topological regularity correlated negatively with textual complexity. Furthermore, in less complex texts the distance between concepts, represented as nodes, tended to decrease. The complex networks metrics were treated with multivariate pattern recognition techniques, which allowed us to distinguish between original texts and their simplified versions. For each original text, two simplified versions were generated manually with increasing number of simplification operations. As expected, distinction was easier for the strongly simplified versions, where the most relevant metrics were node strength, shortest paths and diversity. Also, the discrimination of complex texts was improved with higher hierarchical network metrics, thus pointing to the usefulness of considering wider contexts around the concepts. Though the accuracy rate in the distinction was not as high as in methods using deep linguistic knowledge, the complex network approach is still useful for a rapid screening of texts whenever assessing complexity is essential to guarantee accessibility to readers with limited reading ability. Copyright (c) EPLA, 2012
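The co-occurrence representation described above can be sketched as follows: words become nodes, and an edge's weight counts the sentences in which two words co-occur, from which metrics such as node strength are derived. The function names and the sentence-level co-occurrence window are illustrative assumptions, not the paper's exact construction:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(sentences):
    """Build a word co-occurrence network from tokenized sentences.
    Nodes are words; the weight of edge (a, b) counts how many
    sentences contain both a and b."""
    weights = defaultdict(int)
    for sent in sentences:
        for a, b in combinations(sorted(set(sent)), 2):
            weights[(a, b)] += 1
    return dict(weights)

def node_strength(network, word):
    """Node strength: total weight of all edges incident to the word,
    one of the metrics found most relevant for simplified texts."""
    return sum(w for pair, w in network.items() if word in pair)
```

In the study, such metrics (node strength, shortest paths, diversity, and hierarchical extensions) feed a multivariate classifier that separates original from simplified texts.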
Abstract:
Doctoral thesis, Linguistics (Educational Linguistics), Universidade de Lisboa, Faculdade de Letras, 2016
Abstract:
The study reported in this article is a part of a large-scale study investigating syntactic complexity in second language (L2) oral data in commonly taught foreign languages (English, German, Japanese, and Spanish; Ortega, Iwashita, Rabie, & Norris, in preparation). In this article, preliminary findings of the analysis of the Japanese data are reported. Syntactic complexity, which is referred to as syntactic maturity or the use of a range of forms with degrees of sophistication (Ortega, 2003), has long been of interest to researchers in L2 writing. In L2 speaking, researchers have examined syntactic complexity in learner speech in the context of pedagogic intervention (e.g., task type, planning time) and the validation of rating scales. In these studies complexity is examined using measures commonly employed in L2 writing studies. It is assumed that these measures are valid and reliable, but few studies explain what syntactic complexity measures actually examine. The language studied is predominantly English, and little is known about whether the findings of such studies can be applied to languages that are typologically different from English. This study examines how syntactic complexity measures relate to oral proficiency in Japanese as a foreign language. An in-depth analysis of speech samples from 33 learners of Japanese is presented. The results of the analysis are compared across proficiency levels and cross-referenced with 3 other proficiency measures used in the study. As in past studies, the length of T-units and the number of clauses per T-unit are found to be the best predictors of learner proficiency; these measures also had a significant linear relation with independent oral proficiency measures. These results are discussed in light of the notion of syntactic complexity and the interfaces between second language acquisition and language testing. Adapted from the source document
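The two best-performing measures named above are simple ratios once each T-unit has been annotated with its word and clause counts. A minimal sketch, assuming T-units arrive as (word_count, clause_count) pairs from a prior manual or parser-based segmentation step (the function name is hypothetical):

```python
def t_unit_measures(t_units):
    """Compute two classic syntactic complexity measures from a list of
    (word_count, clause_count) pairs, one pair per T-unit:
    mean length of T-unit and mean clauses per T-unit."""
    n = len(t_units)
    mean_length = sum(words for words, _ in t_units) / n
    clauses_per_t_unit = sum(clauses for _, clauses in t_units) / n
    return mean_length, clauses_per_t_unit
```

The hard part in practice is the segmentation into T-units and clauses, not the arithmetic; the study's contribution lies in validating what these ratios capture for a language typologically distant from English.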
Abstract:
In this paper we introduce the online version of our ReaderBench framework, which includes multi-lingual comprehension-centered web services designed to address a wide range of individual and collaborative learning scenarios, as follows. First, students can be engaged in reading a course material, then eliciting their understanding of it; the reading strategies component provides an in-depth perspective of comprehension processes. Second, students can write an essay or a summary; the automated essay grading component provides them access to more than 200 textual complexity indices covering lexical, syntactic, semantic and discourse structure measurements. Third, students can start discussing in a chat or a forum; the Computer Supported Collaborative Learning (CSCL) component provides in-depth conversation analysis in terms of evaluating each member’s involvement in the CSCL environments. Finally, the sentiment analysis, semantic models and topic mining components enable a clearer perspective on learners’ points of view and underlying interests.
Abstract:
Dascalu, M., Stavarache, L.L., Dessus, P., Trausan-Matu, S., McNamara, D.S., & Bianco, M. (2015). ReaderBench: An Integrated Cohesion-Centered Framework. In G. Conole, T. Klobucar, C. Rensing, J. Konert & É. Lavoué (Eds.), 10th European Conf. on Technology Enhanced Learning (pp. 505–508). Toledo, Spain: Springer.
Shifting meanings: The role of metaphors in collective meaning-making in complex project leadership
Abstract:
This paper examines the use of metaphors in collective meaning-making in the work of managers and leaders of megaprojects, drawing on interviews with thirty-three leaders of complex projects in a case study organisation responsible for the delivery of major acquisitions. Recognising both contextualised and decontextualised approaches to eliciting or projecting metaphors, the paper describes the various ways in which practising project leaders describe their work and the synergies these metaphors have with the broader social discourse and theorisation around complexity and the language of complex adaptive systems. The paper presents our case study findings, where we outline our typology of meta-metaphors describing project leaders’ multiple roles and our interpretation of the significance of these choices.
Abstract:
The primary goal of this report is to demonstrate how considerations from computational complexity theory can inform grammatical theorizing. To this end, generalized phrase structure grammar (GPSG) linguistic theory is revised so that its power more closely matches the limited ability of an ideal speaker–hearer: GPSG Recognition is EXP-POLY time hard, while Revised GPSG Recognition is NP-complete. A second goal is to provide a theoretical framework within which to better understand the wide range of existing GPSG models, embodied in formal definitions as well as in implemented computer programs. A grammar for English and an informal explanation of the GPSG/RGPSG syntactic features are included in appendices.
Abstract:
Master's final project submitted for the degree of Master in Informatics and Computer Engineering
Abstract:
Master's thesis by articles