756 results for Language Analysis
Abstract:
This paper reports on a process to validate a revised version of a system for coding classroom discourse in foreign language lessons, a context in which the dual role of language (as content and as means of communication) and the speakers' specific pedagogical aims lead to a certain degree of ambiguity in language analysis. The language used by teachers and students has been extensively studied, and a framework of concepts concerning classroom discourse is well established. Models for coding classroom language need, however, to be revised when they are applied to specific research contexts. The application and revision of an initial framework can lead to the development of earlier models, and to the re-definition of previously established categories of analysis, which then have to be validated. The procedures followed to validate a coding system are related here as guidelines for conducting research under similar circumstances. The advantages of using instruments that incorporate two types of data, quantitative measures and qualitative information from raters' metadiscourse, are discussed, and it is suggested that such a procedure can contribute to the validation process itself, toward attaining reliability of research results, as well as indicating some constraints of the adopted research methodology.
Abstract:
One of the main challenges to be addressed in text summarization is the detection of redundant information. This paper presents a detailed analysis of three methods for achieving this goal. The proposed methods rely on different levels of language analysis: lexical, syntactic and semantic. They are also analyzed for detecting relevance in texts. The results show that semantic-based methods are able to detect up to 90% of redundancy, compared to only 19% for lexical-based ones. This is also reflected in the quality of the generated summaries: better summaries are obtained when syntactic- or semantic-based approaches are employed to remove redundancy.
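The abstract compares lexical-, syntactic- and semantic-level methods. As a minimal sketch of what a purely lexical redundancy check can look like (an illustration only, not the authors' implementation), sentences can be filtered by token overlap:

```python
import re

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two sentences."""
    sa = set(re.findall(r"\w+", a.lower()))
    sb = set(re.findall(r"\w+", b.lower()))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def filter_redundant(sentences, threshold=0.5):
    """Keep a sentence only if it does not overlap heavily with one already kept."""
    kept = []
    for s in sentences:
        if all(jaccard(s, k) < threshold for k in kept):
            kept.append(s)
    return kept

filter_redundant([
    "The cat sat on the mat.",
    "A cat was sitting on the mat.",   # dropped: heavy lexical overlap
    "Stock prices fell sharply today.",
])
```

A check like this misses paraphrases that share few tokens, which is consistent with the abstract's finding that lexical-based methods detect far less redundancy than semantic-based ones.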
Abstract:
The best results in applying computer systems to automatic translation are obtained, in text processing, when texts pertain to specific thematic areas, with well-defined structures and a concise, limited lexicon. In this article we present a plan of systematic work for the analysis and generation of language applied to the field of pharmaceutical leaflets, a type of document characterized by rigidity of format and precision in the use of the lexicon. We propose a solution based on the use of an interlingua as a pivot language between the source and target languages; we consider Spanish and Arabic in this application.
Abstract:
Using examples from contemporary policy and business discourses, and exemplary historical texts dealing with the notion of value, I put forward an argument as to why a critical scholarship that draws on media history, language analysis, philosophy and political economy is necessary to understand the dynamics of what is being called 'the global knowledge economy'. I argue that the social changes associated with new modes of value determination are closely associated with new media forms.
Abstract:
This work is aimed at building an adaptable frame-based system for processing Dravidian languages. There are about 17 languages in this family, spoken by the people of South India. Karaka relations are one of the most important features of Indian languages: they are the semantico-syntactic relations between verbs and other related constituents in a sentence. The karaka relations and surface case endings are analyzed for meaning extraction. This approach is comparable with the broad class of case-based grammars. The efficiency of this approach is put to the test in two applications: one is machine translation and the other is a natural language interface (NLI) for information retrieval from databases. The system mainly consists of a morphological analyzer, a local word grouper, a parser for the source language and a sentence generator for the target language. Among its contributions, this work gives an elegant and compact account of the relation between vibhakthi and karaka roles in Dravidian languages. The same basic mapping also explains simple and complex sentences in these languages, which suggests that the solution is not merely ad hoc but has a deeper underlying unity. The methodology could be extended to other free word order languages. Since the frames designed for meaning representation are general, they are adaptable to other languages in this group and to other applications.
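As a loose, hypothetical illustration of the frame idea (the case suffixes and role mapping below are simplified assumptions for illustration, not the paper's actual analysis), a case-suffix-to-karaka mapping could be represented as:

```python
# Illustrative sketch only: a toy table mapping simplified, hypothetical
# Tamil-style case suffixes (vibhakthi) to karaka roles. The mapping in the
# paper's system is richer and language-specific.
VIBHAKTHI_TO_KARAKA = {
    "":     "karta",       # unmarked/nominative -> agent
    "ai":   "karma",       # accusative -> patient
    "aal":  "karana",      # instrumental -> instrument
    "ukku": "sampradana",  # dative -> recipient
    "il":   "adhikarana",  # locative -> location
}

def build_frame(verb, constituents):
    """constituents: (stem, case_suffix) pairs from a morphological analyzer.
    Returns a flat frame mapping karaka roles to stems."""
    frame = {"verb": verb}
    for stem, suffix in constituents:
        frame[VIBHAKTHI_TO_KARAKA[suffix]] = stem
    return frame

build_frame("paar", [("Raman", ""), ("puli", "ai")])
# -> {'verb': 'paar', 'karta': 'Raman', 'karma': 'puli'}
```

Because the frame is just a role-to-filler mapping, the same representation can feed either a target-language generator (for translation) or a database query builder (for the NLI), which is the adaptability the abstract claims.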
Abstract:
Numerous linguistic operations have been assigned to cortical brain areas, but the contributions of subcortical structures to human language processing are still being discussed. Using simultaneous EEG recordings directly from deep brain structures and the scalp, we show that the human thalamus systematically reacts to syntactic and semantic parameters of auditorily presented language in a temporally interleaved manner in coordination with cortical regions. In contrast, two key structures of the basal ganglia, the globus pallidus internus and the subthalamic nucleus, were not found to be engaged in these processes. We therefore propose that syntactic and semantic language analysis is primarily realized within cortico-thalamic networks, whereas a cohesive basal ganglia network is not involved in these essential operations of language analysis.
Abstract:
This paper presents a study about the role of grammar in on-line interactions conducted in Portuguese and in English, between Brazilian and English-speaking interactants, with the aim of teaching Portuguese as a foreign language (PFL). The interactions occurred by means of chat and the MSN Messenger, and generated audio and video data for language analysis. Grammar is dealt with from two perspectives, an inductive and a deductive approach, so as to investigate the relevance of systematization of grammar rules in the process of learning PFL in teletandem interactions.
Abstract:
Software corpora facilitate reproducibility of analyses; however, static analysis of an entire corpus still requires considerable effort, often duplicated unnecessarily by multiple users. Moreover, most corpora are designed for single languages, increasing the effort required for cross-language analysis. To address these issues we propose Pangea, an infrastructure allowing fast development of static analyses on multi-language corpora. Pangea uses language-independent meta-models stored as object model snapshots that can be loaded directly into memory and queried without any parsing overhead. To reduce the effort of performing static analyses, Pangea provides out-of-the-box support for: creating and refining analyses in a dedicated environment, deploying an analysis on an entire corpus, using a runner that supports parallel execution, and exporting results in various formats. In this tool demonstration we introduce Pangea and provide several usage scenarios that illustrate how it reduces the cost of analysis.
Abstract:
This paper addresses the problem of the automatic recognition and classification of temporal expressions and events in human language. Efficacy in these tasks is crucial if the broader task of temporal information processing is to be performed successfully. We analyze whether the application of semantic knowledge to these tasks improves the performance of current approaches. We therefore present and evaluate a data-driven approach as part of a system: TIPSem. Our approach uses lexical semantics and semantic roles as additional information to extend classical approaches, which are principally based on morphosyntax. The results obtained for English show that semantic knowledge aids in temporal expression and event recognition, achieving error reductions of 59% and 21% respectively, while in classification the contribution is limited. From the analysis of the results it may be concluded that the application of semantic knowledge leads to more general models and aids in the recognition of temporal entities that are ambiguous at shallower language analysis levels. We also discovered that lexical semantics and semantic roles have complementary advantages, and that it is useful to combine them. Finally, we carried out the same analysis for Spanish. The results obtained show comparable advantages. This supports the hypothesis that applying the proposed semantic knowledge may be useful for different languages.
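To make the notion of "shallower language analysis levels" concrete, here is a toy surface-pattern recognizer of the kind that TIPSem extends with semantic knowledge (an illustrative sketch, not the TIPSem code):

```python
import re

# Toy shallow recognizer: matches temporal expressions by surface form alone
# (years, deictic day words, month and weekday names).
TEMPORAL_PATTERN = re.compile(
    r"\b(?:\d{4}|yesterday|today|tomorrow"
    r"|January|February|March|April|June|July|August"
    r"|September|October|November|December"
    r"|Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)\b"
)

def find_timex(text):
    """Return the temporal expressions matched by the surface pattern."""
    return TEMPORAL_PATTERN.findall(text)

find_timex("The meeting moved from Monday to tomorrow.")
# -> ['Monday', 'tomorrow']
```

A surface pattern like this cannot, for instance, distinguish the month "May" from the modal verb "may"; such ambiguities are exactly what the semantic features described in the abstract help resolve.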
Abstract:
Poets have a licence to couch great truths in succinct, emotionally powerful, and perhaps slightly mysterious and ambiguous ways. On the other hand, it is the task of academics to explore such truths intellectually, in depth and detail, identifying the key constructs and their underlying relations and structures, hopefully without impairing the essential truth. So it could be said that in January 2013, around 60 academics gathered at the University of Texas, Austin under the benign and encouraging eye of their own muse, Professor Rod Hart, to play their role in exploring and explaining the underlying truth of Yan Zhen’s words. The goals of this chapter are quite broad. Rod was explicit and yet also somewhat Delphic in his expectations and aspirations for the chapter. Even though DICTION was a key analytic tool in most chapters, this chapter was not to be about DICTION per se, or simply a critique of the individual chapters forming this section of the book. Rather DICTION and these studies, as well as some others that got our attention, were to be more a launching pad for observations on what they revealed about the current state of understanding and research into the language of institutions, as well as some ‘adventurous’, but not too outlandish reflections on future challenges and opportunities.