6 results for Textual simplification
in Aston University Research Archive
Abstract:
Simplification of texts has traditionally been carried out by replacing words and structures with appropriate semantic equivalents in the learner's interlanguage, omitting whichever items prove intractable, and thereby bringing the language of the original within the scope of the learner's transitional linguistic competence. This kind of simplification focuses mainly on the formal features of language. The simplifier can, on the other hand, concentrate on making explicit the propositional content and its presentation in the original, in order to bring what is communicated in the original within the scope of the learner's transitional communicative competence. In this case, simplification focuses on the communicative function of the language. Up to now, however, approaches to the problem of simplification have been mainly concerned with the first kind, relying on the simplifier's intuition as to what constitutes difficulty for the learner; there appear to be few objective principles underlying the process. The main aim of this study is to investigate the effect of simplification on the communicative aspects of narrative texts, including the manner in which narrative units at higher levels of organisation are structured and presented, as well as the temporal and logical relationships between lower-level structures such as sentences and clauses. The intention is to establish an objective approach to simplification based on a set of principled procedures which could serve as a guideline in the simplification of material for foreign students at an advanced level.
Abstract:
The present work studies the overall structuring of radio news discourse by investigating three metatextual/interactive functions: (1) Discourse Organizing Elements (DOEs), (2) Attribution, and (3) Sentential and Nominal Background Information (SBI & NBI). An extended corpus of about 73,000 words from BBC and Radio Damascus news is used to study DOEs, and a restricted corpus of 38,000 words for Attribution and SBI & NBI. A situational approach is adopted to assess the influence of factors such as medium and audience on these functions and their frequency. It is found that: (1) DOEs are organizational and their frequency is determined by length of text; (2) Attribution functions in accordance with the editor's strategy and its frequency is audience-sensitive; and (3) BI provides background information and is determined by audience and news topics. Secondly, the salient grammatical elements in DOEs are discourse-deictic demonstratives, address pronouns and nouns referring to 'the news'. Attribution is realized in reporting/reported clauses, and BI in a sentence, a clause or a nominal group. Thirdly, DOEs establish a hierarchy of (1) news, (2) summary/expansion and (3) item, the last including topic introduction and details. While Attribution is generally, and SBI solely, a function of detailing, NBI and proper names are generally a function of summary and topic introduction. Being primarily addressed to the audience and referring metatextually, the functions investigated support Sinclair's interactive and autonomous planes of discourse. They also shed light on the part(s) of the linguistic system which realize the metatextual/interactive function. Strictly, 'discourse structure' inevitably involves a rank scale; but news discourse also shows a convention of item 'listing'. Hence textual functions and discourse structure can be studied only within the boundary of variety (ultimately interpreted across language and in its situation).
Finally, interlingual variety study provides invaluable insights into a level of translation that goes beyond matching grammatical systems or situational factors, an interpretive level which has to be described in linguistic analysis of translation data.
Abstract:
Little research has been undertaken into high-stakes deception, and even less into high-stakes deception in written text. This study addresses that gap. In this thesis, I present a new approach to detecting deception in written narratives, based on the definition of deception as a progression and focusing on identifying deceptive linguistic strategy rather than individual cues. I propose a new approach for subdividing whole narratives into their constituent episodes, each of which is linguistically profiled and their progression mapped to identify authors' deceptive strategies based on cue interaction. I conduct a double-blind study using qualitative and quantitative analysis in which linguistic strategy (cue interaction and progression) and overall cue presence are used to predict deception in witness statements. Linguistic strategy analysis correctly predicts 85% of deceptive statements (92% overall), compared with 54% (64% overall) when cues are identified on a whole-statement basis. These results suggest that deception cues are not static, and that the value of individual cues as deception predictors is linked to their interaction with other cues. Results also indicate that in certain cue combinations, individual self-references (I, Me and My), previously believed to be indicators of truthfulness, are effective predictors of deceptive linguistic strategy at work.
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Existing parsers for textual model representation formats such as XMI and HUTN are unforgiving and fail upon even the smallest inconsistency between the structure and naming of metamodel elements and the contents of serialised models. In this paper, we demonstrate how a fuzzy parsing approach can transparently and automatically resolve a number of these inconsistencies, and how it can eventually turn XML into a human-readable and editable textual model representation format for particular classes of models.
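To illustrate the kind of inconsistency a fuzzy parser can resolve, consider element tags in a serialised model that differ from the metamodel's class names in case, separators, or minor spelling. The sketch below is not the paper's implementation; it is a minimal, hypothetical example (the metamodel class names and XML input are invented) showing one way to map mismatched tags to metamodel classes by normalisation plus edit-distance matching.

```python
import difflib
import xml.etree.ElementTree as ET

# Hypothetical metamodel: class names as they might appear in a metamodel.
METAMODEL_CLASSES = ["Library", "Book", "Author"]

def normalise(name):
    """Lower-case and strip separators, so 'book_entry' matches 'BookEntry'."""
    return name.replace("_", "").replace("-", "").lower()

def resolve_class(tag, classes=METAMODEL_CLASSES):
    """Fuzzily map a serialised element tag to a metamodel class name.

    Tries an exact match after normalisation first, then falls back to the
    closest candidate by difflib's similarity ratio.
    """
    by_norm = {normalise(c): c for c in classes}
    key = normalise(tag)
    if key in by_norm:
        return by_norm[key]
    candidates = difflib.get_close_matches(key, by_norm, n=1, cutoff=0.6)
    return by_norm[candidates[0]] if candidates else None

def parse_model(xml_text):
    """Parse XML and resolve every element tag against the metamodel."""
    root = ET.fromstring(xml_text)
    return [(el.tag, resolve_class(el.tag)) for el in root.iter()]

# 'library' differs in case; 'Athor' is misspelled; both still resolve.
print(parse_model("<library><book/><Athor/></library>"))
# → [('library', 'Library'), ('book', 'Book'), ('Athor', 'Author')]
```

A strict parser would reject all three elements outright; the fuzzy step lets the same serialised model be read despite superficial divergence from the metamodel, which is the behaviour the abstract describes.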