34 results for Linguistic analysis
in Aston University Research Archive
Abstract:
Simplification of texts has traditionally been carried out by replacing words and structures with appropriate semantic equivalents in the learner's interlanguage, omitting whichever items prove intractable, and thereby bringing the language of the original within the scope of the learner's transitional linguistic competence. This kind of simplification focuses mainly on the formal features of language. The simplifier can, on the other hand, concentrate on making explicit the propositional content and its presentation in the original in order to bring what is communicated in the original within the scope of the learner's transitional communicative competence. In this case, simplification focuses on the communicative function of the language. Up to now, however, approaches to the problem of simplification have been mainly concerned with the first kind, using the simplifier’s intuition as to what constitutes difficulty for the learner. There appear to be few objective principles underlying this process. The main aim of this study is to investigate the effect of simplification on the communicative aspects of narrative texts, which includes the manner in which narrative units at higher levels of organisation are structured and presented and also the temporal and logical relationships between lower level structures such as sentences/clauses, with the intention of establishing an objective approach to the problem of simplification based on a set of principled procedures which could be used as a guideline in the simplification of material for foreign students at an advanced level.
Abstract:
This research explores how news media reports construct representations of a business crisis through language. In an innovative approach to dealing with the vast pool of potentially relevant texts, media texts concerning the BP Deepwater Horizon oil spill are gathered from three different time points: immediately after the explosion in 2010, one year later in 2011 and again in 2012. The three sets of 'BP texts' are investigated using discourse analysis and semi-quantitative methods within a semiotic framework that gives an account of language at the semiotic levels of sign, code, mythical meaning and ideology. The research finds in the texts three discourses of representation concerning the crisis that show a movement from the ostensibly representational to the symbolic and conventional: a discourse of 'objective factuality', a discourse of 'positioning' and a discourse of 'redeployment'. This progression can be shown to have useful parallels with Peirce's sign classes of Icon, Index and Symbol, with their implied movement from a clear motivation by the Object (in this case the disaster events), to an arbitrary, socially-agreed connection. However, the naturalisation of signs, whereby ideologies are encoded in ways of speaking and writing that present them as 'taken for granted' is at its most complete when it is least discernible. The findings suggest that media coverage is likely to move on from symbolic representation to a new kind of iconicity, through a fourth discourse of 'naturalisation'. Here the representation turns back towards ostensible factuality or iconicity, to become the 'naturalised icon'. This work adds to the study of media representation a heuristic for understanding how the meaning-making of a news story progresses. It offers a detailed account of what the stages of this progression 'look like' linguistically, and suggests scope for future research into both language characteristics of phases and different news-reported phenomena.
Abstract:
One of the major problems for Critical Discourse Analysts is how to move on from their insightful critical analyses to successfully 'acting on the world in order to transform it'. This paper discusses, with detailed exemplification, some of the areas where linguists have moved beyond description to acting on and changing the world. Examples from three murder trials show how essential it is, in order to protect the rights of witnesses and defendants, to have audio records of significant interviews with police officers. The article moves on to discuss the potentially serious consequences of the many communicative problems inherent in legal/lay interaction and illustrates a few of the linguist-led improvements to important texts. Finally, the article turns to the problems of using linguistic data to try to determine the geographical origin of asylum seekers. The intention of the article is to act as a call to arms to linguists; it concludes with the observation that 'innumerable mountains remain for those with a critical linguistic perspective who would like to try to move one'. © 2011 John Benjamins Publishing Company.
Abstract:
The present thesis focuses on the overall structure of the language of two types of Speech Exchange Systems (SES): Interview (INT) and Conversation (CON). The linguistic structure of INT and CON is quantitatively investigated on three different but interrelated levels of analysis: Lexis, Syntax and Information Structure. The corpus of data investigated for the project consists of eight sessions of pairs of conversants in carefully planned interviews followed by unplanned, surreptitiously recorded conversational encounters of the same pairs of speakers. The data comprise a total of approximately 15,200 words of INT talk and about 19,200 words of CON. Taking account of the debatable assumption that the language of SES might be complex on certain linguistic levels (e.g. syntax) (Halliday 1979) and simple on others (e.g. lexis) in comparison to written discourse, the thesis sets out to investigate this complexity using a statistical approach to the computation of the structures recurrent in the language of INT and CON. The findings indicate clearly the presence of linguistic complexity in both types. They also show the language of INT to be slightly more syntactically and lexically complex than that of CON. Lexical density seems to be relatively high in both types of spoken discourse. The language of INT also seems to be more complex than that of CON on the level of information structure. This is manifested in the greater use of Inferable and other linguistically complex entities of discourse. Halliday's suggestion that the language of SES is syntactically complex is confirmed, but not the suggestion that the more casual the conversation, the more syntactically complex it becomes. The results of the analysis point to the general conclusion that the linguistic complexity of types of SES lies not only in the high recurrence of syntactic structures, but also in the combination of these features with each other and with other linguistic and extralinguistic features.
The linguistic analysis of the language of SES can be useful in understanding and pinpointing the intricacies of spoken discourse in general and will help discourse analysts and applied linguists in exploiting it both for theoretical and pedagogical purposes.
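The lexical measures discussed above (lexical density, lexical variety) can be illustrated computationally. The sketch below is a minimal approximation, not the thesis's actual procedure: it estimates lexical density as the share of content words among all tokens, using a small hand-picked stopword list as a crude stand-in for proper part-of-speech tagging, and adds type-token ratio as a rough measure of lexical variety.

```python
# Rough lexical-complexity measures of the kind compared across
# interview (INT) and conversation (CON) transcripts.
# NOTE: the stopword list is an illustrative stand-in for real
# POS tagging; a production analysis would use a tagger.

import re

STOPWORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on",
    "is", "are", "was", "were", "be", "it", "that", "this", "i",
    "you", "he", "she", "we", "they", "not", "do", "did", "have",
}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def lexical_density(text):
    """Share of tokens that are (approximately) content words."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    content = [t for t in tokens if t not in STOPWORDS]
    return len(content) / len(tokens)

def type_token_ratio(text):
    """Distinct word forms divided by total tokens."""
    tokens = tokenize(text)
    return len(set(tokens)) / len(tokens) if tokens else 0.0

interview = "The analysis of the corpus reveals considerable syntactic complexity."
conversation = "Well, it was, you know, it was fine, it was fine really."

print(lexical_density(interview), lexical_density(conversation))
```

On real INT/CON transcripts one would also normalise for text length (type-token ratio is length-sensitive) before comparing the two speech exchange systems.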
Abstract:
The concept of plagiarism is not uncommonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of ‘moral’, nonmaterial goods has evolved into the right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The solution to plagiarism therefore falls most often under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from universal (Howard & Robillard, 2008). Howard (1995) and Scollon (1994, 1995) argued, albeit differently, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that the concept of plagiarism cannot be studied on the assumption that one definition is clearly understood by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is a particular problem in non-native writing in English, as did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If among higher education students plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, then a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist’s unfamiliarity with the text or topic, it should be considered ‘positive plagiarism’ (Howard, 1995: 796) and hence not an offense. Determining the intention behind instances of plagiarism therefore determines the nature of the disciplinary action adopted.
Unfortunately, in order to demonstrate the intention to deceive and charge students with accusations of plagiarism, teachers necessarily have to position themselves as ‘plagiarism police’, although it has been argued otherwise (Robillard, 2008). Practice demonstrates that in their daily activities teachers will find themselves required to command investigative skills and tools that they most often lack. We thus claim that the ‘intention to deceive’ cannot always be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that plagiarism is generally immoral but not illegal, and Goldstein (2003) draws the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism is often connoted with an illegal appropriation of ideas. Previously, she had demonstrated, by comparing four Spanish translations of Shakespeare’s Julius Caesar, that linguistic evidence can demonstrate instances of plagiarism (Turell, 2004). This challenge is also reinforced by the practice of international organisations, such as the IEEE, for which plagiarism potentially has ‘severe ethical and legal consequences’ (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common – and what academia usually lacks – is their focus on the legal nature of the problem. We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence on the topic is still being developed.
In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist’s intention or otherwise, this potential is restricted by the ability to identify a text as being suspected of plagiarism. In an era of such massive textual production, ‘policing’ plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, much research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection has of necessity to consider not only concepts from education and computational linguistics, but also forensic linguistics, especially if it is intended to counter claims of being a ‘simplistic response’ (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with related words from the same semantic field, i.e. synonyms, subordinates, superordinates, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that the word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate instances of ‘patchwriting’ from instances of plagiarism. This research demonstrates that referencing verbs are borrowed from the original in an attempt to construct the new text cohesively when the plagiarism is inadvertent, and that, when it is intentional, the plagiarist has made an effort to prevent the reader from identifying the text as plagiarism. In some of these cases, the referencing elements prove able to identify direct quotations and thus ‘betray’ and denounce the plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify them as proper paraphrasing and not – mistakenly and simplistically – as plagiarism.
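The borrowing strategies described above (verbatim copying, word-order change to disguise borrowing, paradigmatic substitution of lexical items) can be caricatured in a few lines of code. The classifier below is an illustrative toy, not the authors' method: the labels and the 0.6 overlap threshold are assumptions, and a real analysis would compare semantic fields rather than raw token overlap.

```python
# Toy classifier for the borrowing strategies described above:
# verbatim copying, word-order change, and lexical substitution.
# Thresholds and labels are illustrative assumptions only.

import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def classify_overlap(original, suspect, subst_threshold=0.6):
    o, s = tokens(original), tokens(suspect)
    if o == s:
        return "verbatim"
    if Counter(o) == Counter(s):
        return "reordered"             # same words, order disguised
    shared = sum((Counter(o) & Counter(s)).values())
    overlap = shared / max(len(o), len(s))
    if overlap >= subst_threshold:
        return "partial substitution"  # some items swapped for near-synonyms
    return "no clear borrowing"

src = "the plagiarist replaces relevant lexical items with synonyms"
print(classify_overlap(src, src))
print(classify_overlap(src, "relevant lexical items the plagiarist replaces with synonyms"))
print(classify_overlap(src, "the plagiarist swaps relevant lexical items with equivalents"))
```

A practical system would combine such surface measures with the analysis of referencing verbs discussed above, since those were found to discriminate patchwriting from intentional disguise.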
Abstract:
The present investigation is based on a linguistic analysis of the 'Housing Act 1980' and attempts to examine the role of qualifications in the structuring of the legislative statement. The introductory chapter isolates legislative writing as a "sub-variety" of legal language and provides an overview of the controversies surrounding the way it is written and the problems it poses to its readers. Chapter two emphasizes the limitations of the available work on the description of language-varieties for the analysis of legislative writing and outlines the approach adopted for the present analysis. This chapter also gives some idea of the information-structuring of legislative provisions and establishes qualification as a key element in their textualisation. The next three chapters offer a detailed account of the ten major qualification-types identified in the corpus, concentrating on the surface form they take, the features of legislative statements they textualize and the syntactic positions to which they are generally assigned in the statement of legislative provisions. The emerging hypotheses in these chapters have often been verified through a specialist reaction from a Parliamentary Counsel, largely responsible for the writing of the 'Housing Act 1980'. The findings suggest useful correlations between a number of qualificational initiators and the various aspects of the legislative statement. They also reveal that many of these qualifications typically occur in those clause-medial syntactic positions which are sparingly used in other specialist discourse, thus creating syntactic discontinuity in the legislative sentence. Such syntactic discontinuities, on the evidence from psycholinguistic experiments reported in chapter six, create special problems in the processing and comprehension of legislative statements.
The final chapter converts the main linguistic findings into a series of pedagogical generalizations, offers indications of how this may be applied in EALP situations and concludes with other considerations of possible applications.
Abstract:
The present work studies the overall structuring of radio news discourse by investigating three metatextual/interactive functions: (1) Discourse Organizing Elements (DOEs), (2) Attribution and (3) Sentential and Nominal Background Information (SBI & NBI). An extended corpus of about 73,000 words from BBC and Radio Damascus news is used to study DOEs, and a restricted corpus of 38,000 words for Attribution and SBI & NBI. A situational approach is adopted to assess the influence of factors such as medium and audience on these functions and their frequency. It is found that: (1) DOEs are organizational, and their frequency is determined by length of text; (2) Attribution functions in accordance with the editor's strategy, and its frequency is audience-sensitive; and (3) BI provides background information and is determined by audience and news topics. Secondly, the salient grammatical elements in DOEs are discourse-deictic demonstratives, address pronouns and nouns referring to `the news'. Attribution is realized in reporting/reported clauses, and BI in a sentence, a clause or a nominal group. Thirdly, DOEs establish a hierarchy of (1) news, (2) summary/expansion and (3) item, the last including topic introduction and details. While Attribution is generally, and SBI solely, a function of detailing, NBI and proper names are generally a function of summary and topic introduction. Being primarily addressed to the audience and referring metatextually, the functions investigated support Sinclair's interactive and autonomous planes of discourse. They also shed light on the part(s) of the linguistic system which realize the metatextual/interactive function. Strictly, `discourse structure' inevitably involves a rank-scale; but news discourse also shows a convention of item `listing'. Hence only within the boundary of a variety (ultimately interpreted across language and in its situation) can textual functions and discourse structure be studied.
Finally, interlingual variety study provides invaluable insights into a level of translation that goes beyond matching grammatical systems or situational factors, an interpretive level which has to be described in linguistic analysis of translation data.
Abstract:
This study investigates plagiarism detection, with an application in forensic contexts. Two types of data were collected for the purposes of this study. Data in the form of written texts were obtained from two Portuguese universities and from a Portuguese newspaper. These data are analysed linguistically to identify instances of verbatim, morpho-syntactical, lexical and discursive overlap. Data in the form of surveys were obtained from two higher education institutions in Portugal, and another two in the United Kingdom. These data are analysed using a 2 by 2 between-groups Univariate Analysis of Variance (ANOVA), to reveal cross-cultural divergences in the perceptions of plagiarism. The study discusses the legal and social circumstances that may contribute to adopting a punitive approach to plagiarism, or, conversely, to rejecting punishment. The research adopts a critical approach to plagiarism detection. On the one hand, it describes the linguistic strategies adopted by plagiarists when borrowing from other sources, and, on the other hand, it discusses the relationship between these instances of plagiarism and the context in which they appear. A focus of this study is whether plagiarism involves an intention to deceive, and, in this case, whether forensic linguistic evidence can provide clues to this intentionality. It also evaluates current computational approaches to plagiarism detection, and identifies strategies that these systems fail to detect. Specifically, a method is proposed to detect translingual plagiarism. The findings indicate that, although cross-cultural aspects influence the different perceptions of plagiarism, a distinction needs to be made between intentional and unintentional plagiarism. The linguistic analysis demonstrates that linguistic elements can contribute to finding clues to the plagiarist’s intentionality.
Furthermore, the findings show that translingual plagiarism can be detected by using the method proposed, and that plagiarism detection software can be improved using existing computer tools.
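The 2 by 2 between-groups ANOVA mentioned above can be sketched from first principles for a balanced design. The data below are invented for illustration (country of institution crossed with respondent group, with a made-up 'perceived severity' score); only the decomposition into main-effect, interaction and error sums of squares reflects the standard method, not the study's actual survey data.

```python
# Balanced 2x2 between-groups ANOVA, computed from first principles.
# Data are invented for illustration (not the study's survey data):
# factor A = country of institution, factor B = respondent group.

from statistics import mean

def two_way_anova(cells):
    """cells: dict mapping (a_level, b_level) -> list of scores.
    Returns F ratios for factor A, factor B and the interaction."""
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))          # per-cell n (balanced)
    all_scores = [x for v in cells.values() for x in v]
    grand = mean(all_scores)

    a_means = {a: mean([x for (ai, _), v in cells.items() if ai == a for x in v])
               for a in a_levels}
    b_means = {b: mean([x for (_, bi), v in cells.items() if bi == b for x in v])
               for b in b_levels}
    cell_means = {k: mean(v) for k, v in cells.items()}

    ss_a = n * len(b_levels) * sum((a_means[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_means[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell_means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_err = sum((x - cell_means[k]) ** 2 for k, v in cells.items() for x in v)

    df_a, df_b = len(a_levels) - 1, len(b_levels) - 1
    df_ab = df_a * df_b
    df_err = len(all_scores) - len(cells)
    ms_err = ss_err / df_err
    return (ss_a / df_a / ms_err, ss_b / df_b / ms_err, ss_ab / df_ab / ms_err)

cells = {
    ("PT", "students"): [2, 4], ("PT", "staff"): [4, 6],
    ("UK", "students"): [6, 8], ("UK", "staff"): [8, 10],
}
f_a, f_b, f_ab = two_way_anova(cells)
print(f_a, f_b, f_ab)
```

Each F ratio would then be compared against the F distribution with the corresponding degrees of freedom to test the significance of the two main effects and their interaction.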
Abstract:
The ways in which an interpreter affects the processes and, possibly, the outcomes of legal proceedings have formed the focus of much recent research, most of it centred upon courtroom discourse. However, comparatively little research has been carried out into the effect of interpreting on the interview with a suspect, despite its 'upstream' position in the legal process and its vital importance as evidence. As a speech event in the judicial system, the interview differs radically from that which takes place 'downstream', that is, in court. The interview with a suspect represents an entirely different construct, in which a range of registers is apparent, and participants use distinctive means to achieve their institutional goals. When a transcript of an interpreter-mediated interview is read out in court, it is assumed that this is a representation of an event that is essentially identical to a monolingual interview. This thesis challenges that assumption. Using conversation analytic techniques, it examines data from a corpus of monolingual and interpreter-mediated, taped interviews with suspects, in order to identify potentially significant interactional differences and describe ways in which the interpreter affects the processes and may affect the outcomes of the interview. It is argued that although, individually, the interactional differences may appear slight, their cumulative effect is significant, particularly since the primary participants in the event are unaware of the full force of the interpreting effect. Finally, the thesis suggests that the insights provided by linguistic analysis of interpreting in interviews may provide the basis for training, both for interpreters themselves and for officers in techniques for interpreter-mediated interviews.
Abstract:
Networked Learning, e-Learning and Technology Enhanced Learning have each been defined in different ways, as people's understanding about technology in education has developed. Yet each could also be considered as a terminology competing for a contested conceptual space. Theoretically this can be a ‘fertile trans-disciplinary ground for represented disciplines to affect and potentially be re-orientated by others’ (Parchoma and Keefer, 2012), as differing perspectives on terminology and subject disciplines yield new understandings. Yet when used in government policy texts to describe connections between humans, learning and technology, terms tend to become fixed in less fertile positions linguistically. A deceptively spacious policy discourse that suggests people are free to make choices conceals an economically-based assumption that implementing new technologies in itself determines learning. Yet it actually narrows the choices open to people, as one route is repeatedly in the foreground and humans are not visibly involved in it. An impression that the effective use of technology for endless improvement is inevitable cuts off critical social interactions and new knowledge for multiple understandings of technology in people's lives. This paper explores some findings from a corpus-based Critical Discourse Analysis of UK policy for educational technology during the last 15 years, to help to illuminate the choices made. This is important when, through political economy, a hierarchical or dominant neoliberal logic promotes a single ‘universal model’ of technology in education, without reference to a wider social context (Rustin, 2013). Discourse matters, because it can ‘mould identities’ (Massey, 2013) in narrow, objective, economically-based terms which 'colonise discourses of democracy and student-centredness' (Greener and Perriton, 2005: 67).
This undermines subjective social, political, material and relational (Jones, 2012: 3) contexts for those learning when humans are omitted. Critically confronting these structures is not considered a negative activity. Whilst deterministic discourse for educational technology may leave people unconsciously restricted, I argue that, through a close analysis, it offers a deceptively spacious theoretical tool for debate about the wider social and economic context of educational technology. Methodologically it provides insights about ways technology, language and learning intersect across disciplinary borders (Giroux, 1992), as powerful, mutually constitutive elements, ever-present in networked learning situations. In sharing a replicable approach for linguistic analysis of policy discourse I hope to contribute to visions others have for a broader theoretical underpinning for educational technology, as a developing field of networked knowledge and research (Conole and Oliver, 2002; Andrews, 2011).
Abstract:
This article uses a research project into the online conversations of sex offenders and the children they abuse to further the arguments for the acceptability of experimental work as a research tool for linguists. The research reported here contributes to the growing body of work within linguistics that has found experimental methods to be useful in answering questions about representation and constraints on linguistic expression (Hemforth 2013). The wider project examines online identity assumption in online paedophile activity and the policing of such activity, and involves dealing with the linguistic analysis of highly sensitive sexual grooming transcripts. Within the linguistics portion of the project, we examine theories of idiolect and identity through analysis of the ‘talk’ of perpetrators of online sexual abuse, and of the undercover officers that must assume alternative identities in order to investigate such crimes. The essential linguistic question in this article is methodological and concerns the applicability of experimental work to exploration of online identity and identity disguise. Although we touch on empirical questions, such as the sufficiency of linguistic description that will enable convincing identity disguise, we do not explore the experimental results in detail. In spite of the preference within a range of discourse analytical paradigms for ‘naturally occurring’ data, we argue that not only does the term prove conceptually problematic, but in certain contexts, and particularly in the applied forensic context described, a rejection of experimentally elicited data would limit the possible types and extent of analyses. Thus, it would restrict the contribution that academic linguistics can make in addressing a serious social problem.
Abstract:
Technology discloses man’s mode of dealing with Nature, the process of production by which he sustains his life, and thereby also lays bare the mode of formation of his social relations, and of the mental conceptions that flow from them (Marx, 1990: 372)

My thesis is a sociological analysis of UK policy discourse for educational technology during the last 15 years. My framework is a dialogue between the Marxist-based critical social theory of Lieras and a corpus-based Critical Discourse Analysis (CDA) of UK policy for Technology Enhanced Learning (TEL) in higher education. Embedded in TEL is a presupposition: a deterministic assumption that technology has enhanced learning. This conceals a necessary debate that reminds us it is humans that design learning, not technology. By omitting people, TEL provides a vehicle for strong hierarchical or neoliberal agendas to make simplified claims politically, in the name of technology. My research has two main aims. Firstly, I share a replicable, mixed methodological approach for linguistic analysis of the political discourse of TEL. Quantitatively, I examine patterns in my corpus to question forms of ‘use’ around technology that structure a rigid basic argument which ‘enframes’ educational technology (Heidegger, 1977: 38). In a qualitative analysis of the findings, I ask to what extent policy discourse evaluates technology in one way, to support a Knowledge Based Economy (KBE) in a political economy of neoliberalism (Jessop, 2004; Fairclough, 2006). If technology is commodified as an external enhancement, it is expected to provide an ‘exchange value’ for learners (Marx, 1867). I therefore examine more closely what is prioritised and devalued in these texts. Secondly, I disclose a form of austerity in the discourse where technology, as an abstract force, undertakes tasks usually ascribed to humans (Lieras, 1996; Brey, 2003: 2).
This risks desubjectivisation and loss of power, and limits people’s relationships with technology and with each other. A view of technology in political discourse as complete without people closes off possibilities for broader dialectical (Fairclough, 2001, 2007) and ‘convivial’ (Illich, 1973) understandings of the intimate, material practice of engaging with technology in education. In opening the ‘black box’ of TEL via CDA I reveal talking points that are otherwise concealed. This allows me to be reflexive and self-critical through praxis, to confront my own assumptions about what the discourse conceals and what forms of resistance might be required. In so doing, I contribute to ongoing debates about networked learning, providing a context in which to explore educational technology as a nexus of technology, language and learning.
Abstract:
This article is the first linguistic analysis of a new category of lifestyle magazines in the German-speaking countries, based on methods of corpus linguistics and multimodal discourse analysis. Since the launch of the magazine LandLust in Germany in 2005, more than twenty so-called "land magazines" have appeared on the market, attracting millions of readers. Our research analyses land magazines as discursive events. We examine the specific combination of discourses that land magazines serve or create by looking at the semiotic practices – writing and images – through which they manifest themselves. Our results show that the magazine under scrutiny does not simply provide new forms of escapism but also positions itself politically, in subtle ways, as part of the traditional-conservative spectrum by reacting to metalinguistic discourses such as purism and feminist criticism.