980 results for Functional Written Language
Abstract:
Language and Identity in Englishes examines the core issues and debates surrounding the relationship between English, language and identity. Drawing on a range of international examples from the UK, US, China and India, Clark uses both cutting-edge fieldwork and her own original research to give a comprehensive account of the study of language and identity. With its accessible structure, international scope and the inclusion of leading research in the area, this book is ideal for any student taking modules in language and identity or sociolinguistics. Key features include:
• Discussion of language in relation to various aspects of identity, such as those connected with nation and region, as well as in relation to social aspects such as social class and race.
• A chapter on undertaking research that will equip students with appropriate research methods for their own projects.
• An analysis of language and identity within the context of written as well as spoken texts.
Abstract:
Previous research into formulaic language has focussed on specialised groups of people (e.g. L1 acquisition by infants and adult L2 acquisition), with ordinary adult native speakers of English receiving less attention. Additionally, whilst some features of formulaic language have been used as evidence of authorship (e.g. the Unabomber’s use of “you can’t eat your cake and have it too”), there has been no systematic investigation into this as a potential marker of authorship. This thesis reports the first full-scale study into the use of formulaic sequences by individual authors. The theory of formulaic language hypothesises that the formulaic sequences contained in the mental lexicon are shaped by experience combined with what each individual has found to be communicatively effective. Each author’s repertoire of formulaic sequences should therefore differ. To test this assertion, three automated approaches to the identification of formulaic sequences are tested on a specially constructed corpus containing 100 short narratives. The first approach explores a limited subset of formulaic sequences using recurrence across a series of texts as the criterion for identification. The second approach focuses on a word which frequently occurs as part of formulaic sequences and also investigates alternative non-formulaic realisations of the same semantic content. Finally, a reference list approach is used. Whilst claiming authority for any reference list can be difficult, the proposed method utilises internet examples derived from lists prepared by others, a procedure which, it is argued, is akin to asking large groups of judges to reach consensus about what is formulaic. The empirical evidence supports the notion that formulaic sequences have potential as a marker of authorship, since in some cases a Questioned Document was correctly attributed. Although this marker of authorship is not universally applicable, it does promise to become a viable new tool in the forensic linguist’s tool-kit.
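The first approach above (recurrence across a series of texts as the identification criterion) can be illustrated with a minimal sketch: the Python snippet below flags word n-grams that recur in several texts of a toy corpus. The corpus, the n-gram length and the recurrence threshold are assumptions chosen for illustration, not the thesis's actual data or settings.

```python
# Minimal sketch: flag word n-grams that recur across several texts
# as candidate formulaic sequences. Thresholds are illustrative only.
from collections import defaultdict

def ngrams(tokens, n):
    """Yield consecutive word n-grams from a token list."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def candidate_sequences(texts, n=3, min_texts=3):
    """Return n-grams that appear in at least `min_texts` different texts."""
    seen_in = defaultdict(set)          # n-gram -> set of text ids
    for text_id, text in enumerate(texts):
        tokens = text.lower().split()   # crude tokenisation, enough for the sketch
        for gram in ngrams(tokens, n):
            seen_in[gram].add(text_id)
    return {gram for gram, ids in seen_in.items() if len(ids) >= min_texts}

# Toy usage with made-up narratives
corpus = [
    "at the end of the day we went home",
    "at the end of the day nothing changed",
    "she said that at the end of the day it was fine",
    "the weather was fine that day",
]
print(candidate_sequences(corpus, n=5, min_texts=3))  # 5-grams shared by 3+ texts
```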
Abstract:
Many people think of language as words. Words are small, convenient units, especially in written English, where they are separated by spaces. Dictionaries seem to reinforce this idea, because entries are arranged as a list of alphabetically-ordered words. Traditionally, linguists and teachers focused on grammar and treated words as self-contained units of meaning, which fill the available grammatical slots in a sentence. More recently, attention has shifted from grammar to lexis, and from words to chunks. Dictionary headwords are convenient points of access for the user, but modern dictionary entries usually deal with chunks, because meanings often do not arise from individual words, but from the chunks in which the words occur. Corpus research confirms that native speakers of a language actually work with larger “chunks” of language. This paper will show that teachers and learners will benefit from treating language as chunks rather than words.
Abstract:
In the present state of the art of authorship attribution there seems to be an opposition between two approaches: cognitive and stylistic methodologies. It is proposed in this article that these two approaches are complementary and that the apparent gap between them can be bridged using Systemic Functional Linguistics (SFL), and in particular some of its theoretical constructs, such as codal variation. This article offers a theoretical explanation of why such a theory would resolve the debate between the two approaches and shows how these two views of authorship attribution are indeed complementary. Although the article is fundamentally theoretical, two example experimental trials are reported to show how this theory can be developed into a workable methodology for authorship attribution. In Trial 1, an SFL analysis was carried out on a small dataset consisting of three 300-word texts collected from three different authors whose socio-demographic backgrounds matched across a number of parameters. This trial led to some conclusions about developing a methodology based on SFL and suggested the development of a second trial, which might point towards a more accurate and useful methodology. In Trial 2, Biber's (1988) multidimensional framework is employed, and a methodology of authorship analysis based on this framework is proposed for future research. © 2013, EQUINOX PUBLISHING.
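A multidimensional analysis of the kind mentioned for Trial 2 rests on counting lexico-grammatical features per text and comparing standardised feature profiles. The sketch below illustrates only that general idea with a handful of crude regex proxies chosen for illustration; it is not Biber's (1988) actual feature set, dimensions, or the article's method.

```python
# Minimal sketch of a feature-counting step in the spirit of a
# multidimensional analysis: crude regex proxies, illustrative only.
import re
import statistics

FEATURES = {
    "first_person":     r"\b(i|we|me|us|my|our)\b",
    "passives":         r"\b(is|was|were|been|being)\s+\w+ed\b",
    "nominalisations":  r"\b\w+(tion|ment|ness|ity)\b",
    "that_forms":       r"\bthat\b",
}

def feature_profile(text):
    """Counts per 1,000 words for each feature."""
    words = max(len(text.split()), 1)
    return {name: 1000 * len(re.findall(pat, text.lower())) / words
            for name, pat in FEATURES.items()}

def standardise(profiles):
    """z-score each feature across the set of texts."""
    columns = []
    for name in FEATURES:
        vals = [p[name] for p in profiles]
        mean, sd = statistics.mean(vals), statistics.pstdev(vals) or 1.0
        columns.append([(v - mean) / sd for v in vals])
    # transpose back to one standardised profile per text
    return [dict(zip(FEATURES, col)) for col in zip(*columns)]

texts = ["We argue that the assessment of authorship is difficult.",
         "I went home and that was that.",
         "The examination was conducted by the department."]
print(standardise([feature_profile(t) for t in texts]))
```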
Abstract:
This paper investigates whether the position of adverb phrases in sentences is regionally patterned in written Standard American English, based on an analysis of a 25 million word corpus of letters to the editor representing the language of 200 cities from across the United States. Seven measures of adverb position were tested for regional patterns using the global spatial autocorrelation statistic Moran's I and the local spatial autocorrelation statistic Getis-Ord Gi*. Three of these seven measures were identified as exhibiting significant levels of spatial autocorrelation, contrasting the language of the Northeast with the language of the Southeast and the South Central states. These results demonstrate that continuous regional grammatical variation exists in American English and that regional linguistic variation exists in written Standard English.
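For readers unfamiliar with the global statistic mentioned above, the following sketch computes Moran's I directly from its definition for a set of city-level measurements and a spatial weights matrix. The toy values and the inverse-distance weighting scheme are assumptions for illustration, not the paper's actual data or weighting.

```python
# Minimal sketch: global Moran's I for a measure observed at several
# locations, computed directly from its definition with numpy.
import numpy as np

def morans_i(values, weights):
    """values: (n,) measurements; weights: (n, n) spatial weights with zero diagonal."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()            # deviations from the mean
    s0 = w.sum()                # sum of all spatial weights
    return (n / s0) * (z @ w @ z) / (z @ z)

# Toy example: four cities on a line, inverse-distance weights (assumed scheme)
coords = np.array([0.0, 1.0, 2.0, 10.0])
dist = np.abs(coords[:, None] - coords[None, :])
w = np.where(dist > 0, 1.0 / np.maximum(dist, 1e-9), 0.0)
adverb_measure = np.array([0.31, 0.29, 0.30, 0.12])  # made-up per-city rates
print(round(morans_i(adverb_measure, w), 3))
```

Values of Moran's I near +1 indicate that similar measurements cluster in space, values near 0 indicate no spatial pattern, and negative values indicate dispersion.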
Abstract:
Modern technology has moved on and completely changed the way people can use the telephone or mobile to interact with information held on computers. Well-developed “written speech analysis” does not work with “verbal speech”. The main purpose of our article is, firstly, to highlight these problems and, secondly, to show possible ways to solve them.
Abstract:
* The following text was originally published in the Proceedings of the Language Resources and Evaluation Conference held in Lisbon, Portugal, 2004, under the title "Towards Intelligent Written Cultural Heritage Processing - Lexical processing". I present here a revised version of that paper and add the latest efforts made at the Center for Computational Linguistics in Prague in the field under discussion.
Abstract:
Boyko Bl. Banchev - A rationale for and description of a programming language in a compositional style, intended for experimental and educational purposes, are presented. By “compositional” we mean a functional style of programming in which computation is a hierarchy of compositions and applications of functions. One of the language's data types is that of geometric figures, which can be obtained through simple rules of relative placement and thus also form hierarchical compositions. The language is strongly influenced by GeomLab, but differs from it significantly in a number of respects. The article discusses the main features of the language; its detailed description and its figure-construction capabilities will be presented in an accompanying publication.
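The abstract does not show the language itself, but the idea of computation as a hierarchy of compositions and applications of functions over figures can be illustrated generically. The Python sketch below uses hypothetical `beside`/`above`/`flip` combinators over simple text pictures; it is not the described language and not GeomLab, only an illustration of the compositional style.

```python
# Generic illustration of compositional, GeomLab-like picture building:
# a picture is a list of equal-length strings, combined by pure functions.
def beside(p, q):
    """Place two pictures of equal height side by side."""
    return [a + b for a, b in zip(p, q)]

def above(p, q):
    """Stack two pictures of equal width vertically."""
    return p + q

def flip(p):
    """Mirror a picture horizontally."""
    return [row[::-1] for row in p]

def show(p):
    print("\n".join(p))

tile = ["/|", " |"]                          # a tiny 2x2 "figure"
quad = above(beside(tile, flip(tile)),       # the whole picture is a
             beside(flip(tile), tile))       # hierarchy of compositions
show(quad)
```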
Abstract:
Functional programming has a lot to offer to the developers of global Internet-centric applications, but is often applicable only to a small part of the system or requires major architectural changes. The data model used for functional computation is often simply considered a consequence of the chosen programming style, although an inappropriate choice of model can make integration with the imperative parts much harder. In this paper we do the opposite: we start from a data model based on JSON and then derive the functional approach from it. We outline the identified principles and present Jsonya/fn, a low-level functional language that is defined in and operates on the selected data model. We use several Jsonya/fn implementations and the architecture of a recently developed application to show that our approach can improve interoperability and can achieve additional reuse of representations and operations at relatively low cost. ACM Computing Classification System (1998): D.3.2, D.3.4.
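Jsonya/fn's own syntax is not shown in the abstract, so the sketch below illustrates only the general principle of deriving a functional style from a JSON-shaped data model: pure functions that take and return plain JSON-compatible values (objects, arrays, strings, numbers), so imperative parts of a system can exchange the same representations. It is written in Python and is not Jsonya/fn code.

```python
# Illustration of a "data model first" functional style: every function is a
# pure mapping from JSON-compatible values to JSON-compatible values.
import json

def set_in(value, path, new):
    """Return a copy of `value` with `new` placed at the given key path (no mutation)."""
    if not path:
        return new
    key, rest = path[0], path[1:]
    if isinstance(value, dict):
        return {**value, key: set_in(value.get(key), rest, new)}
    if isinstance(value, list):
        return [set_in(v, rest, new) if i == key else v for i, v in enumerate(value)]
    return {key: set_in(None, rest, new)}

def get_in(value, path, default=None):
    """Read the value at a key path, or return `default` if the path is absent."""
    for key in path:
        try:
            value = value[key]
        except (KeyError, IndexError, TypeError):
            return default
    return value

doc = json.loads('{"user": {"name": "ada", "roles": ["dev"]}}')
doc2 = set_in(doc, ["user", "roles", 0], "admin")
print(get_in(doc2, ["user", "roles", 0]), get_in(doc, ["user", "roles", 0]))
# prints: admin dev   (the original document is unchanged)
```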
Abstract:
2000 Mathematics Subject Classification: C2P99.
Abstract:
This book presents a novel approach to discussing how to research language teacher cognition and practice. An introductory chapter by the editors and an overview of the research field by Simon Borg precede eight case studies written by new researchers, each of which focuses on one approach to collecting data. These approaches range from questionnaires and focus groups to think-aloud, stimulated recall, and oral reflective journals. Each case study is commented on by a leading expert in the field - JD Brown, Martin Bygate, Donald Freeman, Alan Maley, Jerry Gebhard, Thomas Farrell, Susan Gass, and Jill Burton. Readers are encouraged to enter the conversation by reflecting on a set of questions and tasks in each chapter.
Abstract:
Background: Recent morpho-functional evidence pointed out that abnormalities in the thalamus could play a major role in the expression of migraine neurophysiological and clinical correlates. Whether this phenomenon is primary or secondary to its functional disconnection from the brainstem remains to be determined. We used a Functional Source Separation algorithm of EEG signal to extract the activity of the different neuronal pools recruited at different latencies along the somatosensory pathway in interictal migraine without aura (MO) patients. Methods: Twenty MO patients and 20 healthy volunteers (HV) underwent EEG recording. Four ad-hoc functional constraints, two sub-cortical (FS14 at brainstem and FS16 at thalamic level) and two cortical (FS20 radial and FS22 tangential parietal sources), were used to extract the activity of successive stages of somatosensory information processing in response to separate left and right median nerve electrical stimulation. A band-pass digital filter (450-750 Hz) was applied offline in order to extract high-frequency oscillatory (HFO) activity from the broadband EEG signal. Results: For both stimulated sides, significantly reduced sub-cortical brainstem (FS14) and thalamic (FS16) HFO activations characterized MO patients when compared with HV. No difference emerged in the two cortical HFO activations between the two groups. Conclusions: The present results are the first neurophysiological evidence supporting the hypothesis that a functional disconnection of the thalamus from the subcortical monoaminergic system may underlie the abnormal interictal cortical information processing in migraine. Further studies are needed to investigate the precise directional connectivity across the entire primary subcortical and cortical somatosensory pathway in interictal MO. Written informed consent to publication was obtained from the patient(s).
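As a purely illustrative note on the filtering step described above, the sketch below applies a zero-phase 450-750 Hz band-pass filter to a single channel with SciPy. The sampling rate, filter order and synthetic signal are assumptions for the sketch, not the study's actual recording parameters or pipeline.

```python
# Minimal sketch of an offline 450-750 Hz band-pass step used to isolate
# high-frequency oscillatory (HFO) activity from a broadband signal.
# Sampling rate and filter order are assumed values for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 5000.0  # assumed sampling rate in Hz (must exceed 2 * 750 Hz)

def hfo_bandpass(signal, low=450.0, high=750.0, order=4, fs=FS):
    """Zero-phase Butterworth band-pass over one channel."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Synthetic one-second trace: a 20 Hz component plus a weak 600 Hz burst
t = np.arange(0, 1.0, 1.0 / FS)
broadband = np.sin(2 * np.pi * 20 * t) + 0.05 * np.sin(2 * np.pi * 600 * t)
hfo = hfo_bandpass(broadband)
print(hfo.shape, float(np.abs(hfo).max()))  # the 20 Hz component is removed
```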
Abstract:
The first study of its kind, Regional Variation in Written American English takes a corpus-based approach to map over a hundred grammatical alternation variables across the United States. A multivariate spatial analysis of these maps shows that grammatical alternation variables follow a relatively small number of common regional patterns in American English, which can be explained based on both linguistic and extra-linguistic factors. Based on this rigorous analysis of extensive data, Grieve identifies five primary modern American dialect regions, demonstrating that regional variation is far more pervasive and complex in natural language than is generally assumed. The wealth of maps and data and the groundbreaking implications of this volume make it essential reading for students and researchers in linguistics, English language, geography, computer science, sociology and communication studies.
Abstract:
In this article I first divide Forensic Linguistics into three sub-disciplines: the language of written legal texts, the spoken language of legal proceedings, and the linguist as expert witness, and then go on to give a small number of examples of the research undertaken in these three areas. For the language of written legal texts, I present work on the (in)comprehensibility of police cautions and of judges' instructions to juries. For the spoken language of legal proceedings, I report work on the problems of interpreted interaction and of vulnerable witnesses, and on the need for more detailed research comparing the interactive rules in adversarial and investigative systems. Finally, to illustrate the role of the linguist as expert witness, I report a trademark case, five different authorship attribution cases and three very different plagiarism cases, and I end by briefly reporting the contribution of linguists to the language assessment techniques used in the linguistic classification of asylum seekers. © Langage et société no 132 - juin 2010.