988 results for written language
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language - without syntactic or semantic extensions - into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical, self-contained model of concurrency which enables a simplified second model for implementing the compiling process. There is a further presentation of principles that, if followed, maximise the potential levels of parallelism.

Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs can be mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronisation of objects. Further, the model is sufficient that a compiler can be, and has been, practically built.

Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute-grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase.

Programming Principles. The principles presented are based upon information hiding, sharing and containment of objects, and the division of methods on the basis of a command/query distinction. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles arise naturally from good programming practice.

Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e. no parallel primitives are added, and the parallel program is modelled to execute with semantics equivalent to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
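The command/query division the abstract mentions can be illustrated with a minimal sketch. The example below is in Python rather than Eiffel, and the `Account` class and its methods are hypothetical, invented purely for illustration: a query reads state and returns a value with no side effects, while a command mutates state and returns nothing, which is the property that lets a parallelising compiler run queries on the same object concurrently while serialising its commands.

```python
# Hypothetical sketch (Python, not Eiffel) of the command/query division.
# Queries read state and return a value; commands change state and return
# nothing. A parallelising compiler may evaluate queries on one object
# concurrently, but must serialise its commands, since only commands mutate.

class Account:
    def __init__(self, balance):
        self._balance = balance

    # Query: no side effects, so safe to evaluate in parallel.
    def balance(self):
        return self._balance

    # Command: mutates state and returns nothing, so must be serialised.
    def deposit(self, amount):
        self._balance += amount

acct = Account(100)
acct.deposit(50)        # command: must run before any later query
print(acct.balance())   # query -> 150
```

The design choice the thesis relies on is exactly this separation: because queries are observationally pure, their relative order is irrelevant, and only the commands impose sequencing constraints on the parallel schedule.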
Abstract:
Could language be a reason why women are under-represented at senior level in the business world? The Language of Female Leadership investigates how female leaders actually use language to achieve their business and relational goals. The author proposes that the language of women leaders is shaped by the type of corporation they work for. Based on the latest research, three types of 'gendered corporation' appear to affect the way women interact with colleagues: the male-dominated, the gender-divided and the gender-multiple. This book shows that senior women have to carry out extra 'linguistic work' to make their mark in the boardroom. In male-dominated and gender-divided corporations, women must develop an extraordinary linguistic expertise just to survive. In gender-multiple corporations, this linguistic expertise helps them to be highly regarded and effective leaders. Judith Baxter lectures in Applied Linguistics at the University of Aston. She has written and edited many publications in the field of language and gender, language and education and the language of leadership. She won a government award to conduct a major research study in the language of female leadership.
Abstract:
This paper examines the extent to which institutions, the Low German cultural scene and individual speakers of Low German make use of modern communication technologies such as the Internet, and whether computer-mediated communication can help to halt the decline of Low German. The basic approach is sociolinguistic, understanding the Internet as a space for social action in which individuals and institutions communicate. From such a perspective, the focus of interest is less the medium or the genre than the communicating individual and the speech community - in this case, the virtual speech community. Based on studies that analyse the potential of computer-mediated communication (cmc) to help fight language shift in lesser-used languages, this paper discusses the situation of Low German in Northern Germany. Over the last three decades, Low German has lost more than half of its active speakers. The article raises the question of whether and, if so, how Low German speakers make use of cmc to stem this tide. Following a sociolinguistic approach focussed on the individual speakers who use the Internet as a space for social interaction, it gives an overview of the discursive field of Low German on the Internet and analyses in detail the most popular Low German discussion board. It shows that one of the main obstacles to a more successful use of cmc can be found in speakers' complex attitude toward written Low German. © Franz Steiner Verlag Stuttgart.
Abstract:
Key features include:
• Discussion of language in relation to various aspects of identity, such as those connected with nation and region, as well as social aspects such as social class and race.
• A chapter on undertaking research that will equip students with appropriate research methods for their own projects.
• An analysis of language and identity within the context of written as well as spoken texts.
Language and Identity in Englishes examines the core issues and debates surrounding the relationship between English, language and identity. Drawing on a range of international examples from the UK, US, China and India, Clark uses both cutting-edge fieldwork and her own original research to give a comprehensive account of the study of language and identity. With its accessible structure, international scope and the inclusion of leading research in the area, this book is ideal for any student taking modules in language and identity or sociolinguistics.
Abstract:
Previous research into formulaic language has focussed on specialised groups of people (e.g. L1 acquisition by infants and adult L2 acquisition) with ordinary adult native speakers of English receiving less attention. Additionally, whilst some features of formulaic language have been used as evidence of authorship (e.g. the Unabomber’s use of you can’t eat your cake and have it too) there has been no systematic investigation into this as a potential marker of authorship. This thesis reports the first full-scale study into the use of formulaic sequences by individual authors. The theory of formulaic language hypothesises that formulaic sequences contained in the mental lexicon are shaped by experience combined with what each individual has found to be communicatively effective. Each author’s repertoire of formulaic sequences should therefore differ. To test this assertion, three automated approaches to the identification of formulaic sequences are tested on a specially constructed corpus containing 100 short narratives. The first approach explores a limited subset of formulaic sequences using recurrence across a series of texts as the criterion for identification. The second approach focuses on a word which frequently occurs as part of formulaic sequences and also investigates alternative non-formulaic realisations of the same semantic content. Finally, a reference list approach is used. Whilst claiming authority for any reference list can be difficult, the proposed method utilises internet examples derived from lists prepared by others, a procedure which, it is argued, is akin to asking large groups of judges to reach consensus about what is formulaic. The empirical evidence supports the notion that formulaic sequences have potential as a marker of authorship since in some cases a Questioned Document was correctly attributed. 
Although this marker of authorship is not universally applicable, it does promise to become a viable new tool in the forensic linguist’s tool-kit.
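The first of the three approaches described above, identifying formulaic sequences by recurrence across a series of texts, can be sketched roughly as follows. This is not the thesis's actual procedure; the function names, the n-gram length and the recurrence threshold are all illustrative assumptions.

```python
# Illustrative sketch of a recurrence criterion for formulaic sequences:
# treat a word n-gram as potentially formulaic if it occurs in at least
# `min_texts` distinct texts. Thresholds and names are invented here.

from collections import Counter

def ngrams(text, n=3):
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def recurrent_sequences(texts, n=3, min_texts=2):
    """Return n-grams occurring in at least `min_texts` distinct texts."""
    doc_freq = Counter()
    for text in texts:
        for gram in set(ngrams(text, n)):   # count each text at most once
            doc_freq[gram] += 1
    return {g for g, df in doc_freq.items() if df >= min_texts}

texts = [
    "at the end of the day it worked",
    "at the end of the day we left",
    "the cat sat on the mat",
]
print(recurrent_sequences(texts))  # includes 'at the end', 'the end of', ...
```

Counting each text at most once (via `set`) matters: recurrence across authors, not raw frequency within one text, is what makes a sequence a candidate shared formula rather than one writer's habit.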
Abstract:
Many people think of language as words. Words are small, convenient units, especially in written English, where they are separated by spaces. Dictionaries seem to reinforce this idea, because entries are arranged as a list of alphabetically-ordered words. Traditionally, linguists and teachers focused on grammar and treated words as self-contained units of meaning, which fill the available grammatical slots in a sentence. More recently, attention has shifted from grammar to lexis, and from words to chunks. Dictionary headwords are convenient points of access for the user, but modern dictionary entries usually deal with chunks, because meanings often do not arise from individual words, but from the chunks in which the words occur. Corpus research confirms that native speakers of a language actually work with larger “chunks” of language. This paper will show that teachers and learners will benefit from treating language as chunks rather than words.
Abstract:
This paper investigates whether the position of adverb phrases in sentences is regionally patterned in written Standard American English, based on an analysis of a 25 million word corpus of letters to the editor representing the language of 200 cities from across the United States. Seven measures of adverb position were tested for regional patterns using the global spatial autocorrelation statistic Moran's I and the local spatial autocorrelation statistic Getis-Ord Gi*. Three of these seven measures were identified as exhibiting significant levels of spatial autocorrelation, contrasting the language of the Northeast with the language of the Southeast and the South Central states. These results demonstrate that continuous regional grammatical variation exists in American English and that regional linguistic variation exists in written Standard English.
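The global statistic used above, Moran's I, can be computed directly from its standard definition. The sketch below is a minimal pure-Python version; the four locations, their adverb-position rates and the neighbour weights are invented for illustration, not data from the study.

```python
# Minimal sketch of the global spatial autocorrelation statistic Moran's I:
#   I = (n / S0) * sum_ij w[i][j]*(x[i]-mean)*(x[j]-mean) / sum_i (x[i]-mean)^2
# where S0 is the sum of all weights. w[i][j] is a spatial weight, e.g. 1
# for neighbouring cities and 0 otherwise, with w[i][i] = 0.

def morans_i(x, w):
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    s0 = sum(w[i][j] for i in range(n) for j in range(n))
    return (n / s0) * (num / den)

# Invented example: four locations on a line, each linked to its neighbours.
x = [1.0, 2.0, 8.0, 9.0]           # e.g. rate of sentence-initial adverbs
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(morans_i(x, w))  # -> 0.4: positive, so similar values cluster in space
```

A significantly positive I, as here, means neighbouring locations have similar values, which is exactly the kind of regional clustering the three significant adverb-position measures exhibit.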
Abstract:
Modern technology has moved on and completely changed the way people can use the telephone or mobile phone to interact with information held on computers. Well-developed "written speech analysis" does not work with "verbal speech". The main purpose of our article is, firstly, to highlight these problems and, secondly, to show possible ways to solve them.
Abstract:
* The following text was originally published in the Proceedings of the Language Resources and Evaluation Conference held in Lisbon, Portugal, 2004, under the title of "Towards Intelligent Written Cultural Heritage Processing - Lexical processing". I present here a revised version of the aforementioned paper, adding the latest efforts made at the Center for Computational Linguistics in Prague in the field under discussion.
Abstract:
2000 Mathematics Subject Classification: C2P99.
Abstract:
This book presents a novel approach to discussing how to research language teacher cognition and practice. An introductory chapter by the editors and an overview of the research field by Simon Borg precede eight case studies written by new researchers, each of which focuses on one approach to collecting data. These approaches range from questionnaires and focus groups to think aloud, stimulated recall, and oral reflective journals. Each case study is commented on by a leading expert in the field - JD Brown, Martin Bygate, Donald Freeman, Alan Maley, Jerry Gebhard, Thomas Farrell, Susan Gass, and Jill Burton. Readers are encouraged to enter the conversation by reflecting on a set of questions and tasks in each chapter.
Abstract:
The first study of its kind, Regional Variation in Written American English takes a corpus-based approach to map over a hundred grammatical alternation variables across the United States. A multivariate spatial analysis of these maps shows that grammatical alternation variables follow a relatively small number of common regional patterns in American English, which can be explained based on both linguistic and extra-linguistic factors. Based on this rigorous analysis of extensive data, Grieve identifies five primary modern American dialect regions, demonstrating that regional variation is far more pervasive and complex in natural language than is generally assumed. The wealth of maps and data and the groundbreaking implications of this volume make it essential reading for students and researchers in linguistics, English language, geography, computer science, sociology and communication studies.
Abstract:
In this article I first divide Forensic Linguistics into three sub-disciplines: the language of written legal texts, the spoken language of legal proceedings, and the linguist as expert witness, and then go on to give a small number of examples of the research undertaken in these three areas. For the language of written legal texts, I present work on the (in)comprehensibility of police cautions and of judges' instructions to juries. For the spoken language of legal proceedings, I report work on the problems of interpreted interaction and of vulnerable witnesses, and the need for more detailed research comparing the interactive rules in adversarial and investigative systems. Finally, to illustrate the role of the linguist as expert witness, I report a trademark case, five different authorship attribution cases and three very different plagiarism cases, and I end by briefly reporting the contribution of linguists to language assessment techniques used in the linguistic classification of asylum seekers. © Langage et société no 132 - juin 2010.
Abstract:
Corwin and Wilcox (1985) sent surveys to more than 100 American colleges and universities to determine their policies on accepting American Sign Language (ASL) as a foreign language. Their results indicated that 81% of those surveyed rejected ASL as a foreign/modern language equivalent. The most frequently stated objection to ASL was that it lacked a culture. Some of the other objections to ASL were: ASL is not foreign; there is no written form and therefore no original body of literature; it is a derivative of English; and it is indigenous to the United States and hence not foreign. Building on the work of Corwin and Wilcox, this study sent surveys to 222 American colleges and universities. Noting an expanding cognizance and social awareness of ASL and deafness (as seen in the increasing number of movies, plays, television programs, the Americans with Disabilities Act, and related news stories), this study sought to find out whether ASL was now considered an acceptable foreign language equivalent. The hypothesis of this study was that change had occurred since the 1985 study: that the percentage of post-secondary schools accepting ASL as a foreign/modern language equivalent had increased significantly. The 165 colleges and universities that responded to this author's survey confirmed there has been a significant shift towards the acceptance of ASL. Only 50% of the respondents objected to ASL as a foreign language equivalent, a significant decrease from the 1985 findings. Of those who objected to granting ASL foreign language credit, the reasons were similar to those of the Corwin and Wilcox study, except that the belief in an absence of a Deaf culture dropped from the top reason listed to the fifth. That ASL is not foreign was the most frequent objection in this study.
One important change which may account for the increased acceptance of ASL is that 16 states (compared to 10 in 1985) now have policies stating that ASL is acceptable as a foreign language equivalent. Two-year colleges, in this study, were more likely to accept ASL than were four-year colleges and universities. Neither two- nor four-year colleges and universities are likely to include ASL in their foreign language departments, and most schools that have foreign language entrance requirements are unlikely to accept ASL. In colleges and universities where ASL was already offered in some department within the system, there was a significantly higher likelihood that foreign language credit was given for ASL. Respondents from states with laws governing the inclusion of ASL did not usually know their state had a policy. Most respondents, 84%, indicated their knowledge on the topic of ASL was fair to poor.