923 results for Linguistic input
Abstract:
In a time of rapid shift and loss of smaller, regional and minority languages, it becomes apparent that many of them continue to play a role as post-vernacular varieties. As Shandler (2006) points out for Yiddish in the United States, some languages serve the purpose of identity-building within a community even after they have ceased to be used as a vernacular for daily communication. This occurs, according to Shandler, through a number of cultural practices, such as amateur theatre, music and folklore, translation, attempts to learn the language in evening classes, etc. This paper will demonstrate that the paradigm developed by Shandler for Yiddish can be applied to other linguistic communities, by comparing the post-vernacular use of Yiddish with that of Low German in Northern Germany. It will focus on the linguistic strategies that individuals or groups of speakers apply in order to participate in a post-vernacular language community.
Abstract:
The focus of this paper is on the doctoral research training experienced by one of the authors and the ways in which the diverse linguistic and disciplinary perspectives of her two supervisors (co-authors of this paper) mediated the completion of her study. The doctoral candidate is a professional translator/interpreter and translation teacher. The paper describes why and how she identified her research area and then focused on the major research questions in collaboration with her two supervisors, who brought their differing perspectives from the field of linguistics to this translation research, even though they are not translators by profession or disciplinary background and do not speak Korean. In addition, the discussion considers the focus, purpose and theoretical orientation of the research itself (which addressed questions of readability in translated English-Korean texts through detailed analysis of a corpus and implications for professional translator training) as well as the supervisory and conceptual processes and practices involved. The authors contend that doctoral research of this kind can be seen as a mutual learning process and that inter-disciplinary research can make a contribution not only to the development of rigorous research in the field of translation studies but also to the other disciplinary fields involved.
Abstract:
As mobile technologies continue to penetrate increasingly diverse domains of use, we accordingly need to understand the feasibility of different interaction technologies across such varied domains. This case study describes an investigation into whether speech-based input is a feasible interaction option for use in a complex, and arguably extreme, environment of use – that is, lobster fishing vessels. We reflect on our approaches to bringing the “high seas” into lab environments for this purpose, comparing the results obtained via our lab and our field studies. Our hope is that the work presented here will go some way to enhancing the literature in terms of approaches to bringing complex real-world contexts into lab environments for the purpose of evaluating the feasibility of specific interaction technologies.
Abstract:
This paper discusses the first of three studies which collectively represent a convergence of two ongoing research agendas: (1) the empirically-based comparison of the effects of evaluation environment on mobile usability evaluation results; and (2) the effect of environment - in this case lobster fishing boats - on achievable speech-recognition accuracy. We describe, in detail, our study and outline our results to date based on preliminary analysis. Broadly speaking, the potential for effective use of speech for data collection and vessel control looks very promising - surprisingly so! We outline our ongoing analysis and further work.
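The abstract does not specify how speech-recognition accuracy was measured; a common metric for such studies is the word error rate (WER), computed as a word-level edit distance against a reference transcript. A minimal sketch (the example phrases are invented for illustration):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("set pot one at depth ten",
                      "set pot one at depth tan"))  # 1 substitution / 6 words
```

In a noisy environment such as a fishing vessel, the gap between lab and field WER on the same command vocabulary is precisely the kind of feasibility evidence the study describes.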
Abstract:
In Old and Middle French (12th–16th centuries), va ["goes"] + inf was used in narrations in the past. A similar usage seems to have reappeared and to be spreading today. However, the old construction combined with past tenses, whereas the new one is found only with forms anchored in the present and future. We argue that the contemporary construction derives not from the old one, but from a metanarrative construction. On the basis of its future interpretation, va + inf aids the organization of the narration, announcing subsequent events through a hypernymic process. The periphrasis thus approaches a narrative value by projecting the time of events onto that of narration. With the disappearance of all deictic markers, the go-periphrases are no longer hypernyms: they appear on the same temporal line of events as the neighboring situations and are understood as fully completed. © John Benjamins Publishing Company.
Abstract:
We analyze a Big Data set of geo-tagged tweets for a year (Oct. 2013–Oct. 2014) to understand regional linguistic variation in the U.S. Prior work on regional linguistic variation usually took a long time to collect data and focused on either rural or urban areas. Geo-tagged Twitter data offers an unprecedented database with rich linguistic representation of fine spatiotemporal resolution and continuity. From the one-year Twitter corpus, we extract lexical characteristics for Twitter users by summarizing the frequencies of a set of lexical alternations that each user has used. We spatially aggregate and smooth each lexical characteristic to derive county-based linguistic variables, from which orthogonal dimensions are extracted using principal component analysis (PCA). Finally, a regionalization method is used to discover hierarchical dialect regions from the PCA components. The regionalization results reveal interesting linguistic regional variations in the U.S. The discovered regions not only confirm past research findings in the literature but also provide new insights and a more detailed understanding of very recent linguistic patterns in the U.S.
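The core of the pipeline described above – county-level linguistic variables reduced to orthogonal dimensions via PCA – can be sketched as follows. The data here are random placeholders standing in for smoothed frequencies of lexical alternations (e.g. "soda" vs. "pop"); the matrix sizes are illustrative only:

```python
import numpy as np

# Hypothetical data: rows = counties, columns = county-aggregated, smoothed
# frequencies of lexical alternations. Real data would come from the corpus.
rng = np.random.default_rng(0)
X = rng.random((8, 5))  # 8 counties, 5 linguistic variables

# PCA via SVD of the centred matrix: orthogonal dimensions of variation.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project each county onto the first two principal components; a
# regionalization method would then cluster counties on these scores
# (typically with a spatial-contiguity constraint) to form dialect regions.
components = Xc @ Vt[:2].T
print(components.shape)  # (8, 2)
```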
Abstract:
This letter compares two nonlinear media for simultaneous carrier recovery and generation of frequency symmetric signals from a 42.7-Gb/s nonreturn-to-zero binary phase-shift-keyed input by exploiting four-wave mixing in a semiconductor optical amplifier and a highly nonlinear optical fiber for use in a phase-sensitive amplifier.
Abstract:
The concept of plagiarism is not uncommonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of ‘moral’, non-material goods has evolved into the right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The solution to plagiarism therefore falls most often under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from universal (Howard & Robillard, 2008). Although in different ways, Howard (1995) and Scollon (1994, 1995) argued, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that the concept of plagiarism cannot be studied on the assumption that one definition is clearly understandable by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is a particular problem in non-native writing in English, as did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If, among higher education students, plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, then a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist’s unfamiliarity with the text or topic it should be considered ‘positive plagiarism’ (Howard, 1995: 796) and hence not an offense. Determining the intention behind instances of plagiarism therefore determines the nature of the disciplinary action adopted.
Unfortunately, in order to demonstrate the intention to deceive and charge students with accusations of plagiarism, teachers necessarily have to position themselves as ‘plagiarism police’, although it has been argued otherwise (Robillard, 2008). Practice demonstrates that in their daily activities teachers find themselves required to command investigative skills and tools that they most often lack. We thus claim that the ‘intention to deceive’ cannot be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that plagiarism is generally immoral but not illegal, and Goldstein (2003) draws the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism often connotes an illegal appropriation of ideas. Previously, she had demonstrated, through a comparison of four Spanish translations of Shakespeare’s Julius Caesar, that linguistic evidence can establish instances of plagiarism (Turell, 2004). This challenge is also reinforced by the practice of international organisations, such as the IEEE, for whom plagiarism potentially has ‘severe ethical and legal consequences’ (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common – and what academia usually lacks – is their focus on the legal nature of plagiarism. We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence on the topic is still being developed.
In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist’s intention or otherwise, this potential is restricted by the ability to identify a text as suspect of plagiarism. In an era of such massive textual production, ‘policing’ plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, a great deal of research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection has of necessity to consider not only concepts from education and computational linguistics, but also forensic linguistics, especially if it is intended to counter claims of being a ‘simplistic response’ (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with a related word from the same semantic field, i.e. a synonym, a subordinate, a superordinate, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate between instances of ‘patchwriting’ and instances of plagiarism. This research demonstrates that when the plagiarism is inadvertent, the referencing verbs are borrowed from the original in an attempt to construct the new text cohesively, and that when it is intentional, the plagiarist has made an effort to prevent the reader from identifying the text as plagiarism. In some of these cases, the referencing elements prove able to identify direct quotations and thus ‘betray’ and denounce the plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify them as proper paraphrasing and not – mistakenly and simplistically – as plagiarism.
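The substitution on the paradigmatic axis described above can be illustrated with a small sketch. The synonym table and example sentences here are invented stand-ins for a real semantic-field resource (a thesaurus or wordnet); the paper's own method is not reproduced:

```python
# Hypothetical synonym table standing in for a semantic-field resource;
# the word groupings are illustrative only.
SEMANTIC_FIELD = {
    "demonstrate": {"show", "prove", "establish"},
    "borrow": {"appropriate", "take", "copy"},
}

def flag_substitutions(original_tokens, suspect_tokens):
    """Compare two aligned token sequences and flag positions where a
    lexical item was replaced by a related word from the same semantic
    field rather than kept or changed to something unrelated."""
    flags = []
    for pos, (a, b) in enumerate(zip(original_tokens, suspect_tokens)):
        if a != b and b in SEMANTIC_FIELD.get(a, set()):
            flags.append((pos, a, b))
    return flags

print(flag_substitutions(
    ["the", "results", "demonstrate", "the", "effect"],
    ["the", "results", "show", "the", "effect"],
))  # [(2, 'demonstrate', 'show')]
```

A real analysis would also need to handle the word-order changes the abstract mentions, e.g. by aligning the sequences before comparison rather than assuming position-by-position correspondence.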
Abstract:
Engineers' logbooks are an important part of the CDIO process, as a prequel to the logbooks they will be expected to keep in industry. Previously, however, students' logbooks were insufficient, and students did not appear to appreciate the importance of the logbooks or how they would be assessed. In an attempt to improve the students' understanding and the quality of their logbooks, a group of ~100 first-year CDIO students were asked to develop a marking matrix collaboratively with the tutors. The anticipated outcome was that students would have more ownership of, and a deeper understanding of, the logbook and what is expected from the student during assessment. A revised marking matrix was developed in class, and a short questionnaire was administered on delivery of the adapted matrix to gauge the students' response to the process. Marks from the logbooks were collected twice during teaching periods one and two and compared to marks from previous years. This poster will present the methodology and outcomes of this venture.
Abstract:
While conventional Data Envelopment Analysis (DEA) models set targets for each operational unit, this paper considers the problem of input/output reduction in a centralized decision-making environment. The purpose of this paper is to develop an approach to the input/output reduction problem that typically occurs in organizations with a centralized decision-making environment. This paper shows that DEA can make an important contribution to this problem and discusses how a DEA-based model can be used to determine an optimal input/output reduction plan. An application in the banking sector with limitations on IT investment shows the usefulness of the proposed method.
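The centralized model of the paper is not reproduced here, but the building block it extends – the standard input-oriented CCR envelopment LP that scores each unit separately – can be sketched. The three-DMU data set is a toy example, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (illustrative): one input, one output, three DMUs.
X = np.array([[2.0, 4.0, 8.0]])  # inputs,  shape (m, n)
Y = np.array([[2.0, 3.0, 4.0]])  # outputs, shape (s, n)

def ccr_input_efficiency(o):
    """Input-oriented CCR envelopment LP for DMU o:
       min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
    A centralized model would instead optimise total input reduction
    across all DMUs at once."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)          # decision vector z = [theta, lam_1..lam_n]
    c[0] = 1.0                   # minimise theta
    A_in = np.hstack([-X[:, [o]], X])            # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y@lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(0, None)] * (1 + n))
    return res.fun

print([round(ccr_input_efficiency(o), 3) for o in range(3)])  # [1.0, 0.75, 0.5]
```

Each score is the fraction to which that DMU's inputs could be contracted while still producing its outputs within the technology spanned by all DMUs.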
Abstract:
The past two decades have seen a plethora of papers and academic research conducted on investigative interviews with victims, witnesses and suspected offenders, with a particular focus on questioning techniques and typologies. However, despite this research, significant discrepancies remain amongst academic researchers and practitioners over how best to describe types of questions. This article considers the available literature relating to interviews with children and adults from both a psychological and a linguistic perspective. In particular, we examine how different types of questions are described, and explore the discrepancies between competing definitions. © 2010, Equinox Publishing.
Abstract:
Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. Such imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA to reflect the decision-makers' intuition and subjective judgements. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA; (2) we propose a new fuzzy additive DEA model derived from the α-level approach; and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
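The α-level approach mentioned above rests on the α-cut: at each membership level α, a fuzzy number collapses to a crisp interval, so the fuzzy DEA model can be solved as a pair of ordinary LPs over the interval bounds. A minimal sketch for triangular fuzzy numbers (the "about 4" example is invented):

```python
def alpha_cut(tri, alpha):
    """Alpha-cut [lower, upper] of a triangular fuzzy number (a, b, c),
    whose membership rises linearly from a to b and falls from b to c."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

# A fuzzy input "about 4", represented here as the triangle (3, 4, 6):
print(alpha_cut((3.0, 4.0, 6.0), 0.0))  # (3.0, 6.0) -- the full support
print(alpha_cut((3.0, 4.0, 6.0), 1.0))  # (4.0, 4.0) -- the crisp core
print(alpha_cut((3.0, 4.0, 6.0), 0.5))  # (3.5, 5.0)
```

Sweeping α from 0 to 1 yields a family of interval-valued DEA problems whose optimistic and pessimistic solutions bound each DMU's fuzzy efficiency score.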