995 results for linguistic need


Relevance:

60.00%

Publisher:

Abstract:

The evolution of population movements in Vitoria-Gasteiz has brought immigration to the city, and with it new needs arising from this phenomenon. Among these is the linguistic need, which is fundamental to meeting other needs of everyday life. Several organisations work in this field with the aim of helping these people learn Basque or Spanish. This study analyses the situation of these associations and, through the creation of a website, provides newcomers with information on where and how to learn the local languages, as well as facilitating contact between the organisations.

Relevance:

30.00%

Publisher:

Abstract:

In this article, we take a close look at the literacy demands of one task from the ‘Marvellous Micro-organisms Stage 3 Life and Living’ Primary Connections unit (Australian Academy of Science, 2005). One lesson from the unit, ‘Exploring Bread’ (pp. 4-8), asks students to ‘use bread labels to locate ingredient information and synthesise understanding of bread ingredients’. We draw upon a framework offered by the New London Group (2000), that of linguistic, visual and spatial design, to consider in more detail three bread wrappers and, from there, the complex literacies that students need to interrelate to undertake the required task. We find that although bread wrappers are an example of an everyday science text, their linguistic, visual and spatial designs and their interrelationships are not trivial. We conclude by reinforcing the need for teachers of science to consider how the complex design elements of everyday science texts and their interrelated literacies are made visible through instructional practice.

Relevance:

30.00%

Publisher:

Abstract:

Increasing global competitiveness has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost. To reach these goals, they need good-quality components from suppliers at optimum price and lead time. This has forced companies to adopt improvement practices such as lean manufacturing, Just in Time (JIT) and effective supply chain management. Applying new improvement techniques and tools causes higher establishment costs and more Information Delay (ID). Conversely, these techniques may reduce the risk of stock-outs and improve supply chain flexibility, giving better overall performance. Practitioners, however, are unable to measure the overall effects of these improvement techniques with a standard evaluation model, so an effective overall supply chain performance evaluation model is essential for suppliers as well as manufacturers to assess their companies under different supply chain strategies. Literature on lean supply chain performance evaluation is comparatively limited, and most models assume random values for performance variables. The purpose of this paper is to propose an effective supply chain performance evaluation model using triangular linguistic fuzzy numbers and to recommend optimum ranges for performance variables for lean implementation. The model considers all the supply chain performance criteria (input, output and flexibility), converts the values to triangular linguistic fuzzy numbers and evaluates overall supply chain performance under different situations. Results show that with the proposed performance measurement model, the improvement area for each variable can be accurately identified.
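As a concrete illustration of the kind of computation the abstract describes, the sketch below maps linguistic ratings to triangular fuzzy numbers, averages them, and defuzzifies the result. The five-term scale, the sample ratings, and the centroid defuzzification are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch: linguistic ratings -> triangular fuzzy numbers
# (a, b, c), aggregation, and defuzzification. The scale is an assumed
# example, not the scale used in the paper.

LINGUISTIC_SCALE = {
    "poor":      (0.0, 0.0, 0.25),
    "fair":      (0.0, 0.25, 0.5),
    "good":      (0.25, 0.5, 0.75),
    "very good": (0.5, 0.75, 1.0),
    "excellent": (0.75, 1.0, 1.0),
}

def aggregate(ratings):
    """Average a list of linguistic ratings component-wise as TFNs."""
    fuzzies = [LINGUISTIC_SCALE[r] for r in ratings]
    n = len(fuzzies)
    return tuple(sum(f[i] for f in fuzzies) / n for i in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    a, b, c = tfn
    return (a + b + c) / 3.0

ratings = ["good", "very good", "fair"]   # hypothetical expert ratings
score = defuzzify(aggregate(ratings))
```

The same pattern extends to weighted criteria: weight each criterion's fuzzy number before summing, then compare defuzzified scores across supply chain strategies.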

Relevance:

30.00%

Publisher:

Abstract:

As increasing numbers of Chinese language learners choose to learn English online (CNNIC, 2012), there is a need to investigate popular websites and their language learning designs. This paper reports on the first stage of a study that analysed the pedagogical, linguistic and content features of 25 Chinese English Language Learning (ELL) websites ranked according to their value and importance to users. The website ranking was undertaken using a system known as PageRank. The aim of the study was to identify the features characterising popular sites as opposed to those of less popular sites for the purpose of producing a framework for ELL website design in the Chinese context. The study found that a pedagogical focus with developmental instructional materials accommodating diverse proficiency levels was a major contributor to website popularity. Chinese language use for translations and teaching directives and intermediate level English for learning materials were also significant features. Content topics included Anglophone/Western and non-Anglophone/Eastern contexts. Overall, popular websites were distinguished by their mediation of access to and scaffolded support for ELL.
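PageRank, the ranking system named in the abstract, can be sketched in a few lines of power iteration. The toy link graph and parameter values below are illustrative, not the study's data.

```python
# Minimal PageRank sketch via power iteration. Each node's rank is a mix
# of a uniform teleport term and rank received from inbound links.

def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping node -> list of outbound neighbours."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if not outs:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v in outs:
                    new[v] += damping * rank[u] / len(outs)
        rank = new
    return rank

# Toy graph: "d" links out but receives no links, so it ranks lowest.
graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a", "c"]}
ranks = pagerank(graph)
```

In practice, website rankings of this kind come from a crawl of the live link graph; the iteration above is the underlying computation.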

Relevance:

30.00%

Publisher:

Abstract:

A user’s query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques ignore information about the dependencies that exist between words in natural language. However, more recent approaches have demonstrated that by explicitly modeling associations between terms significant improvements in retrieval effectiveness can be achieved over those that ignore these dependencies. State-of-the-art dependency-based approaches have been shown to primarily model syntagmatic associations. Syntagmatic associations infer a likelihood that two terms co-occur more often than by chance. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process will improve retrieval effectiveness. This article develops and evaluates a new query expansion technique that is based on a formal, corpus-based model of word meaning that models syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval it demonstrates significant improvements in retrieval effectiveness over a strong baseline system, based on a commercial search engine.
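The contrast the article draws can be illustrated on a toy corpus: syntagmatic associates co-occur with a term in the same context, while paradigmatic associates occur in similar contexts and could substitute for it. The corpus and the neighbour-based definition of context below are simplified assumptions, not the article's model.

```python
# Toy illustration of syntagmatic vs paradigmatic associations.
from collections import Counter

corpus = [
    "the dog chased the cat",
    "the puppy chased the ball",
    "a dog bit the postman",
    "a puppy bit the toy",
]
docs = [doc.split() for doc in corpus]

def syntagmatic(term):
    """Terms that co-occur with `term` in the same sentence."""
    counts = Counter()
    for words in docs:
        if term in words:
            counts.update(w for w in words if w != term)
    return counts

def paradigmatic(term):
    """Terms sharing (left, right) neighbour contexts with `term`."""
    def contexts(t):
        ctx = set()
        for words in docs:
            for i, w in enumerate(words):
                if w == t:
                    ctx.add((words[i - 1] if i else None,
                             words[i + 1] if i + 1 < len(words) else None))
        return ctx
    target = contexts(term)
    return {w: len(target & contexts(w))
            for words in docs for w in words
            if w != term and target & contexts(w)}
```

Here "dog" and "puppy" never co-occur, yet they share contexts, so only a paradigmatic measure relates them; a query-expansion model using both signals can therefore add substitutable terms that pure co-occurrence misses.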

Relevance:

30.00%

Publisher:

Abstract:

Localization of technology is now widely applied to the preservation and revival of indigenous cultures around the world, most commonly through translation into indigenous languages, which has been proven to increase the adoption of technology. However, this form of localization excludes two demographic groups that are key to the effectiveness of localization efforts in the African context: the younger generation (under the age of thirty), with an Anglo-American cultural outlook and little need for or interest in their indigenous culture; and the older generation (over the age of fifty), who are very knowledgeable about their indigenous culture but have little or no knowledge of how to use a computer. This paper presents the design of a computer game engine that can provide an interface for both technology learning and indigenous culture learning across both generations. Four indigenous Ugandan games are analyzed and assessed for their attractiveness to both generations, to both rural and urban populations, and for their propensity to develop IT skills in the older generation.

Relevance:

30.00%

Publisher:

Abstract:

Teachers in the Pacific region have often signalled the need for more locally produced information texts in both the vernacular and English, to engage their readers with local content and to support literacy development across the curriculum. The Information Text Awareness Project (ITAP), initially informed by the work of Nea Stewart-Dore, has provided a means to address this need through supporting local teachers to write their own information texts. The article reports on the impact of an ITAP workshop carried out in Nadi, Fiji in 2012. Nine teacher volunteers from the project trialled the use of the texts in their classrooms with positive results in relation to student learning and belief in themselves as writers.

Relevance:

30.00%

Publisher:

Abstract:

We examine institutional work from a discursive perspective and argue that reasonability, the existence of acceptable justifying reasons for beliefs and practices, is a key part of legitimation. Drawing on the philosophy of language, we maintain that institutional work takes place within a ‘space of reasons’ determined by widely held assumptions about what is reasonable and what is not. We argue that reasonability provides the main contextual constraint on institutional work, its major outcome, and a key trigger for actors to engage in it. We draw on Hilary Putnam’s concept of the ‘division of linguistic labor’ to highlight the specialized distribution of knowledge and authority in defining valid ways of reasoning. In this view, individuals use institutionalized vocabularies to reason about their choices and understand their context with limited understanding of how and why these structures have become what they are. We highlight the need to understand how professions and other actors establish and maintain the criteria of reasoning in various areas of expertise through discursive institutional work.

Relevance:

30.00%

Publisher:

Abstract:

Models of normal word production are well specified about the effects of the frequency of linguistic stimuli on lexical access, but are less clear about the same effects on later stages of word production, particularly word articulation. In aphasia, this lack of specificity about downstream frequency effects is even more noticeable because there is a relatively limited amount of data on the time course of frequency effects for this population. This study begins to fill this gap by comparing the effects of variation in word frequency (lexical, whole-word) and bigram frequency (sub-lexical, within-word) on the word production abilities of ten normal speakers and eight individuals with mild-to-moderate aphasia. In an immediate repetition paradigm, participants repeated single monosyllabic words in which word frequency (high or low) was crossed with bigram frequency (high or low). Indices for mapping the time course of these effects included reaction time (RT), for linguistic processing and motor preparation, and word duration (WD), for speech motor performance (word articulation time). The results indicated that individuals with aphasia had significantly longer RT and WD than normal speakers. RT showed a significant main effect only for word frequency (i.e., high-frequency words had shorter RT). WD showed significant main effects of word and bigram frequency; however, contrary to our expectations, high-frequency items had longer WD. Further investigation of WD revealed that, independent of the influence of word and bigram frequency, vowel type (tense or lax) had the expected effect on WD. Moreover, individuals with aphasia differed from control speakers in their ability to implement tense vowel duration, even though they could produce an appropriate distinction between tense and lax vowels. The results highlight the importance of using temporal measures to identify subtle deficits in linguistic and speech motor processing in aphasia, the crucial role of the phonetic characteristics of the stimulus set in studying speech production, and the need for language production models to account more explicitly for word articulation.

Relevance:

30.00%

Publisher:

Abstract:

In this article, we explore whether cross-linguistic differences in grammatical aspect encoding may give rise to differences in memory and cognition. We compared native speakers of two languages that encode aspect differently (English and Swedish) in four tasks that examined verbal descriptions of stimuli, online triads matching, and memory-based triads matching with and without verbal interference. Results showed between-group differences in verbal descriptions and in memory-based triads matching. However, no differences were found in online triads matching or in memory-based triads matching with verbal interference. These findings need to be interpreted in the context of the overall pattern of performance, which indicated that both groups based their similarity judgments on common perceptual characteristics of motion events. These results show for the first time a cross-linguistic difference in memory as a function of differences in grammatical aspect encoding, but they also contribute to the emerging view that language fine-tunes rather than shapes perceptual processes that are likely to be universal and unchanging.

Relevance:

30.00%

Publisher:

Abstract:

The Deakin University (Melbourne, Australia) operational policy on 'International and Culturally Inclusive Curricula' states that Deakin will incorporate international/intercultural perspectives and inclusive pedagogies into its courses in order to prepare all students to perform capably, ethically and sensitively in international, multicultural, professional and social contexts.

This paper is about a specific project to internationalise the teacher education curriculum through the use of Information and Communication Technologies (ICT). The project is framed within the UNESCO thrust of 'Education for All', agreeing that inclusive societies begin with inclusive education practices. In our view, current strategies have been insufficient to ensure that marginalised and excluded children receive access to their right to education.

The project aims to operationalise part of the UNESCO Dakar Framework for high quality learning environments by responding to ‘…the diverse needs and circumstances of learners and giving appropriate weight to the abilities, skills and knowledge they bring to the teaching and learning process’ by minimising language acquisition barriers that can otherwise impede effective communication and learning.

In addition, we need to be mindful of the marginalisation of people from non-English speaking backgrounds and therefore, in this initiative we use ICT to bridge the 'tyranny of distance' and offer a curriculum that values cultural and linguistic diversity.

In this paper we will discuss how we intend to develop these project principles. In particular we will indicate our plans to use relatively low cost, accessible software to develop a virtual environment where students can enter text in their native language, view foreign language text in their native language, hear text in their own language and automatically encode text into MP3 files and attach the files to messages.

Relevance:

30.00%

Publisher:

Abstract:

When using linguistic approaches to solve decision problems, we need techniques for computing with words (CW). Together with the 2-tuple fuzzy linguistic representation models (i.e., the Herrera and Martínez model and the Wang and Hao model), several computational techniques for CW have been developed. In this paper, we define the concept of a numerical scale and extend the 2-tuple fuzzy linguistic representation models under the numerical scale. We find that the key to computational techniques based on linguistic 2-tuples is to set a suitable numerical scale for making transformations between linguistic 2-tuples and numerical values. By defining the concept of the transitive calibration matrix and its consistency index, this paper develops an optimization model to compute the numerical scale of the linguistic term set. The desired properties of the optimization model are also presented. Furthermore, we discuss how to construct the transitive calibration matrix for decision problems using linguistic preference relations, and analyze the link between the consistency index of the transitive calibration matrix and that of the linguistic preference relations. The results in this paper help complete the fuzzy 2-tuple representation models for CW.
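For readers unfamiliar with the underlying representation, the sketch below shows the standard Herrera–Martínez 2-tuple transformations that the paper builds on: a value β in [0, g] maps to a pair (s_i, α), where s_i is the closest term of the set and α is the symbolic translation. The five-term set is an illustrative assumption.

```python
# Herrera–Martínez 2-tuple linguistic representation (minimal sketch).

TERMS = ["none", "low", "medium", "high", "total"]  # s_0 .. s_4

def to_2tuple(beta):
    """Delta: numerical value in [0, g] -> (term, symbolic translation)."""
    i = min(round(beta), len(TERMS) - 1)
    return TERMS[i], beta - i

def from_2tuple(term, alpha):
    """Delta^-1: (term, alpha) -> numerical value."""
    return TERMS.index(term) + alpha

term, alpha = to_2tuple(2.3)   # closest term to 2.3 is s_2, "medium"
```

A numerical scale, in the paper's sense, generalizes the implicit mapping `TERMS.index(term)` above to non-uniformly spaced values chosen by the optimization model.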

Relevance:

30.00%

Publisher:

Abstract:

The application of Linked Data technology to the publication of linguistic data promises to facilitate the interoperability of these data and has led to the emergence of the so-called Linguistic Linked Data Cloud (LLD), in which linguistic data is published following the Linked Data principles. Three essential issues need to be addressed for such data to be easily exploitable by language technologies: i) appropriate machine-readable licensing information is needed for each dataset; ii) minimum quality standards for Linguistic Linked Data need to be defined; and iii) appropriate vocabularies for publishing Linguistic Linked Data resources are needed. We propose the notion of Licensed Linguistic Linked Data (3LD), in which different licensing models might co-exist, from totally open to more restrictive licenses through to completely closed datasets.

Relevance:

30.00%

Publisher:

Abstract:

The concept of plagiarism is often associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of ‘moral’, non-material goods evolved into the right to individual property, and consequently the need arose to establish a legal framework to cope with the infringement of those rights. The response to plagiarism therefore falls most often under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from universal (Howard & Robillard, 2008). Howard (1995) and Scollon (1994, 1995) argued, albeit differently, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that the concept of plagiarism cannot be studied on the assumption that one definition is clearly understood by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is particularly a problem in non-native writing in English, as did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If, as Angèlil-Carter (2000) demonstrates, plagiarism among higher education students is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, then a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist’s unfamiliarity with the text or topic, it should be considered ‘positive plagiarism’ (Howard, 1995: 796) and hence not an offense. Determining the intention behind instances of plagiarism therefore determines the nature of the disciplinary action adopted.
Unfortunately, in order to demonstrate the intention to deceive and charge students with accusations of plagiarism, teachers necessarily have to position themselves as ‘plagiarism police’, although it has been argued otherwise (Robillard, 2008). Practice demonstrates that in their daily activities teachers find themselves required to command investigative skills and tools that they most often lack. We thus claim that the ‘intention to deceive’ cannot always be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that plagiarism is generally immoral but not illegal, and Goldstein (2003) draws the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism is often connoted with an illegal appropriation of ideas. Earlier, Turell (2004) had demonstrated, by comparing four translations of Shakespeare’s Julius Caesar into Spanish, that linguistic evidence is able to demonstrate instances of plagiarism. This challenge is also reinforced by the practice of international organisations, such as the IEEE, for whom plagiarism potentially has ‘severe ethical and legal consequences’ (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common, and what academia usually lacks, is their focus on the legal nature of plagiarism. We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence on the topic is still being developed.
In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist’s intention or otherwise, this potential is restricted by the ability to identify a text as being suspect of plagiarism. In an era of such massive textual production, ‘policing’ plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, much research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection necessarily has to consider not only concepts from education and computational linguistics, but also from forensic linguistics, especially if it is to counter claims of being a ‘simplistic response’ (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with a related word from the same semantic field, i.e. a synonym, a subordinate, a superordinate, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that the word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate instances of ‘patchwriting’ from instances of plagiarism. This research demonstrates that the referencing verbs are borrowed from the original in an attempt to construct the new text cohesively when the plagiarism is inadvertent, whereas when it is intentional the plagiarist makes an effort to prevent the reader from identifying the text as plagiarism. In some of these cases, the referencing elements prove able to identify direct quotations and thus ‘betray’ and denounce plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify them as proper paraphrasing and not, mistakenly and simplistically, as plagiarism.
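The paradigmatic-substitution pattern described above can be illustrated with a deliberately simplified sketch: align a suspect sentence with its presumed source and flag positions where a content word was swapped for a related one while the surrounding frame survives. The synonym table and sentences are invented examples; a real forensic analysis would use lemmatisation and broader semantic resources rather than a fixed lookup.

```python
# Toy sketch: flag positionally aligned substitutions of related words
# (paradigmatic replacements), keeping track of the shared frame.
# SYNONYMS is an illustrative stand-in for a semantic resource.

SYNONYMS = {"demonstrates": "shows", "study": "investigation",
            "significant": "notable"}

def flag_substitutions(source, suspect):
    """Return (shared words, substituted pairs) at aligned positions."""
    shared, substituted = [], []
    for a, b in zip(source.lower().split(), suspect.lower().split()):
        if a == b:
            shared.append(a)
        elif SYNONYMS.get(a) == b or SYNONYMS.get(b) == a:
            substituted.append((a, b))
    return shared, substituted

src = "the study demonstrates a significant effect"
sus = "the investigation shows a notable effect"
shared, subs = flag_substitutions(src, sus)
```

A high ratio of flagged substitutions within an otherwise identical frame is the kind of signal that distinguishes disguised borrowing from coincidental similarity.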

Relevance:

30.00%

Publisher:

Abstract:

Dyslexia is one of the most common childhood disorders, with a prevalence of around 5-10% in school-age children. Although an important genetic component is known to play a role in the aetiology of dyslexia, we are far from understanding the molecular mechanisms leading to the disorder. Several candidate genes have been implicated in dyslexia, including DYX1C1, DCDC2, KIAA0319, and the MRPL19/C2ORF3 locus, each with reports of both positive findings and failures to replicate. We generated a European cross-linguistic sample of school-age children, the NeuroDys cohort, which includes more than 900 individuals with dyslexia, sampled with homogeneous inclusion criteria across eight European countries, and a comparable number of controls. Here, we describe association analyses of the dyslexia candidate genes/locus in the NeuroDys cohort. We performed both case-control and quantitative association analyses of single markers and haplotypes previously reported to be dyslexia-associated. Although we observed association signals in samples from single countries, we did not find any marker or haplotype that was significantly associated with either case-control status or quantitative measurements of word-reading or spelling in the meta-analysis of all eight countries combined. As in other neurocognitive disorders, our findings underline the need for larger sample sizes to validate possibly weak genetic effects. © 2014 Macmillan Publishers Limited. All rights reserved.