993 results for lexical approach


Relevance: 100.00%

Abstract:

The lexical approach identifies lexis as the basis of language and focuses on the principle that language consists of grammaticalised lexis. In second language acquisition, this approach has generated great interest over the past few years as an alternative to traditional grammar-based teaching methods. From a psycholinguistic point of view, the lexical approach rests on the capacity to understand and produce lexical phrases as unanalysed entities (chunks). A growing body of literature on spoken fluency favours integrating automaticity and formulaic language units into classroom practice. In line with the latest theories of SLA, we recommend the inclusion of a language awareness component as an integral part of this approach. The purpose is to induce what Schmidt (1990) calls noticing, i.e., registering forms in the input so as to store them in memory. This paper, which forms part of the interuniversity research project "Evidentiality in a multidisciplinary corpus of English research papers" of the University of Las Palmas de Gran Canaria, provides a theoretical overview of research on this approach, taking into account both the methodological foundations of the subject and its pedagogical implications for SLA.

Relevance: 60.00%

Abstract:

This article suggests a theoretical and methodological framework for a systematic contrastive discourse analysis across languages and discourse communities through keywords, constituting a lexical approach to discourse analysis that is considered particularly fruitful for comparative analysis. We use a corpus-assisted methodology, presuming meaning to be constituted, revealed and constrained by the collocational environment. We compare the use of the keywords intégration and Integration in French and German public discourses about migration on the basis of newspaper corpora built from two French and German newspapers from 1998 to 2011. We look at the frequency of these keywords over the given time span, group collocates into thematic categories, and discuss indicators of discursive salience by comparing the development of collocation profiles over time in both corpora as well as the occurrence of neologisms and compounds based on intégration/Integration.
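
A minimal sketch of the kind of keyword-in-context tally that underlies a collocation profile over time; the corpus layout, window size and names here are illustrative assumptions, not the authors' actual pipeline:

    # Count collocates of a keyword per year within a +/- `window` token span,
    # assuming each corpus is a list of (year, tokenized_sentence) pairs.
    from collections import Counter, defaultdict

    def collocates(corpus, keyword, window=4):
        profiles = defaultdict(Counter)
        for year, tokens in corpus:
            for i, tok in enumerate(tokens):
                if tok.lower() == keyword:
                    lo, hi = max(0, i - window), i + window + 1
                    context = tokens[lo:i] + tokens[i + 1:hi]
                    profiles[year].update(w.lower() for w in context)
        return profiles

    # Usage: compare collocation profiles of the keyword at both ends
    # of the time span, e.g. on a hypothetical `french_corpus`:
    # profiles = collocates(french_corpus, "intégration")
    # print(profiles[1998].most_common(20), profiles[2011].most_common(20))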

Relevance: 60.00%

Abstract:

As more and more open-source software components become available on the internet we need automatic ways to label and compare them. For example, a developer who searches for reusable software must be able to quickly gain an understanding of retrieved components. This understanding cannot be gained at the level of source code due to the semantic gap between source code and the domain model. In this paper we present a lexical approach that uses the log-likelihood ratios of word frequencies to automatically provide labels for software components. We present a prototype implementation of our labeling/comparison algorithm and provide examples of its application. In particular, we apply the approach to detect trends in the evolution of a software system.
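
For reference, the standard log-likelihood ratio for comparing word frequencies between two corpora is Dunning's G2 statistic; a sketch assuming the paper uses this common 2x2 contingency form (the exact variant is not given in the abstract):

    # G2 for a word occurring k1 times in n1 component tokens and
    # k2 times in n2 tokens of the rest of the corpus.
    import math

    def g2(k1, n1, k2, n2):
        def ll(k, n, p):
            # Binomial log-likelihood; the guard handles the k=0 / k=n edges.
            return k * math.log(p) + (n - k) * math.log(1 - p) if 0 < p < 1 else 0.0
        p = (k1 + k2) / (n1 + n2)          # pooled rate under the null
        p1, p2 = k1 / n1, k2 / n2          # separate rates
        return 2 * (ll(k1, n1, p1) + ll(k2, n2, p2) - ll(k1, n1, p) - ll(k2, n2, p))

    # Words with the highest G2 in a component, relative to the whole corpus,
    # serve as its labels; e.g. g2(50, 10_000, 120, 1_000_000) is large.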

Relevance: 60.00%

Abstract:

This study presents the results of an investigation examining the effectiveness of the lexical approach, as a form of explicit instruction, on the acquisition of lexical competence by learners of Spanish as a foreign language. The study is guided by two research questions. The first (RQ 1) examines the impact of the lexical approach on the acquisition of lexical competence. The second (RQ 2) examines whether the effectiveness of the lexical approach in the group of learners examined is conditioned by the participants' beliefs about the strategies employed in this method. The application of the lexical approach was based on a pedagogical proposal consisting of a purpose-built teaching unit. The data obtained were analysed both quantitatively and qualitatively. The results empirically confirmed the validity of the lexical approach as a methodological principle for acquiring lexical competence. Likewise, a relationship was found between the participants' beliefs and the learning strategies employed.

Relevance: 60.00%

Abstract:

This paper investigates the use of collocations in DELE B1, taking the reading texts from the DELE B1 examinations (2010 to 2014) as research data. The investigation proceeds as follows. First, we review the theory and classification of collocation and its application to foreign language learning and teaching. Second, we analyse the types of collocation annotated with Corpus Tool. Third, we calculate the frequency of use of each type of collocation in the Spanish reading texts. Next, we discuss the interrelationship between collocations and text themes. Finally, we compare the collocation results obtained with two corpus tools, Corpus Tool and Corpus del Español, in order to understand native speakers' preferences in collocation use and to provide supplementary materials for the teaching of Spanish reading. We hope that the results of our research will offer useful references for improving students' Spanish reading comprehension so that they can pass the DELE B1 examination.
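
A minimal sketch of the per-type frequency tally described in the third step, assuming the Corpus Tool annotations have been exported as (type, collocation) pairs; the type labels and data are purely illustrative:

    from collections import Counter

    annotations = [
        ("verb+noun", "tomar una decisión"),
        ("adj+noun", "éxito rotundo"),
        ("verb+noun", "prestar atención"),
    ]

    # Frequency and relative share of each collocation type.
    type_freq = Counter(ctype for ctype, _ in annotations)
    total = sum(type_freq.values())
    for ctype, n in type_freq.most_common():
        print(f"{ctype}: {n} ({n / total:.1%})")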

Relevance: 30.00%

Abstract:

The identification of cognates between two distinct languages has recently started to attract the attention of NLP research, but there has been little research into using semantic evidence to detect cognates. The approach presented in this paper aims to detect English-French cognates within monolingual texts (texts that are not accompanied by aligned translated equivalents), by integrating word shape similarity approaches with word sense disambiguation techniques in order to account for context. Our implementation is based on BabelNet, a semantic network that incorporates a multilingual encyclopedic dictionary. Our approach is evaluated on two manually annotated datasets. The first shows that across different types of natural text, our method can identify cognates with an overall accuracy of 80%. The second, consisting of control sentences with semi-cognates acting as either true cognates or false friends, shows that our method can identify 80% of semi-cognates acting as cognates but also identifies 75% of the semi-cognates acting as false friends.
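
A sketch of one common word-shape similarity measure, normalised edit distance; the paper's exact formula is not given in the abstract, so this only illustrates the lexical half of the approach, before semantic evidence is brought in:

    # Levenshtein distance via dynamic programming.
    def edit_distance(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def shape_similarity(a: str, b: str) -> float:
        """1.0 for identical strings, 0.0 for maximally different ones."""
        return 1.0 - edit_distance(a.lower(), b.lower()) / max(len(a), len(b))

    # shape_similarity("library", "librairie") scores well although the
    # pair is a classic false friend, which is exactly why shape alone
    # is insufficient and sense disambiguation is needed.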

Relevance: 30.00%

Abstract:

This study provides validity evidence for the Capture-Recapture (CR) method, borrowed from ecology, as a measure of second language (L2) productive vocabulary size (PVS). Two separate "captures" of productive vocabulary were taken using written word association tasks (WAT). At Time 1, 47 bilinguals provided at least 4 associates to each of 30 high-frequency stimulus words in English, their first language (L1), and in French, their L2. A few days later (Time 2), this procedure was repeated with a different set of stimulus words in each language. Since the WAT was used, both Lex30 and CR PVS scores were calculated in each language. Participants also completed an animacy judgment task assessing the speed and efficiency of lexical access. Results indicated that, in both languages, CR and Lex30 scores were significantly positively correlated (evidence of convergent validity). CR scores were also significantly larger in the L1 and correlated significantly with the speed of lexical access in the L2 (evidence of construct validity). These results point to the validity of the technique for estimating relative L2 PVS. However, CR scores are not a direct indication of absolute vocabulary size. A discussion of the method's underlying assumptions and their implications for interpretation is provided.
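
For reference, the simplest two-sample capture-recapture estimator from ecology is the Lincoln-Petersen formula; the study's exact computation is not reproduced in the abstract, but on this model the vocabulary size N would be estimated from the number of distinct words produced at Time 1 (n_1), at Time 2 (n_2), and in both captures (m):

    \hat{N} = \frac{n_1 \, n_2}{m}

Intuitively, the larger the overlap between the two captures relative to their sizes, the smaller the underlying vocabulary must be, which is also why the score tracks relative rather than absolute size when the method's sampling assumptions are only partially met.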

Relevance: 30.00%

Abstract:

Online forums are becoming a popular way of finding useful information on the web. Search over forums for existing discussion threads has so far been limited to keyword-based search, due to the minimal effort required on the part of users. However, it is often not possible to capture all the relevant context in a complex query using a small number of keywords. Example-based search, which retrieves similar discussion threads given one exemplary thread, is an alternative approach that can help the user provide richer context and vastly improve forum search results. In this paper, we address the problem of finding threads similar to a given thread. Towards this, we propose a novel methodology to estimate similarity between discussion threads. Our method exploits the thread structure to decompose threads into a set of weighted overlapping components. It then estimates pairwise thread similarities by quantifying how well the information in the threads is mutually contained within each other, using lexical similarities between their underlying components. We compare our proposed methods on real datasets against state-of-the-art thread retrieval mechanisms and illustrate that our techniques outperform others by large margins on popular retrieval evaluation measures such as NDCG, MAP, Precision@k and MRR. In particular, consistent improvements of up to 10% are observed on all evaluation measures.
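
A minimal sketch of mutual-containment scoring over weighted components; the decomposition, weights and symmetrisation below are illustrative assumptions, not the paper's exact scheme:

    # Pairwise thread similarity from weighted, overlapping components.
    def containment(a: set, b: set) -> float:
        """Fraction of a's vocabulary that also occurs in b."""
        return len(a & b) / len(a) if a else 0.0

    def thread_similarity(components_a, components_b):
        """components_*: non-empty list of (weight, word_set) pairs per thread."""
        def directed(src, dst):
            total = sum(w for w, _ in src)
            return sum(
                w * max((containment(s, t) for _, t in dst), default=0.0)
                for w, s in src
            ) / total
        # Symmetrise the two directed containment scores.
        return (directed(components_a, components_b)
                + directed(components_b, components_a)) / 2

    # t1 = [(2.0, {"install", "driver", "error"}), (1.0, {"reboot", "fixed"})]
    # t2 = [(1.0, {"driver", "error", "windows"})]
    # thread_similarity(t1, t2)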

Relevance: 30.00%

Abstract:

This paper seeks to discover in what sense we can classify vocabulary items as technical terms in the later medieval period. In order to arrive at a principled categorization of technicality, distribution is taken as a diagnostic factor: vocabulary shared across the widest range of text types may be assumed to be at once prototypical for the semantic field and the most general, and therefore least technical, since lexical items derive at least part of their meaning from context, a wider range of contexts implying a wider range of senses. A further way of addressing the question of technicality is tested through the classification of the lexis into semantic hierarchies: in the terms of componential analysis, having more components of meaning places a term lower in the semantic hierarchy and flags it as having greater specificity of sense, and thus as more technical. The various text types are interrogated through comparison of the number of levels in their hierarchies and the number of lexical items at each level within the hierarchies. Focusing on the vocabulary of a single semantic field, DRESS AND TEXTILES, this paper investigates how four medieval text types (wills, sumptuary laws, petitions, and romances) employ technical terminology in the establishment of the conventions of their genres.
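
A minimal sketch of the distribution diagnostic, assuming each text type has been reduced to a set of attested lexical items (the corpus layout is an illustrative assumption): terms attested in fewer text types are flagged as more technical.

    from collections import defaultdict

    def text_type_dispersion(corpora):
        """corpora: mapping text_type -> set of attested lexical items."""
        spread = defaultdict(set)
        for text_type, vocab in corpora.items():
            for term in vocab:
                spread[term].add(text_type)
        # Sort from narrowest (most technical) to widest distribution.
        return sorted(spread.items(), key=lambda kv: len(kv[1]))

    # corpora = {"wills": {...}, "sumptuary laws": {...},
    #            "petitions": {...}, "romances": {...}}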

Relevance: 30.00%

Abstract:

This thesis is a systematic study of the lexicon of Dene Sųłiné, an Athabaskan language of north-western Canada. It presents the definitions and the patterns of syntactic and lexical combinatorics of more than 200 lexical units, lexemes and phrasemes, which represent an important part of the Dene Sųłiné vocabulary in seven domains: emotions, human character, the physical description of entities, the movement of living beings, the position of entities, atmospheric conditions, and topological formations, comparing them with the equivalent vocabulary of English. The theoretical approach chosen is Meaning-Text Theory (MTT), a formal approach that emphasises empirical semantic and lexicographic description. The present research reveals important differences between the lexicon of Dene Sųłiné and that of English at every level: in the correspondence between the conceptual representation, considered (quasi-)extralinguistic, and the semantic structure; in the lexicalisation patterns of lexical units; and in the patterns of syntactic and lexical combinatorics, which sometimes show interesting traits specific to Dene Sųłiné.

Relevance: 30.00%

Abstract:

This article discusses issues in measuring lexical diversity, before outlining an approach based on mathematical modelling that produces a measure, D, designed to address these problems. The procedure for obtaining values for D directly from transcripts using software (vocd) is introduced, and then applied to thirty-two children from the Bristol Study of Language Development (Wells 1985) at ten different ages. A significant developmental trend is shown for D and an indication is given of the average scores and ranges to be expected between the ages of 18 and 42 months and at 5 years for these L1 English speakers. The meaning attributable to further ranges of values for D is illustrated by analysing the lexical diversity of academic writing, and its wider application is demonstrated with examples from specific language impairment, morphological development, and foreign/second language learning.
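
For reference, the published model underlying vocd predicts the type-token ratio as a function of token sample size N, with D as the single parameter fitted to each transcript:

    \mathrm{TTR} = \frac{D}{N}\left[\left(1 + \frac{2N}{D}\right)^{1/2} - 1\right]

vocd draws repeated random samples of tokens from the transcript, computes the empirical TTR curve, and reports the D that best fits it; a higher D means diversity declines more slowly as the sample grows, which is what makes the measure less sensitive to sample size than a raw TTR.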

Relevance: 30.00%

Abstract:

Treffers-Daller and Korybski propose to operationalize language dominance on the basis of measures of lexical diversity, computed, in this particular study, on transcripts of stories told by Polish-English bilinguals in each of their languages. They compute four different Indices of Language Dominance (ILD) on the basis of two different measures of lexical diversity, the Index of Guiraud (Guiraud, 1954) and HD-D (McCarthy & Jarvis, 2007). They compare simple indices, based on subtracting the score for one language from the score for the other, to more complex indices based on the formula Birdsong borrowed from the field of handedness, namely the ratio (Difference in Scores) / (Sum of Scores). Positive scores on each of these indices mean that informants are more English-dominant, and negative scores that they are more Polish-dominant. The authors address the difficulty of comparing scores across languages by carefully lemmatizing the data. Following Flege, Mackay and Piske (2002), they also look into the validity of these indices by investigating to what extent they can predict scores on other, independently measured variables. They use correlations and regression analysis for this, which has the advantage that the dominance indices are used as continuous variables, so that arbitrary cut-off points between balanced and dominant bilinguals need not be chosen. However, they also show how the computation of z-scores can help facilitate a discussion about the appropriateness of different cut-off points across different data sets and measurement scales in those cases where researchers consider it necessary to make categorical distinctions between balanced and dominant bilinguals. Treffers-Daller and Korybski correlate the ILD scores with four other variables, namely length of residence in the UK, attitudes towards English and life in the UK, frequency of use of English at home, and frequency of code-switching. They found that the indices correlated significantly with most of these variables, but there were clear differences between the Guiraud-based and the HD-D-based indices. In a regression analysis, three of the measures were also found to be significant predictors of English language use at home. They conclude that the correlations and the regression analyses lend strong support to the validity of their approach to language dominance.
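
A sketch of the two index families described above, using the Index of Guiraud (types divided by the square root of tokens) as the per-language diversity score; the token lists are assumed to be lemmatised, as in the study:

    import math

    def guiraud(tokens):
        """Index of Guiraud for a lemmatised token list: V / sqrt(N)."""
        return len(set(tokens)) / math.sqrt(len(tokens))

    def ild_simple(score_en, score_pl):
        """Simple difference index: positive = English-dominant."""
        return score_en - score_pl

    def ild_ratio(score_en, score_pl):
        """Birdsong-style ratio: (difference) / (sum), bounded in [-1, 1]."""
        return (score_en - score_pl) / (score_en + score_pl)

    # g_en, g_pl = guiraud(english_tokens), guiraud(polish_tokens)
    # ild_ratio(g_en, g_pl)

The ratio form has the advantage of being bounded and scale-free, which is what makes cut-off discussions across data sets and measurement scales feasible.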

Relevance: 30.00%

Abstract:

This paper presents an approach for assisting low-literacy readers in accessing online Web information. The Educational FACILITA tool is a Web content adaptation tool that provides innovative features and follows more intuitive interaction models with regard to accessibility concerns. In particular, we propose an interaction model and a Web application that explore the natural language processing tasks of lexical elaboration and named entity labeling to improve Web accessibility. We report on the results obtained from a pilot usability study carried out with low-literacy users. The preliminary results show that Educational FACILITA improves the comprehension of text elements, although the assistance mechanisms may also confuse users when word sense ambiguity is introduced by gathering, for a complex word, a list of synonyms with multiple meanings. This points to a future solution in which the correct sense of a complex word in a sentence is identified, addressing this pervasive characteristic of natural languages. The pilot study also found that experienced computer users find the tool more useful than novice computer users do.
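
A minimal sketch of naive lexical elaboration and of the ambiguity problem the abstract describes, illustrated with NLTK's English WordNet (the actual tool targets a different language and lexical resource):

    # Requires: nltk.download("wordnet") once beforehand.
    from nltk.corpus import wordnet as wn

    def synonym_candidates(word: str):
        """All synonyms across all senses; ambiguity arises because senses mix."""
        synonyms = set()
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                if lemma.name().lower() != word.lower():
                    synonyms.add(lemma.name().replace("_", " "))
        return sorted(synonyms)

    # synonym_candidates("bank") mixes financial and riverside senses,
    # which is exactly the confusion reported in the pilot study and the
    # motivation for adding word sense disambiguation.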

Relevance: 30.00%

Abstract:

In all applications of clone detection it is important to have precise and efficient clone identification algorithms. This paper proposes and outlines a new algorithm, KClone, for clone detection that incorporates a novel combination of lexical and local dependence analysis to achieve precision while retaining speed. The paper also reports on the initial results of a case study using an implementation of KClone with which we have been experimenting. The results indicate the ability of KClone to find Type-1, Type-2, and Type-3 clones compared to token-based and PDG-based techniques. The paper also reports the results of an initial empirical study of the performance of KClone compared to CCFinderX.
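
For context, a minimal sketch of the token-based baseline that tools like CCFinderX represent: hash fixed-length windows of identifier-normalised tokens and report collisions. KClone's own combination of lexical and local dependence analysis is not reproduced here; the keyword set and window size are illustrative assumptions.

    from collections import defaultdict

    def normalise(tokens):
        """Map identifiers and literals to placeholders so Type-2 clones match."""
        keywords = {"if", "else", "for", "while", "return", "int", "float"}
        return ["ID" if t.isidentifier() and t not in keywords
                else "LIT" if t[0].isdigit() else t
                for t in tokens]

    def clone_pairs(token_streams, window=20):
        """token_streams: mapping location -> token list; returns colliding windows."""
        index = defaultdict(list)
        for loc, tokens in token_streams.items():
            norm = normalise(tokens)
            for i in range(len(norm) - window + 1):
                index[tuple(norm[i:i + window])].append((loc, i))
        return [sites for sites in index.values() if len(sites) > 1]

Because normalisation erases identifier and literal differences, this baseline catches Type-1 and Type-2 clones but misses Type-3 clones with inserted or deleted statements, which is where dependence analysis earns its keep.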