958 results for Lexical equivalence
Abstract:
The lexical approach identifies lexis as the basis of language and focuses on the principle that language consists of grammaticalised lexis. In second language acquisition, this approach has generated great interest over the past few years as an alternative to traditional grammar-based teaching methods. From a psycholinguistic point of view, the lexical approach rests on the capacity to understand and produce lexical phrases as non-analysed entities (chunks). A growing body of literature on spoken fluency favours integrating automaticity and formulaic language units into classroom practice. In line with the latest theories on SLA, we recommend the inclusion of a language awareness component as an integral part of this approach. The purpose is to induce what Schmidt (1990) calls noticing, i.e., registering forms in the input so as to store them in memory. This paper, written within the interuniversity research project “Evidentiality in a multidisciplinary corpus of English research papers” of the University of Las Palmas de Gran Canaria, provides a theoretical overview of research on this approach, taking into account both its methodological foundations and its pedagogical implications for SLA.
Abstract:
This dissertation investigates the notion of equivalence with particular reference to lexical cohesion in the translation of political speeches. Lexical cohesion poses a particular challenge to the translators of political speeches, and preserving it, as one of the major cohesive resources, is therefore crucial to translation equivalence. We rely on Halliday's (1994) classification of lexical cohesion, which comprises repetition, synonymy, antonymy, meronymy and hyponymy. Other traditional models of lexical cohesion are also examined, and Grammatical Parallelism is included for its role in creating textual semantic unity, which is what cohesion is all about. The study sheds light on the function of lexical cohesion elements as a rhetorical device. It also deals with the lexical problems arising in the transfer of lexical cohesion elements from the SL into the TL, a transfer often beset by problems resulting from differences between the languages. Four key issues are identified as fundamental to equivalence and lexical cohesion in the translation of political speeches: the sociosemiotic approach, register analysis, rhetoric, and the poetic function. The study also investigates lexical cohesion elements in the translation of political speeches from English into Arabic, Italian and French in relation to ideology and its control through bias and distortion. The findings are discussed, implications examined and topics for further research suggested.
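As an illustration of how Halliday's categories might be operationalised, the sketch below labels the cohesive tie between two word forms using NLTK's WordNet interface. This is a minimal example under stated assumptions, not part of the dissertation itself: the function `classify_tie` and its fallback label are invented, and WordNet's coverage determines which ties are found.

```python
# A minimal sketch, assuming NLTK's WordNet (nltk.download('wordnet')) as a
# stand-in lexical resource; `classify_tie` and its labels are invented here
# and are not part of the dissertation's model.
from nltk.corpus import wordnet as wn

def classify_tie(w1: str, w2: str) -> str:
    """Label the lexical-cohesive tie between two word forms, following
    Halliday's categories: repetition, synonymy, antonymy, hyponymy, meronymy."""
    if w1 == w2:
        return "repetition"
    for s1 in wn.synsets(w1):
        if w2 in {l.name() for l in s1.lemmas()}:
            return "synonymy"
        if any(a.name() == w2 for l in s1.lemmas() for a in l.antonyms()):
            return "antonymy"
        for s2 in wn.synsets(w2):
            # Hyponymy may be indirect, so walk the full hypernym closure.
            hypernyms1 = set(s1.closure(lambda s: s.hypernyms()))
            hypernyms2 = set(s2.closure(lambda s: s.hypernyms()))
            if s2 in hypernyms1 or s1 in hypernyms2:
                return "hyponymy"
            if s2 in s1.part_meronyms() or s1 in s2.part_meronyms():
                return "meronymy"
    return "none"

print(classify_tie("war", "peace"))   # 'antonymy', if WordNet encodes the tie
print(classify_tie("dog", "animal"))  # 'hyponymy' via the hypernym closure
```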
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L:

G + L + C → S (1)

The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon:

G + L + S → L' (2)

Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule.

The thesis describes the design and quality of a prototype for such a system, whose acquisition components were developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To illustrate four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision.

The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. The work is divided into five chapters. The introductory chapter contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification; it then presents the postulation of the Learn-Alpha design rule. The second chapter outlines the theory that underlies Learn-Alpha and exposes the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions and motivation for further improvements, as well as proposals for future research on the automatic induction of lexical features.
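To make schemas (1) and (2) concrete, the following toy Python sketch runs one analyse-then-learn pass over a miniature corpus, using an invented determiner-based count/mass heuristic as the "structure". The names and logic are illustrative only and do not reflect the thesis's HPSG-based ANALYZE-LEARN-REDUCE implementation.

```python
# A toy sketch of formulas (1) and (2): analyse utterances into structures,
# then exploit the structures to produce an improved lexicon L'. All names
# and the determiner heuristic are invented for illustration.
from collections import defaultdict

def analyse(utterance):
    """(1) G + L + C -> S: map an utterance to crude (noun, determiner) pairs."""
    tokens = utterance.lower().split()
    return [(tokens[i + 1], tokens[i]) for i in range(len(tokens) - 1)
            if tokens[i] in {"a", "an", "much", "many"}]

def learn(lexicon, structures):
    """(2) G + L + S -> L': aggregate observed evidence into lexical entries."""
    evidence = defaultdict(set)
    for noun, det in structures:
        evidence[noun].add("count" if det in {"a", "an", "many"} else "mass")
    new_lexicon = dict(lexicon)
    for noun, classes in evidence.items():
        # Revision in miniature: conflicting evidence leaves the entry
        # indeterminate instead of keeping a falsely acquired class.
        new_lexicon[noun] = classes.pop() if len(classes) == 1 else "indeterminate"
    return new_lexicon

corpus = ["she drank much water", "he bought a book", "many books arrived"]
structures = [s for utt in corpus for s in analyse(utt)]
print(learn({}, structures))  # {'water': 'mass', 'book': 'count', 'books': 'count'}
```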
Abstract:
This research tests the hypothesis that knowledge of derivational morphology facilitates vocabulary acquisition in beginning adult second language learners. Participants were monolingual English-speaking college students aged 18 years and older enrolled in introductory Spanish courses. Knowledge of Spanish derivational morphology was tested with a forced-choice translation task. Spanish lexical knowledge was measured by a translation task using direct translation (English word) primes and conceptual (picture) primes. A 2x2x2 mixed-factor ANOVA examined the relationships between morphological knowledge (strong, moderate), error type (form-based, conceptual), and prime type (direct translation, picture). The results are consistent with the existence of a relationship between knowledge of derivational morphology and acquisition of second language vocabulary. Participants made more conceptually based errors than form-based errors, F(1, 22) = 7.744, p = .011. This result is consistent with Clahsen and Felser's (2006) and Ullman's (2004) models of second language processing. Additionally, participants with strong morphological knowledge made fewer errors on the lexical knowledge task than participants with moderate morphological knowledge, t(23) = -2.656, p = .014. I suggest future directions to clarify the relationship between morphological knowledge and lexical development in adult second language learners.
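For readers who want to see the shape of these two comparisons, the sketch below runs them on simulated data with SciPy, substituting a paired t-test for the reported within-subjects F-test. The data and effect sizes are fabricated for illustration only and do not reproduce the study's results.

```python
# A minimal sketch, on simulated data, of the two contrasts reported above:
# a within-subjects error-type comparison and a between-groups comparison of
# morphological knowledge. All numbers here are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24                                    # participants; df = 23 matches t(23)
form_errors = rng.poisson(3, n)           # form-based error counts
concept_errors = form_errors + rng.poisson(2, n)   # more conceptual errors

# Within-subjects contrast: conceptually based vs. form-based errors.
t_paired, p_paired = stats.ttest_rel(concept_errors, form_errors)

# Between-groups contrast: strong vs. moderate morphological knowledge.
strong = rng.poisson(3, 12)               # fewer errors in the strong group
moderate = rng.poisson(6, 12)
t_between, p_between = stats.ttest_ind(strong, moderate)

print(f"paired: t = {t_paired:.3f}, p = {p_paired:.3f}")
print(f"independent: t = {t_between:.3f}, p = {p_between:.3f}")
```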
Abstract:
The group analysed some syntactic and phonological phenomena that presuppose the existence of interrelated components within the lexicon, which motivate the assumption that there are several sublexicons within the global lexicon of a speaker. This result is confirmed by experimental findings in neurolinguistics. Hungarian-speaking agrammatic aphasics were tested in several ways, the results showing that the sublexicon of closed-class lexical items provides a highly automated complex device for processing surface sentence structure. Analysing Hungarian ellipsis data from a semantic-syntactic perspective, the group established that the lexicon is best conceived of as split into at least two main sublexicons: the store of semantic-syntactic feature bundles and a separate store of sound forms. On this basis they proposed a format for representing open-class lexical items whose meanings are connected via certain semantic relations. They also proposed a new classification of verbs to account for their contribution to the aspectual reading of the sentence, depending on the referential type of the argument, as well as a new account of the syntactic and semantic behaviour of aspectual prefixes. The partitioned sets of lexical items are sublexicons on phonological grounds; these sublexicons differ in terms of phonotactic grammaticality. The degrees of phonotactic grammaticality are tied up with the problem of psychological reality, i.e., how many such degrees native speakers are sensitive to. The group developed a hierarchical construction network as an extension of the original General Inheritance Network formalism, and this framework was then used as a platform for the implementation of the grammar fragments.
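As a rough illustration of the default inheritance that such network formalisms provide, the following sketch walks a feature lookup up a small lexical hierarchy. The node names and features are invented and do not reproduce the group's actual network.

```python
# An invented miniature of default inheritance in a lexical type hierarchy,
# in the spirit of inheritance-network formalisms; node names and features
# are illustrative only.
class Node:
    def __init__(self, name, parent=None, **features):
        self.name, self.parent, self.features = name, parent, features

    def lookup(self, feature):
        """Walk up the hierarchy until an ancestor supplies the feature."""
        node = self
        while node is not None:
            if feature in node.features:
                return node.features[feature]
            node = node.parent
        raise KeyError(feature)

lexical_item = Node("lexical-item", sound_form=True)
verb = Node("verb", lexical_item, takes_arguments=True)
prefixed_verb = Node("prefixed-verb", verb, aspect="perfective")

# The prefixed verb adds an aspectual feature but inherits the rest.
print(prefixed_verb.lookup("aspect"), prefixed_verb.lookup("sound_form"))
```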
Abstract:
In the context of a synchronic lexical study of the Ede varieties of West Africa, this paper investigates whether the use of different criteria sets to judge the similarity of lexical features in different language varieties yields matching conclusions regarding the relative relationships and clustering of the investigated varieties, and thus leads to similar recommendations for further sociolinguistic research. Word lists elicited in 28 Ede varieties were analyzed with the inspection method. To explore the effects of different similarity judgment criteria, two different criteria sets were applied to the elicited data to identify similar lexical items. The quantification of these similarity decisions led to the computation of two similarity matrices, which were subsequently analyzed by means of correlation analysis and multidimensional scaling. The findings of this analysis suggest compatible conclusions regarding the relative relationships and clustering of the investigated Ede varieties. However, the matching clustering results do not necessarily lead to the same recommendations for more in-depth sociolinguistic research when interpreted in terms of an absolute lexical similarity threshold. The indicated ambiguities suggest the usefulness of focusing on relative, rather than absolute, lexical similarity in establishing recommendations for further sociolinguistic research.
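A minimal sketch of the quantification pipeline described here, assuming binary similarity judgments per word-list item and using scikit-learn's MDS on a precomputed dissimilarity matrix; the data and the reduced variety count are invented for illustration.

```python
# A minimal sketch, on invented data, of the pipeline the paper describes:
# pairwise similarity judgments over word-list items -> a similarity matrix
# -> multidimensional scaling of the corresponding dissimilarities.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
n_varieties, n_items = 5, 40   # the paper uses 28 varieties; 5 keeps this small

# judgments[i, j, k] = 1 if varieties i and j were judged similar on item k
judgments = rng.integers(0, 2, size=(n_varieties, n_varieties, n_items))

similarity = judgments.mean(axis=2)              # proportion of similar items
similarity = (similarity + similarity.T) / 2     # enforce symmetry
np.fill_diagonal(similarity, 1.0)                # each variety matches itself

# MDS expects dissimilarities, so invert the similarity matrix first.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(1.0 - similarity)
print(coords)  # a 2-D configuration for inspecting relative clustering
```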