983 results for Lexical valency
Abstract:
This research addresses the interface between lexical semantics and syntax, and is part of the DiCo lexical database project (an acronym for Dictionnaire de combinatoire) at the Observatoire de Linguistique Sens-Texte (OLST) of the Université de Montréal. The project stems from a desire to record, concisely and completely, within the dictionary itself, the syntactic behaviour typical of each lexical unit. To this end, we encode the cooccurrence of the DiCo's nominal lexical units with their actants in a lexical government table (also known as a valency schema, argument structure, subcategorization frame, predicate-argument structure, etc.), noting among other things the surface syntactic dependencies involved. In this thesis, we present the syntactic properties of a French nominal dependency, the one we have named attributive adnominal, so as to set out a methodology for identifying and characterizing surface syntactic dependencies. We also provide the list of governed nominal dependencies identified in the course of this work. We then describe the creation of a database of generalized French government patterns named CARNAVAL. Finally, we discuss possible applications of our work, particularly with regard to the creation of a typology of French lexical government patterns.
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea underlying intelligent lexical acquisition systems is to modify this schematic formula so that the system can exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it not only makes maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components were developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To illustrate four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries from a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision as to whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. The work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. It then presents the postulation of the Learn-Alpha design rule. The second chapter outlines the theory underlying Learn-Alpha and introduces the notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
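The acquisition cycle sketched in formulas (1) and (2) — parse input with the current grammar and lexicon, then use the resulting structures to extend and revise the lexicon — can be illustrated with a minimal, purely hypothetical sketch. All names (`parse`, `learn`, `min_evidence`) are illustrative placeholders and do not come from the thesis or any HPSG system:

```python
def parse(grammar, lexicon, utterance):
    """Stand-in for G + L + C -> S: return analyses as (word, feature) pairs."""
    return [(word, grammar.get(word, lexicon.get(word, "unknown")))
            for word in utterance.split()]

def learn(lexicon, structures, min_evidence=2):
    """Stand-in for G + L + S -> L': acquire an entry once it has enough
    supporting evidence, and drop (revise) entries contradicted by the data."""
    evidence = {}
    for word, feature in structures:
        evidence.setdefault(word, []).append(feature)
    new_lexicon = dict(lexicon)
    for word, feats in evidence.items():
        best = max(set(feats), key=feats.count)
        if feats.count(best) >= min_evidence:
            new_lexicon[word] = best          # acquire / confirm
        elif word in new_lexicon and new_lexicon[word] not in feats:
            del new_lexicon[word]             # revise falsely acquired entry
    return new_lexicon
```

The `min_evidence` threshold stands in for challenge c) above (deciding whether the system has seen 'enough' input), and the deletion branch for the revision requirement of Learn-Alpha.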
Abstract:
This article presents a characterization of the lexical competence (vocabulary knowledge and use) of students learning to read in EFL at a public university in São Paulo state. Although vocabulary has been consistently cited as one of the EFL reader's main sources of difficulty, there are no data in the literature showing the extent of the difficulties. The data for this study are part of a previous research project which investigated, from the perspective of an interactive model of reading, the relationship between lexical competence and EFL reading comprehension. Both quantitative and qualitative data were considered. For this study, the quantitative data are the product of vocabulary tests of 49 subjects, while the qualitative data comprise pause protocols of three subjects with levels of reading ability ranging from good to poor, selected on the basis of their performance in the quantitative study. A rich concept of vocabulary knowledge was adapted and used for the development of the vocabulary tests and the analysis of the protocols. The results of both studies show, with a few exceptions, the lexical competence of the group to be vague and imprecise in two dimensions: quantitative (number of known words, or vocabulary size) and qualitative (depth or width of this knowledge). Implications for the teaching of reading in a foreign language context are discussed.
Abstract:
Nine individuals with complex language deficits following left-hemisphere cortical lesions and a matched control group (n = 9) performed speeded lexical decisions on the third word of auditory word triplets containing a lexical ambiguity. The critical conditions were concordant (e.g., coin–bank–money), discordant (e.g., river–bank–money), neutral (e.g., day–bank–money), and unrelated (e.g., river–day–money). Triplets were presented with interstimulus intervals (ISIs) of 100 and 1250 ms. Overall, the left-hemisphere-damaged subjects appeared able to exhaustively access meanings for lexical ambiguities rapidly but, unlike control subjects, were unable to reduce the level of activation for contextually inappropriate meanings at both short and long ISIs. These findings are consistent with a disruption of the proposed role of the left hemisphere in selecting and suppressing meanings via contextual integration, and a sparing of the right-hemisphere mechanisms responsible for maintaining alternative meanings.
Abstract:
The coefficient of variance (CV; the standard deviation divided by the mean response time) is a measure of response-time variability that corrects for differences in mean response time (RT) (Segalowitz & Segalowitz, 1993). A positive correlation between decreasing mean RTs and CVs (rCV-RT) has been proposed as an indicator of L2 automaticity and, more generally, as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (Intermediate ESL, Advanced ESL, and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and the automaticity/restructuring account are discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
Abstract:
This study examined spoken-word recognition in children with specific language impairment (SLI) and normally developing children matched separately for age and receptive language ability. Accuracy and reaction times on an auditory lexical decision task were compared. Children with SLI were less accurate than both control groups. Two subgroups of children with SLI, distinguished by performance accuracy only, were identified. One group performed within normal limits, while a second group was significantly less accurate. Children with SLI were not slower than the age-matched controls or language-matched controls. Further, the time taken to detect an auditory signal, make a decision, or initiate a verbal response did not account for the differences between the groups. The findings are interpreted as evidence for language-appropriate processing skills acting upon imprecise or underspecified stored representations.
Abstract:
Recent semantic priming investigations in Parkinson's disease (PD) employed variants of Neely's (1977) lexical decision paradigm to dissociate the automatic and attentional aspects of semantic activation (McDonald, Brown, & Gorell, 1996; Spicer, Brown, & Gorell, 1994). In our earlier review, we claimed that the results of Spicer, McDonald, and colleagues' normal control participants violated the two-process model of information processing (Posner & Snyder, 1975) upon which their experimental paradigm had been based (Arnott & Chenery, 1999). We argued that, even at the shortest SOA employed, key design modifications to Neely's original experiments biased the tasks employed by Spicer et al. and McDonald et al. towards being assessments of attention-dependent processes. Accordingly, we contended that the experimental procedures did not speak to issues of automaticity and that, therefore, Spicer, McDonald, and colleagues' claims of robust automatic semantic activation in PD must be treated with caution.
Abstract:
The processing of lexical ambiguity in context was investigated in eight individuals with schizophrenia and a matched control group. Participants made speeded lexical decisions on the third word in auditory word triplets representing concordant (coin–bank–money), discordant (river–bank–money), neutral (day–bank–money), and unrelated (river–day–money) conditions. When the interstimulus interval (ISI) between the words was 100 ms, individuals with schizophrenia demonstrated priming consistent with selective, context-based lexical activation. At a 1250 ms ISI, a pattern of nonselective meaning facilitation was obtained. These results suggest an attentional breakdown in the sustained inhibition of meanings on the basis of lexical context. © 2002 Elsevier Science (USA).
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Educational Sciences - Specialization in Special Education
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Educational Sciences - Specialty in Special Education
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Didactics of the Portuguese Language in the 1st and 2nd Cycles of Basic Education
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Educational Sciences - Specialization in Special Education
Abstract:
Lexical development is a fundamental condition in the process of development and acquisition of new knowledge; from age 6 onwards, it falls to the school to foster the expansion of children's lexical capital and to promote the progressive development of their lexical awareness. Given the importance of lexical development in the first years of basic education, the research underlying this article had the following objectives: (i) to identify the conceptions of 1st Cycle teachers about the teaching of the lexicon; (ii) to characterize the teaching practices of 1st Cycle teachers regarding the teaching of the lexicon (planning, implementation, and assessment); (iii) to relate teachers' conceptions and practices regarding the teaching of the lexicon. To meet these objectives, a multiple case study was carried out involving four classes and their teachers at a 1st Cycle Basic Education school on the outskirts of Lisbon. Data were collected through semi-structured interviews with the teachers and direct classroom observation. The results of this investigation show that the teaching of the lexicon is not absent from the practices of 1st Cycle teachers, nor is the importance of promoting lexical development at all ignored. Teachers generally consider the lexicon a fundamental instrument in all learning. It is an indispensable tool of language, since it is through it that we access knowledge in the various domains of knowing (Calaque, 2004). From the teachers' point of view, the problem lies not in how they view the teaching of the lexicon, but in their insufficient scientific and didactic preparation to carry out explicit teaching of the lexicon, which is in line with the results of the description of the observed practices.