918 results for Lexical semantic classes
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The central aim of this master's thesis is to identify, map, and describe lexical variation in the Portuguese spoken in the rural areas of six municipalities of the Sudeste Paraense mesoregion: Curionópolis, Itupiranga, Santana do Araguaia, São Félix do Xingu, São João do Araguaia, and Tucuruí. This mesoregion is of considerable importance in the socio-political, economic, and cultural context of the state of Pará. The research is guided by the assumptions of dialectology and follows the geolinguistic method. The work is part of the GeoLinTerm project, with specific research within the ALiPA axis of that project. We surveyed a number of studies produced over the course of geolinguistic research. The methodology consisted of applying an adapted lexical-semantic questionnaire covering fourteen semantic fields, which was answered by the selected informants. The data collected in the six municipalities under study comprise speech recordings of 22 informants from the rural areas of the Sudeste Paraense mesoregion, within the methodological profile established by ALiPA. After collection, the data were processed through selection, transcription, the drawing of 30 linguistic maps, and the description of the results. Of the 256 questions in the questionnaire, we selected the 30 most frequent and most variable to be developed into maps. Following the maps, we show the occurrences by locality, sex, and age group.
Abstract:
The Princeton WordNet (WN.Pr) lexical database has motivated efficient compilations of bulky relational lexicons since its inception in the 1980s. The EuroWordNet project, the first multilingual initiative built upon WN.Pr, opened up ways of building individual wordnets and interrelating them by means of the so-called Inter-Lingual-Index, an unstructured list of the WN.Pr synsets. Another important initiative, relying on a slightly different method of building multilingual wordnets, is the MultiWordNet project, whose key strategy is to build language-specific wordnets while keeping as much as possible of the semantic relations available in WN.Pr. This paper stresses that an additional advantage of using the WN.Pr lexical database as a resource for building wordnets for other languages is the possibility of implementing an automatic procedure to map WN.Pr conceptual relations such as hyponymy, co-hyponymy, troponymy, meronymy, cause, and entailment onto the lexical database of the wordnet under construction. This is viable because these are language-independent relations that hold between lexicalized concepts, not between lexical units. Accordingly, combining methods from both initiatives, this paper presents the ongoing implementation of the WN.Br lexical database and the aforementioned automatic procedure, illustrated with a sample of the automatic encoding of the hyponymy and co-hyponymy relations.
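The relation-porting idea described in this abstract can be sketched in a few lines. The synset identifiers, the toy inter-lingual index, and the `port_hyponymy` helper below are invented for illustration; they do not reflect the actual WN.Pr or WN.Br data:

```python
# Sketch: porting a language-independent conceptual relation (hyponymy)
# from a source wordnet onto a target wordnet via an inter-lingual index.
# All identifiers here are hypothetical toy data.

# Source-side relation: hyponym synset id -> hypernym synset id.
SOURCE_HYPONYMY = {
    "dog.n.01": "canine.n.01",
    "cat.n.01": "feline.n.01",
}

# Inter-lingual index: source synset id -> aligned target synset id.
ILI = {
    "dog.n.01": "cachorro.n.01",
    "canine.n.01": "canideo.n.01",
    "cat.n.01": "gato.n.01",
    # "feline.n.01" has no target counterpart yet, so that link is skipped.
}

def port_hyponymy(source_rel, ili):
    """Copy each hyponym->hypernym pair onto the target wordnet whenever
    both synsets have inter-lingual counterparts."""
    target_rel = {}
    for hypo, hyper in source_rel.items():
        if hypo in ili and hyper in ili:
            target_rel[ili[hypo]] = ili[hyper]
    return target_rel

print(port_hyponymy(SOURCE_HYPONYMY, ILI))
# {'cachorro.n.01': 'canideo.n.01'}
```

Because the relation holds between concepts rather than lexical units, only the alignment table is language-specific; the porting loop itself is the same for every relation type.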
Abstract:
This paper describes the prepositions sob and sobre within the Functional Discourse Grammar framework (HENGEVELD; MACKENZIE, 2008), with the aim of checking their lexical or grammatical status on the basis of the classification criteria postulated by Keizer (2007). The following aspects point to their lexical status: (i) they consist of an Ascription Subact; (ii) they contain a specific content on the vertical axis, signaling a position of inferiority or superiority in relation to a limit; (iii) they are not required by any predicate, but are predicates themselves, requiring complementation by an argument playing the Reference semantic function; (iv) they may be combined with de, em, para and por, which are genuine grammatical prepositions; and, finally, (v) they are not subject to any phonological process of reduction or fusion.
Abstract:
This study aims to quantify the most common adjective formations in Brazilian Portuguese; it is a study of the synchronic usage of the language. Our goal is to identify the lexical formations characteristic of adjectives and to quantify the use of these formations in order to uncover the synchronic tendencies of their usage in Brazilian Portuguese. The corpus is written, composed of readers' letters to magazines, which gives the analysis greater stability. Since this is a medium of popular expression, the corpus gave us access to more informal language, whose mode of expression allows the formation of the most unusual and most recent vocabulary, whereas formal writing is more resistant to change. The survey of forms yielded quantitative results for already known forms as well as newer ones, and for uses of words belonging to other grammatical classes that took on adjectival value in certain specific contexts, in expressions typical of informal language. This study thus contributes a small portrait of the Brazilian lexical reality in the dynamic use of the language.
Abstract:
This article contrastively presents the different types of grammatical categorization of the lexical item "adverb" in one Brazilian grammar and two German grammars. The aim is to point out the complexity of describing the adverb as a single class. The adverb's characteristic heterogeneity is exemplified by three adverb types: discourse-modalizing adverbs (CASTILHO 2010), comment adverbs (DUDEN 2006), and modal words (HELBIG & BUSCHA 2001).
Abstract:
This dissertation investigates the notion of equivalence with particular reference to lexical cohesion in the translation of political speeches. Lexical cohesion poses a particular challenge to the translators of political speeches, and preserving lexical cohesion elements, as one of the major elements of cohesion, is undoubtedly crucial to translation equivalence. We rely on Halliday's (1994) classification of lexical cohesion, which comprises repetition, synonymy, antonymy, meronymy and hyponymy. Other traditional models of lexical cohesion are also examined. We include grammatical parallelism for its role in creating textual semantic unity, which is what cohesion is all about. The study sheds light on the function of lexical cohesion elements as a rhetorical device. It also deals with lexical problems resulting from the transfer of lexical cohesion elements from the SL into the TL, a transfer often beset by problems that most often result from differences between languages. Three key issues are identified as fundamental to equivalence and lexical cohesion in the translation of political speeches: the sociosemiotic approach, register analysis, and the rhetorical and poetic function. The study also investigates the lexical cohesion elements in the translation of political speeches from English into Arabic, Italian and French in relation to ideology and its control through bias and distortion. The findings are discussed, implications examined and topics for further research suggested.
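Halliday's five cohesion categories can be illustrated with a minimal tie classifier. The tiny relation tables and the `classify_tie` helper below are hypothetical toy data standing in for a real lexical resource:

```python
# Sketch: labeling the cohesive tie between two lexical items under
# Halliday's categories (repetition, synonymy, antonymy, meronymy,
# hyponymy). The relation tables are invented examples.

SYNONYMS = {("liberty", "freedom")}
ANTONYMS = {("war", "peace")}
MERONYMS = {("wheel", "car")}            # part -> whole
HYPONYMS = {("senator", "politician")}   # specific -> general

def classify_tie(w1, w2):
    """Return the cohesion category linking two words, or None."""
    w1, w2 = w1.lower(), w2.lower()
    if w1 == w2:
        return "repetition"
    for table, label in ((SYNONYMS, "synonymy"), (ANTONYMS, "antonymy"),
                         (MERONYMS, "meronymy"), (HYPONYMS, "hyponymy")):
        if (w1, w2) in table or (w2, w1) in table:
            return label
    return None

print(classify_tie("freedom", "liberty"))  # synonymy
print(classify_tie("War", "peace"))        # antonymy
```

Counting such ties in source and target texts would give a rough, quantitative view of how much lexical cohesion survives translation.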
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To illustrate four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions and motivation for further improvements, as well as proposals for future research on the automatic induction of lexical features.
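The two schematic formulas, G + L + C → S and G + L + S → L', suggest a simple acquisition loop. The sketch below fakes both the grammar and the parser with toy functions (all names are hypothetical, and the "deep analysis" is just subject-verb(-object) word order), but it shows the control flow: parse the corpus with the current lexicon, then derive an improved lexicon from the resulting structures:

```python
# Toy lexical-acquisition loop after formulas (1) and (2):
#   G + L + C -> S   (parse corpus C into structures S)
#   G + L + S -> L'  (derive an improved lexicon L' from S)
# Entries record which valency frames a verb has been observed with.

def parse(sentence):
    """Stand-in for deep analysis: assumes subject-verb(-object) order
    and returns (verb, has_object), or None if too short."""
    words = sentence.split()
    if len(words) >= 2:
        return (words[1], len(words) >= 3)
    return None

def acquire(lexicon, corpus):
    """One learning pass: record each observed frame per verb.
    Accumulating evidence like this is what lets a later pass revise
    one-sided entries, in the spirit of the Learn-Alpha revision idea."""
    new = {verb: set(frames) for verb, frames in lexicon.items()}
    for sentence in corpus:
        analysis = parse(sentence)
        if analysis is None:
            continue
        verb, transitive = analysis
        new.setdefault(verb, set()).add(
            "transitive" if transitive else "intransitive")
    return new

corpus = ["she sleeps", "she reads books", "he reads"]
lexicon = acquire({}, corpus)
print({verb: sorted(frames) for verb, frames in lexicon.items()})
# {'sleeps': ['intransitive'], 'reads': ['intransitive', 'transitive']}
```

A real system would of course plug an HPSG parser and principled entry reduction into these two slots; the point here is only the feedback shape of the loop.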
Abstract:
We explored the functional organization of semantic memory for music by comparing priming across familiar songs both within modalities (Experiment 1, tune to tune; Experiment 3, category label to lyrics) and across modalities (Experiment 2, category label to tune; Experiment 4, tune to lyrics). Participants judged whether or not the target tune or lyrics were real (akin to lexical decision tasks). We found significant priming, analogous to linguistic associative-priming effects, in reaction times for related primes as compared to unrelated primes, but primarily for within-modality comparisons. Reaction times to tunes (e.g., "Silent Night") were faster following related tunes ("Deck the Hall") than following unrelated tunes ("God Bless America"). However, a category label (e.g., Christmas) did not prime tunes from within that category. Lyrics were primed by a related category label, but not by a related tune. These results support the conceptual organization of music in semantic memory, but with potentially weaker associations across modalities.
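The priming effect reported here is, at its core, the difference between mean reaction times after unrelated and after related primes. A minimal sketch with invented RT values (not the study's data):

```python
# Sketch: computing a priming effect from (condition, reaction-time)
# trials. RTs in milliseconds are fabricated example data.
from statistics import mean

trials = [
    ("related", 612), ("related", 598), ("related", 605),
    ("unrelated", 655), ("unrelated", 641), ("unrelated", 660),
]

def priming_effect(trials):
    """Mean unrelated RT minus mean related RT; positive = priming."""
    related = [rt for cond, rt in trials if cond == "related"]
    unrelated = [rt for cond, rt in trials if cond == "unrelated"]
    return mean(unrelated) - mean(related)

print(priming_effect(trials))  # 47 (ms) on this toy data
```

The within- vs. across-modality comparison then reduces to computing this difference separately per priming condition (tune-to-tune, label-to-tune, and so on).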
Abstract:
The group analysed some syntactic and phonological phenomena that presuppose the existence of interrelated components within the lexicon, which motivate the assumption that there are sublexicons within the global lexicon of a speaker. This result is confirmed by experimental findings in neurolinguistics. Hungarian-speaking agrammatic aphasics were tested in several ways, the results showing that the sublexicon of closed-class lexical items provides a highly automated complex device for processing surface sentence structure. Analysing Hungarian ellipsis data from a semantic-syntactic perspective, the group established that the lexicon is best conceived of as split into at least two main sublexicons: the store of semantic-syntactic feature bundles and a separate store of sound forms. On this basis they proposed a format for representing open-class lexical items whose meanings are connected via certain semantic relations. They also proposed a new classification of verbs to account for the contribution of the aspectual reading of the sentence depending on the referential type of the argument, and a new account of the syntactic and semantic behaviour of aspectual prefixes. The partitioned sets of lexical items are sublexicons on phonological grounds. These sublexicons differ in terms of phonotactic grammaticality. The degrees of phonotactic grammaticality are tied up with the problem of psychological reality: how many such degrees native speakers are sensitive to. The group developed a hierarchical construction network as an extension of the original General Inheritance Network formalism, and this framework was then used as a platform for the implementation of the grammar fragments.
Abstract:
'Weak senses' are a specific type of semantic information, as opposed to assertions and presuppositions. The universal trait of weak senses is that they assume 'if' modality in negative contexts. In addition, they exhibit several other diagnostic properties: e.g. they fill at least one of their valency places with a semantic element sensitive to negation (i.e. with an assertion or other weak sense), they normally do not fall within the scope of functors, they do not play any role in causal relations, and they resist intensification. As weak senses are widespread in lexical, grammatical and referential semantics, this notion holds the clue to phenomena as diverse as the oppositions little - a little, few - a few, and edva ('hardly') - čut' ('slightly'), where a little, a few, and čut' convey 'weakly' approximately what little, few, and edva do in an assertive way, as well as the semantics of the Russian perfect aspect and the formation rules for conjunction strings. Zeldovich outlines a typology of weak senses, the main distinction being between weak senses unilaterally dependent upon the truthfulness of what they saturate their valency with, and weak senses exerting their own influence on the main situation. The latter, called non-trivial, are instantiated by existential quantifiers involved in the semantics of indefinite pronouns, iterative verbs, etc.
Abstract:
Unconscious perception is commonly described as a phenomenon that is not under intentional control and relies on automatic processes. We challenge this view by arguing that some automatic processes may indeed be under intentional control, implemented in task-sets that define how the task is to be performed. In consequence, those prime attributes that are relevant to the task will be most effective. To investigate this hypothesis, we used a paradigm which has been shown to yield reliable short-lived priming in tasks based on semantic classification of words. This type of study uses fast, well-practised classification responses, whereby responses to targets are much less accurate if prime and target belong to different categories than if they belong to the same category. In three experiments, we investigated whether the intention to classify the same words with respect to different semantic categories had a differential effect on priming. The results suggest that this was indeed the case: priming varied with the task in all experiments. However, although participants reported not seeing the primes, they were able to classify the primes better than chance using the classification task they had used before with the targets. When a lexical task was used for discrimination in Experiment 4, however, masked primes could not be discriminated. Also, priming was as pronounced when the primes were visible as when they were invisible. The pattern of results suggests that participants had intentional control over prime processing, even if they reported not seeing the primes.
Abstract:
In his influential article about the evolution of the Web, Berners-Lee [1] envisions a Semantic Web in which humans and computers alike are capable of understanding and processing information. This vision is yet to materialize. The main obstacle to the Semantic Web vision is that in today's Web, meaning is most often rooted not in formal semantics but in natural language and, in the sense of semiology, emerges only through interpretation and processing. Yet an automated form of interpretation and processing can be tackled by precisiating raw natural language. To do that, Web agents extract fuzzy grassroots ontologies through induction from existing Web content. Inductive fuzzy grassroots ontologies thus constitute organically evolved knowledge bases that resemble automated gradual thesauri, which allow precisiating natural language [2]. The Web agents' underlying dynamic, self-organizing, and best-effort induction enables a sub-syntactical, bottom-up learning of semiotic associations. Thus, knowledge is induced from the users' natural use of language in mutual Web interactions and stored in a gradual, thesauri-like lexical-world knowledge database as a top-level ontology, eventually allowing a form of computing with words [3]. Since, when computing with words, the objects of computation are words, phrases and propositions drawn from natural languages, it proves to be a practical notion for yielding emergent semantics for the Semantic Web. In the end, an improved understanding by computers should, on the one hand, upgrade human-computer interaction on the Web and, on the other hand, allow an initial version of human-intelligence amplification through the Web.
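A "gradual thesaurus" of the kind sketched above can be approximated by graded (fuzzy) association scores induced from co-occurrence. The toy corpus and the simple overlap score below are assumptions for illustration, not the paper's actual agent design:

```python
# Sketch: inducing graded word associations from document co-occurrence,
# as a stand-in for a fuzzy, bottom-up 'grassroots' thesaurus.
# Toy corpus; real Web agents would induce from live Web content.
from collections import Counter
from itertools import combinations

corpus = [
    "jaguar big cat jungle",
    "jaguar fast car engine",
    "cat jungle predator",
]

pair_counts = Counter()
word_counts = Counter()
for doc in corpus:
    words = set(doc.split())
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

def association(w1, w2):
    """Graded membership in [0, 1]: co-occurrence count divided by the
    rarer word's document count (a simple fuzzy overlap measure)."""
    co = pair_counts[frozenset((w1, w2))]
    denom = min(word_counts[w1], word_counts[w2])
    return co / denom if denom else 0.0

print(association("jaguar", "jungle"))  # 0.5 (one of two 'jaguar' docs)
print(association("cat", "jungle"))     # 1.0
```

The graded scores, rather than hard synonym links, are what makes the induced structure a gradual thesaurus: "jaguar" is partly associated with both "jungle" and "engine", reflecting its two senses in the corpus.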
Abstract:
Background: Semantic memory processes have been well described in the literature. However, the available findings are mostly based on relatively young subjects and concrete word material (e.g. tree). Comparatively little information exists about semantic memory for abstract words (e.g. mind) and possible age-related changes in semantic retrieval. We therefore developed a paradigm suitable for investigating implicit (i.e. attention-independent) access to concrete and abstract semantic memory, and compared these processes between young and elderly healthy subjects. Methods: A well-established tool for investigating semantic memory processes is the semantic priming paradigm, which comprises both semantically unrelated and related word pairs. In our behavioral task, these noun-noun word pairs were further divided into concrete, abstract, and matched pronounceable non-word conditions. With this premise, the young and elderly participants performed a lexical decision task: they were asked to press one of two buttons to indicate whether or not the word pair contained a non-word. In order to minimize controlled (i.e. attention-dependent) retrieval strategies, a short stimulus onset asynchrony (SOA) of 150 ms was set. Reaction time (RT) changes and accuracy for related and unrelated words (priming effect) in the abstract vs. concrete condition (concreteness effect) were the dependent variables of interest. Results and Discussion: Statistical analysis confirmed both a significant priming effect (i.e. shorter RTs for semantically related compared to unrelated words) and a concreteness effect (i.e. an RT decrease for concrete compared to abstract words) in the young and elderly subjects. There was no age difference in accuracy. The only age effect was a commonly known general slowing in RT across all conditions. In conclusion, age is not a critical factor in the implicit access to abstract and concrete semantic memory.
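The two dependent measures of this design can be written out directly: the priming effect is the related-vs-unrelated RT difference, and the concreteness effect is the abstract-vs-concrete RT difference. The mean RT values below are invented for illustration, not the study's results:

```python
# Sketch: the two effects of interest computed from toy mean RTs (ms),
# one cell per (concreteness, relatedness) condition. Fabricated data.
mean_rt = {
    ("concrete", "related"): 560, ("concrete", "unrelated"): 600,
    ("abstract", "related"): 610, ("abstract", "unrelated"): 655,
}

# Priming effect per concreteness level: unrelated minus related RT.
priming = {c: mean_rt[(c, "unrelated")] - mean_rt[(c, "related")]
           for c in ("concrete", "abstract")}

# Concreteness effect per relatedness level: abstract minus concrete RT.
concreteness = {r: mean_rt[("abstract", r)] - mean_rt[("concrete", r)]
                for r in ("related", "unrelated")}

print(priming)       # {'concrete': 40, 'abstract': 45}
print(concreteness)  # {'related': 50, 'unrelated': 55}
```

Positive values in both dictionaries correspond to the significant priming and concreteness effects the abstract reports; an age comparison would simply repeat this per group.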
Abstract:
With the progressing course of Alzheimer's disease (AD), deficits in declarative memory increasingly restrict the patients' daily activities. Besides the more apparent episodic (biographical) memory impairments, semantic (factual) memory is also affected by this neurodegenerative disorder. The episodic pathology is well explored; the underlying neurophysiological mechanisms of the semantic deficits, in contrast, remain unclear. For a profound understanding of semantic memory processes in general and in AD patients, the present study compares AD patients with healthy controls and with Semantic Dementia (SD) patients, a dementia subgroup that shows isolated semantic memory impairments. We investigate semantic memory retrieval during the recording of an electroencephalogram while subjects perform a semantic priming task. Specifically, the task demands lexical (word/non-word) decisions on sequentially presented word pairs consisting of semantically related or unrelated prime-target combinations. Our analysis focuses on group-dependent differences in the amplitude and topography of the event-related potentials (ERPs) evoked by related vs. unrelated target words. AD patients are expected to differ from healthy controls in semantic retrieval functions. The semantic storage system itself, however, is thought to remain preserved in AD, while SD patients presumably suffer from the actual loss of semantic representations.