979 results for lexical semantics
Abstract:
There is evidence that the explicit lexical-semantic processing deficits which characterize aphasia may be observed in the absence of implicit semantic impairment. The aim of this article was to critically review the international literature on lexical-semantic processing in aphasia, as tested through the semantic priming paradigm. Specifically, this review focused on aphasia and lexical-semantic processing, the methodological strengths and weaknesses of the semantic priming paradigms used, and recent evidence from neuroimaging studies on lexical-semantic processing. Furthermore, evidence on dissociations between implicit and explicit lexical-semantic processing reported in the literature is discussed and interpreted with reference to functional neuroimaging evidence from healthy populations. There is evidence that semantic priming effects can be found in both fluent and non-fluent aphasias, and that these effects are related to an extensive network which includes the temporal lobe, the prefrontal cortex, the left frontal gyrus, the left temporal gyrus and the cingulate cortex.
Abstract:
This article contrastively presents the different types of grammatical categorization of the lexical item 'adverb' in one Brazilian grammar and in two German ones. The aim is to point out the complexity of describing the adverb as a single class. The characteristic heterogeneity of the adverb is exemplified by three types of adverb: discourse-modalizing adverbs (CASTILHO 2010), comment adverbs (DUDEN 2006) and modal words (HELBIG & BUSCHA 2001).
Abstract:
This study analyzes the Spanish-French bilingual vocabulary lists built around the topic of clothing and included in the various thematically organized lexical repertoires that have been widely used as a basic tool for teaching the essential vocabulary of a foreign language. The research therefore combines three main strands: lexicographic, given the nature of the corpus; lexical-semantic, since it traces the evolution of the words recorded in those lists; and didactic, since it contributes to a better knowledge of the history of vocabulary teaching.
Abstract:
[EN] This paper examines a corpus of 150 titles of research articles published between 2010 and 2013 in Anglo-American natural sciences journals (physics, chemistry and biology) in order to determine their lexical density and their grammatical and morphosyntactic features. To that end, the frequency of present and past participles, prepositions, and coordinating and subordinating conjunctions, as well as the frequency and length of compound words, was recorded for each title. The total number of content and function words was also recorded so as to determine title lexical density. ANOVA tests were applied to assess whether statistically significant differences in the frequency of the above-mentioned variables could be detected within and across disciplines and in the whole corpus.
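The statistical procedure described above (a one-way ANOVA on per-title feature counts across disciplines) can be sketched in a few lines; the counts below are invented for illustration and are not the paper's data.

```python
# Minimal one-way ANOVA sketch in pure Python. The per-title
# preposition counts are invented, not taken from the corpus.
def one_way_anova(*groups):
    n = sum(len(g) for g in groups)          # total observations
    k = len(groups)                          # number of disciplines
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

physics   = [3, 2, 4, 3, 5]   # prepositions per title (invented)
chemistry = [2, 3, 2, 1, 3]
biology   = [5, 4, 6, 5, 4]
f, dfb, dfw = one_way_anova(physics, chemistry, biology)
print(f"F({dfb},{dfw}) = {f:.2f}")
```

The F statistic is then compared against the critical value for the chosen significance level; in practice a library routine such as `scipy.stats.f_oneway` also returns the p-value directly.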
Abstract:
The aim of this study is to examine the medical terms in one of the first monolingual, general-topic dictionaries published in England at the beginning of the seventeenth century: An English Expositor (John Bullokar, 1616). To that end, attention is paid to the categories into which they can be classified, the number of lexical entries, and the content of the definitions.
Abstract:
The aim of this article is to demonstrate that an International Spanish (EI) exists in the Latin American media. To that end, we selected a radio program in the educated norm of Atlantic Spanish. On the one hand, we analyze the existing definitions of the Spanish of the media (a 'special language', in Lázaro Carreter's terms) to determine whether they support the consolidation of the concept of EI; on the other, we carry out a dialectal study of the lexical Americanisms in the corpus to verify to what extent they constitute an isogloss that impedes intelligibility and would thus rule out the existence of an EI in the media.
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent, committed-choice constraint logic programming language consisting of guarded rules which transform multisets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, as it is Turing-equivalent [SSD05a]. Compositionality is the first CHR aspect considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which permits us to trace the application of propagation rules and consequently to avoid trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one also uses a history to avoid trivial non-termination. Program transformation is the second CHR aspect considered, with particular regard to the unfolding technique. This technique is an appealing approach which allows us to optimize a given program, in particular to improve its run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answers [Frü98], between the original program and the transformed ones. The unfolding technique is one of the basic operations used by most program transformation systems.
It consists in the replacement of a procedure call by its definition. In CHR every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of that rule represents the definition of the call. While there is a large body of literature on the transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, confluence and termination maintenance between the original and transformed programs are shown. The thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Then, Section 1.2 introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics, Section 2.2 presents the compositionality results, and Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation requires a particular annotated syntax, which is introduced in Section 3.1, and its related modified operational semantics is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness.
Then, in Section 3.4, the problems related to the replacement of a rule by its unfolded version are discussed; this in turn gives a correctness condition which holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future research.
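The core CHR mechanism the abstract describes (guarded rules rewriting a multiset of constraints until exhaustion) can be illustrated with the classic two-rule gcd solver, here simulated naively in Python; this is a didactic sketch, not the thesis's semantics or implementation.

```python
# CHR gcd program, simulated over a multiset of gcd/1 constraints:
#   gcd(0) <=> true.                                (simplification)
#   gcd(N) \ gcd(M) <=> M >= N, N > 0 | gcd(M - N). (simpagation)
def chr_gcd(store):
    store = list(store)            # the constraint store (a multiset)
    changed = True
    while changed:                 # apply rules until exhaustion
        changed = False
        if 0 in store:             # rule 1: remove gcd(0)
            store.remove(0)
            changed = True
            continue
        for i, n in enumerate(store):     # rule 2: keep gcd(N),
            if n <= 0:                    # replace gcd(M) by gcd(M-N)
                continue
            for j, m in enumerate(store):
                if i != j and m >= n:
                    store[j] = m - n
                    changed = True
                    break
            if changed:
                break
    return store

print(chr_gcd([12, 8]))   # the store stabilizes at [4]
```

Unfolding, in this setting, would replace a body constraint such as gcd(M - N) by the body of a rule that matches it, yielding a transformed program with the same qualified answers.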
Abstract:
[EN] Applying a CLIL methodological approach marks a shift in emphasis from language learning based on linguistic form and grammatical progression to a more 'language acquisition' approach which takes account of language functions. In this article we study the elements of the 'language of instruction' of the area of Maths in Secondary Education, focusing on the analysis of the communicative functions and the lexical and cultural items present in the textbook in use. Our aim is to present the CLIL teacher with the linguistic and didactic implications that he or she should take into consideration when implementing bilingual syllabuses with their students. To that end, we present our conclusions emphasizing the need for coordination, across the different content areas and the linguistic and communicative contents, between the foreign language teacher and the CLIL subject teacher.
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes by different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit the models' predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external-world stimuli. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so as to succeed in grouping together the characteristics of the same object (the binding problem) and in keeping segregated the properties belonging to different objects simultaneously present (the segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called 'assembly coding' theory, postulated by Singer (2003), according to which: 1) an object is well described by a few fundamental properties, processed in different, distributed cortical areas; 2) recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the 'assembly coding' hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level 'Gestalt rules' (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (the binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (the segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). The synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words.
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This work was carried out in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation.
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in the synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
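The Wilson-Cowan oscillators on which the Part 1 models are built can be sketched with a single excitatory-inhibitory pair integrated by the Euler method; the parameters below are illustrative textbook values, not those of the thesis models.

```python
import math

def sigmoid(x, a, theta):
    """Saturating activation function of a Wilson-Cowan population."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta)))

def wilson_cowan(steps=5000, dt=0.01, P=1.25):
    """Euler integration of one excitatory (E) / inhibitory (I) pair.
    Coupling constants are illustrative, not fitted values."""
    E, I = 0.1, 0.1
    trace = []
    for _ in range(steps):
        dE = -E + sigmoid(16.0 * E - 12.0 * I + P, a=1.3, theta=4.0)
        dI = -I + sigmoid(15.0 * E - 3.0 * I, a=2.0, theta=3.7)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return trace

trace = wilson_cowan()
print(f"E stays in [{min(trace):.3f}, {max(trace):.3f}]")
```

In the binding/segmentation models, many such pairs are coupled so that units coding features of the same object synchronize in the γ-band while different objects occupy different phases of the oscillation.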
Abstract:
The dissertation is divided into two chapters plus three appendices. Chapter I, Music and pain, investigates the cases of metamusicality referring to pain, which intensify in Euripides: it traces the development of a reflection on the role of mousike with respect to pain, expressed through a medical and musical lexicon. It shows that Euripides raises the problem of what purpose music has, whether it is useful, and in what form. In the early plays a mousa of lament is theorized as sweet or therapeutic for the sufferer. Many characters, however, show distrust in the healing power of lament. In the late plays the questions about the performance of song intensify, taking the form of metamusical and metatheatrical cases. In the Helen, the Hypsipyle and the Bacchae, Euripides seems to propose a 'homeopathic' therapy of pain through orgiastic-Dionysiac music. Chapter II, Nature and music, takes the Iphigenia in Tauris as an example of orchestic-musical mimetism founded, besides cases of self-referentiality, on a natural imagery which, by 'making music', contributes to the expressiveness of the choreia and of the music on stage. A musical accompaniment mimetic of the sounds of nature is also hypothesized, together with linear dance movements alongside circular formations, which seem to recall the 'double nature' of the dithyramb. Appendix I, The poetic adjectives ξουθός and ξουθόπτερος: their meaning and their allusive potential, addresses a particular and problematic case of 'lexical mimetism', triggered by the term ξουθός and by the Euripidean compound ξουθόπτερος. It shows that the adjective originally indicates a vibratory movement but also develops an acoustic sense, and is therefore a term evocative of performance. Appendix II, The musical lexicon in Euripides, collects the Euripidean choreutic-musical lexicon.
Appendix III, Mousike in the Euripidean dramas, collects the references to mousike in each play.
Abstract:
The vocabulary of power between ethical-moral intent and social protection. The lemmas of the Carolingian Capitularies in the Regnum Italicum (774-813) presents the results of research on the lexicon of the Carolingian legislation promulgated for the Italic Kingdom from the Frankish conquest to the death of Charlemagne. The analysis examined all the lemmas, above all nouns and adjectives, attributable to the ethical and moral sphere and to the conception of personal freedom. The work drew on more specific analyses of the juridical-institutional concepts that normative sources such as those examined inevitably bring to the fore. The research, starting from a complete cataloguing of the lemmas, concentrated on those that best allowed an assessment of the interactions between the intellectual court of the early Carolingians (formed, as is well known, by churchmen) and the characteristics of those men's thought, at once social and institutional. The work analyzed a specific lexicon to investigate how the traditional conception of the societas Christiana was expressed in the legislation through peculiar lemmas and formulaic expressions: the choice of these by the Rex and his circle would have indicated peaceful coexistence to the collectivity and, at the same time, defined the sovereign's 'ordering and pacifying intent'. The analysis covers a short but highly significant period, a moment of major political rupture, in order to grasp, precisely by exploiting the overlap and at times the clash between the different chancery practices of the Lombard and then the Carolingian kingdom, whether or not the sovereigns used a specific lexicon deliberately.
This becomes the central problem of the thesis: does this lexicon, through its continuity of use, impose political models, or is it rather a conscious and instrumental use of a given lexical apparatus that aims to impose new models of coexistence on society?
Non-normal modal logics, quantification, and deontic dilemmas. A study in multi-relational semantics
Abstract:
This dissertation is devoted to the study of non-normal (modal) systems for deontic logics, both at the propositional level and at the first-order one. In particular, we develop our study in the multi-relational setting, which generalises standard Kripke semantics. We present new completeness results concerning the semantics of several systems which are able to handle normative dilemmas and conflicts. Although primarily driven by issues related to the legal and moral fields, these results are also relevant for the more theoretical field of modal logic itself, as we propose a syntactic and semantic study of systems intermediate between the classical propositional calculus CPC and the minimal normal modal logic K.
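A minimal sketch of the multi-relational idea, under our own naming (nothing here is the dissertation's formal apparatus): the obligation operator O is evaluated existentially over a family of accessibility relations, so two conflicting obligations can coexist without entailing the obligatory contradiction that a single normal K-style relation would force.

```python
# O(A) holds at w iff, for SOME accessibility relation R in the
# family, A holds at every R-successor of w. Names are illustrative.
def obliged(world, relations, prop):
    return any(
        all(prop(v) for v in rel.get(world, []))
        for rel in relations
    )

R1 = {"w": ["u1"]}           # ideal worlds under one normative source
R2 = {"w": ["u2"]}           # ideal worlds under a conflicting source
p = lambda v: v == "u1"      # p is true only at u1

print(obliged("w", [R1, R2], p))                            # O(p) holds
print(obliged("w", [R1, R2], lambda v: not p(v)))           # O(not p) holds too
print(obliged("w", [R1, R2], lambda v: p(v) and not p(v)))  # but not O(p and not p)
```

This is exactly the failure of the aggregation rule (from O(A) and O(B) infer O(A ∧ B)) that distinguishes such non-normal systems from K and lets them represent deontic dilemmas consistently.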
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of the utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it not only makes maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. Thus, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To instance four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision as to whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. The work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions and the motivation for further improvements, as well as proposals for future research on the automatic induction of lexical features.
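Formulas (1) and (2) can be paraphrased as a parse-then-update loop. The sketch below is our own toy rendering (the `parse` heuristic and all names are invented, not the thesis's ANALYZE-LEARN-REDUCE method): structures S obtained from the corpus C are folded back into an improved lexicon L'.

```python
# Toy lexical acquisition loop: G + L + C -> S, then G + L + S -> L'.
def parse(grammar, lexicon, utterance):
    """Stand-in analyzer: returns the verb and a crude valency guess
    (number of non-verb tokens). A real system would build full
    HPSG-style structural descriptions here."""
    tokens = utterance.split()
    verb = next(t for t in tokens if t in grammar["verbs"])
    return verb, len(tokens) - 1

def acquire(grammar, lexicon, corpus, min_evidence=2):
    observations = {}
    for utterance in corpus:                      # G + L + C -> S
        verb, valency = parse(grammar, lexicon, utterance)
        observations.setdefault(verb, []).append(valency)
    for verb, vals in observations.items():       # G + L + S -> L'
        if len(vals) >= min_evidence:             # 'enough' input seen?
            lexicon[verb] = max(set(vals), key=vals.count)
    return lexicon

grammar = {"verbs": {"sleeps", "sees", "gives"}}
corpus = ["Kim sleeps", "Sandy sleeps", "Kim sees Sandy", "Sandy sees Kim"]
lexicon = acquire(grammar, {}, corpus)
print(lexicon)   # valency 1 for 'sleeps', 2 for 'sees'
```

The `min_evidence` threshold is a stand-in for challenge (c) above: the system must decide whether it has seen enough input before committing an entry to the lexicon.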
Abstract:
This thesis is the fruit of my passion for the French language, for literature and for translation. My work presents a proposed translation, from French into Italian, of three passages taken from the three novels that make up Maxence Fermine's La Trilogie des couleurs. These are brief but intense novels that share the dominant theme of colour, as well as the narrative scheme and a style unique of its kind. In the first chapter, I first provide some information on the writer's life, his style and the main themes of his works. I then present La Trilogie, to trace the general context in which the chosen passages are set. Finally, I consider the reasons that led me to choose these passages. In the second chapter, I analyze the source texts (the chosen passages) in order to identify the most important elements (lexical, morphosyntactic and stylistic) to be considered very carefully during translation. In the last chapter, I analyze my proposed translation with the help of specific texts and justify my translation choices. Finally, I draw my conclusions: in literary translation, expression must be as spontaneous as possible, while preserving the author's style and a certain aesthetic level. Moreover, every translator's goal is to reach a good compromise between a translation oriented towards the source text and a translation addressed to the target culture, as well as to create a new text that must work as well as the original, exploiting all the assets of the target language just as the author of the source text did in his own language. My work has always been guided by these principles.