998 results for Grammatical model
Abstract:
Virtual Worlds Generator is a grammatical model proposed to define virtual worlds. It integrates a diversity of sensors and interaction devices, multimodality, and a virtual simulation system. Its grammar allows the scenes of the virtual world to be defined and abstracted as symbol strings, independently of the hardware used to represent the world or to interact with it. A case study is presented to explain how to use the proposed model to formalize a robot navigation system with multimodal perception and a hybrid control scheme for the robot. The result is an instance of the model's grammar that implements the robotic system and is independent of the sensing devices used for perception and interaction. In conclusion, the Virtual Worlds Generator adds value to the simulation of virtual worlds, since the definition can be done formally and independently of the peculiarities of the supporting devices.
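The abstract above describes scenes abstracted into symbol strings by a grammar, independently of the supporting hardware. As a rough illustration of that idea only (the non-terminals, terminals and rules below are invented for this sketch, not taken from the paper), a toy context-free grammar can derive a device-independent scene string:

```python
# Hypothetical sketch: a toy grammar that derives a scene as a symbol
# string, independent of any rendering or input device. All symbols and
# rules here are illustrative, not the Virtual Worlds Generator's own.
import random

RULES = {
    "WORLD": [["SCENE"]],
    "SCENE": [["OBJECT", "SCENE"], ["OBJECT"]],
    "OBJECT": [["robot"], ["wall"], ["sensor"]],
}

def derive(symbol, rng):
    """Expand a non-terminal into a list of terminal symbols."""
    if symbol not in RULES:          # terminal symbol: emit as-is
        return [symbol]
    production = rng.choice(RULES[symbol])
    out = []
    for s in production:
        out.extend(derive(s, rng))
    return out

rng = random.Random(0)
scene = derive("WORLD", rng)
print(" ".join(scene))  # a scene string such as "wall robot"
```

The same string could then be handed to any renderer or simulator, which is the device-independence the abstract emphasises.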
Abstract:
The potential of integrating multiagent systems and virtual environments has not been exploited to its full extent. This paper proposes a grammar-based model, called Minerva, to construct complex virtual environments that integrate the features of agents. A virtual world is described as a set of static and dynamic elements. The static part is represented by a sequence of primitives and transformations, and the dynamic elements by a series of agents. Agent activation and communication are achieved using events, created by so-called event generators. The grammar defines a descriptive language with a simple syntax and a semantics defined by functions. The semantic functions allow the scene to be displayed on a graphics device and describe the activities of the agents, including artificial intelligence algorithms and reactions to physical phenomena. To illustrate the use of Minerva, a practical example is presented: a simple robot simulator that considers the basic features of a typical robot. The result is a simple but functional simulator. Minerva is a reusable, integral and generic system that can easily be scaled, adapted and improved. The description of the virtual scene is independent of its representation and of the elements it interacts with.
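To make the abstract's split between static primitives and event-driven agents concrete, here is a minimal sketch under our own naming (none of these classes come from Minerva itself): the static part is a plain sequence of primitives and transformations, while dynamic elements are agents activated through events fired by an event generator.

```python
# Illustrative sketch only: static scene = primitives + transformations;
# dynamic scene = agents activated and coordinated via events.
class Agent:
    def __init__(self, name):
        self.name = name
        self.log = []            # record of events this agent reacted to
    def react(self, event):
        self.log.append(event)   # stand-in for AI algorithms / physics

class EventGenerator:
    def __init__(self, agents):
        self.agents = agents
    def fire(self, event):
        for a in self.agents:    # activation and communication via events
            a.react(event)

static_scene = ["sphere", "translate(1,0,0)", "cube"]  # primitives + transforms
robot = Agent("robot")
gen = EventGenerator([robot])
gen.fire("collision")
print(robot.log)  # ['collision']
```

The point of the separation is the one the abstract makes: the scene description stays independent of how it is represented and of what it interacts with.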
Abstract:
This thesis examines word-formation processes. We postulate that the notions of productivity and polysemy guide speakers in selecting word-formation processes. To test our hypothesis, we based our observations on a repertoire of suffixed words of Spanish, the "Diccionario de los sufijos de la lengua española (DISULE)" by Faitelson-Weiser (2010, cf. www.sufijos.lli.ulaval.ca). We evaluated the degrees of productivity and polysemy of each segment identified as a suffix, and then related the values obtained for each property. This approach, which we tested, recognizes the morpheme as the unit of analysis, which corresponds to the Item and Arrangement grammatical model (Hockett, 1954). Although the results of our analyses do not allow us to establish strong correlations between the two variables across the full set of suffixes, when we delimit specific contexts of competition we find that the relations between productivity and polysemy follow patterns specific to those contexts. Moreover, we note that the adopted model is more effective at describing polysemy than at explaining productivity, which leads us to question the relevance of the opposition established between word and morpheme as units of analysis. We conclude that both notions are essential in morphology.
Abstract:
[EN] Progress in the methodology of specific fields is usually very closely linked to technological progress in other areas of knowledge. This explains why lexicographical techniques had to wait for the arrival of the IT era in the last decades of the 20th century before specialised electronic dictionaries could be created, able to house and systemise enormous amounts of information that can then be processed quickly and efficiently. This study proposes a practical-methodological model that aims to resolve the grammatical treatment of adverbs in Ancient Latin. We suggest a list of 5 types, ordered from a greater to a lesser degree of specialisation: technical (T), semi-technical (S-T), instrumental-valued (I-V), instrumental-descriptive (I-D), instrumental-expository (I-E).
Abstract:
This paper presents a self-organizing, real-time, hierarchical neural network model of sequential processing and shows how it can be used to induce recognition codes corresponding to word categories and elementary grammatical structures. The model, first introduced in Mannes (1992), learns to recognize, store, and recall sequences of unitized patterns in a stable manner, either using short-term memory alone or using long-term memory weights. Memory capacity is limited only by the number of nodes provided. Sequences are themselves mapped to unitized patterns, making the model suitable for hierarchical operation. By using multiple modules arranged in a hierarchy and a simple mapping between the output of lower levels and the input of higher levels, the induction of codes representing word categories and simple phrase structures is an emergent property of the model. Simulation results are reported to illustrate this behavior.
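The hierarchical idea in the abstract (each level unitizes the sequences it recognizes into single symbols that feed the next level up) can be sketched very crudely without any neural machinery. The mappings below are invented for illustration and stand in for what the model would learn, not for anything it actually contains:

```python
# A rough, non-neural sketch of hierarchical unitization: word sequences
# become category codes, and category sequences become phrase codes.
# Both code tables are hypothetical stand-ins for learned recognition codes.
WORD_TO_CATEGORY = {"the": "DET", "dog": "NOUN", "runs": "VERB"}
CATEGORY_TO_PHRASE = {("DET", "NOUN"): "NP", ("VERB",): "VP"}

def unitize(seq, codes, max_len=2):
    """Greedily replace known sub-sequences with their unitized code."""
    out, i = [], 0
    while i < len(seq):
        for k in range(max_len, 0, -1):
            chunk = tuple(seq[i:i + k])
            if chunk in codes:
                out.append(codes[chunk])
                i += k
                break
        else:
            out.append(seq[i])  # unknown symbol passes through unchanged
            i += 1
    return out

words = ["the", "dog", "runs"]
cats = [WORD_TO_CATEGORY[w] for w in words]   # level 1: word categories
phrases = unitize(cats, CATEGORY_TO_PHRASE)   # level 2: phrase codes
print(cats, phrases)  # ['DET', 'NOUN', 'VERB'] ['NP', 'VP']
```

In the paper's model this chunking is emergent from the module hierarchy rather than hand-coded as it is here.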
Abstract:
This paper contributes a new approach for developing UML software designs from Natural Language (NL), making use of a meta-domain oriented ontology, well established software design principles and Natural Language Processing (NLP) tools. In the approach described here, banks of grammatical rules are used to assign event flows from essential use cases. A domain specific ontology is also constructed, permitting semantic mapping between the NL input and the modeled domain. Rules based on the widely-used General Responsibility Assignment Software Principles (GRASP) are then applied to derive behavioral models.
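As a hedged illustration of what "banks of grammatical rules ... assign event flows from essential use cases" might look like in miniature (the pattern and field names below are ours, not the paper's rule bank), a single subject-verb-object rule can turn a use-case sentence into one event-flow step:

```python
# Hypothetical single grammatical rule: match a simple
# "The/A <actor> <verbs> the/a <object>" use-case sentence and emit
# one structured event-flow step. A real rule bank would be far richer.
import re

RULE = re.compile(r"^(?:The|A)\s+(\w+)\s+(\w+s)\s+(?:the|a)\s+(\w+)")

def to_event_flow_step(sentence):
    m = RULE.match(sentence)
    if not m:
        return None  # sentence not covered by this rule
    actor, action, obj = m.groups()
    return {"actor": actor, "action": action, "object": obj}

print(to_event_flow_step("The customer submits the order"))
# {'actor': 'customer', 'action': 'submits', 'object': 'order'}
```

In the approach described, the extracted terms would additionally be mapped against the domain ontology before GRASP-based rules derive the behavioral models.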
Abstract:
This paper uses some data from Igbo-English intrasentential codeswitching involving mixed nominal expressions to test the Matrix Language Frame (MLF) model. The MLF model is one of the most highly influential frameworks used successfully in the study of grammatical aspects of codeswitching. Three principles associated with it, the Matrix Language Principle, the Asymmetry Principle and the Uniform Structure Principle, were tested on data collected from informal conversations by educated adult Igbo-English bilinguals resident in Port Harcourt. The results of the analyses suggest general support for the three principles and for identifying Igbo-English as a ‘classic’ case of codeswitching.
Abstract:
By formalizing linguists' intuitions about language change as a dynamical system, we quantify the time course of language change, including sudden versus gradual change. We apply the computer model to the historical loss of Verb Second from Old French to Modern French, showing that otherwise adequate grammatical theories can fail our new evolutionary criterion.
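The flavour of such dynamical-system models can be conveyed with a toy iteration of our own devising (this is an invented minimal dynamics in the spirit of grammar-competition models, not the paper's actual equations): let p be the fraction of the population using a grammar G1 such as Verb Second, and let each generation of learners acquire G1 in proportion to a discounted share of its evidence in the input.

```python
# Toy dynamics (ours, for illustration): G1 evidence is discounted by an
# "advantage" factor below 1, so any departure from categorical G1 use
# decays across generations.
def step(p, advantage=0.9):
    """One generation: new G1 share after discounting G1's evidence."""
    return advantage * p / (advantage * p + (1 - p))

p = 0.99              # start with Verb Second nearly universal
history = [p]
for _ in range(100):  # iterate over 100 generations
    p = step(p)
    history.append(p)
print(round(history[-1], 4))  # p has fallen below 0.01: G1 is effectively lost
```

Even this toy map shows the gradual-then-accelerating loss profile that such evolutionary criteria are designed to measure.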
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
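The idea that annotations produced for a common level "would have to be combined in order to correct their corresponding errors" can be sketched concretely. The combination scheme below is our own choice for illustration (simple per-token majority voting over several POS taggers), not the mechanism this work itself proposes:

```python
# Hedged sketch: combine the outputs of several POS taggers annotating
# the same tokens by majority vote, so isolated errors are outvoted.
from collections import Counter

def combine(annotations):
    """Majority vote per token over the outputs of several taggers."""
    combined = []
    for token_tags in zip(*annotations):
        tag, _ = Counter(token_tags).most_common(1)[0]
        combined.append(tag)
    return combined

tagger_a = ["DET", "NOUN", "VERB"]
tagger_b = ["DET", "VERB", "VERB"]   # one error on the second token
tagger_c = ["DET", "NOUN", "VERB"]
print(combine([tagger_a, tagger_b, tagger_c]))  # ['DET', 'NOUN', 'VERB']
```

Of course, as limitation (3) notes, such voting presupposes that the three taggers' annotation schemas have first been made comparable.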
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Social behaviour is largely based on swarm colonies, in which each individual shares its knowledge about the environment with the other individuals in order to reach optimal solutions. Such a co-operative model differs from competitive models in that individuals die and are born by combining the information of living ones. This paper presents particle swarm optimization with a differential evolution algorithm to train a neural network, in place of the classic back-propagation algorithm. The performance of a neural network on a particular problem depends critically on the choice of processing elements, net architecture and learning algorithm. This work focuses on methods for the evolutionary design of artificial neural networks, in particular on optimizing the topology and connectivity structure of these networks.
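To give a feel for swarm-based training as an alternative to back-propagation, here is a minimal sketch in plain PSO only, omitting the differential-evolution recombination the paper combines it with; the network (a single sigmoid neuron learning logical AND), the swarm parameters and all names are our own choices for illustration:

```python
# Illustrative sketch: particle swarm optimization searching the weight
# space of a one-neuron network, instead of back-propagation.
# Plain global-best PSO; the paper's DE hybridisation is omitted.
import math, random

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1 / (1 + math.exp(-z))

def loss(w):
    """Sum of squared errors of the neuron (w0*x1 + w1*x2 + bias)."""
    return sum((sigmoid(w[0] * x1 + w[1] * x2 + w[2]) - y) ** 2
               for (x1, x2), y in DATA)

rng = random.Random(1)
n, dim = 20, 3
pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]            # each particle's personal best
gbest = min(pbest, key=loss)[:]        # swarm's global best

for _ in range(200):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                         + 1.5 * rng.random() * (pbest[i][d] - pos[i][d]) # cognitive
                         + 1.5 * rng.random() * (gbest[d] - pos[i][d]))   # social
            pos[i][d] += vel[i][d]
        if loss(pos[i]) < loss(pbest[i]):
            pbest[i] = pos[i][:]
            if loss(pbest[i]) < loss(gbest):
                gbest = pbest[i][:]

print(round(loss(gbest), 3))  # small squared error on the AND data
```

The evolutionary-design methods the abstract mentions would additionally place the network's topology, not just its weights, inside the search space.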
Abstract:
A model of the cognitive process of natural language processing has been developed using the formalism of generalized nets. Following this stage-simulating model, the treatment of information inevitably includes phases that require joint operations in two knowledge spaces: language and semantics. In order to examine and formalize the relations between the language and semantic levels of treatment, language is presented as an information system, conceived on the basis of human cognitive resources, semantic primitives, semantic operators, and language rules and data. This approach is applied to model a specific grammatical rule: secondary predication in Russian. Grammatical rules of the language space are expressed as operators in the semantic space. Examples from the linguistics domain are treated and several conclusions about the semantics of the modeled rule are drawn. The results of applying the information-system approach to language turn out to be consistent with the stages of treatment modeled with the generalized net.