842 results for Natural language generation
Abstract:
This thesis aims at empowering software customers with a tool to build software tests themselves, based on a gradual refinement of natural language scenarios into executable visual test models. The process is divided into five steps: 1. First, a natural language parser is used to extract a graph of grammatical relations from the textual scenario descriptions. 2. The resulting graph is transformed into an informal story pattern by interpreting structurization rules based on Fujaba Story Diagrams. 3. While the informal story pattern can already be used by humans, the diagram still lacks technical details, especially type information. To add them, a recommender-based framework uses web sites and other resources to generate formalization rules. 4. In preparation for code generation, the classes derived for the formal story patterns are aligned across all story steps, substituting for a class diagram. 5. Finally, a headless version of Fujaba is used to generate an executable JUnit test. The graph transformations used in the browser application are specified in a textual domain-specific language and visualized as story patterns. Notably, only the heavyweight parsing (step 1) and code generation (step 5) are executed on the server side; all graph transformation steps (2, 3 and 4) are executed in the browser by an interpreter written in JavaScript/GWT. This result paves the way for online collaboration between global teams of software customers, IT business analysts and software developers.
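As an illustration only (not the tooling described in the thesis), the following Python sketch shows the kind of data involved in steps 1 and 2: a graph of grammatical relations and one toy structurization rule that turns it into an informal story-pattern-like structure. The relation names, the rule and the example sentence are assumptions for demonstration.

```python
# Hedged sketch of steps 1-2: a grammatical-relation graph and one toy
# structurization rule; not the thesis implementation.
from dataclasses import dataclass

@dataclass
class Node:
    word: str
    pos: str                # part of speech, e.g. "NOUN", "VERB"

@dataclass
class Edge:
    relation: str           # grammatical relation, e.g. "nsubj", "dobj"
    head: Node
    dependent: Node

@dataclass
class StoryObject:
    name: str
    type_: str = "?"        # type information would only be added later (step 3)

@dataclass
class StoryLink:
    verb: str
    source: StoryObject
    target: StoryObject

def structurize(edges):
    """Toy rule: a verb with both an nsubj and a dobj becomes a link between two objects."""
    objects = {}
    for e in edges:
        if e.relation in ("nsubj", "dobj"):
            objects.setdefault(e.dependent.word, StoryObject(e.dependent.word))
    deps_by_verb = {}
    for e in edges:
        deps_by_verb.setdefault(e.head.word, {})[e.relation] = e.dependent.word
    links = [StoryLink(verb, objects[d["nsubj"]], objects[d["dobj"]])
             for verb, d in deps_by_verb.items() if "nsubj" in d and "dobj" in d]
    return list(objects.values()), links

# "The customer opens an account."
customer, opens, account = Node("customer", "NOUN"), Node("opens", "VERB"), Node("account", "NOUN")
edges = [Edge("nsubj", opens, customer), Edge("dobj", opens, account)]
print(structurize(edges))
```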
Abstract:
The goal of this article is to reveal the computational structure of modern principle-and-parameter (Chomskian) linguistic theories: what computational problems do these informal theories pose, and what is the underlying structure of those computations? To do this, I analyze the computational complexity of human language comprehension: what linguistic representation is assigned to a given sound? This problem is factored into smaller, interrelated (but independently statable) problems. For example, in order to understand a given sound, the listener must assign a phonetic form to the sound; determine the morphemes that compose the words in the sound; and calculate the linguistic antecedent of every pronoun in the utterance. I prove that these and other subproblems are all NP-hard, and that language comprehension is itself PSPACE-hard.
Abstract:
This article presents the case of Milao, a virtual environment that offers foreign language learners the opportunity to develop and improve their communicative skills by holding dialogues in predefined conversation scenarios that simulate interaction with a native speaker. This technology proposes a solution to one of the greatest challenges in foreign language learning: the lack of opportunities to put newly acquired grammar and vocabulary into practice. By combining research on linguistics and language learning with technological advances in the field of Natural Language Processing (NLP), particularly dialogue systems, we have created on-demand opportunities for students to converse in the language they are trying to learn.
Abstract:
This research aimed to identify and analyse the possible language-related difficulties that students face when converting natural language into mathematical language. The investigation was carried out throughout the 2008 school year in secondary school classes at two public schools in the city of Belém, where information was collected through records produced by the students in tests and bimonthly assessments. The investigation was theoretically grounded in Raymond Duval's ideas on the theory of registers of semiotic representation; the concept of meaning in the philosophy of language according to Wittgenstein; some considerations made by Gottlob Frege on the distinction between sense and reference; and some ideas of the philosopher Gilles-Gaston Granger concerning the problem of meaning and the formal aspect of mathematical language. The analysis of the information collected during the investigation revealed that, from the students' perspective, the conversion from natural language into mathematical language runs into four types of difficulty: the first points to the fact that each register of representation of the same mathematical object involves different contents to be mobilised; the second showed that students fail in the conversion from natural language into mathematical language when they do not correctly interpret the mathematical rules implicit in the statement of a problem situation; the third arose from the fact that the text of a problem situation contains words whose meaning the students did not understand or which generated ambiguity of meaning; and the fourth arose from the fact that students were unable to understand the mathematical meaning of the letters used in the problem statements.
Abstract:
Graduate Program in Information Science - FFC
Abstract:
With the increasing production of information from e-government initiatives, there is also the need to transform a large volume of unstructured data into useful information for society. All this information should be easily accessible and made available in a meaningful and effective way in order to achieve semantic interoperability in electronic government services, which is a challenge to be pursued by governments around the world. Our aim is to discuss the context of e-Government Big Data and to present a framework to promote semantic interoperability through the automatic generation of ontologies from unstructured information found on the Internet. We propose the use of fuzzy mechanisms to deal with natural language terms and present some related work in this area. The results achieved in this study comprise the architectural definition and the major components and requirements that compose the proposed framework. With this, it is possible to take advantage of the large volume of information generated by e-Government initiatives and use it to benefit society.
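The abstract does not specify its fuzzy mechanisms; as a hedged, minimal sketch of what fuzzy handling of natural language terms could look like, the snippet below grades how well a free-text term matches candidate ontology concept labels. The labels and the 0.6 threshold are invented for illustration.

```python
# Illustrative only: fuzzy matching of a natural-language term against candidate
# ontology concept labels; the labels and the 0.6 threshold are assumptions.
from difflib import SequenceMatcher

def fuzzy_match(term, concept_labels, threshold=0.6):
    """Return (label, degree) pairs whose string similarity reaches the threshold."""
    scored = [(label, SequenceMatcher(None, term.lower(), label.lower()).ratio())
              for label in concept_labels]
    matches = [(label, round(score, 2)) for label, score in scored if score >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

concepts = ["Citizen", "Public Service", "Tax Payment", "Taxpayer"]
print(fuzzy_match("tax payer", concepts))   # best-matching concepts first
```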
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
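For readers unfamiliar with these modules, the following minimal example shows what a POS-tagging step produces. It uses NLTK, which is not one of the tools discussed in this work, and assumes its tokenizer and tagger resources have been downloaded.

```python
# Illustrative POS-tagging step; requires NLTK's 'punkt' and
# 'averaged_perceptron_tagger' resources to be available.
import nltk

sentence = "The tagger annotates every word with its grammatical category."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))   # e.g. [('The', 'DT'), ('tagger', 'NN'), ...]
```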
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
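As a hedged sketch of the combination idea just mentioned, assume three hypothetical tools annotating the same level: a simple majority vote over their token-aligned outputs can correct isolated errors. The tag sequences below are invented.

```python
# Hedged sketch: majority voting over the aligned outputs of several annotation
# tools for the same linguistic level; the tool outputs are invented.
from collections import Counter

def combine_annotations(per_tool_tags):
    """per_tool_tags: one tag sequence per tool, aligned token by token."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*per_tool_tags)]

tool_a = ["DT", "NN", "VBZ", "DT", "NN"]
tool_b = ["DT", "NN", "VBZ", "DT", "VB"]   # last tag is an error
tool_c = ["DT", "NN", "VBZ", "DT", "NN"]
print(combine_annotations([tool_a, tool_b, tool_c]))   # ['DT', 'NN', 'VBZ', 'DT', 'NN']
```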
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. In this respect, ontologies (Gruber, 1993; Borst, 1997) have so far been successfully applied to solve several interoperability problems. Hence, ontologies should also help solve the problems and limitations of linguistic annotation tools mentioned above.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
E-learning systems output a huge quantity of data on a learning process. However, it takes a lot of specialist human resources to manually process these data and generate an assessment report. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of Dijkstra's algorithm learning using a visual simulation-based graph algorithm learning environment called GRAPHs.
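The abstract does not give the GLMP internals; as a hedged illustration of the general idea (fuzzy memberships plus linguistic rules producing one sentence of a report), a minimal sketch with invented membership functions, labels and wording might look like this:

```python
# Minimal sketch, not the GLMP implementation: map an objective 0-100 score to
# fuzzy linguistic labels and emit one sentence of a progress report.
def memberships(score):
    low = max(0.0, min(1.0, (50 - score) / 50))
    medium = max(0.0, 1 - abs(score - 50) / 30)
    high = max(0.0, min(1.0, (score - 50) / 50))
    return {"low": low, "medium": medium, "high": high}

def report_sentence(goal, score):
    degrees = memberships(score)
    label = max(degrees, key=degrees.get)
    return (f"The attainment of the goal '{goal}' is {label} "
            f"({score}% of the related answers were correct).")

print(report_sentence("initialise the distance table", 80))
```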
Abstract:
This paper describes the design, development and field evaluation of a machine translation system from Spanish to Spanish Sign Language (LSE: Lengua de Signos Española). The developed system focuses on helping Deaf people when they want to renew their Driver’s License. The system is made up of a speech recognizer (for decoding the spoken utterance into a word sequence), a natural language translator (for converting a word sequence into a sequence of signs belonging to the sign language), and a 3D avatar animation module (for playing back the signs). For the natural language translator, three technological approaches have been implemented and evaluated: an example-based strategy, a rule-based translation method and a statistical translator. For the final version, the implemented language translator combines all the alternatives into a hierarchical structure. This paper includes a detailed description of the field evaluation. This evaluation was carried out in the Local Traffic Office in Toledo involving real government employees and Deaf people. The evaluation includes objective measurements from the system and subjective information from questionnaires. The paper details the main problems found and discusses how to solve them (some of them specific to LSE).
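The exact hierarchical combination is not detailed in the abstract; a hedged sketch of one plausible arrangement (an example-based translation memory first, then a rule-based lexicon, then a statistical fallback) is shown below. The memory, lexicon and stand-in decoder are invented, not the system described above.

```python
# Illustrative hierarchy of translators; all data and the fallback are stand-ins.
def example_based(words, memory):
    return memory.get(tuple(words))                 # exact match in a translation memory

def rule_based(words, lexicon):
    signs = [lexicon.get(w) for w in words]
    return signs if all(signs) else None            # only if every word is covered

def statistical(words):
    return [f"SIGN({w.upper()})" for w in words]    # stand-in for a statistical decoder

def translate(words, memory, lexicon):
    return example_based(words, memory) or rule_based(words, lexicon) or statistical(words)

memory = {("buenos", "días"): ["BUENOS-DÍAS"]}
lexicon = {"renovar": "RENOVAR", "carnet": "CARNET"}
print(translate(["renovar", "carnet"], memory, lexicon))   # ['RENOVAR', 'CARNET']
```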
Abstract:
We describe the work on infusing emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). In order to demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need to modify the Dialog Manager in order to make the agent adaptive, as is done in most existing dialog systems. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustration and, ultimately, improving their satisfaction.
Abstract:
This work develops a REST service that transforms natural language sentences into RDF graphs. The generated graphs are directed graphs whose nodes are formed from the nouns or adjectives of the sentences and whose arcs are formed from the verbs. The service is used within the p-medicine project to support the following functionality. Natural language queries: the p-medicine platform currently provides a programmatic interface for SPARQL queries; the developed service would generate those queries automatically from natural language sentences. Annotation of databases using natural language: the p-medicine platform incorporates a tool, developed by the Biomedical Engineering Group at the Universidad Politécnica de Madrid, for annotating RDF databases. These annotations are required for the subsequent translation of the databases to a central schema. The annotation process requires the user to build the RDF views to be annotated manually, which means displaying the RDF schema graphically and having the user build RDF views by selecting the necessary classes and relationships. This process is often complex and too difficult for users without a technical background. The system will be incorporated to allow these views to be built using natural language.
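As a hedged sketch of the noun-node/verb-arc mapping described above, the snippet below emits subject-predicate-object triples from a toy parse. The parse tuples and the "ex:" namespace are assumptions, not the service's actual pipeline or vocabulary.

```python
# Illustrative only: nouns become nodes and verbs become arcs, emitted here as
# simple subject-predicate-object triples in an invented 'ex:' namespace.
def sentence_to_triples(parsed):
    """parsed: list of (subject_noun, verb, object_noun) tuples from some NLP step."""
    return [(f"ex:{s}", f"ex:{v}", f"ex:{o}") for s, v, o in parsed]

parsed = [("patient", "receives", "treatment"), ("treatment", "uses", "drug")]
for s, p, o in sentence_to_triples(parsed):
    print(s, p, o, ".")
```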
Abstract:
The purpose of this thesis is to study the possibilities of performing problem-solving tasks in Spanish with knowledge-based systems. The first two chapters analyse the development of natural language processing techniques, paying special attention to logical formalisms for language understanding, followed by an assessment of the current state of natural language processing systems. Finally, the core of this work is presented: a system called Sirena, which allows knowledge acquisition, understanding, retrieval and explanation tasks to be performed in Spanish with knowledge-based systems. The system handles a broad but simple subset of Spanish formalised with a logic grammar; the meaning of knowledge is based on logic, and the system has been implemented in the logic programming language Prolog II v2. Keywords: Logic Programming, Natural Language Understanding, Problem Solving, Logic Grammars, Computational Linguistics, Artificial Intelligence.
Abstract:
This paper describes the application of language translation technologies for generating bus information in Spanish Sign Language (LSE: Lengua de Signos Española). In this work, two main systems have been developed: the first for translating text messages from information panels and the second for translating spoken Spanish into natural conversations at the information point of the bus company. Both systems are made up of a natural language translator (for converting a word sequence into a sequence of LSE signs) and a 3D avatar animation module (for playing back the signs). For the natural language translator, two technological approaches have been analyzed and integrated: an example-based strategy and a statistical translator. When translating spoken utterances, it is also necessary to incorporate a speech recognizer for decoding the spoken utterance into a word sequence, prior to the language translation module. This paper includes a detailed description of the field evaluation carried out in this domain. The evaluation was carried out at the customer information office in Madrid, involving both real bus company employees and deaf people. It includes objective measurements from the system and information from questionnaires. In the field evaluation, the whole translation system presents a Sign Error Rate (SER) of less than 10% and a BLEU score greater than 90%.
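For reference, a sign error rate of this kind is commonly computed analogously to word error rate: the edit distance between the produced and the reference sign sequences divided by the reference length. The sketch below illustrates that definition with invented sequences; it is not the paper's evaluation code.

```python
# Hedged sketch: SER as edit distance over reference length; example sequences invented.
def edit_distance(a, b):
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def sign_error_rate(hypothesis, reference):
    return edit_distance(hypothesis, reference) / len(reference)

ref = ["BUS", "LLEGAR", "PARADA", "CINCO", "MINUTO"]
hyp = ["BUS", "LLEGAR", "PARADA", "MINUTO"]          # one sign missing
print(f"SER = {sign_error_rate(hyp, ref):.0%}")       # 20%
```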
Abstract:
An important part of human intelligence, both historically and operationally, is our ability to communicate. We learn how to communicate, and maintain our communicative skills, in a society of communicators – a highly effective way to reach and maintain proficiency in this complex skill. Principles that might allow artificial agents to learn language this way are incompletely known at present – the multi-dimensional nature of socio-communicative skills is beyond every machine learning framework proposed so far. Our work begins to address this challenge by proposing a way for observation-based machine learning of natural language and communication. Our framework can learn complex communicative skills with minimal up-front knowledge. The system learns by incrementally producing predictive models of causal relationships in observed data, guided by goal inference and reasoning using forward-inverse models. We present results from two experiments in which our S1 agent learns human communication by observing two humans interacting in a real-time TV-style interview, using multimodal communicative gesture and situated language to talk about the recycling of various materials and objects. S1 can learn complex multimodal language and multimodal communicative acts, with a vocabulary of 100 words forming natural sentences with relatively complex sentence structure, including manual deictic reference and anaphora. S1 is seeded only with high-level information about the goals of the interviewer and interviewee, and a small ontology; no grammar or other information is provided to S1 a priori. The agent learns the pragmatics, semantics, and syntax of complex spoken utterances and gestures from scratch, by observing the humans compare and contrast the cost and pollution related to recycling aluminum cans, glass bottles, newspaper, plastic, and wood. After 20 hours of observation S1 can perform an unscripted TV interview with a human, in the same style, without making mistakes.
Abstract:
The concept of algorithm is one of the core subjects in computer science, so it is crucial for students to get a good grasp of it from the very start of their training. In this respect, having a tool that guides students through the process of learning this concept can make a huge difference to their instruction. Most authors agree that how an algorithm visualization tool is used is essential to its effectiveness: learners who are actively involved in the visualization consistently outperform those who view the algorithms passively. We therefore think that one of the best exercises for learning an algorithm is for the student to simulate the algorithm's execution using a visualization tool, that is, to perform a visual simulation of the algorithm. The first part of this thesis presents the results of in-depth research into the features that a tool for learning algorithms and mathematical concepts should have in order to maximize its effectiveness: the eMathTeacher set of requirements, together with a learning environment, GRAPHs, that integrates tools complying with them. This work led to the definition of the eMathTeacher concept, which has materialized in the eMathTeacher set of requirements. An e-learning tool is eMathTeacher compliant if it acts as a virtual math trainer, i.e. if it is an on-line self-assessment tool that helps students actively and autonomously learn mathematical concepts or algorithms, correcting their mistakes and providing clues to find the right answer without giving it explicitly. In such tools, the algorithm simulation does not continue until the user enters the correct answer. To gather into a single environment a collection of tools that meet the eMathTeacher requirements, we created GRAPHs, an extendible, visual simulation-based environment designed for active and independent learning of graph algorithms and set up to integrate simulators of different algorithms. Apart from options for creating and editing the graph and visualizing the changes made to it during the simulation, the environment includes step-by-step correction, animation of the algorithm's pseudocode, pop-up questions, handling of the algorithm's data structures, and creation of an XML interaction log.
Another problem addressed in this work, because of its importance in the learning process, is formative assessment. The use of certain e-learning environments generates a large amount of data that must be interpreted to arrive at an assessment that is not limited to merely counting mistakes. This includes establishing relationships between the available data and generating linguistic descriptions that inform the student about the evolution of their learning. Moreover, formative assessment should specify the level of attainment of the learning goals defined by the instructor. Until now, only a human expert was able to make this kind of assessment. Our goal has been to create a computational model that simulates the instructor's reasoning and generates a report on learning evolution that specifies the attainment level of each of the goals defined by the instructor.
As a result of this work, the second part of this thesis presents the granular linguistic model of learning assessment, capable of modelling the assessment and automatically generating formative assessment reports. This model is a particularization of the granular linguistic model of a phenomenon (GLMP), in whose development and formalization we collaborated, and it is based on fuzzy logic and the computational theory of perceptions. This technique, which uses inference systems based on linguistic rules and can implement complex assessment criteria, has been applied to two cases: the criterion-based assessment of the interaction logs generated by GRAPHs and of Moodle quizzes. As a consequence, expert systems that assess both types of exercise have been implemented, tested and used in the classroom. Apart from the numerical grade, the systems generate assessment reports, in natural language, on the proficiency levels achieved, using only objective data from correct and incorrect responses. In addition, two applications have been developed that can be configured to implement these expert systems: one processes the files produced by GRAPHs, and the other, which can be integrated into Moodle, performs the assessment based on quiz results.