875 results for visualization tools


Relevance:

60.00%

Publisher:

Abstract:

Understanding the structure of a software system is an important first step in carrying out analysis and maintenance tasks on it. In addition to the links defined by the hierarchy, there is another type of link between the elements of the software, which we call adjacency links. A complete understanding of a software system must therefore take all of these types of links into account. Visualization tools are generally effective at helping a developer understand a software system by presenting information in a clear and concise form. However, visualizing hierarchical and adjacency links simultaneously can produce a great deal of visual clutter, making such visualizations ineffective at providing useful information about these links. In this thesis we propose a 3D visualization tool that represents both the hierarchical structure of a software system and the adjacency links existing between its elements. Our tool uses three different layout types to represent the hierarchy, each of which can support the display of adjacency links effectively. To represent the adjacency links, we propose a 3D version of the Hierarchical Edge Bundles method. We also use a metaheuristic algorithm to improve the layout in order to further reduce visual clutter among the adjacency links. In addition, our tool offers a set of interaction capabilities that allow a user to navigate through the information offered by the visualization. Our contributions were successfully evaluated on large software systems.
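As context for the bundling technique mentioned in this abstract, the sketch below is a rough Python illustration (not the thesis tool; the toy hierarchy and 3D coordinates are invented) of the core idea of Hierarchical Edge Bundles: an adjacency edge between two elements is routed along the hierarchy path connecting them, then relaxed toward a straight line by a bundling factor.

    # Rough sketch of edge bundling over a hierarchy: control points follow the path
    # through the tree (up to the lowest common ancestor and down again), then each
    # point is pulled toward the straight line by (1 - beta). Illustrative data only.
    from typing import Dict, List, Tuple

    Point = Tuple[float, float, float]

    def path_to_root(node: str, parent: Dict[str, str]) -> List[str]:
        """Return [node, parent, ..., root]."""
        path = [node]
        while path[-1] in parent:
            path.append(parent[path[-1]])
        return path

    def hierarchy_path(a: str, b: str, parent: Dict[str, str]) -> List[str]:
        """Nodes from a up to the lowest common ancestor and down to b."""
        up_a, up_b = path_to_root(a, parent), path_to_root(b, parent)
        common = set(up_a) & set(up_b)
        lca = next(n for n in up_a if n in common)
        up = up_a[: up_a.index(lca) + 1]
        down = list(reversed(up_b[: up_b.index(lca)]))
        return up + down

    def bundle_edge(a: str, b: str, parent: Dict[str, str],
                    pos: Dict[str, Point], beta: float = 0.85) -> List[Point]:
        """Control points for the bundled edge; beta=1 hugs the hierarchy, beta=0 is straight."""
        ctrl = [pos[n] for n in hierarchy_path(a, b, parent)]
        p0, pn = ctrl[0], ctrl[-1]
        n = len(ctrl) - 1
        out = []
        for i, p in enumerate(ctrl):
            straight = tuple(p0[d] + (i / n) * (pn[d] - p0[d]) for d in range(3))
            out.append(tuple(beta * p[d] + (1 - beta) * straight[d] for d in range(3)))
        return out  # feed these into any 3D spline/polyline renderer

    # Toy hierarchy: root -> {pkgA -> {a1}, pkgB -> {b1}}
    parent = {"pkgA": "root", "pkgB": "root", "a1": "pkgA", "b1": "pkgB"}
    pos = {"root": (0, 0, 2), "pkgA": (-1, 0, 1), "pkgB": (1, 0, 1),
           "a1": (-1.5, 0, 0), "b1": (1.5, 0, 0)}
    print(bundle_edge("a1", "b1", parent, pos))

Drawing many adjacency edges this way makes related edges share intermediate control points, which is what reduces the visual clutter the abstract refers to.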

Relevance:

60.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command-line utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine. This script is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run. (5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale grid resources such as the UK National Grid Service.
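To make the submit-and-stream workflow concrete, here is a rough Python sketch of the pattern described in the abstract. It is illustrative only: the service URL, endpoint paths and JSON fields are hypothetical and are not the real G-Rex REST interface, whose actual client is a Java command-line program (GRexRun).

    # Hypothetical REST-style client in the spirit of the described workflow: submit a
    # model run, then repeatedly download new output files while the job is still
    # executing so they never accumulate on the remote system.
    import time
    import requests

    BASE = "http://cluster.example.org:8080/grex/nemo"   # hypothetical service URL

    def submit_run(input_files):
        """Upload input files and start the model; returns a job identifier."""
        files = {name: open(path, "rb") for name, path in input_files.items()}
        resp = requests.post(f"{BASE}/jobs", files=files)
        resp.raise_for_status()
        return resp.json()["jobId"]                      # hypothetical response field

    def stream_outputs(job_id, poll_seconds=30):
        """Fetch each output file as soon as it appears, then wait for the next poll."""
        seen = set()
        while True:
            status = requests.get(f"{BASE}/jobs/{job_id}").json()   # hypothetical schema
            for name in status.get("newOutputs", []):
                if name not in seen:
                    with requests.get(f"{BASE}/jobs/{job_id}/outputs/{name}", stream=True) as r:
                        with open(name, "wb") as fh:
                            for chunk in r.iter_content(chunk_size=1 << 20):
                                fh.write(chunk)
                    seen.add(name)
            if status.get("state") == "FINISHED":
                break
            time.sleep(poll_seconds)

    # In a workflow script, the call that used to be "mpirun nemo.exe" would instead
    # trigger submit_run(...) followed by stream_outputs(...).

The key point of the design, as the abstract notes, is that the client is simple enough to drop into existing shell workflows and that output is pulled back (and deleted remotely) during the run rather than after it.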

Relevance:

60.00%

Publisher:

Abstract:

In the recovery process of oil, rock heterogeneity has a huge impact on how fluids move in the field, defining how much oil can be recovered. In order to study this variability, percolation theory, which describes phenomena in which geometry and connectivity are fundamental, is a very useful model. The result of a percolation simulation is three-dimensional data that has no physical meaning until it is visualized in the form of images or animations. Although many powerful and sophisticated visualization tools have been developed, they focus on generating planar 2D images. In order to interpret the data as it would appear in the real world, virtual reality techniques using stereo images can be used. In this work we propose an interactive and helpful tool, named ZSweepVR, based on virtual reality techniques, that allows a better comprehension of volumetric data generated by the simulation of dynamic percolation. The developed system is able to render images using two different techniques: surface rendering and volume rendering. Surface rendering is accomplished by OpenGL directives, and volume rendering is accomplished by the ZSweep direct volume rendering engine. For volume rendering, we implemented an algorithm to generate stereo images. We also propose enhancements to the original percolation algorithm in order to obtain better performance. We applied the developed tools to a mature field database, obtaining satisfactory results. The use of stereoscopic and volumetric images brought valuable contributions to the interpretation and cluster formation analysis in percolation, which could certainly lead to better decisions about the exploration and recovery process in oil fields.
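As a minimal illustration of the stereo-image idea mentioned in the abstract (not the thesis's OpenGL/ZSweep implementation; the coordinates, eye separation and render callback are placeholders), the Python sketch below generates the two viewpoints of a stereo pair by offsetting the camera along the view's horizontal axis.

    # Minimal stereo-pair sketch: render the same volume from two horizontally offset
    # cameras (left/right eye) and present one image to each eye.
    import numpy as np

    def stereo_cameras(eye, look_at, up, eye_separation=0.065):
        """Return (left_eye, right_eye) positions offset along the view's horizontal axis."""
        eye, look_at, up = map(np.asarray, (eye, look_at, up))
        forward = look_at - eye
        forward = forward / np.linalg.norm(forward)
        right = np.cross(forward, up)
        right = right / np.linalg.norm(right)
        offset = 0.5 * eye_separation * right
        return eye - offset, eye + offset

    def render_stereo_pair(volume, eye, look_at, up, render):
        """render(volume, camera_eye, look_at, up) -> image; returns (left_img, right_img)."""
        left, right = stereo_cameras(eye, look_at, up)
        return render(volume, left, look_at, up), render(volume, right, look_at, up)

    left_eye, right_eye = stereo_cameras(eye=(0, 0, 5), look_at=(0, 0, 0), up=(0, 1, 0))
    print(left_eye, right_eye)   # two viewpoints about 6.5 cm apart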

Relevance:

60.00%

Publisher:

Abstract:

Interactive visual representations complement traditional statistical and machine learning techniques for data analysis, allowing users to play a more active role in the knowledge discovery process and making the whole process more understandable. Though visual representations are applicable to several stages of the knowledge discovery process, a common use of visualization is in the initial stages, to explore and organize a sometimes unknown and complex data set. In this context, the integrated and coordinated use of multiple graphical representations (that is, where user actions can affect multiple visualizations when desired) allows data to be observed from several perspectives and offers richer information than isolated representations. In this paper we propose an underlying model for an extensible and adaptable environment that allows independently developed visualization components to be gradually integrated into a user-configured knowledge discovery application. Because a major requirement when using multiple visual techniques is the ability to link them, so that user actions executed on one representation propagate to others if desired, the model also allows runtime configuration of coordinated user actions over different visual representations. We illustrate how this environment is being used to assist data exploration and organization in a climate classification problem.
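As a rough illustration of the coordination requirement described in the abstract (this is not the paper's actual model; all class and action names are invented), the Python sketch below uses a small mediator so that independently written views can be linked at runtime: a selection made in one view propagates to every view registered for that action.

    # Mediator-style coordination: views register handlers for user actions at runtime,
    # so linking between representations is configured rather than hard-coded.
    from collections import defaultdict
    from typing import Callable, Dict, List

    class Coordinator:
        def __init__(self):
            self._links: Dict[str, List[Callable]] = defaultdict(list)

        def link(self, action: str, handler: Callable) -> None:
            """Configure, at runtime, which views react to a given user action."""
            self._links[action].append(handler)

        def broadcast(self, action: str, payload) -> None:
            for handler in self._links[action]:
                handler(payload)

    class ScatterPlot:
        def __init__(self, coordinator): self.c = coordinator
        def user_selects(self, ids):                 # user brushes some points
            self.c.broadcast("select", ids)
        def on_select(self, ids): print("scatterplot highlights", ids)

    class ParallelCoordinates:
        def on_select(self, ids): print("parallel coordinates highlights", ids)

    c = Coordinator()
    scatter, pcp = ScatterPlot(c), ParallelCoordinates()
    c.link("select", scatter.on_select)              # self-highlight
    c.link("select", pcp.on_select)                  # cross-view propagation
    scatter.user_selects({3, 17, 42})

Keeping the link configuration in one mediator object is what lets independently developed components be added or removed without touching each other's code.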

Relevance:

60.00%

Publisher:

Abstract:

People store data about the most varied everyday events in order to understand themselves, detect behaviors, predict events and build solid knowledge for decision making. As these events accumulate, a large amount of data is collected, and this data needs to be processed to extract valuable information. The data has a temporal component arising from the collection process (daily, monthly or yearly), which needs to be taken into account during exploration. Exploration based on the temporal component can be single-scale or multi-scale. Data mining aims to extract knowledge from large databases and, when combined with visualization tools, can be more effective at uncovering information. These visualization tools display data and allow the user to manipulate and change it through interaction features oriented toward the user's goal. The user can combine tools and chain visualization steps across tools through messages. This monograph aims to add interactivity to the AdaptaVis architecture model for information visualization (InfoVis), developed by Shimabukuro (2004), extending its exploration capabilities and providing a consistent basis for the user to handle data and extract information.
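As a small illustration of the single-scale versus multi-scale temporal exploration mentioned above, the sketch below (Python with pandas; the series, values and column names are synthetic stand-ins, not data or code from the monograph) views one daily series at three temporal scales by aggregation.

    # Multi-scale view of a temporal series: the same daily data aggregated to monthly
    # and yearly scales, which a coordinated set of views could show side by side.
    import numpy as np
    import pandas as pd

    days = pd.date_range("2022-01-01", "2023-12-31", freq="D")
    df = pd.DataFrame({"value": np.random.default_rng(0).normal(10, 2, len(days))}, index=days)

    daily = df                                   # finest scale, as collected
    monthly = df.resample("MS").mean()           # mid scale for trend inspection
    yearly = df.resample("YS").mean()            # coarsest scale for a long-term view

    # A brush on the yearly view could drill down into the corresponding months and days.
    print(monthly.head())
    print(yearly)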

Relevance:

60.00%

Publisher:

Abstract:

The present project consists of exploratory research centered on the main patterns in the database of treaties registered by the Secretariat of the United Nations under the obligations stated in Article 102 of the organization's Charter. More specifically, its purpose is to adapt and apply computer visualization tools to this database. The visualization tools developed here aim at the exploration, discovery and communication of features relevant to International Relations research. These tools allow the manipulation of selected variables in the database, such as dates, participating actors and treaty subjects. The visual representations furthermore facilitate the analysis of several dimensions of subsets chosen by the user, such as their qualitative, temporal and geographic distribution. These procedures help bring hidden patterns to the surface. The discussion of the limits of these tools also has a central place in this research. We list and analyze the characteristics and steps that could have a relevant impact on the final result.
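As a hypothetical example of the kind of variable manipulation described above, the Python sketch below filters a tiny, made-up treaty table by participant and period and summarizes its temporal and thematic distribution; the column names and records are invented and do not come from the UN database.

    # Selecting a participant and a period, then counting treaties per year and subject,
    # which is the sort of subset a temporal or geographic view would then display.
    import pandas as pd

    treaties = pd.DataFrame([
        {"year": 1965, "parties": "Brazil;France", "subject": "trade"},
        {"year": 1972, "parties": "Brazil;Japan",  "subject": "science"},
        {"year": 1972, "parties": "France;Japan",  "subject": "trade"},
    ])

    mask = treaties["parties"].str.contains("Brazil") & treaties["year"].between(1960, 1975)
    subset = treaties[mask]
    per_year = subset.groupby("year").size()
    per_subject = subset.groupby("subject").size()
    print(per_year, per_subject, sep="\n")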

Relevance:

60.00%

Publisher:

Abstract:

INTRODUCTION
Out-migration from mountain areas is leaving behind half-families and the elderly to deal with managing the land alongside daily life challenges. A potential reduction in labour force, as well as in expertise on cropping practices, maintenance of terraces and irrigation canals, slope stabilization, grazing, forest and other land management practices, is further challenged by changing climate conditions and increased environmental threats. An understanding of the resilience of managed land resources, needed to enhance adaptation to environmental and socio-economic variability, and evidence of the impact of Sustainable Land Management (SLM) on the mitigation of environmental threats have so far not been sufficiently addressed. The study presented here aims to find out how land management in mountains is being affected by migration in the context of natural hazards and climate change in two study sites, namely the Quillacollo District of Bolivia and the Panchase area of Western Nepal, and which measures are needed to increase the resilience of livelihoods and land management practices. The presentation includes draft results from the first field work periods at both sites.
A context of high vulnerability
According to UNISDR, vulnerability is defined as "the characteristics and circumstances of a community, system or asset that make it susceptible to the damaging effects of a hazard". Hazards are another threat affecting people's livelihoods in mountainous areas. They can be either natural or human induced. Landslides, debris flows and floods are affecting people. Good land management can significantly reduce the occurrence of hazards. Conversely, poor land management or land abandonment can lead to negative consequences on the land, and thus again increase the vulnerability of people's livelihoods.
METHODS
The study integrates bio-physical and socio-economic data through a case study as well as a mapping approach. From the social sciences, well-tested participatory qualitative methodologies typically used in Vulnerability and Capacity Analyses are applied, such as semi-structured interviews with so-called 'key informants', transect walks, and participatory risk and social resource mapping. The bio-physical analysis of the current environmental conditions determining hazards and structural vulnerability is obtained from remote sensing analysis, field work studies, and GIS analysis. The assessment of the consequences of migration in the area of origin is linked with a mapping and appraisal of land management practices (www.wocat.net, Schwilch et al., 2011). The WOCAT mapping tool (WOCAT/LADA/DESIRE 2008) allows capturing the major land management practices / technologies, their spread, effectiveness and impact within a selected area. Data drawn from a variety of sources are compiled and harmonised by a team of experts, consisting of land degradation and conservation specialists working in consultation with land users from various backgrounds. The specialists' and land users' knowledge is combined with existing datasets and documents (maps, GIS layers, high-resolution satellite images, etc.) in workshops that are designed to build consensus regarding the variables used to assess land degradation and SLM. This process is also referred to as participatory expert assessment or consensus mapping. The WOCAT mapping and SLM documentation methodologies are used together with participatory mapping and other socio-economic data collection (interviews, questionnaires, focus group discussions, expert consultation) to combine information about migration types and land management issues. GIS and other spatial visualization tools (e.g. Google Maps) will help to represent and understand these links.
FIRST RESULTS
Nepal: In Nepal, migration is a common strategy to improve livelihoods. Migrants are mostly men, and they migrate to other Asian countries, first to India and then to the Gulf countries. Only a few women migrate abroad; women migrate mainly to the main Nepali cities when they can afford it. Remittances are used primarily for food and education; however, they are hardly used for agricultural purposes. While traditional agriculture is being maintained, only a few new practices are emerging, such as vegetable farming or agroforestry. Land abandonment is a growing consequence of out-migration, resulting in the spread of invasive species. However, most impacts of migration on land management are not yet clear. Moreover, education is a major concern for the respondents; they want their children to have a better education and get better opportunities. Linked to this, unemployment is another major concern of the respondents, which in turn is "solved" through out-migration.
Bolivia: Migration is a common livelihood strategy in Bolivia. In the study area, whole families are migrating downward to the cities of the valleys or to other departments of Bolivia, especially to Chapare (tropics) for coca production and to Santa Cruz. Some young people migrate abroad, mostly to Argentina. There are few remittances, and if they are sent to the families in the mountain areas, they are mainly used for agricultural purposes. The impacts of migration on land management practices are not clear, although there are some important aspects to be underlined. The people who move downward are still using their land and come back during part of the week to work on it. As a consequence of this multi-residency, there is a tendency to reduce land management work or to change the way the land is used. As in Nepal, education is a very important issue in this area. There is no secondary school, and only one community has a primary school. After the 6th grade, students therefore have to go down to the valley towns to study. The lack of basic education is pushing more and more people to move down and leave the mountains.
CONCLUSIONS
This study is ongoing; more data have to be collected to clearly assess the impacts of out-migration on land management in mountain areas. The first results of the study allow us to present a few interesting findings. The two case studies are very different; however, in both areas young people are no longer staying in the mountains and leave behind half-families and the elderly to manage the land. Additionally, in both cases education is a major reason for moving out, even though the causes are not always the same. More specifically, in the case of Nepal, the use of remittances underlines the fact that investment in agriculture is not the first choice of a family. In the case of Bolivia, some interesting findings showed that people continue to work on their land even if they move downward. The further steps of the study will help to explore these issues in more detail.
REFERENCES
Schwilch G., Bestelmeyer B., Bunning S., Critchley W., Herrick J., Kellner K., Liniger H.P., Nachtergaele F., Ritsema C.J., Schuster B., Tabo R., van Lynden G., Winslow M. 2011. Experiences in Monitoring and Assessment of Sustainable Land Management. Land Degradation & Development 22 (2), 214-225. doi:10.1002/ldr.1040
WOCAT/LADA/DESIRE 2008. A Questionnaire for Mapping Land Degradation and Sustainable Land Management. Liniger H.P., van Lynden G., Nachtergaele F., Schwilch G. (eds), Centre for Development and Environment, Institute of Geography, University of Berne, Berne.

Relevance:

60.00%

Publisher:

Abstract:

We present a generic preprocessor for combined static/dynamic validation and debugging of constraint logic programs. Passing programs through the preprocessor prior to execution allows detecting many bugs automatically. This is achieved by performing a repertoire of tests which range from simple syntactic checks to much more advanced checks based on static analysis of the program. Together with the program, the user may provide a series of assertions which trigger further automatic checking of the program. Such assertions are written using the assertion language presented in Chapter 2, which allows expressing a wide variety of properties. These properties extend beyond the predefined set which may be understandable by the available static analyzers and include properties defined by means of user programs. In addition to user-provided assertions, in each particular CLP system assertions may be available for predefined system predicates. Checking of both user-provided assertions and assertions for system predicates is attempted first at compile time by comparing them with the results of static analysis. This may allow statically proving that the assertions hold (i.e., they are validated) or that they are violated (and thus bugs detected). User-provided assertions (or parts of assertions) which cannot be statically proved nor disproved are optionally translated into run-time tests. The implementation of the preprocessor is generic in that it can be easily customized to different CLP systems and dialects and in that it is designed to allow the integration of additional analyses in a simple way. We also report on two tools which are instances of the generic preprocessor: CiaoPP (for the Ciao Prolog system) and CHIPRE (for the CHIP CLP(FD) system). The currently existing analyses include types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
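A schematic sketch of that compile-time/run-time policy is given below in Python (the described tools work on CLP programs, not Python; the assertion strings and the three-valued analyser are invented stand-ins): assertions that static analysis can prove are marked checked, those it can disprove are reported as bugs, and the undecided remainder is optionally turned into run-time tests.

    # Three-valued assertion checking: proved / disproved / unknown, with unknown
    # assertions optionally compiled into run-time checks.
    from enum import Enum

    class Status(Enum):
        PROVED = "checked"        # assertion holds for all executions
        DISPROVED = "false"       # a bug: the assertion is violated
        UNKNOWN = "undecided"     # static analysis was not precise enough

    def static_check(assertion, analysis_info) -> Status:
        """Placeholder for comparing an assertion with static analysis results."""
        return analysis_info.get(assertion, Status.UNKNOWN)

    def preprocess(assertions, analysis_info, emit_runtime_tests=True):
        runtime_tests, report = [], {}
        for a in assertions:
            status = static_check(a, analysis_info)
            report[a] = status
            if status is Status.UNKNOWN and emit_runtime_tests:
                runtime_tests.append(a)        # would be compiled into the program
        return report, runtime_tests

    analysis = {"p/2 is called with an integer first argument": Status.PROVED,
                "q/1 never fails": Status.DISPROVED}
    report, tests = preprocess(list(analysis) + ["r/3 terminates"], analysis)
    print(report)
    print("run-time checks generated for:", tests)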

Relevance:

60.00%

Publisher:

Abstract:

In an advanced program development environment, such as that discussed in the introduction of this book, several tools may coexist which handle both the program and information on the program in different ways. Also, these tools may interact among themselves and with the user. Thus, the different tools and the user need some way to communicate. It is our design principle that such communication be performed in terms of assertions. Assertions are syntactic objects which allow expressing properties of programs. Several assertion languages have been used in the past in different contexts, mainly related to program debugging. In this chapter we propose a general language of assertions which is used in different tools for validation and debugging of constraint logic programs in the context of the DiSCiPl project. The assertion language proposed is parametric w.r.t. the particular constraint domain and properties of interest being used in each different tool. The language proposed is quite general in that it poses few restrictions on the kind of properties which may be expressed. We believe the assertion language we propose is of practical relevance and appropriate for the different uses required in the tools considered.

Relevance:

60.00%

Publisher:

Abstract:

Ciao is a public domain, next generation multi-paradigm programming environment with a unique set of features: Ciao offers a complete Prolog system, supporting ISO-Prolog, but its novel modular design allows both restricting and extending the language. As a result, it allows working with fully declarative subsets of Prolog and also extending these subsets (or ISO-Prolog) both syntactically and semantically. Most importantly, these restrictions and extensions can be activated separately on each program module so that several extensions can coexist in the same application for different modules. Ciao also supports (through such extensions) programming with functions, higher-order (with predicate abstractions), constraints, and objects, as well as feature terms (records), persistence, several control rules (breadth-first search, iterative deepening, ...), concurrency (threads/engines), a good base for distributed execution (agents), and parallel execution. Libraries also support WWW programming, sockets, external interfaces (C, Java, TclTk, relational databases, etc.), etc. Ciao offers support for programming in the large with a robust module/object system, module-based separate/incremental compilation (automatic, with no need for makefiles), an assertion language for declaring (optional) program properties (including types and modes, but also determinacy, non-failure, cost, etc.), automatic static inference and static/dynamic checking of such assertions, etc. Ciao also offers support for programming in the small, producing small executables (including only those builtins used by the program) and support for writing scripts in Prolog. The Ciao programming environment includes a classical top-level and a rich emacs interface with an embeddable source-level debugger and a number of execution visualization tools. The Ciao compiler (which can be run outside the top-level shell) generates several forms of architecture-independent and stand-alone executables, whose speed, efficiency and executable size are very competitive with other commercial and academic Prolog/CLP systems. Library modules can be compiled into compact bytecode or C source files, and linked statically, dynamically, or autoloaded. The novel modular design of Ciao enables, in addition to modular program development, effective global program analysis and static debugging and optimization via source-to-source program transformation. These tasks are performed by the Ciao preprocessor (ciaopp, distributed separately). The Ciao programming environment also includes lpdoc, an automatic documentation generator for LP/CLP programs. It processes Prolog files adorned with (Ciao) assertions and machine-readable comments and generates manuals in many formats including postscript, pdf, texinfo, info, HTML, man, etc., as well as on-line help, ascii README files, entries for indices of manuals (info, WWW, ...), and maintains WWW distribution sites.

Relevance:

60.00%

Publisher:

Abstract:

The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have added to the 'traditional' set of linear methods, which includes the cross-correlation and the coherency function in the time and frequency domain, respectively, or more elaborate tools such as Granger Causality. This increase in the number of approaches to tackle the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab® environment (The MathWorks, Inc.), which is designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe that this toolbox will be very helpful to all the researchers working in the emerging field of brain connectivity analysis.
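As a quick illustration (in Python, not HERMES itself, which is a Matlab toolbox) of the simplest linear interdependence measure mentioned above, the sketch below estimates the maximum lagged cross-correlation between two signals; the synthetic signals are invented for the example, and real toolboxes add many more measures plus statistics for multiple comparisons.

    # Maximum absolute lagged correlation as a basic functional connectivity index.
    import numpy as np

    def max_cross_correlation(x, y, max_lag):
        """Largest |lagged correlation|; positive lag means y is delayed relative to x."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        best_corr, best_lag = 0.0, 0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                c = float(np.mean(x[:len(x) - lag] * y[lag:]))
            else:
                c = float(np.mean(x[-lag:] * y[:lag]))
            if abs(c) > abs(best_corr):
                best_corr, best_lag = c, lag
        return best_corr, best_lag

    rng = np.random.default_rng(1)
    source = rng.normal(size=1000)
    delayed = np.roll(source, 5) + 0.5 * rng.normal(size=1000)   # y lags x by 5 samples
    print(max_cross_correlation(source, delayed, max_lag=20))    # high correlation near lag 5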

Relevance:

60.00%

Publisher:

Abstract:

The concept of algorithm is one of the core subjects in computer science. It is extremely important, then, for students to get a good grasp of this concept from the very start of their training. In this respect, having a tool that helps and shepherds students through the process of learning this concept can make a huge difference to their instruction. Much has been written about how helpful algorithm visualization tools can be. Most authors agree that the most important part of the learning process is how students use the visualization tool. Learners who are actively involved in visualization consistently outperform other learners who view the algorithms passively. Therefore we think that one of the best exercises to learn an algorithm is for the user to simulate the algorithm execution while using a visualization tool, thus performing a visual algorithm simulation. The first part of this thesis presents the eMathTeacher set of requirements together with an eMathTeacher-compliant tool called GRAPHs. For some years, we have been developing a theory about what the key features of an effective e-learning system for teaching mathematical concepts and algorithms are. This led to the definition of the eMathTeacher concept, which has materialized in the eMathTeacher set of requirements. An e-learning tool is eMathTeacher compliant if it works as a virtual math trainer. In other words, it has to be an on-line self-assessment tool that helps students to actively and autonomously learn math concepts or algorithms, correcting their mistakes and providing them with clues to find the right answer. In an eMathTeacher-compliant tool, algorithm simulation does not continue until the user enters the correct answer. GRAPHs is an extendible environment designed for active and independent visual simulation-based learning of graph algorithms, set up to integrate tools that help the user simulate the execution of different algorithms.
Apart from the options of creating and editing the graph, and visualizing the changes made to the graph during simulation, the environment also includes step-by-step correction, algorithm pseudo-code animation, pop-up questions, data structure handling and XML-based interaction log creation features. On the other hand, assessment is a key part of any learning process. Through the use of e-learning environments, huge amounts of data about this process can be output. Nevertheless, this information has to be interpreted and represented in a practical way to arrive at a sound assessment that is not confined to merely counting mistakes. This includes establishing relationships between the available data and also providing instructive linguistic descriptions about learning evolution. Additionally, formative assessment should specify the level of attainment of the learning goals defined by the instructor. Until now, only human experts were capable of making such assessments. In facing this problem, our goal has been to create a computational model that simulates the instructor's reasoning and generates an enlightening learning evolution report in natural language. The second part of this thesis presents the granular linguistic model of learning assessment, which models the assessment of the learning process and implements the automated generation of a formative assessment report. The model is a particularization of the granular linguistic model of a phenomenon (GLMP) paradigm, based on fuzzy logic and the computational theory of perceptions, to the assessment phenomenon. This technique, useful for implementing complex assessment criteria using inference systems based on linguistic rules, has been applied to two particular cases: the assessment of the interaction logs generated by GRAPHs and the criterion-based assessment of Moodle quizzes. As a consequence, several expert systems to assess different algorithm simulations and Moodle quizzes have been implemented, tested and used in the classroom. Apart from the grade, the designed expert systems also generate natural language progress reports on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. In addition, two applications, capable of being configured to implement the expert systems, have been developed. One is geared up to process the files output by GRAPHs and the other is a Moodle plug-in set up to perform the assessment based on the quiz results.
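To make the interaction pattern concrete, here is a toy Python sketch of the eMathTeacher-style loop described above (this is not GRAPHs code; the algorithm step, hint texts and learner answers are invented): the simulation of each step does not advance until the learner gives the correct answer, wrong answers only produce hints, and each attempt is recorded in an interaction log.

    # Self-assessment loop: correct the learner without revealing the answer, and log
    # attempts so a later formative-assessment stage can interpret them.
    def ask(prompt, answers):
        """Stand-in for a GUI prompt; here it just consumes predefined learner answers."""
        return answers.pop(0)

    def simulate_step(expected, hint, answers, log):
        attempts = 0
        while True:
            attempts += 1
            answer = ask("Which node is selected next?", answers)
            if answer == expected:
                log.append({"expected": expected, "attempts": attempts})
                return
            print("Not quite.", hint)       # a clue, never the explicit answer

    interaction_log = []                     # would be exported as an XML log by GRAPHs
    learner_answers = ["C", "B", "D"]        # first step: wrong ("C"), then right ("B")
    simulate_step(expected="B", hint="Pick the unvisited node with the smallest tentative distance.",
                  answers=learner_answers, log=interaction_log)
    simulate_step(expected="D", hint="Re-check the updated distances.",
                  answers=learner_answers, log=interaction_log)
    print(interaction_log)                   # [{'expected': 'B', 'attempts': 2}, {'expected': 'D', 'attempts': 1}]

A log of this kind (per-step attempts rather than a single mark) is the sort of objective data the thesis's fuzzy-rule assessment model can then turn into a natural-language progress report.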