54 results for model driven system, semantic representation, semantic modeling, enterprise system development
Abstract:
Worldwide, breast cancer is the most frequent type of cancer and one of the leading causes of death among the female population. Currently, the most effective method for detecting breast lesions at an early stage is mammography, which contributes decisively to the early diagnosis of this disease; when detected in time, it has a very high probability of cure. One of the main and most frequent findings in a mammogram is microcalcifications, which are considered an important indicator of breast cancer. When analysing mammograms, factors such as visualization conditions, fatigue or the professional experience of the radiologist increase the risk of missing lesions that are present. To reduce this risk it is important to have alternatives, such as a second opinion from another specialist or a double reading by the same one. The first option raises the cost, and both prolong the diagnosis time. This is a strong motivation for the development of decision-support systems. This thesis proposes, develops and justifies a system capable of detecting microcalcifications in regions of interest extracted from digitized mammograms, in order to contribute to the early detection of breast cancer. The system is based on digital image processing, pattern recognition and artificial intelligence techniques. Its development takes the following considerations into account: 1. To train and test the proposed system, a database of images is created, consisting of regions of interest extracted from digitized mammograms. 2. The Top-Hat transform, a digital image processing technique based on mathematical morphology operations, is applied to improve the contrast between the microcalcifications and the surrounding tissue. 3. A novel algorithm called sub-segmentation is proposed, based on pattern recognition techniques and an unsupervised clustering algorithm, the PFCM (Possibilistic Fuzzy c-Means). The goal is to find the regions corresponding to microcalcifications and distinguish them from healthy tissue. In addition, to show the advantages and disadvantages of the proposed algorithm, it is compared with two algorithms of the same type: k-means and FCM (Fuzzy c-Means). It is worth noting that this work is the first to use sub-segmentation to detect regions belonging to microcalcifications in mammography images. 4. Finally, a classifier based on an artificial neural network, specifically an MLP (Multi-Layer Perceptron), is proposed. The classifier performs a binary discrimination of the patterns built from the grey-level intensities of the original image, distinguishing between microcalcification and healthy tissue. ABSTRACT Breast cancer is one of the leading causes of women's mortality in the world, and its early detection remains key to improving prognosis and survival.
Currently, the most reliable and practical method for the early detection of breast cancer is mammography. The presence of microcalcifications has been considered a very important indicator of malignant types of breast cancer, and their detection and classification are important to prevent and treat the disease. However, the detection and classification of microcalcifications remain difficult because, in mammograms, there is poor contrast between microcalcifications and the surrounding tissue. Factors such as visualization conditions, tiredness or insufficient experience of the specialist increase the risk of omitting lesions that are present. To reduce this risk, it is important to have alternatives, such as a second opinion or a double reading by the same specialist. The first option raises the cost, and both prolong the diagnosis time. This is why there is strong motivation for developing support systems for the decision-making process. This work presents, develops and justifies a system for the detection of microcalcifications in regions of interest extracted from digitized mammograms, to contribute to the early detection of breast cancer. The system is based on image processing, pattern recognition and artificial intelligence techniques. For the development of the system, the following features are considered: With the aim of training and testing the system, a database of images is created, consisting of regions of interest extracted from digitized mammograms. The application of the top-hat transform is proposed; this image processing technique is based on mathematical morphology operations, and its aim is to improve the contrast between microcalcifications and the tissue present in the image. A novel algorithm called sub-segmentation is proposed; it is based on pattern recognition techniques and applies a non-supervised clustering algorithm known as Possibilistic Fuzzy c-Means (PFCM). The aim is to find the regions corresponding to microcalcifications and distinguish them from the healthy tissue. Furthermore, to show its main advantages and disadvantages, it is compared with two algorithms of the same type: k-means and Fuzzy c-Means (FCM). It is also important to highlight that in this work, for the first time, sub-segmentation is used for microcalcification detection. Finally, a classifier based on an artificial neural network, a Multi-Layer Perceptron, is used. The purpose of this classifier is to discriminate, in a binary fashion, the patterns built from the grey-level intensities of the original image, distinguishing microcalcifications from healthy tissue.
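As a rough illustration of the pipeline described above (top-hat contrast enhancement followed by unsupervised clustering of pixel intensities), the following minimal sketch uses scikit-image and scikit-learn. PFCM is not available in mainstream Python libraries, so k-means, one of the baselines the thesis compares against, stands in for the proposed sub-segmentation algorithm; function names and parameters are illustrative.

```python
# Minimal sketch: top-hat enhancement, then clustering of pixel intensities.
import numpy as np
from skimage import io, morphology
from sklearn.cluster import KMeans

def enhance_and_cluster(roi_path, disk_radius=7, n_clusters=2):
    roi = io.imread(roi_path, as_gray=True)  # region of interest from a mammogram

    # White top-hat: keeps bright details smaller than the structuring
    # element, boosting microcalcification-to-tissue contrast.
    enhanced = morphology.white_tophat(roi, morphology.disk(disk_radius))

    # Cluster pixel intensities; the brightest cluster is taken as the
    # candidate microcalcification region (stand-in for PFCM sub-segmentation).
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        enhanced.reshape(-1, 1))
    labels = labels.reshape(enhanced.shape)
    brightest = max(range(n_clusters), key=lambda c: enhanced[labels == c].mean())
    return enhanced, labels == brightest
```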
Abstract:
INTRODUCTION: The risk of suffering cardiovascular disease and the rates of childhood obesity have been rising in recent years, impoverishing the health of the population. Barker's Theory relates the mother's state of health to fetal development, associating a poor physical condition and negative lifestyle habits of the pregnant woman with an increased risk of heart disease in childhood and adolescence, as well as predisposing the newborn to overweight and/or obesity in later life. On the other hand, studies on physical exercise during pregnancy report benefits for maternal and fetal health. One of the parameters most used to check fetal health is the fetal heart rate, through which the proper development of the autonomic nervous system can be verified. Observing this parameter in the presence of maternal exercise could reveal a chronic response of the fetal heart to maternal exercise, as a consequence of an adaptation and improvement in the functioning of the fetal autonomic nervous system. In this way its intrauterine cardiovascular health could improve, and this could carry over into later life, lowering the risk of cardiovascular disease in adulthood. OBJECTIVES: To determine the influence of a supervised physical exercise programme on the fetal heart rate (FHR) at rest and after maternal exercise, compared with sedentary pregnant women, by means of a specific protocol. To determine the influence of a physical exercise programme on the development of the fetal autonomic nervous system, as reflected in the FHR recovery time. MATERIAL AND METHODS: A multicentre randomized clinical trial was designed in which 81 pregnant women took part (CG=38, EG=43). The study was approved by the ethics committees of the participating hospitals. All the women were informed and signed a consent form for their participation in the study. The EG participants received an intervention based on a physical exercise programme carried out during gestation (weeks 12-36) with a frequency of three sessions per week. All the women followed an FHR measurement protocol between weeks 34 and 36 of gestation. This protocol consisted of two walking tests at different intensities (40% and 60% of the heart rate reserve). From this protocol the main study variables were obtained: FHR at rest, FHR post-exercise at 40% and 60% intensity, and FHR recovery time for both efforts. The material used for the protocol was a heart rate monitor to control the mother's heart rate and a wireless fetal monitor (fetal telemetry) to register the fetal heartbeat throughout the protocol. RESULTS: No statistically significant differences were found in resting FHR between groups (EG=140.88 beats/min vs CG=141.95 beats/min; p>0.05). Statistically significant differences were found in FHR recovery time between the fetuses of the two groups (EG=135.65 s vs CG=426.11 s for the 40% effort; p<0.001) (EG=180.26 s vs CG=565.61 s for the 60% effort; p<0.001). Statistically significant differences were found in post-exercise FHR at 40% (EG=139.93 beats/min vs CG=147.87 beats/min; p<0.01).
No statistically significant differences were found in post-exercise FHR at 60% (EG=143.74 beats/min vs CG=148.08 beats/min; p>0.05). CONCLUSION: The physical exercise programme carried out during gestation influenced the fetal heart of the EG fetuses with regard to FHR recovery time. The results suggest better functioning of the autonomic nervous system in fetuses of women who were active during pregnancy. ABSTRACT INTRODUCTION: The risk of suffering cardiovascular disease and the childhood obesity index have grown in recent years, worsening the health of the population. Barker's Theory relates maternal health to fetal development, associating a poor physical state and an unhealthy lifestyle in the pregnant woman with the risk of heart disease during childhood and adolescence, and with the newborn's predisposition to overweight and/or obesity in later life. On the other hand, research on physical exercise and pregnancy shows benefits for maternal and fetal health. One of the most studied parameters to check fetal health is its heart rate, which also reflects the correct development and working of the fetal autonomic nervous system. Observing this parameter during maternal exercise, a chronic response of the fetal heart could be found, owing to an adaptation and improvement in the working of the autonomic nervous system. Fetal cardiovascular health could therefore be enhanced during intrauterine life, and this might be maintained in later life, lowering the risk of cardiovascular disease in adulthood. OBJECTIVES: To determine the influence of a supervised physical activity programme on the fetal heart rate (FHR) at rest and after maternal exercise, relative to sedentary pregnant women, by means of an FHR assessment protocol. To determine the influence of a physical activity programme on the development of the autonomic nervous system, as reflected in FHR recovery time. MATERIAL AND METHOD: A multicentre randomized clinical trial was designed in which 81 pregnant women participated (CG=38, EG=43). The study was approved by the ethics committees of all the hospitals participating in the study. All the participants signed an informed consent for their participation. EG participants received an intervention based on a physical activity programme carried out during gestation (weeks 12-36) with a frequency of three days a week. All the participants were tested between weeks 34 and 36 of gestation with a specific FHR assessment protocol, consisting of two walking tests at two different intensities (40% and 60% of the heart rate reserve). From this protocol we obtained the main research variables: FHR at rest, FHR post-exercise at 40% and 60% intensity, and FHR recovery time for both walking tests. The material used to perform the protocol was a heart rate monitor to check the maternal heart rate and a wireless fetal monitor (telemetry) to register fetal beats during the whole protocol. RESULTS: There were no statistically significant differences in resting FHR between groups (EG=140.88 beats/min vs CG=141.95 beats/min; p>0.05). There were statistically significant differences in FHR recovery time in both walking tests between groups (EG=135.65 s vs CG=426.11 s for the test at 40% intensity; p<0.001) (EG=180.26 s vs CG=565.61 s for the test at 60% intensity; p<0.001). Statistically significant differences were found in post-exercise FHR at 40% intensity between groups (EG=139.93 beats/min vs CG=147.87 beats/min; p<0.01).
No statistically significant differences were found in post-exercise FHR at 60% intensity between groups (EG=143.74 beats/min vs CG=148.08 beats/min; p>0.05). CONCLUSIONS: The physical activity programme carried out during gestation influenced the fetal heart of the fetuses of EG mothers with regard to FHR recovery time. These results suggest a possible enhancement of autonomic nervous system function in the fetuses of mothers who were active during gestation.
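The 40% and 60% test intensities are fractions of the heart rate reserve. A minimal sketch of how such targets are typically computed follows, using the standard Karvonen method; the "220 - age" estimate of maximal heart rate is an assumption here, not necessarily the formula used in the study protocol.

```python
# Sketch of the exercise-intensity targets used in the walking tests.
# Intensities are fractions of heart-rate reserve (Karvonen method).
def target_heart_rate(age, resting_hr, intensity):
    max_hr = 220 - age                # assumed estimate of maximal heart rate
    reserve = max_hr - resting_hr     # heart-rate reserve
    return resting_hr + intensity * reserve

# e.g. a 30-year-old with resting HR 70 walking at the two test intensities
print(target_heart_rate(30, 70, 0.40))  # 118.0 beats/min
print(target_heart_rate(30, 70, 0.60))  # 142.0 beats/min
```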
Abstract:
The importance of vision-based systems for Sense-and-Avoid is increasing nowadays as remotely piloted and autonomous UAVs become part of the non-segregated airspace. The development and evaluation of these systems demand flight scenario images, which are expensive and risky to obtain. Currently, Augmented Reality techniques allow the compositing of real flight scenario images with 3D aircraft models to produce useful, realistic images for system development and benchmarking purposes at a much lower cost and risk. With the techniques presented in this paper, 3D aircraft models are first positioned in a simulated 3D scene with controlled illumination and rendering parameters. Realistic simulated images are then obtained using an image processing algorithm which fuses the images obtained from the 3D scene with images from real UAV flights, taking into account on-board camera vibrations. Since the intruder and camera poses are user-defined, ground truth data is available. These ground truth annotations make it possible to develop and quantitatively evaluate aircraft detection and tracking algorithms. This paper presents the software developed to create a public dataset of 24 videos together with their annotations and some tracking application results.
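A minimal sketch of the core compositing step, under the assumption that the renderer outputs an RGBA image of the aircraft: the rendering is alpha-blended onto a real flight frame at a user-defined position, which also yields the ground-truth bounding box for free. The illumination matching and camera vibration model described in the paper are omitted, and all names are illustrative.

```python
# Alpha-blend a rendered RGBA aircraft onto a real flight frame.
import numpy as np

def composite(frame, render_rgba, top, left):
    """frame: HxWx3 uint8 real image; render_rgba: hxwx4 uint8 rendering.
    Assumes the paste region lies fully inside the frame."""
    h, w = render_rgba.shape[:2]
    alpha = render_rgba[..., 3:4].astype(np.float32) / 255.0
    roi = frame[top:top + h, left:left + w].astype(np.float32)
    blended = alpha * render_rgba[..., :3] + (1.0 - alpha) * roi
    frame[top:top + h, left:left + w] = blended.astype(np.uint8)
    bbox = (left, top, left + w, top + h)  # ground-truth annotation for free
    return frame, bbox
```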
Abstract:
This document specifies important aspects of a Business Model to be carried out in order to justify the company's expectations of success, thereby making it possible to obtain external financing or capital partners who wish to contribute to achieving that success. This Business Model was developed in collaboration with the company Where Are Pets, formed by three young entrepreneurs, myself included, to determine the economic and financial viability of developing a mobile application for managing pets. It covers, among other points, the marketing strategies to follow, the study of the customers the application will target, and the structure of the capital needed to carry out the project. This Business Plan is intended to be a highly useful tool for the entrepreneur as well as for partners and potential investors. ABSTRACT This document sets out important aspects of a Business Model carried out in order to justify the company's expectations of success and, with this, to obtain external financing or financial partners who want to contribute to achieving that success. In collaboration with the company Where Are Pets, composed of three young entrepreneurs, myself included, we have developed this Business Model to determine the economic and financial viability of developing a mobile application for managing pets. Several points, such as marketing strategies, the study of potential customers and the structure of the capital necessary to carry out the project, among others, have been covered. This Final Project is intended to be a useful tool for the entrepreneur, the partners and the potential investors.
Abstract:
The different theoretical models for storm wave characterization focus on determining the significant wave height of the storm peak, the mean period and, usually assuming a triangular storm shape, the storm duration. In some cases, the main direction is also considered. Nevertheless, the definition of the whole storm history, including the variation of the main random variables during the storm cycle, is not taken into consideration. The representativeness of the proposed storm models, analysed in a recent study using an empirical time-dependent maximum energy flux function, shows that the behaviour of the different storm models is extremely dependent on the climatic characteristics of the project area. Moreover, there are no theoretical models able to adequately reproduce the storm history evolution of sea states characterized by important swell components. To overcome this shortcoming, several theoretical storm shapes are investigated, building on the bases of the three best theoretical storm models: the Equivalent Magnitude Storm (EMS), the Equivalent Number of Waves Storm (ENWS) and the Equivalent Duration Storm (EDS) models. To analyse the representativeness of the new storm shapes, the aforementioned maximum energy flux formulation and a wave overtopping discharge function for a structure are used. With the empirical energy flux formulation, the correctness of the different approaches is assessed through the progressive loss of hydraulic stability of the main armour layer caused by real and theoretical storms. For the overtopping equation, the total discharged volume is considered. In all cases, the results obtained highlight the greater representativeness of the triangular EMS model for sea waves and of the trapezoidal (non-parallel-sided) EMS model for waves with a higher degree of wave development. Taking into account the increase in offshore and shallow-water wind turbines, maritime transport and deep vertical breakwaters, the maximum wave height of the whole storm history, and that corresponding to each sea state in the storm cycle's evolution, is also considered. The procedure uses the information usually available for extreme wave characterization. Extrapolations of the maximum wave height of the selected storms have also been considered. The fourth-order statistics of the sea states belonging to the real and theoretical storms have been estimated to complete the statistical analysis of individual wave heights.
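For reference, a triangular Equivalent Magnitude Storm shape and the deep-water wave energy flux on which such analyses typically rely can be written as follows; these are standard textbook forms, and the study may use calibrated variants:

```latex
% Triangular EMS: linear growth and decay of the significant wave height
% H_s(t) over a storm of duration D (standard form; variants possible).
H_s(t) =
\begin{cases}
H_{s,0} + \dfrac{2t}{D}\left(H_{s,\mathrm{max}} - H_{s,0}\right), & 0 \le t \le D/2,\\[4pt]
H_{s,\mathrm{max}} - \dfrac{2t - D}{D}\left(H_{s,\mathrm{max}} - H_{s,0}\right), & D/2 < t \le D.
\end{cases}
% Energy flux per unit crest width of each sea state (deep water):
F = E\,c_g = \frac{1}{16}\rho g H_s^2 \cdot \frac{gT}{4\pi} = \frac{\rho g^2}{64\pi} H_s^2 T .
```

Here H_{s,0} is the threshold significant wave height defining the storm, H_{s,max} its peak value, T the mean period, rho the water density and g the gravitational acceleration.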
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
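As an illustration of the combination idea sketched above, the following toy example resolves disagreements between several POS taggers annotating the same tokens for a common level by majority vote; the tagger outputs are hypothetical stand-ins, not the output of any real tool.

```python
# Combine the annotations of several POS taggers by majority vote.
from collections import Counter

def combine_annotations(*tag_sequences):
    """Each argument is one tagger's tags for the same token sequence."""
    combined = []
    for tags in zip(*tag_sequences):
        tag, _ = Counter(tags).most_common(1)[0]  # majority (ties: first seen)
        combined.append(tag)
    return combined

# Three hypothetical taggers labelling "Time flies like an arrow"
t1 = ["NOUN", "VERB", "ADP", "DET", "NOUN"]
t2 = ["NOUN", "NOUN", "ADP", "DET", "NOUN"]
t3 = ["NOUN", "VERB", "VERB", "DET", "NOUN"]
print(combine_annotations(t1, t2, t3))
# ['NOUN', 'VERB', 'ADP', 'DET', 'NOUN']
```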
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Idea Management Systems are web applications that implement the notion of open innovation through crowdsourcing. Typically, organizations use these kinds of systems to connect to large communities in order to gather ideas for the improvement of products or services. Originating from simple suggestion boxes, Idea Management Systems have advanced beyond collecting ideas and aspire to be knowledge management solutions capable of selecting the best ideas via collaborative as well as expert assessment methods. In practice, however, contemporary systems still face a number of problems, usually related to information overflow and to recognizing submissions of questionable quality within a reasonable allocation of time and effort. This thesis focuses on the problem area of idea assessment and contributes a number of solutions for filtering, comparing and evaluating the ideas submitted to an Idea Management System. With respect to Idea Management System interoperability, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the boundaries of a single system to compare and assess innovation in an organization-wide or market-wide context. Furthermore, based on the ontology, the thesis builds a number of solutions for improving idea assessment via community opinion analysis (MARL), annotation of idea characteristics (Gi2MO Types) and the study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management, successfully recognizing the differentiation between communities; opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new relationship types between ideas and their impact on idea clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and for mature open-source software alternatives to the widely available commercial suites. From the academic point of view, the project delivers resources for undertaking experiments in the Idea Management Systems area and has become a forum that has gathered a number of academic and industrial partners. SUMMARY Idea Management Systems are web applications that implement the concept of open innovation with crowdsourcing techniques. Typically, organizations use this type of system to connect with large communities and thus gather ideas on how to improve products or services. Idea Management Systems have advanced beyond simply collecting ideas in suggestion boxes and now aspire to be knowledge management solutions capable of selecting the best ideas by means of collaborative techniques as well as evaluation methods carried out by experts. In practice, however, contemporary systems still face a series of problems generally related to information overload and to recognizing ideas of questionable quality within a reasonable allocation of time and effort. This thesis focuses on the area of idea assessment and contributes a series of solutions for filtering, comparing and evaluating the ideas published in an Idea Management System.
With respect to the interoperability of Idea Management Systems, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the limits of a single system in order to compare and evaluate innovation in a broad context within any organization or market. Moreover, based on the ontology, the thesis develops a series of solutions for improving idea assessment through: analysis of community opinions (MARL), the annotation of idea characteristics (Gi2MO Types) and the study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management Systems in order to recognize the differences between communities; community opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new types of relationships between ideas and their impact on their clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and for mature open-source tools as alternatives to other commercial systems. From the academic point of view, the project has provided resources for experiments in the area of Idea Management Systems and has become a forum that has brought together a number of academic and industrial partners.
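As a purely illustrative sketch of how community opinion metrics can feed idea assessment, the following example aggregates per-comment sentiment scores into an idea ranking; the scoring scheme is a stand-in and does not reproduce the actual MARL metrics.

```python
# Aggregate per-comment sentiment scores into a ranking of ideas.
from statistics import mean

def rank_ideas(opinions):
    """opinions: {idea_id: [sentiment scores in [-1, 1] from comments]}"""
    scored = {idea: (mean(scores), len(scores))
              for idea, scores in opinions.items() if scores}
    # Rank by average sentiment, breaking ties by amount of feedback.
    return sorted(scored, key=lambda i: scored[i], reverse=True)

print(rank_ideas({
    "idea-1": [0.8, 0.6, 0.9],   # widely praised
    "idea-2": [0.1, -0.4],       # mixed reception
    "idea-3": [0.7],             # positive but little feedback
}))
# ['idea-1', 'idea-3', 'idea-2']
```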
Abstract:
Semantic Sensor Web infrastructures use ontology-based models to represent the data that they manage; however, up to now, these ontological models do not allow representing all the characteristics of distributed, heterogeneous, and web-accessible sensor data. This paper describes a core ontological model for Semantic Sensor Web infrastructures that covers these characteristics and that has been built with a focus on reusability. This ontological model is composed of different modules that deal, on the one hand, with infrastructure data and, on the other hand, with data from a specific domain, that is, the coastal flood emergency planning domain. The paper also presents a set of guidelines, followed during the ontological model development, to satisfy a common set of requirements related to modelling domain-specific features of interest and properties. In addition, the paper includes the results obtained after an exhaustive evaluation of the developed ontologies along different aspects (i.e., vocabulary, syntax, structure, semantics, representation, and context).
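A minimal sketch of what an ontology-based representation of one sensor observation might look like, built with rdflib; the namespace, class and property names below are illustrative placeholders, not the actual modules developed in the paper.

```python
# Represent one sensor observation with an ontology-based model (rdflib).
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/coastal#")  # hypothetical module IRI
g = Graph()
g.bind("ex", EX)

obs = EX["observation/42"]
g.add((obs, RDF.type, EX.Observation))
g.add((obs, EX.observedProperty, EX.WaveHeight))    # domain-specific property
g.add((obs, EX.madeBySensor, EX["buoy/coast-01"]))  # infrastructure datum
g.add((obs, EX.hasValue, Literal("2.3", datatype=XSD.float)))

print(g.serialize(format="turtle"))
```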
Abstract:
Problem-based learning has been successfully applied for the last three decades in a wide range of learning environments. This educational approach consists of posing problems to students so that they can learn about a particular domain by developing solutions to those problems. When this is applied to knowledge modelling, and in particular to modelling based on Qualitative Reasoning, the solutions to the problems become models that represent the behaviour of the proposed dynamic system. The student's task in this case is therefore to bring their initial model (their first attempt at representing the system) closer to the target models that provide solutions to the problem, while acquiring domain knowledge in the process. In this thesis we propose KaiSem, a method that uses semantic technologies and resources to guide students through the modelling process, helping them to acquire as much knowledge as possible without the direct supervision of a teacher. Since students and teachers create their models independently, these models will have different terminologies and structures, giving rise to a highly heterogeneous set of models. To deal with such heterogeneity, we provide a semantic grounding technique to automatically determine links between the free terminology used by students and some vocabularies available on the Web of Data, thereby facilitating the interoperability and later alignment of the models. Finally, we provide a semantic feedback technique to compare the aligned models and generate feedback based on the possible discrepancies between them. This feedback is communicated in the form of individualized suggestions that the student can use to bring their model closer to the target models in terms of terminology and structure. ABSTRACT Problem-based learning has been successfully applied over the last three decades to a diverse range of learning environments. This educational approach consists of posing problems to learners, so they can learn about a particular domain by developing solutions to them. When applied to conceptual modeling, and particularly to Qualitative Reasoning, the solutions to problems are models that represent the behavior of a dynamic system. Therefore, the learner's task is to move from their initial model, as their first attempt to represent the system, to the target models that provide solutions to that problem, while acquiring domain knowledge in the process. In this thesis we propose KaiSem, a method for using semantic technologies and resources to scaffold the modeling process, helping learners to acquire as much domain knowledge as possible without direct supervision from the teacher. Since learners and experts create their models independently, these will have different terminologies and structures, giving rise to a highly heterogeneous pool of models. To deal with such heterogeneity, we provide a semantic grounding technique to automatically determine links between the unrestricted terminology used by learners and some online vocabularies of the Web of Data, thus facilitating the interoperability and later alignment of the models. Lastly, we provide a semantic-based feedback technique to compare the aligned models and generate feedback based on the possible discrepancies.
This feedback is communicated in the form of individualized suggestions, which can be used by the learner to bring their model closer in terminology and structure to the target models.
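A minimal sketch of the semantic grounding step under simplifying assumptions: learner terms are linked to candidate vocabulary labels by plain string similarity, whereas a real implementation would query vocabularies published on the Web of Data; the vocabulary, labels and threshold below are illustrative.

```python
# Link a learner's free terminology to candidate vocabulary terms.
from difflib import SequenceMatcher

def ground_terms(learner_terms, vocabulary, threshold=0.8):
    links = {}
    for term in learner_terms:
        # Pick the most similar vocabulary label (case-insensitive).
        best = max(vocabulary,
                   key=lambda v: SequenceMatcher(None, term.lower(),
                                                 v.lower()).ratio())
        score = SequenceMatcher(None, term.lower(), best.lower()).ratio()
        if score >= threshold:
            links[term] = (best, round(score, 2))
    return links

print(ground_terms(["water-level", "inflow rate"],
                   ["WaterLevel", "InflowRate", "Container"]))
# {'water-level': ('WaterLevel', 0.95), 'inflow rate': ('InflowRate', 0.95)}
```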