27 results for Knowledge-based information gathering, ontology, world knowledge base, user background knowledge, local instance repository, user information needs

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

Presentation at the EUON 2014 Workshop

Relevance: 100.00%

Abstract:

Ontology evaluation, which includes ontology diagnosis and repair, is a complex activity that should be carried out in every ontology development project, because it checks the technical quality of the ontology. However, there is an important gap between the methodological work on ontology evaluation and the tools that support this activity. More precisely, few approaches provide clear guidance on how to diagnose ontologies and how to repair them accordingly. This thesis aims to advance the state of the art of ontology evaluation, specifically in the ontology diagnosis activity. The main goals of this thesis are (a) to help ontology engineers diagnose their ontologies in order to find common pitfalls and (b) to lessen the effort required from them by providing suitable technological support. This thesis presents the following main contributions: • A catalogue describing 41 pitfalls that ontology developers might introduce into their ontologies. • A quality model for ontology diagnosis that aligns the pitfall catalogue with existing quality models for semantic technologies. • The design and implementation of 48 methods for detecting 33 of the 41 pitfalls defined in the catalogue. • A system called OOPS! (OntOlogy Pitfall Scanner!) that allows ontology engineers to (semi)automatically diagnose their ontologies. According to the feedback gathered and the satisfaction tests carried out, the approach developed and presented in this thesis effectively helps users to increase the quality of their ontologies. At the time of writing, OOPS! has been broadly accepted by a large number of users worldwide and has been used around 3000 times from 60 different countries. OOPS! is integrated with third-party software and is locally installed in private enterprises, where it is used both for ontology development activities and in training courses.
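To make the pitfall-detection idea concrete, the following is a minimal sketch of two automated checks loosely inspired by common pitfalls in catalogues like the one above (elements without annotations; object properties without a declared domain or range). The triple encoding, names and toy ontology are illustrative assumptions, not OOPS!'s actual implementation, which works on full OWL ontologies:

```python
# Toy pitfall scanner over a simple (subject, predicate, object) triple list.
# Illustrative only; real diagnosis tools operate on parsed OWL models.
def scan(triples):
    subjects = {s for s, _, _ in triples}
    issues = []
    props = {s for s, p, o in triples if (p, o) == ("rdf:type", "owl:ObjectProperty")}
    annotated = {s for s, p, _ in triples if p in ("rdfs:label", "rdfs:comment")}
    for term in sorted(subjects):
        if term not in annotated:
            issues.append((term, "missing annotations (no label/comment)"))
    for prop in sorted(props):
        preds = {p for s, p, _ in triples if s == prop}
        if "rdfs:domain" not in preds or "rdfs:range" not in preds:
            issues.append((prop, "missing domain or range"))
    return issues

ontology = [
    ("ex:Person", "rdf:type", "owl:Class"),
    ("ex:Person", "rdfs:label", "Person"),
    ("ex:hasPet", "rdf:type", "owl:ObjectProperty"),
    ("ex:hasPet", "rdfs:domain", "ex:Person"),  # note: no rdfs:range declared
]
for term, issue in scan(ontology):
    print(term, "->", issue)
```

Running this flags `ex:hasPet` twice: once for lacking annotations and once for lacking a range, mirroring the kind of report a (semi)automatic diagnosis tool produces.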

Relevance: 100.00%

Abstract:

A more natural, intuitive, user-friendly, and less intrusive Human–Computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses using a binary Support Vector Machine classifier and Local Binary Patterns as feature vectors. These detections are employed as input to a tracker to generate a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories, and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform the gesture recognition. The VS-LBP is a novel video descriptor that constitutes one of the most important contributions of the paper; it provides much richer spatio-temporal information than existing approaches in the state of the art at a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
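As an aside on the feature extraction named above, a basic (non-volumetric) Local Binary Pattern encodes each interior pixel by thresholding its 8 neighbours against it; histograms of such codes are the kind of feature vector an SVM detector can consume. This is a generic LBP sketch, not the paper's VS-LBP descriptor:

```python
# Minimal Local Binary Pattern: each interior pixel becomes an 8-bit code,
# one bit per neighbour (1 if neighbour >= centre), walking clockwise.
def lbp_codes(img):
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            codes.append(code)
    return codes

patch = [
    [9, 9, 9],
    [9, 5, 1],
    [1, 1, 1],
]
print(lbp_codes(patch))  # one interior pixel -> [135] (bits 0,1,2,7 set)
```

In a real detector the codes over a window are histogrammed into a fixed-length vector before classification.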

Relevance: 100.00%

Abstract:

Although the aim of empirical software engineering is to provide evidence for selecting the appropriate technology, this work appears to receive little recognition in industry. Results from empirical research only rarely seem to find their way to company decision makers. If information relevant to software managers is provided in reports on experiments, such reports can serve as a source of information when managers face decisions about the selection of software engineering technologies. To bridge this communication gap between researchers and professionals, we propose characterizing the information needs of software managers in order to show empirical software engineering researchers which information is relevant for decision-making, and thus enable them to make this information available. We empirically investigated decision makers' information needs to identify which information they need to judge the appropriateness and impact of a software technology, and we empirically developed a model that characterizes these needs. To ensure that researchers provide relevant information when reporting results from experiments, we extended existing reporting guidelines accordingly. We performed an experiment to evaluate our model with regard to its effectiveness. Software managers who read an experiment report structured according to the proposed model judged the technology's appropriateness significantly better than those reading a report about the same experiment that did not explicitly address their information needs. Our research shows that information regarding a technology, the context in which it is supposed to work, and, most importantly, the impact of this technology on development costs and schedule as well as on product quality is crucial for decision makers.

Relevance: 100.00%

Abstract:

This paper introduces a semantic language developed to be used in a semantic analyzer based on linguistic and world knowledge. Linguistic knowledge is provided by a Combinatorial Dictionary and several sets of rules. Extra-linguistic information is stored in an ontology. The meaning of the text is represented by means of a series of RDF-type triples of the form predicate (subject, object). The semantic analyzer is one of the options of the multifunctional ETAP-3 linguistic processor, and can be used for Information Extraction and Question Answering. We describe the semantic representation of expressions that assess the number of objects involved and/or give a quantitative evaluation of different types of attributes. We focus on the following aspects: 1) parametric and non-parametric attributes; 2) gradable and non-gradable attributes; 3) ontological representation of different classes of attributes; 4) absolute and relative quantitative assessment; 5) punctual and interval quantitative assessment; 6) intervals with precise and fuzzy boundaries.
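The predicate (subject, object) triple representation can be sketched with a tiny in-memory triple list and a pattern query, roughly what a downstream Information Extraction step would do. The predicate names and the example sentence decomposition are made-up illustrations, not ETAP-3 output:

```python
# Triples a semantic analyzer might emit for "John bought three books",
# in the predicate(subject, object) shape described above.
triples = [
    ("agent",    "buy01", "John"),
    ("object",   "buy01", "book"),
    ("quantity", "book",  "3"),
]

def query(pred=None, subj=None, obj=None):
    """Return all triples matching the given fields (None = wildcard)."""
    return [t for t in triples
            if (pred is None or t[0] == pred)
            and (subj is None or t[1] == subj)
            and (obj  is None or t[2] == obj)]

print(query(subj="buy01"))   # everything known about the buying event
print(query(pred="quantity"))  # quantitative assessments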

Relevance: 100.00%

Abstract:

This paper argues for the utility of advanced knowledge-based techniques in developing web-based applications that help consumers find products in e-commerce marketplaces. In particular, we describe a model-based approach to developing a shopping agent that dynamically configures a product according to the needs and preferences of customers. Finally, the paper summarizes the advantages provided by this approach.
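A heavily simplified stand-in for such a shopping agent is a constraint filter over a catalogue: the customer states needs, the agent keeps only the products that satisfy them and orders the survivors. The catalogue, attribute names and ranking rule below are invented for illustration; real model-based configuration reasons over component models, not flat records:

```python
# Toy shopping agent: filter a catalogue against customer needs.
catalogue = [
    {"name": "cam-basic", "zoom": 3,  "price": 120},
    {"name": "cam-pro",   "zoom": 10, "price": 480},
    {"name": "cam-mid",   "zoom": 5,  "price": 260},
]

def configure(needs):
    # keep products satisfying every stated constraint, cheapest first
    hits = [p for p in catalogue
            if p["zoom"] >= needs.get("min_zoom", 0)
            and p["price"] <= needs.get("budget", float("inf"))]
    return sorted(hits, key=lambda p: p["price"])

print([p["name"] for p in configure({"min_zoom": 4, "budget": 300})])  # ['cam-mid']
```

With no constraints, the agent simply returns the whole catalogue ranked by price.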

Relevance: 100.00%

Abstract:

When studying acoustic phenomena, knowledge of the technical characteristics of the elements involved is needed. Microphones are an important part of this: their characteristics are essential for obtaining and comparing data. This work presents a detailed study of one of the characteristics that affect them, the lifetime characteristic, focusing on one type of microphone, the electret. The methodology followed is based on gathering information from tests at different temperatures in order to characterize the microphones according to the existing life distributions. Regarding the contents, the work begins with a historical overview: origin, general characteristics and evolution over the years. It then gives a theoretical background on the different characteristics that affect these microphones, previous work, and an explanation of the different distributions and ways of computing lifetime. Next, the material used and its characteristics are described in order to explain how the study was carried out. To that end:
- Three tests were run at different temperatures (140º, 125º and 110º) inside a thermal chamber.
- Each test involved ten microphones.
- The number of hours inside the thermal chamber varied with the temperature the microphones were subjected to in each test.
- After each hour, the sound pressure level at the input of each microphone was measured, in order to compare the microphones over time inside the thermal chamber.
- Finally, a reliability study was carried out to obtain the lifetime.
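The reliability study mentioned above fits failure times to a life distribution. A common textbook recipe, sketched here with made-up failure times (not the study's data), is a Weibull fit via median ranks and least squares on the linearized form ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta):

```python
import math

# Weibull fit from a sorted list of failure times (hours), using
# Bernard's median-rank approximation and a least-squares line fit.
def weibull_fit(times):
    times = sorted(times)
    n = len(times)
    xs, ys = [], []
    for i, t in enumerate(times, start=1):
        f = (i - 0.3) / (n + 0.4)               # median-rank estimate of F(t)
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1 - f)))
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)       # slope = shape parameter
    eta = math.exp(mx - my / beta)              # scale parameter (hours)
    return beta, eta

beta, eta = weibull_fit([120, 180, 210, 260, 330])  # hypothetical failure hours
print(round(beta, 2), round(eta, 1))
```

A shape parameter above 1 indicates wear-out failures; the scale eta is the characteristic life at which about 63% of units have failed.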

Relevance: 100.00%

Abstract:

Idea Management Systems are web applications that implement the notion of open innovation through crowdsourcing. Typically, organizations use these kinds of systems to connect with large communities in order to gather ideas for improving products or services. Originating from simple suggestion boxes, Idea Management Systems have advanced beyond collecting ideas and aspire to be knowledge management solutions capable of selecting the best ideas via collaborative as well as expert assessment methods. In practice, however, contemporary systems still face a number of problems, usually related to information overflow and to recognizing submissions of questionable quality within a reasonable allocation of time and effort. This thesis focuses on the idea assessment problem area and contributes a number of solutions for filtering, comparing and evaluating ideas submitted to an Idea Management System. With respect to Idea Management System interoperability, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the boundaries of a single system and to compare and assess innovation in an organization-wide or market-wide context. Furthermore, based on the ontology, the thesis builds a number of solutions for improving idea assessment via community opinion analysis (MARL), annotation of idea characteristics (Gi2MO Types) and the study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management, successfully recognizing the differentiation between communities; opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new relationship types between ideas and their impact on idea clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and for mature open-source alternatives to the widely available commercial suites. From the academic point of view, the project delivers resources for undertaking experiments in the Idea Management Systems area and has become a forum that gathers a number of academic and industrial partners.
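One way to picture opinion metrics as an idea-assessment tool is a smoothed positive-comment ratio per idea, so that an idea with one enthusiastic comment does not outrank a well-discussed one. The scoring rule, idea names and counts below are invented illustrations, not the MARL method itself:

```python
# Toy idea-assessment metric: rank ideas by the Laplace-smoothed share
# of positive community comments (pos, neg) about each one.
def opinion_score(pos, neg, prior=1.0):
    return (pos + prior) / (pos + neg + 2 * prior)

ideas = {
    "dark-mode":   (18, 2),   # widely liked
    "new-logo":    (1, 0),    # barely discussed
    "offline-use": (30, 25),  # controversial
}
ranked = sorted(ideas, key=lambda i: opinion_score(*ideas[i]), reverse=True)
print(ranked)  # ['dark-mode', 'new-logo', 'offline-use']
```

Without the smoothing prior, "new-logo" would score a perfect 1.0 on a single comment and jump to the top, which is exactly the sparse-evidence problem the smoothing dampens.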

Relevance: 100.00%

Abstract:

Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data into rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include: • The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions. • The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings to define relationships between streaming data models and ontology concepts. Concerning the sensor metadata of such streaming data sources, we have investigated how raw measurements can be used to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are: • A representation of sensor data time series that captures gradient information useful for characterizing types of sensor data.
• A method for classifying sensor data time series and determining the type of data, using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
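The gradient-based characterization can be illustrated with two simple features of a series' first differences and a threshold rule guessing the kind of signal. Thresholds, labels and the example series are illustrative assumptions, not the thesis's actual data-mining method:

```python
# Characterize a sensor time series by its gradients, then guess its type.
def gradient_features(series):
    grads = [b - a for a, b in zip(series, series[1:])]
    mean_abs = sum(abs(g) for g in grads) / len(grads)
    # count direction reversals (rising -> falling or vice versa)
    sign_changes = sum(1 for g1, g2 in zip(grads, grads[1:]) if g1 * g2 < 0)
    return {"mean_abs_grad": mean_abs, "sign_changes": sign_changes}

def classify(series):
    f = gradient_features(series)
    if f["mean_abs_grad"] < 0.2:
        return "slow-varying (e.g. air temperature)"
    if f["sign_changes"] > len(series) // 2:
        return "oscillating (e.g. wave height)"
    return "fast-trending"

temperature = [20.0, 20.1, 20.1, 20.2, 20.3, 20.3, 20.4]
print(classify(temperature))  # slow-varying (e.g. air temperature)
```

Such extracted labels are the kind of semantic sensor metadata that can then be attached to a stream's ontological description.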

Relevance: 100.00%

Abstract:

The important technological advances of recent years have resulted in a strong demand for new and efficient computer vision applications. On the one hand, the increasing use of video editing software has created a need for faster and more efficient editing tools that, as a first step, perform a temporal segmentation into shots. On the other hand, the number of electronic devices with integrated cameras has grown enormously. These devices require new, fast, and efficient computer vision applications that include moving object detection strategies. In this dissertation, we propose a temporal segmentation strategy and several moving object detection strategies suitable for the latest generation of computer vision applications, which require both low computational cost and high-quality results. First, a novel real-time, high-quality shot detection strategy is proposed. While abrupt transitions are detected through a very fast pixel-based analysis, gradual transitions are obtained from an efficient edge-based analysis. Both analyses are reinforced with a motion analysis that allows false detections to be identified and discarded. This analysis is carried out exclusively over a reduced set of candidate transitions, thus containing the computational requirements. In addition, a moving object detection strategy based on the popular Mixture of Gaussians method is proposed. This strategy, taking into account the recent history of each image pixel, dynamically adapts the number of Gaussians required to model its variations. As a result, we significantly improve computational efficiency with respect to other similar methods and, additionally, reduce the influence of the chosen parameters on the results.
Alternatively, in order to improve the quality of the results in complex scenarios containing dynamic backgrounds, we propose different non-parametric moving object detection strategies that model both background and foreground. To obtain high-quality results regardless of the characteristics of the analyzed sequence, we dynamically estimate the most adequate bandwidth matrices for the kernels used in background and foreground modeling. Moreover, the application of a particle filter makes it possible to update the spatial information and provides a priori knowledge about the areas to analyze in the following images, enabling an important reduction in the computational requirements and improving the segmentation results. Additionally, we propose an innovative combination of chromaticity and gradients that reduces the influence of shadows and reflections on the detections.
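The per-pixel background-modeling idea can be sketched in its simplest form: a single running Gaussian per pixel (the full method keeps a mixture), where a pixel is foreground when it deviates more than k standard deviations from its model, and only background pixels update the model. Values for alpha, k and the pixel sequence are illustrative:

```python
# Minimal per-pixel background model in the spirit of Mixture of Gaussians,
# reduced to one running Gaussian: model = (mean, variance) of grey levels.
def update(model, value, alpha=0.05, k=2.5):
    mean, var = model
    fg = abs(value - mean) > k * (var ** 0.5)   # outlier -> foreground
    if not fg:                                   # adapt only to background
        mean = (1 - alpha) * mean + alpha * value
        var = (1 - alpha) * var + alpha * (value - mean) ** 2
    return (mean, var), fg

model = (100.0, 16.0)               # background grey level ~ N(100, 4^2)
for v in [101, 99, 102, 180]:       # 180 = a moving object enters the pixel
    model, fg = update(model, v)
    print(v, "foreground" if fg else "background")
```

The last sample (180) exceeds the 2.5-sigma band and is flagged as foreground without corrupting the background model; the mixture variant adds more Gaussians per pixel so multimodal backgrounds (e.g. swaying leaves) are also absorbed.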

Relevance: 100.00%

Abstract:

We present two approaches to cluster dialogue-based information obtained by the speech understanding module and the dialogue manager of a spoken dialogue system. The purpose is to estimate a language model related to each cluster, and use them to dynamically modify the model of the speech recognizer at each dialogue turn. In the first approach we build the cluster tree using local decisions based on a Maximum Normalized Mutual Information criterion. In the second one we take global decisions, based on the optimization of the global perplexity of the combination of the cluster-related LMs. Our experiments show a relative reduction of the word error rate of 15.17%, which helps to improve the performance of the understanding and the dialogue manager modules.
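The perplexity criterion driving the global clustering decisions can be illustrated for unigram models: perplexity is the exponential of the negative average log-probability per word, and the cluster-specific LM with the lower perplexity on a dialogue turn is the better fit. The two toy LMs and the utterance are invented for illustration:

```python
import math

# Perplexity of a word sequence under a unigram language model.
def perplexity(model, words):
    logp = sum(math.log(model[w]) for w in words)
    return math.exp(-logp / len(words))

lm_weather = {"rain": 0.5, "tomorrow": 0.3, "flight": 0.2}
lm_booking = {"rain": 0.1, "tomorrow": 0.3, "flight": 0.6}
utterance = ["flight", "tomorrow"]

# the booking cluster's LM fits this turn better (lower perplexity),
# so the recognizer's model would be biased towards that cluster
print(perplexity(lm_weather, utterance))  # ~4.08
print(perplexity(lm_booking, utterance))  # ~2.36
```

Selecting (or interpolating towards) the lowest-perplexity cluster LM at each turn is what lets the recognizer adapt dynamically to the dialogue state.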


Relevance: 100.00%

Abstract:

Knowledge management (KM) is based on the capture, filtering, processing and analysis of raw data which, through such refinement, can be turned into knowledge or wisdom. In this final-year project (PFC) these practices take place in a WSN (Wireless Sensor Network) composed of sophisticated devices commonly known as "motes", whose main characteristic is their low capacity in terms of memory, battery and autonomy. The primary objective of this project has been to combine a WSN with Knowledge Management and to show that large amounts of information can be processed with such low capacities if the processes are properly distributed. First, basic concepts about WSNs and the main elements of such networks are introduced. After presenting the communications architecture model, Knowledge Management is described theoretically, followed by the interpretation, drawn from several bibliographic references, that was used to carry out the implementation of the project. The next step is to describe, point by point, all the components of the simulator: libraries, operation, and the remaining questions of configuration and tuning. As an application scenario, a basic wireless sensor network is proposed whose topology and location are completely configurable. A network-level configuration based on the 6LowPAN protocol is performed, with the possibility of simplifying it. Data are processed according to a pyramidal Knowledge Management model adaptable to the user's needs. The hardware elements have a greater or lesser energy dependence depending on their role and activity in the network. Through the various options provided by the implemented graphical interface and the result documents that are generated, a detailed post-hoc study of the simulation can be carried out to check whether the stated expectations are met.
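The pyramidal data-to-knowledge flow described above can be sketched in miniature for one mote's readings: capture raw values, filter out glitches, aggregate into information, and derive an actionable fact. The glitch sentinel, plausibility range and alert threshold are invented illustrations, not the project's actual configuration:

```python
# Pyramidal KM flow on raw mote readings: data -> information -> knowledge.
raw = [21.5, 21.7, -999.0, 21.6, 35.2, 21.8]       # -999.0 = sensor glitch

filtered = [r for r in raw if -40.0 <= r <= 60.0]  # drop impossible values
average = sum(filtered) / len(filtered)            # information: mean temperature
alert = any(r > 30.0 for r in filtered)            # knowledge: actionable fact

print(round(average, 2), "alert!" if alert else "normal")  # 24.36 alert!
```

Distributing these steps, filtering on the mote, aggregating at a cluster head, deciding at the sink, is what makes the processing feasible on such constrained hardware.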

Relevance: 100.00%

Abstract:

The world of telecommunications evolves at high speed, in line with users' needs. The growing number of services delivered over the connections we currently use to access the Internet (e.g. IPTV), with their high bandwidth requirements, together with newly born services (e.g. OTT), contributes both to users' need for higher connection speeds and to the implementation of new models of quality of service. Today's broadband data networks (fixed and mobile) must therefore undergo a profound transformation to solve traffic problems and needs efficiently, absorbing the progressive increase in bandwidth while leaving the door open to future improvements. To this end, operators will draw on valuable traffic and user information that leads them to make the best decisions, so that the transformations carried out meet exactly what users demand in the most efficient way possible. From these premises arose the ideas set out as the objectives of this final-year project (PFC): to narrate the deployment of broadband in Spain from its origins to the present, approaching its growth from a socio-technological point of view; continuing from the previous point, to identify the social and technological tools from which a forecast of traffic on operators' networks in the near future can be made; to show the characteristics of broadband users and of the data traffic they generate, which are critical for operators in adequately planning their networks; and to describe the procedures operators follow so that, once the characteristics of their users are known, the requirements they demand can be met: QoS and the key performance indicators (KPIs). The level of detail is intended to suit an audience without deep knowledge of the subject and, except for some quite specific parts, this work can be considered open to the general public.

Relevance: 100.00%

Abstract:

This paper presents a preliminary version of a support system in the air transport passenger domain. The system relies on an underlying ontological structure representing a normative framework to facilitate the provision of contextualized, relevant legal information. This information includes the passenger's rights, and it enhances self-litigation and the decision-making process of passengers. Our contribution is based on an attempt to render user-centric legal information grounded on case scenarios of the most pronounced incidents related to consumer complaints in the EU. A number of advantages with respect to current state-of-the-art services are discussed, and a case study illustrates a possible technological application.
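A drastically simplified picture of such contextualized legal lookup is a table mapping an incident scenario to the applicable rule and remedy. The distance bands and amounts follow the well-known EU Regulation 261/2004 compensation tiers, but the table and matching logic here are a toy illustration, not the paper's ontology-based system:

```python
# Toy contextual lookup of air passenger rights: (incident, distance band)
# -> (applicable rule, maximum compensation in EUR). Simplified table.
RULES = {
    ("cancellation", "short"):    ("EC 261/2004 Art. 7", 250),
    ("cancellation", "medium"):   ("EC 261/2004 Art. 7", 400),
    ("cancellation", "long"):     ("EC 261/2004 Art. 7", 600),
    ("denied_boarding", "short"): ("EC 261/2004 Art. 4", 250),
}

def advise(incident, distance_km):
    band = ("short" if distance_km <= 1500
            else "medium" if distance_km <= 3500
            else "long")
    rule, amount = RULES.get((incident, band), ("no matching rule", 0))
    return f"{rule}: up to EUR {amount}"

print(advise("cancellation", 2000))  # EC 261/2004 Art. 7: up to EUR 400
```

An ontology-backed version replaces the flat table with normative concepts and relations, so the same query can also surface exceptions (e.g. extraordinary circumstances) and related rights such as care and rerouting.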