105 results for RDF, Named Graphs, Provenance, Semantic Web, Semantics
Abstract:
Sentiment and Emotion Analysis strongly depend on quality language resources, especially sentiment dictionaries. These resources are usually scattered, heterogeneous and limited to specific domains of application by simple algorithms. The EUROSENTIMENT project addresses these issues by 1) developing a common language resource representation model for sentiment analysis, and APIs for sentiment analysis services based on established Linked Data formats (lemon, Marl, NIF and ONYX), and 2) creating a Language Resource Pool (a.k.a. LRP) that makes available to the community existing scattered language resources and services for sentiment analysis in an interoperable way. In this paper we describe the language resources and services available in the LRP and some sample applications that can be developed on top of the EUROSENTIMENT LRP.
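As a rough illustration of a Linked Data based sentiment annotation in the spirit of Marl and NIF, here is a minimal rdflib sketch; the namespace URIs and property names are written from memory and should be treated as assumptions rather than the exact EUROSENTIMENT representation model.

```python
# Hedged sketch: a sentiment annotation expressed with Marl- and NIF-style terms.
# Namespace URIs and property names are assumptions, not the exact EUROSENTIMENT model.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

MARL = Namespace("http://www.gsi.dit.upm.es/ontologies/marl/ns#")  # assumed URI
NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")  # assumed URI

g = Graph()
g.bind("marl", MARL)
g.bind("nif", NIF)

text = URIRef("http://example.org/review/1#char=0,25")
opinion = URIRef("http://example.org/review/1/opinion/1")

g.add((text, RDF.type, NIF.String))
g.add((text, NIF.anchorOf, Literal("The coffee was excellent!")))
g.add((opinion, RDF.type, MARL.Opinion))
g.add((opinion, MARL.extractedFrom, text))
g.add((opinion, MARL.hasPolarity, MARL.Positive))
g.add((opinion, MARL.polarityValue, Literal(0.9, datatype=XSD.float)))

print(g.serialize(format="turtle"))
```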
Abstract:
The fermentation stage is considered to be one of the critical steps in coffee processing due to its impact on the final quality of the product. The objective of this work is to characterise the temperature gradients in a fermentation tank by multi-distributed, low-cost and autonomous wireless sensors (23 semi-passive TurboTag® radio-frequency identifier (RFID) temperature loggers). Spatial interpolation in polar coordinates and an innovative methodology based on phase space diagrams are used. A real coffee fermentation process was supervised in the Cauca region (Colombia) with sensors submerged directly in the fermenting mass, leading to a 4.6 °C temperature range within the fermentation process. Spatial interpolation shows a maximum instant radial temperature gradient of 0.1 °C/cm from the centre to the perimeter of the tank and a vertical temperature gradient of 0.25 °C/cm for sensors with equal polar coordinates. The combination of spatial interpolation and phase space graphs consistently enables the identification of five local behaviours during fermentation (hot and cold spots).
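As a back-of-the-envelope illustration of the kind of radial gradient reported above, the sketch below estimates a mean radial temperature gradient from a few sensor readings given in polar coordinates; the readings and tank dimensions are invented values, not data from the study.

```python
# Hedged sketch: estimating a radial temperature gradient (°C/cm) from sensors
# placed at different radii in a cylindrical fermentation tank.
# All readings and dimensions below are invented for illustration only.
import numpy as np

# (radius_cm, angle_deg, temperature_C) for sensors at the same depth
readings = [
    (0.0,   0.0, 22.9),   # centre of the tank
    (15.0, 90.0, 21.6),
    (30.0, 180.0, 20.3),  # near the tank wall
]

radii = np.array([r for r, _, _ in readings])
temps = np.array([t for _, _, t in readings])

# Least-squares fit of temperature as a linear function of radius:
# the slope approximates the mean radial gradient in °C/cm.
slope, intercept = np.polyfit(radii, temps, 1)
print(f"estimated radial gradient: {abs(slope):.2f} °C/cm")  # ~0.09 °C/cm here
```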
Abstract:
In recent years, the way information is shared on the Internet has changed dramatically, giving rise to what we now call the Semantic Web. This term, coined in 2004, denotes a "smarter" way of sharing data, so that the data can be understood by a machine or by any person in the world. The Semantic Web is currently in an expansion phase, as evidenced by the number of research groups devoting their efforts to its development and implementation and by the breadth of topics their work covers. With its appearance, newly created databases are increasingly accompanied by more or less simple ontologies that describe them, in order to benefit from the interoperability possibilities this brings. This work studies the benefits of implementing an ontology on top of an already existing relational database, the work required to do so and the tools needed for it. For this purpose, data of considerable interest have been used and, as a continuation of that earlier work, the ontology has been implemented. These data come from the study of a method for automatically obtaining the lineage of the parcels registered in the Spanish cadastre.
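To make the idea of layering an ontology over an existing relational database more concrete, here is a hedged sketch that maps rows of a hypothetical cadastral parcel table to RDF triples with rdflib; the table layout, class and property names are invented for illustration and are not the ontology developed in the work.

```python
# Hedged sketch: exposing rows of an existing relational table as RDF,
# roughly in the spirit of ontology-based access to a legacy database.
# Table layout, vocabulary and URIs are hypothetical.
import sqlite3
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/cadastre#")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcel (id TEXT, area_m2 REAL, parent_id TEXT)")
conn.execute("INSERT INTO parcel VALUES ('P-001', 350.0, NULL)")
conn.execute("INSERT INTO parcel VALUES ('P-001-A', 175.0, 'P-001')")

g = Graph()
g.bind("ex", EX)
for pid, area, parent in conn.execute("SELECT id, area_m2, parent_id FROM parcel"):
    parcel = EX[pid]
    g.add((parcel, RDF.type, EX.Parcel))
    g.add((parcel, EX.areaM2, Literal(area)))
    if parent:  # lineage: which parcel this one was split from
        g.add((parcel, EX.derivedFrom, EX[parent]))

print(g.serialize(format="turtle"))
```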
Abstract:
Linked Data is the key paradigm of the Semantic Web, a new generation of the World Wide Web that promises to bring meaning (semantics) to data. A large number of both public and private organizations have published their own data following the Linked Data principles, or have done so with data from other organizations. In this context, since the generation and publication of Linked Data are intensive engineering processes that require close attention in order to achieve high quality, and since experience has shown that existing general guidelines are not always sufficient to be applied to every domain, this paper presents a set of guidelines for generating and publishing Linked Data in the context of energy consumption in buildings (one aspect of Building Information Models). These guidelines offer a comprehensive description of the tasks to perform, including a list of steps, tools that help in achieving each task, various alternatives for performing it, and best practices and recommendations. Furthermore, this paper presents a complete example of the generation and publication of Linked Data about energy consumption in buildings, following the presented guidelines, in which the energy consumption data of council sites (e.g., buildings and lights) belonging to the Leeds City Council jurisdiction have been generated and published as Linked Data.
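A minimal sketch of what generated Linked Data about energy consumption in a building might look like, using rdflib; the vocabulary, URIs and the single observation are invented placeholders, not the actual Leeds City Council dataset or the model prescribed by the guidelines.

```python
# Hedged sketch: one energy-consumption observation published as RDF.
# Vocabulary and URIs are placeholders, not the dataset described in the paper.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/energy#")

g = Graph()
g.bind("ex", EX)

site = URIRef("http://example.org/site/town-hall")
obs = URIRef("http://example.org/observation/town-hall-2014-01")

g.add((site, RDF.type, EX.CouncilSite))
g.add((obs, RDF.type, EX.EnergyObservation))
g.add((obs, EX.observedSite, site))
g.add((obs, EX.period, Literal("2014-01", datatype=XSD.gYearMonth)))
g.add((obs, EX.consumptionKWh, Literal(12500.0, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```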
Abstract:
Query rewriting is one of the fundamental steps in ontology-based data access (OBDA) approaches. It takes as input an ontology and a query written according to that ontology, and produces as output a set of queries that should be evaluated to account for the inferences that must be considered for that query and ontology. Different query rewriting systems support different ontology languages with varying expressiveness, and the rewritten queries obtained as output also vary in expressiveness. This heterogeneity has traditionally made it difficult to compare different approaches, and the area in general lacks commonly agreed benchmarks that could be used not only for such comparisons but also for improving OBDA support. In this paper we compile the data, dimensions and measurements that have been used to evaluate some of the most recent systems, we analyse and characterise these assets, and we provide a unified set of them that can be used as a starting point towards a more systematic benchmarking process for such systems. Finally, we apply this initial benchmark to some of the most relevant OBDA approaches in the state of the art.
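As a toy illustration of query rewriting (not any specific system from the benchmark), the sketch below expands an atomic class query with the subclasses entailed by a small ontology, producing the set of rewritten queries that must be evaluated over the data.

```python
# Hedged sketch: a toy rewriting of an atomic class query using subclass axioms.
# Real OBDA rewriters handle far more expressive ontologies and query shapes.
from collections import defaultdict

# Tiny ontology: SubClassOf(child, parent)
subclass_of = [
    ("Professor", "Teacher"),
    ("AssistantProfessor", "Professor"),
    ("Lecturer", "Teacher"),
]

children = defaultdict(set)
for sub, sup in subclass_of:
    children[sup].add(sub)

def rewrite_class_query(cls):
    """Return all classes whose instances answer the query ?x rdf:type cls."""
    result, frontier = {cls}, [cls]
    while frontier:
        for sub in children[frontier.pop()]:
            if sub not in result:
                result.add(sub)
                frontier.append(sub)
    return result

# Rewriting of SELECT ?x WHERE { ?x rdf:type :Teacher }
print(rewrite_class_query("Teacher"))
# -> {'Teacher', 'Professor', 'AssistantProfessor', 'Lecturer'}
```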
Abstract:
In this paper we define the notion of an axiom dependency hypergraph, which explicitly represents how axioms are included into a module by the algorithm for computing locality-based modules. A locality-based module of an ontology corresponds to a set of connected nodes in the hypergraph, and atoms of an ontology to strongly connected components. Collapsing the strongly connected components into single nodes yields a condensed hypergraph that comprises a representation of the atomic decomposition of the ontology. To speed up the condensation of the hypergraph, we first reduce its size by collapsing the strongly connected components of its graph fragment employing a linear time graph algorithm. This approach helps to significantly reduce the time needed for computing the atomic decomposition of an ontology. We provide an experimental evaluation for computing the atomic decomposition of large biomedical ontologies. We also demonstrate a significant improvement in the time needed to extract locality-based modules from an axiom dependency hypergraph and its condensed version.
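The condensation step described above can be illustrated with a small directed graph: strongly connected components are collapsed into single nodes, which is what yields the condensed structure used for the atomic decomposition. The sketch uses networkx on a made-up graph and is only a rough analogue of the axiom dependency hypergraph.

```python
# Hedged sketch: collapsing strongly connected components of a directed graph,
# a rough analogue of condensing the graph fragment of an axiom dependency hypergraph.
import networkx as nx

# Made-up dependencies between axioms a1..a5
G = nx.DiGraph([
    ("a1", "a2"), ("a2", "a1"),  # a1 and a2 depend on each other -> one atom
    ("a2", "a3"),
    ("a3", "a4"), ("a4", "a3"),  # a3 and a4 form another atom
    ("a4", "a5"),
])

sccs = list(nx.strongly_connected_components(G))
condensed = nx.condensation(G, scc=sccs)  # each SCC becomes a single node

print("atoms:", [sorted(c) for c in sccs])
print("condensed edges:", list(condensed.edges()))
```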
Abstract:
This essay is intended as an introduction to the study of a broad, complex and dynamic set of notions, techniques and social practices revolving around the blogosphere, "a vigorous communication subspace on the Internet", as Sáez Vacas calls it in this same magazine. The aim is not so much to be exhaustive in the treatment as to acquaint the reader with the different concepts and phenomena involved in the genesis of this peculiar universe, whose origin we can place in a metaphorical Blog Bang. We discuss blogs (weblogs or bitácoras), their origin, characterisation, classification and quantification, the technology that surrounds them, and related concepts such as wikis, socialware, blogoculture and the Semantic Web.
Abstract:
In this paper we present the MultiFarm dataset, which has been designed as a benchmark for multilingual ontology matching. The MultiFarm dataset is composed of a set of ontologies translated into different languages and the corresponding alignments between these ontologies. It is based on the OntoFarm dataset, which has been used successfully for several years in the Ontology Alignment Evaluation Initiative (OAEI). By translating the ontologies of the OntoFarm dataset into eight different languages – Chinese, Czech, Dutch, French, German, Portuguese, Russian, and Spanish – we created a comprehensive set of realistic test cases. Based on these test cases, it is possible to evaluate and compare the performance of matching approaches with a special focus on multilingualism.
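Evaluation on test cases like these typically boils down to comparing a system's correspondences against a reference alignment. Below is a hedged, generic sketch of precision, recall and F-measure over sets of correspondences; the entity names are invented and the simplification ignores confidence values and relation types.

```python
# Hedged sketch: precision/recall/F-measure of an alignment against a reference.
# Correspondences are simplified to (source_entity, target_entity) pairs.
def evaluate(system, reference):
    system, reference = set(system), set(reference)
    tp = len(system & reference)
    precision = tp / len(system) if system else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f1

reference = {("en:Conference", "de:Konferenz"), ("en:Author", "de:Autor")}
system = {("en:Conference", "de:Konferenz"), ("en:Paper", "de:Artikel")}
print(evaluate(system, reference))  # (0.5, 0.5, 0.5)
```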
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, corresponds to content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, and the knowledge present in service and content discovery rules, in order to anticipate the contents and services to be found on specific resources from the web. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are: the unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology, defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm, defined for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and the construction of a base of discovery rules.
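As a rough illustration of content-level discovery rules (mapping pieces of a resource's HTML representation onto semantic entities), the sketch below pairs CSS selectors with RDF-style properties; the selectors, vocabulary and sample markup are invented, and this is not the thesis's scraping ontology or rule language.

```python
# Hedged sketch: content discovery rules as (property -> CSS selector) mappings
# applied to an HTML representation to yield semantically labelled data.
# Selectors, properties and markup are invented; requires beautifulsoup4.
from bs4 import BeautifulSoup

html = """
<article class="news-item">
  <h1 class="headline">Example headline</h1>
  <span class="byline">Jane Doe</span>
</article>
"""

# A "discovery rule": where each piece of data lives and what it means.
rules = {
    "http://example.org/news#headline": "article.news-item h1.headline",
    "http://example.org/news#author":   "article.news-item span.byline",
}

soup = BeautifulSoup(html, "html.parser")
extracted = {
    prop: node.get_text(strip=True)
    for prop, selector in rules.items()
    if (node := soup.select_one(selector)) is not None
}
print(extracted)
```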
Abstract:
Linked data offers a promising setting to encode, publish and share metadata of resources. As a matter of fact, it has already been adopted by data producers such as the European Environment Agency, the US and some EU governments, whose first ambition is to share (meta)data, making their processes more effective and transparent. Such increasing interest and involvement of data providers is certainly genuine evidence of the success of the web of data, but in a longer perspective, frameworks supporting linked data consumers in their decision-making processes will be a compelling need. In this respect, the talk introduces SSONDE, a framework enabling detailed comparison, ranking and selection of linked data resources through the analysis of their RDF, ontology-driven metadata. SSONDE implements an instance similarity especially designed to support resource selection, namely the process stakeholders engage in to choose a set of resources suitable for a given analysis purpose: (i) it deploys an asymmetric similarity assessment to emphasize information about the gains and losses stakeholders incur when adopting a resource in place of another; (ii) it relies on an explicit formalization of contexts to tailor the similarity assessment with respect to specific user-defined selection goals. The talk aims at providing insight into the SSONDE instance similarity and briefly describes some examples of SSONDE deployment in the context of linked data consumption.
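The asymmetric flavour of such a similarity can be illustrated with a Tversky-style measure over feature sets, where the features one resource has and the other lacks are weighted differently; this is a generic sketch, not SSONDE's actual similarity function, and the weights and feature sets are invented.

```python
# Hedged sketch: an asymmetric, Tversky-style similarity between two resources
# described by feature sets. Not SSONDE's actual measure; weights are illustrative.
def asymmetric_similarity(a, b, alpha=0.8, beta=0.2):
    """Similarity of a *with respect to* b: features of a missing from b
    (the 'losses') are penalised more than features of b missing from a."""
    a, b = set(a), set(b)
    common = len(a & b)
    only_a = len(a - b)  # what we lose if we adopt b instead of a
    only_b = len(b - a)  # what we gain
    return common / (common + alpha * only_a + beta * only_b)

r1 = {"air-quality", "hourly", "geo-referenced"}
r2 = {"air-quality", "daily"}
print(asymmetric_similarity(r1, r2))  # differs from the reverse direction
print(asymmetric_similarity(r2, r1))
```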
Abstract:
This paper introduces a semantic language developed to be used in a semantic analyzer based on linguistic and world knowledge. Linguistic knowledge is provided by a Combinatorial Dictionary and several sets of rules. Extra-linguistic information is stored in an Ontology. The meaning of the text is represented by means of a series of RDF-type triples of the form predicate (subject, object). The semantic analyzer is one of the options of the multifunctional ETAP-3 linguistic processor, and can be used for Information Extraction and Question Answering. We describe the semantic representation of expressions that provide an assessment of the number of objects involved and/or give a quantitative evaluation of different types of attributes. We focus on the following aspects: 1) parametric and non-parametric attributes; 2) gradable and non-gradable attributes; 3) ontological representation of different classes of attributes; 4) absolute and relative quantitative assessment; 5) punctual and interval quantitative assessment; 6) intervals with precise and fuzzy boundaries.
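To give a flavour of a triple-based meaning representation of quantitative expressions, here is a small sketch that encodes "the tower is about 15 m tall" as predicate(subject, object) triples with an interval for the fuzzy value; the predicate names are invented and do not reproduce the ETAP-3 representation.

```python
# Hedged sketch: encoding "the tower is about 15 m tall" as predicate(subject, object)
# triples, with a fuzzy interval for the approximate value. Predicate names are invented.
triples = [
    ("hasAttribute", "tower1", "height1"),   # parametric attribute of the object
    ("attributeType", "height1", "Height"),
    ("measuredIn", "height1", "metre"),
    ("approxValue", "height1", "15"),        # absolute, fuzzy ("about 15")
    ("valueLowerBound", "height1", "14"),    # interval with fuzzy boundaries
    ("valueUpperBound", "height1", "16"),
]

for predicate, subject, obj in triples:
    print(f"{predicate}({subject}, {obj})")
```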
Abstract:
Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use.
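A short sketch of calling a public DBpedia Spotlight endpoint to annotate a piece of text; the endpoint URL, parameters and response fields reflect the commonly documented public demo service and may differ from a particular deployment, so treat them as assumptions.

```python
# Hedged sketch: annotating text with DBpedia URIs via a public Spotlight endpoint.
# Endpoint URL, parameters and response fields are assumptions based on the
# commonly documented demo service; verify against your own deployment.
import requests

ENDPOINT = "https://api.dbpedia-spotlight.org/en/annotate"  # assumed public demo endpoint

resp = requests.get(
    ENDPOINT,
    params={"text": "Berlin is the capital of Germany.", "confidence": 0.5},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for resource in resp.json().get("Resources", []):
    print(resource.get("@surfaceForm"), "->", resource.get("@URI"))
```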
Abstract:
After the extraordinary spread of the World Wide Web during the last fifteen years, engineers and developers are now pushing the Internet to its next frontier. A new conception in computer science and network communication has been burgeoning during roughly the last decade: a world where most of the computers of the future will be extremely downsized, to the point that, in the most advanced prototypes, they will look like dust. In this vision, every single element of our "real" world has an intelligent tag that carries all its relevant data, effectively mapping the "real" world onto a "virtual" one, where all the electronically augmented objects are present, can interact among themselves and influence with their behaviour that of the other objects, or even the behaviour of a final human user. This is the vision of the Internet of the Future, which also draws on ideas from several novel trends in computer science and networking, such as pervasive computing and the Internet of Things. As has happened before, materializing a new paradigm that changes the way entities interrelate in this new environment has proved to be a goal full of challenges along the way. Right now the situation is exciting, with a plethora of new developments, proposals and models sprouting all the time, often in an uncoordinated, decentralised manner away from any standardization, somewhat resembling the status quo of the first developments of advanced computer networking, back in the 60s and the 70s. Usually, a system designed after the Internet of the Future will consist of one or several end-user devices attached to those final users, a network (often a Wireless Sensor Network) charged with the task of collecting data for the end-user devices, and sometimes a base station that sends the data to less hardware-constrained computers for further processing. When implementing a system designed with the Internet of the Future as a pattern, the issues, and more specifically the limitations, that must be faced are numerous: lack of standards for platforms and protocols, processing bottlenecks, low battery lifetime, etc. One of the main objectives of this project is to present a functional model of how a system based on the paradigms linked to the Internet of the Future works, overcoming some of the difficulties that can be expected and showing a model of a middleware architecture specifically designed for a pervasive, ubiquitous system. This Final Degree Dissertation is divided into several parts. Beginning with an introduction to the main topics and concepts of this new model, a state of the art is offered so as to provide a technological background. After that, an example of a semantic and service-oriented middleware is shown; later, a system built by means of this semantic and service-oriented middleware, and other components, is developed, justifying its placement in a particular scenario, describing it and analysing the data obtained from it. Finally, the conclusions drawn from this system and future work worth tackling are presented as well.
Abstract:
Idea Management Systems are web applications that implement the notion of open innovation through crowdsourcing. Typically, organizations use these kinds of systems to connect to large communities in order to gather ideas for the improvement of products or services. Originating from simple suggestion boxes, Idea Management Systems have advanced beyond collecting ideas and aspire to be a knowledge management solution capable of selecting the best ideas via collaborative as well as expert assessment methods. In practice, however, contemporary systems still face a number of problems, usually related to information overflow and to recognizing submissions of questionable quality within a reasonable allocation of time and effort. This thesis focuses on the idea assessment problem area and contributes a number of solutions that allow filtering, comparing and evaluating ideas submitted to an Idea Management System. With respect to Idea Management System interoperability, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the boundaries of a single system to compare and assess innovation in an organization-wide or market-wide context. Furthermore, based on the ontology, the thesis builds a number of solutions for improving idea assessment via community opinion analysis (MARL), annotation of idea characteristics (Gi2MO Types) and the study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management to successfully recognize the differentiation between communities; opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new relationship types between ideas and their impact on idea clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and for mature open-source alternatives to the widely available commercial suites. From the academic point of view, the project delivers resources for undertaking experiments in the Idea Management Systems area and has become a forum that gathers a number of academic and industrial partners.
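As a toy illustration of using community opinion as an assessment signal (in the spirit of the MARL-based analysis, though not its actual metrics), the sketch below scores ideas by the share of positive comments attached to them; the data and the metric are invented for illustration.

```python
# Hedged sketch: ranking ideas by a simple community-opinion metric
# (share of positive comments). Illustrative only; not the thesis's MARL metrics.
from collections import defaultdict

# (idea_id, comment_polarity) pairs, polarity in {"positive", "negative", "neutral"}
comments = [
    ("idea-1", "positive"), ("idea-1", "positive"), ("idea-1", "negative"),
    ("idea-2", "negative"), ("idea-2", "neutral"),
    ("idea-3", "positive"),
]

counts = defaultdict(lambda: {"positive": 0, "total": 0})
for idea, polarity in comments:
    counts[idea]["total"] += 1
    if polarity == "positive":
        counts[idea]["positive"] += 1

scores = {idea: c["positive"] / c["total"] for idea, c in counts.items()}
for idea, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{idea}: {score:.2f}")
```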
Abstract:
The goal of the W3C's Media Annotation Working Group (MAWG) is to promote interoperability between multimedia metadata formats on the Web. As experienced by everybody, audiovisual data is omnipresent on today's Web. However, different interaction interfaces and especially diverse metadata formats prevent unified search, access, and navigation. MAWG has addressed this issue by developing an interlingua ontology and an associated API. This article discusses the rationale and core concepts of the ontology and API for media resources. The specifications developed by MAWG enable interoperable contextualized and semantic annotation and search, independent of the source metadata format, and connecting multimedia data to the Linked Data cloud. Some demonstrators of such applications are also presented in this article.
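A small sketch of describing a media resource with terms in the spirit of the MAWG ontology for media resources, via rdflib; the namespace URI and property names are written from memory and should be checked against the actual W3C specification.

```python
# Hedged sketch: annotating a media resource with MAWG-style terms.
# The ma-ont namespace and property names should be verified against the W3C spec.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

MA = Namespace("http://www.w3.org/ns/ma-ont#")  # assumed namespace of the MAWG ontology

g = Graph()
g.bind("ma", MA)

video = URIRef("http://example.org/media/lecture-42")
g.add((video, RDF.type, MA.MediaResource))
g.add((video, MA.title, Literal("Introduction to Media Annotations")))
g.add((video, MA.locator, URIRef("http://example.org/videos/lecture-42.mp4")))
g.add((video, MA.duration, Literal(3600, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```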