925 results for Semantic Web, Exploratory Search, Recommendation Systems
Abstract:
The thesis describes PARLEN, a tool that enables the analysis of articles, the extraction and recognition of entities (for example, people, institutions, and cities), and the linking of those entities to online resources. PARLEN is also able to publish the extracted data as a dataset based on Semantic Web principles and technologies.
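PARLEN's actual pipeline is not reproduced here; the following is a minimal sketch of the general approach the abstract describes, assuming spaCy for entity recognition and rdflib for RDF publishing (both are stand-ins, not PARLEN's actual components):

```python
# Illustrative sketch only: NOT PARLEN's real pipeline.
# Assumes spaCy for entity recognition and rdflib for RDF publishing.
import spacy
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

nlp = spacy.load("en_core_web_sm")  # hypothetical model choice
EX = Namespace("http://example.org/entity/")  # placeholder namespace

def extract_and_publish(text: str) -> str:
    """Recognise entities in an article and emit them as RDF triples."""
    g = Graph()
    g.bind("ex", EX)
    for ent in nlp(text).ents:
        if ent.label_ in ("PERSON", "ORG", "GPE"):  # people, institutions, cities
            uri = EX[ent.text.replace(" ", "_")]
            g.add((uri, RDF.type, EX[ent.label_]))
            g.add((uri, RDFS.label, Literal(ent.text)))
    return g.serialize(format="turtle")

print(extract_and_publish("Tim Berners-Lee founded the W3C in Geneva."))
```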
Abstract:
Describes and analyzes the results obtained from an analysis of the publications indexed in the Scopus database, using the rankings generated by the Scimago research group on the output of the different Central American countries on the topic of documentation in the mass media. A comparison is performed across the different countries of the region and their scientific output. Finally, based on the data analysis, a number of recommendations are made to improve production and presence in indexed databases.
Abstract:
The World Wide Web Consortium, W3C, is known for standards like HTML and CSS, but there is much more to it than that: mobile, automotive, publishing, graphics, TV and more. Then there are horizontal issues like privacy, security, accessibility and internationalisation. Many of these assume that there is an underlying data infrastructure to power applications. In this session, W3C's Data Activity Lead, Phil Archer, will describe the overall vision for better use of the Web as a platform for sharing data and how that translates into recent, current and possible future work. What's the difference between using the Web as a data platform and as a glorified USB stick? Why does it matter? And what makes a standard a standard anyway? Speaker Biography: Phil Archer is Data Activity Lead at W3C, the industry standards body for the World Wide Web, coordinating W3C's work in the Semantic Web and related technologies. He is most closely involved in the Data on the Web Best Practices, Permissions and Obligations Expression, and Spatial Data on the Web Working Groups. His key themes are interoperability through common terminology and URI persistence. As well as his work at the W3C, his career has encompassed broadcasting, teaching, linked data publishing, copy writing and, perhaps incongruously, countryside conservation. The common thread throughout has been a knack for communication, particularly communicating complex technical ideas to a more general audience.
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and the graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer, in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
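The connection to SPARQL's ORDER BY and LIMIT can be made concrete. A minimal sketch follows, using rdflib; the toy dataset and the influence-based scoring are invented for the demo, not the thesis's actual benchmarks or scoring models:

```python
# Illustrative only: a top-k subgraph-matching query expressed as SPARQL
# with ORDER BY and LIMIT, the construct the first model (SIQ) also answers.
# The graph data and the ex:influence scoring property are assumptions.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .
ex:alice foaf:knows ex:bob ;   ex:influence 9 .
ex:bob   foaf:knows ex:carol ; ex:influence 5 .
ex:carol foaf:knows ex:alice ; ex:influence 7 .
""", format="turtle")

# Rank matched subject vertices by a user-chosen importance property,
# returning only the top-k (here k = 2).
results = g.query("""
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX ex:   <http://example.org/>
SELECT ?person ?score WHERE {
    ?person foaf:knows ?friend ;
            ex:influence ?score .
}
ORDER BY DESC(?score)
LIMIT 2
""")
for person, score in results:
    print(person, score)
```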
Abstract:
The continuous flow of technological developments in the communications and electronics industries has led to the growing expansion of the Internet of Things (IoT). By leveraging the capabilities of smart networked devices and integrating them into existing industrial, leisure and communication applications, the IoT is expected to positively impact both economy and society, reducing the gap between the physical and digital worlds. Therefore, several efforts have been dedicated to the development of networking solutions addressing the diversity of challenges associated with such a vision. In this context, the integration of Information Centric Networking (ICN) concepts into the core of the IoT is a research area gaining momentum and involving both research and industry actors. The massive amount of heterogeneous devices, as well as the data they produce, is a significant challenge for a wide-scale adoption of the IoT. In this paper we propose a service discovery mechanism, based on Named Data Networking (NDN), that leverages a semantic matching mechanism to achieve a flexible discovery process. The development of appropriate service discovery mechanisms enriched with semantic capabilities for understanding and processing context information is a key feature for turning raw data into useful knowledge and ensuring interoperability among different devices and applications. We assessed the performance of our solution through the implementation and deployment of a proof-of-concept prototype. The obtained results illustrate the potential of integrating semantic and ICN mechanisms to enable flexible service discovery in IoT scenarios.
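A minimal sketch of the general idea, semantic matching over hierarchical NDN-style names, is given below. The name components, the concept hierarchy, and the distance scoring are all invented for illustration; the paper's actual matching mechanism is not reproduced:

```python
# Illustrative sketch: semantic matching of NDN-style hierarchical service
# names against a discovery request. The concept hierarchy and similarity
# scoring below are invented for the example, not the paper's mechanism.

# A toy concept hierarchy: each concept maps to its broader concept.
BROADER = {
    "temperature": "environment",
    "humidity": "environment",
    "environment": "sensing",
}

def concept_distance(concept: str, target: str) -> int:
    """Count steps up the hierarchy from concept to target (or give up)."""
    steps, current = 0, concept
    while current != target:
        if current not in BROADER:
            return 99  # unrelated
        current = BROADER[current]
        steps += 1
    return steps

def semantic_match(request: str, advertised: list[str]) -> str | None:
    """Pick the advertised service whose last name component is
    semantically closest to the requested concept."""
    req_concept = request.strip("/").split("/")[-1]
    return min(
        advertised,
        key=lambda n: concept_distance(n.strip("/").split("/")[-1], req_concept),
        default=None,
    )

services = ["/campus/lab1/temperature", "/campus/lab2/humidity"]
print(semantic_match("/campus/discovery/environment", services))
```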
Abstract:
Metadata are keys to the categorization of information in digital services. In essence, this is the cataloguing and classification of information, and its use is one of the best practices in information management; just as catalogues and OPACs do, it leads to better services for users, whether of virtual libraries, e-government, e-learning or e-health. Metadata are also the basis for future developments such as the Semantic Web. The topic is of particular interest to librarians since, as organizers of knowledge, they know the classification schemes, data-recording rules such as AACR2, and specialized vocabularies. This document covers some basic concepts on the subject and comments on the steps Latin America is taking in this global area.
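As a minimal sketch of descriptive metadata in practice, here is an invented record expressed with the Dublin Core vocabulary via rdflib (the described item and its URI are placeholders):

```python
# Illustrative sketch: a minimal Dublin Core record for a digital resource,
# expressed as RDF with rdflib. The described item is invented.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
g.bind("dcterms", DCTERMS)
doc = URIRef("http://example.org/docs/metadata-primer")  # placeholder URI

g.add((doc, DCTERMS.title, Literal("Metadata Primer")))
g.add((doc, DCTERMS.creator, Literal("Example Author")))
g.add((doc, DCTERMS.subject, Literal("Cataloguing")))
g.add((doc, DCTERMS.language, Literal("es")))

print(g.serialize(format="turtle"))
```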
Abstract:
The new generation of the Web, the Semantic Web, opens up potential opportunities to give meaning to Web content. Ontologies are one of the main tools for explicitly specifying the concepts of a particular domain, their properties and their relationships, so that information is published in formats that are automatically intelligible to machine agents, which can locate and manage the information precisely. This work presents a framework for a network of ontologies to represent concepts, attributes, operations and constraints relating to the curricular items used in the national categorization processes for Ecuadorian university teaching staff. The first part presents the domain context and related work; the process followed and the abstraction of the ontological model are then described, and finally an ontology is presented. It is a domain ontology because it provides the meaning of the concepts and their relationships within the domain of curricular items produced by university teaching staff, which are requirements of the university teaching categorization processes in Ecuador.
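To make the idea tangible, here is a minimal sketch of what a tiny fragment of such a domain ontology could look like in rdflib and OWL; the class and property names are invented, not the thesis's actual model:

```python
# Illustrative sketch: a tiny fragment of a domain ontology for curricular
# items, built with rdflib. All class and property names are invented.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

CURR = Namespace("http://example.org/curricular#")  # placeholder namespace
g = Graph()
g.bind("curr", CURR)

# A curricular item (e.g. a journal article) and its author.
g.add((CURR.CurricularItem, RDF.type, OWL.Class))
g.add((CURR.JournalArticle, RDF.type, OWL.Class))
g.add((CURR.JournalArticle, RDFS.subClassOf, CURR.CurricularItem))

g.add((CURR.Professor, RDF.type, OWL.Class))
g.add((CURR.hasAuthor, RDF.type, OWL.ObjectProperty))
g.add((CURR.hasAuthor, RDFS.domain, CURR.CurricularItem))
g.add((CURR.hasAuthor, RDFS.range, CURR.Professor))

print(g.serialize(format="turtle"))
```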
Abstract:
This document proposes a framework based on Semantic Web technologies to detect potential collaboration networks, through the semantic enrichment of scientific articles produced by researchers who publish with Ecuadorian affiliations. The framework is described through a linked data publication cycle. The scope covers publications with at least one author with an Ecuadorian affiliation. The detected collaboration networks are an important input for strengthening the efforts of the Ecuadorian government and the country's university authorities, prioritizing the efforts and resources invested in research, and determining the relevance or coherence of research programmes.
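A minimal sketch of the kind of enrichment involved, representing co-authorship as linked data so that collaboration networks can be queried, assuming rdflib and FOAF (the authors, paper, and hasAuthor property are invented):

```python
# Illustrative sketch: co-authorship as linked data for collaboration-network
# detection. The authors, the paper, and ex:hasAuthor are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # placeholder namespace
g = Graph()
g.bind("foaf", FOAF)

paper = EX["paper/123"]
for name in ("Ana Pérez", "Luis Cabrera"):
    author = EX["author/" + name.replace(" ", "_")]
    g.add((author, RDF.type, FOAF.Person))
    g.add((author, FOAF.name, Literal(name)))
    g.add((paper, EX.hasAuthor, author))

# Co-authors of the same paper form an edge in the collaboration network.
coauthors = [o for _, _, o in g.triples((paper, EX.hasAuthor, None))]
print(coauthors)
```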
RSLT: transforming Linked Open Data into natural-language text via declarative templates
Abstract:
The spread of the Semantic Web and of semantic data in RDF format has created the need for a mechanism to transform such information, which is simple for a machine to interpret, into natural language that is easy for humans to understand. The dissertation discusses the solutions found in the literature and, in detail, RSLT, a JavaScript library that attempts to solve this problem by enabling the creation of web applications able to perform these transformations through declarative templates. It also describes all the changes and modifications introduced in version 1.1 of the library, whose main new feature is support for SPARQL 1.0.
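RSLT's actual template syntax and JavaScript API are not reproduced here; the following is a minimal Python sketch of the underlying technique, a declarative template (a SPARQL pattern plus a text skeleton) filled from RDF data, with invented example data:

```python
# Illustrative sketch of the general technique (declarative templates filled
# from RDF data); this is NOT RSLT's actual JavaScript API or syntax.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .
ex:alice foaf:name "Alice" ; foaf:based_near ex:Bologna .
ex:Bologna foaf:name "Bologna" .
""", format="turtle")

# A declarative template: a SPARQL pattern plus a text skeleton.
TEMPLATE = {
    "query": """
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?name ?place WHERE {
            ?p foaf:name ?name ; foaf:based_near ?loc .
            ?loc foaf:name ?place .
        }""",
    "text": "{name} is based near {place}.",
}

for name, place in g.query(TEMPLATE["query"]):
    print(TEMPLATE["text"].format(name=name, place=place))
```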
Abstract:
Harnessing the potential of semantic web technologies to support and diversify scholarship is gaining popularity in the digital humanities. This talk describes a number of projects utilising Linked Data, ranging from musicology and library metadata to the representation of narrative structure and of the philological, bibliographical and museological data of ancient Mesopotamian literary compositions.
Abstract:
The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domains of archaeology, art and architectural history. The emerging BIM methodology and the IFC data exchange format are changing the way collaboration, visualisation and documentation are handled in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offers semantically enriched human- and machine-readable data. In contrast to civil engineering and cultural heritage, academic object-oriented disciplines such as archaeology, art and architectural history are acting as outside spectators. Since the 1990s, it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded in accurate documentation and visualisation. However, these standards are still missing and the validation of the outcomes is not fulfilled. Meanwhile, the digital research data remain ephemeral and continue to fill the growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment in the case of hypothetical reconstructions of destroyed or never-built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow, especially related to the classification and visualisation of uncertainty. The workflow is then applied to specific cases of 3D models uploaded to the DFG repository of the AI Mainz. In this way, the available methods of documenting, visualising and communicating uncertainty are analysed. In the end, this process will lead to a validation or a correction of the workflow and the initial assumptions, but also (dealing with different hypotheses) to a better definition of the levels of uncertainty.
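As a minimal sketch of what a machine-readable uncertainty classification for reconstruction elements could look like, the following uses an invented five-level scale and invented elements; it is not the dissertation's actual scheme:

```python
# Illustrative sketch: recording per-element uncertainty for a hypothetical
# 3D reconstruction. The five-level scale and the elements are invented,
# not the dissertation's actual classification.
from dataclasses import dataclass

UNCERTAINTY_SCALE = {
    1: "directly documented (measured survey)",
    2: "primary written/graphic sources",
    3: "analogy with comparable buildings",
    4: "stylistic conjecture",
    5: "pure hypothesis",
}

@dataclass
class ReconstructedElement:
    name: str
    uncertainty: int  # key into UNCERTAINTY_SCALE

    def describe(self) -> str:
        return f"{self.name}: level {self.uncertainty} ({UNCERTAINTY_SCALE[self.uncertainty]})"

model = [
    ReconstructedElement("ground-floor walls", 1),
    ReconstructedElement("roof structure", 3),
    ReconstructedElement("tower finial", 5),
]
for element in model:
    print(element.describe())
```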