975 results for Web Semantico semantic open data geoSPARQL
Abstract:
This dissertation analyses the governance of European scientific research, focusing on the emergence of the Open Science paradigm: a new way of doing science, oriented towards openness in every phase of the research process and able to take full advantage of digital ICTs. The emergence of this paradigm is relatively recent, but in recent years it has become increasingly relevant. The European institutions have expressed a clear intention to embrace the Open Science paradigm (e.g., the European Open Science Cloud, EOSC, or the establishment of the Horizon Europe programme). This dissertation provides a conceptual framework for the multiple interventions of the European institutions in the field of Open Science, addressing the major legal challenges of its implementation. The study investigates the notion of Open Science, proposing a definition that takes into account all its dimensions in relation to the human and fundamental rights framework in which Open Science is grounded. The inquiry addresses the legal challenges related to the openness of research data, in light of the European Open Data framework and the impact of the GDPR on Open Science. The last part of the study is devoted to the infrastructural dimension of the Open Science paradigm, exploring e-infrastructures. The focus is on a specific type of computational infrastructure: the High Performance Computing (HPC) facility. The adoption of HPC for research is analysed from the European perspective, investigating the EuroHPC project, and from the local perspective, through a case study of the HPC facility of the University of Luxembourg, the ULHPC. This dissertation underlines the relevance of a legal coordination approach, among all actors and phases of the process, in order to develop and implement the Open Science paradigm in adherence to the underlying human and fundamental rights.
Abstract:
This article is published online with Open Access and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
Abstract:
Websites are, nowadays, the face of institutions, but they are often neglected, especially when it comes to content. In this paper, we present an investigation whose final goal is the development of a model for measuring data quality in the institutional websites of health units. To that end, we carried out a bibliographic review of the available approaches for evaluating website content quality, in order to identify the most recurrent dimensions and attributes, and we are currently carrying out a Delphi Method process, presently in its second stage, with the purpose of reaching an adequate set of attributes for measuring content quality.
Abstract:
This article presents a work-in-progress version of a Dublin Core Application Profile (DCAP) developed to serve the Social and Solidarity Economy (SSE). Studies revealed that this community is interested in implementing both internal interoperability between their Web platforms, to build a global SSE e-marketplace, and external interoperability between their Web platforms and external ones. The Dublin Core Application Profile for Social and Solidarity Economy (DCAP-SSE) serves this purpose. SSE organisations are submerged in the market economy but have specificities that are not taken into account in this economy. The DCAP-SSE integrates terms from well-known metadata schemas, Resource Description Framework (RDF) vocabularies and ontologies, in order to enhance interoperability and take advantage of the benefits of the Linked Open Data ecosystem. It also integrates terms from the new essglobal RDF vocabulary, which was created with the goal of responding to SSE-specific needs. The DCAP-SSE also integrates five new Vocabulary Encoding Schemes to be used with DCAP-SSE properties. The DCAP development was based on a method for the development of application profiles (Me4MAP). We believe that this article has an educational value, since it presents the idea that it is important to base DCAP developments on a method. This article shows the main results of applying such a method.
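As a rough illustration of the kind of interoperability the DCAP-SSE aims at, the sketch below builds Dublin Core-style triples for a hypothetical SSE organisation and serializes them as N-Triples using only the Python standard library. The organisation URI, the essglobal class URI and the property choices are invented for illustration and are not taken from the actual DCAP-SSE specification.

```python
# Sketch: expressing a DCAP-style record as RDF triples and serializing
# them as N-Triples. URIs below are hypothetical examples.

DCTERMS = "http://purl.org/dc/terms/"

def ntriples(triples):
    """Serialize (subject, predicate, object) tuples as N-Triples lines.
    Objects starting with 'http' are treated as URIs, others as literals."""
    lines = []
    for s, p, o in triples:
        if o.startswith("http"):
            obj = f"<{o}>"
        else:
            obj = '"' + o.replace('"', '\\"') + '"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

record = "http://example.org/sse/org/42"  # hypothetical organisation URI
triples = [
    (record, DCTERMS + "title", "Cooperativa Exemplo"),
    (record, DCTERMS + "type", "http://example.org/essglobal/Organisation"),
    (record, DCTERMS + "spatial", "Lisboa"),
]

print(ntriples(triples))
```

Serializing to a standard format like N-Triples is what lets such records join the Linked Open Data ecosystem: any RDF-aware consumer can merge them with other datasets by URI.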
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted to obtain the degree of Master in Biomedical Engineering.
Abstract:
This study identifies a measure of the cultural importance of an area within a city. It does so by using origin-destination trip data from the stations of the New York City bike share system as a proxy for studying the city. Movement in the city is rarely studied at such a small scale. The change in the strength of the similarity of movement between stations is studied; this is the first study to provide such a measure of importance for every point in the system. This measure is then related to the characteristics that make for vibrant city communities, namely highly mixed land use types. It reveals that the spatial pattern of important areas remains constant over differing time periods. Communities are then characterised by the land uses surrounding the stations with high measures of importance. Finally, the study identifies the areas of global cultural importance alongside the areas of local importance to the city.
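The abstract does not spell out the similarity measure, but the underlying idea of comparing movement between stations can be sketched in a simple way: build a destination profile for each origin station from origin-destination trips and compare profiles by cosine similarity. The station names and trip counts below are invented, and this is a stand-in illustration, not the study's actual method.

```python
import math
from collections import defaultdict

# Invented origin-destination trip pairs for illustration.
trips = [
    ("A", "B"), ("A", "B"), ("A", "C"),
    ("B", "C"), ("B", "C"), ("B", "A"),
]

def destination_profile(trips, origin):
    """Count how many trips from `origin` end at each destination."""
    profile = defaultdict(int)
    for o, d in trips:
        if o == origin:
            profile[d] += 1
    return profile

def cosine(p, q):
    """Cosine similarity between two sparse count vectors."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

pa = destination_profile(trips, "A")  # {"B": 2, "C": 1}
pb = destination_profile(trips, "B")  # {"C": 2, "A": 1}
print(cosine(pa, pb))  # 0.4
```

Tracking how such pairwise similarities change over time periods is one way to single out stations whose surroundings draw trips in a distinctive pattern.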
Abstract:
FOSTER aims to support different stakeholders, especially young researchers, in adopting open access in the context of the European Research Area (ERA) and in complying with the open access policies and rules of participation set out for Horizon 2020 (H2020). FOSTER establishes a European-wide training programme on open access and open data, consolidating training activities at the downstream level and reaching diverse disciplinary communities and countries in the ERA. The training programme includes different approaches and delivery options: e-learning, blended learning, self-learning, dissemination of training materials/contents, a helpdesk, face-to-face training (especially training-the-trainers), summer schools, seminars, etc.
Abstract:
To make full use of research data, the bioscience community needs to adopt technologies and reward mechanisms that support interoperability and promote the growth of an open 'data commoning' culture. Here we describe the prerequisites for data commoning and present an established and growing ecosystem of solutions using the shared 'Investigation-Study-Assay' framework to support that vision.
Abstract:
The Horizon Report series is the most tangible result of the New Media Consortium's Horizon Project, a qualitative research effort launched in 2002 that identifies and describes the emerging technologies with the greatest potential impact on teaching, learning, research and creative expression in global education. This volume, the 2010 Horizon Report: Ibero-American Edition, focuses on research in the countries of the Ibero-American region (including all of Latin America, Spain and Portugal) and on higher education. The 2010 Horizon Report: Ibero-American Edition is the first to offer this regional contextualisation and was produced by the NMC and the eLearn Center of the Universitat Oberta de Catalunya.
Abstract:
Over the past year, the Open University of Catalonia library has been designing its new website with this question in mind. Our main concern has been how to integrate the library into the student's day-to-day study routine, so that it is not merely a satellite tool. We present the design of a website that, in a virtual library like ours, is not just a website but the whole library itself. The central point of the site is "my library", a space that associates the library resources with the student's curriculum and course subjects. There, students can save resources as favourites, comment on them or share them, and they also have access to all the services the library offers. The resources are imported from multiple tools, such as Millennium, SFX, Metalib and DSpace, into the Drupal CMS. The resources' metadata can then be enriched with contextual information from other sources, for example the course subjects. Finally, the resources can be exported in standard, open data formats, making them available to linked data applications.
Abstract:
This work defines what a semantic database is, what advantages it offers, how it is used and in which kinds of projects or systems it makes sense to use one. In addition, it studies one such database in detail, OWLIM, from the company Ontotext, to evaluate how difficult it is to use, its performance and its specific capabilities.
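At its core, a semantic database such as OWLIM stores subject-predicate-object triples and answers triple-pattern queries; production stores add indexing, SPARQL parsing and OWL inference on top. A toy in-memory version of that core operation might look like the sketch below, where all identifiers are abbreviated, invented examples and `None` acts as a wildcard.

```python
# Toy in-memory triple store illustrating the basic operation of a
# semantic database: storing (s, p, o) triples and matching patterns.
# Real stores add indexes, SPARQL support and inference.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Return all triples matching the pattern; None is a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("ex:OWLIM", "rdf:type", "ex:TripleStore")
store.add("ex:OWLIM", "ex:vendor", "Ontotext")
store.add("ex:Jena", "rdf:type", "ex:Framework")

# Pattern query: everything with an rdf:type, regardless of subject.
print(store.match(p="rdf:type"))
```

A SPARQL query such as `SELECT ?s WHERE { ?s rdf:type ?t }` boils down to exactly this kind of pattern match, which is why triple-pattern evaluation is the operation semantic databases optimise most heavily.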
Abstract:
A comparison of database management systems oriented towards the Semantic Web, covering both native systems and systems enabled for this function.
Abstract:
Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this exploitation has driven the rapid development of the Internet. Nowadays, the Internet is the biggest container of resources: information databases such as Wikipedia, Dmoz and the open data available on the net hold great informational potential for mankind. Easy and free web access is one of the major features characterizing the Internet culture. Ten years ago, the web was completely dominated by English; today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We present the theoretical foundations as well as the ontology itself, named the Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aimed at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval. VIKI is a supporting system for the Linguistic Meta-Model, and its main task is to give some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be exhaustive.