975 results for Web Semantico semantic open data geoSPARQL
Abstract:
This bachelor's thesis focuses on the use of open data in games, now and in the future. Its goal is to examine the benefits, availability, and possibilities of open data. The results showed that in most cases all parties benefit from opening data. A wealth of different open data is available, in many different file formats and for many different purposes. Open data is useful in games because it can be used to create many kinds of content for them. Some successful experiments combining games and open data have already been carried out, so open data may become a very important part of the games industry in the future.
Abstract:
Abstract. WikiRate is a Collective Awareness Platform for Sustainability and Social Innovation (CAPS) project with the aim of "crowdsourcing better companies" through analysis of their Environmental, Social and Governance (ESG) performance. Research to inform the design of the platform involved surveying the current corporate ESG information landscape, and identifying ways in which an open approach and peer production ethos could be effectively mobilised to improve this landscape's fertility. The key requirement identified is for an open public repository of data tracking companies' ESG performance. Corporate Social Responsibility reporting is conducted in public, but there are barriers to accessing the information in a standardised analysable format. Analyses of and ratings built upon this data can exert power over companies' behaviour in certain circumstances, but the public at large have no access to the data or the most influential ratings that utilise it. WikiRate aims to build an open repository for this data along with tools for analysis, to increase public demand for the data, allow a broader range of stakeholders to participate in its interpretation, and in turn drive companies to behave in a more ethical manner. This paper describes the quantitative Metrics system that has been designed to meet those objectives and some early examples of its use.
Abstract:
The book organized by Kira Tarapanoff presents the theme of organizational and competitive intelligence in the context of Web 2.0, from four distinct perspectives: I. Web 2.0: new opportunities for the intelligence activity and Big Data; II. New informational architectures; III. Strategy development through Web 2.0; IV. Methodologies. The book brings together nine chapters written by fifteen Brazilian authors and one Finnish author, the latter's chapter translated into Portuguese, all aligned with the book's main theme.
Abstract:
The new generation of the Web, the Semantic Web, offers potential opportunities to endow Web content with meaning. Ontologies are one of the main tools for explicitly specifying the concepts of a particular domain, their properties, and their relations, so that information is published in formats that are automatically intelligible to machine agents, which can then locate and manage the information precisely. This paper presents a framework for a network of ontologies representing the concepts, attributes, operations, and constraints related to the curricular items used in the national categorization processes for Ecuadorian university teaching staff. The first part presents the context of the domain and related work; the paper then describes the process followed and the abstraction of the ontological model, and finally presents an ontology. It is a domain ontology in that it provides the meaning of the concepts and their relations within the domain of curricular items produced by university teaching staff, which are requirements of the university teacher categorization processes in Ecuador.
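As an illustration of the kind of modelling the abstract describes, the sketch below encodes two hypothetical curricular-item concepts with attribute constraints in plain Python. The class and attribute names are invented for the example and are not taken from the Ecuadorian ontology itself.

```python
# Minimal sketch (illustrative only) of domain concepts for curricular
# items, each with attribute constraints. All names are hypothetical.

class CurricularItem:
    """Root concept: any item produced by a university teacher."""
    required_attributes = {"title", "year", "author"}

    def __init__(self, **attrs):
        self.attrs = attrs

    def satisfies_constraints(self):
        # Constraint: every required attribute must be present.
        return self.required_attributes <= set(self.attrs)

class JournalArticle(CurricularItem):
    """Sub-concept: a journal article adds its own required attribute."""
    required_attributes = CurricularItem.required_attributes | {"journal"}

article = JournalArticle(title="Ontology networks", year=2016,
                         author="A. Researcher", journal="Example Journal")
print(article.satisfies_constraints())  # True: all required attributes set
```

A real ontology would express these constraints in OWL rather than Python, but the subclass-plus-restriction pattern is the same idea.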
Abstract:
Preserving the cultural heritage of the performing arts raises difficult and sensitive issues, as each performance is unique by nature and the juxtaposition between the performers and the audience cannot be easily recorded. In this paper, we report on an experimental research project to preserve another aspect of the performing arts—the history of their rehearsals. We have specifically designed non-intrusive video recording and on-site documentation techniques to make this process transparent to the creative crew, and have developed a complete workflow to publish the recorded video data and their corresponding meta-data online as Open Data using state-of-the-art audio and video processing to maximize non-linear navigation and hypervideo linking. The resulting open archive is made publicly available to researchers and amateurs alike and offers a unique account of the inner workings of the worlds of theater and opera.
MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that will be discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research work is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is First-Order Logic Constraint Specification Language (FOLCSL) that enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. 
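The invariant-checking idea can be sketched in a few lines: read an event trace and confirm that a temporal property holds throughout. This is only an illustration of the principle, not the FOLCSL tool itself; the event names and trace format are hypothetical.

```python
# Illustrative sketch of checking a temporal invariant over a simulator
# event trace: every "commit" of an instruction must be preceded by its
# "fetch". Event names and the trace format are invented for the example.

def check_invariant(trace):
    """Return True if no instruction commits before it was fetched."""
    fetched = set()
    for event, instr_id in trace:
        if event == "fetch":
            fetched.add(instr_id)
        elif event == "commit" and instr_id not in fetched:
            return False  # invariant violated: commit without prior fetch
    return True

good = [("fetch", 1), ("fetch", 2), ("commit", 1), ("commit", 2)]
bad = [("commit", 3), ("fetch", 3)]
print(check_invariant(good), check_invariant(bad))  # True False
```

In the framework described above, such checkers are synthesized automatically from first-order logic specifications rather than written by hand.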
This work improves the computer architecture research and verification processes as shown by the case studies and experiments that have been conducted.
Abstract:
This case study examines the impact of the migratory border-control measures implemented by Frontex and the Italian government on the minimum survival conditions of irregular migrants, economic migrants, and asylum seekers on the island of Lampedusa in the period 2011-2015. The border-control measures implemented by Frontex and the Italian government are identified; the human-security situation during the island's migration crisis is examined; and the relationship between the border-control measures and the migrants' minimum survival conditions is analysed. The research findings capture the negative consequences that these migratory measures have had on minimum survival conditions, which has led to a humanitarian crisis.
Abstract:
The Belt and Road Initiative (BRI) is a project launched by the Chinese Government whose main goal is to connect more than 65 countries in Asia, Europe, Africa and Oceania by developing infrastructure and facilities. To support the prevention or mitigation of landslide hazards, which may affect the mainland infrastructure of the BRI, a landslide susceptibility analysis of the countries involved has been carried out. Because of the large study area, the analysis uses a multi-scale approach, mapping susceptibility first at continental scale and then at national scale. The study area selected for the continental assessment is South Asia, where a pixel-based landslide susceptibility map has been produced using the Weight of Evidence method and validated with Receiver Operating Characteristic (ROC) curves. We then selected the regions of west Tajikistan and north-east India to be investigated at national scale. Data scarcity is a common condition for many countries involved in the Initiative. Therefore, in addition to the landslide susceptibility assessment of west Tajikistan, conducted using a Generalized Additive Model and validated with ROC curves, we have examined, in the same study area, the effect of an incomplete landslide dataset on the prediction capacity of statistical models. The entire PhD research activity has been conducted using only open data and open-source software. In this context, an open-source plugin for QGIS has been implemented to support the analyses of the last years. The SZ-tool allows the user to carry out susceptibility assessments from data preprocessing and susceptibility mapping through to the final classification. All output data of the analyses conducted are freely available and downloadable. This text describes the research activity of the last three years. Each chapter reports the text of an article published in an international scientific journal during the PhD.
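The Weight of Evidence method mentioned above can be illustrated with a small sketch that computes the positive and negative weights for a single binary evidence layer (e.g. presence of a lithology unit); the pixel counts below are invented for the example.

```python
import math

# Sketch of the Weight of Evidence contrast used in pixel-based
# susceptibility mapping: compare the proportion of landslide pixels
# inside vs outside an evidence class. Counts are made up for illustration.

def weights_of_evidence(n_slide_in, n_slide_out, n_stable_in, n_stable_out):
    """Return (W+, W-) for a binary evidence layer."""
    w_plus = math.log((n_slide_in / (n_slide_in + n_slide_out)) /
                      (n_stable_in / (n_stable_in + n_stable_out)))
    w_minus = math.log((n_slide_out / (n_slide_in + n_slide_out)) /
                       (n_stable_out / (n_stable_in + n_stable_out)))
    return w_plus, w_minus

w_plus, w_minus = weights_of_evidence(80, 20, 300, 700)
print(round(w_plus, 2), round(w_minus, 2))  # positive W+ -> class favours landslides
```

Summing the weights of all evidence layers per pixel yields the susceptibility score that is then classified and validated with ROC curves.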
Abstract:
While the Internet has given educators access to a steady supply of Open Educational Resources, the educational rubrics commonly shared on the Web are generally in the form of static, non-semantic presentational documents or in the proprietary data structures of commercial content and learning management systems. With the advent of Semantic Web Standards, producers of online resources have a new framework to support the open exchange of software-readable datasets. Despite these advances, the state of the art of digital representation of rubrics as sharable documents has not progressed. This paper proposes an ontological model for digital rubrics. This model is built upon the Semantic Web Standards of the World Wide Web Consortium (W3C), principally the Resource Description Framework (RDF) and the Web Ontology Language (OWL).
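To make the idea concrete, the sketch below represents a rubric as RDF-style triples and serializes them as Turtle. The vocabulary (`ex:Rubric`, `ex:hasCriterion`, etc.) is hypothetical, not the ontology proposed in the paper.

```python
# Illustrative sketch: a rubric as (subject, predicate, object) triples,
# serialized one statement per line in Turtle syntax. The ex: vocabulary
# is invented for the example.

triples = [
    ("ex:essayRubric", "rdf:type", "ex:Rubric"),
    ("ex:essayRubric", "ex:hasCriterion", "ex:clarity"),
    ("ex:clarity", "rdf:type", "ex:Criterion"),
    ("ex:clarity", "ex:maxPoints", '"5"^^xsd:integer'),
]

def to_turtle(triples):
    """Serialize (s, p, o) tuples as one-triple-per-line Turtle."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

print(to_turtle(triples))
```

Published this way, a rubric becomes a software-readable dataset rather than a static presentational document.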
Abstract:
Nowadays, when a user plans a tourist route, it is very difficult to find out which places are best to visit. The user has to choose according to his or her preferences from the great quantity of information available on the web, and must make that selection quickly because the time available for a trip is limited. In the Itiner@ project, we aim to combine Semantic Web technology with Geographic Information Systems in order to offer personalized tourist routes around a region based on user preferences and the time available. Using ontologies, it is possible to link, structure, and share data and to obtain results that match the user's preferences and current situation faster and more precisely than without them. To achieve these objectives we propose a web page combining a GIS server and a tourism ontology. As a step further, we also study how to extend this technology to mobile devices, given the rising interest in and technological progress of these devices and of location-based services, which allow users to carry all the route information with them on a tourist trip. We have designed a small application to apply the combination of GIS and Semantic Web on a mobile device.
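The personalization idea can be sketched as a simple filter over points of interest: keep those matching the user's preferred categories and fitting the available time. The POI data and category names are invented for the example; this is not the Itiner@ implementation.

```python
# Illustrative sketch of route personalization: select points of interest
# by preference category under a time budget. All data is made up.

pois = [
    {"name": "Cathedral", "category": "architecture", "visit_min": 60},
    {"name": "Modern Art Museum", "category": "art", "visit_min": 90},
    {"name": "Old Market", "category": "food", "visit_min": 45},
    {"name": "Roman Bridge", "category": "architecture", "visit_min": 30},
]

def plan_route(pois, preferred, time_budget_min):
    """Greedily select preferred POIs that fit within the time budget."""
    route, used = [], 0
    for poi in sorted(pois, key=lambda p: p["visit_min"]):
        if poi["category"] in preferred and used + poi["visit_min"] <= time_budget_min:
            route.append(poi["name"])
            used += poi["visit_min"]
    return route

print(plan_route(pois, {"architecture"}, 100))  # ['Roman Bridge', 'Cathedral']
```

In the project described above, the preference matching would be driven by the tourism ontology rather than flat category strings.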
Abstract:
The COntext INterchange (COIN) strategy is an approach to solving the problem of interoperability of semantically heterogeneous data sources through context mediation. COIN has used its own notation and syntax for representing ontologies. More recently, the OWL Web Ontology Language is becoming established as the W3C recommended ontology language. We propose the use of the COIN strategy to solve context disparity and ontology interoperability problems in the emerging Semantic Web – both at the ontology level and at the data level. In conjunction with this, we propose a version of the COIN ontology model that uses OWL and the emerging rules interchange language, RuleML.
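The context-mediation problem COIN addresses can be illustrated with a toy example: the same attribute reported under different scale-factor contexts, converted by a mediator into the receiver's context. The contexts and conversion rule below are illustrative only, not COIN notation or syntax.

```python
# Toy sketch of context mediation: "revenue" is reported under different
# scale contexts, and a mediator re-expresses values for the receiver.
# The context definitions are invented for the example.

contexts = {
    "src_A": {"scale": 1_000},         # source A reports in thousands
    "src_B": {"scale": 1},             # source B reports plain units
    "receiver": {"scale": 1_000_000},  # receiver expects millions
}

def mediate(value, source_ctx, receiver_ctx):
    """Convert a value from the source's scale to the receiver's scale."""
    base = value * contexts[source_ctx]["scale"]   # normalize to plain units
    return base / contexts[receiver_ctx]["scale"]  # re-express for receiver

print(mediate(2_500, "src_A", "receiver"))  # 2.5 (million)
```

In COIN these conversions are derived from declarative context axioms rather than hard-coded, which is what the proposed OWL/RuleML model would express.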
Abstract:
Presentation given as part of the EPrints/dotAC training event on 26 Mar 2010.
Abstract:
This talk will present an overview of the ongoing ERCIM project SMARTDOCS (SeMAntically-cReaTed DOCuments) which aims at automatically generating webpages from RDF data. It will particularly focus on the current issues and the investigated solutions in the different modules of the project, which are related to document planning, natural language generation and multimedia perspectives. The second part of the talk will be dedicated to the KODA annotation system, which is a knowledge-base-agnostic annotator designed to provide the RDF annotations required in the document generation process.
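The generation step described above, producing a webpage from RDF data, can be sketched minimally as rendering the triples of one subject into HTML. The predicate names and the template are hypothetical, not the SMARTDOCS pipeline.

```python
# Illustrative sketch of generating a webpage from RDF-style triples:
# collect the properties of one subject and render them as HTML.
# The data and predicate names are invented for the example.

triples = [
    ("ex:ada", "foaf:name", "Ada Lovelace"),
    ("ex:ada", "ex:occupation", "Mathematician"),
]

def render_page(subject, triples):
    """Render the properties of one subject as a minimal HTML page."""
    rows = [f"  <li>{p}: {o}</li>" for s, p, o in triples if s == subject]
    return "<html><body><ul>\n" + "\n".join(rows) + "\n</ul></body></html>"

print(render_page("ex:ada", triples))
```

The project's document-planning and natural language generation modules sit between the raw triples and the final page; this sketch shows only the last rendering step.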
Abstract:
RDFa JSON-LD Microdata