954 results for linked open data
Abstract:
The aim of this work is to analyse and design a system able to support the definition of data in the format used to formally define data semantics, and above all to support the complex and innovative activity of link discovery. Using the tools and rules of the Semantic Web (also called the Web of Data), this powerful activity takes a source knowledge base and other external knowledge bases distributed across the Web, and interconnects the data of the source with the external data by means of interlinking algorithms. These algorithms connect the concepts expressed in the source and external datasets, expressing the semantics of each link according to the comparison criteria the algorithm defines. This activity can therefore considerably enlarge the knowledge held in the source knowledge base; if every knowledge base in the Web of Data followed this procedure, the knowledge so defined would grow to levels limited only by the immense vastness of the Web, yielding unequalled data-processing power. With this system we have the ambitious goal of providing a tool that significantly increases the presence of Linked Open Data, primarily at the national level but also internationally, in support of public and private bodies, which through this system can open up new business scenarios and new uses of their data, giving the data a power that today can only be imagined.
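To make the interlinking idea concrete, here is a minimal sketch, assuming Apache Jena on the classpath; the comparison criterion is a naive normalized-label match, whereas real link-discovery engines (e.g. Silk, LIMES) support much richer criteria:

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.OWL;
import org.apache.jena.vocabulary.RDFS;

// Naive link discovery: emit an owl:sameAs link whenever a source resource
// and an external resource carry the same normalized rdfs:label.
public class LabelInterlinker {

    static String normalize(String label) {
        return label.toLowerCase().replaceAll("[^\\p{L}\\p{N}]+", " ").trim();
    }

    public static Model discoverLinks(Model source, Model external) {
        Model links = ModelFactory.createDefaultModel();
        StmtIterator it = source.listStatements(null, RDFS.label, (RDFNode) null);
        while (it.hasNext()) {
            Statement s = it.next();
            if (!s.getObject().isLiteral()) continue;
            String key = normalize(s.getString());
            StmtIterator ext = external.listStatements(null, RDFS.label, (RDFNode) null);
            while (ext.hasNext()) {
                Statement e = ext.next();
                if (e.getObject().isLiteral() && key.equals(normalize(e.getString()))) {
                    links.add(s.getSubject(), OWL.sameAs, e.getSubject());
                }
            }
        }
        return links;
    }
}
```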
Abstract:
Non-profit associations play an increasingly relevant role in citizens' lives and represent an important productive sector of our country; yet it is often difficult to find information about their events, their activities, or even their very existence. To meet citizens' needs, many Regions and Provinces provide directories collecting information about the organizations operating in their territory. These directories, however, often suffer from serious problems, both in the correctness of the data and in the formats used for publication. These factors led to the idea of, and the need for, a system that collects, organizes, and makes available information on the non-profit associations in the territory, so that the data can be freely used by anyone for different purposes. This work therefore has two main goals: the first is the implementation of a tool able to retrieve information about non-profit associations from their websites, by means of Web crawling and Web scraping techniques. The second is to publish the collected information according to models that allow its free, unrestricted use; the data were structured and published following a model based on linked open data principles.
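As an illustration of the scraping step, a minimal sketch with the jsoup library; the URL and CSS selectors are hypothetical, since the abstract does not specify the tool used or the page layouts involved:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Fetch a (hypothetical) listing page and pull out name and address
// for each association entry; real crawling would first discover the
// sites and adapt the selectors to each layout.
public class AssociationScraper {
    public static void main(String[] args) throws Exception {
        Document page = Jsoup.connect("https://example.org/associazioni").get();
        for (Element row : page.select("div.association")) {  // assumed markup
            String name = row.select("h2").text();
            String address = row.select("span.address").text();
            System.out.println(name + " | " + address);
        }
    }
}
```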
Abstract:
A study and analysis of the main techniques of Social Data Analysis, together with the design and implementation of a software solution written in Java in the Eclipse environment. The software integrates different REST API services to extract social data from Twitter, store it in a non-relational database (built with MongoDB), and manage it. It also performs topic classification and computes aggregate analyses over the collections of extracted data. Finally, starting from individually selected tweets, it can display a tree of "reshares" and a geo-localized map of the users involved in the reshare chain, with the corresponding "retweet" edges.
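A minimal sketch of the extraction-and-storage pipeline, assuming the Twitter4J library (with credentials in twitter4j.properties) and the MongoDB Java driver; the abstract does not name the exact REST clients it integrates:

```java
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import twitter4j.Query;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;

// Search tweets for a keyword and persist them, keeping the retweet
// origin so a reshare tree can be reconstructed later.
public class TweetCollector {
    public static void main(String[] args) throws Exception {
        Twitter twitter = TwitterFactory.getSingleton();
        MongoCollection<Document> tweets = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("socialdata").getCollection("tweets");  // assumed names

        for (Status status : twitter.search(new Query("linked open data")).getTweets()) {
            Document d = new Document("id", status.getId())
                    .append("user", status.getUser().getScreenName())
                    .append("text", status.getText());
            if (status.isRetweet()) {
                d.append("retweetOf", status.getRetweetedStatus().getId());
            }
            tweets.insertOne(d);
        }
    }
}
```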
Abstract:
This publication describes CALID, a general-purpose, highly configurable program for the disambiguation of IRIs and literals in Linked Open Data, and therefore applicable in many contexts. CALID stands for "Customizable Application for Literal and IRI's Disambiguation". It was created to disambiguate the authors of scientific publications; this article describes its design, the way it is used, and the performance and precision values obtained by testing it on several datasets.
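CALID's actual algorithm is not detailed here; the following toy sketch only illustrates the general idea of scoring candidate IRIs against a name literal, using token-set (Jaccard) similarity:

```java
import java.util.*;

// Pick, among candidate IRIs with their labels, the one whose label
// shares the most tokens with the ambiguous name literal.
public class NameDisambiguator {
    static Set<String> tokens(String s) {
        return new HashSet<>(Arrays.asList(s.toLowerCase().split("\\W+")));
    }

    public static String bestCandidate(String name, Map<String, String> candidateLabels) {
        Set<String> target = tokens(name);
        String best = null;
        double bestScore = 0.0;
        for (Map.Entry<String, String> c : candidateLabels.entrySet()) {
            Set<String> inter = new HashSet<>(target);
            inter.retainAll(tokens(c.getValue()));
            Set<String> union = new HashSet<>(target);
            union.addAll(tokens(c.getValue()));
            double score = union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
            if (score > bestScore) { bestScore = score; best = c.getKey(); }
        }
        return best;
    }
}
```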
Abstract:
As the number of data sources publishing their data on the Web of Data grows, the Linked Open Data cloud is experiencing immense growth. The lack of control over the published sources, which may be untrustworthy or unreliable, along with their dynamic nature, which often invalidates links and causes conflicts or other discrepancies, can lead to poor-quality data. To judge data quality, a number of quality indicators have been proposed, coupled with quality metrics that quantify the “quality level” of a dataset. In addition, some approaches address how to improve dataset quality through a repair process that corrects invalidities caused by constraint violations by either removing or adding triples. In this paper we argue that provenance is a critical factor that should be taken into account during repairs, to ensure that the most reliable data is kept. Based on this idea, we propose quality metrics that take provenance into account and evaluate their applicability as repair guidelines in a particular data fusion setting.
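As a sketch of the core idea (not the paper's actual metrics), a repair step can rank the triples involved in a constraint violation by the trust assigned to their provenance and remove the least trusted one:

```java
import java.util.*;

// Provenance-aware repair step: among triples that jointly violate a
// constraint, select for removal the one from the least trusted source.
// Per-source trust scores are assumed to be supplied externally.
public class ProvenanceRepair {
    record Triple(String s, String p, String o, String source) {}

    public static Triple selectForRemoval(List<Triple> conflicting,
                                          Map<String, Double> sourceTrust) {
        return conflicting.stream()
                .min(Comparator.comparingDouble(
                        t -> sourceTrust.getOrDefault(t.source(), 0.0)))
                .orElseThrow();
    }
}
```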
Abstract:
The Linked Data initiative offers a straightforward method to publish structured data on the World Wide Web and link it to other data, resulting in a worldwide network of semantically codified data known as the Linked Open Data cloud. The size of the Linked Open Data cloud, i.e. the amount of data published using Linked Data principles, is growing exponentially, including life-sciences data. However, key information for biological research is still missing from the Linked Open Data cloud. For example, the relation between orthologous genes and genetic diseases is absent, even though such information can be used for hypothesis generation regarding human diseases. The OGOLOD system, an extension of the OGO Knowledge Base, publishes ortholog/disease information using Linked Data. This gives scientists the ability to query the structured information in connection with other Linked Data and to discover new information related to orthologs and human diseases in the cloud.
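A sketch of how such a dataset can be consumed with Apache Jena; the endpoint URL and property IRIs are placeholders, not the real OGOLOD ones:

```java
import org.apache.jena.query.*;

// Query a (hypothetical) ortholog/disease SPARQL endpoint and print
// each gene together with an associated disease.
public class OrthologQuery {
    public static void main(String[] args) {
        String q = """
            SELECT ?gene ?disease WHERE {
              ?gene     <http://example.org/ogolod/hasOrtholog>    ?ortholog .
              ?ortholog <http://example.org/ogolod/associatedWith> ?disease .
            } LIMIT 10
            """;
        try (QueryExecution qe = QueryExecutionFactory
                .sparqlService("http://example.org/ogolod/sparql", q)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                QuerySolution sol = rs.next();
                System.out.println(sol.get("gene") + " -> " + sol.get("disease"));
            }
        }
    }
}
```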
Abstract:
The Web has undergone a drastic transformation in recent years, owing mainly to its popularization and to the enormous amount of information it holds. These factors have driven the leap from the so-called Web of Documents to the Semantic Web, where every piece of information is related to others. The main advantages of linked information lie in its ease of reuse, its accessibility, and its availability to be found by users. This work aims to demonstrate the usefulness of linked data applied to the geographic domain and to show how it can be used today. To that end, spatial linked data from different sources has been exploited through external servers, or SPARQL endpoints. In addition, a private server was used, capable of serving linked data stored on a personal computer. The exploitation of linked data was implemented in a web application written in JavaScript, fully abstracting the user from the application's internal data handling. The application also includes modules and options that interact with the queries sent to the servers, providing a more intuitive and pleasant environment for the user.
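The thesis consumes endpoints from client-side JavaScript; the same kind of query can be sketched in Java with Apache Jena, here against the public DBpedia endpoint for place coordinates:

```java
import org.apache.jena.query.*;

// Retrieve geographic linked data (cities with WGS84 coordinates)
// from the public DBpedia SPARQL endpoint.
public class GeoLinkedData {
    public static void main(String[] args) {
        String q = """
            PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
            PREFIX dbo: <http://dbpedia.org/ontology/>
            SELECT ?place ?lat ?long WHERE {
              ?place a dbo:City ; geo:lat ?lat ; geo:long ?long .
            } LIMIT 10
            """;
        try (QueryExecution qe = QueryExecutionFactory
                .sparqlService("https://dbpedia.org/sparql", q)) {
            qe.execSelect().forEachRemaining(sol ->
                System.out.println(sol.get("place") + " @ "
                        + sol.get("lat") + "," + sol.get("long")));
        }
    }
}
```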
Abstract:
The W3C Best Practices for Multilingual Linked Open Data community group was born one year ago, during the last MLW workshop in Rome. It continues to lead the effort of a large community towards a shared view of the issues caused by multilingualism on the Web of Data and of their possible solutions. Despite our initial optimism, we found identifying best practices for ML-LOD to be a difficult task, requiring a deep understanding of the Web of Data in its multilingual dimension and in its practical problems. In this talk we review the group's progress so far, mainly in the identification and analysis of topics, use cases, and design patterns, as well as the challenges ahead.
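One commonly cited multilingual-LOD practice (shown here as an assumed illustration with Apache Jena, not as one of the group's official recommendations) is to publish one language-tagged label per language instead of a single untagged literal:

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDFS;

// Attach language-tagged rdfs:labels so consumers can select
// the label in the language they need.
public class MultilingualLabels {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        Resource city = m.createResource("http://example.org/resource/Rome");
        city.addProperty(RDFS.label, m.createLiteral("Rome", "en"));
        city.addProperty(RDFS.label, m.createLiteral("Roma", "it"));
        m.write(System.out, "TURTLE");
    }
}
```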
Abstract:
This thesis introduces Investiga, an application created for the thesis that automatically extracts information from scientific articles in PDF format and publishes it according to Linked Open Data principles and formats. The application is based on Task 2 of SemPub 2016, a challenge whose main goal is to improve information extraction from scientific articles in PDF. Investiga extracts a given article's top-level sections and its figure and table captions, and builds a graph in which the extracted pieces of information are appropriately linked to one another. The thesis also reviews existing tools for automatic information extraction from PDF documents and their limitations.
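A minimal sketch of the publication step with Apache Jena; the vocabulary IRIs are illustrative, not the ones Investiga actually uses:

```java
import org.apache.jena.rdf.model.*;

// Represent extracted section titles and captions as an RDF graph
// linking them back to the article they came from.
public class PaperGraph {
    public static void main(String[] args) {
        String ns = "http://example.org/paper/";  // assumed namespace
        Model m = ModelFactory.createDefaultModel();
        Resource paper = m.createResource(ns + "article1");
        paper.addProperty(m.createProperty(ns, "hasSection"),
                m.createResource(ns + "sec1")
                 .addProperty(m.createProperty(ns, "title"), "Introduction"));
        paper.addProperty(m.createProperty(ns, "hasFigureCaption"),
                "Figure 1: System architecture");
        m.write(System.out, "TURTLE");
    }
}
```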
Abstract:
The Internet holds countless types of documents and is an influential source of information. Web content is designed to be interpreted by humans, not by machines, and traditional search engines are imprecise at information retrieval. Government bodies use and publish documents on the Web so that citizens and their own organizational units can use them, yet they lack tools to support the retrieval of those documents; one example is the Lattes Curriculum Platform managed by CNPq. The Semantic Web aims to improve document retrieval by giving documents meaning, so that both people and machines can understand what a piece of information means. The lack of semantics in our documents results in ineffective searches, with divergent and ambiguous information; semantic annotation is the way to bring semantics to documents. The goal of this dissertation is to assemble a framework of Semantic Web concepts that makes it possible to automatically annotate the Lattes Curriculum using open databases (Linked Open Data), which store the meaning of terms and expressions. The research problem is to determine which concepts associated with the Semantic Web can contribute to the automatic semantic annotation of the Lattes Curriculum using Linked Open Data (LOD). The systematic literature review presents concepts (manual, automatic, and semi-automatic annotation, intrusive annotation, etc.), tools (entity extractors, etc.) and technologies (RDF, RDFa, SPARQL, etc.) related to the topic. Applying these concepts led to the creation of the Semantic Web Lattes System. The system imports the XML curriculum from the Lattes Platform, automatically annotates the available data using the open databases, and supports semantic queries. The system is validated by presenting annotated curricula and by running queries over external data belonging to the LOD cloud. Finally, the dissertation presents conclusions, the difficulties encountered, and proposals for future work.
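A sketch of the automatic-annotation idea: look a term up in a LOD source (DBpedia here, as an example) and keep the matching IRI as the term's annotation; the real system's term extraction from the Lattes XML is not shown:

```java
import org.apache.jena.query.*;

// Resolve a term to a LOD IRI by matching its language-tagged label
// against DBpedia; returns null when no resource matches.
public class TermAnnotator {
    public static String annotate(String term, String lang) {
        ParameterizedSparqlString q = new ParameterizedSparqlString(
                "SELECT ?r WHERE { ?r rdfs:label ?l } LIMIT 1");
        q.setNsPrefix("rdfs", "http://www.w3.org/2000/01/rdf-schema#");
        q.setLiteral("l", term, lang);
        try (QueryExecution qe = QueryExecutionFactory
                .sparqlService("https://dbpedia.org/sparql", q.asQuery())) {
            ResultSet rs = qe.execSelect();
            return rs.hasNext() ? rs.next().getResource("r").getURI() : null;
        }
    }

    public static void main(String[] args) {
        System.out.println(annotate("Semantic Web", "en"));
    }
}
```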
Abstract:
POSTDATA is a five-year European Research Council (ERC) Starting Grant project that started in May 2016 and is hosted by the Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain. The context of the project is the corpora of European Poetry (EP), with a special focus on poetic materials from different languages and literary traditions. POSTDATA aims to offer a standardized model in the philological field and a metadata application profile (MAP) for EP, in order to build a common classification of all these poetic materials. The information of the Spanish, Italian and French repertoires will be published in the Linked Open Data (LOD) ecosystem; later we expect to extend the model to include additional corpora. There are a number of Web Based Information Systems (WIS) in Europe with repertoires of poems available for human consumption but not in a condition to be accessible and reusable on the Semantic Web. These systems are not interoperable; they are in fact locked in their databases and proprietary software, not suitable for linking in the Semantic Web. A way to make these data interoperable is to develop a MAP so that they can be published in the LOD ecosystem, together with new data created and modeled on that MAP. Creating a common data model for EP is not simple, since the existing data models are based on conceptualizations and terminology belonging to their own poetic traditions, and each tradition has developed an idiosyncratic analytical terminology in a different and independent way over the years. The result of this uncoordinated evolution is a set of varied terminologies that explain analogous metrical phenomena across the different poetic systems, whose correspondences have hardly been studied – see examples in González-Blanco & Rodríguez (2014a and b). This work has to be done by domain experts before the modeling actually starts. The development of a MAP is itself a complex task, and it is imperative to follow a method for it. In recent years, Curado Malta & Baptista (2012, 2013a, 2013b) have studied the development of MAPs within a Design Science Research (DSR) methodological process in order to define a method for developing them (see Curado Malta (2014)). The output of this DSR process was a first version of a method for the development of Metadata Application Profiles (Me4MAP) (paper to be published). The DSR process is now in the validation phase of the Relevance Cycle; the development of this MAP for poetry will follow the guidelines of Me4MAP and will be used to validate Me4MAP. The final goal of the POSTDATA project is: i) to publish all the data locked in the WIS as LOD, where any interested agent will be able to build applications over the data in order to serve final users; ii) to build a Web platform where a) researchers, students and other final users interested in EP will be able to access poems (and their analyses) from all the databases, and b) researchers, students and other final users will be able to upload poems and digitized images of manuscripts, and fill in the information concerning the analysis of each poem, collaboratively contributing to a LOD dataset of poetry.
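Purely as an illustration of what publishing poem metadata as LOD can look like, here is a minimal example with generic Dublin Core terms and invented sample data, not the actual POSTDATA MAP:

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.DCTerms;

// Describe a poem with generic Dublin Core properties; a real MAP for
// European Poetry would add domain-specific metrical vocabulary.
public class PoemRecord {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        m.createResource("http://example.org/poem/1")  // hypothetical IRI
         .addProperty(DCTerms.title, "Ozymandias")
         .addProperty(DCTerms.creator, "Percy Bysshe Shelley")
         .addProperty(DCTerms.language, "en");
        m.write(System.out, "TURTLE");
    }
}
```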
Abstract:
Open data refers to publishing data on the web in machine-readable formats for public access. Using open data, innovative applications can be developed to facilitate people's lives. In this thesis, based on the open data cases discussed in the literature review, Open Data Lappeenranta is proposed, which publishes open data about the opening hours of shops and stores in the city of Lappeenranta. To prove the feasibility of Open Data Lappeenranta, the thesis presents the implementation of an open data system that publishes specific data about shops and stores (including their opening hours) on the web in a standard format (JSON). The published open data is used to develop web and mobile applications that demonstrate the benefits of open data in practice. The open data system also provides manual and automatic interfaces that make it possible for shops and stores to maintain their own data in the system. Finally, the thesis proposes the completed version of Open Data Lappeenranta, which publishes open data for other fields and businesses in Lappeenranta beyond stores' data alone.
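A sketch of the published JSON format, with assumed field names (the abstract does not spell out the exact schema), serialized here with the Jackson library:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

// Build one store record with its opening hours and emit it as JSON,
// the machine-readable format the system publishes.
public class OpeningHoursJson {
    public static void main(String[] args) throws Exception {
        Map<String, Object> store = Map.of(
            "name", "Example Store",              // hypothetical data
            "city", "Lappeenranta",
            "openingHours", List.of(
                Map.of("day", "Mon-Fri", "open", "09:00", "close", "18:00"),
                Map.of("day", "Sat",     "open", "10:00", "close", "15:00")));
        System.out.println(new ObjectMapper()
                .writerWithDefaultPrettyPrinter().writeValueAsString(store));
    }
}
```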
Abstract:
This thesis presents an overview of the Open Data research area and its quantity of evidence, and establishes the research evidence through a Systematic Mapping Study (SMS). A total of 621 publications published between 2005 and 2014 were identified, of which 243 were selected in the review process. The thesis highlights the implications of the proliferation of Open Data principles in the emerging era of accessibility, reusability, and sustainability of data transparency. The findings of the mapping study are described through quantitative and qualitative measures based on organizational affiliation, country, year of publication, research method, star rating, and the units of analysis identified. Furthermore, the units of analysis were categorized into development lifecycle, linked open data, type of data, technical platforms, organizations, ontology and semantics, adoption and awareness, intermediaries, security and privacy, and supply of data, all of which are important components of quality open data applications and services. The results of the mapping study help organizations (such as academia, government and industry), researchers, and software developers to understand the existing trends in open data, the latest research developments, and the demand for future research. In addition, the proposed conceptual framework of Open Data research can be adopted and expanded to strengthen and improve current open data applications.
Abstract:
BACKGROUND The population-based effectiveness of thoracic endovascular aortic repair (TEVAR) versus open surgery for descending thoracic aortic aneurysm remains in doubt. METHODS Patients aged over 50 years, without a history of aortic dissection, undergoing repair of a thoracic aortic aneurysm between 2006 and 2011 were assessed using mortality-linked individual patient data from Hospital Episode Statistics (England). The principal outcomes were 30-day operative mortality, long-term survival (5 years) and aortic-related reinterventions. TEVAR and open repair were compared using crude and multivariable models that adjusted for age and sex. RESULTS Overall, 759 patients underwent thoracic aortic aneurysm repair, mainly for intact aneurysms (618, 81·4 per cent). Median ages of TEVAR and open cohorts were 73 and 71 years respectively (P < 0·001), with more men undergoing TEVAR (P = 0·004). For intact aneurysms, the operative mortality rate was similar for TEVAR and open repair (6·5 versus 7·6 per cent; odds ratio 0·79, 95 per cent confidence interval (c.i.) 0·41 to 1·49), but the 5-year survival rate was significantly worse after TEVAR (54·2 versus 65·6 per cent; adjusted hazard ratio 1·45, 95 per cent c.i. 1·08 to 1·94). After 5 years, aortic-related mortality was similar in the two groups, but cardiopulmonary mortality was higher after TEVAR. TEVAR was associated with more aortic-related reinterventions (23·1 versus 14·3 per cent; adjusted HR 1·70, 95 per cent c.i. 1·11 to 2·60). There were 141 procedures for ruptured thoracic aneurysm (97 TEVAR, 44 open), with TEVAR showing no significant advantage in terms of operative mortality. CONCLUSION In England, operative mortality for degenerative descending thoracic aneurysm was similar after either TEVAR or open repair. Patients who had TEVAR appeared to have a higher reintervention rate and worse long-term survival, possibly owing to cardiopulmonary morbidity and other selection bias.