847 results for Web Semantico semantic open data geoSPARQL
Abstract:
Open Data, literally "open data", is the school of thought (and the related "movement") that answers the need for data that is legally "open", that is, freely reusable by anyone for any purpose. The goal of Open Data can be achieved by law, as in the USA, where information generated by the federal public sector is in the public domain, or by choice of the rights holders, through suitable licences. To motivate the need for data in open formats, we can use an analogy: Open Data is to Linked Data as the Internet is to the Web. Open Data, therefore, is the infrastructure (or "platform") that Linked Data needs in order to build its network of inferences across the data scattered around the Web. Linked Data, in other words, is by now a fairly mature technology with great potential, but it needs large masses of interlinked data to become concretely useful. This has in part already been achieved, and is steadily improving, thanks to projects such as DBpedia and Freebase. Alongside the contributions of online communities, another important piece (a very valuable kind of "bulk upload") could come from the availability of large bodies of public data, ideally already linked by the institutions themselves, or at least published in a structured form, helping Linked Data reach critical mass. Starting from this substrate, namely the actual availability of the data and their full legal reusability, Linked Data can offer a powerful representation of the data in terms of relations (links): in this sense Linked Data and Open Data converge and reach their full realisation in the Linked Open Data approach. The goal of this thesis is to study and present the foundations of how Linked Open Data works and the contexts in which it is used.
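To make the "linking" concrete, the following is a minimal sketch (Python, using the SPARQLWrapper package) of consuming Linked Open Data from DBpedia, one of the projects mentioned above; the endpoint and the dbo: terms are DBpedia's own, while the choice of query is purely illustrative.

```python
# Query the public DBpedia SPARQL endpoint for Italian cities and their
# populations, i.e. follow links inside the Linked Open Data cloud.
# Requires: pip install sparqlwrapper
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:country <http://dbpedia.org/resource/Italy> ;
              dbo:populationTotal ?population .
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])
```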
Abstract:
Non-profit associations play an increasingly important role in the lives of citizens and represent a significant productive sector of our country; yet it is often hard to find information about their events, their activities, or even their very existence. To meet citizens' needs, many Regions and Provinces publish registers collecting information about the organisations operating in their territory. These registers, however, often suffer from serious problems, both in the correctness of the data and in the formats used for publication. These factors led to the idea, and the need, of building a system to collect, organise and expose information on the non-profit associations present in the territory, so that the data can be freely used by anyone for different purposes. This work therefore has two main objectives. The first is to implement a tool that retrieves information on non-profit associations from their websites, using web crawling and web scraping techniques. The second is to publish the collected information according to models that allow free, unconstrained use; for structuring and publishing the data, a model based on linked open data principles was used.
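A hedged sketch of the two steps described above, scraping an association's website and republishing the result as linked open data; the URL, the CSS selector and the use of FOAF are illustrative assumptions, not the actual pipeline built in the thesis.

```python
# Scrape an association's name from its website, then model it as RDF
# ready for linked open data publication.
# Requires: pip install requests beautifulsoup4 rdflib
import requests
from bs4 import BeautifulSoup
from rdflib import Graph, Literal, Namespace, RDF, URIRef

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

def scrape_name(url: str) -> str:
    """Fetch a page and extract the organisation's name (placeholder selector)."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    node = soup.select_one("h1")  # placeholder: real sites need per-site rules
    return node.get_text(strip=True) if node else url

def to_rdf(name: str, homepage: str) -> Graph:
    """Describe the association as a foaf:Organization."""
    g = Graph()
    org = URIRef(homepage + "#org")
    g.add((org, RDF.type, FOAF.Organization))
    g.add((org, FOAF.name, Literal(name, lang="it")))
    g.add((org, FOAF.homepage, URIRef(homepage)))
    return g

url = "https://example.org/associazione"  # placeholder URL
print(to_rdf(scrape_name(url), url).serialize(format="turtle"))
```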
Abstract:
This thesis is a study of the tools and technologies that characterise the use of open data, in particular in the development of modern web applications that make use of this type of data.
Abstract:
The Linked Data initiative offers a straightforward method to publish structured data on the World Wide Web and link it to other data, resulting in a worldwide network of semantically codified data known as the Linked Open Data cloud. The size of the Linked Open Data cloud, i.e. the amount of data published using Linked Data principles, is growing exponentially and includes life sciences data. However, key information for biological research is still missing from the Linked Open Data cloud. For example, the relation between ortholog genes and genetic diseases is absent, even though such information can be used for hypothesis generation regarding human diseases. The OGOLOD system, an extension of the OGO Knowledge Base, publishes ortholog/disease information using Linked Data. This gives scientists the ability to query the structured information in connection with other Linked Data and to discover new information related to orthologs and human diseases in the cloud.
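A sketch of how a scientist might query ortholog/disease links once they are published as Linked Data. The endpoint URL and the ogo: vocabulary below are illustrative assumptions, not OGOLOD's documented terms.

```python
# Query a hypothetical ortholog/disease SPARQL endpoint and print the pairs.
# Requires: pip install sparqlwrapper
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/ogolod/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX ogo: <http://example.org/ogolod/ontology#>  # hypothetical vocabulary
    SELECT ?gene ?disease WHERE {
        ?gene ogo:hasOrtholog ?ortholog .
        ?ortholog ogo:associatedWith ?disease .
    }
    LIMIT 25
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["gene"]["value"], "->", row["disease"]["value"])
```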
Abstract:
The W3C Best Practices for Multilingual Linked Open Data community group was born one year ago during the last MLW workshop in Rome. It continues to lead the efforts of a large community towards a shared view of the issues that multilingualism raises on the Web of Data, and of their possible solutions. Despite our initial optimism, we found the task of identifying best practices for ML-LOD a difficult one, requiring a deep understanding of the Web of Data in its multilingual dimension and in its practical problems. In this talk we review the progress of the group so far, mainly in the identification and analysis of topics, use cases, and design patterns, as well as the future challenges.
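The core issue the group studies can be shown in a few lines of Python with rdflib: the same resource carries language-tagged labels, and an application must choose among them. The resource URI is DBpedia's; the locale-selection logic is only an illustration.

```python
# One resource, several language-tagged labels: the basic multilingual
# situation on the Web of Data. Requires: pip install rdflib
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDFS

g = Graph()
rome = URIRef("http://dbpedia.org/resource/Rome")
g.add((rome, RDFS.label, Literal("Rome", lang="en")))
g.add((rome, RDFS.label, Literal("Roma", lang="it")))
g.add((rome, RDFS.label, Literal("Rom", lang="de")))

# Picking the label that matches the user's locale is one of the practical
# problems a best practice has to settle.
preferred = "it"
for label in g.objects(rome, RDFS.label):
    if label.language == preferred:
        print(label)
```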
Abstract:
Because some Web users will be able to design a template to visualize information from scratch, while other users need to visualize information automatically by changing a few parameters, providing different levels of customization of the information is a desirable goal. Our system allows both the automatic generation of visualizations given the semantics of the data and static or pre-specified visualization through an interface language. We address information visualization with the Web in mind, where the presentation of retrieved information is a challenge. We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model integrated with HTML to create a powerful language that facilitates the construction of Web-based database reports. As opposed to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to connect a database to the Web easily. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically display database objects in a flat view, making it difficult for users to grasp the contents and structure of their results. Our model narrows the gap between databases and the Web. The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application, which increases development speed. In addition, using only a Web browser, end-users can retrieve data from remote databases and make the necessary modifications and manipulations of the data through Web-formatted forms and reports, independent of the platform, without having to open different applications or learn anything but their browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
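The schema-validated query generation can be sketched in a few lines; the table, the columns and the whitelist approach below are our illustrative assumptions, not the dissertation's actual interface language.

```python
# Build a SELECT from user-chosen parameters, validating every identifier
# against a known schema before execution (values go through placeholders).
import sqlite3

SCHEMA = {"employees": {"name", "department", "salary"}}  # known schema

def build_query(table, columns, where_col, where_val):
    """Generate a parameterised SELECT, rejecting identifiers not in the schema."""
    allowed = SCHEMA.get(table)
    if allowed is None or not set(columns) <= allowed or where_col not in allowed:
        raise ValueError("query does not match the database schema")
    # Identifiers are safe to interpolate here: they passed the whitelist above.
    sql = f"SELECT {', '.join(columns)} FROM {table} WHERE {where_col} = ?"
    return sql, (where_val,)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.execute("INSERT INTO employees VALUES ('Ada', 'R&D', 100.0)")

sql, params = build_query("employees", ["name", "salary"], "department", "R&D")
for row in conn.execute(sql, params):
    print(row)
```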
Abstract:
Acknowledgements: The research described here is supported by the award made by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub; award reference: EP/G066051/1.
Abstract:
Decision support systems have been used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reuse potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We first briefly comment on some research problems regarding the use of open data for decision making. Then, we outline a novel decision-making approach (based on how open data is actually used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es
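One crude way to approximate the "reuse potential" signal mentioned in the talk is to count how often a dataset or portal is mentioned on GitHub. The scoring below is our illustrative assumption, not the speaker's actual method, and unauthenticated requests to the GitHub search API are heavily rate-limited.

```python
# Count GitHub repositories whose metadata mentions a given open data portal,
# as a rough proxy for how much the data is being reused in the wild.
import requests

def github_mentions(term: str) -> int:
    """Return the number of repositories matching the search term."""
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": term},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

for portal in ("datos.ua.es", "data.gov.uk"):
    print(portal, github_mentions(portal))
```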
Abstract:
The Internet holds countless types of documents and is an influential source of information, yet Web content is designed for humans to interpret, not machines, and traditional search systems are imprecise when retrieving information. Government bodies use and publish documents on the Web for citizens and for their own organisational units, but they lack tools that support retrieving those documents; the Lattes Curriculum Platform managed by CNPq is one example. The Semantic Web aims to optimise document retrieval by giving documents meaning, so that both people and machines can understand what a piece of information means. The lack of semantics in our documents results in ineffective searches that return divergent and ambiguous information, and semantic annotation is the way to bring semantics to documents. The goal of this dissertation is to assemble a framework of Semantic Web concepts that makes it possible to annotate the Currículo Lattes automatically by means of open data sources (Linked Open Data), which store the meaning of terms and expressions. The research question is: which concepts associated with the Semantic Web can contribute to the automatic semantic annotation of the Currículo Lattes using Linked Open Data (LOD)? The systematic literature review presents concepts (manual, automatic, semi-automatic and intrusive annotation, among others), tools (entity extractors, among others) and technologies (RDF, RDFa, SPARQL, and so on) related to the topic. Applying these concepts led to the creation of the Sistema Lattes Web Semântico. The system imports the XML curriculum from the Lattes Platform, automatically annotates the available data using open data sources, and supports semantic queries. The system is validated by presenting annotated curricula and by running queries that use external data belonging to the LOD cloud. Finally, we present conclusions, difficulties encountered, and proposals for future work.
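A minimal sketch of the automatic annotation step against Linked Open Data, in the spirit of the framework described above: free text is sent to the public DBpedia Spotlight service, which returns DBpedia URIs for the entities it recognises. The Lattes-specific XML handling is omitted; this is a generic LOD annotator, not the dissertation's own pipeline.

```python
# Annotate free text with DBpedia entity URIs via the public Spotlight API.
# Requires: pip install requests
import requests

def annotate(text: str, confidence: float = 0.5):
    """Return (surface form, DBpedia URI) pairs recognised in the text."""
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    resources = resp.json().get("Resources", [])
    return [(r["@surfaceForm"], r["@URI"]) for r in resources]

print(annotate("Semantic Web research at the World Wide Web Consortium"))
```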
Abstract:
Part 14: Interoperability and Integration
Abstract:
Open Data is a useful resource that is steadily gaining importance in society. In this thesis we show its usefulness through the development of a mobile application that uses such data to provide information on the environmental state of air and pollen in Emilia-Romagna, drawing on the datasets provided by a well-known public body (Arpa Emilia Romagna). The mobile application is built on a web service that manages the various data flows and stores the data in a MongoDB database. The web service is in turn intended to be made available to developers, organisations and ordinary citizens for future studies and development in this area.
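A condensed sketch of the web service layer described above: open environmental records stored in MongoDB and exposed over HTTP for the mobile app and third parties. The collection and field names are illustrative, not Arpa Emilia Romagna's actual schema.

```python
# Serve stored air quality readings from MongoDB over a small HTTP API.
# Requires: pip install flask pymongo, plus a local MongoDB instance.
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["airquality"]

@app.route("/stations/<station_id>/readings")
def readings(station_id):
    """Return all stored readings for one monitoring station."""
    docs = db.readings.find({"station": station_id}, {"_id": 0})
    return jsonify(list(docs))

if __name__ == "__main__":
    app.run(port=5000)
```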
Abstract:
This special issue of the Journal of Urban Technology brings together five articles that are based on presentations given at the Street Computing workshop held on 24 November 2009 in Melbourne in conjunction with the Australian Computer-Human Interaction conference (OZCHI 2009). Our own article introduces the Street Computing vision and explores the potential, challenges and foundations of this research vision. In order to do so, we first look at the currently available sources of information and discuss their link to existing research efforts. Section 2 then introduces the notion of Street Computing and our research approach in more detail. Section 3 looks beyond the core concept itself and summarises related work in this field of interest.
Abstract:
Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information has been released to the public, usually for specific purposes (e.g. train timetables or the release of planning applications through websites, to name just a few). This situation is, however, changing rapidly. Regulatory frameworks, such as the Freedom of Information legislation in the US, the UK, the European Union and many other countries, guarantee public access to data held by the state. One of the results of this legislation, and of changing attitudes towards open data, has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (U.S.), data.gov.uk (U.K.) and data.gov.au (Australia) at federal government level, and datasf.org (San Francisco) and data.london.gov.uk (London) at municipal level. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential achievable by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the pure release of data. It is concerned with three essential questions: First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what interfaces make it easy for citizens to interact with data in an urban environment, and how can data be accessed and collected?
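The cross-querying that an open API makes possible can be sketched against catalogues that expose the CKAN action API; whether a given portal still serves CKAN at these paths is an assumption to verify, and the keyword is arbitrary.

```python
# Issue the same keyword search against two public data catalogues through
# the CKAN package_search action and compare the number of matching datasets.
import requests

CATALOGUES = {
    "data.gov.uk": "https://data.gov.uk/api/3/action/package_search",
    "data.gov": "https://catalog.data.gov/api/3/action/package_search",
}

def search(endpoint: str, query: str) -> int:
    """Return the number of datasets matching the query in one catalogue."""
    resp = requests.get(endpoint, params={"q": query}, timeout=10)
    resp.raise_for_status()
    return resp.json()["result"]["count"]

for name, endpoint in CATALOGUES.items():
    print(name, search(endpoint, "public transport"))
```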
Abstract:
This special issue of the Journal of Urban Technology brings together five articles that are based on presentations given at the Street Computing Workshop held on 24 November 2009 in Melbourne in conjunction with the Australian Computer- Human Interaction conference (OZCHI 2009). Our own article introduces the Street Computing vision and explores the potential, challenges, and foundations of this research trajectory. In order to do so, we first look at the currently available sources of information and discuss their link to existing research efforts. Section 2 then introduces the notion of Street Computing and our research approach in more detail. Section 3 looks beyond the core concept itself and summarizes related work in this field of interest. We conclude by introducing the papers that have been contributed to this special issue.
Abstract:
As support grows for greater access to information and data held by governments, so does awareness of the need for appropriate policy, technical and legal frameworks to achieve the desired economic and societal outcomes. Since the late 2000s, numerous international organizations, inter-governmental bodies and governments have issued open government data policies, which set out key principles underpinning access to, and the release and reuse of, data. These policies reiterate the value of government data and establish the default position that it should be openly accessible to the public under transparent and non-discriminatory conditions which are conducive to innovative reuse of the data. A key principle stated in open government data policies is that legal rights in government information must be exercised in a manner that is consistent with, and supports, the open accessibility and reusability of the data. In particular, where government information and data are protected by copyright, access should be provided under licensing terms which clearly permit their reuse and dissemination. This principle has been further developed in the policies issued by Australian Governments into a specific requirement that Government agencies are to apply the Creative Commons Attribution licence (CC BY) as the default licensing position when releasing government information and data. A wide-ranging survey of the practices of Australian Government agencies in managing their information and data, commissioned by the Office of the Australian Information Commissioner in 2012, provides valuable insights into progress towards the achievement of open government policy objectives and the adoption of open licensing practices. The survey results indicate that Australian Government agencies are embracing open access and a proactive disclosure culture, and that open licensing under Creative Commons licences is increasingly prevalent. However, the finding that '[t]he default position of open access licensing is not clearly or robustly stated, nor properly reflected in the practice of Government agencies' points to the need to further develop the policy framework and the principles governing information access and reuse, and to provide practical guidance tools on open licensing, if the broadest range of government information and data is to be made available for innovative reuse.