889 results for Open Data, Dati Aperti, Open Government Data


Relevance:

80.00%

Publisher:

Abstract:

The thesis introduces Investiga, an application created for the thesis that automatically extracts information from scientific articles in PDF format and publishes it according to Linked Open Data principles and formats. The application is based on Task 2 of SemPub 2016, a challenge whose main goal is to improve the extraction of information from scientific articles in PDF format. Investiga extracts the first-level sections and the figure and table captions from a given article and builds a graph of the extracted information, appropriately linked together. The thesis also analyses existing tools for automatic information extraction from PDF documents and their limitations.
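
As a rough illustration of the kind of pipeline described (and not the actual Investiga implementation), the following Python sketch extracts text from a PDF with pdfminer.six and records heuristically detected section headings and figure captions as an RDF graph with rdflib; the namespace, property names and heuristics are invented for the example.

```python
# Illustrative sketch only: extract text from a PDF and record section
# headings and captions as an RDF graph. The vocabulary below is
# hypothetical, not the one used by Investiga or SemPub 2016.
from pdfminer.high_level import extract_text
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/paper/")  # hypothetical namespace

def pdf_to_graph(pdf_path: str, paper_uri: str) -> Graph:
    text = extract_text(pdf_path)            # raw text of the article
    g = Graph()
    paper = URIRef(paper_uri)
    g.add((paper, RDF.type, EX.ScholarlyArticle))
    for i, line in enumerate(text.splitlines()):
        line = line.strip()
        # naive heuristics standing in for real PDF layout analysis
        if line.lower().startswith(("figure", "fig.", "table")):
            cap = URIRef(f"{paper_uri}/caption/{i}")
            g.add((cap, RDF.type, EX.Caption))
            g.add((cap, RDFS.label, Literal(line)))
            g.add((paper, EX.hasCaption, cap))
        elif line.isupper() and 0 < len(line.split()) < 8:
            sec = URIRef(f"{paper_uri}/section/{i}")
            g.add((sec, RDF.type, EX.TopLevelSection))
            g.add((sec, RDFS.label, Literal(line.title())))
            g.add((paper, EX.hasSection, sec))
    return g

if __name__ == "__main__":
    graph = pdf_to_graph("article.pdf", "http://example.org/paper/1")
    print(graph.serialize(format="turtle"))
```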

Relevance:

80.00%

Publisher:

Abstract:

Permanent water bodies not only store dissolved CO2 but are essential for the maintenance of wetlands in their proximity. From the viewpoint of greenhouse gas (GHG) accounting, wetland functions comprise the sequestration of carbon under anaerobic conditions and the release of methane. The investigated area in central Siberia covers boreal and sub-arctic environments. Small inundated basins are abundant on the sub-arctic Taymir lowlands, but also in parts of the severe boreal climate zone where permafrost ice content is high, and they constitute important freshwater ecosystems. Satellite radar imagery (ENVISAT ScanSAR), acquired in summer 2003 and 2004, has been used to derive open water surfaces at 150 m resolution, covering an area of approximately 3 million km². The open water surface maps were derived using a simple threshold-based classification method. The results were assessed against Russian forest inventory data, which include detailed information about water bodies. The resulting classification has been further used to estimate the extent of tundra wetlands and to determine their importance for methane emissions. Tundra wetlands cover 7% (400,000 km²) of the study region, and methane emissions from hydromorphic soils are estimated to be 45,000 t/d for the Taymir peninsula.
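
The classification step itself is conceptually simple: pixels whose radar backscatter falls below a chosen threshold are labelled as open water. The NumPy sketch below illustrates that idea; the -14 dB threshold and the toy tile are placeholders, not values from the study.

```python
# Minimal sketch of threshold-based open-water classification on a SAR
# backscatter image (values in dB). The -14 dB threshold is an arbitrary
# placeholder, not the value used in the study.
import numpy as np

def classify_open_water(backscatter_db: np.ndarray, threshold_db: float = -14.0) -> np.ndarray:
    """Return a boolean mask: True where smooth open water is assumed
    (low backscatter), False elsewhere."""
    return backscatter_db < threshold_db

# toy example: a 3x3 tile of backscatter values
tile = np.array([[-18.2, -7.5, -6.9],
                 [-16.4, -15.8, -8.1],
                 [-5.3, -17.0, -9.6]])
mask = classify_open_water(tile)
print(mask)                          # True marks open-water pixels
print("water fraction:", mask.mean())
```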

Relevance:

80.00%

Publisher:

Abstract:

Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, not only by considering technical or legal constraints, but also other requirements, such as the "reusing potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will give an outline of a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication.

Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es
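
Purely as an illustration (not the approach presented in the talk), one crude way to probe how an open data portal or dataset is being used in open-source projects is to count public GitHub repositories that mention it; the portal name used here is just an example query.

```python
# Illustrative only: count public GitHub repositories whose metadata
# mentions a given open-data portal or dataset name. This is a crude
# proxy for reuse, not the method described in the talk.
import requests

def github_repo_mentions(term: str) -> int:
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": term},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()              # unauthenticated calls are rate-limited
    return resp.json()["total_count"]

if __name__ == "__main__":
    print(github_repo_mentions("datos.ua.es"))
```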

Relevance:

80.00%

Publisher:

Abstract:

The Internet contains countless types of documents and is an influential source of information. Web content is designed to be interpreted by humans, not by machines, and traditional search systems are imprecise when retrieving information. Government uses and publishes documents on the Web so that citizens and its own organisational units can use them, yet it lacks tools that support the retrieval of these documents; one example is the Lattes Curriculum Platform administered by CNPq. The Semantic Web aims to optimise document retrieval by giving documents meaning, so that both people and machines can understand the meaning of a piece of information. The lack of semantics in our documents results in ineffective searches, with divergent and ambiguous information; semantic annotation is the way to bring semantics to documents. The goal of this dissertation is to assemble a framework of Semantic Web concepts that makes it possible to automatically annotate the Lattes Curriculum using open databases (Linked Open Data), which store the meaning of terms and expressions. The research question is: which concepts associated with the Semantic Web can contribute to the automatic semantic annotation of the Lattes Curriculum using Linked Open Data (LOD)? The systematic literature review presents concepts (manual, automatic and semi-automatic annotation, intrusive annotation, etc.), tools (entity extractors, etc.) and technologies (RDF, RDFa, SPARQL, etc.) related to the topic. Applying these concepts led to the creation of the Semantic Web Lattes System. The system imports the XML curriculum from the Lattes Platform, automatically annotates the available data using open databases, and supports semantic queries. The system is validated by presenting annotated curricula and by running queries that use external data belonging to the LOD cloud. Finally, the dissertation presents the conclusions, the difficulties encountered and proposals for future work.
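
As a minimal sketch of the kind of annotation step involved (not the actual Semantic Web Lattes System), the following code looks up a term taken from a Lattes record against DBpedia, one of the Linked Open Data sources such a system could use; the SPARQLWrapper library, the public DBpedia endpoint and the simple label-matching query are assumptions made for the example.

```python
# Minimal sketch of automatic semantic annotation: look up a term from a
# Lattes CV (e.g. an area of expertise) against DBpedia and return
# candidate Linked Open Data resources. Simplified for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON

def dbpedia_candidates(term: str, lang: str = "pt", limit: int = 5):
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?resource WHERE {{
            ?resource rdfs:label "{term}"@{lang} .
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["resource"]["value"] for b in results["results"]["bindings"]]

if __name__ == "__main__":
    # term taken from a hypothetical Lattes XML record
    print(dbpedia_candidates("Ciência da computação"))
```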

Relevance:

80.00%

Publisher:

Abstract:

Presentation at the CRIS2016 conference in St Andrews, June 10, 2016

Relevance:

80.00%

Publisher:

Abstract:

Following the workshop on new developments in daily licensing practice in November 2011, we brought together fourteen representatives from national consortia (from Denmark, Germany, the Netherlands and the UK) and publishers (Elsevier, SAGE and Springer), who met in Copenhagen on 9 March 2012 to discuss provisions in licences to accommodate new developments. The one-day workshop aimed to: present background and ideas regarding the provisions the KE Licensing Expert Group developed; introduce and explain the provisions the invited publishers currently use; ascertain agreement on the wording for long-term preservation, continuous access and course packs; give insight and more clarity about the use of open access provisions in licences; discuss a roadmap for inclusion of the provisions in the publishers' licences; and result in a report to disseminate the outcome of the meeting. Participants of the workshop were: United Kingdom: Lorraine Estelle (Jisc Collections); Denmark: Lotte Eivor Jørgensen (DEFF), Lone Madsen (Southern University of Denmark), Anne Sandfær (DEFF/Knowledge Exchange); Germany: Hildegard Schaeffler (Bavarian State Library), Markus Brammer (TIB); The Netherlands: Wilma Mossink (SURF), Nol Verhagen (University of Amsterdam), Marc Dupuis (SURF/Knowledge Exchange); Publishers: Alicia Wise (Elsevier), Yvonne Campfens (Springer), Bettina Goerner (Springer), Leo Walford (Sage); Knowledge Exchange: Keith Russell. The main outcome of the workshop was that it would be valuable to have a standard set of clauses which could be used in negotiations; this would make concluding licences much easier and more efficient. The comments on the model provisions the Licensing Expert Group had drafted will be taken into account and the provisions will be reformulated. Data and text mining is a new development and demand for access to allow for it is growing. It would be easier if there were a simpler way to access materials so they could be more easily mined. However, there are still outstanding questions on how the authors of articles that have been mined can be properly attributed.

Relevance:

80.00%

Publisher:

Abstract:

POSTDATA is a 5-year European Research Council (ERC) Starting Grant project that started in May 2016 and is hosted by the Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain. The context of the project is the corpora of European Poetry (EP), with a special focus on poetic materials from different languages and literary traditions. POSTDATA aims to offer a standardized model in the philological field and a metadata application profile (MAP) for EP in order to build a common classification of all these poetic materials. The information of the Spanish, Italian and French repertoires will be published in the Linked Open Data (LOD) ecosystem. Later we expect to extend the model to include additional corpora. There are a number of Web Based Information Systems (WIS) in Europe with repertoires of poems available for human consumption but not in an appropriate condition to be accessible and reusable by the Semantic Web. These systems are not interoperable; they are in fact locked in their databases and proprietary software, not suitable to be linked in the Semantic Web. A way to make this data interoperable is to develop a MAP so that the existing data can be published in the LOD ecosystem, and so that new data created and modeled on the basis of this MAP can be published as well. Creating a common data model for EP is not simple, since the existing data models are based on conceptualizations and terminology belonging to their own poetical traditions, and each tradition has developed an idiosyncratic analytical terminology in a different and independent way over the years. The result of this uncoordinated evolution is a set of varied terminologies to explain analogous metrical phenomena across the different poetic systems, whose correspondences have hardly been studied (see examples in González-Blanco & Rodríguez, 2014a and b). This work has to be done by domain experts before the modeling actually starts. On the other hand, the development of a MAP is a complex task, so it is imperative to follow a method for this development. In recent years Curado Malta & Baptista (2012, 2013a, 2013b) have been studying the development of MAPs in a Design Science Research (DSR) methodological process in order to define a method for the development of MAPs (see Curado Malta (2014)). The output of this DSR process was a first version of a method for the development of Metadata Application Profiles (Me4MAP) (paper to be published). The DSR process is now in the validation phase of the Relevance Cycle to validate Me4MAP. The development of this MAP for poetry will follow the guidelines of Me4MAP, and this development will be used to validate Me4MAP. The final goal of the POSTDATA project is: i) to publish all the data locked in the WIS as LOD, where any interested agent will be able to build applications over the data in order to serve final users; ii) to build a Web platform where: a) researchers, students and other final users interested in EP will be able to access poems (and their analyses) from all the databases; b) researchers, students and other final users will be able to upload poems and the digitized images of manuscripts, and fill in the information concerning the analysis of the poem, collaboratively contributing to a LOD dataset of poetry.
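
Since the POSTDATA MAP itself is the intended outcome of the project, the following rdflib sketch is purely hypothetical: it shows roughly what publishing one repertoire entry as LOD might look like, with an invented namespace and properties, reusing Dublin Core terms where possible.

```python
# Hypothetical sketch only: the POSTDATA metadata application profile was
# still being developed, so the classes and properties below are invented
# for illustration, reusing Dublin Core terms where possible.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, DCTERMS

PD = Namespace("http://example.org/postdata/")  # hypothetical namespace

g = Graph()
poem = URIRef("http://example.org/poem/beatus-ille")
g.add((poem, RDF.type, PD.PoeticWork))
g.add((poem, DCTERMS.title, Literal("Beatus ille")))
g.add((poem, DCTERMS.language, Literal("la")))
g.add((poem, PD.metricalScheme, Literal("iambic")))   # analysis term, illustrative
g.add((poem, PD.fromRepertoire, URIRef("http://example.org/repertoire/latin")))

print(g.serialize(format="turtle"))
```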

Relevance:

80.00%

Publisher:

Abstract:

The open data movement is relatively new; it offers significant benefits to society and the economy, and it promotes democracy and the accountability of public governments by fostering transparency, participation and collaboration of citizens. Because the movement is relatively new, it is the countries leading development that have already implemented open data policies and are already enjoying their benefits; in other countries, however, there are still no open data initiatives at all, or they are only just beginning. This work studies the proper use of good practices, norms, metrics and standards for implementing open data in a sustainable, automatable way and in accessible formats that guarantee the reuse of the data, in order to generate value from it by creating new products and services that help improve citizens' quality of life. To that end, an exploratory analysis of open data principles is carried out, together with an analysis of the current state of open data initiatives, and, so that the project has maximum applicability, the Meloda 4.0 metric is tested on datasets from the Madrid City Council. The open data portals of the city councils of Madrid, Zaragoza and Barcelona are analysed and evaluated on the basis of the UNE 178301:2015 standard. In line with the open data philosophy, the use of open source technologies for publishing open data is studied and recommended. Finally, as a result and application of everything learned, the design of a methodology for publishing open data is proposed, aimed at public entities that do not yet have initiatives or are only beginning to implement open data policies.

Relevance:

80.00%

Publisher:

Abstract:

Part 14: Interoperability and Integration

Relevance:

80.00%

Publisher:

Abstract:

Slides for my talk at the CHEAD Membership & Networking Meeting

Relevance:

70.00%

Publisher:

Abstract:

This book chapter considers recent developments in Australia and key jurisdictions both in relation to the formation of a national information strategy and the management of legal rights in public sector information.

Relevance:

70.00%

Publisher:

Abstract:

There has been an increasing interest by governments worldwide in the potential benefits of open access to public sector information (PSI). However, an important question remains: can a government incur tortious liability for incorrect information released online under an open content licence? This paper argues that the release of PSI online for free under an open content licence, specifically a Creative Commons licence, is within the bounds of an acceptable level of risk to government, especially where users are informed of the limitations of the data and appropriate information management policies and principles are in place to ensure accountability for data quality and accuracy.

Relevance:

70.00%

Publisher:

Abstract:

The Malaysian National Innovation Model blueprint states that there is an urgent need to pursue an innovation-oriented economy to improve the nation's capacity for knowledge, creativity and innovation. In nurturing a pervasive innovation culture, the Malaysian government has declared the year 2010 an Innovative Year in which creativity among its population is highly celebrated. However, while Malaysian citizens are encouraged to be creative and innovative, scientific data and information generated from publicly funded research in Malaysia is locked up because of rigid intellectual property licensing regimes and traditional publishing models. Reflecting on these circumstances, this paper looks at, and argues why, scientific data and information should be made freely available, accessible and re-usable to promote grassroots innovation in Malaysia. Using innovation theory as its platform of argument, the paper calls for an open access policy for publicly funded research output to be adopted and implemented in Malaysia. At the same time, a normative analytic approach is used to determine the type of open access policy that ought to be adopted to spur greater innovation among Malaysians.

Relevance:

70.00%

Publisher:

Abstract:

There is still no comprehensive information strategy governing access to and reuse of public sector information that applies on a nationwide basis, across all levels of government (local, state and federal), in Australia. This is the case both for public sector materials generally and for spatial data in particular. Nevertheless, the last five years have seen some significant developments in information policy and practice, the result of which has been a considerable lessening of the barriers that previously impeded the accessibility and reusability of a great deal of spatial and other material held by public sector agencies. Much of the impetus for change has come from the spatial community, which has for many years been a proponent of the view "that government held information, and in particular spatial information, will play an absolutely critical role in increasing the innovative capacity of this nation."1 However, the potential of government spatial data to contribute to innovation will remain unfulfilled without reform of policies on access and reuse, as well as of the pervasive practices of public sector data custodians who have relied on government copyright to justify the imposition of restrictive conditions on its use.

Relevance:

70.00%

Publisher:

Abstract:

QUT's new metadata repository (data registry), Research Data Finder, has been designed to promote the visibility and discoverability of QUT research datasets. Funded by the Australian National Data Service (ANDS), it will provide a qualitative snapshot of research data outputs created or collected by members of the QUT research community that are available via open or mediated access. As a fully integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT's research administrative system, ResearchMaster, and QUT's Academic Profiles system, to provide high quality data descriptions that increase awareness of, and access to, shareable research data. In addition, the repository and its workflows are designed to foster smoother data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximize the use of existing research datasets.

The metadata schema used in Research Data Finder is the Registry Interchange Format - Collections and Services (RIF-CS), developed by ANDS in 2009. This comprehensive schema is potentially complex for researchers; unlike metadata for publications, which are often made publicly available with the official publication, metadata for datasets are not typically available and need to be created. Research Data Finder uses a hybrid self-deposit and mediated deposit system. In addition to automated ingests from ResearchMaster (research project information) and the Academic Profiles system (researcher information), shareable data is identified at a number of key "trigger points" in the research cycle. These include: research grant proposals; ethics applications; Data Management Plans; Liaison Librarian data interviews; and thesis submissions. The ingested records can be supplemented with related metadata, including links to related publications such as those in QUT ePrints.

Records deposited in Research Data Finder are harvested by ANDS and made available to a national and international audience via Research Data Australia, ANDS' discovery service for Australian research data. Researcher and research group metadata records are also harvested by the National Library of Australia (NLA) and published in Trove (the NLA's digital information portal). By contributing records to the national infrastructure, QUT data will become more visible. Within Australia and internationally, many funding bodies, such as the Australian Research Council (ARC) and the National Health and Medical Research Council (NHMRC), have already mandated open access to publications produced from publicly funded research projects. QUT will be well placed to respond to the rapidly evolving climate of research data management.

This project is supported by the Australian National Data Service (ANDS). ANDS is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy Program and the Education Investment Fund (EIF) Super Science Initiative.
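
As a simplified illustration of the kind of record such a repository exposes for harvesting (not QUT's actual implementation), the following sketch assembles a minimal, approximate RIF-CS-style collection description with Python's standard library; the element choices are a reduced subset and the keys, group and names are placeholders.

```python
# Simplified sketch of building a dataset description for harvest.
# The elements below are a reduced, approximate subset of RIF-CS;
# keys and names are placeholders, not real QUT records.
import xml.etree.ElementTree as ET

RIFCS_NS = "http://ands.org.au/standards/rif-cs/registryObjects"
ET.register_namespace("", RIFCS_NS)

def minimal_collection_record(key: str, title: str, description: str) -> bytes:
    root = ET.Element(f"{{{RIFCS_NS}}}registryObjects")
    obj = ET.SubElement(root, f"{{{RIFCS_NS}}}registryObject",
                        {"group": "Example University"})
    ET.SubElement(obj, f"{{{RIFCS_NS}}}key").text = key
    ET.SubElement(obj, f"{{{RIFCS_NS}}}originatingSource").text = (
        "http://example.edu/research-data-finder")
    coll = ET.SubElement(obj, f"{{{RIFCS_NS}}}collection", {"type": "dataset"})
    name = ET.SubElement(coll, f"{{{RIFCS_NS}}}name", {"type": "primary"})
    ET.SubElement(name, f"{{{RIFCS_NS}}}namePart").text = title
    ET.SubElement(coll, f"{{{RIFCS_NS}}}description", {"type": "brief"}).text = description
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

print(minimal_collection_record(
    "example.edu/dataset/001",
    "Coastal water quality observations",
    "Shareable dataset description created at a research-cycle trigger point.",
).decode())
```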