1000 results for digital curation
Abstract:
Integration of experiential learning into library and information science (LIS) courses has long been a theme in LIS education, but the topic deserves renewed attention given the increasing demand for professionals in the digital library field and in light of the new initiative announced by the Library of Congress (LC) and the Institute of Museum and Library Services (IMLS) for a national residency program in digital curation. The balance between theory and practice in digital library curricula, the challenges of incorporating practical projects into LIS coursework, and current practice in teaching with hands-on activities represent the primary areas of this panel discussion.
Abstract:
The Digital Public Library of America (DPLA) is a digital library that strives to serve the public through digital collections accumulated from a wide variety of partners. Our chosen topic for the DPLA exhibit project is Perspectives on the Vietnam War. The Vietnam War remains a controversial topic of national interest, making it a subject of depth with many perspectives. Our goal with this exhibit was to gather different perspectives on the war through personal stories, the media, the presidential administrations during the war, military personnel, and the general public, including famous figures. We strove to demonstrate the variety of perspectives on the Vietnam War through a variety of digital objects and content that will be engaging for users: black-and-white and color photographs, videos, and audio files. Furthermore, we wanted to ensure that our digital materials are of high quality, properly documented, and easy to search and find; thus, all of our objects come from DPLA and from usable original sources. This poster will describe our processes for organization, object selection, and exhibit building, how we attained our goals, and the detailed steps of our overall operation. The poster will also include details about the minor issues and bumps encountered on the way to our final product, as well as the team members’ perspectives on the project as a whole, including problems, words to the wise, and triumphs.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This paper discusses many of the issues associated with formally publishing data in academia, focusing primarily on the structures that need to be put in place for peer review and formal citation of datasets. Data publication is becoming increasingly important to the scientific community, as it will provide a mechanism for those who create data to receive academic credit for their work and will allow the conclusions arising from an analysis to be more readily verifiable, thus promoting transparency in the scientific process. Peer review of data will also provide a mechanism for ensuring the quality of datasets, and we provide suggestions on the types of activities one expects to see in the peer review of data. A simple taxonomy of data publication methodologies is presented and evaluated, and the paper concludes with a discussion of dataset granularity, transience and semantics, along with a recommended human-readable citation syntax.
Abstract:
Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of data provenance to the confines of the related publications. Detailed knowledge of a dataset’s provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. Determining the authenticity and quality of open-access data is increasingly important, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish the detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data-compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete trail of lineage of the corresponding dataset, including the dataset itself.
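As a concrete illustration of the kind of linkage this abstract describes, the sketch below uses Python with rdflib to build an OAI-ORE resource map that aggregates a dataset (identified by a DOI) together with a workflow record carrying its provenance. This is only an assumed, minimal rendering of the idea, not the authors' implementation; every URI, including the DOI, is a placeholder.

```python
# Minimal sketch (assumed example, not the paper's implementation):
# an OAI-ORE resource map aggregating a dataset DOI with its provenance workflow.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("ore", ORE)
g.bind("dcterms", DCTERMS)

# Placeholder identifiers only.
dataset = URIRef("https://doi.org/10.xxxx/example-climate-dataset")
workflow = URIRef("https://example.org/provenance/workflow-run-42")
aggregation = URIRef("https://example.org/ore/aggregation-1")
resource_map = URIRef("https://example.org/ore/resource-map-1.ttl")

# The resource map describes an aggregation that groups the dataset
# with the workflow record describing its lineage.
g.add((resource_map, RDF.type, ORE.ResourceMap))
g.add((resource_map, ORE.describes, aggregation))
g.add((aggregation, RDF.type, ORE.Aggregation))
g.add((aggregation, ORE.aggregates, dataset))
g.add((aggregation, ORE.aggregates, workflow))
g.add((dataset, DCTERMS.provenance, workflow))

print(g.serialize(format="turtle"))
```

Serialised this way, a resource map of this kind could be linked from the DOI landing page, so that resolving the citation also exposes the lineage trail.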
Abstract:
ISO19156 Observations and Measurements (O&M) provides a standardised framework for organising information about the collection of environmental data. Here we describe the implementation of a specialisation of O&M for environmental data, the Metadata Objects for Linking Environmental Sciences (MOLES3). MOLES3 provides support for organising information about data and for user navigation around data holdings. The implementation described here, “CEDA-MOLES”, also supports data management functions for the Centre for Environmental Data Archival (CEDA). The previous iteration of MOLES (MOLES2) saw active use over five years before being replaced by CEDA-MOLES in late 2014. During that period important lessons were learnt both about the information needed and about how to design and maintain the necessary information systems. In this paper we review the problems encountered in MOLES2; describe how and why CEDA-MOLES was developed and engineered; outline the migration of information holdings from MOLES2 to CEDA-MOLES; and, finally, provide an early assessment of MOLES3 (as implemented in CEDA-MOLES) and its limitations. Key drivers for the MOLES3 development included the need for improved data provenance, for further structured information to support ISO19115 discovery metadata export (for EU INSPIRE compliance), and for appropriate fixed landing pages for Digital Object Identifiers (DOIs) in the presence of evolving datasets. Key lessons learned included the importance of minimising information structure in free-text fields, and the necessity of supporting as much agility in the information infrastructure as possible without compromising maintainability, both for those using the systems internally and externally (e.g. citing into the information infrastructure) and for those responsible for the systems themselves. The migration itself needed to ensure continuity of service and traceability of archived assets.
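To make the O&M pattern that MOLES3 specialises more concrete, here is a minimal, illustrative Python sketch of the core ISO19156 OM_Observation structure (feature of interest, observed property, procedure, times, result). It is a simplified reading of the standard for illustration only, not the CEDA-MOLES data model, and all names and values are placeholders.

```python
# Illustrative sketch of the ISO19156 OM_Observation pattern (not CEDA-MOLES).
from dataclasses import dataclass
from datetime import datetime
from typing import Any


@dataclass
class Observation:
    """An act of observing: a procedure estimates a property of a feature of interest."""
    feature_of_interest: str   # the entity observed, e.g. a station or atmospheric column
    observed_property: str     # the phenomenon estimated, e.g. "air_temperature"
    procedure: str             # the sensor, algorithm, or model run used
    phenomenon_time: datetime  # when the observed condition applied
    result_time: datetime      # when the result became available
    result: Any                # the estimated value, or a reference to a data file


# Placeholder instance only.
obs = Observation(
    feature_of_interest="https://example.org/feature/observation-site-1",
    observed_property="air_temperature",
    procedure="https://example.org/procedure/thermometer-t200",
    phenomenon_time=datetime(2014, 6, 9, 12, 0),
    result_time=datetime(2014, 6, 9, 12, 5),
    result=287.4,
)
```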
Abstract:
From where did this tweet originate? Was this quote from the New York Times modified? Daily, we rely on data from the Web, but it is often difficult or impossible to determine where it came from or how it was produced. This lack of provenance is particularly evident when people and systems deal with Web information or with any environment where information comes from sources of varying quality. Provenance is not captured pervasively in information systems, and major technical, social, and economic impediments stand in the way of using provenance effectively. This paper synthesizes requirements for provenance on the Web across a number of dimensions, focusing on three key aspects: the content of provenance, the management of provenance records, and the uses of provenance information. To illustrate these requirements, we use three synthesized scenarios that encompass provenance problems faced by Web users today.
Abstract:
This master's thesis examines the application of digital curation in research, and in historical research in particular. Digital curation can be understood as the maintenance and preservation of digital materials and the adding of value to them through reuse. The study examines the history, present state, and future of the field in Finland and internationally. Its focus is on how the digital curation lifecycle model works and on practitioners' views of future challenges. Beyond historians, the thesis serves the human sciences more broadly in digital research environments. As digital materials proliferate, their use involves a number of problems. The thesis analyses the digital curation lifecycle model developed by the Digital Curation Centre (DCC), which is divided into seven action phases. These phases are intended to ensure that digital materials are findable, readable, and usable for research now and in the future. The key sources are the DCC's web archive and Ross Harvey's Digital Curation – a how-to-do-it manual (2010). The empirical part of the work consists of interviews with digital curation researchers and practitioners, conducted in early 2016. The theoretical part draws on hermeneutic analysis, while the empirical part partly applies the Delphi method used in futures research. A key finding is that digital curation is suitable, as a general set of guidelines, for research and for the use of research materials. From the perspective of historical research, the lifecycle model follows the course of a research project at many points and can therefore be applied to the discipline easily. The interviews show that the curation of digital materials is still in its early stages in many countries. Future challenges include shrinking resources and weak technical expertise. Digital curation is also seen as playing a key role as data science becomes more widespread in society and in scientific research. Research on digital curation can thus be extended in the future to employing different analysis methods from the perspectives of different disciplines.
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014