Collection-Level Subject Access in Aggregations of Digital Collections: Metadata Application and Use
Abstract:
Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question “How does collection-level metadata mediate scholarly subject access to aggregated digital collections?” This goal was achieved using three research methods:
• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;
• transaction log analysis of user interactions with Opening History; and
• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.
It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes such components as: 1) describing a collection’s subject matter with mutually complementary values in different metadata fields, and 2) encoding a variety of collection properties and characteristics in the free-text Description field; types and genres of objects, as well as topical, geographic and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yielded a number of interesting findings.
Item-level user interactions were found to occur more often than collection-level interactions. Collection browsing is initiated more often than searching, with subject browsing (topical and geographic) used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate-body searches, and fewer individual-person, event, and class-of-persons searches, were observed in collection searches than in item searches. While collection searches are most often satisfied by the Description and/or Subjects collection metadata fields, a significant proportion of collection records would not be retrieved without both controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than four times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage. They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices.
This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.
Abstract:
The Digital Conversion and Media Reformatting plan was written in 2012 and revised in 2013-2014 as a five-year plan for the newly established department at the University of Maryland Libraries under the Digital Systems and Stewardship Division. The plan focuses on increasing digitization production, both in-house and through vendors, and creates a model for managing this production.
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and lastly the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. 
We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, the increasing complexity and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. These collectively imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of both the likelihood of a negative outcome and the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity, a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risk can be characterised according to associated goals, activities, responsibilities and policies in terms of both their manifestation and mitigation. Risks can be deconstructed into their atomic units and responsibility for their resolution delegated appropriately. We continue to describe how the manifestation of risks typically spans an entire organisational environment, and how risk, as the focus of our analysis, safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk-factors, through the risks themselves or associated system elements. To do so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community.
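The characterisation of risk as likelihood combined with impact can be sketched as a minimal scoring model. The scales, field names and example risks below are illustrative assumptions, not the thesis's actual instrument:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) .. 5 (catastrophic) -- assumed scale

    def score(self) -> int:
        # A common risk-matrix convention: exposure = likelihood x impact
        return self.likelihood * self.impact

# Hypothetical preservation risks for illustration
risks = [
    Risk("File format obsolescence", likelihood=4, impact=3),
    Risk("Storage media failure", likelihood=2, impact=5),
    Risk("Loss of staff expertise", likelihood=3, impact=2),
]

# Rank risks so that mitigation effort can be delegated to the
# highest exposures first
for r in sorted(risks, key=lambda r: r.score(), reverse=True):
    print(f"{r.name}: {r.score()}")
```

Such a ranking is what allows risks to be "deconstructed into their atomic units" and responsibility for the worst exposures delegated first.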
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.
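The idea of an ontology relating repository, object and risk characteristics can be illustrated with a dependency-free linked-data sketch. Every class and property name below is a hypothetical placeholder chosen for illustration, not the actual PORRO vocabulary:

```python
# Linked-data triples relating a repository, a digital object and a risk,
# in the spirit of an ontology such as PORRO. All names here are
# illustrative placeholders, not the real PORRO terms.
triples = {
    ("ex:RepositoryA",         "rdf:type",     "ex:Repository"),
    ("ex:tiff_master",         "rdf:type",     "ex:DigitalObject"),
    ("ex:tiff_master",         "ex:storedIn",  "ex:RepositoryA"),
    ("ex:format_obsolescence", "rdf:type",     "ex:Risk"),
    ("ex:tiff_master",         "ex:exposedTo", "ex:format_obsolescence"),
    ("ex:migration_policy",    "ex:mitigates", "ex:format_obsolescence"),
}

def objects(subject, predicate):
    """All objects of triples matching (subject, predicate, ?)."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Which risks is the TIFF master exposed to, and what mitigates them?
for risk in objects("ex:tiff_master", "ex:exposedTo"):
    mitigations = {s for s, p, o in triples if p == "ex:mitigates" and o == risk}
    print(risk, "mitigated by", mitigations)
```

Traversing such relations is what lets a planner move from an object, to the risks it faces, to the documented mitigations, which is the kind of decision support the ontology is intended to offer.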
Abstract:
Queensland University of Technology’s Institutional Repository, QUT ePrints (http://eprints.qut.edu.au/), was established in 2003. With the help of an institutional mandate (endorsed in 2004) the repository now holds over 11,000 open access publications. The repository’s success is celebrated within the University and acknowledged nationally and internationally. QUT ePrints was built on GNU EPrints open source repository software (currently running v.3.1.3) and was originally configured to accommodate open access versions of the traditional range of research publications (journal articles, conference papers, books, book chapters and working papers). However, in 2009, the repository’s scope, content and systems were broadened and the ‘QUT Digital Repository’ is now a service encompassing a range of digital collections, services and systems. For a work to be accepted into the institutional repository, at least one of the authors/creators must have a current affiliation with QUT. However, the success of QUT ePrints in terms of its capacity to increase the visibility and accessibility of our researchers' scholarly works resulted in requests to accept digital collections of works which were out of scope. To address this need, a number of parallel digital collections have been developed. These collections include OZcase, a collection of legal research materials, and ‘The Sugar Industry Collection’, a digitised collection of books and articles on sugar cane production and processing. Additionally, the Library has responded to requests from academics for a service to support the publication of new, and existing, peer reviewed open access journals. A project is currently underway to help a group of senior QUT academics publish a new international peer reviewed journal. The QUT Digital Repository website will be a portal for access to a range of resources to support copyright management. It is likely that it will provide an access point for the institution’s data repository.
The data repository, provisionally named the ‘QUT Data Commons’, is currently a work-in-progress. The metadata for some QUT datasets will also be harvested by and discoverable via ‘Research Data Australia’, the dataset discovery service managed by the Australian National Data Service (ANDS). The QUT Digital Repository will integrate a range of technologies and services related to scholarly communication. This paper will discuss the development of the QUT Digital Repository, its strategic functions, the stakeholders involved and lessons learned.
Abstract:
This presentation, given at the 2015 USETDA (United States Electronic Theses and Dissertations Association) conference in Austin, Texas, explores the history of the Digital Collections Center at Florida International University and where and how it functions in the process of publishing, archiving, and promoting the university's electronic theses and dissertations. Additionally, the functionality of Digital Commons is discussed, along with the use of Adobe Acrobat for creating archival-quality PDFs. The final section discusses promotion techniques used via social media for increased discoverability of ETDs.
Abstract:
The integration of experiential learning into library and information science (LIS) courses has been a theme in LIS education, but the topic deserves renewed attention given the increasing demand for professionals in the digital library field and in light of the new initiative announced by the Library of Congress (LC) and the Institute of Museum and Library Services (IMLS) for a national residency program in digital curation. The balance between theory and practice in digital library curricula, the challenges of incorporating practical projects into LIS coursework, and the current practice of teaching with hands-on activities represent the primary areas of this panel discussion.
Abstract:
Nowadays the great public libraries in Bulgaria are taking on the role of digital centres, providing new information resources and services in the digital space. Digital conversion as a means of preservation is one of the important priorities of the Regional Public Library in Veliko Tarnovo. In the last few years we have persistently searched for possible sources of financing through national and foreign programmes in this direction. Initially the strategy was oriented toward digitizing the holdings in most urgent need of conversion: the local-studies periodicals from 1878 to 1944. The digitization of these holdings will lay the foundation of a full-text database of Bulgarian periodical publications. The technology on offer makes it possible to include other libraries in the Unified Index, which could develop into a National Unified Index of periodical publications. The integrated information environment that has been created is an attractive, comfortable and useful workplace, at home or in the office, for researchers, historians, art experts and bibliographers. The library's readers make very active use of all the information services on the library's web page and work competently with the online indexes provided there; they find the necessary title, which can later be requested for use at home or in the library, again by electronic means.
Abstract:
Presentation made by Jamie Rogers and John Nemmers at the Society of Florida Archivists annual meeting in Tallahassee, Florida. Jamie Rogers presented the "Coral Gables - Virtual Historic City" project at Florida International University. John Nemmers presented the "Unearthing St. Augustine’s Colonial Heritage" project at the University of Florida.
Abstract:
Cultural objects are increasingly generated and stored in digital form, yet effective methods for their indexing and retrieval still remain an important area of research. The main problem arises from the disconnection between the content-based indexing approach used by computer scientists and the description-based approach used by information scientists. There is also a lack of representational schemes that allow the alignment of semantics and context with keywords and low-level features that can be automatically extracted from the content of these cultural objects. This paper presents an integrated approach to address these problems, taking advantage of both computer science and information science approaches. We first discuss the requirements from a number of perspectives: users, content providers, content managers and technical systems. We then present an overview of our system architecture and describe various techniques which underlie the major components of the system. These include: automatic object category detection; user-driven tagging; metadata transform and augmentation; and an expression language for digital cultural objects. In addition, we discuss our experience in testing and evaluating some existing collections, analyse the difficulties encountered and propose ways to address these problems.
A story worth telling: putting oral history and digital collections online in cultural institutions
Abstract:
Digital platforms in cultural institutions offer exciting opportunities for oral history and digital storytelling that can augment and enrich traditional collections. The way in which cultural institutions allow access to the public is changing dramatically, prompting substantial expansions of their oral history and digital story holdings. In Queensland, Australia, public libraries and museums are becoming innovative hubs for a wide assortment of collections that represent a cross-section of community groups and organisations through the integration of oral history and digital storytelling. The State Library of Queensland (SLQ) features digital stories online to encourage users to explore what the institution has in the catalogue through its website. Now SLQ also offers oral history interviews online, to introduce both current and new users to oral history and other components of its collections, such as photographs and documents. This includes the various departments, Indigenous centres and regional libraries affiliated with SLQ statewide, which are often unable to access the materials held within, or even full information about, the collections available within the institution. There has been a growing demand for resources and services that help to satisfy community enthusiasm and promote engagement. Demand increases as public access to affordable digital media technologies increases, and as community or marginalised groups become interested in do-it-yourself (DIY) history; and SLQ encourages this. This paper draws on the oral history and digital story-based research undertaken by the Queensland University of Technology (QUT) for the State Library of Queensland, including: the Apology Collection: The Prime Minister’s apology to Australia’s Indigenous Stolen Generation; Five Senses: regional Queensland artists; Gay history of Brisbane; and The Queensland Business Leaders Hall of Fame.
Abstract:
Image collections are ever growing and hence visual information is becoming more and more important. Moreover, the classical paradigm of taking pictures has changed, first with the spread of digital cameras and, more recently, with mobile devices equipped with integrated cameras. Clearly, these image repositories need to be managed, and tools for effectively and efficiently searching image databases are highly sought after, especially on mobile devices where more and more images are being stored. In this paper, we present an image browsing system for interactive exploration of image collections on mobile devices. Images are arranged so that visually similar images are grouped together while large image repositories become accessible through a hierarchical, browsable tree structure, arranged on a hexagonal lattice. The developed system provides an intuitive and fast interface for navigating through image databases using a variety of touch gestures. © 2012 Springer-Verlag.
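The grouping-by-visual-similarity idea behind such a browsable hierarchy can be sketched as follows. The colour features, image names and two-level tree are simplifications assumed for illustration; the described system uses richer visual features and a hexagonal lattice layout:

```python
import math

# Toy "images" as mean-colour feature vectors (r, g, b) -- hypothetical
# data standing in for real visual features.
images = {
    "sunset1": (220, 120, 40), "sunset2": (210, 110, 50),
    "forest1": (30, 140, 40),  "forest2": (40, 150, 60),
    "sea1":    (20, 80, 200),  "sea2":    (30, 90, 190),
}

def group_by_nearest(reps, items):
    """Assign every image (including the representatives themselves) to its
    visually closest representative, yielding one browsable node per
    representative -- the top level of a browsing hierarchy."""
    tree = {r: [] for r in reps}
    for name, feat in items.items():
        nearest = min(reps, key=lambda r: math.dist(items[r], feat))
        tree[nearest].append(name)
    return tree

# One representative per visual group; tapping a node would descend
# into the similar images grouped beneath it
tree = group_by_nearest(["sunset1", "forest1", "sea1"], images)
print(tree)
```

Applied recursively to each node's members, this kind of grouping produces the hierarchical, browsable tree structure the paper describes, which is what keeps large repositories navigable on a small touch screen.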
Abstract:
The key to prosperity in today's world is access to digital content and the skills to create new content. The investigation of folklore artifacts is the topic of this article, which presents research related to the national programme „Knowledge Technologies for Creation of Digital Presentation and Significant Repositories of Folklore Heritage” (FolkKnow). FolkKnow aims to build a digital multimedia archive "Bulgarian Folklore Heritage” (BFH) and a virtual information portal with a folk media library of digitized multimedia objects from a selected collection of the fund of the Institute of Ethnology and Folklore Studies with Ethnographic Museum (IEFSEM) of the Bulgarian Academy of Science (BAS). The realization of the FolkKnow project opens opportunities for wide social applications of the multimedia collections, for the purposes of interactive distance learning/self-learning, research activities in the field of Bulgarian traditional culture, and cultural and ethno-tourism. We study, analyze and implement techniques and methods for the digitization of multimedia objects and their annotation. The paper discusses the specific approaches used to build and protect a digital archive with multimedia content. The tasks can be systematized in the following guidelines:
* digitization of the selected samples
* analysis of the objects in order to determine the metadata of selected artifacts from selected collections and problem areas
* a digital multimedia archive
* socially-oriented applications and virtual exhibitions
* a frequency dictionary tool for texts with folklore themes
* a method of applying modern technologies for protecting intellectual property and copyright on the digital content developed for use in digital exposures.
Abstract:
Online social networks, user-created content and participatory media are often still ignored by professionals, denounced in the press and banned in schools, but the potential of digital literacy should not be underestimated. Hartley reassesses the historical and global context, commercial and cultural dynamics and the potential of popular productivity through analysis of the use of digital media in various domains, including creative industries, digital storytelling, YouTube, journalism and mediated fashion.
Abstract:
Public key cryptography, and with it the ability to compute digital signatures, has made it possible for electronic commerce to flourish. It is thus unsurprising that the proposed Australian NECS will also utilise digital signatures in its system so as to provide a fully automated process from the creation of an electronic land title instrument to the digital signing and electronic lodgment of these instruments. This necessitates an analysis of the fraud risks raised by the usage of digital signatures, because a compromise of the integrity of digital signatures will lead to a compromise of the Torrens system itself. This article will show that digital signatures may in fact offer greater security against fraud than handwritten signatures; but to achieve this, digital signatures require an infrastructure in which each component is properly implemented and managed.
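How a digital signature binds a signer to an instrument, and why verification fails when the signed content is tampered with, can be illustrated with textbook RSA on toy parameters. This sketch is deliberately insecure and purely illustrative; real conveyancing systems must rely on vetted cryptographic libraries and a properly managed key infrastructure:

```python
# Textbook RSA signing with tiny, insecure parameters -- illustration only.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse of e mod phi)

def sign(message: int) -> int:
    # Only the private-key holder can compute this value
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    # Anyone holding the public key (n, e) can check the signature
    return pow(signature, e, n) == message

m = 42                    # stand-in for a hash of the instrument
sig = sign(m)
print(verify(m, sig))     # True: genuine signature over the original message
print(verify(m + 1, sig)) # False: any alteration of the message is rejected
```

The second check is the security property at stake in the article: a valid signature over an altered instrument cannot be produced without the private key, which is why protecting that key and its supporting infrastructure is what ultimately protects the register.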
Abstract:
Process models in organizational collections are typically modeled by the same team and using the same conventions. As such, these models share many characteristic features, like size range and the type and frequency of errors. In most cases merely small samples of these collections are available, due to, for example, the sensitive information they contain. Because of their sizes, these samples may not provide an accurate representation of the characteristics of the originating collection. This paper deals with the problem of constructing collections of process models, in the form of Petri nets, from small samples of a collection for accurate estimations of the characteristics of this collection. Given a small sample of process models drawn from a real-life collection, we mine a set of generation parameters that we use to generate arbitrarily large collections that feature the same characteristics as the original collection. In this way we can estimate the characteristics of the original collection on the generated collections. We extensively evaluate the quality of our technique on various sample datasets drawn from both research and industry.
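The estimate-then-generate idea can be sketched in miniature. The paper mines Petri-net generation parameters; this illustration, with hypothetical sample data, uses a single size characteristic to stand in for them:

```python
import random
import statistics

# Sizes (e.g. node counts) of process models in a small sample drawn from a
# larger, inaccessible collection -- hypothetical numbers for illustration.
sample_sizes = [12, 15, 14, 18, 11, 16, 13, 17]

# "Mine" simple generation parameters from the sample
mu = statistics.mean(sample_sizes)
sigma = statistics.stdev(sample_sizes)

def generate_collection(n_models: int, rng: random.Random):
    """Generate an arbitrarily large synthetic collection whose size
    distribution mimics the sample's. The real technique mines richer
    Petri-net generation parameters; size alone stands in for them here."""
    return [max(1, round(rng.gauss(mu, sigma))) for _ in range(n_models)]

# Characteristics of the original collection can now be estimated on a
# generated collection far larger than the available sample
synthetic = generate_collection(10_000, random.Random(42))
print(round(statistics.mean(synthetic), 1))
```

The payoff is that statistics which are unreliable on eight models become stable on ten thousand generated ones, provided the mined parameters faithfully capture the originating collection.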