928 results for Metadata Authority Description Schema (MADS)
Abstract:
Introduction: The interchange of authority records requires establishing and using metadata standards such as the MARC 21 Format for Authority Data, which is used by many cataloging agencies, and the Metadata Authority Description Schema (MADS), which has received little attention and remains only narrowly adopted among agencies. Purpose: To present an introductory study of the Metadata Authority Description Schema (MADS). Methodology: Descriptive and exploratory bibliographic research. Results: The paper addresses the context in which MADS was created, its goals and structure, and key issues related to the conversion of records from MARC 21 to MADS. Conclusions: Despite its limitations, MADS can be used to create simple authority records in the Web environment and beyond the library context.
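Since the abstract above centres on creating simple authority records with MADS and converting from MARC 21, a minimal sketch of what assembling such a record might look like is given below. It is written in Python with the standard-library ElementTree module; the namespace URI and element names (mads, authority, variant, name, namePart) follow the general shape of the MADS XML vocabulary but are assumptions for illustration, not a normative example from the schema documentation, and the name shown is a placeholder.

```python
# Illustrative sketch of a MADS-style authority record built in Python.
# Element names and the namespace URI are assumptions for illustration,
# not a normative example taken from the MADS schema documentation.
import xml.etree.ElementTree as ET

MADS_NS = "http://www.loc.gov/mads/v2"  # assumed namespace URI
ET.register_namespace("mads", MADS_NS)

def ns(tag):
    """Qualify a tag with the assumed MADS namespace."""
    return f"{{{MADS_NS}}}{tag}"

record = ET.Element(ns("mads"))

# Authorized form of the name (placeholder heading).
authority = ET.SubElement(record, ns("authority"))
name = ET.SubElement(authority, ns("name"), {"type": "personal"})
ET.SubElement(name, ns("namePart")).text = "Machado de Assis, 1839-1908"

# A variant (unauthorized) form of the same name.
variant = ET.SubElement(record, ns("variant"), {"type": "other"})
v_name = ET.SubElement(variant, ns("name"), {"type": "personal"})
ET.SubElement(v_name, ns("namePart")).text = "Assis, Machado de"

print(ET.tostring(record, encoding="unicode"))
```

A MARC 21-to-MADS conversion of the kind discussed in the paper would populate elements like these from the corresponding MARC authority fields.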
Abstract:
Description based on: Summer 1996; title from caption.
Collection-Level Subject Access in Aggregations of Digital Collections: Metadata Application and Use
Abstract:
Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question “How does collection-level metadata mediate scholarly subject access to aggregated digital collections?” This goal was achieved using three research methods:
• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;
• transaction log analysis of user interactions with Opening History; and
• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.
It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes components such as: 1) describing a collection’s subject matter with mutually complementary values in different metadata fields, and 2) a variety of collection properties and characteristics encoded in the free-text Description field; types and genres of objects, together with topical, geographic, and temporal coverage, are the collection characteristics most consistently represented in free-text Description fields. Analysis of user interactions with aggregations of digital collections yields a number of interesting findings. Item-level user interactions were found to occur more often than collection-level interactions. Collection browse is initiated more often than search, with subject browse (topical and geographic) used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate body searches, and fewer individual person, event, and class-of-persons searches, were observed in collection searches than in item searches. While collection search is most often satisfied by the Description and/or Subjects collection metadata fields, a significant proportion of collection records would not be retrieved without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are frequently viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than four times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage.
They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end-user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for the scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices. This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.
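The transaction-log analysis summarised above rests on separating collection-level from item-level interactions and tallying page views for each. A minimal, hypothetical sketch of that kind of tally is shown below; the log format and the URL path patterns are invented for illustration and are not the instrumentation actually used in the study.

```python
# Hypothetical sketch: classify transaction-log entries as collection-level or
# item-level page views and compare totals. The log format (tab-separated
# timestamp and URL path) and the path patterns are illustrative assumptions.
from collections import Counter

SAMPLE_LOG = """\
2010-03-01T10:02:11\t/collections/123
2010-03-01T10:02:45\t/items/98765
2010-03-01T10:03:02\t/collections/123/browse/topic
2010-03-01T10:03:40\t/items/98770
"""

def classify(path: str) -> str:
    """Label a requested path as a collection-level or item-level interaction."""
    if path.startswith("/collections/"):
        return "collection"
    if path.startswith("/items/"):
        return "item"
    return "other"

counts = Counter()
for line in SAMPLE_LOG.splitlines():
    _timestamp, path = line.split("\t")
    counts[classify(path)] += 1

print(counts)  # Counter({'collection': 2, 'item': 2}) for the sample above
```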
Abstract:
Description based on: No. 38.
Abstract:
Description based on: 1982-1983.
Abstract:
Description based on: 1987; title from cover.
Abstract:
Description based on: 1989 edition.
Abstract:
Description based on: 1986.
Abstract:
1903
Abstract:
1903
Abstract:
Authors from Burrough (1992) to Heuvelink et al. (2007) have highlighted the importance of GIS frameworks which can handle incomplete knowledge in data inputs, in decision rules and in the geometries and attributes modelled. It is particularly important for this uncertainty to be characterised and quantified when GI data is used for spatial decision making. Despite a substantial and valuable literature on means of representing and encoding uncertainty and its propagation in GI (e.g., Hunter and Goodchild 1993; Duckham et al. 2001; Couclelis 2003), no framework yet exists to describe and communicate uncertainty in an interoperable way. This limits the usability of the ever-increasing Internet resources of geospatial data built on specifications that provide frameworks for the ‘GeoWeb’ (Botts and Robin 2007; Cox 2006). In this paper we present UncertML, an XML schema which provides a framework for describing uncertainty as it propagates through many applications, including online risk management chains. This uncertainty description ranges from simple summary statistics (e.g., mean and variance) to complex representations such as parametric, multivariate distributions at each point of a regular grid. The philosophy adopted in UncertML is that all data values are inherently uncertain (i.e., they are random variables rather than values with defined quality metadata).
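As an illustration of the summary-statistics end of the range described above (mean and variance for a single value), the sketch below builds an UncertML-like XML fragment in Python. The namespace URI and element names are assumptions chosen for the example, not the normative UncertML schema.

```python
# Illustrative sketch: encode the uncertainty of a single observed value as a
# Gaussian summary (mean and variance) in an UncertML-like XML fragment.
# The namespace URI and element names are assumptions for illustration and are
# not taken from the normative UncertML schema documents.
import xml.etree.ElementTree as ET

UNCERT_NS = "http://www.example.org/uncertml"  # hypothetical namespace
ET.register_namespace("un", UNCERT_NS)

def ns(tag):
    return f"{{{UNCERT_NS}}}{tag}"

gaussian = ET.Element(ns("GaussianDistribution"))
ET.SubElement(gaussian, ns("mean")).text = "17.3"      # e.g. a modelled temperature
ET.SubElement(gaussian, ns("variance")).text = "0.25"  # uncertainty of the estimate

print(ET.tostring(gaussian, encoding="unicode"))
```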
Abstract:
Metadata that is associated with either an information system or an information object for purposes of description, administration, legal requirements, technical functionality, use and usage, and preservation plays a critical role in ensuring the creation, management, preservation, and use and re-use of trustworthy materials, including records. Recordkeeping metadata, of which one key type is archival description, plays a particularly important role in documenting the reliability and authenticity of records and recordkeeping systems, as well as the various contexts (legal-administrative, provenancial, procedural, documentary, and technical) within which records are created and kept as they move across space and time. In the digital environment, metadata is also the means by which it is possible to identify how record components – those constituent aspects of a digital record that may be managed, stored and used separately by the creator or the preserver – can be reassembled to generate an authentic copy of a record or reformulated per a user’s request as a customized output package. Issues relating to the creation, capture, management and preservation of adequate metadata are, therefore, integral to any research study addressing the reliability and authenticity of digital entities, regardless of the community, sector or institution within which they are being created. The InterPARES 2 Description Cross-Domain Group (DCD) examined the conceptualization, definitions, roles, and current functionality of metadata and archival description in terms of requirements generated by InterPARES 1. Because of the need to communicate the work of InterPARES in a meaningful way across not only other disciplines, but also different archival traditions; to interface with, evaluate and inform existing standards, practices and other research projects; and to ensure interoperability across the three focus areas of InterPARES 2, the Description Cross-Domain Group also addressed its research goals with reference to wider thinking about, and developments in, recordkeeping and metadata. InterPARES 2 addressed not only records, however, but a range of digital information objects (referred to as “entities” by InterPARES 2, but not to be confused with the term “entities” as used in metadata and database applications) that are the products and by-products of government, scientific and artistic activities carried out using dynamic, interactive or experiential digital systems. The nature of these entities was determined through a diplomatic analysis undertaken as part of extensive case studies of digital systems conducted by the InterPARES 2 Focus Groups. This diplomatic analysis established whether the entities identified during the case studies were records, non-records that nevertheless raised important concerns relating to reliability and authenticity, or “potential records.” To be determined to be records, the entities had to meet the criteria outlined by archival theory – they had to have a fixed documentary format and stable content. It was not sufficient that they be considered to be or treated as records by the creator. “Potential records” is a new construct indicating that a digital system has the potential to create records upon demand, but does not actually fix and set aside records in the normal course of business.
The work of the Description Cross-Domain Group, therefore, addresses the metadata needs for all three categories of entities. Finally, since “metadata” as a term is used today so ubiquitously and in so many different ways by different communities that it is in peril of losing any specificity, part of the work of the DCD sought to name and type categories of metadata. It also addressed incentives for creators to generate appropriate metadata, as well as issues associated with the retention, maintenance and eventual disposition of the metadata that aggregates around digital entities over time.
Abstract:
QUT’s new metadata repository (data registry), Research Data Finder, has been designed to promote the visibility and discoverability of QUT research datasets. Funded by the Australian National Data Service (ANDS), it will provide a qualitative snapshot of research data outputs created or collected by members of the QUT research community that are available via open or mediated access. As a fully integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administrative system, ResearchMaster, as well as QUT’s Academic Profiles system, to provide high quality data descriptions that increase awareness of, and access to, shareable research data. In addition, the repository and its workflows are designed to foster smoother data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximize the reuse of existing research datasets. The metadata schema used in Research Data Finder is the Registry Interchange Format - Collections and Services (RIF-CS), developed by ANDS in 2009. This comprehensive schema is potentially complex for researchers; unlike metadata for publications, which are often made publicly available with the official publication, metadata for datasets are not typically available and need to be created. Research Data Finder uses a hybrid self-deposit and mediated deposit system. In addition to automated ingests from ResearchMaster (research project information) and the Academic Profiles system (researcher information), shareable data is identified at a number of key “trigger points” in the research cycle. These include: research grant proposals; ethics applications; Data Management Plans; Liaison Librarian data interviews; and thesis submissions. These ingested records can be supplemented with related metadata, including links to related publications, such as those in QUT ePrints. Records deposited in Research Data Finder are harvested by ANDS and made available to a national and international audience via Research Data Australia, ANDS’ discovery service for Australian research data. Researcher and research group metadata records are also harvested by the National Library of Australia (NLA) and these records are then published in Trove (the NLA’s digital information portal). By contributing records to the national infrastructure, QUT data will become more visible. Within Australia and internationally, many funding bodies have already mandated the open access of publications produced from publicly funded research projects, such as those supported by the Australian Research Council (ARC) or the National Health and Medical Research Council (NHMRC). QUT will be well placed to respond to the rapidly evolving climate of research data management. This project is supported by the Australian National Data Service (ANDS). ANDS is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy Program and the Education Investment Fund (EIF) Super Science Initiative.
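For readers unfamiliar with RIF-CS, the sketch below shows roughly what a minimal collection registry object of the kind harvested by ANDS might look like when assembled programmatically in Python. The element names follow the general shape of RIF-CS registry objects, but the namespace handling, key, originating source and title are placeholder assumptions, not an authoritative RIF-CS example or QUT’s actual workflow.

```python
# Illustrative sketch of a minimal RIF-CS-style collection record, as might be
# harvested into Research Data Australia. The key, originating source and title
# are placeholders, and the element names are assumptions for illustration.
import xml.etree.ElementTree as ET

RIF_NS = "http://ands.org.au/standards/rif-cs/registryObjects"  # assumed namespace
ET.register_namespace("", RIF_NS)

def ns(tag):
    return f"{{{RIF_NS}}}{tag}"

root = ET.Element(ns("registryObjects"))
obj = ET.SubElement(root, ns("registryObject"), {"group": "Example University"})
ET.SubElement(obj, ns("key")).text = "example.edu.au/dataset/0001"        # hypothetical key
ET.SubElement(obj, ns("originatingSource")).text = "http://example.org/data-registry"  # placeholder

collection = ET.SubElement(obj, ns("collection"), {"type": "dataset"})
name = ET.SubElement(collection, ns("name"), {"type": "primary"})
ET.SubElement(name, ns("namePart")).text = "Example survey dataset"        # placeholder title

print(ET.tostring(root, encoding="unicode"))
```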
Abstract:
Democratic governments raise taxes and charges and spend revenue on delivering peace, order and good government. The delivery process begins with a legislature, as it can provide a framework of legally enforceable rules enacted according to the government’s constitution. These rules confer rights and obligations that allow particular people to carry on particular functions at particular places and times. Metadata standards as applied to public records contain information about the functioning of government as distinct from the non-government sector of society. Metadata standards apply to database construction. Data entry, storage, maintenance, interrogation and retrieval depend on a controlled vocabulary needed to enable accurate retrieval of suitably catalogued records in a global information environment. Queensland’s socioeconomic progress now depends in part on technical efficiency in database construction to address queries about who does what, where and when; under what legally enforceable authority; and how the evidence of those facts is recorded. The Survey and Mapping Infrastructure Act 2003 (Qld) addresses technical aspects of the “where” questions – typically the officially recognised name of a place and a description of its boundaries. The current 10-year review of the Survey and Mapping Regulation 2004 provides a valuable opportunity to consider whether the Regulation makes sense in the context of a number of later laws concerned with the management of Public Sector Information (PSI), as well as policies for ICT hardware and software procurement. Removing ambiguities about how official place names are to be regarded on a whole-of-government basis can achieve some short-term goals. Longer-term goals depend on a more holistic approach to information management – and current aspirations for more open government and community engagement are unlikely to be realised without such a longer-term vision.
Abstract:
The digital management of collections in museums, archives, libraries and galleries is an increasingly important part of cultural heritage studies. This paper describes a representation for folk song metadata, based on the Web Ontology Language (OWL) implementation of the CIDOC Conceptual Reference Model. The OWL representation facilitates encoding and reasoning over a genre ontology, while the CIDOC model enables a representation of complex spatial containment and proximity relations among geographic regions. It is shown how complex queries of folk song metadata, relying on inference and not only retrieval, can be expressed in OWL and solved using a description logic reasoner.
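A minimal sketch of the kind of spatial-containment query the abstract describes is given below, using Python and rdflib. A transitive SPARQL property path stands in for the description logic reasoner used in the paper, and the namespace, class and property names are illustrative assumptions rather than CIDOC CRM identifiers.

```python
# Illustrative sketch: encode folk songs with a geographic containment
# hierarchy and answer "songs collected anywhere within region X" by following
# the containment relation transitively. The namespace, class and property
# names are assumptions for illustration, not CIDOC CRM identifiers.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/folksong#")
g = Graph()

# A small containment hierarchy: village -> district -> region.
g.add((EX.village_a, EX.within, EX.district_1))
g.add((EX.district_1, EX.within, EX.region_x))

# One song, collected in the village.
g.add((EX.song_42, RDF.type, EX.FolkSong))
g.add((EX.song_42, EX.collectedIn, EX.village_a))
g.add((EX.song_42, EX.title, Literal("Example ballad")))

# Find songs collected in any place contained, directly or indirectly, in region_x.
results = g.query(
    """
    PREFIX ex: <http://example.org/folksong#>
    SELECT ?song ?title WHERE {
        ?song a ex:FolkSong ;
              ex:collectedIn ?place ;
              ex:title ?title .
        ?place ex:within+ ex:region_x .
    }
    """
)
for song, title in results:
    print(song, title)
```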