983 results for Data Standards
Abstract:
This paper applies Latour’s 1992 translation map as a device to explore the development of, and recent conflict between, two data standards for the exchange of business information – EDIFACT and XBRL. Our research focuses on France, where EDIFACT is well established and XBRL is just emerging. The alliances supporting both standards are local and global. The French/European EDIFACT is promulgated through the United Nations, while a consortium of national jurisdictions and companies has coalesced around the US-initiated XBRL International (XII). We suggest cultural differences pose a barrier to co-operation between the two networks. Competing data standards create the risk of switching costs. The different technical characteristics of the standards are identified as raising implications for regulators and users. A key concern is the lack of co-ordination of data standard production and the mechanisms regulatory agencies use to choose platforms for electronic data submission.
Abstract:
INTRODUCTION: The ability to reproducibly identify clinically equivalent patient populations is critical to the vision of learning health care systems that implement and evaluate evidence-based treatments. The use of common or semantically equivalent phenotype definitions across research and health care use cases will support this aim. Currently, there is no single consolidated repository for computable phenotype definitions, making it difficult to find all definitions that already exist, and also hindering the sharing of definitions between user groups. METHOD: Drawing from our experience in an academic medical center that supports a number of multisite research projects and quality improvement studies, we articulate a framework that will support the sharing of phenotype definitions across research and health care use cases, and highlight gaps and areas that need attention and collaborative solutions. FRAMEWORK: An infrastructure for re-using computable phenotype definitions and sharing experience across health care delivery and clinical research applications includes: access to a collection of existing phenotype definitions, information to evaluate their appropriateness for particular applications, a knowledge base of implementation guidance, supporting tools that are user-friendly and intuitive, and a willingness to use them. NEXT STEPS: We encourage prospective researchers and health administrators to re-use existing EHR-based condition definitions where appropriate and share their results with others to support a national culture of learning health care. There are a number of federally funded resources to support these activities, and research sponsors should encourage their use.
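The idea of a computable phenotype definition can be made concrete with a small sketch. This is only an illustration under assumed conventions: the cohort criteria, code set, field names, and threshold below are hypothetical examples, not definitions taken from the paper or from any validated repository.

```python
# Minimal sketch of a computable phenotype definition (hypothetical criteria).
# A shared, validated definition would reference a curated code set and a
# specific EHR data model rather than the toy record layout used here.

TYPE2_DIABETES_ICD10_PREFIXES = ("E11",)   # illustrative diagnosis-code set
HBA1C_THRESHOLD_PERCENT = 6.5              # illustrative laboratory criterion

def matches_phenotype(patient: dict) -> bool:
    """Return True if the patient record satisfies the example phenotype.

    `patient` is assumed to look like:
    {"diagnosis_codes": ["E11.9", ...], "hba1c_results": [7.1, ...]}
    """
    has_diagnosis = any(
        code.startswith(TYPE2_DIABETES_ICD10_PREFIXES)
        for code in patient.get("diagnosis_codes", [])
    )
    has_lab = any(
        value >= HBA1C_THRESHOLD_PERCENT
        for value in patient.get("hba1c_results", [])
    )
    return has_diagnosis or has_lab

if __name__ == "__main__":
    example = {"diagnosis_codes": ["E11.9"], "hba1c_results": [5.8]}
    print(matches_phenotype(example))  # True: the diagnosis criterion is met
```

Sharing a definition in an executable form like this, together with the evaluation information and implementation guidance the framework calls for, is what allows two sites to identify semantically equivalent cohorts.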
Abstract:
The NRS state data quality standards identify the policies, processes and materials that states and local programs should have in place to collect valid and reliable data for the National Reporting System (NRS). The Division of Adult Education and Literacy (DAEL) within the Office of Vocational and Adult Education developed the standards to define the characteristics of high-quality state and local data collection systems for the NRS. The standards provide an organized way for DAEL to understand the quality of NRS data collection within the states and also provide guidance to states on how to improve their systems. States are to complete this checklist, which incorporates the standards, with their annual NRS data submission to rate their level of implementation of the standards. The accompanying policy document describes DAEL’s requirements for state conformance to the standards and explains the use of the information from this checklist.
Abstract:
Like many businesses and government agencies, the Iowa Department of Corrections has been measuring our results for some years now. Certain performance measures are collected and reported to the Governor as part of the Director’s Flexible Performance Agreement used to evaluate the DOC Director. Updates of these measures are forwarded to DOC staff on a quarterly basis. In addition, the Iowa Department of Management requires each state agency to report on certain performance measures as part of Iowa’s effort to ensure accountability in state government. These measures and their progress are posted to www.ResultsIowa.org.
Abstract:
The aging of the population has become a critical global issue. Data and information concerning elderly citizens are growing rapidly but are not well organized, and this unstructured information creates problems for decision makers. In a digital world, information technology is an obvious tool for addressing these problems, and data, information, and knowledge are crucial components of a successful IT service system. It is therefore necessary to study how to organize and govern data about elderly citizens drawn from multiple sources. The research is motivated by the lack of an internationally accepted, holistic framework for data governance. Its scope is limited to the healthcare domain, although the results can be applied to other areas. The research takes the ongoing work of Dahlberg and Nokkala (2015) as its theoretical starting point; that work classifies existing data sources and their characteristics from a managerial perspective. Existing frameworks used by international and national organizations are then surveyed to identify those currently in use, and useful, for compiling data on elderly citizens. The international organizations examined were selected on the basis of their reputation and the reliability of the available information. The two countries examined at national level provide contrasting points of view: Australia is a forerunner in IT governance, while Thailand is a country whose current situation the author knows well. Discussion of the frameworks at both levels illustrates their main characteristics: international organizations give precedence to interoperability in exchanging data and information between different parties, whereas at national level the key issue is country-wide acknowledgement and adoption of a framework, without which it cannot be effective. The thesis then presents summary tables assessing how well the framework proposed by Dahlberg and Nokkala helps to consolidate data from various sources with different formats, hierarchies, structures, velocities, and other storage attributes, and closes with suggestions and recommendations for future research.
Abstract:
Standardisation of microsatellite allele profiles between laboratories is of fundamental importance to the transferability of genetic fingerprint data and the identification of clonal individuals held at multiple sites. Here we describe two methods of standardisation applied to the microsatellite fingerprinting of 429 Theobroma cacao L. trees representing 345 accessions held in the world’s largest Cocoa Intermediate Quarantine facility: the use of a partial allelic ladder through the production of 46 cloned and sequenced allelic standards (AJ748464 to AJ748509), and the use of standard genotypes selected to display a diverse allelic range. Until now a lack of accurate and transferable identification information has impeded efforts to genetically improve the cocoa crop. To address this need, a global initiative to fingerprint all international cocoa germplasm collections using a common set of 15 microsatellite markers is in progress. Data reported here have been deposited with the International Cocoa Germplasm Database and form the basis of a searchable resource for clonal identification. To our knowledge, this is the first quarantine facility to be completely genotyped using microsatellite markers for the purpose of quality control and clonal identification. Implications of the results for retrospective tracking of labelling errors are briefly explored.
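To illustrate how a fingerprint database of this kind supports clonal identification, here is a rough sketch of an exact-match lookup. The marker names, allele sizes, and accession labels are hypothetical placeholders; a production comparison would also handle missing loci and allele-calling tolerances between laboratories.

```python
# Toy sketch of clonal identification by matching microsatellite profiles.
# Marker names, allele sizes (base pairs), and accession IDs are hypothetical.

REFERENCE_PROFILES = {
    "ACCESSION-A": {"markerA": (132, 140), "markerB": (228, 228)},
    "ACCESSION-B": {"markerA": (132, 146), "markerB": (224, 230)},
}

def identify(query: dict) -> list[str]:
    """Return accessions whose profile agrees with the query at every shared marker."""
    matches = []
    for accession, profile in REFERENCE_PROFILES.items():
        shared = set(profile) & set(query)
        if shared and all(profile[m] == query[m] for m in shared):
            matches.append(accession)
    return matches

print(identify({"markerA": (132, 140), "markerB": (228, 228)}))  # ['ACCESSION-A']
```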
Abstract:
Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team’s development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change’s (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
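The "generic structure plus controlled vocabulary" idea can be sketched very simply. The field names and vocabulary terms below are hypothetical, not actual CIM content; the point is only that a flexible record format is narrowed to valid instances by an externally supplied vocabulary.

```python
# Illustrative sketch: a generic metadata record restricted by a controlled
# vocabulary, in the spirit of the CIM design described above. All field names
# and vocabulary terms are hypothetical examples.

CONTROLLED_VOCAB = {
    "activity_type": {"simulation", "ensemble", "experiment"},
    "calendar": {"gregorian", "360_day", "noleap"},
}

def vocabulary_violations(record: dict) -> list[str]:
    """Return descriptions of any fields whose values fall outside the vocabulary."""
    errors = []
    for field, allowed in CONTROLLED_VOCAB.items():
        value = record.get(field)
        if value is not None and value not in allowed:
            errors.append(f"{field}={value!r} not in {sorted(allowed)}")
    return errors

record = {"activity_type": "simulation", "calendar": "gregorian", "notes": "free text"}
print(vocabulary_violations(record))  # [] -> the record is a valid instance
```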
Abstract:
ISO19156 Observations and Measurements (O&M) provides a standardised framework for organising information about the collection of information about the environment. Here we describe the implementation of a specialisation of O&M for environmental data, the Metadata Objects for Linking Environmental Sciences (MOLES3). MOLES3 provides support for organising information about data, and for user navigation around data holdings. The implementation described here, “CEDA-MOLES”, also supports data management functions for the Centre for Environmental Data Archival, CEDA. The previous iteration of MOLES (MOLES2) saw active use over five years, being replaced by CEDA-MOLES in late 2014. During that period important lessons were learnt both about the information needed and about how to design and maintain the necessary information systems. In this paper we review the problems encountered in MOLES2; how and why CEDA-MOLES was developed and engineered; the migration of information holdings from MOLES2 to CEDA-MOLES; and, finally, provide an early assessment of MOLES3 (as implemented in CEDA-MOLES) and its limitations. Key drivers for the MOLES3 development included the necessity for improved data provenance, for further structured information to support ISO19115 discovery metadata export (for EU INSPIRE compliance), and for appropriate fixed landing pages for Digital Object Identifiers (DOIs) in the presence of evolving datasets. Key lessons learned included the importance of minimising information structure in free-text fields, and the necessity to support as much agility in the information infrastructure as possible without compromising maintainability, both for those using the systems internally and externally (e.g. citing into the information infrastructure) and for those responsible for the systems themselves. The migration itself needed to ensure continuity of service and traceability of archived assets.
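For readers unfamiliar with ISO19156, the core O&M pattern that MOLES3 specialises can be summarised in a few lines. The dataclass below is a hypothetical illustration (it is not CEDA-MOLES code); the property names follow the O&M conceptual model.

```python
# Rough sketch of the ISO19156 O&M observation pattern (illustrative only).
from dataclasses import dataclass

@dataclass
class Observation:
    feature_of_interest: str   # the thing observed, e.g. a site or air mass
    observed_property: str     # the phenomenon, e.g. "air_temperature"
    procedure: str             # the sensor, instrument, or simulation used
    result: str                # reference to the archived data holding the values

obs = Observation(
    feature_of_interest="example-site-001",          # placeholder identifier
    observed_property="air_temperature",
    procedure="mast-mounted thermometer",
    result="https://example.org/datasets/obs-001",   # placeholder landing page
)
print(obs.observed_property)
```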
Abstract:
We analyzed more than 200 OSIRIS NAC images with a pixel scale of 0.9-2.4 m/pixel of comet 67P/Churyumov-Gerasimenko (67P) that have been acquired from onboard the Rosetta spacecraft in August and September 2014 using stereo-photogrammetric methods (SPG). We derived improved spacecraft position and pointing data for the OSIRIS images and a high-resolution shape model that consists of about 16 million facets (2 m horizontal sampling) and a typical vertical accuracy at the decimeter scale. From this model, we derive a volume for the northern hemisphere of 9.35 km³ ± 0.1 km³. With the assumption of a homogeneous density distribution and taking into account the current uncertainty of the position of the comet's center-of-mass, we extrapolated this value to an overall volume of 18.7 km³ ± 1.2 km³, and, with a current best estimate of 1.0 × 10¹³ kg for the mass, we derive a bulk density of 535 kg/m³ ± 35 kg/m³. Furthermore, we used SPG methods to analyze the rotational elements of 67P. The rotational period for August and September 2014 was determined to be 12.4041 ± 0.0004 h. For the orientation of the rotational axis (z-axis of the body-fixed reference frame) we derived a precession model with a half-cone angle of 0.14°, a cone center position at 69.54°/64.11° (RA/Dec J2000 equatorial coordinates), and a precession period of 10.7 days. For the definition of zero longitude (x-axis orientation), we finally selected the boulder-like Cheops feature on the big lobe of 67P and fixed its spherical coordinates to 142.35° right-hand-rule eastern longitude and −0.28° latitude. This completes the definition of the new Cheops reference frame for 67P. Finally, we defined cartographic mapping standards for common use and combined analyses of scientific results that have been obtained not only within the OSIRIS team, but also within other groups of the Rosetta mission.
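The quoted bulk density follows directly from the mass and volume given above; a quick check of the arithmetic (all values taken from the abstract):

```python
# Arithmetic check of the bulk density quoted above (values from the abstract).
mass_kg = 1.0e13              # current best estimate of 67P's mass
volume_km3 = 2 * 9.35         # northern-hemisphere volume extrapolated to the whole body
volume_m3 = volume_km3 * 1e9  # 1 km^3 = 1e9 m^3
density = mass_kg / volume_m3
print(f"{density:.0f} kg/m^3")  # ~535 kg/m^3, matching the quoted value
```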
Abstract:
In recent years, a variety of systems have been developed that export the workflows used to analyze data and make them part of published articles. We argue that the workflows published in current approaches are dependent on the specific codes used for execution, the specific workflow system used, and the specific workflow catalogs where they are published. In this paper, we describe a new approach that addresses these shortcomings and makes workflows more reusable through: 1) the use of abstract workflows to complement executable workflows, so that they remain reusable when the execution environment is different, 2) the publication of both abstract and executable workflows using standards such as the Open Provenance Model that can be imported by other workflow systems, and 3) the publication of workflows as Linked Data, resulting in open, web-accessible workflow repositories. We illustrate this approach using a complex workflow that we re-created from an influential publication that describes the generation of 'drugomes'.
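As a concrete illustration of the third point, a workflow description can be published as Linked Data with a few RDF triples. The sketch below uses rdflib with a made-up namespace and resource names; a real export would use Open Provenance Model (or PROV) terms rather than the placeholder properties shown here.

```python
# Toy example: exposing a workflow description as Linked Data with rdflib.
# The namespace, resources, and properties are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/workflows#")

g = Graph()
workflow = EX["drugome-analysis"]            # abstract workflow resource
step = EX["sequence-alignment-step"]         # one processing step
g.add((workflow, RDF.type, EX.AbstractWorkflow))
g.add((workflow, RDFS.label, Literal("Drugome analysis (abstract workflow)")))
g.add((workflow, EX.hasStep, step))
g.add((step, RDFS.label, Literal("Sequence alignment")))

print(g.serialize(format="turtle"))          # Turtle that any RDF store can ingest
```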
Abstract:
Within the European Union, member states are setting up official data catalogues as entry points to access PSI (Public Sector Information). In this context, it is important to describe the metadata of these data portals, i.e., of data catalogs, and to allow for interoperability among them. To tackle these issues, the Government Linked Data Working Group developed DCAT (Data Catalog Vocabulary), an RDF vocabulary for describing the metadata of data catalogs. This topic report analyzes the current use of the DCAT vocabulary in several European data catalogs and proposes some recommendations to deal with inconsistent use of the metadata across countries. Enriching such metadata vocabularies with multilingual descriptions, and accounting for cultural divergences, is seen as a necessary step to guarantee interoperability and ensure wider adoption.
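A minimal DCAT description shows what such catalog metadata looks like in practice. The catalog and dataset URIs below are hypothetical examples; the vocabulary terms (dcat:Catalog, dcat:Dataset, dcat:dataset, dct:title) are standard DCAT/Dublin Core terms, and the two language-tagged titles hint at the multilingual enrichment discussed above.

```python
# Minimal sketch of DCAT metadata for a data catalog, built with rdflib.
# The example.org URIs are placeholders; the vocabulary terms are standard DCAT/DCTERMS.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, DCTERMS

DCAT = Namespace("http://www.w3.org/ns/dcat#")
EX = Namespace("http://example.org/catalog/")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

catalog = EX["psi-catalog"]
dataset = EX["population-statistics"]

g.add((catalog, RDF.type, DCAT.Catalog))
g.add((catalog, DCTERMS.title, Literal("National PSI catalogue", lang="en")))
g.add((catalog, DCTERMS.title, Literal("Catalogue national des ISP", lang="fr")))
g.add((catalog, DCAT.dataset, dataset))
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Population statistics", lang="en")))

print(g.serialize(format="turtle"))
```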
Abstract:
"WH67-418."
Abstract:
"WH67-300."