Abstract:
Historical, i.e. pre-1957, upper-air data are a valuable source of information on the state of the atmosphere, in some parts of the world dating back to the early 20th century. However, to date, reanalyses have only partially made use of these data, and only of observations made after 1948. Even for the period between 1948 (the starting year of the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis) and the International Geophysical Year in 1957 (the starting year of the ERA-40 reanalysis), when the global upper-air coverage reached more or less its current status, many observations have not yet been digitised. The Comprehensive Historical Upper-Air Network (CHUAN) has already compiled a large collection of pre-1957 upper-air data. In the framework of the European project ERA-CLIM (European Reanalysis of Global Climate Observations), significant amounts of additional upper-air data have been catalogued (> 1.3 million station days), imaged (> 200 000 images) and digitised (> 700 000 station days) in order to prepare a new input data set for upcoming reanalyses. The records cover large parts of the globe, focussing on regions that have so far been less well covered, such as the tropics, the polar regions and the oceans, and on very early upper-air data from Europe and the US. The total number of digitised/inventoried records is 61/101 for moving upper-air data, i.e. data from ships, etc., and 735/1783 for fixed upper-air stations. Here, we give a detailed description of the resulting data set, including the metadata and the quality checking procedures applied. The data will be included in the next version of CHUAN. The data are available at doi:10.1594/PANGAEA.821222.
Abstract:
In a fast-changing world with growing concerns about biodiversity loss and an increasing number of animal and human diseases emerging from wildlife, the need for effective wildlife health investigations, including both surveillance and research, is now widely recognized. However, procedures applicable to and knowledge acquired from studies related to domestic animal and human health can be only partly extrapolated to wildlife. This article identifies requirements and challenges inherent in wildlife health investigations, reviews important definitions and novel health investigation methods, and proposes tools and strategies for effective wildlife health surveillance programs. Impediments to wildlife health investigations are largely related to the zoological, behavioral and ecological characteristics of wildlife populations and to limited access to investigation materials. These concerns should not be viewed as insurmountable, but it is imperative that they are considered in study design, data analysis and result interpretation. It is particularly crucial to remember that health surveillance does not begin in the laboratory but in the field. In this context, participatory approaches and mutual respect are essential. Furthermore, interdisciplinarity and open minds are necessary because a wide range of tools and knowledge from different fields needs to be integrated in wildlife health surveillance and research. The identification of factors contributing to disease emergence requires the comparison of health and ecological data over time and among geographical regions. Finally, there is a need for the development and validation of diagnostic tests for wildlife species and for data on free-ranging population densities. Training of health professionals in wildlife diseases should also be improved. Overall, the article particularly emphasizes five needs of wildlife health investigations: communication and collaboration; use of synergies and triangulation approaches; investments for the long term; systematic collection of metadata; and harmonization of definitions and methods.
Abstract:
This paper describes the RNetCDF package (version 1.6), an interface for reading and writing files in Unidata NetCDF format, and gives an introduction to the NetCDF file format. NetCDF is a machine-independent binary file format which allows storage of different types of array-based data, along with short metadata descriptions. The package presented here allows access to the most important functions of the NetCDF C interface for reading, writing, and modifying NetCDF datasets. In this paper, we give a short overview of the NetCDF file format and show usage examples of the package.
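As a brief illustration of the workflow described above, the following is a minimal sketch of writing and re-reading a small NetCDF file with metadata. It uses Python's netCDF4 bindings rather than the RNetCDF package itself, and the file name, dimension, variable and attribute names are purely illustrative.

```python
# Minimal sketch: writing and re-reading a small NetCDF file with metadata.
# Uses Python's netCDF4 bindings rather than RNetCDF; all names are illustrative.
import numpy as np
from netCDF4 import Dataset

# Create a file with one (unlimited) dimension, one variable and some metadata.
with Dataset("example.nc", "w") as nc:
    nc.title = "Example upper-air temperatures"          # global attribute
    nc.createDimension("time", None)                      # unlimited dimension
    temp = nc.createVariable("temperature", "f4", ("time",))
    temp.units = "K"                                      # variable attribute
    temp[0:3] = np.array([251.3, 252.1, 250.8], dtype="f4")

# Read it back.
with Dataset("example.nc", "r") as nc:
    print(nc.title)
    print(nc.variables["temperature"].units)
    print(nc.variables["temperature"][:])
```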
Abstract:
Upper-air observations are a fundamental data source for global atmospheric data products, but their uncertainties, particularly in the early years, are not well known. Most of the early observations, which have now been digitized, are prone to a large variety of undocumented uncertainties (errors) that need to be quantified, e.g., for their assimilation in reanalysis projects. We apply a novel approach to estimate errors in upper-air temperature, geopotential height, and wind observations from the Comprehensive Historical Upper-Air Network for the time period from 1923 to 1966. We distinguish between random errors, biases, and a term that quantifies the representativity of the observations. The method is based on a comparison of neighboring observations and is hence independent of metadata, making it applicable to a wide range of observational data sets. The estimated mean random errors for all observations within the study period are 1.5 K for air temperature, 1.3 hPa for pressure, 3.0 m s−1 for wind speed, and 21.4° for wind direction. The estimates are compared to results of previous studies and analyzed with respect to their spatial and temporal variability.
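To illustrate the general idea behind such neighbour-based error estimation, the sketch below derives a crude random-error estimate from differences between two hypothetical co-located temperature series. It assumes independent random errors of equal size at both stations and ignores the bias and representativity terms that the study treats separately; all values are invented.

```python
# Sketch of error estimation from neighbouring observations (illustrative only).
# Assumption: two neighbouring series share the true signal and have independent,
# equally sized random errors, so var(diff) is roughly 2 * sigma_random**2.
import numpy as np

def random_error_from_neighbours(obs_a, obs_b):
    """Crude random-error estimate from paired temperature observations [K]."""
    diff = np.asarray(obs_a) - np.asarray(obs_b)
    diff = diff - diff.mean()        # remove the relative bias between the stations
    # Ignoring representativity, each station's random error follows from var(diff)/2.
    return np.sqrt(diff.var(ddof=1) / 2.0)

# Hypothetical co-located soundings (made-up values):
a = np.array([251.3, 249.8, 252.4, 250.1, 251.0])
b = np.array([250.7, 250.9, 251.6, 249.2, 252.3])
print(f"estimated random error: {random_error_from_neighbours(a, b):.2f} K")
```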
Abstract:
Many observed time series of the global radiosonde or PILOT networks exist as fragments distributed over different archives. Identifying and merging these fragments can enhance their value for studies on the three-dimensional spatial structure of climate change. The Comprehensive Historical Upper-Air Network (CHUAN version 1.7), which was substantially extended in 2013, and the Integrated Global Radiosonde Archive (IGRA) are the most important collections of upper-air measurements taken before 1958. CHUAN (tracked) balloon data start in 1900, with higher numbers from the late 1920s onward, whereas IGRA data start in 1937. However, a substantial fraction of those measurements were not taken at synoptic times (preferably 00:00 or 12:00 GMT) and were reported on altitude levels instead of standard pressure levels. To make them comparable with more recent data, the records have been brought to synoptic times and standard pressure levels using state-of-the-art interpolation techniques, employing geopotential information from the National Oceanic and Atmospheric Administration (NOAA) 20th Century Reanalysis (NOAA 20CR). From 1958 onward, the European Re-Analysis archives (ERA-40 and ERA-Interim) available at the European Centre for Medium-Range Weather Forecasts (ECMWF) are the main data sources. These are easier to use, but PILOT data still have to be interpolated to standard pressure levels. Fractions of the same records distributed over different archives have been merged, if necessary, taking care that the data remain traceable back to their original sources. Where possible, station IDs assigned by the World Meteorological Organization (WMO) have been allocated to the station records. For some records that have never been identified by a WMO ID, a local ID above 100 000 has been assigned. The merged data set contains 37 wind records longer than 70 years and 139 temperature records longer than 60 years. It can be seen as a useful basis for further data processing steps, most notably homogenization and gridding, after which it should be a valuable resource for climatological studies. Homogeneity adjustments for wind using the NOAA 20CR as a reference are described in Ramella Pralungo and Haimberger (2014). Reliable homogeneity adjustments for temperature beyond 1958 using a surface-data-only reanalysis such as the NOAA 20CR as a reference have yet to be created. All the archives and metadata files are available in ASCII and netCDF format in the PANGAEA archive.
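The vertical interpolation step mentioned above can be illustrated with a minimal sketch: wind observed on altitude levels is brought onto standard pressure levels by interpolating in height against a geopotential-height profile of the kind a reanalysis would supply. All numbers below are invented, and the actual processing chain described in the paper is considerably more elaborate.

```python
# Sketch: bringing wind observed on altitude levels onto standard pressure levels,
# using an invented geopotential-height profile standing in for reanalysis data.
import numpy as np

# Wind speed observed at altitude levels [m] (made-up values):
obs_height = np.array([500.0, 1500.0, 3100.0, 5800.0, 9200.0])
obs_wind   = np.array([6.0,   9.5,    14.0,   22.0,   31.0])   # m s-1

# Standard pressure levels [hPa] and the corresponding geopotential heights [m]
# that a reanalysis would supply for this time and place (also made up):
std_plev   = np.array([925.0, 850.0, 700.0, 500.0, 300.0])
std_height = np.array([760.0, 1460.0, 3010.0, 5570.0, 9160.0])

# Linear interpolation in height onto the standard-level heights:
wind_on_plev = np.interp(std_height, obs_height, obs_wind)

for p, w in zip(std_plev, wind_on_plev):
    print(f"{p:6.0f} hPa: {w:5.1f} m s-1")
```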
Abstract:
The international standardisation of national meteorological networks in the late nineteenth century excluded biotic and abiotic observations from the objects to be henceforth published in the yearbooks. Skilled amateurs in charge of three meteorological stations in Canton Schaffhausen (Switzerland), and their successors, managed to continuously publish phenological observations gathered in the station environment alongside meteorological data in the official gazette of this Canton from 1876 to 1950, i.e. up to the onset of phenological network observations in Switzerland. At least ten observations are available for 51 plant and animal phenological phases. Long series (N ≥ 30) were assembled for 14 plant phenological observations, among them the first flowering of snowdrop (Galanthus nivalis), hazel (Corylus avellana), horse chestnut (Aesculus hippocastanum), winter rye (Secale cereale) and grape vine (Vitis vinifera), as well as the beginning of hay, winter rye and grape harvesting. Only the bare data were published, without any metadata. The quality of 10 long series (N ≥ 60) was checked by investigating the biographical and biological background of key observers and submitting their evidence to graphical (meteorological plausibility check of outliers) and statistical verification. The long-term observers, mostly schoolteachers and high school professors, had a good knowledge of botany, and the quality of their observations – disregarding obvious printing errors – is surprisingly good. Seven long series were completed with applicable data from the Swiss Phenological Network up to 2011. Besides anthropogenic shifts (beginning of hay and grape harvest), there is a contrast between a global warming-related earlier flowering of snowdrop and hazel and a later occurrence of grape vine flowering.
Abstract:
The International Surface Pressure Databank (ISPD) is the world's largest collection of global surface and sea-level pressure observations. It was developed by extracting observations from established international archives, through international cooperation with data recovery facilitated by the Atmospheric Circulation Reconstructions over the Earth (ACRE) initiative, and directly by contributing universities, organizations, and countries. The dataset period is currently 1768–2012 and consists of three data components: observations from land stations, marine observing systems, and tropical cyclone best track pressure reports. Version 2 of the ISPD (ISPDv2) was created to be observational input for the Twentieth Century Reanalysis Project (20CR) and contains the quality control and assimilation feedback metadata from the 20CR. Since then, it has been used for various general climate and weather studies, and an updated version 3 (ISPDv3) has been used in the ERA-20C reanalysis in connection with the European Reanalysis of Global Climate Observations project (ERA-CLIM). The focus of this paper is on the ISPDv2 and the inclusion of the 20CR feedback metadata. The Research Data Archive at the National Center for Atmospheric Research provides data collection and access for the ISPDv2, and will provide access to future versions.
Abstract:
Libraries of learning objects may serve as a basis for deriving course offerings that are customized to the needs of different learning communities or even individuals. Several ways of organizing this course composition process are discussed. Course composition requires a clear understanding of the dependencies between the learning objects. We therefore discuss the metadata for object relationships proposed in different standardization projects, and especially those suggested in the Dublin Core Metadata Initiative. Based on these metadata we construct adjacency matrices and graphs. We show how Gozinto-type computations can be used to determine direct and indirect prerequisites for certain learning objects. The metadata may also be used to define integer programming models which can be applied to support the instructor in formulating specifications for selecting objects or which allow a computer agent to select learning objects automatically. Such decision models could also be helpful for a learner navigating through a library of learning objects. We also sketch a graph-based procedure for manual or automatic sequencing of the learning objects.
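The following sketch illustrates, on an invented prerequisite graph, how a Gozinto-type computation of this kind can recover direct and indirect prerequisites from an adjacency matrix: for an acyclic graph, the series I + A + A^2 + ... (equivalently (I - A)^-1) marks every object that is reachable as a prerequisite. The learning-object names and matrix entries are purely illustrative.

```python
# Sketch of a Gozinto-type prerequisite computation on a small, invented
# learning-object graph. A[i, j] = 1 means object j is a direct prerequisite
# of object i. For an acyclic graph, (I - A)^-1 = I + A + A^2 + ... gives all
# direct and indirect prerequisite relations, like a total-requirements matrix.
import numpy as np

objects = ["Intro", "Vectors", "Matrices", "Eigenvalues"]
A = np.array([
    [0, 0, 0, 0],   # Intro needs nothing
    [1, 0, 0, 0],   # Vectors needs Intro
    [1, 1, 0, 0],   # Matrices needs Intro and Vectors
    [0, 1, 1, 0],   # Eigenvalues needs Vectors and Matrices
])

n = len(objects)
total = np.linalg.inv(np.eye(n) - A)   # equals I + A + A^2 + ... since A is nilpotent

for i, name in enumerate(objects):
    prereqs = [objects[j] for j in range(n) if j != i and total[i, j] > 0]
    print(f"{name:12s} requires: {', '.join(prereqs) if prereqs else '-'}")
```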
Abstract:
Specification consortia and standardization bodies concentrate on e-Learning objects to ensure reusability of content. Learning objects may be collected in a library and used for deriving course offerings that are customized to the needs of different learning communities. However, customization of courses is possible only if the logical dependencies between the learning objects are known. Metadata for describing object relationships have been proposed in several e-Learning specifications. This paper discusses the customization potential of e-Learning objects but also the pitfalls that exist if content is customized inappropriately.
Abstract:
A wide variety of spatial data collection efforts are ongoing throughout local, state and federal agencies, private firms and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want and desire. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data and those using GIS to meet specific needs to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets that are of common or compatible formats, documenting their datasets in a standardized metadata format or making their datasets more readily available to others through Data Clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, that supports user-friendly metadata creation, open access licenses, archival services and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.
Abstract:
This paper describes the procedures used to create a distributed collection of topographic maps of the Austro-Hungarian Empire, the Spezialkarte der Österreichisch-ungarischen Monarchie im Maße 1:75,000 der Natur. This set of maps was published in Vienna between 1877 and 1914. The part of the set used in this project includes 776 sheets; all sheets from all editions number over 3,665. The paper contains detailed information on how the maps were converted to digital images, how metadata were prepared, and how Web-browser access was created using ArcIMS Metadata Server. The project, funded by a 2004 National Leadership Grant from the Institute of Museum and Library Services (IMLS), was a joint project of the Homer Babbidge Library Map and Geographic Information Center at the University of Connecticut, the New York Public Library, and the American Geographical Society's Map Library at the University of Wisconsin-Milwaukee.
Abstract:
Purpose – The purpose of this paper is to describe the tools and strategies that were employed by C/W MARS to successfully develop and implement the Digital Treasures digital repository. Design/methodology/approach – This paper outlines the planning and subsequent technical issues that arise when implementing a digitization project on the scale of a large, multi-type, automated library network. Workflow solutions addressed include synchronous online metadata record submissions from multiple library sources and the delivery of collection-level use statistics to participating library administrators. The importance of standards-based descriptive metadata and the role of project collaboration are also discussed. Findings – From the time of its initial planning, the Digital Treasures repository was fully implemented in six months. The discernible and statistically quantified online discovery and access of actual digital objects greatly assisted libraries that were unsure of their own staffing costs/benefits in deciding to join the repository. Originality/value – This case study may serve as an example of initial planning, workflow and final implementation strategies for new repositories in both the general and library consortium environment. Keywords – Digital repositories, Library networks, Data management. Paper type – Case study
Abstract:
This paper describes the creation of a GIS database index to the collection of historical aerial photographs of Connecticut housed in the Map and Geographic Information Center in the Homer Babbidge Library at the University of Connecticut. The index allows patrons to search for scanned aerial photograph images for a specific location across multiple years and to retrieve digital scans from the Library server. Procedures for scanning and georeferencing the images, preparing metadata for the images, and creating the GIS database index are described.
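A minimal sketch of the kind of lookup such an index supports is given below: each scanned photograph is described by its flight year, its footprint as a bounding box, and the path of the image file on the library server, and a point-in-box query returns the matching scans. All records, coordinates and paths are invented for illustration; the actual index described in the paper is a GIS database rather than this simplified in-memory structure.

```python
# Simplified, invented stand-in for an aerial-photograph index: bounding-box
# footprints plus flight year, queried by a point location.
from dataclasses import dataclass

@dataclass
class PhotoRecord:
    year: int
    xmin: float
    ymin: float
    xmax: float
    ymax: float
    image_path: str

index = [
    PhotoRecord(1934, 200000, 800000, 205000, 805000, "scans/1934/sheet_12.tif"),
    PhotoRecord(1951, 199000, 799000, 204000, 804000, "scans/1951/sheet_07.tif"),
    PhotoRecord(1965, 210000, 810000, 215000, 815000, "scans/1965/sheet_03.tif"),
]

def photos_at(x, y, records):
    """Return all photographs whose footprint contains the point (x, y)."""
    return [r for r in records if r.xmin <= x <= r.xmax and r.ymin <= y <= r.ymax]

# Example query: which years cover this (invented) location?
for rec in photos_at(202500, 802500, index):
    print(rec.year, rec.image_path)
```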