901 results for Data dissemination and sharing
Abstract:
Pteropods are a group of holoplanktonic gastropods for which global biomass distribution patterns remain poorly resolved. The aim of this study was to collect and synthesize existing pteropod (Gymnosomata, Thecosomata and Pseudothecosomata) abundance and biomass data in order to evaluate the global distribution of pteropod carbon biomass, with a particular emphasis on its seasonal, temporal and vertical patterns. We collected 25 902 data points from several online databases and a number of scientific articles. The biomass data have been gridded onto a 360 x 180° grid, with a vertical resolution of 33 WOA depth levels, and converted to NetCDF format. Data were collected between 1951 and 2010, with sampling depths ranging from 0 to 1000 m. Pteropod biomass data were either extracted directly or derived by converting abundance to biomass with pteropod-specific length-to-weight conversions. In the Northern Hemisphere (NH) the data were distributed evenly throughout the year, whereas sampling in the Southern Hemisphere (SH) was biased towards the austral summer months. 86% of all biomass values were located in the NH, most (42%) within the latitudinal band of 30-50° N. The range of global biomass values spanned more than three orders of magnitude, with a mean and median biomass concentration of 8.2 mg C l-1 (SD = 61.4) and 0.25 mg C l-1, respectively, for all data points, and a mean of 9.1 mg C l-1 (SD = 64.8) and a median of 0.25 mg C l-1 for non-zero biomass values. The highest mean and median biomass concentrations in the NH were located between 40-50° N (mean biomass: 68.8 mg C l-1 (SD = 213.4); median biomass: 2.5 mg C l-1), while in the SH they were within the 70-80° S latitudinal band (mean: 10.5 mg C l-1 (SD = 38.8); median: 0.2 mg C l-1). Biomass values were lowest in the equatorial regions. 
A broad range of biomass concentrations was observed at all depths, with the biomass peak located in the surface layer (0-25 m) and values generally decreasing with depth. However, biomass peaks were located at different depths in different ocean basins: 0-25 m in the N Atlantic, 50-100 m in the Pacific, 100-200 m in the Arctic, 200-500 m in the Brazilian region and >500 m in the Indo-Pacific region. Biomass in the NH was relatively invariant over the seasonal cycle, whereas it was more seasonally variable in the SH. The collected database provides a valuable tool for modellers studying ecosystem processes and global biogeochemical cycles.
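The gridding step described above can be sketched as a simple binning routine. The function below is an illustrative stand-in only: the input field names and the per-cell averaging rule are assumptions, and the real product additionally resolves 33 WOA depth levels and writes NetCDF, which is omitted here.

```python
# Sketch: bin scattered biomass observations onto a 1-degree x 1-degree
# global grid (360 x 180 cells). Field names are hypothetical.
from collections import defaultdict

def grid_biomass(points):
    """points: iterable of (lat, lon, biomass) tuples.
    Returns {(lat_idx, lon_idx): mean biomass} on a 180 x 360 index grid."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for lat, lon, biomass in points:
        lat_idx = min(int(lat + 90.0), 179)   # 0..179, 1-degree bands
        lon_idx = min(int(lon % 360.0), 359)  # 0..359, 1-degree bands
        sums[(lat_idx, lon_idx)] += biomass
        counts[(lat_idx, lon_idx)] += 1
    # cell value = mean of all observations falling in that cell
    return {cell: sums[cell] / counts[cell] for cell in sums}

cells = grid_biomass([(45.2, -20.4, 2.5), (45.7, -20.9, 1.5), (-60.0, 10.0, 0.2)])
```

The first two points fall in the same 1-degree cell and are averaged; the third lands in its own cell.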
Abstract:
The large discrepancy between field and laboratory measurements of mineral reaction rates is a long-standing problem in the earth sciences, often attributed to factors extrinsic to the mineral itself. Nevertheless, differences in reaction rate are also observed within laboratory measurements, raising the possibility of intrinsic variations as well. Critical insight is available from analysis of the relationship between the reaction rate and its distribution over the mineral surface. This analysis recognizes the fundamental variance of the rate. The resulting anisotropic rate distributions are completely obscured by the common practice of surface area normalization. In a simple experiment using a single crystal and its polycrystalline counterpart, we demonstrate the sensitivity of dissolution rate to grain size, results that undermine the use of "classical" rate constants. Comparison of selected published crystal surface step retreat velocities (Jordan and Rammensee, 1998) as well as large single crystal dissolution data (Busenberg and Plummer, 1986) provides further evidence of this fundamental variability. Our key finding is that the use of a single-valued "mean" rate or rate constant as a function of environmental conditions is unsubstantiated. Reactivity predictions and long-term reservoir stability calculations based on laboratory measurements are thus not directly applicable to natural settings without a probabilistic approach. Such a probabilistic approach must incorporate both the variation of surface energy as a general range (intrinsic variation) and constraints on this variation owing to the heterogeneity of complex materials (e.g., the density of domain borders). We suggest the introduction of surface energy spectra (or the resulting rate spectra) containing information about the probability of existing rate ranges and the critical modes of surface energy.
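The "rate spectrum" idea advocated above amounts to reporting a probability distribution of rates rather than a single mean rate constant. A minimal sketch, with purely illustrative bin counts and rate values (not measured data):

```python
# Sketch: build a discrete "rate spectrum" -- the empirical probability of
# each rate range -- from a set of locally measured dissolution rates.
def rate_spectrum(rates, n_bins=4):
    """Return [(bin_lo, bin_hi, probability), ...] for a list of rates."""
    lo, hi = min(rates), max(rates)
    width = (hi - lo) / n_bins or 1.0
    bins = [0] * n_bins
    for r in rates:
        idx = min(int((r - lo) / width), n_bins - 1)  # clamp the max value
        bins[idx] += 1
    total = len(rates)
    return [(lo + i * width, lo + (i + 1) * width, bins[i] / total)
            for i in range(n_bins)]

# Illustrative local rates spanning a wide range, as the abstract describes.
spectrum = rate_spectrum([1.0, 1.1, 1.2, 2.8, 3.0, 5.0], n_bins=4)
```

Unlike a single surface-area-normalized mean, the spectrum preserves the spread that the abstract argues is intrinsic to the material.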
Abstract:
At present, there is a lack of knowledge on the interannual climate-related variability of zooplankton communities of the tropical Atlantic, central Mediterranean Sea, Caspian Sea, and Aral Sea, due to the absence of appropriate databases. In the mid latitudes, the North Atlantic Oscillation (NAO) is the dominant mode of atmospheric fluctuation over eastern North America, the northern Atlantic Ocean and Europe. Therefore, one of the issues that need to be addressed through data synthesis is the evaluation of interannual patterns in species abundance and species diversity over these regions with regard to the NAO. The database has been used to investigate the ecological role of the NAO in interannual variations of mesozooplankton abundance and biomass along the zonal array of the NAO influence. The basic approach to the proposed research involved: (1) developing co-operation between experts and data holders in Ukraine, Russia, Kazakhstan, Azerbaijan, the UK, and the USA to rescue and compile the oceanographic data sets and release them on CD-ROM; (2) organizing and compiling a database based on FSU cruises to the above regions; and (3) analysing the basin-scale interannual variability of zooplankton species abundance, biomass, and species diversity.
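The core of step (3) is relating interannual zooplankton variability to the NAO index, typically via a simple correlation. The sketch below uses synthetic placeholder series, not values from the database:

```python
# Sketch: Pearson correlation between an annual NAO index and annual
# zooplankton biomass anomalies -- the kind of basin-scale interannual
# analysis the abstract proposes. Both series are invented placeholders.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

nao_index = [-1.2, 0.3, 1.1, -0.5, 0.8]          # winter NAO, by year
biomass_anomaly = [-0.9, 0.1, 1.4, -0.2, 0.6]    # biomass anomaly, by year
r = pearson(nao_index, biomass_anomaly)
```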
Abstract:
During the past five million years, benthic δ18O records indicate a large range of climates, from warmer than today during the Pliocene Warm Period to considerably colder during glacials. Antarctic ice cores have revealed Pleistocene glacial-interglacial CO2 variability of 60-100 ppm, while sea level fluctuations of typically 125 m are documented by proxy data. However, in the pre-ice-core period, CO2 and sea level proxy data are scarce and there is disagreement between different proxies and between different records of the same proxy. This hampers a comprehensive understanding of the long-term relations between CO2, sea level and climate. Here, we drive a coupled climate-ice sheet model over the past five million years, inversely forced by a stacked benthic δ18O record. We obtain continuous simulations of benthic δ18O, sea level and CO2 that are mutually consistent. Our model shows CO2 concentrations of 300 to 470 ppm during the Early Pliocene. Furthermore, we simulate strong CO2 variability during the Pliocene and Early Pleistocene. These features are broadly supported by existing and new δ11B-based proxy CO2 data, but less so by alkenone-based records. The simulated concentrations and the variations therein are larger than expected from global mean temperature changes. Our findings thus suggest a smaller Earth System Sensitivity than previously thought. This is explained by a more restricted role of land ice variability in the Pliocene. The largest uncertainty in our simulation arises from the mass balance formulation of East Antarctica, which governs the variability in sea level but only modestly affects the modeled CO2 concentrations.
Abstract:
Over the last decade, several hundred seals have been equipped with conductivity-temperature-depth sensors in the Southern Ocean for both biological and physical oceanographic studies. A calibrated collection of seal-derived hydrographic data is now available, consisting of more than 165,000 profiles. The value of these hydrographic data within the existing Southern Ocean observing system is demonstrated herein by conducting two state estimation experiments, differing only in the use or not of seal data to constrain the system. Including seal-derived data substantially modifies the estimated surface mixed-layer properties and circulation patterns within and south of the Antarctic Circumpolar Current. Agreement with independent satellite observations of sea ice concentration is improved, especially along the East Antarctic shelf. Instrumented animals efficiently reduce a critical observational gap, and their contribution to monitoring polar climate variability will continue to grow as data accuracy and spatial coverage increase.
Abstract:
This paper describes seagrass species and percentage cover point-based field data sets derived from georeferenced photo transects. Annually or biannually over a ten-year period (2004-2015), data sets were collected using 30-50 transects, 500-800 m in length, distributed across a 142 km² shallow, clear-water seagrass habitat, the Eastern Banks, Moreton Bay, Australia. Each of the eight data sets includes seagrass property information derived from approximately 3000 georeferenced, downward-looking photographs captured at 2-4 m intervals along the transects. Photographs were manually interpreted to estimate seagrass species composition and percentage cover (Coral Point Count with Excel extensions; CPCe). Understanding seagrass biology, ecology and dynamics for scientific and management purposes requires point-based data on species composition and cover. This data set, and the methods used to derive it, are a globally unique example for seagrass ecological applications. It provides the basis for multiple further studies at this site, for regional to global comparative studies, and for the design of similar monitoring programs elsewhere.
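The point-count interpretation step is conceptually simple: each annotated point on a photograph gets a category label, and cover is the share of points per category. A minimal sketch, analogous to (but not reproducing) the CPCe workflow, with invented labels:

```python
# Sketch: derive species composition and percentage cover from manually
# labelled points on one transect photograph. Labels are illustrative.
from collections import Counter

def percent_cover(point_labels):
    """point_labels: list of category labels, one per annotated point.
    Returns {label: percent of points} for the photograph."""
    counts = Counter(point_labels)
    n = len(point_labels)
    return {label: 100.0 * c / n for label, c in counts.items()}

# 20 annotated points on a hypothetical photo from the Eastern Banks.
labels = ["Zostera muelleri"] * 12 + ["Halophila ovalis"] * 5 + ["sand"] * 3
cover = percent_cover(labels)
```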
Abstract:
Measures have been developed to understand tendencies in the distribution of economic activity. The merits of these measures lie in the convenience of data collection and processing. In this interim report, investigating how well such measures determine the geographical spread of economic activities, we summarize their merits and limitations, and make clear that caution is required in their usage. As a first trial in accessing areal data, this project focuses on administrative areas, not on point data or input-output data. Firm-level data are not within the scope of this article. The rest of this article is organized as follows. In Section 2, we touch on the limitations and problems associated with the measures and areal data. Specific measures are introduced in Section 3 and applied in Section 4. The conclusion summarizes the findings and discusses future work.
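The report does not name its measures here, so as a representative example of this family, the sketch below computes the Herfindahl index over administrative-area shares; the regional values are invented:

```python
# Sketch: the Herfindahl index as one simple concentration measure for
# areal (administrative-unit) data. Values are illustrative activity
# levels (e.g. employment) per area, not real statistics.
def herfindahl(values):
    """Sum of squared area shares: 1/n for an even spread, 1.0 when
    all activity is concentrated in a single area."""
    total = sum(values)
    return sum((v / total) ** 2 for v in values)

even = herfindahl([25, 25, 25, 25])        # perfectly even across 4 areas
concentrated = herfindahl([97, 1, 1, 1])   # almost all in one area
```

Such index values depend on the chosen areal units, which is one of the cautions about areal data the report raises.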
Abstract:
Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information but, through interaction with their users, can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario, where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting that the approach is sound.
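The bridge between the two worlds is conceptually a mapping from an HMI event to a Sensor Web observation. The sketch below builds a loose, O&M-style observation record from a driver's utterance; it is not the paper's framework, and the field names only loosely follow the OGC Observations & Measurements model (the real SWE encodings are far richer):

```python
# Sketch: wrap a driver's HMI-reported traffic event as a minimal
# observation-like record before publishing it to a Sensor Web service.
# All identifiers and field names here are hypothetical.
import json
from datetime import datetime, timezone

def hmi_to_observation(event_label, lat, lon, procedure_id):
    return {
        "type": "Observation",
        "procedure": procedure_id,               # the human+HMI as "sensor"
        "observedProperty": "traffic_event",
        "phenomenonTime": datetime.now(timezone.utc).isoformat(),
        "featureOfInterest": {
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
        },
        "result": event_label,                   # e.g. "congestion"
    }

obs = hmi_to_observation("congestion", 40.42, -3.70, "urn:example:hmi:car-42")
payload = json.dumps(obs)
```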
Abstract:
To research and document relevant sloshing-type phenomena, a series of experiments has been conducted. The aim of this paper is to describe the setup and data processing of these experiments. A sloshing tank is subjected to angular motion, and pressure records are obtained at several locations, together with the motion data, torque and a collection of image and video information. The experimental rig and the data acquisition systems are described. Useful information for experimental sloshing research practitioners is provided, related to the liquids used in the experiments, the dyeing techniques, tank building processes, synchronization of acquisition systems, etc. A new procedure for reconstructing experimental data that takes experimental uncertainties into account is presented. This procedure is based on a least squares spline approximation of the data. Based on a deterministic approach to the first sloshing wave impact event in a sloshing experiment, an uncertainty analysis procedure for the associated first pressure peak value is described.
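The paper's reconstruction uses a least squares spline; as a stdlib-only illustration of the same smoothing-by-least-squares idea, the sketch below fits a low-order polynomial to a sampled signal by solving the normal equations (the data are synthetic, not experimental pressure records):

```python
# Sketch: least squares reconstruction of a sampled signal. A low-order
# polynomial stands in for the paper's spline basis; the fitting principle
# (minimize squared residuals over basis coefficients) is the same.
def polyfit_ls(ts, ys, degree):
    """Least squares polynomial fit; returns coefficients c0..c_degree."""
    n = degree + 1
    # Normal equations A c = b with A[i][j] = sum t^(i+j), b[i] = sum y t^i.
    A = [[sum(t ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    b = [sum(y * t ** i for t, y in zip(ts, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / A[r][r]
    return coeffs

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 7.0, 13.0, 21.0]   # samples of 1 + t + t^2
c = polyfit_ls(ts, ys, 2)
```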
Abstract:
In this paper, abstract interpretation algorithms are described for computing the sharing as well as the freeness information about the run-time instantiations of program variables. An abstract domain is proposed which accurately and concisely represents combined freeness and sharing information for program variables. Abstract unification and all other domain-specific functions for an abstract interpreter working on this domain are presented. These functions are illustrated with an example. The importance of inferring freeness is stressed by showing (1) the central role it plays in non-strict goal independence, and (2) the improved accuracy it brings to the analysis of sharing information when both are computed together. Conversely, it is shown that keeping accurate track of sharing allows more precise inference of freeness, thus resulting in an overall much more powerful abstract interpreter.
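A common concrete representation for such a domain is a pair: sharing as a set of sets of program variables (possible co-sharing groups) and freeness as the set of variables known to be free. The toy sketch below handles only the simplest abstract unification case, aliasing two free variables; it is an illustration of the representation, not the paper's full algorithm:

```python
# Sketch: a toy sharing + freeness element and the abstract effect of
# unifying two free variables x and y. Only this simple case is covered.
def alias_free_vars(sharing, free, x, y):
    """sharing: set of frozensets of variables; free: set of free variables.
    Returns the updated (sharing, free) after unifying free x with free y."""
    assert x in free and y in free
    updated = set()
    for group in sharing:
        # any group containing x or y may now also share through the other
        updated.add(group | {x, y} if (x in group or y in group) else group)
    # unifying two free variables leaves both free (they remain unbound)
    return updated, free

sharing = {frozenset({"X"}), frozenset({"Y"}), frozenset({"Z"})}
free = {"X", "Y", "Z"}
sharing2, free2 = alias_free_vars(sharing, free, "X", "Y")
```

After X = Y, the singleton groups for X and Y collapse into one {X, Y} group, while Z stays independent, and all three variables remain free.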
Abstract:
This paper presents improved unification algorithms, an implementation, and an analysis of the effectiveness of an abstract interpreter based on the sharing + freeness domain presented in a previous paper, which was designed to accurately and concisely represent combined freeness and sharing information for program variables. We first briefly review this domain and the unification algorithms previously proposed. We then improve these algorithms and correct them to deal with some cases which were not well analyzed previously, illustrating the improvement with an example. We then present the implementation of the improved algorithm and evaluate its performance by comparing the effectiveness of the information inferred to that of other interpreters available to us for an application (program parallelization) that is common to all these interpreters. All these systems have been embedded in a real parallelizing compiler. Effectiveness of the analysis is measured in terms of the actual final performance of the system, i.e., in terms of the actual speedups obtained. The results show good performance for the combined domain in that it improves the accuracy of both types of information and also in that the analyzer using the combined domain is more effective in the application than any of the other analyzers it is compared to.
Abstract:
The development of new-generation intelligent vehicle technologies will lead to a better level of road safety and to CO2 emission reductions. However, the weak point of all these systems is their need for comprehensive and reliable data. For traffic data acquisition, two sources are currently available: 1) infrastructure sensors and 2) floating vehicles. The former consists of a set of fixed-point detectors installed in the roads, and the latter consists of the use of mobile probe vehicles as mobile sensors. However, both systems still have some deficiencies. The infrastructure sensors retrieve information from static points of the road, which are spaced, in some cases, kilometers apart, so the resulting picture of the actual traffic situation is incomplete. This deficiency is corrected by floating cars, which retrieve dynamic information on the traffic situation. Unfortunately, the number of floating data vehicles currently available is too small to give a complete picture of the road traffic. In this paper, we present a floating car data (FCD) augmentation system that combines information from floating data vehicles and infrastructure sensors and that, by using neural networks, is capable of increasing the amount of FCD with virtual information. This system has been implemented and tested on actual roads, and the results show little difference between the data supplied by the floating vehicles and by the virtual vehicles.
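The augmentation idea is to learn a mapping from infrastructure-sensor readings to the quantity a floating vehicle would report. As a heavily simplified stand-in for the paper's neural network, the sketch below trains a single linear neuron by gradient descent on synthetic placeholder data:

```python
# Sketch: learn detector-speed -> "virtual" FCD travel time. A single
# linear neuron replaces the paper's neural network; data are synthetic.
def train_virtual_fcd(samples, epochs=5000, lr=0.05):
    """samples: list of (detector_speed_kmh, fcd_travel_time_s) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            x = x / 100.0                 # scale input for stable SGD
            err = (w * x + b) - y
            w -= lr * err * x             # gradient step on the weight
            b -= lr * err                 # gradient step on the bias
    return w, b

# Synthetic rule: travel time falls linearly as detector speed rises.
data = [(30.0, 120.0), (50.0, 80.0), (70.0, 40.0)]
w, b = train_virtual_fcd(data)
virtual_time = w * (60.0 / 100.0) + b     # virtual FCD estimate at 60 km/h
```

The learned neuron interpolates the synthetic pattern, yielding a virtual travel time close to 60 s for a 60 km/h detector reading.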
Abstract:
Nowadays, a significant quantity of linguistic data is available on the Web. However, linguistic resources are often published in proprietary formats and, as such, can be difficult to interface with one another, ending up confined in “data silos”. The creation of web standards for publishing data on the Web and of projects to create Linked Data has led to interest in creating resources that can be published using Web principles. One of the most important aspects of “Lexical Linked Data” is the sharing of lexica and machine-readable dictionaries. It is for this reason that the lemon format has been proposed, which we briefly describe. We then consider two resources that seem ideal candidates for the Linked Data cloud, namely WordNet 3.0 and Wiktionary, a large document-based dictionary. We discuss the challenges of converting both resources to lemon, and in particular, for Wiktionary, the challenge of processing the mark-up and handling inconsistencies and underspecification in the source material. Finally, we turn to the task of creating links between the two resources and present a novel algorithm for linking lexica as lexical Linked Data.
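A baseline for sense-level linking between two lexica is gloss similarity. The sketch below links senses by Jaccard word overlap of their definitions; it is a naive illustration of the task, not the paper's algorithm, and the entries are invented:

```python
# Sketch: naive sense linking between two lexica by gloss word overlap.
# Sense IDs and glosses below are invented examples.
def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def link_senses(lexicon_a, lexicon_b, threshold=0.3):
    """lexicon_a/b: {sense_id: gloss}. Returns {a_id: best b_id} for the
    highest-similarity match above the threshold."""
    links = {}
    for a_id, a_gloss in lexicon_a.items():
        best_id, best = None, threshold
        for b_id, b_gloss in lexicon_b.items():
            score = jaccard(a_gloss, b_gloss)
            if score > best:
                best_id, best = b_id, score
        if best_id is not None:
            links[a_id] = best_id
    return links

wn = {"bank.n.01": "a financial institution that accepts deposits"}
wk = {"bank#1": "a financial institution accepting deposits",
      "bank#2": "the sloping side of a river"}
links = link_senses(wn, wk)
```

The financial sense links across the two resources, while the river-bank sense is correctly rejected by the threshold.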