988 results for format-compliant


Abstract:

Potential temperature measured with an SBE37 at 35.862°N, 5.97°W, at 344 m depth. The data extend from 30 September 2004 to 2 March 2016. The original measurement frequency was 30 minutes; the data presented here are a subsample that retains the coldest temperature found in each 12-hour window. The time vector corresponds to the moment at which this minimum temperature was observed.
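The subsampling scheme described above (keep the coldest value in each 12-hour window, together with its timestamp) is easy to reproduce. Below is a minimal sketch in Python/pandas; the file name and column names are illustrative placeholders, not part of the dataset.

```python
import pandas as pd

# Load the raw 30-minute series; 'sbe37_raw.csv' and the column names
# are hypothetical placeholders for the original record.
raw = pd.read_csv("sbe37_raw.csv", parse_dates=["time"], index_col="time")

# For each 12-hour window, keep the row holding the minimum temperature,
# so the timestamp of the coldest observation is preserved.
coldest = raw.loc[raw.groupby(pd.Grouper(freq="12h"))["temp"].idxmin()]

print(coldest.head())
```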

Abstract:

Topographic variation, the spatial variation in elevation and terrain features, underpins a myriad of patterns and processes in geography and ecology and is key to understanding the variation of life on the planet. The characterization of this variation is scale-dependent, i.e. it varies with the distance over which features are assessed and with the spatial grain (grid-cell resolution) of the analysis. A fully standardized, global, multivariate product of different terrain features has the potential to support many large-scale basic research and analytical applications; to date, however, no such product has been available. Here we used the digital elevation model products of the global 250 m GMTED and the near-global 90 m SRTM to derive a suite of topographic variables: elevation, slope, aspect, eastness, northness, roughness, terrain ruggedness index, topographic position index, vector ruggedness measure, profile and tangential curvature, and 10 geomorphological landform classes. We aggregated each variable to 1, 5, 10, 50 and 100 km spatial grains using several aggregation approaches (median, average, minimum, maximum, standard deviation, percent cover, count, majority, Shannon index, entropy, uniformity). While a global cross-correlation underlines the high similarity of many variables, a more detailed view of four mountain regions reveals local differences, as well as scale variations in the aggregated variables at different spatial grains. All newly developed variables are available for download at http://www.earthenv.org and can serve as a basis for standardized hydrological, environmental and biodiversity modeling at a global extent.
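The derivation and aggregation steps lend themselves to a compact illustration. Below is a minimal sketch in Python (numpy/scipy), assuming the DEM is a 2-D array with square cells; the slope and roughness definitions are simplified stand-ins for the published variables, and the block-median aggregation mirrors one of the listed approaches.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def slope_deg(dem: np.ndarray, cell: float) -> np.ndarray:
    """Slope in degrees from central finite differences (simplified)."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def roughness(dem: np.ndarray) -> np.ndarray:
    """Roughness as the max-min range in each 3x3 neighbourhood."""
    return maximum_filter(dem, size=3) - minimum_filter(dem, size=3)

def aggregate_median(grid: np.ndarray, factor: int) -> np.ndarray:
    """Aggregate to a coarser grain: median over factor x factor blocks."""
    h = (grid.shape[0] // factor) * factor
    w = (grid.shape[1] // factor) * factor
    blocks = grid[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return np.median(blocks, axis=(1, 3))

dem = np.random.default_rng(0).random((400, 400)) * 100.0  # toy DEM, metres
coarse_slope = aggregate_median(slope_deg(dem, cell=90.0), factor=11)  # ~1 km from 90 m
```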

Abstract:

Transient simulations are widely used in studying past climate because they allow direct comparison with existing proxy records. However, multi-millennial transient simulations with coupled climate models are usually computationally very expensive, so several acceleration techniques are applied when numerical simulations are used to recreate past climate. In this study, we compare results from transient simulations of the present and the last interglacial, with and without acceleration of the orbital forcing, using the comprehensive coupled climate model CCSM3 (Community Climate System Model 3). Our study shows that in low-latitude regions, the simulation of long-term variations in interglacial surface climate is not significantly affected by the acceleration technique (with an acceleration factor of 10), and hence large-scale model-data comparison of surface variables is not hampered. However, in high-latitude regions where the surface climate is directly connected to the deep ocean, e.g. in the Southern Ocean or the Nordic Seas, acceleration-induced biases in the sea-surface temperature evolution may occur, with a potential influence on the dynamics of the overlying atmosphere. The data provided here are decadal mean values from both the accelerated and the non-accelerated runs.
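The acceleration technique amounts to simple bookkeeping between model years and orbital (calendar) years: with a factor of 10, each simulated year advances the orbital configuration by ten years. A minimal sketch of that mapping follows; the start date is assumed for illustration, not taken from the study.

```python
# Accelerated orbital forcing: each simulated model year advances the
# orbital parameters by ACCEL calendar years, so 1,000 model years cover
# 10,000 years of insolation history. The start date is an assumed value.
ACCEL = 10
START_YR_BP = 130_000  # years before present at the start of the run (assumed)

def orbital_year_bp(model_year: int) -> int:
    """Calendar year (BP) whose orbital parameters force this model year."""
    return START_YR_BP - ACCEL * model_year

for my in (0, 100, 1_000):
    print(f"model year {my:5d} -> orbital year {orbital_year_bp(my):7d} BP")
```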

Abstract:

In cognitive tests, animals are often given a choice between two options and obtain a reward if they choose correctly. We investigated whether task format affects subjects' performance in a physical cognition test. In experiment 1, a two-choice memory test, 15 marmosets, Callithrix jacchus, had to remember the location of a food reward over time delays of increasing duration. We predicted that their performance would decline with increasing delay, but this was not found. One possible explanation was that the subjects were not sufficiently motivated to choose correctly when presented with only two options because in each trial they had a 50% chance of being rewarded. In experiment 2, we explored this possibility by testing eight naïve marmosets and seven squirrel monkeys, Saimiri sciureus, with both the traditional two-choice and a new nine-choice version of the memory test that increased the cost of a wrong choice. We found that task format affected the monkeys' performance. When choosing between nine options, both species performed better and their performance declined as delays became longer. Our results suggest that the two-choice format compromises the assessment of physical cognition, at least in memory tests with these New World monkeys, whereas providing more options, which decreases the probability of obtaining a reward when making a random guess, improves both performance and measurement validity of memory. Our findings suggest that two-choice tasks should be used with caution in comparisons within and across species because they are prone to motivational biases.

Abstract:

This document specifies the NetCDF file format used by EGO gliders to distribute glider data, metadata and technical data. It documents the standards used therein, including naming conventions and metadata content. It was initiated in October 2012, based on the OceanSITES, Argo and ANFOG user manuals. Everyone's Gliding Observatories (EGO) is dedicated to promoting glider technology and its applications. The EGO group promotes glider applications through coordination, training, liaison between providers and users, advocacy, and the provision of expert advice. We aim to foster oceanographic experiments and the operational monitoring of the oceans with gliders through scientific and international collaboration. We provide news, support, information about glider projects and glider data management, as well as resources related to gliders. All EGO data are publicly available. More information about the project is available at http://www.ego-network.org
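As a quick orientation, here is a hedged sketch of opening an EGO-style NetCDF file with the netCDF4 Python library. The file name is a placeholder, and the attribute and variable names (e.g. TIME, TEMP) assume the Argo-style upper-case convention the format inherits; check them against the actual specification.

```python
from netCDF4 import Dataset

# Open an EGO deployment file; the file name is a placeholder.
with Dataset("glider_deployment.nc") as nc:
    # Much of the EGO metadata lives in global attributes.
    for attr in ("data_type", "format_version"):  # assumed attribute names
        if attr in nc.ncattrs():
            print(attr, "=", nc.getncattr(attr))
    # Variable names assumed to follow the Argo-style upper-case convention.
    time = nc.variables["TIME"][:]
    temp = nc.variables["TEMP"][:]
    print(time.shape, temp.shape)
```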

Abstract:

Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied:

1. Format checks, which ensure that the file format matches the Argo standards precisely.

2. Data consistency checks, which are performed on a file after it passes the format checks. These do not duplicate any of the quality-control checks performed elsewhere; they can be thought of as "sanity checks" ensuring that the data are consistent with each other. They enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the "data standard" checks are the "mandatory parameters" defined for meta-data files and the technical parameter names in technical data files.

Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems generate warnings, and the file is still distributed on the GDAC. Many of the consistency checks involve comparing the data against the published reference tables and data standards, which are documented in the User's Manual (the FileChecker implements "text versions" of these tables).
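To illustrate what a "mandatory parameters" consistency check might look like, here is a hedged sketch that verifies a meta-data NetCDF file contains a required set of variables. The variable list is an invented example; the real mandatory parameters are defined in the Argo User's Manual reference tables.

```python
from netCDF4 import Dataset

# Invented example set, standing in for the official reference table.
MANDATORY = {"PLATFORM_NUMBER", "PROJECT_NAME", "LAUNCH_DATE"}

def check_mandatory(path: str) -> list[str]:
    """Return the mandatory variables missing from a meta-data file."""
    with Dataset(path) as nc:
        return sorted(MANDATORY - set(nc.variables))

missing = check_mandatory("R1234567_meta.nc")  # placeholder file name
if missing:
    print("ERROR: missing mandatory parameters:", missing)  # file rejected
else:
    print("OK")
```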

Abstract:

The Half-Unit-Biased (HUB) format is based on shifting the representation line of binary numbers by half a unit in the last place (ULP). The main feature of this format is that rounding to nearest is carried out by simple truncation, preventing any carry propagation and saving time and area. Algorithms and architectures have been defined for addition/subtraction and multiplication under this format; the division operation, however, has not yet been addressed. In this paper we deal with floating-point division under the HUB format, studying the architecture for the digit-recurrence method, including the on-the-fly conversion of the signed-digit quotient.
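The round-by-truncation property is easy to see with a toy fixed-point model (our illustration, not the paper's architecture): because every HUB value sits half a ULP above a conventional value, dropping the surplus bits of an exact result always lands on the nearest representable HUB number.

```python
# Toy illustration of Half-Unit-Biased (HUB) rounding with a fixed-point
# significand of F stored fractional bits. A HUB value carries an implicit
# extra '1' (half a ulp) below the last stored bit, so every representable
# value sits at the midpoint between two conventional values.
F = 4  # stored fractional bits (toy width)

def hub_value(stored: int) -> float:
    """Real value of a HUB significand: stored bits plus the implicit half ulp."""
    return (stored + 0.5) / (1 << F)

def hub_round(exact: float) -> int:
    """Round a non-negative exact result to the nearest HUB value by
    pure truncation of the extra bits: no carry propagation needed."""
    return int(exact * (1 << F))  # truncation == round-to-nearest under HUB

x = 0.6872
s = hub_round(x)
print(s, hub_value(s))  # 10 0.65625 -> the HUB value nearest to x
```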

Abstract:

This paper proposes a simple and compact compliant gripper whose gripping stiffness can be thermally controlled to accommodate actuation inaccuracy and thereby avoid or reduce the risk of breaking objects. The principle behind reducing the jaw stiffness is that a thermal change induces an initial internal compressive force along each compliant beam. A prototype was fabricated and physically tested to verify feasibility. It has been shown that when a voltage is applied, the gripping stiffness is effectively reduced to accommodate greater actuation inaccuracy, which allows delicate or small-scale objects to be gripped.
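The stiffness-reduction principle can be illustrated with a textbook first-order model (our approximation, not the paper's analysis): the lateral stiffness of a beam under an axial compressive preload P falls roughly linearly toward zero as P approaches the Euler critical load, k_eff = k0(1 - P/P_cr). All dimensions below are invented.

```python
import math

# First-order model: lateral stiffness of an axially preloaded beam drops
# as k_eff = k0 * (1 - P / P_cr), vanishing at the Euler buckling load.
# All numbers below are invented for illustration.
E = 200e9                        # Young's modulus, Pa (steel-like)
b, t, L = 5e-3, 0.3e-3, 30e-3    # beam width, thickness, length (m)
I = b * t**3 / 12                # second moment of area

k0 = 12 * E * I / L**3           # lateral stiffness, guided-end beam
P_cr = math.pi**2 * E * I / L**2 # Euler critical load (pinned-pinned)

for P in (0.0, 0.25 * P_cr, 0.5 * P_cr, 0.9 * P_cr):
    k_eff = k0 * (1 - P / P_cr)  # thermally induced compressive preload P
    print(f"P = {P:6.2f} N  ->  k_eff = {k_eff:7.1f} N/m")
```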

Abstract:

Because the precise linear actuators of a compliant parallel manipulator cannot tolerate transverse motions/loads in multi-axis motion, actuation isolation should be considered in the design of the compliant manipulator to eliminate transverse motion at the point of actuation. This paper presents an effective design method for constructing compliant parallel manipulators with actuation isolation by adding the same number of actuation legs as the number of degrees of freedom (DOF) of the original mechanism. The method is demonstrated by two design case studies, one of which is studied quantitatively by analytical modelling. The modelling results confirm possible inherent issues of the proposed design method, such as increased primary stiffness and the introduction of extra parasitic and cross-axis coupling motions.
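One of the flagged issues, the increased primary stiffness, follows directly from the parallel topology: each added actuation leg places its own stiffness in parallel with the original mechanism along the actuated axis. A back-of-the-envelope sketch with invented values:

```python
# Back-of-the-envelope: a compliant leg added in parallel with the original
# mechanism adds its stiffness along the shared (primary) axis. All values
# are invented for illustration.
k_mechanism = 2.0e3  # N/m, original mechanism along one primary axis
k_leg = 1.5e3        # N/m, added actuation leg acting along the same axis

k_primary = k_mechanism + k_leg  # springs in parallel: stiffnesses add
print(f"primary stiffness: {k_primary:.0f} N/m "
      f"({k_primary / k_mechanism:.2f}x the original)")
```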

Abstract:

With the CERN LHC program underway, data growth in the High Energy Physics (HEP) field has accelerated, and the use of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been used successfully in many areas of HEP; nevertheless, developing an ML project and implementing it for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data are stored in the ROOT data format, which is largely unknown outside the HEP community. The work presented in this thesis focuses on the development of a Machine Learning as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed by the MLaaS4HEP framework, which can read data, process data, and train ML models directly from ROOT files of arbitrary size held in local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool for applying ML techniques to their analyses in a streamlined manner. Over the years, the MLaaS4HEP framework has been developed, validated and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. A service with APIs was then developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows that produce trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN Cloud and meets the requirements to be added to the INFN Cloud portfolio of services.
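A hedged sketch of what submitting a workflow over HTTP might look like from the client side; the service URL, endpoint path, token handling and payload fields are invented placeholders, since the service's actual API is defined by the thesis, not reproduced here.

```python
import requests

# Hypothetical client call to an MLaaS4HEP-style service; the URL,
# endpoint and payload fields below are invented placeholders.
SERVICE = "https://mlaas.example-infn-cloud.it"
TOKEN = "..."  # obtained after authentication and authorization

workflow = {
    "files": ["root://eospublic.cern.ch//path/to/data.root"],  # placeholder
    "labels": "target",
    "model": "my_model.py",
}

resp = requests.post(
    f"{SERVICE}/submit",
    json=workflow,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. a workflow id to poll for the trained model
```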

Abstract:

This thesis investigates the legal, ethical, technical and psychological issues of general data processing and artificial intelligence practices, and the explainability of AI systems. It consists of two main parts. In the first part, we provide a comprehensive overview of the big data processing ecosystem and the main challenges we face today. We then evaluate the GDPR's data privacy framework in the European Union. The Trustworthy AI Framework proposed by the EU's High-Level Expert Group on AI (AI HLEG) is examined in detail, and the ethical principles for the foundation and realization of Trustworthy AI are analyzed along with the assessment list prepared by the AI HLEG. We then list the main big data challenges identified by European researchers and institutions and provide a literature review of the technical and organizational measures to address them. A quantitative analysis is conducted on the identified challenges and measures, which leads to practical recommendations for better data processing and AI practices in the EU. In the second part, we concentrate on the explainability of AI systems. We clarify the terminology and list the goals pursued by explainable AI systems. We identify the reasons for the explainability-accuracy trade-off and how it can be addressed. We conduct a comparative cognitive analysis between human reasoning and machine-generated explanations, with the aim of understanding how explainable AI can contribute to human reasoning. We then focus on the technical and legal responses to the explainability problem. Here, the GDPR's right-to-explanation framework and safeguards are analyzed in depth, together with their contribution to the realization of Trustworthy AI. Finally, we analyze the explanation techniques applicable at different stages of machine learning and propose several recommendations, in chronological order, for developing GDPR-compliant and Trustworthy XAI systems.