892 results for "Towards Seamless Integration of Geoscience Models and Data"
Abstract:
Replication Data Management (RDM) aims at enabling the use of data collections from several iterations of an experiment. However, RDM faces several major challenges when integrating data models and data from empirical study infrastructures that were not designed to cooperate, e.g., variation among the data models of local data sources. [Objective] In this paper we analyze RDM needs and evaluate conceptual RDM approaches to support replication researchers. [Method] We adapted the ATAM evaluation process to (a) analyze RDM use cases and needs of empirical replication study research groups and (b) compare three conceptual approaches to address these RDM needs: central data repositories with a fixed data model, heterogeneous local repositories, and an empirical ecosystem. [Results] While the central and local approaches have major issues that are hard to resolve in practice, the empirical ecosystem allows bridging current gaps in RDM from heterogeneous data sources. [Conclusions] The empirical ecosystem approach should be explored in diverse empirical environments.
Abstract:
Applications based on Wireless Sensor Networks for Internet of Things scenarios are on the rise. The multiple possibilities they offer have spread into fields that were previously hard to imagine, like e-health or human physiological monitoring. An application has been developed for scenarios where data collection is applied to smart spaces, aiming at its use in fire fighting and sports. This application has been tested in a gymnasium with real, non-simulated nodes and devices. A Graphical User Interface has been implemented to suggest a series of exercises to improve an athlete's condition, depending on the context and their profile. This system can be adapted to a wide variety of e-health applications with minimal changes, and the user can interact with it through different devices, like smartphones, smartwatches and/or tablets.
Abstract:
EPICS (Experimental Physics and Industrial Control System) is a set of software tools and applications that provide a software infrastructure for building distributed data acquisition and control systems. The use of such systems is growing in large physics experiments like ITER, ESS, and FREIA. In these experiments, advanced data acquisition systems using FPGA-based technology like FlexRIO are increasingly being used. In the particular case of ITER (International Thermonuclear Experimental Reactor), the instrumentation and control system is supported by CCS (CODAC Core System), based on the RHEL (Red Hat Enterprise Linux) operating system, and by the plant design specifications, which define every CCS element, whether hardware, firmware or software. This final degree project applies the methodology proposed by Sanz et al. [1] in "Implementation of Intelligent Data Acquisition Systems for Fusion Experiments using EPICS and FlexRIO Technology". The aim is to produce a set of examples covering the complete design cycle, proposed as use cases of these technologies, together with a document describing the process followed and the source code of the resulting data acquisition system. The proposed methodology comprises two distinct stages. The first consists of modelling the hardware with graphical design tools such as LabVIEW FPGA and synthesizing the model onto the FlexRIO device. In the second stage the design cycle is completed by creating an EPICS controller that manages the device through a generic device support layer named NDS (Nominal Device Support). This layer integrates the developed data acquisition system into CCS as the EPICS interface to the system. The use of FlexRIO technology entails the use of LabVIEW and LabVIEW FPGA as the programming and hardware description languages, respectively.
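As a rough illustration of how a data acquisition device integrated through NDS is ultimately driven from the EPICS side, the sketch below uses the pyepics Channel Access client to configure and read hypothetical process variables; the PV names and parameter values are invented for the example and are not part of the project or of CCS.

```python
# Minimal client-side sketch (not part of the project's code base): reading and
# configuring hypothetical EPICS PVs exposed by an NDS-based FlexRIO device support,
# using the pyepics Channel Access client.
from epics import caget, caput

PREFIX = "FLEXRIO-DAQ:"   # hypothetical PV prefix; a real CCS deployment defines its own naming

caput(PREFIX + "SamplingRate", 1_000_000)   # request 1 MS/s acquisition
caput(PREFIX + "Acquire", 1)                # start acquisition

waveform = caget(PREFIX + "CH0:Waveform")   # fetch one acquired block from channel 0
if waveform is not None:
    print(f"Received {len(waveform)} samples, first value: {waveform[0]}")

caput(PREFIX + "Acquire", 0)                # stop acquisition
```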
Abstract:
The road transportation sector is responsible for around 25% of total man-made CO2 emissions worldwide. Considerable efforts are therefore underway to reduce these emissions using several approaches, including improved vehicle technologies, traffic management and changing driving behaviour. Detailed traffic and emissions models are used extensively to assess the potential effects of these measures. However, if the input and calibration data are not sufficiently detailed there is an inherent risk that the results may be inaccurate. This article presents the use of Floating Car Data to derive useful speed and acceleration values in the process of traffic model calibration as a means of ensuring more accurate results when simulating the effects of particular measures. The data acquired includes instantaneous GPS coordinates to track and select the itineraries, and speed and engine performance extracted directly from the on-board diagnostics system. Once the data is processed, the variations in several calibration parameters can be analyzed by comparing the base case model with the measure application scenarios. Depending on the measure, the results show changes of up to 6.4% in maximum speed values, and reductions of nearly 15% in acceleration and braking levels, especially when eco-driving is applied.
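A minimal sketch of the kind of processing described above, assuming a per-second Floating Car Data log with hypothetical column names ('timestamp', 'speed_kmh'); the derived indicators (maximum speed, mean acceleration and braking) are the sort of calibration parameters compared between the base case and the measure scenarios.

```python
# Illustrative sketch (hypothetical field names and file layout): deriving speed and
# acceleration statistics from Floating Car Data logs for traffic-model calibration.
import numpy as np
import pandas as pd

def calibration_stats(csv_path: str) -> dict:
    """Return simple calibration indicators from a per-second FCD log.

    Expected (assumed) columns: 'timestamp' [s] and 'speed_kmh' from the OBD port.
    """
    df = pd.read_csv(csv_path).sort_values("timestamp")
    speed_ms = df["speed_kmh"].to_numpy() / 3.6            # km/h -> m/s
    dt = np.diff(df["timestamp"].to_numpy())
    accel = np.diff(speed_ms) / dt                         # m/s^2 between samples
    return {
        "max_speed_kmh": float(df["speed_kmh"].max()),
        "mean_accel_ms2": float(accel[accel > 0].mean()),  # acceleration phases
        "mean_brake_ms2": float(accel[accel < 0].mean()),  # braking phases
    }

# Comparing a base-case log with an eco-driving scenario log would then show the kind
# of changes reported in the article (e.g. lower acceleration and braking levels).
base = calibration_stats("base_case.csv")      # hypothetical file names
eco = calibration_stats("eco_driving.csv")
print(base, eco)
```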
Abstract:
In recent years, there has been an increase in the amount of real-time data generated. Sensors attached to things are transforming how we interact with our environment. Extracting meaningful information from these streams of data is essential for some application areas and requires processing systems that scale to varying conditions in data sources, complex queries, and system failures. This paper describes ongoing research on the development of a scalable RDF streaming engine.
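As a toy illustration of windowed RDF stream processing (not the engine under development), the sketch below buffers the most recent triples and evaluates a SPARQL aggregation over them with rdflib; the predicate URIs and window size are assumptions.

```python
# Toy illustration only: windowed processing of an RDF stream with rdflib,
# evaluating a SPARQL query over the most recent triples.
from collections import deque
from rdflib import Graph

WINDOW = 1000                     # keep the last 1000 triples (assumed window size)
window = deque(maxlen=WINDOW)

def on_triple(ntriples_line: str) -> None:
    """Called for each incoming triple serialized as a single N-Triples line."""
    window.append(ntriples_line)

def run_query() -> list:
    """Evaluate an aggregation query over the current window contents."""
    g = Graph()
    g.parse(data="\n".join(window), format="nt")
    q = """
        SELECT ?sensor (AVG(?value) AS ?avg)
        WHERE { ?obs <http://example.org/sensor> ?sensor ;
                     <http://example.org/value>  ?value . }
        GROUP BY ?sensor
    """
    return list(g.query(q))
```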
Abstract:
Impact response surfaces (IRSs) depict the response of an impact variable to changes in two explanatory variables as a plotted surface. Here, IRSs of spring and winter wheat yields were constructed from a 25-member ensemble of process-based crop simulation models. Twenty-one models were calibrated by different groups using a common set of calibration data, with calibrations applied independently to the same models in three cases. The sensitivity of modelled yield to changes in temperature and precipitation was tested by systematically modifying values of 1981-2010 baseline weather data to span the range of changes projected for the late 21st century at three locations in Europe.
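The sketch below shows schematically how an IRS is assembled: a grid of temperature and precipitation perturbations is applied to baseline weather, and the resulting yields are collected into a surface. The yield function and perturbation ranges are placeholders, not any of the ensemble models.

```python
# Schematic sketch of how an impact response surface (IRS) is assembled; the yield
# model here is a placeholder, not one of the 25 ensemble members used in the study.
import numpy as np

d_temp = np.arange(-2, 9, 1)        # temperature changes, degC (assumed span)
d_prec = np.arange(-50, 51, 10)     # precipitation changes, % (assumed span)

def simulated_yield(dT: float, dP: float) -> float:
    """Placeholder response; a real study calls a calibrated crop model with
    baseline (1981-2010) weather modified by dT and dP."""
    return max(0.0, 6.0 - 0.025 * dT**2 + 0.02 * dP - 0.0005 * dP**2)

irs = np.array([[simulated_yield(dT, dP) for dP in d_prec] for dT in d_temp])
# 'irs' can then be drawn as a contour plot with dT and dP on the axes, which is
# exactly what an impact response surface depicts.
```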
Abstract:
All crop models, whether site-specific or global-gridded and regardless of crop, simulate daily crop transpiration and soil evaporation during the crop life cycle, resulting in seasonal crop water use. Modelers use several methods for predicting daily potential evapotranspiration (ET), including FAO-56, Penman-Monteith, Priestley-Taylor, Hargreaves, full energy balance, and transpiration water efficiency. They use extinction equations to partition energy to soil evaporation or transpiration, depending on leaf area index. Most models simulate soil water balance and soil-root water supply for transpiration, and limit transpiration if water uptake is insufficient, and thereafter reduce dry matter production. Comparisons among multiple crop and global gridded models in the Agricultural Model Intercomparison and Improvement Project (AgMIP) show surprisingly large differences in simulated ET and crop water use for the same climatic conditions. Model intercomparisons alone are not enough to know which approaches are correct. There is an urgent need to test these models against field-observed data on ET and crop water use. It is important to test various ET modules/equations in a model platform where other aspects such as soil water balance and rooting are held constant, to avoid compensation caused by other parts of models. The CSM-CROPGRO model in DSSAT already has ET equations for Priestley-Taylor, Penman-FAO-24, Penman-Monteith-FAO-56, and an hourly energy balance approach. In this work, we added transpiration-efficiency modules to DSSAT and AgMaize models and tested the various ET equations against available data on ET, soil water balance, and season-long crop water use of soybean, fababean, maize, and other crops where runoff and deep percolation were known or zero. The different ET modules created considerable differences in predicted ET, growth, and yield.
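As one concrete example of the ET options listed above, the sketch below implements the Priestley-Taylor potential ET equation with common FAO-56 constants; individual crop models may differ in the exact coefficients and in how net radiation and soil heat flux are obtained.

```python
# Hedged sketch of one ET option (Priestley-Taylor); constants follow common FAO-56
# conventions, but individual crop models differ in detail.
import math

def priestley_taylor_et(t_mean_c: float, rn: float, g: float = 0.0,
                        alpha: float = 1.26) -> float:
    """Daily potential ET in mm/day.

    t_mean_c : mean air temperature [degC]
    rn, g    : net radiation and soil heat flux [MJ m-2 day-1]
    alpha    : Priestley-Taylor coefficient
    """
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))   # saturation vapour pressure, kPa
    delta = 4098.0 * es / (t_mean_c + 237.3) ** 2                   # slope of the curve, kPa/degC
    gamma = 0.066                                                   # psychrometric constant, kPa/degC (approx.)
    lam = 2.45                                                      # latent heat of vaporization, MJ/kg
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

print(priestley_taylor_et(t_mean_c=22.0, rn=15.0))   # ~5 mm/day, a plausible summer value
```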
Abstract:
PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ System is an integrated embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing needed to obtain SHM maps. PAMELA devices consist of hardware based on a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, PAMELA devices, in addition to being able to perform tests and transmit the collected data to the controller, are capable of local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows developers to download their own algorithm code and add new data processing algorithms to the device. The SMA is developed in a virtual machine with an Ubuntu Linux distribution that includes all the software tools needed to cover the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps needed to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using a delay-and-sum algorithm is provided.
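A simplified sketch of the delay-and-sum imaging idea behind such a damage index; the array geometry, wave velocity and baseline-subtracted envelopes are placeholders rather than PAMELA internals.

```python
# Simplified numpy sketch of delay-and-sum damage imaging with guided waves.
import numpy as np

def delay_and_sum(envelopes, pairs, grid, velocity, fs):
    """Damage-index map over 'grid' points.

    envelopes : list of 1-D baseline-subtracted signal envelopes, one per actuator-sensor pair
    pairs     : list of (tx_xy, rx_xy) tuples, each an np.array([x, y]) in metres
    grid      : (N, 2) array of pixel coordinates [m]
    velocity  : assumed guided-wave group velocity [m/s]
    fs        : sampling frequency [Hz]
    """
    image = np.zeros(len(grid))
    for env, (tx, rx) in zip(envelopes, pairs):
        # travel distance actuator -> pixel -> sensor, converted to a sample delay
        d = np.linalg.norm(grid - tx, axis=1) + np.linalg.norm(grid - rx, axis=1)
        idx = np.clip(np.round(d / velocity * fs).astype(int), 0, len(env) - 1)
        image += env[idx]   # each pair adds the energy scattered from that pixel
    return image
```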
Abstract:
Background: Models describing nuclear fragmentation and fragmentation-fission deliver important input for planning nuclear physics experiments and future radioactive ion beam facilities. These models are usually benchmarked against data from stable-beam experiments. In the future, two-step fragmentation reactions with exotic nuclei as stepping stones are a promising tool for reaching the most neutron-rich nuclei, creating a need for models that also describe these reactions. Purpose: We want to extend the presently available data on fragmentation reactions towards the light exotic region of the nuclear chart. Furthermore, we want to improve the understanding of projectile fragmentation, especially for unstable isotopes. Method: We have measured projectile fragments from 10,12-18C and 10-15B isotopes colliding with a carbon target. These measurements were all performed within one experiment, which yields a very consistent data set. We compare our data to model calculations. Results: One-proton removal cross sections with different final neutron numbers (1pxn) were obtained for relativistic 10,12-18C and 10-15B isotopes impinging on a carbon target. Comparing model calculations to the data, we find that the EPAX code is not able to describe the data satisfactorily. Using ABRABLA07, on the other hand, we find that the average excitation energy per abraded nucleon needs to be decreased from 27 MeV to 8.1 MeV. With that decrease, ABRABLA07 describes the data surprisingly well. Conclusions: Extending the available data towards light unstable nuclei with a consistent set of new data has allowed a systematic investigation of the role of the excitation energy induced in projectile fragmentation. Most striking is the apparent mass dependence of the average excitation energy per abraded nucleon. Nevertheless, this parameter, which has been related to final-state interactions, requires further study.
Abstract:
Phase equilibrium data regression is an unavoidable task necessary to obtain the appropriate values for any model to be used in separation equipment design for chemical process simulation and optimization. The accuracy of this process depends on different factors such as the experimental data quality, the selected model and the calculation algorithm. The present paper summarizes the results and conclusions achieved in our research on the capabilities and limitations of the existing GE models and about strategies that can be included in the correlation algorithms to improve the convergence and avoid inconsistencies. The NRTL model has been selected as a representative local composition model. New capabilities of this model, but also several relevant limitations, have been identified and some examples of the application of a modified NRTL equation have been discussed. Furthermore, a regression algorithm has been developed that allows for the advisable simultaneous regression of all the condensed phase equilibrium regions that are present in ternary systems at constant T and P. It includes specific strategies designed to avoid some of the pitfalls frequently found in commercial regression tools for phase equilibrium calculations. Most of the proposed strategies are based on the geometrical interpretation of the lowest common tangent plane equilibrium criterion, which allows an unambiguous comprehension of the behavior of the mixtures. The paper aims to show all the work as a whole in order to reveal the necessary efforts that must be devoted to overcome the difficulties that still exist in the phase equilibrium data regression problem.
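For reference, the sketch below implements the standard (unmodified) binary NRTL activity-coefficient equations, only to make explicit the kind of GE model being regressed; the modified NRTL form and the regression strategies discussed in the paper are not reproduced here.

```python
# Hedged sketch of the standard binary NRTL activity-coefficient equations.
import math

def nrtl_binary(x1: float, tau12: float, tau21: float, alpha: float = 0.3):
    """Return (gamma1, gamma2) for a binary mixture at fixed temperature.

    tau12, tau21 : dimensionless interaction parameters (delta_g / RT)
    alpha        : non-randomness parameter
    """
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# In a regression, tau12 and tau21 (and optionally alpha) are fitted so that the model
# reproduces the measured phase-equilibrium data while satisfying the common tangent
# plane criterion in every condensed-phase equilibrium region.
```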
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a true software-engineering process. Bearing in mind this situation, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario where a time series analysis is required.
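As a loose analogy of the model-transformation idea (the paper works with MDA-style conceptual models and transformations, not Python), the toy sketch below turns a small "conceptual mining model" into platform-specific analysis code; all names and the generated snippet are invented for illustration.

```python
# Toy analogy only: a tiny "conceptual mining model" is transformed into
# platform-specific analysis code, mimicking a model-to-code transformation.
from dataclasses import dataclass

@dataclass
class TimeSeriesMiningModel:          # hypothetical conceptual-model element
    source_table: str
    value_column: str
    time_column: str
    horizon: int                      # smoothing/forecast horizon in periods

def to_platform_code(m: TimeSeriesMiningModel) -> str:
    """Generate a platform-specific script (here: pandas) from the conceptual model."""
    return (
        f"import pandas as pd\n"
        f"df = pd.read_sql('SELECT * FROM {m.source_table}', conn)\n"
        f"series = df.set_index('{m.time_column}')['{m.value_column}']\n"
        f"smoothed = series.rolling(window={m.horizon}).mean().shift(1)\n"
    )

print(to_platform_code(TimeSeriesMiningModel("sales_dw", "amount", "month", 3)))
```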
Abstract:
Moderate resolution remote sensing data, as provided by MODIS, can be used to detect and map active or past wildfires from daily records of suitable combinations of reflectance bands. The objective of the present work was to develop and test simple algorithms and variations for automatic or semiautomatic detection of burnt areas from time series data of MODIS biweekly vegetation indices for a Mediterranean region. MODIS-derived NDVI 250 m time series data for the Valencia region, East Spain, were subjected to a two-step process for the detection of candidate burnt areas, and the results were compared with available fire event records from the Valencia Regional Government. For each pixel and date in the data series, a model was fitted to both the previous and posterior time series data. Combining drops between two consecutive points and 1-year average drops, we used discrepancies or jumps between the pre and post models to identify seed pixels, and then delimited fire scars for each potential wildfire using an extension algorithm from the seed pixels. The resulting maps of the detected burnt areas showed very good agreement with the perimeters registered in the database of fire records used as reference. Overall accuracies and indices of agreement were very high, and omission and commission errors were similar to or lower than in previous studies that used automatic or semiautomatic fire scar detection based on remote sensing. This supports the effectiveness of the method for detecting and mapping burnt areas in the Mediterranean region.
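A simplified per-pixel sketch of the seed-detection step described above; the thresholds, window lengths and compositing period are illustrative assumptions, not the values used in the study.

```python
# Simplified per-pixel sketch of the two-step idea: flag abrupt, persistent NDVI drops
# as seed candidates for burnt-area mapping.
import numpy as np

def seed_candidates(ndvi, drop_thr=0.15, annual_thr=0.10, period=23):
    """Flag dates where a pixel's NDVI shows an abrupt, persistent drop.

    ndvi   : 1-D array of 16-day composite NDVI values for one pixel
    period : composites per year (23 for 16-day MODIS products)
    """
    seeds = np.zeros(len(ndvi), dtype=bool)
    for t in range(period, len(ndvi) - 1):
        step_drop = ndvi[t] - ndvi[t + 1]                       # drop between consecutive composites
        annual_drop = ndvi[t - period:t].mean() - ndvi[t + 1]   # drop vs. previous-year mean
        seeds[t + 1] = step_drop > drop_thr and annual_drop > annual_thr
    return seeds

# A second step would grow each seed into a fire scar by adding neighbouring pixels
# that show a similar, though possibly weaker, NDVI discontinuity at the same date.
```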
Abstract:
Statistical machine translation (SMT) is an approach to Machine Translation (MT) that uses statistical models whose parameter estimation is based on the analysis of existing human translations (contained in bilingual corpora). From a translation student's standpoint, this dissertation aims to explain how a phrase-based SMT system works, to determine the role of the statistical models it uses in the translation process, and to assess the quality of the translations, provided that the system is trained with in-domain, good-quality corpora. To that end, a phrase-based SMT system based on Moses has been trained and subsequently used for the English to Spanish translation of two texts related in topic to the training data. Finally, the quality of the output texts produced by the system has been assessed through a quantitative evaluation carried out with three different automatic evaluation measures and a qualitative evaluation based on the Multidimensional Quality Metrics (MQM).
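As an example of one of the automatic measures typically used in such quantitative evaluations, the sketch below computes corpus-level BLEU with NLTK on invented tokenized sentences; the dissertation's actual choice of three metrics and the MQM-based qualitative evaluation are separate.

```python
# Minimal sketch of an automatic MT quality measure (BLEU, via NLTK) on toy data.
from nltk.translate.bleu_score import corpus_bleu

# Hypothetical tokenized data: one reference translation per sentence.
references = [[["la", "casa", "es", "azul"]],
              [["el", "coche", "es", "rojo"]]]
hypotheses = [["la", "casa", "es", "azul"],
              ["el", "coche", "rojo"]]

score = corpus_bleu(references, hypotheses)
print(f"BLEU: {score:.3f}")
```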
Abstract:
Substantial retreat or disintegration of numerous ice shelves has been observed on the Antarctic Peninsula. The ice shelf in the Prince Gustav Channel retreated gradually since the late 1980s and broke up in 1995. Tributary glaciers reacted with speed-up, surface lowering and increased ice discharge, consequently contributing to sea level rise. We present a detailed long-term study (1993-2014) of the dynamic response of Sjögren Inlet glaciers to the disintegration of Prince Gustav Ice Shelf. We analyzed various remote sensing datasets to observe the reactions of the glaciers to the loss of the buttressing ice shelf. A strong increase in ice surface velocities was observed, with maximum flow speeds reaching 2.82±0.48 m/d in 2007 and 1.50±0.32 m/d in 2004 at Sjögren and Boydell glaciers, respectively. Subsequently, the flow velocities decelerated; however, in late 2014 we still measured about twice the values of our first measurements in 1996. The tributary glaciers retreated 61.7±3.1 km² behind the former grounding line of the ice shelf. In regions below 1000 m a.s.l., a mean surface lowering of -68±10 m (-3.1 m/a) was observed in the period 1993-2014. The lowering rate decreased to -2.2 m/a in recent years. Based on the surface lowering rates, geodetic mass balances of the glaciers were derived for different time steps. A high mass loss rate of -1.21±0.36 Gt/a was found in the earliest period (1993-2001). Due to the dynamic adjustment of the glaciers to the new boundary conditions, the ice mass loss decreased to -0.59±0.11 Gt/a in the period 2012-2014, resulting in an average mass loss rate of -0.89±0.16 Gt/a (1993-2014). Including the retreat of the ice front and grounding line, a total mass change of -38.5±7.7 Gt and a contribution to sea level rise of 0.061±0.013 mm were computed. Analysis of the ice flux revealed that available bedrock elevation estimates at Sjögren Inlet are too shallow and are the major uncertainty in ice flux computations. This temporally dense time series analysis of Sjögren Inlet glaciers shows that the adjustment of tributary glaciers to ice shelf disintegration is still going on and provides detailed information on the changes in glacier dynamics.
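A back-of-the-envelope sketch of how a geodetic mass balance and its sea-level contribution follow from surface lowering; the glacier area and ice density are assumed values, and, unlike the study, ice already below flotation is not subtracted here.

```python
# Back-of-the-envelope geodetic mass balance from surface lowering (generic values).
AREA_KM2 = 120.0          # hypothetical glacier area below 1000 m a.s.l.
DH_PER_YEAR_M = -3.1      # mean surface lowering rate reported for 1993-2014
RHO_ICE = 900.0           # kg m-3, assumed density of the lost volume
OCEAN_AREA_M2 = 3.62e14   # m2, approximate global ocean area

volume_change_m3_a = AREA_KM2 * 1e6 * DH_PER_YEAR_M                   # m3 per year
mass_change_gt_a = volume_change_m3_a * RHO_ICE / 1e12                # Gt per year (1 Gt = 1e12 kg)
slr_mm_a = -mass_change_gt_a * 1e12 / (OCEAN_AREA_M2 * 1000) * 1000   # mm of sea level per year

print(f"{mass_change_gt_a:.2f} Gt/a  ->  {slr_mm_a:.4f} mm/a sea-level contribution")
```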
Abstract:
The strength and geometry of the Atlantic meridional overturning circulation are tightly coupled to climate on glacial-interglacial and millennial timescales, but have proved difficult to reconstruct, particularly for the Last Glacial Maximum. Today, the return flow from the northern North Atlantic to lower latitudes associated with the Atlantic meridional overturning circulation reaches down to approximately 4,000 m. In contrast, during the Last Glacial Maximum this return flow is thought to have occurred primarily at shallower depths. Measurements of sedimentary 231Pa/230Th have been used to reconstruct the strength of circulation in the North Atlantic Ocean, but the effects of biogenic silica on 231Pa/230Th-based estimates remain controversial. Here we use measurements of 231Pa/230Th ratios and biogenic silica in Holocene-aged Atlantic sediments and simulations with a two-dimensional scavenging model to demonstrate that the geometry and strength of the Atlantic meridional overturning circulation are the primary controls of 231Pa/230Th ratios in modern Atlantic sediments. For the glacial maximum, a simulation of Atlantic overturning with a shallow but vigorous circulation and bulk water transport at around 2,000 m depth best matched observed glacial Atlantic 231Pa/230Th values. We estimate that the transport of intermediate water during the Last Glacial Maximum was at least as strong as deep water transport today.
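A one-box, first-order sketch of why sedimentary 231Pa/230Th tracks export by the overturning: both nuclides are produced at a fixed activity ratio (0.093) and removed either by particle scavenging or by lateral export. The residence times below are order-of-magnitude literature values, and the study itself uses a two-dimensional scavenging model rather than a single box.

```python
# One-box, first-order sketch of the sedimentary 231Pa/230Th ratio.
PRODUCTION_RATIO = 0.093   # 231Pa/230Th activity ratio produced from U decay in seawater

def sediment_pa_th(tau_flush_yr: float,
                   tau_scav_pa_yr: float = 150.0,
                   tau_scav_th_yr: float = 30.0) -> float:
    """Sedimentary Pa/Th for a box flushed by the overturning on timescale tau_flush.

    With first-order removal, the fraction of each nuclide buried locally (rather than
    exported) is tau_flush / (tau_flush + tau_scav).
    """
    buried_pa = tau_flush_yr / (tau_flush_yr + tau_scav_pa_yr)
    buried_th = tau_flush_yr / (tau_flush_yr + tau_scav_th_yr)
    return PRODUCTION_RATIO * buried_pa / buried_th

print(sediment_pa_th(tau_flush_yr=100.0))    # vigorous export -> ratio well below 0.093
print(sediment_pa_th(tau_flush_yr=2000.0))   # sluggish circulation -> ratio near 0.093
```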