803 results for Sensor Networks and Data Streaming


Relevance:

100.00%

Publisher:

Abstract:

Current scientific applications are often structured as workflows and rely on workflow systems to compile abstract experiment designs into enactable workflows that utilise the best available resources. The automation of this step and of the workflow enactment hides the details of how results have been produced. Knowing how compilation and enactment occurred allows results to be reconnected with the experiment design. We investigate how provenance helps scientists to connect their results with the actual execution that took place, their original experiment, and its inputs and parameters.
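
A minimal sketch of the kind of record linkage this abstract describes, using hypothetical class and field names (no particular workflow system or provenance model is prescribed by the abstract): each result is traced back through the enacted run and the compiled workflow to the original design and its inputs and parameters.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical provenance records: names and fields are illustrative only.

@dataclass
class ExperimentDesign:
    design_id: str
    parameters: Dict[str, str]        # user-facing inputs and parameters

@dataclass
class CompiledWorkflow:
    workflow_id: str
    design_id: str                    # which abstract design it was compiled from
    resources: List[str]              # concrete resources selected by the compiler

@dataclass
class Enactment:
    run_id: str
    workflow_id: str
    result_ids: List[str] = field(default_factory=list)

def trace_result(result_id, enactments, workflows, designs):
    """Walk back from a result to the design, inputs and parameters that produced it.

    `workflows` and `designs` are dicts keyed by their respective ids."""
    run = next(e for e in enactments if result_id in e.result_ids)
    wf = workflows[run.workflow_id]
    design = designs[wf.design_id]
    return {"result": result_id, "run": run.run_id,
            "workflow": wf.workflow_id, "design": design.design_id,
            "parameters": design.parameters}
```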

Relevance:

100.00%

Publisher:

Abstract:

The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs, and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and managing it is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage the numerous software libraries available in a variety of languages, while retaining the legacy option of custom tab-separated text formats where researchers prefer that access arrangement. By decoupling the data model from its persistence, it becomes much easier to switch to, for instance, relational databases that provide stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle time series efficiently for SWIFT.
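
The abstract does not show SWIFT's actual interfaces; the sketch below only illustrates the decoupling it describes, with hypothetical class names: one in-memory configuration model and two interchangeable persistence back ends (JSON and legacy tab-separated text). A relational back end for the operational, audit-trail use case would implement the same interface.

```python
import json
import csv
from abc import ABC, abstractmethod

# Illustrative only: class and field names are assumptions, not the SWIFT API.

class CatchmentConfig:
    """In-memory data model: subareas and node-link routing, independent of storage."""
    def __init__(self, subareas, links):
        self.subareas = subareas      # e.g. [{"id": "sa1", "area_km2": 12.5}, ...]
        self.links = links            # e.g. [{"from": "n1", "to": "n2"}, ...]

class ConfigStore(ABC):
    """Persistence interface; back ends can be swapped without touching the model."""
    @abstractmethod
    def save(self, config: CatchmentConfig, path: str): ...
    @abstractmethod
    def load(self, path: str) -> CatchmentConfig: ...

class JsonStore(ConfigStore):
    def save(self, config, path):
        with open(path, "w") as f:
            json.dump({"subareas": config.subareas, "links": config.links}, f, indent=2)
    def load(self, path):
        with open(path) as f:
            d = json.load(f)
        return CatchmentConfig(d["subareas"], d["links"])

class TabSeparatedStore(ConfigStore):
    """Legacy-style option: tab-separated text (only the subarea table shown here)."""
    def save(self, config, path):
        with open(path, "w", newline="") as f:
            w = csv.writer(f, delimiter="\t")
            for sa in config.subareas:
                w.writerow([sa["id"], sa["area_km2"]])
    def load(self, path):
        with open(path, newline="") as f:
            rows = list(csv.reader(f, delimiter="\t"))
        return CatchmentConfig([{"id": r[0], "area_km2": float(r[1])} for r in rows], links=[])
```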

Relevance:

100.00%

Publisher:

Abstract:

This paper considers the issue of relative efficiency measurement in the context of the public sector. In particular, we consider the efficiency measurement approach provided by Data Envelopment Analysis (DEA). The application considered the main Brazilian federal universities for the year 1994. Given the large number of inputs and outputs, this paper advances the idea of using factor analysis to explore common dimensions in the data set. This procedure made a meaningful application of DEA possible, which finally provided a set of efficiency scores for the universities considered.
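
The paper's exact formulation is not reproduced here; the sketch below only illustrates the two-step idea under stated assumptions: factor analysis compresses the many correlated inputs into a few factors (shifted to be strictly positive, since DEA requires positive data), and an input-oriented CCR DEA model, solved as one linear program per university, yields the efficiency scores. All data and dimension choices are invented.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.decomposition import FactorAnalysis

def reduce_inputs(X_raw, n_factors=2):
    """Compress many correlated inputs into a few factors, shifted to positive values."""
    scores = FactorAnalysis(n_components=n_factors, random_state=0).fit_transform(X_raw)
    return scores - scores.min(axis=0) + 1.0   # DEA needs strictly positive data

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU (rows of X: inputs, rows of Y: outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        A_ub, b_ub = [], []
        for i in range(m):                       # sum_j lam_j * x_ij <= theta * x_io
            A_ub.append(np.r_[-X[o, i], X[:, i]])
            b_ub.append(0.0)
        for r in range(s):                       # sum_j lam_j * y_rj >= y_ro
            A_ub.append(np.r_[0.0, -Y[:, r]])
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.fun)                   # efficiency score in (0, 1]
    return np.array(scores)

# usage with made-up data: 30 universities, 6 raw inputs, 2 outputs
rng = np.random.default_rng(0)
X_raw = rng.uniform(1, 10, size=(30, 6))
Y = rng.uniform(1, 10, size=(30, 2))
efficiency = dea_ccr_input(reduce_inputs(X_raw), Y)
```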

Relevance:

100.00%

Publisher:

Abstract:

Starting from the idea that economic systems fall within the domain of complexity theory, in which many agents interact with each other without central control and these interactions can change the future behavior of the agents and of the entire system, similar to a chaotic system, we extend the model of Russo et al. (2014) to carry out three experiments focusing on the interaction between banks and firms in an artificial economy. The first experiment concerns relationship banking, where, according to the literature, the interaction over time between banks and firms can produce mutual benefits, mainly due to the reduction of the information asymmetry between them. The second experiment is related to information heterogeneity in the credit market, where the larger the bank, the greater its visibility in the credit market, increasing the number of consultations for new loans. Finally, the third experiment is about the effects on the credit market of the heterogeneity of prices that firms face in the goods market.
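
As a toy illustration only (this is not the Russo et al. (2014) model, and all parameter values are invented), the sketch below shows how two of the described mechanisms can be encoded in an agent-based loop: size-weighted bank visibility when firms consult for loans, and relationship banking that lowers the offered rate as a bank's loan history with a firm grows.

```python
import random

# Toy sketch, not the model from the paper: parameters and update rules are invented.

class Bank:
    def __init__(self, name, size):
        self.name, self.size = name, size
        self.relationships = {}              # firm -> number of past loans

    def offer_rate(self, firm, base_rate=0.05):
        # relationship banking: repeated interaction lowers the offered rate
        history = self.relationships.get(firm, 0)
        return base_rate / (1 + 0.1 * history)

class Firm:
    def __init__(self, name):
        self.name = name

    def consult(self, banks, n_consults=3):
        # information heterogeneity: larger banks are more visible, so consulted more often
        weights = [b.size for b in banks]
        candidates = random.choices(banks, weights=weights, k=n_consults)
        return min(candidates, key=lambda b: b.offer_rate(self))

banks = [Bank("small", size=1), Bank("large", size=10)]
firms = [Firm(f"firm{i}") for i in range(100)]
for _ in range(50):                           # repeated credit-market rounds
    for firm in firms:
        chosen = firm.consult(banks)
        chosen.relationships[firm] = chosen.relationships.get(firm, 0) + 1
```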

Relevance:

100.00%

Publisher:

Abstract:

One objective of the feeder reconfiguration problem in distribution systems is to minimize the power losses for a specific load. The mathematical model for this problem is a nonlinear mixed-integer problem that is generally hard to solve. This paper proposes an algorithm based on artificial neural network theory. In this context, clustering techniques for determining the best training set for a single neural network with generalization ability are also presented. The proposed methodology was employed to solve two electrical systems and presented good results. Moreover, the methodology can be employed for large-scale systems in a real-time environment.
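
A minimal sketch of the clustering-plus-single-network idea, under assumptions not taken from the paper (the load data, the stand-in for the loss-minimizing configuration, and the network sizes are all invented): k-means picks representative load scenarios as a compact training set, and one neural network then generalizes to unseen operating points.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

# Illustrative sketch only; the paper's clustering scheme and network design are not reproduced.
rng = np.random.default_rng(1)
loads = rng.uniform(0.2, 1.0, size=(5000, 12))             # hypothetical feeder load profiles
best_config = (loads.mean(axis=1) > 0.6).astype(int)        # stand-in for the loss-minimizing switch plan

# 1) clustering selects a representative, compact training set for a single network
km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(loads)
representative_idx = [np.argmin(np.linalg.norm(loads - c, axis=1)) for c in km.cluster_centers_]
X_train, y_train = loads[representative_idx], best_config[representative_idx]

# 2) one neural network generalizes from the representatives to unseen load states
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("accuracy on all scenarios:", net.score(loads, best_config))
```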

Relevance:

100.00%

Publisher:

Abstract:

This article introduces the software program called EthoSeq, which is designed to extract probabilistic behavioral sequences (tree-generated sequences, or TGSs) from observational data and to prepare a TGS-species matrix for phylogenetic analysis. The program uses graph-theory algorithms to automatically detect behavioral patterns within the observational sessions. It includes filtering tools to adjust the search procedure to user-specified statistical needs. Preliminary analyses of data sets, such as grooming sequences in birds and foraging tactics in spiders, uncover a large number of TGSs which together yield single phylogenetic trees. An example of the use of the program is our analysis of felid grooming sequences, in which we obtained 1,386 felid grooming TGSs for seven species, resulting in a single phylogeny. These results show that behavior is definitely useful in phylogenetic analysis. EthoSeq simplifies and automates such analyses, uncovers many of the hidden patterns in long behavioral sequences, and prepares these data for further analysis with standard phylogenetic programs. We hope it will encourage many empirical studies on the evolution of behavior.
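
EthoSeq's actual algorithms are not given in the abstract; as a rough, hypothetical analogue of extracting tree-generated sequences, the sketch below builds a directed transition graph from observational sessions and enumerates paths whose transitions all pass a frequency filter (the behaviour labels are invented).

```python
from collections import defaultdict

# Illustrative sketch only: not the EthoSeq implementation.

def transition_graph(sessions):
    """Count directed transitions between consecutive behaviours across all sessions."""
    counts = defaultdict(int)
    for seq in sessions:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    return counts

def frequent_paths(counts, start, max_len=4, min_count=2, prefix=None):
    """Depth-first enumeration of paths whose every transition occurs at least min_count times."""
    prefix = prefix or [start]
    paths = [tuple(prefix)]
    if len(prefix) >= max_len:
        return paths
    for (a, b), c in counts.items():
        if a == start and c >= min_count and b not in prefix:
            paths += frequent_paths(counts, b, max_len, min_count, prefix + [b])
    return paths

sessions = [["lick_paw", "wipe_face", "lick_flank"],
            ["lick_paw", "wipe_face", "lick_flank", "scratch"],
            ["lick_paw", "lick_flank"]]
print(frequent_paths(transition_graph(sessions), "lick_paw"))
```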

Relevance:

100.00%

Publisher:

Abstract:

The contents of some nutrients in 35 Brazilian green and roasted coffee samples were determined by flame atomic absorption spectrometry (Ca, Mg, Fe, Cu, Mn, and Zn), flame atomic emission photometry (Na and K), and the Kjeldahl method (N), after preparing the samples by wet digestion procedures using (i) a digester heating block and (ii) a conventional microwave oven system with pressure and temperature control. The accuracy of the procedures was checked using three standard reference materials (National Institute of Standards and Technology: SRM 1573a Tomato Leaves, SRM 1547 Peach Leaves, and SRM 1570a Trace Elements in Spinach). Analysis of the data with a t-test showed that results obtained by microwave-assisted digestion were more accurate than those obtained with the block digester at the 95% confidence level. In addition to better accuracy, other favorable characteristics found were lower analytical blanks, lower reagent consumption, and shorter digestion times. Exploratory analysis of the results using Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) showed that Na, K, Ca, Cu, Mg, and Fe were the principal elements for discriminating between green and roasted coffee samples. © 2007 Sociedade Brasileira de Química.
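
A brief sketch of the exploratory step, using simulated concentrations rather than the paper's measurements (sample counts and values are invented): the element data are autoscaled, projected with PCA, and grouped with Ward-linkage hierarchical clustering.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Simulated concentrations only; the paper's measured values are not reproduced.
elements = ["Na", "K", "Ca", "Mg", "Fe", "Cu", "Mn", "Zn"]
rng = np.random.default_rng(2)
green = rng.normal(loc=1.0, scale=0.1, size=(18, len(elements)))
roasted = rng.normal(loc=1.3, scale=0.1, size=(17, len(elements)))
X = StandardScaler().fit_transform(np.vstack([green, roasted]))   # autoscale before PCA/HCA

scores = PCA(n_components=2).fit_transform(X)                     # exploratory projection (PCA)
clusters = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")  # HCA, two groups
print(scores[:3], clusters)
```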

Relevance:

100.00%

Publisher:

Abstract:

In geophysics and seismology, raw data need to be processed to generate useful information that can be turned into knowledge by researchers. The number of sensors acquiring raw data is increasing rapidly. Without good data management systems, more time can be spent querying and preparing datasets for analyses than acquiring raw data. In addition, a lot of good-quality data acquired at great effort can be lost forever if they are not correctly stored; local and international cooperation will probably be reduced, and a lot of data will never become scientific knowledge. For this reason, the Seismological Laboratory of the Institute of Astronomy, Geophysics and Atmospheric Sciences at the University of São Paulo (IAG-USP) has concentrated fully on its data management system. This report describes the efforts of the IAG-USP to set up a seismology data management system to facilitate local and international cooperation. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

The need for high reliability and environmental concerns are making underground networks the most appropriate choice for energy distribution. However, like any other system, underground distribution systems are not free of failures. In this context, this work presents an approach to studying underground systems with computational tools, integrating the PSCAD/EMTDC software with artificial neural networks to assist fault location in power distribution systems. Targeted benefits include greater accuracy and reduced repair time. The results presented here show the feasibility of the proposed approach. © 2012 IEEE.
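
A minimal sketch of the neural-network fault-location step, with the PSCAD/EMTDC simulation replaced by directly generated features so the example is self-contained (the feature definitions and network size are assumptions, not the paper's): a regressor maps substation voltage/current features measured during a fault to the fault distance.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch only: in the paper the fault waveforms come from PSCAD/EMTDC simulations;
# here the features are generated directly so the example runs stand-alone.
rng = np.random.default_rng(3)
distance_km = rng.uniform(0.1, 10.0, size=2000)                       # fault position along the feeder
v_sag = 1.0 - 0.8 / (1.0 + distance_km) + rng.normal(0, 0.01, 2000)   # hypothetical voltage-sag feature
i_fault = 5.0 / (0.5 + distance_km) + rng.normal(0, 0.05, 2000)       # hypothetical fault-current feature
X = np.column_stack([v_sag, i_fault])

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=3000, random_state=0))
net.fit(X[:1500], distance_km[:1500])                                 # train on simulated fault cases
error_km = np.abs(net.predict(X[1500:]) - distance_km[1500:])
print("mean location error (km):", round(error_km.mean(), 3))
```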

Relevance:

100.00%

Publisher:

Abstract:

Includes bibliography

Relevance:

100.00%

Publisher:

Abstract:

The CELADE Sub-Programme on Population Documentation and Data Processing comprises three elements corresponding to its first immediate objectives: the Population Documentation System for Latin America (DOCPAL), the Data Bank, and the Information Processing Unit. This document describes the objectives, background, and rationale of each of these three units, as well as the activities planned for the period 1980-1983, which underpin the budget contained in the request for funds to UNFPA.