949 results for soil data requirements
Abstract:
In e-Science experiments, it is vital to record the experimental process for later use, such as interpreting results, verifying that the correct process took place, or tracing where data came from. The process that led to some data is called the provenance of that data, and a provenance architecture is the software architecture for a system that provides the necessary functionality to record, store and use process documentation. However, there has been little principled analysis of what is actually required of a provenance architecture, so it is impossible to determine the functionality such an architecture would ideally support. In this paper, we present use cases for a provenance architecture from current experiments in biology, chemistry, physics and computer science, and analyse the use cases to determine the technical requirements of a generic, technology- and application-independent architecture. We propose an architecture that meets these requirements and evaluate a preliminary implementation by attempting to realise two of the use cases.
Abstract:
The open provenance architecture (OPA) approach to the challenge was distinct in several regards. In particular, it is based on an open, well-defined data model and architecture, allowing different components of the challenge workflow to independently record documentation, and allowing the workflow to be executed in any environment. Another notable feature is that we distinguish between the data recorded about what has occurred, the process documentation, and the provenance of a data item, which is everything that caused the data item to be as it is and is obtained as the result of a query over process documentation. This distinction allows us to tailor the system to separately best address the requirements of recording and of querying documentation. Other notable features include the explicit recording of causal relationships between both events and data items, an interaction-based world model, intensional definition of data items in queries rather than reliance on explicit naming mechanisms, and styling of documentation to support non-functional application requirements such as reducing storage costs or ensuring privacy of data. In this paper we describe how each of these features aids us in answering the challenge provenance queries.
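The idea that a data item's provenance is everything that causally led to it, obtained by querying recorded process documentation, can be illustrated with a minimal sketch. This is not the OPA data model or API; the `caused_by` mapping and item names are hypothetical, and the sketch simply shows provenance as the set of causal ancestors recoverable from recorded causal relationships.

```python
def provenance(caused_by, item):
    """Return all recorded causal ancestors of `item`.

    caused_by: dict mapping an item/event to the list of items/events
    that directly caused it (the recorded process documentation).
    """
    seen, stack = set(), [item]
    while stack:
        node = stack.pop()
        for cause in caused_by.get(node, ()):
            if cause not in seen:
                seen.add(cause)
                stack.append(cause)
    return seen
```

For example, if `d3` was derived from `d2`, which was produced from `d1` by event `e1`, then `provenance(caused_by, "d3")` returns all three ancestors, mirroring a transitive query over process documentation.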
Abstract:
From where did this tweet originate? Was this quote from the New York Times modified? Daily, we rely on data from the Web but often it is difficult or impossible to determine where it came from or how it was produced. This lack of provenance is particularly evident when people and systems deal with Web information or with any environment where information comes from sources of varying quality. Provenance is not captured pervasively in information systems. There are major technical, social, and economic impediments that stand in the way of using provenance effectively. This paper synthesizes requirements for provenance on the Web for a number of dimensions focusing on three key aspects of provenance: the content of provenance, the management of provenance records, and the uses of provenance information. To illustrate these requirements, we use three synthesized scenarios that encompass provenance problems faced by Web users today.
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the access arrangement a researcher prefers. By decoupling data model and data persistence, it is much easier to interchangeably use, for instance, relational databases to provide stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate. A schema derived from CF conventions has been designed to handle time series efficiently for SWIFT.
Abstract:
Microwave remote sensing has high potential for soil moisture retrieval. However, efficient retrieval of soil moisture depends on optimally choosing the soil moisture retrieval parameters. In this study, an initial evaluation of the SMOS L2 product is first performed, and then four approaches to soil moisture retrieval from SMOS brightness temperature are reported. The tau-omega rationale, based on the radiative transfer equation, is used in this study for the soil moisture retrievals. The single channel algorithm (SCA) using H polarisation is implemented with modifications, which include the effective temperatures simulated from ECMWF (downscaled using the WRF-NOAH Land Surface Model (LSM)) and MODIS. The retrieved soil moisture is then utilized for soil moisture deficit (SMD) estimation using empirical relationships, with Probability Distributed Model based SMD as a benchmark. The square of the correlation during calibration indicates a value of R² = 0.359 for approach 4 (WRF-NOAH LSM based LST with optimized roughness parameters), followed by approach 2 (optimized roughness parameters and MODIS based LST) (R² = 0.293), approach 3 (WRF-NOAH LSM based LST with no optimization) (R² = 0.267) and approach 1 (MODIS based LST with no optimization) (R² = 0.163). Similarly, during validation the highest performance is achieved by approach 4, and the other approaches follow a trend similar to the calibration. All performances are depicted through a Taylor diagram, which indicates that H polarisation using ECMWF based LST gives a better performance for SMD estimation than the original SMOS L2 products at catchment scale.
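The tau-omega rationale relates observed brightness temperature to soil and canopy state. A minimal sketch of the commonly used zeroth-order form is below; the exact parameterization used in the study is not given in the abstract, and all numeric values in the usage example are illustrative, not the study's data.

```python
import math

def tau_omega_tb(t_soil, t_canopy, emissivity_p, tau, omega, theta_deg):
    """Zeroth-order tau-omega brightness temperature (polarisation p).

    gamma is the canopy transmissivity along the slant path; the three
    terms are attenuated soil emission, direct canopy emission, and
    canopy emission reflected off the soil and re-attenuated.
    """
    gamma = math.exp(-tau / math.cos(math.radians(theta_deg)))
    r_p = 1.0 - emissivity_p  # soil reflectivity
    soil_term = t_soil * emissivity_p * gamma
    canopy_up = t_canopy * (1.0 - omega) * (1.0 - gamma)
    canopy_reflected = canopy_up * r_p * gamma
    return soil_term + canopy_up + canopy_reflected
```

Two sanity checks follow directly from the formula: with tau = 0 (bare soil) the result collapses to soil temperature times emissivity, and for a very dense canopy it approaches canopy temperature times (1 − omega).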
Abstract:
This paper takes a first step toward a methodology to quantify the influence of regulation on short-run earnings dynamics. It also provides evidence on the patterns of wage adjustment adopted during the recent high-inflation experience in Brazil. The large variety of official wage indexation rules adopted in Brazil in recent years, combined with the availability of monthly labor market surveys, makes the Brazilian case a good laboratory for testing how regulation affects earnings dynamics. In particular, the combination of large sample sizes with the possibility of following the same worker through short periods of time allows one to estimate the cross-sectional distribution of longitudinal statistics based on observed earnings (e.g., monthly and annual rates of change). The empirical strategy adopted here is to compare the distributions of longitudinal statistics extracted from actual earnings data with simulations generated from the minimum adjustment requirements imposed by the Brazilian Wage Law. The analysis provides statistics on how binding the wage regulation schemes were. Visual analysis of the distribution of wage adjustments proves useful in highlighting stylized facts that may guide future empirical work.
Abstract:
Online geographic databases have been growing steadily as they have become a crucial source of information for both social networks and safety-critical systems. Since the quality of such applications is largely related to the richness and completeness of their data, it becomes imperative to develop adaptable and persistent storage systems, able to make use of several sources of information as well as enabling the fastest possible response from them. This work creates a shared and extensible geographic model, able to retrieve and store information from the major spatial sources available. A geographic-based system also has very high requirements in terms of scalability, computational power and domain complexity, causing several difficulties for a traditional relational database as the number of results increases. NoSQL systems provide valuable advantages for this scenario, in particular graph databases, which are capable of modeling vast amounts of interconnected data while providing a very substantial increase in performance for several spatial requests, such as finding shortest-path routes and performing relationship lookups with high concurrency. In this work, we analyze the current state of geographic information systems and develop a unified geographic model, named GeoPlace Explorer (GE). GE is able to import and store spatial data from several online sources at a symbolic level in both a relational and a graph database, and several stress tests were performed in order to find the advantages and disadvantages of each database paradigm.
Abstract:
The domain of Knowledge Discovery (KD) and Data Mining (DM) is of growing importance at a time when more and more data is produced and knowledge is one of the most precious assets. Having explored the existing underlying theory, the results of ongoing research in academia, and industry practices in the domain of KD and DM, we have found that this is a domain that still lacks systematization. We also found that such systematization exists to a greater degree in the Software Engineering and Requirements Engineering domains, probably because they are more mature areas. We believe that it is possible to improve and facilitate the participation of enterprise stakeholders in requirements engineering for KD projects by systematizing the requirements engineering process for such projects. This will, in turn, result in more projects that end successfully, that is, with satisfied stakeholders, including in terms of time and budget constraints. With this in mind, and based on the state of the art, we propose SysPRE - Systematized Process for Requirements Engineering in KD projects. We begin by proposing an encompassing generic description of the KD process, with the main focus on the Requirements Engineering activities. This description is then used as a base for the application of the Design and Engineering Methodology for Organizations (DEMO) so that we can specify a formal ontology for this process. The resulting SysPRE ontology can serve as a base that enterprises can use not only to become aware of their own KD process and of the requirements engineering process in their KD projects, but also to improve such processes in practice, namely in terms of success rate.
Abstract:
The increasing use of fossil fuels, in line with the demographic explosion of cities, leads to huge environmental impacts on society. To mitigate these impacts, regulatory requirements have positively influenced the environmental consciousness of society as well as the strategic behavior of businesses. Along with this environmental awareness, regulatory bodies have formulated new laws to control potentially polluting activities, notably in the gas station sector. Seeking to increase market competitiveness, this sector needs to respond quickly to internal and external pressures, adapting to the new required standards in a strategic way to obtain the Green Badge. Gas stations have incorporated new strategies to attract and retain new customers, who present increasing social demands. In the social dimension, these projects help the local economy by generating jobs and income distribution. The present research aims to align the social, economic and environmental dimensions to set sustainable performance indicators for the gas station sector in the city of Natal/RN. A Sustainable Balanced Scorecard (SBSC) framework was created with a set of indicators for mapping the production process of gas stations. This mapping aimed at identifying operational inefficiencies through multidimensional indicators. To carry out this research, a system for evaluating sustainability performance was developed, applying Data Envelopment Analysis (DEA) through a quantitative method approach to detect the system's efficiency level. In order to understand the systemic complexity, sub-organizational processes were analyzed with the Network Data Envelopment Analysis (NDEA) technique, modeling their micro-activities to identify and diagnose the real causes of overall inefficiency.
The sample comprised 33 gas stations, and the conceptual model included 15 indicators distributed across the three dimensions of sustainability: social, environmental and economic. These three dimensions were measured by means of the classical input-oriented DEA-CCR model. To unify the performance scores of the individual dimensions, a single grouping index was designed based on two means: arithmetic and weighted. After this, another analysis was performed to measure the four perspectives of the SBSC: learning and growth, internal processes, customers, and financial, unified by averaging the performance scores. NDEA results showed that no company achieved excellence in sustainability performance, and some gas stations with higher NDEA efficiency proved to be inefficient under certain SBSC perspectives. A comparative analysis of sustainable performance among the gas stations was then done, enabling entrepreneurs to evaluate their performance against market competitors. Diagnoses were also obtained to support entrepreneurs' decision making in improving the management of organizational resources and to provide guidelines to regulators. Finally, the average index of sustainable performance was 69.42%, representing the efforts of gas stations toward environmental suitability. These results point to significant awareness in this segment, but further action is still needed to enhance sustainability in the long term.
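In the general multi-input, multi-output case the input-oriented CCR model is solved as a linear program per decision-making unit; with a single input and a single output it reduces to a productivity ratio normalized by the best performer. A minimal sketch of that special case, with made-up data rather than the study's 33 stations and 15 indicators:

```python
def ccr_efficiency_single(inputs, outputs):
    """Input-oriented CCR efficiency, single-input single-output case.

    Each DMU's productivity (output/input) is divided by the best
    productivity in the sample, so efficient units score exactly 1.0.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

For example, two stations producing the same output where one consumes twice the input score 1.0 and 0.5 respectively; the multidimensional case keeps the same logic but requires optimizing input/output weights per DMU.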
Abstract:
The objective of this study was to evaluate the protein requirements for hand-rearing Blue-fronted Amazon parrots (Amazona aestiva). Forty hatchlings were fed semi-purified diets containing one of four protein levels (as-fed basis): 13%, 18%, 23% and 28%. The experiment was carried out in a randomized block design with the initial weight of the nestling as the blocking factor and 10 parrots per protein level. Regression analysis was used to determine relationships between protein level and biometric measurements. The data indicated that 13% crude protein supported nestling growth, with 18% being the minimum tested level required for maximum development. The optimal protein concentration for maximum weight gain was 24.4% (p = 0.08; R² = 0.25), for tail length 23.7% (p = 0.09; R² = 0.19), wing length 23.0% (p = 0.07; R² = 0.17), tarsus length 21.3% (p = 0.06; R² = 0.10) and tarsus width 21.4% (p = 0.07; R² = 0.09). Tarsus measurements were larger in males (p < 0.05), indicating that sex must be considered when studying developing psittacines. These results were obtained using a highly digestible protein and a diet with moderate metabolizable energy levels.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Soil aggregation is an index of soil structure measured by mean weight diameter (MWD) or by scaling factors often interpreted as fragmentation fractal dimensions (Df). However, the MWD provides a biased estimate of soil aggregation due to spurious correlations among aggregate-size fractions and scale dependency. The scale-invariant Df rests on weak assumptions made to allow particle counts, is sensitive to the selection of the fractal domain, and may frequently exceed a value of 3, implying that Df is a biased estimate of aggregation. Aggregation indices based on mass may be computed without bias using compositional analysis techniques. Our objective was to elaborate compositional indices of soil aggregation and to compare them to MWD and Df using a published dataset describing the effect of 7 cropping systems on aggregation. Six aggregate-size fractions were arranged into a sequence of D-1 balances of building blocks that portray the process of soil aggregation. Isometric log-ratios (ilrs) are scale-invariant and orthogonal log contrasts, or balances, that possess the Euclidean geometry necessary to compute a distance between any two aggregation states, known as the Aitchison distance (A(x,y)). Close correlations (r > 0.98) were observed between MWD, Df, and the ilr contrasting large and small aggregate sizes. Several unbiased embedded ilrs can characterize the heterogeneous nature of soil aggregates and be related to soil properties or functions. Soil bulk density and penetration resistance were closely related to A(x,y) with reference to bare fallow. The A(x,y) is easy to implement as an unbiased index of soil aggregation using standard sieving methods and may allow comparisons between studies. (C) 2012 Elsevier B.V. All rights reserved.
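The Aitchison distance between two compositions (such as two sets of aggregate-size mass fractions) is the Euclidean distance between their centred log-ratio transforms. A minimal sketch with made-up fractions, not the paper's dataset:

```python
import math

def clr(x):
    # Centred log-ratio: log of each part over the geometric mean
    g = math.exp(sum(math.log(v) for v in x) / len(x))
    return [math.log(v / g) for v in x]

def aitchison_distance(x, y):
    # Euclidean distance in clr space; scale-invariant, so raw sieve
    # masses and normalized fractions give the same result
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clr(x), clr(y))))
```

Because the clr transform is scale-invariant, multiplying a composition by a constant leaves the distance unchanged, which is why raw sieve masses need not be closed to 100% before comparing aggregation states.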