988 results for soil data requirements


Relevance:

40.00%

Publisher:

Abstract:

In the last decade the principle of Open Access to publicly funded research has gained growing support from policy makers and funders across Europe, both at the national level and within the European Union context. At the European level, some of the first relevant steps were taken by the European Research Council (ERC) with a statement supporting Open Access (2006), shortly followed by guidelines for researchers funded by the ERC (2007) stating that all peer-reviewed publications from ERC-funded projects should be made openly accessible shortly after their publication. Those guidelines were revised in October 2013, reinforcing the mandatory character of the requirements and extending them to monographs.

Relevance:

40.00%

Publisher:

Abstract:

This study presents a two-stage process for determining suitable areas in which to grow fuel crops: i) the FAO Agro-Ecological Zones (AEZ) procedure is applied to four Indian states with different geographical characteristics; and ii) the growth of candidate crops is modelled with the GEPIC water and nutrient model, which is used to determine the potential yield of candidate crops in areas where the irrigation water is brackish or the soil is saline. The absence of digital soil maps, the paucity of readily available climate data and limited knowledge of the detailed requirements of candidate crops are some of the major problems; a series of detailed maps addressing them would allow the true potential of biofuels in India to be evaluated.
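As a rough illustration of the two-stage screening described above, the following sketch combines an agro-ecological suitability mask with a modelled-yield threshold. All names and values (the class grid, the yields, the threshold) are hypothetical placeholders, not outputs of the actual AEZ procedure or the GEPIC model.

```python
import numpy as np

# Stage 1: hypothetical AEZ suitability classes per grid cell
# (0 = unsuitable, 1 = marginally suitable, 2 = suitable).
aez_class = np.array([[0, 1, 2],
                      [2, 2, 1],
                      [0, 1, 2]])

# Stage 2: hypothetical GEPIC-style potential yields (t/ha) for one
# candidate crop under brackish-water / saline-soil conditions.
potential_yield = np.array([[0.5, 1.8, 3.2],
                            [2.9, 3.5, 1.1],
                            [0.2, 2.4, 4.0]])

MIN_YIELD = 2.0  # assumed economic threshold, t/ha

# A cell is retained only if it passes both screens.
suitable = (aez_class >= 1) & (potential_yield >= MIN_YIELD)
print(suitable)
```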

Relevance:

40.00%

Publisher:

Abstract:

The data files give the basic field and laboratory data on five ponds in the northeast Siberian Arctic tundra on Samoylov. The files contain water and soil temperature data for the ponds; methane fluxes, measured with closed chambers in the pond centres without vascular plants and at the margins with vascular plants; the contribution of plant-mediated fluxes to total methane fluxes; the concentrations of gases (methane, dissolved inorganic carbon, oxygen) in the soil and in the water column of the ponds; microbial activities (methane production, methane oxidation, aerobic and anaerobic carbon dioxide production); total carbon pools in the different horizons of the bottom soils; soil bulk density; soil substance density; and soil porosity.

Relevance:

40.00%

Publisher:

Abstract:

A compositional multivariate approach is used to analyse regional-scale soil geochemical data obtained as part of the Tellus Project generated by the Geological Survey of Northern Ireland (GSNI). The multi-element total concentration data presented comprise XRF analyses of 6862 rural soil samples collected at 20 cm depth on a non-aligned grid at one site per 2 km². Censored data were imputed using published detection limits. Using these imputed values for 46 elements (including LOI), each soil sample site was assigned to the regional geology map provided by GSNI, initially using the dominant lithology for the map polygon. Northern Ireland includes a diversity of geology representing a stratigraphic record from the Mesoproterozoic up to and including the Palaeogene. However, the advance of ice sheets and their meltwaters over the last 100,000 years has left at least 80% of the bedrock covered by superficial deposits, including glacial till and post-glacial alluvium and peat. The question is to what extent the soil geochemistry reflects the underlying geology or the superficial deposits. To address this, the geochemical data were transformed using centered log ratios (clr) to meet the requirements of compositional data analysis and avoid closure issues. Compositional multivariate techniques, including compositional Principal Component Analysis (PCA) and the minimum/maximum autocorrelation factor (MAF) method, were then used to determine the influence of the underlying geology on the soil geochemistry signature. PCA showed that 72% of the variation was captured by the first four principal components (PCs), implying “significant” structure in the data. Analysis of variance showed that only 10 PCs were necessary to classify the soil geochemical data. As an improvement over PCA, a classification based on MAF analysis, which uses the spatial relationships of the data, was undertaken using the first six dominant factors. Understanding the relationship between soil geochemistry and superficial deposits is important for environmental monitoring of fragile ecosystems such as peat. To explore whether peat cover could be predicted from the classification, the lithology designation was adapted to include the presence of peat, based on GSNI superficial deposit polygons, and linear discriminant analysis (LDA) was undertaken. Prediction accuracy for the LDA classification improved from 60.98% based on PCA using 10 principal components to 64.73% using MAF based on the six most dominant factors. The misclassification of peat may reflect degradation of peat-covered areas since the creation of the superficial deposit classification. Further work will examine the influence of underlying lithologies on elemental concentrations in peat composition and the effect of this on the classification analysis.
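As a minimal sketch of the core transformation step described above, the following applies a centered log-ratio (clr) transform and a PCA to synthetic data standing in for the 6862 × 46 Tellus concentration matrix; the imputation of censored values and the MAF and LDA steps are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

def clr(X):
    """Centered log-ratio transform: the log of each part minus the
    log of the geometric mean of its row (rows must be positive)."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# Synthetic stand-in for the sample-by-element concentration matrix.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 46))

Z = clr(X)
pca = PCA(n_components=4)
scores = pca.fit_transform(Z)
print("variance explained by first four PCs:",
      round(pca.explained_variance_ratio_.sum(), 3))
```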

Relevance:

40.00%

Publisher:

Abstract:

The use of secondary data in health care research has become a very important issue over the past few years. Data from the treatment context are being used for external quality assurance as well as to answer medical questions in the form of registers and research databases. Additionally, the establishment of electronic clinical systems such as data warehouses provides new opportunities for the secondary use of clinical data. Because health data are among the most sensitive information about an individual, they must be safeguarded from disclosure.

Relevance:

40.00%

Publisher:

Abstract:

Agricultural soils are a major source of nitrous oxide (N2O) emissions, and an understanding of the factors regulating such emissions across contrasting soil types is critical for improved estimation through modelling and for mitigation of N2O. In this study we investigated the role of soil texture and its interaction with plants in regulating N2O fluxes in agricultural systems. A measurement system that combined weighing lysimeters with automated chambers was used to directly compare continuously measured surface N2O fluxes, leaching losses of water and nitrogen, and evapotranspiration in three contrasting soil types of the Riverine Plain, NSW, Australia. The soils comprised a deep sand, a loam and a clay loam, with and without the presence of wheat plants. All soils were under the same fertilizer management, and irrigation was applied according to plant water requirements. In fallow soils, texture significantly affected N2O emissions in the order clay loam > loam > sand. However, when planted, the difference in N2O emissions among the three soil types became less pronounced. Nitrous oxide emissions were 6.2 and 2.4 times higher from fallow clay loam and loam cores, respectively, compared with cores planted with wheat. This is considered to be due to plant uptake of water and nitrogen, which reduced the amounts of soil water and available nitrogen and therefore made soil conditions less favourable for denitrification. The effect of plants on N2O emissions was not apparent in the coarse-textured sandy soil, probably because of aerobic soil conditions caused by low water-holding capacity and rapid drainage irrespective of plant presence, resulting in reduced denitrification activity. More than 90% of N2O emissions were derived from denitrification in the fine-textured clay loam, as determined over a two-week period using K15NO3 fertilizer. The proportion of N2O not derived from K15NO3 was higher in the coarse-textured sand and loam; this N2O may have been derived from soil N through nitrification or through denitrification of mineralized N. Water-filled pore space was a poorer predictor of N2O emissions than volumetric water content because of variable bulk density among the soil types. The data may better inform the calibration of greenhouse gas prediction models, as soil texture is one of the primary factors that explain spatial variation in N2O emissions by regulating soil oxygen. Defining the significance of N2O emissions between planted and fallow soils may enable improved yield-scaled N2O emission assessment and better water and nitrogen scheduling in the pre-watering phase during early crop establishment and within rotations of irrigated arable cropping systems.
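The comparison of water-filled pore space (WFPS) and volumetric water content in the abstract rests on the standard relationship sketched below; the particle density and the bulk density values are assumptions for illustration, not data from the study.

```python
PARTICLE_DENSITY = 2.65  # g/cm3, a common assumption for mineral soils

def wfps(theta_v, bulk_density):
    """Water-filled pore space: volumetric water content (cm3/cm3)
    divided by total porosity, with porosity = 1 - BD/PD."""
    porosity = 1.0 - bulk_density / PARTICLE_DENSITY
    return theta_v / porosity

# The same volumetric water content maps to different WFPS values
# when bulk density varies among soil types (illustrative numbers).
for soil, bd in [("sand", 1.60), ("loam", 1.40), ("clay loam", 1.20)]:
    print(soil, round(wfps(0.30, bd), 2))
```

Because bulk density differs among the soils, identical volumetric water contents correspond to quite different WFPS values, consistent with the finding that volumetric water content was the better predictor across soil types.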

Relevance:

30.00%

Publisher:

Abstract:

With the advent of Service-Oriented Architecture, Web services have gained tremendous popularity. Given the availability of a large number of Web services, finding an appropriate Web service according to the requirements of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used in describing the services, as well as the input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology is proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis of the content in the Web Service Description Language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging applied to a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed from a large number of terms helps to find the hidden meaning of query terms that otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirements of the user; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link-analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost of traversal. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation was performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also confirm that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
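The abstract does not name the all-pairs shortest-path algorithm used in the link-analysis phase, so the sketch below uses Floyd-Warshall as one standard choice, with a small hypothetical cost matrix standing in for the service-linkage graph.

```python
import math

# Hypothetical traversal costs between four candidate services;
# math.inf marks pairs whose outputs and inputs cannot be linked.
INF = math.inf
cost = [
    [0,   2,   INF, 7],
    [INF, 0,   3,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]

n = len(cost)
dist = [row[:] for row in cost]

# Floyd-Warshall: relax every pair through every intermediate node.
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

# Cheapest composition from service 0 to service 3:
print(dist[0][3])  # 6, via 0 -> 1 -> 2 -> 3 rather than the direct 7
```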

Relevance:

30.00%

Publisher:

Abstract:

Reliable budget/cost estimates for road maintenance and rehabilitation are subject to uncertainty and variability in road asset condition and in the characteristics of road users. The CRC CI research project 2003-029-C ‘Maintenance Cost Prediction for Road’ developed a method for assessing variation and reliability in budget/cost estimates for road maintenance and rehabilitation. The method is based on probability-based reliability theory and statistical methods. The next stage of the project is to apply the developed method to predict maintenance/rehabilitation budgets/costs of large networks for strategic investment. The first task is to assess the variability of road data. This report presents initial results of the analysis of the variability of road data. A case study for dry non-reactive soil is presented to demonstrate the concept of analysing the variability of road data for large road networks. In assessing the variability of road data, large road networks were divided into categories with common characteristics according to soil and climatic conditions, pavement conditions, pavement types, surface types and annual average daily traffic. The probability distributions, statistical means and standard deviations of asset conditions and annual average daily traffic for each category were quantified. The probability distributions and the statistical information obtained in this analysis will be used to assess the variation and reliability in budget/cost estimates at a later stage. Typically, mean values of the asset data in each category are used as inputs for investment analysis, and the variability of the asset data within each category is not taken into account. This analysis demonstrates that the method can be applied in practice to take the variability of road data into account when analysing large road networks for maintenance/rehabilitation investment.
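A minimal sketch of the per-category summarisation step is given below; the column names and values are invented stand-ins for the real network data.

```python
import pandas as pd

# Hypothetical road-segment records; the actual categorisation uses
# soil and climatic conditions, pavement conditions and types,
# surface types and annual average daily traffic (AADT).
segments = pd.DataFrame({
    "soil": ["dry non-reactive", "dry non-reactive", "wet reactive",
             "wet reactive", "dry non-reactive"],
    "surface": ["sealed", "sealed", "sealed", "unsealed", "unsealed"],
    "roughness": [2.1, 2.6, 3.4, 4.0, 2.9],   # condition measure
    "aadt": [1200, 950, 400, 150, 600],
})

# Mean and standard deviation of condition and traffic per category:
# the statistical inputs to the later reliability-based estimation.
summary = segments.groupby(["soil", "surface"]).agg(["mean", "std"])
print(summary)
```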

Relevance:

30.00%

Publisher:

Abstract:

The requirement to monitor the rapid pace of environmental change due to global warming and human development is producing large volumes of data and placing much stress on the capacity of ecologists to store, analyse and visualise those data. To date, much of the data has been provided by low-level sensors monitoring soil moisture, dissolved nutrients, light intensity, gas composition and the like. However, a significant part of an ecologist's work is to obtain information about species diversity, distributions and relationships. This task typically requires the physical presence of an ecologist in the field, listening and watching for species of interest. It is an extremely difficult task to automate because of the higher-order difficulties in bandwidth, data management and intelligent analysis if one wishes to emulate the highly trained eyes and ears of an ecologist. This paper is concerned with just one part of the bigger challenge of environmental monitoring: the acquisition and analysis of acoustic recordings of the environment. Our intention is to provide helpful tools to ecologists, tools that apply information and computational technologies to all aspects of the acoustic environment. The online system we are building in conjunction with ecologists offers an integrated approach to recording, data management and analysis. The ecologists we work with have different requirements, and we have therefore adopted a toolbox approach: we offer a number of different web services that can be concatenated according to need. In particular, one group of ecologists is concerned with identifying the presence or absence of species and their distributions in time and space. Another group, motivated by legislative requirements for measuring habitat condition, is interested in summary indices of environmental health. In both cases, the key issues are scalability and automation.
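The toolbox idea of concatenating web services can be sketched as a simple pipeline of composable steps; the step names and their outputs are invented for illustration and do not correspond to the actual services offered by the system.

```python
from functools import reduce

# Invented stand-ins for the kinds of analysis services described:
# each step takes and returns a plain dict describing one recording.
def segment(rec):
    rec["segments"] = ["s1", "s2"]  # placeholder acoustic segmentation
    return rec

def detect_species(rec):
    rec["species"] = {"s1": "koala", "s2": None}  # placeholder detection
    return rec

def summary_index(rec):
    hits = sum(v is not None for v in rec["species"].values())
    rec["health_index"] = hits / len(rec["species"])  # toy index
    return rec

def pipeline(*steps):
    """Concatenate services toolbox-style: apply each step in order."""
    return lambda rec: reduce(lambda r, f: f(r), steps, rec)

analyse = pipeline(segment, detect_species, summary_index)
print(analyse({"file": "dawn_chorus.wav"}))
```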

Relevance:

30.00%

Publisher:

Abstract:

There is currently a strong focus worldwide on the potential of large-scale Electronic Health Record (EHR) systems to cut costs and improve patient outcomes through increased efficiency. This is accomplished by aggregating medical data from isolated Electronic Medical Record databases maintained by different healthcare providers. The privacy and reliability of Electronic Health Records are crucial concerns for healthcare service consumers. Traditional security mechanisms are designed to satisfy confidentiality, integrity and availability requirements, but they fail to provide a means of measuring data reliability from a data-entry perspective. In this paper, we introduce a Medical Data Reliability Assessment (MDRA) service model that assesses the reliability of medical data by evaluating the trustworthiness of its sources, usually the healthcare provider that created the data and the medical practitioner who diagnosed the patient and authorised entry of the data into the patient's medical record. The result is then expressed by manipulating health record metadata to alert medical practitioners relying on the information to possible reliability problems.
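The abstract does not specify how the two source trust values are combined, so the sketch below simply averages hypothetical provider and practitioner scores and attaches the result to the record metadata, in the spirit of the MDRA model.

```python
from dataclasses import dataclass, field

@dataclass
class MedicalRecord:
    content: str
    metadata: dict = field(default_factory=dict)

def assess_reliability(record, provider_trust, practitioner_trust):
    """Hypothetical MDRA-style assessment: derive a record-level
    reliability score from the trust placed in its two sources and
    write it into the metadata as an alert for later readers."""
    score = (provider_trust + practitioner_trust) / 2  # assumed rule
    record.metadata["reliability"] = round(score, 2)
    record.metadata["reliability_alert"] = score < 0.5
    return record

rec = assess_reliability(MedicalRecord("diagnosis: ..."), 0.8, 0.3)
print(rec.metadata)  # {'reliability': 0.55, 'reliability_alert': False}
```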

Relevance:

30.00%

Publisher:

Abstract:

Electronic Health Record (EHR) systems are being introduced to overcome the limitations associated with paper-based and isolated Electronic Medical Record (EMR) systems. This is accomplished by aggregating medical data and consolidating them in one digital repository. Though an EHR system provides obvious functional benefits, there is a growing concern about the privacy and reliability (trustworthiness) of Electronic Health Records. Security requirements such as confidentiality, integrity, and availability can be satisfied by traditional hard security mechanisms. However, measuring data trustworthiness from the perspective of data entry is an issue that cannot be solved with traditional mechanisms, especially since degrees of trust change over time. In this paper, we introduce a Time-variant Medical Data Trustworthiness (TMDT) assessment model to evaluate the trustworthiness of medical data by evaluating the trustworthiness of its sources, namely the healthcare organisation where the data was created and the medical practitioner who diagnosed the patient and authorised entry of this data into the patient’s medical record, with respect to a certain period of time. The result can then be used by the EHR system to manipulate health record metadata to alert medical practitioners relying on the information to possible reliability problems.
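The distinguishing feature of the TMDT model is that trust varies with time; the abstract does not give the time dependence, so the sketch below assumes a simple exponential decay of source trust with record age.

```python
def time_variant_trust(base_trust, age_years, half_life_years=5.0):
    """Assumed decay rule: trust established at data-entry time halves
    every `half_life_years` (illustrative only; the TMDT model's actual
    time dependence is not specified in the abstract)."""
    return base_trust * 0.5 ** (age_years / half_life_years)

# A record entered by a highly trusted practitioner, evaluated at
# increasing record ages (years).
for age in (0, 2, 5, 10):
    print(age, round(time_variant_trust(0.9, age), 3))
```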