231 results for Soil vapor extraction
Abstract:
We present an overview of the QUT plant classification system submitted to LifeCLEF 2014. This system uses generic features extracted from a convolutional neural network previously trained for general object classification. We examine the effectiveness of these features for plant classification when used in combination with an extremely randomised forest. Using this system, with minimal tuning, we obtained relatively good results, with a score of 0.249 on the LifeCLEF 2014 test set.
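As a rough illustration of the pipeline this abstract describes, the sketch below feeds pre-extracted CNN features to scikit-learn's ExtraTreesClassifier, a standard implementation of extremely randomised forests. The feature dimensionality, labels, and hyperparameters are placeholders; the abstract does not specify the QUT system's actual configuration.

```python
# Sketch: classify plants from generic CNN features with an extremely
# randomised forest (scikit-learn's ExtraTreesClassifier). The feature
# matrix X (n_images x n_features) is assumed to have been extracted
# beforehand from a CNN pretrained for object classification.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4096))      # placeholder CNN features
y = rng.integers(0, 10, size=500)     # placeholder species labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```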
Abstract:
Semantic perception and object labeling are key requirements for robots interacting with objects on a higher level. Symbolic annotation of objects allows the use of planning algorithms for object interaction, for instance in a typical fetch-and-carry scenario. In current research, perception is usually based on 3D scene reconstruction and geometric model matching, where trained features are matched with a 3D sample point cloud. In this work we propose a semantic perception method based on spatio-semantic features. These features are defined in a natural, symbolic way, such as geometry and spatial relations. In contrast to point-based model matching methods, a spatial ontology is used in which objects are described by how they "look", similar to how a human would describe unknown objects to another person. A fuzzy-based reasoning approach matches perceivable features with a spatial ontology of the objects. The approach is able to deal with sensor noise and occlusions. Another advantage is that no training phase is needed to learn object features. The use case of the proposed method is the detection of soil sample containers in an outdoor environment, which have to be collected by a mobile robot. The approach is verified in real-world experiments.
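A minimal sketch of the general idea, assuming nothing beyond the abstract: symbolic feature descriptions are encoded as fuzzy membership functions and combined with a fuzzy AND (minimum), so an object matches an ontology entry to the degree its weakest feature does. The feature names, ranges, and membership shapes here are illustrative, not the paper's.

```python
# Sketch: fuzzy matching of perceived object features against a symbolic
# ontology description ("how the object looks"). Membership functions and
# feature names are illustrative, not taken from the paper.
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises over a..b, flat b..c, falls c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical ontology entry for a "soil sample container":
# roughly box-shaped, about 0.3 m wide.
container = {
    "width_m":    lambda x: trapezoid(x, 0.15, 0.25, 0.35, 0.50),
    "height_m":   lambda x: trapezoid(x, 0.10, 0.20, 0.30, 0.45),
    "elongation": lambda x: trapezoid(x, 0.8, 1.0, 1.5, 2.5),
}

def match(perceived, ontology_entry):
    # Conjunctive fuzzy reasoning: the overall match is the weakest feature
    # match, tolerating noisy or partially occluded measurements elsewhere.
    return min(mf(perceived[f]) for f, mf in ontology_entry.items())

perceived = {"width_m": 0.28, "height_m": 0.22, "elongation": 1.2}
print("container match degree:", match(perceived, container))
```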
Abstract:
One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. As design criteria transition from empirical to mechanistic-empirical, soil test methods and equipment that measure properties such as stiffness and modulus, and how they relate to Florida materials, are needed. Requirements for the selected equipment are that it be portable, cost effective, reliable, accurate, and repeatable. A second objective is that the selected equipment measure soil properties without the use of nuclear materials. The current device used to measure soil compaction is the nuclear density gauge (NDG). Equipment evaluated in this research included lightweight deflectometers (LWD) from different manufacturers, a dynamic cone penetrometer (DCP), a GeoGauge, a Clegg impact soil tester (CIST), a Briaud compaction device (BCD), and a seismic pavement analyzer (SPA). Evaluations were conducted over ranges of measured densities and moistures. Testing (Phases I and II) was conducted in a test box and test pits. Phase III testing was conducted on materials found on five construction projects located in the Jacksonville, Florida, area. Phase I analyses determined that the GeoGauge had the lowest overall coefficient of variation (COV). In ascending order of COV were the accelerometer-type LWD, the geophone-type LWD, the DCP, the BCD, and the SPA, which had the highest overall COV. As a result, the BCD and the SPA were excluded from Phase II testing. In Phase II, measurements obtained from the selected equipment were compared to the modulus values obtained by the static plate load test (PLT), the resilient modulus (MR) from laboratory testing, and the NDG measurements. To minimize soil and moisture content variability, a single-spot testing sequence was developed. At each location, test results obtained from the portable equipment under evaluation were compared to values from adjacent NDG, PLT, and laboratory MR measurements. Correlations were developed through statistical analysis. Target values were developed for various soils for verification on similar soils that were field tested in Phase III. The single-spot testing sequence was also employed in Phase III, field testing performed on A-3 and A-2-4 embankments, limerock-stabilized subgrade, limerock base, and graded aggregate base found on Florida Department of Transportation construction projects. The Phase II and Phase III results provided potential trend information for future research, specifically data collection for in-depth statistical analysis of correlations with the laboratory MR for specific soil types under specific moisture conditions. With the collection of enough data, stronger relationships could be expected between measurements from the portable equipment and the MR values. Based on the statistical analyses and the experience gained from extensive use of the equipment, the combination of the DCP and the LWD was selected for in-place soil testing for compaction control acceptance. Test methods and developmental specifications were written for the DCP and the LWD. The developmental specifications include target values for the compaction control of embankment, subgrade, and base materials.
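For reference, the Phase I ranking statistic, the coefficient of variation, is simply the standard deviation divided by the mean. The sketch below computes it for two hypothetical sets of repeat readings; the numbers are invented for illustration.

```python
# Sketch: the Phase I ranking statistic, coefficient of variation
# (COV = standard deviation / mean), on hypothetical repeat modulus
# readings from two devices at the same spot (values are illustrative).
import statistics

def cov(readings):
    return statistics.stdev(readings) / statistics.mean(readings)

geogauge_mpa = [92.1, 93.4, 91.8, 92.9, 93.0]    # low scatter -> low COV
spa_mpa      = [88.0, 104.5, 95.2, 79.9, 101.3]  # high scatter -> high COV

print(f"GeoGauge COV: {cov(geogauge_mpa):.3%}")
print(f"SPA COV:      {cov(spa_mpa):.3%}")
```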
Abstract:
2,4,6-Trinitrotoluene (TNT) is one of the most commonly used nitroaromatic explosives in landmines and in the military and mining industries. This article demonstrates rapid and selective identification of TNT by surface-enhanced Raman spectroscopy (SERS) using 6-aminohexanethiol (AHT) as a new recognition molecule. First, Meisenheimer complex formation between AHT and TNT is confirmed by the development of a pink colour and the appearance of a new band around 500 nm in the UV-visible spectrum. A solution Raman spectroscopy study also supported AHT:TNT complex formation by demonstrating changes in the vibrational stretching of the AHT molecule between 2800-3000 cm−1. For SERS analysis, a self-assembled monolayer (SAM) of AHT is formed over the gold nanostructure (AuNS) SERS substrate in order to selectively capture TNT onto the surface. Electrochemical desorption and X-ray photoelectron studies are performed on the AHT SAM-modified surface to examine the presence of free amine groups with an appropriate orientation for complex formation. Further, an AHT and butanethiol (BT) mixed-monolayer system is explored to improve the AHT:TNT complex formation efficiency. Using a 9:1 AHT:BT mixed monolayer, a very low limit of detection (LOD) of 100 fM TNT was realized. The new method delivers high selectivity towards TNT over 2,4-DNT and picric acid. Finally, real sample analysis is demonstrated by the extraction and SERS detection of 302 pM of TNT from spiked soil samples.
Abstract:
Erythropoietin (EPO), a glycoprotein hormone of ∼34 kDa, is an important hematopoietic growth factor; produced mainly in the kidney, it controls the number of red blood cells circulating in the bloodstream. Sensitive and rapid recombinant human EPO (rHuEPO) detection tools that improve on the current laborious EPO detection techniques are in high demand in both the clinical and sports sectors. A sensitive aptamer-functionalized biosensor (aptasensor) has been developed by controlled growth of gold nanostructures (AuNS) over a gold substrate (pAu/AuNS). The aptasensor selectively binds to rHuEPO and was therefore used to extract and detect the drug from horse plasma by surface-enhanced Raman spectroscopy (SERS). Due to the nanogap separation between the nanostructures and the high population and distribution of hot spots on the pAu/AuNS substrate surface, strong signal enhancement was achieved. By using a wide-area illumination (WAI) setting for the Raman detection, a low RSD of 4.92% over 150 SERS measurements was achieved. The high reproducibility of the new biosensor addresses the serious problem of SERS signal inconsistency that hampers the use of the technique in the field. The WAI setting is compatible with handheld Raman devices. Therefore, the new aptasensor can be used for the selective extraction of rHuEPO from biological fluids, with subsequent screening by a handheld Raman spectrometer for SERS-based in-field protein detection.
Abstract:
Objective This paper presents an automatic active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, (1) the contribution of active learning in reducing the annotation effort and (2) the robustness of an incremental active learning framework across different selection criteria and datasets are determined. Materials and methods The comparative performance of an active learning framework and a fully supervised approach were investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional Random Fields were used as the supervised method, with least confidence and information density as the two selection criteria for the active learning framework. The effect of incremental learning vs. standard learning on the robustness of the models within the active learning framework with different selection criteria was also investigated. Two clinical datasets were used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab. Results The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57%, and 46% of the total number of sequences, tokens, and concepts, respectively. Compared to the random sampling baseline, the saving is at least doubled. Discussion Incremental active learning guarantees robustness across all selection criteria and datasets. The reduction of annotation effort is always above the random sampling and longest sequence baselines. Conclusion Incremental active learning is a promising approach for building effective and robust medical concept extraction models while significantly reducing the burden of manual annotation.
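A minimal sketch of the least-confidence criterion in a pool-based active learning loop. It is simplified to independent samples and a logistic regression model rather than the paper's CRF sequence labeller, and the data and annotation step are simulated; the selection logic is the part being illustrated.

```python
# Sketch: pool-based active learning with the least-confidence criterion.
# At each round, the sample whose most likely label has the lowest
# predicted probability is "annotated" and added to the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 20))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # synthetic labels

# Balanced seed set, then 20 query rounds.
labeled = [int(i) for i in np.concatenate([np.where(y_pool == 0)[0][:5],
                                           np.where(y_pool == 1)[0][:5]])]
unlabeled = [i for i in range(1000) if i not in set(labeled)]

clf = LogisticRegression()
for _ in range(20):
    clf.fit(X_pool[labeled], y_pool[labeled])
    confidence = clf.predict_proba(X_pool[unlabeled]).max(axis=1)
    query = unlabeled[int(np.argmin(confidence))]   # least-confident sample
    labeled.append(query)                           # simulate annotating it
    unlabeled.remove(query)

print("pool accuracy after 20 queries:", clf.score(X_pool, y_pool))
```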
Abstract:
This paper presents a new active learning query strategy for information extraction, called Domain Knowledge Informativeness (DKI). Active learning is often used to reduce the amount of annotation effort required to obtain training data for machine learning algorithms. A key component of an active learning approach is the query strategy, which is used to iteratively select samples for annotation. Knowledge resources have been used in information extraction as a means to derive additional features for sample representation. DKI is, however, the first query strategy that exploits such resources to inform sample selection. To evaluate the merits of DKI, in particular the reduction in annotation effort that the new query strategy achieves, we conduct a comprehensive empirical comparison of active learning query strategies for information extraction within the clinical domain. The clinical domain was chosen for this work because of the availability of extensive structured knowledge resources, which have often been exploited for feature generation. In addition, the clinical domain offers a compelling use case for active learning because of the high costs and hurdles associated with obtaining annotations in this domain. Our experimental findings demonstrate that (1) amongst existing query strategies, those based on the classification model's confidence are a better choice for clinical data, as they perform equally well with a much lighter computational load, and (2) significant reductions in annotation effort are achievable by exploiting knowledge resources within active learning query strategies, with up to 14% fewer tokens and concepts to annotate manually than with state-of-the-art query strategies.
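DKI itself is not specified in this abstract, but the information-density strategy that such work is typically compared against has a well-known form (Settles and Craven, 2008): a base informativeness score weighted by a sample's average similarity to the rest of the pool, so queries favour uncertain samples from dense regions. A sketch with placeholder data and model probabilities:

```python
# Sketch: the information-density query strategy. The base informativeness
# here is least confidence; the density weight is the mean cosine
# similarity of a sample to the whole unlabeled pool.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def information_density(X_unlabeled, proba, beta=1.0):
    uncertainty = 1.0 - proba.max(axis=1)                  # least confidence
    density = cosine_similarity(X_unlabeled).mean(axis=1)  # mean pool similarity
    return uncertainty * density**beta                     # query the argmax

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # placeholder sample features
proba = rng.dirichlet(np.ones(3), size=200)  # placeholder model probabilities
print("next query index:", int(np.argmax(information_density(X, proba))))
```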
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density-, T1-, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
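A crude stand-in for the four-level structure described above, with the single user-provided reference point seeding the brain component. The stage implementations (thresholding, connected components, hole filling) are simple placeholders, since the abstract does not detail the actual algorithms, and a single volume stands in for the three MR contrasts.

```python
# Sketch: four-level brain-extraction pipeline (preprocess, segment,
# remove scalp, postprocess), with placeholder stage implementations.
import numpy as np
from scipy.ndimage import label, binary_fill_holes

def preprocess(vol):
    vol = vol.astype(float)                        # intensity normalisation
    return (vol - vol.min()) / (np.ptp(vol) + 1e-9)

def segment(vol, threshold=0.5):
    return vol > threshold                         # crude brain/non-brain split

def remove_scalp(mask, ref_point):
    labels, _ = label(mask)                        # connected components
    return labels == labels[ref_point]             # keep the component seeded
                                                   # at the user's point
def postprocess(mask):
    return binary_fill_holes(mask)                 # clean up the brain mask

def total_brain_volume_ml(vol, ref_point, voxel_ml):
    mask = postprocess(remove_scalp(segment(preprocess(vol)), ref_point))
    return float(mask.sum()) * voxel_ml            # TBV = voxel count x size

# Toy demo: a small spherical "brain" inside a 20^3 volume.
vol = np.zeros((20, 20, 20))
zz, yy, xx = np.ogrid[:20, :20, :20]
vol[(zz - 10)**2 + (yy - 10)**2 + (xx - 10)**2 <= 9] = 1.0
print(total_brain_volume_ml(vol, ref_point=(10, 10, 10), voxel_ml=0.001))
```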
Abstract:
We are currently facing overwhelming growth in the number of reliable information sources on the Internet. The quantity of information available to everyone via the Internet is growing dramatically each year [15]. At the same time, the temporal and cognitive resources of human users are not changing, causing a phenomenon of information overload. The World Wide Web is one of the main sources of information for decision makers (reference to my research). However, our studies show that, at least in Poland, decision makers encounter important problems when turning to the Internet as a source of decision information. One of the most commonly raised obstacles is the distribution of relevant information among many sources, and therefore the need to visit different Web sources in order to collect and analyze all the important content. A few research groups have recently turned to the problem of information extraction from the Web [13]. Most effort so far has been directed toward collecting data from dispersed databases accessible via web pages (referred to as data extraction, or information extraction from the Web) and toward understanding natural-language texts by means of fact, entity, and association recognition (referred to as information extraction). Data extraction efforts show some interesting results; however, proper integration of web databases is still beyond us. The information extraction field has recently been very successful in retrieving information from natural-language texts, but it still lacks the ability to understand more complex information requiring the use of common-sense knowledge, discourse analysis, and disambiguation techniques.
Abstract:
We present an empirical evaluation and comparison of two content extraction methods for HTML: absolute XPath expressions and relative XPath expressions. We argue that relative XPath expressions, although not widely used, should be preferred over absolute XPath expressions when extracting content from human-created Web documents. Our evaluation of robustness covers four thousand queries executed on several hundred web pages. We show that, in referencing parts of real-world dynamic HTML documents, relative XPath expressions are on average significantly more robust than absolute ones.
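The contrast the abstract evaluates is easy to see in code. In the sketch below (using lxml; the sample HTML and both expressions are invented for illustration), the absolute expression hard-codes the full path from the document root, so any structural edit above the target node breaks it, while the relative expression anchors on a stable attribute.

```python
# Sketch: absolute vs. relative XPath on the same document. The absolute
# expression encodes the full root-to-node path, so an inserted wrapper
# element silently breaks it; the relative expression anchors on an
# attribute that tends to survive layout changes.
from lxml import html

page = html.fromstring("""
<html><body>
  <div><h1>News</h1></div>
  <div id="content"><p class="price">42.00</p></div>
</body></html>""")

absolute = page.xpath("/html/body/div[2]/p/text()")
relative = page.xpath("//div[@id='content']/p[@class='price']/text()")
print(absolute, relative)  # both return ['42.00'] -- until the layout shifts
```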
Abstract:
Management of sodic soils under irrigation often requires the application of chemical ameliorants to improve permeability, combined with leaching of excess salts. Modeling irrigation, soil treatments, and leaching in these sodic soils requires a model that can adequately represent the physical and chemical changes in the soil associated with the amelioration process. While there are a number of models that simulate reactive solute transport, UNSATCHEM and HYDRUS-1D are currently the only models that also include the ability to simulate the impacts of soil chemistry on hydraulic conductivity. Previous researchers have successfully applied these models to simulate amelioration experiments on a sodic loam soil. To further gauge their applicability, we extended the previous work by comparing HYDRUS simulations of sodic soil amelioration with the results of recently published laboratory experiments on a more reactive, repacked sodic clay soil. The general trends observed in the laboratory experiments could be simulated using HYDRUS. Differences between measured and simulated results were attributed to the limited flexibility of the function that represents chemistry-dependent hydraulic conductivity in HYDRUS. While improvements to the function could be made, the present work indicates that HYDRUS-UNSATCHEM captures the key changes in soil hydraulic properties that occur during sodic clay soil amelioration, and thus extends the findings of previous researchers studying sodic loams.
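For orientation, UNSATCHEM-type models scale hydraulic conductivity by a chemistry-dependent reduction factor, and a McNeal-style form is commonly cited for this. The sketch below uses that general form with illustrative parameters and an invented sodicity proxy; it is not the exact HYDRUS/UNSATCHEM implementation, only a sketch of the kind of function the abstract discusses.

```python
# Sketch: a McNeal-type reduction factor for chemistry-dependent hydraulic
# conductivity, r = 1 - c*x**n / (1 + c*x**n), where x grows with sodicity
# (ESP) and falls with solution concentration. The parameters c and n and
# the definition of x are illustrative placeholders.
def reduction_factor(esp, conc_meq_l, c=100.0, n=3.0):
    x = max(esp, 0.0) / max(conc_meq_l, 1e-6)   # illustrative swelling proxy
    return 1.0 - (c * x**n) / (1.0 + c * x**n)

# Hydraulic conductivity falls as sodicity rises at fixed salinity:
for esp in (2, 10, 25):
    print(f"ESP {esp:2d}: K multiplier = {reduction_factor(esp, 20.0):.2f}")
```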
Abstract:
Amelioration of sodic soils is commonly achieved by applying gypsum, which increases soil hydraulic conductivity by altering soil chemistry. The magnitude of the hydraulic conductivity increase expected in response to gypsum applications depends on soil properties including clay content, clay mineralogy, and bulk density. The soil analyzed in this study was a kaolinite-rich sodic clay soil from an irrigated area of the Lower Burdekin coastal floodplain in tropical North Queensland, Australia. The impact of gypsum amelioration was investigated by continuously leaching soil columns with a saturated gypsum solution until the hydraulic conductivity and leachate chemistry stabilized. Extended leaching enabled the full impacts of electrolyte effects and cation exchange to be determined. For the columns packed to 1.4 g/cm3, exchangeable sodium concentrations were reduced from 5.0 ± 0.5 mEq/100 g to 0.41 ± 0.06 mEq/100 g, exchangeable magnesium concentrations were reduced from 13.9 ± 0.3 mEq/100 g to 4.3 ± 2.12 mEq/100 g, and hydraulic conductivity increased to 0.15 ± 0.04 cm/d. For the columns packed to 1.3 g/cm3, exchangeable sodium concentrations were reduced from 5.0 ± 0.5 mEq/100 g to 0.51 ± 0.03 mEq/100 g, exchangeable magnesium concentrations were reduced from 13.9 ± 0.3 mEq/100 g to 0.55 ± 0.36 mEq/100 g, and hydraulic conductivity increased to 0.96 ± 0.53 cm/d. The results of this study highlight that both sodium and magnesium need to be taken into account when determining the suitability of water quality for irrigation of sodic soils, and that soil bulk density plays a major role in controlling the extent of reclamation that can be achieved using gypsum applications.
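The percent reductions implied by the reported means can be computed directly from the figures quoted above; a worked example (uncertainties omitted for brevity):

```python
# Worked example: percent reduction in exchangeable cations after gypsum
# leaching, computed from the means reported in the abstract.
initial = {"Na": 5.0, "Mg": 13.9}        # mEq/100 g, both bulk densities
final = {
    1.4: {"Na": 0.41, "Mg": 4.30},       # columns packed to 1.4 g/cm3
    1.3: {"Na": 0.51, "Mg": 0.55},       # columns packed to 1.3 g/cm3
}

for density, cations in final.items():
    for ion, end in cations.items():
        drop = 100 * (initial[ion] - end) / initial[ion]
        print(f"{density} g/cm3 {ion}: {initial[ion]} -> {end} mEq/100 g "
              f"({drop:.0f}% reduction)")
```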
Abstract:
The use of nitrification inhibitors in combination with ammonium-based fertilisers has been promoted recently as an effective method to reduce nitrous oxide (N2O) emissions from fertilised agricultural fields while increasing yield and nitrogen use efficiency. Vegetable cropping systems are often characterised by high inputs of nitrogen fertiliser, and consequently elevated N2O emissions can be expected. However, to date only limited data are available on the use of nitrification inhibitors in sub-tropical vegetable systems. A field experiment investigated the effect of the nitrification inhibitors DMPP and 3MP+TZ on N2O emissions and yield in a typical vegetable production system in sub-tropical Australia. Soil N2O fluxes were monitored continuously over an entire year with a fully automated system. Measurements were taken from three subplots for each treatment within a randomized complete block design. There was a significant inhibition effect of DMPP and 3MP+TZ on N2O emissions and soil mineral N content directly following the application of the fertiliser over the vegetable cropping phase. However, this mitigation was offset by elevated N2O emissions from the inhibitor treatments over the post-harvest fallow period. Cumulative annual N2O emissions amounted to 1.22 kg-N/ha, 1.16 kg-N/ha, 1.50 kg-N/ha, and 0.86 kg-N/ha in the conventional fertiliser (CONV), DMPP, 3MP+TZ, and zero-fertiliser (0N) treatments, respectively. Corresponding fertiliser-induced emission factors (EFs) were low, with only 0.09-0.20% of the total applied fertiliser N lost as N2O. There was no significant effect of the nitrification inhibitors on yield compared to the CONV treatment for the three vegetable crops (green beans, broccoli, lettuce) grown over the experimental period. This study highlights that N2O emissions from such vegetable cropping systems are primarily controlled by post-harvest emissions following the incorporation of vegetable crop residues into the soil. It also shows that the use of nitrification inhibitors can lead to elevated N2O emissions by storing N in the soil profile that becomes available to soil microbes during the decomposition of the vegetable residues over the post-harvest phase. Hence, the use of nitrification inhibitors in vegetable systems has to be considered carefully, and fertiliser rates need to be adjusted to avoid excess soil nitrogen during the post-harvest phase.
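Fertiliser-induced emission factors are conventionally computed as the cumulative N2O-N from a fertilised treatment minus that of the zero-N control, divided by the N applied. The sketch below uses the cumulative emissions reported above; the applied-N rate is a hypothetical placeholder, since the abstract does not state it.

```python
# Worked example: fertiliser-induced N2O emission factor,
# EF = (cumulative N2O-N of fertilised plot - N2O-N of 0N plot)
#      / N applied * 100.
# Cumulative emissions are from the abstract; the applied-N rate is an
# ASSUMED placeholder value, not reported in the abstract.
annual_n2o = {"CONV": 1.22, "DMPP": 1.16, "3MP+TZ": 1.50, "0N": 0.86}  # kg N/ha
n_applied = 350.0  # kg N/ha/yr -- hypothetical rate for illustration

for treatment in ("CONV", "DMPP", "3MP+TZ"):
    ef = 100 * (annual_n2o[treatment] - annual_n2o["0N"]) / n_applied
    print(f"{treatment}: EF = {ef:.2f}% of applied N lost as N2O")
```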