124 results for approximate membership extraction
Abstract:
This paper establishes sufficient conditions for bounding the error in approximate conditional mean estimates derived from a perturbed model (only the scalar case is shown in this paper, but a similar result is expected to hold for the vector case). The results established here extend recent stability results on approximate information state filter recursions to stability results on the corresponding approximate conditional mean estimates. The presented filter stability results provide bounds for a wide variety of model error situations.
Abstract:
Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working on large process model repositories, where process standardization is a critical task for increasing the consistency and reducing the complexity of the repository.
Abstract:
Coal Seam Gas (CSG) production is achieved by extracting groundwater to depressurize coal seam aquifers in order to promote methane gas desorption from coal micropores. CSG waters are characteristically alkaline, have a near-neutral pH (~7), are of the Na-HCO3-Cl type, and exhibit brackish salinity. In 2004, a CSG exploration company carried out a gas flow test in an exploration well located in Maramarua (Waikato Region, New Zealand). The test yielded 33 water samples exhibiting noteworthy chemical variations induced by pumping. This research identifies the main causes of hydrochemical variations in CSG water, makes recommendations to manage this effect, and discusses potential environmental implications. Hydrochemical variations were studied using Factor Analysis, supported by hydrochemical modelling and a laboratory experiment. This revealed carbon dioxide (CO2) degassing as the principal source of hydrochemical variability (about 33%). Factor Analysis also showed that major ion variations could reflect changes in hydrochemical composition induced by different pumping regimes. Subsequent chloride, calcium, and TDS variations could be a consequence of analytical errors during laboratory determinations. CSG water chemical variations due to degassing during pumping can be minimized with good completion and production techniques; variations due to sample degassing can be controlled by taking precautions during sampling, transit, storage and analysis. In addition, the degassing effect observed in CSG waters can lead to an underestimation of their potential environmental effect. Calcium precipitation due to exposure to normal atmospheric pressure results in a 23% increase in SAR values in Maramarua CSG water samples.
Abstract:
A building information model (BIM) provides a rich representation of a building's design. However, there are many challenges in getting construction-specific information from a BIM, limiting the usability of BIM for construction and other downstream processes. This paper describes a novel approach that utilizes ontology-based feature modeling, automatic feature extraction based on ifcXML, and query processing to extract information relevant to construction practitioners from a given BIM. The feature ontology generically represents construction-specific information that is useful for a broad range of construction management functions. The software prototype uses the ontology to transform the designer-focused BIM into a construction-specific feature-based model (FBM). The formal query methods operate on the FBM to further help construction users quickly extract the necessary information from a BIM. Our tests demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
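As a rough illustration of what ifcXML-driven feature extraction can look like, the following minimal Python sketch walks an ifcXML export with the standard ElementTree parser and collects elements whose IFC entity names appear in a small, illustrative feature ontology. The entity list, feature classes and file name here are assumptions for illustration only, not the paper's actual ontology or prototype.

import xml.etree.ElementTree as ET

# Illustrative mapping from IFC entity names to construction-oriented
# feature classes; the real ontology described above is far richer.
FEATURE_ONTOLOGY = {
    "IfcWall": "vertical_element",
    "IfcSlab": "horizontal_element",
    "IfcColumn": "vertical_element",
    "IfcOpeningElement": "penetration",
}

def extract_features(ifcxml_path):
    """Return (feature_class, element_id, entity_name) tuples for every
    ifcXML element whose entity name appears in the ontology."""
    features = []
    for _, elem in ET.iterparse(ifcxml_path):
        name = elem.tag.split("}")[-1]  # strip any XML namespace prefix
        if name in FEATURE_ONTOLOGY:
            features.append((FEATURE_ONTOLOGY[name], elem.get("id"), name))
    return features

# Example query over the extracted feature-based model:
# [f for f in extract_features("model.ifcxml") if f[0] == "penetration"]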
Abstract:
The Beauty Leaf tree (Calophyllum inophyllum) is a potential source of non-edible vegetable oil for producing future generation biodiesel because of its ability to grow in a wide range of climate conditions, easy cultivation, high fruit production rate, and the high oil content in the seed. This plant naturally occurs in the coastal areas of Queensland and the Northern Territory in Australia, and is also widespread in south-east Asia, India and Sri Lanka. Although Beauty Leaf is traditionally used as a source of timber and as an ornamental plant, its potential as a source of second generation biodiesel is yet to be exploited. In this study, the extraction process from the Beauty Leaf oil seed has been optimised in terms of seed preparation, moisture content and oil extraction methods. The two methods considered to extract oil from the seed kernel are mechanical oil extraction using an electric powered screw press, and chemical oil extraction using n-hexane as an oil solvent. The study found that seed preparation has a significant impact on oil yields, especially in the screw press extraction method. Kernels prepared to 15% moisture content provided the highest oil yields for both extraction methods. Mechanical extraction using the screw press can produce oil from correctly prepared product at a low cost; however, overall this method is inefficient, with relatively low oil yields. Chemical extraction was found to be a very effective method for oil extraction because of its consistent performance and high oil yield, but the cost of production was relatively high due to the cost of the solvent. However, a solvent recycling system can be implemented to reduce the production cost of Beauty Leaf biodiesel. The findings of this study are expected to serve as the basis from which industrial scale biodiesel production from Beauty Leaf can be developed.
Abstract:
In this paper we present a new simulation methodology in order to obtain exact or approximate Bayesian inference for models for low-valued count time series data that have computationally demanding likelihood functions. The algorithm fits within the framework of particle Markov chain Monte Carlo (PMCMC) methods. The particle filter requires only model simulations and, in this regard, our approach has connections with approximate Bayesian computation (ABC). However, an advantage of using the PMCMC approach in this setting is that simulated data can be matched with data observed one-at-a-time, rather than attempting to match on the full dataset simultaneously or on a low-dimensional non-sufficient summary statistic, which is common practice in ABC. For low-valued count time series data we find that it is often computationally feasible to match simulated data with observed data exactly. Our particle filter maintains $N$ particles by repeating the simulation until $N+1$ exact matches are obtained. Our algorithm creates an unbiased estimate of the likelihood, resulting in exact posterior inferences when included in an MCMC algorithm. In cases where exact matching is computationally prohibitive, a tolerance is introduced as per ABC. A novel aspect of our approach is that we introduce auxiliary variables into our particle filter so that partially observed and/or non-Markovian models can be accommodated. We demonstrate that Bayesian model choice problems can be easily handled in this framework.
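As an illustration of the exact-matching idea described above, the minimal Python sketch below implements a generic "alive"-style particle filter: at each time point it keeps simulating until N+1 simulated counts match the observed count exactly, and accumulates N/(n_sims - 1) as the estimate of that observation's likelihood contribution. The function simulate_step and the initial particle set are hypothetical placeholders for a user-supplied model simulator; this is an assumption-laden sketch, not the authors' implementation.

import numpy as np

def alive_particle_filter(y, simulate_step, init_particles, N=100, rng=None):
    """Unbiased log-likelihood estimate for a low-valued count time series
    via exact matching. `simulate_step(state, rng)` is a hypothetical
    user-supplied function returning (new_state, simulated_count)."""
    rng = np.random.default_rng() if rng is None else rng
    particles = list(init_particles)  # latent states carried forward
    log_lik = 0.0
    for y_t in y:
        matched, n_sims = [], 0
        # keep simulating until N + 1 exact matches with the observed count
        while len(matched) < N + 1:
            parent = particles[rng.integers(len(particles))]
            new_state, sim_count = simulate_step(parent, rng)
            n_sims += 1
            if sim_count == y_t:
                matched.append(new_state)
        # discard the (N+1)-th match; N / (n_sims - 1) is the unbiased
        # estimate of p(y_t | y_{1:t-1}) used by this style of filter
        particles = matched[:N]
        log_lik += np.log(N) - np.log(n_sims - 1)
    return log_lik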
Abstract:
A high-performance liquid chromatography method coupled with solid phase extraction was developed for the determination of isofraxidin in rat plasma after oral administration of Acanthopanax senticosus extract (ASE), and the pharmacokinetic parameters of isofraxidin, either in ASE or as the pure compound, were measured. The HPLC analysis was performed on a Dikma Diamonsil RP(18) column (4.6 mm x 150 mm, 5 microm) with isocratic elution of solvent A (acetonitrile) and solvent B (0.1% aqueous phosphoric acid, v/v) (A : B = 22 : 78), and the detection wavelength was set at 343 nm. The calibration curve was linear over the range of 0.156-15.625 microg/ml. The limit of detection was 60 ng/ml. The intra-day precision was 5.8%, and the inter-day precision was 6.0%. The recovery was 87.30+/-1.73%. When the dosage of ASE was equal to that of the pure compound, calculated by the amount of isofraxidin, two maximum concentrations were found in plasma, whereas the pure compound showed only one peak in the plasma concentration-time curve. The isofraxidin content determined in plasma after oral administration of ASE is the total content of free isofraxidin and its precursors present in ASE in vitro. The pharmacokinetic characteristics of ASE showed the superiority of the extract and reflect the properties of traditional Chinese medicine.
Abstract:
High performance liquid chromatography (HPLC) coupled with a solid phase extraction method was developed for determining cimifugin (a coumarin derivative and one of the constituents of Saposhnikovia divaricata) in rat plasma after oral administration of Saposhnikovia divaricata extract (SDE), and the pharmacokinetics of cimifugin either in SDE or as a single compound was investigated. The HPLC analysis was performed on a commercially available column (4.6 mm x 200 mm, 5 microm) with isocratic elution of solvent A (methanol) and solvent B (water) (A:B = 60:40), and the detection wavelength was set at 250 nm. The calibration curve was linear over the range of 0.100-10.040 microg/mL. The limit of detection was 30 ng/mL. At rat plasma concentrations of 0.402, 4.016, and 10.040 microg/mL, the intra-day precision was 6.21%, 3.98%, and 2.23%, and the inter-day precision was 7.59%, 4.26%, and 2.09%, respectively. The absolute recovery was 76.58%, 76.61%, and 77.67%, respectively. When the dosage of SDE was equal to that of the pure compound, calculated by the amount of cimifugin, two maximum concentrations were found in plasma, whereas the pure compound showed only one peak in the plasma concentration-time curve. The pharmacokinetic characteristics of SDE showed the superiority of the extract and reflect the properties of traditional Chinese medicine.
Abstract:
Genomic DNA obtained from patient whole blood samples is a key element for genomic research. The advantages and disadvantages, in terms of time-efficiency, cost-effectiveness and laboratory requirements, of the procedures available to isolate nucleic acids need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three different protocols (a traditional salting out method, a modified salting out method and a commercially available kit method) to determine the most cost-effective and time-efficient method to extract DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the products obtained in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification) of the yield. On average, the three methods showed no statistically significant differences in the final result, but when we accounted for the time and cost of each method, the differences were very significant. The modified salting out method resulted in a seven- and twofold reduction in cost compared to the commercial kit and the traditional salting out method, respectively, and reduced the time required from 3 days to 1 hour compared to the traditional salting out method. This highlights the modified salting out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.
Abstract:
As of today, opinion mining has been widely used to identify the strengths and weaknesses of products (e.g., cameras) or services (e.g., services in medical clinics or hospitals) based upon people's feedback such as user reviews. Feature extraction is a crucial step for opinion mining and has been used to collect useful information from user reviews. Most existing approaches only find individual features of a product without the structural relationships between the features, which usually exist. In this paper, we propose an approach to extract features and feature relationships, represented as a tree structure called a feature hierarchy, based on frequent patterns and associations between patterns derived from user reviews. The generated feature hierarchy profiles the product at multiple levels and provides more detailed information about the product. Our experimental results based on some popularly used review datasets show that the proposed feature extraction approach can identify more correct features than the baseline model. Even though the datasets used in the experiment are about cameras, our work can be applied to generate features about a service, such as the services in hospitals or clinics.
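The abstract does not spell out how the feature hierarchy is built, but the general idea of nesting frequent patterns mined from reviews can be sketched as follows in Python. The two-level structure, the min_support threshold and the "attach a frequent pair under its more frequent member term" rule are illustrative assumptions, not the authors' algorithm.

from collections import Counter
from itertools import combinations

def feature_hierarchy(reviews, min_support=3):
    """Build a simple two-level feature hierarchy: frequent terms as parent
    features, frequent term pairs as their children (a simplified stand-in
    for the frequent-pattern mining described above)."""
    term_counts, pair_counts = Counter(), Counter()
    for review in reviews:
        tokens = set(review.lower().split())
        term_counts.update(tokens)
        pair_counts.update(frozenset(p) for p in combinations(sorted(tokens), 2))
    frequent_terms = {t for t, c in term_counts.items() if c >= min_support}
    hierarchy = {t: [] for t in frequent_terms}
    for pair, count in pair_counts.items():
        if count >= min_support and pair <= frequent_terms:
            # attach the pair under its more frequent member term
            parent = max(pair, key=term_counts.get)
            hierarchy[parent].append(tuple(sorted(pair)))
    return hierarchy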
Abstract:
Guaranteeing the quality of extracted features that describe relevant knowledge to users or topics is a challenge because of the large number of extracted features. Most popular existing term-based feature selection methods suffer from extracting noisy features that are irrelevant to the user's needs. One popular method is to extract phrases or n-grams to describe the relevant knowledge. However, extracted n-grams and phrases usually contain a lot of noise. This paper proposes a method for reducing the noise in n-grams. The method first extracts more specific features (terms) to remove noisy features. The method then uses an extended random set to accurately weight n-grams based on their distribution in the documents and the distribution of their terms within the n-grams. The proposed approach not only reduces the number of extracted n-grams but also improves performance. Experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms state-of-the-art methods underpinned by Okapi BM25, tf*idf and Rocchio.
Abstract:
Changes in the molecular structure of polymer antioxidants such as hindered amine light stabilisers (HALS) are central to their efficacy in retarding polymer degradation and therefore require careful monitoring during their in-service lifetime. The HALS bis-(1-octyloxy-2,2,6,6-tetramethyl-4-piperidinyl) sebacate (TIN123) and bis-(1,2,2,6,6-pentamethyl-4-piperidinyl) sebacate (TIN292) were formulated in different polymer systems and then exposed to various curing and ageing treatments to simulate in-service use. Samples of these coatings were then analysed directly using liquid extraction surface analysis (LESA) coupled with a triple quadrupole mass spectrometer. Analysis of TIN123 formulated in a cross-linked polyester revealed that the polymer matrix protected TIN123 from undergoing the extensive thermal degradation that would normally occur at 292 degrees C, specifically changes at the 1- and 4-positions of the piperidine groups. The effect of thermal versus photo-oxidative degradation was also compared for TIN292 formulated in polyacrylate films by monitoring the in situ conversion of N-CH3 substituted piperidines to N-H. The analysis confirmed that UV light was required for the conversion of N-CH3 moieties to N-H, a major pathway in the antioxidant protection of polymers, whereas this conversion was not observed with thermal degradation. The use of tandem mass spectrometric techniques, including precursor-ion scanning, is shown to be highly sensitive and specific for detecting molecular-level changes in HALS compounds and, when coupled with LESA, able to monitor these changes in situ with speed and reproducibility.
Abstract:
This is a discussion of the journal article "Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation". The article and discussion have appeared in the Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Abstract:
We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms using indirect inference. We embed this approach within a sequential Monte Carlo algorithm that is completely adaptive. This methodological development was motivated by an application involving data on macroparasite population evolution modelled with a trivariate Markov process. The main objective of the analysis is to compare inferences on the Markov process when considering two different indirect models. The two indirect models are based on a Beta-Binomial model and a three component mixture of Binomials, with the former providing a better fit to the observed data.
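To make the indirect-inference idea concrete, here is a small Python sketch in which the summary statistic handed to ABC is the maximum-likelihood fit of a Beta-Binomial auxiliary model, one of the two indirect models mentioned above. The function names, parameterisation, trial count and Euclidean distance are illustrative assumptions rather than the authors' code or the macroparasite application itself.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

def indirect_summary(counts, n_trials):
    """Summary statistic for ABC via indirect inference: the MLE of a
    Beta-Binomial auxiliary model fitted to the count data."""
    def neg_loglik(theta):
        a, b = np.exp(theta)  # enforce positivity via a log-parameterisation
        return -betabinom.logpmf(counts, n_trials, a, b).sum()
    fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
    return np.exp(fit.x)  # (alpha_hat, beta_hat) used as the summaries

def abc_distance(observed_summary, simulated_counts, n_trials):
    # ABC compares the auxiliary-model fit of simulated data to that of the
    # observed data; smaller distances lead to acceptance of the proposal
    return np.linalg.norm(observed_summary - indirect_summary(simulated_counts, n_trials))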
Abstract:
Two sources of uncertainty in the X-ray computed tomography imaging of polymer gel dosimeters are investigated in this paper. The first is a change in post-irradiation density, which is proportional to the computed tomography signal and is associated with a volume change. The second source of uncertainty is reconstruction noise. A simple technique that increases the residual signal-to-noise ratio by almost two orders of magnitude is examined.