996 results for approximate membership extraction
Abstract:
The seed oil from Nitraria tangutorum samples was obtained by supercritical carbon dioxide extraction. The extraction parameters for this methodology, including pressure, temperature, particle size and extraction time, were optimized. The free fatty acids in the seed oil were separated with a pre-column derivatization method using 1,2-benzo-3,4-dihydrocarbazole-9-ethyl-p-toluenesulfonate (BDETS) as the labeling reagent, followed by high-performance liquid chromatography (HPLC) with fluorescence detection. The target compounds were identified by mass spectrometry with atmospheric pressure chemical ionization (APCI, positive-ion mode). HPLC analysis showed that the main components of the seed oil samples were free fatty acids (FFAs), in order of decreasing concentration: linoleic acid, oleic acid, hexadecanoic acid and octadecanoic acid. The detection limits (at a signal-to-noise ratio of 3:1) were 3.378-6.572 nmol/L. Excellent linear responses were observed, with correlation coefficients greater than 0.999. The facile BDETS derivatization coupled with mass spectrometric detection allowed the development of a highly sensitive method for analyzing free fatty acids in seed oil obtained by supercritical CO2 extraction. The established method is highly efficient for seed oil extraction and extremely sensitive for fatty acid profile determination. (C) 2007 Elsevier B.V. All rights reserved.
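As context for the figures of merit quoted above, the following is a minimal sketch of how a detection limit at S/N = 3:1 and a calibration correlation coefficient are commonly derived from a linear fit of peak area against standard concentration. All numbers are illustrative placeholders, not data from the paper.

```python
# Illustrative calibration and S/N = 3:1 detection-limit calculation.
# Standard concentrations, peak areas and noise level are placeholders.
import numpy as np

conc = np.array([10, 50, 100, 500, 1000.0])        # standard concentrations, nmol/L (illustrative)
area = np.array([2.1, 10.4, 20.9, 104.8, 209.3])   # fluorescence peak areas (illustrative)

slope, intercept = np.polyfit(conc, area, 1)       # linear calibration: area = slope*conc + intercept
r = np.corrcoef(conc, area)[0, 1]                  # correlation coefficient of the calibration

noise_sd = 0.35                                    # baseline-noise standard deviation (illustrative)
lod = 3 * noise_sd / slope                         # detection limit at S/N = 3:1, in nmol/L
print(f"r = {r:.4f}, LOD = {lod:.2f} nmol/L")
```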
Abstract:
A modeling formula based on the seismic wavelet can accurately simulate zero-phase and mixed-phase wavelets, and can approximate maximum-phase and minimum-phase wavelets in a certain sense. The modeled wavelet can be used as a wavelet function once a suitable modification term is added so that the required conditions are met. On the basis of the modified Morlet wavelet, a derivative wavelet function has been derived. As a basic wavelet, it can be used for high-resolution frequency-division processing and instantaneous feature extraction, in accordance with the way each constructed wavelet expands the signal in the time and scale domains. An application example demonstrates the effectiveness and reasonableness of the method. Based on an analysis of the SVD (Singular Value Decomposition) filter, and by combining the SVD filter with the wavelet transform using this wavelet as the basic wavelet, a new de-noising method is proposed, namely a multi-dimension, multi-space de-noising method. The implementation of this method is discussed in detail. Theoretical analysis and modeling show that the method has a strong de-noising capacity while preserving the attributes of the effective wave; it is a good tool for de-noising when the S/N ratio is poor. When processing seismic data, it is desirable to emphasize the high-frequency information of reflection events from important layers while still taking account of the other frequency components; this goal is difficult to achieve with a deconvolution filter, and a Fourier-transform filter also has problems in realizing it. In this paper a new method is put forward: frequency-division processing of seismic data by wavelet transform and reconstruction. In ordinary seismic processing methods for resolution improvement, the deconvolution operator has poor local characteristics, which degrades the operator's frequency response, whereas the wavelet function used in the wavelet transform has very good local characteristics. Frequency-division processing in the wavelet domain also yields good high-resolution data, but it takes more time than deconvolution. On the basis of the frequency-division processing method in the wavelet domain, a new technique is put forward that involves 1) designing filter operators in the wavelet domain that are equivalent, in the time and frequency domains, to the deconvolution operator, 2) obtaining a derivative wavelet function suitable for high-resolution seismic data processing, and 3) processing high-resolution seismic data by the deconvolution method in the time domain. In the standard approach of producing instantaneous characteristic signals with the Hilbert transform, the Hilbert transform is very sensitive to high-frequency random noise; as a result, even weak high-frequency noise in the seismic signals can leave the derived instantaneous characteristics submerged by the noise. A method for obtaining the instantaneous characteristics of seismic signals in the wavelet domain is put forward, which derives them directly from both the real part (the seismic signal itself) and the imaginary part (the Hilbert transform of the real signal) of the wavelet transform. The method performs both frequency division and noise removal.

Moreover, weak events whose frequency is lower than that of the high-frequency random noise are retained in the resulting instantaneous characteristics, and they can be seen in the instantaneous characteristic sections (such as instantaneous frequency, instantaneous phase and instantaneous amplitude). Impedance inversion is one of the tools of reservoir description, and Generalized Linear Inversion is one of its methods. It has high inversion precision, but it is sensitive to noise in the seismic data, so erroneous results can be obtained. When describing a reservoir within an important geological layer, emphasizing the geological characteristics of that layer requires not only high-frequency impedance for studying thin sand layers but also the impedance of other frequency bands, a goal that is difficult for some impedance inversion methods to achieve. The wavelet transform is well suited to de-noising and frequency-division processing. Therefore, a wavelet-transform-based impedance inversion method is put forward in this paper: impedance inversion in frequency divisions obtained by wavelet transform and reconstruction. Methods of time-frequency analysis based on the wavelet transform are also given. Finally, the methods above are applied to a real oil field, the Sansan oil field.
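To make the instantaneous-attribute computation discussed above concrete, the following is a minimal sketch of instantaneous amplitude, phase and frequency obtained from the analytic signal, with a simple band-pass step standing in for the frequency-division (wavelet-domain) filtering that suppresses the noise sensitivity of the raw Hilbert transform. The Butterworth band and its corner frequencies are illustrative choices, not the paper's operators.

```python
# Illustrative instantaneous attributes from the analytic signal.
# The band-pass filter is a stand-in for wavelet-domain frequency division.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def instantaneous_attributes(trace, fs, band=(10.0, 60.0)):
    # Band-limit the trace first (frequency-division / de-noising step).
    b, a = butter(4, [f / (fs / 2.0) for f in band], btype="band")
    smooth = filtfilt(b, a, trace)
    # Analytic signal: real part is the trace, imaginary part its Hilbert transform.
    analytic = hilbert(smooth)
    amp = np.abs(analytic)                        # instantaneous amplitude (envelope)
    phase = np.unwrap(np.angle(analytic))         # instantaneous phase
    freq = np.diff(phase) / (2 * np.pi) * fs      # instantaneous frequency, Hz
    return amp, phase, freq

# Example usage: amp, phase, freq = instantaneous_attributes(trace, fs=500.0)
```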
Abstract:
Urinary 8-hydroxydeoxyguanosine (8-OHdG) has been considered an excellent marker for identifying individuals at high risk of developing cancer. Until now, urinary 8-OHdG has largely been measured by high-performance liquid chromatography with electrochemical detection. A new method for the analysis of urinary 8-OHdG by high-performance capillary electrophoresis has been developed and optimized in our laboratory. A single-step solid-phase extraction procedure was optimized and used for extracting 8-OHdG from human urine. Separations were performed in an uncoated silica capillary (50 cm x 50 µm i.d.) using a P/ACE MDQ system with UV detection. The separation of 8-OHdG from interfering urinary matrix components was optimized with regard to pH, applied voltage, pressure injection time and the concentration of SDS in the running buffer. The detection limit of the method is 0.4 µg/ml, the linear range is 0.8-500 µg/ml, and the correlation coefficient is better than 0.999. The developed method is simple, fast and reproducible; furthermore, it requires very small injection volumes and has a low cost of analysis, making it possible to provide a new noninvasive assay for the indirect measurement of oxidative DNA damage.
Abstract:
Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost of matching features from one shape to the features of the other often reveals how similar the two shapes are. However, due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases. We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the Earth Mover's Distance (EMD) into a normed space. Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search. We demonstrate our shape matching method on databases of 10,000 images of human figures and 60,000 images of handwritten digits.
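For illustration, the following is a minimal sketch (not the authors' code) of the grid-histogram style of low-distortion EMD embedding the abstract refers to: each point set is histogrammed over a hierarchy of grids whose cells halve in size at each level, each level's counts are weighted by its cell side, and the L1 distance between the resulting vectors approximates the EMD, so retrieval reduces to (approximate) nearest-neighbor search in a normed space. The unit-square assumption, number of levels and shared random shift are illustrative choices.

```python
# Illustrative grid-based L1 embedding whose distances approximate EMD
# between equal-size 2-D point sets lying in the unit square.
import numpy as np

def emd_embedding(points, levels=4, extent=1.0, shift=None):
    """Embed an (n, 2) point set in [0, extent)^2 into a single vector."""
    if shift is None:
        shift = np.zeros(2)                 # one shared random shift per database
    pieces = []
    for lvl in range(levels):
        cell = extent / (2 ** lvl)          # grid cell side halves at each level
        side = 2 ** lvl + 1                 # enough cells to cover the shifted points
        idx = np.floor((points + shift) / cell).astype(int)
        hist = np.zeros((side, side))
        for i, j in idx:
            hist[min(i, side - 1), min(j, side - 1)] += 1
        pieces.append(cell * hist.ravel())  # weight counts by cell size
    return np.concatenate(pieces)

# L1 distance between embeddings approximates EMD between the point sets.
a, b = np.random.rand(40, 2), np.random.rand(40, 2)
approx_emd = np.abs(emd_embedding(a) - emd_embedding(b)).sum()
```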
Abstract:
Information representation is a critical issue in machine vision. The representation strategy in the primitive stages of a vision system has enormous implications for the performance in subsequent stages. Existing feature extraction paradigms, like edge detection, provide sparse and unreliable representations of the image information. In this thesis, we propose a novel feature extraction paradigm. The features consist of salient, simple parts of regions bounded by zero-crossings. The features are dense, stable, and robust. The primary advantage of the features is that they have abstract geometric attributes pertaining to their size and shape. To demonstrate the utility of the feature extraction paradigm, we apply it to passive navigation. We argue that the paradigm is applicable to other early vision problems.
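Since the abstract does not specify the operator, the following is a minimal sketch, under the common assumption of Laplacian-of-Gaussian zero-crossings, of how dense regions bounded by zero-crossings and a simple geometric attribute (area) can be extracted; the sigma value is an illustrative choice, not the thesis parameterization.

```python
# Illustrative extraction of regions bounded by zero-crossings of a
# Laplacian-of-Gaussian filtered image, plus a simple size attribute.
import numpy as np
from scipy.ndimage import gaussian_laplace, label

def zero_crossing_regions(image, sigma=2.0):
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    positive = log > 0                 # zero-crossings separate positive and negative sign sets
    regions, n = label(positive)       # connected positive-sign regions bounded by zero-crossings
    sizes = np.bincount(regions.ravel())[1:]   # area of each region (label 0 is the complement)
    return regions, sizes              # the negative-sign set can be labeled the same way
```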
Abstract:
A method in which carbon nanotubes function both as the adsorbent for solid-phase extraction (SPE) and as the matrix for matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) has been developed for analyzing small molecules in solution. In this method, 10 µL of a suspension of carbon nanotubes in 50% (vol/vol) methanol is added to the sample solution so that analytes are extracted onto the surface of the carbon nanotubes owing to their strong hydrophobicity. The carbon nanotubes are deposited onto the bottom of the tube by centrifugation. After removal of the supernatant, the carbon nanotubes are resuspended in a dispersant and pipetted directly onto the MALDI-MS sample target for mass spectrometric analysis. Analysis of a variety of small molecules demonstrated that peak resolution and desorption/ionization efficiency on the carbon nanotubes are better than on activated carbon. It was found that adding glycerol and sucrose to the dispersant greatly increased the signal intensity, signal-to-noise ratio (S/N) and peak resolution for the analytes. Compared with the previously reported method of depositing the sample solution onto a thin layer of carbon nanotubes, the detection limit can be improved by about 10 to 100 times because the analytes are preconcentrated from solution by solid-phase extraction on the carbon nanotubes. An acceptable result was achieved for the simultaneous quantitative analysis of three analytes in solution, and the method was also applied to the determination of drugs spiked into urine. (C) 2004 American Society for Mass Spectrometry.
Abstract:
Organophosphorus pesticides (OPPs) in vegetables were determined by stir bar sorptive extraction (SBSE) and capillary gas chromatography with thermionic specific detection (TSD). Hydroxy-terminated polydimethylsiloxane (PDMS) prepared by a sol-gel method was used as the extraction phase. The effects of extraction temperature, salting out and extraction time on the extraction efficiency were studied. The detection limits of OPPs in water were <= 1.2 ng/l. The method was also applied to the analysis of OPPs in vegetable samples, and the matrix effect was studied. Linear ranges for OPPs in vegetable samples were 0.05-50 ng/g, with detection limits <= 0.15 ng/g, and the repeatability of the method was better than 20% relative standard deviation. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Li, Longzhuang, Liu, Yonghuai, Obregon, A., Weatherston, M. Visual Segmentation-Based Data Record Extraction From Web Documents. Proceedings of IEEE International Conference on Information Reuse and Integration, 2007, pp. 502-507. Sponsorship: IEEE
Abstract:
This paper reviews the fingerprint classification literature, looking at the problem from a double perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. Then, we focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction and learning methods are presented. A critical view of the existing literature has led us to present a discussion of the existing methods and their drawbacks, such as the difficulty of reimplementing them, the lack of details and the major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented.
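As an illustration of one of the feature-extraction steps the survey covers, the following is a minimal sketch of the classical gradient-based, block-wise orientation-map estimator; it is not any specific surveyed method, and the block size is an illustrative choice.

```python
# Illustrative block-wise orientation-map estimation from image gradients.
import numpy as np

def orientation_map(img, block=16):
    gy, gx = np.gradient(img.astype(float))        # gradients along rows (y) and columns (x)
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(theta.shape[0]):
        for j in range(theta.shape[1]):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            # Dominant gradient direction of the block, from its gradient covariance;
            # the ridge orientation is perpendicular to it (add pi/2 if needed).
            theta[i, j] = 0.5 * np.arctan2(2 * gxy[sl].sum(),
                                           gxx[sl].sum() - gyy[sl].sum())
    return theta
```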
Abstract:
A simple procedure is described for the isolation of caffeine from energy drinks by solid-phase extraction on a C18 cartridge. The amount of caffeine is then quantified by LC/MS against a standard curve.
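A minimal sketch of the standard-curve quantitation step, assuming a linear LC/MS response; all concentrations and peak areas below are placeholders for illustration.

```python
# Illustrative quantitation of caffeine against an external standard curve.
import numpy as np

std_conc = np.array([5, 10, 25, 50, 100.0])                  # caffeine standards, µg/mL (illustrative)
std_resp = np.array([1.0e4, 2.1e4, 5.2e4, 1.03e5, 2.05e5])   # LC/MS peak areas (illustrative)

slope, intercept = np.polyfit(std_conc, std_resp, 1)         # response = slope*conc + intercept
sample_resp = 7.8e4                                          # measured sample peak area (illustrative)
caffeine = (sample_resp - intercept) / slope                 # caffeine in the extract, µg/mL
```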
Abstract:
This paper introduces BoostMap, a method that can significantly reduce retrieval time in image and video database systems that employ computationally expensive distance measures, metric or non-metric. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. Embedding construction is formulated as a machine learning task, where AdaBoost is used to combine many simple, 1D embeddings into a multidimensional embedding that preserves a significant amount of the proximity structure in the original space. Performance is evaluated in a hand pose estimation system, and a dynamic gesture recognition system, where the proposed method is used to retrieve approximate nearest neighbors under expensive image and video similarity measures. In both systems, BoostMap significantly increases efficiency, with minimal losses in accuracy. Moreover, the experiments indicate that BoostMap compares favorably with existing embedding methods that have been employed in computer vision and database applications, i.e., FastMap and Bourgain embeddings.
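The following is a minimal sketch, not the authors' implementation, of the kind of building block BoostMap combines: simple 1-D embeddings whose coordinates are expensive distances to reference objects, compared in the embedded space with a weighted Manhattan distance. The AdaBoost-learned combination is replaced here by uniform weights, and the toy "expensive" distance is purely illustrative.

```python
# Illustrative reference-object embedding with weighted Manhattan retrieval.
import numpy as np

def embed(obj, references, expensive_distance):
    """One coordinate per reference object: F_r(x) = D(x, r)."""
    return np.array([expensive_distance(obj, r) for r in references])

def weighted_manhattan(u, v, weights):
    return np.sum(weights * np.abs(u - v))

rng = np.random.default_rng(0)
database = [rng.normal(size=8) for _ in range(200)]
references = database[:10]                        # reference objects chosen for the embedding
D = lambda x, y: float(np.sum((x - y) ** 2))      # stand-in for an expensive distance measure

embedded = np.array([embed(x, references, D) for x in database])
weights = np.ones(len(references))                # AdaBoost would learn these weights

query = rng.normal(size=8)
q = embed(query, references, D)
nearest = int(np.argmin([weighted_manhattan(q, e, weights) for e in embedded]))
```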