457 results for Fingerprint chromatogram
Abstract:
A fully validated method based on HPLC coupled with a photodiode array detector (HPLC-UV) is described for evaluating and controlling the quality of Yin Chen Hao Tang extract (YCHTE). First, an HPLC-UV fingerprint chromatogram of YCHTE was established to give a preliminary picture of the number and chromatographic behavior of the chemical constituents in YCHTE. Second, for the first time, five main bioactive constituents in YCHTE were simultaneously determined on the basis of the fingerprint chromatogram, so that the quality of YCHTE could also be controlled quantitatively. The developed method was applied to 12 batches of YCHTE samples prepared from herbal drugs from different places of production, and showed acceptable linearity, intraday precision (RSD < 5%), interday precision (RSD < 4.80%), and accuracy (RSD < 2.80%). The fingerprint chromatogram yielded 15 representative common fingerprint peaks, and the similarities between fingerprint chromatograms were all better than 0.9996. The contents of the five analytes in the different batches of YCHTE samples showed no significant differences. It is therefore concluded that the developed HPLC-UV method is a fully validated and complete method for evaluating and controlling the quality of YCHTE.
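Similarity values of the kind quoted above are typically computed as the cosine of the angle (congruence coefficient) between the peak-area vector of a sample and that of the reference fingerprint. A minimal sketch of that computation, using invented peak-area vectors for the 15 common peaks:

```python
import numpy as np

def fingerprint_similarity(sample, reference):
    """Cosine similarity (congruence coefficient) between two
    chromatographic fingerprints sampled on a common peak set."""
    s = np.asarray(sample, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r)))

# Hypothetical peak-area vectors for the 15 common fingerprint peaks
batch = np.array([12.1, 8.4, 3.3, 20.5, 5.2, 7.7, 1.9, 4.4, 9.8, 2.2,
                  6.1, 3.0, 11.3, 2.8, 5.5])
ref   = np.array([12.0, 8.5, 3.2, 20.7, 5.1, 7.8, 2.0, 4.3, 9.9, 2.1,
                  6.0, 3.1, 11.2, 2.9, 5.4])
print(f"similarity = {fingerprint_similarity(batch, ref):.4f}")
```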
Abstract:
Purpose: To develop a high-performance liquid chromatography (HPLC) fingerprint method for the quality control and origin discrimination of Gastrodiae rhizoma. Methods: Twelve batches of G. rhizoma collected from Sichuan, Guizhou and Shanxi provinces in China were used to establish the fingerprint. The gastrodin peak was taken as the reference peak, and all separations were performed on an Agilent C18 column (250 mm × 4.6 mm, 5 μm) at a column temperature of 25 °C. The mobile phase was acetonitrile/0.8% phosphoric acid aqueous solution in gradient elution mode at a flow rate of 1 mL/min. The detection wavelength was 270 nm. The method was validated according to the guidelines of the Chinese Pharmacopoeia. Results: The chromatograms of the samples showed 11 common peaks, of which peak no. 4 was identified as gastrodin. The data were analyzed statistically using similarity analysis and hierarchical cluster analysis (HCA). The similarity indices between the reference chromatogram and the sample chromatograms were all > 0.80; for G. rhizoma from Guizhou, Shanxi and Sichuan they were 0.854 - 0.885, 0.915 - 0.930 and 0.820 - 0.848, respectively. The samples could be divided into three clusters at a rescaled distance of 7.5: S1 - S4 as cluster 1, S5 - S8 as cluster 2, and the others grouped into cluster 3. Conclusion: The findings indicate that HPLC fingerprinting technology is appropriate for the quality control and origin discrimination of G. rhizoma.
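The HCA step groups batches from a matrix of common-peak areas and cuts the resulting dendrogram at a chosen distance. A minimal sketch with scipy, using a random stand-in for the real 12 × 11 peak-area matrix (the 7.5 cutoff mirrors the rescaled distance quoted above but is purely illustrative here):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical matrix: 12 batches (S1..S12) x 11 common-peak areas
rng = np.random.default_rng(0)
X = rng.random((12, 11))

# Ward linkage on Euclidean distances, as is typical for HCA of
# chromatographic fingerprints; cut the dendrogram at a distance.
Z = linkage(X, method="ward")
clusters = fcluster(Z, t=7.5, criterion="distance")
print(clusters)  # one cluster label per batch
```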
An Algorithm for Reducing the Effect of Compression/Decompression Techniques on Fingerprint Minutiae
Abstract:
Chromatographic fingerprints of 46 Eucommia Bark samples were obtained by liquid chromatography with diode array detection (LC-DAD). These samples were collected from eight provinces in China with different geographical locations and climates. Seven common LC peaks that could be used for fingerprinting this popular traditional Chinese medicine were found, and six were identified by LC-MS as substituted resinols (4 compounds), geniposidic acid and chlorogenic acid. Principal components analysis (PCA) indicated that samples from Sichuan, Hubei, Shanxi and Anhui (the SHSA provinces) clustered together. The objects from the other four provinces, Guizhou, Jiangxi, Gansu and Henan, were discriminated and widely scattered on the biplot in four province clusters. The SHSA provinces are geographically close together while the others are spread out. These results therefore suggested that the composition of the Eucommia Bark samples depends on their geographic location and environment. In general, the basis for discrimination on the PCA biplot from the original 46 objects × 7 variables data matrix was the same as that for the SHSA subset (36 × 7 matrix). The seven marker compound loading vectors grouped into three sets: (1) three closely correlating substituted resinol compounds and chlorogenic acid; (2) the fourth resinol compound, identified by the OCH3 substituent in the R4 position, and an unknown compound; and (3) geniposidic acid, which was independent of the set 1 variables and negatively correlated with the set 2 ones. These observations from the PCA biplot were supported by hierarchical cluster analysis, and indicated that Eucommia Bark preparations may be successfully compared using the HPLC responses of the seven marker compounds and chemometric methods such as PCA and the complementary hierarchical cluster analysis (HCA).
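A PCA biplot of this kind overlays object scores and variable loadings obtained from the SVD of the column-centered data matrix. A minimal sketch, with a random stand-in for the 46 × 7 peak-area matrix:

```python
import numpy as np

def pca_scores_loadings(X, n_components=2):
    """PCA via SVD of the column-centered data matrix.
    Returns object scores and variable loadings for a biplot."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # samples in PC space
    loadings = Vt[:n_components].T                    # one row per variable
    return scores, loadings

# Hypothetical 46 samples x 7 marker-compound peak areas
rng = np.random.default_rng(1)
X = rng.random((46, 7))
scores, loadings = pca_scores_loadings(X)
print(scores.shape, loadings.shape)   # (46, 2) (7, 2)
```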
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
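The generalized Gaussian model for subband coefficients is commonly fitted from sample moments. The thesis formulates a least-squares solve of a nonlinear function of the shape parameter; the sketch below uses the closely related moment-ratio (Mallat-style) estimator instead, solved by root finding, as an illustration of the same modeling step:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_generalized_gaussian(x):
    """Moment-matching fit of a zero-mean generalized Gaussian,
    p(x) ~ exp(-(|x|/alpha)**beta).  Solves the ratio function
    M(b) = Gamma(2/b)**2 / (Gamma(1/b)*Gamma(3/b)) = (E|x|)**2 / E[x**2]
    for the shape parameter beta, then recovers the scale alpha."""
    x = np.asarray(x, dtype=float)
    r = np.mean(np.abs(x))**2 / np.mean(x**2)
    M = lambda b: gamma(2.0 / b)**2 / (gamma(1.0 / b) * gamma(3.0 / b))
    beta = brentq(lambda b: M(b) - r, 0.1, 10.0)   # M is monotone in b
    alpha = np.sqrt(np.mean(x**2) * gamma(1.0 / beta) / gamma(3.0 / beta))
    return beta, alpha

# Sanity check on Gaussian data: expect beta close to 2
rng = np.random.default_rng(2)
print(fit_generalized_gaussian(rng.normal(size=50_000)))
```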
Abstract:
This paper presents two algorithms for smoothing and feature extraction for fingerprint classification. Deutsch's thinning algorithm (2) for rectangular arrays is used to thin the digitized (binary) fingerprint. A simple algorithm is also suggested for classifying the fingerprints. Experimental results obtained using these algorithms are presented.
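Thinning reduces binary ridges to one-pixel-wide skeletons before features are read off. Deutsch's algorithm is not reproduced here; the sketch below uses the better-known Zhang-Suen iterative thinning, which plays the same role on a binary ridge image:

```python
import numpy as np

def zhang_suen_thinning(img):
    """Iterative thinning of a binary image (1 = ridge, 0 = background).
    Zhang-Suen is used as a stand-in; Deutsch's rectangular-array
    thinning differs in its deletion conditions but serves the same role."""
    I = np.asarray(img, dtype=np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            P = np.pad(I, 1)          # snapshot with a zero border
            to_delete = []
            for i in range(1, P.shape[0] - 1):
                for j in range(1, P.shape[1] - 1):
                    if P[i, j] == 0:
                        continue
                    # 8-neighbours in clockwise order P2..P9
                    n = [P[i-1, j], P[i-1, j+1], P[i, j+1], P[i+1, j+1],
                         P[i+1, j], P[i+1, j-1], P[i, j-1], P[i-1, j-1]]
                    B = sum(n)                       # neighbour count
                    A = sum(1 for k in range(8)      # 0->1 transitions
                            if n[k] == 0 and n[(k + 1) % 8] == 1)
                    if not (2 <= B <= 6 and A == 1):
                        continue
                    if step == 0:
                        ok = n[0]*n[2]*n[4] == 0 and n[2]*n[4]*n[6] == 0
                    else:
                        ok = n[0]*n[2]*n[6] == 0 and n[0]*n[4]*n[6] == 0
                    if ok:
                        to_delete.append((i - 1, j - 1))
            for (i, j) in to_delete:
                I[i, j] = 0
            changed = changed or bool(to_delete)
    return I
```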
Abstract:
Fingerprints are used for identification in forensics, and fingerprint identification is classified as manual or automatic. Automatic fingerprint identification systems are classified as latent or exemplar. A novel exemplar technique, Fingerprint Image Verification using Dictionary Learning (FIVDL), is proposed to improve performance on low quality fingerprints; the dictionary learning method reduces the time complexity by using block processing instead of pixel processing. The dynamic range of an image is adjusted using the Successive Mean Quantization Transform (SMQT) technique, and frequency domain noise is reduced using spectral frequency histogram equalization. Then, an adaptive nonlinear dynamic range adjustment technique is used to determine the local spectral features of the corresponding fingerprint ridge frequency and orientation. The dictionary is constructed using the spatial fundamental frequency determined from the spectral features. These dictionaries help remove the spurious noise present in fingerprints. Further, the dictionaries are used to reconstruct the image for matching. The proposed FIVDL is verified on the FVC database sets, and experimental results show an improvement over state-of-the-art techniques. (C) 2015 The Authors. Published by Elsevier B.V.
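SMQT builds an L-bit output by recursively splitting the data around its mean, one bit per level, which makes the result insensitive to gain and bias in the input. A minimal sketch of that recursion (a simplified illustration, not the paper's implementation):

```python
import numpy as np

def smqt(values, levels=8):
    """Successive Mean Quantization Transform (SMQT) sketch.
    Each level splits the current subset around its mean and
    contributes one output bit, so the 2**levels output range
    reflects local structure rather than absolute intensity."""
    values = np.asarray(values, dtype=float)
    out = np.zeros(values.shape, dtype=np.int64)

    def split(mask, level):
        if level == 0 or not mask.any():
            return
        mean = values[mask].mean()
        upper = mask & (values > mean)
        lower = mask & ~(values > mean)
        out[upper] += 1 << (level - 1)   # set this level's bit
        split(upper, level - 1)
        split(lower, level - 1)

    split(np.ones(values.shape, dtype=bool), levels)
    return out

# Gain/bias invariance check: same output for x and 3*x + 7
x = np.random.default_rng(3).random(16)
print(np.array_equal(smqt(x), smqt(3 * x + 7)))   # True
```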
Abstract:
Features of the homologous relationships of proteins can provide us a general picture of the protein universe, assist protein design and analysis, and further our comprehension of the evolution of organisms. Here we carried out a study of the evolution of protein molecules by investigating homologous relationships among residue segments. The motive was to identify detailed topological features of homologous relationships for short residue segments in the whole protein universe. Based on data from a large number of non-redundant proteins, the universe of non-membrane polypeptides was analyzed by considering both residue mutations and structural conservation. By connecting homologous segments with edges, we obtained a homologous relationship network of the whole universe of short residue segments, which we named the graph of polypeptide relationships (GPR). Since the network is extremely complicated for topological transitions, to obtain an in-depth understanding, only subgraphs composed of vital nodes of the GPR were analyzed. Such analysis of vital subgraphs of the GPR revealed a donut-shaped fingerprint. Utilization of this topological feature revealed the switch sites (where previously hidden fibril-forming "hot spots" first become exposed, providing a further opportunity for protein aggregation; 188-202) of the conformational conversion of the normal alpha-helix-rich prion protein PrPC to the beta-sheet-rich PrPSc that is thought to be responsible for a group of fatal neurodegenerative diseases, the transmissible spongiform encephalopathies. Efforts in analyzing other proteins related to various conformational diseases are also introduced. (C) 2009 Elsevier Ltd. All rights reserved.
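The GPR construction itself is straightforward to express: nodes are residue segments, edges connect pairs judged homologous, and analysis is restricted to a subgraph of vital nodes. A sketch with networkx, in which the homology predicate and the degree cutoff for "vital" are invented stand-ins for the paper's mutation and structural-conservation criteria:

```python
import networkx as nx

def build_homology_graph(segments, is_homologous):
    """Nodes are residue segments; edges connect homologous pairs.
    `is_homologous` is a caller-supplied predicate (hypothetical here)."""
    G = nx.Graph()
    G.add_nodes_from(range(len(segments)))
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if is_homologous(segments[i], segments[j]):
                G.add_edge(i, j)
    return G

def vital_subgraph(G, min_degree):
    """Keep only 'vital' nodes, approximated here by a degree cutoff."""
    keep = [n for n, d in G.degree() if d >= min_degree]
    return G.subgraph(keep).copy()

# Toy usage: segments homologous when they differ at <= 1 position
segs = ["ACDEF", "ACDEY", "ACDGY", "WWWWW"]
G = build_homology_graph(
    segs, lambda a, b: sum(x != y for x, y in zip(a, b)) <= 1)
print(list(G.edges()), list(vital_subgraph(G, 1).nodes()))
```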
Abstract:
Biofingerprinting chromatogram analysis, defined as the comparison of fingerprinting chromatograms of extracts of traditional Chinese medicines (TCMs) before and after interaction with biological systems (DNA, proteins, cells, etc.), was proposed for screening and analysis of the multiple bioactive compounds in TCMs. A method of microdialysis sampling combined with high performance liquid chromatography (HPLC) was applied to study the DNA-binding properties of TCM extracts. Seven compounds from the TCM Coptis chinensis Franch (Coptis) were found to bind to calf thymus DNA (ct-DNA), but only three from Phellodendron amurense Rupr. (Phellodendron) and none from Sophora flavescens Ait. (Sophora). Three of them were identified as berberine, palmatine and jatrorrhizine, and their association constants (K) with ct-DNA were determined by microdialysis/HPLC. Their competitive binding behaviors toward ct-DNA were also investigated. © 2005 Elsevier B.V. All rights reserved.
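For a simple 1:1 binding model, an association constant follows directly from the free-ligand concentration that microdialysis/HPLC measures before and after equilibration with DNA. A worked sketch with invented concentrations (the paper's actual analysis may use a site-based or competitive model):

```python
def association_constant(total_ligand, free_ligand, dna_sites):
    """1:1 binding sketch: K = [bound] / ([free] * [sites_free]).
    All concentrations in mol/L; the numbers below are invented
    purely for illustration."""
    bound = total_ligand - free_ligand
    free_sites = dna_sites - bound
    return bound / (free_ligand * free_sites)

# Hypothetical numbers: 50 uM alkaloid in total, 30 uM remains free
# after dialysis against 100 uM of ct-DNA binding sites
K = association_constant(50e-6, 30e-6, 100e-6)
print(f"K = {K:.2e} L/mol")   # ~8.3e+03 L/mol for these inputs
```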
Abstract:
Elliott, G. N., Worgan, H., Broadhurst, D. I., Draper, J. H., & Scullion, J. (2007). Soil differentiation using fingerprint Fourier transform infrared spectroscopy, chemometrics and genetic algorithm-based feature selection. Soil Biology & Biochemistry, 39(11), 2888-2896. Sponsorship: BBSRC / NERC
Abstract:
Beckmann, M., Enot, D. P., Overy, D. P., & Draper, J. (2007). Representation, comparison, and interpretation of metabolome fingerprint data for total composition analysis and quality trait investigation in potato cultivars. Journal of Agricultural and Food Chemistry, 55(9), 3444-3451.
Abstract:
This paper reviews the fingerprint classification literature, looking at the problem from a double perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. Then, we focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction and learning methods are presented. A critical view of the existing literature has led us to present a discussion of the existing methods and their drawbacks, such as difficulty of reimplementation, lack of details, or major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented.
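Orientation map extraction, one of the feature extraction steps surveyed, is most often done with the classic gradient-based least mean square method: gradient angles are doubled so that opposite gradients reinforce, averaged per block, then halved back. A minimal sketch of that estimator:

```python
import numpy as np

def orientation_map(img, block=16):
    """Block-wise ridge orientation via the gradient-based least mean
    square method common in the surveyed literature.  Assumes image
    dimensions are multiples of `block` for simplicity."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)                  # row and column gradients
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            sl = (slice(bi * block, (bi + 1) * block),
                  slice(bj * block, (bj + 1) * block))
            vx = np.sum(2 * gx[sl] * gy[sl])           # doubled angles
            vy = np.sum(gx[sl] ** 2 - gy[sl] ** 2)
            # Ridge orientation is orthogonal to the mean gradient
            theta[bi, bj] = 0.5 * np.arctan2(vx, vy) + np.pi / 2
    return theta
```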