282 results for FINGERPRINTS
Abstract:
A visualization plot of a data set of molecular data is a useful tool for gaining insight into a set of molecules. In chemoinformatics, most visualization plots are of molecular descriptors, and the statistical model most often used to produce a visualization is principal component analysis (PCA). This paper takes PCA, together with four other statistical models (NeuroScale, GTM, LTM, and LTM-LIN), and evaluates their ability to produce clustering in visualizations not of molecular descriptors but of molecular fingerprints. Two different tasks are addressed: understanding structural information (particularly combinatorial libraries) and relating structure to activity. The quality of the visualizations is compared both subjectively (by visual inspection) and objectively (with global distance comparisons and local k-nearest-neighbor predictors). On the data sets used to evaluate clustering by structure, LTM is found to perform significantly better than the other models. In particular, the clusters in LTM visualization space are consistent with the relationships between the core scaffolds that define the combinatorial sublibraries. On the data sets used to evaluate clustering by activity, LTM again gives the best performance but by a smaller margin. The results of this paper demonstrate the value of using both a nonlinear projection map and a Bernoulli noise model for modeling binary data.
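The baseline model in this abstract, PCA of a binary fingerprint matrix, can be sketched in plain numpy (a minimal illustration, not the paper's implementation; the data here are random stand-ins for real structural fingerprints):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary fingerprint matrix: 20 molecules x 64 bits
# (random stand-ins for real structural fingerprints).
X = rng.integers(0, 2, size=(20, 64)).astype(float)

# PCA via SVD of the mean-centred matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# 2-D visualization coordinates: projection onto the first two PCs.
coords = Xc @ Vt[:2].T
print(coords.shape)  # (20, 2)
```

The nonlinear models compared in the paper (GTM, LTM) replace this linear projection with a latent-variable mapping, which is what lets them respect a Bernoulli noise model for the binary bits.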
Abstract:
Background: Pigeonpea (Cajanus cajan (L.) Millsp.) is a drought-tolerant legume of the Fabaceae family and the only cultivated species in the genus Cajanus. It is mainly cultivated in the semi-arid tropics of Asia and Oceania, Africa and America. In Malawi, it is grown as a source of food and income and for soil improvement in intercropping systems. However, varietal contamination due to natural outcrossing causes significant quality reduction and yield losses. In this study, 48 polymorphic SSR markers were used to assess the diversity among all pigeonpea varieties cultivated in Malawi to determine if a genetic fingerprint could be identified to distinguish the popular varieties. Results: A total of 212 alleles were observed, with an average of 5.58 alleles per marker and a maximum of 14 alleles produced by CCttc019 (Marker 40). Polymorphic information content (PIC) ranged from 0.03 to 0.89 with an average of 0.30. A neighbor-joining tree produced 4 clusters. The most commonly cultivated varieties, which include released varieties and cultivated landraces, were well spread across all the clusters observed, indicating that they generally represented the genetic diversity available in Malawi, although substantial variation was evident that can still be exploited through further breeding. Conclusion: Screening of the allelic data associated with the five most popular cultivated varieties revealed 6 markers – CCB1, CCB7, Ccac035, CCttc003, Ccac026 and CCttc019 – which displayed unique allelic profiles for each of the five varieties. This genetic fingerprint can potentially be applied in seed certification to confirm the genetic purity of seeds delivered to Malawi farmers.
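The PIC statistic reported per marker has a standard closed form (Botstein's formula) in terms of allele frequencies; a small sketch with illustrative frequencies:

```python
import numpy as np

def pic(freqs):
    """Polymorphic information content (Botstein-style formula):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2,
    given allele frequencies that sum to 1."""
    p = np.asarray(freqs, dtype=float)
    hom = np.sum(p ** 2)
    # sum over unordered pairs i < j of p_i^2 * p_j^2
    cross = (np.sum(p ** 2) ** 2 - np.sum(p ** 4)) / 2.0
    return 1.0 - hom - 2.0 * cross

print(pic([0.5, 0.5]))  # 0.375, the familiar biallelic maximum
```

A monomorphic marker (a single allele at frequency 1) gives PIC = 0, matching the intuition that it carries no discriminating power.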
Abstract:
Purpose: To investigate the spectrum-effect relationships between high performance liquid chromatography (HPLC) fingerprints and the duodenum contractility of charred areca nut (CAN) in rats. Methods: An HPLC method was used to establish the fingerprint of CAN. The promoting effect on intestinal smooth muscle contractility was assayed to evaluate the duodenum contractility of CAN in vitro. In addition, the spectrum-effect relationships between HPLC fingerprints and bioactivities of CAN were investigated using multiple linear regression analysis (backward method). Results: Fourteen common peaks were detected, and peak 3 (5-hydroxymethyl-2-furfural, 5-HMF) was selected as the reference peak to calculate the relative retention times of the 13 other common peaks. In addition, the equation of the spectrum-effect relationships {Y = 3.818 - 1.126X1 + 0.817X2 - 0.045X4 - 0.504X5 + 0.728X6 - 0.056X8 + 1.122X9 - 0.247X13 - 0.978X14 (p < 0.05, R2 = 1)} was established by multiple linear regression analysis (backward method). In this equation, the absolute value of the coefficient of each term (X1, X2, X4, X5, X6, X8, X9, X13, X14) reflects the strength of association between the corresponding component and the contractility parameter. Conclusion: The model presented in this study successfully unraveled the spectrum-effect relationship of CAN, which provides a promising strategy for screening effective constituents of areca nut.
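Backward-elimination multiple linear regression of the kind used for the spectrum-effect model can be sketched as follows. This is a simplified stand-in that drops the predictor whose removal least increases the residual sum of squares (RSS), rather than using p-values as statistics packages do:

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of an ordinary least squares fit
    (an intercept column is added automatically)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

def backward_eliminate(X, y, tol=1e-3):
    """Greedy backward elimination: repeatedly drop the predictor whose
    removal increases the RSS the least, while that increase stays
    below tol. Returns the indices of the retained predictors."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        base = rss(X[:, keep], y)
        incs = [rss(X[:, [k for k in keep if k != j]], y) - base
                for j in keep]
        i = int(np.argmin(incs))
        if incs[i] > tol:
            break  # every remaining predictor matters
        keep.pop(i)
    return keep
```

On noise-free synthetic data where the response depends on only two columns, the procedure keeps exactly those two and discards the rest.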
Abstract:
We studied the electrical transport properties of Au-seeded germanium nanowires with radii ranging from 11 to 80 nm at ambient conditions. We found a non-trivial dependence of the electrical conductivity, mobility and carrier density on the radius. In particular, two regimes were identified for large (lightly doped) and small (more heavily doped) nanowires, in which the charge-carrier drift is dominated by electron-phonon and ionized-impurity scattering, respectively. This goes hand in hand with the finding that the electrostatic properties for radii below ca. 37 nm have quasi one-dimensional character, as reflected by the extracted screening lengths.
Abstract:
The molecular and metal profile fingerprints were obtained from a complex substance, Atractylis chinensis DC, a traditional Chinese medicine (TCM), with the use of the high performance liquid chromatography (HPLC) and inductively coupled plasma atomic emission spectroscopy (ICP-AES) techniques. This substance was used in this work as an example of a complex biological material that has found application as a TCM. Such TCM samples are traditionally processed by the Bran, Cut, Fried and Swill methods, and were collected from five provinces in China. The data matrices obtained from the two types of analysis produced two principal component biplots, which showed that the HPLC fingerprint data were discriminated on the basis of the methods for processing the raw TCM, while the metal analysis data grouped according to geographical origin. When the two data matrices were combined into a single two-way matrix, the resulting biplot showed a clear separation on the basis of the HPLC fingerprints. Importantly, within each different grouping the objects separated according to their geographical origin, and they ranked approximately in the same order in each group. This result suggested that by using such an approach, it is possible to derive improved characterisation of the complex TCM materials on the basis of the two kinds of analytical data. In addition, two supervised pattern recognition methods, the K-nearest neighbors (KNN) method and linear discriminant analysis (LDA), were successfully applied to the individual data matrices, thus supporting the PCA approach.
Abstract:
Chromatographic fingerprints of 46 Eucommia Bark samples were obtained by liquid chromatography with diode array detection (LC-DAD). These samples were collected from eight provinces in China with different geographical locations and climates. Seven common LC peaks that could be used for fingerprinting this popular traditional Chinese medicine were found, and six were identified by LC-MS as substituted resinols (4 compounds), geniposidic acid and chlorogenic acid. Principal components analysis (PCA) indicated that samples from Sichuan, Hubei, Shanxi and Anhui (the SHSA provinces) clustered together. The objects from the other four provinces, Guizhou, Jiangxi, Gansu and Henan, were discriminated and widely scattered on the biplot in four province clusters. The SHSA provinces are geographically close together while the others are spread out. Thus, these results suggested that the composition of the Eucommia Bark samples was dependent on their geographic location and environment. In general, the basis for discrimination on the PCA biplot from the original 46 objects × 7 variables data matrix was the same as that for the SHSA subset (36 × 7 matrix). The seven marker compound loading vectors grouped into three sets: (1) three closely correlating substituted resinol compounds and chlorogenic acid; (2) the fourth resinol compound, identified by the OCH3 substituent in the R4 position, and an unknown compound; and (3) geniposidic acid, which was independent of the set 1 variables and negatively correlated with the set 2 ones. These observations from the PCA biplot were supported by hierarchical cluster analysis (HCA), and indicated that Eucommia Bark preparations may be successfully compared with the use of the HPLC responses from the seven marker compounds and chemometric methods such as PCA and the complementary HCA.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
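One step in the thesis summary above, fitting a generalized Gaussian distribution (GGD) to wavelet coefficients, is commonly done by moment matching on the shape parameter. A minimal sketch (Mallat-style ratio inversion by bisection, not the thesis's least-squares estimator):

```python
import math

def ggd_ratio(beta):
    """r(beta) = (E|X|)^2 / E[X^2] for a zero-mean generalized Gaussian;
    this ratio depends only on the shape parameter beta and is
    monotonically increasing in beta (0.5 for Laplacian, 2/pi for
    Gaussian)."""
    g1 = math.gamma(1.0 / beta)
    g2 = math.gamma(2.0 / beta)
    g3 = math.gamma(3.0 / beta)
    return (g2 * g2) / (g1 * g3)

def estimate_shape(samples, lo=0.1, hi=5.0, iters=60):
    """Moment-matching estimate of the GGD shape parameter: compute the
    sample ratio m1^2 / m2 and invert the monotone ratio function by
    bisection."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = (m1 * m1) / m2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For truly Gaussian data the estimate converges to beta near 2; heavier-tailed subband coefficients give estimates below 1, which is what motivates GGD modeling over a plain Gaussian.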
Abstract:
Buildings and infrastructure represent principal assets of any national economy as well as prime sources of environmental degradation. Making them more sustainable represents a key challenge for the construction, planning and design industries and governments at all levels; and the rapid urbanisation of the 21st century has turned this into a global challenge. This book embodies the results of a major research programme by members of the Australia Co-operative Research Centre for Construction Innovation and its global partners, presented for an international audience of construction researchers, senior professionals and advanced students. It covers four themes, applied to regeneration as well as to new build, within the overall theme of innovation: Sustainable Materials and Manufactures, focusing on building material products, their manufacture and assembly, the reduction of their ecological 'fingerprints', the extension of their service lives, and their re-use and recyclability; it also explores the prospects for applying the principles of the assembly line. Virtual Design, Construction and Management, viewed as increasing sustainable development through automation, enhanced collaboration (such as virtual design teams), real time BL performance assessment during design, simulation of the construction process, life-cycle management of project information (zero information loss), risk minimisation, and increased potential for innovation and value adding. Integrating Design, Construction and Facility Management over the Project Life Cycle, by converging ICT, design science engineering and sustainability science. Integration across spatial scales, enabling building–infrastructure synergies (such as water and energy efficiency); convergences between IT and design and operational processes are also viewed as a key platform for increased sustainability.
Abstract:
A UPLC/Q-TOF-MS/MS method for analyzing the constituents in rat plasma after oral administration of Yin Chen Hao Tang (YCHT), a traditional Chinese medical formula, has been established. The UPLC/MS fingerprints of the samples were first established in vitro and in vivo; 45 compounds were detected in YCHT and 21 compounds in rat plasma after oral administration of YCHT. Of the 45 compounds detected in vitro, 30 were identified, and all 21 compounds detected in rat plasma were identified, either by comparing the retention time and mass spectrometry data with those of reference compounds or by mass spectrometry analysis and consulting the reference literature. Of the 21 compounds identified in rat plasma, 19 were the original forms of compounds absorbed from the 45 compounds detected in vitro, and 2 were metabolites of compounds present in YCHT. It is concluded that a rapid and validated method based on UPLC-MS/MS has been developed, with high sensitivity and resolution, that is well suited to identifying the bioactive constituents in plasma after oral administration of Chinese herbal medicines, and that provides helpful chemical information for further pharmacology and mechanism-of-action research on this Chinese medical formula.
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent: it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated for text-dependent speaker verification using Hidden Markov Model based digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further shown to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples).
The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping applications. The tuning of the parameters, the number of instances and samples, serves both the security and user convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
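Under the statistical-independence assumption discussed in this abstract, the fused error rates of a simple AND-of-stages / OR-of-attempts scheme have closed forms. The sketch below is a simplified stand-in for the dissertation's derived expressions, not a reproduction of them:

```python
def fused_error_rates(far, frr, n_stages, n_attempts):
    """Sequential fusion error rates assuming independent decisions:
    each stage accepts if ANY of n_attempts attempts passes
    (multi-sample OR rule), and the claimant must pass ALL n_stages
    stages (multi-instance AND rule).
    far/frr are the base classifier false accept / false reject rates."""
    stage_far = 1.0 - (1.0 - far) ** n_attempts  # impostor passes a stage
    stage_frr = frr ** n_attempts                # client fails every attempt
    total_far = stage_far ** n_stages            # impostor passes all stages
    total_frr = 1.0 - (1.0 - stage_frr) ** n_stages
    return total_far, total_frr
```

Adding stages drives the false acceptance rate down at the cost of more false rejections, while adding attempts per stage does the reverse; together the two parameters give exactly the controllable trade-off the abstract describes.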
Abstract:
In this chapter, we discuss four related areas of cryptology, namely, authentication, hashing, message authentication codes (MACs), and digital signatures. These topics represent active and growing research areas in cryptology. Space limitations allow us to concentrate only on the essential aspects of each topic. The bibliography is intended to supplement our survey. We have selected those items which provide an overview of the current state of knowledge in the above areas. Authentication deals with the problem of providing assurance to a receiver that a communicated message originates from a particular transmitter, and that the received message has the same content as the transmitted message. A typical authentication scenario occurs in computer networks, where the identity of two communicating entities is established by means of authentication. Hashing is concerned with the problem of providing a relatively short digest, or fingerprint, of a much longer message or electronic document. A hashing function must satisfy (at least) the critical requirement that it be computationally infeasible to find two distinct messages with the same fingerprint. Hashing functions have numerous applications in cryptology. They are often used as primitives to construct other cryptographic functions. MACs are symmetric key primitives that provide message integrity against active spoofing by appending a cryptographic checksum to a message that is verifiable only by the intended recipient of the message. Message authentication is one of the most important ways of ensuring the integrity of information that is transferred by electronic means. Digital signatures provide electronic equivalents of handwritten signatures. They preserve the essential features of handwritten signatures and can be used to sign electronic documents. Digital signatures can potentially be used in legal contexts.
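The hash-as-fingerprint and MAC primitives surveyed here are available in the Python standard library; a brief illustration (the key and messages are made up for the example):

```python
import hashlib
import hmac

# A hash digest serves as a short, fixed-length "fingerprint" of a message.
msg = b"pay Alice 10 dollars"
tampered = b"pay Alice 100 dollars"

fp = hashlib.sha256(msg).hexdigest()
fp_tampered = hashlib.sha256(tampered).hexdigest()
print(fp != fp_tampered)  # True: changing the message changes the fingerprint

# A MAC adds a shared secret key, so only a holder of the key can
# produce or verify the checksum appended to the message.
key = b"shared secret key"  # illustrative only, never hard-code real keys
tag = hmac.new(key, msg, hashlib.sha256).digest()
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
print(ok)  # True: the recomputed tag matches
```

Note `hmac.compare_digest` is used instead of `==` to avoid leaking information through timing when tags are compared.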
Abstract:
A combination of laser plasma ablation and strain control in CdO/ZnO heterostructures is used to produce and stabilize a metastable wurtzite CdO nanophase. According to the Raman selection rules, this nanophase is Raman-active whereas the thermodynamically preferred rocksalt phase is inactive. The wurtzite-specific and thickness/strain-dependent Raman fingerprints and phonon modes are identified and can be used for reliable and inexpensive nanophase detection. The wurtzite nanophase formation is also confirmed by x-ray diffractometry. The demonstrated ability of the metastable phase and phonon mode control in CdO/ZnO heterostructures is promising for the development of next-generation light emitting sources and exciton-based laser diodes.
Abstract:
Wi-Fi is a commonly available source of localization information in urban environments but is challenging to integrate into conventional mapping architectures. Current state-of-the-art probabilistic Wi-Fi SLAM algorithms are limited by spatial resolution and by an inability to remove the accumulation of rotational error, inherent limitations of the Wi-Fi architecture. In this paper, we leverage the low-quality sensory requirements and coarse metric properties of RatSLAM to localize using Wi-Fi fingerprints. To further improve performance, we present a novel sensor fusion technique that integrates camera and Wi-Fi to improve localization specificity, and we use compass sensor data to remove orientation drift. We evaluate the algorithms in diverse real-world indoor and outdoor environments, including an office floor, a university campus and a visually aliased circular building loop. The algorithms produce topologically correct maps that are superior to those produced using only a single sensor modality.
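In its simplest form, Wi-Fi fingerprint localization is nearest-neighbour matching of an observed RSSI scan against a radio map of previously surveyed locations; a toy sketch (the access-point readings and coordinates are invented for illustration, and real systems such as the RatSLAM integration above are far more involved):

```python
import numpy as np

# Toy radio map: one RSSI fingerprint (dBm, one value per access point)
# recorded at each known (x, y) location. Values are illustrative only.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -45, -75],
    (0.0, 5.0): [-75, -72, -42],
}

def localize(observed):
    """Return the surveyed location whose stored RSSI fingerprint is
    closest (Euclidean distance) to the observed scan."""
    locs = list(radio_map)
    d = [np.linalg.norm(np.array(radio_map[p]) - np.array(observed))
         for p in locs]
    return locs[int(np.argmin(d))]

print(localize([-42, -69, -78]))  # → (0.0, 0.0)
```

The coarse spatial resolution of this matching is exactly the limitation the abstract mentions; fusing it with camera and compass cues is what restores metric and orientation accuracy.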
Abstract:
Samples of Forsythia suspensa from raw (Laoqiao) and ripe (Qingqiao) fruit were analyzed with the use of the HPLC-DAD and ESI-MS techniques. Seventeen peaks were detected, and of these, twelve were identified. Most were related to the glucopyranoside molecular fragment. Samples collected from three geographical areas (Shanxi, Henan and Shandong Provinces) were discriminated with the use of hierarchical clustering analysis (HCA), discriminant analysis (DA), and principal component analysis (PCA) models, but only PCA was able to provide further information about the relationships between objects and loadings; eight peaks were related to the provinces of sample origin. The supervised classification models, the K-nearest neighbor (KNN), least squares support vector machines (LS-SVM), and counter-propagation artificial neural network (CP-ANN) methods, all indicated successful classification, but only KNN produced a 100% classification rate. Thus, the fruit were discriminated on the basis of their places of origin.
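The KNN classifier that achieved the best classification rate here is straightforward to sketch as majority voting over nearest peak-area vectors (the data below are synthetic stand-ins, not the paper's measurements):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Majority-vote k-nearest-neighbour classification: find the k
    training samples closest to x in Euclidean distance and return
    their most common label (e.g. a province of origin)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Two synthetic "province" clusters of peak-area vectors.
X_train = np.array([[0, 0], [0, 1], [1, 0],
                    [10, 10], [10, 11], [11, 10]], dtype=float)
y_train = ["Shanxi", "Shanxi", "Shanxi", "Henan", "Henan", "Henan"]

print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # Shanxi
```

In practice the peak areas would first be normalized (e.g. autoscaled), since KNN's distance metric is sensitive to the relative scale of the variables.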