31 results for PATTERN-RECOGNITION RECEPTOR
Abstract:
Multi-element analysis of honey samples was carried out with the aim of developing a reliable method for tracing the origin of honey. Forty-two chemical elements were determined (Al, Cu, Pb, Zn, Mn, Cd, Tl, Co, Ni, Rb, Ba, Be, Bi, U, V, Fe, Pt, Pd, Te, Hf, Mo, Sn, Sb, P, La, Mg, I, Sm, Tb, Dy, Sd, Th, Pr, Nd, Tm, Yb, Lu, Gd, Ho, Er, Ce, Cr) by inductively coupled plasma mass spectrometry (ICP-MS). Three machine learning tools for classification and two for attribute selection were then applied to show that data mining tools can identify the region where a honey originated. Our results clearly demonstrate the potential of the Support Vector Machine (SVM), Multilayer Perceptron (MLP) and Random Forest (RF) chemometric tools for honey origin identification. Moreover, the selection tools allowed the 42 trace-element concentrations to be reduced to only 5. (C) 2012 Elsevier Ltd. All rights reserved.
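For illustration, a minimal sketch (not the authors' code) of such a pipeline in Python, assuming a hypothetical table honey_icpms.csv with one column per element plus a region label; a univariate filter stands in for the paper's attribute-selection tools, keeping 5 of the 42 elements:

```python
# Hedged sketch (not the authors' code): classify honey origin from ICP-MS
# element concentrations, keeping only 5 of the 42 elements.
# The file name "honey_icpms.csv" and its "region" column are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("honey_icpms.csv")              # 42 element columns + "region" label
X, y = df.drop(columns="region"), df["region"]

for name, clf in [("SVM", SVC()),
                  ("MLP", MLPClassifier(max_iter=2000)),
                  ("RF", RandomForestClassifier(n_estimators=200))]:
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=5),  # univariate filter: keep 5 elements
                         clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```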
Abstract:
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. The human (animal) brain, on the other hand, performs both low and high orders of learning and readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is referred to here as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances with the pattern formation of the data. Our study shows that the proposed technique not only realizes classification according to the pattern formation, but also improves the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for example through greater mixing among classes, a larger weight on the high level term is required for correct classification. This confirms that high level classification is especially important in complex classification scenarios. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it improves the overall pattern recognition rate.
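A heavily simplified sketch of the hybrid idea, not the authors' exact formulation: a conventional classifier provides the low level probabilities, while the high level term favours the class whose kNN graph (summarized here by its average clustering coefficient) is least disturbed by inserting the test point; the mixing weight lam, the graph construction and the compliance measure are illustrative assumptions:

```python
# Hedged sketch of a hybrid low/high level classifier (illustrative only).
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

def knn_graph(X, k=3):
    # undirected kNN graph over the rows of X
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(X))).fit(X)
    _, idx = nn.kneighbors(X)
    g = nx.Graph()
    g.add_nodes_from(range(len(X)))
    for i, neigh in enumerate(idx):
        g.add_edges_from((i, j) for j in neigh[1:])   # skip the point itself
    return g

def high_level_scores(x, X_by_class, k=3):
    # compliance of x with each class: a small change in the class network's
    # average clustering coefficient means high compliance
    scores = []
    for Xc in X_by_class:
        base = nx.average_clustering(knn_graph(Xc, k))
        extended = nx.average_clustering(knn_graph(np.vstack([Xc, x]), k))
        scores.append(1.0 / (1e-9 + abs(extended - base)))
    s = np.asarray(scores)
    return s / s.sum()

def hybrid_predict(x, X_by_class, low_clf, lam=0.3, k=3):
    # X_by_class must be ordered like low_clf.classes_; any classifier with
    # predict_proba can supply the low level term
    p_low = low_clf.predict_proba(x.reshape(1, -1))[0]
    p_high = high_level_scores(x, X_by_class, k)
    return np.argmax((1 - lam) * p_low + lam * p_high)
```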
Abstract:
Duchenne muscular dystrophy (DMD) is a recessive X-linked form of muscular dystrophy characterized by progressive and irreversible degeneration of the muscles. The mdx mouse is the classical animal model for DMD, showing similar molecular and protein defects. The mdx mouse, however, does not show significant muscle weakness, and its diaphragm muscle is significantly more degenerated than its skeletal muscles. In this work, magnetic resonance spectroscopy (MRS) was used to study the metabolic profile of quadriceps and diaphragm muscles from mdx and control mice. Using principal component analysis (PCA), the animals were separated into groups according to age and lineage. The classification was compared to histopathological analysis. Among the 24 metabolites identified in the nuclear MR spectra, only 19 were used by the PCA program for classification. These may be key biomarkers associated with the progression of degeneration in mdx muscles and with natural aging in control mice. Glutamate, glutamine, succinate, isoleucine, acetate, alanine and glycerol were increased in mdx samples compared to control mice, in contrast to carnosine, taurine, glycine, methionine and creatine, which were decreased. These results suggest that MRS associated with pattern recognition analysis can be a reliable tool to assess the degree of pathological and metabolic alteration in dystrophic tissue, thereby affording the possibility of evaluating the beneficial effects of putative therapies. (C) 2012 Elsevier Inc. All rights reserved.
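A minimal sketch of the pattern-recognition step, assuming a hypothetical table metabolites.csv with one row per animal, one metabolite concentration per column and a group label; a score plot of the first two principal components is one common way to visualize such group separation:

```python
# Hedged sketch: PCA score plot separating animal groups from a metabolite table.
# "metabolites.csv" and its "group" column are assumptions about the data layout.
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("metabolites.csv")               # rows: animals; columns: metabolites + "group"
X = StandardScaler().fit_transform(df.drop(columns="group"))
scores = PCA(n_components=2).fit_transform(X)     # first two principal components

for grp in df["group"].unique():                  # e.g. mdx vs control, young vs old
    m = (df["group"] == grp).to_numpy()
    plt.scatter(scores[m, 0], scores[m, 1], label=grp)
plt.xlabel("PC1"); plt.ylabel("PC2"); plt.legend(); plt.show()
```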
Abstract:
In this article we propose an efficient and accurate method for fault location in underground distribution systems by means of an Optimum-Path Forest (OPF) classifier. We applied the time-domain reflectometry method for signal acquisition, and the acquired signals were then analyzed by OPF and several other well-known pattern recognition techniques. The results indicated that OPF and support vector machines outperformed artificial neural networks and a Bayesian classifier, but OPF was by far the most efficient classifier for training and the second fastest for classification.
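A hedged sketch of this kind of accuracy-versus-time comparison using scikit-learn baselines only; OPF itself is not in scikit-learn and would be plugged in analogously (e.g. via LibOPF), and the synthetic data merely stands in for features extracted from time-domain reflectometry signals:

```python
# Hedged sketch: compare accuracy, training time and prediction time of the
# baseline classifiers mentioned in the abstract on stand-in TDR features.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_classes=4,
                           n_informative=8, random_state=0)   # stand-in for TDR features
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("ANN", MLPClassifier(max_iter=2000)),
                  ("Bayes", GaussianNB())]:
    t0 = time.time(); clf.fit(Xtr, ytr); t_fit = time.time() - t0
    t0 = time.time(); acc = clf.score(Xte, yte); t_pred = time.time() - t0
    print(f"{name}: acc={acc:.3f} fit={t_fit:.3f}s predict={t_pred:.3f}s")
```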
Abstract:
Mannan-binding lectin (MBL) is an important protein of the innate immune system and protects the body against infection through opsonization and activation of the complement system on surfaces with an appropriate presentation of carbohydrate ligands. The quaternary structure of human MBL is built from oligomerization of structural units into polydisperse complexes, typically with three to eight structural units, each containing three lectin domains. Insight into the connection between the structure and ligand-binding properties of these oligomers has been lacking. In this article, we present an analysis of the binding of size-fractionated human MBL oligomers to neoglycoprotein-coated surfaces, studied with small-angle x-ray scattering and surface plasmon resonance spectroscopy. The MBL oligomers bound to these surfaces mainly in two modes, with dissociation constants in the micromolar to nanomolar range. The binding kinetics were markedly influenced by both the density of ligands and the number of ligand-binding domains in the oligomers. These findings demonstrate that the MBL-binding kinetics depend critically on structural characteristics on the nanometer scale, with regard both to the dimensions of the oligomers and to the presentation of ligands on surfaces. Our work therefore suggests that the surface binding of MBL involves recognition of patterns with dimensions on the order of 10-20 nm. The recent understanding that the surfaces of many microbes are organized with structural features on the nanometer scale suggests that these properties of MBL ligand recognition potentially constitute an important part of the pattern-recognition ability of these polyvalent oligomers. The Journal of Immunology, 2012, 188: 1292-1306.
Abstract:
The development of new statistical and computational methods is increasingly making it possible to bridge the gap between the hard sciences and the humanities. In this study, we propose an approach based on a quantitative evaluation of attributes of objects in fields of the humanities, from which concepts such as dialectics and opposition are formally defined mathematically. As case studies, we analyzed the temporal evolution of classical music and philosophy by obtaining data for 8 features characterizing the corresponding fields for 7 well-known composers and philosophers, which were treated with multivariate statistics and pattern recognition methods. A bootstrap method was applied to avoid statistical bias caused by the small sample: hundreds of artificial composers and philosophers were generated, influenced by the 7 names originally chosen. Upon defining indices for opposition, skewness and counter-dialectics, we confirmed the intuitive analysis of historians that classical music evolved according to a master-apprentice tradition, while changes in philosophy were driven by opposition. Although these case studies were meant only to show that phenomena in the humanities can be treated quantitatively, including a quantitative measure of concepts such as dialectics and opposition, the results are encouraging for further application of the approach to many other areas, since it is entirely generic.
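A minimal sketch, under illustrative assumptions, of the bootstrap step: a small table of 8 scored features for 7 names is resampled and perturbed to produce hundreds of artificial composers or philosophers before any multivariate analysis; the feature values and noise level are placeholders:

```python
# Hedged sketch of the bootstrap step; the 7 x 8 feature table and the noise
# level are placeholders for the scores actually assigned in the study.
import numpy as np

rng = np.random.default_rng(0)
originals = rng.uniform(0, 10, size=(7, 8))            # 7 names x 8 scored features

def bootstrap_population(samples, n_new=500, noise=0.5):
    idx = rng.integers(0, len(samples), size=n_new)    # resample rows with replacement
    jitter = rng.normal(0, noise, size=(n_new, samples.shape[1]))
    return samples[idx] + jitter                       # artificial composers / philosophers

population = bootstrap_population(originals)
print(population.shape)                                # (500, 8)
```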
Abstract:
Although the automatic identification of non-technical losses has been studied extensively, the problem of selecting the most representative features in order to boost identification accuracy and to characterize possible illegal consumers has not attracted much attention in this context. In this paper, we focus on this problem by reviewing three evolutionary techniques for feature selection, one of which we introduce in this context. The results demonstrate that selecting the most representative features can considerably improve the accuracy with which possible frauds are classified in datasets composed of industrial and commercial profiles.
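A hedged sketch of wrapper-style evolutionary feature selection in general: a binary mask over the features evolves to maximize cross-validated accuracy of a simple classifier; this generic genetic algorithm is only a stand-in for the specific metaheuristics reviewed in the paper, and the synthetic data replaces the real consumer profiles:

```python
# Hedged sketch: generic GA-style feature selection (not the paper's methods).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=6, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5                 # 20 random binary feature masks
for _ in range(15):                                      # a few generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]              # keep the best half
    children = parents[rng.integers(0, 10, 10)].copy()
    flips = rng.random(children.shape) < 0.05            # mutation: flip a few bits
    children[flips] = ~children[flips]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```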
Abstract:
Gunshot residues (GSR) can be used in forensic evaluations to obtain information about the type of gun and ammunition used in a crime. In this work, we present our efforts to develop a promising new method to discriminate the type of gun [four different guns were used: two handguns (a 0.38 revolver and a 0.380 pistol) and two long-barrelled guns (a 12-calibre pump-action shotgun and a 0.38 repeating rifle)] and ammunition (five different types: normal, semi-jacketed, full-jacketed, green, and 3T) used by a suspect. The proposed approach is based on information obtained from cyclic voltammograms recorded, using a gold microelectrode, in solutions containing GSR collected from the hands of the shooters; this information was further analysed by unsupervised pattern-recognition methods [Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA)]. In all cases (gun and ammunition discrimination), good separation among different samples was achieved in the score plots and dendrograms. (C) 2012 Elsevier B.V. All rights reserved.
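A minimal sketch of the unsupervised analysis, assuming a hypothetical table voltammograms.csv with one row per sample (the recorded currents) and a label column; it produces the PCA score plot and the HCA dendrogram mentioned above:

```python
# Hedged sketch: PCA score plot and HCA dendrogram from voltammogram currents.
# "voltammograms.csv" and its "label" column are assumptions about the layout.
import matplotlib.pyplot as plt
import pandas as pd
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("voltammograms.csv")             # rows: samples; columns: currents + "label"
X = StandardScaler().fit_transform(df.drop(columns="label"))

scores = PCA(n_components=2).fit_transform(X)     # PCA score plot
plt.figure()
plt.scatter(scores[:, 0], scores[:, 1])
for (x0, y0), lab in zip(scores, df["label"]):
    plt.annotate(str(lab), (x0, y0))

plt.figure()                                      # HCA dendrogram (Ward linkage)
dendrogram(linkage(X, method="ward"), labels=df["label"].astype(str).tolist())
plt.show()
```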
Abstract:
The analysis of spatial relations among objects in an image is an important vision problem that involves both shape analysis and structural pattern recognition. In this paper, we propose a new approach to characterize the spatial relation along, an important feature of spatial configurations that has been overlooked in the literature until now. We propose a mathematical definition of the degree to which an object A is along an object B, based on the region between A and B and on a degree of elongatedness of this region. To better fit the perceptual meaning of the relation, distance information is included as well. To cover a wider range of potential applications, both the crisp and the fuzzy cases are considered. In the crisp case, the objects are represented as 2D regions or 1D contours, and the definition of the alongness between them is derived from a notion of visibility and from the region between the objects. However, the computational complexity of this approach leads us to propose a new model that computes the between-region using the convex hull of the contours. On the fuzzy side, the region-based approach is extended. Experimental results obtained using synthetic shapes and brain structures in medical imaging corroborate the proposed model and the derived measures of alongness, showing that they agree with common sense. (C) 2011 Elsevier Ltd. All rights reserved.
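A crude sketch of the crisp construction under stated assumptions: the between-region is approximated as the convex hull of the union of A and B minus the two objects, and a simple compactness-based elongatedness (not the paper's measure, and without the visibility and distance terms) is mapped to an alongness degree:

```python
# Hedged sketch: "alongness" from the elongatedness of the between-region.
# The elongatedness measure (perimeter^2 / (4*pi*area), 1 for a disk) is a
# crude stand-in for the definition used in the paper.
import math
from shapely.geometry import box

A = box(0, 0, 10, 1)            # long thin rectangle
B = box(0, 2, 10, 3)            # parallel rectangle: A should be "along" B

between = A.union(B).convex_hull.difference(A).difference(B)
if between.is_empty or between.area == 0:
    alongness = 0.0
else:
    elongatedness = between.length ** 2 / (4 * math.pi * between.area)
    alongness = 1.0 - 1.0 / elongatedness   # ~0 for a disk-like gap, -> 1 for a long strip

print(round(alongness, 3))
```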
Abstract:
Fractal theory has a large number of applications to image and signal analysis. Although the fractal dimension can be used as an image object descriptor, a multiscale approach such as the multiscale fractal dimension (MFD) increases the amount of information extracted from an object. MFD provides a curve that describes object complexity along the scale. However, this curve contains much redundant information, which could be discarded without loss of performance. A descriptor technique is therefore needed to analyze this curve and to reduce the dimensionality of the data by selecting its meaningful descriptors. This paper presents a comparative study of different techniques for generating MFD descriptors. It compares well-known and state-of-the-art descriptors, such as Fourier, Wavelet, Polynomial Approximation (PA), Functional Data Analysis (FDA), Principal Component Analysis (PCA), Symbolic Aggregate Approximation (SAX), kernel PCA, Independent Component Analysis (ICA), and geometrical and statistical features. The descriptors are evaluated in a classification experiment using Linear Discriminant Analysis on descriptors computed from MFD curves of two data sets: generic shapes and rotated fish contours. Results indicate that PCA, FDA, PA and Wavelet Approximation provide the best MFD descriptors for recognition and classification tasks. (C) 2012 Elsevier B.V. All rights reserved.
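A minimal sketch of the evaluation protocol, with random smooth curves standing in for real MFD curves: the curves are reduced to a few descriptors (here simply PCA components, one of the compared techniques) and classified with Linear Discriminant Analysis under cross-validation:

```python
# Hedged sketch: PCA descriptors of curve data classified with LDA.
# The synthetic curves below only stand in for MFD curves of real shapes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_classes, per_class, curve_len = 5, 20, 200
y = np.repeat(np.arange(n_classes), per_class)
t = np.linspace(0, 1, curve_len)
X = np.vstack([np.sin(2 * np.pi * (c + 1) * t) + rng.normal(0, 0.3, curve_len)
               for c in y])                              # one noisy curve per sample

pipe = make_pipeline(PCA(n_components=10),               # curve -> 10 descriptors
                     LinearDiscriminantAnalysis())
print(cross_val_score(pipe, X, y, cv=5).mean())
```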
Abstract:
Background: Transcript enumeration methods such as SAGE, MPSS, and sequencing-by-synthesis EST "digital northern" are important high-throughput techniques for digital gene expression measurement. As with other counting or voting processes, these measurements constitute compositional data exhibiting properties particular to the simplex space, where the sum of the components is constrained. These properties are not present in regular Euclidean spaces, in which hybridization-based microarray data are often modeled. Therefore, pattern recognition methods commonly used for microarray data analysis may be non-informative for the data generated by transcript enumeration techniques, since they ignore certain fundamental properties of this space. Results: Here we present a software tool, Simcluster, designed to perform clustering analysis for data on the simplex space. We present Simcluster as a stand-alone command-line C package and as a user-friendly on-line tool. Both versions are available at: http://xerad.systemsbiology.net/simcluster. Conclusion: Simcluster is designed in accordance with a well-established mathematical framework for compositional data analysis, which provides principled procedures for dealing with the simplex space, and is thus applicable in a number of contexts, including enumeration-based gene expression data.
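A hedged sketch of the underlying idea rather than Simcluster's own interface: count vectors are treated as compositions, mapped out of the simplex with a centred log-ratio (clr) transform, and only then clustered with ordinary Euclidean tools; the toy counts are invented:

```python
# Hedged sketch of compositional clustering (not Simcluster's actual pipeline).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

counts = np.array([[120,  30,  50],        # toy transcript counts per library
                   [240,  61, 102],
                   [ 10, 200,  90],
                   [ 12, 190,  95]], dtype=float)

comp = counts / counts.sum(axis=1, keepdims=True)               # close onto the simplex
clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)   # centred log-ratio transform

Z = linkage(clr, method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # e.g. libraries 1-2 vs libraries 3-4
```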
Abstract:
This work presents a methodology for the morphological analysis and characterization of nanostructured material images acquired with the FEG-SEM (field emission gun scanning electron microscopy) technique. The metrics were extracted from the image texture (treated as a mathematical surface) by volumetric fractal descriptors, a methodology based on the Bouligand-Minkowski fractal dimension, which considers the properties of the Minkowski dilation of the surface points. An experiment was performed with galvanostatic anodic titanium oxide samples prepared in oxalic acid solution under different conditions of applied current, oxalic acid concentration and solution temperature. The results demonstrate that the approach is capable of characterizing complex morphological features such as those present in anodic titanium oxide.
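A minimal sketch, under simplifying assumptions, of Bouligand-Minkowski volumetric descriptors for a grey-level texture: the image is mapped to a surface (x, y, z = intensity), the exact Euclidean distance from every voxel of a 3-D grid to that surface is computed, and the dilation volume V(r) for growing radii gives the descriptor curve; border effects and the exact scaling are handled only roughly here:

```python
# Hedged sketch of Bouligand-Minkowski dilation volumes via a distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def bouligand_minkowski(image, max_radius=8, levels=32):
    img = np.asarray(image, dtype=float)
    z = np.round((levels - 1) * (img - img.min()) /
                 (img.max() - img.min() + 1e-12)).astype(int)
    vol = np.ones((img.shape[0], img.shape[1], levels + 2 * max_radius), dtype=bool)
    xs, ys = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]), indexing="ij")
    vol[xs, ys, z + max_radius] = False           # surface voxels act as the "sources"
    dist = distance_transform_edt(vol)            # distance of every voxel to the surface
    radii = np.arange(1, max_radius + 1)
    volumes = [(dist <= r).sum() for r in radii]  # dilation volume for each radius
    return np.log(radii), np.log(volumes)

texture = np.random.default_rng(0).integers(0, 256, (64, 64))  # stand-in for a FEG-SEM image
log_r, log_v = bouligand_minkowski(texture)
print(np.polyfit(log_r, log_v, 1)[0])             # slope of the log-log curve, related to the fractal dimension
```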
Abstract:
This work proposes a novel texture descriptor based on fractal theory. The method is based on the Bouligand-Minkowski descriptors. We decompose the original image recursively into four equal parts. In each recursion step, we estimate the average and the deviation of the Bouligand-Minkowski descriptors computed over each part. We then extract entropy features from both the average and the deviation, and the proposed descriptor is obtained by concatenating these measures. The method is tested in a classification experiment on well-known datasets, namely Brodatz and Vistex. The results demonstrate that the novel technique achieves better results than classical and state-of-the-art texture descriptors, such as Local Binary Patterns, Gabor wavelets and the co-occurrence matrix.
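A hedged sketch of the decomposition scheme only: the image is split recursively into quadrants, a descriptor curve is computed for every part (a trivial grey-level histogram stands in for the Bouligand-Minkowski curve), and the entropies of the per-level mean and deviation curves are concatenated into the feature vector:

```python
# Hedged sketch of recursive quadrant decomposition with entropy features.
import numpy as np

def descriptor(block):                      # stand-in for Bouligand-Minkowski descriptors
    h, _ = np.histogram(block, bins=16, range=(0, 256), density=True)
    return h

def entropy(p):
    p = p / (p.sum() + 1e-12)
    return -(p * np.log2(p + 1e-12)).sum()

def quadrant_features(img, depth=2):
    feats, blocks = [], [img]
    for _ in range(depth):
        blocks = [q for b in blocks
                  for q in (b[:b.shape[0]//2, :b.shape[1]//2],
                            b[:b.shape[0]//2, b.shape[1]//2:],
                            b[b.shape[0]//2:, :b.shape[1]//2],
                            b[b.shape[0]//2:, b.shape[1]//2:])]
        curves = np.array([descriptor(b) for b in blocks])
        feats += [entropy(curves.mean(axis=0)),   # entropy of the average curve
                  entropy(curves.std(axis=0))]    # entropy of the deviation curve
    return np.array(feats)

img = np.random.default_rng(1).integers(0, 256, (128, 128))
print(quadrant_features(img))
```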
Abstract:
This work proposes the application of fractal descriptors to the analysis of nanoscale materials under different experimental conditions. We obtain descriptors for sample images by applying a multiscale transform to the calculation of the fractal dimension of a surface map of each image. In particular, we used the Bouligand-Minkowski fractal dimension. We applied these descriptors to discriminate between two titanium oxide films prepared under different experimental conditions. The results demonstrate the discriminative power of the proposed descriptors in this kind of application.
Abstract:
In this paper, we present a novel texture analysis method based on deterministic partially self-avoiding walks and fractal dimension theory. After the attractors of the image (sets of pixels) are found using deterministic partially self-avoiding walks, they are dilated toward the whole image by adding pixels according to their relevance. The relevance of each pixel is calculated as the shortest path between the pixel and the pixels that belong to the attractors. The proposed texture analysis method is shown to outperform popular and state-of-the-art methods (e.g. Fourier descriptors, co-occurrence matrices, Gabor filters and local binary patterns), as well as the deterministic tourist walk method and recent fractal methods, on well-known texture image datasets.
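A rough sketch of the two ingredients under illustrative assumptions: a deterministic tourist-style walk with memory mu over the pixels (each step moves to the most similar neighbour not visited in the last mu steps) whose terminal cycle is taken as an attractor, and a relevance map given here by the Euclidean distance of every pixel to the attractor set, a stand-in for the shortest-path relevance used in the paper:

```python
# Hedged sketch: deterministic walk attractor + distance-based relevance map.
import numpy as np
from scipy.ndimage import distance_transform_edt

def tourist_attractor(img, start, mu=2, max_steps=1000):
    pos, memory, path, seen = start, [], [start], {}
    for step in range(max_steps):
        state = (pos, tuple(memory))
        if state in seen:                       # deterministic dynamics repeated: cycle found
            return set(path[seen[state]:])      # pixels of the attractor
        seen[state] = step
        r, c = pos
        neighbours = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr or dc)
                      and 0 <= r + dr < img.shape[0] and 0 <= c + dc < img.shape[1]
                      and (r + dr, c + dc) not in memory]
        if not neighbours:
            return set(path)
        pos = min(neighbours, key=lambda p: abs(int(img[p]) - int(img[r, c])))
        memory = (memory + [pos])[-mu:]         # sliding window of forbidden pixels
        path.append(pos)
    return set(path)

img = np.random.default_rng(2).integers(0, 256, (32, 32))
attractor = tourist_attractor(img, start=(16, 16))
mask = np.ones(img.shape, dtype=bool)
mask[tuple(zip(*attractor))] = False            # attractor pixels become the "sources"
relevance = distance_transform_edt(mask)        # distance of every pixel to the attractor
print(len(attractor), relevance.max())
```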