25 results for 080109 Pattern Recognition and Data Mining
Abstract:
Texture is one of the most important visual attributes used in image analysis. It is used in many content-based image retrieval systems, where it allows a larger number of images from distinct origins to be identified. This paper presents a novel approach for image analysis and retrieval based on complexity analysis. The approach consists of a texture segmentation step, performed by complexity analysis through the Box-Counting fractal dimension, followed by the estimation of the complexity of each computed region by the multiscale fractal dimension. Experiments have been performed with an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the method and also indicate how the performance changes as the texture segmentation process is altered.
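The abstract does not include implementation details; as a rough illustration of the box-counting fractal dimension it relies on, a minimal sketch in Python (assuming a binary image as input; names and box sizes are illustrative, not the authors' code) could look like this:

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a binary image by box counting.

    The image is covered with square boxes of decreasing size; the slope of
    log(count) versus log(1/size) approximates the fractal dimension.
    """
    counts = []
    for size in box_sizes:
        h, w = binary_image.shape
        # Trim so the image tiles exactly into size x size boxes.
        trimmed = binary_image[: h - h % size, : w - w % size]
        blocks = trimmed.reshape(trimmed.shape[0] // size, size,
                                 trimmed.shape[1] // size, size)
        # A box is "occupied" if it contains at least one foreground pixel.
        counts.append(blocks.any(axis=(1, 3)).sum())
    sizes = np.array(box_sizes, dtype=float)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Example: a filled disk should give a dimension close to 2.
yy, xx = np.mgrid[:256, :256]
disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
print(box_counting_dimension(disk))
```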
Abstract:
Texture is an important visual attribute used to describe the pixel organization in an image. Although it is easily identified by humans, its analysis demands a high level of sophistication and computational complexity. This paper presents a novel approach for texture analysis based on analyzing the complexity of the surface generated from a texture in order to describe and characterize it. The proposed method produces a texture signature that is able to efficiently characterize different texture classes. The paper also illustrates the performance of the novel method in an experiment using texture images of leaves. Leaf identification is a difficult and complex task due to the nature of plants, which present huge pattern variation. The high classification rate achieved shows the potential of the method, improving on traditional texture techniques such as Gabor filters and Fourier analysis.
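The authors' surface-complexity signature is not reproduced here; as a point of reference for the Fourier-analysis baseline mentioned in the abstract, a minimal sketch of a Fourier-spectrum texture signature (purely illustrative, with an assumed ring-binning scheme) might look like this:

```python
import numpy as np

def fourier_texture_signature(gray_image, n_rings=8):
    """Baseline texture signature: energy of the Fourier power spectrum
    accumulated in concentric frequency rings."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    max_r = radius.max()
    signature = []
    for i in range(n_rings):
        ring = (radius >= i * max_r / n_rings) & (radius < (i + 1) * max_r / n_rings)
        signature.append(spectrum[ring].sum())
    signature = np.array(signature)
    return signature / signature.sum()  # normalize so signatures are comparable

# Signatures of two texture images can then be compared with a simple
# distance measure before being fed to a classifier.
```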
Abstract:
The peritoneal cavity (PerC) is a singular compartment where many cell populations reside and interact. Despite the widely adopted experimental approach of intraperitoneal (i.p.) inoculation, little is known about the behavior of the different cell populations within the PerC. To evaluate the dynamics of peritoneal macrophage (Mφ) subsets, namely small peritoneal Mφ (SPM) and large peritoneal Mφ (LPM), in response to infectious stimuli, C57BL/6 mice were injected i.p. with zymosan or Trypanosoma cruzi. These conditions resulted in a marked modification of the PerC myelo-monocytic compartment, characterized by the disappearance of LPM and the accumulation of SPM and monocytes. In parallel, adherent cells isolated from stimulated PerC displayed reduced staining for beta-galactosidase, a biomarker for senescence. Further, the adherent cells showed increased nitric oxide (NO) production and a higher frequency of IL-12-producing cells in response to subsequent LPS and IFN-gamma stimulation. Among myelo-monocytic cells, SPM, rather than LPM or monocytes, appear to be the central effectors of the activated PerC; they display higher phagocytic activity and are the main source of IL-12. Thus, our data provide a first demonstration of the consequences of the dynamics between peritoneal Mφ subpopulations by showing that the substitution of LPM by robust SPM and monocyte populations in response to infectious stimuli greatly improves PerC effector activity.
Abstract:
Today, several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named Enhanced Independent Component Analysis Mixture Model (EICAMM), which was built by proposing some modifications to the Independent Component Analysis Mixture Model (ICAMM). These improvements were motivated by some of the model's limitations and by an analysis of how it should be modified to become more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposals presented herein. (C) 2008 Published by Elsevier B.V.
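The Sobel edge detector mentioned as part of the pre-processing step is a standard operation; a minimal sketch using scipy.ndimage is given below (the Sparse Code Shrinkage denoising stage is omitted, and the function name is illustrative, not taken from the paper):

```python
import numpy as np
from scipy import ndimage

def sobel_edge_map(gray_image, threshold=None):
    """Gradient magnitude via horizontal and vertical Sobel filters.

    If a threshold is given, a binary edge map is returned instead.
    """
    gx = ndimage.sobel(gray_image, axis=1, mode="reflect")  # horizontal gradient
    gy = ndimage.sobel(gray_image, axis=0, mode="reflect")  # vertical gradient
    magnitude = np.hypot(gx, gy)
    if threshold is None:
        return magnitude
    return magnitude > threshold
```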
Abstract:
Pattern recognition methods have been successfully applied in several functional neuroimaging studies. These methods can be used to infer cognitive states, so-called brain decoding. Using such approaches, it is possible to predict the mental state of a subject or a stimulus class by analyzing the spatial distribution of neural responses. In addition, it is possible to identify the regions of the brain containing the information that underlies the classification. The Support Vector Machine (SVM) is one of the most popular methods used to carry out this type of analysis. The aim of the current study is the evaluation of SVM and Maximum uncertainty Linear Discriminant Analysis (MLDA) in extracting the voxels containing discriminative information for the prediction of mental states. The comparison was carried out using fMRI data from 41 healthy control subjects who participated in two experiments, one involving visual-auditory stimulation and the other based on bimanual finger-tapping sequences. The results suggest that MLDA uses significantly more voxels containing discriminative information (related to different experimental conditions) to classify the data. On the other hand, SVM is more parsimonious and uses fewer voxels to achieve similar classification accuracies. In conclusion, MLDA is mostly focused on extracting all the discriminative information available, while SVM extracts only the information that is sufficient for classification. (C) 2009 Elsevier Inc. All rights reserved.
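A linear SVM applied to voxel data, with its weight vector inspected to find discriminative voxels, can be sketched with scikit-learn as below; this is only a toy setup on synthetic data, not the study's actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Illustrative data: rows are scans (trials), columns are voxels,
# labels encode two experimental conditions.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5000))   # 80 scans x 5000 voxels
y = np.repeat([0, 1], 40)         # two stimulus classes
X[y == 1, :50] += 0.5             # make a few voxels informative

clf = SVC(kernel="linear")
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# The weight vector of the fitted linear SVM assigns one discriminative
# value per voxel; its largest-magnitude entries indicate which voxels
# drive the classification.
clf.fit(X, y)
weights = clf.coef_.ravel()
top_voxels = np.argsort(np.abs(weights))[-10:]
print("most discriminative voxels:", top_voxels)
```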
Abstract:
Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. Since predictive performance evaluation is a multidimensional problem, single scalar summaries such as the error rate, although quite convenient due to their simplicity, can seldom capture all the aspects that a complete and reliable evaluation must consider. For this reason, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing these aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify each method's strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods in the same framework, we hope this paper may shed some light on deciding which methods are more suitable to use in different situations.
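ROC curves are one widely used example of such graphical evaluation methods; a minimal sketch with scikit-learn on synthetic data (purely illustrative) shows the idea of depicting a trade-off across thresholds rather than a single scalar:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)
print("AUC:", auc(fpr, tpr))
# Plotting fpr against tpr depicts the trade-off between false positive
# and true positive rates across all decision thresholds, instead of
# collapsing performance into a single error rate.
```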
Abstract:
There is a family of well-known external clustering validity indexes for measuring the degree of compatibility or similarity between two hard partitions of a given data set, including partitions with different numbers of categories. A unified, fully equivalent set-theoretic formulation for an important class of such indexes was derived and extended to the fuzzy domain in a previous work by the author [Campello, R.J.G.B., 2007. A fuzzy extension of the Rand index and other related indexes for clustering and classification assessment. Pattern Recognition Lett., 28, 833-841]. However, the proposed fuzzy set-theoretic formulation is not valid as a general approach for comparing two fuzzy partitions of data. Instead, it is an approach for comparing a fuzzy partition against a hard referential partition of the data into mutually disjoint categories. In this paper, generalized external indexes for comparing two data partitions with overlapping categories are introduced. These indexes can be used as general measures for comparing two partitions of the same data set into overlapping categories. An important issue that is seldom addressed in the literature is also tackled in the paper, namely, how to compare two partitions of different subsamples of the data. A number of pedagogical examples and three simulation experiments are presented and analyzed in detail. A review of recent related work compiled from the literature is also provided. (c) 2010 Elsevier B.V. All rights reserved.
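The classical (hard) Rand index that the cited formulation generalizes can be computed by straightforward pair counting; a minimal sketch is shown below (the fuzzy and overlapping-category extensions introduced in the paper are not reproduced):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Classical Rand index between two hard partitions of the same items.

    Counts the pairs of items that the two partitions treat consistently:
    placed together in both, or separated in both.
    """
    agreements = 0
    pairs = list(combinations(range(len(labels_a)), 2))
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a == same_b:
            agreements += 1
    return agreements / len(pairs)

# Two partitions of six items into clusters (cluster labels are arbitrary ids).
print(rand_index([0, 0, 1, 1, 2, 2], [1, 1, 1, 0, 2, 2]))
```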
Abstract:
Human transthyretin (TTR) is a homotetrameric protein involved in several amyloidoses. Zn(2+) enhances TTR aggregation in vitro, and is a component of ex vivo TTR amyloid fibrils. We report the first crystal structures of human TTR in complex with Zn(2+) at pH 4.6-7.5. All four structures reveal three tetra-coordinated Zn(2+)-binding sites (ZBS 1-3) per monomer, plus a fourth site (ZBS 4) involving amino acid residues from a symmetry-related tetramer that is not visible in solution by NMR. Zn(2+) binding perturbs the loop E-alpha-helix-loop F region, which is involved in holo-retinol-binding protein (holo-RBP) recognition, mainly at acidic pH; TTR affinity for holo-RBP decreases approximately 5-fold in the presence of Zn(2+). Interestingly, this same region is disrupted in the crystal structure of the amyloidogenic intermediate of TTR formed at acidic pH in the absence of Zn(2+). HNCO and HNCA experiments performed in solution at pH 7.5 revealed that upon Zn(2+) binding, although the alpha-helix persists, there are perturbations in the resonances of the residues that flank this region, suggesting an increase in structural flexibility. While the stability of the TTR monomer decreases in the presence of Zn(2+), which is consistent with the tertiary structural perturbation provoked by Zn(2+) binding, tetramer stability is only marginally affected by Zn(2+). These data highlight the structural and functional roles of Zn(2+) in TTR-related amyloidoses, as well as in holo-RBP recognition and vitamin A homeostasis.
Abstract:
This paper proposes a method to locate and track people by combining evidence from multiple cameras using the homography constraint. The proposed method uses foreground pixels from simple background subtraction to compute evidence of the location of people on a reference ground plane. The algorithm computes an amount of support that basically corresponds to the "foreground mass" above each pixel. Therefore, pixels that correspond to ground points have more support. The support is normalized to compensate for perspective effects and accumulated on the reference plane for all camera views. The detection of people on the reference plane then becomes a search for regions of local maxima in the accumulator. Many false positives are filtered out by checking the visibility consistency of the detected candidates against all camera views. The remaining candidates are tracked using Kalman filters and appearance models. Experimental results using challenging data from PETS'06 show good performance of the method in the presence of severe occlusion. Ground truth data also confirms the robustness of the method. (C) 2010 Elsevier B.V. All rights reserved.
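The core accumulation step, projecting foreground pixels onto a reference ground plane through a homography and summing their support, can be sketched as below; this is a simplified illustration that assumes the homographies are given and omits background subtraction, perspective normalization, visibility checks, and Kalman tracking:

```python
import numpy as np

def accumulate_ground_support(foreground_masks, homographies, grid_shape):
    """Project foreground pixels from every camera view onto a common
    ground-plane grid and accumulate their support.

    foreground_masks : list of binary (H, W) arrays, one per camera
    homographies     : list of 3x3 arrays mapping image coords to plane coords
    grid_shape       : (rows, cols) of the reference-plane accumulator
    """
    accumulator = np.zeros(grid_shape, dtype=float)
    for mask, H in zip(foreground_masks, homographies):
        ys, xs = np.nonzero(mask)
        # Homogeneous image coordinates of the foreground pixels.
        pts = np.stack([xs, ys, np.ones_like(xs)]).astype(float)
        proj = H @ pts
        proj /= proj[2]                      # back to Cartesian plane coordinates
        u = np.round(proj[0]).astype(int)
        v = np.round(proj[1]).astype(int)
        inside = (u >= 0) & (u < grid_shape[1]) & (v >= 0) & (v < grid_shape[0])
        np.add.at(accumulator, (v[inside], u[inside]), 1.0)
    # Local maxima of the accumulator are candidate people locations.
    return accumulator
```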
Abstract:
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set, structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to a full search, whereas heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space through new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time. (C) 2009 Elsevier Ltd. All rights reserved.
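The key property the proposed branch-and-bound exploits, that the cost is U-shaped along any lattice chain, so a chain can be abandoned as soon as the cost rises, can be illustrated with a toy sketch (not the authors' algorithm; the chain and cost values are made up):

```python
def minimize_on_chain(chain, cost):
    """Minimize a cost that is U-shaped along a chain of nested feature sets.

    Because the cost first decreases and then increases along any chain of
    the Boolean lattice, the search can stop as soon as the cost rises:
    everything further along the chain can be pruned.
    """
    best_set, best_cost = chain[0], cost(chain[0])
    for subset in chain[1:]:
        c = cost(subset)
        if c > best_cost:
            break          # U-shape: the rest of the chain only gets worse
        best_set, best_cost = subset, c
    return best_set, best_cost

# Toy chain: nested subsets of features {0, 1, 2, 3} with a U-shaped cost.
chain = [frozenset(), frozenset({0}), frozenset({0, 1}),
         frozenset({0, 1, 2}), frozenset({0, 1, 2, 3})]
toy_cost = {frozenset(): 5.0, frozenset({0}): 3.0, frozenset({0, 1}): 2.0,
            frozenset({0, 1, 2}): 2.5, frozenset({0, 1, 2, 3}): 4.0}
print(minimize_on_chain(chain, toy_cost.get))
```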