977 results for Classification theory


Relevance: 30.00%

Abstract:

A short summary of the theory of the symmetric group and of symmetric functions needed to follow the theory of Schur functions and plethysms is presented. Plethysm is then defined, its properties are given, and a procedure for its calculation is presented. Finally, some applications in atomic physics and nuclear structure are given.
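For illustration, a standard identity of the kind such a calculation procedure yields (a textbook example, not one computed in the paper): the plethysms of the Schur functions $s_2$ and $s_{1^2}$ with $s_2$ decompose as

    $s_2[s_2] = s_{(4)} + s_{(2,2)}$,    $s_{1^2}[s_2] = s_{(3,1)}$,

which in representation-theoretic terms says that $S^2(S^2V) \cong S^{(4)}V \oplus S^{(2,2)}V$ and $\Lambda^2(S^2V) \cong S^{(3,1)}V$ for a vector space $V$.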

Relevance: 30.00%

Abstract:

Spatial analysis and fuzzy classification techniques were used to estimate the spatial distributions of heavy metals in soil. The work was applied to soils in a coastal region characterized by intense urban occupation and a wide range of industries. Concentrations of heavy metals were estimated using geostatistical techniques, and classes of risk were defined using fuzzy classification. The resulting prediction maps identify the locations of high concentrations of Pb, Zn, Ni, and Cu in topsoils of the study area. The maps show that areas of high Ni and Cu pollution are located in the northeast, where industrial and agricultural activities predominate; Pb and Zn also occur in high concentrations in the northeast, but the maps show significant concentrations of these metals in other areas as well, mainly in the central and southeastern parts, where there are urban leisure activities and trade centers. Maps were also prepared showing levels of pollution risk. These show that (1) Cu presents a large pollution risk in the north-northwest, midwest, and southeast sectors, (2) Pb represents a moderate risk in most areas, (3) Zn generally exhibits low risk, and (4) Ni represents either low risk or no risk in the studied area. This study shows that combining geostatistics with fuzzy theory can provide results that offer insight into risk assessment for environmental pollution.
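As a minimal sketch of the fuzzy-classification step (illustrative only; the class names, threshold values, and function names below are hypothetical, not those used in the study), each interpolated concentration can be given a degree of membership in overlapping risk classes:

    import numpy as np

    def trapezoid(x, a, b, c, d):
        """Trapezoidal fuzzy membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
        x = np.asarray(x, dtype=float)
        rise = np.clip((x - a) / (b - a), 0.0, 1.0)
        fall = np.clip((d - x) / (d - c), 0.0, 1.0)
        return np.minimum(rise, fall)

    # Hypothetical risk classes for a metal concentration grid (mg/kg).
    def fuzzy_risk(conc):
        return {
            "low":      trapezoid(conc, -1.0, 0.0, 20.0, 40.0),
            "moderate": trapezoid(conc, 20.0, 40.0, 70.0, 100.0),
            "high":     trapezoid(conc, 70.0, 100.0, 1e6, 2e6),
        }

    # Example: memberships for kriged concentrations at three grid cells.
    memberships = fuzzy_risk(np.array([12.0, 55.0, 130.0]))
    for name, mu in memberships.items():
        print(name, mu.round(2))

Because the classes overlap, a cell can carry partial membership in two classes at once, which is what lets the risk maps express gradual transitions rather than hard boundaries.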

Relevance: 30.00%

Abstract:

A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how the nature of these naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
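A minimal sketch of the design step for a binary window filter (illustrative; the paper is a theoretical review, and this is simply the standard plug-in construction, with hypothetical function names): for each observed window pattern, output the ideal value that occurred most often with that pattern in the training pairs, which minimizes the empirical error per pattern.

    from collections import Counter, defaultdict

    def design_window_filter(observed_windows, ideal_values):
        """Empirically design a binary window filter from (pattern, ideal) samples.

        observed_windows: iterable of tuples of 0/1 pixel values (the window).
        ideal_values: iterable of 0/1 ideal outputs at the window center.
        Returns a dict mapping each seen pattern to its majority ideal value.
        """
        counts = defaultdict(Counter)
        for w, y in zip(observed_windows, ideal_values):
            counts[w][y] += 1
        # Majority vote minimizes the empirical error for each pattern.
        return {w: c.most_common(1)[0][0] for w, c in counts.items()}

    # Toy example with 3-pixel windows.
    windows = [(0, 0, 1), (0, 1, 1), (0, 0, 1), (1, 1, 0), (0, 1, 1)]
    ideal   = [0, 1, 0, 1, 1]
    f = design_window_filter(windows, ideal)
    print(f[(0, 0, 1)])  # -> 0

The design-cost issue discussed in the paper is visible even here: patterns never seen in training get no entry in the dictionary, and constraining the filter class is one way to generalize to them.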

Relevance: 30.00%

Abstract:

De Sousa's comprehensive two-part review of a diversity of contemporary approaches to the study of consciousness is highly welcome. He makes us aware of a proliferation of theoretical and empirical approaches targeting a common theme but diverging in many ways. He skilfully accomplishes a classification of the kinds of approach and an identification of their main representatives, their contributions, and their respective limitations. However, he does not show how the desired integration could be accomplished. Besides summarising de Sousa's efficient analytical work, I make critical comments and briefly report my own contribution to the integration project. © MSM 2013.

Relevance: 30.00%

Abstract:

Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. The human (animal) brain, on the other hand, performs both low and high orders of learning, and it readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only realizes classification according to the pattern formation, but is also able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for example through greater mixing among different classes, a larger weight on the high level term is required to obtain correct classification. This confirms that high level classification is especially important in complex classification settings. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it improves the overall pattern recognition rate.
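A minimal sketch of the hybrid combination (illustrative; the network measures used in the paper are more elaborate than the single neighborhood-distance measure below, and the function name and mixing parameter are hypothetical): the final membership of a test instance in each class is a convex combination of a low level term and a network-based high level term.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

    def hybrid_memberships(X_train, y_train, X_test, lam=0.3, k=5):
        """Convex combination of a low level (kNN) term and a crude high level term.

        The high level term scores how well a test point conforms to each
        class's typical neighbor-distance scale -- a stand-in for
        'compliance with the class pattern formation'.
        """
        classes = np.unique(y_train)
        low = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
        P_low = low.predict_proba(X_test)

        P_high = np.zeros_like(P_low)
        for j, c in enumerate(classes):
            Xc = X_train[y_train == c]
            nn = NearestNeighbors(n_neighbors=min(k, len(Xc))).fit(Xc)
            base = nn.kneighbors(Xc)[0].mean()          # class's typical scale
            d = nn.kneighbors(X_test)[0].mean(axis=1)   # test point's distances
            P_high[:, j] = np.exp(-d / (base + 1e-12))  # high compliance -> near 1
        P_high /= P_high.sum(axis=1, keepdims=True)

        return (1 - lam) * P_low + lam * P_high

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    print(hybrid_memberships(X, y, np.array([[4.0, 4.0]])).round(2))

Raising lam gives more influence to the pattern-conformance term, mirroring the abstract's observation that complex, mixed class configurations need a larger high level portion.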

Relevance: 30.00%

Abstract:

The President of Brazil established an Interministerial Work Group to “evaluate the model of classification and valuation of disabilities used in Brazil and to define the elaboration and adoption of a single model for the whole country”. Eight Ministries and/or Secretariats participated in the discussion over a period of 10 months, concluding that the proposed model should be based on the United Nations Convention on the Rights of Persons with Disabilities, the International Classification of Functioning, Disability and Health, and the ‘support theory’, and drawing up a list of recommendations and necessary actions for a Classification, Evaluation and Certification Network with national coverage.

Relevance: 30.00%

Abstract:

In this paper, we present a novel texture analysis method based on deterministic partially self-avoiding walks and fractal dimension theory. After the attractors of the image (sets of pixels) are found using deterministic partially self-avoiding walks, they are dilated toward the whole image by adding pixels according to their relevance. The relevance of each pixel is calculated as the shortest path between the pixel and the pixels that belong to the attractors. The proposed texture analysis method is shown to outperform popular and state-of-the-art methods (e.g., Fourier descriptors, co-occurrence matrices, Gabor filters, and local binary patterns), as well as the deterministic tourist walk method and recent fractal methods, on well-known texture image datasets.
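As a minimal sketch of the relevance/dilation step (illustrative; the attractor mask below is a placeholder, whereas in the method it is produced by the walks themselves): each pixel's relevance can be computed as its shortest distance to the attractor set, and the attractor is then grown by admitting pixels in order of relevance.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    # Placeholder attractor mask; in the method it comes from the
    # deterministic partially self-avoiding walks on the image.
    attractor = np.zeros((8, 8), dtype=bool)
    attractor[3, 3] = attractor[4, 5] = True

    # Relevance of each pixel: shortest (Euclidean) distance to the attractor.
    relevance = distance_transform_edt(~attractor)

    # Dilate the attractor by admitting pixels in order of relevance.
    for r in (1.0, 2.0, 3.0):
        grown = relevance <= r
        print(f"radius {r}: {grown.sum()} pixels")

The growth curve of the dilated set as a function of the radius is the kind of quantity from which fractal-dimension-style descriptors are then derived.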

Relevance: 30.00%

Abstract:

Nowadays, communication is shifting from a centralized scenario, in which media such as newspapers, radio, and TV programs produce information and people are mere consumers, to a completely different, decentralized scenario, in which everyone is potentially an information producer through social networks, blogs, and forums that allow real-time, worldwide information exchange. As a result of their widespread diffusion, these new instruments have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information on which enterprises, political parties, and other organizations can rely. Analyzing data stored in servers all over the world is feasible by means of text mining techniques such as sentiment analysis, which aims to extract opinions from huge amounts of unstructured text. This makes it possible to determine, for instance, users' degree of satisfaction with products, services, politicians, and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov chains. All of these approaches rely on a Markov-chain-based model that is language independent and whose key features are simplicity and generality, which make it attractive compared with previous, more sophisticated techniques. Every technique discussed has been tested in both Single-Domain and Cross-Domain Sentiment Classification, comparing its performance with that of two previous works. The analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature on both single-domain and cross-domain tasks for two-class (i.e., positive and negative) Document Sentiment Classification. There is still room for improvement, however: this work also indicates how performance could be enhanced, namely, that a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in two-class Single-Domain Sentiment Classification, future work will also validate these results on tasks with more than two classes.
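A minimal sketch of a Markov-chain document classifier in the spirit of the dissertation (illustrative; the dissertation's model and smoothing are richer than this toy, and the function names are hypothetical): train one word-transition model per sentiment class and score a test document by its log-likelihood under each chain.

    from collections import defaultdict
    import math

    def train_chain(docs):
        """Estimate word-bigram transition counts for one sentiment class."""
        counts, totals = defaultdict(lambda: defaultdict(int)), defaultdict(int)
        for doc in docs:
            words = doc.lower().split()
            for a, b in zip(words, words[1:]):
                counts[a][b] += 1
                totals[a] += 1
        return counts, totals

    def log_likelihood(doc, chain, vocab_size, alpha=1.0):
        """Add-alpha smoothed log-probability of the document under the chain."""
        counts, totals = chain
        words = doc.lower().split()
        ll = 0.0
        for a, b in zip(words, words[1:]):
            ll += math.log((counts[a][b] + alpha) / (totals[a] + alpha * vocab_size))
        return ll

    pos = ["a truly great movie", "great acting and a great plot"]
    neg = ["a truly bad movie", "bad acting and a dull plot"]
    vocab = {w for d in pos + neg for w in d.split()}
    chains = {"pos": train_chain(pos), "neg": train_chain(neg)}
    test = "a great plot"
    print(max(chains, key=lambda c: log_likelihood(test, chains[c], len(vocab))))

Note how little is language specific here: only tokenization touches the text, which is what makes Markov-chain models attractive for cross-domain and cross-language settings.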

Relevance: 30.00%

Abstract:

This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents applications of graph theory to the low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, and a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method for visualizing time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
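As a tiny illustration of the graph-matching primitives the book covers (using the networkx library; this is not code from the book): testing isomorphism and computing a graph edit distance between two small graphs.

    import networkx as nx

    G1 = nx.cycle_graph(4)                 # 4-cycle
    G2 = nx.path_graph(4)                  # path on 4 nodes

    print(nx.is_isomorphic(G1, G2))        # False: the structures differ
    print(nx.graph_edit_distance(G1, G2))  # 1.0: deleting one edge suffices

Graph edit distance is exactly the kind of inexact-matching measure that makes graph methods usable on noisy real-world patterns such as fingerprints, where exact isomorphism almost never holds.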

Relevance: 30.00%

Abstract:

The alternative classification system for personality disorders in DSM-5 features a hierarchical model of maladaptive personality traits. This trait model comprises five broad trait domains and 25 specific trait facets that can be reliably assessed using the Personality Inventory for DSM-5 (PID-5). Although there is a steadily growing literature on the validity of the PID-5, issues of temporal stability and situational influences on test scores are currently unexplored. We addressed these issues using a sample of 611 research participants who completed the PID-5 three times, at two-month intervals. Latent state-trait (LST) analyses for each of the 25 PID-5 trait facets showed that, on average, 79.5% of the variance was due to stable traits (i.e., consistency), and 7.7% of the variance was due to situational factors (i.e., occasion specificity). Our findings suggest that the PID-5 trait facets predominantly capture individual differences that are stable across time.
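For reference, the variance decomposition behind these figures is the standard latent state-trait one (textbook LST theory, not a formula introduced by this study): each observed score is split as $Y_{it} = \xi_i + \zeta_{it} + \varepsilon_{it}$ (latent trait, occasion-specific state residual, and measurement error), so that $\mathrm{Var}(Y) = \mathrm{Var}(\xi) + \mathrm{Var}(\zeta) + \mathrm{Var}(\varepsilon)$, with consistency $= \mathrm{Var}(\xi)/\mathrm{Var}(Y)$ (averaging 0.795 here) and occasion specificity $= \mathrm{Var}(\zeta)/\mathrm{Var}(Y)$ (0.077 here).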

Relevance: 30.00%

Abstract:

It is well accepted that tumorigenesis is a multi-step process involving aberrant functioning of genes regulating cell proliferation, differentiation, apoptosis, genome stability, angiogenesis, and motility. To obtain a full understanding of tumorigenesis, it is necessary to collect information on all aspects of cell activity. Recent advances in high-throughput technologies allow biologists to generate massive amounts of data, more than might have been imagined decades ago. These advances have made it possible to launch comprehensive projects such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC), which systematically characterize the molecular fingerprints of cancer cells using gene expression, methylation, copy number, microRNA, and SNP microarrays, as well as next-generation sequencing assays interrogating somatic mutation, insertion, deletion, translocation, and structural rearrangement. Given the massive amount of data, a major challenge is to integrate information from multiple sources and formulate testable hypotheses. This thesis focuses on developing methodologies for integrative analyses of genomic assays profiled on the same set of samples. We have developed several novel methods for integrative biomarker identification and cancer classification. We introduce a regression-based approach to identify biomarkers predictive of therapy response or survival by integrating multiple assays, including gene expression, methylation, and copy number data, through penalized regression. To identify key cancer-specific genes accounting for multiple mechanisms of regulation, we have developed the integIRTy software, which provides robust and reliable inferences about gene alteration by automatically adjusting for sample heterogeneity as well as technical artifacts using Item Response Theory. To cope with the increasing need for accurate cancer diagnosis and individualized therapy, we have developed a robust and powerful algorithm called SIBER to systematically identify bimodally expressed genes using next-generation RNA-seq data. We have shown that prediction models built from these bimodal genes have the same accuracy as models built from all genes. Further, prediction models with gene expression measurements dichotomized according to their bimodal shapes still perform well. The effectiveness of outcome prediction using discretized signals paves the road for more accurate and interpretable cancer classification by integrating signals from multiple sources.
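A minimal sketch of the penalized-regression integration step (illustrative; the data here are synthetic and the thesis's method involves more structure than a plain lasso): stack expression, methylation, and copy-number features column-wise and let an L1 penalty select biomarkers across assays.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n = 100
    expr = rng.normal(size=(n, 30))   # gene expression features
    meth = rng.normal(size=(n, 30))   # methylation features
    cnv  = rng.normal(size=(n, 30))   # copy-number features

    X = np.hstack([expr, meth, cnv])  # one integrated design matrix
    # Synthetic response driven by one feature from each assay.
    y = 2 * expr[:, 0] - 1.5 * meth[:, 3] + cnv[:, 7] + rng.normal(scale=0.5, size=n)

    model = Lasso(alpha=0.1).fit(X, y)
    selected = np.flatnonzero(model.coef_)
    print(selected)  # indices of selected biomarkers across the three assays

The appeal of the integrated design matrix is that the penalty competes features from different assays against one another, so the selected set can mix regulatory mechanisms.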

Relevance: 30.00%

Abstract:

In this paper, the fusion of probabilistic knowledge-based classification rules with learning automata theory is proposed; as a result, we present a set of probabilistic classification rules with self-learning capability. The probabilities of the classification rules change dynamically, guided by a supervised reinforcement process aimed at obtaining optimum classification accuracy. This novel classifier is applied to the automatic recognition of digital images corresponding to visual landmarks for the autonomous navigation of an unmanned aerial vehicle (UAV) developed by the authors. The classification accuracy of the proposed classifier, along with a comparison against well-established pattern recognition methods, is finally reported.
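A minimal sketch of the reinforcement step (illustrative; shown with the classical linear reward-inaction scheme, a standard learning-automata update, not necessarily the exact one in the paper): the probability of the rule that fired grows after a correct classification, and the remaining probabilities are rescaled.

    import numpy as np

    def reward_update(p, chosen, reward, a=0.1):
        """Linear reward-inaction (L_RI) update for rule probabilities.

        p: probability vector over classification rules.
        chosen: index of the rule that produced the output.
        reward: True if the classification was correct.
        """
        p = p.copy()
        if reward:  # move probability mass toward the successful rule
            p[chosen] += a * (1.0 - p[chosen])
            others = np.arange(len(p)) != chosen
            p[others] *= (1.0 - a)
        return p  # on failure, L_RI leaves the probabilities unchanged

    p = np.array([0.25, 0.25, 0.25, 0.25])
    for _ in range(5):
        p = reward_update(p, chosen=2, reward=True)
    print(p.round(3), p.sum())  # mass concentrates on rule 2; sum stays 1

Repeated rewards concentrate probability on consistently successful rules, which is the self-learning behavior the abstract describes.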

Relevance: 30.00%

Abstract:

Visual classification is the way we relate to different images in our environment as if they were the same, while relating differently to other collections of stimuli (e.g., human vs. animal faces). It is still not clear, however, how the brain forms such classes, especially when it is confronted with new or changing environments. To isolate a perception-based mechanism underlying class representation, we studied unsupervised classification of an incoming stream of simple images. Classification patterns were clearly affected by the stimulus frequency distribution, even though subjects were unaware of this distribution. There was a common bias to locate class centers near the most frequent stimuli and class boundaries near the least frequent stimuli. Responses were also faster for more frequent stimuli. Using a minimal, biologically based neural-network model, we demonstrate that a simple, self-organizing representation mechanism based on overlapping tuning curves and slow Hebbian learning suffices to ensure classification. Combined behavioral and theoretical results predict large tuning overlap, implicating posterior infero-temporal cortex as a possible site of classification.
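A minimal sketch of such a mechanism (an illustrative toy; the stimulus values, unit count, and learning rate are hypothetical, and the paper's model is more detailed): units with broad, overlapping Gaussian tuning curves respond to each stimulus, and a slow Hebbian-like update drifts each unit's preferred stimulus toward what it responds to, so unit centers accumulate near the most frequent stimuli.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stimuli on a 1-D feature axis with a non-uniform frequency distribution.
    stimuli = rng.choice([0.2, 0.5, 0.8], size=2000, p=[0.6, 0.1, 0.3])

    centers = np.linspace(0, 1, 8)   # units' preferred stimuli
    sigma, eta = 0.15, 0.01          # broad (overlapping) tuning, slow learning

    for s in stimuli:
        r = np.exp(-((s - centers) ** 2) / (2 * sigma ** 2))  # tuning-curve responses
        w = r / r.sum()
        centers += eta * w * (s - centers)  # slow Hebbian-like drift toward stimulus

    print(centers.round(2))  # centers cluster near the frequent stimuli 0.2 and 0.8

The resulting concentration of centers near frequent stimuli, with sparse coverage near rare ones, reproduces the reported bias of class centers and boundaries.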

Relevance: 30.00%

Abstract:

In this paper, we propose a novel filter for feature selection. The filter relies on estimating the mutual information between features and classes. We bypass explicit estimation of the probability density function with the aid of the entropic-graph approximation of the Rényi entropy and a subsequent approximation of the Shannon entropy. The complexity of this bypass depends not on the number of dimensions but on the number of patterns/samples, and thus the curse of dimensionality is circumvented. We show that it is then possible to outperform a greedy algorithm based on the maximal relevance, minimal redundancy criterion. We successfully test our method in the contexts of both image classification and microarray data classification.
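A minimal sketch of the entropic-graph idea (illustrative; this follows the standard minimal-spanning-tree Rényi estimator up to its additive constant, which is omitted, and is not the paper's full filter): the MST length over the samples estimates the Rényi entropy of order $\alpha = (d - \gamma)/d$ without ever estimating a density.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def renyi_entropy_mst(X, gamma=1.0):
        """Entropic-graph (MST) estimate of the Renyi entropy of order
        alpha = (d - gamma) / d, up to an additive constant (the
        Beardwood-Halton-Hammersley constant is omitted here).
        """
        n, d = X.shape
        alpha = (d - gamma) / d
        # MST over pairwise Euclidean distances, edge weights raised to gamma.
        dist = squareform(pdist(X))
        mst = minimum_spanning_tree(dist)
        L = (mst.data ** gamma).sum()
        return (d / gamma) * (np.log(L) - alpha * np.log(n))

    rng = np.random.default_rng(0)
    tight = rng.normal(scale=0.1, size=(500, 2))
    broad = rng.normal(scale=1.0, size=(500, 2))
    print(renyi_entropy_mst(tight) < renyi_entropy_mst(broad))  # True

The cost is dominated by the pairwise distances and the spanning tree over the n samples, which is why the estimator scales with the number of samples rather than the number of dimensions.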