126 results for Emerging pattern mining
at Indian Institute of Science - Bangalore - India
Suite of tools for statistical N-gram language modeling for pattern mining in whole genome sequences
Abstract:
Genome sequences contain a number of patterns that have biomedical significance. Repetitive sequences of various kinds are a primary component of most genomic sequence patterns. We extended the suffix-array based Biological Language Modeling Toolkit to compute n-gram frequencies, as well as n-gram language-model based perplexity, in windows over the whole genome sequence in order to find biologically relevant patterns. We present the suite of tools and their application to the analysis of the whole human genome sequence.
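To make the windowed n-gram computation concrete, the following is a minimal Python sketch of the idea (illustrative only, not the suffix-array based toolkit itself; the toy sequence, the 1e-9 smoothing floor and all function names are invented for this example):

from collections import Counter
import math

def ngram_counts(seq, n):
    # Count all overlapping n-grams in the sequence.
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

def window_perplexity(window, model, n):
    # Perplexity of one window under a precomputed n-gram probability model;
    # unseen n-grams receive a small floor probability as crude smoothing.
    grams = [window[i:i + n] for i in range(len(window) - n + 1)]
    log_prob = sum(math.log(model.get(g, 1e-9)) for g in grams)
    return math.exp(-log_prob / len(grams))

genome = "ACGTACGTGGCCAACGT"            # toy stand-in for a chromosome
counts = ngram_counts(genome, 3)
total = sum(counts.values())
model = {g: c / total for g, c in counts.items()}
print(window_perplexity(genome[:8], model, 3))

A window whose n-gram composition matches the genome-wide model scores low perplexity; anomalously high perplexity flags windows with unusual composition.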
Abstract:
Frequent episode discovery is one of the methods used for temporal pattern discovery in sequential data. An episode is a partially ordered set of nodes, with each node associated with an event type. For more than a decade, algorithms existed for episode discovery only when the associated partial order is total (serial episodes) or trivial (parallel episodes). Recently, the literature has seen algorithms for discovering episodes with general partial orders. In frequent pattern mining, the threshold beyond which a pattern is inferred to be interesting is typically user-defined and arbitrary. One way of addressing this issue in the pattern mining literature has been based on the framework of statistical hypothesis testing. This paper presents a method of assessing the statistical significance of episode patterns with general partial orders. A method is proposed to calculate thresholds on the non-overlapped frequency beyond which an episode pattern would be inferred to be statistically significant. The method is first explained for the case of injective episodes with general partial orders; an injective episode is one in which event types are not allowed to repeat. It is then pointed out how the method can be extended to the class of all episodes. The significance threshold calculations for general partial order episodes proposed here also generalize the existing significance results for serial episodes. Through simulation studies, the usefulness of these statistical thresholds in pruning uninteresting patterns is illustrated.
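As a rough illustration of how such a frequency threshold can be computed, here is a hedged Python sketch: it assumes the episode's non-overlapped frequency is approximately Gaussian under an i.i.d. noise null model and rejects the null at level alpha. The null mean and variance below are placeholders; in the paper they are derived from the episode's partial order structure.

from math import sqrt
from statistics import NormalDist

def frequency_threshold(null_mean, null_var, alpha=0.05):
    # Smallest non-overlapped frequency at which the null model is
    # rejected at significance level alpha (Gaussian approximation).
    z = NormalDist().inv_cdf(1.0 - alpha)
    return null_mean + z * sqrt(null_var)

T, p = 10_000, 1e-3   # placeholders: sequence length, per-window null probability
print(frequency_threshold(null_mean=T * p, null_var=T * p * (1 - p)))

Episodes whose observed frequency exceeds this threshold are declared significant; all others are pruned as explainable by noise.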
Abstract:
Most pattern mining methods yield a large number of frequent patterns, and isolating a small, relevant subset of patterns is a challenging problem of current interest. In this paper, we address this problem in the context of discovering frequent episodes from symbolic time-series data. Motivated by the Minimum Description Length principle, we formulate the problem of selecting a relevant subset of patterns as one of searching for the subset of patterns that achieves the best data compression. We present algorithms for discovering small sets of relevant, non-redundant episodes that achieve good data compression. The algorithms employ a novel encoding scheme and use serial episodes with inter-event constraints as the patterns. We present extensive simulation studies with both synthetic and real data, comparing our method with existing schemes such as GoKrimp and SQS. We also demonstrate the effectiveness of these algorithms on event sequences from a composable conveyor system; this system represents a new application area where the use of frequent patterns for compressing the event sequence is likely to be important for decision support and control.
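The compression-based selection can be pictured with a deliberately simplified greedy sketch in Python (this is not the paper's encoding scheme: real episode encodings must handle inter-event constraints and occurrence windows, and the cost function below is a crude symbol count):

def description_length(seq, patterns):
    # Crude two-part cost: dictionary size plus encoded-sequence size.
    cost = len(patterns) + sum(len(p) for p in patterns)
    s = seq
    for p in patterns:
        n = s.count(p)            # non-overlapping occurrences
        cost += n                 # one code per occurrence
        s = s.replace(p, "\x00")  # mark covered positions
    return cost + sum(ch != "\x00" for ch in s)   # leftover raw symbols

def greedy_select(seq, candidates):
    # Keep a candidate only if it lowers the total description length.
    chosen, best = [], description_length(seq, [])
    for p in sorted(candidates, key=len, reverse=True):
        trial = description_length(seq, chosen + [p])
        if trial < best:
            chosen, best = chosen + [p], trial
    return chosen

print(greedy_select("abcabcabxabc", ["abc", "ab", "bx"]))   # -> ['abc']

The selected set is small and non-redundant by construction: a pattern that duplicates what an already chosen pattern covers cannot reduce the total cost.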
Abstract:
In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search-time requirements of the overall decision-making process. Further, it is important that the abstraction be generated from the data with a small number of disk scans. We propose a novel data structure, the pattern count tree (PC-tree), that can be built by scanning the database only once. The PC-tree is a minimal-size complete representation of the data, and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale handwritten digit data set, and we use the experimental results to establish the efficacy of the proposed approach.
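The single-scan construction can be illustrated with a small trie-with-counts sketch in Python (an analogy for the PC-tree idea, not the paper's exact structure; segmented patterns and the rough-set extension are omitted):

class Node:
    def __init__(self):
        self.count = 0
        self.children = {}

def build_tree(transactions):
    # One pass over the database: each transaction is inserted along a
    # single root-to-leaf path, and shared prefixes share nodes.
    root = Node()
    for items in transactions:
        node = root
        for item in sorted(items):   # canonical order maximizes prefix sharing
            node = node.children.setdefault(item, Node())
            node.count += 1          # path counts give pattern frequencies
    return root

root = build_tree([["a", "b", "c"], ["a", "b"], ["a", "c"]])
print(root.children["a"].count)      # 3: every transaction contains item a

Because counts are accumulated during the single scan, frequencies of prefix patterns can be read off the tree without revisiting the database.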
Abstract:
Bangalore is experiencing unprecedented urbanisation and sprawl in recent times due to concentrated developmental activities, with an impetus on industrialisation for the economic development of the region. This concentrated growth has resulted in an increase in population and consequent pressure on infrastructure and natural resources, ultimately giving rise to a plethora of serious challenges such as climate change, enhanced greenhouse-gas emissions, lack of appropriate infrastructure, traffic congestion, and lack of basic amenities (electricity, water, and sanitation) in many localities. This study shows that there has been a growth of 632% in the urban areas of Greater Bangalore across 37 years (1973 to 2009). The urban heat island phenomenon is evident from the large number of localities with higher local temperatures. The study unravels the pattern of growth in Greater Bangalore and its implications for the local climate (an increase of ~2 to 2.5 °C during the last decade) and for natural resources (a 76% decline in vegetation cover and a 79% decline in water bodies), necessitating appropriate strategies for sustainable management.
Abstract:
Rapid urbanisation in India has posed serious challenges to decision makers in regional planning, involving a plethora of issues including the provision of basic amenities (electricity, water, sanitation, transport, etc.). Urban planning entails an understanding of landscape and urban dynamics along with their causal factors. Identifying, delineating and mapping landscapes on a temporal scale provide an opportunity to monitor changes, which is important for natural resource management and sustainable planning activities. Multi-source, multi-sensor, multi-temporal, multi-frequency or multi-polarization remote sensing data, together with efficient classification algorithms and pattern recognition techniques, aid in capturing these dynamics. This paper analyses the landscape dynamics of Greater Bangalore by: (i) direct characterisation of impervious surfaces, (ii) computation of forest fragmentation indices, and (iii) modelling to quantify and categorise urban changes. Linear unmixing is used for solving the mixed-pixel problem of coarse-resolution super-spectral MODIS data for impervious surface characterisation. Fragmentation indices were used to classify forests as interior, perforated, edge, transitional, patch and undetermined. Based on this, an urban growth model was developed to determine the type of urban growth: infill, expansion or outlying growth. This helped in visualising urban growth poles and the consequences of earlier policy decisions, which can help in evolving strategies for effective land use policies.
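For the linear-unmixing step, a minimal numerical sketch in Python shows the idea: a mixed pixel's spectrum is modeled as a convex combination of endmember spectra, and the fractions are recovered by least squares (the endmember matrix and pixel values below are invented, not MODIS data, and the constraints are only enforced approximately by clipping and renormalizing):

import numpy as np

E = np.array([[0.10, 0.45, 0.30],    # rows: spectral bands
              [0.15, 0.50, 0.25],    # cols: endmembers, e.g. impervious,
              [0.60, 0.20, 0.05]])   # vegetation, water
x = np.array([0.28, 0.30, 0.35])     # observed mixed-pixel spectrum

f, *_ = np.linalg.lstsq(E, x, rcond=None)   # least-squares solve of x = E f
f = np.clip(f, 0.0, None)                   # abundances must be non-negative
f /= f.sum()                                # and sum to one
print(f)                                    # estimated endmember fractions

The per-pixel impervious fraction obtained this way is what allows a coarse-resolution sensor like MODIS to map urban cover.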
Abstract:
Data mining is concerned with analysing large volumes of (often unstructured) data to automatically discover interesting regularities or relationships, which in turn lead to a better understanding of the underlying processes. The field of temporal data mining is concerned with such analysis in the case of ordered data streams with temporal interdependencies. Over the last decade many interesting techniques of temporal data mining have been proposed and shown to be useful in many applications. Since temporal data mining brings together techniques from different fields such as statistics, machine learning and databases, the literature is scattered among many different sources. In this article, we present an overview of techniques of temporal data mining. We mainly concentrate on algorithms for pattern discovery in sequential data streams. We also describe some recent results regarding statistical analysis of pattern discovery methods.
Abstract:
Background: Tuberculosis (TB) is an enduring health problem worldwide, and the emerging threat of multidrug-resistant (MDR) TB and extensively drug-resistant (XDR) TB is of particular concern. A better understanding of biomarkers associated with TB will help guide the development of better targets for TB diagnosis and of improved TB vaccines. Methods: Recombinant proteins (n = 7) and peptide pools (n = 14) from M. tuberculosis (M.tb) antigens associated with M.tb pathogenicity, modification of cell lipids or cellular metabolism were used to compare T-cell immune responses, defined by IFN-gamma production in a whole blood assay (WBA), among i) patients with TB, ii) individuals recovered from TB and iii) individuals exposed to TB without evidence of clinical TB infection, all from Minsk, Belarus. Results: We identified differences in M.tb target peptide recognition between the test groups, i.e. a frequent recognition of antigens associated with lipid metabolism, e.g. cyclopropane fatty acyl phospholipid synthase. The pattern of peptide recognition was broader in blood from healthy individuals and those recovered from TB as compared to individuals suffering from pulmonary TB. Detection of biologically relevant M.tb targets was confirmed by staining for intracellular cytokines (IL-2, TNF-alpha and IFN-gamma) in T cells from non-human primates (NHPs) after BCG vaccination. Conclusions: PBMCs from healthy individuals and those recovered from TB recognized a broader spectrum of M.tb antigens as compared to patients with TB. Characterizing the recognition pattern of a broad panel of M.tb antigens will help devise better strategies to identify improved diagnostics gauging previous exposure to M.tb; it may also guide the development of improved TB vaccines.
Abstract:
Data clustering is a common technique for statistical data analysis and is used in many fields, including machine learning and data mining. Clustering is the grouping of a data set or, more precisely, the partitioning of a data set into subsets (clusters) such that the data in each subset (ideally) share some common trait according to a defined distance measure. In this paper we present a genetically improved version of the particle swarm optimization (PSO) algorithm, a population-based heuristic search technique derived from the analysis of particle swarm intelligence and the concepts of genetic algorithms (GA). The algorithm combines PSO concepts such as the velocity and position update rules with GA concepts such as selection, crossover and mutation. The performance of the proposed algorithm is evaluated on benchmark datasets from the Machine Learning Repository, where it performs better than both k-means and the standard PSO algorithm.
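A toy sketch of one hybrid step in Python (a single PSO velocity/position update followed by GA-style mutation; selection and crossover are omitted for brevity, and all parameter values are illustrative, not the paper's):

import numpy as np

rng = np.random.default_rng(0)

def fitness(centroids, data):
    # Clustering objective: total distance of points to the nearest centroid.
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()

def pso_ga_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, pm=0.1):
    # Standard PSO velocity and position update...
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    # ...followed by a GA-style mutation of a few coordinates.
    mask = rng.random(pos.shape) < pm
    pos[mask] += rng.normal(0.0, 0.1, mask.sum())
    return pos, vel

data = rng.random((100, 2))
pos = rng.random((3, 2))                 # one particle encodes 3 centroids
vel = np.zeros_like(pos)
pos, vel = pso_ga_step(pos, vel, pbest=pos.copy(), gbest=pos.copy())
print(fitness(pos, data))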
Abstract:
This work studies the extent of asymmetric flow in water models of continuous casting molds of two different configurations. In molds where the fluid is discharged through multiple holes at the bottom, the flow pattern in the lower portion depends on the size of the lower two recirculating domains: if they reach the mold bottom, the flow pattern in the lower portion is symmetrical about the central plane; otherwise, it is asymmetrical. On the other hand, in molds where the fluid is discharged through the entire mold cross section, the flow pattern is always asymmetrical if the aspect ratio is 1:6.25 or more. The fluid jet swirls while emerging through the nozzle, and the interaction of the swirling jets with the wide side walls of the mold gives rise to asymmetrical flow inside the mold. In molds with lower aspect ratios, where the jets do not touch the wide side walls, the flow pattern is symmetrical about the central plane.
Abstract:
The mode of action of xylanase and beta-glucosidase purified from the culture filtrate of Humicola lanuginosa (Griffon and Maublanc) Bunce was studied on xylan extracted from sugarcane bagasse, on two commercially available xylans (larchwood and oat spelt), on xylooligomers and on arabinoxylooligomers. While larchwood and oat spelt xylans were hydrolyzed to the same extent in 24 h, sugarcane bagasse xylan was hydrolyzed to a lesser extent in the same period. The rate of hydrolysis of xylooligomers by xylanase increased with chain length, while beta-glucosidase acted rather slowly on all the oligomers tested. Xylanase exhibited predominantly "endo" action on xylooligomers, attacking the xylan chain at random, while beta-glucosidase had "exo" action, releasing one xylose residue at a time. On arabinoxylooligomers, however, xylanase exhibited "exo" action. Thus, it appears that the presence of the arabinose substituent has, in some way, rendered the terminal xylose-xylose linkage more susceptible to xylanase action. It was also observed that even after extensive hydrolysis with both enzymes, substantial amounts of the parent arabinoxylooligomer remained unhydrolyzed, together with an accumulation of arabinoxylobiose. It can therefore be concluded that the presence of the arabinose substituent in the xylan chain results in linkages that offer resistance to both xylanase and beta-glucosidase action.
Abstract:
The development of techniques for scaling up classifiers so that they can be applied to problems with large datasets of training examples is one of the objectives of data mining. Recently, AdaBoost has become popular in the machine learning community thanks to its promising results across a variety of applications. However, training AdaBoost on large datasets is a major problem, especially when the dimensionality of the data is very high. This paper discusses the effect of high dimensionality on the training process of AdaBoost. Two preprocessing options for reducing dimensionality, namely principal component analysis and random projection, are briefly examined. Random projection, subject to a probabilistic length-preserving transformation, is explored further as a computationally light preprocessing step. The experimental results obtained demonstrate the effectiveness of the proposed training process for handling high-dimensional large datasets.
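A hedged sketch of this preprocessing pipeline using scikit-learn (the data here are synthetic and the dimensions arbitrary; the paper's exact projection may differ from the Gaussian one used below):

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2000))               # high-dimensional data
y = (X[:, :10].sum(axis=1) > 0).astype(int)    # labels depend on few dims

# Random projection approximately preserves pairwise distances
# (Johnson-Lindenstrauss), so boosting in the low-dimensional space
# is much cheaper while losing little geometric information.
X_low = GaussianRandomProjection(n_components=50,
                                 random_state=0).fit_transform(X)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_low, y)
print(clf.score(X_low, y))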
Abstract:
The success of automatic speaker recognition in laboratory environments suggests applications in forensic science for establishing the identity of individuals on the basis of features extracted from speech. A theoretical model for such a verification scheme with continuous, normally distributed features is developed. Three cases are explored: a) a single feature, b) multiple independent measurements of a single feature, and c) multiple independent features. The number of independent features needed for reliable personal identification is computed based on the theoretical model and an exploratory study of some speech features.
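The flavor of the Gaussian-feature analysis can be reproduced with a short Python sketch: if two speakers' feature distributions are unit-variance Gaussians whose means differ by delta, then with d independent features a midpoint-threshold (likelihood-ratio) test errs with probability 1 - Phi(sqrt(d) * delta / 2), so the required d can be read off a target error rate. The delta value below is illustrative, not taken from the paper.

from math import sqrt
from statistics import NormalDist

def verification_error(delta, d):
    # Error probability of the midpoint-threshold test with d independent
    # unit-variance Gaussian features separated by delta in the mean.
    return 1.0 - NormalDist().cdf(sqrt(d) * delta / 2.0)

for d in (1, 4, 16):
    print(d, verification_error(delta=1.0, d=d))

Quadrupling the number of independent features doubles the effective mean separation, which is the kind of calculation behind the feature-count estimate above.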
Abstract:
A simple sequential thinning algorithm for peeling off pixels along contours is described. An adaptive algorithm, obtained by incorporating shape adaptivity into this sequential process, is also given. The adaptive algorithm minimizes the distortions in the skeleton at right-angle and acute-angle corners. The asymmetry of the skeleton, a characteristic of sequential algorithms that is due to the presence of T-corners in some even-thickness patterns, is eliminated. The performance (in terms of time requirements and shape preservation) is compared with that of a modern thinning algorithm.
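A generic sequential contour-peeling pass can be sketched as follows (an illustrative textbook-style rule using the 8-neighborhood crossing number, not the paper's algorithm or its adaptive variant):

import numpy as np

def neighbors(img, r, c):
    # 8-neighborhood in clockwise order starting from north.
    return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
            img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

def thin_once(img):
    # One sequential pass: a contour pixel is peeled immediately if its
    # removal preserves connectivity (exactly one 0->1 transition around
    # it) and it is not an endpoint (at least two foreground neighbors).
    changed = False
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            if not img[r, c]:
                continue
            n = neighbors(img, r, c)
            transitions = sum(n[i] == 0 and n[(i + 1) % 8] == 1
                              for i in range(8))
            if 2 <= sum(n) <= 6 and transitions == 1:
                img[r, c] = 0
                changed = True
    return changed

img = np.zeros((7, 7), dtype=int)
img[2:5, 1:6] = 1                 # a thick horizontal bar
while thin_once(img):
    pass
print(img)                        # reduced to a thin line

Because pixels are removed in scan order within a pass, the result depends on the scan direction; that directional bias is exactly the kind of asymmetry the adaptive algorithm above is designed to eliminate.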