28 results for tree structured business data
at Indian Institute of Science - Bangalore - India
Abstract:
We propose a scheme for the compression of tree-structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is generated automatically from the syntactic specification of the tree language. Experiments on data sets consisting of intermediate code trees yield compression ratios ranging from 2.5 to 8, for file sizes ranging from 167 bytes to 1 megabyte.
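A minimal sketch of the grammar-driven modelling idea (not the authors' implementation): since a regular tree grammar fixes which productions are admissible at each point, the coder only needs a probability distribution over that small set, which adaptive counts can refine. The grammar encoding and the `emit` callback to the arithmetic coder are illustrative assumptions.

```python
# Sketch: grammar-driven adaptive model feeding an arithmetic coder.
from collections import defaultdict

class GrammarModel:
    def __init__(self, grammar):
        # grammar: {nonterminal: [(op, [child_nonterminals]), ...]}
        self.grammar = grammar
        # Laplace-smoothed adaptive counts, one table per nonterminal.
        self.counts = {nt: defaultdict(lambda: 1) for nt in grammar}

    def distribution(self, nt):
        prods = self.grammar[nt]
        total = sum(self.counts[nt][i] for i in range(len(prods)))
        return [self.counts[nt][i] / total for i in range(len(prods))]

    def update(self, nt, prod_index):
        self.counts[nt][prod_index] += 1

def encode_tree(tree, nt, model, emit):
    """Pre-order walk emitting (symbol index, distribution) pairs for
    an arithmetic coder. `tree` is (op, [children])."""
    op, children = tree
    prods = model.grammar[nt]
    idx = next(i for i, (p_op, _) in enumerate(prods) if p_op == op)
    emit(idx, model.distribution(nt))  # coder narrows its interval here
    model.update(nt, idx)
    for child, child_nt in zip(children, prods[idx][1]):
        encode_tree(child, child_nt, model, emit)
```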
Abstract:
The problem of identifying user intent has received considerable attention in recent years, particularly in the context of improving the search experience via query contextualization. Intent can be characterized by multiple dimensions, which are often not observable from query words alone. Accurate identification of intent from query words remains a challenging problem, primarily because it is extremely difficult to discover these dimensions; the problem is often significantly compounded by the lack of representative training samples. We present a generic, extensible framework for learning a multi-dimensional representation of user intent from query words. The approach models the latent relationships between facets using a tree-structured distribution, which leads to an efficient and convergent algorithm, FastQ, for identifying the multi-faceted intent of users from the query words alone. We also incorporate WordNet to extend the system's capabilities to queries containing words that do not appear in the training data. Empirical results show that FastQ yields accurate identification of intent when compared against a gold standard.
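A hedged sketch of why a tree-structured distribution over facets makes inference efficient: max-product message passing on a tree is linear in the number of facets. The facet tree, potentials, and word-derived scores below are illustrative, not taken from the paper.

```python
import numpy as np

def map_assignment(tree_children, unary, pairwise, root=0):
    """tree_children: {node: [children]}; unary[v]: shape (K,) scores
    (e.g. derived from query words); pairwise[(parent, child)]: (K, K)
    edge scores. Returns the highest-scoring joint facet assignment."""
    msgs, argmax = {}, {}

    def up(v):  # leaves-to-root max-product pass
        score = np.array(unary[v], dtype=float)
        for c in tree_children.get(v, []):
            up(c)
            table = pairwise[(v, c)] + msgs[c][None, :]
            argmax[(v, c)] = table.argmax(axis=1)  # best child state per parent state
            score = score + table.max(axis=1)
        msgs[v] = score

    up(root)
    assign = {root: int(msgs[root].argmax())}

    def down(v):  # decode optimal states root-to-leaves
        for c in tree_children.get(v, []):
            assign[c] = int(argmax[(v, c)][assign[v]])
            down(c)

    down(root)
    return assign

children = {0: [1, 2]}  # tiny illustrative facet tree
unary = {0: [0.1, 0.9], 1: [0.5, 0.5], 2: [0.8, 0.2]}
pair = np.log(np.array([[0.7, 0.3], [0.3, 0.7]]))
print(map_assignment(children, unary, {(0, 1): pair, (0, 2): pair}))
```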
Abstract:
Land cover (LC) changes play a major role in global as well as regional-scale patterns of the climate and biogeochemistry of the Earth system. LC information provides critical insights into Earth surface phenomena, and is particularly useful when obtained synoptically from remote sensing data. However, for developing countries and those with large geographical extent, regular LC mapping with data from commercial sensors is prohibitive, owing to high cost and limited spatial coverage (low temporal resolution and band swath). In this context, freely available MODIS data with good spectro-temporal resolution serve the purpose. LC mapping from these data has continuously evolved with advances in classification algorithms. This paper presents a comparative study of two robust data mining techniques, the multilayer perceptron (MLP) and the decision tree (DT), on different products of MODIS data for Kolar district, Karnataka, India. Comparison of the classified MODIS images at three spatial scales (district, taluk and pixel level) shows that MLP-based classification on minimum noise fraction components of the 36 MODIS bands provides the most accurate LC mapping (86% accuracy), while DT on principal components of the 36 MODIS bands leads to less accurate classification (69%).
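A minimal sketch, using scikit-learn stand-ins, of the kind of comparison reported: MLP versus decision tree on dimensionality-reduced band stacks. `X` (pixels x bands) and `y` (LC labels) are assumed inputs, and PCA stands in for the minimum noise fraction transform, which the paper treats separately.

```python
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def compare_classifiers(X, y, n_components=10):
    # Reduce the 36-band stack to a few components, then compare.
    X_red = PCA(n_components=n_components).fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_red, y, test_size=0.3, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X_tr, y_tr)
    dt = DecisionTreeClassifier().fit(X_tr, y_tr)
    return {"MLP": accuracy_score(y_te, mlp.predict(X_te)),
            "DT": accuracy_score(y_te, dt.predict(X_te))}
```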
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision-table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that satisfy at least the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The algorithm also incorporates a check of whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
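A hedged sketch of the pruning idea: simple constraints of the form x_j >= x_i + w become weighted edges of an acyclic graph, a longest-path pass yields lower bounds, and the backtrack search discards values that violate them. The variable ordering, bound handling, and the `check_general` predicate for the remaining (non-simple) constraints are illustrative assumptions.

```python
def lower_bounds(n, edges, base):
    """edges: list of (i, j, w) meaning x_j >= x_i + w; graph acyclic.
    base[i] is an initial lower bound; n relaxation passes suffice."""
    lb = list(base)
    for _ in range(n):
        for i, j, w in edges:
            lb[j] = max(lb[j], lb[i] + w)
    return lb

def feasible(n, edges, lo, hi, check_general):
    """Backtrack over x_0..x_{n-1} in [lo[k], hi[k]], variables assumed
    topologically ordered w.r.t. the constraint DAG."""
    lb = lower_bounds(n, edges, lo)
    def extend(assign):
        k = len(assign)
        if k == n:
            return check_general(assign)  # non-simple constraints
        for v in range(lb[k], hi[k] + 1):
            # Respect simple constraints from already-assigned variables...
            if any(v < assign[i] + w for i, j, w in edges if j == k):
                continue
            # ...and prune values that push a successor past its range.
            if all(v + w <= hi[j] for i, j, w in edges if i == k):
                if extend(assign + [v]):
                    return True
        return False
    return extend([])
```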
Abstract:
Differences in flowering success patterns in two habitat types that differed drastically with respect to rainfall, tree density and species composition were studied at Mudumalai wildlife sanctuary, India. Observations on the phenological patterns of two species, Cassia fistula and Gmelina arborea, were made from April 1988 through June 1990. Quantitative data on flower-fruit ratios, insect visitation rates, pollen grains per stigma and the number of fruits per tree were recorded. Data were also collected on the number of pollen grains deposited on the stigma after visits by different types of bees. The data suggested that only carpenter bees (Xylocopa spp.) effect pollination in C. fistula. The differences in fruit-flower ratios were attributed to differences in insect visitation rates to inflorescences between sites. The low pollen number per stigma and the resultant reduction in reproductive success in C. fistula are attributed to the competing species G. arborea receiving more visits from pollinators at the wetter site. These results suggest that pollinator limitation is another constraint on the reproductive success of plants.
Abstract:
This paper is concerned with off-line signature verification. Four different types of pattern representation schemes have been implemented, viz., geometric features, moment-based representations, envelope characteristics and tree-structured wavelet features. The individual feature components in a representation are weighted by their pattern characterization capability using genetic algorithms. The conclusions of the four subsystems (each depending on a representation scheme) are combined to form a final decision on the validity of the signature. Threshold-based classifiers (including the traditional confidence-interval classifier), neighbourhood classifiers and their combinations were studied. The benefits of using forged signatures for training purposes have been assessed. Experimental results show that combining the feature-based classifiers increases verification accuracy. (C) 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
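An illustrative sketch (not the paper's system) of a threshold-based classifier on GA-weighted features and a simple combination of per-representation decisions. The reference statistics, weights, and threshold are assumed to be learned beforehand.

```python
import numpy as np

def threshold_verify(feature, reference_mean, weights, threshold):
    """Accept a signature if its weighted distance from the writer's
    reference mean is within a learned threshold."""
    dist = np.sqrt(np.sum(weights * (feature - reference_mean) ** 2))
    return dist <= threshold

def combine(decisions):
    """Majority vote over the per-representation subsystem decisions
    (geometric, moment-based, envelope, wavelet)."""
    return sum(decisions) > len(decisions) / 2
```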
Abstract:
Microsoft Windows uses the notion of a registry to store all configuration information. Registry entries have associations and dependencies; for example, the paths to executables may be relative to some home directory. Because the registry is designed with fast access as one of its objectives, it does not explicitly capture these relations. In this paper, we explore a representation that captures the dependencies more explicitly using shared and unifying variables. This representation, called mRegistry, exploits the tree-structured hierarchical nature of the registry; it is concept-based and obtained in multiple stages. mRegistry captures intra-block, inter-block and ancestor-children dependencies (all leaf entries of a parent key in the registry, put together as an entity, constitute a block, making the block the only child of its parent). In addition, it learns generalized concepts of dependencies in the form of rules. We show that mRegistry has several applications: fault diagnosis, prediction, comparison, compression, etc.
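A loose sketch of the "block" notion described above: all leaf values under a parent key are grouped into a single block that becomes that key's only child. The hive literal, the `_block` name, and the relative-path value are illustrative; reading a real hive would go through the `winreg` module.

```python
def to_mregistry(tree):
    """tree: {key: subtree_or_value}. Returns the same hierarchy with
    the leaf entries of each key collected into one '_block' child."""
    block, children = {}, {}
    for name, sub in tree.items():
        if isinstance(sub, dict):
            children[name] = to_mregistry(sub)
        else:
            block[name] = sub            # leaf entry joins the block
    out = dict(children)
    if block:
        out["_block"] = block
    return out

hive = {"App": {"Home": r"C:\App", "Paths": {"Bin": r"%Home%\bin"}}}
print(to_mregistry(hive))
```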
Abstract:
The coding gain in subband coding, a popular technique for achieving signal compression, depends on how the input signal spectrum is decomposed into subbands. The optimality of such a decomposition is conventionally addressed by designing appropriate filter banks. Here, the issue of optimal decomposition of the input spectrum is addressed by choosing the set of band edges that, for a given number of bands, achieves maximum coding gain. A set of necessary conditions for such optimality is derived, and an algorithm to determine the optimal band edges is then proposed. These band edges, along with ideal filters, achieve the upper bound of coding gain for a given number of bands. It is shown that with ideal filters, as well as with realizable filters of some given effective length, such a decomposition system performs better than the conventional nonuniform binary tree-structured decomposition in some cases, for AR sources as well as images.
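A hedged numerical sketch of the quantity being optimized: for an ideal nonuniform filter bank with optimal bit allocation, the coding gain is the input variance divided by the bandwidth-weighted geometric mean of per-band average PSD values. The AR(1) spectrum below is illustrative; the paper's algorithm searches over the edges to maximize this gain.

```python
import numpy as np

def coding_gain(psd, edges):
    """psd: samples of S(w) on a uniform grid over [0, pi];
    edges: band edges as fractions of the band, e.g. [0, .25, .5, 1]."""
    n = len(psd)
    total = psd.mean()                    # input variance (up to a constant)
    gain_log = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        band = psd[int(a * n):int(b * n)]
        gain_log += (b - a) * np.log(band.mean())
    return total / np.exp(gain_log)

w = np.linspace(0, np.pi, 4096)
psd = 1.0 / np.abs(1 - 0.9 * np.exp(-1j * w)) ** 2   # AR(1), rho = 0.9
print(coding_gain(psd, [0, 0.25, 0.5, 1.0]))          # try moving the edges
```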
Abstract:
Several techniques are known for searching an ordered collection of data. The techniques and analyses of retrieval methods based on primary attributes are straightforward; retrieval using secondary attributes depends on several factors. For secondary-attribute retrieval, the linear structures (inverted lists, multilists, doubly linked lists) and the recently proposed nonlinear tree structures (the multiple attribute tree (MAT) and the k-d tree (kdT)) have their individual merits. It is shown in this paper that, of the two tree structures, MAT possesses several features of a systematic data structure for external file organisation which make it superior to kdT. Analytic estimates for the complexity of node searches in MAT and kdT for several types of queries are developed and compared.
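An illustrative sketch of the kind of node-search cost such analyses count: a partial-match query on a k-d tree must descend into both subtrees whenever the discriminating attribute at a node is unspecified. The tree encoding is a simplifying assumption (ties stored in the right subtree).

```python
def kdt_partial_match(node, query, depth=0, k=2):
    """node: (point, left, right) or None; query: tuple with None for
    unspecified attributes. Returns all matching points."""
    if node is None:
        return []
    point, left, right = node
    out = [point] if all(q is None or q == p
                         for q, p in zip(query, point)) else []
    axis = depth % k
    if query[axis] is None:               # both subtrees must be searched
        return out + kdt_partial_match(left, query, depth + 1, k) \
                   + kdt_partial_match(right, query, depth + 1, k)
    child = left if query[axis] < point[axis] else right
    return out + kdt_partial_match(child, query, depth + 1, k)
```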
Abstract:
The sinusoidal structured light projection (SSLP) technique, specifically the phase-stepping method, is in widespread use for obtaining accurate, dense 3-D data. However, if the object under investigation possesses surface discontinuities, the phase unwrapping stage (an intermediate step in SSLP) necessarily requires several additional images of the object with projected fringes (of different spatial frequencies) as input to generate a reliable 3-D shape. On the other hand, the color-coded structured light projection (CSLP) technique is known to require a single image as input, but it generates sparse 3-D data. We therefore propose the use of CSLP in conjunction with SSLP to obtain dense 3-D data with a minimum number of input images. This approach is shown to be significantly faster and more reliable than the temporal phase unwrapping procedure that uses a complete exponential sequence. For example, for a measurement with the accuracy obtained by interrogating the object with 32 fringes in the projected pattern, the proposed strategy requires only 5 frames, compared to the 24 frames required by the latter method.
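For background on the SSLP stage discussed above, this is the standard four-step phase-stepping computation; the CSLP-assisted unwrapping itself is the paper's contribution and is not shown. `I1..I4` are images captured with the fringe pattern shifted by pi/2 between captures.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Per-pixel wrapped phase in (-pi, pi] from four phase-shifted
    fringe images I_n = A + B*cos(phi + n*pi/2), n = 0..3."""
    return np.arctan2(I4 - I2, I1 - I3)
```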
Abstract:
This paper presents a fast algorithm for data exchange in a network of processors organized as a reconfigurable tree structure. For a given data exchange table, the algorithm generates a sequence of tree configurations in which the data exchanges are to be executed. A significant feature of the algorithm is that each exchange is executed in a tree configuration in which the source and destination nodes are adjacent to each other. It is proved that for every pair of nodes in the reconfigurable tree structure there exist exactly two configurations in which the two nodes are adjacent. The algorithm exploits this fact and determines a solution that optimizes both the number of configurations required and the time to perform the data exchanges. Analysis shows that the algorithm has linear time complexity and provides a large reduction in run time compared to a previously proposed algorithm; this is confirmed by experimental results obtained by executing a large number of randomly generated data exchange tables. Another significant feature of the algorithm is that the routing information code is always two bits, irrespective of the number of nodes in the tree, which not only increases the speed of the algorithm but also results in simpler hardware inside each node.
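A heavily hedged sketch of the scheduling idea only: if each (source, destination) pair is adjacent in exactly two known configurations, exchanges can be bucketed by configuration and each bucket executed together, so the schedule length is the number of distinct configurations used. The mapping from a pair to its two configurations depends on the architecture and is assumed given; the paper's actual optimization is more refined.

```python
def schedule(exchanges, configs_of):
    """exchanges: list of (src, dst); configs_of(src, dst) -> (c1, c2),
    the two configurations in which src and dst are adjacent.
    Greedily reuses a configuration already in the schedule."""
    buckets = {}
    for src, dst in exchanges:
        c1, c2 = configs_of(src, dst)
        chosen = c1 if c1 in buckets else (c2 if c2 in buckets else c1)
        buckets.setdefault(chosen, []).append((src, dst))
    return buckets   # {configuration: exchanges to run in it}
```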
Abstract:
In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search-time requirements of the overall decision-making process. Further, it is important that the abstraction be generated from the data with a small number of disk scans. We propose a novel data structure, the pattern count tree (PC-tree), that can be built by scanning the database only once. The PC-tree is a minimal-size complete representation of the data, and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale handwritten digit data set, and we use the results to establish the efficacy of the proposed approach. (C) 2002 Elsevier Science B.V. All rights reserved.
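A minimal sketch in the spirit of the structure described above: each pattern is inserted along one root-to-leaf path, sharing prefixes and incrementing counts, so a single pass over the database builds a compact, complete representation. The class and field names are illustrative.

```python
class PCNode:
    def __init__(self):
        self.count = 0
        self.children = {}

def build_pc_tree(patterns):
    root = PCNode()
    for pattern in patterns:          # single scan of the database
        node = root
        for item in pattern:
            node = node.children.setdefault(item, PCNode())
            node.count += 1           # shared prefixes accumulate counts
    return root

tree = build_pc_tree([["a", "b", "c"], ["a", "b", "d"], ["a", "c"]])
print(tree.children["a"].count)       # -> 3
```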
Abstract:
A variety of data structures, such as the inverted file, multi-lists, quad tree, k-d tree, range tree, polygon tree, quintary tree, multidimensional tries, segment tree, doubly chained tree, the grid file, d-fold tree, super B-tree and multiple attribute tree (MAT), have been studied for multidimensional searching and related problems. Physical database organization, an important application of multidimensional searching, is traditionally and mostly handled by employing the inverted file. This study proposes the MAT data structure for bibliographic file systems by illustrating its superiority over the inverted file. The two methods are compared in terms of preprocessing, storage and query costs. A worst-case complexity analysis of both methods for a partial-match query is carried out in two cases: (a) when the directory resides in main memory, and (b) when the directory resides in secondary memory. In both cases, the MAT data structure is shown to be more efficient than the inverted file method, and arguments are given to illustrate its superiority in the average case as well. An efficient adaptation of the MAT data structure, exploiting the special features of the MAT structure and of bibliographic files, is proposed for bibliographic file systems, together with suitable techniques for fixing and ranking the attributes. Conclusions and proposals for future research are presented.
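For concreteness, here is an illustrative inverted-file partial-match query, the baseline the study compares MAT against: one posting list per specified attribute-value pair is fetched and intersected, so the cost grows with the posting-list sizes. The index layout is an assumption.

```python
from functools import reduce

def inverted_query(index, query):
    """index[(attr, value)] -> set of record ids; query: dict of the
    specified attributes only."""
    postings = [index.get((a, v), set()) for a, v in query.items()]
    return reduce(set.intersection, postings) if postings else set()

idx = {("author", "Rao"): {1, 3}, ("year", 1984): {3, 7}}
print(inverted_query(idx, {"author": "Rao", "year": 1984}))   # -> {3}
```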
Abstract:
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of such machines makes special demands on programmers and their programming languages, such as structuring data into vectors or partitioning programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers, and present some sample procedures in DFL. The implementation aspects are not discussed in detail, since no new problems are encountered. DFL embodies the concepts of functional programming but in appearance closely resembles Pascal; the language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented in Pascal on a DEC 1090 system.
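A toy sketch of the data flow execution model described above (not DFL itself): operators fire as soon as all their operands are available, so data-independent operations in each round could run in parallel. The graph encoding is illustrative.

```python
def run_dataflow(graph, inputs):
    """graph: {node: (fn, [operand nodes])}; inputs: {node: value}."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        # Every node whose operands are all available is ready to fire.
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]
        for n in ready:                  # these are mutually independent
            fn, deps = pending.pop(n)
            values[n] = fn(*(values[d] for d in deps))
    return values

g = {"sum": (lambda a, b: a + b, ["x", "y"]),
     "prod": (lambda a, b: a * b, ["x", "y"]),   # independent of "sum"
     "out": (lambda s, p: s - p, ["sum", "prod"])}
print(run_dataflow(g, {"x": 3, "y": 4})["out"])  # -> -5
```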
Abstract:
The dielectric properties of BaBi4Ti4O15 ceramics were investigated as a function of frequency (10^2-10^6 Hz) at various temperatures (30-470 °C), covering the phase transition temperature. Two different conduction mechanisms were identified by fitting the complex impedance data to the Cole-Cole equation. The grain and grain-boundary resistivities were found to follow the Arrhenius law, with activation energies E_g ~ 1.12 eV below T_m and E_g ~ 0.70 eV above T_m for grain conduction, and E_gb ~ 0.93 eV below T_m and E_gb ~ 0.71 eV above T_m for grain-boundary conduction. Relaxation times extracted from the imaginary parts of the complex impedance Z''(ω) and modulus M''(ω) were also found to follow the Arrhenius law and showed an anomaly around the phase transition temperature. The frequency dependence of the conductivity was interpreted in terms of the jump relaxation model and fitted to the double power law. (C) 2010 Elsevier B.V. All rights reserved.
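For reference, these are standard forms of the relations named above (the authors' exact parameterization may differ): the Cole-Cole impedance with grain and grain-boundary contributions, and the Arrhenius law obeyed by the extracted resistivities and relaxation times.

```latex
Z^{*}(\omega) = \frac{R_{g}}{1 + (j\omega\tau_{g})^{1-\alpha_{g}}}
              + \frac{R_{gb}}{1 + (j\omega\tau_{gb})^{1-\alpha_{gb}}},
\qquad
\rho = \rho_{0}\,\exp\!\left(\frac{E_{a}}{k_{B}T}\right),
\qquad
\tau = \tau_{0}\,\exp\!\left(\frac{E_{a}}{k_{B}T}\right)
```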